# SSC: Semantic Scan Context for Large-Scale Place Recognition

Lin Li1, Xin Kong1, Xiangrui Zhao1, Tianxin Huang1 and Yong Liu1,∗

1Lin Li, Xin Kong, Xiangrui Zhao, Tianxin Huang and Yong Liu are with the Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, P. R. China. (*Yong Liu is the corresponding author, email: [email protected]).

###### Abstract

Place recognition gives a SLAM system the ability to correct cumulative errors. Unlike images, which contain rich texture features, point clouds consist of almost pure geometric information, which makes place recognition based on point clouds challenging. Existing works usually encode low-level features such as coordinates, normals, and reflection intensity as local or global descriptors to represent scenes. Besides, they often ignore the translation between point clouds when matching descriptors. Different from most existing methods, we explore the use of high-level features, namely semantics, to improve the descriptor’s representation ability. Also, when matching descriptors, we correct the translation between point clouds to improve accuracy. Concretely, we propose a novel global descriptor, Semantic Scan Context, which exploits semantic information to represent scenes more effectively. We also present a two-step global semantic ICP to obtain the 3D pose ($x$, $y$, $yaw$) used to align the point clouds and improve matching performance. Our experiments on the KITTI dataset show that our approach outperforms the state-of-the-art methods by a large margin. Our code is available at: https://github.com/lilin-hitcrt/SSC.

## I INTRODUCTION

Simultaneous Localization and Mapping (SLAM) has developed rapidly in recent decades as a critical technology for autonomous vehicles and robots.
Place recognition represents the ability of robots to recognize previously visited places; it can build global constraints for the SLAM system to eliminate the odometry’s cumulative errors and establish a globally consistent map[1]. Place recognition is usually conducted using images or point clouds. Since point cloud data is rarely affected by environmental factors such as illumination and seasonal changes, LiDAR-based methods have received widespread attention in recent years. Most existing works on LiDAR-based place recognition encode the point cloud into global or local descriptors and then match the descriptors. They usually use low-level features such as coordinates[2, 3, 4, 5, 6], normals[7], and reflection intensity[8, 9, 10, 7]. In recent years, with the development of point cloud deep learning, many LiDAR-based object detection[11] and semantic segmentation[12, 13] methods have been proposed, making it possible to obtain semantic information from point clouds. However, only a few LiDAR-based works try to use semantic information[14, 15, 7].

Figure 1: An example of place recognition using semantic scan context. It is a partial map of KITTI sequence 08, where frames 720 and 1500 form a reverse loop. The lower part of the figure shows the semantic scan contexts of the two frames. Since their directions are opposite, the descriptors are quite different, while the aligned ones shown in Fig. 2 are easy to match.

Figure 2: The pipeline of our approach. It mainly consists of two parts: two-step global semantic ICP and Semantic Scan Context. First, we conduct semantic segmentation on the raw point cloud. Then we use semantic information to retain representative objects and project them onto the x-y plane. The two-step global semantic ICP is performed on the projected cloud to get the 3D pose ($\Delta x,\Delta y,\theta$).
Finally, we use the 3D pose to align the original clouds and generate the global descriptors (Semantic Scan Context). The similarity score is obtained by matching the SSCs.

For place recognition, when a robot passes through a place visited before, it does not mean that the two poses are identical. Instead, the robot may pass through the original area from any direction, and there may be a small translation from the original position. Many existing works consider the robot’s orientation, namely rotation, and achieve rotation invariance[14, 3, 4, 10]. They may assume that a small translation will not strongly impact the recognition result and therefore ignore it. However, we find that simply ignoring the translation in scan context-based methods greatly reduces the similarity of positive samples, making them difficult to identify.

In this paper, we propose a novel global descriptor named Semantic Scan Context (SSC), which explores semantic information to enhance the expressive power of descriptors. We also propose a two-step global semantic ICP, which produces reliable results regardless of pose initialization, to obtain the 3D pose $(x,y,yaw)$ of the point cloud. The pose is then used to align the point clouds to reduce the influence of rotation and translation on the similarity of the descriptors. Furthermore, it can also provide good initial values for 6D ICP algorithms to refine the global pose further. Fig. 1 is a demonstration of our results. The main contributions are summarized as follows:

* • We propose a novel global descriptor for LiDAR-based place recognition, which exploits semantic information to encode 3D scenes effectively.
* • We propose a two-step global semantic ICP, which does not require any initial values, to obtain the 3D pose $(x,y,yaw)$ of the point clouds.
* • We align point clouds with the obtained 3D poses to eliminate the influence of rotation and translation errors on the similarity of the descriptors; the poses can also further benefit the SLAM system as good initial values.
* • Exhaustive experiments on the KITTI odometry dataset show that our approach achieves state-of-the-art performance in both place recognition and pose estimation.

## II RELATED WORK

According to the features used, we can divide place recognition methods into three categories: geometry-based, semi-semantic-based, and semantic-based.

Geometry-based methods: Spin image [2] establishes a local coordinate system for each point, projects the point into 2D space, and counts the number of points in different areas of the 2D space to form a spin image. ESF [16] proposes a shape descriptor that combines angle, point-distance, and area to boost the recognition rate. M2DP [5] projects the point cloud into multiple 2D planes and generates a density signature for the points in each plane. The left and right singular vectors of those signatures are used as the global descriptor. Scan context [3, 4] converts the point cloud to polar coordinates and then divides it into blocks along the azimuthal and radial directions. Lastly, it encodes the z coordinate of the highest point in each block as a 2D global descriptor. LocNet [6] divides a point cloud into rings, generates a distance histogram for each ring, and stitches all histograms together to form a global descriptor. A siamese network then scores the similarity between descriptors. LiDAR Iris [17] extracts a binary signature image for each point cloud and uses the Hamming distance between two corresponding binary signature images as the similarity. Seed [18] segments the point cloud into different objects and encodes the topological information of the segmented objects into the global descriptor. The above methods have achieved good results by encoding low-level geometric structures into descriptors.
It can be expected that integrating more advanced features can further enhance the discriminative power of descriptors.

Semi-semantic-based methods: Some methods use non-geometric information to construct descriptors, such as reflection intensity or learning-based features extracted by neural networks. Such features are related to the object type but do not clearly indicate the semantic category, so we classify these methods as semi-semantic-based. ISHOT [9] and ISC [10] exploit the intensity information of the point cloud for place recognition. SegMatch [19] and SegMap [20] cluster a point cloud into segments, extract features for each segment, and use the kNN algorithm to identify correspondences. PointNetVLAD [21] combines PointNet [22] and NetVLAD [23] to extract global descriptors from 3D point clouds end-to-end. $L^{3}$-Net [24] selects key points from the given point cloud and then uses a PointNet to learn local descriptors for each key point. OREOS [25] projects the 3D point cloud into a 2D range image and proposes a convolutional neural network to extract the global descriptor. DH3D [26] designs a siamese network to learn 3D local features from the raw 3D point clouds, then uses an attention mechanism to aggregate these local features into the global descriptor. LPD-Net [27] proposes an adaptive local feature extraction module and a graph-based neighborhood aggregation module to extract local features of the point cloud; then, like PointNetVLAD, it uses NetVLAD to generate the global descriptor. MinkLoc3D [28] uses a sparse voxelized point cloud representation and sparse 3D convolutions to compute a discriminative 3D point cloud descriptor. SeqSphereVLAD [29] projects the point cloud onto a spherical view, extracts features on it, and sequences those features to form a descriptor. SpoxelNet [30] voxelizes the point cloud in spherical coordinates and defines the occupancy of each voxel in ternary values.
Then they use a neural network to extract the global descriptor. The above methods combine more advanced features with geometric features. However, most of them use neural networks to extract abstract features, which are more complicated and not well interpretable.

Semantic-based methods: SGPR [14] represents the scene as a semantic graph and then uses a graph similarity network to score the similarity of the graphs. GOSMatch [15] proposes a new global descriptor that is generated from the spatial relationships between semantics. It also proposes a coarse-to-fine strategy to efficiently search loop closures and gives an accurate 6-DOF initial pose estimate. These two methods represent the scene as a graph and abstract each object as a node in the graph, which causes the loss of features such as the size of each object. OverlapNet [7] designs a deep neural network that uses different types of information, such as intensity, normals, and semantics generated from LiDAR scans, to provide overlap and relative yaw angle estimates between paired 3D scans. However, it is too slow in preprocessing because it needs to calculate normals and run a complex network backbone. To use semantic information more effectively, we propose our Semantic Scan Context approach.

## III METHODOLOGY

In this section, we present our semantic scan context approach. Different from other scan context-based methods, which use incomplete semantic information and ignore small translations between point clouds, we exploit full semantic information and emphasize that the small translation between point cloud pairs has a significant influence on recognition accuracy. As shown in Fig. 2, our method consists of two main parts: two-step global semantic ICP and Semantic Scan Context. The two-step global semantic ICP is divided into Fast Yaw Angle Calculate and Fast Semantic ICP.
First, we define a point cloud frame as $P=\{p_{1},p_{2},\cdots,p_{n}\}$, with each point $p_{i}=[x_{i},y_{i},z_{i},\eta_{i}]$, where $\eta_{i}$ represents the semantic label of $p_{i}$. Given a pair of point clouds $(P_{1},P_{2})$, we first use our Fast Yaw Angle Calculate method to get the relative yaw angle $\theta$ between them. Then we use the Fast Semantic ICP to calculate their relative translation $(\Delta x,\Delta y)$ in the x-y plane. Through these two steps, we get the relative pose $(\Delta x,\Delta y,\theta)$ of the two point cloud frames in 3D pose space. To eliminate the influence of rotation (e.g., reverse loop closures) and small translations on recognition, we use the obtained relative pose to align point cloud $P_{2}$; the aligned point cloud is denoted $P_{a}$. Finally, we use our global descriptor, the Semantic Scan Context, to describe $(P_{1},P_{a})$ as $(S_{1},S_{2})$. The similarity score is obtained by comparing $S_{1}$ and $S_{2}$.

Figure 3: An illustration of the two-step global semantic ICP. (a) Yaw aligned. (b) Translation aligned.

### III-A Global Semantic ICP

It is known that general ICP algorithms based on local iterative optimization are susceptible to local minima [31]. For place recognition, we usually cannot get a valid initial value, which leads to the failure of the general ICP algorithm. To solve this, we propose the two-step global semantic ICP algorithm, consisting of Fast Yaw Angle Calculate and Fast Semantic ICP. Benefiting from the use of semantic information, our algorithm does not require any initial values to get satisfactory results.

Fast Yaw Angle Calculate. For scan context-based methods, the columns of the descriptor correspond to the yaw angle, and a pure rotation of the LiDAR in the horizontal plane causes a column shift of the descriptor. Scan context and Intensity Scan Context obtain the similarity score and the yaw angle at the same time.
Specifically, they calculate the similarity (or distance) for all possible column-shifted descriptors and take the maximum similarity (or minimum distance). However, this has two main disadvantages. Firstly, it is inefficient to compare whole 2D descriptors under every shift. Secondly, it still tries to maximize the score for point clouds from different places (not loop closures), which obviously makes it more prone to false positives. To address the above issues, we propose a semantic-based fast yaw angle calculate method. Given a point cloud pair $(P_{1},P_{2})$, we select representative objects such as buildings, tree trunks, and traffic signs based on semantic information. Then we convert the filtered clouds to polar coordinates in the x-y plane:

$p_{i}=[r_{i},\varphi_{i},x_{i},y_{i},\eta_{i}]$ (1)

$r_{i}=\sqrt{x_{i}^{2}+y_{i}^{2}},\quad\varphi_{i}=\arctan(\frac{y_{i}}{x_{i}})$

where $p_{i}$ is the $i^{th}$ point in each converted cloud, and $r_{i}$ and $\varphi_{i}$ represent the polar diameter and polar angle, respectively. Each converted cloud is then segmented into $N_{a}$ sectors by yaw angle, and we keep only the point with the smallest polar diameter in each sector. Finally, we get two clouds $P_{I1}$ and $P_{I2}$, each with $N_{a}$ elements. We sort the points in $P_{I1}$ and $P_{I2}$ by azimuth angle and save their corresponding polar diameters as vectors $R_{1}$ and $R_{2}$.
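As a concrete illustration, the sector reduction above (convert each filtered point to polar coordinates by Eq. 1, split the cloud into $N_{a}$ yaw sectors, keep the closest point per sector) can be sketched in Python. This is a minimal sketch, not the authors' released code; the `(x, y, label)` point layout and the function name are assumptions:

```python
import math

def sector_ranges(points, n_a=360):
    """Reduce a semantically filtered cloud to one range value per yaw sector.

    points: iterable of (x, y, label) tuples (assumed layout). Returns a list
    R of length n_a where R[j] is the smallest polar diameter among points
    whose polar angle falls in sector j (0.0 if the sector is empty).
    """
    R = [0.0] * n_a
    filled = [False] * n_a
    for x, y, _label in points:
        r = math.hypot(x, y)                       # polar diameter (Eq. 1)
        phi = math.atan2(y, x)                     # polar angle in (-pi, pi]
        j = int((phi + math.pi) / (2 * math.pi) * n_a) % n_a
        if not filled[j] or r < R[j]:              # keep the closest point
            R[j], filled[j] = r, True
    return R
```

Because the sectors are indexed by azimuth, the resulting vector is already sorted by yaw angle, matching the ordering of $R_{1}$ and $R_{2}$ above.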
Similar to scan context, the shift of the column vector is related to the yaw angle:

$shift=\mathop{argmin}\limits_{i\in[0,N_{a}]}\varPsi(R_{1},R_{2}^{i})$ (2)

$\theta=360-\frac{360\times shift}{N_{a}}$

where $R_{2}^{i}$ is $R_{2}$ shifted by $i$ elements and $\varPsi$ is defined as:

$\varPsi(R_{1},R_{2}^{i})=\left\lVert R_{1}-R_{2}^{i}\right\rVert_{1}$ (3)

Compared with Scan Context and Intensity Scan Context, our method only needs to compare one-dimensional vectors and is therefore more efficient. Moreover, our method does not obtain the angle by maximizing the similarity score, which helps to identify non-loop-closure point cloud pairs. Fig. 3 shows the result of Fast Yaw Angle Calculate.

Figure 4: An example of generating SSC. $\rho$ and $\theta$ represent the polar diameter and polar angle, respectively. A sector corresponds to a column of the descriptor, while a ring corresponds to a row.

Fast Semantic ICP. Although most works ignore the translation between point clouds, our experiments show that ignoring it causes a considerable decline in performance. In fact, for scan context-based methods, translation affects both the rows and columns of the descriptor, so we cannot get the best result just from the column-shifted descriptor. Therefore, we propose a fast semantic ICP algorithm to correct the translation between point clouds. To find the relative translation, we first rotate $P_{I2}$ to the same direction as $P_{I1}$; the rotated point cloud $P_{Ia}$ is defined by:

$x_{ai}=x_{i}\cos(\theta)-y_{i}\sin(\theta)$ (4)

$y_{ai}=x_{i}\sin(\theta)+y_{i}\cos(\theta)$

where $(x_{i},y_{i})$ and $(x_{ai},y_{ai})$ represent the $i^{th}$ point in $P_{I2}$ and $P_{Ia}$, respectively.
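The yaw estimate of Eqs. 2-3 amounts to a brute-force search over circular shifts with an L1 cost. A minimal sketch, again an illustration under an assumed data layout rather than the released implementation:

```python
def yaw_from_ranges(R1, R2):
    """Fast yaw angle estimate (Eqs. 2-3).

    Finds the circular shift of R2 that minimizes the L1 distance to R1,
    then converts the shift to a yaw angle in degrees.
    """
    n = len(R1)
    best_shift, best_cost = 0, float("inf")
    for s in range(n):
        # Psi(R1, R2^s): L1 norm between R1 and R2 shifted by s elements
        cost = sum(abs(R1[j] - R2[(j + s) % n]) for j in range(n))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    theta = (360.0 - 360.0 * best_shift / n) % 360.0   # Eq. 2, wrapped to [0, 360)
    return best_shift, theta
```

Since only two length-$N_{a}$ vectors are compared per shift, this is much cheaper than re-scoring a full 2D descriptor for every candidate rotation.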
Our ICP problem can be defined as:

$(\Delta x,\Delta y)=\mathop{argmin}\limits_{\Delta x,\Delta y}L=\mathop{argmin}\limits_{\Delta x,\Delta y}\sum_{i=1}^{N_{a}}\Gamma(\eta_{ai},\eta_{ri})\times\frac{(x_{ai}+\Delta x-x_{ri})^{2}+(y_{ai}+\Delta y-y_{ri})^{2}}{2}$ (5)

where $(x_{ri},y_{ri})$ is the corresponding point of $(x_{ai},y_{ai})$, i.e., the point closest to $(x_{ai},y_{ai})$ in $P_{I1}$, and $\eta_{ai}$ and $\eta_{ri}$ are the semantic labels of the two points. If $\eta_{ai}$ is equal to $\eta_{ri}$, then $\Gamma(\eta_{ai},\eta_{ri})=1$; otherwise, it is $0$. As our point clouds are ordered, we can search for the corresponding points near the position where the yaw angle is consistent with that of the target point. Specifically, the search interval for the $i^{th}$ target point is:

$[i+shift-\frac{N_{l}}{2},\ i+shift+\frac{N_{l}}{2}]$ (6)

where $N_{l}$ is the length of the search interval and $shift$ is defined in Eq. 2. After a certain number of iterations, we get the relative translation between the input point clouds, as shown in Fig. 3.

Figure 5: Precision-Recall curves on the KITTI dataset: (a) 00, (b) 02, (c) 05, (d) 06, (e) 07, (f) 08.
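The windowed, semantically gated ICP of Eqs. 5-6 can be sketched as follows. Note that, with correspondences fixed, the quadratic loss of Eq. 5 is minimized in closed form by the mean residual over matched pairs, which the sketch exploits; the data layout and iteration count are assumptions, not the authors' implementation:

```python
def fast_semantic_icp(P1, Pa, shift, n_l=20, iters=10):
    """Sketch of the fast semantic ICP (Eqs. 5-6).

    P1 and Pa are lists of (x, y, label) ordered by yaw sector. For the
    i-th point of Pa, correspondences are searched only in a window of
    n_l sectors around i + shift, and only points with the same semantic
    label (Gamma = 1) are accepted. Returns the translation (dx, dy).
    """
    n = len(P1)
    dx = dy = 0.0
    for _ in range(iters):
        sx = sy = 0.0
        matches = 0
        for i, (xa, ya, la) in enumerate(Pa):
            best, best_d = None, float("inf")
            # windowed correspondence search (Eq. 6), wrapped circularly
            for k in range(i + shift - n_l // 2, i + shift + n_l // 2 + 1):
                xr, yr, lr = P1[k % n]
                if lr != la:                       # Gamma(eta_a, eta_r) gate
                    continue
                d = (xa + dx - xr) ** 2 + (ya + dy - yr) ** 2
                if d < best_d:
                    best, best_d = (xr, yr), d
            if best is not None:
                sx += best[0] - xa
                sy += best[1] - ya
                matches += 1
        if matches == 0:
            break
        dx, dy = sx / matches, sy / matches        # closed-form minimizer of Eq. 5
    return dx, dy
```

Because correspondences are re-selected with the current $(\Delta x,\Delta y)$ each iteration, the estimate refines until the matched pairs stop changing.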
TABLE I: $F_{1}$ max scores and Extended Precision on the KITTI dataset

Methods | 00 | 02 | 05 | 06 | 07 | 08 | Mean
---|---|---|---|---|---|---|---
SC[3] | 0.750/0.609 | 0.782/0.632 | 0.895/0.797 | 0.968/0.924 | 0.662/0.554 | 0.607/0.569 | 0.777/0.681
ISC[10] | 0.657/0.627 | 0.705/0.613 | 0.771/0.727 | 0.842/0.816 | 0.636/0.638 | 0.408/0.543 | 0.670/0.661
M2DP[5] | 0.708/0.616 | 0.717/0.603 | 0.602/0.611 | 0.787/0.681 | 0.560/0.586 | 0.073/0.500 | 0.575/0.600
LI[17] | 0.668/0.626 | 0.762/0.666 | 0.768/0.747 | 0.913/0.791 | 0.629/0.651 | 0.478/0.562 | 0.703/0.674
PV[21] | 0.779/0.641 | 0.727/0.691 | 0.541/0.536 | 0.852/0.767 | 0.631/0.591 | 0.037/0.500 | 0.595/0.621
ON[7] | 0.869/0.555 | 0.827/0.639 | 0.924/0.796 | 0.930/0.744 | 0.818/0.586 | 0.374/0.500 | 0.790/0.637
SGPR[14] | 0.820/0.500 | 0.751/0.500 | 0.751/0.531 | 0.655/0.500 | 0.868/0.721 | 0.750/0.520 | 0.766/0.545
Ours-RN | 0.939/0.826 | 0.890/0.745 | 0.941/0.900 | 0.986/0.973 | 0.870/0.773 | 0.881/0.732 | 0.918/0.825
Ours-SK | 0.951/0.849 | 0.891/0.748 | 0.951/0.903 | 0.985/0.969 | 0.875/0.805 | 0.940/0.932 | 0.932/0.868

* • Each cell reports $F_{1}$ max score / Extended Precision. The best scores are marked in bold and the second best scores are underlined.

### III-B Semantic Scan Context

Scan Context and Intensity Scan Context use the points’ height and reflection intensity as features, respectively. These methods essentially take advantage of the different characteristics of different objects in the scene. However, height and reflection intensity are only low-level features of an object and are not representative enough. We explore using high-level semantic features to represent scenes and thus propose the Semantic Scan Context descriptor.

Descriptor definition. Given a point cloud $P$, we first convert it to the polar coordinate system as we did in Section III-A.
Then, like scan context, we divide the point cloud into $N_{s}\times N_{r}$ blocks along the azimuthal and radial directions. Each block is represented by:

$B_{ij}=\{\eta_{k}\mid\frac{(i-1)\cdot R_{max}}{N_{r}}\leq r_{k}<\frac{i\cdot R_{max}}{N_{r}},\ \frac{(j-1)\cdot 2\pi}{N_{s}}-\pi\leq\varphi_{k}<\frac{j\cdot 2\pi}{N_{s}}-\pi\}$ (7)

where $R_{max}$ is the maximum effective measurement distance of the LiDAR, $i\in[1,N_{r}]$ and $j\in[1,N_{s}]$. Our descriptor can be defined by:

$S(i,j)=f(B_{ij})=\mathop{argmax}\limits_{\eta\in B_{ij}}E(\eta)$ (8)

where $f$ is an encoding function that encodes the features of $B_{ij}$. Note that if $B_{ij}=\varnothing$, $f(B_{ij})=0$. We manually set the priority of different semantics in the function $E$ to reflect their representativeness. We believe objects that appear less frequently in the scene are more representative (e.g., traffic signs are more representative than roads).

Similarity Scoring. Given the aligned clouds $P_{1}$ and $P_{a}$, we can get their descriptors $S_{1}$ and $S_{2}$ by Eq. 8. The similarity score between them can then be calculated by:

$score=\frac{\sum_{1\leq i\leq N_{r}}\sum_{1\leq j\leq N_{s}}I(S_{1}(i,j)=S_{2}(i,j))}{\sum_{1\leq i\leq N_{r}}\sum_{1\leq j\leq N_{s}}I(S_{1}(i,j)\neq 0~or~S_{2}(i,j)\neq 0)}$ (9)

where $I$ is the indicator function, defined by:

$I(x)=\begin{cases}1 & x~is~true\\ 0 & x~is~false\end{cases}$ (10)

Fig. 4 shows the creation of the Semantic Scan Context.

Figure 6: (a) Average $F_{1}$ max score and (b) average Extended Precision for different values of $\alpha$.

## IV EXPERIMENTS

### IV-A Experiment Setup

We conduct experiments on the KITTI odometry dataset[32], collected by a 64-ring LiDAR, which contains 11 training sequences (00-10) with ground truth poses.
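Before turning to the evaluation, the descriptor of Eqs. 7-8 and the score of Eq. 9 can be sketched as follows. This is a minimal illustration, not the released implementation: the `priority` map stands in for the hand-set function $E$, labels are assumed to be positive integers, and matches are counted only over occupied cells:

```python
import math

def ssc_descriptor(points, priority, n_s=360, n_r=50, r_max=80.0):
    """Build an SSC grid (Eqs. 7-8): n_r x n_s blocks over (range, yaw),
    each cell keeping the highest-priority semantic label it contains
    (0 marks an empty block). `priority` maps label -> E(label)."""
    S = [[0] * n_s for _ in range(n_r)]
    for x, y, label in points:
        r = math.hypot(x, y)
        if r >= r_max:
            continue                                   # outside LiDAR range
        i = int(r / r_max * n_r)                       # radial ring index
        j = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_s) % n_s
        if S[i][j] == 0 or priority[label] > priority[S[i][j]]:
            S[i][j] = label                            # argmax over E (Eq. 8)
    return S

def ssc_score(S1, S2):
    """Similarity of Eq. 9: matching labels over occupied cells."""
    hit = occ = 0
    for row1, row2 in zip(S1, S2):
        for a, b in zip(row1, row2):
            if a != 0 or b != 0:
                occ += 1
                if a == b:
                    hit += 1
    return hit / occ if occ else 0.0
```

Because the clouds are aligned beforehand, the score is a direct cell-by-cell comparison with no column shifting, which is what makes retrieval fast later on.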
We choose the sequences with loop closures (00, 02, 05, 06, 07, 08) for evaluation; note that sequence 08 has reverse loops while the others are in the same direction. Similar to SGPR[14], we regard a point cloud pair with a relative distance less (greater) than 3m (20m) as a positive (negative) sample. Since there are too many negative samples, we only select a part of them for evaluation. Specifically, if there are $N_{p}$ positive samples in a sequence, we randomly select $\alpha\cdot N_{p}$ negative samples. We can adjust the proportion of negative samples by changing the coefficient $\alpha$. The ground-truth semantic labels are from the SemanticKITTI dataset[33]. We also test our method with a semantic segmentation algorithm (RangeNet++ [34]) to show that our method can be applied to the noisy predictions of real systems. In our experiments, we set $N_{a}=360$, $N_{l}=20$, $N_{s}=360$, $N_{r}=50$. All experiments are done on the same system with an Intel i7-9750H @3.00GHz CPU and 16 GB RAM.

### IV-B Place Recognition Performance

As mentioned in Section IV-A, we use both ground-truth semantic labels (Ours-SK) and predicted semantic labels (Ours-RN) for testing. We compare our approach with the state-of-the-art methods, including Scan Context[3] (SC), Intensity Scan Context[10] (ISC), M2DP[5], LiDAR Iris[17] (LI), PointNetVLAD[21] (PV), OverlapNet[7] (ON), and SGPR[14]. For SGPR, we use their pre-trained models trained with the 1-fold strategy. As we cannot reproduce the results of OverlapNet, we use the pre-trained model provided by the authors. The model is trained on sequences 03-10, so sequences 05, 06, 07, and 08 are included in its training set.

Fixed $\alpha$. In this experiment, we set $\alpha$ to 100, which means the number of negative samples is $100N_{p}$. Fig. 5 shows the precision-recall curve of each method. Additionally, we use the maximum $F_{1}$ score and Extended Precision[35] (EP), shown in Tab. I, to analyze the performance.
The $F_{1}$ score is defined as:

$F_{1}=2\times\frac{P\times R}{P+R}$ (11)

where $P$ and $R$ represent precision and recall, respectively; $F_{1}$ is the harmonic mean of $P$ and $R$. It treats $P$ and $R$ as equally important and measures the overall classification performance. The Extended Precision is defined as:

$EP=\frac{1}{2}(P_{R0}+R_{P100})$ (12)

where $P_{R0}$ is the precision at minimum recall, and $R_{P100}$ is the maximum recall at $100\%$ precision. $EP$ is a metric specifically designed for place recognition algorithms. As shown in Fig. 5 and Tab. I, Ours-SK surpasses the other methods in all indicators on all sequences by a large margin. Especially in sequence 08, which has only reverse loops, the performance of the other methods drops significantly while our method still performs well. This indicates that our method is robust to view angle changes. OverlapNet performs well on most sequences except 08. We suspect this is because it uses the normals of the point cloud, which change as the point cloud rotates; therefore, it cannot robustly handle reverse loops. SGPR works well on the $F_{1}$ max score but poorly on the Extended Precision. We find that it gives some negative samples very high scores, which causes the recall to be almost zero when the precision reaches $100\%$. The result of Ours-RN is slightly worse than that of Ours-SK, as expected. Since the difference is small, our approach can adapt to semantic segmentation algorithms in actual systems.

Change $\alpha$. In this experiment, we change the value of $\alpha$ to analyze the influence of the number of negative samples on the algorithms. Fig. 6 shows the average $F_{1}$ max score and average Extended Precision for different $\alpha$. It clearly shows that our method performs better than the others regardless of the value of $\alpha$.
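The two metrics of Eqs. 11-12 can be computed directly from a list of (precision, recall) operating points along a PR curve. A minimal sketch; the handling of ties and of curves that never reach $100\%$ precision is an assumption:

```python
def f1_max(pr_pairs):
    """Maximum F1 (Eq. 11) over (precision, recall) operating points."""
    return max(2 * p * r / (p + r) for p, r in pr_pairs if p + r > 0)

def extended_precision(pr_pairs):
    """Extended Precision (Eq. 12): mean of the precision at minimum
    recall and the maximum recall at 100% precision (taken as 0 if
    precision never reaches 1.0)."""
    min_r = min(r for _, r in pr_pairs)
    p_r0 = max(p for p, r in pr_pairs if r == min_r)
    r_p100 = max((r for p, r in pr_pairs if p == 1.0), default=0.0)
    return 0.5 * (p_r0 + r_p100)
```

The $R_{P100}$ term is what penalizes methods that assign high scores to a few negative samples: a single confident false positive drives the recall at $100\%$ precision toward zero.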
As $\alpha$ increases, the performance of all methods gradually decreases, but our method is less affected, showing that it can effectively identify negative samples. For place recognition, negative samples generally far outnumber positive samples, which is one key reason why our method leads the other methods by a large margin. Moreover, identifying negative samples is significant, as false positives can be fatal to a SLAM system.

TABLE II: Yaw error on the KITTI dataset

Sequences | SC (deg) | ISC (deg) | ON (deg) | Ours-SK (deg)
---|---|---|---|---
00 | 11.526 | 0.829 | 2.595 | 0.891
02 | 11.301 | 1.343 | 4.911 | 1.142
05 | 18.394 | 0.904 | 3.329 | 0.653
06 | 4.074 | 0.534 | 1.124 | 0.759
07 | 21.862 | 0.684 | 2.233 | 0.512
08 | 49.170 | 3.856 | 68.622 | 1.878
Average | 19.388 | 1.358 | 13.802 | 0.973

### IV-C Pose Accuracy

As described in Section III-A, our approach can estimate the 3D relative pose $(\Delta x,\Delta y,\theta)$, while most other methods cannot estimate pose or can only estimate the 1D pose (yaw). We compare our method with Scan Context, Intensity Scan Context, and OverlapNet. The ground-truth pose is calculated by:

$T=T_{1}^{-1}T_{2}$ (13)

$(\Delta x,\Delta y,\theta)=(T(1,3),T(2,3),\arctan(\frac{T(2,1)}{T(1,1)}))$

where $T_{1}\in SE(3)$ and $T_{2}\in SE(3)$ represent the poses of $P_{1}$ and $P_{2}$, respectively. Since the pitch and roll angles hardly change for autonomous vehicles, we ignore them. Tab. II shows the relative yaw error on the KITTI dataset. Our method outperforms the other methods in terms of the average relative yaw error. Especially in the challenging sequence 08, affected by the reverse loop, most methods perform poorly, while our method can still accurately estimate the yaw angle. This again shows that our method handles reverse loops well. As mentioned in Section IV-B, OverlapNet performs poorly due to its inability to handle reverse loops. Fig.
7 shows the relative translation error of our approach on the KITTI dataset. As shown, our method can estimate accurate relative translations, which, to our knowledge, is currently not possible with other methods. Thus, our Fast Yaw Angle Calculate and Fast Semantic ICP approaches can give accurate 3D pose estimates. These can provide a good initial value for an ICP algorithm to obtain a 6D pose or directly serve as global constraints in a SLAM system.

Figure 7: Translation error.

TABLE III: Contribution of individual components

Yaw | ICP | Semantic | $F_{1}$/EP | Decrease
---|---|---|---|---
 | $\surd$ | $\surd$ | 0.896/0.820 | 3.6%/4.8%
$\surd$ | | $\surd$ | 0.757/0.685 | 17.5%/18.3%
$\surd$ | $\surd$ | | 0.775/0.762 | 15.7%/10.6%
$\surd$ | $\surd$ | $\surd$ | 0.932/0.868 | 0.0%/0.0%

### IV-D Ablation Study

We design an ablation study to investigate the contribution of each component. Specifically, we remove or replace one module at a time and then calculate the $F_{1}$ max scores and Extended Precision. To show the contribution of our Fast Yaw Angle Calculate method, we replace this module with the method used in scan context: shifting the columns of the descriptors and calculating the maximum similarity score while obtaining the yaw angle. Similarly, we replace the semantic label in the descriptor with the maximum $z$ to see the semantic contribution. To evaluate the contribution of our Fast Semantic ICP approach, we directly set $\Delta x$ and $\Delta y$ to 0. As shown in Tab. III, after removing Yaw, ICP, and Semantic, the average $F_{1}$ max score decreases by $3.6\%$, $17.5\%$, and $15.7\%$, respectively, and the average Extended Precision decreases by $4.8\%$, $18.3\%$, and $10.6\%$. Therefore, the following conclusions can be drawn:

* • Compared with other methods, our approach can get a more accurate yaw angle and translation.
* • As we emphasized, small translations have a significant impact on scan context-based methods. Simply ignoring the translation greatly weakens the performance.
* • High-level features, like semantics, can bring considerable improvements to the scene description.

TABLE IV: Average time cost on KITTI 08

Methods | Size | Description | Retrieval | ICP | Total
---|---|---|---|---|---
SC | $20\times 60$ | 4.825 | 0.158 | - | 4.983
ISC | $20\times 90$ | 3.094 | 0.800 | - | 3.894
Ours | $50\times 360$ | 2.563 | 0.066 | 2.126 | 4.755

* • Times in the table are in milliseconds.

### IV-E Efficiency

To evaluate efficiency, we set $\alpha$ to $1$ and compare the average time cost of our method with Scan Context and Intensity Scan Context on sequence 08. As shown in Tab. IV, the total time cost of our approach is acceptable. Since we use the obtained 3D pose to align the point clouds in advance, we do not need to shift the columns of the descriptors during the matching stage, so our retrieval is extremely fast. Our two-step global semantic ICP takes only 2.126 milliseconds on average. The algorithm is fast for the following reasons. Firstly, since we keep only $N_{a}$ points (360 in our experiments), the computational cost is greatly reduced compared to the original point cloud (about 120,000 points). Secondly, we divide the algorithm into two steps, first calculating the yaw angle and then iteratively calculating $\Delta x$ and $\Delta y$, which simplifies the algorithm and speeds up the computation. Thirdly, when calculating $\Delta x$ and $\Delta y$, we use the yaw angle to align the input clouds in advance; therefore, we do not need to traverse the entire point cloud when looking for corresponding points. Instead, we can find them near the corresponding positions, which greatly reduces the number of searches.

## V CONCLUSION

In this paper, we propose a novel semantic-based global descriptor for place recognition. We propose a two-step global semantic ICP to obtain the 3D pose $(x,y,yaw)$ of a point cloud pair, aligning the point clouds to improve descriptor matching accuracy.
In addition, it can provide good initial values for point cloud registration. We achieve leading performance on the KITTI odometry dataset compared to the state-of-the-art methods. Our method also has some limitations. Like most place recognition methods, it does not consider the pitch and roll angles, and may therefore fail in some extreme scenarios. In future work, we will try to solve the above problems and further explore the application of semantic information in LiDAR-based SLAM systems.

## References

* [1] A. Angeli, D. Filliat, S. Doncieux, and J. Meyer, “Fast and incremental method for loop-closure detection using bags of visual words,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1027–1037, 2008.
* [2] A. E. Johnson and M. Hebert, “Using spin images for efficient object recognition in cluttered 3d scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 433–449, 1999.
* [3] G. Kim and A. Kim, “Scan context: Egocentric spatial descriptor for place recognition within 3d point cloud map,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4802–4809, 2018.
* [4] G. Kim, B. Park, and A. Kim, “1-day learning, 1-year localization: Long-term lidar localization using scan context image,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1948–1955, 2019.
* [5] L. He, X. Wang, and H. Zhang, “M2dp: A novel 3d point cloud descriptor and its application in loop closure detection,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 231–237, 2016.
* [6] H. Yin, L. Tang, X. Ding, Y. Wang, and R. Xiong, “Locnet: Global localization in 3d point clouds for mobile vehicles,” in 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 728–733, 2018.
* [7] X. Chen, T. Läbe, A. Milioto, T. Röhling, O. Vysotska, A. Haag, J. Behley, and C.
Stachniss, “OverlapNet: Loop Closing for LiDAR-based SLAM,” in Proceedings of Robotics: Science and Systems (RSS), 2020. * [8] K. P. Cop, P. V. K. Borges, and R. Dubé, “Delight: An efficient descriptor for global localisation using lidar intensities,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3653–3660, 2018. * [9] J. Guo, P. V. K. Borges, C. Park, and A. Gawel, “Local descriptor for robust place recognition using lidar intensity,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1470–1477, 2019. * [10] H. Wang, C. Wang, and L. Xie, “Intensity scan context: Coding intensity and geometry relations for loop closure detection,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 2095–2101, 2020\. * [11] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li, “Pv-rcnn: Point-voxel feature set abstraction for 3d object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020. * [12] X. Zhu, H. Zhou, T. Wang, F. Hong, Y. Ma, W. Li, H. Li, and D. Lin, “Cylindrical and asymmetrical 3d convolution networks for lidar segmentation,” arXiv preprint arXiv:2011.10033, 2020. * [13] H. Tang, Z. Liu, S. Zhao, Y. Lin, J. Lin, H. Wang, and S. Han, “Searching efficient 3d architectures with sparse point-voxel convolution,” in European Conference on Computer Vision, 2020. * [14] X. Kong, X. Yang, G. Zhai, X. Zhao, X. Zeng, M. Wang, Y. Liu, W. Li, and F. Wen, “Semantic graph based place recognition for 3d point clouds,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8216–8223, 2020. * [15] Y. Zhu, Y. Ma, L. Chen, C. Liu, M. Ye, and L. Li, “Gosmatch: Graph-of-semantics matching for detecting loop closures in 3d lidar data,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5151–5157, 2020. * [16] W. Wohlkinger and M. 
Vincze, “Ensemble of shape functions for 3d object classification,” in 2011 IEEE International Conference on Robotics and Biomimetics, pp. 2987–2992, 2011. * [17] Y. Wang, Z. Sun, C. Z. Xu, S. E. Sarma, J. Yang, and H. Kong, “Lidar iris for loop-closure detection,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5769–5775, 2020. * [18] Y. Fan, Y. He, and U. X. Tan, “Seed: A segmentation-based egocentric 3d point cloud descriptor for loop closure detection,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5158–5163, 2020. * [19] R. Dubé, D. Dugas, E. Stumm, J. Nieto, R. Siegwart, and C. Cadena, “Segmatch: Segment based place recognition in 3d point clouds,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 5266–5272, IEEE, 2017. * [20] R. Dubé, A. Cramariuc, D. Dugas, H. Sommer, M. Dymczyk, J. Nieto, R. Siegwart, and C. Cadena, “Segmap: Segment-based mapping and localization using data-driven descriptors,” The International Journal of Robotics Research, p. 0278364919863090, 2019. * [21] M. A. Uy and G. H. Lee, “Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4470–4479, 2018. * [22] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652–660, 2017\. * [23] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “Netvlad: Cnn architecture for weakly supervised place recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5297–5307, 2016. * [24] W. Lu, Y. Zhou, G. Wan, S. Hou, and S. 
Song, “L3-net: Towards learning based lidar localization for autonomous driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6389–6398, 2019. * [25] L. Schaupp, M. Bürki, R. Dubé, R. Siegwart, and C. Cadena, “Oreos: Oriented recognition of 3d point clouds in outdoor scenarios,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3255–3261, 2019. * [26] J. Du, R. Wang, and D. Cremers, “Dh3d: Deep hierarchical 3d descriptors for robust large-scale 6dof relocalization,” in European Conference on Computer Vision, pp. 744–762, Springer, 2020. * [27] Z. Liu, S. Zhou, C. Suo, P. Yin, W. Chen, H. Wang, H. Li, and Y.-H. Liu, “Lpd-net: 3d point cloud learning for large-scale place recognition and environment analysis,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2831–2840, 2019. * [28] J. Komorowski, “Minkloc3d: Point cloud based large-scale place recognition,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1790–1799, 2021. * [29] P. Yin, F. Wang, A. Egorov, J. Hou, J. Zhang, and H. Choset, “Seqspherevlad: Sequence matching enhanced orientation-invariant place recognition,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5024–5029, 2020. * [30] M. Y. Chang, S. Yeon, S. Ryu, and D. Lee, “Spoxelnet: Spherical voxel-based deep place recognition for 3d point clouds of crowded indoor spaces,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8564–8570, 2020. * [31] J. Yang, H. Li, D. Campbell, and Y. Jia, “Go-icp: A globally optimal solution to 3d icp point-set registration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2241–2254, 2016\. * [32] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 
1231–1237, 2013. * [33] J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, “Semantickitti: A dataset for semantic scene understanding of lidar sequences,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 9297–9307, 2019. * [34] A. Milioto, I. Vizzo, J. Behley, and C. Stachniss, “RangeNet++: Fast and Accurate LiDAR Semantic Segmentation,” in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2019. * [35] B. Ferrarini, M. Waheed, S. Waheed, S. Ehsan, M. J. Milford, and K. D. McDonald-Maier, “Exploring performance bounds of visual place recognition using extended precision,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1688–1695, 2020.
\ulabel{5.3,-1}{S}; \stoch{8.75,1.5}{$c$}{4}; \rlabel{11,1.5}{H\sprime}; \rlabel{11,-1}{S}; \end{equation} This is an instance of definition 11.4 in <cit.>. The conditional $c$ is now a kernel of type $\Theta\times S \to P(H')$, depending on both a data point $s\in S$ and a parameter value $\theta\in\Theta$. The next idea is that if our parametrised distributions are suitably chosen, then for every $s$ and every $\theta$ the conditional over $H'$ will be a member of the family parametrised by $\Theta'$.
This amounts to saying that there exists a deterministic function $\gamma\colon \Theta\times S \to \Theta'$ such that \wire (0,1)--(4,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \stoch{2,1.5}{$c$}{2} \llabel{0,3}{ S}; \llabel{0,1}{ \Theta}; \rlabel{4,1}{ H\sprime}; \stringdiagram{ \wire (0,1)--(9,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \sqbox{2,1.5}{$\gamma$}{2}; \llabel{0,3}{ S}; \llabel{0,1}{ \Theta}; \ulabel{3.6,1}{ \Theta\sprime}; \stoch{6.2,1.5}{$\psi_{H\sprime}$}{2}; \rlabel{9,1}{H\sprime}; The equation then becomes \begin{equation} \label{deterministic-heterogeneous-filtering} \stringdiagram{ \ulabel{0.2,0}{H}; \wire(-3.5,0) -- (2,0); \llabel{-3.5,0}{\Theta}; \stoch{2,0}{$f$}{4}; \stoch{-1.5,0}{$\psi_H$}{3}; \wire(2,1) -- (4.5,1); \wire(2,-1) -- (4.5,-1); \rlabel{4.5,1}{H\sprime}; \rlabel{4.5,-1}{S}; \,= \pbox{ \stringdiagram{ \ulabel{0.2,0}{H}; \wire(-4,0) -- (2,0); \wire[2.25] (-3.2,0) -- (-3.2,2.25) -- (4.5,2.25); \wire[0.5] (4.5,2.25) -- (5,2.25) -- (6,1) -- (8.5,1); \blackdot{-3.2,0}; \llabel{-4,0}{\Theta}; \stoch{2,0}{$f$}{4}; \stoch{-1.5,0}{$\psi_H$}{3}; \wire(2,1) -- (3.5,1); \wire(2,-1) -- (13,-1); \rlabel{3.8,1}{H\sprime}; \blackdot{3.5,1}; \wire (8.5,1.5) -- (13,1.5); \wire[2.5] (4.5,-1)--(5.5,2.25)--(8.5,2.25); \blackdot{4.5,-1}; \ulabel{5.3,-1}{S}; \sqbox{7.75,1.5}{$\gamma$}{4}; \stoch{11,1.5}{$\psi_{H\sprime}$}{4}; \rlabel{13,1.5}{H\sprime}; \rlabel{13,-1}{S}; \ulabel{9.25,1.5}{\Theta\sprime} \end{equation} The point is that once the appropriate families of distributions have been chosen such that this equation holds, only the function $\gamma$ needs to be implemented.
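As a concrete (hypothetical, not from the text) instance of such a $\gamma$, take the conjugate Beta-Bernoulli family: $\Theta=\Theta'$ is the set of Beta parameters $(\alpha,\beta)$ for beliefs about a coin's bias, $S=\{0,1\}$ is a single flip, and $\gamma((\alpha,\beta),s)=(\alpha+s,\beta+1-s)$. The sketch below checks numerically, on a grid, that the exact conditional over $H'$ lands back inside the parametrised family, which is what the equation above requires.

```python
import numpy as np

def integrate(y, x):
    # simple trapezoidal rule, to avoid depending on any particular numpy version
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2))

def beta_pdf(p, a, b):
    w = p ** (a - 1) * (1 - p) ** (b - 1)
    return w / integrate(w, p)               # normalise numerically on the grid

def gamma_update(theta, s):
    a, b = theta
    return (a + s, b + 1 - s)                # deterministic kernel Theta x S -> Theta'

p = np.linspace(1e-6, 1 - 1e-6, 20001)       # grid over the hidden variable H'
theta = (2.0, 3.0)                           # prior parameters in Theta
for s in (0, 1):
    prior = beta_pdf(p, *theta)
    likelihood = p ** s * (1 - p) ** (1 - s)
    posterior = prior * likelihood
    posterior = posterior / integrate(posterior, p)   # exact conditional over H'
    in_family = beta_pdf(p, *gamma_update(theta, s))  # psi_{H'} after the gamma update
    # the conditional is a member of the parametrised family
    assert np.max(np.abs(posterior - in_family)) < 1e-9
```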
Later, in taking a natural science perspective, we will want to consider the case where $\gamma$ is not a function $\Theta\times S \to \Theta'$ but instead a Markov kernel $\Theta\times S \to P(\Theta')$. That is, $\gamma$ may behave stochastically, choosing a random element of $\Theta'$ that may depend on $s\in S$ and $\theta\in \Theta$. In this case <ref> must be changed to \begin{equation} \label{stochastic-heterogeneous-filtering} \stringdiagram { \wire (0,0) -- (11.5,0); \ulabel{0,0}{\Theta}; \ulabel{11.5,0}{\Theta\sprime}; \blackdot{0.7,0}; \wire[3] (0.7,0) -- (0.7,3) -- (5,3); \wire (6,2) -- (11.5,2); \ulabel{11.5,2}{S}; \stoch{2.8,3}{$\psi_H\!$}{3}; \stoch{5.75,3}{$f$}{4}; \ulabel{4.2,3}{H}; \stoch{9.5,0.25}{$\gamma$}{3}; \blackdot{7.5,2}; \ulabel{8,2}{S}; \wire[1.5] (7.5,2) -- (7.5,0.5) -- (9.5,0.5); \wire[2] (5,4) -- (11.5,4); \ulabel{11.5,4}{H\sprime}; \, = \! \pbox{ \stringdiagram { \wire (0,0) -- (15,0); \ulabel{0,0}{\Theta}; \ulabel{15,0}{\Theta\sprime}; \blackdot{0.7,0}; \wire[3] (0.7,0) -- (0.7,3) -- (5,3); \wire (6,2) -- (15,2); \ulabel{15,2}{S}; \stoch{2.8,3}{$\psi_H\!$}{3}; \stoch{5.75,3}{$f$}{4}; \ulabel{4.2,3}{H}; \wire[2] (5,4) -- (8,4); \ulabel{7.4,4}{H\sprime}; \blackdot{8,4}; \stoch{9,0.25}{$\gamma$}{3}; \blackdot{7.3,2}; \ulabel{7.8,2}{S}; \wire[1.5] (7.3,2) -- (7.3,0.5) -- (9,0.5); \wire[4] (10.5,0) -- (10.5,4) -- (15,4); \blackdot{10.5,0}; \ulabel{11.3,0}{\Theta\sprime}; \stoch{13.2,4}{$\psi_{H\sprime}$}{3}; \ulabel{15,4}{H\sprime}; \end{equation} This is because if $\gamma$ is stochastic then there may be several different elements of $\Theta'$ that can result from passing values $\theta\in\Theta$ and $s\in S$ to $\gamma$. <Ref> demands that each of these maps to a valid conditional distribution over $H'$. In a typical Bayesian filtering setup, we would demand that $H=H'$, $\Theta=\Theta'$ and $\psi_{H} = \psi_{H'}$.
That is, instead of having one family of distributions over $H$ and another over $H'$, we just have one family of distributions and update its parameters in response to data. This allows the updating process to be iterated over a sequence of inputs. However, before we consider that case, let us first consider a slightly different kind of iteration. Suppose that in addition to the kernel \wire (0,-1)--(-4,-1); \wire[0.5] (0,-3)--(-0.5,-3)--(-1,-2)--(-2,-2); \stoch{-2,-1.5}{$f$}{2} \rlabel{0,-3}{ S}; \rlabel{0,-1}{ H\sprime}; \llabel{-4,-1}{ H}; we also have a kernel \wire (0,-1)--(-4,-1); \wire[0.5] (0,-3)--(-0.5,-3)--(-1,-2)--(-2,-2); \stoch{-2,-1.5}{$f'$}{2} \rlabel{0,-3}{ T}; \rlabel{0,-1}{ \,\,H''}; \llabel{-4,-1}{ H'}; which we can interpret as the hidden system undergoing a second transition, changing stochastically to a new state in $H''$ and producing a second observable from a sample space $T$, which might in general be different from $S$. Correspondingly, we can define a new parameter space $\Theta''$ and a new family of distributions \wire (0,1)--(4,1); \stoch{2,1}{$\psi_{H''}$}{2} \llabel{0,1}{\Theta''}; \rlabel{4,1}{ \,\,\,H''}; and a new update function \wire (0,1)--(4,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \sqbox{2,1.5}{$\gamma'$}{2} \llabel{0,3}{ T}; \llabel{0,1}{ \Theta'}; \rlabel{4,1}{ \,\,\,\Theta''}; obeying the analogue of <ref>, or of <ref> if we include stochastic updates. We can see this either as two separate updates (one in response to an input in $S$ and another in response to an input in $T$), or we can see it as a single update in response to an input $(s,t)\in S\times T$, that is, an input consisting of the sequence $s$ followed by $t$. This gives us two different ways to write <ref>, and we might hope that these would be equivalent. Indeed this is the case, and it has the interesting consequence that Bayesian filtering updates form a category.
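Continuing the hypothetical Beta-Bernoulli family from above, the equivalence of the two readings can be checked mechanically: composing the update for $s$ with the update for $t$ gives exactly the single update for the pair $(s,t)$. The sketch is illustrative only.

```python
# Hypothetical illustration of composing filtering updates in the
# conjugate Beta-Bernoulli family: gamma handles one flip s, gamma2 a
# second flip t, and their composite matches the single joint update.

def gamma(theta, s):
    a, b = theta
    return (a + s, b + 1 - s)

def gamma2(theta, t):          # here S = T, so the second update has the same form
    return gamma(theta, t)

def gamma_pair(theta, st):     # the composite update, for an input (s, t) in S x T
    s, t = st
    return gamma2(gamma(theta, s), t)

theta = (1.0, 1.0)             # uniform prior
for s in (0, 1):
    for t in (0, 1):
        a, b = gamma_pair(theta, (s, t))
        # two sequential Bayesian updates = one update on the joint observation
        assert (a, b) == (theta[0] + s + t, theta[1] + 2 - s - t)
```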
Although this fact is somewhat technical and we do not use it extensively below, we state it as a proposition: There is a category $\mathsf{Filt}$ of Bayesian filtering updates, in which $(i)$ the objects are tuples $\left\langle \Theta, H, \!\!\stringdiagram{ \wire (0,1)--(4,1); \stoch{2,1}{$\psi_H$}{2} \llabel{0,1}{\Theta}; \rlabel{4,1}{H}; consisting of two measurable spaces and a Markov kernel; $(ii)$ a morphism $\left\langle \Theta, H, \!\!\stringdiagram{ \wire (0,1)--(4,1); \stoch{2,1}{$\psi_H$}{2} \llabel{0,1}{\Theta}; \rlabel{4,1}{H}; }\right\rangle \to \left\langle \Theta', H', \!\!\stringdiagram{ \wire (0,1)--(4,1); \stoch{2,1}{$\psi_{H\sprime}$}{2} \llabel{0,1}{\Theta'}; \rlabel{4,1}{H'}; }\right\rangle$ is an equivalence class of tuples $\left\langle S, \!\stringdiagram{ \wire (0,-1)--(-4,-1); \wire[0.5] (0,-3)--(-0.5,-3)--(-1,-2)--(-2,-2); \stoch{-2,-1.5}{$f$}{2} \rlabel{0,-3}{ S}; \rlabel{0,-1}{ H\sprime}; \llabel{-4,-1}{ H}; }, \!\stringdiagram{ \wire (0,1)--(4,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \sqbox{2,1.5}{$\gamma$}{2} \llabel{0,3}{ S}; \llabel{0,1}{ \Theta}; \rlabel{4,1}{ \Theta'}; } \right\rangle$ that consist of a measurable space, a stochastic kernel and a deterministic kernel such that <ref> holds; and $(iii)$ given morphisms $\left\langle S, \!\stringdiagram{ \wire (0,-1)--(-4,-1); \wire[0.5] (0,-3)--(-0.5,-3)--(-1,-2)--(-2,-2); \stoch{-2,-1.5}{$f$}{2} \rlabel{0,-3}{ S}; \rlabel{0,-1}{ H\sprime}; \llabel{-4,-1}{ H}; }, \!\stringdiagram{ \wire (0,1)--(4,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \sqbox{2,1.5}{$\gamma$}{2} \llabel{0,3}{ S}; \llabel{0,1}{ \Theta}; \rlabel{4,1}{ \Theta'}; } \right\rangle$ $\left\langle T, \!\stringdiagram{ \wire (0,-1)--(-4,-1); \wire[0.5] (0,-3)--(-0.5,-3)--(-1,-2)--(-2,-2); \stoch{-2,-1.5}{$f'$}{2} \rlabel{0,-3}{ T}; \rlabel{0,-1}{ \,\,H''}; \llabel{-4,-1}{ H'}; }, \!\stringdiagram{ \wire (0,1)--(4,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \sqbox{2,1.5}{$\gamma'$}{2} \llabel{0,3}{ T}; \llabel{0,1}{ \Theta'}; \rlabel{4,1}{ \Theta''}; } \right\rangle$, their composition is given by the equivalence class containing the morphism $\left\langle S\times T, \!\stringdiagram{ \wire (0,-1)--(-7.5,-1); \wire[0.5] (0,-2.5)--(-0.5,-2.5)--(-1,-2)--(-2,-2); \stoch{-2,-1.5}{$f'$}{2} \wire[0.5] (0,-4)--(-3,-4)--(-4,-2)--(-5.5,-2); \stoch{-5.5,-1.5}{$f$}{2} \rlabel{0,-2.5}{ T}; \rlabel{0,-4.2}{ S}; \rlabel{0,-0.7}{ \,\,\,H''}; \llabel{-7.5,-1}{ H}; \wire (0,1)--(7.5,1); \wire[0.5] (0,2.5)--(0.5,2.5)--(1,2)--(2,2); \sqbox{2,1.5}{$\gamma$}{2} \wire[0.5] (0,4)--(3,4)--(4,2)--(5.5,2); \sqbox{5.5,1.5}{$\gamma'$}{2} \llabel{0,2.5}{ S}; \llabel{0,4.2}{ T}; \llabel{0,0.7}{ \Theta}; \rlabel{7.5,1}{ \,\,\,\Theta''}; \right\rangle$. The appropriate notion of equivalence is defined in the supplementary information. Alternatively, $\mathsf{Filt}$ may be viewed as a bicategory, with 2-cells given by a generalisation of sufficient statistics. The proof and further details are given in the supplementary information. It is interesting to note that the category $\mathsf{Filt}$ has some similarity to the categories of lenses or optics that have previously been used to model Bayesian updates two of Toby's papers, as well as agents in a game-theoretic or control-theoretic context e.g. Bayesian open games, backprop as functor, towards foundations. However, it is different in that $\mathsf{Filt}$ lacks the bidirectional quality of lenses and optics. Its morphisms consist of pairs of morphisms of the underlying category, but both point in the same direction instead of one of them being reversed. We conjecture that a more lens-like bidirectional structure would be needed if we were to consider Bayesian smoothing rather than Bayesian filtering. It will be interesting in future work to better understand the relationship between lens-like categories and the concept of interpretation developed in the present work.
Some steps have been taken in this direction recently cyber kittens, towards foundations, Translating Extensive Form Games to Open Games with Agency. These works share a common theme with the present work, in that they identify a relationship between agency and parametrisation.

§.§ The category $\mathsf{Filt}$

Here we restate in more detail the definition of the category $\mathsf{Filt}$, in both its category form and its bicategory form. As in the main paper, we assume we are working in a Markov category $\mathscr{C}$. We use the term “space” or “measurable space” for objects of $\mathscr{C}$ and “Markov kernel” for morphisms of $\mathscr{C}$. We start by defining a parametrised family of distributions. These will be the objects of $\mathsf{Filt}$. A parametrised family of distributions (or family) $X$ is a choice of measurable spaces $\Theta_{X}$ and $H_{X}$, together with a Markov kernel \wire (-0.3,1)--(4.5,1); \stoch{2,1}{$\psi_X$}{2} \llabel{-0.3,1}{\Theta_X\,\,\,\,}; \rlabel{4.5,1}{\,\,\,\,H_X}; We then define a Bayesian filtering update between two parametrised families. In the category version of $\mathsf{Filt}$, the morphisms will be equivalence classes of filtering updates; in the bicategory version, filtering updates will be the 1-cells.
Given parametrised families $X$ and $Y$, a Bayesian filtering update $u$ from $X$ to $Y$ is a choice of measurable space $S_u$, a Markov kernel \wire (0,-1)--(-4,-1); \wire[0.5] (0,-3)--(-0.5,-3)--(-1,-2)--(-2,-2); \stoch{-2,-1.5}{$f$}{2} \rlabel{0,-3}{ S}; \rlabel{0,-1}{ H\sprime}; \llabel{-4,-1}{ H};

§.§ Interpretations and reasoners

We now describe our central concept: an interpretation of a machine. Here we will present only two kinds of interpretations, Bayesian interpretations and Bayesian filtering interpretations, but we expect these to fit naturally into a much broader family of concepts. The idea is that an interpretation is a function that maps the physical state of a machine to some other mathematical object, which can be thought of as a set of possible beliefs about some external world. In the case of Bayesian interpretations and Bayesian filtering interpretations, this object is the set of probability measures over some hidden variable. An interpretation will also generally be required to satisfy some kind of consistency requirement.
In the case of Bayesian and Bayesian filtering interpretations, this consistency requirement is given by Bayes' theorem, as we explain below. As we will show, not every machine can be consistently interpreted in every possible way; the existence of a non-trivial consistent interpretation can put strong constraints on the possible dynamics that a machine can have. Given a machine $\gamma$ and a consistent interpretation, we will refer to the two together as a reasoner. The idea is that a machine by itself is merely a (possibly stochastic) dynamical process, but if a consistent interpretation exists then it is at least consistent to ascribe a meaning to its states. Specifically, we can think of each state as corresponding to a probability distribution representing a subjective state of knowledge, and these distributions will update correctly as the machine receives new inputs. As possible extensions of this idea, one can also imagine weaker kinds of consistency requirement, which might correspond to Jeffrey updating Jeffrey; see also Jacobs, or to approximate Bayes of some form, perhaps based on some form of the free energy principle. One could also imagine applying these ideas to machines that have outputs as well as inputs, which might allow concepts such as goals, plans and actions to be formalised via the notion of interpretation, in addition to Bayesian beliefs. However, we leave the development of these notions to future work, and focus here only on interpretations in terms of exact Bayesian inference and filtering. It should be noted that, for a given machine, the question of whether it can be consistently interpreted in a particular way is in principle an empirical one, since it depends on the machine's update kernel, which can in principle be measured. However, in general a given machine might have multiple non-equivalent consistent interpretations, and one cannot distinguish between these empirically by looking only at the system's internal dynamics. 
[We leave open the possibility that they could be distinguished by looking at some broader context, e.g. by discovering that a device's designer intended a particular interpretation, or that evolution selected for a particular interpretation.] Consequently, the relationship between interpretations and the empirical, physical world is rather subtle, and one should keep in mind that our notion of “reasoner” unavoidably involves an element of choice in which interpretation to adopt. In string diagram notation, the update kernel can be written as $\!\!\!\stringdiagram{ \wire (0,1)--(4,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \stoch{2,1.5}{$\gamma$}{2} \llabel{0,3}{ S}; \llabel{0,1}{ Y}; \rlabel{4,1}{ Y}; The fact that it is able to take sequences of inputs follows from its type: two copies of $\gamma$ can be composed in a particular way to form $\!\!\!\stringdiagram{ \wire (0,1)--(7.5,1); \wire[0.5] (0,2.5)--(0.5,2.5)--(1,2)--(2,2); \stoch{2,1.5}{$\gamma$}{2} \wire[0.5] (0,4)--(3,4)--(4,2)--(5.5,2); \stoch{5.5,1.5}{$\gamma$}{2} \llabel{0,2.5}{ S}; \llabel{0,4.2}{ S}; \llabel{0,0.7}{ Y}; \rlabel{7.5,1}{ Y}; This can be read as a kernel that takes a state $y\in Y$ and a sequence of two inputs in $S$, and returns another state in $Y$ stochastically, as a result of feeding the two inputs to the machine in sequence. This generalises to sequences of any length. (Some care must be taken over associativity but this can be done in a standard way.)
\begin{equation} \label{jacobs-equation-b} \stringdiagram { \wire (0,0) -- (9,0); \ulabel{0,0}{Y}; \ulabel{9,0}{H}; \ulabel{3.5,0}{H}; \blackdot{4,0}; \wire[2] (4,0) -- (4,2) -- (9,2); \ulabel{9,2}{S}; \stoch{2,0}{$\psi$}{3}; \stoch{6.5,2}{$\phi$}{3}; \, = \! \pbox{ \stringdiagram { \wire (0,0) -- (14,0); \ulabel{0,0}{Y}; \ulabel{14,0}{H}; \blackdot{1,0}; \wire[2] (1,0) -- (1,2) -- (14,2); \ulabel{14,2}{S}; \stoch{3,2}{$\psi$}{3}; \ulabel{4.25,2}{H}; \stoch{5.5,2}{$\phi$}{3}; \sqbox{9,0}{$\gamma$}{3}; \blackdot{7,2}; \ulabel{7.5,2}{S}; \wire[1.5] (7,2) -- (7,0.5) -- (9,0.5); \ulabel{10.3,0}{Y}; \stoch{11.5,0}{$\psi$}{3}; \end{equation}

§ TECHNICAL PRELIMINARIES

In order to fill in some of the details sketched above, we introduce some concepts and notation that have been developed in the context of category theory Fritz, Jacobs, …. We present here only an informal sketch of the ideas. An introduction to category theory and the string diagram notation can be found in a short but accessible form in Baez and Stay or in a longer text-book format in Seven Sketches. For a comprehensive technical exposition of their application to probability theory via so-called Markov categories, see Fritz; see also the representable Markov cats paper. The framework has been used previously in a context related to Bayesian updates and the free energy principle Toby. The central concepts are those of measurable space and Markov kernel. A measurable space is a set equipped with a $\sigma$-algebra. The basic idea is that a measurable space is the kind of thing on which one can define a probability measure, but unlike a probability space it does not have any measure defined on it by default. Often, but not always, we will take our measurable spaces to be finite sets, in which case we always take their $\sigma$-algebras to be their power sets.
In the supplementary information we give a formal definition of Markov kernels, along with the category-theoretic calculus we use to reason about them, aimed at readers without a category theory background. However, the basic idea is that for measurable spaces $X$ and $Y$, a Markov kernel $\kappa$ with source $X$ and target $Y$ is a function that maps elements of $X$ to probability measures over $Y$ (along with a technical requirement that this function be measurable in a certain sense). Following Toby we write such a kernel as $\kappa\colon X\todot Y$, using the symbol $\todot$ to distinguish Markov kernels from ordinary functions. Sometimes we will think of a Markov kernel as being an ordinary deterministic function that takes an element of $X$ and returns a probability distribution over $Y$. However, we will also sometimes think of it as a stochastic function, which takes an element $x\in X$ and returns a randomly chosen element of $Y$, sampled according to a probability measure that might depend on $x$. (These two interpretations can be distinguished formally using the machinery of representable Markov cats paper, but we will not make use of that here.) We write $\kappa(y\mg x)$ to denote the probability that the kernel $\kappa$ assigns to the outcome $y\in Y$ when given the input $x\in X$. We use a thick vertical line to indicate a close relationship to conditional probability while also emphasising that the concept is different: given a kernel $\kappa\colon X\todot Y$ the quantities $\kappa(y\mg x)$ are always defined, regardless of whether any probability distribution has been defined over $X$, and regardless of whether $x$ has a nonzero probability according to such a distribution. (In fact, with a little care this same notation can be used even in the general measure-theoretic case, where $X$ and $Y$ are not necessarily finite Fritz, section 2.8. In this case $x$ and $y$ are interpreted as abstract indices rather than elements of $X$ and $Y$.)
A measurable space is a set equipped with a $\sigma$-algebra, or formally a tuple $(X,\Sigma)$, where $X$ is a set and $\Sigma\subseteq \mathcal{P}(X)$ is a set of subsets of $X$ that obeys the axioms of a $\sigma$-algebra. Often, but not always, our measurable spaces will be finite sets, in which case we will always assume $\Sigma=\mathcal{P}(X)$. Given two measurable spaces $X$ and $Y$, a Markov kernel, which we write $f\colon X\todot Y$, is a measurable function $f\colon X\to P(Y)$, where $P(Y)$ is the space of probability distributions over $Y$, which is itself a measurable space. We first introduce the framework in the case of finite probability, and then outline how it generalises to a more general measure-theoretic context that includes continuous probability. We will make use of the more general version later on. For now, let $X, Y, Z\dots$ represent finite sets. Given a finite set $X$, we write $P(X)$ for the set of all probability distributions over $X$. In this finite case, $P(X)$ may be thought of as a $(|X|-1)$-dimensional simplex, given by the set of vectors in $\mathbb{R}^{|X|}$ whose components are nonnegative and sum to 1. Given finite sets $X$ and $Y$, a Markov kernel $f\colon X\todot Y$ is defined as a function $f\colon X\to P(Y)$. The kernel $f$ can be thought of as a set of probability distributions over $Y$, parametrised by elements of $X$.
Alternatively, it may be thought of as a stochastic function, that is, something that takes an input in the underlying set of $X$ and then produces an output in $Y$, chosen randomly according to a distribution that depends on the input. (A formal distinction between these interpretations can be made using the machinery of representable Markov cats, but we will not make use of that here.) In the finite case $f$ may also be thought of as a stochastic matrix (i.e. one whose entries are nonnegative and whose columns sum to 1), representing a linear map that maps distributions over $X$ to distributions over $Y$. Following Toby we use the symbol $\todot$ to distinguish Markov kernels from ordinary functions. We write $f(y\mg x)$ to denote the probability that the kernel $f$ assigns to the outcome $y\in Y$ when given the input $x\in X$. We use a thick vertical line to indicate a close relationship to conditional probability while also emphasising that the concept is different: given a kernel $f\colon X\todot Y$ the quantities $f(y\mg x)$ are always defined, regardless of whether any probability distribution has been defined over $X$, and regardless of whether $x$ has a nonzero probability according to such a distribution. Other common notations include $|$ or $;$ in place of $\mg$. Given Markov kernels $f\colon X\todot Y$ and $g\colon Y\todot Z$, we can compose them to form a new kernel of type $X\todot Z$, which we write $f\comp g$, given by \begin{equation} (f\comp g)(z\mg x) = \sum_{y\in Y} g(z\mg y)\,f(y\mg x), \end{equation} which is a discrete version of the Chapman-Kolmogorov equation. There are two important special cases of Markov kernels that have special notation. The first is that for every finite set $X$ there is a kernel $\id_X\colon X\todot X$ corresponding to the identity matrix, which simply returns its input unchanged with probability 1. The second important special case is a Markov kernel $p\colon \one \todot X$, where $\one = \{\star\}$ is a set with a single element.
(The identity of the element does not matter.) In this case the kernel $p$ can be defined by choosing a single probability distribution over $X$, which is the kernel's output when given its only possible input. In fact we think of distributions over $X$ and kernels of type $\one\todot X$ as the same thing, and we write $p(x)$ rather than $p(x\mg \star)$. (Note that the symbol $p$ here refers to the name of the kernel, rather than being a generic symbol for probability.) We also use a notation called string diagrams for Markov kernels. String diagrams can be applied to a much wider range of mathematical topics than probability (Baez+Stay is a good overview of some other applications). Here we present them only in the context of Markov kernels, as used by Cho+Jacobs, Fritz. We choose to orient our diagrams horizontally and generally write kernels with curved edges. In general, in our context, every string diagram represents a Markov kernel, which might be built up out of other Markov kernels. We denote a kernel $f\colon X\todot Y$ by the diagram $\!\!\!\stringdiagram{ \wire (0,1)--(4,1); \stoch{2,1}{$f$}{2} \llabel{0,1}{X}; \rlabel{4,1}{Y}; }$. Each wire represents a finite set. Composition via the Chapman-Kolmogorov equation is written as \begin{equation} \stringdiagram { \wire (0,0) -- (4,0); \stoch{2,0}{$f\comp g$}{3}; \ulabel{0,0}{A}; \ulabel{4,0}{C}; } \,=\, \pbox{\stringdiagram { \wire (0,0) -- (7,0); \stoch{2,0}{$f$}{3}; \ulabel{0,0}{A}; \ulabel{3.5,0}{B}; \stoch{5,0}{$g$}{3}; \ulabel{7,0}{C}; }} \end{equation} Here each side of the equation represents a Markov kernel and we are asserting that the two kernels are equal. For this to make sense, the types have to match, meaning that the “dangling wires” on each side of the two diagrams must match.
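The finite composition rule can be made concrete in a few lines of code. The following sketch (our own illustration, with made-up toy kernels, not an example from the text) represents a finite Markov kernel as a dict mapping each input to an output distribution:

```python
# Finite Markov kernels as Python dicts: kernel[x] is a probability
# distribution over the target set, represented as {y: probability}.

def compose(f, g):
    """Chapman-Kolmogorov composition: (f ; g)(z | x) = sum_y g(z | y) f(y | x)."""
    out = {}
    for x, fy in f.items():
        dist = {}
        for y, p in fy.items():
            for z, q in g[y].items():
                dist[z] = dist.get(z, 0.0) + p * q
        out[x] = dist
    return out

# Toy kernels f : X -> Y and g : Y -> Z on small finite sets.
f = {"x0": {"y0": 0.5, "y1": 0.5}, "x1": {"y0": 1.0}}
g = {"y0": {"z0": 0.2, "z1": 0.8}, "y1": {"z0": 1.0}}

fg = compose(f, g)
# Each output distribution still sums to 1, so fg is again a Markov kernel.
assert all(abs(sum(d.values()) - 1.0) < 1e-12 for d in fg.values())
print(fg["x0"])  # {'z0': 0.6, 'z1': 0.4}
```

Note that a kernel with a one-element source, in this representation, is just a dict with a single key, i.e. a single distribution, matching the identification of distributions with kernels out of $\one$.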
As a special case, we write a one-element set as no wire, so that a probability distribution over $X$ (that is, a kernel $p\colon \one\todot X$) is written simply as $\!\stringdiagram{ \wire (2,1)--(4,1); \stoch{2,1}{$p$}{2} \rlabel{4,1}{X}; }$. String diagrams come into their own when we consider joint distributions. For example, we might want to consider a function $h\colon A\times B\to P(C\times D)$, which we write as a Markov kernel $h\colon A\otimes B\todot C\otimes D$. Here $A\otimes B$ and $C\otimes D$ represent the product spaces $A\times B$ and $C\times D$. We write such spaces of joint distributions as pairs of parallel wires, so that the kernel $h$ is drawn with two input wires and two output wires. Two central concepts are measurable spaces and Markov kernels. A measurable space is a set equipped with a $\sigma$-algebra. This is a similar concept to a measure space or a probability space, but without a specified measure defined on it. Given a measurable space $X$, we write $P(X)$ for the set of all probability measures over $X$. In fact $P(X)$ itself can be made into a measurable space. We will often consider the case where a measurable space is a finite set, in which case we will always assume its associated $\sigma$-algebra is its power set. Given measurable spaces $X$ and $Y$, a Markov kernel $f\colon X\todot Y$ can be defined as a measurable function $f\colon X\to P(Y)$. That is, $f$ can be thought of as a set of probability distributions over $Y$, parametrised by elements of $X$. Alternatively, it may be thought of as a stochastic function, that is, something that takes an input in the underlying set of $X$ and then produces an output in $Y$, chosen randomly according to a distribution that depends on the input. Following Toby we use the symbol $\todot$ to distinguish Markov kernels from ordinary functions. Equations for the DE machine version. I am thinking there should be an introduction to Jacobs' version of conjugate priors before this.
It would basically note that the update function already looks kind of like a machine, but it would note that there are a couple of issues: first, we need the update function to be deterministic, otherwise it doesn't work (for the reasons we discussed), and second, Jacobs' equation says nothing about what the machine should do in the case of subjectively impossible inputs. The text would explain roughly why that's a problem. So we motivate the definitions below by saying they exist to solve those issues. Given a measurable space $S$, an $S$-machine is a tuple $(M,\mu)$, consisting of a measurable space $M$, together with a Markov kernel of type $\mu\colon M\otimes S\todot M$. The set $M$ is called the state space of the machine, and $\mu$ is called its update kernel. We can write the update kernel in string diagrams as $\!\!\!\stringdiagram{ \wire (0,1)--(4,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \stoch{2,1.5}{$\mu$}{2} \llabel{0,3}{ S}; \llabel{0,1}{ M}; \rlabel{4,1}{ M}; }$. Often we will refer to a machine $(M,\mu)$ by its update kernel $\mu$, leaving the state space implicit. The intuition is that a machine can start in a state $m\in M$ and receive an input $s\in S$, and as a result change its state to some other $m'\in M$. This next state may be chosen stochastically, and in general will depend on both the previous state and the input. The process of receiving inputs can be iterated. For example, the following diagram represents a kernel of type $M\otimes(S\otimes S\otimes S)\todot M$ formed by composing three copies of $\mu$. It can be thought of as receiving a string of three inputs and outputting the resulting state.
can restructure to remove this diagram if we need to save space \begin{equation*} \stringdiagram{ \wire (0,1)--(10,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \stoch{2,1.5}{$\mu$}{3} \wire[1] (0,4) -- (2,4) --(4,2)--(5,2); \stoch{5,1.5}{$\mu$}{3} \wire[1] (0,5) -- (4,5) --(7,2)--(8,2); \stoch{8,1.5}{$\mu$}{3} \llabel{0,3}{S}; \llabel{0,4}{S}; \llabel{0,5}{S}; \llabel{0,1}{M}; \rlabel{10,1}{ M}; \dlabel{3.5,1}{ M}; \dlabel{6.5,1}{ M}; } \end{equation*} It is worth noting that the notion of a machine plays two roles in our work: sometimes a machine will represent a physical system, whose state space and input space are interpreted physically, with its update function modelling its true physical dynamics. However, machines will also sometimes play a more abstract role in our definitions, and not every machine we define will necessarily exist as a physical system. Given an $S$-machine $(M,\mu)$ and a $T$-machine $(N,\nu)$, a map of machines $f\colon \mu\to\nu$ consists of two deterministic kernels, $f_\textnormal{state}\colon M\to N$ and $f_\textnormal{input}\colon S\to T$, such that \begin{equation} \stringdiagram{ \wire (-4,1)--(4,1); \wire (-4,3)--(0,3); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \stoch{2,1.5}{$\nu$}{3} \sqbox{-1.6,0.8}{$f_\textnormal{state}$}{3} \sqbox{-1.6,3.3}{$f_\textnormal{input}$}{3} \llabel{-4,3}{S}; \llabel{-4,1}{M}; \dlabel{0.4,1}{N}; \ulabel{0.4,3}{T}; \rlabel{4,1}{N}; } \,=\, \pbox{\stringdiagram{ \wire (0,1)--(7.25,1); \wire[0.5] (0,3)--(0.5,3)--(1,2)--(2,2); \stoch{2,1.5}{$\mu$}{3} \sqbox{5,1.5}{$f_\textnormal{state}$}{3} \llabel{0,3}{S}; \llabel{0,1}{M}; \dlabel{3.25,1}{M}; \rlabel{7.25,1}{N}; }} \end{equation} If both of the functions $f_\text{state}$ and $f_\text{input}$ are surjective we say $f$ is a surjective map of machines. Similarly, if both of the underlying functions $f_\text{state}$ and $f_\text{input}$ are injective we say $f$ is an injective map of machines.
if needed, we can define “isomorphism,” “isomorphic,” and “identity-on-$S$ map” here as well. To get an intuition for the concept it is helpful to consider the surjective case first. In this case, the functions $f_\text{input}$ and $f_\text{state}$ can be seen as coarse-graining the input space and the state space, in such a way that the dynamics are preserved. That is, coarse-graining the state and input followed by an update of $\nu$ must be the same as first performing an update of $\mu$ and then coarse-graining the state space. This is a similar concept to the so-called lumpability of Markov processes. The definition also demands that $\nu$ itself be a machine, which is an extra condition not implied by the lumpability condition itself. (See the paper Martin recommended for an explanation of this point, in relation to Markov processes rather than machines.) To discuss with Martin: what's the one-sentence way to correctly explain what this definition amounts to, in terms of that paper? This intuition is probably helpful but can be removed to save space (except for the last sentence of this paragraph). We can also consider the case of an injective map of machines $f\colon \mu\to \nu$. In this case we can think of $\mu$ as a kind of “sub-machine” of $\nu$, in that the states in $M$ correspond to a subset of the states in $N$, and similarly $S$ corresponds to a subset of $T$. The condition in <ref> ensures that these sets embed into $\nu$ in such a way that the dynamics are preserved, meaning in this case that if $\nu$ starts in a state corresponding to a state in $M$, and if it only receives inputs corresponding to inputs in $S$, then it will behave exactly like the machine $\mu$. In general a map of machines need not be surjective or injective, meaning that some states in $M$ and $S$ may be coarse-grained, while some states in $T$ and $N$ might have no states that map to them.
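Both the iterated update and the map-of-machines condition are easy to check computationally on toy examples. The following sketch is our own illustration (not an example from the text): $\mu$ is a deterministic machine on $\mathbb{Z}_4$ that adds its input mod 4, coarse-grained onto the parity machine $\nu$ on $\mathbb{Z}_2$ by $f_\text{state}(m) = m \bmod 2$ with $f_\text{input}$ the identity.

```python
# Toy surjective map of machines: mu adds its input mod 4 on M = {0,..,3},
# nu adds it mod 2 on N = {0, 1}, f_state(m) = m % 2, f_input = identity.
# Both machines are deterministic, so the condition reduces to an equation
# between functions rather than between distributions.

def mu(m, s):
    return (m + s) % 4

def nu(n, s):
    return (n + s) % 2

def f_state(m):
    return m % 2

def f_input(s):
    return s

def run(update, state, inputs):
    """Iterate the update over a string of inputs (cf. the three-copies diagram)."""
    for s in inputs:
        state = update(state, s)
    return state

# Map-of-machines condition: coarse-grain then update = update then coarse-grain.
assert all(nu(f_state(m), f_input(s)) == f_state(mu(m, s))
           for m in range(4) for s in range(2))

# It follows that the condition also propagates to whole input strings:
word = [1, 0, 1, 1]
assert f_state(run(mu, 3, word)) == run(nu, f_state(3), [f_input(s) for s in word])
```

The last assertion illustrates why the condition is stated for a single update: once it holds for one step, it holds for any iterated composition of $\mu$ as in the diagram above.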
Martin, feel free to ignore anything below this point; you can write the intro however you like, it doesn't have to join up to this. Although ultimately our goal is to understand agents in a much broader sense, we restrict ourselves here to the following very simple scenario, which is a generalisation of the one considered in Biehl and Kanai. We consider a physical system that receives a stream of inputs. The system is modelled as a Markov process that takes inputs (is there a standard term for “Markov process that takes inputs”?), meaning that at each time step its next state is determined stochastically as a function of both its previous state and the input received. The question we ask is: given such a system, can it be consistently interpreted as performing exact Bayesian inference about some hidden variable, and if it can, how many such interpretations are possible? The latter question might at first sound surprising, but we will show in a proposition that if one non-trivial such interpretation exists then so does an infinite set of alternatives. We will show that the requirement of consistency puts fairly strong constraints on the physical systems that can be interpreted as performing inference. It should be noted, however, that the condition of being able to perform exact inference on arbitrarily long sequences of inputs is quite a restrictive one, and one could certainly consider weaker forms of interpretation. We focus on exact inference as a kind of case study, in order to elucidate the relationship of mutual constraint between the dynamical properties of a physical system and its interpretations as an agent performing inference. By an “interpretation,” in this particular restricted case, we mean a function that maps the system's internal state to a space of probability distributions, which we think of as representing the agent's subjective beliefs about some hidden variable $H$.
The hidden variable may or may not in fact correspond to something that exists in the physical world: we do not demand that the agent's beliefs be true, but only that they be consistent. I'm removing the original content of the template, but it can be found at the bottom of the .tex file, after the end document command. After quite a bit of work I was able to fit Simon's equation on a single line — see below. Hopefully future equations will be less work, but at least currently I have to manually position each element of the diagram, so it takes a while to enter them and get them to look nice. Inline string diagrams are possible but it's not really feasible to convey much information. For example, we can say $\!\!\!\stringdiagram{ \wire (0,1)--(4,1); \stoch{2,1}{$\iota$}{2} \llabel{0,1}{M}; \rlabel{4,1}{S}; } = \!\!\!\stringdiagram{ \wire (0,1)--(6,1); \stoch{1.75,1}{$\psi$}{2} \stoch{4.25,1}{$\phi$}{2} \llabel{0,1}{M}; \rlabel{6,1}{S}; }$, but there isn't really even enough space to fit an $H$ label on the intermediate wire, without increasing the height of the equation above the height of a normal line. So probably we won't want to use inline diagrams in this paper. (It works better if you only have dots and wires.) We require the interpretation map $M\xklto{\psi} H$ and the agent's model $H\xklto{\phi} S$ to obey the following consistency equation: \begin{equation} \stringdiagram { \wire (0,0) -- (11.5,0); \ulabel{0,0}{M}; \ulabel{11.5,0}{M}; \blackdot{1,0}; \wire[2] (1,0) -- (1,2) -- (11.5,2); \ulabel{11.5,2}{S}; \stoch{3,2}{$\psi$}{3}; \stoch{6,2}{$\phi$}{3}; \blackdot{4.5,2}; \ddlabel{4.5,2}{H}; \stoch{9.5,0.25}{$\mu$}{3}; \blackdot{7.5,2}; \ulabel{8,2}{S}; \wire[1.5] (7.5,2) -- (7.5,0.5) -- (9.5,0.5); \wire[2] (4.5,2) -- (4.5,4) -- (11.5,4); \ulabel{11.5,4}{H}; }
% \showgrid
\, = \!
\pbox{ \stringdiagram { \wire (0,0) -- (15,0); \ulabel{0,0}{M}; \ulabel{15,0}{M}; \blackdot{1,0}; \wire[2] (1,0) -- (1,2) -- (15,2); \ulabel{15,2}{S}; \stoch{3,2}{$\psi$}{3}; \ulabel{4.25,2}{H}; \stoch{5.5,2}{$\phi$}{3}; \stoch{9,0.25}{$\mu$}{3}; \blackdot{7,2}; \ulabel{7.5,2}{S}; \wire[1.5] (7,2) -- (7,0.5) -- (9,0.5); \wire[4] (10.5,0) -- (10.5,4) -- (15,4); \blackdot{10.5,0}; \ulabel{11.3,0}{M}; \stoch{13.2,4}{$\psi$}{3}; \ulabel{15,4}{H}; }}
% \showgrid
\end{equation} This says that … The following is copied-and-pasted from elsewhere; the notation will need to be changed (the rest to match this, or this to match the rest). We build on the formal definition of conjugate priors given by Jacobs, which we sketch here, using the language of string diagrams. In this graphical notation, we write a Markov kernel $f\colon X\todot Y$ as \begin{equation} \pbox{\stringdiagram{ \wire(0,0) -- (4,0); \stoch{2,0}{$f$}{3}; \ulabel{0,0}{X}; \ulabel{4,0}{Y}; }} \end{equation} This should be thought of as a function that maps elements $x\in X$ to probability distributions over $Y$. We can use Markov kernels to model either physical processes (in which case we interpret $X$ as the space of possible inputs and $Y$ as the space of possible outputs) or statistical models, in which case we regard $X$ as the set of possible parameter values and $Y$ as the sample space. The Markov category framework provides various ways to compose Markov kernels together, either in series (which can be thought of as feeding the output of one process into the input of another), or in parallel, which can be thought of in terms of two processes operating on separate variables independently. Jacobs' definition of a conjugate prior may be stated within this framework. Let $q\colon\Theta\todot X$ be a statistical model with parameter space $\Theta$ and sample space $X$.
We say that $q$ admits a conjugate prior if there exist (i) another statistical model $p\colon\Psi\todot\Theta$, whose sample space is $\Theta$, and (ii) a deterministic function $h\colon \Psi\times X\to \Psi$, such that the following holds. (This equation can be made vertically smaller by laying it out in the style of <ref>. Then it will only be 2 units high, not 3.) \begin{equation} \label{conjugate-prior-def} \stringdiagram{ \wire(0,0) -- (3.5,0); \stoch{2,0}{$p$}{3}; \ulabel{0,0}{\Psi}; \rlabel{3.8,0}{\Theta}; \blackdot{3.5,0}; \wire[2.5] (3.5,0) -- (3.5,2.5) -- (7.5,2.5); \ulabel{7.5,2.5}{X}; \stoch{5.75,2.5}{$q$}{3}; \wire[2.5] (3.5,0) -- (3.5,-2.5) -- (7.5,-2.5); \ulabel{7.5,-2.5}{\Theta}; } \,=\, \pbox{ \stringdiagram{ \wire(-3,0) -- (3.5,0); \stoch{-0.3,0}{$p$}{3}; \ulabel{0.9,0}{\Theta}; \stoch{2.1,0}{$q$}{3}; \ulabel{-3,0}{\Psi}; % \uulabel{-2,0.1}{\Psi};
\blackdot{-2,0}; \wire[2.5] (-2,0) -- (-2,-2.8) -- (5.75,-2.8); \rlabel{3.8,0}{X}; \blackdot{3.5,0}; \wire[2.5] (3.5,0) -- (3.5,2.5) -- (10,2.5); \ulabel{10,2.5}{X}; \wire[2.2] (3.5,0) -- (3.5,-2.2) -- (5.75,-2.2); \wire(5.75,-2.5) -- (10,-2.5); \sqbox{5.75,-2.5}{$h$}{3}; \ulabel{7,-2.5}{\Psi}; \stoch{8.2,-2.5}{$p$}{3}; \ulabel{10,-2.5}{\Theta}; }} \end{equation} The parameters $\psi\in\Psi$ of $p$ are called the hyperparameters. The function $h$ takes the value of the hyperparameters along with a sample from $X$ and returns the updated value of the hyperparameters. The equation expressed in string diagrams ensures that the behaviour of these functions is consistent with Bayesian inference. The model of Biehl and Kanai may be expressed in this framework by interpreting Equation <ref> differently. We regard $\Psi$ as the agent's physical state and $h$ as the function that updates it according to its observations in $X$. We interpret $q$ as the agent's model of $X$, and $p$ as the interpretation map, sending its internal state to a distribution over model parameters. Only $\Psi$, $h$ and the observations $X$ are part of the physical system.
The other Markov kernels constitute the interpretation map, with $\Theta$ being a hidden variable in the agent's assumed model. Seen this way, the equation above becomes the defining equation for an interpretation map, for an agent that performs exact inference. § OLD INTRO A question of general interest that also plays a central role in the Free Energy Principle (FEP) is which systems are performing Bayesian inference. More precisely: what are the necessary and sufficient conditions under which it is justified to say that a system performs Bayesian inference? This question is of general interest by itself because Bayesian inference is in some sense the optimal way to reason about uncertain events cox, which makes systems (including biological ones) performing Bayesian inference special bayes in biology. This question is also of interest as part of the broader question of which systems are intelligent, since Bayesian inference plays an important role in current theories of intelligence and rationality. The Free Energy Principle itself may be seen as a theory of intelligence that relies on Bayesian inference, but it is not the only one <cit.>. We now give some informal motivation for our proposal to answer this question. Readers mainly interested in the formal development may skip ahead to refer. §.§ Bayesian inference Briefly, Bayesian inference is a prescription for how to update probabilistic beliefs about hypothetical causes of observations in response to such observations. The set of hypothetical causes is represented by a family of distributions or a (statistical) model, which we will informally write as $\phi(s \mg h)$. Here $h \in H$ are hypotheses or parameters in parameter space $H$ and $s \in S$ are observations in observation space $S$. Note that the parameterized model is often called the likelihood function and written in the style of a conditional probability $p(s|h)$, and sometimes as $p(s;h)$.
In order to keep the exposition in this introduction accessible we avoid measure-theoretic notions and consider probabilistic beliefs as probability mass functions in the discrete case and probability density functions in the continuous case. When there is no need to distinguish them we will refer to both as probability distributions. Bayesian inference then takes a probability distribution (the prior belief) over $H$ and an observation $s \in S$ and transforms them into a new probability distribution (the posterior belief) over $H$. This is a new attempt. Make Bayesian inference about updating probability distributions instead of measures. Then the things that are updated by the well-known form of Bayes rule are not necessarily parameterizations. The other way would be to use probability measures and Bayes rule for probability measures, but only really technical people would get that... Importantly, this transformation (and therefore Bayesian inference) is only well defined if we specify the model $\phi(s \mg h)$ whose parameters the belief ranges over. A complicating issue is that even if a model is defined, Bayesian inference may not specify a posterior belief for every prior belief. We will discuss this issue in the technical part refer, but in this introduction we only consider the case where every prior is transformed to a unique posterior for every observation. Next we want to consider what it could mean that a system performs Bayesian inference. To some degree this depends on what we consider systems to be. So we discuss this first. §.§ Systems For the purpose of this paper a system is a triple $\langle Y,S,\gamma\rangle$ consisting of a state space $Y$, an input space $S$, and a transformation $\gamma$ that takes a state and an input and transforms them into a new state. In this introduction we will assume that $\gamma$ is deterministic, but the formal treatment works for stochastic systems as well.
Many examples of such systems are known to exist (in the mathematical sense) because they are formally defined, i.e. state space, input space, and transformation are all given in some formal language (e.g. ZFC set theory). I am not sure that “ZFC set theory” is really a possible formal language in which all of these things can be unambiguously defined, but I am pretty sure it is possible in some way. For example, let $Y=\{0,1,2\}$, $S=\{0,1\}$, and $\gamma(y,s) \coloneqq y+s \mod 3$. We assume that there are also parts of the physical world that are systems of this kind. These are what we refer to as physical systems. For example, the state space may be the possible configurations of a set of degrees of freedom of some physical matter and the input space may be the set of possible configurations of a disjoint set of degrees of freedom. [There may be other ways in which parts of the physical world form systems of the kind we study here.] All degrees of freedom evolve under the laws of physics, with the time evolution of the state space degrees of freedom possibly depending on the input degrees of freedom. The input-dependent time evolution of the state space degrees of freedom then induces and thereby defines the transformation of such a physical system. An important fact about systems in general is that even though two systems may have different state spaces and input spaces (which also means their transformations are different), they can still be isomorphic. Intuitively this just means that they “behave” in the same way. This will be explained below refer. A simple example of a system that is isomorphic to the example above is $Y=\{0,-1,-2\}$, $S=\{0,-1\}$, and $\gamma(y,s) \coloneqq -y-s \mod 3$. Note that among the formally defined systems there are some that were constructed specifically to model physical systems. Note also that for some physical systems an isomorphic formal system may, by accident or due to somebody's skill, have already been defined.
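The isomorphism between the two example systems can be checked in a few lines. A sketch (note one assumption: we read the second system's "mod 3" as picking representatives in $\{0,-1,-2\}$, since the text leaves the choice of representatives implicit):

```python
# The relabelling phi_Y(y) = -y, phi_S(s) = -s is an isomorphism between
# the system gamma1 on Y = {0, 1, 2}, S = {0, 1} and the negated system
# gamma2 on {0, -1, -2}, {0, -1}. We take "mod 3" in gamma2 to return
# representatives in {0, -1, -2} (an assumption; the text is ambiguous).

def gamma1(y, s):
    return (y + s) % 3

def gamma2(y, s):
    return -((-y - s) % 3)   # lands in {0, -1, -2}

def phi_Y(y):
    return -y

def phi_S(s):
    return -s

# Isomorphism: relabelling the state and input commutes with the dynamics.
assert all(gamma2(phi_Y(y), phi_S(s)) == phi_Y(gamma1(y, s))
           for y in (0, 1, 2) for s in (0, 1))
print("isomorphic")
```

This is the precise sense in which the two systems "behave in the same way": the bijections $\phi_Y$ and $\phi_S$ translate every trajectory of one system into a trajectory of the other.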
With a better idea of what we mean by systems, we now discuss which ones should count as performing Bayesian inference. §.§ Systems that should count as performing Bayesian inference As mentioned before, belief updating according to Bayesian inference is only defined with respect to a model. This means that the question of whether a system performs Bayesian inference is formulated more precisely as whether there exists a model with respect to which the system is performing Bayesian inference. One thing we require from the model is that its observation space is equal to the system's input space. The question is then when a system performs Bayesian inference with respect to a model of its inputs. Note that it is straightforward to (formally) construct a system that performs Bayesian inference with respect to a model of its inputs. At the very least it should be justified to say that such a system performs Bayesian inference. To construct such a system, we start by defining a model $\phi(s\mg h)$ together with its parameter space $H$ and observation space $S$. The input space of the constructed system is defined as the observation space of the model. We can then construct the space of all probability distributions (beliefs) over the parameter space $H$. [To make sure all posteriors are well defined we only allow beliefs that assign positive probability to all parameters.] This forms the state space $Y$ of the constructed system. Any pair $(y,s)$ of state and input is now also a pair of a probability distribution over the parameters of a model (a possible prior) and an observation, such that we can use Bayes rule to obtain a new probability distribution $\gamma(y,s)$, the posterior. This completes the construction of a system that performs Bayesian inference.
For example, for binary observations $S \coloneqq \{0,1\}$ (or equivalently $\{\text{heads},\text{tails}\}$) we can choose the family of probability distributions $\phi(s \mg h)$ given by \begin{align} \phi(s \mg h) \coloneqq h^{\delta_0(s)} (1-h)^{\delta_1(s)} \end{align} where $h \in H$ and $H \coloneqq [0,1]$. This is equivalent to considering $H$ as a set of hypothetical biases that could be causing the outcomes of a coin flip. For the state space $Y$ we can choose the set of positive probability density functions over $H$. We can then define the transformation $\gamma(y,s)$ via Bayes rule: \begin{align} \gamma(y,s)(h) \coloneqq \frac{\phi(s \mg h)y(h)}{\int \phi(s \mg \bar{h})y(\bar{h}) d\bar{h}}. \end{align} The result of $\gamma(y,s)$ is again a positive probability density function over $H$ and therefore an element of $Y$ for all $(y,s) \in Y\times S$. $\gamma$ isn't a Markov kernel in this case; for that we would need a distribution over $Y$ on the r.h.s., not just an element of $Y$. This could be done using a Dirac delta, but it would be nice to avoid it. Maybe it needs a short note. However, apart from systems directly constructed to perform Bayesian inference, there are three other classes of systems that we also consider to be performing Bayesian inference. * Systems that implement Bayesian inference in a (hyper-)parameter space. It is well known that certain statistical models have associated parameterized conjugate priors. In this case Bayesian inference can be, and in practice often is, implemented completely in some parameter space. This parameter space need not be isomorphic to any space of probability distributions and is often a low-dimensional real space $\mathbb{R}^n$. Maybe point to, or quickly describe, an example with a conjugate prior here. Let us explain a bit more how parameterized conjugate priors allow Bayesian inference in parameter space.
The first thing to note is that for some models there are subsets of probability distributions (beliefs) over the model parameters that are closed under Bayesian updates. This just means that if we start with a prior measure from such a subset, then the posterior measure for any observation will always be in this subset again. Often these subsets of probability distributions can be conveniently parameterized. A set $C$ of probability distributions is parameterized by a set $D$ (not necessarily containing probability distributions, and often some $\mathbb{R}^n$) if there is a bijective function $f:D \to C$. Note that parameters of beliefs are sometimes called hyperparameters to distinguish them from the model parameters. Let's assume we have a model that has such a closed subset $C$ of probability distributions over its parameters, parameterized by some set $D$ of hyperparameters. Let $f(d) \in C$ denote the probability distribution associated to hyperparameter $d \in D$ and write $f(d)(h)$ for the probability assigned to parameter/hypothesis $h \in H$ by $f(d)$. If we then pick an arbitrary observation $s \in S$ and hyperparameter $d \in D$, we can take the belief $f(d)$ it parameterizes and use Bayes rule to update it with respect to the observation $s$ to get a posterior belief $p$, which is again in $C$ since $C$ is closed: \begin{align} \label{eq:hyperbayes} p(h)=\frac{\phi(s \mg h)f(d)(h)}{\int \phi(s \mg \bar{h})f(d)(\bar{h}) d\bar{h}}. \end{align} Since $f$ is bijective, we can then use its inverse to find the hyperparameter $d' \coloneqq f^{-1}(p) \in D$ of the posterior. We can therefore define a function $\alpha(d,s)$ mapping the hyperparameter of any prior belief $f(d) \in C$ and observation $s \in S$ to the hyperparameter $d'$ of the posterior belief $f(d')=p$: \begin{align} \alpha(d,s) \coloneqq f^{-1}\left(\frac{\phi(s \mg h)f(d)(h)}{\int \phi(s \mg \bar{h})f(d)(\bar{h}) d\bar{h}}\right).
\end{align} This is what we mean by implementing Bayesian inference in a (hyper)parameter space. Note that $\alpha(d,s)$ is then also the transformation of a system whose state space is $D$ and input space is $S$. Systems of this kind should also be considered as systems performing Bayesian inference. * Systems where only a part performs Bayesian inference. For example, if one component of the system's state space consists of parameters whose dynamics implement Bayesian inference with respect to the input, while another component of the system does something else. Example? * Systems that are isomorphic to a system that performs Bayesian inference. Note that this means that if some formal system performs Bayesian inference and is isomorphic to some physical system, then we can also say that the physical system performs Bayesian inference. The necessary and sufficient condition that we propose to identify systems that perform Bayesian inference is satisfied by these three classes of systems. I guess ideally we would also know that it is satisfied by only those classes, but I don't think we can or necessarily need to. Next we give the main idea behind that condition. §.§ Condition for systems to perform Bayesian inference We now introduce the main idea behind the proposed formal condition (see refer) under which a system performs Bayesian inference with respect to a given model. This also leads to a condition under which a system performs Bayesian inference in general. The condition is in the form of a consistency requirement, or equation, that has to be satisfied between the system's transformation and Bayesian belief updating with respect to the model. Recall that in the case of parameterized conjugate priors there was a bijective function $f$ that mapped hyperparameters to beliefs about the model parameters $H$. In that case we constructed the transformation of the system that implemented Bayesian inference by using Bayes rule <ref>.
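For the coin model, the Beta densities form such a closed set $C$, parameterized by $D = (0,\infty)^2$, and the hyperparameter update $\alpha$ never needs to touch a density at all. A minimal sketch of this standard fact (our own illustration, not taken from the text):

```python
# Hyperparameter-space Bayesian updating for the coin model
# phi(s|h) = h^{delta_0(s)} (1-h)^{delta_1(s)}: with a Beta(a, b) prior
# over h, the posterior after observing s is again Beta, so alpha acts
# on the hyperparameters d = (a, b) directly.

def alpha(d, s):
    a, b = d
    return (a + 1, b) if s == 0 else (a, b + 1)

# Observing 0 twice and 1 once, starting from the uniform prior Beta(1, 1):
d = (1, 1)
for s in [0, 0, 1]:
    d = alpha(d, s)
print(d)  # (3, 2), i.e. the posterior is Beta(3, 2)
```

Here the state space of the resulting system is the set of hyperparameter pairs, not a space of probability distributions, which is exactly the point of this class of examples.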
In the case of a given system $\langle Y,S,\gamma\rangle$ we require the existence of a (not necessarily bijective) function, which we call the interpretation map $g$, from system states $y \in Y$ to beliefs $g(y)$ about the model parameters $h \in H$ that is “consistent with Bayes' rule”. This means that the belief $g(\gamma(y,s))$ associated to the new state $\gamma(y,s)$ must be the posterior belief obtained by considering the belief $g(y)$ associated to the old state $y$ as the prior belief, the input $s$ as the observation, $\phi(s \mg h)$ as the model, and applying Bayes' rule. More formally, a (deterministic) system $\langle Y,S,\gamma\rangle$ performs Bayesian inference with respect to model $\phi(s \mg h)$ if and only if there exists a function $g$ such that for all $y \in Y,s \in S$: \begin{align} g(\gamma(y,s))(h) = \frac{\phi(s \mg h) g(y)(h)}{\int \phi(s \mg \bar{h}) g(y)(\bar{h}) d\bar{h}}. \end{align}

The dynamics of hyperparameters of conjugate priors obey this condition, with the interpretation function being the bijective function $f$. Systems where only a part of the system state is involved in Bayesian inference also satisfy this condition, since the interpretation function can just ignore the rest of the system state. Note that the condition is also satisfied when the state space consists of probability distributions directly and the dynamics of the system are identical to Bayesian inference; in that case a possible interpretation function is just the identity. Finally, if two systems are isomorphic and the first of them obeys the condition, the second does so as well. In this case we can construct the interpretation map of the second system by composing the bijective function that exists between isomorphic systems with the interpretation function of the first system.

To be continued...
Prepublication version. ©2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

# UnScenE: Toward Unsupervised Scenario Extraction for Automated Driving Systems from Urban Naturalistic Road Traffic Data

Nico Weber1,2, Christoph Thiem1, and Ulrich Konigorski2

1 Opel Automobile GmbH, Stellantis NV, 65423 Rüsselsheim am Main, Germany <EMAIL_ADDRESS>

2 Control Systems and Mechatronics Laboratory, TU Darmstadt, 64283 Darmstadt, Germany

###### Abstract

Scenario-based testing is a promising approach to solve the challenge of proving the safe behavior of vehicles equipped with automated driving systems (ADS). Since an infinite number of concrete scenarios can theoretically occur in real-world road traffic, the extraction of relevant scenarios that are sensitive regarding the safety-related behavior of ADS-equipped vehicles is a key aspect for the successful verification and validation of these systems. Therefore, this paper provides a method for extracting multimodal urban traffic scenarios from naturalistic road traffic data in an unsupervised manner, minimizing the amount of (potentially biased) prior expert knowledge needed. Rather than relying on an (expensive) rule-based assignment of extracted concrete scenarios to predefined functional scenarios, the presented method deploys an unsupervised machine learning pipeline. It includes principal feature analysis, feature extraction with so-called scenario grids, dimensionality reduction by principal component analysis, scenario clustering as well as cluster validation.
The approach allows exploring the unknown natures of the data and interpreting them as scenarios that experts could not have anticipated. The method is demonstrated and evaluated for naturalistic road traffic data at urban intersections from the inD and the Silicon Valley dataset. The findings encourage the use of this type of data as well as unsupervised machine learning approaches as an important pillar for a systematic construction of a relevant scenario database with sufficient coverage for testing ADS.

## I INTRODUCTION

The maturity of technical implementations of automated driving systems (ADS) [1] and the resulting fields of application are continuously increasing. While ADS-equipped vehicles promise to contribute significantly to a safer, more efficient and more comfortable future mobility [2], the greatest challenge for a market launch of such systems arises from the need for prior proof of safety of the intended functionality [3] for future operation in real-world road traffic [4], [5]. With regard to an urban operational design domain (ODD), possible increases in traffic safety are particularly relevant, as nearly 70% of accidents involving personal injury in Germany occur in urban areas [6] (data collection period: 2016-2020). Since existing safety validation approaches would require billions of test kilometers under representative conditions before market launch [5], new methods for the release of ADS-equipped vehicles are currently under development. One of these methods is the scenario-based development and test approach, as proposed by the project PEGASUS [7]. Following the paradigm of this approach, the majority of conventional driving kilometers is not challenging enough, and thus a reduction of the test effort is to be expected when exclusively testing relevant scenarios [8]. In recent years, the focus of research into safety validation approaches for ADS has primarily been set on highway applications (e.g., a Highway Chauffeur) [7].
With the approval of the first Level 3 ADS for traffic jam situations within highway environments by the German Federal Motor Transport Authority (KBA) by the end of 2021 [9], series-production ADS-equipped vehicles are expected to be introduced on public roads soon. However, both the objective determination of what relevant means with respect to scenarios for testing ADS and a commonly accepted methodology for the construction of a representative scenario database of sufficient coverage are subjects of ongoing research [2], [10]. The basic challenge here, independent of the ODD, is the enormously high number of traffic situations that the ADS has to cope with, referred to as the open-context problem. The goal of a scenario extraction method can be summarized as the mapping of this infinite-dimensional open context to a finite and manageable set of scenarios [11] that reflects the nature of the traffic dynamics of interest in a sufficiently valid manner for subsequent testing. Real-world data represents a valuable data source for the construction of a comprehensive scenario database [2], [12]. While the construction of such a database solely by expert knowledge already seems extremely challenging for highway environments, even when applying sophisticated statistical approaches [13], it appears to be virtually impossible for urban ODDs. This can be traced back to significant changes in the first four layers of the PEGASUS six-layer model for structuring scenarios [14] when transitioning to urban ODDs. The main reason for this is a substantially increasing variability in terms of both traffic spaces and traffic dynamics. With regard to traffic dynamics, the less rule-based behavior and multimodal interaction of various road user (RU) types is to be considered a crucial aspect. Existing publications concerning data-driven scenario extraction can be divided into rule-based approaches, machine learning based approaches, or a combination of both (e.g., [15], [16], and [17]).
Rule-based approaches have the advantage of not relying on large amounts of annotated (labeled) data. However, the complexity of the rules to be implemented, and thus the required effort, increases with the complexity of the reality of interest [18] to be captured. On the one hand, this leads to potentially undetected known, unsafe scenarios. On the other hand, it is impossible to explore previously unknown, unsafe scenarios, which, however, are of great importance for a reliable safety argumentation. Supervised machine learning approaches, in turn, require a sufficient amount of labeled data. As these labels are partly unknown in advance, discovering them must be an intrinsic part of a comprehensive scenario extraction method before respective supervised approaches can be deployed. Therefore, this paper proposes a method for extracting multimodal urban traffic scenarios from naturalistic road traffic data in an unsupervised manner, minimizing the amount of prior expert knowledge required, as shown in Fig. 1. Based on knowledge gained from initial rule-based investigations [19], we follow a generic clustering procedure [20] for exploring the unknown natures of the data, which are interpretable as scenarios [21]. The method utilizes principal feature analysis [22] for feature selection and proposes the use of scenario grids for a scenario representation compatible with the application of clustering algorithms for static data. We emphasize a scenario representation compatible with established approaches in the field of computer vision, such as convolutional neural networks, for a future deployment of the method within a semi-supervised machine learning pipeline. For the evaluation of the method, hierarchical agglomerative clustering (HAC) is applied after reducing the dimensionality by principal component analysis. The method is evaluated using the inD dataset [23] and the Silicon Valley Intersections dataset [24].
The evaluation of the clustering results by comparison with underlying trajectories and a rule-based baseline approach shows promising results. It appears that unsupervised machine learning methods can make a valuable contribution to the construction of a relevant scenario database with sufficient coverage for testing ADS. The main contributions of this paper are as follows: provision of an overview of existing time series clustering methods and data-driven scenario extraction approaches (Sec. II), description of a method for Unsupervised Scenario Extraction (UnScenE) from naturalistic road traffic data (Sec. III), exemplary implementation and evaluation of the scenario extraction method (Sec. IV) and, finally, an elaboration on the results showing future research directions (Sec. V).

## II Background

### II-A Requirements Specification

The specification of requirements for the scenario extraction method should be accompanied by a prioritization of them in order to develop a common understanding and to achieve a focus on the most important aspects of the development task at hand. We utilize the MoSCoW prioritization, where each requirement is marked as must (M), should (S), could (C) or won't (W) according to its importance. Note that requirements marked as W are potentially as important as the ones marked with M, but can be left for a future release of the implementation [25]. The following requirements for the scenario extraction method were identified:

* The scenario extraction method must allow the processing of naturalistic road traffic data by means of (multivariate) time series of different lengths (M1).
* The scenario extraction method must allow a clustering of multimodal urban traffic scenarios with a varying number of road users (M2).
* The scenario extraction method must be applicable to different urban traffic spaces with as little adaptation effort as possible (M3).
* The scenario extraction method must include a feature representation that is compatible with supervised machine learning approaches (M4).
* The scenario extraction method must allow a slicing of time series data into meaningful sub-sequences (M5).
* The scenario extraction method should have a feature representation that includes multiple road user dynamics simultaneously at the scenario level (S1).
* The scenario extraction method should minimize the amount of prior expert knowledge required for its application (S2).
* The scenario extraction method is intended to enable scenario clustering without including road network information (C1).
* The scenario extraction method is intended to enable scenario clustering based on different data sources in future releases (W1).

### II-B Clustering of Time Series Data

The major strategies for clustering time series data can be divided into raw-data based, feature-based, and model-based approaches [26]. Raw-data based (direct) approaches typically compare different time series using established clustering algorithms by replacing the distance or similarity measure for static data with an appropriate one for time series, e.g., dynamic time warping [20].

Figure 1: UnScenE method for extracting multimodal urban traffic scenarios.

Model-based approaches assume that the time series under investigation are based on an underlying model or that they are determined by a combination of probability distributions, and are evaluated by means of a similarity measure between fitted models [27]. Both model-based and feature-based approaches can be categorized as indirect approaches, as they first convert raw time series data into a feature vector of lower dimension and perform the clustering on the model parameters or the feature vector, respectively, rather than on the raw time series data themselves [26].
Feature-based (representation-based [28]) approaches can be further divided into strategies deploying manual or automated feature extraction (Representation Learning [29]). Despite the enormous number of existing approaches within the major strategies described, we are not aware of any implementation that meets all of the requirements specified. This is mainly caused by three characteristics within the nature of the problem at hand: First, the scenario extraction method must be able to deal with time series of different lengths, as different road users have different velocities while passing through the observation area (cf. Req. M1). Second, the method has to cope with multivariate time series, since a relevant scenario is composed of at least two road users, potentially involving multiple time series per road user for a meaningful scenario representation (cf. Req. M2). Third, the extraction method has to cope with a varying number of time series to be considered, because semantically similar scenarios can include different numbers of road users (cf. Req. M2 and S1). Therefore, we deploy a novel approach for scenario representation that is tailored to the problem at hand, allowing the use of established clustering algorithms for static data.

### II-C Data-Driven Scenario Extraction

Most of the approaches dealing with data-driven scenario extraction are developed for highway use cases (e.g., [4], [16], [30], [31], and [32]). Elpsas et al. [17] propose a method where rule-based detections are used to train fully convolutional networks for extracting lane change and cut-in scenarios. Kerber et al. [33] define a custom scenario distance measure, considering the occupancy and relative distances of vehicles in eight slots surrounding the ego-vehicle, for subsequent HAC. The authors mention that the method is not suited for urban scenario extraction without modifications. Montanari et al.
[34] encourage the approach of extracting scenarios from real driving data in an unsupervised manner. They describe a method for slicing bus communication data of test vehicles, extract features from these slices, and cluster them into different highway scenarios using HAC. As the traffic dynamics within urban ODDs are characterized by multimodal interaction of various road user types, the scenario extraction method presented in this paper shall be particularly able to extract scenarios based on naturalistic road traffic data, e.g., recorded by unmanned aerial vehicles (drones). This type of data collection allows an uninfluenced and objective view of a scene [8] and thus enables capturing the traffic dynamics that an ADS-equipped vehicle is exposed to in real-world road traffic [23]. Publications dealing with an urban ODD mostly focus on scenario extraction based on other data sources, e.g., accident data or data of field tests with measurement vehicles, and particularly analyze vehicle-to-vehicle interaction scenarios (e.g., [29], [35], and [36]). Regarding valuable work in terms of scenario extraction based on naturalistic road traffic data, King et al. [37] propose an approach for deriving logical vehicle-to-vehicle interaction scenarios for an unsignalized intersection. Finally, Ries et al. [38] introduce a raw-data based clustering method for grouping real driving sequences into semantically similar sequences. The proposed method is the only one we are aware of that leverages clustering to extract urban traffic scenarios with different road user types and numbers, including potentially differing sequence lengths. While this leads to fulfillment of several of our targeted must and should requirements (cf. Req. M1, M2, and S2), others are not (fully) addressed. In detail, the authors state that the method has to be adapted for an inter-traffic-space clustering and describe possible solutions to this challenge (cf. Req. M3).
Furthermore, we are targeting a feature representation that is compatible with supervised machine learning approaches (cf. Req. M4) and include a slicing of the time series data, since one ego-vehicle can encounter multiple concrete scenarios during its lifetime (cf. Req. M5) [37]. Furthermore, instead of a sequential and pairwise similarity estimation between trajectories belonging to a concrete scenario, we want to compare the dynamics of multiple road users simultaneously (cf. Req. S1). Summarizing, to the best of our knowledge, there is no commonly accepted method for extracting relevant multimodal urban traffic scenarios for testing ADS that meets all the requirements specified.

## III Scenario Extraction Method

The UnScenE method for extracting multimodal urban traffic scenarios, developed with respect to the specified requirements (cf. Sec. II), is shown in Fig. 1. It follows the generic clustering procedure according to Xu and Wunsch [20] and contains novel instantiations within the different steps to solve the task at hand. While the amount of data decreases as it passes through the different steps, the knowledge of the inherent patterns regarding the reality of interest (i.e., relevant scenarios concerning specified urban traffic spaces) increases. Since, in its first stage of development, the method must be able to process naturalistic road traffic data in particular, the exemplary explanations of its application refer to this type of data source.

### III-A Spatiotemporal Filtering

Naturalistic road traffic data contain parts that are irrelevant for the subsequent scenario extraction (e.g., vehicles driving straight through the observation area without other RUs present).
Given a non-empty set of input patterns in the form of multivariate time series data $\textbf{X}=\{\textbf{x}_{1},...,\textbf{x}_{j},...,\textbf{x}_{N}\}$, where $\textbf{x}_{j}=\{x_{j1},x_{j2},...,x_{jd}\} \in \mathbb{R}^{d}$, with a $d$-dimensional feature space, the goal of this process step is to determine the relevant subset of samples $j$ out of $N$, where $j<N$. Hence, this process step aims at reducing noise in the data by removing the irrelevant data proportions. Within the proposed method, this is achieved by a search algorithm based on the post-encroachment time (PET) [39] of all possible combinations of an ego-vehicle (ego) trajectory and those of encountered road users (challengers) while passing through the observation area (ego-lifetime). As an ego can encounter multiple concrete scenarios within its lifetime (cf. Req. M5), a slicing of the corresponding trajectories additionally has to be accounted for in such cases. In detail, this step is performed by a two-layered procedure for each car included in the dataset, as our goal is to extract relevant test scenarios for ADS-equipped vehicles. In the first layer, a distance matrix is generated for each combination of ego and challenger, containing the Euclidean distances of the involved trajectories for all common time steps within the observation area. By a user-adaptive parametrization of the threshold for the distance between trajectories, $d_{\text{traj}}$, it is possible to define how close the involved trajectories have to come to each other for an interaction between the involved road users to be conceivable. The value of $d_{\text{traj}}$ can be interpreted as the radius of a moving circle surrounding the ego, whose area must overlap the challenger trajectory. Within the second layer, the minimum value of the distance matrix values that satisfy this criterion is used to calculate the PET of the trajectories involved.
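A possible reading of this two-layered procedure is sketched below (our own simplified implementation, not the authors' code): layer one thresholds the ego-challenger distance matrix with $d_{\text{traj}}$, and layer two approximates the PET as the time gap with which ego and challenger pass the closest-approach point, comparing it against a threshold $t_{\text{PET}}$.

```python
# Simplified sketch of the two-layered relevance check; the d_traj and t_pet
# values are illustrative, and the PET is approximated as the time gap at the
# closest-approach point rather than at an exact conflict area.
import numpy as np

def is_relevant(ego_xy, chal_xy, t, d_traj=5.0, t_pet=3.0):
    """ego_xy, chal_xy: (T, 2) positions at common time stamps t of shape (T,)."""
    # Layer 1: Euclidean distance matrix over all pairs of common time steps.
    dist = np.linalg.norm(ego_xy[:, None, :] - chal_xy[None, :, :], axis=-1)
    close = dist <= d_traj
    if not close.any():
        return False  # trajectories never come close enough for an interaction
    # Layer 2: PET from the pair of frames that minimizes the distance.
    i, j = np.unravel_index(np.argmin(np.where(close, dist, np.inf)), dist.shape)
    pet = abs(t[i] - t[j])  # time gap between ego and challenger passages
    return bool(pet <= t_pet)

t = np.linspace(0.0, 10.0, 51)                                    # 0.2 s steps
ego = np.stack([3.0 * t, np.zeros_like(t)], axis=1)               # ego along x
chal = np.stack([np.full_like(t, 15.0), 14.0 - 2.0 * t], axis=1)  # crossing path
print(is_relevant(ego, chal, t))  # paths cross at (15, 0) with a 2 s gap
```

In this synthetic example the trajectories intersect spatially and the approximated PET of 2 s lies below the threshold, so the ego-challenger pair is kept.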
By setting a PET threshold $t_{\text{PET}}$, the temporal evolvement of the scenario is taken into account. If the two layers of the algorithm are passed through several times during the lifetime of an ego-vehicle, the resulting ego-challenger combinations can be used to slice the respective time series. It should be noted that, at the current state of implementation, this algorithm is accompanied by two main assumptions: First, an intersection area (rather than an intersection point) between trajectories involved in a scenario is considered sufficient to maintain it as potentially relevant. Second, the PET between an ego-vehicle and the nearest challenger is crucial for the following classification of the concrete scenario to be kept. In the case of an empty set after spatiotemporal filtering, the extraction method is aborted and, e.g., new data containing relevant scenarios must be recorded for a successful application. For further discussion, the entirety of the remaining dataset is referred to as intersecting trajectories.

### III-B Principal Feature Analysis (Feature Selection)

While the spatiotemporal filtering involves a reduction in the number of samples, i.e., reduces $N$, this process step aims at reducing the number of columns, i.e., reducing $d$. Thus, the task can be described as choosing $n$ distinguishing features out of $d$, where $n<d$. An elegant selection of salient features can greatly decrease the storage requirements, simplify the subsequent design process, and facilitate the understanding of the data [20], [21]. With respect to the problem at hand, an approach has to be chosen that is able to determine the importance of the features of the intersecting trajectories in order to subsequently select a meaningful subset of them.
While there are various methods to reduce the dimensionality of a feature set, e.g., principal component analysis (PCA), these approaches result in a lower-dimensional representation whose features are no longer physically interpretable [20], [22]. While this is unproblematic for other tasks, such approaches are not effective in the context of subsequent manual feature extraction, where physical interpretability is of great importance. Hence, the proposed method deploys feature selection by principal feature analysis (PFA) [22], which makes it possible to exploit the structure of the principal components of a feature set to find a subset of the original feature vector. The method includes five steps, with the first step calculating the covariance matrix of the standardized intersecting trajectories dataset $\textbf{X}_{\text{std}}$. This is followed by the calculation of the principal components and the eigenvalues of the covariance matrix $\textbf{C}$. By choosing the explained variance ratio $var_{\text{PFA}} \in [0,1]$, $s$ columns of the matrix $\textbf{A}$, representing the orthonormal eigenvectors of $\textbf{C}$, are kept, constructing the subspace matrix $\textbf{A}_{s}$. The parametrization of $var_{\text{PFA}}$ decides how much of the variability of the data is to be retained. In the third step, the rows $\textbf{v}_{i} \in \mathbb{R}^{s}$ of $\textbf{A}_{s}$ are clustered using the $k$-means algorithm. Since each vector $\textbf{v}_{i}$ represents the projection of the $i$-th feature of $\textbf{X}_{\text{std}}$ in the lower-dimensional space, the $s$ elements of $\textbf{v}_{i}$ correspond to the weights of the $i$-th feature on each axis of the subspace $s$. As the amount of mutual information increases with the similarity of the absolute values of these vectors, the clusters derived by $k$-means can be used to choose one feature from each subset of highly correlated features [22].
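The steps described above might be sketched as follows (our own reading of PFA; the variable names and the illustrative parametrization $var_{\text{PFA}}=0.8$ are assumptions, not the paper's values):

```python
# Sketch of principal feature analysis: eigendecompose the covariance of the
# standardized data, keep s eigenvectors by explained variance, cluster the
# absolute rows of A_s with k-means (k slightly above s), and keep the feature
# closest to each cluster center.
import numpy as np
from sklearn.cluster import KMeans

def pfa(X, var_pfa=0.8, extra_clusters=1):
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)
    C = np.cov(X_std, rowvar=False)
    eigvals, A = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]             # sort by descending eigenvalue
    eigvals, A = eigvals[order], A[:, order]
    ratio = np.cumsum(eigvals) / eigvals.sum()
    s = int(np.searchsorted(ratio, var_pfa)) + 1  # retain var_pfa of the variance
    V = np.abs(A[:, :s])                          # rows v_i in R^s, absolute values
    km = KMeans(n_clusters=s + extra_clusters, n_init=10, random_state=0).fit(V)
    keep = []                                     # one original feature per cluster
    for c in range(s + extra_clusters):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(V[idx] - km.cluster_centers_[c], axis=1)
        keep.append(idx[np.argmin(d)])
    return sorted(keep)

X = np.random.default_rng(0).normal(size=(200, 6))
X[:, 3] = X[:, 0] + 0.01 * X[:, 3]   # feature 3 nearly duplicates feature 0
print(pfa(X))
```

In this toy run the two nearly duplicated features end up in the same cluster, so only one of them survives the selection, while the features are still the original, physically interpretable columns.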
According to [22], these features represent each group optimally by means of high spread in the lower dimension, reconstruction, and insensitivity to noise. It is of note that the number of clusters should be chosen slightly higher than $s$. Finally, using this method it is possible to systematically reduce the intersecting trajectories dataset regarding its dimensionality. This paves the way for a well-founded extraction of features for the construction of the scenario grids, as described in the following process step.

### III-C Feature Extraction

With the spatiotemporally filtered and dimensionally reduced intersecting trajectories dataset available, the following process step consists of a feature extraction suitable for the subsequent clustering of multimodal urban traffic scenarios. Ideally, such a feature representation should be of use in distinguishing patterns belonging to different clusters, immune to noise, and easy to obtain and interpret [20]. Considering these generic requirements as well as the task-specific requirements (cf. Sec. II), we propose a two-step process to finally construct a matrix feasible for serving as input for the subsequent process step of clustering. We utilize the principle of a grid-based representation, within which a certain spatial area around the ego is discretized and defined as potentially relevant in terms of the ego behavior. The basic principle has already been proposed for use within the robotics domain [40] and in various modules of an ADS, such as decision making or motion planning [41]. Furthermore, Gruner et al. [42] propose the use of such grid-based representations for scenario classification based on object-list data and confirm the usability of this type of representation for training artificial neural networks. In contrast to [42], we propose the construction of such a discrete, multi-channel grid structure per scenario and not per scene.
The reasons for this are, on the one hand, the offline character of the scenario extraction use case and the knowledge of the evolvement of an entire scenario based on naturalistic road traffic data. On the other hand, our approach aims at maximizing the degree of automation with respect to scenario labeling through unsupervised machine learning approaches, while [42] implements a rule-based approach for a semi-automated scene labeling. In addition, the previous spatiotemporal filtering already accounts for the temporal evolvement of a scenario, and scenes are assigned to a respective concrete scenario even if the label of the corresponding scenario is still unknown. Finally, our investigations showed that the scenario-level representation requires significantly less computational effort.

#### III-C1 Key Frame Calculation

This step comprises the determination of the point in time or frame, respectively, of the corresponding concrete scenario (key frame), which is used for the subsequent construction of the different grid channels defining a scenario tensor. On the one hand, the determination of this key frame should be applicable as generically as possible to all relevant ego-challenger combinations identified. On the other hand, the snapshot of the scenario created on the basis of the key frame should capture the distinguishing characteristics of the specific scenario category as accurately as possible. Our investigations have shown that a computation based on the maximum of the yaw rate of the ego $\dot{\varphi}_{\text{e,max}}$ or challenger $\dot{\varphi}_{\text{c,max}}$ within the specific concrete scenario entails the best compromise between robustness and computational effort. In detail, first the maximum yaw rates of the RUs involved in a concrete scenario are calculated for all ego-challenger combinations within the intersecting scenarios. Then, the maximum of the respective yaw rate value set is calculated.
In the case of $\dot{\varphi}_{\text{e,max}}\geq\dot{\varphi}_{\text{c,max}}$, the frame associated with $\dot{\varphi}_{\text{e,max}}$ is used to construct the scenario tensor. Since the construction of the scenario tensor is always done with respect to the ego-vehicle state, in the case of $\dot{\varphi}_{\text{e,max}}<\dot{\varphi}_{\text{c,max}}$, a shift of time to the associated challenger frame has to take place. In Fig. 2, the resulting key frames are implicitly shown by the exemplary occupancy grid channels for two concrete scenarios.

Figure 2: Exemplary concrete scenarios with respective occupancy grid channel at Bendplatz traffic space (left: ego-to-vehicle, right: ego-to-pedestrian).

#### III-C2 Scenario Tensor Construction

The key frame for each concrete scenario defines which point in time or frame, respectively, is to be used to construct the corresponding scenario tensor $\Phi=\{\textbf{G}_{1},...,\textbf{G}_{j},...,\textbf{G}_{l}\}$, where $\textbf{G}_{i}$ is the $i$-th grid channel matrix of size $(a_{\text{gr}}\cdot r_{\text{gr,1}})\times(a_{\text{gr}}\cdot r_{\text{gr,2}})$. All feature values within the $l$ grid channels are calculated with respect to the ego coordinate system at the key frame. Thus, the scenario tensor consists of a discrete, multi-channel grid structure, representing the scenario's spatiotemporal characteristics from a bird's-eye view. A feasible number of grid channels $l_{\text{gr}}$ is determined by the result of the previous PFA, with $l_{\text{gr}}$ less than or equal to the dimension of the reduced intersecting trajectories dataset. Both the region of interest $a_{\text{gr}}$ of the rectangular grids and the resolution vector $\textbf{r}_{\text{gr}}=(r_{\text{gr,1}},r_{\text{gr,2}})$, containing the longitudinal and lateral grid resolution, respectively, can be adapted to the specific application.
While $a_{\text{gr}}$ determines how far the ego looks into space and time, $\textbf{r}_{\text{gr}}$ determines how finely the grid resolves the spatiotemporal evolvement of the respective scenario. In the lower part of Fig. 2, the occupancy grid $\textbf{G}_{1}$, containing both the trajectories of the ego and the nearest challenger as well as the grid cells occupied by other surrounding RUs for the corresponding key frame, is visualized for an exemplary ego-to-vehicle and ego-to-pedestrian scenario. In agreement with [42], this scenario representation is basically applicable to different ODDs and is adaptable by means of the region of interest and resolution. Furthermore, other grid forms are conceivable. It should be noted that this type of scenario representation addresses the fulfillment of essential requirements stated above (e.g., Req. M1, M2, M3, M4, and S1). To obtain a representation suitable for clustering, each scenario tensor is flattened column-wise into a scenario grid matrix of size $(a_{\text{gr}}\cdot r_{\text{gr,1}})\times(a_{\text{gr}}\cdot r_{\text{gr,2}}\cdot l)$. After standardization as well as dimensionality reduction by applying PCA, the entirety of the resulting scenario grids can finally be forwarded to the following process step as the cluster input matrix $\textbf{M}_{\text{c}}$.

### III-D Clustering

The following process step of the UnScenE method consists of the actual application of a clustering procedure. The structure of the cluster input matrix $\textbf{M}_{\text{c}}$ basically allows the application of any clustering algorithm based on static data. Since the scenario extraction method is in general applicable to different data sources and traffic spaces, the user can and must select the most appropriate clustering approach for the dataset at hand. The evaluation in the scope of this paper shows the most promising clustering results using hierarchical agglomerative clustering (cf. Sec. IV).
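The chain from scenario tensors to cluster labels can be sketched as follows (a minimal end-to-end illustration with synthetic placeholder tensors; the variance ratio, linkage, and cluster count are our assumptions, not the paper's parametrization):

```python
# Minimal sketch: flatten scenario tensors into scenario grids, standardize,
# reduce with PCA to obtain the cluster input matrix M_c, then apply HAC.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
tensors = rng.normal(size=(40, 3, 16, 16))  # placeholder tensors (l = 3 channels)
tensors[:20] += 2.0                          # two synthetic scenario types

grids = tensors.reshape(len(tensors), -1)    # flatten each tensor into a row
M_std = StandardScaler().fit_transform(grids)
M_c = PCA(n_components=0.95).fit_transform(M_std)  # keep 95% of the variance

labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(M_c)
print(labels)
```

With such clearly separated synthetic groups, Ward-linkage HAC recovers the two scenario types; on real scenario grids the number of clusters would have to be chosen via the validation step described below.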
This is in accordance with the literature, in which HAC is described as suitable for use cases with many clusters and many samples [21]. Furthermore, this is in line with other publications in the context of real-data based scenario extraction, both based on similarly structured data sources [33] as well as on other types of data sources [34].

### III-E Cluster Validation and Result Interpretation

Clustering is a subjective process by nature due to the absence of a ground truth [20], [37], since the main goal of the clustering and the subsequent result interpretation itself is to find these previously unknown labels as well as possible within the given data. Clustering results heavily depend on the choice of the clustering approach and, even for the same algorithm, on the selection of related parameters [20]. Furthermore, clustering results are a matter of view, and the definition of similarity highly depends on the problem. In the literature, three categories of testing criteria, namely external, internal, and relative indices, are defined to estimate the quality of clustering results [20]. In accordance with present work in the field of unsupervised scenario extraction (e.g., [33], [34], and [38]), one branch of our evaluation approach can be assigned to the external indices category, where external information is used as a standard to validate the clustering results. For this purpose, we use the map information available within the datasets in the form of images of the corresponding traffic spaces. In detail, we compare the trajectories involved in different concrete scenarios belonging to a respective cluster by visual validation. In addition, we compare the clustering results for the Bendplatz traffic space with the results of a rule-based baseline approach, since for the latter the elaborate rule-based implementation was intentionally accepted to get a better reference for the presented extraction method.
Finally, we compare different clustering structures in order to obtain a reference for deciding which one may best reveal the characteristics of the data. Since cluster analysis is not a one-shot process [20], the method includes feedback loops depending on the validation results for trials and repetitions with an adjusted parametrization at different process steps. If the validation criteria are met, the extracted relevant scenarios can be used for subsequent applications, e.g., scenario-based testing of an ADS.

## IV Evaluation

In this section, we evaluate the UnScenE method for extracting relevant urban traffic scenarios using the inD dataset [23] as well as the Silicon Valley Intersections dataset (sv dataset) [24]. Some process steps are evaluated for the entire datasets to get an impression of the generic character of the method. Other process steps are demonstrated for exemplary traffic spaces in order to illustrate specific advantages and drawbacks of the method in depth.

### IV-A Datasets and Parametrization

There are various reasons for choosing the inD dataset for the main part of the evaluation of the scenario extraction method. The main advantages of the inD dataset compared to other similar datasets are its size, representativeness, and accuracy [23]. The dataset consists of more than 11,500 naturalistic road user trajectories including cars, trucks, and buses as well as about 5,000 pedestrian and bicyclist trajectories recorded at four unsignalized intersections in Aachen, Germany. In particular, the high proportion of vulnerable road users (VRU) makes this dataset an interesting challenge for the method (cf. Tab. I), as it can be assumed that it is especially difficult to detect recurring patterns within the less rule-based behavior of VRUs. In addition, we use the sv dataset [24] to evaluate individual process steps. This dataset includes naturalistic road traffic data for seven traffic spaces in the Silicon Valley area.
Due to the diversity of the traffic spaces as well as changed boundary conditions (especially layers 1–4 of the six-layer model) compared to the inD dataset, more comprehensive conclusions regarding the feasibility of the extraction method can be drawn. As shown in Tab. I, the Bendplatz and Frankenburg traffic spaces of the inD dataset both have a high total number of trajectories and a high percentage of VRUs. This makes these two traffic spaces particularly interesting for in-depth studies, since a higher share of multimodal interaction can be assumed.

TABLE I: InD Dataset Overview

Traffic Space | Share in total rec. RU | VRU/Vehicles
---|---|---
Heckstraße | 0.09 | 0.07
Aseag | 0.17 | 0.12
Bendplatz | 0.28 | 0.5
Frankenburg | 0.46 | 1.6

For the subsequent studies, the following parametrization was performed: $t_{\text{PET}}=3\,\mathrm{s}$, $d_{\text{traj}}=1\,\mathrm{m}$, $var_{\text{PFA}}=0.95$, $a_{\text{gr}}=30\,\mathrm{m}$, $r_{\text{gr,1}}=r_{\text{gr,2}}=1\,\mathrm{px/m}$, and $var_{\text{PCA}}=0.99$. Even if the numerical values have been chosen on the basis of expert judgement and, where possible, literature references [42], [43], they should not be given too much weight, since the focus is on demonstrating the applicability of the method.

### IV-B Spatiotemporal Filtering and Feature Selection

While the application of the search algorithm for spatiotemporal filtering is trivial, interesting conclusions can already be drawn on the basis of the proportion of the remaining relevant intersecting trajectories dataset.
Since the datasets provide a classification regarding the respective RU types (cars, trucks, pedestrians, and bicyclists), high-level scenario categories relevant from the viewpoint of proving the safety of an ADS can be defined as ego-to-vehicle (e-to-v) scenarios, ego-to-pedestrian (e-to-p) scenarios, and ego-to-bicyclist (e-to-b) scenarios. For the example of the Bendplatz traffic space, it is found that on average about 4% of pedestrians, 14% of vehicles, and 15% of bicyclists remain within the dataset for the subsequent process steps. These numerical values may serve as an indication of the combination of external risk to which the various RU types are exposed and the internal, RU-type-specific risk tolerance. In addition, this gives a hint at the required amount of recordings, since even with the conservative parametrization performed, most of the original amount of data is classified as not relevant for the task at hand. Fig. 3 shows exemplary results of the principal feature analysis in terms of the cumulative explained variance $var_{\text{PFA}}$ over the number of features for different traffic spaces and RU types. Note that even though this is not a continuous function, the continuous representation of the graphs supports the understanding of the respective results. Fig. 3 shows the curves for the object type car for the traffic spaces with the minimum and maximum number of features required for the defined cumulative explained variance for the corresponding dataset. It is noticeable that the extrema of the sv dataset envelop those of the inD dataset. This corresponds to the intuitive expectations, since the traffic space sv_san_jose (lat.: 37.3627, long.: -121.8759) is located in direct proximity to a highway entrance and only vehicles are present.
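The underlying computation for curves like those in Fig. 3 — how many features are needed to reach a target cumulative explained variance — can be sketched as follows. The data is purely synthetic (six independent base features plus five near-redundant mixtures, standing in for the 11 trajectory features), so only the mechanism, not the numbers, carries over.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical trajectory dataset: 500 samples x 11 kinematic features,
# where 5 features are near-linear combinations of the 6 base features
base = rng.standard_normal((500, 6))
mix = rng.standard_normal((6, 5))
X = np.hstack([base, base @ mix + 0.01 * rng.standard_normal((500, 5))])

Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)

# cumulative explained variance over the number of components
cum_var = np.cumsum(s ** 2) / np.sum(s ** 2)
n_features = int(np.searchsorted(cum_var, 0.95) + 1)
print(n_features)  # well below the 11 raw features
```

Because the redundant features add almost no independent variance, the 95% threshold is reached with roughly half of the original feature count, mirroring the reduction reported for the inD dataset.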
It is also plausible that the traffic space sv_santa_clara (lat.: 37.3252, long.: -121.9490) requires more features for capturing the same cumulative explained variance, since it includes a complex parking lot with two 4-way stops and six normal stop entries.

Figure 3: Cumulative explained variance over number of selected features for exemplary traffic spaces and road user types.

While pedestrians in general need more features for the same value of cumulative explained variance, the analysis of the inD dataset shows that the influence of the traffic space is marginal for this RU type, which is why only one curve is shown here. On average, the 11 features considered (xCenter, yCenter, heading, xVelocity, yVelocity, xAcceleration, yAcceleration, lonVelocity, latVelocity, lonAcceleration, latAcceleration) [23] can be reduced to about half, with pedestrians defining the lower threshold. While this seems intuitively logical for the inD dataset, since some features differ only by the reference coordinate system, this process step can be an even more valuable tool for objective feature selection for differently structured datasets.

### IV-C Feature Extraction

The evaluation of the following process steps is conducted exemplarily for the Bendplatz traffic space of the inD dataset (cf. Fig. 2). The PFA results in seven features for $var_{\text{PFA}}=0.95$, with the pedestrian class forming the feature superset of all RU classes. On this basis, the features xCenter, yCenter, heading, xVelocity, yVelocity, xAcceleration, and yAcceleration are used for the construction of the scenario tensor. Since the information of the features xCenter, yCenter, and heading can be combined in the occupancy grid channel, this results in a total of five grid channels consisting of the occupancy grid and velocity and acceleration grids in both spatial directions with regard to the ego state at the respective key frame. While Fig.
2 shows the deviation of the feature representation for the case of semantically dissimilar scenarios, Fig. 4 shows the similarity of the feature representation for semantically similar scenarios using the example of the x-velocity grids for two e-to-v scenarios (same scale). The underlying scenarios were chosen randomly from the group of left-turning vehicles out of the cluster visualized in Fig. 6.

Figure 4: x-velocity grids for semantically similar ego-to-vehicle scenarios randomly chosen from one cluster at Bendplatz (cf. Fig. 6).

### IV-D Clustering

The reasons for choosing HAC as the clustering method are described in Sec. III. HAC follows a bottom-up approach, starting with $N$ clusters, each of which includes exactly one sample. A series of merge operations follows based on a linkage criterion until a threshold is reached or all samples are forced into the same cluster [20], [33]. The results of HAC can be visualized by dendrograms, which are characterized by an inverted tree-like structure [43]. While the root node of a dendrogram represents the entire dataset, each leaf can be regarded as one sample. The intermediate nodes therefore describe the degree of similarity between samples, and the height of each branch represents the distance between a pair of samples or clusters, or a sample and a respective cluster [20].

Figure 5: Dendrogram depicting the clustering structure of ego-to-vehicle scenarios at Bendplatz (color-coded for a distance threshold value of 180).

Fig. 5 shows the dendrogram for the e-to-v scenarios at Bendplatz, created based on all available recordings. Our experiments have shown the best results for this scenario category with the ward linkage criterion. On the one hand, this shows that the different process steps of the method are consistent up to this point and that the developed feature representation is compatible with clustering methods based on static data.
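As a minimal illustration of this clustering step, the sketch below runs HAC with the ward linkage on synthetic data standing in for $\textbf{{M}}_{\text{c}}$, using SciPy's implementation rather than the paper's tooling; cutting the resulting dendrogram at a distance threshold yields the flat clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# hypothetical cluster input matrix: three well-separated scenario groups
M_c = np.vstack(
    [rng.normal(loc=c, scale=0.1, size=(10, 4)) for c in (0.0, 5.0, 10.0)]
)

# bottom-up HAC with ward linkage; Z encodes the dendrogram merge operations
Z = linkage(M_c, method="ward")

# cutting the tree at a distance threshold yields the flat cluster labels
labels = fcluster(Z, t=2.0, criterion="distance")
print(len(set(labels)))  # the 3 groups are recovered
```

Raising the threshold `t` corresponds to cutting the dendrogram at a higher point, merging groups into fewer, coarser clusters.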
However, clustering algorithms can always produce a partitioning given a dataset, regardless of whether or not there exists a particular structure in the data [20]. Thus, it is only the subsequent cluster validation and result interpretation that provides the actual gain in knowledge and therefore confidence in the clustering results.

### IV-E Cluster Validation and Result Interpretation

Figure 6: Semantically similar ego-to-vehicle scenarios.

Our approach to validate the cluster results involves three branches (cf. Sec. II). The first branch consists of a comparison of different clustering methods and their respective parametrizations. Thereby, we present the parametrizations that produced the best results in the scope of this paper. The second branch covers a visual validation approach, as described by Fig. 6 and Fig. 7. In detail, this approach entails the comparison of RU trajectories in different concrete scenarios of one specific cluster, randomly chosen for a defined distance threshold. The trajectories shown in Fig. 6 belong to all concrete e-to-v scenarios within the green, red, and purple clusters on the left side of the dendrogram, all of which combine into a single cluster as the distance threshold value increases up to 240. This result is promising since only semantically similar scenarios are included within the cluster, all of which can be assigned to a left-turning ego-vehicle in the presence of oncoming traffic (cf. Fig. 6). In addition, this example illustrates the direction independence of the feature representation, as semantically similar scenarios are clustered regardless of their specific position within the traffic space.

Figure 7: Semantically similar ego-to-pedestrian scenarios.
While 369 concrete e-to-v scenarios were available for the clustering at the Bendplatz traffic space, a significantly lower number of 33 e-to-p scenarios remained as cluster input due to their lower probability of occurrence. In analogy to the procedure for the e-to-v scenarios, Fig. 7 shows such e-to-p scenarios, all of which originate from a specific cluster. The impression that the extraction method is able to cluster semantically similar scenarios is strengthened. This impression also applies to the case of multimodal interaction in Fig. 7, including less rule-based pedestrian behavior. To get a broader impression of the performance of the scenario extraction method, the third branch of the validation procedure includes the attempt to make a statement about the accuracy of the overall clustering. For this purpose, we compare the results of the clustering process with respect to the e-to-v scenarios for the Bendplatz traffic space with a rule-based baseline approach particularly designed for this traffic space [19]. The methodology for determining overall accuracy is based on [43] and has been extended for this use case. In detail, this involves assigning to each sample one of the nine associated scenario labels originating from the rule-based approach [19]. Subsequently, the most frequently occurring labels per cluster are summed up over all clusters and divided by the total number of samples. Fig. 8 shows the results of this procedure for a varying distance threshold. As the distance threshold increases, the number of clusters decreases, which is accompanied by a decreasing accuracy. With respect to the dendrogram, increasing the distance threshold corresponds to cutting the tree-like cluster structure at a higher point. As can be seen from Fig. 8, there is a large drop in overall accuracy from 0.75 to 0.56 in the range of the distance threshold between 180 and 190.
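The overall accuracy procedure just described — summing the most frequent baseline label per cluster and dividing by the total number of samples — is a purity-style measure and can be sketched as follows (a toy illustration; the cluster ids and label names are made up):

```python
from collections import Counter

def overall_accuracy(cluster_ids, baseline_labels):
    """Purity-style accuracy: sum of the most frequent baseline label
    per cluster, divided by the total number of samples."""
    by_cluster = {}
    for c, lab in zip(cluster_ids, baseline_labels):
        by_cluster.setdefault(c, []).append(lab)
    hits = sum(Counter(labs).most_common(1)[0][1] for labs in by_cluster.values())
    return hits / len(cluster_ids)

# toy example: one pure cluster and one mixed cluster
acc = overall_accuracy([1, 1, 1, 2, 2, 2], ["a", "a", "a", "b", "b", "a"])
print(acc)  # (3 + 2) / 6 ≈ 0.83
```

Note that, as stated above, this measure inherits any errors in the rule-based labels, so it bounds agreement with the baseline rather than true accuracy.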
This accuracy drop corresponds to a transition from 21 to 16 clusters. With a distance threshold of 180 and a cluster number of 21, it can be observed that many clusters have a very high accuracy, while a few larger clusters show low accuracy. In detail, 18 of the 21 clusters have an accuracy above 0.9, while one cluster with an accuracy of 0.32 is the lower outlier. Finding the reasons for this result is the subject of future work.

Figure 8: Overall accuracy of ego-to-vehicle scenario clustering compared to the rule-based baseline approach for the Bendplatz traffic space.

It should be noted that the labels of the rule-based approach used for determining the accuracy do not represent the ground truth either. Thus, false positive as well as false negative ratings of the accuracy can occur. Nevertheless, this approach offers a building block for the determination of a sufficient number of scenario clusters with respect to a reasonable balance between scenario coverage and effort during validation and testing of an ADS.

## V Conclusion and Outlook

This paper proposes a method for extracting multimodal urban traffic scenarios from naturalistic road traffic data in an unsupervised manner. The method comprises five process steps, namely spatiotemporal filtering, principal feature analysis, feature extraction, clustering, and cluster validation. The results of the principal feature analysis show a dependence of the cumulative explained variance within the data on both the traffic space and the road user type. In addition, the required features for the subsequent steps can be reduced by a factor of about two within the exemplary evaluation. For feature extraction, a discrete, multi-channel grid structure for scenario modeling is proposed, resulting in a scenario tensor with various grids defined by the previously selected features. This feature representation particularly addresses the requirements M3, M4, M5, S1, S2, and C1 (cf. Sec. II).
Based on the developed feature representation, it is possible to apply clustering methods which rely on static data. To evaluate the method, hierarchical agglomerative clustering is applied to an urban traffic space and the corresponding results are discussed. Thereby, both the results of visual validation for selected multimodal urban traffic scenarios and the comparison with a rule-based baseline approach are promising. These results confirm in particular the fulfillment of the requirements M1 and M2 (cf. Sec. II). Based on the results, future research should pursue a broader evaluation of the method by investigating more traffic spaces as well as more clustering approaches. Furthermore, investigating the few large outliers in the cluster-individual accuracy represents a relevant aspect. In addition, addressing requirement W1 by incorporating other data sources is of interest. Moreover, the incorporation of the method into the scenario-based simulation platform presented in [19] represents a future use case, with the extracted scenarios representing one main input channel for the adaptive replay-to-sim approach (ARTS), in addition to an agent-based simulation and the ADS-under-test. Finally, the method will be integrated into a semi-supervised machine learning pipeline to achieve the medium-term goal of a robust and generic scenario classifier. The presented method contributes to the quantitative real-data based extraction of relevant traffic scenarios. This method can be seen as a building block toward a systematic, data-driven construction of a relevant scenario database with sufficient coverage in a fully automated manner for validating the safe behavior of ADS.

## ACKNOWLEDGMENT

We want to thank Stefan Berger and Ulrich Eberle (Opel Automobile GmbH, Stellantis NV) for the productive discussions and the feedback on our approaches as well as for the peer review of this publication prior to its submission.
## References

* [1] Society of Automotive Engineers, _Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems_ , Std. J3016. * [2] M. Wood et al., “Safety first for automated driving.” [Online]. Available: https://www.daimler.com/dokumente/innovation/sonstiges/safety-first-for-automated-driving.pdf * [3] International Organization for Standardization, _Road Vehicles - Safety of the Intended Functionality_ , Std. ISO/PAS 21448. * [4] S. Hallerbach, Y. Xia, U. Eberle, and F. Koester, “Simulation-based identification of critical scenarios for cooperative and automated vehicles,” _SAE International Journal of Connected and Automated Vehicles_ , vol. 1, no. 2018-01-1066, pp. 93–106, 2018. * [5] W. Wachenfeld and H. Winner, “The release of autonomous vehicles,” in _Autonomous Driving: Technical, Legal and Social Aspects_ , M. Maurer, J. C. Gerdes, B. Lenz, and H. Winner, Eds. Springer, 2016, pp. 425–449. * [6] Statistisches Bundesamt, “Verkehrsunfälle,” 2021, Accessed: Feb. 12, 2021. [Online]. Available: https://www.destatis.de/DE/Themen/Gesellschaft-Umwelt/Verkehrsunfaelle/_inhalt.html * [7] PEGASUS Project Consortium, “Pegasus method: An overview,” 2019, Accessed: Jan. 20, 2022. [Online]. Available: https://www.pegasusprojekt.de/files/tmpl/Pegasus-Abschlussveranstaltung/PEGASUS-Gesamtmethode.pdf * [8] S. Ulbrich, T. Menzel, A. Reschka, F. Schuldt, and M. Maurer, “Defining and substantiating the terms scene, situation, and scenario for automated driving,” in _2015 IEEE 18th International Conference on Intelligent Transportation Systems_ , 2015, pp. 982–988. * [9] Daimler AG, “First internationally valid system approval for conditionally automated driving,” 2021, Accessed: Jan. 17, 2022. [Online]. Available: https://www.daimler.com/innovation/product-innovation/autonomous-driving/system-approval-for-conditionally-automated-driving.html * [10] S. Riedmaier, T. Ponn, D. Ludwig, B. Schick, and F.
Diermeyer, “Survey on scenario-based safety assessment of automated vehicles,” _IEEE Access_ , vol. 8, pp. 87456–87477, 2020. * [11] C. Neurohr, L. Westhofen, M. Butz, M. H. Bollmann, U. Eberle, and R. Galbas, “Criticality analysis for the verification and validation of automated vehicles,” _IEEE Access_ , vol. 9, pp. 18016–18041, 2021. * [12] A. Pütz, A. Zlocki, J. Bock, and L. Eckstein, “System validation of highly automated vehicles with a database of relevant traffic scenarios,” in _12th ITS Eur. Congr._ , 2017. * [13] N. Weber, D. Frerichs, and U. Eberle, “A simulation-based, statistical approach for the derivation of concrete scenarios for the release of highly automated driving functions,” in _AmE 2020 - Automotive meets Electronics; 11th GMM-Symposium_ , 2020, pp. 1–6. * [14] M. Scholtes, L. Westhofen, L. R. Turner, K. Lotto, M. Schuldes, H. Weber, N. Wagener, C. Neurohr, M. H. Bollmann, F. Körtke, J. Hiller, M. Hoss, J. Bock, and L. Eckstein, “6-layer model for a structured description and categorization of urban traffic and environment,” _IEEE Access_ , vol. 9, pp. 59131–59147, 2021. * [15] T. Ponn, M. Breitfuß, X. Yu, and F. Diermeyer, “Identification of challenging highway-scenarios for the safety validation of automated vehicles based on real driving data,” in _2020 Fifteenth International Conference on Ecological Vehicles and Renewable Energies (EVER)_ , 2020, pp. 1–10. * [16] F. Kruber, J. Wurst, E. S. Morales, S. Chakraborty, and M. Botsch, “Unsupervised and supervised learning with the random forest algorithm for traffic scenario clustering and classification,” in _2019 IEEE Intelligent Vehicles Symposium (IV)_ , 2019, pp. 2463–2470. * [17] P. Elspas, Y. Klose, S. Isele, J. Bach, and E. Sax, “Time series segmentation for driving scenario detection with fully convolutional networks,” in _Proceedings of the 7th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS)_ , 2021, pp. 56–64.
* [18] National Aeronautics and Space Administration, _Standard for Models and Simulations_ , Std. 7009A. * [19] N. Weber, C. Thiem, and U. Konigorski, “A needle in a haystack – how to derive relevant scenarios for testing automated driving systems in urban areas,” in _30th Aachen Colloq. Sustainable Mobility 2021_ , 2021, pp. 1697–1734, arXiv:2109.03648. * [20] R. Xu and D. Wunsch, _Clustering_. Hoboken, New Jersey: Wiley & Sons, Inc, 2008. * [21] B. S. Everitt, S. Landau, M. Leese, and D. Stahl, _Cluster Analysis_ , 5th ed. Chichester, West Sussex, United Kingdom: John Wiley & Sons Ltd, 2011. * [22] Y. Lu, I. Cohen, X. S. Zhou, and Q. Tian, “Feature selection using principal feature analysis,” in _Proceedings of the 15th ACM International Conference on Multimedia_ , ser. MM ’07. New York, NY, USA: Association for Computing Machinery, 2007, pp. 301–304. * [23] J. Bock, R. Krajewski, T. Moers, S. Runde, L. Vater, and L. Eckstein, “The inD dataset: A drone dataset of naturalistic road user trajectories at German intersections,” in _2020 IEEE Intelligent Vehicles Symposium (IV)_ , 2020, pp. 1929–1934. * [24] levelXdata, “Silicon Valley intersections dataset,” 2020, Accessed: Jan. 28, 2022. [Online]. Available: https://levelxdata.com/ * [25] J. A. Khan, I. U. Rehman, Y. H. Khan, I. J. Khan, and S. Rashid, “Comparison of requirement prioritization techniques to find best prioritization technique,” _International Journal of Modern Education & Computer Science_ , vol. 7, no. 11, 2015. * [26] T. W. Liao, “Clustering of time series data—a survey,” _Pattern Recognition_ , vol. 38, no. 11, pp. 1857–1874, 2005. * [27] E. A. Maharaj, P. D’Urso, and J. Caiado, _Time Series Clustering and Classification_. Chapman and Hall/CRC, 2019. * [28] S. Rani and G. Sikka, “Recent techniques of clustering of time series data: a survey,” _International Journal of Computer Applications_ , vol. 52, no. 15, 2012.
* [29] W. Wang, A. Ramesh, J. Zhu, J. Li, and D. Zhao, “Clustering of driving encounter scenarios using connected vehicle trajectories,” _IEEE Transactions on Intelligent Vehicles_ , vol. 5, no. 3, pp. 485–496, 2020. * [30] E. de Gelder and J.-P. Paardekooper, “Assessment of automated driving systems using real-life scenarios,” in _2017 IEEE Intelligent Vehicles Symposium (IV)_ , 2017, pp. 589–594. * [31] A. Erdogan, B. Ugranli, E. Adali, A. Sentas, E. Mungan, E. Kaplan, and A. Leitner, “Real-world maneuver extraction for autonomous vehicle validation: A comparative study,” in _2019 IEEE Intelligent Vehicles Symposium (IV)_ , 2019, pp. 267–272. * [32] H. Watanabe, T. Maly, J. Wallner, T. Dirndorfer, M. Mai, and G. Prokop, “Methodology of scenario clustering for predictive safety functions,” in _9. Tagung Automatisiertes Fahren_ , 2019. * [33] J. Kerber, S. Wagner, K. Groh, D. Notz, T. Kühbeck, D. Watzenig, and A. Knoll, “Clustering of the scenario space for the assessment of automated driving,” in _2020 IEEE Intelligent Vehicles Symposium (IV)_ , 2020, pp. 578–583. * [34] F. Montanari, R. German, and A. Djanatliev, “Pattern recognition for driving scenario detection in real driving data,” in _2020 IEEE Intelligent Vehicles Symposium (IV)_ , 2020, pp. 590–597. * [35] L. Hartjen, R. Philipp, F. Schuldt, B. Friedrich, and F. Howar, “Classification of driving maneuvers in urban traffic for parametrization of test scenarios,” in _9. Tagung Automatisiertes Fahren_ , 2019. * [36] M. Barbier, C. Laugier, O. Simonin, and J. Ibañez-Guzmán, “Classification of drivers manoeuvre for road intersection crossing with synthetic and real data,” in _2017 IEEE Intelligent Vehicles Symposium (IV)_ , 2017, pp. 224–230. * [37] C. King, T. Braun, C. Braess, J. Langner, and E. Sax, “Capturing the variety of urban logical scenarios from bird-view trajectories,” in _VEHITS_ , 2021, pp. 471–480. * [38] L. Ries, P. Rigoll, T. Braun, T. Schulik, J. Daube, and E.
Sax, “Trajectory-based clustering of real-world urban driving sequences with multiple traffic objects,” in _2021 IEEE International Intelligent Transportation Systems Conference (ITSC)_ , 2021, pp. 1251–1258. * [39] P. J. Cooper, “Experience with traffic conflicts in Canada with emphasis on “post encroachment time” techniques,” in _International Calibration Study of Traffic Conflict Techniques_. Springer, 1984, pp. 75–96. * [40] A. Elfes, “Using occupancy grids for mobile robot perception and navigation,” _Computer_ , vol. 22, no. 6, pp. 46–57, 1989. * [41] J. Ziegler and C. Stiller, “Fast collision checking for intelligent vehicle motion planning,” in _2010 IEEE Intelligent Vehicles Symposium_ , 2010, pp. 518–522. * [42] R. Gruner, P. Henzler, G. Hinz, C. Eckstein, and A. Knoll, “Spatiotemporal representation of driving scenarios and classification using neural networks,” in _2017 IEEE Intelligent Vehicles Symposium (IV)_ , 2017, pp. 1782–1788. * [43] A. A. Patel, _Hands-On Unsupervised Learning Using Python_. O’Reilly Media, Inc., 2019.
# Transfer Learning in Conversational Analysis through Reusing Preprocessing Data as Supervisors

Joshua Yee Kim, University of Sydney, <EMAIL_ADDRESS>
Tongliang Liu, University of Sydney, <EMAIL_ADDRESS>
Kalina Yacef, University of Sydney, <EMAIL_ADDRESS>

###### Abstract

Conversational analysis systems are trained using noisy human labels and often require heavy preprocessing during multi-modal feature extraction. Using noisy labels in single-task learning increases the risk of over-fitting. Auxiliary tasks could improve the performance of the primary task learning during the same training – this approach sits at the intersection of transfer learning and multi-task learning (MTL). In this paper, we explore how the preprocessed data used for feature engineering can be re-used as auxiliary tasks, thereby promoting the productive use of data. Our main contributions are: (1) the identification of sixteen beneficial auxiliary tasks, (2) studying the method of distributing learning capacity between the primary and auxiliary tasks, and (3) studying the relative supervision hierarchy between the primary and auxiliary tasks. Extensive experiments on IEMOCAP and SEMAINE data validate the improvements over single-task approaches, and suggest that it may generalize across multiple primary tasks.

## 1 Introduction

The sharp increase in uses of video-conferencing creates both a need and an opportunity to better understand these conversations (Kim et al., 2019a). In post-event applications, analyzing conversations can give feedback to improve communication skills (Hoque et al., 2013; Naim et al., 2015). In real-time applications, such systems can be useful in legal trials, public speaking, e-health services, and more (Poria et al., 2019; Tanveer et al., 2015). Analyzing conversations requires both human expertise and a lot of time. However, to build automated analysis systems, analysts often require a training set annotated by humans (Poria et al., 2019).
The annotation process is costly, thereby limiting the amount of labeled data. Moreover, third-party annotations on emotions are often noisy. Deep networks coupled with limited noisy labeled data increase the chance of overfitting (James et al., 2013; Zhang et al., 2016). Could data be used more productively? From the perspective of feature engineering to analyze video-conferences, analysts often employ pre-built libraries (Baltrušaitis et al., 2016; Vokaturi, 2019) to extract multimodal features as inputs to training. This preprocessing phase is often computationally heavy, and the resulting features are only used as inputs. In this paper, we investigate how the preprocessed data can be re-used as auxiliary tasks which provide inductive bias through multiple sources of noisy supervision (Caruana, 1997; Lipton et al., 2015; Ghosn and Bengio, 1997), consequently promoting a more productive use of data. Specifically, our main contributions are (1) the identification of beneficial auxiliary tasks, (2) studying the method of distributing learning capacity between the primary and auxiliary tasks, and (3) studying the relative supervision hierarchy between the primary and auxiliary tasks. We demonstrate the value of our approach through predicting emotions on two publicly available datasets, IEMOCAP (Busso et al., 2008) and SEMAINE (McKeown et al., 2011).

## 2 Related Works and Hypotheses

Multi-task learning has a long history in machine learning (Caruana, 1997). In this paper, we focus on transfer learning within MTL, a less commonly discussed subfield within MTL (Mordan et al., 2018). We are concerned with the performance on one (primary) task – the sole motivation of adding auxiliary tasks is to improve the primary task performance.
In recent years, this approach has been gaining attention in computer vision (Yoo et al., 2018; Fariha, 2016; Yang et al., 2018; Mordan et al., 2018; Sadoughi and Busso, 2018), speech recognition (Krishna et al., 2018; Chen and Mak, 2015; Tao and Busso, 2020; Bell et al., 2016; Chen et al., 2014), and natural language processing (NLP) (Arora et al., 2019; Yousif et al., 2018; Zalmout and Habash, 2019; Yang et al., 2019; Du et al., 2017). A drawback of adding multiple tasks is the increased risk of negative transfer (Torrey and Shavlik, 2010; Lee et al., 2016, 2018; Liu et al., 2019; Simm et al., 2014), which leads to many design considerations. Three such considerations are identifying (a) which tasks are beneficial, (b) how much of the model parameters to share between the primary and auxiliary tasks, and (c) whether we should prioritize primary supervision by giving it a higher hierarchy than the auxiliary supervision. In contrast with previous MTL works, our approach (a) identifies sixteen beneficial auxiliary targets, (b) dedicates a primary-specific branch within the network, and (c) investigates the efficacy and generalization of prioritizing primary supervision across eight primary tasks. Since our input representation is fully text-based, we dive deeper into MTL model architecture designs in NLP. Søgaard and Goldberg (2016) found that lower-level tasks, like part-of-speech tagging, are better kept at the lower layers, enabling higher-level tasks like Combinatory Categorial Grammar tagging to use these lower-level representations. In our approach, the model hierarchy is not based on the difficulty of the tasks; more simply, we prioritize the primary task. Regarding identifying auxiliary supervisors in NLP, existing works have included tagging the input text (Zalmout and Habash, 2019; Yang et al., 2019; Søgaard and Goldberg, 2016).
Text classification with auxiliary supervisors has included research article classification (Du et al., 2017; Yousif et al., 2018) and tweet classification (Arora et al., 2019). Multimodal analysis of conversations has been gaining attention in deep learning research (Poria et al., 2019). Methods from the past three years have intelligently fused numeric vectors from the text, audio, and video modalities before feeding them to downstream layers. This approach is seen in MFN (Zadeh et al., 2018a), MARN (Zadeh et al., 2018b), CMN (Hazarika et al., 2018b), ICON (Hazarika et al., 2018a), DialogueRNN (Majumder et al., 2019), and M3ER (Mittal et al., 2020). Our approach is different in two ways. (1) Our audio and video information is encoded within text before feeding only the text as input. Having only text as input has the benefits of interpretability and the ability to present the conversational analysis on paper (Kim et al., 2019b), similar to how the linguistics community performs manual conversational analysis using the Jefferson transcription system (Jefferson, 2004), where the transcripts are marked up with symbols indicating how the speech was articulated. (2) Instead of using the audio and video information only as inputs, we demonstrate how to use multimodal information both as input and as auxiliary supervisors. Hypothesis H1: The introduced set of auxiliary supervision features improves primary task performance. We introduce and motivate the full set of sixteen auxiliary supervisions, all based on existing literature; these are grouped into four families, each with four auxiliary targets. The four families are (1) facial action units, (2) prosody, (3) historical labels, and (4) future labels. (1) Facial action units, from the Facial Action Coding System, identify universal facial expressions of emotions (Ekman, 1997).
Particularly, AU 05 (upper lid raiser), 17 (chin raiser), 20 (lip stretcher), and 25 (lips part) have been shown to be useful in detecting depression (Yang et al., 2016a; Kim et al., 2019b) and rapport-building (Kim et al., 2021). (2) Prosody, the tone of voice – happiness, sadness, anger, and fear – can project warmth and attitudes (Hall et al., 2009), and has been used as input in emotions detection (Garcia-Garcia et al., 2017). (3 and 4) Using features at different historical time-points is a common practice in statistical learning, especially in time-series modelling (Christ et al., 2018). Lastly, predicting future labels as auxiliary tasks can help in learning (Caruana et al., 1996; Cooper et al., 2005; Trinh et al., 2018; Zhu et al., 2020; Shen et al., 2020). We propose using historical and future (up to four talkturns ago or later) target labels as auxiliary targets. Figure 1: Reusing components (dotted lines) of the feature engineering process as auxiliary targets (in blue). The MONAH framework (Kim et al., 2021) is introduced in section 4.2. Given that we are extracting the actions and prosody families as inputs, we propose to explore whether they can be reused as supervisors (see Fig. 1). Our hypothesis H1 is that re-using them as auxiliary supervision improves primary task performance. This is related to using hints in the existing MTL literature, where the auxiliary tasks promote the learning of the feature (Cheng et al., 2015; Yu and Jiang, 2016). Hypothesis H2: When the primary branch is given maximum learning capacity, it would not be outperformed by models whose primary branch has less than the maximum learning capacity. Deeper models with higher learning capacity produce better results (Huang et al., 2019; Nakkiran et al., 2019; Menegola et al., 2017; Blumberg et al., 2018; Romero et al., 2014).
Also, since the auxiliary branch is shared with the primary supervision, the auxiliary capacity should be limited to improve primary task performance (Wu et al., 2020), because limiting the auxiliary capacity forces the branch to learn common knowledge (instead of auxiliary-specific knowledge) across the auxiliary tasks (Arpit et al., 2017). Therefore, given a fixed learning capacity budget, our hypothesis H2 implies that we should allocate the maximum learning capacity to the primary branch because we care only about the primary task performance. Hypothesis H3: Auxiliary supervision at the lower hierarchy yields better primary task performance as compared to flat-MTL. Having the auxiliary tasks at the same supervisory level as the primary task is inherently sub-optimal because we care only about the performance of the primary task (Mordan et al., 2018). The lower hierarchy learns basic structures that are easy to transfer, whilst the upper hierarchy learns more semantic information that is less transferable (Zeiler and Fergus, 2014). Therefore, we propose that the auxiliary supervision be at a lower hierarchy than the primary supervision.

## 3 Model Architecture

### 3.1 Flat-MTL Hierarchical Attention Model

We start with an introduction of the Hierarchical Attention Network (HAN) (Yang et al., 2016b). We chose HAN for its easy interpretability, as it only uses single-head attention layers. There are four parts to the HAN model: (1) text input, (2) word encoder, (3) talkturn encoder, and (4) the predictor. In our application, we perform our predictions at the talkturn level for both IEMOCAP and SEMAINE. For notation, let $s_{i}$ represent the i-th talkturn and $w_{it}$ represent the t-th word in the i-th talkturn. Each talkturn can contain up to $T$ words, and each input talkturn can contain up to $L$ past talkturns to give content context (see section 4.2).
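To make the notation concrete, here is a hypothetical preprocessing helper (our own sketch, not part of HAN or the paper's code) that pads each talkturn to $T$ word slots and each input window to $L$ talkturns:

```python
# Hypothetical sketch: fix every talkturn to T word slots and every input
# window to L talkturns, mirroring the notation s_i (talkturns) and w_it (words).
def pad_window(talkturns, T, L, pad_token="<pad>"):
    """talkturns: list of up to L talkturns (most recent last), each a list of words."""
    window = talkturns[-L:]                                # keep the most recent L talkturns
    padded = [(tt + [pad_token] * T)[:T] for tt in window]  # pad/truncate each to T words
    while len(padded) < L:                                 # left-pad missing history
        padded.insert(0, [pad_token] * T)
    return padded

window = pad_window([["hello", "there"], ["no"]], T=4, L=3)
```

Real implementations would map the padded tokens to indices for the embedding matrix $W_{e}$; this sketch only illustrates the $(L, T)$ shape of one input example.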
Given a talkturn of words, we first convert the words into vectors through an embedding matrix $W_{e}$ and the word selection one-hot vector $w_{it}$. The word encoder comprises bidirectional GRUs (Bahdanau et al., 2014) and a single-head attention layer to aggregate word embeddings into talkturn embeddings. Given the vectors $x_{it}$, the bidirectional GRU reads the words from left to right as well as from right to left (as indicated by the direction of the GRU arrows) and concatenates the two hidden states together to form $h_{it}$. We then aggregate the hidden states into one talkturn embedding through the attention mechanism. $u_{it}$ is the hidden state from feeding $h_{it}$ into a one-layer perceptron (with weights $W_{w}$ and biases $b_{w}$). The attention weight ($\alpha_{it}$) given to $u_{it}$ is the softmax-normalized weight of the similarity between itself ($u_{it}$) and $u_{w}$, which are all randomly initialized and learnt jointly. Figure 2: Forward pass of the Flat-MTL HAN architecture. Auxiliary tasks (yellow) are added at the same level as the primary task (orange). Figure 3: Forward pass of the HAN-ROCK architecture. There is a primary branch (orange) which auxiliary supervision (yellow) cannot influence. The fusion module (blue) aggregates the talkturn embeddings from all tasks into one.
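The attention aggregation described above can be sketched in NumPy. This is a hypothetical illustration over precomputed hidden states $h_{it}$ (the bidirectional GRU itself is omitted); all dimensions and weights below are made up:

```python
import numpy as np

def attend(h, W_w, b_w, u_w):
    """Aggregate hidden states h (T x d) into one embedding:
    u = relu(W_w h + b_w), alpha = softmax(u . u_w), s = sum_t alpha_t u_t."""
    u = np.maximum(h @ W_w.T + b_w, 0.0)   # one-layer perceptron with relu, T x d
    scores = u @ u_w                        # similarity with learnt context vector u_w
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha /= alpha.sum()                    # attention weights, sum to 1
    return alpha @ u, alpha                 # weighted sum = talkturn embedding

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))                 # 5 word hidden states, dimension 8
W_w, b_w, u_w = rng.normal(size=(8, 8)), rng.normal(size=8), rng.normal(size=8)
s, alpha = attend(h, W_w, b_w, u_w)
```

The same pattern (with its own weights) is reused by the talkturn encoder over talkturn embeddings $s_{i}$.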
$\displaystyle x_{it}=W_{e}w_{it},\ t\in[1,T].$
$\displaystyle\overrightarrow{h}_{it}=\overrightarrow{GRU}(x_{it}),\ t\in[1,T].$
$\displaystyle\overleftarrow{h}_{it}=\overleftarrow{GRU}(x_{it}),\ t\in[T,1].$
$\displaystyle h_{it}=(\overrightarrow{h}_{it},\overleftarrow{h}_{it}).$
$\displaystyle u_{it}=relu(W_{w}h_{it}+b_{w}).$
$\displaystyle\alpha_{it}=\frac{exp(u_{it}^{\top}u_{w})}{\Sigma_{t}exp(u_{it}^{\top}u_{w})}.$
$\displaystyle s_{i}=\Sigma_{t}\alpha_{it}u_{it}.$

With the current and past talkturn embeddings (content context, discussed in section 4.2), the talkturn encoder aggregates them into a single talkturn representation ($v$) in a similar fashion, as shown below.

$\displaystyle\overrightarrow{h}_{i}=\overrightarrow{GRU}(s_{i}),\ i\in[1,L].$
$\displaystyle\overleftarrow{h}_{i}=\overleftarrow{GRU}(s_{i}),\ i\in[L,1].$
$\displaystyle h_{i}=(\overrightarrow{h}_{i},\overleftarrow{h}_{i}).$
$\displaystyle u_{i}=relu(W_{s}h_{i}+b_{s}).$
$\displaystyle\alpha_{i}=\frac{exp(u_{i}^{\top}u_{s})}{\Sigma_{i}exp(u_{i}^{\top}u_{s})}.$
$\displaystyle v=\Sigma_{i}\alpha_{i}u_{i}.$

The simplest way of adding the sixteen auxiliary task predictors would be to append them to where the primary task predictor is, as illustrated in Fig. 2. That way, all predictors use the same representation $v$. We refer to this architecture as flat-MTL, but we are unable to test H2 and H3 using it. Therefore, we introduce HAN-ROCK next.

### 3.2 HAN-ROCK

We adapted the ROCK architecture (Mordan et al., 2018), which was built for Convolutional Neural Networks (LeCun et al., 1995) found in ResNet-SSD (He et al., 2016; Liu et al., 2016), to suit GRUs (Bahdanau et al., 2014) found in HAN (Yang et al., 2016b) (see Fig. 3). (Our implementation will be available on GitHub; please see the attached supplementary material during the review phase.) To study H3, we bring the auxiliary task predictors forward (see Fig.
3), so that the back-propagation from the primary supervision is able to temper the back-propagation from the auxiliary supervision, but not vice-versa. This also sets us up to study H2. Each of the auxiliary tasks has its own talkturn encoder but shares one word encoder in the auxiliary branch (to keep the network small). Subscript $a$ indicates whether the word encoder is for the primary or auxiliary branch:

$\displaystyle x_{it}=W_{e}w_{it},\ t\in[1,T]$
$\displaystyle\overrightarrow{h}_{ait}=\overrightarrow{GRU_{a}}(x_{it}),\ t\in[1,T],\ a\in\{pri,aux\}$
$\displaystyle\overleftarrow{h}_{ait}=\overleftarrow{GRU_{a}}(x_{it}),\ t\in[T,1],\ a\in\{pri,aux\}$
$\displaystyle h_{ait}=(\overrightarrow{h}_{ait},\overleftarrow{h}_{ait})$
$\displaystyle u_{ait}=relu(W_{aw}h_{ait}+b_{aw})$
$\displaystyle\alpha_{ait}=\frac{exp(u_{ait}^{\top}u_{aw})}{\Sigma_{t}exp(u_{ait}^{\top}u_{aw})}$
$\displaystyle s_{ai}=\Sigma_{t}\alpha_{ait}u_{ait}$

Figure 4: Example of a MONAH transcript.

Each task has its own talkturn encoder. Subscript $b$ indicates which of the seventeen tasks – the primary talkturn task or one of the sixteen auxiliary tasks – the talkturn encoder is dedicated to:

$\displaystyle\overrightarrow{h}_{abi}=\overrightarrow{GRU_{ab}}(s_{ai}),\ i\in[1,L],\ a\in\{pri,aux\},\ b\in\{pri,aux1,aux2,...,aux16\}$
$\displaystyle\overleftarrow{h}_{abi}=\overleftarrow{GRU_{ab}}(s_{ai}),\ i\in[L,1],\ a\in\{pri,aux\},\ b\in\{pri,aux1,aux2,...,aux16\}$
$\displaystyle h_{abi}=(\overrightarrow{h}_{abi},\overleftarrow{h}_{abi})$
$\displaystyle u_{abi}=relu(W_{b}h_{abi}+b_{b})$
$\displaystyle\alpha_{abi}=\frac{exp(u_{abi}^{\top}u_{b})}{\Sigma_{i}exp(u_{abi}^{\top}u_{b})}$
$\displaystyle v_{ab}=\Sigma_{i}\alpha_{abi}u_{abi}$

The seventeen talkturn embeddings ($v_{ab}$) go through a concatenation and then a single-head attention, aggregating talkturn embeddings across the seventeen tasks into one talkturn embedding for the primary task predictor.
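This concatenate-then-attend fusion can be sketched as follows. This is a hypothetical NumPy illustration with made-up dimensions; the context vector `u_c` is learnt in the real model but randomly initialized here:

```python
import numpy as np

def fuse(task_embs, u_c):
    """Hypothetical sketch of the fusion module: attend over the seventeen
    per-task talkturn embeddings and aggregate them into one vector."""
    V = np.stack(task_embs)                 # concatenation: 17 x d
    scores = V @ u_c                        # similarity of each embedding with u_c
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                    # one attention weight per task embedding
    return alpha @ V                        # overall primary talkturn vector v

rng = np.random.default_rng(1)
embs = [rng.normal(size=16) for _ in range(17)]  # primary + 16 auxiliary tasks
v = fuse(embs, u_c=rng.normal(size=16))
```

Because the weights are a convex combination, $v$ always lies within the span of the seventeen task embeddings; the attention weights indicate how much each auxiliary view contributes to the primary prediction.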
Subscript $c$ pertains to the fusion module.

$\displaystyle\text{concatenation: }v_{c}=(v_{ab}),\ a\in\{pri,aux\},\ b\in\{pri,aux1,aux2,...,aux16\}$
$\displaystyle\text{attention: }\alpha_{c}=\frac{exp(v_{c}^{\top}u_{c})}{\Sigma_{c}exp(v_{c}^{\top}u_{c})}$
$\displaystyle\text{overall primary talkturn vector: }v=\Sigma_{c}\alpha_{c}v_{c}$

## 4 Experiments

### 4.1 Data and Primary Tasks

We validate our approach using two datasets with a total of eight primary tasks: the IEMOCAP (Busso et al., 2008) and SEMAINE (McKeown et al., 2011) datasets. Both datasets are used in multimodal emotions detection research (Poria et al., 2019). We divided the datasets into train, development, and test sets in an approximate 60/20/20 ratio such that the sets do not share any speaker (Appendix A.1 details the splits). The target labels of the eight primary tasks are all at the talkturn level. The four primary tasks of IEMOCAP consist of four-class emotions classification (angry, happy, neutral, sad) and three regression problems – valence (1-negative, 5-positive), activation (1-calm, 5-excited), and dominance (1-weak, 5-strong). The four-class emotions classification target is common for IEMOCAP (Latif et al., 2020; Xia and Liu, 2015; Li et al., 2019; Hazarika et al., 2018b; Mittal et al., 2020). For SEMAINE, there are four regression problems – activation, intensity, power, and valence. We use two standard evaluation metrics: mean absolute error (MAE) and 4-class weighted mean classification accuracy, MA(4).

### 4.2 Input

Multimodal feature extraction is computed using the MONAH framework (Kim et al., 2021). This framework uses a variety of pre-trained models to extract nine multimodal features, associated with the prosody of the speech and the actions of the speaker, and weaves them into a multimodal text narrative. We refer the reader to Kim et al. (2021) for the details and efficacy of the MONAH framework.
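As a toy illustration of the weaving idea (our own hypothetical sketch, not the MONAH implementation), multimodal cue descriptors can be spliced into the transcript as ordinary words:

```python
def weave(speaker, text, cues):
    """Toy sketch: prepend multimodal cue adverbs (e.g. from prosody or
    action detectors) to a talkturn, in the spirit of a MONAH-style
    multimodal text narrative. Hypothetical helper, not the real pipeline."""
    adverbs = " and ".join(cues)
    prefix = f"The {speaker} {adverbs} said" if cues else f"The {speaker} said"
    return f'{prefix} "{text}".'

line = weave("woman", "no", ["sadly", "slowly"])
```

The output for this example reads like the narrative fragment discussed in section 4.3; the real framework selects and thresholds cues with pre-trained models rather than taking them as given strings.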
The benefit of the created narrative is that it describes what is said together with how it is said for each talkturn, giving richer nonverbal context to the talkturn (see Fig. 4 for an example). Being fully text-based means that the analysis product can be printed out on paper, without the need for speakers or monitors to replay the conversation on a computer. In addition to nonverbal context, we concatenated a variable number of preceding talkturns to the current talkturn as content context. Content context has been proven to be useful in CMN (Hazarika et al., 2018b), ICON (Hazarika et al., 2018a), and DialogueRNN (Majumder et al., 2019). The content-context size is tuned as a hyperparameter. The resulting multimodal text narrative, consisting of both nonverbal and content context, is used as the sole input to the model.

### 4.3 Auxiliary Targets

We first clarify the method of extraction for the auxiliary families. The OpenFace algorithm (Baltrušaitis et al., 2016) is used to extract the four continuous facial action units (AU) – AU 05, 17, 20, 25. The Vokaturi algorithm (Vokaturi, 2019) is used to extract the four continuous dimensions in the tone of voice – happiness, sadness, anger, and fear. As for historical and future features, we simply look up the target label for the past four talkturns and the future four talkturns. Any label that is not available (for example, the label four talkturns ago is not available for the third talkturn) is substituted with the next nearest non-missing label. All auxiliary targets that reuse the input features (actions and prosody) are converted into a percentile rank with the range [0,1] using the values from the train partition. This is a subtle but noteworthy transformation. When reusing an input as an auxiliary target, it would be trivial if the input could easily predict the target. For example, take the following MONAH transcript as input: “The woman sadly and slowly said no.”
It would be trivial to use a binary (quantized) auxiliary target of “was the tone sad?” because we would only be training the model to look for the word “sadly”. However, if the auxiliary target is a percentile rank (less quantized) of the sadness in tone, then the presence of the word “sadly” increases the predicted rank, but the model could still use the rest of the nonverbal cues (“slowly”) and what is being said (“no”) to predict the degree of sadness. That way, the representations learnt for the auxiliary tasks use more of the input. Percentile rank also has the convenient property of having the range [0,1]. We scaled the percentile ranks so that they all have the same range as the primary task (see appendix A.2 for transformation details). This ensures that if we assigned equal loss weights to all tasks, the contribution of every task is of the same order of magnitude (Gong et al., 2019; Hassani and Haley, 2019; Sener and Koltun, 2018).

Table 1: H1 Results. *: the model performance has a statistically significant difference with the baseline model. a: action, p: prosody, h: historical labels, f: future labels.

| Aux. Target | IEMOCAP Classif. MA(4) | IEMOCAP Val. MAE | IEMOCAP Act. MAE | IEMOCAP Dom. MAE | SEMAINE Act. MAE | SEMAINE Int. MAE | SEMAINE Pow. MAE | SEMAINE Val. MAE |
|---|---|---|---|---|---|---|---|---|
| None (Baseline) | 0.625 | 0.527 | 0.518 | 0.667 | 0.194 | 0.238 | 0.170 | 0.177 |
| ap | 0.715* | 0.538 | 0.507 | 0.600* | 0.184* | 0.218* | 0.167* | 0.178 |
| aphf | 0.706* | 0.497* | 0.504 | 0.587* | 0.187 | 0.231 | 0.165* | 0.169 |

Table 2: H2 Results. *: the model performance has a statistically significant difference with the baseline model (P = 256). ^: Assigning 1 GRU to the auxiliary task talkturn encoder yields a statistically significant difference with assigning 0 GRUs.

| Pri. GRU | IEMOCAP Classif. MA(4) | IEMOCAP Val. MAE | IEMOCAP Act. MAE | IEMOCAP Dom. MAE | SEMAINE Act. MAE | SEMAINE Int. MAE | SEMAINE Pow. MAE | SEMAINE Val. MAE |
|---|---|---|---|---|---|---|---|---|
| 256 (Baseline) | 0.715^ | 0.538 | 0.507 | 0.587^ | 0.187 | 0.231 | 0.165^ | 0.169^ |
| 192 | 0.736 | 0.509 | 0.518 | 0.604 | 0.187 | 0.228 | 0.165 | 0.184* |
| 128 | 0.711 | 0.537 | 0.512 | 0.597 | 0.189 | 0.216 | 0.176* | 0.196* |
| 64 | 0.687 | 0.540 | 0.507 | 0.593 | 0.191 | 0.234 | 0.167 | 0.192* |
| 1 | 0.656 | 0.554 | 0.509 | 0.599 | 0.190 | 0.229 | 0.168* | 0.191* |

### 4.4 Models, training, and hyperparameters tuning

The overall loss is calculated as the weighted average across all seventeen tasks: (1) we picked a random weight for the primary task from the range [0.50, 0.99]; this ensures that the primary task has the majority weight. (2) For the remaining weight (1 - primary weight), we allocated it to the sixteen auxiliary tasks by: (a) random, (b) linearly-normalized mutual information, or (c) softmax-normalized mutual information. (a) is self-explanatory. As for (b) and (c), mutual information has been shown to be the best predictor – compared to entropy and conditional entropy – of whether an auxiliary task would be helpful (Bjerva, 2017). We computed the mutual information (vector $m$) of each auxiliary variable with the primary target variable (Kraskov et al., 2004; Ross, 2014) using scikit-learn (Pedregosa et al., 2011). Then, we linearly-normalized or softmax-normalized $m$ to sum up to 1. Finally, we multiplied the normalized $m$ with the remaining weight from (2); this ensures that the primary weight and the sixteen auxiliary weights sum up to one. (a), (b), and (c) have ten trials each during hyperparameter tuning. Two variants of the HAN architecture are used (Figs. 2 and 3). For hypothesis testing, we bootstrapped confidence intervals (appendix A.4).

## 5 Results and Discussion

The key takeaways are: (H1) The introduced set of auxiliary supervision improves primary task performance significantly in six of the eight primary tasks. (H2) Maximum learning capacity should be given to the primary branch as a default.
(H3) HAN-ROCK is unlikely (in one of the eight tasks) to degrade primary task performance significantly, and sometimes significantly improves it (in four of the eight tasks). (H1): To test H1 (whether the introduced set of auxiliary supervision improves primary task performance), we first train the model with all sixteen auxiliary targets (from the families: actions, prosody, historical, and future). Then, to differentiate the effect of the historical and future supervision, we set the loss weights of the historical and future targets to zero; effectively, there is only supervision from eight auxiliary targets (actions and prosody). Lastly, for the baseline model (no auxiliary supervision), we set the loss weights of all sixteen auxiliary targets to zero.

Table 3: H3 Results. *: the model performance has a significant difference with the baseline.

| Hierarchy | IEMOCAP Classif. MA(4) | IEMOCAP Val. MAE | IEMOCAP Act. MAE | IEMOCAP Dom. MAE | SEMAINE Act. MAE | SEMAINE Int. MAE | SEMAINE Pow. MAE | SEMAINE Val. MAE |
|---|---|---|---|---|---|---|---|---|
| Flat (Baseline) | 0.699 | 0.520 | 0.526 | 0.606 | 0.183 | 0.230 | 0.164 | 0.185 |
| HAN-ROCK | 0.715 | 0.538 | 0.507* | 0.600* | 0.184 | 0.218* | 0.167* | 0.178* |

Table 4: Class-wise classification F1 score on IEMOCAP. Baseline and challenger refer to the HAN-ROCK architecture under the three hypotheses. *: the challenger performance has a statistically significant difference with the baseline model.
| Label | Count | H1 Baseline (Aux target: None) | H1 Challenger (Aux target: aphf) | H2 Baseline (Pri GRU: 256) | H2 Challenger (Pri GRU: 1) | H3 Baseline (Hierarchy: Flat) | H3 Challenger (Hierarchy: HAN-ROCK) | SoTA (M3ER) |
|---|---|---|---|---|---|---|---|---|
| Sad | 1,084 | 0.573 | 0.689* | 0.699 | 0.591* | 0.674 | 0.704* | 0.775 |
| Anger | 1,103 | 0.531 | 0.683 | 0.752 | 0.672* | 0.657 | 0.720* | 0.862 |
| Happy | 1,636 | 0.772 | 0.784 | 0.776 | 0.754* | 0.804 | 0.806 | 0.862 |
| Neutral | 1,708 | 0.664 | 0.636 | 0.688 | 0.627* | 0.645 | 0.631 | 0.745 |

Given auxiliary supervision, the model significantly outperforms the baseline of not having auxiliary supervision in six out of the eight primary tasks (Table 1). The model with two auxiliary target families (actions and prosody) significantly outperformed the baseline model in five out of eight primary tasks. The addition of the other two auxiliary target families (historical and future labels) sometimes significantly improved primary task performance (valence in IEMOCAP), but it also sometimes made it significantly worse (activation and intensity in SEMAINE). This shows that the value of auxiliary tasks, and the associated risk of negative transfer, depends on the auxiliary task. (H2): To test H2 (whether maximum learning capacity should be given to the primary branch), we let P represent the number of GRUs assigned to the primary talkturn encoder, and A represent the number of GRUs assigned to each of the sixteen auxiliary talkturn encoders. We constrained P + A to equal 257. During our experiments, we set P to 1, 64, 128, 192, and 256. We set P = 256 as the baseline model because it is the maximum learning capacity we can give to the primary branch while giving 1 GRU ($=257-256$) to each of the sixteen auxiliary talkturn encoders. In all primary tasks, the baseline model of assigning 256 GRUs to the primary branch is not significantly outperformed by the models that assigned 1, 64, 128, or 192 GRUs (Table 2).
Generally, the performance decreased as the number of GRUs assigned to the primary talkturn encoder decreased from 256 to 1. We observed significantly worse performance in two out of eight tasks – power and valence in SEMAINE. Also, assigning 256 GRUs to the primary talkturn encoder and 1 to each of the sixteen auxiliary talkturn encoders yields the smallest model (as opposed to assigning 1 GRU to the primary talkturn encoder and 256 GRUs to each of the sixteen auxiliary encoders), and thus trains the fastest. Therefore, we recommend that the maximum capacity be given to the primary branch as a default. That said, the presence of an auxiliary branch is still important. The baseline of H1 (no auxiliary supervision, Table 1) can be approximated as P = $256+16\times 1$, A = 0 (the same model architecture, except that the loss weights of all auxiliary tasks are zero). We compared the former to the baseline in Table 2, and found that four out of eight primary tasks show significant improvements from changing the capacity assigned to each auxiliary talkturn encoder from zero to one GRU. (H3): To test H3 (whether auxiliary supervision should be given a lower hierarchy), we compare the results from the flat-MTL HAN architecture (baseline) against the HAN-ROCK architecture (Table 3). Placing auxiliary supervision at the lower hierarchy significantly improves primary task performance in four out of eight tasks. In only one out of eight tasks (power in SEMAINE) does auxiliary supervision significantly degrade primary task performance. Further improvements may be possible through the fusion module in future research.

### 5.1 Class-wise Performance and SoTA

Generally, we found that all hypotheses' effects are stronger on lower-resource labels (sad and anger, Table 4). We also present the performance of M3ER (Mittal et al., 2020), a previous state-of-the-art (SoTA) approach. We do not expect the performance of our text-only input to match the SoTA approach, which is confirmed in Table 4.
By fusing numerical vectors from the three modalities, as is prevalent in SoTA approaches (Zadeh et al., 2018a, b; Hazarika et al., 2018b, a; Majumder et al., 2019; Mittal et al., 2020), the inputs are of a much higher granularity than in our approach of describing the multimodal cues using discrete words. Although the text-based input is likely to constrain model performance, the multimodal transcription could be helpful for a human analyzing the conversation; we could also overlay the model perspective on the multimodal transcription to augment human analysis (see Appendix A.5).

## 6 Conclusion

We proposed to re-use feature engineering pre-processing data as auxiliary tasks to improve performance via transfer learning. Three hypotheses were tested. The experimental results confirm H1: introducing our set of sixteen auxiliary supervisors resulted in better performance in most primary tasks. For H2, maximum learning capacity should be given to the primary branch. Lastly, for H3, placing the auxiliary supervision in a lower hierarchy is unlikely to hurt performance significantly, and it sometimes significantly improves performance. This is encouraging news for multi-modal conversational analysis systems, as we have demonstrated how pre-processed data can be used twice to improve performance: once as inputs, and again as auxiliary tasks. The first limitation of our paper is that the solutions are evaluated on eight tasks in the conversational analysis domain, and it is not clear whether they would generalize outside of this domain. The second limitation is that we have evaluated on HAN, but not on other network architectures. A challenge to be addressed is the a priori selection of the auxiliary targets. Future research could investigate target selection, including how to use a much larger range of auxiliary targets, how to decide the optimum number of auxiliary targets, and whether it is possible to perform these steps automatically.

## References

* Arora et al.
(2019) Udit Arora, William Scott Paka, and Tanmoy Chakraborty. 2019. Multitask learning for blackmarket tweet detection. In _Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining_ , pages 127–130. * Arpit et al. (2017) Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. 2017. A closer look at memorization in deep networks. In _International Conference on Machine Learning_ , pages 233–242. PMLR. * Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_. * Baltrušaitis et al. (2016) Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency. 2016. Openface: an open source facial behavior analysis toolkit. In _Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on_ , pages 1–10. IEEE. * Bell et al. (2016) Peter Bell, Pawel Swietojanski, and Steve Renals. 2016. Multitask learning of context-dependent targets in deep neural network acoustic models. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , 25(2):238–247. * Bergstra and Bengio (2012) James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. _The Journal of Machine Learning Research_ , 13(1):281–305. * Bjerva (2017) Johannes Bjerva. 2017. Will my auxiliary tagging task help? estimating auxiliary tasks effectivity in multi-task learning. In _Proceedings of the 21st Nordic Conference on Computational Linguistics_ , pages 216–220. * Blumberg et al. (2018) Stefano B Blumberg, Ryutaro Tanno, Iasonas Kokkinos, and Daniel C Alexander. 2018. Deeper image quality transfer: Training low-memory neural networks for 3d images. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 118–125. Springer. * Busso et al.
(2008) Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. _Language resources and evaluation_ , 42(4):335. * Caruana (1997) Rich Caruana. 1997. Multitask learning. _Machine learning_ , 28(1):41–75. * Caruana et al. (1996) Rich Caruana, Shumeet Baluja, and Tom Mitchell. 1996. Using the future to “sort out" the present: Rankprop and multitask learning for medical risk evaluation. In _Advances in neural information processing systems_ , pages 959–965. * Chen et al. (2014) Dongpeng Chen, Brian Mak, Cheung-Chi Leung, and Sunil Sivadas. 2014. Joint acoustic modeling of triphones and trigraphemes by multi-task learning deep neural networks for low-resource speech recognition. In _2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 5592–5596. IEEE. * Chen and Mak (2015) Dongpeng Chen and Brian Kan-Wing Mak. 2015. Multitask learning of deep neural networks for low-resource speech recognition. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , 23(7):1172–1183. * Cheng et al. (2015) Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-domain name error detection using a multi-task rnn. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 737–746. * Christ et al. (2018) Maximilian Christ, Nils Braun, Julius Neuffer, and Andreas W Kempa-Liehr. 2018. Time series feature extraction on basis of scalable hypothesis tests (tsfresh–a python package). _Neurocomputing_ , 307:72–77. * Cooper et al. (2005) Gregory F Cooper, Vijoy Abraham, Constantin F Aliferis, John M Aronis, Bruce G Buchanan, Richard Caruana, Michael J Fine, Janine E Janosky, Gary Livingston, Tom Mitchell, et al. 2005. Predicting dire outcomes of patients with community acquired pneumonia. _Journal of biomedical informatics_ , 38(5):347–366. * Du et al. 
(2017) Yongping Du, Yunpeng Pan, and Junzhong Ji. 2017. A novel serial deep multi-task learning model for large scale biomedical semantic indexing. In _2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)_ , pages 533–537. IEEE. * Ekman (1997) Rosenberg Ekman. 1997. _What the face reveals: Basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS)_. Oxford University Press, USA. * Fariha (2016) Anna Fariha. 2016. Automatic image captioning using multitask learning. In _In the Proceedings of Neural Information Processing Systems_ , volume 20, pages 11–20. * Garcia-Garcia et al. (2017) Jose Maria Garcia-Garcia, Victor MR Penichet, and Maria D Lozano. 2017. Emotion detection: a technology review. In _Proceedings of the XVIII international conference on human computer interaction_ , pages 1–8. * Ghosn and Bengio (1997) Joumana Ghosn and Yoshua Bengio. 1997. Multi-task learning for stock selection. In _Advances in neural information processing systems_ , pages 946–952. * Golovin et al. (2017) Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D Sculley. 2017. Google vizier: A service for black-box optimization. In _Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining_ , pages 1487–1495. * Gong et al. (2019) Ting Gong, Tyler Lee, Cory Stephenson, Venkata Renduchintala, Suchismita Padhy, Anthony Ndirango, Gokce Keskin, and Oguz H Elibol. 2019. A comparison of loss weighting strategies for multi task learning in deep neural networks. _IEEE Access_ , 7:141627–141632. * Hall et al. (2009) Judith A Hall, Debra L Roter, Danielle C Blanch, and Richard M Frankel. 2009. Observer-rated rapport in interactions between medical students and standardized patients. _Patient Education and Counseling_ , 76(3):323–327. * Hassani and Haley (2019) Kaveh Hassani and Mike Haley. 2019. Unsupervised multi-task feature learning on point clouds. 
In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 8160–8171. * Hazarika et al. (2018a) Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018a. Icon: Interactive conversational memory network for multimodal emotion detection. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2594–2604. * Hazarika et al. (2018b) Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018b. Conversational memory network for emotion recognition in dyadic dialogue videos. In _Proceedings of the conference. Association for Computational Linguistics. North American Chapter. Meeting_ , volume 2018, page 2122. NIH Public Access. * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 770–778. * Hoque et al. (2013) Mohammed Hoque, Matthieu Courgeon, Jean-Claude Martin, Bilge Mutlu, and Rosalind W Picard. 2013. Mach: My automated conversation coach. In _Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing_ , pages 697–706. * Huang et al. (2019) Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. 2019. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In _Advances in neural information processing systems_ , pages 103–112. * James et al. (2013) Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. _An introduction to statistical learning_ , volume 112. Springer. * Jefferson (2004) Gail Jefferson. 2004. Glossary of transcript symbols with an introduction. _Pragmatics and Beyond New Series_ , 125:13–34. * Kim et al. (2019a) Joshua Y Kim, Rafael A Calvo, Kalina Yacef, and NJ Enfield. 2019a. 
A review on dyadic conversation visualizations-purposes, data, lens of analysis. _arXiv preprint arXiv:1905.00653_. * Kim et al. (2019b) Joshua Y Kim, Greyson Y Kim, and Kalina Yacef. 2019b. Detecting depression in dyadic conversations with multimodal narratives and visualizations. In _Australasian Joint Conference on Artificial Intelligence_ , pages 303–314. Springer. * Kim et al. (2021) Joshua Y Kim, Kalina Yacef, Greyson Y Kim, Chunfeng Liu, Rafael Calvo, and Silas CR Taylor. 2021. Monah: Multi-modal narratives for humans to analyze conversations. _arXiv preprint arXiv:2101.07339_. * Kraskov et al. (2004) Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. 2004. Estimating mutual information. _Physical review E_ , 69(6):066138. * Krishna et al. (2018) Kalpesh Krishna, Shubham Toshniwal, and Karen Livescu. 2018. Hierarchical multitask learning for ctc-based speech recognition. _arXiv preprint arXiv:1807.06234_. * Latif et al. (2020) Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Julien Epps, and Bjórn Wolfgang Schuller. 2020. Multi-task semi-supervised adversarial autoencoding for speech emotion recognition. _IEEE Transactions on Affective Computing_. * LeCun et al. (1995) Yann LeCun, Yoshua Bengio, et al. 1995. Convolutional networks for images, speech, and time series. _The handbook of brain theory and neural networks_ , 3361(10):1995. * Lee et al. (2016) Giwoong Lee, Eunho Yang, and Sung Hwang. 2016. Asymmetric multi-task learning based on task relatedness and loss. In _International Conference on Machine Learning_ , pages 230–238. * Lee et al. (2018) Hae Beom Lee, Eunho Yang, and Sung Ju Hwang. 2018. Deep asymmetric multi-task feature learning. In _International Conference on Machine Learning_ , pages 2956–2964. PMLR. * Li et al. (2019) Yuanchao Li, Tianyu Zhao, and Tatsuya Kawahara. 2019. Improved end-to-end speech emotion recognition using self attention mechanism and multitask learning. In _Interspeech_ , pages 2803–2807. * Lipton et al. 
(2015) Zachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzel. 2015. Learning to diagnose with lstm recurrent neural networks. _arXiv preprint arXiv:1511.03677_. * Liu et al. (2019) Shengchao Liu, Yingyu Liang, and Anthony Gitter. 2019. Loss-balanced task weighting to reduce negative transfer in multi-task learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 9977–9978. * Liu et al. (2016) Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. 2016. Ssd: Single shot multibox detector. In _European conference on computer vision_ , pages 21–37. Springer. * Majumder et al. (2019) Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 6818–6825. * McKeown et al. (2011) Gary McKeown, Michel Valstar, Roddy Cowie, Maja Pantic, and Marc Schroder. 2011\. The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. _IEEE transactions on affective computing_ , 3(1):5–17. * Menegola et al. (2017) Afonso Menegola, Michel Fornaciali, Ramon Pires, Flávia Vasques Bittencourt, Sandra Avila, and Eduardo Valle. 2017. Knowledge transfer for melanoma screening with deep learning. In _2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)_ , pages 297–300. IEEE. * Mittal et al. (2020) Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, and Dinesh Manocha. 2020. M3er: Multiplicative multimodal emotion recognition using facial, textual, and speech cues. In _AAAI_ , pages 1359–1367. * Mordan et al. (2018) Taylor Mordan, Nicolas Thome, Gilles Henaff, and Matthieu Cord. 2018. Revisiting multi-task learning with rock: a deep residual auxiliary block for visual detection. 
In _Advances in Neural Information Processing Systems_ , pages 1310–1322. * Naim et al. (2015) Iftekhar Naim, M Iftekhar Tanveer, Daniel Gildea, and Mohammed Ehsan Hoque. 2015\. Automated prediction and analysis of job interview performance: The role of what you say and how you say it. In _2015 11th IEEE international conference and workshops on automatic face and gesture recognition (FG)_ , volume 1, pages 1–6. IEEE. * Nakkiran et al. (2019) Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. 2019. Deep double descent: Where bigger models and more data hurt. _arXiv preprint arXiv:1912.02292_. * Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830. * Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_ , pages 1532–1543. * Poria et al. (2019) Soujanya Poria, Navonil Majumder, Rada Mihalcea, and Eduard Hovy. 2019. Emotion recognition in conversation: Research challenges, datasets, and recent advances. _IEEE Access_ , 7:100943–100953. * Romero et al. (2014) Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets. _arXiv preprint arXiv:1412.6550_. * Ross (2014) Brian C Ross. 2014. Mutual information between discrete and continuous data sets. _PloS one_ , 9(2):e87357. * Sadoughi and Busso (2018) Najmeh Sadoughi and Carlos Busso. 2018. Expressive speech-driven lip movements with multitask learning. 
In _2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)_, pages 409–415. IEEE. * Sener and Koltun (2018) Ozan Sener and Vladlen Koltun. 2018. Multi-task learning as multi-objective optimization. In _Advances in Neural Information Processing Systems_ , pages 527–538. * Shen et al. (2020) Wei Shen, Xiaonan He, Chuheng Zhang, Qiang Ni, Wanchun Dou, and Yan Wang. 2020. Auxiliary-task based deep reinforcement learning for participant selection problem in mobile crowdsourcing. In _Proceedings of the 29th ACM International Conference on Information & Knowledge Management_, pages 1355–1364. * Simm et al. (2014) Jaak Simm, Ildefons Magrans de Abril, and Masashi Sugiyama. 2014. Tree-based ensemble multi-task learning method for classification and regression. _IEICE TRANSACTIONS on Information and Systems_ , 97(6):1677–1681. * Søgaard and Goldberg (2016) Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 231–235. * Tanveer et al. (2015) M Iftekhar Tanveer, Emy Lin, and Mohammed Hoque. 2015. Rhema: A real-time in-situ intelligent interface to help people with public speaking. In _Proceedings of the 20th International Conference on Intelligent User Interfaces_ , pages 286–295. * Tao and Busso (2020) Fei Tao and Carlos Busso. 2020. End-to-end audiovisual speech recognition system with multitask learning. _IEEE Transactions on Multimedia_. * Torrey and Shavlik (2010) Lisa Torrey and Jude Shavlik. 2010. Transfer learning. In _Handbook of research on machine learning applications and trends: algorithms, methods, and techniques_ , pages 242–264. IGI global. * Trinh et al. (2018) Trieu H Trinh, Andrew M Dai, Minh-Thang Luong, and Quoc V Le. 2018. Learning longer-term dependencies in rnns with auxiliary losses. _arXiv preprint arXiv:1803.00144_. 
* Vokaturi (2019) Vokaturi. 2019. Vokaturi Overview. * Wu et al. (2020) Sen Wu, Hongyang R Zhang, and Christopher Ré. 2020. Understanding and improving information transfer in multi-task learning. _arXiv preprint arXiv:2005.00944_. * Xia and Liu (2015) Rui Xia and Yang Liu. 2015. A multi-task learning framework for emotion recognition using 2d continuous space. _IEEE Transactions on affective computing_ , 8(1):3–14. * Yang et al. (2019) Jianliang Yang, Yuenan Liu, Minghui Qian, Chenghua Guan, and Xiangfei Yuan. 2019\. Information extraction from electronic medical records using multitask recurrent neural network with contextual word embedding. _Applied Sciences_ , 9(18):3658. * Yang et al. (2016a) Le Yang, Dongmei Jiang, Lang He, Ercheng Pei, Meshia Cédric Oveneke, and Hichem Sahli. 2016a. Decision tree based depression classification from audio video and language information. In _Proceedings of the 6th international workshop on audio/visual emotion challenge_ , pages 89–96. * Yang et al. (2018) Min Yang, Wei Zhao, Wei Xu, Yabing Feng, Zhou Zhao, Xiaojun Chen, and Kai Lei. 2018\. Multitask learning for cross-domain image captioning. _IEEE Transactions on Multimedia_ , 21(4):1047–1061. * Yang et al. (2016b) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016b. Hierarchical attention networks for document classification. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1480–1489. * Yoo et al. (2018) ByungIn Yoo, Youngjun Kwak, Youngsung Kim, Changkyu Choi, and Junmo Kim. 2018. Deep facial age estimation using conditional multitask learning with weak label expansion. _IEEE Signal Processing Letters_ , 25(6):808–812. * Yousif et al. (2018) Abdallah Yousif, Zhendong Niu, and Ally S Nyamawe. 2018. Citation classification using multitask convolutional neural network model. 
In _International Conference on Knowledge Science, Engineering and Management_ , pages 232–243. Springer. * Yu and Jiang (2016) Jianfei Yu and Jing Jiang. 2016. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In _Association for Computational Linguistics_. * Zadeh et al. (2018a) Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multi-view sequential learning. _arXiv preprint arXiv:1802.00927_. * Zadeh et al. (2018b) Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, and Louis-Philippe Morency. 2018b. Multi-attention recurrent network for human communication comprehension. In _Proceedings of the… AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence_ , volume 2018, page 5642. NIH Public Access. * Zalmout and Habash (2019) Nasser Zalmout and Nizar Habash. 2019. Adversarial multitask learning for joint multi-feature and multi-dialect morphological modeling. _arXiv preprint arXiv:1910.12702_. * Zeiler and Fergus (2014) Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In _European conference on computer vision_ , pages 818–833. Springer. * Zhang et al. (2016) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2016. Understanding deep learning requires rethinking generalization. _arXiv preprint arXiv:1611.03530_. * Zhu et al. (2020) Fengda Zhu, Yi Zhu, Xiaojun Chang, and Xiaodan Liang. 2020. Vision-language navigation with self-supervised auxiliary reasoning tasks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 10012–10022.

## Appendix A Appendix

### A.1 Dataset partitions

We detail the dataset partitions in Table 5 to aid reproducibility.
Table 5: Dataset partitions. The left three columns are IEMOCAP sessions; the right three columns are SEMAINE sessions.

| Partition | Session | Count | Partition | Session | Count |
|---|---|---|---|---|---|
| Train | Ses01F | 861 | Train | 2008.12.05.16.03.15 | 424 |
| Train | Ses01M | 958 | Train | 2009.01.30.12.00.35 | 404 |
| Train | Ses02F | 889 | Train | 2009.02.12.10.49.45 | 686 |
| Train | Ses02M | 922 | Train | 2009.05.15.15.04.29 | 436 |
| Train | Ses03F | 958 | Train | 2009.05.29.14.30.05 | 760 |
| Train | Ses03M | 1178 | Train | 2009.05.22.15.17.45 | 668 |
| Dev | Ses04F | 1105 | Train | 2009.05.25.11.23.09 | 928 |
| Dev | Ses04M | 998 | Train | 2009.05.26.10.19.53 | 790 |
| Test | Ses05F | 1128 | Train | 2009.06.05.10.14.28 | 732 |
| Test | Ses05M | 1042 | Train | 2009.06.15.12.13.06 | 958 |
| | | | Train | 2009.06.19.14.01.24 | 586 |
| | | | Train | 2009.10.27.16.17.38 | 440 |
| | | | Dev | 2008.12.19.11.03.11 | 188 |
| | | | Dev | 2009.01.06.14.53.49 | 472 |
| | | | Dev | 2009.05.12.15.02.01 | 448 |
| | | | Dev | 2009.05.08.11.28.48 | 752 |
| | | | Dev | 2009.06.26.14.38.17 | 404 |
| | | | Test | 2008.12.14.14.47.07 | 372 |
| | | | Test | 2009.01.06.12.41.42 | 264 |
| | | | Test | 2009.01.28.15.35.20 | 364 |
| | | | Test | 2009.06.26.14.38.17 | 440 |
| | | | Test | 2009.06.26.14.09.45 | 448 |

### A.2 Scaling the auxiliary targets

We detail the operations used to scale the percentile scores, which range over [0,1], to the ranges of the various primary tasks. For the IEMOCAP primary tasks that are regression problems, we multiply the percentile score by 4 and add 1 to obtain the range [1,5]. For the IEMOCAP classification task, we leave the auxiliary targets in the range [0,1]. As for the SEMAINE tasks, which are all regression problems, we multiply the percentile score by 2 and subtract 1 to obtain the range [-1,1].

### A.3 Hyperparameter tuning process

GloVe word embeddings (300 dimensions) are used to represent the words (Pennington et al., 2014). Hyperparameter tuning is crucial because different combinations of primary and auxiliary tasks require different sets of hyperparameters.
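The target scaling described in A.2 is a simple affine map; a minimal sketch follows (the helper name is ours, not from the released code):

```python
def scale_auxiliary_target(p, task):
    """Map a percentile score p in [0, 1] onto the range of the primary task.

    task: 'iemocap_regression'     -> [1, 5]
          'iemocap_classification' -> [0, 1] (unchanged)
          'semaine_regression'     -> [-1, 1]
    """
    if task == "iemocap_regression":
        return 4 * p + 1      # [0,1] -> [1,5]
    if task == "semaine_regression":
        return 2 * p - 1      # [0,1] -> [-1,1]
    return p                  # IEMOCAP classification keeps the raw percentile
```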
For hyperparameter tuning, we used random search (Bergstra and Bengio, 2012) with thirty trials. We tuned the learning rate, batch size, L2 regularization, the number of GRUs assigned to the primary and auxiliary branches, the auxiliary weights assignment, the content-context size, and lastly the GRU dropout and recurrent dropout (as detailed in Table 6). Training is done on an RTX 2070 or a V100 GPU, for up to 350 epochs. Early stopping is possible via the median-stopping rule (Golovin et al., 2017) after the fifth epoch and after every two epochs thereafter (i.e., at epoch numbers 5, 7, 9, $...$, 349). Table 7 details the hyperparameters of the models that performed best on the development set.

Table 6: Range of hyperparameters tuned. U: uniform sampling; LU: uniform sampling on the logarithmic scale.

| Name | Min. | Max. | Sampling |
|---|---|---|---|
| Learning rate | $2^{-10}$ | $2^{-5}$ | LU |
| Batch size | 32 | 256 | U |
| Pri GRU | {1, 64, 128, 192, 256} | | U |
| Aux GRU | 257 minus Pri GRU | | |
| Loss-weights assignment | {Random, Linear-Normalized, Softmax-Normalized} | | U |
| L2 regularization | 0.0 | 0.50 | LU |
| Content-size | 1 | 30 | U |
| GRU dropout | 0.01 | 0.50 | U |
| Recurrent dropout | 0.01 | 0.50 | U |

Table 7: Best hyperparameter settings for the development set. The first four result columns are IEMOCAP tasks; the last four are SEMAINE tasks. R: Random, L: Linear-Normalized, S: Softmax-Normalized.

| | Classif. MA(4) | Val. MAE | Act. MAE | Dom. MAE | Act. MAE | Int. MAE | Pow. MAE | Val. MAE |
|---|---|---|---|---|---|---|---|---|
| Dev. set | 0.714 | 0.482 | 0.492 | 0.570 | 0.118 | 0.164 | 0.135 | 0.138 |
| Test set | 0.747 | 0.497 | 0.499 | 0.587 | 0.184 | 0.215 | 0.164 | 0.169 |
| Learning rate | 6.21e-03 | 2.21e-02 | 1.44e-02 | 3.13e-02 | 2.43e-02 | 1.58e-02 | 3.04e-03 | 2.94e-02 |
| Batch size | 43 | 43 | 62 | 77 | 41 | 43 | 33 | 62 |
| Pri GRU | 256 | 256 | 256 | 256 | 256 | 256 | 192 | 256 |
| Aux GRU | 1 | 1 | 1 | 1 | 1 | 1 | 65 | 1 |
| L2 regularization | 2.25e-05 | 0 | 1.43e-05 | 2.41e-04 | 0 | 0 | 0 | 0 |
| Content-size | 18 | 4 | 3 | 3 | 17 | 21 | 19 | 14 |
| GRU dropout | 0.27 | 0.49 | 0.33 | 0.30 | 0.06 | 0.17 | 0.05 | 0.07 |
| Recurrent dropout | 0.04 | 0.03 | 0.02 | 0.32 | 0.25 | 0.28 | 0.08 | 0.26 |
| Epoch No. | 232 | 161 | 74 | 14 | 48 | 116 | 110 | 110 |
| Aux. weights assignment | S | R | L | L | L | R | S | R |
| Main loss | 0.840 | 0.560 | 0.810 | 0.610 | 0.870 | 0.940 | 0.950 | 0.920 |
| AU05 loss | 0.009 | 0.013 | 0.014 | 0.000 | 0.000 | 0.002 | 0.002 | 0.008 |
| AU17 loss | 0.009 | 0.027 | 0.009 | 0.003 | 0.002 | 0.004 | 0.002 | 0.003 |
| AU20 loss | 0.009 | 0.028 | 0.0 | 0.001 | 0.000 | 0.007 | 0.002 | 0.006 |
| AU25 loss | 0.009 | 0.031 | 0.006 | 0.002 | 0.015 | 0.008 | 0.002 | 0.009 |
| Happy tone loss | 0.010 | 0.023 | 0.038 | 0.006 | 0.021 | 0.013 | 0.002 | 0.003 |
| Sad tone loss | 0.010 | 0.028 | 0.053 | 0.008 | 0.037 | 0.010 | 0.002 | 0.003 |
| Angry tone loss | 0.010 | 0.016 | 0.034 | 0.006 | 0.055 | 0.011 | 0.002 | 0.000 |
| Fear tone loss | 0.010 | 0.043 | 0.036 | 0.004 | 0.000 | 0.004 | 0.002 | 0.003 |
| $Y_{t-1}$ loss | 0.010 | 0.030 | 0.0 | 0.040 | 0.000 | 0.000 | 0.005 | 0.006 |
| $Y_{t-2}$ loss | 0.011 | 0.022 | 0.0 | 0.049 | 0.000 | 0.000 | 0.004 | 0.004 |
| $Y_{t-3}$ loss | 0.010 | 0.006 | 0.0 | 0.046 | 0.000 | 0.000 | 0.003 | 0.006 |
| $Y_{t-4}$ loss | 0.010 | 0.040 | 0.0 | 0.044 | 0.000 | 0.000 | 0.003 | 0.007 |
| $Y_{t+1}$ loss | 0.010 | 0.042 | 0.0 | 0.042 | 0.000 | 0.000 | 0.005 | 0.005 |
| $Y_{t+2}$ loss | 0.011 | 0.0 | 0.0 | 0.052 | 0.000 | 0.000 | 0.004 | 0.006 |
| $Y_{t+3}$ loss | 0.010 | 0.050 | 0.0 | 0.043 | 0.000 | 0.000 | 0.003 | 0.008 |
| $Y_{t+4}$ loss | 0.010 | 0.038 | 0.0 | 0.044 | 0.000 | 0.000 | 0.003 | 0.003 |

### A.4 Details of computing the bootstrap confidence interval

Baseline models for each hypothesis are detailed in section 2. All non-baseline models are referred to as challenger models. We created 1000 bootstrap samples of the test set performance by (1) resampling the development set performance, (2) selecting the set of hyperparameters that resulted in the best development set performance, and (3) looking up the test set performance given that best-performing set of hyperparameters. To judge whether the challenger outperforms the baseline, we computed the 95 percent confidence interval by (1) performing element-wise subtraction between the resampled test set performance of the baseline and that of the challenger, (2) removing the top and bottom 2.5 percent of the differences, and (3) checking whether the remaining 95 percent confidence interval includes zero. If it does not include zero, the difference is statistically significant.

### A.5 Visualization from HAN-ROCK

We demonstrate how the HAN-ROCK model could be used to help humans analyze conversations using only text-based inputs. We visualized the attention weights from two models: (1) MTL refers to the classification model with auxiliary supervisors, whilst (2) STL refers to the same model architecture but with its auxiliary supervisors’ loss weights set to zero. In principle, the MTL model should exhibit attention weights that are less likely to overfit because the weights are tempered by the auxiliary supervisors. We observe that both models use the historical talkturns more than the current talkturn; secondly, both assign high attention to the second word of the talkturns, which is interesting because the second word is where the multimodal annotations are inserted.

Figure 5: Conversation analysis example. Both models predicted the class label (anger) correctly.
The left-most column denotes the talkturn context – $i$ refers to the current talkturn where the target emotion class is predicted. The square boxes indicate the level of attention (N: None, L: Low, M: Medium, H: High) assigned to the talkturn. Within the talkturn, we also enlarge and darken the font to visualize higher attention weights. As explained in Section 3.2, there are three levels of attention over the internal representations: word ($\alpha_{it}$), talkturn ($\alpha_{abi}$), and task ($\alpha_{c}$). To compute the overall word and talkturn attention, we compute the weighted average of $\alpha_{it}$ and $\alpha_{abi}$ using $\alpha_{c}$ (task attention) as weights. Once we have the overall word and talkturn attention, we standardize the weights by computing the z-score. Depending on the z-score, we bucket the attention into none (z $<$ 0), low (0 $<$ z $<$ 1), medium (1 $<$ z $<$ 2), or high (2 $<$ z). We plan to validate the efficacy of the attention weights with human users in future research.
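The attention post-processing described above (task-weighted averaging, z-scoring, bucketing) can be sketched as follows; the function name and array shapes are our own assumptions, not the released implementation:

```python
import numpy as np

def bucket_attention(word_attn, task_attn):
    """word_attn: (n_tasks, n_words) per-task word attention;
    task_attn: (n_tasks,) task attention used as averaging weights.

    Returns one bucket label per word: N(one), L(ow), M(edium), H(igh).
    """
    # Weighted average over tasks, using the task attention as weights
    overall = np.average(word_attn, axis=0, weights=task_attn)
    # Standardize to z-scores
    z = (overall - overall.mean()) / overall.std()
    # Bucket: z < 0 -> N, 0 <= z < 1 -> L, 1 <= z < 2 -> M, z >= 2 -> H
    return [("N" if s < 0 else "L" if s < 1 else "M" if s < 2 else "H")
            for s in z]
```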
# Revealing Galaxy Candidates out to $z\sim 16$ with JWST Observations of the Lensing Cluster SMACS0723 Hakim Atek1, Marko Shuntov1, Lukas J. Furtak2, Johan Richard3, Jean-Paul Kneib4, Guillaume Mahler5, Adi Zitrin2, H.J. McCracken1, Stéphane Charlot1, Jacopo Chevallard6, Iryna Chemerynska1 1Institut d’Astrophysique de Paris, CNRS, Sorbonne Université, 98bis Boulevard Arago, 75014, Paris, France 2Physics Department, Ben-Gurion University of the Negev, P.O. Box 653, Be’er- Sheva 84105, Israel 3Univ Lyon, Univ Lyon1, Ens de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230, Saint-Genis Laval, France 4Laboratoire d’Astrophysique, Ecole Polytechnique Fédérale de Lausanne, Observatoire de Sauverny, CH-1290 Versoix, Switzerland 5Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK 6Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK E-mail<EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract One of the main goals of the JWST is to study the first galaxies in the Universe. We present a systematic photometric analysis of very distant galaxies in the first JWST deep field towards the massive lensing cluster SMACS0723. As a result, we report the discovery of two galaxy candidates at $z\sim 16$, only $250$ million years after the Big Bang. We also identify two candidates at $z\sim 12$ and 6 candidates at $z\sim 9-11$. Our search extended out to $z\lesssim 21$ by combining color information across seven NIRCam and NIRISS filters. By modelling the Spectral Energy Distributions (SEDs) with EAZY and BEAGLE, we test the robustness of the photometric redshift estimates. While their intrinsic (un-lensed) luminosity is typical of the characteristic luminosity L∗ at $z>10$, our high-redshift galaxies typically show small sizes and their morphologies are consistent with disks in some cases. 
The highest-redshift candidates have extremely blue UV-continuum slopes $-3<\beta<-2.4$, young ages $\sim 10-100$ Myr, and stellar masses around $\log(M_{\star}/\mathrm{M}_{\odot})=8.8$ inferred from their SED modeling, which indicate a rapid build-up of their stellar mass. Our search clearly demonstrates the capabilities of JWST to uncover robust photometric candidates up to very high redshifts, and peer into the formation epoch of the first galaxies. ###### keywords: galaxies: high-redshift – dark ages, reionization, first stars – galaxies: dwarfs – galaxies: evolution – gravitational lensing: strong – cosmology: observations ## 1 Introduction Understanding the emergence of the first structures that hosted star formation has been one of the most important goals in modern astrophysics. One of the main routes to address this question is to identify the first generation of galaxies, which we believe formed at $z\gtrsim 15$. Great progress has been made in the last decade to uncover the most distant galaxies out to $z\sim 11$, essentially thanks to the Hubble Space Telescope (HST) and 8-10 m class ground-based telescopes. The record-holder for the farthest galaxy known with a spectroscopic confirmation is GN-z11 (Oesch et al., 2016; Jiang et al., 2021), at a redshift of $z=10.96$. Remarkably, GN-z11 is a relatively bright and massive galaxy ($M_{\mathrm{UV}}=-22.1$ AB), which challenges current galaxy formation models. The existence of a galaxy with a billion solar masses only $\sim 400$ Myr after the Big Bang indicates that galaxy formation was well underway very early in the history of the Universe. At slightly lower redshifts $z\sim 7-9$, large samples of nearly 2,000 galaxies have been photometrically identified, thanks mostly to the infrared sensitivity of the Wide Field Camera Three (WFC3) aboard HST (e.g. 
Finkelstein et al., 2015; Bouwens et al., 2021) or ground-based observations (e.g., Kauffmann et al., 2022). These large samples resulted in strong constraints on the shape of the ultra-violet (UV) luminosity function of galaxies at those epochs and its evolution with redshift. Moreover, Spitzer Space Telescope observations made it possible to probe the rest-frame optical emission of these high-redshift galaxies, which enabled us to constrain their stellar mass (e.g. Song et al., 2016; Bhatawdekar et al., 2019; Kikuchihara et al., 2020; Stefanon et al., 2021), albeit with important uncertainties due to source confusion and nebular emission contamination (Grazian et al., 2015; Furtak et al., 2021). More recently, using deep Spitzer and ground-based observations, Harikane et al. (2022) reported the detection of two galaxy candidates at $z\sim 12-13$. Atacama Large Millimeter/submillimeter Array (ALMA) follow-up observations have revealed the tentative ($2\sigma$) detection of an [O iii]$\lambda 88\mu\mathrm{m}$ emission line, putting one of them, HD1, at a redshift of $z=13.27$. Again, these two candidates do not align with theoretical models, which predict a significantly lower density of such bright galaxies at $z>10$. While wide extra-galactic surveys have unveiled these surprisingly large numbers of luminous galaxies at $z\sim 9-11$, deeper observations at longer wavelengths are needed to (i) push the redshift frontier to uncover more distant galaxies and (ii) constrain the number density of faint galaxies at early times. The advent of the JWST marks a new era in the detection and, more importantly, the characterization of early star-forming galaxies. The most important improvement is the significantly higher spatial resolution of JWST compared to Spitzer, which enables accurate photometric measurements in the near- and mid-infrared range corresponding to the rest-frame optical for galaxies at $z>6$. 
This is crucial for deriving physical properties of galaxies through spectral energy distribution (SED) fitting, since it allows us to account for a larger dynamic range in the age of the underlying stellar populations. Also, for the first time, we will be able to obtain rest-frame optical spectra for these early star-forming galaxies, giving us access to several optical emission lines, which are the gold standard for measuring galaxy properties such as star formation rate (SFR), gas-phase metallicity, dust content, etc. The first JWST observations consist of the Early Release Observations (ERO), which were obtained to showcase the observatory capabilities, and the Early Release Science (ERS) programs, aimed at testing the variety of instrument observing modes and helping the community understand and exploit JWST data. One of the primary targets of the ERO observations is the galaxy cluster SMACS J0723.3-7327 (SMACS0723 hereafter; Ebeling et al., 2001; Repp & Ebeling, 2018), which was recently observed with HST as part of the Reionization Lensing Cluster Survey program (RELICS; Coe et al., 2019; Salmon et al., 2020). One of the accepted ERS programs (1324, PI: Treu) also targets a lensing cluster, Abell 2744 (A2744), which is one of the Hubble Frontier Fields (HFF; Lotz et al., 2017) clusters. These observations follow in the footsteps of numerous HST observing campaigns and efforts to use massive galaxy clusters as gravitational telescopes. The magnification provided by the strong gravitational lensing (SL) enables us to reach intrinsically fainter and lower-mass galaxies and thus to constrain the very faint and low-mass ends of the luminosity and stellar mass functions (e.g. Atek et al., 2014a; Atek et al., 2018; Livermore et al., 2017; Bouwens et al., 2017; Ishigaki et al., 2018; Bhatawdekar et al., 2019; Kikuchihara et al., 2020; Furtak et al., 2021). 
In this work, we use the JWST ERO data of SMACS0723 to identify and characterize the most distant galaxies in the Universe. We use imaging data in 7 broad-band filters to select $z\gtrsim 9$ galaxies based on the Lyman break technique coupled with SED fitting to estimate their photometric redshifts and physical properties. This paper is structured as follows. In section 2, we present the observational data. In section 3, we describe how the galaxy sample was selected. The SL model used in this work and our SED-fitting analysis are presented in sections 4 and 5, respectively. Finally, we compare our estimates with a compilation of literature results and discuss the implications for galaxy formation models in section 6 before presenting our summary and conclusions in section 7. Throughout the paper, magnitudes are in the AB system (Oke & Gunn, 1983) and we adopt a cosmology with $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$, and $\Omega_{m}=0.3$.

## 2 Observations

Table 1: Limiting AB magnitudes (5$\sigma$) of the JWST/NIRCam and JWST/NIRISS imaging data used in this work. The depths were computed using random 0.3″ circular apertures.

| SW filter | F090W | F115W | F150W | F200W |
|---|---|---|---|---|
| Depth | 28.0 | 27.8 | 28.8 | 28.9 |

| LW filter | F277W | F356W | F444W |
|---|---|---|---|
| Depth | 29.2 | 29.3 | 29.1 |

Figure 1: JWST/NIRCam and JWST/NIRISS throughput curves of the filter set of the JWST ERO observations of SMACS0723 used in this work.

The first JWST science observations, targeting the lensing cluster SMACS0723, were obtained as part of the Early Release Observations (ERO) program (ID 2736; PI: Pontoppidan), and have been released on Wednesday, July 13th. They consist of deep multi-wavelength imaging with NIRCam (Near-Infrared Camera) and MIRI (Mid-Infrared Instrument), NIRSpec (Near-Infrared Spectrograph) multi-object spectroscopy, and NIRISS (Near-Infrared Imager and Slitless Spectrograph) wide-field slitless spectroscopy. 
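The 5$\sigma$ depths quoted in Table 1 come from the scatter of fluxes measured in randomly placed empty apertures. A minimal sketch of that estimate follows; the pixel-grid apertures and zeropoint handling are our own simplifications, not the authors' measurement code:

```python
import numpy as np

def five_sigma_depth(image, zeropoint, r_pix, n_aper=1000, seed=0):
    """Estimate the 5-sigma limiting AB magnitude of a background-subtracted
    image from the flux scatter in n_aper random circular apertures
    of radius r_pix (pixels)."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    fluxes = []
    for _ in range(n_aper):
        # Random aperture center, kept away from the image edges
        x0 = rng.uniform(r_pix, nx - r_pix)
        y0 = rng.uniform(r_pix, ny - r_pix)
        mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= r_pix ** 2
        fluxes.append(image[mask].sum())
    sigma = np.std(fluxes)
    # 5-sigma flux limit converted to an AB magnitude
    return zeropoint - 2.5 * np.log10(5 * sigma)
```

If the quoted 0.3″ aperture size is a diameter, then at the 0.03″/pix scale adopted in this work `r_pix` would be about 5 pixels.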
We retrieved the data from the Mikulski Archive for Space Telescopes (MAST). Both the short wavelength (SW) and the long wavelength (LW) channels of NIRCam were used simultaneously to obtain uniform spectral coverage from $\lambda\sim 1$ µm to $\lambda\sim 5$ µm. Tab. 1 summarizes the set of filters used in this work, and Fig. 1 shows their transmission curves. Note that we complement the 6 available NIRCam filters with F115W-band imaging data from the Near-Infrared Imager and Slitless Spectrograph (NIRISS; Doyon et al., 2012). These data, however, have a smaller field-of-view and only cover the core of the cluster. For each of the filters, we reduced the raw uncal files using the most recent JWST calwebb pipeline v1.6.2 and the calibration context CRDS_CTX = jwst_0970.pmap. In particular, the latest reference files contain updates from in-flight calibration, whereas ground calibrations were used in the early reductions of the ERO data. First, we applied the Detector1 stage to all files. Second, we ran the images through the second reduction stage of the pipeline, Image2, which performs flat-field correction, WCS registration, and photometric calibration. Due to significant levels of background residuals in the calibrated images, and following recommendations from the ERS CEERS program, we independently ran background subtraction on individual cal files. In addition, the first NIRCam calibration observations have shown that in most cases the detectors/filters were more sensitive than predicted (Rigby et al., 2022). Therefore, we updated the photometric zeropoints of the cal files with recent measurements of Boyer et al. (2022) and G. Brammer (https://github.com/gbrammer/grizli/pull/107). Finally, cal files were processed through the Image3 stage of the pipeline to create the final drizzled mosaic in each filter. All images were registered to GAIA DR2 astrometry and the relative registration was refined using TweakReg. 
We adopted a pixel scale of 0.03″ for both SW and LW channels. In order to increase the detection efficiency in the extraction procedure described in section 3.1, we created a deep LW image by stacking the three LW filters using their respective weight maps. We also create a stacked SW image of the non-detection filters blue-ward of the break. One common limitation for the detection of faint sources behind massive cluster fields is the intra-cluster light (ICL). This emission in the central region of the cluster is emitted by tidally stripped stars and bright cluster members (e.g. Atek et al., 2014a; Montes, 2022). In order to reduce the impact of this diffuse light, we apply a median filter to the detection image. We adopted a filter size of $\sim 2$″ $\times$ 2″, which significantly reduces the diffuse ICL and BCG light while minimizing residual artifacts. Note that the photometry measurement itself is still performed in the original image of each filter. Finally, we also use spectroscopic observations of SMACS0723 obtained with the Near-Infrared Spectrograph (NIRSpec; Jakobsen et al., 2022) as part of the same ERO program. We obtained level 3 data products from MAST which include 2D and 1D spectra of selected targets in the cluster field. ## 3 Sample ### 3.1 Photometric Catalogs Figure 2: Magnitude number counts from the SE++ extraction in the seven NIRCam and NIRISS filters. Also over-plotted are the HST WFC3/IR F140W number counts from RELICS (Coe et al., 2019; Salmon et al., 2020). The plot illustrates the improvement in depth and the increase of the number of faint sources detected with JWST. We carry out source detection and photometry measurements using SourceXtractor++ v0.17 (Bertin et al., 2020; Kümmel et al., 2020). SourceXtractor++ (SE++) is the successor of the classic and widely used SExtractor2, rewritten in C++ in order to optimize computational efficiency and parallelization across multiple cores. 
The novelty and advantage of the code lie in its support for two-level detection thresholding and flexible multi-object and multi-frame model fitting, as well as in allowing custom definitions of models. Additionally, it can operate on multiple images based on their WCS information, bypassing the need to re-sample images on the same pixel scale. SE++ has been extensively tested in its performance on photometry and morphological parameter recovery from model fitting in the Euclid Morphology Challenge, where it achieved the highest scores among all the tested codes (Euclid Collaboration et al., 2022b, a). For the detection of sources, we use the ICL-subtracted LW stack (cf. section 2) and its corresponding weight image as a weight map. The background map is computed on a grid with cell size of 128 pixels and smoothed with a box-filter of 5 pixels per side. Sources are detected via two-level thresholding which ensures the detection of faint and low surface brightness sources while minimizing false positives around bright sources. First, we detect all sources with at least 6 contiguous pixels and an integrated SNR threshold of 1.75. A second, higher threshold of 2.55 and 18 contiguous pixels is applied, whose main impact is to clear false positives around bright sources. Finally, a minimum area of 15 pixels is applied to clean spurious detections near bright sources and random noise peaks. Detected sources that are close enough such that they have connected pixels above the thresholds are grouped together and later fitted simultaneously. Measurements are then carried out on all 6 NIRCam bands simultaneously. Additionally, we perform measurements on the NIRISS F115W-band separately because of its smaller size and shallower depth (cf. Tab. 1), but using the same detection image. Errors in the photometry are estimated from an RMS map, obtained as the inverse of the square root of the weight image.
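The last step, converting the weight image into per-pixel uncertainties, is a one-liner in practice; a sketch assuming an inverse-variance weight map, with zero-weight (bad) pixels mapped to infinite RMS:

```python
import numpy as np

def rms_from_weight(weight):
    """RMS (1-sigma flux error) map from an inverse-variance weight map:
    rms = 1/sqrt(weight); pixels with non-positive weight are unusable."""
    weight = np.asarray(weight, dtype=float)
    rms = np.full(weight.shape, np.inf)
    good = weight > 0
    rms[good] = 1.0 / np.sqrt(weight[good])
    return rms
```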
The weight image contains the contribution of the sky, dark, flat and read noise to each pixel. As such, it considers the Poisson noise of the background, but not of the sources. Regardless, since the sources of interest in this paper are the faintest ones, which are typically dominated by read and background noise, this approach is sufficient. The photometry for each source is obtained using AUTO apertures, as well as from model-fitting. The models and the priors on their parameters can be user-defined, and in our case we use a Sérsic model (Sérsic, 1963) with the radius, Sérsic index, angle, and axis ratio as free parameters. We find that this model performs well for the faint and small sources that we are interested in. The models are convolved with the point-spread function (PSF) obtained using WebbPSF v1.0.0. The PSF models are generated on a pixel scale of $0.03$ ″/pix, on a stamp with a field-of-view of $\sim 10$″ (301 pixels). For the purpose of this paper, we use the model magnitudes from the simultaneous fit of the NIRCam bands and the AUTO magnitudes for the separate NIRISS measurement. The resulting magnitude number counts are shown in Fig. 2 for all bands used in this work. In order to check the photometric calibration of these early JWST observations, we conducted a comparison with well-calibrated HST images of the same cluster from RELICS. We measured fluxes in the HST/WFC3 F140W and in the JWST/NIRCam F150W images, using the same apertures and the HST mosaic as a detection image. The results are discussed in appendix A and show a good agreement with a small residual offset. Fig. 2 also shows consistent number counts and magnitude depths across the JWST filters. For comparison, we also show the number counts from the RELICS F140W-band.

### 3.2 Dropout selection

Figure 3: Color-color selection windows using the NIRCam broadband filters for high-redshift dropouts. Left: $9<z<11$; Middle: $11<z<15$; Right: $16<z<21$.
The solid lines, from blue to turquoise, represent the color tracks of star-forming galaxy templates at high redshift generated with BEAGLE (cf. text for details). The dashed lines, from yellow to red, are quiescent galaxies, illustrating low-redshift galaxy contaminants, generated with GRASIL (Silva et al., 1998). For each star-forming template we applied three values of dust attenuation following the SMC dust law, $A_{V}$=[0,0.25,0.5], shown with progressively brighter blue or red colors to illustrate the effect of dust attenuation on the color tracks. For the quiescent galaxies, we apply attenuation values in the range $A_{V}$=[1,2,3]. The purple circles represent the colors of a library of M-class stars and brown dwarfs (Chabrier et al., 2000; Allard et al., 2001), which also represent potential contaminants. The high-redshift dropouts selected in this work are represented as green diamonds.

Figure 4: Examples of NIRSpec spectra using the G395M grism for two dropout galaxies at $z\sim 7-9$, displaying very strong H$\beta$ and [O iii] emission lines.

We proceed to identify high-redshift galaxies using the Lyman-break, or dropout, selection technique (e.g. Steidel et al., 1996; Atek et al., 2015; Kawamata et al., 2015; Bouwens et al., 2015). In order to determine the best color-color selection criteria, we ran a set of simulations using different galaxy templates. We generated starburst galaxy templates using BEAGLE (Chevallard & Charlot, 2016, cf. section 5) at increasing redshifts from $z=6$ to $z=28$, including the intergalactic medium (IGM) attenuation from Inoue et al. (2014). In addition, we explore a range of dust attenuation from $A_{V}=0$ to $A_{V}=0.5$ using the SMC extinction law (Pei, 1992), which has been found to well match the dust attenuation in high-redshift galaxies (e.g. Capak et al., 2015; Reddy et al., 2015, 2018a).
We then computed synthetic photometry in the NIRCam broad-band filters and plotted the color tracks of these mock galaxies in Fig. 3. In order to mitigate contamination from low-redshift interlopers that mimic the IGM absorption by their intrinsically red colors, we also explore the position on this color-color diagram of low-redshift quiescent galaxies, generated with GRASIL (Silva et al., 1998) and available in the framework of the SWIRES template library (Polletta et al., 2007). These again include several values of dust attenuation between $A_{V}=1$ and $A_{V}=3$. Finally, we include the last significant source of contamination, cold red stars and brown dwarfs, which can also mimic the Lyman break. We used the stellar templates from Chabrier et al. (2000) and Allard et al. (2001). With this information in hand, we define the best color selection window that maximizes identification of the high-redshift sources while minimizing contamination for each redshift range. The resulting selection criteria are illustrated in Fig. 3. According to these simulations, we select galaxy candidates in the redshift ranges $z\sim 9-11$ and $z\sim 11-15$ respectively based on the following criteria:

$\begin{array}{l}M_{115}-M_{150}>0.5\\ M_{115}-M_{150}>1.0+1.4(M_{150}-M_{200})\\ M_{150}-M_{200}<0.5\end{array}$

and

$\begin{array}{l}M_{150}-M_{200}>0.5\\ M_{150}-M_{200}>1.6+1.8(M_{200}-M_{277})\\ M_{200}-M_{277}<0.2\end{array}$

In addition to these color criteria, galaxy candidates must also satisfy a detection significance above 5$\sigma$ in the deep IR stacked image, and 4$\sigma$ in the individual detection filters. We also reject any candidate that shows a significant detection (at the 2$\sigma$ level) in the stacked image that combines all filters blue-ward of the Lyman break. If a source is not detected in the dropout filter, we adopt the 1$\sigma$ lower limit measured in the same aperture to calculate the Lyman break.
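The two sets of color cuts above translate directly into boolean selections; a sketch (the detection-significance and non-detection tests, applied separately as described in the text, are omitted here):

```python
def select_z9_11(m115, m150, m200):
    """F115W-dropout color cuts for the z~9-11 window."""
    return (m115 - m150 > 0.5
            and m115 - m150 > 1.0 + 1.4 * (m150 - m200)
            and m150 - m200 < 0.5)

def select_z11_15(m150, m200, m277):
    """F150W-dropout color cuts for the z~11-15 window."""
    return (m150 - m200 > 0.5
            and m150 - m200 > 1.6 + 1.8 * (m200 - m277)
            and m200 - m277 < 0.2)

# A strong break with a flat red continuum passes; a flat SED does not:
# select_z9_11(28.0, 26.0, 26.0) -> True
# select_z9_11(26.0, 26.0, 26.0) -> False
```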
For $z\sim 16-21$ candidates, we adopt the following criteria:

$\begin{array}{l}M_{200}-M_{277}>0.5\\ M_{200}-M_{277}>1.2+2.0(M_{277}-M_{356})\\ M_{277}-M_{356}<0.4\end{array}$

Similarly to the $11<z<15$ selection, all candidates must satisfy the same detection and non-detection criteria detailed above. All candidates are visually inspected to check for spurious detections, artifacts, or bad PSF residuals. We also carefully check the candidates against other potential sources of contamination. For instance, very strong emission lines in intermediate-redshift ($z\sim 3-5$) galaxies can contaminate the broadband flux and create a photometric dropout (e.g. Atek et al., 2011; de Barros et al., 2014). In particular, strong [O iii]$\lambda\lambda$4959,5007 + H$\beta$ lines are expected in high-redshift galaxies (e.g. De Barros et al., 2019), which can lead to red colors. To address this issue, we perform SED fitting that takes nebular emission lines into account (cf. section 5). Note also that most of the intermediate-redshift contaminants should be easily detected in the bluer filters, given the depth of the F090W and F150W filters relative to the hypothetical excess observed in the red filters. Finally, we discard all sources that have point-source morphologies in order to screen out any stars that might scatter into our selection window. The colors of our selected galaxies are shown in Fig. 3 as green diamonds. Note that there are a few objects selected as high-redshift galaxies in the $z\sim 11-15$ bin which do not completely satisfy all of the color criteria. These objects were selected from their photometric redshift estimates with EAZY (cf. section 5) because they have unambiguous high-redshift solutions and do not present morphologies consistent with stars. In the present study, we have also analyzed the NIRSpec observations to investigate the spectroscopic properties of the high-redshift galaxies targeted by this program.
Two examples at $z_{\mathrm{spec}}=7.66$ and $z_{\mathrm{spec}}=8.5$ are shown in Fig. 4. These are the first rest-frame optical spectra observed for high-redshift ($z>6$) galaxies (cf. also Schaerer et al., 2022). They show strong H$\beta$ and [O iii] emission lines. The rest-frame equivalent width (EW) of these lines reaches up to $\sim 400$ Å, which is still a lower limit because the continuum is hardly detected. This means that the flux contribution of emission lines to the broadband flux can reach up to $\sim 30$ % in some galaxies. Another consequence of the strong lensing is that some background sources will have multiple images. Using the most recent available parametric lensing model (cf. section 4), we have not identified any possible multi-image systems. The magnification factors of the candidates range from $\mu\sim 1$ to $\sim 4$.

## 4 Lensing Model

Figure 5: The cumulative surface area behind SMACS0723 at $z=9$ as a function of gravitational magnification $\mu$ (cf. section 4), expressed in magnitudes. The solid line shows the area corresponding to the NIRCam field of view, whereas the dashed line is for the NIRISS pointing. The un-lensed total survey area is 7.23 arcmin$^2$, and 2.1 arcmin$^2$ for the NIRCam and NIRISS observations respectively. For reference, the observed area is $\sim 10.8$ arcmin$^2$ and $\sim 4.9$ arcmin$^2$ for NIRCam and NIRISS, respectively.

In order to compute the gravitational magnifications of our objects, we use the most recent strong lensing mass model by Mahler et al. (2022). The model was built using the lenstool (Kneib et al., 1996; Jullo et al., 2007; Jullo & Kneib, 2009) software to optimize a parametric SL model for the mass distribution of the cluster. Following the approach used by Richard et al. (2014) for the HFF clusters (Lotz et al., 2017) and Fox et al.
(2022) for the RELICS clusters, this model uses a combination of double Pseudo Isothermal Elliptical potentials (dPIE; Elíasdóttir et al., 2007) for the total mass distribution at cluster- and galaxy-scales. The parameters of the galaxy-scale dPIE potentials are matched to the identification of cluster members on the red sequence from the RELICS HST catalogs (cf. Coe et al. 2019 for details), fixing the center and elliptical parameters of the potential to its isophotes. The mass parameters of the dPIE profile (velocity dispersion $\sigma$, core radius $r_{\mathrm{core}}$, cut radius $r_{\mathrm{cut}}$) are scaled according to the luminosity $L^{\star}$ of a reference galaxy using a constant mass-to-light ratio. The SL constraints used in Mahler et al. (2022) are based on the released JWST/NIRCam images (cf. section 2), and make use of spectroscopic redshifts measured with public data from the Multi Unit Spectroscopic Explorer (MUSE; Bacon et al., 2010) on ESO’s Very Large Telescope (VLT) by Golubchik et al. (2022) and with JWST/NIRSpec data in Mahler et al. (2022). In summary, 48 secure images of 16 separate sources were identified, including 5 spectroscopic redshifts. The Mahler et al. (2022) SL model reproduces all systems with a residual lens-plane RMS of $\sim 0.7\arcsec$ between the predicted and observed locations of all images. The cumulative surface area as a function of magnification in the model is shown in Fig. 5. We make use of this model to compute the magnification factors and associated errors for each high-redshift galaxy candidate. All magnifications assume $z=15$, as there is very little variation of magnification with redshift at $z>7$. lenstool allows us to compute a statistical error on the magnification by sampling the posterior probability distribution of each parameter of the model.
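Once $\mu$ is known, correcting observed quantities for lensing is a simple rescaling; a sketch of the standard relations used here (these helper functions are ours, not lenstool output):

```python
import math

def demagnify_mag(mag_observed, mu):
    """Intrinsic (source-plane) AB magnitude: lensing magnifies the
    flux by mu, so the intrinsic source is fainter by 2.5*log10(mu)."""
    return mag_observed + 2.5 * math.log10(mu)

def demagnify_logmass(log_mstar_fit, mu):
    """Stellar mass scales linearly with flux, so a fit to lensed
    photometry overestimates log10(M*) by log10(mu)."""
    return log_mstar_fit - math.log10(mu)
```

For $\mu=4$, the maximum in this sample, the lensing boost is $2.5\log_{10}4\approx 1.5$ mag, consistent with the numbers quoted in the summary.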
## 5 Spectral Energy Distribution Fitting We continue our selection of high-redshift galaxy candidates by computing photometric redshifts by fitting galaxy spectral energy distributions (SEDs) to the multi-wavelength photometry. The photometric redshifts are obtained with EAZY (Brammer et al., 2008), a Python-based SED fitting code. EAZY fits a non-negative linear combination of a set of basis templates to the observed flux densities for each galaxy. The fit is performed on the flux densities obtained from the model-fitting in SE++ (cf. section 3.1), converted to units of $\mu$Jy, and corrected for Milky Way extinction internally in the code. Sources with valid flux density measurements available in at least three bands are considered for fitting. We use the template set "tweak_fsps_QSF_1_v3.param" that is derived from the Flexible Stellar Population Synthesis models (FSPS) (Conroy et al., 2009; Conroy & Gunn, 2010). The allowed redshift range is set to $0.01<z<22$, with a flat prior on the luminosity. Sources of interest are then selected by visually inspecting all sources with a best-fit $z_{\mathrm{phot}}>9$. In a second selection step, candidates are required to show a reasonably good SED fit. In addition, we simultaneously infer physical parameters and photometric redshifts of the selected high-redshift candidates by fitting galaxy SEDs to the JWST photometry with the BayEsian Analysis of GaLaxy sEds tool (BEAGLE; Chevallard & Charlot, 2016). This is done to refine our selection and obtain accurate uncertainty estimates on the physical parameters of the galaxies. Because the inclusion of nebular emission has been found to be crucial in fitting broad-band photometry of galaxies (Schaerer & de Barros, 2009, 2010; Atek et al., 2011; Atek et al., 2014b; McLure et al., 2011; Smit et al., 2014; Duncan et al., 2014; Reddy et al., 2018b), BEAGLE uses galaxy templates computed by Gutkin et al. 
(2016) which combine the stellar population templates by Bruzual & Charlot (2003) with the photoionization code CLOUDY (Ferland et al., 2013). It then accounts for IGM attenuation using the Inoue et al. (2014) models and applies a dust attenuation law (cf. section 5.1 for details) to the galaxy emission. The BEAGLE tool is fully Bayesian, i.e. it performs a Markov Chain Monte Carlo (MCMC) analysis of the SED, which makes it ideally suited to determine both photometric redshifts and several physical parameters at the same time while robustly quantifying and combining their uncertainties.

### 5.1 BEAGLE set-up

For our SED-fit with BEAGLE, we assume a constant star-formation history (SFH). While this is a simplification, it is a common assumption made in high-redshift galaxy analyses (e.g. Eyles et al., 2007; Stark et al., 2009; González et al., 2011; Grazian et al., 2015; Kikuchihara et al., 2020) and the results have been shown to not differ strongly from more flexible analytic SFH assumptions such as e.g. a delayed exponential SFH (Furtak et al., 2021). In order to avoid over-fitting due to the relatively low number of photometric bands available for this work, we limit the fit parameters to four:

* Photometric redshift $z_{\mathrm{phot}}$, first fixed to the EAZY result and then fit freely with a uniform prior in the limits of $z_{\mathrm{phot}}\in[0,25]$ to confirm the robustness of the photometric redshift estimate.
* Stellar mass $M_{\star}$ with a log-uniform prior in the limits of $\log(M_{\star}/\mathrm{M}_{\odot})\in[6,10]$.
* Maximum stellar age $t_{\mathrm{age}}$ with a log-uniform prior in the limits of $\log(t_{\mathrm{age}}/\mathrm{yr})\in[7,t_{\mathrm{universe}}]$, where $t_{\mathrm{universe}}$ is the age of the Universe at the redshift of the galaxy.
* Effective V-band dust attenuation optical depth $\hat{\tau}_{V}$ with a uniform prior in the limits of $\hat{\tau}_{V}\in[0,0.5]$.
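The fit-parameter set above can be summarized as illustrative prior draws (a schematic of the set-up only, not BEAGLE's configuration interface; the age of the Universe at $z\sim 10$, here taken as $\sim 4.7\times 10^{8}$ yr, is an assumed value):

```python
import math
import random

def draw_from_priors(t_universe_yr=4.7e8, rng=random):
    """One draw from the four priors: uniform in z_phot and tau_V,
    log-uniform in stellar mass and maximum stellar age."""
    z_phot = rng.uniform(0.0, 25.0)
    log_mstar = rng.uniform(6.0, 10.0)                     # log10(M*/Msun)
    log_age = rng.uniform(7.0, math.log10(t_universe_yr))  # log10(t_age/yr)
    tau_v = rng.uniform(0.0, 0.5)
    return z_phot, log_mstar, log_age, tau_v
```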
We furthermore fix the stellar metallicity to a constant value of $Z=0.1\,\mathrm{Z}_{\odot}$ since it has been shown that the broad-band photometry of high-redshift dropout galaxies is not sensitive to the metallicity in the very low metallicity regime expected for high-redshift galaxies (Furtak et al., 2021). Finally, we adopt an SMC dust extinction law (Pei, 1992) which has been shown to match the observations of high-redshift galaxies best (Capak et al., 2015; Reddy et al., 2015, 2018a), in particular in the low-metallicity range that we are probing (Shivaei et al., 2020). The SED fit is run on all six broad-band filters without taking the gravitational magnification into account yet, in order to avoid including the uncertainties of magnification in the SED fit. Instead, we correct the stellar mass for magnification after the fit following our approach in Furtak et al. (2021). Note though that the magnifications of our objects are not very high, $\mu\sim 1-4$. ### 5.2 Results Table 2: Complete list of our high-redshift candidates identified behind SMACS0723 with their derived parameters. 
ID | RA | Dec | $z_{\mathrm{phot}}$ | $M_{\mathrm{UV}}$ | $\beta$ | $\log(M_{\star}/\mathrm{M}_{\odot})$ | $\log(t_{\mathrm{age}}/\mathrm{yr})$ | $n$ | $r_{\rm e}$ [kpc]
---|---|---|---|---|---|---|---|---|---
**$z\sim 9-11$ candidates** | | | | | | | | |
SMACS_z10a | 7:23:26.252 | -73:26:56.940 | $9.78^{+0.02}_{-0.02}$ | $-18.77\pm 0.20$ | $-1.72\pm 0.04$ | $9.11^{+0.07}_{-0.07}$ | $8.68^{+0.2}_{-0.2}$ | $<1$ | $0.22$
SMACS_z10b | 7:23:22.709 | -73:26:06.183 | $8.88^{+0.02}_{-0.02}$ | $-20.78\pm 0.13$ | $-1.36\pm 0.19$ | $10.20^{+0.03}_{-0.03}$ | $8.74^{+0.2}_{-0.2}$ | $<1$ | $0.65$
SMACS_z10c | 7:23:20.169 | -73:26:04.233 | $9.77^{+0.02}_{-0.02}$ | $-20.19\pm 0.15$ | $-2.14\pm 0.12$ | $9.53^{+0.02}_{-0.02}$ | $8.68^{+0.2}_{-0.2}$ | $<1$ | $0.41$
SMACS_z10d | 7:22:46.696 | -73:28:40.898 | $9.31^{+0.06}_{-0.08}$ | $-19.76\pm 0.18$ | $-2.22\pm 0.17$ | $7.77^{+0.14}_{-0.11}$ | $7.29^{+0.18}_{-0.12}$ | $<1$ | $0.55$
SMACS_z10e | 7:22:45.304 | -73:29:30.557 | $10.89^{+0.16}_{-0.14}$ | $-18.91\pm 0.26$ | $-2.03\pm 0.19$ | $8.51^{+0.22}_{-0.16}$ | $7.49^{+0.26}_{-0.20}$ | $<1$ | $0.33$
SMACS_z11a | 7:22:39.505 | -73:29:40.224 | $11.05^{+0.09}_{-0.08}$ | $-18.55\pm 0.38$ | $-1.97\pm 0.38$ | $8.77^{+0.17}_{-0.24}$ | $8.20^{+0.21}_{-0.32}$ | $<1$ | $0.39$
**$z\sim 11-15$ candidates** | | | | | | | | |
SMACS_z12a | 7:22:47.380 | -73:30:01.785 | $12.20^{+0.21}_{-0.12}$ | $-19.75\pm 0.23$ | $-2.69\pm 0.16$ | $8.14^{+0.21}_{-0.17}$ | $7.61^{+0.26}_{-0.20}$ | $<1$ | $0.47$
SMACS_z12b | 7:22:52.261 | -73:27:55.497 | $12.26^{+0.17}_{-0.16}$ | $-20.01\pm 0.17$ | $-2.82\pm 0.21$ | $7.91^{+0.26}_{-0.17}$ | $7.56^{+0.30}_{-0.23}$ | $4.0\pm 1.0$ | $1.99$
**$z>15$ candidates** | | | | | | | | |
SMACS_z16a | 7:23:26.393 | -73:28:04.561 | $15.92^{+0.17}_{-0.15}$ | $-20.59\pm 0.15$ | $-2.63\pm 0.13$ | $8.79^{+0.32}_{-0.33}$ | $7.65^{+0.36}_{-0.39}$ | $1.1\pm 0.2$ | $0.40$
SMACS_z16b | 7:22:39.439 | -73:30:08.185 | $15.32^{+0.16}_{-0.13}$ | $-20.96\pm 0.14$ | $-2.40\pm 0.34$ | $8.80^{+0.44}_{-0.25}$ | $7.46^{+0.52}_{-0.35}$ | $2.8\pm 0.6$ | $1.17$

Figure 6: Cutout
images of SMACS_z16a in the JWST bands centered on the source. The left-most column shows the LW-stack detection image (cf. section 2) and the source partition map. The following columns show the seven NIRCam and NIRISS science images in the top rows, the best-fit model image in the middle rows and the residual in the bottom rows. The science and model images are scaled using a linear-stretch normalization, with minimum value equal to zero and maximum value 10 times the $2\sigma$-clipped standard deviation. The residuals are scaled linearly between $\pm 7$ times the $2\sigma$-clipped standard deviation.

Figure 7: Best-fit solution for the SED and photometric redshift of SMACS_z16a. Upper row: Best-fit SED using the BEAGLE code. Left: Best-fit SED (black solid curve) with the observed photometric data (blue points) and expected model photometric points (black points) and associated uncertainties (pink areas). Right: Triangle plot of the posterior probability distribution of the four fitted galaxy parameters: redshift, stellar mass, stellar age and attenuation. Bottom row: Best-fit SED using the EAZY code. Left panel: The best-fit SED over-plotted over the observed flux densities (in dark squares). Model flux densities are shown in blue circles. The Lyman break of the SED of this galaxy is estimated at $z=15.88$ and the redshift probability distribution function is shown in the right panel. Both codes agree on a high-redshift solution with a relatively narrow posterior distribution which does not show a secondary peak at lower redshift.

Figure 8: Same as Fig. 6, this time for the $z\sim 12$ candidate galaxy SMACS_z12a.

Figure 9: Same as Fig. 7, here showing the $z\sim 12$ galaxy candidate SMACS_z12a.

Figure 10: Example of a galaxy candidate, SMACS_z10c, at $z\simeq 9.76\pm 0.01$ showing an excess in the F444W-band. The best-fit SED (same legend as Fig. 7) shows the presence of a Balmer break around 4.5 µm.
The final sample of high-redshift candidates consists of a total of 6 galaxies in the redshift range $9<z<11$, two galaxies at $11<z<15$, and two galaxies at $z>15$. All sources are new identifications, since Salmon et al. (2020) only reported galaxies below $z\sim 8$ in this field. The complete results for our full sample are reported in Tab. 2. Our highest redshift galaxy, SMACS_z16a, is consistently identified by the dropout selection and the two SED fitting procedures, as shown in Fig. 7. The photometric redshifts derived by both codes (cf. section 5) are in excellent agreement: $z=15.92^{+0.17}_{-0.15}$ and $z=15.89\pm 0.43$ for BEAGLE and EAZY respectively. This places the galaxy just $\sim 250$ Myr after the Big Bang. The galaxy exhibits an extremely blue UV continuum slope of $\beta=-2.63\pm 0.13$ which is consistent with a young and low-attenuation galaxy (cf. Fig. 7). We note in general that no template in EAZY is able to match the very blue UV continuum, whereas BEAGLE allows for more flexibility in this range of galaxy templates. To obtain a more accurate estimate of the physical properties of our galaxies, we ran a second round of SED fitting with BEAGLE using a prior in redshift from the PDF of EAZY (cf. section 5.1). We derive a stellar mass of $\log(M_{\star}/\mathrm{M}_{\odot})=8.79^{+0.32}_{-0.33}$ and a stellar age of $\log(t_{\mathrm{age}}/\mathrm{yr})=7.65^{+0.36}_{-0.39}$ for SMACS_z16a. This galaxy is also very compact, as can be seen in Fig. 6, with an estimated size of $\sim 0.4$ kpc based on the model fitting of SE++ (cf. section 3.1) and after correcting for the lensing distortion. In the next lower redshift range, one of the two most robust candidates, SMACS_z12a, is presented in Fig. 8. Again, the source is well detected in all four filters and shows a clear continuum break. As before, there is a good agreement between the two photometric redshifts $z=12.20^{+0.21}_{-0.12}$ (BEAGLE) and $z=12.11\pm 0.19$ (EAZY). 
The PDF from EAZY is broader than the BEAGLE one, because of the challenge in matching the blue colors ($\beta\sim-2.60\pm 0.32$) of this object (Fig. 9). This source is also among the highest-redshift candidates identified photometrically in the literature (e.g. Harikane et al., 2022; Naidu et al., 2022; Castellano et al., 2022), around 360 Myr after the Big Bang. The derived parameters are very similar to those of SMACS_z16a, with a stellar mass of $\log(M_{\star}/\mathrm{M}_{\odot})=8.14^{+0.21}_{-0.17}$, and an age of $\log(t_{\mathrm{age}}/\mathrm{yr})=7.61^{+0.21}_{-0.20}$. In the lowest redshift range, $9<z<11$, some sources around $z\sim 10$ show indication of a Balmer break as a clear excess in the F444W-band. An example is shown in Fig. 10, where a flux excess is observed relative to a flat continuum in the bluer bands. This could indicate that an evolved stellar population was already in place at those early epochs, which is confirmed by the best-fit SED with a stellar age around $\sim 400$ Myr. An example for our $9<z<11$ objects is shown in appendix B. Of course, a stronger contribution from rest-frame optical emission lines is also possible, although this alternative does not yield the best-fit solution.

## 6 Discussion

Figure 11: Absolute UV magnitude $M_{\mathrm{UV}}$ as a function of redshift for the high-redshift candidates in this work, along with a compilation of known galaxies at $z>8$. The candidates in this work are shown as orange filled hexagons, while the empty hexagon marks the less secure candidate. The empty purple pentagons show the recently identified candidates from JWST in the GLASS parallel field by Naidu et al. (2022). The empty blue circles show photometric-redshift galaxies, while the solid circles represent spectroscopically confirmed galaxies compiled from Bouwens et al. (2022).

While observing through a lensing cluster offers invaluable flux amplification, it also results in a smaller survey area in the source plane.
The field-of-view in the NIRCam image plane is about $9.7$ arcmin$^2$, which translates into $7.23$ arcmin$^2$ in the source plane after de-lensing. Although the area reduction is still reasonable (because one of the NIRCam modules is less affected by magnification), wide-area surveys are more likely to uncover rare and bright high-redshift galaxies (e.g. Roberts-Borsani et al., 2016; Oesch et al., 2016; Bowler et al., 2020; Harikane et al., 2022), while lensing-assisted surveys are sensitive to the fainter population. While some of our sources are relatively bright, they all have luminosities around, or below, the characteristic luminosity $M^{\star}$ at $z\sim 9-10$ (Oesch et al., 2018; Bouwens et al., 2021). The highest redshift candidate in our sample, SMACS_z16a, has an intrinsic UV magnitude of $M_{\mathrm{UV}}=-20.59\pm 0.15$, which is fainter than the previous very high redshift candidates GN-z11 and HD1, and roughly similar to the recently discovered candidate GLz13 (Naidu et al., 2022). Moreover, the yield of the galaxy hunt presented in this paper is surprisingly high, with the discovery of 2 candidates at $z\sim 16$, 2 candidates at $z\sim 12$, and 6 candidates at $z=10-11$. In Fig. 11, we put these discoveries in the context of known high-redshift galaxies and compare our candidates to the published samples of galaxies at $z>8$. Not only do these results demonstrate the exceptional capabilities of JWST, but they also suggest a higher number density than predicted by an extrapolation of the $z\sim 9-10$ luminosity function (e.g. Bouwens et al., 2021) and theoretical predictions (e.g., Ocvirk et al., 2020; Dayal et al., 2022). Some early theoretical interpretations suggest that these observations could be compatible with a total absence of dust in these galaxies, as their very blue UV slopes might also indicate (Ziparo et al., 2022). The evolution of the UV luminosity function observed through lensing clusters will be the scope of a future paper.
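The quoted ages of the Universe (e.g. $z\sim 16$ corresponding to $\sim 250$ Myr after the Big Bang) follow from the standard flat $\Lambda$CDM age integral; a sketch with assumed fiducial parameters ($H_0=70$ km/s/Mpc, $\Omega_{\rm m}=0.3$), not the exact cosmology of the paper:

```python
from math import inf, sqrt
from scipy.integrate import quad

H0 = 70.0              # km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7
H0_GYR = H0 / 978.0    # Hubble constant in Gyr^-1 (1 km/s/Mpc ~ 1/978 Gyr^-1)

def age_at_z(z):
    """Age of the Universe at redshift z in Gyr, from
    t(z) = (1/H0) * integral_z^inf dz' / [(1+z') E(z')]."""
    integrand = lambda zp: 1.0 / ((1.0 + zp)
                                  * sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L))
    t, _ = quad(integrand, z, inf)
    return t / H0_GYR

# age_at_z(15.9) is ~0.25 Gyr: the ~250 Myr quoted for the z~16 candidates
```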
The inferred age of SMACS_z16a indicates a recent formation history, which points to a rapid stellar mass assembly in this galaxy. In the context of early galaxy formation, one would expect the first galaxies to resemble SMACS_z16a in terms of physical parameters more than the bright and uncommon galaxies discussed earlier. As we are closing in on the formation epoch of the first galaxies, it will become increasingly challenging to find such evolved and relatively massive objects. Perhaps upcoming and future deep surveys through gravitational lenses will offer the best route to unveil these smallest structures. This first JWST cycle already includes several ERS, General Observer (GO), and Guaranteed Time (GTO) programs that target lensing clusters using different observing modes. Another interesting finding is the small sizes of these galaxies. For instance, SMACS_z16a has an effective radius of $r_{\rm e}=0.4$ kpc, which is consistent with recent size measurements of high-redshift galaxies (Kawamata et al., 2018; Adams et al., 2022). More importantly, the Sérsic index, derived from the SE++ modeling and source-plane reconstruction, hints at a disk-like morphology. We measured $n=1.1\pm 0.2$ for SMACS_z16a, which corresponds to a disk-like profile, although there is a real possibility that we are detecting only the brightest star-forming region in the galaxy. Such a morphology was also observed for GLz13 (Naidu et al., 2022), as well as in several other high-redshift candidates reported in Adams et al. (2022). The existence of well-settled disks at early epochs would provide strong constraints on theoretical models of galaxy formation. While the nature of these ERO observations implies potential sources of uncertainties, we performed photometric quality verifications, and also combined multiple selection methods to build our sample of high-redshift sources.
The dropout selection and the photometric redshifts from two independent codes significantly increase the robustness of the candidates. In the near future, deeper imaging in the short wavelength range will bring additional confirmation of these candidates, in addition to follow-up spectroscopy, e.g. with NIRSpec, which will provide definitive redshifts.

## 7 Summary

In this paper, we presented our search for very high-redshift galaxies in the first JWST observations of the lensing cluster SMACS 0723. We report the discovery of the highest redshift candidate known to date at $z\sim 16$. We also discover two candidates at $z\sim 12$, and six candidates at $z\sim 10-11$. We have used 6 NIRCam broadband filters, and one NIRISS filter, between $\sim 0.8$ µm and 5 µm to identify Lyman-break galaxies in the redshift ranges $9<z<11$, $11<z<15$, and $16<z<21$. We combined the dropout selection with SED-fitting to obtain accurate photometric redshifts using two different codes, BEAGLE and EAZY. We also simultaneously derive the physical properties of these galaxies. Our final sample is based on all candidates that satisfy the three selection methods and whose photometric redshifts from the two SED fits are in agreement. To determine the intrinsic luminosity of galaxies and account for potential multiple images, we use one of the latest lensing models of SMACS 0723, produced from the same JWST data set in a companion paper (Mahler et al., 2022). The measured amplification factors range from 1 to $\sim 4$, which gives us a lensing boost of up to 1.5 magnitudes in the best case. With an intrinsic UV luminosity of $M_{\mathrm{UV}}=-20.59\pm 0.15$, we find that SMACS_z16a is fainter than typical $z>10$ galaxies in the literature and close to the characteristic luminosity $M^{\star}$ at $z\sim 10$.
The best-fit SED of SMACS_z16a is compatible with a young age of $\log(t_{\mathrm{age}}/\mathrm{yr})=7.65^{+0.36}_{-0.39}$ and a stellar mass of $\log(M_{\star}/\mathrm{M}_{\odot})=8.79^{+0.32}_{-0.33}$. The bulk of its stellar mass build-up took place in the last 20 Myr. This galaxy, like some other candidates, shows a very blue UV continuum slope of $\beta=-2.63\pm 0.13$, consistent with young ages. In general, the galaxy candidates, from $z\sim 10$ to $z\sim 16$, have ages in the range $\log(t_{\mathrm{age}}/\mathrm{yr})\sim 7.3-8.7$, and stellar masses of $\log(M_{\star}/\mathrm{M}_{\odot})\sim 7.8-10.2$. Our source extraction and photometry procedure with SE++ also provides morphological measurements, since it relies on model-fitting. The measured sizes were reconstructed back to the source plane using lenstool to determine the intrinsic effective radius and the Sérsic index of the sources. Our measurements indicate very compact sources, with $r_{\rm e}$ below 1 kpc for most of the candidates. Some of these sources also have Sérsic indices compatible with a disk-like morphology. If true, it is surprising that a galaxy like SMACS_z16a could evolve so quickly, in a span of less than $\sim 250$ Myr. The discovery of one, possibly two, galaxy candidates only $\sim 250$ Myr after the Big Bang, together with eight new candidates at $9<z<13$, heralds the great discoveries awaiting in the deep extra-galactic fields that will be observed by JWST. The number density, the physical properties and the morphologies of these galaxies will be extremely valuable in constraining hydrodynamical simulations of galaxy formation in a cosmological context.

## Acknowledgements

We would like to thank Gabriel Brammer for the grizli reduction notebooks. HA acknowledges support from CNES. LF and AZ acknowledge support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF).
AZ acknowledges support by the Ministry of Science & Technology, Israel. This work is based on observations obtained with the NASA/ESA/CSA JWST and the NASA/ESA Hubble Space Telescope (HST), retrieved from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. This work has made use of the CANDIDE Cluster at the Institut d’Astrophysique de Paris (IAP), made possible by grants from the PNCG and the region of Île de France through the program DIM-ACAV+. The JWST Early Release Observations (ERO) and associated materials were developed, executed, and compiled by the ERO production team: Hannah Braun, Claire Blome, Matthew Brown, Margaret Carruthers, Dan Coe, Joseph DePasquale, Nestor Espinoza, Macarena Garcia Marin, Karl Gordon, Alaina Henry, Leah Hustak, Andi James, Ann Jenkins, Anton Koekemoer, Stephanie LaMassa, David Law, Alexandra Lockwood, Amaya Moro-Martin, Susan Mullally, Alyssa Pagan, Dani Player, Klaus Pontoppidan, Charles Proffitt, Christine Pulliam, Leah Ramsay, Swara Ravindranath, Neill Reid, Massimo Robberto, Elena Sabbi, Leonardo Ubeda. The EROs were also made possible by the foundational efforts and support from the JWST instruments, STScI planning and scheduling, and Data Management teams.

## Data Availability

The data underlying this article are publicly available on the Mikulski Archive for Space Telescopes (MAST, https://archive.stsci.edu/), under program ID 2736.

## References

* Adams et al. (2022) Adams N. J., et al., 2022, arXiv e-prints, p. arXiv:2207.11217 * Allard et al. (2001) Allard F., Hauschildt P. H., Alexander D. R., Tamanai A., Schweitzer A., 2001, ApJ, 556, 357 * Atek et al. (2011) Atek H., et al., 2011, ApJ, 743, 121 * Atek et al. (2014a) Atek H., et al., 2014a, ApJ, 786, 60 * Atek et al. (2014b) Atek H., et al., 2014b, ApJ, 789, 96 * Atek et al.
(2015) Atek H., et al., 2015, ApJ, 800, 18 * Atek et al. (2018) Atek H., Richard J., Kneib J.-P., Schaerer D., 2018, MNRAS, 479, 5184 * Bacon et al. (2010) Bacon R., et al., 2010, in McLean I. S., Ramsay S. K., Takami H., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III. p. 773508, doi:10.1117/12.856027 * Bertin et al. (2020) Bertin E., Schefer M., Apostolakos N., Álvarez-Ayllón A., Dubath P., Kümmel M., 2020, in Pizzo R., Deul E. R., Mol J. D., de Plaa J., Verkouter H., eds, Astronomical Society of the Pacific Conference Series Vol. 527, Astronomical Data Analysis Software and Systems XXIX. p. 461 * Bhatawdekar et al. (2019) Bhatawdekar R., Conselice C. J., Margalef-Bentabol B., Duncan K., 2019, MNRAS, 486, 3805 * Bouwens et al. (2015) Bouwens R. J., et al., 2015, ApJ, 803, 34 * Bouwens et al. (2017) Bouwens R. J., Oesch P. A., Illingworth G. D., Ellis R. S., Stefanon M., 2017, ApJ, 843, 129 * Bouwens et al. (2021) Bouwens R. J., et al., 2021, AJ, 162, 47 * Bouwens et al. (2022) Bouwens R. J., et al., 2022, ApJ, 931, 160 * Bowler et al. (2020) Bowler R. A. A., Jarvis M. J., Dunlop J. S., McLure R. J., McLeod D. J., Adams N. J., Milvang-Jensen B., McCracken H. J., 2020, MNRAS, 493, 2059 * Boyer et al. (2022) Boyer M. L., et al., 2022, arXiv e-prints, p. arXiv:2209.03348 * Brammer et al. (2008) Brammer G. B., van Dokkum P. G., Coppi P., 2008, ApJ, 686, 1503 * Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, Monthly Notices of the Royal Astronomical Society, 344, 1000 * Bruzual A. & Charlot (1993) Bruzual A. G., Charlot S., 1993, ApJ, 405, 538 * Capak et al. (2015) Capak P. L., et al., 2015, Nature, 522, 455 * Castellano et al. (2022) Castellano M., et al., 2022, arXiv e-prints, p. arXiv:2207.09436 * Chabrier et al. 
(2000) Chabrier G., Baraffe I., Allard F., Hauschildt P., 2000, ApJ, 542, 464 * Chevallard & Charlot (2016) Chevallard J., Charlot S., 2016, MNRAS, 462, 1415 * Coe et al. (2019) Coe D., et al., 2019, ApJ, 884, 85 * Conroy & Gunn (2010) Conroy C., Gunn J. E., 2010, ApJ, 712, 833 * Conroy et al. (2009) Conroy C., Gunn J. E., White M., 2009, ApJ, 699, 486 * Dayal et al. (2022) Dayal P., et al., 2022, MNRAS, 512, 989 * De Barros et al. (2019) De Barros S., Oesch P. A., Labbé I., Stefanon M., González V., Smit R., Bouwens R. J., Illingworth G. D., 2019, MNRAS, 489, 2355 * Doyon et al. (2012) Doyon R., et al., 2012, in Clampin M. C., Fazio G. G., MacEwen H. A., Oschmann Jacobus M. J., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 8442, Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave. p. 84422R, doi:10.1117/12.926578 * Duncan et al. (2014) Duncan K., et al., 2014, MNRAS, 444, 2960 * Ebeling et al. (2001) Ebeling H., Edge A. C., Henry J. P., 2001, ApJ, 553, 668 * Elíasdóttir et al. (2007) Elíasdóttir Á., et al., 2007, preprint, (arXiv:0710.5636) * Euclid Collaboration et al. (2022a) Euclid Collaboration et al., 2022a, arXiv e-prints, p. arXiv:2209.12906 * Euclid Collaboration et al. (2022b) Euclid Collaboration et al., 2022b, arXiv e-prints, p. arXiv:2209.12907 * Eyles et al. (2007) Eyles L. P., Bunker A. J., Ellis R. S., Lacy M., Stanway E. R., Stark D. P., Chiu K., 2007, MNRAS, 374, 910 * Ferland et al. (2013) Ferland G. J., et al., 2013, Rev. Mex. Astron. Astrofis., 49, 137 * Finkelstein et al. (2015) Finkelstein S. L., et al., 2015, ApJ, 810, 71 * Fox et al. (2022) Fox C., Mahler G., Sharon K., Remolina González J. D., 2022, ApJ, 928, 87 * Furtak et al. (2021) Furtak L. J., Atek H., Lehnert M. D., Chevallard J., Charlot S., 2021, MNRAS, 501, 1568 * Golubchik et al. (2022) Golubchik M., Furtak L. J., Meena A. K., Zitrin A., 2022, arXiv e-prints, p. arXiv:2207.05007 * González et al. 
(2011) González V., Labbé I., Bouwens R. J., Illingworth G., Franx M., Kriek M., 2011, ApJ, 735, L34 * Grazian et al. (2015) Grazian A., et al., 2015, A&A, 575, A96 * Gutkin et al. (2016) Gutkin J., Charlot S., Bruzual G., 2016, MNRAS, 462, 1757 * Harikane et al. (2022) Harikane Y., et al., 2022, ApJ, 929, 1 * Inoue et al. (2014) Inoue A. K., Shimizu I., Iwata I., Tanaka M., 2014, MNRAS, 442, 1805 * Ishigaki et al. (2018) Ishigaki M., Kawamata R., Ouchi M., Oguri M., Shimasaku K., Ono Y., 2018, ApJ, 854, 73 * Jakobsen et al. (2022) Jakobsen P., et al., 2022, A&A, 661, A80 * Jiang et al. (2021) Jiang L., et al., 2021, Nature Astronomy, 5, 256 * Jullo & Kneib (2009) Jullo E., Kneib J.-P., 2009, MNRAS, 395, 1319 * Jullo et al. (2007) Jullo E., Kneib J.-P., Limousin M., Elíasdóttir Á., Marshall P. J., Verdugo T., 2007, New Journal of Physics, 9, 447 * Kauffmann et al. (2022) Kauffmann O. B., et al., 2022, arXiv e-prints, p. arXiv:2207.11740 * Kawamata et al. (2015) Kawamata R., Ishigaki M., Shimasaku K., Oguri M., Ouchi M., 2015, ApJ, 804, 103 * Kawamata et al. (2018) Kawamata R., Ishigaki M., Shimasaku K., Oguri M., Ouchi M., Tanigawa S., 2018, ApJ, 855, 4 * Kikuchihara et al. (2020) Kikuchihara S., et al., 2020, ApJ, 893, 60 * Kinney et al. (1996) Kinney A. L., Calzetti D., Bohlin R. C., McQuade K., Storchi-Bergmann T., Schmitt H. R., 1996, ApJ, 467, 38 * Kneib et al. (1996) Kneib J. P., Ellis R. S., Smail I., Couch W. J., Sharples R. M., 1996, ApJ, 471, 643 * Kümmel et al. (2020) Kümmel M., Bertin E., Schefer M., Apostolakos N., Álvarez-Ayllón A., Dubath P., 2020, in Pizzo R., Deul E. R., Mol J. D., de Plaa J., Verkouter H., eds, Astronomical Society of the Pacific Conference Series Vol. 527, Astronomical Data Analysis Software and Systems XXIX. p. 29 * Livermore et al. (2017) Livermore R. C., Finkelstein S. L., Lotz J. M., 2017, ApJ, 835, 113 * Lotz et al. (2017) Lotz J. M., et al., 2017, ApJ, 837, 97 * Mahler et al. 
(2022) Mahler G., et al., 2022, arXiv e-prints, p. arXiv:2207.07101 * McLure et al. (2011) McLure R. J., et al., 2011, MNRAS, 418, 2074 * Montes (2022) Montes M., 2022, Nature Astronomy, 6, 308 * Naidu et al. (2022) Naidu R. P., et al., 2022, arXiv e-prints, p. arXiv:2207.09434 * Ocvirk et al. (2020) Ocvirk P., et al., 2020, MNRAS, 496, 4087 * Oesch et al. (2016) Oesch P. A., et al., 2016, ApJ, 819, 129 * Oesch et al. (2018) Oesch P. A., Bouwens R. J., Illingworth G. D., Labbé I., Stefanon M., 2018, ApJ, 855, 105 * Oke & Gunn (1983) Oke J. B., Gunn J. E., 1983, ApJ, 266, 713 * Pei (1992) Pei Y. C., 1992, ApJ, 395, 130 * Polletta et al. (2007) Polletta M., et al., 2007, ApJ, 663, 81 * Reddy et al. (2015) Reddy N. A., et al., 2015, ApJ, 806, 259 * Reddy et al. (2018a) Reddy N. A., et al., 2018a, ApJ, 853, 56 * Reddy et al. (2018b) Reddy N. A., et al., 2018b, ApJ, 869, 92 * Repp & Ebeling (2018) Repp A., Ebeling H., 2018, MNRAS, 479, 844 * Richard et al. (2014) Richard J., et al., 2014, preprint, (arXiv:1405.3303) * Rigby et al. (2022) Rigby J., et al., 2022, arXiv e-prints, p. arXiv:2207.05632 * Roberts-Borsani et al. (2016) Roberts-Borsani G. W., et al., 2016, ApJ, 823, 143 * Salmon et al. (2020) Salmon B., et al., 2020, ApJ, 889, 189 * Schaerer & de Barros (2009) Schaerer D., de Barros S., 2009, A&A, 502, 423 * Schaerer & de Barros (2010) Schaerer D., de Barros S., 2010, A&A, 515, A73 * Schaerer et al. (2022) Schaerer D., Marques-Chaves R., Oesch P., Naidu R., Barrufet L., Izotov Y. I., Guseva N. G., Brammer G., 2022, arXiv e-prints, p. arXiv:2207.10034 * Sérsic (1963) Sérsic J. L., 1963, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 6, 41 * Shivaei et al. (2020) Shivaei I., et al., 2020, ApJ, 899, 117 * Silva et al. (1998) Silva L., Granato G. L., Bressan A., Danese L., 1998, ApJ, 509, 103 * Smit et al. (2014) Smit R., et al., 2014, ApJ, 784, 58 * Song et al. (2016) Song M., et al., 2016, ApJ, 825, 5 * Stark et al. (2009) Stark D. 
P., Ellis R. S., Bunker A., Bundy K., Targett T., Benson A., Lacy M., 2009, ApJ, 697, 1493 * Stefanon et al. (2021) Stefanon M., Bouwens R. J., Labbé I., Illingworth G. D., Gonzalez V., Oesch P. A., 2021, ApJ, 922, 29 * Steidel et al. (1996) Steidel C. C., Giavalisco M., Pettini M., Dickinson M., Adelberger K. L., 1996, The Astrophysical Journal, 462, L17 * Ziparo et al. (2022) Ziparo F., Ferrara A., Sommovigo L., Kohandel M., 2022, arXiv e-prints, p. arXiv:2209.06840 * de Barros et al. (2014) de Barros S., Schaerer D., Stark D. P., 2014, A&A, 563, A81

## Appendix A Photometry validation

Figure 12: Magnitude number counts in the HST WFC3/IR F140W and JWST NIRCam F150W bands. The magnitudes are obtained for objects detected in the shallower F140W band and measured simultaneously in F140W and F150W. The number counts show a good agreement and no photometric offset is present. Figure 13: Magnitude comparison between HST-WFC3/IR F140W and JWST-NIRCam F150W bands. The blue line marks the running median offset and the blue envelope the $\sigma$-clipped standard deviation. The magnitudes of the two bands show a relatively tight dispersion and an offset of about 0.24 mag, which can be explained by the difference between the filter curves and pivot wavelengths. We perform photometry validation by comparing our photometry with the well-known and calibrated HST WFC3/IR images from the RELICS survey. We run SE++ on a stacked detection image and perform photometric measurements simultaneously on the WFC3/IR F140W and NIRCam F150W bands. Fig. 12 shows the AUTO magnitude number counts in NIRCam F150W and WFC3/IR F140W as measured with SE++. The number counts are in excellent agreement, with minimal or no offsets, which is not surprising given the common detection image. This confirms the accurate zero-point calibration of our JWST data reduction, explained in section 2.
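The $\sigma$-clipped offset statistic used in this comparison can be sketched with NumPy. This is an illustrative reimplementation, not the SE++ or Fig. 13 pipeline; the function name, clipping threshold, and toy data are our own:

```python
import numpy as np

def sigma_clipped_offset(mag_a, mag_b, sigma=3.0, iters=5):
    """Median offset (mag_a - mag_b) and scatter after iterative
    sigma clipping, as in a two-band photometric comparison."""
    diff = np.asarray(mag_a) - np.asarray(mag_b)
    mask = np.isfinite(diff)
    for _ in range(iters):
        med = np.median(diff[mask])
        std = np.std(diff[mask])
        new_mask = mask & (np.abs(diff - med) < sigma * std)
        if new_mask.sum() == mask.sum():  # converged
            break
        mask = new_mask
    return np.median(diff[mask]), np.std(diff[mask])

# Toy example: two bands offset by 0.3 mag, with noise and a few outliers.
rng = np.random.default_rng(0)
f150w = rng.uniform(24, 28, 500)
f140w = f150w + 0.3 + rng.normal(0, 0.05, 500)
f140w[:5] += 3.0  # spurious cross-matches, clipped away
offset, scatter = sigma_clipped_offset(f140w, f150w)
```

The clipping removes the spurious matches so the recovered median offset stays close to the injected 0.3 mag.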
The comparison of magnitudes is shown in Fig. 13, where on the $y$-axis we show the magnitude difference between the F140W and F150W bands. There is a relatively tight scatter and a median offset of about $0.36$ mag towards fainter F140W magnitudes. Most of this offset can be attributed to the difference between the F140W and F150W filter curves and pivot wavelengths. Using Pysynphot, we performed synthetic photometry in the two filters, assuming a variety of galaxy templates from the BC95 (Bruzual A. & Charlot, 1993) and KC96 (Kinney et al., 1996) libraries, and a redshift range of $z=0-5$. We found a median offset of $\sim 0.3$ magnitudes.

## Appendix B Example of a $z\sim 10$ candidate

Figure 14: Same as Fig. 6, this time for the $z\sim 10$ candidate galaxy SMACS_z11a. Figure 15: Same as Fig. 7, here showing the galaxy candidate SMACS_z11a. In addition to the $z\sim 16$ and $z\sim 12$ examples shown in the main text (cf. section 5.2), we display here an example of a $z\sim 10$ galaxy, the candidate SMACS_z11a, in Figs. 14 and 15.

## Appendix C Full candidate list

Figure 16: Photometric data and best-fit SED results for candidate SMACS_z10a. Figure 17: Photometric data and best-fit SED results for candidate SMACS_z10b. Figure 18: Photometric data and best-fit SED results for candidate SMACS_z10c. Figure 19: Photometric data and best-fit SED results for candidate SMACS_z10d. Figure 20: Photometric data and best-fit SED results for candidate SMACS_z10e. Figure 21: Photometric data and best-fit SED results for candidate SMACS_z12b. Figure 22: Photometric data and best-fit SED results for candidate SMACS_z16b.
# EasyTrack: Efficient and Compact One-stream 3D Point Clouds Tracker

Baojie Fan, Wuyang Zhou, Kai Wang, Shijun Zhou, Fengyu Xu, Jiandong Tian, Senior Member, IEEE Baojie Fan, Wuyang Zhou, Kai Wang and Fengyu Xu are with the College of Automation $\&$ College of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing 210023, China, e-mail<EMAIL_ADDRESS>Jiandong Tian and Shijun Zhou are with the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China and Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China. Corresponding author: Jiandong Tian<EMAIL_ADDRESS>

###### Abstract

Most 3D single object trackers (SOT) in point clouds follow the two-stream multi-stage 3D Siamese or motion tracking paradigms, which process the template and search area point clouds with two parallel branches built on supervised point cloud backbones. In this work, going beyond typical 3D Siamese or motion tracking, we propose a neat and compact one-stream transformer 3D SOT paradigm from a novel perspective, termed EasyTrack, which consists of three special designs: 1) A 3D point clouds tracking feature pre-training module is developed, utilizing a transformer with masks to learn patterns of point-wise spatial relationships within three-dimensional data. 2) A unified 3D tracking feature learning and fusion network is proposed to simultaneously learn target-aware 3D features and extensively capture mutual correlation through the flexible self-attention mechanism. 3) An efficient target location network in the dense bird’s eye view (BEV) feature space is constructed for target classification and regression. Moreover, we develop an enhanced version named EasyTrack++, which designs a center points interaction (CPI) strategy to reduce target ambiguity caused by noisy point cloud background information.
The proposed EasyTrack and EasyTrack++ set a new state-of-the-art performance (18%, 40% and 3% success gains) on KITTI, nuScenes, and Waymo, while running at 52.6 fps with few parameters (1.3M). The code will be available at https://github.com/KnightApple427/Easytrack.

###### Index Terms:

3D single object tracking, Lidar, point clouds pre-training, one-stream compact tracking framework, transformer

## 1 Introduction

Single object tracking (SOT) is a classic task in computer vision. In the past few years, camera-based 2D SOT has achieved rapid development[1, 2, 3, 4, 5, 6, 7, 8, 9]. However, in some practical applications such as autonomous driving, mobile robotics, and unmanned aerial vehicles[10, 11, 12, 13], 3D SOT has attracted more and more attention, since it provides the pose and position of the tracked target in 3D space. Early 3D SOT methods mainly focus on RGB-D tracking[14, 15]. With the development of LiDAR sensors, many trackers perform 3D SOT in the point clouds scanned by LiDAR sensors, since point clouds preserve accurate geometry information[16, 17] of 3D objects and are not sensitive to illumination changes. Given the target’s state parameters $(x,y,z,w,l,h,\theta)$ in the first frame of a point clouds tracklet, the purpose of 3D SOT in point clouds is to predict the states of the target in the coming frames. Due to the sparsity and irregularity of point clouds, 3D SOT in point clouds is still a challenging task. Efficient 3D feature learning and accurate target location are crucial to developing an effective and robust point clouds tracker. Figure 1: Comparison of different 3D point clouds tracking frameworks. (a) The framework of typical Siamese trackers. (b) The framework of the proposed EasyTrack. Most existing 3D SOT trackers in point clouds follow the typical 3D Siamese multi-stage tracking paradigm, as shown in Fig. 1(a).
In this paradigm, two parallel branches of a 3D backbone with shared parameters extract the point cloud features of the template and search area, respectively. Then a heavy 3D feature fusion module is necessary to fuse the 3D point cloud features from the template and search area, and to transfer the target’s information into the search area. Finally, various target localization networks are applied to regress the target’s location. Because point clouds are usually sparse, textureless, unordered and incomplete[12], many trackers introduce auxiliary modules such as shape completion networks [18], segmentation networks [19], or graph networks [20] to regularize the 3D feature learning process or strengthen the 3D feature representation. Following this Siamese paradigm, P2B[21] is the first end-to-end trained 3D Siamese tracker. It adopts PointNet++[22] as the 3D feature backbone to process point clouds. Then, a target-specific 3D feature embedding network based on cosine similarity is designed to fuse template and search area features. The deep Hough voting strategy is applied for classification and regression. Different from P2B, STNet[23] and SMAT[24] design transformer-based backbones to capture the long-range context in point clouds. V2B[18], PTTR and PTTR++ [25] are proposed to match template and search area features in a global manner by MLP layers or a transformer network. C2FT[26] and PTTR[25] aim to improve the voting-based localization networks in a coarse-to-fine manner. V2B[18] and GPT[20] introduce auxiliary shape completion or graph networks to improve the tracking performance. Although these Siamese trackers present favorable tracking performance, there are still some problems due to the inherent properties of the multi-stage Siamese paradigm. Firstly, the separated and parallel feature learning branches of the template and search area points lead to limited discriminative ability, especially for some non-rigid categories like Pedestrian.
There is no mutual interaction or communication between the template and search area in the 3D feature learning process. Secondly, point cloud feature fusion operations are indispensable in these 3D Siamese trackers, since the search point features are unaware of the tracked target. Effective feature matching is difficult in extremely incomplete point clouds, and the heavy feature fusion operation usually brings high computational costs. Thirdly, we believe that in point cloud tracking tasks, the data may be subject to interference from environmental factors (such as lighting changes, noise, and occlusions), making it challenging for the network to effectively learn point-pair relationship patterns of the targets. To address the above problems, we propose an extremely neat one-stream framework for 3D SOT in point clouds from a novel perspective, abbreviated as EasyTrack, as shown in Fig. 1(b). Different from the 3D Siamese tracking framework, EasyTrack develops a target-aware unified one-stream network to extract target-specified search area point features, based on the proposed masked point cloud self-supervised tracking feature learning module. The target-aware search features are then directly fed into the BEV-based localization network for classification and regression without any auxiliary networks. The detailed network structure is shown in Fig. 3. We crop and sample a fixed number of the original template and search area points as input. Then a target-aware point cloud feature learning network is proposed to extract point features and establish interaction between template and search area points at the same time. In it, a local embedding module based on ball query is designed to aggregate local features for generating more global feature representations.
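As a rough illustration of the ball-query local embedding described above (not the paper's implementation; the radius, group size, and random "MLP" weights below are placeholders), each point gathers up to k neighbors within a fixed radius, lifts their relative offsets through a shared MLP, and max-pools per group:

```python
import numpy as np

def ball_query(points, centers, radius, k):
    """For each center, indices of up to k points within `radius`
    (the nearest index is repeated when fewer neighbors exist)."""
    d = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    invalid = np.take_along_axis(d, idx, axis=1) > radius
    idx[invalid] = idx[:, :1].repeat(k, axis=1)[invalid]
    return idx

def local_embedding(points, radius=0.3, k=16, dim=32, seed=0):
    """Toy local embedding: group by ball query, lift each neighbor's
    relative offset with a random linear 'MLP', then max-pool per group."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 0.1, (3, dim))        # stand-in for learned MLP weights
    idx = ball_query(points, points, radius, k)
    rel = points[idx] - points[:, None, :]  # (N, k, 3) relative coordinates
    feat = np.maximum(rel @ w, 0.0)         # ReLU(linear) per neighbor
    return feat.max(axis=1)                 # (N, dim) pooled local features

pts = np.random.default_rng(1).normal(0, 1, (100, 3))
feats = local_embedding(pts)
```

Max-pooling over the group makes each point's feature summarize its local neighborhood, which the later transformer stages can then relate globally.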
Then we concatenate the initial template and search point features and apply self-perception transformer blocks to acquire point-wise spatial relationships and facilitate regional interactions, thereby providing target-aware search point features. In this way, the heavy feature fusion network is no longer needed. We directly utilize a target location network to obtain the target’s location in the search area. Specifically, since the 3D point cloud is too sparse for accurate regression, we encode the point cloud into a dense BEV feature space via a 2D convolution with residuals and output the prediction parameters using decoupled positioning heads. Although EasyTrack already demonstrates superior tracking performance with such a simple architecture, considering the influence of disturbances in the dataset, we propose an enhanced version, abbreviated as EasyTrack++. As mentioned above, we feed the whole template points together with the search points into the target-aware feature learning network for global interactions. However, the template typically contains the target in the central region along with many background points that may cause interaction noise and ambiguous feature descriptions. To solve this problem, we develop a center points interaction (CPI) strategy. As shown in Fig. 6, we utilize the points in the center area of the template to conduct a secondary interaction with the search points, embedding clearer target information into the search area. It further improves the tracking performance with acceptable computational costs. Our main contributions can be summarized as follows: * • We propose a novel and neat one-stream paradigm EasyTrack for 3D SOT in point clouds without any auxiliary networks or tricks. It runs at a speed of 52.6 fps and only has 1.30M parameters. * • We develop a new masked point clouds pre-training technique for 3D SOT, and demonstrate its excellent performance on the one-stream 3D SOT framework with detailed ablations.
* • A unified 3D tracking feature learning and interaction module is specially designed for 3D SOT in point clouds. It generates target-aware point features through a single-branch backbone. * • We further propose EasyTrack++ on top of EasyTrack, in which a center points interaction strategy is applied to reduce the noise caused by background points in the global interaction stage. The rest of this paper is organized as follows. In Section 2, we introduce the related work. Section 3 describes the methods of the proposed EasyTrack and EasyTrack++. In Section 4, we validate the superior performance of our trackers on the KITTI, nuScenes and Waymo Open datasets through extensive experiments. Finally, we conclude in Section 5.

## 2 Related Work

### 2.1 3D Single Object Tracking in Point Clouds

The pioneering work SC3D[27] first introduces the Siamese paradigm into 3D SOT in point clouds. It uses a Kalman filter or exhaustive search to generate candidate shapes in the search area. Then the candidate shape with the largest cosine similarity is selected as the tracking result. P2B[21] adopts PointNet++[22] as the backbone and embeds the target clues into the search points based on cosine similarity. Proposals are obtained through a region proposal network (RPN) based on deep Hough voting[28]. BAT[29] proposes an additional feature representation called BoxCloud and designs a reliable feature fusion module based on BoxCloud. MLVSNet[30] makes full use of the multi-layer features of PointNet++ to conduct multi-layer Hough voting to obtain more proposals. PointSiamRCNN[31] proposes a voxel-based tracker with an attention mechanism. It utilizes multi-scale RoI pooling to form a two-stage pipeline. C2FT[26] aims to improve the voting-based regression stage in a coarse-to-fine manner. A local feature refinement module and a global feature refinement module work collaboratively to refine the proposals.
V2B[18] designs a point cloud completion network to force the learned features to include more shape information. A center-based head is adopted to locate the target directly from BEV features. LTTR[32] first projects the point clouds into the bird’s eye view and then uses a transformer network for feature fusion. PTT[33] inserts two transformer blocks in the deep Hough voting network to focus on deeper object clues in the tracking process. PTTR[25] applies a transformer network including self attention and cross attention mechanisms. Among them, the global self attention operation captures long-range dependencies and the cross attention is used to fuse two sets of point features. GPT[20] introduces the graph neural network into point cloud object tracking and designs a feature fusion module based on the graph neural network. DMT[34] removes the usage of complicated 3D detectors and leverages temporal motion cues to track. Similar to DMT, $M^{2}$-Track[19] proposes a two-stage motion-centric paradigm. It first locates the target in consecutive frames through motion transformation and then refines the prediction structure through motion-assisted shape completion. STNet[23] uses the self attention mechanism to capture the non-local information of the point clouds, and its decoder uses the cross attention mechanism to upsample discriminative point features. An iterative correlation network based on cross attention is applied to associate the template with the potential targets in the search area. SMAT[24] converts point clouds into pillars and compresses them into two-dimensional BEV features. An encoder based on the attention mechanism realizes the global similarity calculation between the template and search branch on multi-scale features. CMT[35] proposes a context-matching-guided transformer to effectively match the target template features with the search area.
Many of the above 3D tracking methods inherit the Siamese paradigm based on appearance matching, with parallel classification and regression branches built on supervised point cloud backbones such as PointNet++[22].

### 2.2 2D Object Tracking

Most existing 3D point cloud tracking algorithms borrow ideas from 2D Siamese trackers. SiamFC[1] is the pioneer of these trackers. It uses a convolutional neural network to design a two-branch feature extraction network for the template and search frame, respectively. Then, it obtains response maps through cross-correlation operations. SiamRPN[2] introduces the region proposal network (RPN) of Faster RCNN[36] to generate proposals. SiamRPN++[3] commits to using deeper and more powerful modern networks, such as ResNet[37], to extract features. SiamFC++[4] and SiamBAN[5] are anchor-free trackers that directly regress the target’s location without any prior information. TransT[6] proposes a feature fusion network based on the transformer, which effectively fuses the features of the template and search area through the attention mechanism. STARK[7] uses the transformer network to capture spatio-temporal information in video sequences. The encoder models the global spatio-temporal feature dependency between the target and the search area, and the decoder learns a query embedding to predict the spatial location of the target. SwinTrack[8] designs a pure transformer tracking framework and applies the transformer network in both the feature extraction and feature fusion stages. MixFormer[38] and OSTrack[9] follow a one-stream 2D tracking framework that unifies the feature extraction and feature fusion processes by using the transformer network.

### 2.3 Transformer in point clouds

The transformer network [39] has been widely adopted in various 3D vision tasks on point clouds. In the point cloud segmentation and classification tasks, PT[40] designs a transformer block based on the vector self attention mechanism.
Combining it with the set abstraction layer in PointNet++[22], PT constructs a complete 3D point cloud understanding network. PCT[41] proposes a novel offset-attention mechanism for point cloud processing. PTv2[42] develops an effective group vector attention that enables efficient information exchange within and among attention groups. In the 3D object detection task, Pointformer[43] designs a local transformer module to model the interaction between points in the local area and a global transformer to learn context-aware representations at the scene level. VoTr[44] is devoted to finding a suitable transformer network for voxel representations. SST[45] proposes a single-stride sparse transformer. With its local attention mechanism and capability of handling sparse data, it overcomes receptive field shrinkage in the single-stride setting. In the 3D multi object tracking task, TransFusion[46] proposes a two-layer transformer decoder. It first generates initial bounding box predictions in the point clouds, and then attentively fuses object queries with image features. Moreover, transformer-based pre-training strategies have found extensive utility in the domain of point cloud tasks[47]. PointContrast[48] evaluates the transferability of advanced representations in 3D point clouds across diverse scenarios, demonstrating that unsupervised pre-training effectively enhances performance on various downstream tasks and datasets. Point-BERT[49] has introduced the concept of Masked Point Modeling (MPM) as a pre-training task for point cloud transformers, thereby extending the BERT paradigm into the realm of 3D point clouds. Point-MAE[50] has devised an elegant solution for self-supervised learning on point clouds, employing an asymmetric design and token manipulation through shift masking. It aims to extract high-level latent features from unmasked point patches and reconstruct the masked point patches.
Point-PEFT[51] has devised a model that adjusts the pre-training strategy with minimal learnable parameters. It freezes a significant portion of the model’s parameters and only fine-tunes the newly introduced PEFT module on downstream tasks. Inspired by the above improvements of pre-training methods on multiple point cloud tasks, we try to pre-train the point cloud feature backbone of 3D SOT trackers and extend the application of MAE pre-training to 3D point cloud single object tracking. Figure 2: The 3D tracking pre-training strategy. We perform weight transfer on the transformer blocks in the target-aware network.

## 3 METHOD

In this section, we first give an overview of the one-stream 3D single point cloud object tracking framework. Then, a detailed introduction of the proposed EasyTrack and EasyTrack++ is presented. It mainly consists of four parts: the 3D point clouds tracking pre-training module, the unified target-aware point clouds feature learning and interaction, target localization, and the center points interaction strategy in EasyTrack++.

### 3.1 Overview

We propose a novel framework for 3D single object tracking, based on the developed pre-trained 3D point clouds tracking feature backbone. The network structure is shown in Fig. 3. Taking template points $P^{t}\in\mathbb{R}^{N_{1}\times 3}$ and search points $P^{s}\in\mathbb{R}^{N_{2}\times 3}$ as input, the tracking process can be formulated as:

$(F^{t},F^{s})=\Phi(P^{t},P^{s})$ (1)

$(x,y,z,\theta)=\Psi(F^{s})$ (2)

where $\Phi(\cdot)$ is the target-aware point cloud feature learning network, $F^{t}\in\mathbb{R}^{N_{1}\times d}$ is the feature of the template points and $F^{s}\in\mathbb{R}^{N_{2}\times d}$ is the feature of the search points. $\Psi(\cdot)$ is the target localization network and $(x,y,z,\theta)$ are the predicted parameters. Specifically, $(x,y,z)$ are the 3D coordinates of the target’s center and $\theta$ is the heading angle in the X-Y plane.
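The one-stream data flow of Eqs. (1) and (2) can be sketched in a few lines. The following toy NumPy version (random weights, a single-head attention block, and a mean-pooled linear head, all stand-ins for the learned modules) only illustrates the structure of $\Phi$ and $\Psi$, not the actual network:

```python
import numpy as np

def self_attention(x, rng):
    """Single-head self-attention over the concatenated
    template + search tokens, with a residual connection."""
    d = x.shape[-1]
    wq, wk, wv = (rng.normal(0, d ** -0.5, (d, d)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    a = q @ k.T / np.sqrt(d)
    a = np.exp(a - a.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)
    return x + a @ v

def easytrack_forward(tmpl_feat, search_feat, n_blocks=2, seed=0):
    """One-stream sketch: joint feature learning over template and search
    tokens (Phi), then a toy head regressing (x, y, z, theta) (Psi)."""
    rng = np.random.default_rng(seed)
    n1 = tmpl_feat.shape[0]
    x = np.concatenate([tmpl_feat, search_feat], axis=0)  # joint interaction
    for _ in range(n_blocks):
        x = self_attention(x, rng)
    f_search = x[n1:]                       # target-aware search features
    w_head = rng.normal(0, 0.01, (x.shape[-1], 4))
    return f_search.mean(axis=0) @ w_head   # (x, y, z, theta)

rng = np.random.default_rng(1)
pose = easytrack_forward(rng.normal(0, 1, (8, 16)), rng.normal(0, 1, (32, 16)))
```

The key point is that template and search tokens attend to each other inside the same backbone, so no separate fusion module is needed before the head.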
It is worth noting that the size information of the target $(w,l,h)$ is given in the template bounding box and remains unchanged across all frames in the point cloud scenes. Thus we only predict $(x,y,z,\theta)$ in each frame.
Figure 3: The network structure of EasyTrack. It is mainly composed of two parts: (1) a joint feature extraction and fusion network for template and search area point feature learning and fusing; (2) a target localization network for classification and regression in the BEV feature space.
Figure 4: The detailed structure of the proposed local embedding network. Ball Query and MLP layers are applied to capture the local features of point clouds.
### 3.2 3D tracking pre-training
Inspired by the success of BERT [52], we extend the pre-training strategy to the field of 3D single object tracking. We fine-tune Point-MAE to accommodate 3D SOT tasks, and the primary goal of pre-training is to capture the relationships between 3D data points. We observe that the objects to be tracked in datasets such as KITTI have the following characteristics. (i) Compared to the dozens or even hundreds of objects with completely different shapes and structures that point cloud analysis tasks must handle, tracked objects are typically Cars, Cyclists or Motorcyclists, Pedestrians, etc., which span fewer categories and are more geometrically similar. (ii) Datasets used for point cloud analysis tasks, such as ShapeNet, often contain only clean CAD models with precise geometry and size information, whereas datasets used for 3D SOT, such as nuScenes, are collected in complex real road scenes, where factors such as disturbances and weather cause varying degrees of target occlusion. (iii) Since point cloud data mainly comes from LiDAR sensors, the data collection process may be limited by environmental conditions, time, location, and other factors, which restricts the diversity and richness of the data samples.
This limits the ability of the tracking model to learn the relationships between the target points. Based on the above observations, we let the tracking model learn the correlations between points in 3D data in advance through pre-training. For the first time, we introduce a new single-stream paradigm based on pre-training. Specifically, for the pre-trained weights to fit tracking targets with fewer categories and simpler geometry, we reduce the dimensions of the attention heads and hidden layers in the encoder as well as the overall depth. This operation inevitably reduces the encoder’s learning capacity, so we increase the number of attention heads in the decoder to provide more parallelism. This balances the encoder-decoder difference of the pre-trained network, making the encoding part act more like a point-pair relation learning network. The pre-training process is depicted in Fig. 2. Specifically, the developed approach pre-trains a target-aware 3D tracking feature learning network based on Point-MAE[50]. This pre-training uses farthest point sampling, KNN, grouping layers, and other operations from PointNet to obtain point position embeddings, mask tokens, and input tokens, and then learns the underlying features through a transformer encoder-decoder network. Representing the complete set of mask tokens as $F_{m}\in\mathbb{R}^{mn\times C}$, for a given input point cloud $P\in\mathbb{R}^{N\times 3}$, the developed pre-training procedure can be described as follows: $P_{e}=\text{FPS}(P,n),F_{v}=\text{MLP}(\text{KNN}(P,k))$ (3) $T_{e}=\text{Encoder}(F_{v},P_{e})$ (4) $H_{m}=\text{Decoder}(\text{concat}(T_{e},T_{m}))$ (5) where $\text{Encoder}$ represents the transformer-based point relationship learning network, $T_{e}$ and $T_{m}$ correspond to encoded tokens and mask tokens, respectively, $P_{e}$ denotes the position embedding, $H_{m}$ denotes the learned mask tokens, and $F_{v}$ signifies the local features of each point.
$n$ refers to the number of sampled center points, and $k$ is the number of nearest neighbors that the KNN algorithm selects around each center point. A multi-layer perceptron (MLP) layer is utilized as the prediction head to reconstruct masked point patches in the coordinate space. Taking the decoder output $H_{m}$, the prediction head projects it onto a vector, and a reshaping operation then yields the predicted masked point patch, denoted as $P_{pre}$: $P_{pre}=\text{MLP}(H_{m})$ (6) EasyTrack leverages the pre-trained Point-MAE, but there are some fundamental differences between the two approaches: (i) Our joint feature extraction and fusion network performs both feature extraction and information fusion for the template and search area, while Point-MAE employs self-attention for feature extraction only. (ii) The learning tasks are distinct, and correspondingly, the inputs and heads differ: we take the template and search area as inputs and employ a voxel-to-BEV target center localization network to generate bounding boxes, whereas Point-MAE is designed for point cloud analysis. (iii) We further introduce the center points interaction strategy and an efficient decoder design.
### 3.3 Target-aware 3D feature learning and interaction
Local embedding. Before feeding the point clouds into the transformer layers, we design a local embedding network to obtain unique local features for each point. It merges the position embedding and input embedding networks of the typical transformer network. It is composed of Ball Query[22] and multi-layer perceptron (MLP) layers, aiming at local shape feature encoding for sparse and incomplete point clouds. The detailed network structure is shown in Fig. 4. For each point $p_{i}$ in $P^{t}$, we take it as the center and $r$ as the radius, and obtain $K$ points $P_{i}^{Q}\in\mathbb{R}^{K\times 3}$.
Since we take original point clouds as input, we treat the 3D coordinates as the point features and apply three MLP layers to embed them into a 64-dimensional feature space $F_{i}^{Q}\in\mathbb{R}^{K\times 64}$. Last, we perform a max pooling operation over the $K$ points to obtain the local feature $f_{i}^{lt}\in\mathbb{R}^{64}$ for $p_{i}$. The local feature embedding process can be formulated as: $P_{i}^{Q}=\left\\{s_{j}|\left\|s_{j}-p_{i}\right\|\leq r\right\\},j=1,...,K$ (7) $f_{i}^{lt}=max(MLP(P_{i}^{Q}))$ (8)
Figure 5: Illustration of the target-aware 3D feature learning. (a) The detailed structure of the transformer layer. (b) The attention maps generated by our target-aware point cloud feature learning network for the Car and Pedestrian categories in the KITTI dataset.
We unify the input embedding and the position embedding of the typical transformer network into a local embedding module. The embedded local features for the template $F^{lt}=\left\\{f_{i}^{lt}\right\\}_{i=1}^{N_{1}}$ provide initial features for each point as well as unique local information, since each point has unique coordinates. Target-aware feature learning. Different from previous Siamese trackers, we develop a target-aware feature extraction and interaction method with a unified single-branch block, constructed from the transformer network and the self-attention mechanism, and thereby avoid the time-consuming feature fusion network. After local feature embedding, the template local features $F^{lt}\in\mathbb{R}^{N_{1}\times 64}$ and search local features $F^{ls}\in\mathbb{R}^{N_{2}\times 64}$ are concatenated as $F^{l}=[F^{lt};F^{ls}]$. As shown in Fig. 3, we feed $F^{l}$ into three stacked transformer blocks and obtain $F^{g}\in\mathbb{R}^{{(N_{1}+N_{2})}\times d}$. Then we split it to obtain target-aware search point features $F^{s}\in\mathbb{R}^{N_{2}\times d}$. The detailed structure of the transformer network is shown in Fig. 5(a).
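The local embedding of Eqs. (7)-(8) — a ball query around each point, a shared MLP, then max pooling over the $K$ neighbors — can be sketched in NumPy as follows. The weights are random placeholders and the fallback/padding behavior is a simplification of ours, not the released code.

```python
import numpy as np

def ball_query(points, center, r=0.3, K=32):
    # Eq. (7): gather K points within radius r of `center`; pad by repeating
    # existing points when fewer than K neighbors are found (as in the paper's
    # center-area sampling; here the >K case simply truncates for brevity).
    d = np.linalg.norm(points - center, axis=1)
    idx = np.flatnonzero(d <= r)
    if idx.size == 0:
        idx = np.array([np.argmin(d)])   # fall back to the nearest point
    return points[np.resize(idx, K)]     # np.resize repeats cyclically to K

def local_embedding(points, W1, W2, W3, r=0.3, K=32):
    # Eq. (8): three MLP layers into a 64-d space, then max pooling over K
    feats = []
    for p in points:
        q = ball_query(points, p, r, K)          # (K, 3) neighborhood
        h = np.maximum(q @ W1, 0)                # MLP layer 1 + ReLU
        h = np.maximum(h @ W2, 0)                # MLP layer 2 + ReLU
        h = h @ W3                               # MLP layer 3 -> (K, 64)
        feats.append(h.max(axis=0))              # max pool -> f_i^{lt} (64,)
    return np.stack(feats)

rng = np.random.default_rng(0)
pts = rng.standard_normal((128, 3)) * 0.5        # toy template points
W1, W2, W3 = (rng.standard_normal(s) * 0.1 for s in [(3, 32), (32, 64), (64, 64)])
F_lt = local_embedding(pts, W1, W2, W3)          # (128, 64) local features
```

Because every point has unique coordinates, the pooled neighborhood feature doubles as a position signal, which is why the method can drop the separate position encoding of the typical transformer.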
In each transformer block, we first apply a layer normalization operation to the initial point features. Then, linear layers are utilized to map the inputs into query (Q), key (K), and value (V). Unlike the typical transformer structure, we abandon the special position encoding module, since the coordinate-based local feature embedding network already provides unique position information for every point. The self-attention mechanism is the core operation of our transformer network. It can be formulated as: $attn=Softmax(\frac{QK^{T}}{\sqrt{d}}),$ (9) $out=Softmax(\frac{QK^{T}}{\sqrt{d}})\cdot V$ (10) where $attn$ is the attention map and $out$ is the output of the self-attention mechanism. In particular, since we concatenate the template and search point features as the input embeddings $F^{l}=[F^{lt};F^{ls}]$, the Q, K, V can be represented as $[Q^{t};Q^{s}]$, $[K^{t};K^{s}]$, and $[V^{t};V^{s}]$. Eq. 9 and Eq. 10 can be expanded as: $\displaystyle attn$ $\displaystyle=Softmax(\frac{[Q^{t};Q^{s}][K^{t};K^{s}]^{T}}{\sqrt{d}})$ $\displaystyle=[\lambda^{tt},\lambda^{ts};\lambda^{st},\lambda^{ss}],$ (11) $\displaystyle out$ $\displaystyle=[\lambda^{tt},\lambda^{ts};\lambda^{st},\lambda^{ss}]\cdot[V^{t};V^{s}]$ $\displaystyle=[\lambda^{tt}V^{t}+\lambda^{ts}V^{s};\lambda^{st}V^{t}+\lambda^{ss}V^{s}]$ (12) where $\lambda^{tt}$ and $\lambda^{ss}$ are attention weights, and $\lambda^{ts}$ and $\lambda^{st}$ indicate the dual interaction between template and search area points. Eq. 11 and Eq. 12 explain why the proposed single-branch backbone can learn target-aware point features for the search area. As shown in Fig. 5(b), we visualize the attention maps generated by the above transformer network. We find that the unified target-aware feature learning and interaction network focuses on the target well and distinguishes similar distractors. It is worth noting that we apply the multi-head attention mechanism to enhance the representative ability of the network.
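The joint attention of Eqs. (9)-(12) can be sketched as a single-head NumPy toy (random weights, ours for illustration only): concatenating template and search features lets one softmax pass realize all four blocks $\lambda^{tt},\lambda^{ts},\lambda^{st},\lambda^{ss}$, so feature learning and template-search interaction happen in the same operation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(F_t, F_s, Wq, Wk, Wv):
    """Single-head sketch of Eqs. (9)-(12)."""
    F = np.concatenate([F_t, F_s], axis=0)     # F^l = [F^lt; F^ls]
    Q, K, V = F @ Wq, F @ Wk, F @ Wv           # [Q^t;Q^s], [K^t;K^s], [V^t;V^s]
    d = Q.shape[1]
    attn = softmax(Q @ K.T / np.sqrt(d))       # Eq. (9/11): one map holding
    out = attn @ V                             # lambda^tt..lambda^ss blocks
    # Split back (Eq. 12): the second block is the target-aware search feature
    return out[: F_t.shape[0]], out[F_t.shape[0]:]

rng = np.random.default_rng(0)
d = 64
F_t = rng.standard_normal((512, d))            # embedded template features
F_s = rng.standard_normal((1024, d))           # embedded search features
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.05 for _ in range(3))
out_t, out_s = joint_attention(F_t, F_s, Wq, Wk, Wv)
```

Note that `out_s` mixes $\lambda^{st}V^{t}$ (template-to-search interaction) with $\lambda^{ss}V^{s}$ (search self-attention), which is exactly why no separate fusion network is needed.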
In addition, a residual connection is applied, and an MLP layer is designed to extract deeper features.
### 3.4 Efficient BEV-based Target Localization
After feature learning, we obtain target-aware search point features $F^{s}\in\mathbb{R}^{N_{2}\times d}$. An efficient target localization module is proposed to regress the target’s location in the bird’s eye view (BEV) feature space instead of regressing directly in sparse point clouds. We first divide the irregular point clouds into equal voxels [53] according to their 3D coordinates. A pragmatic approach is to aggregate the voxel features with 3D convolutions and then perform max pooling along the $z$-axis to project them into the BEV feature space, obtaining dense BEV feature maps in which low responses of the voxelized feature maps are suppressed. However, voxelized features are four-dimensional tensors, and temporal fusion superimposes features, which makes such an encoder computationally intensive, slow, and inefficient. To address this issue, we propose an efficient encoder module (EEM) that reduces dimensionality with space-to-channel operations. Specifically, the module converts the 4D voxel tensor $V\in R^{H\times W\times Z\times C}$ into a 3D BEV tensor $V\in R^{H\times W\times(ZC)}$ via the “torch.reshape” operation, thus avoiding the need for memory-intensive 3D convolutions. Given the target-aware search point features $F^{s}$ as input, we employ several 2D convolutions to reduce the channel dimension and incorporate residual connections to capture information from different layers. The module can be represented as follows: $F^{v}=Reshape(Voxel(F^{s}))$ (13) $F^{r}=\alpha(F^{v}+\beta F^{v})$ (14) where $F^{r}\in\mathbb{R}^{H\times W\times d_{1}}$ represents the encoded BEV features, and $\beta$ denotes two $3\times 3$ 2D convolutions and a ReLU layer.
$\alpha$ represents two ReLU layers and a $3\times 3$ 2D convolution.
Figure 6: The network structure of EasyTrack++. It is built on top of EasyTrack; thus, the target-aware feature learning and target localization networks are the same as in Fig. 3. The center points interaction strategy crops the center points and then makes a secondary interaction with the search area points through the transformer network discussed in Fig. 5.
Figure 7: Illustration of the heads of the proposed EasyTrack. It consists of four parts to realize accurate classification and regression collaboratively.
Inspired by [54], we design four heads from a decoupling perspective to achieve accurate tracking, as shown in Fig. 7. A heatmap $M\in\mathbb{R}^{H\times W\times 1}$ is predicted to find the pixel in the BEV space where the target’s center is located. The offset map $O\in\mathbb{R}^{H\times W\times 2}$ helps find a more accurate center and recover the discretization error. The orientation map $\Theta\in\mathbb{R}^{H\times W\times 1}$ determines the heading angle of the target. We also predict the height of the target’s center $Z\in\mathbb{R}^{H\times W\times 1}$ to locate the target along the $z$-axis. After post-processing, we can decode the target location $(x,y,z,w,l,h,\theta)$ in the search area. The total loss function of our method can be formulated as: $L=\lambda_{1}L_{cls}+\lambda_{2}L_{off}+\lambda_{3}L_{\theta}+\lambda_{4}L_{Z}$ (15) Here, $L_{cls}$ is a modified focal loss for the heatmap $M$, $L_{off}$ is an $L_{1}$ loss for the offset map $O$, and $L_{\theta}$ and $L_{Z}$ are also $L_{1}$ losses for the orientation $\Theta$ and the height $Z$, respectively. $\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}$ are weights for the different parts.
### 3.5 Center points interaction strategy
Although EasyTrack demonstrates superior performance and achieves a balance between tracking accuracy and speed, we further improve it and propose an enhanced version named EasyTrack++.
In the 3D single object tracking task, the template usually contains information around the target. This characteristic introduces heavy interaction noise when matching the template and search area. To address this problem, SimTrack[55] proposes a foveal window strategy to provide more diversified template information in 2D SOT. Here, we develop a center points interaction strategy specially designed for 3D SOT in point clouds. As shown in Fig. 8(a), the template contains points belonging to the pedestrian located in the center and background points distributed around it. We utilize a simple but effective cropping strategy: the center points are cropped to make a secondary interaction with the search points, emphasizing target information in the search area. Specifically, after target-aware feature learning, we obtain template features $F^{t}\in\mathbb{R}^{N_{1}\times d}$ and search area features $F^{s}\in\mathbb{R}^{N_{2}\times d}$. We first use the simplified ball query to define a central area around the geometric center $o$ of the template $P^{t}\in\mathbb{R}^{N_{1}\times 3}$. Then we collect a fixed number of points $P^{c}\in\mathbb{R}^{N_{3}\times 3}$ inside it and aggregate their features in $F^{t}$ as $F^{c}\in\mathbb{R}^{N_{3}\times d}$, where $N_{3}$ is the fixed number. If the number of points inside the central area is more than $N_{3}$, we use random sampling to sample $N_{3}$ points; otherwise, we repeat the existing points. After that, we concatenate the features of the center points and the search area and make a secondary interaction through the transformer block mentioned in Sec. 3.3. The whole center points interaction strategy can be formulated as: $(P^{c},F^{c})=BallQuery(P^{t},F^{t},o)$ (16) $F^{f}=Transformer(concat(F^{c},F^{s}))$ (17) where $F^{f}$ is the final feature for target localization. We visualize the attention maps generated by the proposed EasyTrack and EasyTrack++ in Fig. 8. As shown in Fig.
8(c), in EasyTrack, some background points around the target have high responses that may confuse the target localization network and lead to inaccurate classification and regression. However, as shown in Fig. 8(d), in EasyTrack++ the response of the target is much more distinct than that of the background. This further reflects that the center points interaction strategy enhances the discrimination of the feature extraction network.
Figure 8: The visualization comparison of attention maps generated by the global interactions in EasyTrack and EasyTrack++. We select one tracking scene in the Pedestrian category in the KITTI dataset. (a) the template points. (b) the search area points. (c) the attention map generated by EasyTrack. (d) the attention map generated by EasyTrack++.
TABLE I: Tracking results compared to other trackers in the KITTI dataset. Success / Precision are used for evaluation; the first five value columns report Success and the last five report Precision. Bold denotes the best result.

Method | Car | Pedestrian | Van | Cyclist | Mean | Car | Pedestrian | Van | Cyclist | Mean
---|---|---|---|---|---|---|---|---|---|---
Frame Number | 6424 | 6088 | 1248 | 308 | 14068 | 6424 | 6088 | 1248 | 308 | 14068
SC3D[27] | 41.3 | 18.2 | 40.4 | 41.5 | 31.2 | 57.9 | 37.8 | 47.0 | 70.4 | 48.5
P2B[21] | 56.2 | 28.7 | 40.8 | 32.1 | 42.4 | 72.8 | 49.6 | 48.4 | 44.7 | 60.0
MLVSNet[30] | 56.0 | 34.1 | 52.0 | 34.3 | 45.7 | 74.0 | 61.1 | 61.4 | 44.5 | 66.7
LTTR[32] | 65.0 | 33.2 | 35.8 | 66.2 | 48.7 | 77.1 | 56.8 | 45.6 | 89.9 | 65.8
BAT[29] | 60.5 | 42.1 | 52.4 | 33.7 | 51.2 | 77.7 | 70.1 | 67.0 | 45.4 | 72.8
PTT[33] | 67.8 | 44.9 | 43.6 | 37.2 | 55.1 | 81.8 | 72.0 | 52.5 | 47.3 | 74.2
C2FT[26] | 67.0 | 48.6 | 53.4 | 38.0 | 57.2 | 80.4 | 75.6 | 66.1 | 48.7 | 76.4
V2B[18] | 70.5 | 48.3 | 50.1 | 40.8 | 58.4 | 81.3 | 73.5 | 58.0 | 49.7 | 75.2
PTTR[25] | 65.2 | 50.9 | 52.5 | 65.1 | 57.9 | 77.4 | 81.6 | 61.8 | 90.5 | 78.1
CMT[35] | 70.5 | 49.1 | 54.1 | 55.1 | 59.4 | 81.9 | 75.5 | 64.1 | 82.4 | 77.6
STNet[23] | 72.1 | 49.9 | 58.0 | 73.5 | 61.3 | 84.0 | 77.2 | 70.6 | 93.7 | 80.1
$M^{2}$-Track[19] | 65.5 | 61.5 | 53.8 | 73.2 | 62.9 | 80.8 | 88.2 | 70.7 | 93.5 | 83.4
STTrack[56] | 66.5 | 60.4 | 50.5 | 75.3 | 62.6 | 79.9 | 89.4 | 63.6 | 93.9 | 82.9
PCET[57] | 68.7 | 56.9 | 57.9 | 75.6 | 62.7 | 80.1 | 85.1 | 66.1 | 93.7 | 81.3
PTIT[58] | 68.6 | 56.7 | 53.8 | 74.2 | 62.6 | 81.2 | 86.3 | 70.7 | 92.5 | 82.7
CXTrack[59] | 69.1 | 67.0 | 60.0 | 74.2 | 67.5 | 81.6 | 91.5 | 71.8 | 94.3 | 85.3
CorpNet[60] | 73.6 | 74.3 | 58.7 | 55.6 | 64.5 | 84.1 | 94.2 | 66.5 | 82.4 | 82.0
SyncTrack[61] | 73.3 | 54.7 | 60.3 | 73.1 | 64.1 | 85.0 | 80.5 | 70.0 | 93.8 | 81.9
MBPTrack[62] | 73.4 | 68.6 | 61.3 | 76.7 | 70.3 | 84.8 | 93.9 | 72.7 | 94.3 | 87.9
M3SOT[63] | 75.9 | 66.6 | 59.4 | 70.3 | 70.3 | 87.4 | 92.5 | 74.7 | 93.4 | 88.6
EasyTrack(ours) | 87.9 | 88.2 | 65.3 | 85.3 | 86.0 | 90.1 | 95.6 | 78.8 | 93.8 | 91.6
EasyTrack++(ours) | 88.7 | 91.2 | 68.6 | 87.4 | 88.0 | 92.6 | 96.3 | 80.0 | 95.4 | 93.2

TABLE II: Tracking results compared to other trackers in the nuScenes dataset. Success / Precision are used for evaluation. Bold denotes the best result.
The first five value columns report Success and the last five report Precision.

Method | Car | Pedestrian | Truck | Bicycle | Mean | Car | Pedestrian | Truck | Bicycle | Mean
---|---|---|---|---|---|---|---|---|---|---
Frame Number | 15578 | 8019 | 3710 | 501 | 27808 | 15578 | 8019 | 3710 | 501 | 27808
SC3D[27] | 25.0 | 14.2 | 25.7 | 17.0 | 21.8 | 27.1 | 16.2 | 21.9 | 18.2 | 23.1
P2B[21] | 27.0 | 15.9 | 21.5 | 20.0 | 22.9 | 29.2 | 22.0 | 16.2 | 26.4 | 25.3
BAT[29] | 22.5 | 17.3 | 19.3 | 17.0 | 20.5 | 24.1 | 24.5 | 15.8 | 18.8 | 23.0
V2B[18] | 31.3 | 17.3 | 21.7 | 22.2 | 25.8 | 35.1 | 23.4 | 16.7 | 19.1 | 29.0
STNet[23] | 32.2 | 20.4 | 27.6 | 18.5 | 26.5 | 33.4 | 32.9 | 20.8 | 26.8 | 31.5
CXTrack[59] | 29.6 | 20.4 | 27.6 | 18.5 | 26.5 | 33.4 | 32.9 | 20.8 | 26.8 | 31.5
CorpNet[60] | 35.0 | 21.3 | 39.7 | 26.9 | 31.8 | 38.4 | 33.6 | 36.3 | 43.5 | 36.8
SyncTrack[61] | 36.7 | 19.1 | 39.4 | 23.8 | 31.8 | 38.1 | 27.8 | 38.6 | 30.4 | 35.1
M3SOT[63] | 34.2 | 24.6 | 29.6 | 18.8 | 30.5 | 38.6 | 37.8 | 25.5 | 27.9 | 36.4
EasyTrack(ours) | 76.1 | 60.5 | 71.8 | 44.8 | 70.5 | 80.5 | 86.9 | 70.0 | 73.2 | 80.8
EasyTrack++(ours) | 76.8 | 61.4 | 72.1 | 45.1 | 71.2 | 80.9 | 87.2 | 70.2 | 74.5 | 81.2

TABLE III: Tracking results compared to other trackers in the WOD. Success / Precision are used for evaluation. Bold denotes the best result.
Entries are Success/Precision; frame counts are given in parentheses after each category.

Method | Vehicle (185731): Easy | Medium | Hard | Mean | Pedestrian (241752): Easy | Medium | Hard | Mean | Mean
---|---|---|---|---|---|---|---|---|---
P2B[21] | 57.1/65.4 | 52.0/60.7 | 47.9/58.5 | 52.6/61.7 | 18.1/30.8 | 17.8/30.0 | 17.7/29.3 | 17.9/30.1 | 33.0/43.8
BAT[29] | 61.0/68.3 | 53.3/60.9 | 48.9/57.8 | 54.7/62.7 | 19.3/32.6 | 17.8/29.8 | 17.2/28.3 | 18.2/30.3 | 34.1/44.4
V2B[18] | 64.5/71.5 | 55.1/63.2 | 52.0/62.0 | 57.6/65.9 | 27.9/43.9 | 22.5/36.2 | 20.1/33.1 | 23.7/37.9 | 38.4/50.1
TAT[64] | 66.0/72.6 | 56.6/64.2 | 52.9/62.5 | 58.9/66.7 | 32.1/49.5 | 25.6/40.3 | 21.8/35.9 | 26.7/42.2 | 40.7/52.8
STNet[23] | 65.9/72.7 | 57.5/66.0 | 54.6/64.7 | 59.7/68.0 | 29.2/45.3 | 24.7/38.2 | 22.2/35.8 | 25.5/39.9 | 40.4/52.1
CXTrack[59] | 63.9/71.1 | 54.2/62.7 | 52.1/63.7 | 57.1/66.1 | 35.4/55.3 | 29.7/47.9 | 26.3/44.4 | 30.7/49.4 | 42.2/56.7
$M^{2}$-Track[19] | 68.1/75.3 | 58.6/66.6 | 55.4/64.9 | 61.1/69.3 | 35.5/54.2 | 30.7/48.4 | 29.3/45.9 | 32.0/49.7 | 44.6/58.2
MBPTrack[62] | 68.5/77.1 | 58.4/68.1 | 57.6/69.7 | 61.9/71.9 | 37.5/57.0 | 33.0/51.9 | 30.0/48.8 | 33.7/52.7 | 46.0/61.0
EasyTrack++(Ours) | 70.0/77.8 | 59.1/69.1 | 58.3/70.5 | 62.8/72.7 | 38.1/58.5 | 35.2/52.2 | 31.5/49.0 | 35.1/53.5 | 47.1/61.8

TABLE IV: The results (Success/Precision) for the actual tracked frames in the nuScenes dataset.

Method | Car | Pedestrian | Truck | Bicycle | Mean
---|---|---|---|---|---
Frame Number | 146871 | 75066 | 35448 | 4698 | 262123
V2B[18] | 34.5/36.1 | 18.9/23.6 | 32.9/29.3 | 23.2/30.5 | 29.6/31.5
EasyTrack++ | 74.5/78.8 | 58.5/84.7 | 70.1/68.3 | 38.5/72.1 | 68.7/78.9

## 4 Experiments
In this section, we conduct extensive experiments to show the favorable performance of EasyTrack and EasyTrack++. First, we introduce our experiment settings, including datasets and evaluation metrics. Then we give comprehensive comparisons with other state-of-the-art trackers to validate the superiority of our tracker.
Last, adequate ablation studies validate the effectiveness of each part of EasyTrack and EasyTrack++.
### 4.1 Experiment settings
Datasets. We evaluate the performance of EasyTrack and EasyTrack++ on the challenging KITTI[65], nuScenes[66], and Waymo Open Dataset (WOD)[67] benchmarks, which mainly focus on autonomous driving scenes. KITTI has 21 training scenes, and we follow [27] to split the training set since the testing set has no annotations. Specifically, we train on scenes 0-16, validate on scenes 17-18, and test on scenes 19-20. For the nuScenes dataset, we follow [18] to split it into 750 training sequences and 150 validation sequences. We train our method on the training set and test it on the validation set. Notably, we use the official toolkit to interpolate ground truth for unannotated frames, since only key frames are annotated in nuScenes. For WOD, we follow LiDAR SOT[68] and evaluate our approach on 1,121 tracklets divided into easy, medium, and hard subsets based on point cloud sparsity. The datasets used for our pre-training include ShapeNet, ModelNet40, and ScanObjectNN. The ShapeNet[69] dataset comprises approximately 51,300 curated 3D models covering 55 prevalent object categories; it was compiled by collecting CAD models from open-source 3D repositories available online. CAD models, which are digital three-dimensional representations created in computer-aided design software, exhibit highly precise geometric information. ModelNet40 is a commonly used dataset for 3D object recognition that contains 12,311 CAD models from 40 categories, divided into 9,843 training samples and 2,468 test samples. The object categories in ModelNet40 cover common objects such as chairs, tables, monitors, airplanes, and so on.
Unlike the first two, ScanObjectNN is a challenging real-world dataset consisting of approximately 15,000 objects from 15 categories, providing more lighting variation, noise interference, and target context information to simulate complex and variable real-world scenarios. Evaluation Metrics. We follow previous methods and report the Success and Precision metrics. The Success metric is defined by the 3D IoU between the predicted 3D bounding box and the ground truth. The Precision metric is defined by the Area Under Curve (AUC) of the distance between the centers of the predicted and annotated bounding boxes from 0 to 2 m. Model Details. In the proposed EasyTrack and EasyTrack++, we take $N_{1}=512$ template points and $N_{2}=1024$ search area points as input. The radius $r=0.3m$ and $K=32$ neighboring local points are considered in the local feature embedding network. In the target-aware feature learning process, we use multi-head self-attention in the transformer block: we stack three transformer blocks with four heads, and the feature dimension $d$ is set to 64. In the center points interaction stage, we collect $N_{3}=128$ points to form $P^{c}$ and use one transformer layer for the secondary interaction. In the target localization network, the voxel size is set to $(0.3m,0.3m,0.3m)$. The size of the BEV feature map is $24\times 38\times 128$. Four 2D convolutions and two 2D deconvolutions are designed to process the BEV feature in combination with residual connections. Training. In the training stage, we merge the point clouds in the $(t-1)$-th bounding box and the first bounding box as the template. The point clouds inside the enlarged $t$-th bounding box are sampled as the search area, along with a random shift. We set $\lambda_{1}=1.0,\lambda_{2}=1.0,\lambda_{3}=1.0,\lambda_{4}=2.0$ in the loss function. Our model is trained end-to-end for 20 epochs. We use the Adam optimizer, and the initial learning rate is set to 0.001.
It is divided by 5 every 6 epochs. Testing. In the testing stage, we are given the target in the first frame of a tracklet and track it across all frames. We merge the point clouds in the $(t-1)$-th predicted result and the given bounding box in the first frame as the template. The $(t-1)$-th predicted result is enlarged by $2m$ in the $t$-th frame, and we sample 1024 points inside it as the search area. 3D Pre-training. Empirical evidence reveals that models pre-trained on 3D analysis datasets (e.g., ShapeNet, which comprises approximately 50,000 3D CAD models spanning 55 object categories) can significantly enhance the precision of 3D tracking. To be more specific, our proposed target-aware feature learning network serves as the backbone, and we pre-train it on ShapeNet for 300 epochs using Point-MAE. When training on the KITTI and nuScenes datasets, we initialize EasyTrack with the pre-trained weights of the backbone network; the remaining layers are initialized randomly.
Figure 9: Robustness analysis in the Car category in the KITTI dataset.
### 4.2 Comparison with other trackers
Results on KITTI. We compare EasyTrack and EasyTrack++ with other state-of-the-art 3D SOT methods in Table I. Following previous works, we report the results in four categories: Car, Pedestrian, Van, and Cyclist. The mean results are calculated over all frames. It can be observed that the proposed EasyTrack outperforms other trackers across all categories, with an average Success and Precision of 86.0/91.6. Besides, thanks to the center points interaction strategy, the enhanced version EasyTrack++ further improves the tracking performance: its mean Success and Precision metrics are the highest among all trackers (88.0/93.2), an improvement of 2.0 in mean Success and 1.6 in mean Precision over EasyTrack. Results on nuScenes.
We show the tracking results on the nuScenes dataset compared to other state-of-the-art trackers in Table II. It is important to note that for unannotated frames, the ground truth is obtained through interpolation, as mentioned earlier, and we compare with trackers that follow the same settings. We report the results in four categories: Car, Pedestrian, Truck, and Bicycle. Note that the results are computed on the key frames of the validation set. EasyTrack outperforms other trackers in all categories. EasyTrack++ further improves the tracking performance and achieves the best mean Success and Precision (71.2/81.2). Compared to tracking in the KITTI dataset, it is more difficult to track the target in nuScenes due to the low-quality interpolated ground truth. In previous work, tracking results were reported only for key frames (Car: 15578 frames), as shown in Table II. However, in Table IV, we provide results for all actual tracked frames (Car: 146871 frames), and EasyTrack++ exhibits a significant advantage over V2B.
Figure 10: Visualization of advantage cases compared to STNet[23]. We visualize one tracking scene for each category in the KITTI dataset.
Figure 11: Visualization of advantage cases compared to STNet[23]. We visualize one tracking scene for each category in the nuScenes dataset.
Results on Waymo. To assess the generalization capability of the proposed EasyTrack++, we evaluate the KITTI pre-trained model on the WOD, as outlined in prior work [23]. It is worth noting that the object categories between KITTI and WOD align as follows: Car $\rightarrow$ Vehicle and Pedestrian $\rightarrow$ Pedestrian. The experimental results, presented in Table III, reveal that EasyTrack++ exhibits superior tracking performance across various challenging occlusion scenarios, achieving the highest average Success and Precision (47.1/61.8).
In summary, our proposed methodology not only demonstrates accurate target tracking across diverse object sizes (vehicle or pedestrian) but also exhibits strong generalization capability when applied to previously unobserved scenes.
TABLE V: Computation costs compared to other Siamese trackers.

Method | Param(M) | FLOPs(G) | FPS
---|---|---|---
P2B[21] | 1.3 | 4.7 | 35.7
BAT[29] | 1.5 | 3.1 | 51.8
PTT[33] | 4.9 | 6.2 | 41.2
V2B[18] | 1.3 | 5.6 | 20.4
PTTR[25] | 2.1 | 2.7 | 35.9
STNet[23] | 2.0 | 3.1 | 27.5
$M^{2}$-Track[19] | 2.2 | 2.5 | 57.0
EasyTrack | 1.0 | 2.3 | 53.2
EasyTrack++ | 1.3 | 2.6 | 52.6

Computational cost. We propose a simple baseline for 3D SOT without the heavy feature fusion module that is necessary for typical Siamese trackers. The computational costs of different trackers are reported in Table V. For a fair comparison, we use a single NVIDIA 3090 GPU to test all the trackers. Table V shows that EasyTrack has fewer parameters and FLOPs than other trackers. The proposed target localization module effectively reduces the model’s trainable parameters without compromising accuracy. EasyTrack runs at real-time speed (53.2 FPS); it is slower only than $M^{2}$-Track, yet much more accurate, as shown in Table I. Compared with typical Siamese trackers, EasyTrack is a lightweight tracker and achieves a good balance between running speed and tracking accuracy. At the same time, EasyTrack++ further improves tracking performance at an acceptable computation cost: it achieves the best tracking performance, as illustrated in Table I, and still outperforms most 3D SOT trackers in speed, running at 52.6 FPS.
Figure 12: (a) The impact of different pre-training epochs on tracking performance in the KITTI dataset. (b) The effect of pre-training datasets. (c) The convergence speed of the model in the KITTI dataset. $*$ indicates no pre-training employed.
Robustness in Sparse Scenes.
It is crucial for a 3D tracker to perform robustly in sparse point clouds for practical applications. We divide the tracking scenes into four intervals according to the number of points and report the Success metric in the Car category of the KITTI dataset in Fig. 9. As the number of points increases, the performance of the trackers increases steadily. EasyTrack presents better performance than V2B[18] and STNet[23] in three intervals, reflecting that EasyTrack is robust in sparse scenes with incomplete target information. The proposed EasyTrack++ gradually improves upon EasyTrack, since the center points interaction strategy has more informative points to guide the secondary interaction in relatively dense scenes. Fig. 9 intuitively reflects that EasyTrack and EasyTrack++ have good robustness in sparse scenes, which is critical when facing occlusion or tracking distant objects. Visualization comparisons. To demonstrate the superiority of the proposed EasyTrack more intuitively, we visualize the tracking process compared to the competitive tracker STNet[23] in Fig. 10 and Fig. 11. In the Car category of the KITTI dataset, as shown in the first row of Fig. 10, the point clouds of the target are quite sparse and incomplete; EasyTrack tracks well while STNet deviates largely, reflecting the good robustness of EasyTrack in sparse scenes. In a scene with more distractors in the second row, EasyTrack holds the target pedestrian tightly, while STNet gradually loses the target and locks onto a similar one. This can be attributed to the superiority of our target-aware feature learning process; the simple correlation operation in STNet leads to unreliable feature matching on incomplete point clouds. Compared to the KITTI dataset, which uses a 64-beam LiDAR, the nuScenes dataset uses a 32-beam LiDAR, so the point clouds are usually sparser. As shown in Fig. 11, EasyTrack also performs better than STNet in these sparse tracking scenes.
### 4.3 Ablation study

To assess the individual efficacy of each proposed component, note that, unless otherwise specified, the ablation experiments were conducted without the pre-training strategy.

Model components. Our ablation study examines the impact of 3D SOT pre-training (Pre), target-aware feature learning (TA), and target localization (TL) during the training process. The results are shown in Table VI. We observe that the pre-training strategy significantly enhances tracking performance ($\uparrow$21.2/$\uparrow$9.4). This finding substantiates the superiority of a pre-training strategy specifically tailored for 3D SOT tasks, emphasizing the complementary value of the CAD model-based ShapeNet dataset used for pre-training on point cloud data. Even without the pre-training strategy, our tracker still demonstrates competitive results, with a mean success and precision of 64.5/83.5. Furthermore, our proposed target-aware feature learning and object localization methods exhibit varying degrees of performance improvement. With the proposed target-aware feature learning, the mean performance is enhanced by 1.0/0.7. Finally, by leveraging the proposed target localization, we achieve a further performance boost of 1.2/0.5. This demonstrates its ability to accurately localize objects in the fusion space based on BEV features.

TABLE VI: Ablation studies on different model components on the KITTI[65] dataset.
Pre | TA | TL | Car | Pedestrian | Van | Cyclist | Mean
---|---|---|---|---|---|---|---
✗ | ✗ | ✗ | 71.5/82.0 | 53.8/82.3 | 56.9/65.9 | 72.7/93.8 | 62.6/81.0
✗ | ✓ | ✓ | 72.8/84.0 | 56.5/85.8 | 58.6/67.5 | 73.1/93.7 | 64.5/83.5
✓ | ✗ | ✗ | 85.2/88.5 | 87.1/95.0 | 60.1/76.5 | 83.2/93.7 | 83.8/90.4
✓ | ✓ | ✗ | 86.5/89.6 | 87.9/95.2 | 60.8/77.5 | 84.3/93.7 | 84.8/91.1
✓ | ✓ | ✓ | 87.9/90.1 | 88.2/95.6 | 65.3/78.8 | 85.3/93.8 | 86.0/91.6

TABLE VII: Results of different target localization networks.

Method | Success | Precision | FPS
---|---|---|---
Coordinate Space[28] | 70.3 | 81.9 | 53.4
BEV Space (w/o EEM) | 73.1 | 84.5 | 50.1
BEV Space (w/ EEM) | 73.4 | 84.4 | 52.6

TABLE VIII: Integration with Siamese-based networks.

Method | Car | Pedestrian | Van | Cyclist | Mean
---|---|---|---|---|---
STNet | 72.1/84.0 | 49.9/77.2 | 58.0/70.6 | 73.5/93.7 | 61.3/80.1
STNet* | 81.1/87.5 | 71.2/84.2 | 68.7/78.4 | 80.8/93.9 | 75.7/85.4
Improvement | $\uparrow$9.0/$\uparrow$3.5 | $\uparrow$21.3/$\uparrow$7.0 | $\uparrow$10.7/$\uparrow$7.8 | $\uparrow$7.3/$\uparrow$0.2 | $\uparrow$14.4/$\uparrow$5.3
PTTR | 65.2/77.4 | 50.9/81.6 | 52.5/61.8 | 65.1/90.5 | 58.4/77.8
PTTR* | 74.1/87.2 | 72.3/88.2 | 58.8/70.5 | 73.2/93.4 | 72.0/86.3
Improvement | $\uparrow$8.9/$\uparrow$10.2 | $\uparrow$21.4/$\uparrow$6.6 | $\uparrow$6.3/$\uparrow$8.7 | $\uparrow$8.1/$\uparrow$2.9 | $\uparrow$13.6/$\uparrow$8.5

*Integrated with the pre-training strategy.

TABLE IX: Comparison with different feature fusing modules.

Method | Success | Precision | FPS
---|---|---|---
P2B-xorr[21] | 71.5 | 82.4 | 39.4
MLVSNet-xorr[30] | 70.6 | 81.4 | 41.7
V2B-xorr[18] | 71.8 | 82.7 | 29.2
STNet-xorr[23] | 72.0 | 83.2 | 35.7
EasyTrack++† | 72.4 | 83.5 | 49.8
EasyTrack++ | 73.4 | 84.4 | 52.6

† represents the adoption of a mask strategy.

3D pre-training.
To further investigate the effectiveness of the pre-training strategy, we conducted pre-training runs ranging from 0 to 600 epochs to explore the impact of different pre-training durations on tracking performance. The results for different epochs within the Car category on the KITTI dataset are presented in Fig. 12(a). The optimal performance is observed at epoch 300, with success/precision metrics reaching 87.9/90.1. Furthermore, to assess the generalization of the pre-training strategy, we integrated it into the PTTR framework. Table VIII illustrates a significant performance improvement when pre-training is incorporated into the PTTR network ($\uparrow$13.6/$\uparrow$8.5). Fig. 12(c) visualizes the influence of pre-training on convergence speed. It is evident that with pre-training, convergence is achieved as early as the 15th epoch. This advancement of approximately 20 epochs, compared to standard training, leads to a substantial reduction in training duration.

Layers of the transformer. As shown in Fig. 4, we stack three layers of the self-perception transformer in the target-aware feature learning network in EasyTrack++. The number of self-perception transformer layers, $n$, is a key parameter. If it is set too small, the feature extraction and interaction ability of the network is insufficient. If it is set too large, it leads to overfitting and slow running speed. We report the results of setting different $n$ in the Car category in the KITTI dataset in Fig. 13. We find that as $n$ gradually increases, the tracking performance first increases and then decreases. The best performance is obtained when $n=3$, with Success/Precision metrics of 73.4/84.4.

Joint or split point feature learning. The core design of our tracker is a target-aware point cloud feature learning network.
It effectively embeds the target’s information into the search area and greatly accelerates the running speed. To verify the superiority of our design, we also design a typical Siamese tracker with the transformer backbone in Fig. 5(a). Different from EasyTrack++, it does not concatenate the template and search area before feature learning. Different feature fusing modules are equipped for a full comparison. As shown in Table IX, we select four kinds of feature fusing networks: P2B-xorr[21], MLVSNet-xorr[30], V2B-xorr[18], and STNet-xorr[23]. We find that EasyTrack++ achieves both the best accuracy and the fastest running speed. This demonstrates that the proposed target-aware feature learning network improves accuracy and running speed at the same time.

Localization in the BEV or 3D feature space. As discussed earlier, we regress and classify in a relatively dense BEV feature map instead of the 3D feature space. When facing sparse point cloud scenes, it is hard to generate high-quality proposals in 3D space, so such methods may fail to track the object effectively. To validate the superiority of our design, we preserve the feature learning network and replace the target localization network with the widely adopted VoteNet[28]-based strategy. However, due to the sparsity of point clouds, the voting stage suffers from heavy outliers and generates unreliable proposals. The results in the Car category in the KITTI dataset are shown in Table VII. We achieve better performance when regressing and classifying in the BEV feature space: 3.1/2.5 higher in the Success/Precision metrics. This further demonstrates the effectiveness of localization in the BEV feature space.

Ways to generate center points. In the proposed EasyTrack++, we design a center points interaction strategy to perform a secondary interaction that provides more detailed target information. The quality of center points has a significant impact on the tracking performance.
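Cropping center points around a predicted target center can be done with a radius-based ball query. The sketch below is a generic NumPy illustration; the radius, the fixed point budget, and the padding rule are assumed values, not the paper's exact configuration:

```python
import numpy as np

def ball_query(points, center, radius=2.0, k=128):
    """Return up to k points within `radius` of `center` from an (N, 3) array.
    If fewer than k points fall inside the ball, pad by repeating the
    nearest in-radius point so downstream layers see a fixed-size set.
    (Radius, k, and padding rule are illustrative assumptions.)"""
    d = np.linalg.norm(points - center, axis=1)
    inside = np.where(d <= radius)[0]
    if inside.size == 0:
        inside = np.array([np.argmin(d)])  # fall back to the single nearest point
    if inside.size >= k:
        chosen = inside[:k]                # keep the first k in-radius points
    else:
        pad = np.full(k - inside.size, inside[np.argmin(d[inside])])
        chosen = np.concatenate([inside, pad])
    return points[chosen]

# Deterministic toy example: only two of the four points lie inside radius 1
pts = np.array([[0.1, 0.0, 0.0], [5.0, 5.0, 5.0],
                [0.5, 0.5, 0.5], [10.0, 0.0, 0.0]])
crop = ball_query(pts, np.zeros(3), radius=1.0, k=4)
print(crop.shape)  # (4, 3): padded with repeats of the nearest in-radius point
```

A KNN variant would instead take the k smallest entries of `d` regardless of radius; the ball query's hard radius is what keeps distant background points out of the crop in sparse scenes.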
In fact, we consider two ways to define the center points: ball query and K-nearest-neighbor (KNN), two commonly used algorithms for capturing local information in point cloud scenes. Meanwhile, the number of center points is also a key parameter. We conduct extensive ablation experiments in the Car category in the KITTI dataset; the results are illustrated in Table XI. We find that ball query shows better performance than the KNN algorithm with the same number of points, and as the number of points increases, the tracking performance first increases and then stabilizes. Finally, we select ball query to crop center points and set the number of points to 128, since this achieves the best tracking accuracy.

TABLE X: Results of different template generating schemes. We report the Success/Precision metrics in the Car category in the KITTI dataset.

Scheme | P2B | BAT | PTT | V2B | PTTR | STNet | EasyTrack | EasyTrack++
---|---|---|---|---|---|---|---|---
First GT | 46.7/59.7 | 51.8/65.5 | 62.9/76.5 | 67.8/79.3 | 55.0/65.6 | 70.8/82.4 | 70.9/82.6 | 71.3/82.9
Previous result | 53.1/68.9 | 59.2/75.6 | 64.9/77.5 | 70.0/81.3 | 65.0/77.1 | 66.0/76.6 | 69.6/80.1 | 69.8/80.3
First GT & Previous result | 56.2/72.8 | 60.5/77.7 | 67.8/81.8 | 70.5/81.3 | 65.2/77.4 | 72.1/84.0 | 72.8/84.0 | 73.4/84.4
All previous results | 51.4/66.8 | 55.8/71.4 | 59.8/74.5 | 69.8/81.2 | 63.1/74.8 | 73.3/85.4 | 70.1/81.4 | 70.5/81.8

TABLE XI: Results of different ways to generate the center points for the secondary interaction in the proposed EasyTrack++.

Method | Number | Success | Precision
---|---|---|---
KNN | 32 | 72.4 | 83.6
KNN | 64 | 72.9 | 83.9
KNN | 128 | 73.2 | 84.2
KNN | 256 | 73.2 | 84.1
Ball Query | 32 | 72.6 | 83.7
Ball Query | 64 | 73.1 | 83.9
Ball Query | 128 | 73.4 | 84.4
Ball Query | 256 | 73.2 | 84.1

Different template generating schemes. The quality of the template has a great impact on the performance of the tracker, since the target is given in the template.
An incomplete template provides ambiguous target information. To find the best scheme, we consider four ways to generate the template: using the first ground truth, using the previous result, using all previous results, and merging the first ground truth with the previous result. The results of the different schemes compared to other trackers in the Car category are shown in Table X. EasyTrack outperforms the other trackers in most cases, and EasyTrack++ further improves the tracking performance under all schemes. However, when only the previous result is used to generate the template, V2B[18] achieves the best performance, because V2B uses an auxiliary shape completion network to capture more shape information, which makes the network structure more complex. When using all previous results, STNet[23] achieves the best performance but at a very low speed. Considering the balance between speed and accuracy, all the trackers use the first ground truth and the previous result to generate the template, and our methods achieve the best performance under this scheme.

Figure 13: Results of setting different $n$ values in the target-aware feature learning network.

## 5 Conclusions

In this paper, we propose a new baseline for 3D single object tracking in point clouds, abbreviated as EasyTrack, which simplifies the two-stream multi-stage 3D Siamese trackers and unifies 3D feature extraction and target information integration, building on masked point cloud tracking feature pre-training. We then design a localization network in the BEV feature space for accurate regression and classification. In addition, we design an enhanced version named EasyTrack++, which develops a center points interaction (CPI) strategy to reduce the ambiguous target description caused by background information in noisy point clouds.
Extensive experiments in the KITTI, nuScenes and Waymo datasets demonstrate that EasyTrack achieves state-of-the-art performance and runs at a real-time speed. Due to the rich texture information provided by RGB data for incomplete point clouds, future plans include complementing point clouds with 2D images and proposing a multi-modal one-stream 3D tracker. ## References * [1] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, “Fully-convolutional siamese networks for object tracking,” in _ECCV 2016 Workshops_ , 2016, pp. 850–865. * [2] B. Li, J. Yan, W. Wu, Z. Zhu, and X. Hu, “High performance visual tracking with siamese region proposal network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8971–8980. * [3] B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan, “Siamrpn++: Evolution of siamese visual tracking with very deep networks,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 4282–4291. * [4] Y. Xu, Z. Wang, Z. Li, Y. Yuan, and G. Yu, “Siamfc++: Towards robust and accurate visual tracking with target estimation guidelines,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 07, 2020, pp. 12 549–12 556. * [5] Z. Chen, B. Zhong, G. Li, S. Zhang, and R. Ji, “Siamese box adaptive network for visual tracking,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2020, pp. 6668–6677. * [6] X. Chen, B. Yan, J. Zhu, D. Wang, X. Yang, and H. Lu, “Transformer tracking,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 8126–8135. * [7] B. Yan, H. Peng, J. Fu, D. Wang, and H. Lu, “Learning spatio-temporal transformer for visual tracking,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 10 448–10 457. * [8] L. Lin, H. Fan, Y. Xu, and H. 
Ling, “Swintrack: A simple and strong baseline for transformer tracking,” _arXiv preprint arXiv:2112.00995_ , 2021. * [9] B. Ye, H. Chang, B. Ma, S. Shan, and X. Chen, “Joint feature learning and relation modeling for tracking: A one-stream framework,” in _European Conference on Computer Vision_. Springer, 2022, pp. 341–357. * [10] C. Saltori, F. Galasso, G. Fiameni, N. Sebe, F. Poiesi, and E. Ricci, “Compositional semantic mix for domain adaptation in point cloud segmentation,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2023. * [11] K. Chitta, A. Prakash, B. Jaeger, Z. Yu, K. Renz, and A. Geiger, “Transfuser: Imitation with transformer-based sensor fusion for autonomous driving,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2022. * [12] Q. Hu, B. Yang, L. Xie, S. Rosa, Y. Guo, Z. Wang, N. Trigoni, and A. Markham, “Learning semantic segmentation of large-scale point clouds with random sampling,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 44, no. 11, pp. 8338–8354, 2021. * [13] S. Ye, D. Chen, S. Han, and J. Liao, “Robust point cloud segmentation with noisy annotations,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 45, no. 6, pp. 7696–7710, 2022. * [14] A. Bibi, T. Zhang, and B. Ghanem, “3d part-based sparse tracker with automatic synchronization and registration,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2016, pp. 1439–1448. * [15] U. Kart, J.-K. Kamarainen, and J. Matas, “How to make an rgbd tracker?” in _Proceedings of the European Conference on Computer Vision (ECCV) Workshops_ , 2018, pp. 0–0. * [16] Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, and M. Bennamoun, “Deep learning for 3d point clouds: A survey,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 43, no. 12, pp. 4338–4364, 2020. * [17] C. Zheng, X. Yan, H. Zhang, B. Wang, S. Cheng, S. Cui, and Z. 
Li, “An effective motion-centric paradigm for 3d single object tracking in point clouds,” _arXiv preprint arXiv:2303.12535_ , 2023. * [18] L. Hui, L. Wang, M. Cheng, J. Xie, and J. Yang, “3d siamese voxel-to-bev tracker for sparse point clouds,” _arXiv preprint arXiv:2111.04426_ , 2021. * [19] C. Zheng, X. Yan, H. Zhang, B. Wang, S. Cheng, S. Cui, and Z. Li, “Beyond 3d siamese tracking: A motion-centric paradigm for 3d single object tracking in point clouds,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 8111–8120. * [20] M. Park, H. Seong, W. Jang, and E. Kim, “Graph-based point tracker for 3d object tracking in point clouds,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 36, no. 2, 2022, pp. 2053–2061. * [21] H. Qi, C. Feng, Z. Cao, F. Zhao, and Y. Xiao, “P2b: Point-to-box network for 3d object tracking in point clouds,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 6329–6338. * [22] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” _arXiv preprint arXiv:1706.02413_ , 2017. * [23] L. Hui, L. Wang, L. Tang, K. Lan, J. Xie, and J. Yang, “3d siamese transformer network for single object tracking on point clouds,” in _European Conference on Computer Vision_. Springer, 2022, pp. 293–310. * [24] Y. Cui, J. Shan, Z. Gu, Z. Li, and Z. Fang, “Exploiting more information in sparse point cloud for 3d single object tracking,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 4, pp. 11 926–11 933, 2022. * [25] C. Zhou, Z. Luo, Y. Luo, T. Liu, L. Pan, Z. Cai, H. Zhao, and S. Lu, “Pttr: Relational 3d point cloud object tracking with transformer,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 8531–8540. * [26] B. Fan, K. Wang, H. Zhang, and J.
Tian, “Accurate 3d single object tracker with local-to-global feature refinement,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 4, pp. 12 211–12 218, 2022. * [27] S. Giancola, J. Zarzar, and B. Ghanem, “Leveraging shape completion for 3d siamese tracking,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 1359–1368. * [28] C. R. Qi, O. Litany, K. He, and L. J. Guibas, “Deep hough voting for 3d object detection in point clouds,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 9277–9286. * [29] C. Zheng, X. Yan, J. Gao, W. Zhao, W. Zhang, Z. Li, and S. Cui, “Box-aware feature enhancement for single object tracking on point clouds,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 13 199–13 208. * [30] Z. Wang, Q. Xie, Y.-K. Lai, J. Wu, K. Long, and J. Wang, “Mlvsnet: Multi-level voting siamese network for 3d visual tracking,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 3101–3110. * [31] H. Zou, C. Zhang, Y. Liu, W. Li, F. Wen, and H. Zhang, “Pointsiamrcnn: Target-aware voxel-based siamese tracker for point clouds,” in _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2021, pp. 7029–7035. * [32] Y. Cui, Z. Fang, J. Shan, Z. Gu, and S. Zhou, “3d object tracking with transformer,” in _32nd British Machine Vision Conference 2021, BMVC 2021, Online, November 22-25, 2021_. BMVA Press, 2021, p. 317. * [33] J. Shan, S. Zhou, Z. Fang, and Y. Cui, “Ptt: Point-track-transformer module for 3d single object tracking in point clouds,” _arXiv preprint arXiv:2108.06455_ , 2021. * [34] Y. Xia, Q. Wu, W. Li, A. B. Chan, and U. Stilla, “A lightweight and detector-free 3d single object tracker on point clouds,” _IEEE Transactions on Intelligent Transportation Systems_ , 2023. * [35] Z. Guo, Y. Mao, W. Zhou, M. Wang, and H. 
Li, “Cmt: Context-matching-guided transformer for 3d tracking in point clouds,” in _European Conference on Computer Vision_. Springer, 2022, pp. 95–111. * [36] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” _Advances in neural information processing systems_ , vol. 28, 2015. * [37] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778. * [38] Y. Cui, C. Jiang, G. Wu, and L. Wang, “Mixformer: End-to-end tracking with iterative mixed attention,” _arXiv preprint arXiv:2302.02814_ , 2023. * [39] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” _Advances in neural information processing systems_ , vol. 30, 2017. * [40] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun, “Point transformer,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 16 259–16 268. * [41] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu, “Pct: Point cloud transformer,” _Computational Visual Media_ , vol. 7, no. 2, pp. 187–199, 2021. * [42] X. Wu, Y. Lao, L. Jiang, X. Liu, and H. Zhao, “Point transformer v2: Grouped vector attention and partition-based pooling,” _arXiv preprint arXiv:2210.05666_ , 2022. * [43] X. Pan, Z. Xia, S. Song, L. E. Li, and G. Huang, “3d object detection with pointformer,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 7463–7472. * [44] J. Mao, Y. Xue, M. Niu, H. Bai, J. Feng, X. Liang, H. Xu, and C. Xu, “Voxel transformer for 3d object detection,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 3164–3173. * [45] L. Fan, Z. Pang, T. Zhang, Y.-X. Wang, H. Zhao, F. Wang, N. Wang, and Z. 
Zhang, “Embracing single stride 3d object detector with sparse transformer,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 8458–8468. * [46] X. Bai, Z. Hu, X. Zhu, Q. Huang, Y. Chen, H. Fu, and C.-L. Tai, “Transfusion: Robust lidar-camera fusion for 3d object detection with transformers,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 1090–1099. * [47] A. Xiao, J. Huang, D. Guan, X. Zhang, S. Lu, and L. Shao, “Unsupervised point cloud representation learning with deep neural networks: A survey,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2023. * [48] S. Xie, J. Gu, D. Guo, C. R. Qi, L. Guibas, and O. Litany, “Pointcontrast: Unsupervised pre-training for 3d point cloud understanding,” in _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16_. Springer, 2020, pp. 574–591. * [49] X. Yu, L. Tang, Y. Rao, T. Huang, J. Zhou, and J. Lu, “Point-bert: Pre-training 3d point cloud transformers with masked point modeling,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 19 313–19 322. * [50] Y. Pang, W. Wang, F. E. Tay, W. Liu, Y. Tian, and L. Yuan, “Masked autoencoders for point cloud self-supervised learning,” in _European conference on computer vision_. Springer, 2022, pp. 604–621. * [51] I. Tang, E. Zhang, and R. Gu, “Point-peft: Parameter-efficient fine-tuning for 3d pre-trained models,” _arXiv preprint arXiv:2310.03059_ , 2023. * [52] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” _arXiv preprint arXiv:1810.04805_ , 2018. * [53] Y. Zhou and O. Tuzel, “Voxelnet: End-to-end learning for point cloud based 3d object detection,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 4490–4499. * [54] R. Ge, Z. 
Ding, Y. Hu, Y. Wang, S. Chen, L. Huang, and Y. Li, “Afdet: Anchor free one stage 3d object detection,” _arXiv preprint arXiv:2006.12671_ , 2020. * [55] B. Chen, P. Li, L. Bai, L. Qiao, Q. Shen, B. Li, W. Gan, W. Wu, and W. Ouyang, “Backbone is all your need: a simplified architecture for visual object tracking,” in _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXII_. Springer, 2022, pp. 375–392. * [56] Y. Cui, Z. Li, and Z. Fang, “Sttracker: Spatio-temporal tracker for 3d single object tracking,” _IEEE Robotics and Automation Letters_ , 2023. * [57] P. Wang, L. Ren, S. Wu, J. Yang, E. Yu, H. Yu, and X. Li, “Implicit and efficient point cloud completion for 3d single object tracking,” _IEEE Robotics and Automation Letters_ , vol. 8, no. 4, pp. 1935–1942, 2023. * [58] J. Liu, Y. Wu, M. Gong, Q. Miao, W. Ma, and F. Xie, “Instance-guided point cloud single object tracking with inception transformer,” _IEEE Transactions on Instrumentation and Measurement_ , 2023. * [59] T.-X. Xu, Y.-C. Guo, Y.-K. Lai, and S.-H. Zhang, “Cxtrack: Improving 3d point cloud tracking with contextual information,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2023, pp. 1084–1093. * [60] M. Wang, T. Ma, X. Zuo, J. Lv, and Y. Liu, “Correlation pyramid network for 3d single object tracking,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2023, pp. 3215–3224. * [61] T. Ma, M. Wang, J. Xiao, H. Wu, and Y. Liu, “Synchronize feature extracting and matching: A single branch framework for 3d object tracking,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2023, pp. 9953–9963. * [62] T.-X. Xu, Y.-C. Guo, Y.-K. Lai, and S.-H. Zhang, “Mbptrack: Improving 3d point cloud tracking with memory networks and box priors,” _arXiv preprint arXiv:2303.05071_ , 2023. * [63] J. Liu, Y. Wu, M. Gong, Q. Miao, W. Ma, and C.
Qin, “M3sot: Multi-frame, multi-field, multi-space 3d single object tracking,” _arXiv preprint arXiv:2312.06117_ , 2023. * [64] K. Lan, H. Jiang, and J. Xie, “Temporal-aware siamese tracker: Integrate temporal context for 3d object tracking,” in _Proceedings of the Asian Conference on Computer Vision_ , 2022, pp. 399–414. * [65] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in _2012 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 2012, pp. 3354–3361. * [66] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, “nuscenes: A multimodal dataset for autonomous driving,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 11 621–11 631. * [67] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine _et al._ , “Scalability in perception for autonomous driving: Waymo open dataset,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 2446–2454. * [68] Z. Pang, Z. Li, and N. Wang, “Model-free vehicle tracking and state estimation in point cloud sequences,” in _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2021, pp. 8075–8082. * [69] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su _et al._ , “Shapenet: An information-rich 3d model repository,” _arXiv preprint arXiv:1512.03012_ , 2015.

Baojie Fan is a professor in the College of Automation at Nanjing University of Posts and Telecommunications. He received the Ph.D. degree in pattern recognition and intelligent systems from the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences. His major research interests include robot vision systems, 2D/3D object tracking, and pattern recognition.
He has published more than 20 papers in top conferences and journals, such as IEEE TIP, TMM, TCSVT, PR, RAL, ICRA, ECCV, etc.

Wuyang Zhou is currently pursuing the Ph.D. degree with the College of Automation at Nanjing University of Posts and Telecommunications. His research focuses on point cloud analysis and 3D object tracking.

Kai Wang is currently pursuing the Ph.D. degree with the College of Automation at Nanjing University of Posts and Telecommunications. His research focuses on 3D object detection and tracking and multi-modal object tracking. He has published multiple papers in top conferences and journals, such as ICRA, RAL, etc.

Shijun Zhou is currently pursuing the Ph.D. degree with the State Key Laboratory of Robotics, Shenyang Institute of Automation, University of Chinese Academy of Sciences. Her research focuses on computer vision methods in scattering media.

Fengyu Xu received the M.S. degree from Hefei University of Technology, Hefei, China, in 2005 and the Ph.D. degree from Southeast University, Nanjing, China, in 2009. From May 2016 to April 2017, he was a Visiting Scientist with the Department of Mechanical Engineering, Michigan State University, East Lansing, MI, USA. He is currently a Full Professor and the Associate Dean of the College of Automation and College of Artificial Intelligence, Nanjing University of Posts and Telecommunications. His current research interests include inspection robots, soft robotics, machine vision, and intelligent manufacturing.

Jiandong Tian (Senior Member, IEEE) received the B.S. degree from the Department of Automation, Heilongjiang University, China, in 2005, and the Ph.D. degree from the Shenyang Institute of Automation, Chinese Academy of Sciences, in 2011. He is currently a Professor with the Shenyang Institute of Automation, Chinese Academy of Sciences.
He has published more than 50 papers in top journals and conferences in the fields of robot vision, image processing, and pattern recognition. He has also served as a reviewer for several top journals and conferences, such as T-PAMI, TIP, JMLR, TKDE, IJCV, TRO, CVPR, ICCV, ICRA, RSS, and ECCV.
# Array Programming with NumPy

Charles R. Harris Independent Researcher, Logan, Utah, USA K. Jarrod Millman Brain Imaging Center, University of California, Berkeley, Berkeley, CA, USA Division of Biostatistics, University of California, Berkeley, Berkeley, CA, USA Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, USA Stéfan J. van der Walt Applied Mathematics, Stellenbosch University, Stellenbosch, South Africa Brain Imaging Center, University of California, Berkeley, Berkeley, CA, USA Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, USA Ralf Gommers Quansight LLC, Austin, TX, USA Pauli Virtanen Department of Physics and Nanoscience Center, University of Jyväskylä, Jyväskylä, Finland David Cournapeau Mercari JP, Tokyo, Japan Eric Wieser Department of Engineering, University of Cambridge, Cambridge, UK Julian Taylor Independent Researcher, Karlsruhe, Germany Sebastian Berg Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, USA Nathaniel J. Smith Independent Researcher, Berkeley, CA, USA Robert Kern Enthought, Inc., Austin, TX, USA Matti Picus Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, USA Stephan Hoyer Google Research, Mountain View, CA, USA Marten H.
van Kerkwijk Department of Astronomy & Astrophysics, University of Toronto, Toronto, ON, Canada Matthew Brett Brain Imaging Center, University of California, Berkeley, Berkeley, CA, USA School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK Allan Haldane Department of Physics, Temple University, Philadelphia, PA, USA Jaime Fernández del Río Google, Zurich, Switzerland Mark Wiebe Department of Physics and Astronomy, The University of British Columbia, Vancouver, BC, Canada Amazon, Seattle, Washington, USA Pearu Peterson Quansight LLC, Austin, TX, USA Independent Researcher, Saue, Estonia Department of Mechanics and Applied Mathematics, Institute of Cybernetics at Tallinn Technical University, Tallinn, Estonia Pierre Gérard-Marchant Department of Biological and Agricultural Engineering, University of Georgia, Athens, GA France-IX Services, Paris, France Kevin Sheppard Department of Economics, University of Oxford, Oxford, UK Tyler Reddy CCS-7, Los Alamos National Laboratory, Los Alamos, NM, USA Warren Weckesser Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, USA Hameer Abbasi Quansight LLC, Austin, TX, USA Christoph Gohlke Laboratory for Fluorescence Dynamics, Biomedical Engineering Department, University of California, Irvine, Irvine, CA, USA Travis E. Oliphant Quansight LLC, Austin, TX, USA

###### Abstract

Array programming provides a powerful, compact, expressive syntax for accessing, manipulating, and operating on data in vectors, matrices, and higher-dimensional arrays [1]. NumPy is the primary array programming library for the Python language [2, 3, 4, 5]. It plays an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, material science, engineering, finance, and economics.
For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [6] and the first imaging of a black hole [7]. Here we show how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analyzing scientific data. NumPy is the foundation upon which the entire scientific Python universe is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Because of its central position in the ecosystem, NumPy increasingly plays the role of an interoperability layer between these new array computation libraries. Figure 1: The NumPy array incorporates several fundamental array concepts. a, The NumPy array data structure and its associated metadata fields. b, Indexing an array with slices and steps. These operations return a _view_ of the original data. c, Indexing an array with masks, scalar coordinates, or other arrays, so that it returns a copy of the original data. In the bottom example, an array is indexed with other arrays; this broadcasts the indexing arguments before performing the lookup. d, Vectorization efficiently applies operations to groups of elements. e, Broadcasting in the multiplication of two- dimensional arrays. f, Reduction operations act along one or more axes. In this example, an array is summed along select axes to produce a vector, or along two axes consecutively to produce a scalar. g, Example NumPy code, illustrating some of these concepts. Two Python array packages existed before NumPy. The Numeric package began in the mid-1990s and provided an array object and array-aware functions in Python, written in C, and linking to standard fast implementations of linear algebra [8, 9]. One of its earliest uses was to steer C++ applications for inertial confinement fusion research at Lawrence Livermore National Laboratory [10]. 
To handle large astronomical images coming from the Hubble Space Telescope, a reimplementation of Numeric, called Numarray, added support for structured arrays, flexible indexing, memory mapping, byte-order variants, more efficient memory use, flexible IEEE error handling capabilities, and better type casting rules [11]. While Numarray was highly compatible with Numeric, the two packages had enough differences that it divided the community, until 2005, when NumPy emerged as a “best of both worlds” unification [12]—combining Numarray’s features with Numeric’s performance on small arrays and its rich C _Application Programming Interface_ (API). Now, fifteen years later, NumPy underpins almost every Python library that does scientific or numerical computation including SciPy [13], Matplotlib [14], pandas [15], scikit-learn [16], and scikit-image [17]. It is a community-developed, open-source library, which provides a multidimensional Python array object along with array-aware functions that operate on it. Because of its inherent simplicity, the NumPy array is the _de facto_ exchange format for array data in Python. NumPy operates on in-memory arrays using the CPU. To utilize modern, specialized storage and hardware, there has been a recent proliferation of Python array packages. Unlike with the Numarray and Numeric divide, it is now much harder for these new libraries to fracture the user community—given how much work already builds on top of NumPy. However, to provide the ecosystem with access to new and exploratory technologies, NumPy is transitioning into a central coordinating mechanism that specifies a well-defined array programming API and dispatches it, as appropriate, to specialized array implementations. ## NumPy arrays The NumPy array is a data structure that efficiently stores and accesses multidimensional arrays [18], also known as tensors, and enables a wide variety of scientific computation. 
It consists of a pointer to memory, along with metadata used to interpret the data stored there, notably data type, shape, and strides (Fig. 1a). The _data type_ describes the nature of elements stored in an array. An array has a single data type, and each array element occupies the same number of bytes in memory. Examples of data types include real and complex numbers (of lower and higher precision), strings, timestamps, and pointers to Python objects. The _shape_ of an array determines the number of elements along each axis, and the number of axes is the array’s dimensionality. For example, a vector of numbers can be stored as a one-dimensional array of shape $N$, while color videos are four-dimensional arrays of shape $(T,M,N,3)$. _Strides_ are necessary to interpret computer memory, which stores elements linearly, as multidimensional arrays. They describe the number of bytes to move forward in memory to jump from row to row, column to column, and so forth. Consider, for example, a 2-D array of floating-point numbers with shape $(4,3)$, where each element occupies 8 bytes in memory. To move between consecutive columns, we need to jump forward 8 bytes in memory, and to access the next row $3\times 8=24$ bytes. The strides of that array are therefore $(24,8)$. NumPy can store arrays in either C or Fortran memory order, iterating first over either rows or columns. This allows external libraries written in those languages to access NumPy array data in memory directly. Users interact with NumPy arrays using indexing (to access subarrays or individual elements), operators (e.g., $+$, $-$, $\times$ for vectorized operations and $@$ for matrix multiplication), as well as array-aware functions; together, these provide an easily readable, expressive, high-level API for array programming, while NumPy deals with the underlying mechanics of making operations fast. _Indexing_ an array returns single elements, subarrays, or elements that satisfy a specific condition (Fig. 1b).
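As a minimal sketch (not part of the paper's figures), the shape, strides, and view semantics described above can be inspected directly from Python:

```python
import numpy as np

# A 4x3 array of 8-byte floats: advancing one row skips 3*8 = 24 bytes,
# advancing one column skips 8 bytes, so the strides are (24, 8).
a = np.zeros((4, 3), dtype=np.float64)
print(a.shape)    # (4, 3)
print(a.strides)  # (24, 8)

# Fortran (column-major) order stores columns contiguously instead.
f = np.zeros((4, 3), dtype=np.float64, order='F')
print(f.strides)  # (8, 32)

# Slicing returns a view: the same memory reinterpreted with new strides.
b = a[::2, :]       # every other row
print(b.strides)    # (48, 8)
print(b.base is a)  # True: b shares a's memory
```

Note that the view `b` is obtained without copying any data; only the metadata (shape and strides) changes.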
Arrays can even be indexed using other arrays (Fig. 1c). Wherever possible, indexing that retrieves a subarray returns a view on the original array, such that data is shared between the two arrays. This provides a powerful way to operate on subsets of array data while limiting memory usage. To complement the array syntax, NumPy includes functions that perform _vectorized_ calculations on arrays, including arithmetic, statistics, and trigonometry (Fig. 1d). Vectorization—operating on whole arrays rather than their individual elements—is essential to array programming. This means that operations that would take many tens of lines to express in languages such as C can often be implemented as a single, clear Python expression. This results in concise code and frees users to focus on the details of their analysis, while NumPy handles looping over array elements near-optimally, taking into consideration, for example, strides, to best utilize the computer’s fast cache memory. When performing a vectorized operation (such as addition) on two arrays with the same shape, it is clear what should happen. Through _broadcasting_, NumPy allows the dimensions to differ, while still producing results that appeal to intuition. A trivial example is the addition of a scalar value to an array, but broadcasting also generalizes to more complex examples such as scaling each column of an array or generating a grid of coordinates. In broadcasting, one or both arrays are virtually duplicated (that is, without copying any data in memory), so that the shapes of the operands match (Fig. 1e). Broadcasting is also applied when an array is indexed using arrays of indices (Fig. 1c). Other array-aware functions, such as sum, mean, and maximum, perform element-by-element _reductions_, aggregating results across one, multiple, or all axes of a single array. For example, summing an $n$-dimensional array over $d$ axes results in an $(n-d)$-dimensional array (Fig. 1f).
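Vectorization, broadcasting, and axis reductions can be sketched in a few lines (an illustrative example, not taken from the paper's figures):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# Vectorized arithmetic: one expression, no explicit Python loop.
y = x * 2 + 1

# Broadcasting: a length-4 row vector is virtually replicated
# across the 3 rows, scaling each column without copying data.
scale = np.array([1.0, 10.0, 100.0, 1000.0])
scaled = x * scale

# Reductions aggregate along axes: summing over axis 0 leaves a
# vector; summing over both axes leaves a scalar.
col_sums = x.sum(axis=0)  # shape (4,)
total = x.sum()           # scalar

print(col_sums)  # [12 15 18 21]
print(total)     # 66
```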
NumPy also includes array-aware functions for creating, reshaping, concatenating, and padding arrays; searching, sorting, and counting data; and reading and writing files. It provides extensive support for generating pseudorandom numbers, includes an assortment of probability distributions, and performs accelerated linear algebra, utilizing one of several backends such as OpenBLAS [19, 20] or Intel MKL optimized for the CPUs at hand. Altogether, the combination of a simple in-memory array representation, a syntax that closely mimics mathematics, and a variety of array-aware utility functions forms a productive and powerfully expressive array programming language. Figure 2: NumPy is the base of the scientific Python ecosystem. Essential libraries and projects that depend on NumPy’s API gain access to new array implementations that support NumPy’s array protocols (Fig. 3). ## Scientific Python ecosystem Python is an open-source, general-purpose, interpreted programming language well-suited to standard programming tasks such as cleaning data, interacting with web resources, and parsing text. Adding fast array operations and linear algebra allows scientists to do all their work within a single language—and one that has the advantage of being famously easy to learn and teach, as witnessed by its adoption as a primary learning language in many universities. Even though NumPy is not part of Python’s standard library, it benefits from a good relationship with the Python developers. Over the years, the Python language has added new features and special syntax so that NumPy would have a more succinct and easier to read array notation. Since it is not part of the standard library, NumPy is able to dictate its own release policies and development patterns. SciPy and Matplotlib are tightly coupled with NumPy—in terms of history, development, and use. SciPy provides fundamental algorithms for scientific computing, including mathematical, scientific, and engineering routines. 
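A brief sketch of the utility functions mentioned above, assuming a recent NumPy with the `Generator` random API (the specific arrays and seed here are illustrative):

```python
import numpy as np

# Pseudorandom numbers via the Generator API (seeded for reproducibility).
rng = np.random.default_rng(seed=0)
samples = rng.normal(loc=0.0, scale=1.0, size=(1000,))

# Linear algebra, accelerated by whichever BLAS/LAPACK backend
# (e.g., OpenBLAS or Intel MKL) NumPy was built against.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)  # solve A @ x = b
assert np.allclose(A @ x, b)

# Creation, reshaping, and concatenation utilities.
grid = np.concatenate([np.ones((2, 2)), np.zeros((2, 2))], axis=1)
print(grid.shape)  # (2, 4)
```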
Matplotlib generates publication-ready figures and visualizations. The combination of NumPy, SciPy, and Matplotlib, together with an advanced interactive environment like IPython [21] or Jupyter [22], provides a solid foundation for array programming in Python. The scientific Python ecosystem (Fig. 2) builds on top of this foundation to provide several widely used _technique-specific_ libraries [16, 17, 23], which in turn underlie numerous _domain-specific_ projects [24, 25, 26, 27, 28, 29]. NumPy, at the base of the ecosystem of array-aware libraries, sets documentation standards, provides array testing infrastructure, and adds build support for Fortran and other compilers. Many research groups have designed large, complex scientific libraries that add _application-specific_ functionality to the ecosystem. For example, the eht-imaging library [30], developed by the Event Horizon Telescope collaboration for radio interferometry imaging, analysis, and simulation, relies on many lower-level components of the scientific Python ecosystem. NumPy arrays are used to store and manipulate numerical data at every step in the processing chain: from raw data through calibration and image reconstruction. SciPy supplies tools for general image processing tasks such as filtering and image alignment, while scikit-image, an image processing library that extends SciPy, provides higher-level functionality such as edge filters and Hough transforms. The scipy.optimize module performs mathematical optimization. NetworkX [23], a package for complex network analysis, is used to verify image comparison consistency. Astropy [24, 25] handles standard astronomical file formats and computes time/coordinate transformations. Matplotlib is used to visualize data and to generate the final image of the black hole.
The interactive environment created by the array programming foundation along with the surrounding ecosystem of tools—inside of IPython or Jupyter—is ideally suited to exploratory data analysis. Users fluidly inspect, manipulate, and visualize their data, and rapidly iterate to refine programming statements. These statements are then stitched together into imperative or functional programs, or notebooks containing both computation and narrative. Scientific computing beyond exploratory work is often done in a text editor or an integrated development environment (IDE) such as Spyder. This rich and productive environment has made Python popular for scientific research. To complement this facility for exploratory work and rapid prototyping, NumPy has developed a culture of employing time-tested software engineering practices to improve collaboration and reduce error [31]. This culture is not only adopted by leaders in the project but also enthusiastically taught to newcomers. The NumPy team was early in adopting distributed revision control and code review to improve collaboration on code, and continuous testing that runs an extensive battery of automated tests for every proposed change to NumPy. The project also has comprehensive, high-quality documentation, integrated with the source code [32, 33, 34]. This culture of using best practices for producing reliable scientific software has been adopted by the ecosystem of libraries that build on NumPy. For example, in a recent award given by the Royal Astronomical Society to Astropy, they state: > _The Astropy Project has provided hundreds of junior scientists with experience in professional-standard software development practices including use of version control, unit testing, code review and issue tracking procedures.
This is a vital skill set for modern researchers that is often missing from formal university education in physics or astronomy._ Community members explicitly work to address this lack of formal education through courses and workshops [35, 36, 37]. The recent rapid growth of data science, machine learning, and artificial intelligence has further and dramatically boosted the usage of scientific Python. Examples of its significant application, such as the eht-imaging library, now exist in almost every discipline in the natural and social sciences. These tools have become _the primary_ software environment in many fields. NumPy and its ecosystem are commonly taught in university courses, boot camps, and summer schools, and are the focus of community conferences and workshops worldwide. NumPy and its API have become truly ubiquitous. ## Array proliferation and interoperability Figure 3: NumPy’s API and array protocols expose new arrays to the ecosystem. In this example, NumPy’s mean function is called on a Dask array. The call succeeds by dispatching to the appropriate library implementation (i.e., Dask in this case) and results in a new Dask array. Compare this code to the example code in Fig. 1g. NumPy provides in-memory, multidimensional, homogeneously typed (i.e., single pointer and strided) arrays on CPUs. It runs on machines ranging from embedded devices to the world’s largest supercomputers, with performance approaching that of compiled languages. For most of its existence, NumPy addressed the vast majority of array computation use cases. However, scientific data sets now routinely exceed the memory capacity of a single machine and may be stored on multiple machines or in the cloud. In addition, the recent need to accelerate deep learning and artificial intelligence applications has led to the emergence of specialized accelerator hardware, including graphics processing units (GPUs), tensor processing units (TPUs), and field-programmable gate arrays (FPGAs).
Due to its in-memory data model, NumPy is currently unable to utilize such storage and specialized hardware directly. However, both distributed data and the parallel execution of GPUs, TPUs, and FPGAs map well to the _paradigm_ of array programming: a gap, therefore, existed between available modern hardware architectures and the tools necessary to leverage their computational power. The community’s efforts to fill this gap led to a proliferation of new array implementations. For example, each deep learning framework created its own arrays; PyTorch [38], Tensorflow [39], Apache MXNet [40], and JAX arrays all have the capability to run on CPUs and GPUs, in a distributed fashion, utilizing lazy evaluation to allow for additional performance optimizations. SciPy and PyData/Sparse both provide sparse arrays—which typically contain few non-zero values and store only those in memory for efficiency. In addition, there are projects that build on top of NumPy arrays as a data container and extend its capabilities. Distributed arrays are made possible that way by Dask, and labeled arrays—referring to dimensions of an array by name rather than by index for clarity, compare x[:, 1] vs. x.loc[:, 'time']—by xarray [41]. Such libraries often mimic the NumPy API, because it lowers the barrier to entry for newcomers and provides the wider community with a stable array programming interface. This, in turn, prevents disruptive schisms like the divergence of Numeric and Numarray. But exploring new ways of working with arrays is experimental by nature and, in fact, several promising libraries—such as Theano and Caffe—have already ceased development. And each time that a user decides to try a new technology, they must change import statements and ensure that the new library implements all the parts of the NumPy API they currently use.
Ideally, operating on specialized arrays using NumPy functions or semantics would simply work, so that users could write code once, and would then benefit from switching between NumPy arrays, GPU arrays, distributed arrays, and so forth, as appropriate. To support array operations between external array objects, NumPy therefore added the capability to act as a central coordination mechanism with a well-specified API (Fig. 2). To facilitate this _interoperability_ , NumPy provides “protocols” (or contracts of operation), that allow for specialized arrays to be passed to NumPy functions (Fig. 3). NumPy, in turn, dispatches operations to the originating library, as required. Over four hundred of the most popular NumPy functions are supported. The protocols are implemented by widely used libraries such as Dask, CuPy, xarray, and PyData/Sparse. Thanks to these developments, users can now, for example, scale their computation from a single machine to distributed systems using Dask. The protocols also compose well, allowing users to redeploy NumPy code at scale on distributed, multi-GPU systems via, for instance, CuPy arrays embedded in Dask arrays. Using NumPy’s high-level API, users can leverage highly parallel code execution on multiple systems with millions of cores, all with minimal code changes [42]. These array protocols are now a key feature of NumPy, and are expected to only increase in importance. As with the rest of NumPy, we iteratively refine and add protocol designs to improve utility and simplify adoption. ## Discussion NumPy combines the expressive power of _array programming_ , the performance of C, and the readability, usability, and versatility of Python in a mature, well-tested, well-documented, and community-developed library. Libraries in the scientific Python ecosystem provide fast implementations of most important algorithms. 
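The dispatch mechanism can be illustrated with NumPy's `__array_function__` protocol alone; the `LoggedArray` class below is a hypothetical stand-in for an external array library such as Dask or CuPy, not code from the paper:

```python
import numpy as np

class LoggedArray:
    """A toy duck array that participates in NumPy's
    __array_function__ protocol: NumPy functions called on it are
    dispatched back to this class, as in Fig. 3's Dask example."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # Unwrap any LoggedArray arguments, run the real NumPy
        # function, and re-wrap ndarray results in our own type.
        unwrapped = [a.data if isinstance(a, LoggedArray) else a
                     for a in args]
        result = func(*unwrapped, **kwargs)
        if isinstance(result, np.ndarray):
            return LoggedArray(result)
        return result

arr = LoggedArray([[1.0, 2.0], [3.0, 4.0]])
m = np.mean(arr)         # dispatched through __array_function__
print(m)                 # 2.5
s = np.sum(arr, axis=0)  # returns a LoggedArray, mirroring Fig. 3
print(type(s).__name__)  # LoggedArray
```

Because `np.mean` and `np.sum` are among the protocol-aware functions, the same user code works unchanged whether `arr` is an ndarray or a specialized array type.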
Where extreme optimization is warranted, compiled languages such as Cython [43], Numba [44], and Pythran [45], that extend Python and transparently accelerate bottlenecks, can be used. Because of NumPy’s simple memory model, it is easy to write low-level, hand-optimized code, usually in C or Fortran, to manipulate NumPy arrays and pass them back to Python. Furthermore, using array protocols, it is possible to utilize the full spectrum of specialized hardware acceleration with minimal changes to existing code. NumPy was initially developed by students, faculty, and researchers to provide an advanced, open-source array programming library for Python, which was free to use and unencumbered by license servers, dongles, and the like. There was a sense of building something consequential together, for the benefit of many others. Participating in such an endeavor, within a welcoming community of like-minded individuals, held a powerful attraction for many early contributors. These user-developers frequently had to write code from scratch to solve their own or their colleagues’ problems—often in low-level languages that precede Python, like Fortran [46] and C. To them, the advantages of an interactive, high-level array library were evident. The design of this new tool was informed by other powerful interactive programming languages for scientific computing such as Basis [47], Yorick [48], R [49], and APL [50], as well as commercial languages and environments like IDL and MATLAB. What began as an attempt to add an array object to Python became the foundation of a vibrant ecosystem of tools. Now, a large amount of scientific work depends on NumPy being correct, fast, and stable. It is no longer a small community project, but is core scientific infrastructure. The developer culture has matured: while initial development was highly informal, NumPy now has a roadmap and a process for proposing and discussing large changes. 
The project has formal governance structures and is fiscally sponsored by NumFOCUS, a nonprofit that promotes open practices in research, data, and scientific computing. Over the past few years, the project attracted its first funded development, sponsored by the Moore and Sloan Foundations, and received an award as part of the Chan Zuckerberg Initiative’s Essentials of Open Source Software program. With this funding, the project was (and is) able to have sustained focus over multiple months to implement substantial new features and improvements. That said, it still depends heavily on contributions made by graduate students and researchers in their free time. NumPy is no longer _just_ the foundational array library underlying the scientific Python ecosystem, but has also become the standard API for tensor computation and a central coordinating mechanism between array types and technologies in Python. Work continues to expand on and improve these interoperability features. Over the next decade, we will face several challenges. New devices will be developed, and existing specialized hardware will evolve, to meet diminishing returns on Moore’s law. There will be more, and a wider variety of, data science practitioners, a significant proportion of whom will be using NumPy. The scale of scientific data gathering will continue to expand, with the adoption of devices and instruments such as light sheet microscopes and the Large Synoptic Survey Telescope (LSST) [51]. New generation languages, interpreters, and compilers, such as Rust [52], Julia [53], and LLVM [54], will invent and determine the viability of new concepts and data structures. Through various mechanisms described in this paper, NumPy is poised to embrace such a changing landscape, and to continue playing a leading role in interactive scientific computation. To do so will require sustained funding from government, academia, and industry. 
But, importantly, it will also need a new generation of graduate students and other developers to engage, to build a NumPy that meets the needs of the next decade of data science. ## References * [1] K. E. Iverson, “Notation as a tool of thought,” Communications of the ACM, vol. 23, p. 444–465, Aug. 1980. * [2] P. F. Dubois, “Python: Batteries included,” Computing in Science & Engineering, vol. 9, no. 3, pp. 7–9, 2007. * [3] T. E. Oliphant, “Python for scientific computing,” Computing in Science & Engineering, vol. 9, pp. 10–20, May-June 2007. * [4] K. J. Millman and M. Aivazis, “Python for scientists and engineers,” Computing in Science & Engineering, vol. 13, no. 2, pp. 9–12, 2011. * [5] F. Pérez, B. E. Granger, and J. D. Hunter, “Python: an ecosystem for scientific computing,” Computing in Science & Engineering, vol. 13, no. 2, pp. 13–21, 2011. * [6] B. P. Abbott, R. Abbott, T. Abbott, M. Abernathy, F. Acernese, K. Ackley, C. Adams, T. Adams, P. Addesso, R. Adhikari, et al., “Observation of gravitational waves from a binary black hole merger,” Physical Review Letters, vol. 116, no. 6, p. 061102, 2016. * [7] A. A. Chael, M. D. Johnson, R. Narayan, S. S. Doeleman, J. F. Wardle, and K. L. Bouman, “High-resolution linear polarimetric imaging for the event horizon telescope,” The Astrophysical Journal, vol. 829, no. 1, p. 11, 2016. * [8] P. F. Dubois, K. Hinsen, and J. Hugunin, “Numerical Python,” Computers in Physics, vol. 10, no. 3, pp. 262–267, 1996. * [9] D. Ascher, P. F. Dubois, K. Hinsen, J. Hugunin, and T. E. Oliphant, “An open source project: Numerical Python,” 2001. * [10] T.-Y. Yang, G. Furnish, and P. F. Dubois, “Steering object-oriented scientific computations,” in Proceedings of TOOLS USA 97. International Conference on Technology of Object Oriented Systems and Languages, pp. 112–119, IEEE, 1997\. * [11] P. Greenfield, J. T. Miller, J. Hsu, and R. L. White, “numarray: A new scientific array package for Python,” PyCon DC, 2003. * [12] T. E. 
Oliphant, Guide to NumPy. Trelgol Publishing USA, 1st ed., 2006. * [13] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, I. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors, “SciPy 1.0—fundamental algorithms for scientific computing in Python,” Nature Methods, vol. 17, pp. 261–272, 2020. * [14] J. D. Hunter, “Matplotlib: A 2D graphics environment,” Computing in Science & Engineering, vol. 9, no. 3, pp. 90–95, 2007. * [15] W. McKinney, “Data structures for statistical computing in Python,” in Proceedings of the 9th Python in Science Conference (S. van der Walt and J. Millman, eds.), pp. 51 – 56, 2010. * [16] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and É. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, no. Oct, pp. 2825–2830, 2011. * [17] S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, T. Yu, and the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ, vol. 2, p. e453, 2014. * [18] S. van der Walt, S. C. Colbert, and G. Varoquaux, “The NumPy array: a structure for efficient numerical computation,” Computing in Science & Engineering, vol. 13, no. 2, pp. 22–30, 2011. * [19] Q. Wang, X. Zhang, Y. Zhang, and Q. 
Yi, “Augem: automatically generate high performance dense linear algebra kernels on x86 cpus,” in SC’13: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, pp. 1–12, IEEE, 2013. * [20] Z. Xianyi, W. Qian, and Z. Yunquan, “Model-driven level 3 blas performance optimization on loongson 3a processor,” in 2012 IEEE 18th International Conference on Parallel and Distributed Systems, pp. 684–691, IEEE, 2012. * [21] F. Pérez and B. E. Granger, “IPython: a system for interactive scientific computing,” Computing in Science & Engineering, vol. 9, no. 3, pp. 21–29, 2007. * [22] T. Kluyver, B. Ragan-Kelley, F. Pérez, B. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. Hamrick, J. Grout, S. Corlay, P. Ivanov, D. Avila, S. Abdalla, and C. Willing, “Jupyter Notebooks—a publishing format for reproducible computational workflows,” in Positioning and Power in Academic Publishing: Players, Agents and Agendas (F. Loizides and B. Schmidt, eds.), pp. 87–90, IOS Press, 2016. * [23] A. A. Hagberg, D. A. Schult, and P. J. Swart, “Exploring network structure, dynamics, and function using NetworkX,” in Proceedings of the 7th Python in Science Conference (G. Varoquaux, T. Vaught, and K. J. Millman, eds.), (Pasadena, CA USA), pp. 11–15, 2008. * [24] Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, P. Greenfield, M. Droettboom, E. Bray, T. Aldcroft, M. Davis, A. Ginsburg, A. M. Price-Whelan, W. E. Kerzendorf, A. Conley, N. Crighton, K. Barbary, D. Muna, H. Ferguson, F. Grollier, M. M. Parikh, P. H. Nair, H. M. Unther, C. Deil, J. Woillez, S. Conseil, R. Kramer, J. E. H. Turner, L. Singer, R. Fox, B. A. Weaver, V. Zabalza, Z. I. Edwards, K. Azalee Bostroem, D. J. Burke, A. R. Casey, S. M. Crawford, N. Dencheva, J. Ely, T. Jenness, K. Labrie, P. L. Lim, F. Pierfederici, A. Pontzen, A. Ptak, B. Refsdal, M. Servillat, and O. Streicher, “Astropy: A community Python package for astronomy,” Astronomy & Astrophysics, vol. 558, p. 
A33, Oct. 2013. * [25] A. M. Price-Whelan, B. M. Sipőcz, H. M. Günther, P. L. Lim, S. M. Crawford, S. Conseil, D. L. Shupe, M. W. Craig, N. Dencheva, A. Ginsburg, J. T. VanderPlas, L. D. Bradley, D. Pérez-Suárez, M. de Val-Borro, P. Paper Contributors, T. L. Aldcroft, K. L. Cruz, T. P. Robitaille, E. J. Tollerud, A. Coordination Committee, C. Ardelean, T. Babej, Y. P. Bach, M. Bachetti, A. V. Bakanov, S. P. Bamford, G. Barentsen, P. Barmby, A. Baumbach, K. L. Berry, F. Biscani, M. Boquien, K. A. Bostroem, L. G. Bouma, G. B. Brammer, E. M. Bray, H. Breytenbach, H. Buddelmeijer, D. J. Burke, G. Calderone, J. L. Cano Rodríguez, M. Cara, J. V. M. Cardoso, S. Cheedella, Y. Copin, L. Corrales, D. Crichton, D. D’Avella, C. Deil, É. Depagne, J. P. Dietrich, A. Donath, M. Droettboom, N. Earl, T. Erben, S. Fabbro, L. A. Ferreira, T. Finethy, R. T. Fox, L. H. Garrison, S. L. J. Gibbons, D. A. Goldstein, R. Gommers, J. P. Greco, P. Greenfield, A. M. Groener, F. Grollier, A. Hagen, P. Hirst, D. Homeier, A. J. Horton, G. Hosseinzadeh, L. Hu, J. S. Hunkeler, Ž. Ivezić, A. Jain, T. Jenness, G. Kanarek, S. Kendrew, N. S. Kern, W. E. Kerzendorf, A. Khvalko, J. King, D. Kirkby, A. M. Kulkarni, A. Kumar, A. Lee, D. Lenz, S. P. Littlefair, Z. Ma, D. M. Macleod, M. Mastropietro, C. McCully, S. Montagnac, B. M. Morris, M. Mueller, S. J. Mumford, D. Muna, N. A. Murphy, S. Nelson, G. H. Nguyen, J. P. Ninan, M. Nöthe, S. Ogaz, S. Oh, J. K. Parejko, N. Parley, S. Pascual, R. Patil, A. A. Patil, A. L. Plunkett, J. X. Prochaska, T. Rastogi, V. Reddy Janga, J. Sabater, P. Sakurikar, M. Seifert, L. E. Sherbert, H. Sherwood-Taylor, A. Y. Shih, J. Sick, M. T. Silbiger, S. Singanamalla, L. P. Singer, P. H. Sladen, K. A. Sooley, S. Sornarajah, O. Streicher, P. Teuben, S. W. Thomas, G. R. Tremblay, J. E. H. Turner, V. Terrón, M. H. van Kerkwijk, A. de la Vega, L. L. Watkins, B. A. Weaver, J. B. Whitmore, J. Woillez, V. Zabalza, and A. 
Contributors, “The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package,” The Astronomical Journal, vol. 156, p. 123, Sept. 2018. * [26] P. J. Cock, T. Antao, J. T. Chang, B. A. Chapman, C. J. Cox, A. Dalke, I. Friedberg, T. Hamelryck, F. Kauff, B. Wilczynski, and M. J. L. de Hoon, “Biopython: freely available Python tools for computational molecular biology and bioinformatics,” Bioinformatics, vol. 25, no. 11, pp. 1422–1423, 2009. * [27] K. J. Millman and M. Brett, “Analysis of functional Magnetic Resonance Imaging in Python,” Computing in Science & Engineering, vol. 9, no. 3, pp. 52–55, 2007. * [28] T. SunPy Community, S. J. Mumford, S. Christe, D. Pérez-Suárez, J. Ireland, A. Y. Shih, A. R. Inglis, S. Liedtke, R. J. Hewett, F. Mayer, K. Hughitt, N. Freij, T. Meszaros, S. M. Bennett, M. Malocha, J. Evans, A. Agrawal, A. J. Leonard, T. P. Robitaille, B. Mampaey, J. Iván Campos-Rozo, and M. S. Kirk, “SunPy—Python for solar physics,” Computational Science and Discovery, vol. 8, p. 014009, Jan. 2015. * [29] J. Hamman, M. Rocklin, and R. Abernathy, “Pangeo: A Big-data Ecosystem for Scalable Earth System Science,” in EGU General Assembly Conference Abstracts, EGU General Assembly Conference Abstracts, p. 12146, Apr 2018. * [30] A. A. Chael, K. L. Bouman, M. D. Johnson, R. Narayan, S. S. Doeleman, J. F. Wardle, L. L. Blackburn, K. Akiyama, M. Wielgus, C.-k. Chan, et al., “ehtim: Imaging, analysis, and simulation software for radio interferometry,” Astrophysics Source Code Library, 2019. * [31] K. J. Millman and F. Pérez, “Developing open-source scientific practice,” Implementing Reproducible Research. CRC Press, Boca Raton, FL, pp. 149–183, 2014. * [32] S. van der Walt, “The SciPy documentation project (technical overview),” in Proceedings of the 7th Python in Science Conference (SciPy 2008) (G. Varoquaux, T. Vaught, and K. J. Millman, eds.), pp. 27–28, 2008. * [33] J. 
Harrington, “The SciPy documentation project,” in Proceedings of the 7th Python in Science Conference (SciPy 2008) (G. Varoquaux, T. Vaught, and K. J. Millman, eds.), pp. 33–35, 2008. * [34] J. Harrington and D. Goldsmith, “Progress report: NumPy and SciPy documentation in 2009,” in Proceedings of the 8th Python in Science Conference (SciPy 2009) (G. Varoquaux, S. van der Walt, and K. J. Millman, eds.), pp. 84–87, 2009. * [35] G. Wilson, “Software carpentry: Getting scientists to write better code by making them more productive,” Computing in Science & Engineering, November–December 2006. * [36] J. E. Hannay, H. P. Langtangen, C. MacLeod, D. Pfahl, J. Singer, and G. Wilson, “How do scientists develop and use scientific software?,” in Proc. 2009 ICSE Workshop on Software Engineering for Computational Science and Engineering, 2009. * [37] K. J. Millman, M. Brett, R. Barnowski, and J.-B. Poline, “Teaching computational reproducibility for neuroimaging,” Frontiers in Neuroscience, vol. 12, p. 727, 2018. * [38] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32 (H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds.), pp. 8024–8035, Curran Associates, Inc., 2019. * [39] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016. * [40] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. 
Zhang, “Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems,” arXiv preprint arXiv:1512.01274, 2015. * [41] S. Hoyer and J. Hamman, “xarray: N-D labeled arrays and datasets in Python,” Journal of Open Research Software, vol. 5, no. 1, 2017. * [42] P. Entschev, “Distributed multi-GPU computing with Dask, CuPy and RAPIDS.” EuroPython 2019, 2019. * [43] S. Behnel, R. Bradshaw, C. Citro, L. Dalcin, D. S. Seljebotn, and K. Smith, “Cython: The best of both worlds,” Computing in Science & Engineering, vol. 13, no. 2, pp. 31–39, 2011. * [44] S. K. Lam, A. Pitrou, and S. Seibert, “Numba: A LLVM-based Python JIT compiler,” in Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, LLVM ’15, (New York, NY, USA), pp. 7:1–7:6, ACM, 2015. * [45] S. Guelton, P. Brunet, M. Amini, A. Merlini, X. Corbillon, and A. Raynaud, “Pythran: Enabling static optimization of scientific python programs,” Computational Science & Discovery, vol. 8, no. 1, p. 014001, 2015. * [46] J. Dongarra, G. H. Golub, E. Grosse, C. Moler, and K. Moore, “Netlib and na-net: Building a scientific computing community,” IEEE Annals of the History of Computing, vol. 30, no. 2, pp. 30–41, 2008. * [47] P. F. Dubois, “The basis system,” tech. rep., Lawrence Livermore National Laboratory, CA (USA), 1989. UCRL-MA-118543, Parts I-VI. * [48] D. H. Munro and P. F. Dubois, “Using the yorick interpreted language,” Computers in Physics, vol. 9, no. 6, pp. 609–615, 1995. * [49] R. Ihaka and R. Gentleman, “R: a language for data analysis and graphics,” Journal of Computational and Graphical Statistics, vol. 5, no. 3, pp. 299–314, 1996. * [50] K. E. Iverson, “A programming language,” in Proceedings of the May 1-3, 1962, Spring Joint Computer Conference, pp. 345–351, 1962. * [51] T. Jenness, F. Economou, K. Findeisen, F. Hernandez, J. Hoblitt, K. S. Krughoff, K. Lim, R. H. Lupton, F. Mueller, W.
O’Mullane, et al., “LSST data management software development practices and tools,” in Software and Cyberinfrastructure for Astronomy V, vol. 10707, p. 1070709, International Society for Optics and Photonics, 2018. * [52] N. D. Matsakis and F. S. Klock, “The rust language,” Ada Letters, vol. 34, pp. 103–104, Oct. 2014. * [53] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, “Julia: A fresh approach to numerical computing,” SIAM Review, vol. 59, no. 1, pp. 65–98, 2017. * [54] C. Lattner and V. Adve, “LLVM: A compilation framework for lifelong program analysis and transformation,” (San Jose, CA, USA), pp. 75–88, Mar 2004. * [55] P. Peterson, “F2PY: a tool for connecting Fortran and Python programs,” International Journal of Computational Science and Engineering, vol. 4, no. 4, pp. 296–305, 2009. * [56] The NumPy Project Community, “NumPy project governance,” 2015. * [57] The NumPy Project Community, “NumPy code of conduct,” 2018. * [58] D. Holth, “PEP 427 – the wheel binary package format 1.0,” 2012. * [59] M. Brett et al., “multibuild,” 2016. * [60] B. Griffith, P. Virtanen, N. Smith, M. van Kerkwijk, and S. Hoyer, “NEP 13 – a mechanism for overriding ufuncs,” 2013. * [61] S. Hoyer, M. Rocklin, M. van Kerkwijk, H. Abbasi, and E. Wieser, “NEP 18 – a dispatch mechanism for numpy’s high level array functions,” 2018. * [62] M. E. O’Neill, “PCG: A family of simple fast space-efficient statistically good algorithms for random number generation,” Tech. Rep. HMC-CS-2014-0905, Harvey Mudd College, Claremont, CA, Sept. 2014. * [63] J. K. Salmon, M. A. Moraes, R. O. Dror, and D. E. Shaw, “Parallel random numbers: As easy as 1, 2, 3,” in Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’11, (New York, NY, USA), pp. 16:1–16:12, ACM, 2011. * [64] C. Doty-Humphrey, “PractRand, version 0.94.” * [65] M. Matsumoto and T.
Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudo-random number generator,” ACM Transactions on Modeling and Computer Simulation, vol. 8, pp. 3–30, Jan. 1998. * [66] K. Sheppard, B. Duvenhage, P. de Buyl, and D. A. Ham, “bashtage/randomgen: Release 1.16.2,” Apr. 2019. * [67] G. Marsaglia and W. W. Tsang, “The ziggurat method for generating random variables,” Journal of Statistical Software, Articles, vol. 5, no. 8, pp. 1–7, 2000. * [68] D. Lemire, “Fast random integer generation in an interval,” ACM Transactions on Modeling and Computer Simulation, vol. 29, pp. 1–12, Jan. 2019. * [69] top500, “Top 10 sites for November 2019,” 2019. * [70] wikichip, “Astra - supercomputers,” 2019. * [71] Wikipedia, “Arm architecture,” 2019. * [72] NumPy Developers, “NumPy roadmap,” 2019. * [73] D. Ingram, “PEP 599 – the manylinux2014 platform tag,” 2019. ## Methods We use Git for version control and GitHub as the public hosting service for our official _upstream_ repository (https://github.com/numpy/numpy). We each work in our own copy (or fork) of the project and use the upstream repository as our integration point. To get new code into the upstream repository, we use GitHub’s pull request (PR) mechanism. This allows us to review code before integrating it as well as to run a large number of tests on the modified code to ensure that the changes do not break expected behavior. We also use GitHub’s issue tracking system to collect and triage problems and proposed improvements. ### Library organization Broadly, the NumPy library consists of the following parts: the NumPy array data structure ndarray; the so-called _universal functions_; a set of library functions for manipulating arrays and doing scientific computation; infrastructure libraries for unit tests and Python package building; and the program f2py for wrapping Fortran code in Python [55]. The ndarray and the universal functions are generally considered the core of the library.
In the following, we give a brief summary of these components of the library. #### _Core._ The ndarray data structure and the universal functions make up the core of NumPy. The ndarray is the data structure at the heart of NumPy. The data structure stores regularly strided homogeneous data types inside a contiguous block of memory, allowing for the efficient representation of $n$-dimensional data. More details about the data structure are given in “The NumPy array: a structure for efficient numerical computation” [18]. The _universal functions_, or more concisely, _ufuncs_, are functions written in C that implement efficient looping over NumPy arrays. An important feature of ufuncs is the built-in implementation of _broadcasting_. For example, the function arctan2(y, x) is a ufunc that accepts two values and computes $\tan^{-1}(y/x)$. When arrays are passed in as the arguments, the ufunc will take care of looping over the dimensions of the inputs in such a way that if, say, x is a 1-D array with length 3, and y is a 2-D array with shape $2\times 1$, the output will be an array with shape $2\times 3$ (Fig. 1c). The ufunc machinery takes care of calling the function with all the appropriate combinations of input array elements to complete the output array. The elementary arithmetic operations of addition, multiplication, etc., are implemented as ufuncs, so that broadcasting also applies to expressions such as x + y * z. #### _Computing libraries._ NumPy provides a large library of functions for array manipulation and scientific computing, including functions for: creating, reshaping, concatenating, and padding arrays; searching, sorting and counting data in arrays; computing elementary statistics, such as the mean, median, variance, and standard deviation; file I/O; and more. A suite of functions for computing the _fast Fourier transform (FFT)_ and its inverse is provided.
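The ufunc broadcasting behavior described above can be checked directly. A minimal sketch (the array values are arbitrary, chosen only to reproduce the shapes from the text):

```python
import numpy as np

# x has shape (3,) and y has shape (2, 1); broadcasting
# expands them to the common output shape (2, 3).
x = np.array([1.0, 2.0, 3.0])  # 1-D, length 3
y = np.array([[1.0], [2.0]])   # 2-D, shape (2, 1)

# arctan2 is a ufunc; its first argument is the numerator,
# so this computes arctan(y/x) element-wise.
z = np.arctan2(y, x)
print(z.shape)  # (2, 3)

# Elementary arithmetic is also implemented as ufuncs,
# so the same broadcasting rule applies to x + y.
print((x + y).shape)  # (2, 3)
```

Note that broadcasting never copies data; the ufunc machinery simply iterates the smaller operand as if it were tiled to the common shape.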
NumPy’s linear algebra library includes functions for: solving linear systems of equations; computing various functions of a matrix, including the determinant, the norm, the inverse, and the pseudo-inverse; computing the Cholesky, eigenvalue, and singular value decompositions of a matrix; and more. The random number generator library in NumPy provides alternative _bit stream generators_ that provide the core function of generating random integers. A higher-level generator class that implements an assortment of probability distributions is provided. It includes the beta, gamma and Weibull distributions, the univariate and multivariate normal distributions, and more. #### _Infrastructure libraries._ NumPy provides utilities for writing tests and for building Python packages. The testing subpackage provides functions such as assert_allclose(actual, desired) that may be used in test suites for code that uses NumPy arrays. NumPy provides the subpackage distutils which includes functions and classes to facilitate configuration, installation, and packaging of libraries depending on NumPy. These can be used, for example, when publishing to the PyPI website. #### _F2PY._ The program f2py is a tool for building NumPy-aware Python wrappers of Fortran functions. NumPy itself does not use any Fortran code; F2PY is part of NumPy for historical reasons. ### Governance NumPy adopted an official Governance Document on October 5, 2015 [56]. Project decisions are usually made by consensus of interested contributors. This means that, for most decisions, everyone is entrusted with veto power. A Steering Council, currently composed of 12 members, facilitates this process and oversees daily development of the project by contributing code and reviewing contributions from the community. NumPy’s official Code of Conduct was approved on September 1, 2018 [57]. 
In brief, we strive to: _be open_ ; _be empathetic, welcoming, friendly, and patient_ ; _be collaborative_ ; _be inquisitive_ ; and _be careful in the words that we choose_. The Code of Conduct also specifies how breaches can be reported and outlines the process for responding to such reports. ### Funding In 2017, NumPy received its first large grants totaling 1.3M USD from the Gordon & Betty Moore and the Alfred P. Sloan foundations. Stéfan van der Walt is the PI and manages four programmers working on the project. These two grants focus on addressing the technical debt accrued over the years and on setting in place standards and architecture to encourage more sustainable development. NumPy received a third grant for 195K USD from the Chan Zuckerberg Initiative at the end of 2019 with Ralf Gommers as the PI. This grant focuses on better serving NumPy’s large number of beginning to intermediate level users and on growing the community of NumPy contributors. It will also provide support to OpenBLAS, on which NumPy depends for accelerated linear algebra. Finally, since May 2019 the project receives a small amount annually from Tidelift, which is used to fund things like documentation and website improvements. ### Developers NumPy is currently maintained by a group of 23 contributors with commit rights to the NumPy code base. Out of these, 17 maintainers were active in 2019, 4 of whom were paid to work on the project full-time. Additionally, there are a few long term developers who contributed and maintain specific parts of NumPy, but are not officially maintainers. Over the course of its history, NumPy has attracted PRs by 823 contributors. However, its development relies heavily on a small number of active maintainers, who share more than half of the contributions among themselves. 
At a release cycle of about every half year, the five recent releases in the years 2018 and 2019 have averaged about 450 PRs each, with each release attracting more than a hundred new contributors. (Note that before mid-2011, NumPy development did not happen on github.com; all data provided here is based on the development which happened through GitHub PRs. In some cases contributions by maintainers may not be categorized as such.) Figure 4 shows the number of PRs merged into the NumPy master branch. Although the number of PRs being merged fluctuates, the plot indicates an increased number of contributions over the past years. Figure 4: Number of pull requests merged into the NumPy master branch for each quarter since 2012. The total number of PRs is indicated with the lower blue area showing the portion contributed by current or previous maintainers. ### Community calls The massive number of scientific Python packages that build on NumPy means that it has an unusually high need for stability. To guide our development, we formalized the feature proposal process and constructed a development roadmap with extensive input and feedback from the community. Weekly community calls alternate between triage and higher-level discussion. The calls not only involve developers from the community, but provide a venue for vendors and other external groups to provide input. For example, after Intel produced a forked version of NumPy, one of their developers joined a call to discuss community concerns. ### NumPy enhancement proposals Given the complexity of the codebase and the massive number of projects depending on it, large changes require careful planning and substantial work. NumPy Enhancement Proposals (NEPs) are modeled after Python Enhancement Proposals (PEPs) for “proposing major new features, for collecting community input on an issue, and for documenting the design decisions that have gone into Python” (https://numpy.org/neps/nep-0000.html).
Since then there have been 19 proposed NEPs: 6 have been implemented, 4 have been accepted and are being implemented, 4 are under consideration, 3 have been deferred or superseded, and 2 have been rejected or withdrawn. ### Central role NumPy plays a central role in building and standardizing much of the scientific Python community infrastructure. NumPy’s docstring standard is now widely adopted. We are also now using the NEP system as a way to help coordinate the larger scientific Python community. For example, in NEP 29, we recommend, along with leaders from various other projects, that all projects across the Scientific Python ecosystem adopt a common “time window-based” policy for support of Python and NumPy versions. This standard will simplify downstream project and release planning. ### Wheels build system A Python _wheel_ [58] is a standard file format for distributing Python libraries. In addition to Python code, a wheel may include compiled C extensions and other binary data. This is important, because many libraries, including NumPy, require a C compiler and other build tools to build the software from the source code, making it difficult for many users to install the software on their own. The introduction of wheels to the Python packaging system has made it much easier for users to install precompiled libraries. A GitHub repository containing scripts to build NumPy wheels has been configured so that a simple commit to the repository triggers an automated build system that creates NumPy wheels for several computer platforms, including Windows, Mac OSX and Linux. The wheels are uploaded to a public server and made available for anyone to use. This system makes it easy for users to install precompiled versions of NumPy on these platforms. The technology that is used to build the wheels evolves continually. At the time this paper is being written, a key component is the multibuild suite of tools developed by Matthew Brett and other developers [59].
Currently, scripts using multibuild are written for the continuous integration platforms Travis CI (for Linux and Mac OSX) and AppVeyor (for Windows). ### Recent technical improvements With the recent infusion of funding and a clear process for coordinating with the developer community, we have been able to tackle a number of important large-scale changes. We highlight two of those below, as well as changes made to our testing infrastructure to support hardware platforms used in large-scale computing. ### Array function protocol A vast number of projects are built on NumPy; these projects are consumers of the NumPy API. Over the last several years, a growing number of projects are providers of a _NumPy-like API_ and array objects targeting audiences with specialized needs beyond NumPy’s capabilities. For example, the NumPy API is implemented by several popular tensor computation libraries, including CuPy (https://cupy.chainer.org/), JAX (https://jax.readthedocs.io/en/latest/jax.numpy.html), and Apache MXNet (https://numpy.mxnet.io/). PyTorch (https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html) and TensorFlow (https://www.tensorflow.org/tutorials/customization/basics) provide tensor APIs with NumPy-inspired semantics. It is also implemented in packages that support sparse arrays, such as scipy.sparse and PyData/Sparse. Another notable example is Dask, a library for parallel computing in Python. Dask adopts the NumPy API and therefore presents a familiar interface to existing NumPy users, while adding powerful abilities to parallelize and distribute tasks. The multitude of specialized projects creates the difficulty that consumers of these NumPy-like APIs write code specific to a single project and do not support all of the above array providers. This is a burden for users relying on a specialized array-like object, since a tool they need may not work for them.
It also creates challenges for end-users who need to transition from NumPy to a more specialized array. The growing multitude of specialized projects with NumPy-like APIs threatened to again fracture the scientific Python community. To address these issues, NumPy has the goal of providing the fundamental API for _interoperability_ between the various NumPy-like APIs. An earlier step in this direction was the implementation of the __array_ufunc__ protocol in NumPy 1.13, which enabled interoperability for most mathematical functions [60]. In 2019 this was expanded more generally with the inclusion of the __array_function__ protocol into NumPy 1.17. These two protocols allow providers of array objects to be interoperable with the NumPy API: their arrays work correctly with almost all NumPy functions [61]. For the users relying on specialized array projects it means that even though much code is written specifically for NumPy arrays and uses the NumPy API as import numpy as np, it can nevertheless work for them. For example, here is how a CuPy GPU array can be passed through NumPy for processing, with all operations being dispatched back to CuPy:

import numpy as np
import cupy as cp

x_gpu = cp.array([1, 2, 3])
y = np.sum(x_gpu)  # Returns a GPU array

Similarly, user-defined functions composed using NumPy can now be applied to, e.g., multi-node distributed Dask arrays:

import numpy as np
import dask.array as da

def f(x):
    """Function using NumPy API calls"""
    y = np.tensordot(x, x.T)
    return np.mean(np.log(y + 1))

x_local = np.random.random([10000, 10000])   # random local array
x_distr = da.random.random([10000, 10000])   # random distributed array

f(x_local)  # returns a NumPy array
f(x_distr)  # works, returns a Dask array

### Random number generation The NumPy random module provides pseudorandom numbers from a wide range of distributions.
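As a quick illustration of the module in use, here is a minimal sketch that draws variates from a few of those distributions using the Generator API introduced in NumPy 1.17 (described in what follows); the seed value is arbitrary:

```python
import numpy as np

# default_rng constructs a seeded Generator backed by the
# PCG64 bit generator.
rng = np.random.default_rng(12345)

normals = rng.normal(loc=0.0, scale=1.0, size=5)  # univariate normal
gammas = rng.gamma(shape=2.0, scale=1.0, size=5)  # gamma variates
ints = rng.integers(low=0, high=10, size=5)       # integers in [0, 10)

print(normals.shape, gammas.shape, ints.shape)  # (5,) (5,) (5,)
```

Re-creating a generator with the same seed reproduces the same stream of variates, which is useful when experiments must be repeatable.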
In legacy versions of NumPy, simulated random values are produced by a RandomState object that: handles seeding and state initialization; wraps the core pseudorandom number generator based on a Mersenne Twister implementation (to be precise, the standard 32-bit version of MT19937); interfaces with the underlying code that transforms random bits into variates from other distributions; and supplies a singleton instance exposed in the root of the random module. The RandomState object makes a compatibility guarantee so that a fixed seed and sequence of function calls produce the same set of values. This guarantee has slowed progress, since improving the underlying code requires extending the API with additional keyword arguments. This guarantee continues to apply to RandomState. NumPy 1.17 introduced a new API for generating random numbers that uses a more flexible structure that can be extended by libraries or end-users. The new API is built using components that separate the steps required to generate random variates. Pseudorandom bits are generated by a bit generator. These bits are then transformed into variates from complex distributions by a generator. Finally, seeding is handled by an object that produces sequences of high-quality initial values. Bit generators are simple classes that manage the state of an underlying pseudorandom number generator. NumPy ships with four bit generators. The default bit generator is a 64-bit implementation of the Permuted Congruential Generator [62] (PCG64). The three other bit generators are a 64-bit version of the Philox generator [63] (Philox), Chris Doty-Humphrey’s Small Fast Chaotic generator [64] (SFC64), and the 32-bit Mersenne Twister [65] (MT19937), which has been used in older versions of NumPy. (The randomgen project supplies a wide range of alternative bit generators, such as cryptographic counter-based generators (AESCtr) and generators that expose hardware random number generators (RDRAND) [66].)
Bit generators provide functions, exposed both in Python and C, for generating random integer and floating point numbers. The Generator consumes one of the bit generators and produces variates from complicated distributions. Many improved methods for generating random variates from common distributions were implemented, including the Ziggurat method for normal, exponential and gamma variates [67], and Lemire’s method for bounded random integer generation [68]. The Generator is similar to the legacy RandomState, and its API is substantially the same. The key differences all relate to state management, which has been delegated to the bit generator. The Generator does not make the same stream guarantee as the RandomState object, and so variates may differ across versions as improved generation algorithms are introduced. (Despite the removal of the compatibility guarantee, simple reproducibility across versions is encouraged, and minor changes that do not produce meaningful performance gains or fix underlying bugs are not generally adopted.) Finally, a SeedSequence is used to initialize a bit generator. The seed sequence can be initialized with no arguments, in which case it reads entropy from a system-dependent provider, or with a user-provided seed. The seed sequence then transforms the initial set of entropy into a sequence of high-quality pseudorandom integers, which can be used to initialize multiple bit generators deterministically. The key feature of a seed sequence is that it can be used to spawn child SeedSequences to initialize multiple distinct bit generators. This capability allows a seed sequence to facilitate large distributed applications where the number of workers required is not known. The sequences generated from the same initial entropy and spawns are fully deterministic to ensure reproducibility. The three components are combined to construct a complete random number generator.
from numpy.random import (
    Generator,
    PCG64,
    SeedSequence,
)

seq = SeedSequence(1030424547444117993331016959)
pcg = PCG64(seq)
gen = Generator(pcg)

This approach retains access to the seed sequence, which can then be used to spawn additional generators.

children = seq.spawn(2)
gen_0 = Generator(PCG64(children[0]))
gen_1 = Generator(PCG64(children[1]))

While this approach retains complete flexibility, the method np.random.default_rng can be used to instantiate a Generator when reproducibility is not needed. The final goal of the new API is to improve extensibility. RandomState is a monolithic object that obscures all of the underlying state and functions. The component architecture is one part of the extensibility improvements. The underlying functions (written in C) which transform the output of a bit generator to other distributions are available for use via CFFI. This allows the same code to be run in both NumPy and dependent libraries that can consume CFFI, e.g., Numba. Both the bit generators and the low-level functions can also be used in C or Cython code. (As of 1.18.0, this scenario requires access to the NumPy source; alternative approaches that avoid this extra step are being explored.) ### Testing on multiple architectures At the time of writing, the two fastest supercomputers in the world, Summit and Sierra, both have IBM POWER9 architectures [69]. In late 2018, Astra, the first ARM-based supercomputer to enter the TOP500 list, went into production [70]. Furthermore, over 100 billion ARM processors have been produced as of 2017 [71], making it the most widely used instruction set architecture in the world. Clearly there are motivations for a large scientific computing software library to support POWER and ARM architectures. We’ve extended our continuous integration (CI) testing to include ppc64le (POWER8 on Travis CI) and ARMv8 (on Shippable service).
We also test with the s390x architecture (IBM Z CPUs on Travis CI) so that we can probe the behavior of our library on a big-endian machine. This satisfies one of the major components of improved CI testing laid out in a version of our roadmap [72]—specifically, “CI for more exotic platforms.” PEP 599 [73] lays out a plan for new Python binary wheel distribution support, manylinux2014, that adds support for a number of architectures supported by the CentOS Alternative Architecture Special Interest Group, including ARMv8, ppc64le, as well as s390x. We are thus well-positioned for a future where provision of binaries on these architectures will be expected for a library at the base of the ecosystem. ## Acknowledgments We thank Ross Barnowski, Paul Dubois, Michael Eickenberg, and Perry Greenfield, who suggested text and provided helpful feedback on the manuscript. We also thank the many members of the community who provided feedback, submitted bug reports, made improvements to the documentation, code, or website, promoted NumPy’s use in their scientific fields, and built the vast ecosystem of tools and libraries around NumPy. We also gratefully acknowledge the Numeric and Numarray developers on whose work we built. Jim Hugunin wrote Numeric in 1995, while a graduate student at MIT. Hugunin based his package on previous work by Jim Fulton, then working at the US Geological Survey, with input from many others. After he graduated, Paul Dubois at the Lawrence Livermore National Laboratory became the maintainer. Many people contributed to the project including T.E.O. (a co-author of this paper), David Ascher, Tim Peters, and Konrad Hinsen. In 1998 the Space Telescope Science Institute started using Python and in 2000 began developing a new array package called Numarray, written almost entirely by Jay Todd Miller, starting from a prototype developed by Perry Greenfield. Other contributors included Richard L. White, J. C. Hsu, Jochen Krupper, and Phil Hodge. 
The Numeric/Numarray split divided the community, yet ultimately pushed progress much further and faster than would otherwise have been possible. Shortly after Numarray development started, T.E.O. took over maintenance of Numeric. In 2005, he led the effort and did most of the work to unify Numeric and Numarray, and produce the first version of NumPy. Eric Jones co-founded (along with T.E.O. and P.P.) the SciPy community, gave early feedback on array implementations, and provided funding and travel support to several community members. Numerous people contributed to the creation and growth of the larger SciPy ecosystem, which gives NumPy much of its value. Others injected new energy and ideas by creating experimental array packages. K.J.M. and S.J.v.d.W. were funded in part by the Gordon and Betty Moore Foundation through Grant GBMF3834 and by the Alfred P. Sloan Foundation through Grant 2013-10-27 to the University of California, Berkeley. S.J.v.d.W., S.B., M.P., and W.W. were funded in part by the Gordon and Betty Moore Foundation through Grant GBMF5447 and by the Alfred P. Sloan Foundation through Grant G-2017-9960 to the University of California, Berkeley. ## Author Contributions Statement K.J.M. and S.J.v.d.W. composed the manuscript with input from others. S.B., R.G., K.S., W.W., M.B., and T.J.R. contributed text. All authors have contributed significant code, documentation, and/or expertise to the NumPy project. All authors reviewed the manuscript. ## Competing Interests The authors declare no competing interests.
# Motivic homotopy theory of the classifying stack of finite groups of Lie type

Can Yaylali Mail<EMAIL_ADDRESS> Department of Mathematics, TU Darmstadt, Schlossgartenstraße 7, 64289 Darmstadt, Germany ORCID: 0000-0003-0150-3622

###### Abstract

Let $G$ be a reductive group over $\mathbb{F}_{p}$ with associated finite group of Lie type $G^{F}$. Let $T$ be a maximal torus contained inside a Borel $B$ of $G$. We relate the (rational) Tate motives of $\textup{B}G^{F}$ with the $T$-equivariant Tate motives of the flag variety $G/B$. On the way, we show that for a reductive group $G$ over a field $k$, with maximal torus $T$ and absolute Weyl group $W$, acting on a smooth finite type $k$-scheme $X$, we have an isomorphism $A^{n}_{G}(X,m)_{\mathbb{Q}}\cong A^{n}_{T}(X,m)_{\mathbb{Q}}^{W}$ extending the classical result of Edidin-Graham to higher equivariant Chow groups in the non-split case. We also extend our main result to reductive group schemes over a regular base that admit maximal tori. Further, we apply our methods to more general quotient stacks. In this way, we are able to compute the motive of the stack of $G$-zips introduced by Pink-Wedhorn-Ziegler for reductive groups over fields of positive characteristic.

###### Contents

1. Introduction
2. Rational equivariant motivic homotopy theory
3. Equivariant motives under reductive groups
   3.1 Preliminaries on reductive group schemes
   3.2 Torsors under finite groups
   3.3 The relation between the motives of $\left[X/T\right]$ and $\left[X/G\right]$
4. Torsors under tori
   4.1 The motive of $T$-torsors
   4.2 Motivic cohomology of $T$-torsors
5. Motivic cohomology of quotients up to isogeny
6. Generalizations
A. Existence of maximal tori over Prüfer domains

## 1 Introduction

Let $G$ be a reductive group over $\mathbb{F}_{q}$, a finite field of characteristic $p>0$, and $\varphi\colon G\rightarrow G$ the $q$-Frobenius. Then $G$ acts on itself via $\varphi$-conjugation, i.e.
$(g,h)\mapsto gh\varphi(g)^{-1}$. The stabilizer of the neutral element is denoted by $G^{F}$. If $\overline{\mathbb{F}}_{q}$ denotes an algebraic closure of $\mathbb{F}_{q}$, then $G^{F}(\overline{\mathbb{F}}_{q})=G(\mathbb{F}_{q})$ is a finite group of Lie type. The representation theory of finite groups of Lie type over fields of characteristic $0$ was studied by Deligne and Lusztig (cf. [DL76]). In their article they construct representations of $G(\mathbb{F}_{q})$ by an action on the $\ell$-adic cohomology of certain varieties, for $\ell\neq p$. Roughly, the varieties in question are constructed by intersecting Bruhat strata with the graph of the Frobenius. Let us fix a Borel $B$ of $G$ (any reductive group over a finite field is quasi-split and thus admits a Borel). The Bruhat strata of $\left[G/B\right]$ are induced by the Bruhat decomposition of $G$ via pullback along $G/B\rightarrow G/B\times^{G}G/B\cong\left[B\backslash G/B\right]$. In this article, we want to analyze the cohomological connection between $\textup{B}G^{F}$ and $\left[B\backslash G/B\right]$, i.e. study how their motivic categories are related. The derived category of $\ell$-adic sheaves $D(\textup{B}G^{F},\mathbb{Q}_{\ell})$, for $\ell\neq p$, encodes information about the action of $G^{F}$ on $\ell$-adic cohomology. One can show that $\textup{B}G^{F}\cong\left[G/_{\varphi}G\right]$, where $G$ acts on itself via $\varphi$-conjugation. The restriction of the $\varphi$-conjugation to $B$ yields an adjunction $D(\left[G/_{\varphi}G\right],\mathbb{Q}_{\ell})\rightleftarrows D(\left[G/_{\varphi}B\right],\mathbb{Q}_{\ell})$. On the other hand, there is an adjunction $D(\left[G/_{\varphi}B\right],\mathbb{Q}_{\ell})\rightleftarrows D_{B}(G/B,\mathbb{Q}_{\ell})$, which is induced via the graph of $\varphi$ (here the right hand side denotes the derived category of $B$-equivariant $\mathbb{Q}_{\ell}$-modules on the flag variety).
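To keep a concrete case in mind before turning to these adjunctions, here is the standard example (not specific to this paper) of the fixed-point construction above:

```latex
% G = GL_n over F_q, with \varphi the q-Frobenius raising matrix entries
% to their q-th power: \varphi\colon (g_{ij}) \mapsto (g_{ij}^{q}).
% The stabilizer of the neutral element under the \varphi-conjugation
% (g,h) \mapsto g h \varphi(g)^{-1} is the fixed-point scheme, so
G^{F}(\overline{\mathbb{F}}_{q})
  \;=\;\{\,g\in\mathrm{GL}_{n}(\overline{\mathbb{F}}_{q})\mid\varphi(g)=g\,\}
  \;=\;\mathrm{GL}_{n}(\mathbb{F}_{q}),
% the general linear group over the finite field, a finite group of Lie type.
```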
Thus, it seems natural that the study of these two adjunctions should lead to information about the geometric representation theory of $G^{F}$ and its connection to the classical theory of Deligne-Lusztig. Instead of rewriting the theory of Deligne-Lusztig in the derived setting, we want to understand the adjunctions above in the motivic setting with rational coefficients. The idea is that, after $\ell$-adic realization, we recover the classical situation, and further that this could lead to information about the $\mathbb{Q}$-representations of $G^{F}$, as we are naturally working with rational coefficients. ### Motives and connection to representation theory Motives were famously envisioned by Grothendieck to capture the common behavior of cohomology theories in an abelian category. The construction of such a category is not an easy task and has been studied for many years. The main approach is to define a derived category of motives with the hope of finding a $t$-structure on it, so that the heart of this $t$-structure defines the abelian category of motives. To capture the functorial behavior of cohomology theories, one demands a full six functor formalism for the derived category of motives. There are several versions of the derived category of motives which agree under certain assumptions. One version was constructed by Cisinski and Déglise in the case of rational coefficients, which we denote by $\operatorname{DM}$ (cf. [CD19]). They show that the assignment $X\mapsto\operatorname{DM}(X)$ from smooth $k$-schemes indeed admits a six functor formalism $(\otimes\dashv\underline{\operatorname{Hom}},f^{*}\dashv f_{*},f_{!}\dashv f^{!})$ and agrees with the classical construction of Morel. In particular, they show that motivic cohomology, i.e. $\operatorname{Hom}_{\operatorname{DM}(X)}(1_{X},1_{X}(n)[m])$, agrees with Bloch’s higher Chow groups $A^{n}(X,2n-m)_{\mathbb{Q}}$.
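To spell out the indexing convention just stated (a routine unwinding, with the classical Chow ring of projective space as a sanity check):

```latex
% Motivic cohomology in degree (m,n) versus higher Chow groups:
H^{m,n}(X,\mathbb{Q})
  \;\coloneqq\;\operatorname{Hom}_{\operatorname{DM}(X)}(1_{X},1_{X}(n)[m])
  \;\cong\;A^{n}(X,2n-m)_{\mathbb{Q}}.
% For m = 2n this recovers the ordinary rational Chow groups
% A^{n}(X,0)_{\mathbb{Q}} = A^{n}(X)_{\mathbb{Q}}; e.g. for X = \mathbb{P}^{r}_{k}:
\bigoplus_{n\geq 0}A^{n}(\mathbb{P}^{r}_{k})_{\mathbb{Q}}
  \;\cong\;\mathbb{Q}[h]/(h^{r+1}),
\qquad h = c_{1}(\mathcal{O}(1)).
```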
With the help of the $6$-functor formalism, we can define the motive of a $k$-scheme $\pi\colon X\rightarrow\operatorname{Spec}(k)$ resp. its global sections via $M_{k}(X)\coloneqq\pi_{!}\pi^{!}1_{k}\quad\textup{resp.}\quad R\Gamma_{k}(X,\mathbb{Q})\coloneqq\pi_{*}\pi^{*}1_{k}$ computing motivic homology resp. motivic cohomology. The existence of a $t$-structure is a more delicate problem and in general not known. Levine shows that for a particular class of schemes $X$, e.g. finite fields or affine spaces over finite fields, a $t$-structure exists on the full triangulated subcategory of Tate motives $\operatorname{DTM}(X)\subseteq\operatorname{DM}(X)$ generated by the $1_{X}(n)$ for $n\in\mathbb{Z}$ (cf. [Lev93]). Further, using weight structures, one can see that $\operatorname{DTM}(\mathbb{F}_{p})$ is equivalent to the bounded derived category of $\mathbb{Q}$-vector spaces. We also have realization functors $\textup{real}_{\ell}\colon\operatorname{DTM}(X)\rightarrow D_{\operatorname{\acute{e}t}}(X,\mathbb{Q}_{\ell})$ for $\ell\neq p$ that are conservative and $t$-exact for the perverse $t$-structure on $D_{\operatorname{\acute{e}t}}(X,\mathbb{Q}_{\ell})$. Let us remark that if for a morphism $f\colon X\rightarrow Y$ we have $f_{*}1_{X}\in\operatorname{DTM}(Y)$, then this automatically induces an adjunction between $\operatorname{DTM}(X)$ and $\operatorname{DTM}(Y)$. In particular, the adjunction $f^{*}\dashv f_{*}$ restricts to Tate motives. We call morphisms with such a property Tate. Let us now explain the relation between Tate motives and geometric representation theory. For simplicity, let us assume that $G/\mathbb{F}_{q}$ is a split reductive group. Let $T$ be a split maximal torus inside a Borel $B$ of $G$. In [SW18], Soergel and Wendt show that for schemes stratified by affine spaces, such as the flag variety $G/B$, one can define a subcategory of $\operatorname{DM}$ called stratified Tate motives that admits a $t$-structure.
This $t$-structure is glued from the $t$-structures on the strata. For $G/B$ with the Bruhat stratification, we will denote the category of stratified Tate motives by $\operatorname{DTM}_{(B)}(G/B)$. Soergel and Wendt show that $\operatorname{DTM}_{(B)}(G/B)$ is equivalent to the bounded derived category $D^{b}({\mathcal{O}}^{\mathbb{Z},ev})$ of graded ${\mathcal{O}}\coloneqq H^{*}(G/B)$-modules concentrated in even degrees. To connect this to representations, we have to go further and endow motivic cohomology with group actions. For this, we need to define equivariant motives. To make sense of the following construction, we need to work in the setting of $\infty$-categories. The idea is to define $\operatorname{DM}({\mathcal{X}})$ for an Artin stack ${\mathcal{X}}$ via gluing along the atlas. As we essentially have to glue the derived category, this only makes sense in the $\infty$-categorical framework. This gluing can be defined via right Kan extension from schemes to Artin stacks (cf. [RS20]). As one would expect, motivic cohomology of a quotient stack $[X/G]$, where $X$ is a smooth $k$-scheme and $G$ is a linear algebraic group, yields the equivariant Chow groups $A^{n}_{G}(X,2n-m)$ of Edidin and Graham (cf. [RS20]). In this way, one can also extend the stratified Tate motives to Artin stacks. In the case of the flag variety, Soergel, Virk and Wendt show that $\operatorname{DTM}(\left[B\backslash G/B\right])$ is equivalent to the bounded derived category of bi-Soergel-modules (cf. [SVW18]). Further, they show that applying $K_{0}$ yields an isomorphism to the Iwahori-Hecke algebra and that Verdier duality yields the Kazhdan-Lusztig involution. In particular, the category of stratified Tate motives of $\left[B\backslash G/B\right]$ with its weight structure and $6$-functor formalism carries information about the $\ell$-adic geometric representations of $G$.
### Connection between finite groups of Lie type and their associated flag variety As we have seen above, the geometric representation theory of the flag variety is linked to stratified Tate motives. This particular connection uses that the flag variety is stratified by affine spaces and that Tate motives behave nicely under this stratification. We expect that the geometric representation theory of $G^{F}$ is linked to Tate motives on $\textup{B}G^{F}$, the classifying space of $G^{F}$. In this way, we want to relate the representation theory of $G^{F}$ to the representation theory of the associated flag variety. We will do this by linking the Tate motives of $\textup{B}G^{F}$ to Tate motives on $\left[B\backslash G/B\right]$ in the following way. The stack $\textup{B}G^{F}$ is equivalent to $\left[G/_{\varphi}G\right]$, where $G$ acts on itself via $\varphi$-conjugation. Let us fix a maximal torus $T\subseteq B$. Let $T$ act on $G$ also via $\varphi$-conjugation. We can embed $T$ into $T\times T$ via $t\mapsto(t,\varphi(t))$. If we now let $T\times T$ act on $G$ via $(t,t^{\prime},g)\mapsto tgt^{\prime-1}$, we get a zigzag of Artin stacks (1) $\left[G/_{\varphi}G\right]\xleftarrow{\;a\;}\left[G/_{\varphi}T\right]\xrightarrow{\;b\;}\left[G/T\times T\right].$ It is a fact that $\operatorname{DM}(\left[G/T\times T\right])\simeq\operatorname{DM}(\left[B\backslash G/B\right])$. Thus, on the level of motivic categories, this zigzag yields adjunctions between $\operatorname{DM}(\textup{B}G^{F})$ and $\operatorname{DM}(\left[B\backslash G/B\right])$. Now we can formulate the leading question of this article: ($\ast$) Do these adjunctions preserve Tate motives? The answer to this question is positive and yields a first point of access to the motivic representation theory of $G^{F}$ via the motivic geometric representation theory of $G$.
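The equivalence $\textup{B}G^{F}\cong\left[G/_{\varphi}G\right]$ can be checked by hand in the smallest case; the following is a standard instance of Lang's theorem, included here only as an illustration:

```latex
% G = \mathbb{G}_m over \mathbb{F}_q, with \varphi(t) = t^q.
% \varphi-conjugation: t \cdot g = t g \varphi(t)^{-1} = g\,t^{1-q}.
% The Lang map t \mapsto t^{q-1} is surjective on \overline{\mathbb{F}}_q-points
% with kernel \mu_{q-1} = \mathbb{G}_m^{F}, so the action is transitive with
% stabilizer \mu_{q-1} and
\left[\mathbb{G}_{\textup{m}}/_{\varphi}\mathbb{G}_{\textup{m}}\right]
  \;\cong\;\textup{B}\mu_{q-1}
  \;=\;\textup{B}\,\mathbb{G}_{\textup{m}}^{F},
\qquad \mu_{q-1}(\overline{\mathbb{F}}_{q})=\mathbb{F}_{q}^{\times}.
```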
### Equivariant motivic homotopy theory of reductive groups From now on let $\tilde{B}$ be an excellent noetherian scheme of dimension $\leq 2$ and $S$ a regular connected $\tilde{B}$-scheme of finite type. We will first work with reductive $S$-group schemes $H$ that admit split maximal tori $Z$. This is not a classical notion, and such a reductive group scheme need not be split or even quasi-split (in the sense of [SGA3] or [Con14]). But in this case, one has that the Weyl group scheme $N_{H}(Z)/Z$ is represented by the constant group scheme associated to a finite group $W_{H}$, which we will call the Weyl group of $Z$ in $H$ (cf. Section 3.1). As $S$ is regular noetherian, any maximal torus splits after passage to a Galois cover, and we deduce the answer to our leading question ($\ast$) from the split case. From now on let $G$ be a reductive $S$-group scheme that admits a maximal torus $T$. Our main question is about the behavior of Tate motives under the induced maps on $\operatorname{DM}$ corresponding to the zigzag (1). We will work in a more general setting and look at $a$ and $b$ separately. #### Equivariant motives and passage to tori The morphism $a$ resembles the motivic version of a more classical problem on Chow groups. Let $X$ be an $S$-scheme locally of finite type with $G$-action; what is the relation between $A^{\bullet}_{G}(X)$ and $A^{\bullet}_{T}(X)$? In [EG98] Edidin and Graham answer this question for rational Chow groups in the case where $S=\operatorname{Spec}(k)$ is the spectrum of a field and $T$ is split, i.e. $A^{\bullet}_{G}(X)_{\mathbb{Q}}\cong A^{\bullet}_{T}(X)^{W}_{\mathbb{Q}}$, where $W$ denotes the Weyl group of $T$ in $G$. This isomorphism is just a shadow of an equivalence that can be seen motivically. ###### Theorem 1 (3.14). Let $G$ be a reductive $S$-group scheme with split maximal torus $T$ and Weyl group $W$. Assume $G$ acts on an $S$-scheme $X$ locally of finite type.
Then the natural map $\left[X/T\right]\rightarrow\left[X/G\right]$ is Tate. Further, we have $R\Gamma_{S}(\left[X/G\right],\mathbb{Q})\simeq R\Gamma_{S}(\left[X/T\right],\mathbb{Q})^{W}.$ The idea of the proof of Theorem 1 is to factorize $\left[X/T\right]\rightarrow\left[X/G\right]$ into $\left[X/T\right]\rightarrow\left[X/N_{G}(T)\right]\rightarrow\left[X/G\right].$ Then the first map of the factorization is naturally a $W$-torsor and the second map a $G/N_{G}(T)$-bundle. For torsors under finite groups, étale descent relates motives via $W$-invariants. For $G/N_{G}(T)$-bundles, it suffices to see that $R\Gamma_{S}(G/N_{G}(T),\mathbb{Q})$ is trivial. We will prove this by reducing the triviality to the fact that the map $K_{0}(S)_{\mathbb{Q}}\rightarrow K_{0}(G/B)^{W}_{\mathbb{Q}}$ is an equivalence, which after comparison with rational Chow theory is classical. Applying Theorem 1 to motivic cohomology in the case where $S=\operatorname{Spec}(k)$, we can extend the classical result to higher Chow groups even in the non-split case, generalizing the analogous result of [Kri13] from split reductive groups to non-split groups. The idea is that any maximal torus splits after passage to a finite Galois extension. Then we reduce to the split case, using that motives of torsors under finite groups are related by taking invariants. ###### Corollary 2 (3.15). Let $k$ be a field and let $G$ be a reductive $k$-group scheme with maximal torus $T$ and absolute Weyl group $W$. Assume $G$ acts on a smooth finite type $k$-scheme $X$. Then for all $n,m\in\mathbb{Z}$, we have $A^{n}_{G}(X,m)_{\mathbb{Q}}\cong A^{n}_{T}(X,m)_{\mathbb{Q}}^{W}.$ The same argument also works over a general base and for arbitrary reductive group schemes, showing that $\left[X/T\right]\rightarrow\left[X/G\right]$ is Tate. This is because over a connected noetherian normal base any torus splits after passage to a finite Galois extension. ###### Corollary 3 (3.16).
Let $G$ be a reductive $S$-group scheme that admits a maximal torus $T$. Assume $G$ acts on an $S$-scheme $X$ locally of finite type. Then the natural map $\left[X/T\right]\rightarrow\left[X/G\right]$ is Tate. #### Motives of $T$-torsors We will now analyze the morphism $b$. For this, let $\varphi\colon G\rightarrow G$ be an isogeny that fixes $T$. Consider the embedding $T\hookrightarrow T\times T$ given by the graph of $\varphi$. The quotient $T\times T/T$ under this embedding is isomorphic to $T$. In particular, this isomorphism gives the map $b\colon\left[G/_{\varphi}T\right]\rightarrow\left[G/T\times T\right]$ the structure of a $T$-torsor. So, to understand the motivic behavior of $b$ it is enough to understand motives of $T$-torsors. Thus, let $X\rightarrow Y$ be a morphism of Artin stacks that is a $T$-torsor. Classically, Chow groups in this setting can be computed rather easily. For each character $\chi\colon T\rightarrow\mathbb{G}_{\textup{m}}$ we get a $1$-dimensional representation $\kappa(\chi)$ of $T$. This yields a line bundle $L_{\chi}\coloneqq X\times^{T}\kappa(\chi)$ on $Y$. Multiplication with the first Chern class of $L_{\chi}$ yields an action of the character group $\hat{T}$ of $T$ on $A^{\bullet}(Y)$. If $T$ is split, the morphism $b$ yields (2) $A^{\bullet}_{T}(G)\cong A^{\bullet}_{T\times T}(G)/\hat{T}A^{\bullet}_{T\times T}(G)$ (cf. [Tot14]). Again, this is just a shadow of computations for oriented cohomology theories. The idea is the following. If $T$ is split, we have $T\cong\mathbb{G}_{\textup{m}}^{r}$ for some $r\in\mathbb{N}$. By applying successive $\mathbb{G}_{\textup{m}}$-quotients, we can write $X\rightarrow X_{1}\rightarrow X_{2}\rightarrow\dots\rightarrow X_{r}\cong Y,$ where $X_{i}\coloneqq\left[X/\mathbb{G}_{\textup{m}}^{i}\right]$. Each of the maps $X_{i-1}\rightarrow X_{i}$ is a $\mathbb{G}_{\textup{m}}$-torsor. So we may reduce to the case where $T=\mathbb{G}_{\textup{m}}$.
In this case, we can follow [HPL21] and assign to the torsor the line bundle ${\mathcal{L}}\coloneqq X\times^{\mathbb{G}_{\textup{m}}}\mathbb{A}^{1}$ over $Y$. Multiplication with the first Chern class of ${\mathcal{L}}$ yields a fibre sequence $M_{k}(X)\rightarrow M_{k}(Y)\rightarrow M_{k}(Y)(1)[2].$ Applying this construction in the split case to motivic cohomology yields a generalization of (2). ###### Corollary 4 (4.6). Assume that $S=\operatorname{Spec}(k)$ and $T$ is a split torus. Let $X\rightarrow Y$ be a $T$-torsor of smooth Artin stacks over $S$. Then $A^{\bullet}(X)_{\mathbb{Q}}\cong A^{\bullet}(Y)_{\mathbb{Q}}/\hat{T}A^{\bullet}(Y)_{\mathbb{Q}}.$ If $X$ and $Y$ are represented by quotients of qcqs smooth schemes by diagonalizable group schemes, as is the case for $b$ above, we can actually replace the Chow ring with equivariant $K_{0}$. More generally, the analogous statement holds for any oriented cohomology theory that is $m$-connective (cf. Remark 4.7). Further, combining the above with the fact that any torus over $S$ splits after passage to a finite Galois extension, we also see the following. ###### Proposition 5 (4.3). Let $f\colon X\rightarrow Y$ be a $T$-torsor of smooth Artin stacks over $S$. Then $f$ is a Tate map. #### Applications to quotients up to conjugation by isogeny We will summarize our results above to see that the maps relating $\textup{B}G^{F}$ and $\left[G/T\times T\right]$ are Tate. But we have not actually used the fact that we are dealing with Frobenius-conjugation; only conjugation up to isogeny matters. Thus, we can work in a more general setting, which we describe in the following. Let $P$ resp. $Q$ be parabolics inside $G$ with Levi-components $L$ resp. $M$. Let $\varphi\colon L\rightarrow M$ be an isogeny. Then $L$ acts on $G$ via $(l,g)\mapsto lg\varphi(l)^{-1}$. Let $T$ be a maximal torus of $G$ contained in $L$.
Fix a $g_{0}\in G(S)$ such that $g_{0}\varphi(T)g_{0}^{-1}=T$ and denote by $\widetilde{\varphi}$ the composition of $\varphi$ and $g_{0}$-conjugation. We can embed $T$ into $T\times T$ via $t\mapsto(t,\widetilde{\varphi}(t))$. ###### Theorem 6 (5.1). In the setting above, we have the following zigzag of Tate maps $\left[G/_{\varphi}L\right]\xleftarrow{\;a\;}\left[G/_{\varphi}T\right]\xrightarrow{\;b\;}\left[G/T\times T\right].$ Further, if $S=\operatorname{Spec}(k)$ and $T$ is split, we can compute the Chow ring of $\left[G/_{\varphi}L\right]$ as $A^{\bullet}(\left[G/_{\varphi}L\right])_{\mathbb{Q}}\cong\left(A^{\bullet}_{T}(G/T)_{\mathbb{Q}}/\hat{T}A^{\bullet}_{T}(G/T)_{\mathbb{Q}}\right)^{W_{L}},$ where $W_{L}$ denotes the Weyl group of $T$ in $L$. Using results about equivariant $K$-theory by Uma and Krishna and our results on the cohomology theory of $T$-torsors, we can extend the situation above to equivariant $K$-theory and generalize a result by Brokemper. ###### Corollary 7 (5.4). In the setting above, assume that $S=\operatorname{Spec}(k)$, $G$ is split with respect to $T$ and that the derived group of $G$ is simply connected. Then we have $K_{0}(\left[G/_{\varphi}L\right])_{\mathbb{Q}}\cong R(T)_{\mathbb{Q}}^{W_{L}}/(f-\tilde{\varphi}f\mid f\in R(T)_{\mathbb{Q}}^{W}),$ where $W_{L}$ denotes the Weyl group of $T$ in $L$ and $W$ as before denotes the Weyl group of $T$ in $G$. ###### Example 8. Let us give two interesting examples where Theorem 6 can be used. 1. 1. Let $k=\mathbb{F}_{q}$ be a finite field with $q$ elements and assume $S=\operatorname{Spec}(k)$. If we set $L=G$ and $\varphi$ the $q$-Frobenius, then we recover precisely the situation from the beginning. In particular, we see that there is an adjunction between $\operatorname{DTM}(\textup{B}G^{F})$ and $\operatorname{DTM}(\left[B\backslash G/B\right])$. 2. 2. Let $k$ be a finite field of characteristic $p>0$ and assume $S=\operatorname{Spec}(k)$.
Another interesting example is the stack of $G$-zips of Pink-Wedhorn-Ziegler (cf. Example 5.6). In particular, we can recover the computations of Brokemper [Bro18] for Chow groups (up to some computations of loc.cit.). This answers our main question ($\ast$) positively. ###### Remark 9. If we work with reductive $S$-group schemes that admit split maximal tori, then we will see in the main text that we can drop the connectedness hypothesis in the results above. #### Structure of this article We start this article by recalling properties of the $\infty$-category of motives and how to extend it to arbitrary Artin stacks locally of finite type. After defining the necessary notions for this article, we quickly recollect some computational aspects. Afterwards, we start to focus on motives on schemes with group action. First, we explain how to obtain a group action on motives and how torsors under finite groups of Artin stacks have a particular behavior. Then we concentrate on the case $T\subseteq G$, a split maximal torus inside a reductive group. Namely, we show that the relation between $T$-equivariant Chow groups and $G$-equivariant Chow groups extends to the motivic case. Next, we show that $T$-torsors of Artin stacks are Tate and explicitly compute motivic cohomology, implying the classical case of Chow groups. In the end, we focus on reductive groups with conjugation up to isogeny. We use our results from before to get the desired adjunction of Tate motives with the $T$-equivariant flag variety. We end the paper with ideas for generalizations that we want to address in the future. #### Setup 1. $\bullet$ Throughout, we fix a noetherian excellent scheme $\tilde{B}$ of dimension at most $2$ and a regular scheme $S$ of finite type over $\tilde{B}$. 2. $\bullet$ An Artin stack is an algebraic stack in the sense of [Sta22]. When we write “stack”, we always mean “Artin stack”.
Every Artin stack (and hence scheme) will be locally of finite type over $S$ and any morphism of Artin stacks will be an $S$-morphism. 3. $\bullet$ Throughout, we will work in the setting of $\infty$-categories and freely use the language of $\infty$-categories. In particular, presheaves will always be considered as presheaves with values in $\infty$-groupoids. 4. $\bullet$ Throughout, $\operatorname{DM}$ denotes the Beilinson motives with coefficients in $\mathbb{Q}$. 5. $\bullet$ We will work with reductive group schemes over $S$ (cf. [SGA3]), i.e. $S$-affine smooth group schemes such that the geometric fibers are connected reductive groups. 6. $\bullet$ We say that a reductive $S$-group scheme $G$ is quasi-split if it is so in the sense of [SGA3, Exp. 24 §3.9]. 7. $\bullet$ Let $X$ be an $S$-scheme and $G$ an $S$-group scheme acting on $X$. Then we denote by $\left[X/G\right]$ the associated quotient stack in the étale topology. 8. $\bullet$ For an $S$-group scheme $G$, we denote its (étale) classifying stack by $\textup{B}G\coloneqq\left[S/G\right]$, where $G$ acts trivially on $S$. ### Acknowledgement I would like to thank Torsten Wedhorn for multiple discussions and comments on the earlier version, as well as his communication of Appendix A and the idea to use reductive group schemes with split maximal tori, generalizing the earlier versions. Further, I would like to thank Paul Ziegler, who communicated this project and shared his thoughts with me. Finally, I would like to thank Rizacan Ciloguli, Arnaud Éteve, Tim Holzschuh, Marc Hoyois, Adeel Khan, Timo Richarz, Jakob Scholbach, Fabio Tanania, and Thibaud van den Hove for fruitful discussions and feedback.
This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 524431573, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 Geometry and Arithmetic of Uniformized Structures, project number 444845124, and by the LOEWE grant ‘Uniformized Structures in Algebra and Geometry’. ## 2 Rational equivariant motivic homotopy theory In this section, we want to recall some properties of the category of (rational) motives and how to extend this to Artin stacks locally of finite type. We expect that most readers are familiar with the notion of motives and the $6$-functor formalism and refer to [RS20, Syn. 2.1.1] for an overview of the properties of the $6$-functor formalism. Nevertheless, to prevent confusion, let us quickly recall some common notation and remarks. ###### Remark 2.1 ([RS20], [CD19]). In the following, any scheme and any morphism will be considered in the category of finite type $S$-schemes, $\textup{Sch}_{S}^{\textup{ft}}$. 1. (i) For any $S$-scheme $X$, $\operatorname{DM}(X)$ is a stable, presentable, closed symmetric monoidal $\infty$-category. The $\otimes$-unit will be denoted by $1_{X}$. It has all limits and colimits. 2. (ii) The assignment $X\mapsto\operatorname{DM}(X)$ can be upgraded to a presheaf of symmetric monoidal $\infty$-categories $\operatorname{DM}^{*}\colon\textup{Sch}_{S}^{\textup{ft}}\rightarrow\textup{Cat}_{\infty}^{\otimes},\ X\mapsto\operatorname{DM}(X),\ f\mapsto f^{*}.$ For any morphism of schemes $f\colon X\rightarrow Y$, there is an adjunction $f^{*}\colon\operatorname{DM}(Y)\rightleftarrows\operatorname{DM}(X)\colon f_{*}.$ 3. (iii) If $f$ is smooth, then $f^{*}$ has a left adjoint, denoted $f_{\sharp}$. 4.
(iv) The assignment $X\mapsto\operatorname{DM}(X)$ can be upgraded to a presheaf of $\infty$-categories $\operatorname{DM}^{!}:(\textup{Sch}_{S}^{\textup{ft}})^{\operatorname{op}}\rightarrow\textup{Cat}_{\infty},\ X\mapsto\operatorname{DM}(X),f\mapsto f^{!}.$ For each $f$, there is an adjunction $f_{!}:\operatorname{DM}(X)\rightleftarrows\operatorname{DM}(Y):f^{!}.$ For any factorization $f=p\circ j$ with $j$ an open immersion and $p$ a proper map, there is a natural equivalence $f_{!}\cong p_{*}j_{\sharp}$. 5. (v) Both functors $\operatorname{DM}^{*}$ and $\operatorname{DM}^{!}$ are sheaves for the $h$-topology (cf. [RS20, Thm. 2.1.13]). 6. (vi) For the projection $p:\mathbb{G}_{\textup{m},S}\times_{S}X\rightarrow X$, and any $M\in\operatorname{DM}(X)$, the map induced by the counit $p_{\sharp}p^{*}M[-1]\rightarrow M[-1]$ in $\operatorname{DM}(X)$ is a split monomorphism. The complementary summand is denoted by $M(1)$. The functor $M\mapsto M(1)$ is an equivalence with inverse denoted by $M\mapsto M(-1)$. For any integer $n$ the $n$-fold composition is denoted by $M\mapsto M(n)$ and in the future, we will abbreviate $\langle n\rangle\coloneqq(n)[2n]$. Let $\underline{X}$ be a prestack (this notion is from [RS20]), i.e. a presheaf of anima on the category of rings. There are several approaches to the $\infty$-category $\operatorname{DM}(\underline{X})$. If $\underline{X}$ is an Artin stack over a field $k$, Hoskins-Lehalleur use a construction similar to equivariant Chow groups. One resolves $\underline{X}$ by open substacks $(\underline{X}_{i})$ such that on each $\underline{X}_{i}$ there is a vector bundle $V_{i}$ together with an open $U_{i}\subseteq V_{i}$ with a free $G$-action such that the codimension of $V_{i}\setminus U_{i}$ tends towards infinity (cf. [HPL21]). This idea was further generalized to arbitrary cohomology theories by Khan-Ravi by a construction they call lisse extension (cf. [KR21]).
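For orientation, the classical instance of this resolution (going back to Morel-Voevodsky and Totaro, and consistent with Example 2.8 below) is $\textup{B}\mathbb{G}_{\textup{m}}$:

```latex
% Approximate B\mathbb{G}_m by finite-dimensional quotients:
%   V_i = \mathbb{A}^i with the scaling action,
%   U_i = \mathbb{A}^i \setminus \{0\} \subseteq V_i (free action),
%   U_i/\mathbb{G}_m = \mathbb{P}^{i-1},
%   \operatorname{codim}(V_i \setminus U_i) = i \to \infty.
% Hence the motive is computed as the colimit of the approximations:
M(\textup{B}\mathbb{G}_{\textup{m}})
  \;\simeq\;\mathop{\textup{colim}}_{i}\,M(\mathbb{P}^{i-1}).
```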
This construction was already used for the motive of classifying stacks $\textup{B}G$ by Morel-Voevodsky (cf. [MV99, §4]). Totaro then gave an explicit computation of the motive of $\textup{B}\mathbb{G}_{\textup{m}}$ over a field (cf. [Tot16] and Example 2.8). Alternatively to the construction of Hoskins-Lehalleur resp. Khan-Ravi, Richarz-Scholbach give a construction via certain left and right Kan extensions (cf. [RS20]). Their approach is based on gluing the motivic structure of Beilinson motives to arbitrary prestacks. Indeed, $\operatorname{DM}(-)$ satisfies $h$-descent, so it is rather formal to extend the six functor formalism to Artin stacks locally of finite type; we will use this approach. One should note that this was also discussed in [Kha19] to extend the $6$-functor formalism to higher Artin stacks. For computations of the underlying motives, it seems to be better to work with the definition of Hoskins-Lehalleur resp. Morel-Voevodsky. Let $f\colon{\mathfrak{X}}\rightarrow\operatorname{Spec}(k)$ be a smooth Artin stack and let $M({\mathfrak{X}})$ denote the $k$-linear motive of ${\mathfrak{X}}$ defined in [HPL21]. This defines an object in $\operatorname{DM}^{*}(\operatorname{Spec}(k))$. Let $1_{k}$ denote the unit in $\operatorname{DM}^{*}(\operatorname{Spec}(k))$. We will see in Corollary 2.7 that if $f$ is smooth, we have $M({\mathfrak{X}})\simeq f_{\sharp}f^{*}1_{k}$. In particular, if we use the approach of [RS20] and define a motive of a prestack as the $\sharp$-push/$*$-pull of the unit, we see that the notion of motives on Artin stacks as in loc.cit. agrees with the classical ones. To define $\operatorname{DM}$ for prestacks, we will follow [RS20]. For the following definition, let us fix a regular cardinal $\kappa$. Let $\textup{Aff}_{S}^{\textup{ft}}$ denote the (nerve of the) category of affine schemes of finite type over $S$. We let $\textup{Aff}_{S}^{\kappa}$ denote the $\kappa$-pro-completion of $\textup{Aff}_{S}^{\textup{ft}}$.
Further, $\textup{DGCat}_{\textup{cont}}$ denotes the $\infty$-category of presentable stable $\mathbb{Q}$-linear dg-$\infty$-categories with colimit preserving functors. ###### Definition 2.2 ([RS20]). Let $y\colon\textup{Aff}_{S}^{\kappa}\hookrightarrow P(\textup{Aff}_{S}^{\kappa})$ be the Yoneda embedding. We define the functor $\operatorname{DM}_{S}\colon P(\textup{Aff}_{S}^{\kappa})^{\operatorname{op}}\rightarrow\textup{DGCat}_{\textup{cont}}$ via left Kan extension along the inclusion $\textup{Aff}_{S}^{\textup{ft}}\subseteq\textup{Aff}_{S}^{\kappa}$ and right Kan extension along $\textup{Aff}_{S}^{\kappa}\hookrightarrow P(\textup{Aff}_{S}^{\kappa})$, where all the functors are given via $!$-pullback. For a prestack $\underline{X}\in P(\textup{Aff}_{S}^{\kappa})$, we define the $\infty$-category of $S$-linear motives (with rational coefficients) of $\underline{X}$ as $\operatorname{DM}_{S}(\underline{X})$. As noted in [RS20], for applications we can take $\kappa$ large enough so that all affine schemes of interest are inside $\textup{Aff}_{S}^{\kappa}$. Thus, once and for all, we may fix a regular cardinal $\kappa$ and drop it from the notation. Khan showed in [Kha19, Thm. A.5] that this method of extending the theory of motives to (derived) Artin stacks does not lose the $6$-functor formalism. One way to see this is that we can use the descent program in [LZ12], since Beilinson motives satisfy étale descent in our context. As mentioned in [Kha19], this is equivalent to the construction of [RS20]. ###### Theorem 2.3. Let $\widetilde{\operatorname{DM}}$ be the restriction of $\operatorname{DM}$ to Artin stacks locally of finite type over $S$. Then $\widetilde{\operatorname{DM}}$ is compatible with the $6$-functor formalism in the sense of [RS20, Syn. 2.1.1]. ###### Proof. The proof is the same as [Kha19, Thm. A.5] (as mentioned in op.cit., the method of extending the $6$-functor formalism works with any motivic category that satisfies étale descent)
and follows from the results of [RS20]. ∎ ###### Definition 2.4 ([RS20]). Let ${\mathfrak{X}}$ be an Artin $S$-stack locally of finite type with structure morphism $f\colon{\mathfrak{X}}\rightarrow S$. Then we define the (rational) $S$-linear motive of ${\mathfrak{X}}$ as $M_{S}({\mathfrak{X}})\coloneqq f_{!}f^{!}1_{S}$. If $S=\operatorname{Spec}(A)$ is affine, we write $M_{A}({\mathfrak{X}})$. We further define the global sections of ${\mathfrak{X}}$ over $S$ to be $R\Gamma_{S}({\mathfrak{X}},\mathbb{Q})\coloneqq f_{*}f^{*}1_{S}$. ###### Remark 2.5. Let $f\colon{\mathfrak{X}}\rightarrow S$ be an Artin stack locally of finite type over $S$. If $f$ is smooth, then relative purity implies that $f_{\sharp}f^{*}\simeq f_{!}f^{!}$ and in particular, we see with $\operatorname{Hom}_{\operatorname{DM}(S)}(M_{S}({\mathfrak{X}}),1(n)[m])\simeq\operatorname{Hom}_{\operatorname{DM}(S)}(1(-n)[-m],R\Gamma_{S}({\mathfrak{X}},\mathbb{Q}))$ that $M_{S}({\mathfrak{X}})$ computes motivic homology and $R\Gamma_{S}({\mathfrak{X}},\mathbb{Q})$ motivic cohomology. ###### Notation 2.6. Let $G$ be an $S$-group scheme locally of finite type acting on an $S$-scheme $X$ locally of finite type, via a morphism $a$. For the quotient stack $\left[X/G\right]$, we can define a simplicial object in $S$-schemes locally of finite type, via its Bar-resolution $\cdots\rightrightarrows G\times_{S}X\rightrightarrows X,$ where the two maps $G\times_{S}X\rightarrow X$ are the projection $p$ and the action $a$. We denote the corresponding simplicial functor by $\textup{Bar}^{\bullet}(X,G)$. There is also an alternative way to define motives of algebraic stacks via the Bar-resolution. For each $n\geq 0$ let $\mathbb{Q}(\textup{Bar}^{n}(X,G))$ be the constant étale sheaf with coefficients in $\mathbb{Q}$ associated to $\textup{Bar}^{n}(X,G)$. This yields a simplicial object in étale sheaves of $S$-schemes locally of finite type with rational coefficients. The complex associated to this simplicial object induces a motive $M_{S}(\textup{Bar}^{\bullet}(X,G))$ in $\operatorname{DM}(S)$.
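Writing out the low simplicial degrees may help fix conventions (a standard unwinding of the Bar construction, not taken from the text):

```latex
% Low-degree terms of Bar^\bullet(X,G):
\textup{Bar}^{0}(X,G) = X,\qquad
\textup{Bar}^{1}(X,G) = G\times_{S}X,\qquad
\textup{Bar}^{2}(X,G) = G\times_{S}G\times_{S}X,\ \dots
% Face maps in degree 2: act on X, multiply in G, or project away the
% outer G-factor. For X = S with the trivial action this recovers the
% usual simplicial object with n-th term G^{\times_S n} computing BG.
```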
For $S=\operatorname{Spec}(k)$ the spectrum of a field, Hoskins and Pepin Lehalleur explain in [HPL21] that this definition is equivalent to their definition of a motive of an Artin stack. The natural question is whether $M_{S}(\textup{Bar}^{\bullet}(X,G))$ is equivalent to $M_{S}(\left[X/G\right])$ as defined in Definition 2.4. If $\left[X/G\right]$ is representable by a smooth scheme, then the answer is positive and follows by cohomological descent for $\operatorname{DM}$ with respect to the $h$-topology (cf. [CD19, Thm. 14.3.4, Prop. 5.2.10]). Thus, the answer remains positive for smooth Artin stacks, as $\operatorname{DM}$ satisfies $h$-descent by gluing (cf. [RS20, Thm. 2.2.16]). ###### Corollary 2.7. Let $k$ be a field and $S=\operatorname{Spec}(k)$. Let $X$ be a smooth $S$-scheme and $G$ be a smooth $S$-group scheme acting on $X$ with structure map $f\colon\left[X/G\right]\rightarrow S$. Then $M_{S}(\left[X/G\right])$ is equivalent to $M_{S}(\textup{Bar}^{\bullet}(X,G))$. ###### Proof. This follows from [HPL21, Prop. A.7] and the discussion above. ∎ We can use Corollary 2.7 to compute the motive of $B\mathbb{G}_{\textup{m}}$ as in [Tot16]. ###### Example 2.8. Let $k$ be a field. Further, let $\mathbb{G}_{\textup{m},k}$ act trivially on $\operatorname{Spec}(k)$. Then $M_{k}(\textup{B}\mathbb{G}_{\textup{m},k})\simeq\mathop{\textup{colim}}_{i\in\mathbb{N}}M_{k}(\mathbb{P}^{i}_{k})\simeq\bigoplus_{i\geq 0}1_{k}\langle i\rangle.$ ###### Remark 2.9. In the following we want to understand the Gysin sequence for algebraic stacks. Let us quickly recall it in the scheme case. Let $i\colon Z\hookrightarrow X$ be a closed immersion of $S$-schemes of pure codimension $n$ with open complement $U$. Let us assume that $Z$ and $X$ are smooth over $S$. In particular, $i$ is then automatically a regular closed immersion of codimension $n$. Then there exists a fibre sequence of the form $M_{S}(U)\rightarrow M_{S}(X)\rightarrow M_{S}(Z)\langle n\rangle$ (cf. [CD19, 11.3.4]).
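As a quick sanity check of this fibre sequence (an illustration with $S=\operatorname{Spec}(k)$ a field, using the convention $\langle i\rangle=(i)[2i]$), take the zero section $Z=\operatorname{Spec}(k)$ inside $X=\mathbb{A}^{1}_{k}$ with open complement $U=\mathbb{G}_{\textup{m}}$ and $n=1$:

```latex
% Gysin/localization sequence for \{0\} \subset \mathbb{A}^1_k:
M_{k}(\mathbb{G}_{\textup{m}})\longrightarrow M_{k}(\mathbb{A}^{1}_{k})\longrightarrow 1_{k}\langle 1\rangle.
% Homotopy invariance gives M_k(\mathbb{A}^1_k) \simeq 1_k, and the second map
% vanishes since \operatorname{Hom}(1_k,1_k(1)[2]) \cong \textup{CH}^1(\operatorname{Spec}(k)) = 0.
% Hence the sequence splits and we recover the well-known decomposition
M_{k}(\mathbb{G}_{\textup{m}})\simeq 1_{k}\oplus 1_{k}(1)[1].
```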
We are going to replace $S$ by a smooth Artin stack ${\mathfrak{Y}}$ over $S$ and $X$ by a smooth Artin stack ${\mathfrak{X}}$ over ${\mathfrak{Y}}$. For this, let us also recall the notion of a (regular) closed immersion of a certain codimension for Artin stacks. Let $\iota\colon{\mathfrak{Z}}\hookrightarrow{\mathfrak{X}}$ be a closed immersion of locally noetherian Artin stacks. Let $X\rightarrow{\mathfrak{X}}$ be a smooth atlas. Then $\iota$ is representable and we define the codimension of ${\mathfrak{Z}}$ as the codimension of ${\mathfrak{Z}}\times_{{\mathfrak{X}}}X$ in $X$ (cf. [Oss15, §6]). We can also define the notion of a regular immersion in that way (cf. [Sta22, 06FM]) and the notion of its codimension. In particular, a closed immersion of smooth Artin $S$-stacks ${\mathfrak{Z}}\hookrightarrow{\mathfrak{X}}$ is automatically regularly immersed and the codimension of the regular immersion agrees with the codimension as a closed substack. ###### Lemma 2.10 (The Gysin sequence). Let $f\colon{\mathfrak{X}}\rightarrow{\mathfrak{Y}}$ be a smooth schematic morphism of smooth Artin $S$-stacks. Further, let $i\colon{\mathfrak{Z}}\hookrightarrow{\mathfrak{X}}$ be a closed immersion of (pure) codimension $n$ such that ${\mathfrak{Z}}$ is smooth over ${\mathfrak{Y}}$ with open complement $j\colon{\mathfrak{U}}\rightarrow{\mathfrak{X}}$. Further, let us denote $f_{0}\coloneqq f\circ j$ and $\bar{f}\coloneqq f\circ i$. Then there exists the following fibre sequence $f_{0!}f_{0}^{!}1_{{\mathfrak{Y}}}\rightarrow f_{!}f^{!}1_{{\mathfrak{Y}}}\rightarrow\bar{f}_{!}\bar{f}^{!}1_{{\mathfrak{Y}}}\langle n\rangle.$ ###### Proof. Let $Y\rightarrow{\mathfrak{Y}}$ be a smooth atlas. Let us define $X\coloneqq Y\times_{{\mathfrak{Y}}}{\mathfrak{X}}$ and let $\check{C}(Y)_{\bullet}$ resp. $\check{C}(X)_{\bullet}$ denote the corresponding Čech nerves. By construction $\check{C}(X)_{\bullet}$ is obtained as $\check{C}(Y)_{\bullet}\times_{Y}X$.
So, by functoriality we get an adjunction $f_{\bullet!}\colon\operatorname{DM}(\check{C}(X)_{\bullet})\rightleftarrows\operatorname{DM}(\check{C}(Y)_{\bullet})\colon f_{\bullet}^{!}$ that induces the maps $f_{!}$ and $f^{!}$ after passing to the limit. By construction we have a pullback diagram $\begin{array}{ccc}\check{C}(X)_{\bullet}&\xrightarrow{f_{\bullet}}&\check{C}(Y)_{\bullet}\\\downarrow{\scriptstyle j_{X,\bullet}}&&\downarrow{\scriptstyle j_{Y,\bullet}}\\{\mathfrak{X}}&\xrightarrow{f}&{\mathfrak{Y}}.\end{array}$ In particular, by smoothness of the atlas and the exchange equivalence, we have $j^{*}_{Y,\bullet}f_{!}f^{!}1_{{\mathfrak{Y}}}\simeq f_{\bullet!}f_{\bullet}^{!}1_{\check{C}(Y)_{\bullet}}.$ Thus, by smoothness we can use that $\operatorname{DM}^{*}({\mathfrak{Y}})\simeq\operatorname{DM}^{!}({\mathfrak{Y}})$ and descent to see that $\mathop{\textup{colim}}_{\Delta}f_{\bullet!}f_{\bullet}^{!}1_{\check{C}(Y)_{\bullet}}\simeq f_{!}f^{!}1_{{\mathfrak{Y}}}.$ Analogously, we can write $\mathop{\textup{colim}}_{\Delta}f_{0\bullet!}f_{0\bullet}^{!}1_{\check{C}(Y)_{\bullet}}\simeq f_{0!}f^{!}_{0}1_{{\mathfrak{Y}}},\quad\mathop{\textup{colim}}_{\Delta}\bar{f}_{\bullet!}\bar{f}_{\bullet}^{!}1_{\check{C}(Y)_{\bullet}}\simeq\bar{f}_{!}\bar{f}^{!}1_{{\mathfrak{Y}}}.$ Therefore, we may assume that ${\mathfrak{Y}}$ is representable by a scheme and by representability of $f$ also ${\mathfrak{X}},{\mathfrak{U}}$ and ${\mathfrak{Z}}$ are representable by schemes. Hence, the result now follows from the classical Gysin sequence (cf. [CD19, 11.3.4]). ∎ Lastly, let us define Tate motives. As we mentioned in the introduction, the existence of a motivic $t$-structure is still an open problem. For a field $k$, Levine proved that under certain vanishing assumptions on motivic cohomology in $\operatorname{DM}(k)$, the so-called Beilinson-Soulé vanishing conjecture, such a $t$-structure exists on the full stable subcategory generated by Tate twists $1_{k}(n)$ (cf. [Lev93]). The Beilinson-Soulé vanishing conjecture holds for example for finite fields. ###### Definition 2.11 ([RS20]).
Let $X$ be an Artin $S$-stack locally of finite type. We define the category of Tate motives $\operatorname{DTM}(X)$ to be the full stable subcategory of $\operatorname{DM}(X)$ generated by $1_{X}(n)$, for $n\in\mathbb{Z}$. An element $M\in\operatorname{DM}(X)$ is Tate if $M\in\operatorname{DTM}(X)$. A map $f\colon X\rightarrow Y$ of Artin stacks locally of finite type over $S$ is called Tate if $f_{*}1_{X}$ is Tate. Levine further shows that the existence of a weight structure on Tate motives for a field implies that the heart of $\operatorname{DTM}(\mathbb{F}_{p})$ under the motivic $t$-structure is equivalent to the category of graded finite dimensional $\mathbb{Q}$-vector spaces $(\mathbb{Q}\textup{-VS}^{\mathbb{Z}})$. In particular, using a result of Wildeshaus, we see for example that $\operatorname{DTM}(\mathbb{F}_{p})\simeq{\mathcal{D}}^{b}(\mathbb{Q}\textup{-VS}^{\mathbb{Z}})$, where ${\mathcal{D}}^{b}$ denotes the bounded derived category (cf. [Wil09]). Let us give a particularly interesting example of a Tate map that will be used later on. ###### Example 2.12. Let $G$ be a split reductive $S$-group scheme and $B\subseteq G$ a Borel. Then we claim that the structure map of the flag variety $G/B\rightarrow S$ is Tate. Indeed, the Bruhat decomposition of $G/B$ yields a stratification by affine spaces indexed by the Weyl group. The length of each Weyl element yields a partial order on the associated Schubert varieties. With this order, one can show using standard arguments that $R\Gamma_{S}(G/B,\mathbb{Q})\simeq\bigoplus_{w\in W}1_{S}\langle l(w)-n\rangle$, where $n$ denotes the relative dimension of $G/B$ over $S$ (cf. [Yay23] or [Bac19] for more details on analogous problems). ###### Remark 2.13 ([RS20]).
If $f\colon X\rightarrow Y$ is a smooth morphism of Artin $S$-stacks locally of finite type and $X$ is smooth over $S$, then $D_{Y}(f_{*}1_{X})\simeq f_{!}D_{X}(1_{X})\simeq f_{!}1_{X}\langle\Omega_{X/S}\rangle$ and $D_{Y}(f_{!}1_{X})\simeq f_{*}1_{X}\langle\Omega_{X/S}\rangle$. Thus, $f_{*}1_{X}$ is Tate if and only if $f_{!}1_{X}$ is Tate. In the literature one also considers the stable cocomplete $\infty$-category generated by Tate twists (cf. [RS20]). This is usually also referred to as “Tate motives”, but we will distinguish these from our definition. An example of such a motive is given by the motive of $\textup{B}\mathbb{G}_{\textup{m},k}\rightarrow\operatorname{Spec}(k)$, the classifying stack of $\mathbb{G}_{\textup{m},k}$ over a field $k$. Indeed, in Example 2.8 we have seen $M_{k}(\textup{B}\mathbb{G}_{\textup{m}})\simeq\bigoplus_{n\geq 0}1_{k}\langle n\rangle$ and thus this lies in the ind-completion of the category of Tate motives. ###### Definition 2.14. Let $X$ be an Artin $S$-stack locally of finite type. We will call an $M\in\operatorname{DM}(X)$ completed Tate if it is already contained in the full stable cocomplete subcategory generated by $1_{X}(n)$. ## 3 Equivariant motives under reductive groups Let us give an outlook on the next subsections in the split case. The non-split case will then be a corollary. Assume that $S$ is connected and let $G$ be a reductive $S$-group scheme and $T$ a split maximal torus in $G$ with Weyl group scheme $W_{S}$. We will see in Section 3.1 that $W_{S}$ is a constant group scheme associated to the finite group $W\coloneqq W_{S}(S)$, which we call the Weyl group of $T$ in $G$. Now let $X$ be an $S$-scheme locally of finite type with $G$-action. In this section, we want to show that the natural map $f\colon\left[X/T\right]\rightarrow\left[X/G\right]$ is Tate, i.e. $f_{*}1_{\left[X/T\right]}$ is a Tate motive in $\operatorname{DM}(\left[X/G\right])$.
The key idea is to use the factorization $\left[X/T\right]\xrightarrow{g}\left[X/N\right]\xrightarrow{h}\left[X/G\right]$, where $N$ is the normalizer of $T$ in $G$. We note that by definition the map $g\colon\left[X/T\right]\rightarrow\left[X/N\right]$ is a $W$-torsor. As $W_{S}$ is finite étale, we see that after passage to an étale cover, $\left[X/T\right]$ is isomorphic to the disjoint union of copies of $\left[X/N\right]$ indexed by $W$. Thus, $g$ is automatically Tate. The map $h\colon\left[X/N\right]\rightarrow\left[X/G\right]$ is a $G/N$-torsor. Up to taking $W$-invariants, we can identify $G/N$ with $G/B$. On $G/B$, we have a stratification by Schubert cells, which are affine spaces. In this way, we can decompose $p_{*}1_{G/B}$ as a direct sum of twists and shifts indexed by $W$, where $p\colon G/B\rightarrow S$ is the structure map. This will enable us to reduce the question whether $h$ is Tate to ordinary $K$-theory. More precisely, it will be enough to show that $K_{n}(S)_{\mathbb{Q}}\rightarrow K_{n}(G/T)^{W}_{\mathbb{Q}}$ is an isomorphism for all $n$, which after reduction to the case of Chow groups is classically known. Before coming to our main result of this section, we will first introduce group actions on motives. In the end, we will see that our computations show that $R\Gamma_{S}(\left[X/G\right],\mathbb{Q})\cong R\Gamma_{S}(\left[X/T\right],\mathbb{Q})^{W}$, analogous to the case of Chow groups. ###### Remark 3.1. A key argument in this section is that torsors under finite étale group schemes have related motives by taking invariants of the action. This is a major obstruction to the generalization to integral coefficients, as we expect that this is only satisfied if we have étale descent (cf. [CD19, Thm. 3.3.32]). Nevertheless, using the theory of étale motives it should not be too difficult to obtain analogous results after inverting only the residue characteristics of our base scheme.
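To make this outlook concrete in the smallest case (purely as an illustration), take $G=\operatorname{SL}_{2}$ with $T$ the diagonal torus, so that $W=\{e,s\}\cong\mathbb{Z}/2$, $G/B\cong\mathbb{P}^{1}_{S}$ and the relative dimension is $n=1$:

```latex
% Two Schubert cells, of lengths l(e) = 0 and l(s) = 1, so the expected
% decomposition reads
R\Gamma_{S}(G/B,\mathbb{Q})\simeq 1_{S}\langle l(e)-1\rangle\oplus 1_{S}\langle l(s)-1\rangle
  =1_{S}\langle -1\rangle\oplus 1_{S}.
% Classically, s acts by -1 on the character lattice of T and hence by -1 on the
% twisted summand, so rationally only the unit survives taking W-invariants:
R\Gamma_{S}(G/B,\mathbb{Q})^{W}\simeq 1_{S}.
```

This matches the expected equivalence $1_{S}\simeq R\Gamma_{S}(G/N,\mathbb{Q})$ that is established in general below.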
### 3.1 Preliminaries on reductive group schemes Let $G$ be a reductive $S$-group scheme. In the following, we want to work with reductive group schemes that admit maximal tori. Even though this is automatically satisfied if $S$ is the spectrum of a field, this is not true for general base schemes (cf. [Con15]). We will first prove our results under the assumption that the maximal torus of $G$ is split. This is not equivalent to splitness of $G$ or even quasi-splitness. In general one can find two main notions of quasi-split in the literature. In SGA 3 a quasi-split reductive group scheme admits a Borel pair as well as a global section of the associated Dynkin diagram (cf. [SGA3, Exp. XXIV]). Another definition is given by Conrad (cf. [Con14]), where he only requires the existence of a Borel subgroup. If $S$ is affine, this automatically implies the existence of a maximal torus contained in $B$. As mentioned in the introduction, for applications it is enough to consider only the existence of a maximal torus. If the maximal torus is split, we can also talk about the associated Weyl group, as the next lemma shows. We want to thank Torsten Wedhorn for explaining this and elaborating that it is enough to consider reductive group schemes with split maximal tori instead of split reductive group schemes. ###### Lemma 3.2. Assume that $S$ is connected. Let $T$ be a split maximal torus inside $G$ with normalizer $N_{G}(T)$, then $W_{S}\coloneqq N_{G}(T)/T$ is represented by a constant finite group scheme. ###### Proof. Let us first recall some well-known facts (cf. [SGA3, Exp. XXII §3]): the quotient $W_{S}$ is represented by a finite étale $S$-scheme, and there is an open immersion $\iota\colon W_{S}\hookrightarrow\underline{\operatorname{Aut}}_{S}(T)$. As $T$ is split, $\underline{\operatorname{Aut}}_{S}(T)$ is the constant group associated to the automorphism group $\operatorname{Aut}(X^{\bullet}(T))$.
In particular, $\underline{\operatorname{Aut}}_{S}(T)\cong\coprod_{\operatorname{Aut}(X^{\bullet}(T))}S$. Note that $\underline{\operatorname{Aut}}_{S}(T)$ is separated and thus $\iota$ is finite. As any finite monomorphism is a closed immersion (cf. [Sta22, 03BB]), the morphism $\iota$ is an open and closed immersion. By assumption $S$ is connected and therefore the image of $\iota$ has to be a disjoint union of connected components, i.e. $W_{S}\cong\coprod_{H}S$, for some subset $H\subseteq\operatorname{Aut}(X^{\bullet}(T))$. Finally, since $W_{S}\rightarrow S$ is finite, the set $H$ has to be finite. ∎ ###### Definition 3.3. Assume that $S$ is connected and that $G$ admits a split maximal torus. Then we call the finite group $W\coloneqq(N_{G}(T)/T)(S)$ the Weyl group of $G$ (if $G$ is split reductive, this agrees with the classical notion). Let us give some examples of reductive groups that admit maximal tori. ###### Example 3.4. Let us give examples of schemes $S$ and groups $G$ as above. • If $S=\operatorname{Spec}(k)$ is the spectrum of a field, then any reductive group admits a maximal torus (cf. [SGA3, Exp. XIV Thm 1.1]). • If $S$ is any connected regular $\tilde{B}$-scheme of finite type, then any quasi-split reductive $S$-group scheme $G$ suffices. • Assume that $A$ is a Dedekind domain and $S=\operatorname{Spec}(A)$. Then $G$ admits a maximal torus if its generic fibre is quasi-split (cf. Appendix A). • Brian Conrad constructed examples of reductive groups over $\operatorname{Spec}(\mathbb{Z})$ that do not admit maximal tori (cf. [Con15]). All of these examples concern non-classical reductive groups and the author is not aware of any other counterexamples. ### 3.2 Torsors under finite groups To warm up, we first show that torsors under finite groups are Tate. In particular, it will be clear that the canonical map $\left[X/T\right]\rightarrow\left[X/N\right]$, considered in the beginning of Section 3, is Tate.
We will not need any connectedness hypothesis on $S$ for the results of this section. ###### Lemma 3.5. Let $\Gamma$ be a finite étale $S$-group scheme. Let $f\colon X\rightarrow Y$ be a $\Gamma$-torsor of Artin $S$-stacks locally of finite type. Then $f$ is Tate, i.e. $f_{*}1_{X}$ is a Tate motive in $\operatorname{DM}(Y)$. ###### Proof. Let $n$ denote the degree of $\Gamma$ over $S$. Then we claim that the natural map $\coprod_{i=1}^{n}1_{Y}\rightarrow f_{*}1_{X}$ induced via the unit of $f_{*}f^{*}$ is an equivalence. Indeed, by $h$-descent we may assume that $f$ is given by the trivial $\Gamma$-torsor $\Gamma\times_{S}Y\rightarrow Y$ and $Y$ is represented by a scheme. In particular, $f$ is a finite étale cover of degree $n$. After passage to an étale cover, we may assume that $\Gamma\times_{S}Y\cong\coprod_{i=1}^{n}Y$, implying the claim. ∎ Next, let us analyze the structure of the motives with respect to the base. For this, we need to understand group actions on motives and taking fixed points under these actions. An action of a group $\Gamma$ on a motive $M$ in $\operatorname{DM}(X)$ is a map $\Gamma\rightarrow\operatorname{Aut}_{\operatorname{DM}(X)}(M)$ that is a group homomorphism on $\pi_{0}$. Equivalently, it is a map $\Sigma_{\Gamma}\rightarrow\operatorname{DM}(X)$ (here $\Sigma_{\Gamma}$ denotes the deloop of the group $\Gamma$ seen as a discrete category; usually this is denoted by $\textup{B}\Gamma$, but to avoid confusion we have changed the notation). ###### Definition 3.6. Let $M$ be a motive in $\operatorname{DM}(X)$ with an action by a finite group $\Gamma$. Then we define the homotopy $\Gamma$-fixed points of $M$, denoted by $M^{h\Gamma}$, as the limit of the action map $\Sigma_{\Gamma}\rightarrow\operatorname{DM}(X)$. ###### Definition and Remark 3.7. Let $X$ be an Artin $S$-stack locally of finite type with an action by a finite group $\Gamma$. For each $\gamma$, we have an action map $a_{\gamma}\colon X\rightarrow X$.
By construction of $\operatorname{DM}(X)$ this defines a map $\gamma.\colon 1_{X}\rightarrow 1_{X}$ by lax-monoidality of the $*$-pushforward. This endows any $M\in\operatorname{DM}(X)$ with an action via $\gamma.$. We define the $\Gamma$-fixed points of $M$, denoted by $M^{\Gamma}$, as the image (note that the homotopy category of $\operatorname{DM}(X)$ is pseudo-abelian and hence we can define the image of any idempotent operator) of the map $p=\frac{1}{\#\Gamma}\sum_{\gamma\in\Gamma}\gamma.$ The canonical map $M^{\Gamma}\rightarrow M$ defines an equivalence $M^{\Gamma}\xrightarrow{\sim}M^{h\Gamma}$ (cf. [CD19, 3.3.21]). Let $f\colon X\rightarrow Y$ be a $\Gamma$-torsor of Artin stacks locally of finite type over $S$ (here we see $\Gamma$ as a constant group scheme on $S$). Then the group of $\Gamma$-torsor $Y$-automorphisms of $X$ is isomorphic to $\Gamma$, and thus we get a $\Gamma$-action on $f_{*}f^{*}E$ for any $E\in\operatorname{DM}(Y)$ (via $*$-pushforward of a $\Gamma$-torsor $Y$-automorphism). Note that $f_{*}f^{*}E$ can be used to compute the motivic cohomology of $X$ with coefficients in $E$ for smooth $f$, as $\operatorname{Hom}_{\operatorname{DM}(Y)}(1_{Y},f_{*}f^{*}E)=\operatorname{Hom}_{\operatorname{DM}(X)}(f^{*}1_{Y},f^{*}E)=\operatorname{Hom}_{\operatorname{DM}(Y)}(M_{Y}(X),E).$ In particular, we see that $\operatorname{Hom}_{\operatorname{DM}(Y)}(1_{Y},(f_{*}f^{*}E)^{\Gamma})=\operatorname{Hom}_{\operatorname{DM}(Y)}(M_{Y}(X),E)^{\Gamma}$ (kernels in triangulated categories are monomorphisms and thus induce short exact sequences on homotopy groups; dually, cokernels induce epimorphisms. Thus, taking homotopy groups of the Hom-spectrum $\underline{\operatorname{Hom}}_{\operatorname{DM}(Y)}(1_{Y},f_{*}f^{*}E)$, we see that $\pi_{0}$ precisely gives us the invariants of the induced group action on $\pi_{0}$). Note that as $\Gamma$ is finite, the $\Gamma$-torsor $f$ is étale and proper, hence $f_{*}f^{*}\simeq f_{!}f^{!}$.
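For a concrete instance of this projector (an elementary illustration), take $\Gamma=\mathbb{Z}/2=\{e,\sigma\}$ acting on a motive $M$:

```latex
% Averaging projector for \Gamma = \mathbb{Z}/2; idempotency is immediate from
% \sigma.^2 = \operatorname{id} (the action is a group homomorphism on \pi_0):
p=\tfrac{1}{2}(\operatorname{id}+\sigma.),\qquad
p\circ p=\tfrac{1}{4}\bigl(\operatorname{id}+2\,\sigma.+\sigma.^{2}\bigr)
  =\tfrac{1}{2}(\operatorname{id}+\sigma.)=p.
```

The image of $p$ is the fixed-point motive $M^{\Gamma}$; note that forming $p$ requires dividing by $\#\Gamma$, i.e. rational (or at least $\#\Gamma$-invertible) coefficients, in line with Remark 3.1.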
Let $k$ be a field and $X$ a scheme locally of finite type over $\operatorname{Spec}(k)$. Further, let $K/k$ be a finite Galois extension and let us denote the base change of $X$ to $\operatorname{Spec}(K)$ by $X_{K}$. The Chow groups of $X$ and $X_{K}$ are related by the fixed points under the Galois group, i.e. $\textup{CH}_{n}(X)=\textup{CH}_{n}(X_{K})^{\operatorname{Gal}(K/k)}$. As one expects, this also holds motivically. This is due to Ayoub and Cisinski-Déglise. As noted in Remark 3.1, we do not expect this to hold when we do not impose étale descent. Thus, we do not expect the next lemma to hold with integral coefficients. ###### Lemma 3.8. Let $\Gamma$ be a finite group. Let $f\colon X\rightarrow Y$ be a $\Gamma$-torsor of Artin $S$-stacks locally of finite type. Then the unit factors as $\operatorname{id}\rightarrow(f_{*}f^{*})^{\Gamma}\rightarrow f_{*}f^{*}$ and the map $\operatorname{id}\rightarrow(f_{*}f^{*})^{\Gamma}$ is an equivalence. ###### Proof. The factorization of the unit follows from the description of $(f_{*}f^{*})^{\Gamma}$ as a limit. We claim that $\operatorname{id}\rightarrow(f_{*}f^{*})^{\Gamma}$ is an equivalence. It suffices to check this after base change to a smooth atlas of $Y$. In particular, we may assume that $Y$ is represented by a scheme. Since $\operatorname{DM}_{\mathbb{Q}}$ satisfies $h$-descent, we may assume that $f$ is a trivial $\Gamma$-torsor (using [CD19, Prop. 3.3.31], we see that $M_{Y}(X)^{\Gamma}\simeq\varphi_{*}\varphi^{*}1_{Y}$, where $\varphi\colon({\mathscr{X}},\Gamma)\rightarrow Y$ is the induced morphism of the diagram $\Sigma_{\Gamma}\rightarrow\textup{Sch}_{S}$ that maps the single point of the category $\Sigma_{\Gamma}$ to $X$ and the morphisms to the actions; then we can base change using [CD19, Prop. 3.1.17]), in which case the claim follows from [Ayo07, Prop. 2.1.166]. ∎ ###### Example 3.9. Let us consider the $W$-torsor $f\colon\left[X/T\right]\rightarrow\left[X/N\right]$ from the beginning of Section 3.
As $W$ is finite, Lemma 3.8 implies that $(f_{*}f^{*}1_{\left[X/N\right]})^{W}\simeq(f_{!}f^{!}1_{\left[X/N\right]})^{W}\simeq 1_{\left[X/N\right]}$. After $\sharp$- resp. $*$-pushforward to the base $S$ along the structure map $\left[X/N\right]\rightarrow S$, we see that $M_{S}(\left[X/N\right])\simeq M_{S}(\left[X/T\right])^{W}\textup{ resp. }R\Gamma_{S}(\left[X/N\right],\mathbb{Q})\simeq R\Gamma_{S}(\left[X/T\right],\mathbb{Q})^{W}.$ Now assume that $S=\operatorname{Spec}(k)$ is the spectrum of a field. Applying the latter equivalence to motivic cohomology yields $A^{n}_{N}(X,m)\cong A^{n}_{T}(X,m)^{W}$ for the equivariant intersection theory of $X$. ### 3.3 The relation between the motives of $\left[X/T\right]$ and $\left[X/G\right]$ We have seen that the map $\left[X/T\right]\rightarrow\left[X/N\right]$ is Tate and how to use the action of the Weyl group to compute the motive of $\left[X/N\right]$ in terms of the motive of $\left[X/T\right]$. Now our goal is to show that the map $\left[X/N\right]\rightarrow\left[X/G\right]$ is Tate and how to compute the $!$-push/pull of this map. This will be achieved by analyzing the motive of $G/N$. Using that $G/T\rightarrow G/N$ is again a $W$-torsor, we will reduce to the case of the flag variety $G/B$. For this, we will need that equivariant motives do not see the action of split unipotent subgroups. Let us now recall the definition of a split unipotent subgroup. These are iterated extensions of vector bundles; e.g. a Borel $B$ containing a maximal split torus $T$ is an extension of $T$ by a split unipotent subgroup. ###### Definition 3.10. An algebraic $S$-group scheme $U$ is called split unipotent if there exists a normal series, i.e. a filtration $U=U_{n}\supseteq U_{n-1}\supseteq\dots\supseteq U_{0}=0$ such that $U_{i}$ is normal in $U_{i+1}$, with each successive quotient isomorphic to a vector bundle $\mathbb{V}({\mathcal{E}})$, where ${\mathcal{E}}$ is a finite locally free ${\mathcal{O}}_{S}$-module. ###### Example 3.11.
The $S$-subgroup scheme of unipotent matrices $\mathbb{U}_{n,S}$ in $\operatorname{GL}_{n,S}$ is split unipotent. More generally, let $G$ be a reductive $S$-group scheme and $P$ be a parabolic in $G$, then the unipotent radical $R_{u}(P)$ of $P$ is split unipotent (cf. [SGA3, Exp. XXVI Prop. 2.1]). ###### Lemma 3.12. Let $G$ be a linear algebraic $S$-group scheme. Consider a split exact sequence of $S$-group schemes $1\rightarrow U\rightarrow G\rightarrow H\rightarrow 1$ where $U$ is split unipotent. Choose a splitting $\pi\colon H\hookrightarrow G$. Let $X$ be an $S$-scheme locally of finite type with a $G$-action. Then the $!$-pullback induces an equivalence $\pi^{!}\colon\operatorname{DM}(\left[X/G\right])\xrightarrow{\sim}\operatorname{DM}(\left[X/H\right]).$ ###### Proof. This is analogous to the proof of [RS20, Prop. 2.2.11], but for completeness we give a proof. The morphism $\pi^{!}$ induces a morphism $\pi_{n}^{!}\colon\operatorname{DM}(\textup{Bar}^{n}(X,G))\rightarrow\operatorname{DM}(\textup{Bar}^{n}(X,H))$. Using [BN19, Lem. B.6], it suffices to show that $\pi^{!}\colon\operatorname{DM}(G)\rightarrow\operatorname{DM}(H)$ is fully faithful. By assumption $\pi$ is a $U$-torsor and using étale descent, we may assume that $\pi$ is given by the trivial $U$-torsor, i.e. $\pi\colon U\times H\rightarrow H$ is given by the projection. Replacing $H$ by $S$, it suffices to show that $\pi^{!}\colon\operatorname{DM}(S)\rightarrow\operatorname{DM}(U)$ is fully faithful. As $U$ is split unipotent, it has a filtration by subgroups $U_{i}$ with successive quotients isomorphic to a vector bundle. Using the same argument as above, we may assume that $U$ is a vector bundle, in which case the assertion is clear by homotopy invariance. ∎ Let $G$ be a split reductive $S$-group scheme and let $B$ be a Borel containing $T$ inside $G$. Let $X$ be an $S$-scheme locally of finite type with $B$-action.
The above lemma shows that $\operatorname{DM}(\left[X/T\right])\simeq\operatorname{DM}(\left[X/B\right])$. We also have that $G/T\rightarrow G/N$ is a $W$-torsor. The results on torsors under finite groups combined with the above lemma then yield that $R\Gamma_{S}(G/N,\mathbb{Q})\simeq R\Gamma_{S}(G/B,\mathbb{Q})^{W}$. The next result will use this fact. ###### Proposition 3.13. Let $G$ be a reductive $S$-group scheme with maximal torus $T$. Let $N$ denote the normalizer of $T$ in $G$. Further, let $X$ be an $S$-scheme locally of finite type with $G$-action and $f\colon\left[X/N\right]\rightarrow\left[X/G\right]$ the canonical map. Then the unit $1_{\left[X/G\right]}\rightarrow f_{*}f^{*}1_{\left[X/G\right]}$ is an equivalence. In particular, $f_{*}1_{\left[X/N\right]}$ is a Tate motive in $\operatorname{DM}(\left[X/G\right])$. Let us summarize the idea of the proof. We will show that the unit $1_{\left[X/G\right]}\rightarrow f_{*}f^{*}1_{\left[X/G\right]}$ is an equivalence. To see this we will use the following pullback diagram $\begin{array}{ccc}G/N\times_{S}X&\xrightarrow{\operatorname{pr}}&X\\\downarrow&&\downarrow\\\left[X/N\right]&\xrightarrow{f}&\left[X/G\right]\end{array}$ (cf. [Sta22, 04Y4]). In particular, after étale descent, we may assume that $f$ is given by the projection $G/N\rightarrow S$. Again, by étale descent, we may assume that $G$ is split with respect to $T$. Let $B$ be a Borel of $G$ containing $T$. Using our calculations about torsors under finite groups, it is enough to show that the induced map $1_{S}\rightarrow R\Gamma_{S}(G/T,\mathbb{Q})^{W}$ is an equivalence. But the equivalence $R\Gamma_{S}(G/T,\mathbb{Q})^{W}\simeq R\Gamma_{S}(G/B,\mathbb{Q})^{W}$ will reduce this question to a classical question about Chow rings, at least when $S$ is the spectrum of a field, namely whether the pullback map $\textup{CH}_{n}(S,m)_{\mathbb{Q}}\rightarrow\textup{CH}_{n}(G/B,m)_{\mathbb{Q}}^{W}$ is an isomorphism for all $n,m\in\mathbb{Z}$.
But as we can identify $\textup{CH}_{n}(G/B,m)_{\mathbb{Q}}^{W}$ with $\textup{CH}_{n}(G/G,m)_{\mathbb{Q}}$, this is clear (cf. [Kri13, Cor. 8.7]). In the case when $S$ is not the spectrum of a field, we have to use $K$-theory, and as the flag variety admits an integral model, we can reduce to $S=\operatorname{Spec}(\mathbb{Z})$. Up to $K_{1}$, the rational $K$-theories of $\mathbb{Z}$ and $\mathbb{Q}$ are isomorphic. Using that the flag variety admits a stratification by affine spaces, we may assume that $S=\operatorname{Spec}(\mathbb{Q})$, as the only part we have to worry about, $K_{1}(\mathbb{Z})_{\mathbb{Q}}$, vanishes. ###### Proof of Proposition 3.13. Throughout this proof, we will for readability denote the structure map of an Artin stack ${\mathcal{X}}$ to $S$ by $p_{{\mathcal{X}}}$. To show that the unit $1_{\left[X/G\right]}\rightarrow f_{*}f^{*}1_{\left[X/G\right]}$ is an equivalence, we may assume by étale descent that the map $f$ is given by the structure map $G/N\rightarrow S$. Again, by étale descent, we may assume that $G$ is split with respect to $T$. Then it is enough to show that the unit $1_{S}\rightarrow R\Gamma_{S}(G/N,\mathbb{Q})$ is an equivalence, since the pullback of this equivalence along $p_{X}$ yields the desired equivalence. Note that the natural map $g\colon G/T\rightarrow G/N$ is a $W$-torsor. Hence, by Lemma 3.8, we see that $1_{G/N}\simeq(g_{*}1_{G/T})^{W}$ and thus $R\Gamma_{S}(G/N,\mathbb{Q})\simeq p_{G/N*}(g_{*}1_{G/T})^{W}$. Since $p_{G/N*}$ is a right adjoint, it commutes with limits. Thus, we have $p_{G/N*}(g_{*}1_{G/T})^{W}\simeq(p_{G/N*}g_{*}1_{G/T})^{W}\simeq(p_{G/T*}1_{G/T})^{W}\simeq R\Gamma_{S}(G/T,\mathbb{Q})^{W}.$ By Lemma 3.12, we see that $R\Gamma_{S}(G/T,\mathbb{Q})\simeq R\Gamma_{S}(G/B,\mathbb{Q})$. By Example 2.12, we have that $R\Gamma_{S}(G/B,\mathbb{Q})\simeq\bigoplus_{w\in W}1_{S}\langle l(w)-n\rangle,$ where $n$ is the relative dimension of the flag variety $G/B$.
In particular, $R\Gamma_{S}(G/B,\mathbb{Q})$ and thus $R\Gamma_{S}(G/T,\mathbb{Q})$ is Tate. As the $W$-invariants are defined as the image of a map, we see that $R\Gamma_{S}(G/T,\mathbb{Q})^{W}$ is also Tate. The $\infty$-category of Tate motives over $S$ is the stable subcategory of $\operatorname{DM}(S)$ generated by $1(r)$, for $r\in\mathbb{Z}$. Therefore, the natural map $1_{S}\rightarrow R\Gamma_{S}(G/T,\mathbb{Q})^{W}$ is an equivalence if and only if the induced map (3.1) $\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S}(r),1_{S})\rightarrow\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S}(r),R\Gamma_{S}(G/T,\mathbb{Q})^{W})$ is an equivalence for all $r\in\mathbb{Z}$ (here $\underline{\operatorname{Hom}}$ denotes the inner $\operatorname{Hom}$). If $r>0$, then the $m$-th homotopy group of the left hand side is isomorphic to $K_{-2r+m}(S)^{(-r)}$ and thus vanishes, as the negative Adams eigenspaces vanish per definition (cf. [Sou85]). By Remark 3.7 and the computation of $p_{G/B*}1_{G/B}$, we see that $\pi_{m}\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S}(r),R\Gamma_{S}(G/T,\mathbb{Q})^{W})\cong(\bigoplus_{w\in W}K_{-2r+m}(S)^{(l(w)-n-r)})^{W},$ which also vanishes for $r>0$ as $l(w)-n\leq 0$. Therefore, we may assume from now on that $r\leq 0$. Before continuing with the proof, let us first assume that $S=\operatorname{Spec}(k)$ is the spectrum of a field $k$. Then by the above, we are reduced to showing that the restriction map $A^{r}(S,m-2r)\rightarrow A^{r}(G/B,m-2r)^{W}$ is an isomorphism (by homotopy invariance and the identification of motivic cohomology with higher Chow groups). But as $A^{r}(G/B,m-2r)^{W}\cong A^{r}(G/G,m-2r)=A^{r}(S,m-2r)$ (cf. [Kri13, Cor. 8.7]), the restriction above yields the identity on $A^{r}(S,m-2r)$, which is trivially an isomorphism. Hence, we are done in the case of $S=\operatorname{Spec}(k)$. Next, let us show how to reduce to the case of $S=\operatorname{Spec}(k)$.
As $r\leq 0$, we may use commutativity of $\underline{\operatorname{Hom}}$ with limits to see that (3.1) is an equivalence if and only if (3.2) $\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S}\langle r\rangle,1_{S})\rightarrow\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S}\langle r\rangle,R\Gamma_{S}(G/T,\mathbb{Q})^{W})$ is an equivalence. Further, since $S$ is finite dimensional (cf. [GW10, Prop. 14.109]), it is a fact that for every $n\in\mathbb{Z}$, the groups $K_{n}(S)^{(i)}$ vanish for all but finitely many $i\in\mathbb{Z}$ (cf. [Sou85, §2]). Therefore, the morphism (3.2) is an equivalence for all $r\leq 0$ if and only if $\bigoplus_{r\in\mathbb{Z}}\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S}\langle r\rangle,1_{S})\rightarrow\bigoplus_{r\in\mathbb{Z}}\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S}\langle r\rangle,R\Gamma_{S}(G/T,\mathbb{Q})^{W})$ is an isomorphism. Equivalently, we can write this morphism as $\bigoplus_{r\in\mathbb{Z}}\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S},1_{S}\langle r\rangle)\rightarrow\bigoplus_{r\in\mathbb{Z}}\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(M_{S}(G/T),1_{S}\langle r\rangle)^{W}.$ The motive $M_{S}(G/T)$ is a direct sum of shifts and twists of the unit of $S$ and therefore compact (cf. [CD19, Thm. 11.1.13]). By compactness of $1_{S}$ and $M_{S}(G/T)$, the above morphism is an equivalence if and only if the morphism $\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(1_{S},\bigoplus_{r\in\mathbb{Z}}1_{S}\langle r\rangle)\rightarrow\underline{\operatorname{Hom}}_{\operatorname{DM}(S)}(M_{S}(G/T),\bigoplus_{r\in\mathbb{Z}}1_{S}\langle r\rangle)^{W}$ is an equivalence. By construction of the rational $K$-theory spectrum in $\operatorname{DM}(S)$, denoted by $\textup{KGL}_{S,\mathbb{Q}}$, we see that $\bigoplus_{r\in\mathbb{Z}}1_{S}\langle r\rangle\simeq\textup{KGL}_{S,\mathbb{Q}}$ (cf. [CD19, Lem. 14.1.4]).
Thus, as $G/T$ is representable by a scheme (cf. [SGA3, Exp. IX Thm. 5.1]), the $n$-th homotopy group of the right hand side is isomorphic to $K_{n}(G/T)_{\mathbb{Q}}^{W}$, the rational higher $K$-theory of $G/T$. Further, the properties of the $K$-theory spectrum yield that the induced morphism $K_{n}(S)_{\mathbb{Q}}\rightarrow K_{n}(G/T)_{\mathbb{Q}}$ is given by the pullback along $G/T\rightarrow S$ (cf. [CD19, §13.1]). Taking $W$-invariants yields the map (3.3) $K_{n}(S)_{\mathbb{Q}}\rightarrow K_{n}(G/T)_{\mathbb{Q}}^{W}$ on the $n$-th homotopy groups. We have to show that this map is an isomorphism for all $n\geq 0$. In the following we will freely identify $K_{n}(G/T)_{\mathbb{Q}}$ with $K_{n}(G/B)_{\mathbb{Q}}$ and $K$-theory with $G$-theory, as we are only working with smooth schemes. As $G$ is split reductive, there exists a model $(G_{\mathbb{Z}},B_{\mathbb{Z}},T_{\mathbb{Z}})$ over $\operatorname{Spec}(\mathbb{Z})$ such that $G/B$ is the pullback of the associated flag variety $G_{\mathbb{Z}}/B_{\mathbb{Z}}$. As $G/B$ (resp. $G_{\mathbb{Z}}/B_{\mathbb{Z}}$) is stratified by affine spaces, we have an equivalence of $K$-theory spectra $K(G/B)_{\mathbb{Q}}\simeq K(G_{\mathbb{Z}}/B_{\mathbb{Z}})_{\mathbb{Q}}\otimes_{K(\mathbb{Z})_{\mathbb{Q}}}K(S)_{\mathbb{Q}}$ (cf. [Jos01, Cor. 4.3]); here $K(-)_{\mathbb{Q}}$ denotes the underlying $E_{\infty}$-ring spectrum in $\operatorname{DM}(\mathbb{Z})$, in particular $K(\mathbb{Z})_{\mathbb{Q}}=\textup{KGL}_{\mathbb{Z},\mathbb{Q}}$ and $K(S)_{\mathbb{Q}}$ (resp. $K(G/B)_{\mathbb{Q}}$) is just the $*$-pushforward of $\textup{KGL}_{S,\mathbb{Q}}$ (resp. $\textup{KGL}_{G/B,\mathbb{Q}}$) to $\operatorname{DM}(\mathbb{Z})$. 
We will show that the induced morphism $K(S)_{\mathbb{Q}}\rightarrow K(G/B)_{\mathbb{Q}}^{W}$ is an equivalence if and only if $K(\mathbb{Z})_{\mathbb{Q}}\rightarrow K(G_{\mathbb{Z}}/B_{\mathbb{Z}})_{\mathbb{Q}}^{W}$ is. This follows from the equivalence $(K(G_{\mathbb{Z}}/B_{\mathbb{Z}})_{\mathbb{Q}}\otimes_{K(\mathbb{Z})_{\mathbb{Q}}}K(S)_{\mathbb{Q}})^{W}\simeq K(G_{\mathbb{Z}}/B_{\mathbb{Z}})_{\mathbb{Q}}^{W}\otimes_{K(\mathbb{Z})_{\mathbb{Q}}}K(S)_{\mathbb{Q}},$ which we will now show holds true. The proof is similar to that of the analogous facts about representations of finite groups, but we will give an argument. For a scheme $X$ let $\textup{DK}(X)$ denote the $\infty$-category of $\textup{KGL}_{X,\mathbb{Q}}$-modules in $\operatorname{DM}(X)$. This can also be extended to stacks by étale descent.888Note that the étale localized $K$-theory spectrum of stacks does not agree with the genuine rational $K$-theory spectrum. As we will only need the existence of a six functor formalism, we will just ignore the technicalities. By abuse of notation, we will denote the tensor unit of $\textup{DK}(X)$ by $1_{X}$, and just for the following we will even drop the $X$ in the notation, as it is clear in which categories the units are. Also, we will again denote the inner-$\operatorname{Hom}$ by $\underline{\operatorname{Hom}}$. Let us fix the following diagram with pullback squares ${G_{\mathbb{Z}}/B_{\mathbb{Z}}}$${\mathbb{Z}}$${S}$${\left[W\backslash G_{\mathbb{Z}}/B_{\mathbb{Z}}\right]}$${\left[\mathbb{Z}/W\right]}$${\left[S/W\right]}$${\mathbb{Z}}$${S,}$$\scriptstyle{g}$$\scriptstyle{b}$$\scriptstyle{f}$$\scriptstyle{a^{\prime}}$$\scriptstyle{a}$ here $W$ acts on $\mathbb{Z}$ resp. $S$ trivially and $a,b$ are the natural projections. Note that by Lemma 3.8, we have that $(g_{*}g^{*})^{W}\simeq\operatorname{id}$. Using this, smooth base change, the projection formula999Note that $f$ is proper, as $W$ is a finite constant group scheme. 
and adjunctions, we get the following chain of equivalences $\displaystyle(K(G_{\mathbb{Z}}/B_{\mathbb{Z}})_{\mathbb{Q}}\otimes_{K(\mathbb{Z})_{\mathbb{Q}}}K(S)_{\mathbb{Q}})^{W}$ $\displaystyle\simeq\underline{\operatorname{Hom}}_{\textup{DK}(\left[\mathbb{Z}/W\right])}(1,g_{*}g^{*}(b_{*}1\otimes a^{\prime}_{*}1))^{W}$ $\displaystyle\simeq\underline{\operatorname{Hom}}_{\textup{DK}(\left[\mathbb{Z}/W\right])}(1,b_{*}1\otimes a^{\prime}_{*}1)$ $\displaystyle\simeq\underline{\operatorname{Hom}}_{\textup{DK}(\mathbb{Z})}(1,f_{*}(b_{*}1\otimes a^{\prime}_{*}1))$ $\displaystyle\simeq K(G_{\mathbb{Z}}/B_{\mathbb{Z}})_{\mathbb{Q}}^{W}\otimes_{K(\mathbb{Z})_{\mathbb{Q}}}K(S)_{\mathbb{Q}}.$ Note that we also have to use that $K(G_{\mathbb{Z}}/B_{\mathbb{Z}})_{\mathbb{Q}}$ is a direct sum of copies of the unit, to see that the $\underline{\operatorname{Hom}}$ agrees with the tensor product of spectra (see below for an explicit description). Thus, as the structure map $G/B\rightarrow S$ is induced via pullback, we see that $K(S)_{\mathbb{Q}}\rightarrow K(G/B)^{W}_{\mathbb{Q}}$ is induced via $K(\mathbb{Z})_{\mathbb{Q}}\otimes_{K(\mathbb{Z})_{\mathbb{Q}}}K(S)_{\mathbb{Q}}\xrightarrow{\alpha^{W}\otimes\operatorname{id}_{S}}K(G_{\mathbb{Z}}/B_{\mathbb{Z}})^{W}_{\mathbb{Q}}\otimes_{K(\mathbb{Z})_{\mathbb{Q}}}K(S)_{\mathbb{Q}},$ where $\alpha\colon G_{\mathbb{Z}}/B_{\mathbb{Z}}\rightarrow\operatorname{Spec}(\mathbb{Z})$ denotes the structure map. In particular, we may assume that $S=\operatorname{Spec}(\mathbb{Z})$. As $G/B$ admits an affine cell decomposition, we see that $K(G/B)_{\mathbb{Q}}\simeq\bigoplus_{w\in W}K(\mathbb{Z})_{\mathbb{Q}},\textup{ and }K(G_{\mathbb{Q}}/B_{\mathbb{Q}})_{\mathbb{Q}}\simeq\bigoplus_{w\in W}K(\mathbb{Q})_{\mathbb{Q}},$ where $G_{\mathbb{Q}}/B_{\mathbb{Q}}$ denotes the generic fiber of $G/B$ (cf. [Yay23]). 
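For the reader's convenience, the comparison of the rational $K$-groups of $\mathbb{Z}$ and $\mathbb{Q}$ used in the next step can be recalled from Quillen's localization sequence; this is a standard computation, independent of the present setting.

```latex
% Localization sequence for the open immersion Spec(Q) -> Spec(Z):
\cdots \rightarrow K_{n}(\mathbb{Z}) \rightarrow K_{n}(\mathbb{Q})
\xrightarrow{\partial} \bigoplus_{p} K_{n-1}(\mathbb{F}_{p})
\rightarrow K_{n-1}(\mathbb{Z}) \rightarrow \cdots
% By Quillen's computation, K_m(F_p) is finite for m > 0, hence
% K_m(F_p)_Q = 0 for m > 0. Rationalizing the sequence thus yields
%    K_n(Z)_Q \cong K_n(Q)_Q   for n >= 2 (and trivially for n = 0),
% while for n = 1 one finds K_1(Q)_Q \cong \bigoplus_p \mathbb{Q} via
% the p-adic valuations, whereas K_1(Z)_Q = (Z/2Z)_Q = 0.
```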
It is well known that for any $n\neq 1$ there is an isomorphism $K_{n}(\mathbb{Z})_{\mathbb{Q}}\cong K_{n}(\mathbb{Q})_{\mathbb{Q}}$ and that $K_{1}(\mathbb{Z})_{\mathbb{Q}}=0$. Thus, $K_{1}(\mathbb{Z})_{\mathbb{Q}}\rightarrow K_{1}(G/B)_{\mathbb{Q}}^{W}$ is an isomorphism, as both sides vanish. For the other $K$-groups, we can use the identification of $K_{n}(\mathbb{Z})_{\mathbb{Q}}$ with $K_{n}(\mathbb{Q})_{\mathbb{Q}}$ and may assume that $S=\operatorname{Spec}(\mathbb{Q})$. This concludes the proof via the argument given above in the case of a field. ∎ Combining Proposition 3.13 and the results of Section 3.2 now yields the main result of this section. ###### Theorem 3.14. Let $G$ be a reductive $S$-group scheme that admits a split maximal torus $T$ with Weyl group $W$. Assume $G$ acts on an $S$-scheme $X$ locally of finite type. Then the natural map $\left[X/T\right]\rightarrow\left[X/G\right]$ is Tate. Further, if $S$ is connected, we have $R\Gamma_{S}(\left[X/G\right],\mathbb{Q})\simeq R\Gamma_{S}(\left[X/T\right],\mathbb{Q})^{W}.$ ###### Proof. Let $N$ be the normalizer of $T$ in $G$. We can factor the map in the theorem as $\left[X/T\right]\xrightarrow{f}\left[X/N\right]\xrightarrow{g}\left[X/G\right]$. The first part of the theorem follows immediately from Lemma 3.5 and Proposition 3.13. For the proof of the second claim, we apply Lemma 3.8 and Proposition 3.13 and get $\displaystyle(p_{\left[X/T\right]*}1_{\left[X/T\right]})^{W}\simeq(p_{\left[X/G\right]*}g_{*}f_{*}1_{\left[X/T\right]})^{W}$ $\displaystyle\simeq p_{\left[X/G\right]*}g_{*}(f_{*}1_{\left[X/T\right]})^{W}$ $\displaystyle\simeq p_{\left[X/G\right]*}g_{*}1_{\left[X/N\right]}\simeq p_{\left[X/G\right]*}1_{\left[X/G\right]}.$ ∎ Let $S=\operatorname{Spec}(k)$ be the spectrum of a field and $G$ a split reductive group over $k$ with split maximal torus $T$ and Weyl group $W$. Edidin and Graham have shown in [EG98] that for any smooth scheme $X$ with $G$-action, we have $A^{\bullet}_{G}(X)\cong A^{\bullet}_{T}(X)^{W}$. 
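To illustrate the Edidin-Graham isomorphism in its most classical instance (this computation is standard in equivariant intersection theory and is recalled here only for orientation), take $G=\operatorname{GL}_{n}$ acting trivially on a point:

```latex
% G = GL_n over k, T the diagonal maximal torus, W = S_n permuting the t_i:
A^{\bullet}_{T}(\operatorname{pt}) \cong \mathbb{Z}[t_{1},\dots,t_{n}],
\qquad
A^{\bullet}_{\operatorname{GL}_{n}}(\operatorname{pt}) \cong
\mathbb{Z}[c_{1},\dots,c_{n}] = \mathbb{Z}[t_{1},\dots,t_{n}]^{S_{n}},
% where c_i is the i-th elementary symmetric polynomial in t_1, ..., t_n,
% in accordance with A_G = (A_T)^W.
```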
Later, Krishna has shown in [Kri13, Cor. 8.7] that this result can be generalized to higher Chow groups, at least when $X$ is quasi-projective. Our Theorem 3.14 is the motivic analog of both of these results. In fact, we will see in the next corollary that applying Theorem 3.14 to motivic cohomology recovers both Edidin-Graham’s and Krishna’s results. Moreover, as any reductive group over a field splits after passage to a finite Galois extension, we can use the results of Section 3.2 to extend this comparison of (higher) equivariant Chow groups to arbitrary reductive groups over fields. We also note that Levine showed that Bloch’s higher Chow groups satisfy a localization sequence even in the non-quasi-projective case. The following corollary summarizes this discussion. ###### Corollary 3.15. Assume that $S=\operatorname{Spec}(k)$ is the spectrum of a field. Let $G$ be a reductive $S$-group scheme with maximal torus $T$. Let $W$ denote the absolute Weyl101010Let $\bar{k}$ be an algebraic closure of $k$. Then ${\mathcal{W}}\coloneqq N_{G}(T)/T$ is a finite étale group scheme over $S$ and we set the absolute Weyl group to be $W\coloneqq{\mathcal{W}}(\bar{k})$. group of $T$ in $G$. Assume $G$ acts on a smooth finite type $S$-scheme $X$. Then we have $R\Gamma_{S}(\left[X/G\right],\mathbb{Q})\simeq R\Gamma_{S}(\left[X/T\right],\mathbb{Q})^{W}.$ In particular, applying this result to motivic cohomology yields for all $n,m\in\mathbb{Z}$ $A^{n}_{G}(X,m)_{\mathbb{Q}}\cong A^{n}_{T}(X,m)_{\mathbb{Q}}^{W}.$ ###### Proof. Any reductive group over $k$ becomes split after passing to a finite Galois extension $K/k$ (cf. [SGA3, Exp. XXII Cor. 2.4]). Thus, let $K$ be such an extension, so that $T_{K}$ is a split maximal torus. 
Then we have the following diagram with pullback squares ${\left[X_{K}/T_{K}\right]}$${\left[X_{K}/G_{K}\right]}$${\operatorname{Spec}(K)}$${\left[X/T\right]}$${\left[X/G\right]}$${\operatorname{Spec}(k).}$$\scriptstyle{f}$$\scriptstyle{p_{T_{K}}}$$\scriptstyle{g}$$\scriptstyle{p_{G_{K}}}$$\scriptstyle{p_{K}}$$\scriptstyle{p_{T}}$$\scriptstyle{p_{G}}$ As $K/k$ is a finite Galois extension, the morphisms $p_{K}$ and thus also $f$ are $\Gamma\coloneqq\operatorname{Gal}(K/k)$-torsors. Then Lemma 3.8 yields the equivalence $p_{G*}1_{\left[X/G\right]}\simeq(p_{K*}p_{K}^{*}p_{G*}1_{\left[X/G\right]})^{\Gamma}.$ As the diagram above has cartesian squares, we can use smooth base change to see that $p_{K}^{*}p_{G*}\simeq p_{G_{K}*}g^{*}$. But by Theorem 3.14, we have $p_{G_{K}*}g^{*}1_{\left[X/G\right]}\simeq R\Gamma_{K}(\left[X_{K}/G_{K}\right],\mathbb{Q})\simeq R\Gamma_{K}(\left[X_{K}/T_{K}\right],\mathbb{Q})^{W}.$ Commutativity of the above diagram yields $p_{K*}R\Gamma_{K}(\left[X_{K}/T_{K}\right],\mathbb{Q})\simeq p_{T*}f_{*}1_{\left[X_{K}/T_{K}\right]}.$ Thus, we have $p_{G*}1_{\left[X/G\right]}\simeq((p_{T*}f_{*}1_{\left[X_{K}/T_{K}\right]})^{W})^{\Gamma}.$ As limits commute with limits, we may write the right hand side as $(p_{T*}(f_{*}f^{*}1_{\left[X/T\right]})^{\Gamma})^{W}$ and again by Lemma 3.8, we have $(f_{*}f^{*}1_{\left[X/T\right]})^{\Gamma}\simeq 1_{\left[X/T\right]}$, concluding the proof. The result about motivic cohomology follows from Remark 3.7 by using [RS20, Thm. 2.2.10], which proves that motivic cohomology for smooth quotient stacks is computed by the higher equivariant Chow groups of Edidin-Graham.111111In loc.cit. they assume some properties on the groups and on $X$, as Edidin-Graham need these assumptions to compare higher Chow theory of stacks and equivariant higher Chow theory [EG98, Prop. 13 (b)]. The assumptions in loc.cit. 
are needed as Bloch only shows the existence of a long exact sequence for higher Chow groups in the case where $X$ is quasi-projective. This result was extended by Levine to all finite type $k$-schemes (cf. [Lev01]). Thus, the comparison of Edidin-Graham and hence also of Richarz-Scholbach goes through in the case of the corollary. ∎ We can apply the same reasoning as above to see in the next corollary that the map $\left[X/T\right]\rightarrow\left[X/G\right]$ is Tate even if $T$ is not split. The main point is that groups of multiplicative type over locally noetherian normal schemes are isotrivial, i.e. become diagonalizable after passing to a finite étale cover (cf. [SGA3, Exp. X Thm. 5.16]). In particular, after passing to a Galois cover of our base, we can reduce to Theorem 3.14 using the results of Section 3.2. ###### Corollary 3.16. Let $S$ be a connected scheme. Let $G$ be a reductive $S$-group scheme that admits a maximal torus $T$. Assume $G$ acts on an $S$-scheme $X$ locally of finite type. Then the natural map $\left[X/T\right]\rightarrow\left[X/G\right]$ is Tate. ###### Proof. As any étale cover of a connected scheme can be refined by a Galois cover (cf. [Sta22, 03SF]), there exists a Galois cover $\tilde{S}\rightarrow S$ such that $\tilde{T}\coloneqq T\times_{S}\tilde{S}$ splits (cf. [SGA3, Exp. X Thm. 5.16]). Let us denote the base change of $X$ resp. $G$ under $\tilde{S}\rightarrow S$ with $\tilde{X}$ resp. $\tilde{G}$. We get an induced action of $\tilde{G}$ on $\tilde{X}$. In particular, we get a pullback square ${\left[\tilde{X}/\tilde{T}\right]}$${\left[\tilde{X}/\tilde{G}\right]}$${\left[X/T\right]}$${\left[X/G\right].}$$\scriptstyle{\tilde{f}}$$\scriptstyle{h}$$\scriptstyle{g}$$\scriptstyle{f}$ Let $\Gamma$ denote the finite étale $S$-automorphism group of $\tilde{S}$. Then $g$ and $h$ are $\Gamma$-torsors. 
Thus, we can compute $f_{*}1_{\left[X/T\right]}\simeq(f_{*}h_{*}1_{\left[\tilde{X}/\tilde{T}\right]})^{\Gamma}\simeq(g_{*}\tilde{f}_{*}1_{\left[\tilde{X}/\tilde{T}\right]})^{\Gamma}.$ The morphism $\tilde{f}$ is Tate by Theorem 3.14 and $g$ is also Tate by Lemma 3.5. Further, taking invariants under finite group actions is given by taking the image of the fixed point operator (cf. Remark 3.7), i.e. by extensions, and thus preserves Tate motives. In particular, $f_{*}1_{\left[X/T\right]}$ is a Tate motive. ∎ ## 4 Torsors under tori Let us fix a torus $T$ over $S$ (cf. [SGA3, Exp. IX]). In this section, we want to understand the motivic homotopy theory of torsors under $T$. More precisely, let us consider the following situation. Let $X\rightarrow Y$ be a $T$-torsor of Artin $S$-stacks locally of finite type. We want to understand the relation between $M_{S}(X)$ and $M_{S}(Y)$. Let us recall the classical case of Chow theory. For this paragraph let us assume that $X$ and $Y$ are smooth Artin stacks over $\operatorname{Spec}(k)$, where $k$ is a field. Then the group $\hat{T}$ of characters of $T$ acts on $A^{\bullet}(Y)$ in the following way. Let $\chi\in\hat{T}$ be a character and consider its associated $1$-dimensional representation $\kappa(\chi)$. The quotient $L_{\chi}\coloneqq X\times^{T}\kappa(\chi)$ is representable by a line bundle over $Y$. Multiplication with the first Chern class of $L_{\chi}$ yields an action of $\hat{T}$ on $A^{\bullet}(Y)$. If $T$ is split, then for Chow rings it is known that $A^{\bullet}(X)\cong A^{\bullet}(Y)/\hat{T}A^{\bullet}(Y).$ Our goal is to extend this result to motivic homotopy theory, so that it generalizes the result about Chow theory and yields a similar statement for $K$-theory. Even though we work with Beilinson motives, we will remark at the end how to extend this to integral $K$-theory under some assumptions on $X$ and $Y$. ### 4.1 The motive of $T$-torsors Let us denote the character group of $T$ with $\hat{T}$. 
Let $X\rightarrow Y$ be a $T$-torsor of Artin stacks locally of finite type over $S$ and $\chi\in\hat{T}$ a character. Let $\mathbb{G}_{\textup{m},S}$ act via left multiplication on $\mathbb{A}^{1}_{S}$. Then $T$ acts via $\chi$ on $\mathbb{A}^{1}_{S}$ and thus, $X\times^{T}\mathbb{A}^{1}_{S}\rightarrow X/T\cong Y$ yields a line bundle over $Y$. The action of the first Chern class of $X\times^{T}\mathbb{A}^{1}_{S}$ on the motivic cohomology will be described by a Gysin sequence (cf. Proposition 4.2). ###### Notation 4.1. Assume that $T$ is split. In the following we want to split up the $T$-torsor $f\colon X\rightarrow Y$ into a sequence of $\mathbb{G}_{\textup{m},S}$-torsors. Note that by splitness $T\cong\mathbb{G}_{\textup{m},S}^{r}$ for some $r\in\mathbb{N}$. Fixing a numbering of the $\mathbb{G}_{\textup{m},S}$-components of $T$, we can embed for any $1\leq k\leq r$ the product $\mathbb{G}_{\textup{m},S}^{k}$ into $T$ by $\operatorname{id}_{\mathbb{G}_{\textup{m},S}^{k}}\times 1^{r-k}$. Then $\mathbb{G}_{\textup{m},S}^{k}$ acts on $X$ via this embedding. We get a sequence $X\rightarrow X/\mathbb{G}_{\textup{m},S}\rightarrow X/\mathbb{G}_{\textup{m},S}^{2}\rightarrow\dots\rightarrow X/T\cong Y$ of $\mathbb{G}_{\textup{m},S}$-torsors. We set $X_{i}\coloneqq X/\mathbb{G}_{\textup{m},S}^{i}$ and denote the induced maps $X_{i}\rightarrow Y$ with $f_{i}$. ###### Proposition 4.2. Assume that $T\cong\mathbb{G}_{\textup{m},S}^{r}$ for some $r\in\mathbb{N}$. Let $X\rightarrow Y$ be a $T$-torsor of smooth Artin stacks over $S$. Then, there exists a filtration $M_{Y}(X)=M_{0}\rightarrow M_{1}\rightarrow\dots\rightarrow M_{r}=1_{Y}$ in $\operatorname{DM}(Y)$, where $M_{i}\coloneqq M_{Y}(X_{i})$, such that the cofiber of $M_{Y}(X_{i-1})\rightarrow M_{Y}(X_{i})$ is given by $M_{Y}(X_{i})\langle 1\rangle$ and the map $M_{Y}(X_{i})\rightarrow M_{Y}(X_{i})\langle 1\rangle$ is induced by multiplication with $c_{1}(X_{i-1}\times^{\mathbb{G}_{\textup{m}}}\mathbb{A}^{1}_{S})$. 
###### Proof. This follows by successively using [HPL21, Prop. 2.32]. But for completeness let us recall the argument. The morphism $X_{i-1}\rightarrow X_{i}$ is a $\mathbb{G}_{m}$-torsor. In particular, ${\mathcal{L}}_{i}\coloneqq X_{i-1}\times_{S}^{\mathbb{G}_{\textup{m},S}}\mathbb{A}^{1}_{S}$ is a line bundle over $X_{i}$. Let $s\colon X_{i}\hookrightarrow{\mathcal{L}}_{i}$ denote the zero section. Certainly, the complement of the closed immersion $s$ is isomorphic to $X_{i-1}$. Then the Gysin sequence of Lemma 2.10 yields a fibre sequence $M_{Y}(X_{i-1})\rightarrow M_{Y}({\mathcal{L}}_{i})\simeq M_{Y}(X_{i})\xrightarrow{\varphi}M_{Y}(X_{i})\langle 1\rangle,$ where the equivalence $M_{Y}({\mathcal{L}}_{i})\simeq M_{Y}(X_{i})$ is homotopy invariance. By construction of the Gysin sequence, $\varphi$ is given by multiplication with $c_{1}({\mathcal{L}}_{i})$. This concludes the proof. ∎ ###### Corollary 4.3. Assume $S$ is connected and let $f\colon X\rightarrow Y$ be a $T$-torsor of smooth Artin stacks over $S$. Then $f$ is a Tate map. If $T$ is split, we may drop the connectedness hypothesis on $S$. ###### Proof. If $T$ is split, then Proposition 4.2 implies that $M_{Y}(X)$ is a successive extension of twists of $1_{Y}$. Thus, the result follows from Remark 2.13. In the non-split case, this is completely analogous to the proof of Corollary 3.16. ∎ ### 4.2 Motivic cohomology of $T$-torsors Now we will assume that $T$ is split and fix an $r\in\mathbb{N}$ such that $T\cong\mathbb{G}_{\textup{m},S}^{r}$. For any Artin stack $X$ locally of finite type over $S$, we will denote its motivic cohomology with $H^{p}(X,\mathbb{Q}(n))\coloneqq\operatorname{Hom}_{\operatorname{DM}(S)}(M_{S}(X),\mathbb{Q}(n)[p]).$ If $X$ is representable by a smooth $S$-scheme, then we have $H^{p}(X,\mathbb{Q}(n))\cong K_{2n-p}(X)^{(n)}$. If $S=\operatorname{Spec}(k)$ is the spectrum of a field and $X$ is smooth over $S$, we have $H^{p}(X,\mathbb{Q}(n))\cong A^{n}(X,2n-p)$. 
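In particular, specializing both identifications to the extremal degree $p=2n$ recovers the familiar groups (with rational coefficients throughout):

```latex
% Setting p = 2n in the identifications above:
H^{2n}(X,\mathbb{Q}(n)) \cong K_{0}(X)^{(n)}
  % (X a smooth S-scheme; weight-n Adams eigenspace of K_0)
H^{2n}(X,\mathbb{Q}(n)) \cong A^{n}(X,0) = \operatorname{CH}^{n}(X)_{\mathbb{Q}}
  % (S = Spec(k), X smooth; Bloch's higher Chow group in degree 0
  %  is the classical Chow group)
```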
Note that for a smooth Artin $S$-stack $X$ the motivic cohomology vanishes automatically in certain degrees by descent and the vanishing of negative $K$-theory for regular schemes, i.e. we have that $H^{p}(X,\mathbb{Q}(n))\cong 0$ for $p>2n$. ###### Notation 4.4. Consider a character $\chi\in\hat{T}$. The associated line bundle $L_{\chi}\coloneqq X\times_{S}^{T}\mathbb{V}({\mathcal{E}}_{\chi})$ yields a map $H^{p-2}(Y,\mathbb{Q}(n-1))\rightarrow H^{p}(Y,\mathbb{Q}(n))$, given by multiplication with the first Chern class of $L_{\chi}$. We denote the image of this map, a subgroup of $H^{p}(Y,\mathbb{Q}(n))$, with $c_{1}(L_{\chi})H^{p}(Y,\mathbb{Q}(n))$. ###### Proposition 4.5. In the setting of Proposition 4.2 let us further fix an $n\in\mathbb{Z}$. Then, we have $H^{2n}(X,\mathbb{Q}(n))\cong H^{2n}(Y,\mathbb{Q}(n))/\hat{T}H^{2n}(Y,\mathbb{Q}(n)).$ ###### Proof. First let us note that it is enough121212Any character is generated by primitive characters and the corresponding $1$-dimensional representation is given by the associated tensor product. to show that $H^{2n}(X,\mathbb{Q}(n))=H^{2n}(Y,\mathbb{Q}(n))/\langle c_{1}(X\times_{S}^{T}\mathbb{V}({\mathcal{E}}_{\chi_{i}}))H^{2n}(Y,\mathbb{Q}(n))\rangle,$ where the $\chi_{i}$ are primitive characters in $\hat{T}$. Let $X=X_{0}\rightarrow X_{1}\rightarrow\dots\rightarrow X_{r}=Y$ be the sequence of Proposition 4.2. For each $1\leq i\leq r$ this yields a long exact sequence on motivic cohomology $\displaystyle\dots\rightarrow H^{2n-2}(X_{i},\mathbb{Q}(n-1))$ $\displaystyle\rightarrow H^{2n}(X_{i},\mathbb{Q}(n))$ $\displaystyle\rightarrow H^{2n}(X_{i-1},\mathbb{Q}(n))\rightarrow H^{2n-1}(X_{i},\mathbb{Q}(n-1))\rightarrow\dots.$ We have $H^{2n-1}(X_{i},\mathbb{Q}(n-1))=0$ by the vanishing above, and thus get an exact sequence of the form $H^{2n-2}(X_{i},\mathbb{Q}(n-1))\xrightarrow{a}H^{2n}(X_{i},\mathbb{Q}(n))\xrightarrow{b}H^{2n}(X_{i-1},\mathbb{Q}(n))\rightarrow 0.$ The map $b$ is the usual pullback on motivic cohomology. 
The map $a$ is induced by multiplication with the Chern class of the line bundle ${\mathcal{L}}_{i}=X_{i-1}\times^{\mathbb{G}_{\textup{m}}}_{S}\mathbb{V}({\mathcal{E}}_{\chi_{i}})$. As $X_{r}=Y$, we have $H^{2n}(X_{r-1},\mathbb{Q}(n))\cong H^{2n}(Y,\mathbb{Q}(n))/c_{1}({\mathcal{L}}_{r})H^{2n}(Y,\mathbb{Q}(n))$. Hence, inductively we see that $H^{2n}(X,\mathbb{Q}(n))\cong H^{2n}(Y,\mathbb{Q}(n))/\langle c_{1}({\mathcal{L}}_{i})H^{2n}(Y,\mathbb{Q}(n))\rangle_{1\leq i\leq r}.$ We are left to show that $c_{1}({\mathcal{L}}_{i})H^{2n}(Y,\mathbb{Q}(n))=c_{1}(X\times_{S}^{T}\mathbb{V}({\mathcal{E}}_{\chi_{i}}))H^{2n}(Y,\mathbb{Q}(n))$. For this let us start with $i=r$. Then by construction $X\times_{S}^{T}\mathbb{V}({\mathcal{E}}_{\chi_{r}})\cong X_{r-1}\times_{S}^{\mathbb{G}_{\textup{m}}}\mathbb{V}({\mathcal{E}}_{\chi_{r}})$. Inductively, we may replace $Y$ by $X_{i}$, where the claim again follows by construction. ∎ ###### Corollary 4.6. Let $S=\operatorname{Spec}(k)$ be the spectrum of a field and let $X\rightarrow Y$ be a $T$-torsor of smooth Artin stacks over $S$. Then $A^{\bullet}(X)_{\mathbb{Q}}\cong A^{\bullet}(Y)_{\mathbb{Q}}/\hat{T}A^{\bullet}(Y)_{\mathbb{Q}}.$ ###### Proof. This follows immediately from Proposition 4.5. ∎ ###### Remark 4.7. Proposition 4.2 and Proposition 4.5 can be extended to other cohomology theories in the following way. Let us fix a $T$-torsor $X\rightarrow Y$ of smooth Artin $S$-stacks. 1. (1) (Rational étale localized cohomology theories) Let $M\in\textup{SH}(S)_{\mathbb{Q},\operatorname{\acute{e}t}}$ be an oriented $E_{\infty}$-ring spectrum and let us denote its pullback to any smooth Artin $S$-stack $Z$ with $M_{Z}$. The orientation of $M$ yields a Chern class map $c_{1}\colon\operatorname{Pic}(Z)\rightarrow H^{2}(Z,M(1))\coloneqq\operatorname{Hom}_{\textup{SH}(Z)}(1_{Z},M_{Z}(1)[2]).$ In the same fashion as before, for any character $\chi\in\hat{T}$, we can define $c_{1}(L_{\chi})H^{n}(Y,M)$. 
Assume there exists a $p\in\mathbb{Z}$ such that $M$ satisfies $H^{m}(Z,M)\coloneqq\operatorname{Hom}_{\textup{SH}(Z)}(1_{Z},M_{Z}[m])=0$ for all $m>p$. Then $H^{p}(X,M)\cong H^{p}(Y,M)/\langle c_{1}(X\times_{S}^{T}\mathbb{V}({\mathcal{E}}_{\chi_{i}}))H^{p}(Y,M)\rangle.$ If we take for example $M=\textup{KGL}_{S,\mathbb{Q}}$, the rational $K$-theory spectrum in $\textup{SH}(S)$, then we get $K_{0}(X)_{\mathbb{Q}}^{\operatorname{\acute{e}t}}\cong K_{0}(Y)_{\mathbb{Q}}^{\operatorname{\acute{e}t}}/\langle c_{1}(X\times_{S}^{T}\mathbb{V}({\mathcal{E}}_{\chi_{i}}))K_{0}(Y)_{\mathbb{Q}}^{\operatorname{\acute{e}t}}\rangle,$ where $K_{0}(-)_{\mathbb{Q}}^{\operatorname{\acute{e}t}}$ denotes the étale localized rational $K$-theory. 2. (2) (Integral $K$-theory) The extension to integral $K$-theory is more subtle, as we have to restrict ourselves to certain algebraic stacks131313These are called scalloped stacks (cf. [KR21]). to make sense of the stable homotopy category. For simplicity, we may assume that $X=\left[X^{\prime}/H\right]$ and $Y=\left[Y^{\prime}/F\right]$ are represented by quotients of quasi-projective schemes by diagonalizable group schemes. Then there is a well-defined notion of a stable homotopy category $\textup{SH}(X)$ resp. $\textup{SH}(Y)$ together with a functorial $E_{\infty}$-ring object KGL that represents equivariant $K$-theory (cf. [KR21]). Further, Bott-periodicity yields an orientation on KGL. In particular, using that $\textup{KH}(X)$ and $\textup{KH}(Y)$ are connective141414This holds Nisnevich locally by loc.cit. and by descent for KH. (cf. [HK19, Thm. 5.7]), we see that Corollary 4.6 holds for integral $K_{0}$ in this case, i.e. $K_{0}(X)\cong K_{0}(Y)/\hat{T}K_{0}(Y).$ This result can be glued to the class of so-called scalloped stacks (cf. [KR21] for the notion of scalloped stacks and the construction of SH). ## 5 Motivic cohomology of quotients up to isogeny We are now ready to answer the main question of this article ($\ast$ ‣ 1). 
Let us assume that $S$ is connected and fix a reductive $S$-group scheme $G$ that admits a maximal torus $T$. For completeness, we will work in a more general setting than at the beginning of the introduction. Let $P$ resp. $Q$ be parabolics inside $G$ with Levi-components $L$ resp. $M$. Assume that $T$ is contained in $L$. Let $\varphi\colon L\rightarrow M$ be an isogeny. Then $L$ acts on $G$ via $(l,g)\mapsto lg\varphi(l)^{-1}$. We are interested in the quotient of this action, which we denote by $\left[G/_{\varphi}L\right]$, or rather in its motive. To do so, we follow the idea of Brokemper in the proof of [Bro16, Prop. 1.2]. As $\varphi$ is an isogeny, the image of $T$ is again a maximal torus. In particular, up to conjugation by an element $g_{0}\in G(S)$, we may identify $T$ with $\varphi(T)$. The $g_{0}$-conjugation of $G$ induces an isomorphism $G\rightarrow G$ that is $L$-equivariant, where $L$ acts on the right hand side via $(l,g)\mapsto lgg_{0}^{-1}\varphi(l)^{-1}g_{0}$. In particular, after replacing $M$ resp. $Q$ by their $g_{0}^{-1}$-conjugation, we may assume that $\varphi(T)=T$. Then we have the following embedding $T\hookrightarrow T\times T$, via $t\mapsto(t,\varphi(t))$. The quotient under this embedding is $(T\times T)/T\cong T$. Thus, the naturally induced morphism $\left[G/_{\varphi}T\right]\rightarrow\left[G/T\times T\right]$, where $T\times T$ acts on $G$ via $(t,t^{\prime},g)\mapsto tgt^{\prime-1}$, is a $T$-torsor. This leaves us with the following picture $\left[G/_{\varphi}L\right]\xleftarrow{a}\left[G/_{\varphi}T\right]\xrightarrow{b}\left[G/T\times T\right].$ For the morphism $a$ we can apply Corollary 3.16 and see that it is Tate. The morphism $b$ is by the above a $T$-torsor. Therefore, we can use Corollary 4.3 to see that $b$ is also Tate. If $T$ is split, we can say more. 
In the split case, we have $R\Gamma_{S}(\left[G/_{\varphi}L\right],\mathbb{Q})\simeq R\Gamma_{S}(\left[G/_{\varphi}T\right],\mathbb{Q})^{W_{L}},$ where $W_{L}$ denotes the Weyl group of $T$ in $L$ (cf. Theorem 3.14). Further, we can compute the motive resp. the motivic cohomology of $\left[G/_{\varphi}T\right]$ via the motive resp. the motivic cohomology of $\left[G/T\times T\right]$ using Proposition 4.2 and Proposition 4.5. If $G$ admits a Borel $B$ containing $T$, then by invariance under extensions of unipotent groups, we can identify $\operatorname{DM}(\left[G/T\times T\right])$ with $\operatorname{DM}(\left[T\backslash G/B\right])$ (cf. Lemma 3.12). Therefore, with all of the above we see that the $T$-equivariant motivic cohomology resp. motive of the flag variety $G/B$ yields results about the motivic cohomology resp. the motive of $\left[G/_{\varphi}L\right]$. But the author has shown151515Even though in the cited article, we assume that $S$ is affine, all the arguments go through in $\operatorname{DM}$ without this assumption. in [Yay23] that the motive of $\left[T\backslash G/B\right]$ is computed by $M_{S}(G/B)\otimes M_{S}(\textup{B}T)$. If further $S=\operatorname{Spec}(k)$, we have seen that $M_{S}(\textup{B}T)\cong\bigotimes_{i=1}^{r}M_{S}(\textup{B}\mathbb{G}_{\textup{m}})=\bigotimes_{i=1}^{r}\bigoplus_{j\geq 0}1_{k}\langle j\rangle,$ which is completed Tate (cf. Example 2.8). As the motive of the flag variety $G/B$ is also Tate (cf. Example 2.12), we see that $M_{S}(\left[T\backslash G/B\right])$ is completed Tate. Summarizing all of the above yields the following theorem. ###### Theorem 5.1. We have the following diagram of Tate maps $\left[G/_{\varphi}L\right]\xleftarrow{a}\left[G/_{\varphi}T\right]\xrightarrow{b}\left[G/T\times T\right].$ If $T$ is split, we may drop the connectedness hypothesis on $S$. 
Further, if $S=\operatorname{Spec}(k)$ and $T$ is split, the motives $R\Gamma_{S}(\left[G/_{\varphi}L\right],\mathbb{Q})$ and $M_{S}(\left[G/_{\varphi}L\right])$ are completed Tate motives in $\operatorname{DM}(S)$. Moreover, we can compute the Chow ring of $\left[G/_{\varphi}L\right]$ as $A^{\bullet}(\left[G/_{\varphi}L\right])_{\mathbb{Q}}\cong\left(A^{\bullet}_{T}(G/T)_{\mathbb{Q}}/\hat{T}A^{\bullet}_{T}(G/T)_{\mathbb{Q}}\right)^{W_{L}}.$ ###### Proof. The first assertion is the discussion above (for the non-connected case use Theorem 3.14). The second resp. third assertion follows from the discussion above together with Remark 2.13 resp. Corollary 4.6. ∎ ###### Remark 5.2. The last isomorphism in Theorem 5.1 is also valid in the case where $S$ is not the spectrum of a field, after replacing the Chow groups with the $(2n,n)$ motivic cohomology groups for $n\in\mathbb{Z}$ as in Proposition 4.5. ###### Remark 5.3. If $G$ is a split reductive group with split maximal torus $T$, Brokemper has shown that one can give a more explicit computation of the Chow ring of $\left[G/_{\varphi}L\right]$ using the computations of Brion [Bri97] (cf. [Bro16, Prop. 1.2]). To be more precise, we can write the last isomorphism of Theorem 5.1 as $A^{\bullet}(\left[G/_{\varphi}L\right])_{\mathbb{Q}}\cong S^{W_{L}}/(f-\varphi f\mid f\in S_{+}^{W_{G}}),$ where $S=\operatorname{Sym}_{\mathbb{Q}}(\hat{T})\cong A^{\bullet}_{T}(\ast)_{\mathbb{Q}}$, $S_{+}$ are the elements of positive degree and $W_{G}$ is the Weyl group of $T$ in $G$. A more detailed computation can be found in the proof of [Bro16, Prop. 1.2]. Remark 4.7 enables us to get an analogous result to Remark 5.3 for rational $K_{0}$. ###### Proposition 5.4. In the setting above, let us assume that $S=\operatorname{Spec}(k)$ is the spectrum of a field, the derived group of $G$ is simply connected and $T$ is a split maximal torus. 
Then we have $K_{0}(\left[G/_{\varphi}L\right])_{\mathbb{Q}}\cong R(T)_{\mathbb{Q}}^{W_{L}}/(f-\tilde{\varphi}f\mid f\in R(T)_{\mathbb{Q}}^{W_{G}}).$ ###### Proof. For $K_{0}$, we could not produce an analogue of Corollary 3.15 using Theorem 3.14 and thus have to use a result of Krishna on equivariant $G$-theory (cf. [Kri14, Lem. 9.2]). For completeness we recall the main argument. First, we may replace $Q$ and $M$ by $\prescript{g_{0}}{}{Q}$ and $\prescript{g_{0}}{}{M}$ and assume that $\varphi(T)=T$. In particular, $\tilde{\varphi}$-conjugation of $G$ by $T$ is just $\varphi$-conjugation. Now we embed $T$ into $T\times_{S}T$ by $t\mapsto(t,\varphi(t))$. Let $T\times_{S}T$ act on $G$ by $(t,t^{\prime}).g\coloneqq tgt^{\prime-1}$. This yields a morphism $\left[G/_{\varphi}T\right]\rightarrow\left[G/T\times_{S}T\right]$, which is a $T\cong(T\times_{S}T)/T$-torsor. Thus, by Remark 4.7, we have $\displaystyle K_{0}(\left[G/_{\varphi}T\right])_{\mathbb{Q}}$ $\displaystyle\cong K_{0}(\left[G/T\times_{S}T\right])_{\mathbb{Q}}/\hat{T}K_{0}(\left[G/T\times_{S}T\right])_{\mathbb{Q}}.$ By homotopy invariance, we have $K_{0}(\left[G/T\times_{S}T\right])_{\mathbb{Q}}\cong K_{0}^{T}(G/B)_{\mathbb{Q}}$. Therefore, we are reduced to classical statements about $T$-equivariant $K$-theory of flag varieties (cf. [Uma13]) and get $K_{0}(\left[G/_{\varphi}T\right])_{\mathbb{Q}}\cong R(T)_{\mathbb{Q}}/(f-\tilde{\varphi}f\mid f\in R(T)_{\mathbb{Q}}^{W_{G}}).$ It follows161616Even though Krishna works with $G=\operatorname{GL}_{n}$, his proof of [Kri14, Lem. 9.2] goes through in our case. from [Kri14, Lem. 9.2] that $K_{0}(\left[G/_{\varphi}L\right])_{\mathbb{Q}}\cong K_{0}(\left[G/_{\varphi}T\right])_{\mathbb{Q}}^{W_{L}}.$ Thus, it suffices to show that $IR(T)_{\mathbb{Q}}^{W_{L}}=IR(T)_{\mathbb{Q}}\cap R(T)^{W_{L}}_{\mathbb{Q}}$, where $I=(f-\tilde{\varphi}f\mid f\in R(T)_{\mathbb{Q}}^{W_{G}})$, but this follows from the faithful flatness of $R(T)^{W_{L}}\hookrightarrow R(T)$ (cf. [Ste75, Thm. 
1.2]; see also the proof of [Bro16, Prop. 1.2] resp. [Bro18, Prop. 2.3.2] for a detailed argument in the Chow group case). ∎ We want to give two motivating examples of quotients $\left[G/_{\varphi}L\right]$ as above that appear naturally, and apply Theorem 5.1 to see that their motives are Tate: the classifying stack of finite groups of Lie type and the stack of $G$-zips. Both are examples in characteristic $p>0$. In the last section, we want to use Theorem 5.1 to give an idea of how we want to approach geometric representation theory of finite groups of Lie type $G^{F}$ by relating it motivically to the geometric representation theory of the Langlands dual of $G$ (see below for the notation). ###### Example 5.5. Let us assume that $S=\operatorname{Spec}(\mathbb{F}_{q})$ is the spectrum of a finite field of characteristic $p>0$. We set $\varphi\colon G\rightarrow G$ to be the $q$-Frobenius. This is an isogeny and thus we can apply Theorem 5.1 in this setting. Further, the stack $\left[G/_{\varphi}G\right]$ is isomorphic to $\textup{B}G^{F}$, where $G^{F}$ is the stabilizer group scheme of the neutral element (cf. [Bro16, Lem. 2.1]). It is well known that $G^{F}(\overline{\mathbb{F}}_{q})\cong G(\mathbb{F}_{q})$, where $\overline{\mathbb{F}}_{q}$ denotes an algebraic closure of $\mathbb{F}_{q}$. Thus, we see that the motive of the classifying stack of a finite group of Lie type is Tate. Further, we are able to relate Tate motives of $\left[T\backslash G/B\right]$, where $B$ is a Borel containing $T$, with Tate motives of $\textup{B}G^{F}$ via the diagram in Theorem 5.1. This answers our main question ($\ast$) of the article. One of Brokemper’s applications of his computations is the computation of the Chow ring of $G$-zips. In a similar fashion we will apply the above results and show that the motive of the stack of $G$-zips over a field is completed Tate. ###### Example 5.6. Let $k$ be a field of characteristic $p>0$ and let $S=\operatorname{Spec}(k)$. Let $G,P,Q$ be as above.
Let us denote the unipotent radical of $P$ resp. $Q$ by $R_{u}(P)$ resp. $R_{u}(Q)$. Further, let $\varphi\colon P/R_{u}(P)\rightarrow Q/R_{u}(Q)$ be an isogeny. The datum ${\mathcal{Z}}\coloneqq(G,P,Q,\varphi)$ is called an algebraic zip datum. To every algebraic zip datum as above, we can associate the group $E_{{\mathcal{Z}}}\coloneqq\\{(p,q)\in P\times Q\mid\varphi(\bar{p})=\bar{q}\\}.$ The group $E_{{\mathcal{Z}}}$ acts on $G$ via conjugation $(p,q).g\coloneqq pgq^{-1}$. The quotient stack $G\textup{-Zip}\coloneqq\left[G/E_{{\mathcal{Z}}}\right]$ is called the stack of $G$-zips. There are also alternative constructions using a Tannakian formalism on the stack of $F$-zips (cf. [PWZ15]). In op. cit. there is also an explicit description of the points of $G\textup{-Zip}$. Let $L\subseteq P$ be a Levi component of $P$. Then, as seen in the proof of [Bro18, Thm. 2.4.4], there is a split exact sequence $1\rightarrow R_{u}(P)\times R_{u}(Q)\rightarrow E_{{\mathcal{Z}}}\rightarrow L\rightarrow 1,$ where the splitting is induced by $L\hookrightarrow E_{{\mathcal{Z}}}$, $l\mapsto(l,\varphi(l))$. Therefore, by homotopy invariance, we have $M_{S}(G\textup{-Zip})\simeq M_{S}(\left[G/_{\varphi}L\right])$, which is completed Tate by Theorem 5.1 and the discussion before the theorem. ## 6 Generalizations In this section, we want to give an overview of the integral version of Theorem 3.14 and Theorem 5.1. Let us mention three questions that came up naturally during the work on this article and that we want to address in the future. 1. (1) Under what assumptions can we transport all of these results to motives defined via Spitzweck’s motivic cohomology ring spectrum? 2. (2) Is it enough to invert the residue characteristics of the base for all of our results? 3. (3) Can the results of this article be extended to other cohomology theories? Let us go a bit more into detail. So, let $S=\operatorname{Spec}(k)$ be the spectrum of a field and $X$ be an $S$-scheme locally of finite type.
Further, let $G$ be a split reductive group over $k$ with split maximal torus $T$ and associated Weyl group $W$. Assume $G$ acts on $X$. Question (1) is rather straightforward. For Chow groups one can see that $A^{\bullet}_{G}(X)\cong A^{\bullet}_{T}(X)^{W}$ holds integrally if and only if $G$ is special. But if we assume that $G$ is special, then any $G$-torsor is trivial Zariski-locally. If we define integral motives $\operatorname{DM}_{\mathbb{Z}}$ on prestacks via right Kan extension of Spitzweck motives (cf. [Spi18]), we see that $\operatorname{DM}_{\mathbb{Z}}$ satisfies Nisnevich and in particular Zariski descent. Thus, up to technicalities, we expect that all the arguments after Section 3.3 go through. Crucially, Section 3.2 has to be worked out in this context, as both Ayoub and Cisinski-Déglise use rational coefficients. This is not surprising, as one can see that the particular motivic behavior of torsors under finite groups should yield étale descent. We hope that in the case of special groups there is a workaround. Question (2) can be addressed in a similar fashion: instead of Spitzweck motives, we can use étale motives (cf. [CD16]). As these still satisfy étale descent, again all the arguments after Section 3.3 should go through. We expect that inverting the residue characteristics should be enough to recover the statements of Section 3.2 in this case. But one still needs to prove the necessary results, which we expect to be rather straightforward. Question (3) needs more careful treatment. The results about $T$-torsors of Section 4 can be extended to other oriented cohomology theories, as we have seen. Section 3.3 is more difficult, as it boils down to vanishing results on cohomology theories. The key part is Proposition 3.13. We expect that this proposition still holds for modules over the étale $K$-theory ring spectrum, but we did not check this thoroughly.
If one wants to work with genuine $K$-theory, then this statement cannot be proven via étale descent and thus needs a more careful treatment. For other cohomology theories one can possibly give a precise vanishing assumption to extend the results of this article. For completeness, let us note that the computation of the global sections in Theorem 3.14 will not work for arbitrary cohomology theories. If one takes for example Chow-Witt cohomology and $G=\operatorname{SL}_{2}$, then the Chow-Witt ring of $\textup{B}\mathbb{G}_{\textup{m}}$ is trivial, while the Chow-Witt ring of $\textup{B}G$ is not (cf. [Lev22]). ## Appendix A Existence of maximal tori over Prüfer domains The following is essentially from the author’s notes of a seminar talk by Torsten Wedhorn and we do not claim originality (cf. [Wed23]). I want to thank him for allowing me to use his ideas. For our main result (Theorem 5.1), we need to work with reductive group schemes that admit maximal tori. As we mentioned in Section 6, not every reductive group scheme admits such a maximal torus. In this short appendix, we want to prove that a reductive group scheme over a sufficiently nice base, e.g. a Dedekind domain, with quasi-split generic fibre admits a maximal torus. In fact, to prove this, we only need two assumptions on the base: first, that the base is affine, and second, that the stalks of the base are valuation rings. The following idea was communicated to us by Torsten Wedhorn. Rings with the property that the stalks are valuation rings were studied before, and we will give them a name according to the literature. ###### Definition A.1 ([FS01]). An integral domain $A$ is called a Prüfer domain if, for any prime ${\mathfrak{p}}\subseteq A$, the localization $A_{{\mathfrak{p}}}$ is a valuation ring. ###### Example A.2. Any Dedekind domain is a Prüfer domain; conversely, any noetherian Prüfer domain is in fact a Dedekind domain (cf. [Sta22, 00II]).
So, Prüfer domains can be seen as the non-noetherian analog of Dedekind domains. Another class of examples of Prüfer domains are the so-called Bézout domains, i.e. integral domains in which every finitely generated ideal is principal. In particular, the Picard group of a Bézout domain is trivial. This includes all PIDs and valuation rings, but also rings such as the algebraic integers $\overline{\mathbb{Z}}$ (cf. [DB72, pp. 243-245] for more examples). In the following let us fix a Prüfer domain $A$ and set $S\coloneqq\operatorname{Spec}(A)$. ###### Lemma A.3. Let us denote $K\coloneqq\operatorname{Quot}(A)$. Further, let $X\rightarrow S$ be a proper morphism of finite presentation. Then the map $X(A)\rightarrow X(K)$ is bijective. ###### Proof. Let $\sigma\in X(K)$. By the valuative criterion for properness, we can find for any $s\in S$ a unique lift $\sigma_{s}\in X({\mathcal{O}}_{S,s})$ of $\sigma$. As $X$ is of finite presentation, we can find an open neighborhood $U_{s}$ of $s$ and a lift $\sigma_{U_{s}}\in X(U_{s})$ of $\sigma_{s}$. By uniqueness of the lifts $\sigma_{s}$, the $\sigma_{U_{s}}$ glue to a unique lift $\sigma_{A}\in X(A)$ of $\sigma$. ∎ Applying this lemma to the scheme representing Borels in a reductive $S$-group scheme $G$, we see that if the generic fibre of $G$ is quasi-split, then $G$ is quasi-split. The existence of a maximal torus in $G$ now follows from the fact that the maximal tori contained in a Borel $B$ form an $R_{u}(B)$-torsor. Writing everything out, we get the following proposition. ###### Proposition A.4. Let $G$ be a reductive $S$-group scheme. If the generic fibre of $G$ is quasi-split, then $G$ admits a Borel pair. Further, if $\operatorname{Pic}(A)=0$ and the generic fibre of $G$ is split, then $G$ is split. ###### Proof. Let $K\coloneqq\operatorname{Quot}(A)$ and $\operatorname{Bor}$ be the scheme classifying the Borels of $G$ over $A$. As $G_{K}$ is quasi-split, there exists an element $B_{K}\in\operatorname{Bor}(K)$.
The scheme $\operatorname{Bor}$ is proper and of finite presentation (cf. [SGA3, Exp. XXII Cor. 5.8.3]) and thus we can apply Lemma A.3 to $\operatorname{Bor}$ and see that $B_{K}$ lifts uniquely to a Borel $B$ of $G$. Let $U$ denote the unipotent radical of $B$. The functor of maximal tori in $B$ is representable by a smooth affine $S$-scheme $X$ and is further a $U$-torsor (cf. [SGA3, Exp. XXII Cor. 5.6.13]). But $H^{1}(S,U)=0$, as $S$ is affine (cf. [SGA3, Exp. XXII Cor. 5.9.6]), and therefore $X\rightarrow S$ admits a section. If further $T_{K}$ is split and $\operatorname{Pic}(A)=0$, then we claim that $G$ is split. For this let ${\bar{\eta}}$ be the geometric point corresponding to $K$. As $S$ is integral and normal (cf. [Sta22, 00IC]), $T$ splits after passage to a finite Galois cover of $S$ (cf. [Con15, Cor. B.3.6]). This induces an action of $\pi_{1}^{\operatorname{\acute{e}t}}(A,{\bar{\eta}})$ on the character group $X^{\bullet}(T)$, which is trivial if and only if $T$ is split. As $T_{K}$ is split, the action of $\pi_{1}^{\operatorname{\acute{e}t}}(K,{\bar{\eta}})$ on $X^{\bullet}(T_{K})$ is trivial. Since $\pi_{1}^{\operatorname{\acute{e}t}}(A,{\bar{\eta}})$ is a quotient of $\pi_{1}^{\operatorname{\acute{e}t}}(K,{\bar{\eta}})$ (cf. [Sta22, 0BQM]), we deduce that $\pi_{1}^{\operatorname{\acute{e}t}}(A,{\bar{\eta}})$ acts trivially on $X^{\bullet}(T)$. Therefore, $T$ is split and since $\operatorname{Pic}(A)=0$, we see that $G$ is split (cf. [SGA3, Exp. XXII Prop. 2.2]). ∎ ## References * [Ayo07] J. Ayoub. Les six opérations de Grothendieck et le formalisme des cycles évanescents dans le monde motivique. (I, II). Astérisque, (314, 315), 2007. * [Bac19] T. Bachmann. Affine Grassmannians in $\mathbb{A}^{1}$-homotopy theory. Selecta Math. (N.S.), 25(2):Paper No. 25, 14, 2019. doi:10.1007/s00029-019-0471-1. * [BN19] U. Bunke and T. Nikolaus. Twisted differential cohomology. Algebr. Geom. Topol., 19(4):1631–1710, 2019. doi:10.2140/agt.2019.19.1631. 
* [Bri97] M. Brion. Equivariant Chow groups for torus actions. Transform. Groups, 2(3):225–267, 1997. doi:10.1007/BF01234659. * [Bro16] D. Brokemper. On the Chow ring of the classifying space of some Chevalley groups, 2016. doi:10.48550/ARXIV.1611.07735. * [Bro18] D. Brokemper. On the Chow ring of the stack of truncated Barsotti-Tate groups. Pacific J. Math., 296(2):271–303, 2018. doi:10.2140/pjm.2018.296.271. * [CD16] D.-C. Cisinski and F. Déglise. Étale motives. Compositio Mathematica, 152(3):556–666, 2016. doi:10.1112/S0010437X15007459. * [CD19] D.-C. Cisinski and F. Déglise. Triangulated categories of mixed motives. Springer Monographs in Mathematics. Springer, Cham, [2019] ©2019. doi:10.1007/978-3-030-33242-6. * [Con14] B. Conrad. Reductive group schemes. In Autour des schémas en groupes. Vol. I, volume 42/43 of Panor. Synthèses, pages 93–444. Soc. Math. France, Paris, 2014. * [Con15] B. Conrad. Non-split reductive groups over ${\bf Z}$. In Autour des schémas en groupes. Vol. II, volume 46 of Panor. Synthèses, pages 193–253. Soc. Math. France, Paris, 2015. doi:10.1017/CBO9781316092439. * [DB72] B. J. Dulin and H. S. Butts. Composition of binary quadratic forms over integral domains. Acta Arith., 20:223–251, 1972. doi:10.4064/aa-20-3-223-251. * [Dem73] M. Demazure. Invariants symétriques entiers des groupes de Weyl et torsion. Invent. Math., 21:287–301, 1973. doi:10.1007/BF01418790. * [DL76] P. Deligne and G. Lusztig. Representations of reductive groups over finite fields. Ann. of Math. (2), 103(1):103–161, 1976. doi:10.2307/1971021. * [EG98] D. Edidin and W. Graham. Equivariant intersection theory. Invent. Math., 131(3):595–634, 1998. doi:10.1007/s002220050214. * [EKMM97] A. D. Elmendorf, I. Kriz, M. A. Mandell, and J. P. May. Rings, modules, and algebras in stable homotopy theory, volume 47 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1997. doi:10.1090/surv/047. With an appendix by M. Cole. * [FS01] L. Fuchs and L.
Salce. Modules over non-Noetherian domains, volume 84 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2001. doi:10.1090/surv/084. * [GW10] U. Görtz and T. Wedhorn. Algebraic geometry I. Advanced Lectures in Mathematics. Vieweg + Teubner, Wiesbaden, 2010. doi:10.1007/978-3-8348-9722-0. Schemes with examples and exercises. * [HK19] M. Hoyois and A. Krishna. Vanishing theorems for the negative $K$-theory of stacks. Ann. K-Theory, 4(3):439–472, 2019. doi:10.2140/akt.2019.4.439. * [HPL21] V. Hoskins and S. Pepin Lehalleur. On the Voevodsky motive of the moduli stack of vector bundles on a curve. Q. J. Math., 72(1-2):71–114, 2021. doi:10.1093/qmathj/haaa023. * [Jos01] R. Joshua. Algebraic $K$-theory and higher Chow groups of linear varieties. Math. Proc. Cambridge Philos. Soc., 130(1):37–60, 2001. doi:10.1017/S030500410000476X. * [Kha19] A. A. Khan. Virtual fundamental classes of derived stacks I, 2019. arXiv:1909.01332. * [KR21] A. A. Khan and C. Ravi. Generalized cohomology theories for algebraic stacks, 2021. doi:10.48550/ARXIV.2106.15001. * [Kri13] A. Krishna. Higher Chow groups of varieties with group action. Algebra Number Theory, 7(2):449–507, 2013. doi:10.2140/ant.2013.7.449. * [Kri14] A. Krishna. Riemann-Roch for equivariant $K$-theory. Adv. Math., 262:126–192, 2014. doi:10.1016/j.aim.2014.05.010. * [Lev93] M. Levine. Tate motives and the vanishing conjectures for algebraic $K$-theory. In Algebraic $K$-theory and algebraic topology (Lake Louise, AB, 1991), volume 407 of NATO Adv. Sci. Inst. Ser. C: Math. Phys. Sci., pages 167–188. Kluwer Acad. Publ., Dordrecht, 1993. doi:10.1007/978-94-017-0695-7_7. * [Lev01] M. Levine. Techniques of localization in the theory of algebraic cycles. J. Algebraic Geom., 10(2):299–363, 2001. * [Lev22] M. Levine. Atiyah-Bott localization in equivariant Witt cohomology, 2022. arXiv:2203.13882. * [LZ12] Y. Liu and W. Zheng.
Enhanced six operations and base change theorem for higher Artin stacks, 2012. doi:10.48550/ARXIV.1211.5948. arXiv:1211.5948. * [MV99] F. Morel and V. Voevodsky. ${\bf A}^{1}$-homotopy theory of schemes. Inst. Hautes Études Sci. Publ. Math., (90):45–143 (2001), 1999. URL http://www.numdam.org/item?id=PMIHES_1999__90__45_0. * [Oss15] B. Osserman. Relative dimension of morphisms and dimension for algebraic stacks. Journal of Algebra, 437:52–78, 2015. doi:10.1016/j.jalgebra.2015.04.022. * [PWZ15] R. Pink, T. Wedhorn, and P. Ziegler. $F$-zips with additional structure. Pacific J. Math., 274(1):183–236, 2015. doi:10.2140/pjm.2015.274.183. * [RS20] T. Richarz and J. Scholbach. The intersection motive of the moduli stack of shtukas. Forum Math. Sigma, 8:Paper No. e8, 99, 2020. doi:10.1017/fms.2019.32. * [SGA3] M. Artin, J.-E. Bertin, M. Demazure, A. Grothendieck, P. Gabriel, M. Raynaud, and J.-P. Serre. Schémas en groupes. Séminaire de Géométrie Algébrique de l’Institut des Hautes Études Scientifiques. Institut des Hautes Études Scientifiques, Paris, 1963/1966. * [Sou85] C. Soulé. Opérations en $K$-théorie algébrique. Canadian Journal of Mathematics, 37(3):488–550, 1985. doi:10.4153/CJM-1985-029-x. * [Spi18] M. Spitzweck. A commutative $\mathbb{P}^{1}$-spectrum representing motivic cohomology over Dedekind domains. Mém. Soc. Math. Fr. (N.S.), (157):110, 2018. doi:10.24033/msmf.465. * [Sta22] The Stacks Project Authors. Stacks Project. https://stacks.math.columbia.edu, 2022. * [Ste75] R. Steinberg. On a theorem of Pittie. Topology, 14:173–177, 1975. doi:10.1016/0040-9383(75)90025-7. * [SVW18] W. Soergel, R. Virk, and M. Wendt. Equivariant motives and geometric representation theory (with an appendix by F. Hörmann and M. Wendt), 2018. arXiv:1809.05480. * [SW18] W. Soergel and M. Wendt. Perverse motives and graded derived category ${\mathcal{O}}$. J. Inst. Math. Jussieu, 17(2):347–395, 2018. doi:10.1017/S1474748016000013. * [Tot14] B. Totaro.
Group cohomology and algebraic cycles, volume 204 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 2014. doi:10.1017/CBO9781139059480. * [Tot16] B. Totaro. The motive of a classifying space. Geom. Topol., 20(4):2079–2133, 2016. doi:10.2140/gt.2016.20.2079. * [Uma13] V. Uma. Equivariant $K$-theory of flag varieties revisited and related results. Colloq. Math., 132(2):151–175, 2013. doi:10.4064/cm132-2-1. * [Wed23] T. Wedhorn. Seminar talk within the CRC326-GAUS on “The moduli of Langlands parameters”. https://crc326gaus.de/wp-content/uploads/2023/01/Program-Moduli-of-Langlands-parameters.pdf, 2023. * [Wil09] J. Wildeshaus. $f$-catégories, tours et motifs de Tate. C. R. Math. Acad. Sci. Paris, 347(23-24):1337–1342, 2009. doi:10.1016/j.crma.2009.10.016. * [Yay23] C. Yaylali. $T$-equivariant motives of flag varieties, 2023. doi:10.48550/ARXIV.2304.02288.
# A Robust Unscented Transformation for Uncertain Moments Hugo T. M. Kussaba<EMAIL_ADDRESS>João Y. Ishihara, Leonardo R. A. X. Menezes, Department of Electrical Engineering, University of Brasília – UnB, 70910-900, Brasília, DF, Brazil ###### Abstract This paper proposes a robust version of the unscented transform (UT) for one-dimensional random variables. It is assumed that the moments are not exactly known, but are known to lie in intervals. In this scenario, the moment matching equations are reformulated as a system of polynomial equations and inequalities, and it is proposed to use the Chebychev center of the solution set as a robust UT. This method yields a parametrized polynomial optimization problem, which, in spite of being NP-hard, can be relaxed by some algorithms that are proposed in this paper. ###### keywords: Unscented Transform, Polynomial Optimization, Lasserre’s hierarchy, Statistics, Filtering. Footnote: ©2019. This manuscript version is made available under the CC BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. This article has been accepted for publication in a future issue of the Journal of Franklin Institute, but has not been fully edited. Content may change prior to final publication. The final version of record is available at https://doi.org/10.1016/j.jfranklin.2019.02.018. ## 1 Introduction In numerous problems of statistics and stochastic filtering, one is often interested in calculating the posterior expectation of a continuous random variable $X$ that undergoes a nonlinear transform $f$, viz.: $\textrm{E}\left\\{f(X)\right\\}=\int_{\mathbb{R}}f(\xi)p_{X}(\xi)d\xi.$ (1) It is not always possible to have a closed-form expression for this integral in terms of elementary functions: if the integral does not satisfy the hypotheses of Liouville’s theorem (see, for instance, Section 12.4 of [1]), then its antiderivative cannot be expressed in terms of elementary functions.
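As a concrete illustration (ours, not from the paper): for $X\sim N(0,1)$ and $f(x)=e^{\sin x}$, the integral (1), to our knowledge, has no elementary closed form, yet it is easily approximated by direct numerical quadrature. The helper names below are our own.

```python
import math

def normal_pdf(x, mu=0.0, var=1.0):
    # p_X for a normal random variable with mean mu and variance var
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def expect(f, mu=0.0, var=1.0, lo=-8.0, hi=8.0, n=20000):
    # E{f(X)} = integral of f(xi) p_X(xi) dxi, via the trapezoidal rule;
    # the mass beyond +-8 standard deviations is negligible
    h = (hi - lo) / n
    s = 0.5 * (f(lo) * normal_pdf(lo, mu, var) + f(hi) * normal_pdf(hi, mu, var))
    for i in range(1, n):
        x = lo + i * h
        s += f(x) * normal_pdf(x, mu, var)
    return s * h

print(expect(lambda x: math.exp(math.sin(x))))
```

A sanity check is that `expect(lambda x: x * x)` returns the variance, here 1, to within the quadrature error.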
Thus, instead of using analytical methods to calculate (1), in many situations numerical methods must be employed. A common way to numerically calculate (1) is by using the Monte Carlo integration method [2], which is a stochastic sampling technique: by taking a sufficiently large number of samples of the random variable $X$, one can approximate the probability density function $p_{X}$ and obtain an estimate for (1). However, this method can be very demanding computationally, since it frequently employs several thousands of simulations to obtain the statistics of the final result. Another way to calculate (1) with less computational burden than the Monte Carlo integration method is the technique of the Unscented Transform (UT). Originally proposed for the problem of extending the Kalman filter to nonlinear dynamical systems [3], this method has also been applied in several problems of engineering, such as in the analysis of the sensitivity of antennas [4] and in circuit optimization [5]. Different from the Monte Carlo integration method, the UT is a deterministic sampling technique: in place of choosing random points to approximate $p_{X}$, points known as sigma points are deterministically selected to capture the statistics of $p_{X}$. This is accomplished by generating a discrete distribution $p^{\prime}_{X}$ having the same first and second (and possibly higher) moments as $p_{X}$. The mean, covariance and higher moments of the transformed ensemble of sigma points can then be computed as the estimate of the nonlinear transformation $f$ of the original distribution [3], [6]. Thus, at least the first moments of $p_{X}$ must exist and be exactly known. In several circumstances, however, this is not valid: for instance, it may be the case that the distribution does not even have the first moment (one example is the Cauchy distribution [7]).
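The contrast between the two sampling philosophies can be sketched in a few lines (our illustration; the three weighted points used here are the classic sigma-point set for a standard normal, which reappears as (8) in Section 3):

```python
import math, random

random.seed(0)
f = lambda x: x ** 4  # E{f(X)} = 3 exactly for X ~ N(0, 1)

# Monte Carlo: many random samples of X
n = 100000
mc = sum(f(random.gauss(0.0, 1.0)) for _ in range(n)) / n

# UT: three deterministic sigma points whose discrete distribution
# matches the first moments of N(0, 1)
sigma = [(-math.sqrt(3.0), 1.0 / 6.0), (0.0, 2.0 / 3.0), (math.sqrt(3.0), 1.0 / 6.0)]
ut = sum(w * f(z) for z, w in sigma)

print(mc, ut)  # mc fluctuates around 3; ut equals 3 up to rounding
```

Three points suffice here because a 3-point Gaussian quadrature is exact for polynomials of degree up to 5, a point made precise later in Remark 1 and the error bound of Section 2.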
In the case that the moments can be assumed to exist, it is usual that they are not precisely known, but it is still possible to have upper and lower bounds for the exact values of the moments from statistical experiments or by the use, for instance, of Chebychev’s inequality [2]. In this paper, a technique is designed for generating sigma points when the exact values of the moments are not known, but upper and lower bounds for the unknown moments are known. In this case, the moment matching equations of the UT are no longer just a system of polynomial equations but a system of polynomial equations with polynomial inequalities. Furthermore, since this system can have more than one solution, it is possible to choose a set of sigma points which minimizes a given cost function by formulating the problem as a polynomial optimization problem (POP). Although the solution to this problem is in general computationally infeasible [8], by using Lasserre’s hierarchy of semidefinite programming relaxations one can approximate the solution of the original POP by the solution of a computationally feasible convex optimization problem [9, 10]. The main contribution of this paper is the introduction of the concept of UT robustness in the sense of exploiting the upper and lower bounds for the moments. A robust UT is proposed in [11], but there robustness has a different meaning and is achieved by matching precisely known higher-order moments. It is important to note that Lasserre’s hierarchy of relaxations was applied in earlier investigations of the moment matching problem [12], but its use was limited to polynomial equations, while here polynomial inequalities are also taken into account. This paper is organized as follows. Some preliminaries for the robust UT are presented in Section 2. The robust UT itself is detailed in Section 3. Details about the computation of robust sigma points are presented in Section 4. The computation of the UT is presented in Section 5.
Finally, the conclusion is presented in Section 7. ## 2 Unscented Transform The rationale behind the unscented transformation is that it is easier to calculate a moment for a discrete distribution than for a continuous one. In fact, to compute the posterior expectation of a random scalar variable $X$ with distribution $p_{X}$ under a function $f$, one must calculate the integral (1) while, on the other hand, if $Z$ is a discrete random variable with $m+1$ atoms $z_{i}$ and distribution $p_{Z}$, one only needs to calculate the sum $E\left\\{f(Z)\right\\}=\sum_{i=1}^{m+1}f(z_{i})p_{Z}(z_{i}).$ (2) Hence, if it is possible to choose the atoms and their weights of an adequate $p_{Z}$ to approximate $p_{X}$, then the value $E\\{f(Z)\\}$ would be a good approximation to $E\\{f(X)\\}$, but with (2) being easier to compute than (1). Thus, the principle behind the UT is to approximate the continuous distribution $p_{X}$ by the discrete distribution $p_{Z}$ by equating the first $m$ moments of these distributions. In other words, the following equations must be satisfied: $\displaystyle E\\{X^{k}\\}$ $\displaystyle=E\\{Z^{k}\\}$ (3) $\displaystyle=\sum_{i=1}^{m+1}z_{i}^{k}p_{Z}(z_{i}),\ k=1,\ldots,m,$ (4) where access to $E\\{X^{k}\\}$ for $k=1,\ldots,m$ is assumed. Given $E\\{X^{k}\\}$ it is always possible to find $z_{i}$ and $p_{Z}(z_{i})$, $i=1,\ldots,m+1$ (which will be called sigma points and their weights, respectively) satisfying (4). In fact, as will be stated next, there is at least one solution for the equations $E\left\\{g_{k}(Z)\right\\}=E\left\\{g_{k}(X)\right\\},\ k=1,\ldots,m,$ (5) where $g_{k}:\mathbb{R}\rightarrow\mathbb{R}$ are continuous functions with $g_{k}\not\equiv g_{j}$ for $k\neq j$, of which (4) is the particular case $g_{k}(x)=x^{k}$. That (5) has at least one solution is stated next in Theorem 1. ###### Theorem 1. Consider that the $m$ moments $E\\{g_{k}(X)\\}$, $k=1,\ldots,m$, are given.
The system (5) in terms of the variables $z_{i}$ and $p_{Z}(z_{i})$ has at least one solution with at most $m+1$ sigma points. (Footnote: The proof of the theorem is known in the mathematical literature in the context of Caratheodory’s theorem. Since it is less known in the UT literature, the proof is presented here for easy reference. See, e.g., [13].) ###### Proof. Since the point $P=\left(E\left\\{g_{1}(X)\right\\},\ldots,E\left\\{g_{m}(X)\right\\}\right)$ belongs to the convex hull of the set $G=\left\\{\left(g_{1}(x),\ldots,g_{m}(x)\right)\mid x\in\mathbb{R}\right\\}$, Caratheodory’s theorem [14, Theorem 1.3.6] gives that $P$ can be written as a convex combination of at most $m+1$ points in $G$. Thus, there exist $\theta_{i}\geq 0$ and $z_{i}\in\mathbb{R}$ such that $E\\{g_{k}(X)\\}=\sum_{i=1}^{m+1}\theta_{i}g_{k}(z_{i}),\ k=1,\ldots,m$ (6) and $\sum_{i=1}^{m+1}\theta_{i}=1.$ (7) Taking $Z$ to be the discrete random variable with probability distribution given by $p_{Z}(k)=\begin{cases}\theta_{i},&\text{if }k=z_{i},\\\ 0,&\text{otherwise,}\end{cases}$ one has that equations (6) and (7) are exactly the equations given by (5). ∎ It is important to note that Theorem 1 only states the existence of sigma points satisfying (5), but it does not give any conclusion about uniqueness. In fact, many solutions are possible, and the choice of an adequate set of sigma points has been investigated thoroughly in the literature [6, 15]. ###### Remark 1. Specifically for the first moments of $X$, the focus of this paper, i.e. the case that $g_{k}(x)=x^{k}$, $k=0,\ldots,m+1$, one can also see (4) as a Gaussian quadrature integration scheme with $p_{X}$ being the weighting function and $z_{i}$ and $\omega_{i}:=p_{Z}(z_{i})$, $i=1,\ldots,m+1$, being respectively the nodes and their weights in the quadrature formula.
Thus, depending on the probability density function $p_{X}$, the sigma points can be readily calculated as the roots of an orthogonal polynomial [16, Section 4.6]. For instance, if $p_{X}$ is a normal distribution, then the sigma points are the roots of a Hermite polynomial. One can note that if the only intention were to find some discrete random variable such that $E\\{f(X)\\}=E\\{f(Z)\\}$, it would always be possible to find a variable $Z$ with two sigma points. In fact, in Theorem 1, consider $m=1$ and $f=g_{1}$. However, this would require knowledge of the function $f$. In the UT reasoning, a greater number of sigma points is sought so that the approximation $E\\{f(X)\\}\approx E\\{f(Z)\\}$ is valid for a greater number of functions $f$. In [17] it is shown that the approximation is good for any function $f$ which can be well approximated by its $m$-th order Taylor expansion. On top of that, the larger $m$ is, the more precise the approximation for $E\\{f(X)\\}$: an estimate for the error $\int_{a}^{b}p_{X}(x)f(x)\,dx-\sum_{i=1}^{m}\omega_{i}f(z_{i})$ is given by $\frac{f^{(2m)}(\xi)}{(2m)!}\int_{a}^{b}p_{X}(x)h_{m}^{2}(x)\,dx,$ where $\xi\in\left(a,b\right)\subset\mathbb{R}$ and $h_{m}$ is the monic orthogonal polynomial of degree $m$ associated to $p_{X}$ [18, Theorem 3.6.24]. As a matter of fact, the Gauss-quadrature value of $E\\{f(X)\\}$ is exact for all $f(x)$ that are polynomials of degree less than or equal to $2m-1$ [18, pp. 172-175]. Since Theorem 1 takes into account generic functions $g_{k}$, one can analyze very general moment settings (for instance, the case of fractional moments is worked out in [19]). Summing up, one can note that the basic assumption of the UT theory until now is that the moments are precisely known. However, this assumption can be too strong in practical situations where the moments are estimated from experiments and thus their values are only known to lie in some intervals.
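Remark 1's quadrature view can be checked numerically. For a normal $p_{X}$ the three-point sigma set comes from the roots of the probabilists' Hermite polynomial $He_{3}(x)=x^{3}-3x$, rescaled by $\sqrt{V}$ and shifted by $\mu$; the sketch below (ours) verifies that the resulting discrete distribution reproduces the central moments of $N(\mu,V)$ up to order four, namely $0$, $V$, $0$ and $3V^{2}$.

```python
import math

def sigma_points(mu, V):
    # roots of He_3(x) = x^3 - 3x are 0 and +-sqrt(3); rescale and shift
    s = math.sqrt(3.0 * V)
    return [(mu - s, 1.0 / 6.0), (mu, 2.0 / 3.0), (mu + s, 1.0 / 6.0)]

mu, V = 1.5, 0.4
pts = sigma_points(mu, V)

# central moments of the discrete distribution, orders 1..4
central = [sum(w * (z - mu) ** k for z, w in pts) for k in range(1, 5)]
print(central)  # approximately [0, V, 0, 3*V**2]
```

The exact match up to order four is the $2m-1=5$ exactness of three-point Gaussian quadrature at work.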
In this work, we deal with the scenario of unknown moments and propose to calculate a robust set of sigma points in the sense of minimizing the worst possible error between this robust choice of sigma points and the sigma points computed by using the real values of the moments. ## 3 Robust UT To motivate the proposed robust UT, consider a normally distributed random variable $X$ with mean $\mu$ and variance $V$. In the case where $\mu$ and $V$ are given and $m=2$, it is known that a set of sigma points is given by [6] $\begin{array}[]{ccc}z_{1}&=&\mu-\sqrt{3V},\\\ z_{2}&=&\mu,\\\ z_{3}&=&\mu+\sqrt{3V},\end{array}$ (8) with weights given respectively by $\omega_{1}=\frac{1}{6}$, $\omega_{2}=\frac{2}{3}$ and $\omega_{3}=\frac{1}{6}$. Suppose now that the value of $V$ is not exactly known, but an upper bound $\overline{V}$ and a lower bound $\underline{V}$ for $V$ are known, i.e. $V\in\left[\underline{V},\overline{V}\right]$. A naive approach for choosing the sigma points would be to use the mean value between $\overline{V}$ and $\underline{V}$ for $V$ in (8). In this case, the sigma points $z_{1}$ and $z_{3}$ would be given by $\displaystyle z_{1}=$ $\displaystyle z_{1}^{M}\coloneqq\mu-\sqrt{3(\underline{V}+\overline{V})/2},$ $\displaystyle z_{3}=$ $\displaystyle z_{3}^{M}\coloneqq\mu+\sqrt{3(\underline{V}+\overline{V})/2},$ with $z_{2}$ and the weights $\omega_{1}$, $\omega_{2}$ and $\omega_{3}$ unchanged. As Figure 1 illustrates, this can be a pessimistic choice for the sigma points: if, for example, the real value of $V$ is in fact nearer to $\overline{V}$, a better choice of sigma points would be nearer to the point $\left(z_{1},z_{3}\right)=\left(\mu-\sqrt{3\overline{V}},\mu+\sqrt{3\overline{V}}\right)$.
In fact, as can be seen in Figure 1, a preferable choice for the sigma points would be at the center of the region: $\displaystyle z_{1}=$ $\displaystyle z_{1}^{C}\coloneqq\mu-(\sqrt{3}/2)(\sqrt{\underline{V}}+\sqrt{\overline{V}}),$ $\displaystyle z_{3}=$ $\displaystyle z_{3}^{C}\coloneqq\mu+(\sqrt{3}/2)(\sqrt{\underline{V}}+\sqrt{\overline{V}}).$ This choice is precisely the Chebychev center of the set of possible sigma points given that $V\in\left[\underline{V},\overline{V}\right]$. (Footnote: There are two non-equivalent definitions of the Chebychev center of a bounded set with non-empty interior: the first definition is the center of the minimal-radius ball enclosing this set, and the second is the center of the largest inscribed ball in this set [20]. In this paper, only the first definition will be used.) It has the property of minimizing the worst possible error between the chosen sigma point set and the sigma point set corresponding to the true value of $V$. Figure 1: Possible choices of sigma points. Consider now a general scenario where some of the moments are not known, but upper and lower bounds for them are. In this case, the set of possible sigma points and their weights would be in the semialgebraic set $S\subset\mathbb{R}^{2m+2}$ of elements $x=(z_{1},\ldots,z_{m+1},\omega_{1},\ldots,\omega_{m+1})$, defined as the solution set of the system $S:\qquad\begin{array}[]{cccc}\omega_{j}&\geq&0,&j\in\\{1,\ldots,m\\},\\\ \sum_{i=1}^{m+1}\omega_{i}&=&1,\\\ \sum_{i=1}^{m+1}z_{i}^{k_{1}}\omega_{i}&=&E\\{X^{k_{1}}\\},&k_{1}\in\mathcal{K}_{1},\\\ \sum_{i=1}^{m+1}z_{i}^{k_{2}}\omega_{i}&\leq&u_{k_{2}},&k_{2}\in\mathcal{K}_{2},\\\ \sum_{i=1}^{m+1}z_{i}^{k_{2}}\omega_{i}&\geq&\ell_{k_{2}},&k_{2}\in\mathcal{K}_{2},\end{array}$ (9) where $\mathcal{K}_{1}$ (resp. $\mathcal{K}_{2}$) is the set of indexes $k$ for which the $k$-th moment is known (resp.
unknown), $\mathcal{K}_{1}\cup\mathcal{K}_{2}=\{1,\ldots,m\}$, and $u_{k_{2}}$ and $\ell_{k_{2}}$ are respectively known upper and lower bounds for the $k_{2}$-moment. Although Theorem 1 guarantees a solution for (9), this solution may not be unique. On that account, there is more than one set of sigma points and corresponding weights, and it is natural to ask which is a good choice of sigma points and weights. Based on the previous example, a so-called robust choice would be the Chebychev center of the semialgebraic set defined by (9), since this choice minimizes the worst possible error between the chosen sigma point set and the sigma point set corresponding to the possible true values of the moments of $X$. Different from the previous example, however, in this generic scenario an analytic formula expressing the sigma points as a function of the moments of the random variable $X$ may not be available, and the following optimization problem must be solved to find the Chebychev center: $(z_{1}^{C},\ldots,z_{m+1}^{C},\omega_{1}^{C},\ldots,\omega_{m+1}^{C})=\underset{\hat{x}\in\mathbb{R}^{2m+2}}{\arg\min}\ \underset{x\in S}{\max}\|x-\hat{x}\|^{2},$ (10) where $S$ is the solution set of (9) and $x=(z_{1},\ldots,z_{m+1},\omega_{1},\ldots,\omega_{m+1})$. It is important to note that the optimization problem in (10) is not always guaranteed to have a solution, since the set $S$ may not be bounded (if, for example, $\omega_{1}$ is zero, then $z_{1}$ can take any value). Nevertheless, it still makes sense to use the Chebychev center of a large bounded subset of $S$ to try to choose a set of sigma points minimizing the worst possible estimation error.
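Returning to the scalar Gaussian example of Figure 1, the naive (midpoint) and Chebychev-center choices of the outer sigma points can be compared numerically. The sketch below is a minimal illustration assuming only the closed-form expressions for $z_{1}^{M}$, $z_{3}^{M}$, $z_{1}^{C}$ and $z_{3}^{C}$ above; the function names are ours, not the paper's:

```python
import math

def naive_sigma_points(mu, v_lo, v_hi):
    # Midpoint rule: plug V = (v_lo + v_hi)/2 into the exact formula (8);
    # the weights (1/6, 2/3, 1/6) are unchanged.
    s = math.sqrt(3.0 * 0.5 * (v_lo + v_hi))
    return (mu - s, mu, mu + s)

def chebychev_sigma_points(mu, v_lo, v_hi):
    # Chebychev center of the interval of possible outer points:
    # as V sweeps [v_lo, v_hi], z1 sweeps [mu - sqrt(3*v_hi), mu - sqrt(3*v_lo)],
    # whose center is mu - (sqrt(3)/2)(sqrt(v_lo) + sqrt(v_hi)); symmetrically for z3.
    half = 0.5 * math.sqrt(3.0) * (math.sqrt(v_lo) + math.sqrt(v_hi))
    return (mu - half, mu, mu + half)

def worst_error(z1, mu, v_lo, v_hi):
    # Worst-case error of a chosen z1 against the true z1 = mu - sqrt(3V);
    # the error is monotone in sqrt(V), so checking the endpoints suffices.
    return max(abs(z1 - (mu - math.sqrt(3.0 * v))) for v in (v_lo, v_hi))
```

For $\mu=0$ and $V\in[1,4]$, the Chebychev choice attains a strictly smaller worst-case error than the naive midpoint choice.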
In fact, for any (bounded or unbounded) set $S$, one can take an $\varepsilon>0$ and consider the Chebychev center of the semi-algebraic set $S_{\varepsilon}\subseteq S$ defined as the set of points $x=(z_{1},\ldots,z_{m+1},\omega_{1},\ldots,\omega_{m+1})$ satisfying $S_{\varepsilon}:\qquad\begin{array}{cccc}\omega_{j}&\geq&\varepsilon,&j\in\{1,\ldots,m\},\\ \sum_{i=1}^{m+1}\omega_{i}&=&1,\\ \sum_{i=1}^{m+1}z_{i}^{k_{1}}\omega_{i}&=&E\{X^{k_{1}}\},&k_{1}\in\mathcal{K}_{1},\\ \sum_{i=1}^{m+1}z_{i}^{k_{2}}\omega_{i}&\leq&u_{k_{2}},&k_{2}\in\mathcal{K}_{2},\\ \sum_{i=1}^{m+1}z_{i}^{k_{2}}\omega_{i}&\geq&\ell_{k_{2}},&k_{2}\in\mathcal{K}_{2}.\end{array}$ (11) As $\varepsilon$ decreases, $S_{\varepsilon}$ covers a larger part of $S$. Figure 2 illustrates how the set $S_{\varepsilon}$ approximates the set $S$ for a sufficiently small $\varepsilon>0$. While the Chebychev center of $S$ may not exist, the next theorem assures the existence of the Chebychev center of $S_{\varepsilon}$ for any $\varepsilon>0$. ###### Theorem 2. If $m\geq 2$ and $\varepsilon>0$, then the optimization problem $(z_{1}^{C},\ldots,z_{m+1}^{C},\omega_{1}^{C},\ldots,\omega_{m+1}^{C})=\underset{\hat{x}\in\mathbb{R}^{2m+2}}{\arg\min}\ \underset{x\in S_{\varepsilon}}{\max}\|x-\hat{x}\|^{2},$ (12) has a solution. ###### Proof. It suffices to prove that the set $S_{\varepsilon}$ is bounded. The variables $\omega_{i}$, $i=1,\ldots,m+1$, are all inside the $(m+1)$-dimensional simplex and thus bounded. Since $\sum_{i=1}^{m+1}z_{i}^{2}\omega_{i}\leq u_{2}$ and $\omega_{i}\geq\varepsilon>0$, it is impossible for any variable $z_{i}$ to grow without bound. ∎ ###### Remark 2. It is interesting to note that, alternatively, instead of the compact (closed and bounded) set $S_{\varepsilon}$, one could consider the set $\hat{S}$, defined as the set $S$ with the first inequalities replaced by strict ones (that is, $\omega_{j}\geq 0$ replaced by $\omega_{j}>0$, $j\in\{1,\ldots,m\}$).
This set is bounded, just as $S_{\varepsilon}$ is. However, strict inequalities are in general not well handled by numeric solvers. Figure 2: The regions $S\setminus S_{\varepsilon}$ (red) and $S_{\varepsilon}$ (blue) are plotted from various viewpoints to illustrate the approximation of the semi-algebraic set $S$ by $S_{\varepsilon}$ with $\varepsilon=0.01$, $m=1$, $\mathcal{K}_{1}=\emptyset$, $\mathcal{K}_{2}=\{1,2\}$, $\ell_{1}=\ell_{2}=0$ and $u_{1}=u_{2}=1$. Only $\omega_{1}$, $z_{1}$ and $z_{2}$ are exhibited in the graphic, since $\omega_{2}=1-\omega_{1}$. The inner maximization problems of (10) and (12) are polynomial optimization problems (POP). Such problems are ubiquitous and are encountered in several fields [21], such as finance [22, 23, 24], robust and nonlinear control [25, 26], signal processing [27, 28], quantum physics [29] and materials science [30]. It is known that this problem is NP-Hard in general [8], but despite this, it is possible to approximate a POP by computationally feasible convex optimization problems using Lasserre's hierarchy of semidefinite programming relaxations [9, 10]. In fact, by using the GloptiPoly 3 package [31] (available at http://homepages.laas.fr/henrion/software/gloptipoly3/) or the SparsePOP package [32] (available at http://sourceforge.net/projects/sparsepop/) for MATLAB, or the ncpol2sdpa library [33] (available at https://gitlab.com/peterwittek/ncpol2sdpa) for Python, these relaxations can be easily implemented; they enable the user to transparently construct an increasing sequence of convex LMI relaxations whose optima are guaranteed to converge monotonically to the global optimum of the original non-convex global optimization problem [26]. Moreover, it is possible to numerically certify the global optimality of the problem.
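Before turning to solution methods, note that the constraint system (11) is cheap to evaluate pointwise. The sketch below checks whether a candidate pair of sigma-point and weight vectors lies in $S_{\varepsilon}$; the function name and the dictionary encoding of $\mathcal{K}_{1}$ and $\mathcal{K}_{2}$ are illustrative choices, not from the paper:

```python
def in_S_eps(z, w, eps, known_moments, bounded_moments):
    """Check membership of (z, w) in the set S_eps of (11).

    known_moments:   {k: E[X^k]} for k in K1 (equality constraints).
    bounded_moments: {k: (lower, upper)} for k in K2 (interval constraints).
    """
    if len(z) != len(w):
        return False
    if any(wj < eps for wj in w):          # w_j >= eps
        return False
    if abs(sum(w) - 1.0) > 1e-9:           # weights sum to one
        return False
    for k, m in known_moments.items():     # exact moment matching on K1
        if abs(sum(wi * zi**k for zi, wi in zip(z, w)) - m) > 1e-9:
            return False
    for k, (lo, hi) in bounded_moments.items():  # interval matching on K2
        mk = sum(wi * zi**k for zi, wi in zip(z, w))
        if not (lo <= mk <= hi):
            return False
    return True
```

For instance, the two-point set $z=(2,-1)$, $w=(0.5,0.5)$ from the numerical example of Section 6 reproduces the moments $0.5$ and $2.5$ and hence satisfies the interval constraints $\ell_{1}=-3$, $u_{1}=4$, $\ell_{2}=0$, $u_{2}=5$.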
A direct approach to solve (10) would be to use the mentioned Lasserre hierarchy to solve the inner maximization problem for a fixed $\hat{x}=\hat{x}_{0}$ and a local optimization algorithm to search for the $\hat{x}_{0}$ that minimizes (10). This, however, besides being a hard numeric problem, would not guarantee a good approximation to the real value of the Chebychev center. In the next section, alternative ways to compute (10), with different trade-offs between precision of the result and speed of the algorithm, are proposed.

## 4 Computation of robust sigma points

### 4.1 Computation of an outer box to approximate the Chebychev center

The first proposed approach to find an approximate solution to (12) is to compute an outer-bounding box $B$ of $S_{\varepsilon}$ and approximate the Chebychev center of $S_{\varepsilon}$ by the Chebychev center of $B$. By Theorem 2, for $m\geq 2$ the constraint set $S_{\varepsilon}$ is bounded. Thus, each one of the following polynomial optimization problems on $x=(x_{1},\ldots,x_{2m+2})\in S_{\varepsilon}$ has a solution $\displaystyle\underline{z}_{i}$ $\displaystyle=\arg\min_{x\in\mathbb{R}^{2m+2}}x_{i}\text{ s.t. }x\in S_{\varepsilon},\ i=1,\ldots,m+1,$ $\displaystyle\underline{\omega}_{i}$ $\displaystyle=\arg\min_{x\in\mathbb{R}^{2m+2}}x_{i+m+1}\text{ s.t. }x\in S_{\varepsilon},\ i=1,\ldots,m+1,$ $\displaystyle\overline{z}_{i}$ $\displaystyle=\arg\max_{x\in\mathbb{R}^{2m+2}}x_{i}\text{ s.t. }x\in S_{\varepsilon},\ i=1,\ldots,m+1,$ $\displaystyle\overline{\omega}_{i}$ $\displaystyle=\arg\max_{x\in\mathbb{R}^{2m+2}}x_{i+m+1}\text{ s.t. }x\in S_{\varepsilon},\ i=1,\ldots,m+1,$ (13) and can be used to construct an outer box $\displaystyle B=\{(z_{1},\ldots,z_{m+1},\omega_{1},\ldots,\omega_{m+1})\in\mathbb{R}^{2m+2}:\underline{z}_{i}\leq z_{i}\leq\overline{z}_{i},\ \underline{\omega}_{i}\leq\omega_{i}\leq\overline{\omega}_{i},\ i=1,\ldots,m+1\}$ such that $S_{\varepsilon}\subset B$.
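The bounds (13) are intended to be computed through Lasserre relaxations; as a rough, purely illustrative stand-in (assuming $m=1$, $\mathcal{K}_{1}=\emptyset$, and a truncated search range for the $z_{i}$, none of which come from the paper), one can rejection-sample $S_{\varepsilon}$ and read off an inner estimate of the box $B$ and of its center:

```python
import random

def sample_outer_box(eps, l1, u1, l2, u2, n=100_000, z_range=(-10.0, 10.0), seed=0):
    # Rejection-sample points (z1, z2, w1) of S_eps (m = 1, K1 empty,
    # w2 = 1 - w1) and track coordinate-wise extremes; the midpoint of the
    # resulting box approximates the Chebychev center of B.
    rng = random.Random(seed)
    lo = [float("inf")] * 3
    hi = [float("-inf")] * 3
    for _ in range(n):
        w1 = rng.uniform(eps, 1.0 - eps)
        z1 = rng.uniform(*z_range)
        z2 = rng.uniform(*z_range)
        m1 = w1 * z1 + (1.0 - w1) * z2          # first moment
        m2 = w1 * z1**2 + (1.0 - w1) * z2**2    # second moment
        if l1 <= m1 <= u1 and l2 <= m2 <= u2:
            for i, v in enumerate((z1, z2, w1)):
                lo[i] = min(lo[i], v)
                hi[i] = max(hi[i], v)
    center = [(a + b) / 2.0 for a, b in zip(lo, hi)]
    return lo, hi, center
```

With the Section 6 data ($\ell_{1}=-3$, $u_{1}=4$, $\ell_{2}=0$, $u_{2}=5$), the estimated box center lands near the origin in the $(z_{1},z_{2})$ coordinates. Note that this Monte Carlo box is an inner estimate of $B$, whereas the polynomial programs (13) give the exact box.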
The next theorem gives an estimate of the error of the outer-bounding box approximation. ###### Theorem 3. Let $c_{B}$ be the Chebychev center of $B$ and let $c_{S}$ be the Chebychev center of $S_{\varepsilon}$. If $d\coloneqq\|c_{B}-c_{S}\|$ is defined as the distance between the two Chebychev centers, then $d\leq\frac{\textrm{diam}(B)}{2},$ where $\textrm{diam}(B)$ is the diameter of $B$, that is, the least upper bound of the set of all distances between pairs of points in $B$. ###### Proof. Suppose that $d>\textrm{diam}(B)/2$. Then $c_{S}\not\in B$. On the other hand, since $c_{S}$ is the Chebychev center of $S_{\varepsilon}$, $c_{S}$ lies in the convex hull of $S_{\varepsilon}$, which is contained in $B$ because $B$ is convex and $S_{\varepsilon}\subset B$. Thus $c_{S}\in B$, and the result follows by contradiction. ∎

### 4.2 Polynomial optimization program to compute the minimum enclosing ball

Another approximate solution to problem (12) can be found by using the two-stage approach of [34]. Since the Chebychev center of $S_{\varepsilon}$ is always inside the outer box $B$ computed in Section 4.1, problem (12) is equivalent to $(z_{1}^{C},\ldots,z_{m+1}^{C},\omega_{1}^{C},\ldots,\omega_{m+1}^{C})=\underset{\hat{x}\in B}{\arg\min}\ \underset{x\in S_{\varepsilon}}{\max}\|x-\hat{x}\|^{2}.$ (14) In the first stage, the inner optimization problem of (14), namely, $J(\hat{x})\coloneqq\max_{x\in S_{\varepsilon}}\|x-\hat{x}\|^{2},$ is approximated by a polynomial function $\tilde{J}_{\tau}(\hat{x})$ of degree $2\tau$ using the semidefinite optimization program outlined in [34]. Then, in the second stage, the outer minimization problem is replaced with the polynomial optimization problem $(z_{1}^{C},\ldots,z_{m+1}^{C},\omega_{1}^{C},\ldots,\omega_{m+1}^{C})=\underset{\hat{x}\in B}{\arg\min}\ \tilde{J}_{\tau}(\hat{x}),$ (15) which can be solved using Lasserre's hierarchy.
As the degree $2\tau$ of the approximation polynomial increases, the solution of problem (15) converges to the solution of problem (14) in the sense of [34]. Alternatively, problem (12) can be shown to be equivalent to the following polynomial optimization problem: $\begin{array}{cc}\min_{\hat{x},r}r&\text{ s.t. }\|x-\hat{x}\|^{2}\leq r,\\ &x\in S_{\varepsilon},\,\hat{x}\in B,\end{array}$ (16) where $\hat{x}$ is the Chebychev center of $S_{\varepsilon}$. In other words, the Chebychev center is the center of the minimum-radius ball that encloses $S_{\varepsilon}$. ###### Remark 3. It is interesting to note that if the gap between $\ell_{i}$ and $u_{i}$ is not large enough, the semi-definite programs resulting from relaxing the polynomial optimization problems (13) and (16) can be numerically unstable. In this case, however, one may use the naive approach of computing sigma points with the arithmetic mean of the moments without significant loss, since the difference between the real Chebychev center and the point computed by this method would be small due to the small difference between the upper and lower bounds of the moments.

## 5 Computation of the UT transform

Suppose that $x_{CB}\coloneqq(z_{1}^{C},\ldots,z_{m+1}^{C},\omega_{1}^{C},\ldots,\omega_{m+1}^{C})$ is the Chebychev center of $S$ computed by one of the methods of Section 4. Based on (2), define the function $UT_{f}(z_{1},\ldots,z_{m+1},w_{1},\ldots,w_{m+1})\coloneqq\sum_{i=1}^{m+1}w_{i}f(z_{i}).$ As discussed above, $UT_{f}$ is a good approximation of $E\{f(X)\}$ for a sufficiently large $m$. While it is true that $x_{CB}$ approximates the Chebychev center of $S$, one also wishes to know whether $UT_{f}(x_{CB})$ is near the Chebychev center of $UT_{f}(S)$ (that is, the image of the set $S$ under the function $UT_{f}$).
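Returning to the minimum-enclosing-ball formulation (16): as a lightweight, sampling-based alternative to the semidefinite route, one can run the Badoiu–Clarkson core-set iteration on a finite sample of $S_{\varepsilon}$. This is a hedged sketch — it approximates the minimum enclosing ball of the sampled points only, not of the full set, and the iteration is a standard algorithm rather than one taken from the paper:

```python
def meb_center(points, iters=1000):
    # Badoiu-Clarkson iteration: repeatedly step the current center a
    # fraction 1/(k+2) of the way toward the farthest sampled point.
    # The iterate converges to the center of the sample's minimum enclosing
    # ball, giving an approximate Chebychev center of the sampled set.
    c = list(points[0])
    for k in range(iters):
        far = max(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, c)))
        step = 1.0 / (k + 2)
        c = [a + step * (b - a) for a, b in zip(c, far)]
    return c
```

Feeding in any finite sample of $S_{\varepsilon}$ yields an approximate center of that sample; unlike the relaxation of (16), however, this gives no certificate of global optimality.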
A class of functions $f$ for which $UT_{f}(x_{CB})$ is near the Chebychev center of $UT_{f}(S)$ is given by the functions such that the solution $D$ of the following optimization problem is sufficiently small: $\displaystyle\min\,D\ \text{s.t.\ }$ $\displaystyle(1-D)\|x-y\|\leq\|UT_{f}(x)-UT_{f}(y)\|$ (17) $\displaystyle\leq(1+D)\|x-y\|,$ $\displaystyle x,y\in S.$ If this value is sufficiently small, the function $UT_{f}$ is a low-distortion geometric embedding [35], and this implies that $UT_{f}(x_{CB})$ is near the Chebychev center of $UT_{f}(S)$. Finally, one can estimate a solution of (17) by uniformly sampling random pairs $(x_{i},y_{i})$ from $S$ and computing $\max_{i}D_{i}$, where $\displaystyle D_{i}\coloneqq\min\,D\ \text{s.t.\ }$ $\displaystyle(1-D)\|x_{i}-y_{i}\|\leq\|UT_{f}(x_{i})-UT_{f}(y_{i})\|$ (18) $\displaystyle\leq(1+D)\|x_{i}-y_{i}\|.$

## 6 Numerical experiments

In this section, the computation of the Chebychev center using the methods proposed in this paper is illustrated. Consider that one desires to compute two sigma points in a scenario where the values of the first and second moments are not precisely known, but it is known that $E\{X\}\in[-3,4]$ and that $E\{X^{2}\}\in[0,5]$. In this case, one has (9) with $m=1$, $\mathcal{K}_{1}=\emptyset$, $\mathcal{K}_{2}=\{1,2\}$, $\ell_{1}=-3$, $u_{1}=4$, $\ell_{2}=0$ and $u_{2}=5$. Naive method: For fixed values of $E\{X\}=\mu$ and $E\{X^{2}\}=V$, one can compute a point inside the set $S$ by using the canonical formula $\begin{array}{c}\omega_{1}=0.5,\ \omega_{2}=0.5,\\ z_{1}=\mu+\sqrt{V-\mu^{2}},\ z_{2}=\mu-\sqrt{V-\mu^{2}}.\end{array}$ (19) A naive choice of sigma points would be to use the arithmetic mean of the lower and upper bounds of the moments in (19). Using (19) with $\mu=\frac{1}{2}(\ell_{1}+u_{1})=0.5$ and $V=\frac{1}{2}(\ell_{2}+u_{2})=2.5$ results in the sigma points $z_{1}=2$, $z_{2}=-1$ with weights given by $w_{1}=0.5$ and $w_{2}=0.5$.
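The naive computation above is easy to check numerically. The following sketch reproduces the sigma points from the midpoint moments and verifies that the two-point rule (19) matches those moments exactly (the function name is ours):

```python
import math

def naive_two_point_rule(mu, v):
    # Canonical symmetric two-point rule (19): equal weights, points at
    # mu +/- sqrt(v - mu^2), which match the first two moments exactly.
    s = math.sqrt(v - mu * mu)
    return (mu + s, mu - s), (0.5, 0.5)

(z1, z2), (w1, w2) = naive_two_point_rule(0.5, 2.5)   # midpoint moments
m1 = w1 * z1 + w2 * z2        # reproduced first moment
m2 = w1 * z1**2 + w2 * z2**2  # reproduced second moment
```

For $\mu=0.5$ and $V=2.5$ this gives $z_{1}=2$ and $z_{2}=-1$, and the reproduced moments are $m_{1}=0.5$ and $m_{2}=2.5$, matching the text.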
Method 1: By using the method proposed in Section 4.1, it is possible to compute an outer-bounding box approximation to the semi-algebraic set $S_{\varepsilon}$. Using $\varepsilon=0.01$, the computation of an approximation to the Chebychev center via the outer-bounding box results in the sigma points $z_{1}=0$, $z_{2}=0$ with weights respectively given by $w_{1}=0.5$ and $w_{2}=0.5$. Method 2: Finally, computing a point using the method proposed in Section 4.2 with $\varepsilon=0.01$ results in the sigma points $z_{1}=-0.0001$, $z_{2}=0.0001$ with weights respectively given by $w_{1}=0.1$ and $w_{2}=0.9$. In both Methods 1 and 2, the MATLAB toolbox GloptiPoly 3 was used to relax the polynomial optimization problems (13) and (16). A relaxation order of $2$ was used, and the global optimality of the problems was numerically certified by GloptiPoly 3. Higher relaxation orders could not be used, since they result in semi-definite programs with a large number of variables and constraints and thus yield an unstable numeric problem. Figure 3 illustrates the semi-algebraic variety $S_{\varepsilon}$ and the coordinates of the sigma points calculated by the three methods. It can be seen from Figure 3 that the point computed by using the outer-bounding box approximation is the best approximation to the real Chebychev center of the variety. Moreover, it is important to note that the point calculated by Method 2 is very far from the real Chebychev center of the set $S_{\varepsilon}$. This is due to the semi-definite relaxation of (16) being far from the exact solution. In principle, one can increase the relaxation order, but this can lead to an unstable numeric problem whose global optimality can no longer be certified. Figure 3: Semi-algebraic variety $S_{\varepsilon}$ with $\varepsilon=0.01$, $m=1$, $\mathcal{K}_{1}=\emptyset$, $\mathcal{K}_{2}=\{1,2\}$, $\ell_{1}=-3$, $u_{1}=4$, $\ell_{2}=0$ and $u_{2}=5$.
The blue point gives the coordinates of the sigma points obtained by the naive method, the red point those obtained by the outer-bounding box approximation, and the green point those obtained by solving the polynomial optimization problem (16) directly. Only $\omega_{1}$, $z_{1}$ and $z_{2}$ are illustrated in the graphic, since $\omega_{2}=1-\omega_{1}$. Finally, to illustrate the robustness of the computed point, $100$ random samples of $E\{X\}$ and $E\{X^{2}\}$ are respectively drawn from the uniform distributions $U(\ell_{1},u_{1})$ and $U(\ell_{2},u_{2})$. Solving (18) for $f(x)=\sin(x)$ with $500$ samples gives an estimate for $D$ of $0.9956$, and the posterior expectation in (1) is approximated as $\textrm{E}\left\{\sin(X)\right\}\approx\omega_{1}\sin(z_{1})+\omega_{2}\sin(z_{2}),$ where $\omega_{1}$, $\omega_{2}$, $z_{1}$ and $z_{2}$ are computed according to the above methods. The mean error $\left|E\{\sin(X)\}-\sum_{i=1}^{2}\omega_{i}\sin(z_{i})\right|$ of the naive method is $0.0339$, while that of Method 1 is $3.6932\cdot 10^{-6}$ and that of Method 2 is $1.2085\cdot 10^{-4}$.

## 7 Conclusion

In this paper, a method was proposed to devise a robust unscented transform by computing the Chebychev center of the semialgebraic set defined by the possible choices of sigma points and their weights. Although, in general, this problem is NP-Hard, some methods are proposed in this paper to approximate the solution of the original problem by convex optimization problems. Future work intends to generalize the present results to higher dimensions, as this could enable novel filter designs for multivariate dynamical systems. Another possible extension of this work is to adjust the proposed algorithm to work with confidence bounds for the moments instead of absolute upper and lower bounds, since confidence bounds are more commonly produced by interval estimation techniques, such as the bootstrapping method.
Finally, one limitation of the method is that a large number of sigma points can lead to a numerically unstable relaxation of the polynomial optimization problems. Nevertheless, recent advances in polynomial optimization techniques, such as novel relaxation hierarchies based on linear programming [36], could help to scale the presented approaches to a higher number of sigma points.

## Acknowledgments

The authors would like to thank Dario Piga and Henrique M. Menegaz for their valuable suggestions and discussions relevant to this paper. We would also like to thank the Brazilian agencies CNPq and CAPES, which partially supported this work.

## References

* [1] K. O. Geddes, S. R. Czapor, G. Labahn, Algorithms for Computer Algebra, Kluwer Academic Publishers, 1992. * [2] A. Papoulis, S. U. Pillai, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, 2002. * [3] S. J. Julier, J. K. Uhlmann, H. F. Durrant-Whyte, A new approach for filtering nonlinear systems, in: Proceedings of the 1995 American Control Conference, Vol. 3, 1995, pp. 1628–1632. * [4] L. R. A. X. de Menezes, A. J. M. Soares, F. C. Silva, M. A. B. Terada, D. Correia, A new procedure for assessing the sensitivity of antennas using the unscented transform, IEEE Transactions on Antennas and Propagation 58 (3) (2010) 988–993. * [5] M. L. Carneiro, P. H. P. de Carvalho, N. Deltimple, L. d. C. Brito, L. R. A. X. de Menezes, E. Kerherve, S. G. de Araujo, A. S. Rocira, Doherty amplifier optimization using robust genetic algorithm and unscented transform, in: Proceedings of the 9th IEEE International conference on New Circuits and Systems Conference, IEEE, 2011, pp. 77–80. * [6] H. M. T. Menegaz, J. Y. Ishihara, G. A. Borges, A. N. Vargas, A systematization of the unscented Kalman filter theory, IEEE Transactions on Automatic Control 60 (10) (2015) 2583–2598. * [7] K. Krishnamoorthy, Handbook of Statistical Distributions with Applications, CRC Press, 2010. * [8] K. G. Murty, S. N.
Kabadi, Some NP-complete problems in quadratic and nonlinear programming, Mathematical programming 39 (2) (1987) 117–129. * [9] J. B. Lasserre, Global optimization with polynomials and the problem of moments, SIAM Journal on Optimization 11 (3) (2001) 796–817. * [10] J. B. Lasserre, Moments, Positive Polynomials and Their Applications, Imperial College Press optimization series, Imperial College Press, Singapore, 2009. * [11] J. R. Van Zandt, A more robust unscented transform, in: International symposium on optical science and technology, International Society for Optics and Photonics, 2001, pp. 371–380. * [12] S. Mehrotra, D. Papp, Generating moment matching scenarios using optimization techniques, SIAM Journal on Optimization 23 (2) (2013) 963–999. * [13] H. Dette, W. J. Studden, The Theory of Canonical Moments with Applications in Statistics, Probability, and Analysis, Vol. 338, John Wiley & Sons, 1997. * [14] J.-B. Hiriart-Urruty, C. Lemaréchal, Fundamentals of Convex Analysis, Grundlehren Text Editions, Springer, Germany, 2001. * [15] R. Radhakrishnan, A. Yadav, P. Date, S. Bhaumik, A new method for generating sigma points and weights for nonlinear filtering, IEEE Control Systems Letters 2 (3) (2018) 519–524. * [16] W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, 3rd Edition, Cambridge University Press, New York, 2007. * [17] S. J. Julier, J. K. Uhlmann, Unscented filtering and nonlinear estimation, Proceedings of the IEEE 92 (3) (2004) 401–422. * [18] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Texts in Applied Mathematics, Springer, New York, 2002. * [19] R. Caballero-Aguila, A. Hermoso-Carazo, J. Linares-Pérez, Extended and unscented filtering algorithms in nonlinear fractional order systems with uncertain observations, Applied Mathematical Sciences 6 (30) (2012) 1471–1486. * [20] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004. 
* [21] Z. Li, S. He, S. Zhang, Approximation Methods for Polynomial Optimization: Models, Algorithms, and Applications, Springer Briefs in Optimization, Springer, New York, 2012. * [22] H. Markowitz, Portfolio selection, The journal of finance 7 (1) (1952) 77–91. * [23] E. Jondeau, M. Rockinger, Optimal portfolio allocation under higher moments, European Financial Management 12 (1) (2006) 29–55. * [24] P. M. Kleniati, P. Parpas, B. Rustem, Partitioning procedure for polynomial optimization: Application to portfolio decisions with higher order moments, Tech. Rep. WPS-023, COMISEF Working Papers Series (2009). * [25] A. P. Roberts, M. M. Newmann, Polynomial optimization of stochastic feedback control for stable plants, IMA Journal of Mathematical Control and Information 5 (3) (1988) 243–257. * [26] D. Henrion, J.-B. Lasserre, Solving nonconvex optimization problems, Control Systems, IEEE 24 (3) (2004) 72–83. * [27] B. Mariere, Z.-Q. Luo, T. N. Davidson, Blind constant modulus equalization via convex optimization, Signal Processing, IEEE Transactions on 51 (3) (2003) 805–818. * [28] L. Qi, K. L. Teo, Multivariate polynomial minimization and its application in signal processing, Journal of Global Optimization 26 (4) (2003) 419–433. * [29] G. Dahl, J. M. Leinaas, J. Myrheim, E. Ovrum, A tensor product matrix approximation problem in quantum physics, Linear algebra and its applications 420 (2) (2007) 711–725. * [30] S. Soare, J. W. Yoon, O. Cazacu, On the use of homogeneous polynomials to develop anisotropic yield functions with applications to sheet forming, International Journal of Plasticity 24 (6) (2008) 915–944. * [31] D. Henrion, J.-B. Lasserre, J. Löfberg, GloptiPoly 3: moments, optimization and semidefinite programming, Optimization Methods & Software 24 (4-5) (2009) 761–779. * [32] H. Waki, S. Kim, M. Kojima, M. Muramatsu, H. 
Sugimoto, Algorithm 883: SparsePop—a sparse semidefinite programming relaxation of polynomial optimization problems, ACM Transactions on Mathematical Software 35 (2) (2008) 15. * [33] P. Wittek, Algorithm 950: Ncpol2sdpa–sparse semidefinite programming relaxations for polynomial optimization problems of noncommuting variables, ACM Transactions on Mathematical Software (TOMS) 41 (3) (2015) 21. * [34] V. Cerone, J. B. Lasserre, D. Piga, D. Regruto, A unified framework for solving a general class of conditional and robust set-membership estimation problems, IEEE Transactions on Automatic Control 59 (11) (2014) 2897–2909. * [35] P. Indyk, Algorithmic applications of low-distortion geometric embeddings, in: Proceedings of 42nd IEEE Symposium on Foundations of Computer Science, 2001, pp. 10–33. * [36] A. A. Ahmadi, A. Majumdar, DSOS and SDSOS optimization: more tractable alternatives to sum of squares and semidefinite optimization, arXiv preprint arXiv:1706.02586.
# Content Delivery over Broadcast Erasure Channels with Distributed Random Cache

Alireza Vahid Shih-Chun Lin I-Hsiang Wang Yi-Chun Lai Alireza Vahid is with the Electrical Engineering Department of the University of Colorado Denver, Denver, CO, USA. Email<EMAIL_ADDRESS>Shih-Chun Lin and Yi-Chun Lai are with the Department of Electrical and Computer Engineering, NTUST, Taipei, Taiwan. Email<EMAIL_ADDRESS>I-Hsiang Wang is with the Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan. Email: <EMAIL_ADDRESS>

###### Abstract

We study the content delivery problem between a transmitter and two receivers through erasure links, when each receiver has access to some random side-information about the files requested by the other user. The random side-information is cached at the receiver via decentralized content placement. The distributed nature of the receiving terminals may also make the erasure states of the two links and the indexes of the cached bits not perfectly known at the transmitter. We thus investigate the capacity gain due to various levels of availability of channel state and cache index information at the transmitter. More precisely, we cover a wide range of settings, from global delayed channel state knowledge and a non-blind transmitter (i.e., one that knows the exact cache index information at each receiver) all the way to no channel state information and a blind transmitter (i.e., one that only statistically knows the cache index information at the receivers). We derive new inner and outer bounds for the problem under various settings and provide the conditions under which the two match and the capacity region is characterized. Surprisingly, for some interesting cases the capacity regions are the same even with single-user channel state or single-user cache index information at the transmitter.

## I Introduction

Available receiver-end side-information can greatly enhance content delivery and increase the attainable data rates in wireless systems.
In particular, in various applications such as caching [1, 2], coded computing [3], private information retrieval [4, 5], and index coding [6, 7, 8], side-information is intentionally and strategically placed in each receiver's cache during some placement phase in order to lighten future communication loads. However, in a more realistic setting, there is no centralized mechanism to populate the local caches. On the other hand, receivers may obtain some side-information by simply overhearing the signals intended for other nodes over the shared wireless medium. The transmitter(s) may or may not be aware of the exact content of the side-information at each receiver, and if the transmitter is not aware of the exact content, the problem is referred to as Blind Index Coding [9]. Further, the transmitter(s) may be enhanced by receiving channel state feedback. This paper focuses on content delivery in wireless networks with potential channel state feedback and/or random available side-information at the receivers. In a packet-based communication network, instead of the classic Gaussian channel, network-coding-based approaches generally model each communication hop as a packet erasure channel. In this work, as in [10], we focus on broadcast erasure channels with random receiver side-information. More specifically, we consider a single transmitter communicating with two receiving terminals through erasure links. Through decentralized content placement [9, 10], each receiver has randomly cached some side-information about the message bits (file) of the other user. In [10], for both users, the indexes of the bits cached during placement and the (delayed) channel erasure states during delivery are globally known at the transmitter. However, as pointed out in [11, 12], in distributed networks, acquiring such global information at transmitters is prohibitive due to the extensive data exchange and the heterogeneous feedback capabilities of the receiving terminals.
Thus, we consider three scenarios for the transmitter's knowledge of the receiver-end side-information: (1) the blind-transmitter case, where only the statistics of the _random_ cache index information are known to the transmitter; (2) the non-blind-transmitter case, where the transmitter knows exactly what each receiver has access to; and (3) the semi-blind-transmitter case, where the transmitter knows the exact cache index information at only one of the receivers. Meanwhile, a range of assumptions on the availability of channel state information at the transmitter (CSIT) are also considered: (1) when both receivers provide delayed CSI to the transmitter (DD); (2) when only one receiver provides delayed CSI to the transmitter (DN); and (3) when receivers do not provide any CSI to the transmitter (NN). The blind-transmitter case under scenario NN in [9] corresponds to the setting in which, during the whole content placement and delivery, neither receiver can feed back through the control channel or the shared wireless medium. Then the transmitter has neither cache index information nor CSI. On the other hand, for each receiver, if the feedback resource is available for some period of time and then becomes unavailable or intermittent [13, 14], other scenarios arise. For example, even though during placement both receivers can feed back cache indexes, during delivery one of the receivers may not be able to feed back the fast-varying channel state of its own link. Then we have a non-blind transmitter under scenario DN. This setting could also arise due to new security threats that aim to disrupt the flow of control packets in distributed systems [15, 16, 17]; that is, during delivery the CSI feedback from one of the receivers is severely attacked. Related Work and Literature Review: To better place our work within the literature on cached networks and index coding, we summarize some of the main results in the literature.
The classic index coding problem [18, 6, 19] assumes the transmitter is aware of the exact side-information at each node [7, 20], and the network does not directly involve wireless links. This problem has been a powerful tool in studying data communication networks with receiver caching [21, 8, 22]. For index coding over wireless links, as aforementioned, two extreme cases have been studied [9, 10]. In [10], the capacity regions for two- and three-user broadcast erasure channels, when the transmitter has access to global delayed CSI and cache index information, are characterized. This setting assumes rather strong and stable feedback channels from all receivers. On the other hand, blind wireless index coding is introduced in [9], where the transmitter only knows the statistics of the CSI and cache index information, since there is no feedback at all from the two users. However, a limitation is imposed in [9] such that only the user with the weaker link (higher erasure probability) has a cache. Even under this limitation, the capacity region is not fully known. In conclusion, there remains a gap in the index coding literature between these two extreme points when it comes to the wireless setting, which is the target of our paper. More comparisons with [9, 10] can be found in Section III. Contributions: In this work, we consider the two-user erasure broadcast channel with the erasure states of the two links independent of each other. We first present the capacity for the two-user broadcast erasure channel with a non-blind transmitter under scenario NN, that is, no CSI feedback. For scenario NN, we observe that the stronger receiver, _i.e._, the one whose channel has a smaller erasure probability, will eventually be able to decode the messages intended for both receivers. Note that this observation does not imply that the broadcast channel is degraded due to the cache.
Interestingly, the achievability derived from this observation indicates that our optimal protocol works even when the transmitter does not know the exact cache index information at the stronger receiver, _i.e._, with a "semi-blind" transmitter. To derive the outer-bounds, besides using the aforementioned observations, we also show an extremal entropy inequality between the two receivers that captures the availability of receiver-end side-information. With a blind transmitter, these outer-bounds apply automatically, since the transmitter has more knowledge in the non-blind case, and we show that the outer-bounds can be achieved when the channel is symmetric. When some delayed CSI is available, we also demonstrate that the capacity with a non-blind transmitter and global CSI (scenario DD) [10] can be achieved when the transmitter has less information. In particular, for the blind-transmitter case under scenario DD, we provide an optimal protocol for the symmetric channel. For the non-blind-transmitter or semi-blind-transmitter case, we also extend our earlier non-cached capacity result for scenario DN [12], and show that even with caches the capacity region with only single-user CSI feedback can match that with global CSI in [10].
TABLE I: Summary of Contributions Categorized by Transmitter’s Information

State (W, S) | Cache (W, S) | Contributions
---|---|---
✗ ✗ | ✓ ✓ | *$\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ in Theorem 1
✗ ✗ | ✓ ✗ | *$\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{NN}}$ in Corollary 1
✗ ✗ | ✗ ✓ | Inner-bounds in Theorem 4
✗ ✗ | ✗ ✗ | Symmetric $\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$ in Theorem 3
✗ ✓ | ✓ ✓ | *$\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DN}}$ in Theorem 2 Case C
✗ ✓ | ✓ ✗ | $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DN}}$ in Theorem 2 Case B
✗ ✓ | ✗ ✗ | Remains open
✓ ✗ | ✗ ✓ | $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DN}}$ in Theorem 2 Case B
✓ ✓ | ✓ ✓ | $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DD}}$ previously known [10]
✓ ✓ | ✓ ✗ | $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DD}}$ in Theorem 2 Case B
✓ ✓ | ✗ ✓ | $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DD}}$ in Theorem 2 Case B
✓ ✓ | ✗ ✗ | Symmetric $\mathcal{C}^{\mathrm{blind}}_{\mathrm{DD}}$ in Theorem 2 Case A

Table I summarizes our contributions, which we will further elaborate upon in Section III. In this table, “W” stands for the weak user (the one with the higher erasure probability) and “S” stands for the strong user. Further, the “State” columns indicate the transmitter's knowledge of the CSI, and the “Cache” columns indicate the transmitter’s knowledge of the bit indices cached at each receiver: “✓” indicates availability and “✗” indicates missing information. For example, the row “✗ ✗ | ✓ ✗” is the no-CSI case (NN) with a semi-blind Tx that knows the cache index information at the weaker receiver. For the three cases marked with “*”, the capacity regions are fully identified without additional limitations on system parameters. Paper Organization: The rest of the paper is organized as follows. In Section II, we present the problem setting and the assumptions we make in this work. 
Section III presents the main contributions and provides further insights and interpretations of the results. The proofs of the main results are presented in the following sections. Finally, Section VIII concludes the paper.

## II Problem Formulation

Following [10], we consider the canonical two-user broadcast erasure channel in Figure 1 to understand how transmitters can exploit the available side-information at the receivers to improve the capacity region. In this network, a transmitter, $\mathsf{Tx}$, wishes to transmit two independent messages (files), $W_{1}$ and $W_{2}$, to two receiving terminals $\mathsf{Rx}_{1}$ and $\mathsf{Rx}_{2}$, respectively, over $n$ channel uses. Each message, $W_{i}$, contains $m_{i}$ data packets (or bits), which we denote by $\vec{a}=\left(a_{1},a_{2},\ldots,a_{m_{1}}\right)$ for $\mathsf{Rx}_{1}$, and by $\vec{b}=\left(b_{1},b_{2},\ldots,b_{m_{2}}\right)$ for $\mathsf{Rx}_{2}$. Here, we note that each packet is a collection of encoded bits; however, for simplicity and without loss of generality, we assume each packet is in the binary field, and we refer to them as bits. Extensions to broadcast packet erasure channels where packets are in large finite fields are straightforward, as in [10][23]. Channel model: At time instant $t$, the messages are mapped to the channel input $X[t]\in\mathbb{F}_{2}$, and the corresponding received signals at $\mathsf{Rx}_{1}$ and $\mathsf{Rx}_{2}$ are $\displaystyle Y_{1}[t]=S_{1}[t]X[t]\quad\text{and}\quad Y_{2}[t]=S_{2}[t]X[t],$ (1) respectively, where $\left\{S_{i}[t]\right\}$ denotes the Bernoulli $(1-\delta_{i})$ process that governs the erasure at $\mathsf{Rx}_{i}$, and is independently and identically distributed (i.i.d.) over time and across users. When $S_{i}[t]=1$, $\mathsf{Rx}_{i}$ receives $X[t]$ noiselessly; and when $S_{i}[t]=0$, it receives an erasure. 
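The channel model in (1) is easy to simulate. The sketch below is an illustrative toy (not part of the paper): it draws the i.i.d. Bernoulli states $S_i[t]$ and marks erased symbols with `None`; the erasure probabilities `0.5` and `0.75` are arbitrary example values.

```python
import random

def broadcast_erasure_step(x, delta1, delta2, rng):
    """One use of the broadcast erasure channel in eq. (1):
    Rx_i sees x when S_i = 1 (prob. 1 - delta_i), an erasure otherwise."""
    s1 = 1 if rng.random() < 1 - delta1 else 0
    s2 = 1 if rng.random() < 1 - delta2 else 0
    return (x if s1 else None), (x if s2 else None)  # None marks an erasure

rng = random.Random(0)
n = 100_000
outs = [broadcast_erasure_step(1, 0.5, 0.75, rng) for _ in range(n)]
frac1 = sum(y1 is not None for y1, _ in outs) / n
frac2 = sum(y2 is not None for _, y2 in outs) / n
print(frac1, frac2)  # close to 1 - delta_1 = 0.5 and 1 - delta_2 = 0.25
```

Over a long block, each receiver obtains roughly a $(1-\delta_i)$ fraction of the transmitted symbols, which is the counting fact the achievability arguments later rely on.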
In other words, as we assume the receivers are aware of their local channel state information, each receiver can map the received signal to an erasure when $S_{i}[t]=0$. CSI assumptions: We assume the receivers are aware of the channel state information (_i.e._ global CSIR). For the transmitter, on the other hand, we consider the following scenarios:

1. NN or no-CSIT model: The transmitter knows only the erasure probabilities and not the actual channel realizations;
2. DN model: The transmitter knows the erasure probabilities and the actual channel realizations of one receiver with unit delay;
3. DD or delayed-CSIT model: The transmitter knows the erasure probabilities and the actual channel realizations of both receivers with unit delay.

Figure 1: Two-user broadcast erasure channels with channel state feedback and random side-information at each receiver.

Available receiver side-information: Decentralized content placement [9][10] is adopted, where each user independently caches a subset of the message bits (file). In particular, we assume a random fraction $(1-\epsilon_{i})$ of the bits intended for receiver $\mathsf{Rx}_{\bar{i}}$ is cached at $\mathsf{Rx}_{{i}}$, $\bar{i}\overset{\triangle}{=}3-i$, and we denote this side-information by $W_{\bar{i}|i}$ as in Figure 1. This assumption on the available side-information at each receiver could also be represented using an erasure side channel. More precisely, we can assume the side-information available to receiver $i$ is created through $\displaystyle E_{i}[\ell]W_{\bar{i}}[\ell],\qquad\ell=1,2,\ldots,nR_{\bar{i}},$ (2) where $W_{\bar{i}}[\ell]$ is the $\ell^{\mathrm{th}}$ bit of message $W_{\bar{i}}$, while the cache index information $E_{i}[\ell]$ is an i.i.d. Bernoulli $(1-\epsilon_{i})$ process independent of all other channel parameters and known at receiver $i$. To present our results concisely, in our placement model each user caches only the interfering message, not its own. 
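The decentralized placement of (2) can be sketched in a few lines. This is an illustrative toy, not the paper's construction; the message length and $\epsilon$ value are arbitrary example parameters.

```python
import random

def decentralized_placement(message, eps_i, rng):
    """Decentralized content placement, eq. (2): Rx_i caches bit ell of the
    interfering message iff E_i[ell] = 1, with E_i i.i.d. Bernoulli(1 - eps_i)."""
    E = [1 if rng.random() < 1 - eps_i else 0 for _ in message]
    cached = {l: b for l, (e, b) in enumerate(zip(E, message)) if e}
    return E, cached

rng = random.Random(1)
w2 = [rng.randint(0, 1) for _ in range(20_000)]        # message for Rx_2
E1, w2_at_rx1 = decentralized_placement(w2, 0.3, rng)  # W_{2|1} held at Rx_1
print(len(w2_at_rx1) / len(w2))  # close to 1 - eps_1 = 0.7
```

The cache index sequence `E1` is exactly the information a non-blind transmitter is assumed to know, while a blind transmitter only knows the parameter `eps_i`.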
Our results can easily be extended to the case where both the user's own message bits and interference bits are cached. The Transmitter’s knowledge of the cache index: Following the convention which writes the length-$nR_{\bar{i}}$ sequence $E_{i}[1],\ldots,E_{i}[nR_{\bar{i}}]$ as $E^{nR_{\bar{i}}}_{i}$, we consider three scenarios for the blindness of the transmitter with respect to the cache index:

1. Blind Transmitter: The transmitter’s knowledge of the receiver side-information is limited to the values of $\epsilon_{1}$ and $\epsilon_{2}$, while the cache index information $E^{nR_{2}}_{1}$ and $E^{nR_{1}}_{2}$ in (2) is unknown.
2. Semi-Blind Transmitter: The transmitter’s knowledge of the side-information at $\mathsf{Rx}_{i}$ is limited to the value of $\epsilon_{i}$, while the transmitter knows $W_{i|\bar{i}}$ through $E^{nR_{i}}_{\bar{i}}$.
3. Non-Blind Transmitter: The transmitter knows exactly which fraction of each message is available to the unintended receiver. In other words, the transmitter knows $W_{2|1}$ and $W_{1|2}$ through the cache index information $E^{nR_{2}}_{1}$ and $E^{nR_{1}}_{2}$.

Encoding: We start with the NN model, where the constraint imposed on the encoding function $f_{t}(.)$ at time index $t$ for the blind scenario is $\displaystyle X[t]=f_{t}\left(W_{1},W_{2},\mathsf{PI}\right),$ (3) for the non-blind scenario is $\displaystyle X[t]=f_{t}\left(W_{1|2},\bar{W}_{1|2},W_{2|1},\bar{W}_{2|1},\mathsf{PI}\right),$ (4) and for the semi-blind scenario is $\displaystyle X[t]=f_{t}\left(W_{i|\bar{i}},\bar{W}_{i|\bar{i}},W_{\bar{i}},\mathsf{PI}\right),$ (5) where $\mathsf{PI}$ represents the knowledge of the statistical parameters $\delta_{1},\delta_{2},\epsilon_{1},$ and $\epsilon_{2}$, and $\bar{W}_{i|\bar{i}}$ is the complement of $W_{i|\bar{i}}$ with respect to $W_{i}$, $i=1,2$. Although not ideal, this notation is adopted to highlight the transmitter’s knowledge of the available side-information at the receivers. 
For the DD case, $S^{t-1}$ is added to the inputs of $f_{t}(.)$, while under the DN model, only one of the channels is revealed to the transmitter up to time $(t-1)$. Rather than enumerating all possibilities, we present an example to clarify the encoding constraints. Suppose the transmitter knows the side-information available to $\mathsf{Rx}_{2}$ (semi-blind) and has access to the delayed CSI from $\mathsf{Rx}_{1}$ (DN model); then, we have $\displaystyle X[t]=f_{t}\left(W_{1|2},\bar{W}_{1|2},W_{2},S_{1}^{t-1},\mathsf{PI}\right).$ (6) Decoding: Each receiver $\mathsf{Rx}_{i}$, $i=1,2$, knows its own CSI across the entire transmission block, $S^{n}_{i}$, and the CSI $S^{n}_{\bar{i}}$ if the other receiver $\bar{i}$ provides feedback. Under scenario NN it uses a decoding function $\varphi_{i,n}\left(Y_{i}^{n},S_{i}^{n},W_{\bar{i}|i}\right)$ to get an estimate $\widehat{W}_{i}$ of $W_{i}$, while under scenario DD the decoding function becomes $\varphi_{i,n}\left(Y_{i}^{n},S^{n},W_{\bar{i}|i}\right)$, where $S^{n}=(S_{1}^{n},S^{n}_{2})$. Note that under scenario DN, only the no-feedback receiver has the global $S^{n}$. An error occurs whenever $\widehat{W}_{i}\neq W_{i}$. The average probability of error is given by $\displaystyle\lambda_{i,n}=\mathbb{E}[P(\widehat{W}_{i}\neq W_{i})],$ (7) where the expectation is taken with respect to the random choice of the transmitted messages. Capacity region: We say that a rate pair $(R_{1},R_{2})$ is achievable if there exists a block encoder at the transmitter and a block decoder at each receiver such that $\lambda_{i,n}$ goes to zero as the block length $n$ goes to infinity. The capacity region, $\mathcal{C}$, is the closure of the set of achievable rate pairs. Throughout the paper, we distinguish the capacity region under different assumptions. For example, $\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$ is the capacity region of the two-user broadcast erasure channel with a blind transmitter and no CSIT. 
## III Main Results

In this section, we present the main contributions of this paper and provide some insights and intuitions about the findings.

### III-A Statement of the Main Results

We start with the scenarios in which we characterize the capacity region, and then we present cases for which we derive new inner-bounds. In Theorem 1, for the no-CSIT scenario, we establish the capacity region with a non-blind transmitter. We will highlight the importance of side-information at the weaker receiver and how a semi-blind transmitter may achieve the same region in Remarks 2 and 1, respectively. Next, we present new capacity results when (some) delayed CSI is available to the transmitter in Theorem 2. Then, for the no-CSIT assumption and a blind transmitter, in Theorem 3 we present new conditions beyond [9] under which the capacity region is achievable, and a new achievable region is presented in Theorem 4.

###### Theorem 1.

For the two-user broadcast erasure channel with a non-blind transmitter and no CSIT as described in Section II, we have $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}\equiv\left\{\begin{array}[]{ll}0\leq\beta^{\mathrm{no}}_{1}R_{1}+R_{2}\leq\left(1-\delta_{2}\right),&\\ 0\leq R_{1}+\beta^{\mathrm{no}}_{2}R_{2}\leq\left(1-\delta_{1}\right),&\end{array}\right.$ (8) where $\displaystyle\beta^{\mathrm{no}}_{i}=\epsilon_{\bar{i}}\min\left\{\frac{1-\delta_{\bar{i}}}{1-\delta_{i}},1\right\}.$ (9) The derivation of the outer-bounds has two main ingredients. First, as detailed in the upcoming Remark 1, even though the channel is not degraded, the stronger receiver can decode both messages regardless of the values of the cache parameters $\epsilon_{1}$ and $\epsilon_{2}$. Second, as detailed in the upcoming Lemma 2, we derive an extremal entropy inequality between the two receivers that captures the availability of receiver-end side-information, including the channel state and cache index information. 
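The region in (8)-(9) is a pair of linear constraints, so membership is easy to check exactly. The sketch below (an illustrative check, with example parameters chosen by us) uses exact rational arithmetic; note that with $\epsilon_1=\epsilon_2=1$ (no caches) and $\delta_1=\delta_2=1/2$ it collapses to the familiar erasure-BC sum-rate constraint $R_1+R_2\leq 1/2$.

```python
from fractions import Fraction as F

def beta_no(i, delta, eps):
    """beta_i^no of eq. (9); delta, eps are dicts keyed by user index 1, 2."""
    j = 3 - i  # the other user, \bar{i}
    return eps[j] * min((1 - delta[j]) / (1 - delta[i]), F(1))

def in_C_nonblind_NN(R1, R2, delta, eps):
    """Exact membership test for the region of eq. (8)."""
    return (0 <= beta_no(1, delta, eps) * R1 + R2 <= 1 - delta[2]
            and 0 <= R1 + beta_no(2, delta, eps) * R2 <= 1 - delta[1])

delta = {1: F(1, 2), 2: F(1, 2)}
eps = {1: F(1), 2: F(1)}  # no side-information at either receiver
print(in_C_nonblind_NN(F(1, 4), F(1, 4), delta, eps),
      in_C_nonblind_NN(F(1, 3), F(1, 4), delta, eps))  # True False
```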
The outer-bound region holds for the non-blind setting and thus includes the capacity region with a blind transmitter as well. The following two remarks provide further insights.

###### Remark 1 (Simplified expressions and degradedness).

Without loss of generality, assume $\delta_{2}\geq\delta_{1}$, meaning that receiver $1$ has a stronger channel. Then, the region of Theorem 1 can be written as $\left\{\begin{array}[]{ll}0\leq\epsilon_{2}\frac{1-\delta_{2}}{1-\delta_{1}}R_{1}+R_{2}\leq\left(1-\delta_{2}\right),&\\ 0\leq R_{1}+\epsilon_{1}R_{2}\leq\left(1-\delta_{1}\right).\end{array}\right.$ (10) Unlike the scenario with no side-information at the receivers, this assumption does not mean the channel is degraded. However, the stronger receiver, ${\sf Rx}_{1}$ in this case, will be able to decode both $W_{1}$ and $W_{2}$ by the end of the communication block regardless of the values of $\epsilon_{1}$ and $\epsilon_{2}$. The reason is as follows. After decoding $W_{1}$, receiver ${\sf Rx}_{1}$ has access to the side-information of receiver ${\sf Rx}_{2}$, _i.e._ $W_{1|2}$, and can emulate the channel of ${\sf Rx}_{2}$ as it has a stronger channel ($\delta_{2}\geq\delta_{1}$). Finally, we note that although the stronger receiver is able to decode both messages, this does not imply that the stronger receiver will have a higher rate. As an example, suppose $\delta_{1}=1/3,\delta_{2}=1/2,\epsilon_{1}=2/3,$ and $\epsilon_{2}=1/6$. Then, from Theorem 1, the maximum sum-rate point is: $\displaystyle\left(R_{1},R_{2}\right)=\left(\frac{4}{11},\frac{5}{11}\right).$ (11)

###### Remark 2 (Importance of side-information at the weaker receiver).

Under the same assumption as in the previous remark, $\delta_{2}\geq\delta_{1}$, the outer-bounds of Theorem 1 imply that if the weaker receiver has no side-information, _i.e._ $\epsilon_{2}=1$, then the capacity region is the same as with no side-information at either receiver. 
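The corner point in (11) is the intersection of the two boundary lines of (10), and can be recovered exactly by solving that $2\times 2$ system; the short check below uses the parameters of the example in Remark 1.

```python
from fractions import Fraction as F

# Parameters of the example in Remark 1.
d1, d2, e1, e2 = F(1, 3), F(1, 2), F(2, 3), F(1, 6)

# Intersect the two boundary lines of (10):
#   a*R1 + R2     = 1 - d2,  with  a = e2*(1 - d2)/(1 - d1)
#   R1   + e1*R2  = 1 - d1
a = e2 * (1 - d2) / (1 - d1)                   # = 1/8 here
R2 = ((1 - d2) - a * (1 - d1)) / (1 - a * e1)
R1 = (1 - d1) - e1 * R2
print(R1, R2)  # 4/11 5/11, the maximum sum-rate point of eq. (11)
```

Note that $R_2 > R_1$ here even though ${\sf Rx}_1$ has the stronger channel, which is exactly the point of Remark 1.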
In other words, as long as the weaker receiver has no side-information, additional information at the stronger receiver does not enlarge the region. On the other hand, any side-information at the weaker receiver results in an outer-bound region that is strictly larger than the capacity region with no side-information at either receiver. Based on these remarks, we can provide more details on the achievability protocol. Under the no-CSIT assumption, the stronger receiver will eventually be able to decode both messages. Thus, the first step is to deliver the message intended for the weaker receiver. The stronger receiver will be able to decode this message faster than the intended receiver, and thus, in the second step, we include the part of the message for the stronger user that is available at the weaker receiver. In other words, this second step is beneficial to both receivers. We note that to accomplish this task, the transmitter at least needs to know the side-information of the weaker receiver; this latter fact is further explained in the corollary below. During the final step, the remaining part of the message intended for the stronger receiver is delivered. As will be detailed in Section V, to achieve the outer-bounds, the transmitter indeed only needs to know the side-information available to the weaker receiver. Thus, we have

###### Corollary 1.

The capacity region with a semi-blind transmitter, $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{NN}}$, equals $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ in Theorem 1 if the cache index information (2) of the weaker receiver (the link with the larger erasure probability) is known at the transmitter. The following lemma from [10] establishes the outer-bounds on the capacity region of the two-user broadcast erasure channel with a non-blind transmitter and delayed CSIT, $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DD}}$. 
We provide several new achievability strategies that attain these bounds when the transmitter has less knowledge than these bounds assume, and establish interesting capacity results.

###### Lemma 1 ([10]).

For the two-user broadcast erasure channel with a non-blind transmitter and delayed CSIT as described in Section II, we have the outer-bound region $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DD}}\subseteq\left\{\begin{array}[]{ll}\vspace{1mm}0\leq R_{i}\leq(1-\delta_{i}),&i=1,2,\\ \beta^{\mathrm{delayed}}_{i}R_{i}+R_{\bar{i}}\leq\left(1-\delta_{\bar{i}}\right),&i=1,2,\end{array}\right.$ (12) where $\displaystyle\beta^{\mathrm{delayed}}_{i}=\epsilon_{\bar{i}}\frac{1-\delta_{\bar{i}}}{1-\delta_{1}\delta_{2}}.$ (13) Now, we show that this outer-bound region is achievable under the following scenarios.

###### Theorem 2.

For the two-user broadcast erasure channel, the capacity region is achieved in the following cases: Case A: with a blind transmitter and global delayed CSIT, the capacity region $\mathcal{C}^{\mathrm{blind}}_{\mathrm{DD}}$ equals (12) when the channel is symmetric (_i.e._ $\delta_{1}=\delta_{2}=\delta$ and $\epsilon_{1}=\epsilon_{2}=\epsilon$); Case B: with the transmitter knowing the full side-information of one receiver and only the delayed CSI of the other receiver (e.g., the semi-blind-transmitter case with $\epsilon_{1}=0$ for $\mathsf{Rx}_{1}$), the capacity region $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DN}}$ (and thus $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DD}}$) equals (12); Case C: with a non-blind transmitter having access to only the delayed CSI of one receiver, the capacity region $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DN}}$ equals (12). Note that having full side-information at a receiver (as in Case B above) immediately implies the transmitter is not blind with respect to that receiver. 
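As with the no-CSIT region, the delayed-CSIT region (12)-(13) is a small set of linear constraints and can be checked exactly. The sketch below (an illustrative check with the symmetric example parameters $\delta=\epsilon=1/2$ used later in Figure 2) computes $\beta_i^{\mathrm{delayed}}=1/3$, so the symmetric corner of (12) is $R_1=R_2=3/8$:

```python
from fractions import Fraction as F

def beta_delayed(i, delta, eps):
    """beta_i^delayed of eq. (13)."""
    j = 3 - i
    return eps[j] * (1 - delta[j]) / (1 - delta[1] * delta[2])

def in_region_DD(R1, R2, delta, eps):
    """Exact membership test for the delayed-CSIT outer-bound region (12)."""
    return (0 <= R1 <= 1 - delta[1] and 0 <= R2 <= 1 - delta[2]
            and beta_delayed(1, delta, eps) * R1 + R2 <= 1 - delta[2]
            and beta_delayed(2, delta, eps) * R2 + R1 <= 1 - delta[1])

delta = {1: F(1, 2), 2: F(1, 2)}
eps = {1: F(1, 2), 2: F(1, 2)}
print(in_region_DD(F(3, 8), F(3, 8), delta, eps),
      in_region_DD(F(2, 5), F(2, 5), delta, eps))  # True False
```

By Theorem 2 Case A, in this symmetric setting the corner $(3/8,3/8)$ is achievable even with a blind transmitter.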
For the blind-transmitter case, if $\epsilon_{i}=0$, then the transmitter knows that $\mathsf{Rx}_{i}$ has full side-information, and thus $\mathcal{C}^{\mathrm{blind}}_{\mathrm{DN}}$ is also partially known from Case B. Without the global channel state and/or cache index information from both receivers, our new achievability results of Theorem 2 differ significantly from those in [10]. In particular, in [10], the overheard bits and cached bits are both known at the transmitter and are used to create network coding opportunities that simultaneously benefit both receivers. In our achievability, network coding opportunities can only be created opportunistically. Rather interestingly, in the three cases identified by our theorem, transmitter blindness or one-sided feedback may not result in any capacity loss compared with [10]. To achieve the capacity $\mathcal{C}^{\mathrm{blind}}_{\mathrm{DD}}$ in Case A, we present an opportunistic protocol where the transmitter first sends out linear combinations of the packets for both receivers. Then, using the feedback signals and the fact that some of the packets for one receiver are available at the other, the transmitter sends bits intended for one receiver in such a way as to help one receiver remove interference and the other to obtain new information about its bits. Depending on the channel parameters, this may be followed by a multicast phase. This idea can be interpreted as an opportunistic reverse network coding for erasure channels. Cases B and C both focus on the cached capacity with only single-user delayed CSI. Interestingly, our four-phase opportunistic network coding for Case C will also create blind side-information at the “N” user for the recycled bits. Indeed, the last two phases in Case C are a modified form of the achievability for Case B. Compared with [12], the new ingredient in Case C is the non-blind cache, and we generalize [12] by carefully mixing the fresh cached bits with the recycled un-cached bits. 
Specifically, the recycling in [12] is done by mixing two pre-encoded bit sequences. In contrast to [12], where the input sequence of each pre-encoder is always recycled, in our Case C it may contain fresh bits, as detailed in Section VII. The following two theorems focus on the no-CSIT, blind-transmitter assumption. The first identifies conditions under which the outer-bound region of Theorem 1 can be achieved even when the transmitter does not know what side-information is available to each receiver. The second presents an achievable region when the stronger receiver has full side-information, although this achievable region does not match the outer-bounds.

###### Theorem 3.

For the two-user broadcast erasure channel with no channel feedback, a blind transmitter, and available receiver side-information as described in Section II, the capacity region $\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$ equals $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ in Theorem 1 when:

1. $\delta_{2}\geq\delta_{1}$ and $\epsilon_{2}\in\{0,1\}$;
2. the setting is symmetric, _i.e._ $\delta_{1}=\delta_{2}=\delta$ and $\epsilon_{1}=\epsilon_{2}=\epsilon$.

###### Theorem 4.

For the two-user broadcast erasure channel with no channel feedback and a blind transmitter, as described in Section II, when $\delta_{2}\geq\delta_{1}$ and $\epsilon_{1}=0$, the following region is achievable: $\mathcal{R}^{\mathrm{in}}\equiv\left\{\begin{array}[]{ll}0\leq\left(\epsilon_{2}+\delta_{1}(1-\epsilon_{2})\right)\frac{1-\delta_{2}}{1-\delta_{1}}R_{1}+R_{2}\leq\left(1-\delta_{2}\right),&\\ 0\leq R_{1}\leq\left(1-\delta_{1}\right).&\end{array}\right.$ (14) We note that for $\delta_{1}=0$ (no erasures at the stronger receiver), the inner-bound region of Theorem 4 matches the outer-bounds of Theorem 1, _i.e._ $\mathcal{R}^{\mathrm{in}}\equiv\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$. 
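The matching at $\delta_1=0$ can be seen by comparing the $R_1$-coefficients of (14) and (10): their difference is $\delta_1(1-\epsilon_2)(1-\delta_2)/(1-\delta_1)\geq 0$, which vanishes exactly when $\delta_1=0$. The short check below (an illustrative verification over a few example parameter values) confirms this:

```python
from fractions import Fraction as F

def inner_coeff(d1, d2, e2):
    """R1-coefficient of the inner bound (14)."""
    return (e2 + d1 * (1 - e2)) * (1 - d2) / (1 - d1)

def outer_coeff(d1, d2, e2):
    """beta_1^no of (9) for delta_2 >= delta_1 (the min is inactive)."""
    return e2 * (1 - d2) / (1 - d1)

# The gap is d1*(1-e2)*(1-d2)/(1-d1) >= 0, vanishing iff d1 = 0,
# so the inner and outer bounds coincide exactly when delta_1 = 0.
for d1 in (F(0), F(1, 4), F(1, 2)):
    for e2 in (F(1, 3), F(2, 3)):
        d2 = F(3, 4)  # example value with d2 >= d1
        gap = inner_coeff(d1, d2, e2) - outer_coeff(d1, d2, e2)
        assert gap == d1 * (1 - e2) * (1 - d2) / (1 - d1)
        assert (gap == 0) == (d1 == 0)
print("inner bound meets the outer bound exactly when delta_1 = 0")
```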
Note that the blind-transmitter case under no CSIT was also considered in [9], but there only the weaker receiver has side-information. In that setting, outer and inner bounds were presented that match only when the erasure probabilities are equal to zero, which is no longer an erasure setting. In contrast, in this work we have the capacity region $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ in Theorem 1 for the non-blind-transmitter case, which recovers the outer-bounds of [9] as a special case. The capacity region $\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$ of blind index coding over the no-CSIT broadcast erasure channel remains open in general.

### III-B Illustration of the results

In this subsection, we briefly illustrate the results through a few examples to further clarify and discuss some of the insights and intuitions provided above. We start with Theorem 2, where the transmitter has access to (some) delayed CSI. Figure 2 illustrates the capacity region $\mathcal{C}^{\mathrm{blind}}_{\mathrm{DD}}=\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DN}}$ from Theorem 2 when $\delta_{1}=\delta_{2}=\delta=0.5$ and $\epsilon_{1}=\epsilon_{2}=\epsilon\in\{0,0.5,1\}$. In particular, $\epsilon=1$ is the case in which no side-information is available, and our results recover [24]. The other extreme is $\epsilon=0$, where the entire message of one user is available to the other and the maximum individual point-to-point rates can be achieved. Finally, $\epsilon=0.5$ is an intermediate case, and the capacity region is strictly larger than that with no side-information. Figure 2: Capacity region $\mathcal{C}^{\mathrm{blind}}_{\mathrm{DD}}=\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DN}}$ for $\delta_{1}=\delta_{2}=\delta=0.5$ and $\epsilon_{1}=\epsilon_{2}=\epsilon\in\{0,0.5,1\}$. We then consider the no-CSIT scenario. For the first example in this case, we consider $\delta_{1}=\frac{1}{2}$ and $\delta_{2}=\frac{3}{4}$. 
The capacity region of the broadcast erasure channel with these parameters and no side-information at either receiver is described by all non-negative rates satisfying: $\displaystyle\frac{1}{2}R_{1}+R_{2}\leq\frac{1}{4}.$ (15) Note that with no side-information, the channel is degraded. Further, as discussed earlier, as long as the weaker receiver (${\sf Rx}_{2}$ in this case) has no side-information, _i.e._ $\epsilon_{2}=1$, the capacity region remains identical to the one described in (15) with no side-information at either receiver. This region is included in Figure 3(a) and (b) as a benchmark. Note that significant caching or index coding gains are obtained for all settings presented in Figure 3(a) and (b). Figure 3: (a) Illustration of the capacity region $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ when ${\sf Rx}_{i}$ has full side-information ($\epsilon_{i}=0$); (b) increase in the achievable rates as $\epsilon_{1}$ goes from $1$ to $0$ for $\epsilon_{2}=0.5$. Next, we examine the region $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ when one receiver has full side-information. Figure 3(a) includes both of these cases and depicts how the capacity region enlarges as more side-information is available to the receivers. We note that with full side-information at both receivers ($\epsilon_{1}=\epsilon_{2}=0$), the maximum individual rates given by $R_{i}=(1-\delta_{i})$, $i=1,2$, are achievable simultaneously. Figure 3(b) depicts the gradual increase in achievable rates when $\epsilon_{2}=\frac{1}{2}$ and $\epsilon_{1}$ goes from $1$ (no side-information) to $0$ (full side-information). Note that receiver ${\sf Rx}_{1}$ has a stronger channel, so the illustrated region also equals the capacity region $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{NN}}$ when the transmitter has no cache index information of ${\sf Rx}_{1}$. Figure 4: (a) Although the stronger receiver will always be able to decode both messages, the weaker receiver may have a higher rate. 
In this example for $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$, we have $\delta_{1}=\frac{1}{3},\delta_{2}=\frac{1}{2},\epsilon_{1}=\frac{2}{3},$ and $\epsilon_{2}=\frac{1}{6}$; (b) Capacity region with symmetric parameters $\delta_{1}=\delta_{2}=\delta$ and $\epsilon_{1}=\epsilon_{2}=\epsilon$. The maximum sum-rate is given by $\frac{2(1-\delta)}{(1+\epsilon)}$. The capacity region with symmetric parameters can be achieved under the blind-transmitter scenario as well; see Theorem 3. As the second example, we consider $\delta_{1}=\frac{1}{3}$ and $\delta_{2}=\frac{1}{2}$. As in the previous example for the capacity region $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$, receiver ${\sf Rx}_{1}$ has a stronger channel. However, as illustrated in Figure 4(a), with $\epsilon_{1}=\frac{2}{3}$ and $\epsilon_{2}=\frac{1}{6}$, the weaker receiver has a higher rate, as given in (11). Finally, Figure 4(b) depicts the capacity region $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}=\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$ with symmetric channel parameters ($\delta_{1}=\delta_{2}=\delta$ and $\epsilon_{1}=\epsilon_{2}=\epsilon$): with no side-information, the maximum achievable sum-rate is $(1-\delta)$; and with side-information (even blind), the maximum sum-rate is given by: $\displaystyle\frac{2(1-\delta)}{(1+\epsilon)}.$ (16) Figure 5: Comparing the inner-bounds of Theorem 4 for $\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$ to the outer-bounds of Theorem 1 for $\delta_{2}\geq\delta_{1}$ and $\epsilon_{1}=0$. Black dashed lines define the outer-bound region, while the solid green lines are the inner-bounds. The shaded area is the gap between the two. Finally, Figure 5 illustrates the outer-bounds of Theorem 1 and the inner-bounds of Theorem 4 for $\delta_{2}\geq\delta_{1}$ and $\epsilon_{1}=0$. In other words, in this case, the stronger receiver has full side-information. 
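The symmetric sum-rate in (16) follows directly from the symmetric version of Theorem 1: both constraints read $\epsilon R_i + R_j \leq 1-\delta$, and adding them gives $(1+\epsilon)(R_1+R_2)\leq 2(1-\delta)$, with equality at $R_1=R_2=(1-\delta)/(1+\epsilon)$. A one-line check with example parameters:

```python
from fractions import Fraction as F

def sym_max_sum_rate(delta, eps):
    """Max sum-rate of the symmetric region of Theorem 1, eq. (16):
    adding the two constraints eps*R_i + R_j <= 1 - delta gives
    (1 + eps)(R1 + R2) <= 2(1 - delta), tight at R1 = R2."""
    return 2 * (1 - delta) / (1 + eps)

print(sym_max_sum_rate(F(1, 2), F(1, 2)))  # 2/3
print(sym_max_sum_rate(F(1, 2), F(1)))     # 1/2 = 1 - delta (no side-info)
```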
We note that the inner-bounds of Theorem 4 for the blind-transmitter case match the outer-bounds when $\delta_{1}=0$.

### III-C Organization of the Proofs

In the following sections, we provide the proofs of our main contributions. We prove the capacity region of Theorem 1 in Sections IV and V. We then move to the achievability parts of Theorem 2, as they are capacity-achieving and include several interesting new ingredients, in Sections VI and VII. The proofs of the other results are deferred to the Appendix.

## IV Converse Proof of $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ in Theorem 1

In this section, we derive the outer-bounds of Theorem 1. We note that as the capacity region of the non-blind setting includes that of the blind setting, the derivation in this section is for the non-blind-transmitter case, and the bounds apply to the blind-transmitter case as well. The point-to-point outer-bounds, _i.e._ $R_{i}\leq(1-\delta_{i})$, are those of erasure channels and are thus omitted. Without loss of generality, we assume $\delta_{2}\geq\delta_{1}$, meaning that receiver $1$ has a stronger channel. As discussed in Remark 1, unlike the scenario with no side-information at the receivers, this assumption does not mean the channel is degraded. In what follows, we derive the following outer-bounds: $\displaystyle\mathbf{B1}:\frac{\epsilon_{2}(1-\delta_{2})}{(1-\delta_{1})}R_{1}+R_{2}\leq\left(1-\delta_{2}\right),$ $\displaystyle\mathbf{B2}:R_{1}+\epsilon_{1}R_{2}\leq\left(1-\delta_{1}\right).$ (17) Suppose the rate pair $\left(R_{1},R_{2}\right)$ is achievable. We first derive $\mathbf{B2}$ to get some insights. Derivation of $\mathbf{B2}$: As discussed in Remark 1, the stronger receiver, ${\sf Rx}_{1}$ in this case, is able to decode both messages by the end of the communication block using its available side-information. 
Thus, we have $\displaystyle H(W_{1}|G^{n})+H(\bar{W}_{2|1}|G^{n})\leq I(W_{1},\bar{W}_{2|1};Y_{1}^{n}|W_{2|1},G^{n})+n\upxi_{n}\leq H(Y_{1}^{n}|W_{2|1},G^{n})+n\upxi_{n}\leq n(1-\delta_{1})+n\upxi_{n}.$ (18) We also note that $\displaystyle H(\bar{W}_{2|1}|G^{n})=\sum^{m_{2}}_{\ell=1}H\big{(}(1-E_{1}[\ell])W_{2}[\ell]\big{|}E_{1}[\ell]\big{)}=\epsilon_{1}H(W_{2})=n\epsilon_{1}R_{2}.$ (19) Thus, from (18) and (19), we get $\displaystyle n(R_{1}+\epsilon_{1}R_{2})\leq n(1-\delta_{1})+n\upxi_{n}.$ (20) Dividing both sides by $n$ and letting $n\rightarrow\infty$, we get the second outer-bound in (17). Derivation of $\mathbf{B1}$: We enhance receiver $\mathsf{Rx}_{1}$ by providing the entire $W_{2}$ to it, as opposed to the already available $W_{2|1}$, and we note that this cannot reduce the rates. Moreover, motivated by the derivation of $\mathbf{B2}$, this enhancement should only yield a limited rate increase. From the decentralized placement model (2), we define the global channel state and cache index information as $G^{n}:=\{S^{n}_{1},S^{n}_{2},E^{nR_{2}}_{1},E^{nR_{1}}_{2}\}:=\{S^{n},E^{n}\}$ (21) then, using $\displaystyle\beta_{1}^{\mathrm{no}}=\frac{\epsilon_{2}(1-\delta_{2})}{(1-\delta_{1})},$ (22) we have $\displaystyle n\left(\beta_{1}^{\mathrm{no}}R_{1}+R_{2}\right)=\beta_{1}^{\mathrm{no}}H(W_{1})+H(W_{2})$ $\displaystyle\overset{(a)}{=}\beta_{1}^{\mathrm{no}}H(W_{1}|W_{2},G^{n})+H(W_{2}|W_{1|2},G^{n})$ $\displaystyle\overset{(\mathrm{Fano})}{\leq}\beta_{1}^{\mathrm{no}}I(W_{1};Y_{1}^{n}|W_{2},G^{n})+I(W_{2};Y_{2}^{n}|W_{1|2},G^{n})+n\upxi_{n}$ $\displaystyle=\beta_{1}^{\mathrm{no}}H(Y_{1}^{n}|W_{2},G^{n})-\beta_{1}^{\mathrm{no}}\underbrace{H(Y_{1}^{n}|W_{1},W_{2},G^{n})}_{=~{}0}+H(Y_{2}^{n}|W_{1|2},G^{n})-H(Y_{2}^{n}|W_{1|2},W_{2},G^{n})+n\upxi_{n}$ $\displaystyle\overset{(b)}{\leq}H(Y_{2}^{n}|W_{1|2},G^{n})+2n\upxi_{n}$ $\displaystyle\overset{(c)}{\leq}n\left(1-\delta_{2}\right)+2n\upxi_{n},$ (23) 
where $\upxi_{n}\rightarrow 0$ as $n\rightarrow\infty$; $(a)$ follows from the independence of the messages and captures the enhancement of receiver $\mathsf{Rx}_{1}$; $(b)$ follows from Lemma 2 below; $(c)$ is true since the entropy of a binary random variable is at most one and the channel to the second receiver is on for only a fraction $\left(1-\delta_{2}\right)$ of the communication time. Dividing both sides by $n$ and letting $n\rightarrow\infty$, we get the first outer-bound in (17).

###### Lemma 2.

For the two-user broadcast erasure channel with no channel feedback and with receiver side-information as described in Section II (whether the transmitter is blind or not), with $\beta_{1}^{\mathrm{no}}$ given in (9), we have $\displaystyle H\left(Y_{2}^{n}|W_{1|2},W_{2},G^{n}\right)+n\upxi_{n}\geq\beta_{1}^{\mathrm{no}}H\left(Y_{1}^{n}|W_{2},G^{n}\right),$ (24) where $\upxi_{n}\rightarrow 0$ as $n\rightarrow\infty$, and $G^{n}$ is the global channel state and cache index information in (21).

###### Proof.

We first have the following fact: $\displaystyle H\left(Y_{2}^{n}|W_{1|2},W_{2},G^{n}\right)\geq\frac{1-\delta_{2}}{1-\delta_{1}}H\left(Y_{1}^{n}|W_{1|2},W_{2},G^{n}\right),$ (25) which is modified from our previous result [25]. For completeness, we present the details in Appendix A. Now, to prove this lemma using (25), we note that $0=H\left(Y_{1}^{n}|W_{1},W_{2},G^{n}\right)=H\left(Y_{1}^{n}|W_{1|2},\bar{W}_{1|2},W_{2},G^{n}\right),$ (26) where $\bar{W}_{1|2}$ is the complement of $W_{1|2}$ in $W_{1}$, and then $\displaystyle H\left(Y_{1}^{n}|W_{1|2},W_{2},G^{n}\right)=I(\bar{W}_{1|2};Y_{1}^{n}|W_{1|2},W_{2},G^{n})=H(\bar{W}_{1|2}|W_{1|2},W_{2},G^{n})-H(\bar{W}_{1|2}|Y_{1}^{n},W_{1|2},W_{2},G^{n}).$ (27) Since $H\left(W_{1}|Y_{1}^{n},W_{2},G^{n}\right)\leq n\upxi_{n},$ the second term on the RHS of (27) is also less than $n\upxi_{n}$, due to $W_{1}=(W_{1|2},\bar{W}_{1|2})$ and the chain rule. 
For the first term on the RHS, as in (19), it equals $H(\bar{W}_{1|2}|E_{2}^{n})=\epsilon_{2}H(W_{1}).$ Then we get $\displaystyle H\left(Y_{1}^{n}|W_{1|2},W_{2},G^{n}\right)\geq\epsilon_{2}H\left(W_{1}\right)-n\upxi_{n}\overset{(a)}{\geq}\epsilon_{2}H\left(Y_{1}^{n}|W_{2},G^{n}\right)-n\upxi_{n},$ (28) where $(a)$ is obtained from (26) as $\displaystyle H\left(Y_{1}^{n}|W_{1},W_{2},G^{n}\right)=0\Rightarrow H\left(W_{1}\right)\geq H\left(Y_{1}^{n}|W_{2},G^{n}\right).$ Finally, from (25) and (28), we obtain $\displaystyle H\left(Y_{2}^{n}|W_{1|2},W_{2},G^{n}\right)\overset{(25)}{\geq}\frac{1-\delta_{2}}{1-\delta_{1}}H\left(Y_{1}^{n}|W_{1|2},W_{2},G^{n}\right)$ $\displaystyle\overset{(28)}{\geq}\frac{\epsilon_{2}\left(1-\delta_{2}\right)}{\left(1-\delta_{1}\right)}H\left(Y_{1}^{n}|W_{2},G^{n}\right)-n\upxi_{n}$ $\displaystyle\overset{(22)}{=}\beta_{1}^{\mathrm{no}}H\left(Y_{1}^{n}|W_{2},G^{n}\right)-n\upxi_{n}.$ (29) This completes the proof of Lemma 2. ∎ ## V Achievability Proof of $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ in Theorem 1 In this section, we provide an achievability protocol for the non-blind transmitter case, and we show that the achievable region matches the outer-bounds of Theorem 1. Thus, we characterize the capacity region of the problem when the transmitter is aware of the available side-information at the receivers. For the proof, without loss of generality, we assume $\delta_{2}\geq\delta_{1}$, meaning that ${\sf Rx}_{1}$ has a stronger channel than ${\sf Rx}_{2}$. Our achievability proof reveals a surprising result: only the cache index information $E^{nR_{1}}_{2}$ of the weaker receiver ${\sf Rx}_{2}$ in (2) is needed at the transmitter. Thus the transmitter can be semi-blind, as in Corollary 1, and still achieve the outer-bounds. As a warm-up, we first focus on an example where $\epsilon_{2}=0$.
In this case, receiver ${\sf Rx}_{2}$ (the weaker receiver) has full side-information of the message for ${\sf Rx}_{1}$, _i.e._ $W_{1|2}=W_{1}$, and receiver ${\sf Rx}_{1}$ has access to $(1-\epsilon_{1})$ of the bits intended for ${\sf Rx}_{2}$. The outer-bounds of Theorem 1 in this case become: $\left\\{\begin{array}[]{ll}0\leq R_{2}\leq\left(1-\delta_{2}\right),&\\\ 0\leq R_{1}+\epsilon_{1}R_{2}\leq\left(1-\delta_{1}\right).\end{array}\right.$ (30) Thus, the non-trivial corner point is given by: $\displaystyle R_{1}=(1-\delta_{1})-\epsilon_{1}(1-\delta_{2}),$ $\displaystyle R_{2}=(1-\delta_{2}).$ (31) In this case, we set $\displaystyle\eta=\frac{R_{1}}{R_{2}}=\frac{(1-\delta_{1})}{(1-\delta_{2})}-\epsilon_{1}>0.$ (32) Achievability protocol for $\epsilon_{2}=0$: Recalling $\eta=m_{1}/m_{2}$, we start with $m$ bits for ${\sf Rx}_{2}$ and $\eta m$ bits for ${\sf Rx}_{1}$. The achievability protocol is carried out in a single phase with two segments (another phase will be added later for general $\epsilon_{2}$). The total communication time is set to $\frac{m}{(1-\delta_{2})}.$ Segment a: This segment has a total length of $\displaystyle t_{a}=\frac{\epsilon_{1}m}{(1-\delta_{1})}<\frac{m}{(1-\delta_{2})},$ (33) where the last inequality follows from $R_{1}>0$ in (31). During this segment, the transmitter sends $t_{a}$ random combinations of the $m$ bits intended for ${\sf Rx}_{2}$. As a result, the stronger receiver ${\sf Rx}_{1}$ obtains $\displaystyle(1-\delta_{1})t_{a}=\epsilon_{1}m$ (34) random equations of the $m$ bits for ${\sf Rx}_{2}$, and in combination with the available side-information $W_{2|1}$ with $\left|W_{2|1}\right|=(1-\epsilon_{1})m$, ${\sf Rx}_{1}$ has sufficient linearly independent equations to decode $W_{2}$ when the code length is large enough.
Segment b: This segment has a total length of $\displaystyle t_{b}=\frac{(1-\delta_{1})-\epsilon_{1}(1-\delta_{2})}{(1-\delta_{1})(1-\delta_{2})}m=\frac{m}{(1-\delta_{2})}-t_{a}>0.$ (35) During this segment, the transmitter creates $t_{b}$ random linear combinations of the $\eta m$ bits in $W_{1}$, and creates the XOR of these combinations with $t_{b}$ random combinations of the $m$-bit $W_{2}$ for ${\sf Rx}_{2}$. The transmitter sends the resulting XORed sequence during Segment b. The decodability comes as follows. In Segment b, ${\sf Rx}_{1}$ can remove the interference since $W_{2}$ is known from Segment a, and gets $t_{b}(1-\delta_{1})=\eta m$ linearly independent equations for decoding $W_{1}$ correctly. Also in this segment, ${\sf Rx}_{2}$ can remove the interference from $W_{1}$ using the side-information $W_{1|2}=W_{1}$, so the total number of linearly independent equations it has will be $(t_{b}+t_{a})(1-\delta_{2})=m$. Thus, by the end of the communication block, ${\sf Rx}_{2}$ can decode $W_{2}$, and ${\sf Rx}_{1}$ can decode both $W_{1}$ and $W_{2}$. Achievable rates: Since the total communication time is $\displaystyle\frac{m}{(1-\delta_{2})},$ (36) we immediately conclude the achievability of the rates in (31). Note that in the aforementioned toy example, only the cache index information $E^{nR_{1}}_{2}$ for $W_{1|2}=W_{1}$ is used at the transmitter in Segment b, but not $E^{nR_{2}}_{1}$ (for $W_{2|1}$). Now we present the proof for the general case $\epsilon_{2}\neq 0$ and show that the achievability again needs only a “semi-blind” transmitter.
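As a sanity check on the toy protocol above, the segment lengths (33) and (35) and the per-receiver equation counts can be verified with exact rational arithmetic. The sketch below is illustrative only; the numeric parameter values are hypothetical.

```python
from fractions import Fraction as F

def toy_scheme(delta1, delta2, eps1, m):
    """Expected-value bookkeeping for the epsilon_2 = 0 single-phase protocol.

    Returns the achieved rate pair (R1, R2); m is the number of bits for Rx2
    and eta*m the number of bits for Rx1.
    """
    eta = (1 - delta1) / (1 - delta2) - eps1      # (32)
    t_a = eps1 * m / (1 - delta1)                 # (33): Segment a length
    t_total = m / (1 - delta2)                    # total communication time
    t_b = t_total - t_a                           # (35): Segment b length
    assert 0 < t_a < t_total and t_b > 0

    # Segment a: Rx1 hears (1 - delta1) * t_a fresh equations of W2, which
    # together with its (1 - eps1) * m cached bits suffices to decode W2.
    assert (1 - delta1) * t_a + (1 - eps1) * m == m
    # Segment b: Rx1 removes W2 and collects equations of its own eta*m bits.
    assert (1 - delta1) * t_b == eta * m
    # Rx2 removes W1 (full side-information) over both segments.
    assert (1 - delta2) * (t_a + t_b) == m

    return eta * m / t_total, m / t_total

R1, R2 = toy_scheme(F(1, 4), F(1, 2), F(1, 3), m=24)
assert R1 == (1 - F(1, 4)) - F(1, 3) * (1 - F(1, 2))   # matches (31)
assert R2 == 1 - F(1, 2)
```

Because all quantities are `Fraction`s, the equalities are exact rather than floating-point approximations.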
From (10), the non-trivial corner point of the region defined in (8) is given by: $\displaystyle R_{1}=\frac{(1-\delta_{1})-\epsilon_{1}(1-\delta_{2})}{1-\frac{\epsilon_{1}\epsilon_{2}(1-\delta_{2})}{(1-\delta_{1})}},$ $\displaystyle R_{2}=\frac{(1-\epsilon_{2})(1-\delta_{2})}{1-\frac{\epsilon_{1}\epsilon_{2}(1-\delta_{2})}{(1-\delta_{1})}}.$ (37) We define $\displaystyle\eta\overset{\triangle}{=}\frac{R_{1}}{R_{2}}=\frac{(1-\delta_{1})-\epsilon_{1}(1-\delta_{2})}{(1-\epsilon_{2})(1-\delta_{2})}.$ (38) We note that if $\displaystyle(1-\epsilon_{2}+\epsilon_{1})>\frac{(1-\delta_{1})}{(1-\delta_{2})},$ (39) then $\eta<1$. Achievability protocol for general $\epsilon_{2}$: We start with $m$ bits for ${\sf Rx}_{2}$ and $\eta m$ bits for ${\sf Rx}_{1}$. The achievability protocol is carried over two phases, with the first phase having two segments as in the case $\epsilon_{2}=0$. As revealed by the decodability argument in the toy example, the idea is that after the first phase, receiver ${\sf Rx}_{2}$ (the weaker receiver) can decode its message $W_{2}$. Since the first receiver has a stronger channel, it can recover the interference $W_{2}$ in a shorter time horizon, namely after the first segment of Phase I. Then, during the second segment of Phase I, the transmitter starts delivering the cached $W_{1|2}$ to ${\sf Rx}_{1}$, while it continues delivering $W_{2}$ to ${\sf Rx}_{2}$. Note that since ${\sf Rx}_{2}$ knows $W_{1|2}$ and ${\sf Rx}_{1}$ has recovered $W_{2}$ in the first segment, the second segment benefits both receivers. Finally, during the newly-added second phase, $\bar{W}_{1|2}$ (the non-cached part of $W_{1}$ outside $W_{1|2}$) is delivered to ${\sf Rx}_{1}$. The whole process is summarized in Figure 6. Figure 6: Semi-blind two-phase protocol to achieve the outer-bounds $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{NN}}$ in Theorem 1.
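Before describing the phases, one can confirm with exact arithmetic that the corner point (37) sits on both outer-bounds of Theorem 1 with equality and is consistent with $\eta$ in (38). The parameter values below are hypothetical, chosen only for illustration.

```python
from fractions import Fraction as F

def corner_point(d1, d2, e1, e2):
    """Corner point (37) of the outer-bound region, in exact rationals."""
    D = 1 - e1 * e2 * (1 - d2) / (1 - d1)
    R1 = ((1 - d1) - e1 * (1 - d2)) / D
    R2 = (1 - e2) * (1 - d2) / D
    return R1, R2

# hypothetical parameters with delta2 >= delta1
d1, d2, e1, e2 = F(1, 4), F(1, 2), F(1, 3), F(1, 5)
R1, R2 = corner_point(d1, d2, e1, e2)
beta1_no = e2 * (1 - d2) / (1 - d1)                     # beta in (22)

assert beta1_no * R1 + R2 == 1 - d2                     # outer-bound B1, tight
assert R1 + e1 * R2 == 1 - d1                           # outer-bound B2, tight
assert R1 / R2 == ((1 - d1) - e1 * (1 - d2)) / ((1 - e2) * (1 - d2))  # eta (38)
```

The two equalities hold identically in the parameters, not just for this numeric instance, which is exactly why (37) is the corner of the region.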
Phase I: The transmitter creates $\displaystyle\frac{m}{(1-\delta_{2})}+O\left(m^{2/3}\right)$ (40) random linear combinations of the $m$ bits intended for ${\sf Rx}_{2}$ such that any $m$ of them are linearly independent. This can be accomplished by a random linear codebook whose elements are generated i.i.d. from Bernoulli $\mathrm{Ber}(1/2)$. In practice, Fountain codes can be used. ###### Remark 3 (Expected values of concentration results). The $O\left(m^{2/3}\right)$ term ensures that a sufficient number of equations will be received given the stochastic nature of the channel. Since at the end we let $m\rightarrow\infty$, such terms do not affect the achievability of the overall rates. Thus, for simplicity of expressions, we omit these terms and only work with the expected value of the number of equations in what follows, since the actual number converges to this expectation. Segment a of Phase I: This segment has a total (on average) length of $t_{a}$ as in (33). During this segment, the transmitter sends $t_{a}$ of the random combinations it created for ${\sf Rx}_{2}$, as described above and illustrated in Figure 6. Segment b of Phase I: This segment has a total (on average) length of $t_{b}$ as in (35). During this segment, the transmitter creates $t_{b}$ random linear combinations of the $(1-\epsilon_{2})\eta m$ bits in $W_{1|2}$, and XORs these combinations with another $t_{b}$ random combinations it created for $W_{2}$. The transmitter sends the XORed sequence during Segment b of Phase I. Phase II: This phase has a total (on average) length of $\displaystyle t_{2}=\frac{\epsilon_{2}\left((1-\delta_{1})-\epsilon_{1}(1-\delta_{2})\right)}{(1-\epsilon_{2})(1-\delta_{1})(1-\delta_{2})}m=\frac{\epsilon_{2}\eta m}{(1-\delta_{1})}.$ (41) During this phase, the transmitter creates $t_{2}$ random linear combinations of the bits in $\bar{W}_{1|2}$ and sends these combinations. The decodability for ${\sf Rx}_{1}$ comes as follows.
At the end of the second phase, ${\sf Rx}_{1}$ gathers $\epsilon_{2}\eta m$ linearly independent equations of the non-cached $\bar{W}_{1|2}$, and since $\left|\bar{W}_{1|2}\right|=\epsilon_{2}\eta m$, ${\sf Rx}_{1}$ can decode $\bar{W}_{1|2}$. For the cached $W_{1|2}$, as in the toy example, during Segment b of Phase I, ${\sf Rx}_{1}$ can remove the interference ($W_{2}$ is known from Segment a) and decode $W_{1|2}$. Thus, ${\sf Rx}_{1}$ can decode both ${W}_{1|2}$ and $\bar{W}_{1|2}$, meaning that it can recover $W_{1}$. The decodability for ${\sf Rx}_{2}$ after Phase I directly follows from that in the toy example. Achievable rates: Recall from (35) that the total length $t_{a}+t_{b}$ of Phase I is $\frac{m}{(1-\delta_{2})}$; then, with (41), the total communication time is given by: $\displaystyle t_{a}+t_{b}+t_{2}=\frac{(1-\delta_{1})-\epsilon_{1}\epsilon_{2}(1-\delta_{2})}{(1-\epsilon_{2})(1-\delta_{1})(1-\delta_{2})}m,$ (42) which immediately implies the achievability of the rates in (37). ## VI Achievability Proof for $\mathcal{C}^{\mathrm{blind}}_{\mathrm{DD}}$ in Case A of Theorem 2 First, we remind the reader of the conditions in Case A of Theorem 2: global delayed CSIT and a symmetric channel where $\delta_{1}=\delta_{2}=\delta$ and $\epsilon_{1}=\epsilon_{2}=\epsilon$. For this setup, we present an opportunistic communication protocol that enables the transmitter to use the available delayed CSI and the statistical knowledge of the available side-information at the receivers. This protocol starts by sending the summation (_i.e._ XOR in the binary field) of individual bits intended for the two receivers, and then, based on the available channel feedback and the statistics of the receiver side-information, the transmitter is able to efficiently create recycled bits for retransmission. In this regard, the protocol has some similarities to the reverse Maddah-Ali-Tse scheme [26], which was originally designed for multiple-input fading broadcast channels [27].
As discussed in [28], however, the channel setting is fundamentally different: a discrete erasure channel vs. a continuous Rayleigh channel, and a single-antenna vs. a multiple-input transmitter. Together with the blind receiver side-information in our case, our protocols end up sharing few ingredients with [26]. We skip the protocol for achieving $R_{i}=\left(1-\delta\right)$, as it is a well-established result. Instead, we provide the achievability protocol for the maximum symmetric sum-rate point given by $\displaystyle R_{1}=R_{2}=\frac{1-\delta^{2}}{1+\delta+\epsilon}.$ (43) Figure 7: The proposed four-phase protocol when $\epsilon\leq\delta$: Phase I delivers combinations of $\vec{a}$ and $\vec{b}$; Phases II and III deliver interfering bits to unintended receivers alongside useful information to the intended receivers; Phase IV multicasts the XOR of available bits at each receiver needed at the other. We split the scheme into two scenarios based on the relationship between $\epsilon$ and $\delta$. We note that since we focus on the homogeneous setting in this section, the transmitter has $m$ bits for each receiver: $a_{j}$’s for receiver ${\sf Rx}_{1}$ and $b_{j}$’s for receiver ${\sf Rx}_{2}$, $j=1,2,\ldots,m$. Scenario 1 ($\epsilon\leq\delta$): This case assumes that the side channel that provides each receiver with its side-information is stronger than the channel from the transmitter. The transmitter first creates $\vec{c}=\left(c_{1},c_{2},\ldots,c_{m}\right)$ according to $\displaystyle c_{j}=a_{j}\oplus b_{j},\qquad j=1,2,\ldots,m.$ (44) The protocol is divided into four phases, as described below and depicted in Figure 7. Phase I: During this phase, the transmitter sends each bit of $\vec{c}$ until at least one receiver obtains it, and then moves on to the next bit. Due to the statistics of the channel, this phase takes on average $\displaystyle t_{1}=\frac{m}{1-\delta^{2}}$ (45) time instants. ###### Remark 4.
To keep the description of the protocol simple, we use the expected values of the different random variables (_e.g._ phase lengths, the number of bits received by each user, etc.). A more precise statement would use a concentration result such as the Bernstein inequality to show that the omitted terms do not affect the overall result and the achievable rates, as done in [29, 30]. Moreover, when talking about the number of bits or time instants, we are limited to integer numbers. If a ratio is not an integer, we can use $\lceil\cdot\rceil$, the ceiling function, and since at the end we take the limit $m\rightarrow\infty$, the results remain unchanged. After Phase I is completed, receiver $\mathsf{Rx}_{1}$ obtains on average $m/(1+\delta)$ bits from $\vec{c}$. The transmitter, using the channel feedback during Phase I, knows which bits out of $\vec{b}$ were among those received by $\mathsf{Rx}_{1}$ as part of $\vec{c}$, denoted by $\vec{\tilde{b}}$ in Figure 7. Furthermore, $\mathsf{Rx}_{1}$ statistically knows a fraction $(1-\epsilon)$ of the interfering $\vec{\tilde{b}}$ from its side-information. Thus, if $\mathsf{Rx}_{1}$ obtains an additional fraction $\epsilon$ of $\vec{\tilde{b}}$, it can resolve the interference in Phase I to get $m/(1+\delta)$ pure bits from $\vec{a}$. A similar statement holds for $\mathsf{Rx}_{2}$. Phase II: The transmitter creates $\epsilon m/(1+\delta)$ linearly independent combinations of $\vec{\tilde{b}}$, encodes them using an erasure code of rate $(1-\delta)$, and sends them out. This phase takes $\displaystyle t_{2}=\frac{\epsilon m}{1-\delta^{2}},$ (46) time instants, and upon its completion, $\mathsf{Rx}_{1}$ gets the additional equations to remove the interference during Phase I and recover $m/(1+\delta)$ bits from $\vec{a}$, while $\mathsf{Rx}_{2}$ obtains a further $\epsilon m/(1+\delta)$ equations of its intended bits.
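All of these protocols rely on randomly generated $\mathrm{Ber}(1/2)$ combinations being linearly independent with high probability. The routine below (an illustrative sketch, not part of the paper's construction) computes ranks over GF(2) by Gaussian elimination on integer bitmasks, which makes such claims easy to probe empirically:

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a set of rows, each encoded as an int bitmask."""
    pivots = {}                        # leading-bit position -> basis row
    rank = 0
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead not in pivots:     # new pivot: r joins the basis
                pivots[lead] = r
                rank += 1
                break
            r ^= pivots[lead]          # eliminate the current leading bit
    return rank

# m-bit messages, m + extra random Ber(1/2) combinations; the probability of
# NOT reaching full rank m decays roughly like 2**(-extra).
rng = random.Random(0)
m, extra = 32, 16
combos = [rng.getrandbits(m) for _ in range(m + extra)]
print(gf2_rank(combos), "out of", m)
```

This is the same high-probability argument hidden behind the $O(m^{2/3})$ slack in Remark 3: a modest number of extra combinations makes rank deficiency vanishingly unlikely as $m\rightarrow\infty$.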
Phase III: This phase is similar to Phase II, but the transmitter sends out $\vec{\tilde{a}},$ those bits out of $\vec{a}$ that were received by $\mathsf{Rx}_{2}$ as part of $\vec{c}$. This phase takes $\displaystyle t_{3}=\frac{\epsilon m}{1-\delta^{2}},$ (47) time instants, and upon its completion, $\mathsf{Rx}_{2}$ gets the additional equations to remove the interference during Phase I and recover $m/(1+\delta)$ bits from $\vec{b}$, while $\mathsf{Rx}_{1}$ obtains a further $\epsilon m/(1+\delta)$ equations of its intended bits. Number of equations at each receiver: After the first three phases, each receiver has a total of $\displaystyle\frac{m}{1+\delta}+\frac{\epsilon m}{1+\delta}$ (48) linearly independent equations of its bits, and thus, needs an additional $\displaystyle\frac{\delta-\epsilon}{1+\delta}m$ (49) new equations to complete the recovery of its bits. Note that if $\epsilon=\delta$, the protocol ends here. Phase IV: The transmitter creates $\displaystyle\frac{\delta-\epsilon}{1+\delta}m$ (50) further linearly independent random combinations of the bits intended for $\mathsf{Rx}_{i}$ but received at $\mathsf{Rx}_{\bar{i}}$ as part of $\vec{c}$ during Phase I, denoted by $\vec{\tilde{a}}$ and $\vec{\tilde{b}}$ in Figure 7. Note that at this point, each receiver has full knowledge of the interfering bits from Phase I, and the retransmission of such bits will no longer create any interference. Thus, the transmitter encodes these two sets of bits (one for each receiver) using an erasure code of rate $\left(1-\delta\right)$ and sends the XOR of these encoded bits. This phase takes $\displaystyle t_{4}=\frac{\delta-\epsilon}{1-\delta^{2}}m$ (51) time instants. We note that in Phases II, III, and IV, the transmitter needs to create linearly independent combinations of the bits. Thus, we need to guarantee the feasibility of these operations.
In Phase I, as part of $\vec{c}$, a total of $\displaystyle\frac{\delta}{1+\delta}m$ (52) bits intended for one receiver arrive at the unintended receiver, and effectively, during the next phases, we deliver these bits to the intended receiver. In fact, (52) equals the total number of linearly independent equations needed for $\mathsf{Rx}_{1}$ during Phases III and IV, and for $\mathsf{Rx}_{2}$ during Phases II and IV. Thus, the feasibility of creating a sufficient number of linearly independent combinations is guaranteed. Upon completion of Phase IV, each receiver first removes the contribution of the bits intended for the other user, and then recovers the additional equations needed as indicated in (49), and thus is able to complete the recovery of its message. Achievable rates: The total communication time is $\displaystyle\sum_{j=1}^{4}{t_{j}}=\frac{1+\delta+\epsilon}{1-\delta^{2}}m,$ (53) which immediately results in the target rates of (43). Scenario 2 ($\epsilon>\delta$): This scenario corresponds to the case in which the side channel that provides each receiver with its side-information is weaker than the channel from the transmitter. The protocol has four phases as before, with some modifications. Phase I remains identical to the previous scenario; during Phases II and III, instead of $\epsilon m/(1+\delta)$, the transmitter creates $\delta m/(1+\delta)$ linearly independent equations of $\vec{\tilde{b}}$ and $\vec{\tilde{a}}$, respectively, and sends them out as done in the previous scenario. With these modifications, after the first three phases, $\mathsf{Rx}_{1}$ still has $\displaystyle\frac{\epsilon-\delta}{1+\delta}m$ (54) equations interfered by $\vec{\tilde{b}}$.
Thus, for $\mathsf{Rx}_{1}$ to successfully recover $\vec{a}$, the transmitter has two options: $(i)$ to deliver to $\mathsf{Rx}_{1}$ the same number of new combinations of $\vec{\tilde{b}}$ as in (54), or $(ii)$ to provide $\mathsf{Rx}_{1}$ with the same number of additional combinations, as in (54), of its own bits $\vec{\tilde{a}}$. With the first choice, receiver $\mathsf{Rx}_{1}$ fully resolves the interference and recovers its bits; while with the second choice, it simply obtains $m$ linearly independent equations of $\vec{a}$. Interestingly, either choice is also good for $\mathsf{Rx}_{2}$: with the first choice, $\mathsf{Rx}_{2}$ obtains $m$ linearly independent equations of $\vec{b}$; while with the second choice, $\mathsf{Rx}_{2}$ fully resolves the interference and recovers its bits. In other words, in this scenario, no XOR operation is needed during Phase IV, and sending bits intended for only one user enables both receivers to decode their bits. Based on the discussion above, during Phase IV, the transmitter creates $\left(\epsilon-\delta\right)/\left(1+\delta\right)m$ linearly independent combinations, as in (54), of the bits in $\vec{\tilde{a}}$ intended for $\mathsf{Rx}_{1}$ but received as part of $\vec{c}$ at $\mathsf{Rx}_{2}$ during Phase I. Then, the transmitter encodes these equations using an erasure code of rate $\left(1-\delta\right)$ and sends them to both receivers. Similar to the previous scenario, we can guarantee the feasibility of creating these linearly independent equations. After Phase IV, as discussed above, $\mathsf{Rx}_{1}$ will have a sufficient number of equations to recover $\vec{a}$; while $\mathsf{Rx}_{2}$ first needs to resolve the interference using the equations it obtains during this phase, and then recover $\vec{b}$. We note that the transmitter could send combinations of $\vec{b}$ instead of $\vec{a}$ during the last phase, and the decoding strategies of the receivers would be swapped.
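The time and equation accounting of Scenario 1 above can be double-checked symbolically. The sketch below (hypothetical numeric values, exact rational arithmetic) verifies the phase lengths (45)-(47) and (51), the totals (48), (49), and (52), and the symmetric rate (43):

```python
from fractions import Fraction as F

delta, eps, m = F(1, 2), F(1, 4), 120          # hypothetical, with eps <= delta

t1 = m / (1 - delta**2)                        # (45): Phase I
t2 = eps * m / (1 - delta**2)                  # (46): Phase II
t3 = eps * m / (1 - delta**2)                  # (47): Phase III
t4 = (delta - eps) * m / (1 - delta**2)        # (51): Phase IV
total = t1 + t2 + t3 + t4
assert total == (1 + delta + eps) * m / (1 - delta**2)      # (53)
assert m / total == (1 - delta**2) / (1 + delta + eps)      # rate (43)

# Equation bookkeeping after Phases I-III, per receiver:
have = m / (1 + delta) + eps * m / (1 + delta)              # (48)
need = (delta - eps) * m / (1 + delta)                      # (49)
assert have + need == m
# Cross-received Phase I bits (52) exactly cover the later demands
# (Phases III + IV for Rx1, Phases II + IV for Rx2):
assert delta * m / (1 + delta) == eps * m / (1 + delta) + need
```

The same identities hold symbolically for any $0\leq\epsilon\leq\delta<1$; the specific numbers merely make the check concrete.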
## VII Achievability Proofs for $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DN}}$ in Case B and $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DN}}$ in Case C of Theorem 2 Case B: Achievability for $\mathcal{C}^{\mathrm{semi-blind}}_{\mathrm{DN}}$ when $\epsilon_{1}=0$: Now $\epsilon_{1}=0$ and thus $W_{2|1}=W_{2}$. Recall that message $W_{2}$ is also represented by a bit vector $\vec{b}$. The encoding process comes as follows. At time index $t$, the $j$-th bit $a_{j}$ in message $\vec{a}$ for $\mathsf{Rx}_{1}$ is repeated according to the state feedback $S_{1}$, and after XORing with a random linear combination $(\vec{g}_{t})^{\intercal}\vec{b}$, the resulting superposition is sent. Each entry of $\vec{g}_{t}$ is generated from i.i.d. $\mathrm{Ber}(1/2)$. Each $a_{j}$ is repeated until the corresponding state feedback is $S_{1}=1$. In other words, prior to the superposition via XORing, $\vec{a}$ is pre-encoded and repeated as in standard ARQ, while $\vec{b}$ is pre-encoded by a fountain-like random linear code. The termination of this fountain code is determined by the state feedback of $\mathsf{Rx}_{1}$, that is, whether or not all bits in $\vec{a}$ have been successfully delivered to $\mathsf{Rx}_{1}$. The decoding at $\mathsf{Rx}_{1}$ follows from standard ARQ, since the full side-information $\vec{b}$ is known. By setting the total transmission length to $\frac{m_{1}}{1-\delta_{1}},$ (55) the achievable rate is $R_{1}=1-\delta_{1}.$ (56) For user $\mathsf{Rx}_{2}$, it has side-information $W_{1|2}$ for a fraction $(1-\epsilon_{2})$ of the bits of the interference $\vec{a}$, and each reception of the corresponding transmitted XOR will result in a linear equation of $\vec{b}$. To see this, consider the $j$-th bit $a_{j}$ of the interference $\vec{a}$. Suppose it is repeated $L_{j}$ times until its mixture with $\vec{b}$ successfully arrives at $\mathsf{Rx}_{1}$.
Within this span, $\mathsf{Rx}_{2}$ gets $K_{j}\triangleq\sum_{\ell=1}^{L_{j}}S_{2,j}[\ell]$ (57) linear equations mixing $a_{j}$ and $\vec{b}$, where $S_{2,j}[\ell]$ is the erasure state at $\mathsf{Rx}_{2}$ for the $\ell$-th transmission of $a_{j}$. Then $\mathsf{Rx}_{2}$ gets $K_{j}$ pure equations of $\vec{b}$. In total, the number of pure linear equations of message $\vec{b}$ is $m_{1}(1-\epsilon_{2})\mathbb{E}[K_{j}]=m_{1}(1-\epsilon_{2})\frac{1-\delta_{2}}{1-\delta_{1}}.$ (58) For the interference bits without side-information, by using the interference alignment in [12], the number of pure linear equations is $m_{1}\epsilon_{2}\mathbb{E}[(K_{j}-1)^{+}]=m_{1}\epsilon_{2}\left(\frac{\delta_{1}-\delta_{2}}{1-\delta_{1}}+\frac{\delta_{2}-\delta_{1}\delta_{2}}{1-\delta_{1}\delta_{2}}\right).$ (59) Thus the total number of pure equations from (58) and (59) is $m_{1}(1-\delta_{2})\left(\frac{1}{1-\delta_{1}}-\frac{\epsilon_{2}}{1-\delta_{1}\delta_{2}}\right),$ which results in the achievable rate $R_{2}=(1-\delta_{2})\left(1-\epsilon_{2}\frac{1-\delta_{1}}{1-\delta_{1}\delta_{2}}\right),$ (60) where (55) is applied. It can be checked that $(R_{1},R_{2})$ in (56) and (60) is the corner point of the outer-bound region (12) when $\epsilon_{1}=0$: $\displaystyle\epsilon_{2}\frac{1-\delta_{2}}{1-\delta_{1}\delta_{2}}R_{1}+R_{2}\leq\left(1-\delta_{2}\right),$ (61) $\displaystyle 0\leq R_{i}\leq(1-\delta_{i}),\quad i=1,2.$ (62) The other corner point can be trivially achieved by time-sharing. Figure 8: Proposed four-phase protocol for achieving $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DN}}$. Case C: Achievability for $\mathcal{C}^{\mathrm{non-blind}}_{\mathrm{DN}}$: As in Fig. 8, we now introduce the four-phase scheme for this achievability, of which the third and fourth phases are similar to those in Case B.
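Before detailing the four phases of Case C, the Case B counting above can be checked with exact arithmetic. The sketch below (hypothetical parameter values) verifies that the expected counts (58) and (59) sum to the compact expression used for (60), and that the pair (56), (60) is tight on the outer-bound (61):

```python
from fractions import Fraction as F

d1, d2, e2 = F(1, 4), F(1, 2), F(1, 3)   # hypothetical Case B parameters (eps1 = 0)

EK  = (1 - d2) / (1 - d1)                                      # E[K_j] in (58)
EK1 = (d1 - d2) / (1 - d1) + (d2 - d1 * d2) / (1 - d1 * d2)    # E[(K_j-1)^+] in (59)

# Per bit of a, (58) + (59) collapse to the compact count before (60):
assert (1 - e2) * EK + e2 * EK1 == (1 - d2) * (1 / (1 - d1) - e2 / (1 - d1 * d2))

# The rate pair (56), (60) sits on the outer-bound (61) with equality:
R1 = 1 - d1
R2 = (1 - d2) * (1 - e2 * (1 - d1) / (1 - d1 * d2))
assert e2 * (1 - d2) / (1 - d1 * d2) * R1 + R2 == 1 - d2
assert R1 <= 1 - d1 and R2 <= 1 - d2                           # (62)
```

Both identities hold symbolically in the parameters; the concrete fractions simply make the cancellation visible.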
We first represent $W_{2|1}$ and $W_{1|2}$ by bit vectors $\vec{b}_{1}$ and $\vec{a}_{2}$, respectively; then the encoding process is as follows. Phase I: The transmitter sends the bits from $\vec{a}$ which are not cached at $\mathsf{Rx}_{2}$, _i.e._ not in $\vec{a}_{2}$. The total length $t_{1}$ of Phase I is $\epsilon_{2}m_{1}$. After Phase I, the transmitter knows the length-$t_{1}\delta_{1}$ sequence $\bar{\tilde{a}}_{2}$, which is formed by the bits erased at $\mathsf{Rx}_{1}$ in Phase I, where $S_{1}[t]=0$. Phase II: The transmitter selects the $\epsilon_{1}m_{2}$ bits from $\vec{b}$ which are not cached at $\mathsf{Rx}_{1}$, _i.e._ not in $\vec{b}_{1}$, and sends random linear combinations of them. The total length $t_{2}$ of Phase II is $\epsilon_{1}m_{2}/(1-\delta_{1}\delta_{2})$. After Phase II, the transmitter knows the length-$t_{2}(1-\delta_{1})$ sequence $\bar{\tilde{b}}_{1}$, which is formed by the combinations received at $\mathsf{Rx}_{1}$ in Phase II, where $S_{1}[t]=1$. Phase III: The transmission is similar to that in the semi-blind Case B; the differences are as follows. Now the transmitter is non-blind to $\vec{a}$, so it pre-encodes the cached $\vec{a}_{2}$ instead of the whole $\vec{a}$ using ARQ. Also, since the transmitter is only sure that $(\bar{\tilde{b}}_{1},\vec{b}_{1})$ is known at $\mathsf{Rx}_{1}$, it pre-encodes $(\bar{\tilde{b}}_{1},\vec{b}_{1})$ instead of the full $\vec{b}$ using the random linear code. More specifically, the output of the second pre-encoder at time $t$ becomes the XOR of the random linear combinations $(\vec{g}_{t})^{\intercal}\bar{\tilde{b}}_{1}\oplus(\vec{g^{\prime}}_{t})^{\intercal}\vec{b}_{1}$, where each entry of $\vec{g}_{t}$ or $\vec{g^{\prime}}_{t}$ is generated from i.i.d. $\mathrm{Ber}(1/2).$ For the first pre-encoder, each bit in $\vec{a}_{2}$ cached at user 2 is repeated according to the delayed $S_{1}$ as described in Case B. Finally, the XOR of the outputs of these two pre-encoders is sent.
Phase IV: The transmission in this phase is the same as that in Phase III, with the input of the first ARQ pre-encoder replaced by the recycled $\bar{\tilde{a}}_{2}$. Though a fraction $(1-\delta_{2})$ of the sequence $\bar{\tilde{a}}_{2}$ is already known at $\mathsf{Rx}_{2}$ from Phase I, the transmitter is blind to these bits. In contrast, in Phase III the transmitter knows that the input $\vec{a}_{2}$ of the ARQ pre-encoder is entirely cached at $\mathsf{Rx}_{2}$. Note that without receiver side-information ($\epsilon_{1}=\epsilon_{2}=1$), there is no Phase III and our scheme reduces to the three-phase scheme in [12]. We focus on the decodability for receiver $\mathsf{Rx}_{1}$ first. In Phases III and IV, since $(\bar{\tilde{b}}_{1},\vec{b}_{1})$ is already known at $\mathsf{Rx}_{1}$, $\vec{a}_{2}$ and $\bar{\tilde{a}}_{2}$ can be recovered if the lengths of the phases are respectively chosen as $\displaystyle t_{3}(1-\delta_{1})=(1-\epsilon_{2})m_{1}$ $\displaystyle t_{4}(1-\delta_{1})=\delta_{1}t_{1}=\delta_{1}\epsilon_{2}m_{1}.$ (63) Together with the bits received in Phase I, receiver $\mathsf{Rx}_{1}$ gets all $m_{1}$ bits. Now, we turn to the decodability at $\mathsf{Rx}_{2}$. Receiver $\mathsf{Rx}_{2}$ will first decode the super-sequence $(\bar{\tilde{b}}_{1},\vec{b}_{1})$ from its received bits during Phases II, III, and IV. With $\bar{\tilde{b}}_{1}$ and the equations received during Phase II, it will have $t_{2}(1-\delta_{1}\delta_{2})=\epsilon_{1}m_{2}$ equations to decode the uncached bits in $\vec{b}$. Together with $\vec{b}_{1}$, the whole message for user 2 is decoded. To ensure successful decoding of the super-sequence $(\bar{\tilde{b}}_{1},\vec{b}_{1})$, we calculate the corresponding expected number of linearly independent equations as follows. In Phase III, every reception at $\mathsf{Rx}_{2}$ will result in a new equation since $\vec{a}_{2}$ is cached, and we have $(1-\epsilon_{2})m_{1}\mathbb{E}[K_{j}]$ equations after Phase III.
In Phase IV, as in (58) and (59), we will have an additional $t_{1}\delta_{1}(1-\delta_{2})\mathbb{E}[K_{j}]+t_{1}\delta_{1}\delta_{2}\mathbb{E}[(K_{j}-1)^{+}]$ equations, since a fraction $(1-\delta_{2})$ of $\bar{\tilde{a}}_{2}$ is received at $\mathsf{Rx}_{2}$ during Phase I. By using the equations of $\bar{\tilde{b}}_{1}$ received at $\mathsf{Rx}_{2}$ in Phase II as an additional cache, we need $\displaystyle t_{2}(1-\delta_{1})+(1-\epsilon_{1})m_{2}\leq$ $\displaystyle t_{2}(1-\delta_{1}-\delta_{2}+\delta_{1}\delta_{2})+(1-\epsilon_{2})m_{1}\frac{1-\delta_{2}}{1-\delta_{1}}+t_{1}\delta_{1}(1-\delta_{2})\left(\frac{1}{1-\delta_{1}}-\frac{\delta_{2}}{1-\delta_{1}\delta_{2}}\right)$ (64) for successfully decoding the length-$\left(t_{2}(1-\delta_{1})+(1-\epsilon_{1})m_{2}\right)$ super-sequence $(\bar{\tilde{b}}_{1},\vec{b}_{1})$. Note that by collecting $\bar{\tilde{b}}_{1}$ received in Phase II (with the standard basis for $\bar{\tilde{b}}_{1}$) and the linear equations produced in Phases III and IV, one can form a set of linear equations of $(\bar{\tilde{b}}_{1},\vec{b}_{1})$ described by a full (column) rank matrix, when the code lengths are long enough. For $(R_{1},R_{2})$ satisfying the outer-bound $R_{1}/(1-\delta_{1})+\epsilon_{1}R_{2}/(1-\delta_{1}\delta_{2})=1$ in (12), the total communication time must satisfy $\sum^{4}_{j=1}t_{j}=\frac{m_{1}}{1-\delta_{1}}+\frac{\epsilon_{1}m_{2}}{1-\delta_{1}\delta_{2}}.$ From the selected lengths of Phases I and II, $m_{1}=t_{1}/\epsilon_{2}$ and $m_{2}=t_{2}(1-\delta_{1}\delta_{2})/\epsilon_{1}$; together with (63), this constraint is met since $\sum^{4}_{j=1}t_{j}=t_{1}\left(1+\frac{(1-\epsilon_{2})/\epsilon_{2}+\delta_{1}}{1-\delta_{1}}\right)+t_{2}=\frac{t_{1}}{\epsilon_{2}(1-\delta_{1})}+t_{2}.$ For the corner point $(R_{1},R_{2})$ which also satisfies the outer-bound $R_{2}/(1-\delta_{2})+\epsilon_{2}R_{1}/(1-\delta_{1}\delta_{2})=1$ in (12), we further show that the decodability constraint (64) will also be met.
From (12), $\frac{t_{1}}{1-\delta_{1}\delta_{2}}+\frac{1-\delta_{1}\delta_{2}}{\epsilon_{1}(1-\delta_{2})}t_{2}=\sum^{4}_{j=1}t_{j}=\frac{t_{1}}{\epsilon_{2}(1-\delta_{1})}+t_{2},$ which implies $t_{2}\left(\frac{1-\delta_{1}\delta_{2}}{\epsilon_{1}}-(1-\delta_{2})\right)=t_{1}(1-\delta_{2})\left(\frac{1}{\epsilon_{2}(1-\delta_{1})}-\frac{1}{1-\delta_{1}\delta_{2}}\right),$ or equivalently $t_{2}\left(\delta_{2}-\delta_{1}\delta_{2}+\frac{1-\epsilon_{1}}{\epsilon_{1}}(1-\delta_{1}\delta_{2})\right)=t_{1}(1-\delta_{2})\left(\frac{(1-\epsilon_{2})/\epsilon_{2}}{1-\delta_{1}}+\left(\frac{1}{1-\delta_{1}}-1\right)-\left(\frac{1}{1-\delta_{1}\delta_{2}}-1\right)\right).$ Then (64) is met since $m_{1}=t_{1}/\epsilon_{2}$ and $m_{2}=t_{2}(1-\delta_{1}\delta_{2})/\epsilon_{1}$. ## VIII Conclusion We studied the problem of communications over two-user broadcast erasure channels with random receiver side-information. We assumed the transmitter may not have access to global channel state information and global cache index information for both receivers. For the non-blind-transmitter case, we characterized the capacity region, while with a blind transmitter we showed the outer-bounds can be achieved under certain conditions. Thus, in general with a blind transmitter, the capacity region of the problem, also known as blind index coding over the broadcast erasure channel, remains open. 
## Appendix A Proof of (25) in Lemma 2 For time instant $t$ where $2\leq t\leq n$, we have $\displaystyle H\left(Y_{2}[t]|Y_{2}^{t-1},W_{1|2},W_{2},S_{2}^{t},E^{n}\right)$ (65) $\displaystyle\quad=\left(1-\delta_{2}\right)H\left(X[t]|Y_{2}^{t-1},W_{1|2},W_{2},S_{2}[t]=1,S_{2}^{t-1},E^{n}\right)$ $\displaystyle\quad\overset{(a)}{=}\left(1-\delta_{2}\right)H\left(X[t]|Y_{2}^{t-1},W_{1|2},W_{2},S_{2}^{t},E^{n}\right)$ $\displaystyle\quad\overset{(b)}{\geq}\left(1-\delta_{2}\right)H\left(X[t]|Y_{1}^{t-1},W_{1|2},W_{2},S_{1}^{t},E^{n}\right)$ $\displaystyle\quad\overset{(c)}{=}\frac{1-\delta_{2}}{1-\delta_{1}}H\left(Y_{1}[t]|Y_{1}^{t-1},W_{1|2},W_{2},S_{1}^{t},E^{n}\right),$ (66) where $(a)$ holds since $X[t]$ is independent of the channel realization at time instant $t$; $(b)$ follows from the following argument: consider a virtual channel state $\tilde{S}[t]$, an i.i.d. Bernoulli $\left((\delta_{2}-\delta_{1})/\delta_{2}\right)$ process independent of $S_{2}[t]$ and of the transmitted signal; then, we have $\displaystyle H\left(X[t]|Y_{2}^{t-1},W_{1|2},W_{2},S_{2}^{t},E^{n}\right)$ $\displaystyle=H\left(X[t]|Y_{2}^{t-1},W_{1|2},W_{2},S_{2}^{t},\tilde{S}^{t-1},E^{n}\right)$ $\displaystyle\geq H\left(X[t]|\\{(1-S_{2}[\ell])\tilde{S}[\ell]X[\ell]\\}_{\ell=1}^{\ell=t-1},Y_{2}^{t-1},W_{1|2},W_{2},S_{2}^{t},\tilde{S}^{t-1},E^{n}\right)$ $\displaystyle=H\left(X[t]|Y_{1}^{t-1},W_{1|2},W_{2},S_{1}^{t},E^{n}\right),$ (67) where the third equality holds since receiving both the virtual $(1-S_{2}[\ell])\tilde{S}[\ell]X[\ell]$ and $Y_{2}[\ell]=S_{2}[\ell]X[\ell]$ is statistically the same as receiving $S_{1}[\ell]X[\ell]=Y_{1}[\ell]$ (note that if $S_{2}[\ell]=0$, then $(1-S_{2}[\ell])\tilde{S}[\ell]=1$ with probability $(\delta_{2}-\delta_{1})/\delta_{2}$), and $X[t]$ is independent of the channel states and of the virtual $\tilde{S}^{t-1}$; $(c)$ comes from the fact that $\Pr\left(S_{1}[t]=0\right)=\delta_{1}$.
Next, taking the summation over $t$ from $1$ to $n$, and using the fact that the transmit signal at time instant $t$ is independent of future channel realizations, we get $H\left(Y_{2}^{n}|W_{1|2},W_{2},S^{n}_{2},E^{n}\right)\geq\frac{1-\delta_{2}}{1-\delta_{1}}H\left(Y_{1}^{n}|W_{1|2},W_{2},S^{n}_{1},E^{n}\right)$ and hence (25).

## Appendix B Proof of $\mathcal{C}^{\mathrm{blind}}_{\mathrm{NN}}$ in Theorem 3: Opportunistic Transmission

Case 1: $\delta_{2}\geq\delta_{1}$ and $\epsilon_{2}\in\\{0,1\\}$: First, we note that, as stated in Remark 2, when the weaker receiver has no side-information, _i.e._ $\epsilon_{2}=1$, the capacity region is the same as having no side-information at either receiver. The case with $\epsilon_{2}=0$ was already treated in Section V.

Case 2: $\delta_{1}=\delta_{2}=\delta$ and $\epsilon_{1}=\epsilon_{2}=\epsilon$ (symmetric setting): We now focus on the blind-transmitter assumption, where the transmitter no longer has the luxury of knowing $W_{1|2}$ to send bits in such a way as to benefit both receivers (as was done in Segment b of Phase I in Section V). We note that if both $\epsilon_{1}$ and $\epsilon_{2}$ are equal to zero, the problem becomes trivial, as each receiver has full side-information of the other user's message and $R_{i}=(1-\delta_{i})$ is achievable for $i=1,2$. For the case $\epsilon_{1}=\epsilon_{2}=\epsilon>0$, if each receiver obtains a total of $(1+\epsilon)m$ linearly independent observations of both $W_{1}$ and $W_{2}$, then it can recover both messages. It turns out that this protocol is capacity-achieving. The non-trivial corner point in this case is given by: $\displaystyle R_{1}=R_{2}=\frac{(1-\delta)}{(1+\epsilon)}.$ (68) The protocol is straightforward. The transmitter starts with $m$ bits for each receiver and creates $(1+\epsilon)m$ linearly independent combinations of the total $2m$ bits for the two receivers.
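The counting argument above — each receiver already holds $(1-\epsilon)m$ bits of the other message as side-information, so $(1+\epsilon)m$ further independent combinations of the $2m$ unknowns suffice to decode both messages — can be sanity-checked numerically. The sketch below uses illustrative parameters and random combinations over a large prime field (so that linear independence holds with overwhelming probability); it is not the paper's construction:

```python
import random

P = 2_147_483_647  # large prime field; random combos are independent w.h.p.

def rank_mod_p(rows, p=P):
    """Row rank over GF(p) via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)           # modular inverse
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

random.seed(1)
m, eps = 8, 0.5                       # illustrative message size and epsilon
unknowns = 2 * m                      # W1 and W2 stacked into one vector
# Rx1's side-information: (1-eps)m known bits of W2, as unit-vector equations
side_info = [[1 if j == m + i else 0 for j in range(unknowns)]
             for i in range(int((1 - eps) * m))]
# the (1+eps)m delivered linear combinations of all 2m bits
combos = [[random.randrange(P) for _ in range(unknowns)]
          for _ in range(int((1 + eps) * m))]
assert rank_mod_p(side_info + combos) == unknowns     # both messages decodable

delta = 0.25
rate = m / (int((1 + eps) * m) / (1 - delta))         # per-receiver rate m/n
assert abs(rate - (1 - delta) / (1 + eps)) < 1e-12    # matches (68)
```

The last two lines check the per-receiver rate $m/n=(1-\delta)/(1+\epsilon)$ of (68) for an illustrative erasure probability $\delta$.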
Then, the transmitter encodes these combinations using an erasure code of rate $(1-\delta)$ and communicates the encoded message. By the end of the communication block, each receiver will have $2m$ linearly independent observations of the $2m$ unknown variables and can decode both messages. This immediately implies the achievability of the corner point described in (68).

## Appendix C Proof of Theorem 4

In this case, receiver ${\sf Rx}_{1}$ (the stronger receiver) has full side-information of the message for ${\sf Rx}_{2}$, _i.e._ $W_{2|1}=W_{2}$, and receiver ${\sf Rx}_{2}$ has access to a fraction $(1-\epsilon_{2})$ of the bits intended for ${\sf Rx}_{1}$. The outer bounds of Theorem 1 in this case become: $\left\\{\begin{array}[]{ll}0\leq\epsilon_{2}\frac{1-\delta_{2}}{1-\delta_{1}}R_{1}+R_{2}\leq\left(1-\delta_{2}\right),&\\\ 0\leq R_{1}\leq\left(1-\delta_{1}\right).\end{array}\right.$ (69) Thus, the non-trivial corner point is given by: $\displaystyle R_{1}=(1-\delta_{1}),$ $\displaystyle R_{2}=(1-\epsilon_{2})(1-\delta_{2}).$ (70) In this case, we cannot achieve the corner point given in (70). To achieve the region described in Theorem 4, we need to prove the achievability of the following corner point: $\displaystyle R_{1}=(1-\delta_{1}),$ $\displaystyle R_{2}=(1-\epsilon_{2})(1-\delta_{1})(1-\delta_{2}).$ (71) Achievability protocol: We start with $m$ bits for ${\sf Rx}_{2}$ and $\eta m$ bits for ${\sf Rx}_{1}$, where $\displaystyle\eta=\frac{R_{1}}{R_{2}}=\frac{1}{(1-\epsilon_{2})(1-\delta_{2})}>1.$ (72) The achievability protocol is carried out over two phases. During the first phase, the transmitter creates $\eta m$ random linear combinations of the $m$ bits for ${\sf Rx}_{2}$ such that each subset of $m$ combinations is linearly independent. The transmitter then sends out the XOR of these combinations with the uncoded $\eta m$ bits of ${\sf Rx}_{1}$.
Thus, this phase has a length of $\displaystyle t_{1}=\eta m.$ (73) During this phase, ${\sf Rx}_{1}$ obtains $(1-\delta_{1})\eta m$ of its bits, as it has access to $W_{2}$ as side-information and can cancel out the interference. Moreover, ${\sf Rx}_{2}$ obtains $(1-\delta_{2})\eta m$ XORed combinations, and since ${\sf Rx}_{2}$ statistically knows a fraction $(1-\epsilon_{2})$ of the bits for ${\sf Rx}_{1}$, we conclude that ${\sf Rx}_{2}$ gathers $\displaystyle(1-\epsilon_{2})(1-\delta_{2})\eta m=m$ (74) linearly independent combinations of its $m$ bits and is able to decode its message $W_{2}$. The second phase has a total length of $\displaystyle t_{2}=\frac{\delta_{1}}{(1-\delta_{1})}\eta m.$ (75) During this second phase, the transmitter creates $t_{2}$ random linear combinations of the $\eta m$ bits intended for ${\sf Rx}_{1}$ and sends them out. At the end of this phase, the first receiver gathers an additional $(1-\delta_{1})t_{2}=\delta_{1}\eta m$ random equations of its intended bits; combined with the $(1-\delta_{1})\eta m$ bits it already knows from the first phase, ${\sf Rx}_{1}$ is able to decode $W_{1}$. Achievable rates: The total communication time is $\displaystyle t_{1}+t_{2}=\frac{\eta m}{(1-\delta_{1})}.$ (76) This immediately implies the achievability of the rates given in (71).
# Matrix-pairing states in the alkaline Fe-selenide superconductors: exotic Josephson junctions Emilian M. Nica Department of Physics Box 871504 Arizona State University Tempe, Arizona 85287-1504<EMAIL_ADDRESS>Qimiao Si Department of Physics and Astronomy, Rice University, 6100 Main St, Houston, TX, 77005, USA Rice Center for Quantum Materials, Rice University, 6100 Main St, Houston 77005 TX, USA Onur Erten Department of Physics Box 871504 Arizona State University Tempe, Arizona 85287-1504 ###### Abstract True to their unconventional nature, multi-band alkaline Fe-selenides and, more recently, the heavy-fermion CeCu2Si2 have shown experimental signatures of fully-gapped but sign-changing superconductivity. A two-orbital pairing state, called $s\tau_{3}$, with non-trivial matrix structure, was proposed as a candidate able to reconcile the seemingly contradictory properties of these superconductors. Motivated by the non-trivial orbital structure of the proposed $s\tau_{3}$ state, which has opposite signs for the pairing functions of the two orbitals, we study prototypical Josephson junctions where at least one of the leads is in a superconducting state of this kind. An analysis of these junctions in the limit of two degenerate orbitals (bands) and with a simple form of junction hybridization reveals several remarkable properties. One is the emergence of gapless, purely electron- and hole-like bound states for $s\tau_{3}-N-s\tau_{3}$ junctions with arbitrary global phase difference between the leads, and likewise for $s\tau_{3}-N-I$ junctions. The other is the absence of static Josephson currents when both leads are superconducting. In both of these signatures, $s\tau_{3}$ junctions are dramatically different from more conventional Josephson junctions. We also find that the gapless bound states are protected by an orbital-exchange symmetry, although the protection is not topological. 
Junctions which break this symmetry, such as $s\tau_{3}-N-s$, have gapped Andreev bound states. In general, the Josephson effect also re-emerges once the degeneracy of the two orbitals is lifted. We support these conclusions via analytical and numerical results for the bound states, together with microscopic calculations of the Josephson current. Our results indicate that junctions involving $s\tau_{3}$ pairing in alkaline Fe-selenides will generically have bound states with a small gap together with a greatly suppressed Josephson current.

## I Introduction

Unconventional superconductors (SC's) have led to some of the most important questions in condensed matter physics. One such puzzle came to light not too long ago in a branch of the Fe-based family, the alkaline Fe-selenides Lee (2017). On the one hand, these SC's are fully gapped, as evidenced by ARPES studies Mou _et al._ (2011); X.-P. Wang and T. Qian and P. Richard and P. Zhang and J. Dong and H.-D. Wang and C.-H. Dong and M.-H. Fang and H. Ding (2011); Xu _et al._ (2012); Wang _et al._ (2012), while on the other hand, they exhibit an in-gap spin-resonance in inelastic neutron scattering experiments Park _et al._ (2011); Friemel _et al._ (2012). The first of these features points toward an $s$-wave pairing state, while the second implies a pairing state which changes sign under a $\pi/2$ rotation Eschrig (2006); Stockert _et al._ (2011); Maier _et al._ (2011); Dai (2015); Si _et al._ (2016), such as $d$-wave pairing. These seemingly mutually exclusive traits were shown to be reconcilable in a pairing state which transforms as a sign-changing $B_{1g}$ representation of the point group, but which nonetheless leads to a fully-gapped SC state. We called this state $s\tau_{3}$ Nica _et al._ (2017) since it consists of a $s_{x^{2}y^{2}}$ form factor multiplied by a $\tau_{3}$ Pauli matrix in a $d_{xz},d_{yz}$ two-orbital space appropriate to the alkaline Fe-selenides.
Underlying the remarkable properties of this pairing candidate is the non-trivial $\tau_{3}$ matrix structure in orbital space, which ensures that it transforms as $B_{1g}$. In the tight-binding description appropriate to the alkaline Fe-selenides, this pairing state can also be thought of as an effective $d+d$ intra- and inter-_band_ pairing Nica and Si (2021). $s\tau_{3}$ pairing was stabilized in a realistic five-orbital model of the alkaline Fe-selenides Nica _et al._ (2017). Remarkably, a similar experimental landscape has also recently emerged in the venerable heavy-fermion CeCu2Si2, the first-discovered unconventional SC Steglich _et al._ (1979). Believed to be a typical $d$-wave SC for almost its entire history, CeCu2Si2 was recently found to exhibit a small gap for temperatures well below $T_{c}$ from specific heat Kittaka _et al._ (2014) and London penetration depth measurements Pang _et al._ (2018); Yamashita _et al._ (2017). In-gap spin resonances in the inelastic neutron spectrum Stockert _et al._ (2011), as well as other indicators, single out CeCu2Si2 as another example of gapped but sign-changing superconductivity. A pairing candidate which incorporates the orbital and spin structure of this compound into a non-trivial matrix pairing was also recently proposed by two of us Nica and Si (2021). More generally, the $s\tau_{3}$ pairing state represents one of the most dramatic forms of the overall notion of orbital-selective superconducting pairing introduced in the context of 111 iron pnictides Yu _et al._ (2014), which has also been subsequently discussed in other iron pnictides Yin _et al._ (2014); Ong _et al._ (2016) and the nematic FeSe Sprau _et al._ (2017); Hu _et al._ (2018); Yu _et al._ (2018).

Figure 1: Summary of our results for Josephson junctions along $x$ where at least one of the Left (L) and Right (R) leads is in a $s\tau_{3}$ pairing state.
In that case, the pairing functions of the two orbitals have opposite signs, and the pairing thus has non-trivial structure in orbital space. We show the most important results in the limit where the two orbitals are degenerate and couple identically to a single orbital in the Center (C) metallic part. (a) $s\tau_{3}-N-s\tau_{3}$ junctions, where the L and R leads are identical up to a global phase difference. For arbitrary global phase differences between the L and R leads, the bound states are either electron- or hole-like and become gapless for a set of conserved momenta $k_{y}$. In this limit, the static Josephson current is zero, since the contributions of the two orbital sectors effectively cancel. For almost degenerate bands, as expected for systems with $s\tau_{3}$ pairing, the bound states acquire a small gap via Andreev scattering, although exceptions can occur for junctions along $z$. A weak static Josephson current, determined by the band-splitting near the FS, is also expected. $s\tau_{3}-N-I$ junctions, where the R lead is in an insulating state, have similar bound states. (b) $s\tau_{3}-N-s$ junction, where the R lead consists of a single orbital in a trivial $s$-wave pairing state. In the same degenerate limit, there are gapped Andreev bound states, but a vanishing Josephson current. For small band splitting, a weak Josephson current is expected. In Ref. Nica and Si (2021), we also pointed out that the $s\tau_{3}$ pairing state and a similarly-constructed microscopic candidate in CeCu2Si2 are both equivalent to $d+d$ intra- and inter-band pairing in the band basis. This equivalence holds when periodic boundary conditions are imposed along all of the axes. Moreover, we discussed the similarities between $d+d$ and 3He-B, where a similar $p+p$ pairing can be realized in spin space. In view of the topologically-protected edge states of 3He-B, a natural question is whether $d+d$ pairing can also have non-trivial edge states.
This consideration opens up a new direction in the exploration of both the general $d+d$ pairing and actual microscopic states such as $s\tau_{3}$. For definiteness, in this work we focus on Josephson junctions, as these show features which are specific to this type of matrix-pairing and can therefore facilitate its identification by experiments. We reserve a more detailed discussion of the edge state spectrum for future work. Although some of the bulk properties of pairing candidates with non-trivial matrix structure have already been mentioned, much remains to be explored as far as characteristic experimental signatures are concerned. One promising route involves Josephson junctions, which in principle can take advantage of the inherent phase difference between the two orbital sectors in $s\tau_{3}$ pairing. Motivated by this observation, we study prototypical S-N-S and S-N-I junctions where at least one of the leads is in a $s\tau_{3}$ pairing state. Although our study is specifically geared toward junctions with $s\tau_{3}$ pairing states, and thus targets the alkaline Fe-selenides, we expect that some of the features discussed here can also manifest for similar pairing candidates with non-trivial matrix structure, as proposed for CeCu2Si2 Nica and Si (2021). To isolate the most striking features of this pairing state, we describe the superconducting leads via effective two-orbital models. Furthermore, in order to capture the effects of the intrinsic phase difference between the pairing functions of the two orbitals, we consider junctions where the two orbitals couple to a single orbital in the metallic part. As we show, the bound states in these rather unusual setups differ sharply from those in more conventional junctions, which ignore cross-coupling between the various orbitals.
More precisely, we examine setups where the Left (L) lead, Center (C) metallic part, and Right (R) lead are arranged in $s\tau_{3}-N-s\tau_{3}$, $s\tau_{3}-N-I$ (insulator), and $s\tau_{3}-N-s$ (trivial single-channel $s$-wave SC) junctions. We assume that the coupling to the junction does not destabilize $s\tau_{3}$ pairing. Our salient results are summarized below.

### I.1 Summary of main results

Our most striking results occur when the two orbitals of the $s\tau_{3}$ SC leads are degenerate. In this limit, both intra- and inter-orbital hybridization terms, which lead to band splitting, are absent, and therefore the two orbitals correspond to two degenerate bands. The precise definitions of the intra- and inter-orbital hybridizations are discussed in Sec. II. In addition, the two orbitals couple identically to a single orbital of the C part. Due to the effects of the non-trivial orbital structure of $s\tau_{3}$ pairing and to the equal coupling to a single C orbital, the usual Andreev scattering processes are effectively “frustrated”, leading to gapless bound states which are either electron- or hole-like in $s\tau_{3}-N-s\tau_{3}$ junctions for arbitrary global phase differences between the L and R leads. This also holds for $s\tau_{3}-N-I$ junctions. The bound states are non-trivial since they decay into the SC leads. We find that the gapless bound states are also protected by an orbital-exchange symmetry which is present for these junctions in the limit considered here, although the protection is not topological in origin. Breaking the orbital-exchange symmetry restores the Andreev scattering that mixes electron- and hole-like states, which, in turn, opens a gap. An extreme example of this symmetry breaking is provided by $s\tau_{3}-N-s$ junctions, which exhibit Andreev bound states.
A second important feature is that the bound state spectra for the $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-s$ junctions are invariant under changes in the _global_ phase difference across the junction. This is also in striking contrast to the typical single-channel Josephson junction, where the Andreev bound state spectrum changes with phase, leading to a static Josephson effect. Our unusual findings are confirmed via microscopic calculations of the Josephson currents in the tunneling limit. We show in this limit that the contributions from the two sectors of $s\tau_{3}$ pairing, which have opposite intrinsic phases, cancel. We also consider more realistic junctions where the SC leads include intra- and inter-orbital hybridization. We focus on cases where the band splitting in the vicinity of the Fermi surface (FS) is much smaller than the pairing amplitudes. We show that intra-orbital hybridization terms break the orbital-exchange symmetry, leading to bound states more akin to the usual, gapped Andreev bound states. The same terms also make the bound state spectrum sensitive to a global phase difference across the junction. These findings are further confirmed by microscopic calculations of the Josephson current in the tunneling limit. As a counter-example to the gapped bound state spectrum, we consider setups where the junction is along the $z$-direction, while the intra- and inter-orbital hybridizations are predominantly in-plane. We show that gapless states can still be found for in-plane momenta along the diagonals, where the orbital-exchange symmetry is preserved. In all cases, we show that the analytical solutions are consistent with numerical results. Fig. 1 provides a summary of our main results. As discussed in Refs. Nica _et al._ (2017); Nica and Si (2021), $s\tau_{3}$ pairing is expected to provide a superconducting state whose single-particle excitations are gapped everywhere on the FS.
The simplest case to consider is well established for the alkaline Fe-selenides. In this case, the splitting of the relevant bands near the FS, which is essentially determined by intra- and inter-orbital hybridization terms, is small when compared to the pairing amplitudes. This is likely the case for a similar proposal in CeCu2Si2 Nica and Si (2021). In light of our preceding discussion, the band-splitting in alkaline Fe-selenides and CeCu2Si2 necessarily leads to gapped bound states and to a finite Josephson current for arbitrary global phase differences between the two leads. However, we expect that these effects are small. Therefore, $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-I$ junctions will typically exhibit bound states with gaps which are much smaller than the bulk gap, although exceptions with gapless states are also possible for arbitrary global phase differences. Likewise, $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-s$ junctions will exhibit static Josephson currents which are determined by band-splitting near the FS, and are therefore small when compared to similar junctions with orbitally-trivial pairing states. While these predictions do not provide unequivocal probes for $s\tau_{3}$ pairing, their combined signatures can provide significant supporting evidence. The remainder of the article is divided into the following sections. In Sec. II, we introduce the microscopic models for the junctions. In Sec. III, we discuss both analytical and numerical solutions for $s\tau_{3}-N-s\tau_{3}$ junctions, and we briefly compare these to junctions with orbitally-trivial pairing. $s\tau_{3}-N-I$ junctions have similar bound states, and are briefly discussed at the end of the section. Section IV is devoted to $s\tau_{3}-N-s$ junctions. In the final Sec. V, we summarize our findings and discuss possible experimental realizations of the junctions. The appendices contain detailed discussions of some of the points presented in the main text. 
In Appendix A, we present the analytical bound state solutions for $s\tau_{3}-N-s\tau_{3}$ junctions in the important limit of degenerate orbitals. Appendix B briefly touches on the effect of unequal coupling of the two orbitals to the metallic part, while Appendix C discusses junctions with orbitally-trivial pairings. Appendices D and E present the analytical solutions for $s\tau_{3}-N-I$ and $s\tau_{3}-N-s$ junctions, respectively, both in the limit of degenerate orbitals. The final Appendix F presents our results for the Josephson current for $s\tau_{3}-N-s\tau_{3}$ junctions, as well as for junctions with orbitally-trivial pairing.

## II Models

The main distinction of the setups considered here is the superconducting leads, which are in a $s\tau_{3}$ pairing state, originally proposed for an effective $d_{xz},d_{yz}$ two-orbital model for the alkaline Fe-selenides Raghu _et al._ (2008); Si and Abrahams (2008); Daghofer _et al._ (2010). The _spin-singlet_ pairing state consists of a $s_{x^{2}y^{2}}(k_{x},k_{y})$ form factor multiplied by a $\tau_{3}$ Pauli matrix in orbital space. Due to the non-trivial $\tau_{3}$ orbital structure, $s\tau_{3}$ pairing transforms as a $B_{1g}$ irreducible representation of the tetragonal $D_{4h}$ point-group, and thus it changes sign under a $\pi/2$ rotation about $z$. In the simplest case, with the pairing amplitude being larger than the band-splitting near the FS, which is due to intra- and inter-orbital hybridization terms in the normal-state Hamiltonian, the Bogoliubov-de Gennes (BdG) spectrum of this state is _fully gapped_. In the more general cases, it is always gapped on the FS. In Ref. Nica _et al._ (2017), $s\tau_{3}$ pairing was stabilized in a realistic five-orbital model for the alkaline Fe-selenides. When considering Josephson junctions, the gapped nature of the $s\tau_{3}$ state implies that the relative phases of the pairing of the two $d_{xz},d_{yz}$ orbital sectors are preserved by the junction.
Furthermore, the common $s_{x^{2}y^{2}}$ form factor has nodes along the $k_{x}=\pm\pi/2$ and $k_{y}=\pm\pi/2$ axes. For FS's away from these axes, we can ignore the momentum-dependence of the pairing, which does not introduce any qualitatively new effects.

### II.1 Junctions along $x$

For all three types of junctions along $x$, we adopt the following model $\displaystyle H=$ $\displaystyle H_{\text{L}}+H_{\text{L-C}}+H_{\text{C}}+H_{\text{C-R}}+H_{\text{R}}.$ (1) $H_{\text{L}}$, $H_{\text{R}}$, and $H_{\text{C}}$ are the bulk Hamiltonians for the left lead, right lead, and center metallic part, respectively. $H_{\text{L-C}}$ and $H_{\text{C-R}}$ contain the terms at the lead-center interfaces and, by convention, are defined on the last and first sites of the L and R leads, respectively. The L lead, which includes the pairing terms, extends over $x[a]\leq(L_{x}-l)/2$, where $L_{x}$ and $l$ are the numbers of sites along the $x$-direction of the entire L-C-R system and of the metallic C part, respectively. The length is defined in units of the lattice spacing $a$.
The most general form of the bulk L lead Hamiltonian for $s\tau_{3}$ pairing is $\displaystyle H_{\text{L}}=$ $\displaystyle H_{\text{L, TB}}+H_{\text{L, Pair}},~{}\text{for}~{}x[a]<\left(\frac{L_{x}-l}{2}-1\right).$ (2) $\displaystyle H_{\text{L, TB}}=$ $\displaystyle\sum_{\alpha\mathbf{r}\sigma}\bigg{[}-\left(t_{x\alpha}c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{x}},\alpha\sigma}+t_{y\alpha}c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{y}},\alpha\sigma}+\text{H.c.}\right)-\mu c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r},\alpha\sigma}$ $\displaystyle+$ $\displaystyle\sum_{\beta\neq\alpha}t_{4}\left(c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{x}}+\hat{\mathbf{y}},\beta\sigma}-c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{x}}-\hat{\mathbf{y}},\beta\sigma}+\text{H.c.}\right)\bigg{]}$ (3) $\displaystyle H_{\text{L, Pair}}=\sum_{\alpha\mathbf{r}}\Delta_{\alpha,\text{L}}\left(c^{{\dagger}}_{\mathbf{r},\alpha\uparrow}c^{{\dagger}}_{\mathbf{r},\alpha\downarrow}-c^{{\dagger}}_{\mathbf{r},\alpha\downarrow}c^{{\dagger}}_{\mathbf{r},\alpha\uparrow}\right)+\text{H.c.}$ (4) The indices $\alpha\in\\{1,2\\}$ stand for the $d_{xz}$ and $d_{yz}$ orbitals, respectively, while the spin is represented by $\sigma\in\\{\uparrow,\downarrow\\}$. $t_{x\alpha}$ and $t_{y\alpha}$ are the nearest-neighbor (NN) hopping coefficients for orbital $\alpha$, while $t_{4}$ is a next-nearest-neighbor (NNN) orbital hybridization, with all coefficients taken to be real. These are based on a simplified two-orbital model introduced in Ref. Raghu _et al._ (2008). Note the absence of terms proportional to a $\tau_{2}$ matrix in the tight-binding part, since these break time-reversal symmetry. The pairing satisfies $\Delta_{1}=-\Delta_{2}=\Delta$. As mentioned previously, we neglect the spatial dependence of the pairing, which is not of immediate importance to the effects considered here.
For periodic boundary conditions (BC's), the BdG Hamiltonian for the L lead and for a single spin sector reads $\displaystyle H_{\text{L}}(\mathbf{k})=\begin{pmatrix}\xi_{0}+\xi_{3}&\xi_{1}&\Delta&0\\\ \xi_{1}&\xi_{0}-\xi_{3}&0&-\Delta\\\ \Delta^{*}&0&-(\xi_{0}+\xi_{3})&-\xi_{1}\\\ 0&-\Delta^{*}&-\xi_{1}&-(\xi_{0}-\xi_{3})\end{pmatrix}$ (5) where $\displaystyle\xi_{0}(\mathbf{k})=$ $\displaystyle(t_{1}+t_{2})\left[\cos(k_{x}a)+\cos(k_{y}a)\right]-\mu$ (6) $\displaystyle\xi_{3}(\mathbf{k})=$ $\displaystyle(t_{1}-t_{2})\left[\cos(k_{x}a)-\cos(k_{y}a)\right]$ (7) $\displaystyle\xi_{1}(\mathbf{k})=$ $\displaystyle-4t_{4}\sin(k_{x}a)\sin(k_{y}a).$ (8) $D_{4h}$ symmetry restricts $\displaystyle t_{x1}=$ $\displaystyle t_{y2}=t_{1}$ (9) $\displaystyle t_{x2}=$ $\displaystyle t_{y1}=t_{2}.$ (10) The three terms $\xi_{0}\tau_{0},\xi_{1}\tau_{1}$, and $\xi_{3}\tau_{3}$ determine the normal state, corresponding to a common dispersion, inter-orbital, and intra-orbital hybridization terms, respectively. Note that terms $\propto\cos(k_{x}a)\cos(k_{y}a)$, which preserve the lattice symmetry, can also be added to the orbital-_diagonal_ $\xi_{0}$ terms Raghu _et al._ (2008). While these can lead to a change in shape of the FS, they preserve the orbital structure of $H_{\text{L}}$, _i.e._ they do not induce any additional band splitting, which is essential to the results of this work. Therefore, we ignore these additional contributions. The important degenerate-orbital limit, which we refer to in the following, occurs when both inter- and intra-orbital hybridization terms are set to zero, with $\xi_{1}=\xi_{3}=0$.
For junctions along $x$ which break the translation symmetry along this direction, this corresponds to $t_{1}-t_{2}=t_{4}=0.$ The positive eigenvalues are $\displaystyle E_{1,2}=\sqrt{\xi^{2}_{0}+\xi^{2}_{3}+\xi^{2}_{1}+|\Delta|^{2}\pm 2\sqrt{\xi^{2}_{0}\left(\xi^{2}_{1}+\xi^{2}_{3}\right)+\xi^{2}_{1}|\Delta|^{2}}}.$ (11) As mentioned previously, we consider cases where the band splittings near either of the two FS’s are smaller than the pairing amplitude, $\displaystyle\sqrt{\xi^{2}_{1}(\mathbf{k})+\xi^{2}_{3}(\mathbf{k})}\big{|}_{\mathbf{k}\in\text{FS}}<|\Delta|,$ (12) which ensures that the leads are gapped in the bulk. The Hamiltonian at the L-C interface is $\displaystyle H_{\text{L-C}}=$ $\displaystyle\sum_{\alpha y}\bigg{\\{}\sum_{\sigma}\left[-\left(V_{\alpha}c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{x}},\sigma}+t_{y\alpha}c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{y}},\alpha\sigma}+\text{H.c.}\right)-\mu c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r},\alpha\sigma}\right]+\left[\Delta_{\alpha}\left(c^{{\dagger}}_{\mathbf{r},\alpha\uparrow}c^{{\dagger}}_{\mathbf{r},\alpha\downarrow}-c^{{\dagger}}_{\mathbf{r},\alpha\downarrow}c^{{\dagger}}_{\mathbf{r},\alpha\uparrow}\right)+\text{H.c.}\right]\bigg{\\}}$ $\displaystyle,~{}$ $\displaystyle\text{for}~{}x[a]=\frac{L_{x}-l}{2}$ (13) As mentioned earlier, the $k$-dependence of the $s(k)$ factor in the $s\tau_{3}$ pairing is unimportant for our purpose, and this is reflected in the form of the pairing term of Eq. 13. Crucially, the terms proportional to $V_{\alpha}$ denote the hybridization of the two orbitals in the L lead to _a single orbital_ in the C part along $x$. Since the point-group symmetry is necessarily broken at the lead-center interfaces, this symmetry does not _a priori_ restrict the values of the $V_{\alpha}$’s. 
In the following, we allow these couplings to take arbitrary values and we comment on the effects of the $d_{xz},d_{yz}$ nature of the orbitals in the leads in Sec. V. The remaining terms are identical to those of $H_{\text{L}}$ while the summation is over the $y$ coordinate along the junction. We typically choose $V_{1}=V_{2}=t_{1}=1$, unless stated otherwise. As already mentioned, we neglect the spatial dependence of the pairing. The Hamiltonian for the bulk of the central (C) part reads $\displaystyle H_{\text{C}}=$ $\displaystyle\sum_{\mathbf{r}\sigma}\left[-t_{1}\left(c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r}+\hat{\mathbf{x}},\sigma}+c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r}+\hat{\mathbf{y}},\sigma}+\text{H.c.}\right)-\mu c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r},\sigma}\right],~{}\text{for}~{}\frac{L_{x}-l}{2}+1\leq x[a]\leq\frac{L_{x}+l+2}{2}-1.$ (14) As mentioned previously, the C part involves a _single channel_ without any pairing. We consider for simplicity nearest-neighbor hopping in this sector, taken to be equal to $t_{1}$. 
In the case of $s\tau_{3}-N-s\tau_{3}$ pairing we choose the Hamiltonians at the C-R junction $H_{\text{C-R}}$ for $x[a]=(L_{x}+l+2)/2$ and for the bulk of the R lead $H_{\text{R}}$ for $x[a]\geq(L_{x}+l+2)/2+1$ to have identical form and coefficients to $H_{\text{L-C}}$ and $H_{\text{L}}$, respectively, with the exception of a possible _global non-zero phase_ $\phi$ for the pairing terms: $\displaystyle\Delta_{\text{L}}=$ $\displaystyle\Delta$ (15) $\displaystyle\Delta_{\text{R}}=$ $\displaystyle\Delta e^{i\phi}.$ (16) For $s\tau_{3}-N-s$, $H_{\text{C-R}}$ and $H_{\text{R}}$ reflect the presence of a single channel in the R lead as $\displaystyle H_{\text{C-R}}=$ $\displaystyle\sum_{y}\bigg{\\{}\sum_{\sigma}\bigg{[}-\left(Vc^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r}-\hat{\mathbf{x}},\sigma}+t_{y}c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r}+\hat{\mathbf{y}},\sigma}+\text{H.c.}\right)$ $\displaystyle-$ $\displaystyle\mu c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r},\sigma}\bigg{]}+\left[\Delta e^{i\phi}\left(c^{{\dagger}}_{\mathbf{r},\uparrow}c^{{\dagger}}_{\mathbf{r},\downarrow}-c^{{\dagger}}_{\mathbf{r},\downarrow}c^{{\dagger}}_{\mathbf{r},\uparrow}\right)+\text{H.c.}\right]\bigg{\\}}$ $\displaystyle,~{}\text{for}~{}x[a]=\frac{L_{x}+l+2}{2}$ (17) $\displaystyle H_{\text{R, TB}}=$ $\displaystyle\sum_{\textbf{r}\sigma}\bigg{[}-t\left(c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r}-\hat{\mathbf{x}},\sigma}+c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r}+\hat{\mathbf{y}},\sigma}+\text{H.c.}\right)$ $\displaystyle-$ $\displaystyle\mu c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r},\sigma}\bigg{]}$ (18) $\displaystyle H_{\text{R, Pair}}=\sum_{\mathbf{r}}\Delta e^{i\phi}\left(c^{{\dagger}}_{\mathbf{r},\uparrow}c^{{\dagger}}_{\mathbf{r},\downarrow}-c^{{\dagger}}_{\mathbf{r},\downarrow}c^{{\dagger}}_{\mathbf{r},\uparrow}\right)+\text{H.c.}.$ (19) For simplicity, we always consider $V=t=1$. 
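As a quick consistency check on the bulk model defined above, the closed-form spectrum of Eq. 11 can be compared against direct diagonalization of the Bloch Hamiltonian in Eq. 5. A minimal numerical sketch (the parameter values are illustrative assumptions, not those used for our figures):

```python
import numpy as np

# Sketch: verify the closed-form BdG eigenvalues of Eq. 11 against direct
# diagonalization of the 4x4 Bloch Hamiltonian of Eq. 5 (real Delta).
# All parameter values below are illustrative assumptions.

def dispersions(kx, ky, t1=1.0, t2=0.6, t4=0.1, mu=-1.0, a=1.0):
    xi0 = (t1 + t2) * (np.cos(kx * a) + np.cos(ky * a)) - mu
    xi3 = (t1 - t2) * (np.cos(kx * a) - np.cos(ky * a))
    xi1 = -4.0 * t4 * np.sin(kx * a) * np.sin(ky * a)
    return xi0, xi3, xi1

def h_bdg(kx, ky, delta=0.4, **kw):
    """Single-spin-sector BdG Hamiltonian, Eq. 5."""
    xi0, xi3, xi1 = dispersions(kx, ky, **kw)
    return np.array([[xi0 + xi3, xi1, delta, 0.0],
                     [xi1, xi0 - xi3, 0.0, -delta],
                     [delta, 0.0, -(xi0 + xi3), -xi1],
                     [0.0, -delta, -xi1, -(xi0 - xi3)]])

def e_closed(kx, ky, delta=0.4, **kw):
    """Positive eigenvalues E_{1,2} of Eq. 11."""
    xi0, xi3, xi1 = dispersions(kx, ky, **kw)
    base = xi0**2 + xi3**2 + xi1**2 + delta**2
    rad = np.sqrt(xi0**2 * (xi1**2 + xi3**2) + xi1**2 * delta**2)
    return np.sqrt(base + 2.0 * rad), np.sqrt(base - 2.0 * rad)

kx, ky = 0.37 * np.pi, 0.81 * np.pi
ev = np.linalg.eigvalsh(h_bdg(kx, ky))
e1, e2 = e_closed(kx, ky)
assert np.allclose(sorted(ev), sorted([-e1, -e2, e2, e1]))
```

The spectrum comes in $\pm E_{1,2}$ pairs, as required by particle-hole symmetry.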
### II.2 Junctions along $z$

For $s\tau_{3}-N-s\tau_{3}$ junctions along $z$ we define the complete Hamiltonian via $\displaystyle\tilde{H}=$ $\displaystyle\tilde{H}_{\text{L}}+\tilde{H}_{\text{L-C}}+\tilde{H}_{\text{C}}+\tilde{H}_{\text{C-R}}+\tilde{H}_{\text{R}}.$ (20) In addition to the terms already present in the pure two-dimensional case, we consider additional NN hopping along $z$ for the L/R leads as $\displaystyle\tilde{H}_{\text{L, TB}}=$ $\displaystyle H_{\text{L, TB}}-\sum_{\alpha\textbf{r}\sigma}t_{z}\left(c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{z}},\alpha\sigma}+\text{H.c.}\right),$ (21) where $H_{\text{L, TB}}$ was defined in Eq. 3, and the sum over lattice sites is now along all of the three axes. We take the lengths of the L, C, and R parts along $z$ to be the same as for the junctions along $x$. For simplicity, we consider only orbital-diagonal, NN hopping along the $z$-direction. Additional terms are of course possible, but they do not qualitatively change our conclusions. For clarity, we ignore the momentum dependence of the two-dimensional pairing, as for junctions along $x$. Similar expressions hold for the bulk of the R lead, with the exception of a global phase difference $\phi$. Likewise, the C part amounts to $\displaystyle\tilde{H}_{\text{C}}=$ $\displaystyle H_{\text{C}}-\sum_{\textbf{r}\sigma}t_{z}\left(c^{{\dagger}}_{\mathbf{r},\sigma}c_{\mathbf{r}+\hat{\mathbf{z}},\sigma}+\text{H.c.}\right).$ (22) We use a NN hopping in the C part which is equal in amplitude to that of the L/R leads, $t_{z}=0.2t_{1}$, unless stated otherwise. The reduced value of $t_{z}$ reflects the tetragonal anisotropy of our target systems. 
Finally, the Hamiltonians for the L-C and C-R interfaces are obtained via a straightforward generalization to hopping along $z$ $\displaystyle\tilde{H}_{\text{L-C}}=$ $\displaystyle\sum_{\alpha,xy}\bigg{\\{}\sum_{\sigma}\bigg{[}\left(-V_{\alpha z}c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{z}},\sigma}-t_{x\alpha}c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{x}},\alpha\sigma}-t_{y\alpha}c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{y}},\alpha\sigma}+\text{H.c.}\right)-\mu c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r},\alpha\sigma}$ $\displaystyle+$ $\displaystyle\sum_{\beta\neq\alpha}t_{4}\left(c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{x}}+\hat{\mathbf{y}},\beta\sigma}-c^{{\dagger}}_{\mathbf{r},\alpha\sigma}c_{\mathbf{r}+\hat{\mathbf{x}}-\hat{\mathbf{y}},\beta\sigma}+\text{H.c.}\right)\bigg{]}+\left[\Delta_{\alpha}\left(c^{{\dagger}}_{\mathbf{r},\alpha\uparrow}c^{{\dagger}}_{\mathbf{r},\alpha\downarrow}-c^{{\dagger}}_{\mathbf{r},\alpha\downarrow}c^{{\dagger}}_{\mathbf{r},\alpha\uparrow}\right)+\text{H.c.}\right]\Bigg{\\}}$ (23) $\tilde{H}_{\text{C-R}}$ can be obtained via a similar generalization of $H_{\text{C-R}}$. Unless stated otherwise, we choose units where the nearest-neighbor hopping and hybridization at the interfaces are $t_{1}=V_{1}=1$ and $\Delta=0.4$.

### II.3 Orbital-exchange symmetry

We consider the $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-I$ setups in the important limit where the two orbitals have identical tight-binding coefficients, zero intra-orbital ($\xi_{3}=0$) and inter-orbital ($\xi_{1}=0$) hybridization, and couple identically to the single orbital of the C part. This limit corresponds to $\displaystyle t_{1}=$ $\displaystyle t_{2}$ (24) $\displaystyle t_{4}=$ $\displaystyle 0$ (25) $\displaystyle V_{1}=$ $\displaystyle V_{2}.$ (26) The two orbitals in the L (or R if appropriate) lead correspond to degenerate bands in the normal state. 
We define a _local_ transformation which acts on the L (and R) spinors $\Psi^{T}=(c_{\mathbf{r},1\uparrow},c_{\mathbf{r},2\uparrow},c^{{\dagger}}_{\mathbf{r},1\downarrow},c^{{\dagger}}_{\mathbf{r},2\downarrow})$ as $\displaystyle\Psi\rightarrow\hat{R}\Psi$ (27) where $\displaystyle\hat{R}=\begin{pmatrix}0&i&0&0\\\ i&0&0&0\\\ 0&0&0&-i\\\ 0&0&-i&0\end{pmatrix}.$ (28) This corresponds to an exchange of the two orbitals followed by a gauge transformation which changes the sign of the pairing $\Delta$. Similarly, in the C part the transformation acting on the single-orbital spinor $\psi^{T}=(c_{\mathbf{r},\uparrow},c^{{\dagger}}_{\mathbf{r},\downarrow})$ is $\displaystyle\psi\rightarrow\hat{S}\psi$ (29) where $\displaystyle\hat{S}=\begin{pmatrix}i&0\\\ 0&-i\\\ \end{pmatrix}.$ (30) This operation, corresponding to a gauge transformation for the C part, is required to compensate for the factors of $i$ in $H_{\text{L-C}}$ and $H_{\text{C-R}}$ if appropriate. Both $\hat{R}$ and $\hat{S}$ are anti-hermitian operators which obey $\displaystyle\hat{R}^{{\dagger}}=$ $\displaystyle-\hat{R}$ (31) $\displaystyle\hat{R}^{2}=$ $\displaystyle-\hat{1}$ (32) and similarly for $\hat{S}$. It is straightforward to define an orbital-exchange operator for the entire lattice model corresponding to each of the junctions. For $s\tau_{3}-N-s\tau_{3}$ junctions it reads $\displaystyle\hat{Q}=\begin{cases}\hat{R},~{}\forall~{}x\in\text{L, R}\\\ \hat{S},~{}\forall~{}x\in\text{C}\end{cases}$ (33) The $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-I$ junction Hamiltonians in the degenerate-orbital limit considered here are invariant under $\hat{Q}$. This orbital-exchange symmetry plays an important role in classifying the electron-like and hole-like solutions encountered in these cases. 
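Since $\hat{R}$ acts identically on every lead site and carries no momentum dependence, its algebraic properties and its commutation with the lead Hamiltonian can be tested directly on the Bloch matrix of Eq. 5. A minimal numerical sketch (illustrative parameter values):

```python
import numpy as np

# Sketch: check Eqs. 31-32 for the orbital-exchange operator R of Eq. 28, and
# its commutation with the Bloch Hamiltonian of Eq. 5. Values are illustrative.

R = np.array([[0, 1j, 0, 0],
              [1j, 0, 0, 0],
              [0, 0, 0, -1j],
              [0, 0, -1j, 0]])

assert np.allclose(R.conj().T, -R)        # Eq. 31: R is anti-hermitian
assert np.allclose(R @ R, -np.eye(4))     # Eq. 32: R^2 = -1

def h_bdg(xi0, xi3, xi1, delta=0.4):
    """Eq. 5 for given dispersions and real Delta."""
    return np.array([[xi0 + xi3, xi1, delta, 0],
                     [xi1, xi0 - xi3, 0, -delta],
                     [delta, 0, -(xi0 + xi3), -xi1],
                     [0, -delta, -xi1, -(xi0 - xi3)]], dtype=complex)

# Degenerate-orbital limit (xi1 = xi3 = 0): H commutes with R.
H = h_bdg(xi0=-0.7, xi3=0.0, xi1=0.0)
assert np.allclose(R @ H, H @ R)

# Inter-orbital hybridization (xi1) preserves the symmetry ...
H = h_bdg(-0.7, 0.0, 0.3)
assert np.allclose(R @ H, H @ R)

# ... while intra-orbital hybridization (xi3) breaks it.
H = h_bdg(-0.7, 0.3, 0.0)
assert not np.allclose(R @ H, H @ R)
```

The last two checks anticipate Sec. III.2: intra-orbital hybridization breaks the orbital-exchange symmetry, while inter-orbital hybridization does not.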
## III $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-I$ junctions

We treat $s\tau_{3}-N-s\tau_{3}$ junctions in some detail in the following and only briefly discuss $s\tau_{3}-N-I$ junctions at the end of the section and in Appendix C, since the latter exhibit similar bound states.

### III.1 Degenerate orbitals

We consider a $s\tau_{3}-N-s\tau_{3}$ junction as defined previously, in the limit of degenerate orbitals (Eqs. 24-26). We find the bound states in the continuum limit by linearizing the BdG equations in the vicinity of points $(\alpha K_{Fx},K_{Fy})$ on the FS Sauls (2018), where $\alpha=\pm 1$. The detailed solution is presented in Appendix A. Here, we summarize some of the most important results. In contrast to the typical single-channel junction, the presence of two orbitals in either lead, which couple to a single orbital in the C part, imposes additional BC’s. Indeed, in the limit considered here, only the linear combination $c_{1}+c_{2}$ of the two orbitals couples to the C part. The corresponding BdG coefficients must vary continuously at the L-C or C-R interfaces, much like in a single-channel junction. By contrast, the anti-symmetric combination does not couple to the C part and thus the corresponding BdG coefficients obey open BC’s at the interface. As shown in Appendix A, the two channels, corresponding to symmetric and anti-symmetric linear combinations, do not decouple in the bulk of the leads, due to the non-trivial matrix structure of $s\tau_{3}$ pairing. Consequently, the continuity and open BC’s cannot be satisfied simultaneously for solutions which involve both electron- and hole-like solutions in the C part. The solutions are either electron- or hole-like in the C part, in contrast to the typical Andreev bound states which mix the two. 
The electron-like bound states with eigenvalues $\displaystyle\frac{\epsilon}{|\Delta|}=$ $\displaystyle\pm\cos\left(\frac{\epsilon l}{2v_{Fx}}+\frac{K_{Fx}l}{2}+\frac{(a-m)\pi}{2}\right)$ (34) have the form $\displaystyle\Psi^{L/R}_{K_{Fy};\text{Electron}}=$ $\displaystyle\left(u^{L/R}_{1e},u^{L/R}_{2e},v^{L/R}_{1e},v^{L/R}_{2e}\right)^{T}$ (35) $\displaystyle u^{L/R}_{1e}=$ $\displaystyle\frac{1}{\sqrt{2}}|A||\Delta|e^{i\theta^{0}_{A}}\sin\left[K_{Fx}x\mp\frac{\epsilon l}{2v_{Fx}}+\frac{(a+m)\pi}{2}\right]$ (36) $\displaystyle u^{L/R}_{2e}=$ $\displaystyle u^{L/R}_{1e}$ (37) $\displaystyle v^{L/R}_{1e}=$ $\displaystyle\frac{1}{\sqrt{2}}|A||\Delta|e^{i\theta^{0}_{A}}\sin\left[K_{Fx}\left(x\pm\frac{l}{2}\right)+\gamma_{L/R}\pi\right]$ (38) $\displaystyle v^{L/R}_{2e}=$ $\displaystyle-v^{L/R}_{1e}$ (39) $\displaystyle\gamma_{L}=$ $\displaystyle a$ (40) $\displaystyle\gamma_{R}=$ $\displaystyle m$ (41) in the L/R leads and $\displaystyle\Psi^{C}_{K_{Fy};\text{Electron}}=$ $\displaystyle\left(u^{C}_{e},v^{C}_{e}\right)^{T}$ (42) $\displaystyle u^{C}_{e}=$ $\displaystyle 2|A||\Delta|e^{i\theta^{0}_{A}}\sin\left[K_{Fx}x+\frac{\epsilon x}{v_{Fx}}+\frac{(a+m)\pi}{2}\right]$ (43) $\displaystyle v^{C}_{e}=$ $\displaystyle 0$ (44) in the C part. $|A|$ is a normalization constant, $\theta^{0}_{A}$ is an arbitrary phase, and $a,m$ are arbitrary integers. Crucially, the C $v$ BdG coefficient is identically zero. 
Similarly, the hole-like solutions with eigenvalues $\displaystyle\frac{\epsilon}{|\Delta|}=$ $\displaystyle\pm\cos\left(\frac{\epsilon l}{2v_{Fx}}-\frac{K_{Fx}l}{2}+\frac{(n-b)\pi}{2}\right)$ (45) are of the form $\displaystyle\Psi^{L/R}_{K_{Fy};\text{Hole}}=$ $\displaystyle\left(u^{L/R}_{1h},u^{L/R}_{2h},v^{L/R}_{1h},v^{L/R}_{2h}\right)^{T}$ (46) $\displaystyle u^{L/R}_{1h}=$ $\displaystyle\frac{1}{\sqrt{2}}|B|e^{i\theta^{0}_{B}}\sin\left[K_{Fx}\left(x\pm\frac{l}{2}\right)+\gamma_{L/R}\pi\right]$ (47) $\displaystyle u^{L/R}_{2h}=$ $\displaystyle-u^{L/R}_{1h}$ (48) $\displaystyle v^{L/R}_{1h}=$ $\displaystyle\frac{1}{\sqrt{2}}|B|e^{i\theta^{0}_{B}}\sin\left[K_{Fx}x\pm\frac{\epsilon l}{2v_{Fx}}+\frac{\pi}{2}+\frac{(n+b)\pi}{2}\right]$ (49) $\displaystyle v^{L/R}_{2h}=$ $\displaystyle v^{L/R}_{1h}$ (50) $\displaystyle\gamma_{L}=$ $\displaystyle b$ (51) $\displaystyle\gamma_{R}=$ $\displaystyle n$ (52) for the L/R leads and $\displaystyle\Psi^{C}_{K_{Fy};\text{Hole}}=$ $\displaystyle\left(u^{C}_{h},v^{C}_{h}\right)^{T}$ (53) $\displaystyle u^{C}_{h}=$ $\displaystyle 0$ (54) $\displaystyle v^{C}_{h}=$ $\displaystyle\sin\left[K_{Fx}x-\frac{\epsilon x}{v_{Fx}}+\frac{(n+b)\pi}{2}\right]$ (55) in the C part. As before, $|B|$ is a normalization constant, $\theta^{0}_{B}$ is an arbitrary phase, while $b,n$ are integers. In contrast to the electron-like solutions, $u^{C}=0$. These detailed analytical solutions present a number of important features. Firstly, the solutions are either electron- or hole-like since the corresponding BdG coefficient $v$ or $u$ vanishes in the C part. This is in clear contrast to a single-channel junction, where electron-like and hole-like solutions are mixed via Andreev scattering. The presence of both open and continuity BC’s at the L-C and C-R interfaces, as well as the non-trivial structure of the pairing, prevents the existence of solutions which mix electron and hole states. 
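The eigenvalue conditions of Eqs. 34 and 45 are transcendental in $\epsilon$ and are conveniently solved numerically. A minimal bisection sketch for the "+" branch of Eq. 34 with $a=m$ (all parameter values are illustrative assumptions):

```python
import numpy as np

# Sketch: solve the transcendental bound-state condition of Eq. 34,
# eps/|Delta| = cos(eps*l/(2*v_Fx) + K_Fx*l/2), i.e. the '+' branch with a = m.
# All parameter values are illustrative assumptions.

def bound_state_energy(delta, l, v_fx, k_fx):
    """Bisection for a root of f(eps) = eps/delta - cos(...) in (-delta, delta)."""
    f = lambda e: e / delta - np.cos(e * l / (2.0 * v_fx) + k_fx * l / 2.0)
    lo, hi = -delta, delta          # f(lo) <= 0 <= f(hi) since |cos| <= 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

delta, l, v_fx = 0.4, 5.0, 1.0
for k_fx in (0.2 * np.pi, 0.5 * np.pi, 0.9 * np.pi):
    eps = bound_state_energy(delta, l, v_fx, k_fx)
    # the root satisfies the defining equation and lies inside the gap
    assert abs(eps / delta - np.cos(eps * l / (2.0 * v_fx) + k_fx * l / 2.0)) < 1e-9
    assert abs(eps) <= delta
```

Because the right-hand side is bounded by $|\Delta|$ in magnitude, a sign change of $f$ is guaranteed on $(-|\Delta|,|\Delta|)$, so a root always exists inside the gap.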
Secondly, electron- and hole-like solutions become gapless when $\displaystyle K_{Fx}l=$ $\displaystyle(m-a)\pi$ (56) $\displaystyle K_{Fx}l=$ $\displaystyle(n-b)\pi,$ (57) respectively, as indicated by Eqs. 34 and 45. This happens when $K_{Fx}l$ is either 0 or a multiple of $\pi$. Recall that the solutions are labeled by the momenta $(K_{Fx},K_{Fy})$ which vary continuously along the FS. Consequently, the gapless conditions can be realized in multiple instances along the FS if the latter and/or junction length $l$ are sufficiently large. Linearizing Eqs. 34 and 45 about these points indicates that the electron- and hole-like states are counter-propagating, as a function of the conserved momentum $K_{Fy}$. Thirdly, the gaplessness of the electron- and hole-like states is protected by the orbital-exchange symmetry defined in Sec. II.3. Indeed, it is straightforward to check that $\displaystyle\hat{R}\Psi^{L/R}_{K_{Fy};\text{Electron}}=$ $\displaystyle i\Psi^{L/R}_{K_{Fy};\text{Electron}}$ (58) $\displaystyle\hat{R}\Psi^{L/R}_{K_{Fy};\text{Hole}}=$ $\displaystyle i\Psi^{L/R}_{K_{Fy};\text{Hole}},$ (59) together with $\displaystyle\Psi^{L/R,{\dagger}}_{K_{Fy};\text{Electron}}\hat{R}=$ $\displaystyle i\Psi^{L/R,{\dagger}}_{K_{Fy};\text{Electron}}.$ (60) The last relation follows from the anti-hermitian nature of $\hat{R}$. Similar relations hold for $\hat{S}$ acting on the C spinors. 
Together, these imply that any operator $\hat{O}$ added to the Hamiltonian of the junction, which commutes with $\hat{R}$ and $\hat{S}$, will not mix electron- and hole-like states via $\displaystyle\braket{\Psi^{{\dagger}}_{\text{Electron}}}{\hat{O}\hat{R}}{\Psi_{\text{Hole}}}=\braket{\Psi^{{\dagger}}_{\text{Electron}}}{\hat{R}\hat{O}}{\Psi_{\text{Hole}}}$ (61) which implies that $\displaystyle(-i)\braket{\Psi^{{\dagger}}_{\text{Electron}}}{\hat{O}}{\Psi_{\text{Hole}}}=$ $\displaystyle(+i)\braket{\Psi^{{\dagger}}_{\text{Electron}}}{\hat{O}}{\Psi_{\text{Hole}}}$ (62) and subsequently that $\displaystyle\braket{\Psi^{{\dagger}}_{\text{Electron}}}{\hat{O}}{\Psi_{\text{Hole}}}=0.$ (63) Note that $\hat{O}$ is not necessarily local at the lattice level. A similar reasoning can be applied to $\hat{S}$. Therefore, the orbital-exchange symmetry protects the gapless electron- and hole-like states, in analogy to the time-reversal operator for spin-polarized edge states in quantum spin Hall systems Kane and Mele (2005), although we stress that the gapless states are not due to any topological property of the bulk in our cases. In practice, the orbital-exchange symmetry is preserved when no intra-orbital hybridization terms (such as $\xi_{3}$ defined in Eq. 5) are present; inter-orbital hybridization terms (such as $\xi_{1}$ in Eq. 5), by contrast, do preserve the symmetry. Fourthly, the bound state spectrum is independent of the global relative phase $\phi$ between the L and R leads, as indicated by Eqs. 34 and 45. This is in clear contrast to the single-channel junctions, where the change in the Andreev bound state spectrum with relative phase is proportional to the static Josephson current Sauls (2018). In the $s\tau_{3}-N-s\tau_{3}$ case, a bound state spectrum which is insensitive to the relative phase indicates the absence of a static Josephson current. 
This is confirmed by microscopic calculations of the Josephson current in the tunneling approximation, discussed in detail in Appendix F, in the degenerate-orbital limit considered here. As shown there, the Josephson currents due to each orbital sector cancel, due essentially to the $\pi$ phase difference between the two components of $s\tau_{3}$ pairing. Next, we proceed to confirm the analytical solutions via numerical solutions of the lattice model introduced in Sec. II, in the degenerate-orbital limit. The results were obtained for a lattice of 100 sites along $x$ with a C part of 5 sites extending along $49\leq x\leq 53$, in units of the lattice constant $a$. In Fig. 2, we show the bound state spectrum as a function of the conserved momentum $k_{y}$, for varying chemical potential $\mu$ and global relative phase difference between the two superconducting leads $\phi$. For $\mu=-3.0$ in panel (a), we illustrate that the spectrum is independent of $\phi$, in accordance with the eigenvalues in Eqs. 34 and 45. This implies the absence of a static Josephson current, as confirmed in Appendix F by a microscopic calculation of the Josephson current in the tunneling limit. Also note that the bound states become gapless and cross twice near $k_{y}a\approx 0.3\pi$. This is consistent with the condition for gapless states as determined by the analytical solution (Eqs. 56, 57). Indeed, for increasing chemical potential $\mu=-2.0$, as shown in panel (b), an additional crossing occurs near $k_{y}=0$. This is due to $K_{Fx}l$ (near $K_{Fy}=0$) increasing by $\pi$ as the FS expands. The crossings in (a) likewise shift to higher $k_{y}$. A similar picture presents itself with increasing chemical potential $\mu=-0.25$ in panel (c). Together, these results are consistent with the analytical solution. We also consider the nature of the bound states. In Fig. 3, we show a close-up of panel (a) of Fig. 2 for $\phi=0$. 
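At fixed $k_{y}$, the lattice problem underlying these spectra reduces to a one-dimensional BdG matrix which is easily diagonalized. The toy sketch below (assumed parameters and much shorter leads than the 100-site lattice used for our figures) assembles the degenerate-orbital junction and checks that its in-gap states are purely electron- or hole-like in the C part:

```python
import numpy as np

# Toy sketch (assumed parameters, not the production setup of the figures):
# s.tau3 - N - s.tau3 junction along x at fixed k_y in the degenerate-orbital
# limit (t1 = t2, t4 = 0, V1 = V2 = t1). Lead sites carry the spinor
# (u1, u2, v1, v2); C sites carry (u, v). Delta has opposite signs on the orbitals.

def build_junction(ky, nl=30, nc=5, t1=1.0, v=1.0, mu=-1.0, delta=0.4):
    sites = ['L'] * nl + ['C'] * nc + ['R'] * nl
    idx, dim = {}, 0
    for x, reg in enumerate(sites):              # index bookkeeping
        for ph in ('u', 'v'):
            for o in ((0, 1) if reg != 'C' else (0,)):
                idx[(x, ph, o)] = dim
                dim += 1
    H = np.zeros((dim, dim))
    eps = -2.0 * t1 * np.cos(ky) - mu            # on-site energy from y-dispersion
    for x, reg in enumerate(sites):
        for o in ((0, 1) if reg != 'C' else (0,)):
            H[idx[(x, 'u', o)], idx[(x, 'u', o)]] = eps
            H[idx[(x, 'v', o)], idx[(x, 'v', o)]] = -eps
        if reg != 'C':                           # on-site s.tau3 pairing in the leads
            for o, sgn in ((0, 1.0), (1, -1.0)):
                H[idx[(x, 'u', o)], idx[(x, 'v', o)]] = sgn * delta
                H[idx[(x, 'v', o)], idx[(x, 'u', o)]] = sgn * delta
    for x in range(len(sites) - 1):              # hopping along x, BdG sign for holes
        a, b = sites[x], sites[x + 1]
        for o1 in ((0, 1) if a != 'C' else (0,)):
            for o2 in ((0, 1) if b != 'C' else (0,)):
                if a != 'C' and b != 'C' and o1 != o2:
                    continue                     # orbital-diagonal hopping in the leads
                amp = t1 if (a != 'C') == (b != 'C') else v
                for ph, s in (('u', -1.0), ('v', 1.0)):
                    H[idx[(x, ph, o1)], idx[(x + 1, ph, o2)]] = s * amp
                    H[idx[(x + 1, ph, o2)], idx[(x, ph, o1)]] = s * amp
    return H, idx, sites

found = 0
for ky in (0.5 * np.pi, 0.6 * np.pi):
    H, idx, sites = build_junction(ky)
    evals, evecs = np.linalg.eigh(H)
    c_u = [idx[(x, 'u', 0)] for x, r in enumerate(sites) if r == 'C']
    c_v = [idx[(x, 'v', 0)] for x, r in enumerate(sites) if r == 'C']
    for i, e in enumerate(evals):
        if abs(e) >= 0.4 - 1e-6:
            continue                             # keep only in-gap junction states
        found += 1
        if min(abs(e - evals[j]) for j in range(len(evals)) if j != i) < 1e-6:
            continue                             # eigh may mix (near-)degenerate pairs
        wu = float(np.sum(evecs[c_u, i] ** 2))   # electron weight in C
        wv = float(np.sum(evecs[c_v, i] ** 2))   # hole weight in C
        assert min(wu, wv) < 1e-8 * max(wu, wv) + 1e-16
assert found > 0
```

In this limit the symmetric orbital combination, the C electron amplitude, and the antisymmetric hole amplitude form one decoupled sector (and vice versa for the other sector), so the electron or hole weight in C vanishes identically, in agreement with Eqs. 35 and 46.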
One pair of hole- and electron-like states in the vicinity of a crossing are marked by a square and triangle, respectively. To elucidate the nature of these states, in Fig. 4 we illustrate the real parts of the BdG coefficients of the hole-like state marked in Fig. 3 by a square. Note that the imaginary parts are trivially zero and are not shown. The $u$ BdG coefficients shown in panel (a) are consistent with the analytical solution in Eq. 46. Indeed, $u^{C}$ vanishes in the C part for $49\leq x[a]\leq 53$, as expected for a hole-like solution. Furthermore, $u^{L/R}_{1}$ and $u^{L/R}_{2}$ both vanish at the L-C and C-R interfaces, respectively, and their signs are consistent with Eq. 46 for $n-b$ an odd integer. In panel (b), the $v$ coefficients are finite and continuous across the junction. We conclude that the numerical solution is consistent with the hole-like solution indicated by the analytical results. A similar picture emerges for the electron-like state marked with a triangle in Fig. 3. In Fig. 5, we show the real parts of the BdG coefficients of this state across the junction, and compare these with the analytical solution in Eq. 35. $u^{C}$ is now finite, while $v^{C}$ vanishes in the C region, as expected. Furthermore, the signs of $u^{L/R}_{1}$ and $u^{L/R}_{2}$ are consistent with a solution with $m-a$ an odd integer. Together with the BdG coefficients for the hole-like state, these numerical solutions confirm the analytical results. We briefly discuss the case for a $s\tau_{3}-N-s\tau_{3}$ junction along $z$ for the model introduced in Sec. II.2. The analytical solution is similar to the junction along $x$ with the exception that both $k_{x},k_{y}$ are conserved quantities. We therefore linearize about the pair of points $(K_{Fx},K_{Fy},\alpha K_{Fz})$ and obtain the bound-state spectrum as in Eqs. 34 and 45, which now extends along the $z$-direction. 
The eigenvalues are obtained via the replacement $K_{Fx}\rightarrow K_{Fz}.$ We comment on the effects of unequal coupling of the two orbitals in either lead to the C part. In this case, both symmetric and anti-symmetric linear combinations of the two orbitals (Appendix A) couple to the C part, in contrast to the solutions presented here. As shown by numerical results presented in Appendix B, unequal couplings induce a gap, as well as a spectrum which depends on the global phase difference $\phi$. The first can be understood via a breaking of the orbital-exchange symmetry, while the second can be confirmed via a direct calculation of the Josephson current, as discussed in Appendix F. Finally, we note that the surprising behavior for $s\tau_{3}$ junctions coupled to a single orbital in the C part differs dramatically from that of junctions consisting of leads in a pairing state which has trivial orbital structure. We consider junctions with such a state, which we call $s\tau_{0}$, where the two orbitals have identical pairing functions, including the signs. For degenerate orbitals, we can apply the same analysis as in the $s\tau_{3}-N-s\tau_{3}$ case (Appendix A) and we again find that the symmetric linear combination of the two orbitals in either lead couples across the junction. However, the anti-symmetric linear combination is entirely decoupled along the entire junction and, in contrast to $s\tau_{3}$ junctions, imposes no additional constraints on the bound-state solutions. Thus $s\tau_{0}$ junctions for degenerate orbitals behave essentially like conventional single-channel junctions. This is illustrated in Appendix C.

Figure 2: Bound state spectrum of a $s\tau_{3}-N-s\tau_{3}$ junction along $x$, as defined in Sec. II.1, with degenerate orbitals in the L and R leads which also couple identically to a single orbital in the N part. 
(a) Eigenvalues for a fixed chemical potential $\mu=-3.0$ and varying global relative phase $\phi$ between the R and L superconducting leads, as functions of the conserved momentum $k_{y}$. Note that the spectrum is insensitive to $\phi$, in accordance with the analytical solution in Eqs. 34 and 45. Also note the presence of gapless states near $k_{y}a\approx 0.3\pi$. (b) Same as (a) for fixed $\mu=-2.0$ and $\phi=0.0$. Although not shown, the spectrum is also insensitive to $\phi$. There is an additional crossing of gapless states near $k_{y}a=0.0$. This is consistent with the condition for gapless states obtained from the analytical solutions in Eqs. 56-57. With increasing $\mu$ the FS grows, allowing $K_{Fx}l$ to reach values which are multiples of $\pi$ near $k_{y}=0$. (c) Same as (b) for $\mu=-0.25$ and $\phi=0$. Note that the number of gapless crossings increases with the size of the FS, as predicted by the analytical solution.

Figure 3: Close-up of panel (a) of Fig. 2. The square and triangle identify the hole- and electron-like states illustrated in Figs. 4 and 5, respectively.

Figure 4: Real parts of the BdG coefficients for a hole-like solution as a function of $x$ across the junction, for the momentum indicated by the square in Fig. 3. These are obtained from the numerical calculations for $\mu=-3.0,\phi=0$. The imaginary parts vanish identically and are not shown. (a) The $u$ coefficients are consistent with Eq. 46, as $u^{C}=0$ (blue squares) in the C part extending from $49\leq x[a]\leq 53$, as expected for a pure hole-like state. Similarly, $u^{L/R}_{1}$ and $u^{L/R}_{2}$ vanish at the L-C and C-R interfaces, respectively, in accordance with the analytical calculations. The opposite signs of these coefficients within the L and R leads and across the junction are also consistent with the analytical solution with $n-b$ an odd number. (b) The $v$ coefficients are finite throughout, including the C part, and are continuous across the junction. 
These are also consistent with the analytical solution in Eq. 46.

Figure 5: Real parts of the BdG coefficients for an electron-like solution as a function of $x$ across the junction, for the momentum indicated by the triangle in Fig. 3. As in Fig. 4, these are obtained from the numerical calculations for $\mu=-3.0,\phi=0$, and the imaginary parts vanish identically. (a) $u^{C}$ is finite, as expected for a pure electron-like state, and in agreement with the analytical solution in Eq. 35. The remaining $u$ coefficients are also consistent with the latter. (b) $v^{C}=0$ again indicates a pure electron-like state. $v^{L/R}_{1}$ and $v^{L/R}_{2}$ vanish at the L-C and C-R interfaces, and their signs are consistent with $m-a$ an odd integer, as indicated by Eq. 35.

### III.2 Non-degenerate orbitals

So far, we have focused on the special case of degenerate orbitals for a $s\tau_{3}-N-s\tau_{3}$ junction along $x$. We now consider more realistic cases, where the L and R leads include terms which lift the degeneracy between the two orbitals. As discussed in Sec. II.1, we consider two types of symmetry-allowed terms which can lift the orbital degeneracy: (i) intra-orbital hybridization terms corresponding to $\xi_{3}\tau_{3}$ in the non-pairing part of the bulk lead Hamiltonians (Eq. 5), and (ii) inter-orbital hybridization terms corresponding to $\xi_{1}\tau_{1}$ in the same model. We examine the effects of each of these terms on the bound state spectrum separately. We first consider intra-orbital hybridization terms only by fixing $t_{1}=1$ and allowing $\delta t=t_{1}-t_{2}$ to vary (Eq. 7). All other parameters are identical to those in the top panel of Fig. 2. In Fig. 6 (a) we show the bound state spectrum for the orbital-degenerate case with $\delta t=0$ together with a case with significant intra-orbital hybridization corresponding to $\delta t=0.8$. The gapless states for the degenerate orbital case become gapped with the inclusion of intra-orbital hybridization terms. 
These break the orbital-exchange symmetry and thus mix the electron- and hole-like states of the orbital-degenerate case, in accordance with Sec. III.1. In panel (b), we show the spectrum for $\delta t=0.8$ as a function of the global phase difference $\phi$. The eigenvalues change with $\phi$ and we recover gapless states at $\pi$ phase difference, as is the case for the typical single-channel junction. The evolution of the bound state spectrum with $\phi$ is also confirmed via a calculation of the Josephson current in Appendix F. We now consider the effect of the inter-orbital hybridization terms only, corresponding to $\xi_{1}$ in Eq. 5. All other parameters are the same as in previous cases. As shown in Fig. 7 (a), the bound state spectrum remains gapless under the inclusion of inter-orbital hybridization, in contrast to intra-orbital hybridization terms. This can also be understood via the orbital-exchange symmetry, since the inter-orbital hybridization terms preserve the former. Similarly, the spectrum is invariant under changes in the global phase difference $\phi$, as indicated in Fig. 7 (b). It is then clear that, due to intra-orbital hybridization terms, a general $s\tau_{3}-N-s\tau_{3}$ junction along $x$ exhibits the usual features of a single-channel junction. As a final example, we consider the effects of intra- and inter-orbital hybridization on the junction along $z$. The spectrum in this case can be labeled by both $k_{x},k_{y}$ conserved momenta. For general values of the latter, the spectrum is gapped, in analogy to the junction along $x$. However, along the diagonals of the two-dimensional Brillouin zone $|k_{x}|=|k_{y}|$, the intra-orbital hybridization terms vanish, and the bound states again become gapless. This is illustrated in Fig. 8, and can be understood via the orbital-exchange symmetry, which is recovered along the diagonals. 
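The recovery of the symmetry along the diagonals follows directly from Eq. 7: $\xi_{3}\propto\cos(k_{x}a)-\cos(k_{y}a)$ vanishes identically for $|k_{x}|=|k_{y}|$. A one-line numerical check (illustrative hopping values):

```python
import numpy as np

# Sketch: the intra-orbital term xi3 of Eq. 7 vanishes on the Brillouin-zone
# diagonals |kx| = |ky|, so the orbital-exchange symmetry of Sec. II.3 is
# recovered there even for t1 != t2. Hopping values are illustrative (a = 1).

t1, t2 = 1.0, 0.6                           # delta_t = t1 - t2 = 0.4
xi3 = lambda kx, ky: (t1 - t2) * (np.cos(kx) - np.cos(ky))

k = np.linspace(-np.pi, np.pi, 101)
assert np.allclose(xi3(k, k), 0.0)          # kx = +ky diagonal
assert np.allclose(xi3(k, -k), 0.0)         # kx = -ky diagonal
assert not np.allclose(xi3(k, 0 * k), 0.0)  # generic momenta: xi3 finite
```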
Thus, in contrast to the junction along $x$, the junction along $z$ can exhibit gapless edge states for a rather general model of $s\tau_{3}$ pairing.

Figure 6: The effects of intra-orbital hybridization terms corresponding to $\xi_{3}$ in the bulk of the L/R leads (Eq. 5) on the bound state spectrum for a $s\tau_{3}-N-s\tau_{3}$ junction along $x$. We introduce these terms by fixing $t_{1}=1$ and allowing $\delta t=t_{1}-t_{2}$ to vary (Eq. 7). The inter-orbital hybridization corresponding to $\xi_{1}$ in Eq. 5 is set to zero. All other parameters are the same as in Fig. 2 (a) and $\phi=0$. The two orbitals couple identically to the C part. (a) A gap develops as intra-orbital hybridization terms are introduced. For clarity, we show the case for degenerate orbitals $\delta t=0$ (green rhombi), and a case with extreme intra-orbital hybridization for $\delta t=0.8$ (red squares). Note that a finite gap opens for any $\delta t\neq 0$. Intra-orbital hybridization terms break the orbital-exchange symmetry discussed in Sec. II.3, allowing the electron- and hole-like states in Eqs. 35 and 46 to mix via Andreev scattering. (b) Effect of a global phase difference $\phi$ across the junction on the bound state spectrum with finite intra-orbital hybridization $\delta t=0.8$. In contrast to the degenerate case shown in Fig. 2 (a), the inclusion of finite intra-orbital hybridization allows the spectrum to change with $\phi$. Note that the states become gapless at $\phi=\pi$, much like in a conventional single-channel junction. The sensitivity of the spectrum to $\phi$ is also confirmed via a calculation of the Josephson current in Appendix F.

Figure 7: The effects of an inter-orbital hybridization term, corresponding to $\xi_{1}$ (Eq. 8) in the bulk L and R leads, on the bound state spectrum for a $s\tau_{3}-N-s\tau_{3}$ junction along $x$. The intra-orbital hybridization terms corresponding to $\xi_{3}$ are set to zero. The remaining parameters are the same as in Fig. 6. 
(a) The bound states remain gapless as $t_{4}$ (Eq. 8) increases, in contrast to the intra-orbital hybridization case in Fig. 6. The inter-orbital hybridization preserves the orbital-exchange symmetry defined in Sec. II.3 and thus does not mix the electron- and hole-like states, which remain adiabatically connected with the solutions of the orbital-degenerate junction in Eqs. 35 and 46. (b) The spectrum is insensitive to a global phase difference $\phi$, in contrast to leads with intra-orbital hybridization. These results are also confirmed via a calculation of the Josephson current (Appendix F). Figure 8: Bound state spectrum for a $s\tau_{3}-N-s\tau_{3}$ junction along $z$, calculated from the model in Sec. II.2, with finite intra- and inter-orbital hybridization. We use $\delta t=0.4$ and $4t_{4}=0.4$, $\mu=-3.0$ with a NN $t_{z}=0.2$ along $z$. Note that both $k_{x}$ and $k_{y}$ are conserved in this case. In contrast to the junction along $x$, gapless states are possible here even when intra-orbital hybridization is included. The latter vanishes for $|k_{x}|=|k_{y}|$. For these momenta, we recover the orbital-exchange symmetry defined in Sec. II.3. (a) Bound state eigenvalues as a function of $k_{y}$ for fixed $k_{x}=0$. The intra-orbital hybridization terms are finite for these momenta, and the states are gapped, as when the junction is along $x$ (Fig. 6). (b) Same as (a) for $k_{x}=0.4k_{y}$. The gaps shrink but remain finite. (c) Same as (a) and (b) along the diagonal of the 2D Brillouin zone for $k_{x}=k_{y}$. The gap closes as the model recovers the orbital-exchange symmetry, and the spectrum is similar to that of the junction along $x$ with inter-orbital hybridization only (Fig. 7). The bound-state spectra of $s\tau_{3}-N-I$ junctions are very similar to those of $s\tau_{3}-N-s\tau_{3}$ junctions.
In the limit of degenerate orbitals, the bound states of the former are also electron- or hole-like, and they also become gapless for a set of conserved momenta. The analytical solutions in this limit are discussed in Appendix C. The evolution with either intra- or inter-orbital hybridization is also very similar to that in $s\tau_{3}-N-s\tau_{3}$ junctions and, as such, will not be discussed here in detail. ## IV $s\tau_{3}-N-s$ junctions We consider a $s\tau_{3}-N-s$ junction, where the L lead is in a $s\tau_{3}$ pairing state, while the R lead is in a single-channel $s$-wave state. As in the case of $s\tau_{3}-N-s\tau_{3}$ junctions, both orbitals on the L side couple identically to a single orbital in the C metallic part, which in turn couples to a single orbital in the trivially-paired R lead. The model is described in Sec. II.1. We first solve the system in the limit of degenerate orbitals in the L lead, which couple identically to the single C orbital. The analytical solution is presented in detail in Appendix E. Here, we summarize some of the most important results. In contrast to the $s\tau_{3}-N-s\tau_{3}$ case, only the L-C interface is subject to both open and continuity BC’s. Furthermore, as explained in Sec. II.3, this model does not preserve the orbital-exchange symmetry, due to the presence of a single channel in the R lead, even when the two orbitals are degenerate. Thus, in contrast to the $s\tau_{3}-N-s\tau_{3}$ case, the $s\tau_{3}-N-s$ junction allows for Andreev scattering which mixes electron- and hole-like states, leading to bound states which are generally gapped. However, in the degenerate-orbital limit, the bound states of the $s\tau_{3}-N-s$ junction are invariant under a change in the global relative phase $\phi$. The analytical results are confirmed by the numerical solutions. In Fig. 9 (a), we show the bound state spectrum for a $s\tau_{3}-N-s$ junction, with parameters similar to those of Fig. 2, for two values of $\mu$.
We see the presence of a gap in both instances, a gap which also occurs for any value of $\mu$. An analysis of the eigenstates, not shown here for brevity, likewise indicates that the C spinors involve a mixture of electron- and hole-like states. Remarkably, the spectrum is invariant under a change in global phase difference $\phi$ in the degenerate-orbital limit, as shown in Fig. 9 (b). The spectrum does change with $\phi$ once intra-orbital hybridization terms are introduced, as in the case of $s\tau_{3}-N-s\tau_{3}$ junctions. The bound-state variation with $\phi$ is also confirmed via a calculation of the Josephson current in Appendix F. Figure 9: Bound state spectrum for a $s\tau_{3}-N-s$ junction along $x$. In all cases, the two orbitals in the L lead couple identically to the single C orbital. All other parameters, unless explicitly stated, are the same as in Fig. 2. (a) Evolution of the spectrum for degenerate orbitals in the L lead with chemical potential $\mu$. Note that the states are gapped for both values shown here, as well as for any $\mu$. This stands in contrast to the $s\tau_{3}-N-s\tau_{3}$ junction, as shown in Fig. 2. The difference can be attributed to the orbital-exchange symmetry, which is broken for the $s\tau_{3}-N-s$ junction, thus allowing Andreev scattering which mixes electron- and hole-like states. (b) The spectrum for the degenerate-orbital case of panel (a) with $\mu=-3.0$ is invariant under a change in global relative phase $\phi$. This is due to the absence of intra-orbital hybridization terms, as is also the case with the $s\tau_{3}-N-s\tau_{3}$ junctions in Fig. 2 (a). (c) The spectrum does change with $\phi$ once intra-orbital hybridization terms are introduced for the L lead, in a manner analogous to Fig. 6. This is also confirmed by the calculation of the Josephson current in Appendix F. ## V Summary and Discussion We now summarize our main results and subsequently discuss possible experimental realizations.
### V.1 Summary of results We studied Josephson junctions where the left lead is in an unconventional $s\tau_{3}$ gapped pairing state which involves two orbitals with opposite-sign pairing. Similar pairing states were advanced as promising candidates in alkaline Fe-selenides Nica _et al._ (2017), where $d_{xz},d_{yz}$ orbitals provided the two-dimensional manifold for $s\tau_{3}$, as well as in the heavy-fermion superconductor CeCu2Si2 Nica and Si (2021). We considered junctions where both orbitals of the left lead couple to a single orbital in an intermediate metallic part. Such arrangements, although unusual, allow for a richer phenomenology when compared to the more typical junctions without cross-coupling. We considered three different types of junctions, referred to as $s\tau_{3}-N-s\tau_{3}$, $s\tau_{3}-N-I$, and $s\tau_{3}-N-s$, where the right lead is in a two-orbital $s\tau_{3}$ pairing state, an insulating state, and a trivial single-orbital $s$-wave pairing state, respectively. We discussed both two-dimensional arrangements with junctions along the $x$-axis and junctions along $z$. We studied the three types of junctions in the important limit where the two orbitals of the $s\tau_{3}$ pairing state, in the left and, when appropriate, right leads, are degenerate and couple identically to a single orbital of the central part. Our most striking results can be grouped under two headings. One is the emergence of purely electron- and hole-like bound states, which become gapless and degenerate for a set of conserved momenta. The other is that the bound state spectrum is invariant under a change in the global phase difference between left and right superconducting leads. In both aspects, the junctions differ sharply from the typical single- or multi-channel junctions without cross-coupling. The first of these two effects occurs for $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-I$ junctions along $x$.
Here, the bound states differ from those of the typical single-channel junction, where Andreev scattering mixes purely electron- and hole-like states. The absence of Andreev scattering in these cases, which leads to gapless electron- and hole-like states, is due to the combined effects of the non-trivial orbital structure of $s\tau_{3}$ pairing and the coupling to a single C orbital. We also find that these gapless bound states are protected by an orbital-exchange symmetry. However, we stress that this protection is not due to any topological property of the junction, unlike the well-known gapless states which occur in a single-channel $\pi$ junction. The second property, the invariance of the bound state spectrum with global phase difference, manifests in $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-s$ junctions along $x$. The invariance of the bound state spectrum points toward a vanishing Josephson current Sauls (2018). We confirmed that this is the case by calculating the Josephson current directly in the tunneling limit. We stress that $s\tau_{3}-N-s\tau_{3}$ junctions in the limit of degenerate orbitals differ dramatically from $s\tau_{0}-N-s\tau_{0}$ junctions in the same limit. The latter involve orbitally-trivial pairing states which are identical for both orbitals, including the sign. In clear contrast to the $s\tau_{3}-N-s\tau_{3}$ junctions, the $s\tau_{0}-N-s\tau_{0}$ bound state spectrum is essentially that of a typical single-channel junction. We also considered deviations from the degenerate-orbital limit by introducing intra- and inter-orbital hybridization terms for the normal part of the leads. These terms are allowed by tetragonal symmetry for $d_{xz},d_{yz}$ orbitals in the context of alkaline Fe-selenides Raghu _et al._ (2008).
We found that intra-orbital hybridization terms break the orbital-exchange symmetry and lead to Andreev scattering which mixes electron- and hole-like states, gapping the bound state spectrum of $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-I$ junctions along $x$. By contrast, inter-orbital hybridization terms do not break the orbital-exchange symmetry and preserve the gapless electron- and hole-like states in the cases mentioned above. The bound states thus behave as in the intra-orbital hybridization case when both intra- and inter-orbital hybridization terms are present. Important exceptions can occur, as exemplified by $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-I$ junctions along $z$ with dominant NN hopping along $z$. In this case, the bound states become gapless at the zeroes of the intra-orbital hybridization terms, along the diagonal of the two-dimensional Brillouin zone in our case, since the junction recovers orbital-exchange symmetry at these points. We also considered the effects of lifting the orbital degeneracy in the bulk of the leads when a finite global phase difference across the junction is present. Here again we find that the bound state spectrum becomes sensitive to the phase when intra-orbital hybridization terms are in effect for both $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-s$ junctions. This is also confirmed by calculations of the Josephson current. In contrast, under the addition of inter-orbital hybridization terms only, the bound state spectrum remains invariant, also in accordance with a vanishing Josephson current. The two crucial aspects behind the striking behavior of all of the junctions considered here are the two-orbital matrix-pairing $s\tau_{3}$ in the leads, together with the identical coupling of both of these orbitals to a single orbital in the central metallic parts.
These setups ensure that the two orbital sectors of $s\tau_{3}$ pairing are entangled in a non-trivial way near the interface with the metallic part, even when the two sectors are decoupled far into the bulk of the leads. This is to be contrasted with cases where the pairing has a trivial matrix structure, as exemplified by $s\tau_{0}$ pairing. In those cases, the trivial pairing matrix structure means that two orbitals which are decoupled in the bulk of the leads remain so near the interfaces with the metallic part. Likewise, $s\tau_{3}$ junctions with central metallic regions that also involve two orbitals, coupling in a “one-to-one” manner with those of $s\tau_{3}$, would fail to fully capture the effects of the non-trivial matrix structure of $s\tau_{3}$. As exemplified by $s\tau_{3}-N-I$ junctions, the interplay between non-trivial interfaces and non-trivial matrix-pairing suggests that exotic edge states could also be engineered for materials which likely involve $s\tau_{3}$ pairing, such as the alkaline Fe-selenides. ### V.2 Experimental signatures Having given a summary of our main results, we now discuss their potential experimental signatures. We stress that, in the limit of degenerate orbitals, junctions involving $s\tau_{3}$ pairing are dramatically different from the typical single-channel junction as well as from junctions involving pairing states with trivial orbital structure, such as the $s\tau_{0}$ states. However, any realization of the $s\tau_{3}$ junctions proposed here will involve deviations from the ideal case. We argue that such deviations can be made small, in the sense discussed below, allowing a partial observation of the striking properties of the ideal case.
We consider three likely deviations from the ideal case in the form of: (i) unequal coupling of the two orbitals to the single orbital of the metallic part, (ii) lifting of the degeneracy of the two orbitals in the leads via symmetry-allowed intra- and inter-orbital hybridization, and (iii) weak disorder in the junction. For (i), unequal coupling to the C part gaps the electron- and hole-like states and induces a nonzero Josephson effect, even when the orbitals of the leads are degenerate. A single-orbital intermediate region amounts to an effectively single-band system which remains non-superconducting for temperatures above the critical temperature of $s\tau_{3}$, which can be estimated from the transition temperature in alkaline Fe-selenides Lee (2017). The condition of equal coupling to the center part is unlikely to occur for junctions along the in-plane axes of the tetragonal Fe-selenides, i.e., along $x$ or $y$, since $d_{xz}$ and $d_{yz}$ orbitals typically cannot couple identically to any other in-plane orbital. However, junctions along $z$ provide better candidates in this context, as the lobes of $d_{xz}$ and $d_{yz}$ are likely to have comparable overlap with such orbitals as $p_{z}$, $d_{z^{2}}$, and $d_{xy}$ along the $z$ axis. A second deviation from the ideal case occurs due to the presence of both intra- and inter-orbital hybridization terms in the leads for junctions involving alkaline Fe-selenide leads. As we have shown, such terms will lead to gapping of the electron- and hole-like states and will furthermore ensure that a static Josephson effect is present. Note, however, that, as proposed in Refs. Nica _et al._ (2017); Nica and Si (2021), and as reiterated in our text, $s\tau_{3}$ pairing induces a full gap in the bulk of the leads when the pairing amplitude exceeds the band splitting near the FS, which is governed by the intra- and inter-orbital hybridization terms.
We expect that this limits the effects of band splitting in the case of the junctions, such that the induced gap for the bound states will be small compared to the bulk gap in the leads. Similarly, the Josephson current, which depends on the strength of the intra-orbital hybridization terms, will likely be finite but strongly suppressed, as compared to a similar setup for pairing states with trivial orbital structure. However, it should be borne in mind that exceptions to this behavior can occur for junctions along $z$. The intra-orbital hybridization terms can vanish by symmetry along certain directions in the two-dimensional Brillouin zone. Here, the junction recovers the orbital-exchange symmetry and the gapless bound states. We do expect a finite Josephson current in these cases, due to the contribution of bulk states away from these points. The third source of deviation from the ideal case is the presence of disorder in the junction. The gapless bound states in the ideal degenerate cases are not due to any non-trivial topology, although they are protected by the orbital-exchange symmetry. Therefore, these states are not robust against disorder. Due to these inherent limitations, the junctions discussed here cannot singlehandedly indicate $s\tau_{3}$ pairing, but they can nonetheless provide strong supporting evidence. ## VI Acknowledgements OE acknowledges support from National Science Foundation Award No. DMR 1904716. EMN is supported by an ASU startup grant. Work at Rice has been supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0018197 and the Robert A. Welch Foundation under Grant No. C-1411 (Q.S.). One of us (Q.S.) acknowledges the hospitality of the Aspen Center for Physics, which is supported by NSF grant No. PHY-1607611.
## Appendix A Analytical solutions of the $s\tau_{3}-N-s\tau_{3}$ junction with degenerate orbitals We consider a $s\tau_{3}-N-s\tau_{3}$ junction with degenerate orbitals/bands in the normal states, corresponding to $t_{1}=t_{2},t_{4}=0$ in Eqs. 9, 8, together with identical coupling to the single channel of the C part, corresponding to $V_{1}=V_{2}$ in Eq. 13. In this limit, only the symmetric linear combination of the two orbitals at the L-C and C-R interfaces couples to the single orbital of the C part. We apply a local unitary transformation to the L lead $\displaystyle\begin{pmatrix}c_{\mathbf{r},I,\uparrow}\\\ c_{\mathbf{r},II,\uparrow}\\\ c^{{\dagger}}_{\mathbf{r},I,\downarrow}\\\ c^{{\dagger}}_{\mathbf{r},II,\downarrow}\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1&0&0\\\ 1&-1&0&0\\\ 0&0&1&1\\\ 0&0&1&-1\end{pmatrix}\begin{pmatrix}c_{\mathbf{r},1,\uparrow}\\\ c_{\mathbf{r},2,\uparrow}\\\ c^{{\dagger}}_{\mathbf{r},1,\downarrow}\\\ c^{{\dagger}}_{\mathbf{r},2,\downarrow}\end{pmatrix}.$ (64) Under this transformation, only channel $I$, corresponding to the symmetric linear combination of the operators for the two orbitals, couples to the single orbital of the C part. Furthermore, the transformation leaves the orbital-diagonal tight-binding part of the Hamiltonian invariant (corresponding to $\xi_{0}$ in Eq. 5), while it transforms the $s\tau_{3}$ pairing as $\displaystyle H_{\text{L, Pair}}=$ $\displaystyle\sum_{\mathbf{r}}\Delta\left(c^{{\dagger}}_{\mathbf{r},1\uparrow}c^{{\dagger}}_{\mathbf{r},1\downarrow}-c^{{\dagger}}_{\mathbf{r},2\uparrow}c^{{\dagger}}_{\mathbf{r},2\downarrow}\right)+\text{H.c.}$ $\displaystyle\rightarrow\sum_{\mathbf{r}}\Delta\left(c^{{\dagger}}_{\mathbf{r},I\uparrow}c^{{\dagger}}_{\mathbf{r},II\downarrow}-c^{{\dagger}}_{\mathbf{r},II\uparrow}c^{{\dagger}}_{\mathbf{r},I\downarrow}\right)\text{+H.c.},$ (65) where for simplicity we only consider one spin sector.
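In orbital space, Eq. 65 states that the rotation of Eq. 64, a Hadamard matrix, conjugates the pairing matrix $\Delta\tau_{3}$ into $\Delta\tau_{1}$, i.e., purely inter-channel pairing. This can be checked numerically in a few lines (an illustrative sketch; the value of $\Delta$ is arbitrary):

```python
import numpy as np

# Orbital-space check of Eq. 65: the rotation to symmetric/antisymmetric
# channels (Eq. 64) maps Delta*tau3 onto Delta*tau1.
tau1 = np.array([[0, 1], [1, 0]])
tau3 = np.array([[1, 0], [0, -1]])
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # channel I/II rotation

Delta = 0.4                                    # arbitrary pairing amplitude
rotated = U @ (Delta * tau3) @ U.conj().T
assert np.allclose(rotated, Delta * tau1)      # purely inter-channel pairing
```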
The model thus reduces to solving for channel $I$ as in a typical, single-channel Josephson junction, while channel $II$ must obey open BC’s at the L-C interface. We proceed to find solutions of the model in the transformed basis which lie below the bulk gap, corresponding to solutions which decay into the L lead. We first apply a Fourier transform along $y$ and linearize the BdG equations in the vicinity of two points $(\alpha K_{Fx}+q_{x},K_{Fy}+q_{y})$ on the FS, with $q_{x},q_{y}$ small, and $\alpha=\pm 1$. This is in analogy to the typical single-channel solution Sauls (2018). After identifying $q_{x}\rightarrow-i\partial_{x}$ in the continuum limit, we separate the BdG ansatz of the L lead into fast- and slowly-varying parts as $\displaystyle\begin{pmatrix}\tilde{u}_{I,k_{y};L}\\\ \tilde{u}_{II,k_{y};L}\\\ \tilde{v}_{I,k_{y};L}\\\ \tilde{v}_{II,k_{y};L}\end{pmatrix}=$ $\displaystyle e^{iK_{Fy}y}e^{iK_{Fx}x}e^{\kappa\left(x+\frac{l}{2}\right)}\begin{pmatrix}u_{\alpha=1,I;L}\\\ u_{\alpha=1,II;L}\\\ v_{\alpha=1,I;L}\\\ v_{\alpha=1,II;L}\end{pmatrix}+e^{iK_{Fy}y}e^{-iK_{Fx}x}e^{\kappa\left(x+\frac{l}{2}\right)}\begin{pmatrix}u_{\alpha=\bar{1},I;L}\\\ u_{\alpha=\bar{1},II;L}\\\ v_{\alpha=\bar{1},I;L}\\\ v_{\alpha=\bar{1},II;L}\end{pmatrix}.$ (66) Note that the C part is between $-l/2\leq x\leq l/2$. Since we are looking for solutions which decay into the L lead, we take $\kappa$ to be real and positive. Also note that we consider an ansatz which is a linear superposition of solutions with opposite momenta along $x$. This is due to the BdG coefficients of channel $II$, which must obey open BC’s at the L-C junction, while the momentum along $y$ is conserved. We ignore corrections due to $q_{y}a$.
The slowly-varying parts obey the BdG equation $\displaystyle\begin{pmatrix}\alpha\left[-iv_{Fx}\kappa\right]-\epsilon&0&0&\Delta\\\ 0&\alpha\left[-iv_{Fx}\kappa\right]-\epsilon&\Delta&0\\\ 0&\Delta^{*}&-\alpha\left[-iv_{Fx}\kappa\right]-\epsilon&0\\\ \Delta^{*}&0&0&-\alpha\left[-iv_{Fx}\kappa\right]-\epsilon\end{pmatrix}\begin{pmatrix}u_{\alpha,I}\\\ u_{\alpha,II}\\\ v_{\alpha,I}\\\ v_{\alpha,II}\end{pmatrix}=$ $\displaystyle 0,$ (67) where we introduced the Fermi velocities along $x$ via $\displaystyle v_{Fx}=$ $\displaystyle 2t_{1}\sin(K_{Fx}a).$ (68) Note that the first and fourth, and second and third rows respectively decouple in the bulk Hamiltonian and can be solved independently. We find that the solutions in the bulk of the L lead are $\begin{pmatrix}u_{\alpha,I}\\\ u_{\alpha,II}\\\ v_{\alpha,I}\\\ v_{\alpha,II}\end{pmatrix}=\begin{pmatrix}A_{\alpha}\left(\epsilon-i\alpha\Lambda\right)\\\ B_{\alpha}\left(\epsilon-i\alpha\Lambda\right)\\\ B_{\alpha}\Delta^{*}\\\ A_{\alpha}\Delta^{*}\end{pmatrix}$ (69) where $A_{\alpha},B_{\alpha}$ are coefficients to be determined from the BC’s and $\displaystyle\kappa=$ $\displaystyle\frac{\Lambda}{v_{Fx}}$ (70) $\displaystyle\Lambda=$ $\displaystyle\sqrt{\Delta^{2}-\epsilon^{2}},~{}\text{for}~{}\epsilon<|\Delta|.$ (71) The general solution in the C part, extending from $-l/2\leq x\leq l/2$, can be similarly determined to be of the form $\displaystyle\begin{pmatrix}\tilde{u}_{k_{y};C}\\\ \tilde{v}_{k_{y};C}\\\ \end{pmatrix}=$ $\displaystyle e^{iK_{Fx}x}\begin{pmatrix}E_{1}e^{i\frac{\epsilon x}{v_{Fx}}}\\\ G_{1}e^{-i\frac{\epsilon x}{v_{Fx}}}\end{pmatrix}+e^{-iK_{Fx}x}\begin{pmatrix}E_{\bar{1}}e^{-i\frac{\epsilon x}{v_{Fx}}}\\\ G_{\bar{1}}e^{i\frac{\epsilon x}{v_{Fx}}}\end{pmatrix}.$ (72) The solutions in the R lead, which decay away from the C-R interface for $x>l/2$ are $\displaystyle\begin{pmatrix}\tilde{u}_{I,k_{y};R}\\\ \tilde{u}_{II,k_{y};R}\\\ \tilde{v}_{I,k_{y};R}\\\ \tilde{v}_{II,k_{y};R}\end{pmatrix}=$
$\displaystyle e^{iK_{Fx}x}e^{-\kappa\left(x-\frac{l}{2}\right)}\begin{pmatrix}M_{1}(\epsilon+i\Lambda)\\\ N_{1}(\epsilon+i\Lambda)\\\ N_{1}|\Delta|e^{-i\phi}\\\ M_{1}|\Delta|e^{-i\phi}\end{pmatrix}+e^{-iK_{Fx}x}e^{-\kappa\left(x-\frac{l}{2}\right)}\begin{pmatrix}M_{\bar{1}}(\epsilon-i\Lambda)\\\ N_{\bar{1}}(\epsilon-i\Lambda)\\\ N_{\bar{1}}|\Delta|e^{-i\phi}\\\ M_{\bar{1}}|\Delta|e^{-i\phi}\end{pmatrix}.$ (73) Note that we have introduced a global phase $\phi$ in the pairing of the R lead. For our purposes, it proves convenient to parameterize all of the coefficients in terms of an overall phase and a relative phase as in $\displaystyle A_{\alpha}=$ $\displaystyle|A|e^{i\theta^{0}_{A}}e^{i\alpha\theta_{A}},$ (74) and similarly for all $\alpha$-dependent quantities. We also parameterize the factors which enter in the general solutions for the leads as $\displaystyle\epsilon\pm i\alpha\Lambda=$ $\displaystyle|\Delta|e^{\pm i\alpha\theta}$ (75) $\displaystyle\theta=$ $\displaystyle\arg\left(\epsilon+i\Lambda\right).$ (76) We now consider the BC’s.
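As an aside, before imposing the BC’s, one can verify numerically that the bulk spinors of Eq. 69, with $\kappa=\Lambda/v_{Fx}$ (Eq. 70) and $\Lambda=\sqrt{\Delta^{2}-\epsilon^{2}}$ (Eq. 71), annihilate the linearized BdG matrix of Eq. 67 for both $\alpha=\pm 1$. A quick sketch, with arbitrary illustrative values of $\epsilon$, $\Delta$ (taken real), $v_{Fx}$, and the coefficients $A$, $B$:

```python
import numpy as np

# Check that the spinor of Eq. 69 solves the linearized BdG Eq. 67
# once kappa = Lambda / v_Fx and Lambda = sqrt(Delta^2 - eps^2).
Delta, eps, vF = 0.5, 0.3, 1.7        # illustrative values; Delta real
A, B = 0.8, -1.2                      # arbitrary coefficients A_alpha, B_alpha
Lam = np.sqrt(Delta**2 - eps**2)      # Eq. 71
kappa = Lam / vF                      # Eq. 70
for alpha in (+1, -1):
    z = alpha * (-1j * vF * kappa)    # the term alpha*(-i v_Fx kappa)
    M = np.array([[z - eps, 0, 0, Delta],
                  [0, z - eps, Delta, 0],
                  [0, Delta, -z - eps, 0],
                  [Delta, 0, 0, -z - eps]], complex)
    psi = np.array([A * (eps - 1j * alpha * Lam),   # u_{alpha,I}
                    B * (eps - 1j * alpha * Lam),   # u_{alpha,II}
                    B * Delta,                      # v_{alpha,I}
                    A * Delta])                     # v_{alpha,II}
    assert np.allclose(M @ psi, 0)    # Eq. 67 is satisfied
```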
As discussed previously, channel $II$ in the L and R leads does not couple to the C part, and thus must obey open BC’s at the L-C and C-R interfaces, respectively: $\displaystyle\tilde{u}_{II,k_{y};L}\left(x=\frac{-l}{2}\right)=$ $\displaystyle 0$ (77) $\displaystyle\tilde{v}_{II,k_{y};L}\left(x=\frac{-l}{2}\right)=$ $\displaystyle 0$ (78) $\displaystyle\tilde{u}_{II,k_{y};R}\left(x=\frac{l}{2}\right)=$ $\displaystyle 0$ (79) $\displaystyle\tilde{v}_{II,k_{y};R}\left(x=\frac{l}{2}\right)=$ $\displaystyle 0.$ (80) By contrast, channel $I$ couples across the junction and therefore satisfies the continuity conditions $\displaystyle\tilde{u}_{I,k_{y};L}\left(x=\frac{-l}{2}\right)=$ $\displaystyle\tilde{u}_{k_{y};C}\left(x=\frac{-l}{2}\right)$ (81) $\displaystyle\tilde{v}_{I,k_{y};L}\left(x=\frac{-l}{2}\right)=$ $\displaystyle\tilde{v}_{k_{y};C}\left(x=\frac{-l}{2}\right)$ (82) $\displaystyle\tilde{u}_{I,k_{y};R}\left(x=\frac{l}{2}\right)=$ $\displaystyle\tilde{u}_{k_{y};C}\left(x=\frac{l}{2}\right)$ (83) $\displaystyle\tilde{v}_{I,k_{y};R}\left(x=\frac{l}{2}\right)=$ $\displaystyle\tilde{v}_{k_{y};C}\left(x=\frac{l}{2}\right)$ (84) Expressed in terms of the $\alpha$-dependent quantities, these amount to $\displaystyle-\frac{K_{Fx}l}{2}+\theta_{B}-\theta=$ $\displaystyle\frac{\pi}{2}+b\pi$ (85) $\displaystyle-\frac{K_{Fx}l}{2}+\theta_{A}=$ $\displaystyle\frac{\pi}{2}+a\pi$ (86) $\displaystyle\frac{K_{Fx}l}{2}+\theta_{N}+\theta=$ $\displaystyle\frac{\pi}{2}+n\pi$ (87) $\displaystyle\frac{K_{Fx}l}{2}+\theta_{M}=$ $\displaystyle\frac{\pi}{2}+m\pi,$ (88) for the open BC’s. $a,b,m$, and $n$ are arbitrary integers.
The continuity conditions imply that $\displaystyle\theta_{A}-\theta+\frac{\epsilon l}{v_{Fx}}=$ $\displaystyle\theta_{M}+\theta$ (89) $\displaystyle\theta_{B}-\frac{\epsilon l}{v_{Fx}}=$ $\displaystyle\theta_{N}$ (90) $\displaystyle|A|=$ $\displaystyle|M|$ (91) $\displaystyle|B|=$ $\displaystyle|N|$ (92) $\displaystyle\theta^{0}_{A}=$ $\displaystyle\theta^{0}_{M}$ (93) $\displaystyle\theta^{0}_{B}=$ $\displaystyle\theta^{0}_{M}-\phi.$ (94) Note that the _overall phase_ $\theta^{0}_{B}$ can incorporate the global phase difference of the pairing $\phi$. This is in contrast to the typical single-channel case, and it leads to an insensitivity of the bound state spectrum with respect to $\phi$. Note that there are five unknowns, consisting of the four relative phases $\theta_{A},\theta_{B},\theta_{M},\theta_{N}$ together with the eigenenergy $\epsilon$. However, due to the open BC’s, there are six equations. Therefore, non-trivial solutions cannot be found for this system of equations. We consider instead solutions where either $A_{\alpha},M_{\alpha}$ or $B_{\alpha},N_{\alpha}$ are trivially zero. In either of these cases, the system involving the relative phases and $\epsilon$ reduces to three equations with three unknowns. Importantly, these solutions are found to be either hole- or electron-like, as either the $u$ or $v$ BdG coefficients vanish in the C part.
The electron-like solutions obtained in this manner are $\displaystyle\begin{pmatrix}\tilde{u}_{I,k_{y};L-e}\\\ \tilde{u}_{II,k_{y};L-e}\\\ \tilde{v}_{I,k_{y};L-e}\\\ \tilde{v}_{II,k_{y};L-e}\end{pmatrix}=$ $\displaystyle 2|A||\Delta|e^{i\theta^{0}_{A}}\begin{pmatrix}\cos\left[K_{Fx}x-\frac{\epsilon l}{2v_{Fx}}+\frac{(a+m+1)\pi}{2}\right]\\\ 0\\\ 0\\\ \cos\left[K_{Fx}\left(x+\frac{l}{2}\right)+\frac{\pi}{2}+a\pi\right]\end{pmatrix}.$ (95) $\displaystyle\begin{pmatrix}\tilde{u}_{k_{y};C-e}\\\ \tilde{v}_{k_{y};C-e}\\\ \end{pmatrix}=$ $\displaystyle 2|A||\Delta|e^{i\theta^{0}_{A}}\begin{pmatrix}\cos\left[K_{Fx}x+\frac{\epsilon x}{v_{Fx}}+\frac{(a+m+1)\pi}{2}\right]\\\ 0\end{pmatrix}.$ (96) $\displaystyle\begin{pmatrix}\tilde{u}_{I,k_{y};R-e}\\\ \tilde{u}_{II,k_{y};R-e}\\\ \tilde{v}_{I,k_{y};R-e}\\\ \tilde{v}_{II,k_{y};R-e}\end{pmatrix}=$ $\displaystyle 2|A||\Delta|e^{i\theta^{0}_{A}}\begin{pmatrix}\cos\left[K_{Fx}x+\frac{\epsilon l}{2v_{Fx}}+\frac{(a+m+1)\pi}{2}\right]\\\ 0\\\ 0\\\ \cos\left[K_{Fx}\left(x-\frac{l}{2}\right)+\frac{\pi}{2}+m\pi\right]\end{pmatrix}.$ (97) Using $\displaystyle\tan(\theta)=\frac{\Lambda}{\epsilon}$ (98) we determine the eigenvalues $\displaystyle\frac{\epsilon}{|\Delta|}=$ $\displaystyle\pm\cos\left(\frac{\epsilon l}{2v_{Fx}}+\frac{K_{Fx}l}{2}+\frac{(a-m)\pi}{2}\right).$ (99) Similarly, the hole-like solutions are $\displaystyle\begin{pmatrix}\tilde{u}_{I,k_{y};L-h}\\\ \tilde{u}_{II,k_{y};L-h}\\\ \tilde{v}_{I,k_{y};L-h}\\\ \tilde{v}_{II,k_{y};L-h}\end{pmatrix}=$ $\displaystyle 2|B|e^{i\theta^{B}_{0}}\begin{pmatrix}0\\\ \cos\left[K_{Fx}\left(x+\frac{l}{2}\right)+\frac{\pi}{2}+b\pi\right]\\\ \cos\left[K_{Fx}x+\frac{\epsilon l}{2v_{Fx}}+\frac{\pi}{2}+\frac{(n+b+1)\pi}{2}\right]\\\ 0\end{pmatrix}.$ (100) $\displaystyle\begin{pmatrix}\tilde{u}_{k_{y};C-h}\\\ \tilde{v}_{k_{y};C-h}\\\ \end{pmatrix}=$ $\displaystyle 2|B|e^{i\theta^{B}_{0}}\begin{pmatrix}0\\\ \cos\left[K_{Fx}x-\frac{\epsilon x}{v_{Fx}}+\frac{(n+b+1)\pi}{2}\right]\end{pmatrix}.$
(101) $\displaystyle\begin{pmatrix}\tilde{u}_{I,k_{y};R-h}\\\ \tilde{u}_{II,k_{y};R-h}\\\ \tilde{v}_{I,k_{y};R-h}\\\ \tilde{v}_{II,k_{y};R-h}\end{pmatrix}=$ $\displaystyle 2|B|e^{i(\theta^{B}_{0}+\phi)}\begin{pmatrix}0\\\ \cos\left[K_{Fx}\left(x-\frac{l}{2}\right)+\frac{\pi}{2}+n\pi\right]\\\ \cos\left[K_{Fx}x-\frac{\epsilon l}{2v_{Fx}}+\frac{(n+b+1)\pi}{2}\right]\\\ 0\end{pmatrix}.$ (102) with eigenvalues $\displaystyle\frac{\epsilon}{|\Delta|}=$ $\displaystyle\pm\cos\left(\frac{\epsilon l}{2v_{Fx}}-\frac{K_{Fx}l}{2}+\frac{(n-b)\pi}{2}\right).$ (103) The solutions in the original basis can be obtained from the solutions shown above via the inverse of the transformation in Eq. 64. ## Appendix B $s\tau_{3}-N-s\tau_{3}$ junctions with unequal coupling to the C part In Fig. 10 (a), we show the bound state spectrum for a $s\tau_{3}-N-s\tau_{3}$ junction along $x$ as a function of $\delta V=V_{1}-V_{2}$, the difference between the coupling constants of the two degenerate orbitals in either lead to the single orbital of the C part. The results show that, in clear contrast to the case of equal coupling, $\delta V=0$, a finite $\delta V$ breaks the orbital-exchange symmetry and gaps the purely electron- and hole-like states. In panel (b), we show the spectrum as a function of $\phi$, for fixed $\delta V=0.7$. Again in contrast to the $\delta V=0$ case, the spectrum becomes dependent on $\phi$, a feature which is also confirmed via a calculation of the Josephson current in Appendix F. Figure 10: Bound state spectrum for a $s\tau_{3}-N-s\tau_{3}$ junction along $x$, where the couplings to the single orbital of the C part, $V_{1}$ and $V_{2}$ (Eq. 13), are unequal. We consider degenerate orbitals and compare to the results for $V_{1}=V_{2}$ in Fig. 2 (a). In contrast to the case with equal $V_{1}=V_{2}$, the bound states acquire a gap with increasing $\delta V=V_{1}-V_{2}$.
This can be understood via a broken orbital-exchange symmetry which mixes the purely electron- and hole-like states via Andreev scattering. (b) The spectrum changes as a function of global phase difference across the junction $\phi$. The results are for $\delta V=0.3$. This is in contrast to the case with $\delta V=0$, which is shown in Fig. 2. ## Appendix C $s\tau_{0}-N-s\tau_{0}$ junctions In this section, we consider $s\tau_{0}-N-s\tau_{0}$ junctions along $x$, where the leads are in $s\tau_{0}$ pairing states. In these cases, the pairing functions for both orbitals are identical. We consider degenerate orbitals which couple to a single orbital in the C part identically. We show that, in clear contrast to $s\tau_{3}-N-s\tau_{3}$ junctions, $s\tau_{0}-N-s\tau_{0}$ junctions behave essentially like a typical single-channel junction. In Fig. 11 we illustrate the evolution of the bound state spectrum as a function of global phase difference $\phi$. Figure 11: Bound state spectrum for a $s\tau_{0}-N-s\tau_{0}$ junction along $x$, where the two orbitals have identical pairing functions, as a function of global phase $\phi$. In contrast to $s\tau_{3}$ pairing, $s\tau_{0}$ is an orbitally-trivial pairing state. The overall setup of the junction is the same as in the $s\tau_{3}$ cases, with each of the two degenerate orbitals in either L and R leads coupling identically to a single orbital in the C part. All of the parameters of the model are the same as in Fig. 2 (a). As indicated by the results, the bound state spectrum for a $s\tau_{0}-N-s\tau_{0}$ junction differs dramatically from a $s\tau_{3}-N-s\tau_{3}$ junction in two main aspects: (i) The spectrum for $\phi=0$ is gapped, and (ii) for varying $\phi$ the spectrum evolves much like a typical single-channel junction, becoming gapless at $\phi=\pi$. 
In $s\tau_{0}$ junctions, the symmetric linear combination of the two orbitals couples across the junction as in the single- orbital case, while the remaining antisymmetric linear combination decouples entirely. ## Appendix D Analytical solutions of the $s\tau_{3}-N-I$ junction with degenerate orbitals The solution in the continuum limit is very similar to that of the $s\tau_{3}-N-s\tau_{3}$ case. Instead of continuity BC’s at the C-R interface, the C spinors obey open BC’s. Using the ansatze and the conventions of Appendix A, these conditions amount to $\displaystyle\frac{K_{Fx}l}{2}+\frac{\epsilon l}{2v_{Fx}}+\theta_{E}=$ $\displaystyle\frac{\pi}{2}+p\pi$ (104) $\displaystyle\frac{K_{Fx}l}{2}-\frac{\epsilon l}{2v_{Fx}}+\theta_{G}=$ $\displaystyle\frac{\pi}{2}+r\pi,$ (105) where $p,r$ are integers. As for the $s\tau_{3}-N-s\tau_{3}$ junctions, we obtain electron- and hole-like solutions of the form $\displaystyle\begin{pmatrix}u^{L}_{1e}&u^{L}_{2e}&v^{L}_{1e}&v^{L}_{2e}\end{pmatrix}^{T}$ $\displaystyle=$ $\displaystyle 2|A|e^{i\theta^{A}_{0}}|\Delta|\begin{pmatrix}\sin\left[K_{Fx}\left(x-\frac{l}{2}\right)-\frac{\epsilon l}{v_{Fx}}+p\pi\right]\\\ \sin\left[K_{Fx}\left(x-\frac{l}{2}\right)-\frac{\epsilon l}{v_{Fx}}+p\pi\right]\\\ \sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+a\pi\right]\\\ -\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+a\pi\right]\end{pmatrix}$ (106) $\displaystyle\begin{pmatrix}u^{C}_{e}v^{C}_{e}\end{pmatrix}^{T}$ $\displaystyle=$ $\displaystyle 2|A|e^{i\theta_{0}}|\Delta|\begin{pmatrix}\sin\left[K_{Fx}\left(x-\frac{l}{2}\right)+\frac{\epsilon}{v_{Fx}}\left(x-\frac{l}{2}\right)+p\pi\right]\\\ 0\end{pmatrix}$ (107) with eigenvalues $\displaystyle\frac{\epsilon}{|\Delta|}=$ $\displaystyle\pm\cos\left(\frac{\epsilon l}{v_{Fx}}+K_{Fx}l-(p-a)\pi\right)$ (108) for the electron-like states. 
The hole-like solutions are $\displaystyle\begin{pmatrix}u^{L}_{1h}&u^{L}_{2h}&v^{L}_{1h}&v^{L}_{2h}\end{pmatrix}^{T}$ $\displaystyle=$ $\displaystyle 2|B|e^{i\theta^{B}_{0}}|\Delta|\begin{pmatrix}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+b\pi\right]\\\ -\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+b\pi\right]\\\ \sin\left[K_{Fx}\left(x-\frac{l}{2}\right)+\frac{\epsilon l}{v_{Fx}}+r\pi\right]\\\ \sin\left[K_{Fx}\left(x-\frac{l}{2}\right)+\frac{\epsilon l}{v_{Fx}}+r\pi\right]\end{pmatrix}$ (109) $\displaystyle\begin{pmatrix}u^{C}_{h}v^{C}_{h}\end{pmatrix}^{T}$ $\displaystyle=$ $\displaystyle 2|B|e^{i\theta^{0}_{B}}|\Delta|\begin{pmatrix}0\\\ \sin\left[K_{Fx}\left(x-\frac{l}{2}\right)-\frac{\epsilon}{v_{Fx}}\left(x-\frac{l}{2}\right)+r\pi\right]\end{pmatrix}$ (110) with eigenvalues $\displaystyle\frac{\epsilon}{|\Delta|}=$ $\displaystyle\pm\cos\left(\frac{\epsilon l}{v_{Fx}}-K_{Fx}l+(r-b)\pi\right).$ (111) As before, $|A|,|B|$ are normalization constants, $\theta^{0}_{A/B}$ are arbitrary phases, while $a,b,p,r$ are arbitrary integers. ## Appendix E Analytical solutions of the $s\tau_{3}-N-s$ junction with degenerate orbitals Consider a junction of the $s\tau_{3}-N-s$ along $x$, where the R lead is in a single-channel, $s$-wave pairing state. The model for this junction was introduced in Sec. II.1. Here, we tackle this analytically for the case where the two channels of the L leads, which are in a $s\tau_{3}$ pairing state, are degenerate, and couple identically to the orbital in the C part. We proceed along the same lines as the $s\tau_{3}-N-s\tau_{3}$ case in the continuum limit. As in that case, only the symmetric linear combination of the two channels of the L lead couples to the C part. We therefore consider the same ansatz for the BdG coefficients in the L lead which also obey the same open and continuity BC at the L-C interface as before (Eqs. 81-82). 
The main distinction is due to the presence of a single channel in the R lead with general solutions in the bulk given by $\displaystyle\begin{pmatrix}\tilde{u}_{k_{y};R}\\\ \tilde{v}_{k_{y};R}\end{pmatrix}=$ $\displaystyle e^{iK_{F}x}e^{-\kappa\left(x-\frac{l}{2}\right)}\begin{pmatrix}M_{1}(\epsilon+i\Lambda)\\\ M_{1}|\Delta|e^{-i\phi}\end{pmatrix}+e^{-iK_{F}x}e^{-\kappa\left(x-\frac{l}{2}\right)}\begin{pmatrix}M_{\bar{1}}(\epsilon-i\Lambda)\\\ M_{\bar{1}}|\Delta|e^{-i\phi}\\\ \end{pmatrix}.$ (112) Furthermore, these BdG coefficients obey a single continuity BC at the C-R interface. Adopting the same conventions as in the $s\tau_{3}-N-s\tau_{3}$ case, we summarize the boundary conditions as $\displaystyle-\frac{K_{Fx}l}{2}+\theta_{B}-\theta=$ $\displaystyle\frac{\pi}{2}+b\pi$ (113) $\displaystyle-\frac{K_{Fx}l}{2}+\theta_{A}=$ $\displaystyle\frac{\pi}{2}+a\pi$ (114) $\displaystyle\theta_{A}-\theta+\frac{\epsilon l}{v_{Fx}}=$ $\displaystyle\theta_{M}+\theta$ (115) $\displaystyle\theta_{B}-\frac{\epsilon l}{v_{Fx}}=$ $\displaystyle\theta_{M}$ (116) for the relative phases and $\displaystyle|A|=$ $\displaystyle|M|$ (117) $\displaystyle|B|=$ $\displaystyle|N|$ (118) $\displaystyle\theta^{0}_{A}=$ $\displaystyle\theta^{M}_{0}$ (119) $\displaystyle\theta^{0}_{B}=$ $\displaystyle\theta^{0}_{M}-\phi.$ (120) for the amplitudes and global phases of the BdG coefficients. $a,b$ are arbitrary integers. In contrast to the case of the $s\tau_{3}-N-s\tau_{3}$ junction, the absence of additional open BC for the R lead ensures that the equations for the relative phases and $\epsilon$ have a unique solution. 
After straightforward algebra, we determine the eigenvalues from $\displaystyle\frac{\epsilon}{|\Delta|}=$ $\displaystyle\pm\cos\left(\frac{2\epsilon l}{3v_{Fx}}+\frac{(a-b)\pi}{3}\right).$ (121) The corresponding states are $\displaystyle\begin{pmatrix}\tilde{u}_{1,k_{y},L}\\\ \tilde{u}_{2,k_{y},L,e}\\\ \tilde{v}_{1,k_{y},L,e}\\\ \tilde{v}_{2,k_{y},L,e}\end{pmatrix}=$ $\displaystyle 2|A||\Delta|e^{i\theta^{A}_{0}}\begin{pmatrix}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)-\frac{2\epsilon l}{3v_{Fx}}+\frac{(2a+b)\pi}{3}\right]+e^{-i\phi}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+b\pi\right]\\\ \\\ \sin\left[K_{Fx}\left(x+\frac{l}{2}\right)-\frac{2\epsilon l}{3v_{Fx}}+\frac{(2a+b)\pi}{3}\right]-e^{-i\phi}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+b\pi\right]\\\ \\\ e^{-i\phi}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+\frac{2\epsilon l}{3v_{Fx}}+\frac{(a+2b)\pi}{3}\right]+\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+a\pi\right]\\\ \\\ e^{-i\phi}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+\frac{2\epsilon l}{3v_{Fx}}+\frac{(a+2b)\pi}{3}\right]-\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+a\pi\right]\end{pmatrix}.$ (122) $\displaystyle\begin{pmatrix}\tilde{u}_{k_{y},C}\\\ \tilde{v}_{k_{y},C}\\\ \end{pmatrix}=$ $\displaystyle 2|A||\Delta|e^{i\theta_{0}}\begin{pmatrix}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+\frac{\epsilon x}{v_{Fx}}-\frac{\epsilon l}{6v_{Fx}}+\frac{(2a+b)\pi}{3}\right]\\\ \\\ e^{-i\phi}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)-\frac{\epsilon x}{v_{Fx}}+\frac{\epsilon l}{6v_{Fx}}+\frac{(a+2b)\pi}{3}\right]\end{pmatrix}.$ (123) $\displaystyle\begin{pmatrix}\tilde{u}_{k_{y},R}\\\ \tilde{v}_{k_{y},R}\\\ \end{pmatrix}=$ $\displaystyle 2|A||\Delta|e^{i\theta_{0}}\begin{pmatrix}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)+\frac{\epsilon l}{3v_{Fx}}+\frac{(2a+b)\pi}{3}\right]\\\ \\\ e^{-i\phi}\sin\left[K_{Fx}\left(x+\frac{l}{2}\right)-\frac{\epsilon l}{3v_{Fx}}+\frac{(a+2b)\pi}{3}\right]\end{pmatrix}.$ (124) Note that the C spinor is a mixture of electron- and 
hole-like solutions, as in the case with typical Andreev bound states. ## Appendix F Josephson current in the tunneling limit In Secs. III.1 and IV, we showed that in the limit of degenerate orbitals which couple identically to a single orbital in the C part, $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-s$ junctions exhibit bound state spectra which are insensitive to changes in the relative global phase $\phi$. In view of the relation between the Josephson current and the derivative of the ground-state energy with respect to $\phi$, which is typically determined by the Andreev bound state spectrum, our results suggest the absence of a static Josephson effect. In order to confirm these findings, we consider simplified models of the $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-s$ junctions and determine the Josephson current in the tunneling limit. We follow the standard approach in Ref. Sauls (2018). More precisely, we introduce a Hamiltonian which includes the leads but which also involves direct tunneling between them: $\displaystyle H=H_{\text{R}}+H_{\text{L}}+H_{\text{T}}.$ (125) For $s\tau_{3}-N-s\tau_{3}$ junctions, $H_{\text{L/R}}$ are those of Eq. 5, while for $s\tau_{3}-N-s$ junctions, $H_{\text{R}}$ is a single-channel $s$-wave bulk Hamiltonian, determined by the dispersion $\xi_{0}$ and $\Delta$, which also enter the expression for the two-orbital $H_{\text{L}}$. The two leads are connected via a tunneling Hamiltonian $H_{\text{T}}$. 
For the $s\tau_{3}-N-s\tau_{3}$ junctions, this takes the form $\displaystyle H_{\text{T},s\tau_{3}-N-s\tau_{3}}=$ $\displaystyle\sum_{\mathbf{k}\mathbf{p}\sigma}\bigg{(}T^{11}_{\mathbf{k}\mathbf{p}}c^{{\dagger}}_{\text{R},1\mathbf{k}\sigma}c_{\text{L},1\mathbf{p}\sigma}+T^{21}_{\mathbf{k}\mathbf{p}}c^{{\dagger}}_{\text{R},2\mathbf{k}\sigma}c_{\text{L},1\mathbf{p}\sigma}$ $\displaystyle+$ $\displaystyle T^{12}_{\mathbf{k}\mathbf{p}}c^{{\dagger}}_{\text{R},1\mathbf{k}\sigma}c_{\text{L},2\mathbf{p}\sigma}+T^{22}_{\mathbf{k}\mathbf{p}}c^{{\dagger}}_{\text{R},2\mathbf{k}\sigma}c_{\text{L},2\mathbf{p}\sigma}+\text{H.c.}\bigg{)},$ (126) where $T^{ij}_{\mathbf{k}\mathbf{p}}$ are tunneling matrix elements, and where we introduced R and L indices for the two leads in addition to the $1,2$ indices for the two orbitals. For the $s\tau_{3}-N-s$ junction $H_{\text{T}}$ has the simpler form $\displaystyle H_{\text{T},s\tau_{3}-N-s}=$ $\displaystyle\sum_{\mathbf{k}\mathbf{p}\sigma}\bigg{(}T^{(1)}_{\mathbf{k}\mathbf{p}}c^{{\dagger}}_{\text{R},\mathbf{k}\sigma}c_{\text{L},1\mathbf{p}\sigma}+T^{(2)}_{\mathbf{k}\mathbf{p}}c^{{\dagger}}_{\text{R},\mathbf{k}\sigma}c_{\text{L},2\mathbf{p}\sigma}$ $\displaystyle+$ $\displaystyle\text{H.c.}\bigg{)},$ (127) where the tunneling matrix elements are labeled by a single orbital index. We shall neglect the spin indices for simplicity. 
The total currents out of the L lead read $\displaystyle\dot{I}_{\text{Tot},s\tau_{3}-N-s\tau_{3}}=$ $\displaystyle- ie\left(\dot{N}_{\text{L},1}+\dot{N}_{\text{L},2}\right)$ (128) $\displaystyle=$ $\displaystyle- ie\sum_{\textbf{kp}}\sum_{ij}\left(T^{ij}_{kp}c^{{\dagger}}_{\text{R},i\mathbf{k}}c_{\text{L},j\mathbf{p}}-\text{H.c.}\right)$ (129) $\displaystyle\dot{I}_{\text{Tot},s\tau_{3}-N-s}=$ $\displaystyle- ie\left(\dot{N}_{\text{L},1}+\dot{N}_{\text{L},2}\right)$ (130) $\displaystyle=$ $\displaystyle- ie\sum_{\textbf{kp}}\sum_{i}\left(T^{(i)}_{kp}c^{{\dagger}}_{\text{R},\mathbf{k}}c_{\text{L},i\mathbf{p}}-\text{H.c.}\right),$ (131) where $N_{\text{L},i}$ is the total charge associated with either orbital in the L lead. As shown in Ref. Mahan (2000), the Josephson current to leading order in the tunneling matrix elements is determined from $\displaystyle I_{\text{J}}(t)=$ $\displaystyle 2e\text{Im}\left[e^{-2ieUt/\hbar}\theta(eU)\right],$ (132) where $eU$ is the potential drop across the junction. 
### F.1 $s\tau_{3}-N-s\tau_{3}$ and $s\tau_{3}-N-s$ junction For these cases the quantity $\theta$ is obtained via analytical continuation from $\displaystyle\theta_{s\tau_{3}-N-s\tau_{3}}(eU)=$ $\displaystyle\lim_{i\omega\rightarrow eU+i\eta}\left[2\sum_{ij}\sum_{mn}\sum_{\mathbf{kp}}T^{ij}_{\mathbf{kp}}T^{mn}_{\mathbf{-k,-p}}\frac{1}{\beta}\sum_{i\nu}F^{{\dagger}}_{\text{R},im}(\mathbf{k},i\nu)F_{\text{L},jn}(\mathbf{p},i\nu-i\omega)\right]$ (133) $\displaystyle\theta_{s\tau_{3}-N-s}(eU)=$ $\displaystyle\lim_{i\omega\rightarrow eU+i\eta}\left[2\sum_{ij}\sum_{\mathbf{kp}}T^{(i)}_{\mathbf{kp}}T^{(j)}_{\mathbf{-k,-p}}\frac{1}{\beta}\sum_{i\nu}F^{{\dagger}}_{\text{R}}(\mathbf{k},i\nu)F_{\text{L},ij}(\mathbf{p},i\nu-i\omega)\right],$ (134) where the anomalous Green’s functions are determined from the corresponding L and R lead Hamiltonians as $\displaystyle F_{\text{L/R},11}(\mathbf{k},i\nu)=$ $\displaystyle\frac{-\Delta_{\text{L/R}}\left[(i\nu)^{2}-\left(\xi_{0}-\xi_{3}\right)^{2}-|\Delta_{\text{L/R}}|^{2}+\xi^{2}_{1}\right]}{\Gamma(\mathbf{k},i\nu)}$ (135) $\displaystyle F_{\text{L/R},12}(\mathbf{k},i\nu)=$ $\displaystyle\frac{-2\xi_{1}\Delta_{\text{L/R}}\left(i\nu+\xi_{3}\right)}{\Gamma(\mathbf{k},i\nu)}$ (136) $\displaystyle F_{\text{L/R},21}(\mathbf{k},i\nu)=$ $\displaystyle\frac{2\xi_{1}\Delta_{\text{L/R}}\left(i\nu-\xi_{3}\right)}{\Gamma(\mathbf{k},i\nu)}$ (137) $\displaystyle F_{\text{L/R},22}(\mathbf{k},i\nu)=$ $\displaystyle\frac{\Delta_{\text{L/R}}\left[(i\nu)^{2}-\left(\xi_{0}+\xi_{3}\right)^{2}-|\Delta_{\text{L/R}}|^{2}+\xi^{2}_{1}\right]}{\Gamma(\mathbf{k},i\nu)}$ (138) for $s\tau_{3}$ leads, where $\displaystyle\Gamma(\mathbf{k},i\nu)=$ $\displaystyle\left(i\nu- E_{1}\right)\left(i\nu+E_{1}\right)\left(i\nu- E_{2}\right)\left(i\nu+E_{2}\right)$ (139) and $\displaystyle E_{1,2}=\sqrt{\xi^{2}_{0}+\xi^{2}_{3}+\xi^{2}_{1}+|\Delta|^{2}\pm 2\sqrt{\xi^{2}_{0}\left(\xi^{2}_{1}+\xi^{2}_{3}\right)+\xi^{2}_{1}|\Delta|^{2}}}$ (140) are the BdG bands corresponding to 
the eigenvalues of the Hamiltonian in Eq. 5. Note that all $\xi$ and $E$ terms are functions of $\mathbf{k}$. Since we consider $|\Delta_{\text{L}}|=|\Delta_{\text{R}}|$, the $\Gamma$’s are independent of the lead index. For the single-channel $s$-wave lead, we have the standard expression $\displaystyle F_{\text{R}}(\mathbf{k},i\nu)=$ $\displaystyle\frac{-\Delta_{\text{R}}}{\left[i\nu-\sqrt{\xi^{2}_{0}+|\Delta|^{2}}\right]\left[i\nu+\sqrt{\xi^{2}_{0}+|\Delta|^{2}}\right]}.$ (141) These expressions simplify considerably if we consider the case where all of the tunneling coefficients are identical for all channels, as in $T^{ij}_{\mathbf{kp}}=T_{\mathbf{kp}}$ and $T^{(i)}_{\mathbf{kp}}=T_{\mathbf{kp}}$. These correspond to the junctions and bound state spectra discussed in Secs. III and IV. The expressions for the $\theta$ functions are $\displaystyle\theta_{s\tau_{3}-N-s\tau_{3}}(eU)=$ $\displaystyle\lim_{i\omega\rightarrow eU+i\eta}2\sum_{\mathbf{kp}}T_{\mathbf{kp}}T_{\mathbf{-k,-p}}\frac{1}{\beta}\sum_{i\nu}\left[\sum_{im}F^{{\dagger}}_{\text{R},im}(\mathbf{k},i\nu)\right]\left[\sum_{jn}F_{\text{L},jn}(\mathbf{p},i\nu-i\omega)\right]$ (142) $\displaystyle\theta_{s\tau_{3}-N-s}(eU)=$ $\displaystyle\lim_{i\omega\rightarrow eU+i\eta}2\sum_{\mathbf{kp}}T_{\mathbf{kp}}T_{\mathbf{-k,-p}}\frac{1}{\beta}\sum_{i\nu}F^{{\dagger}}_{\text{R}}(\mathbf{k},i\nu)\left[\sum_{ij}F_{\text{L},ij}(\mathbf{p},i\nu-i\omega)\right].$ (143) The sums in the square brackets amount to $\displaystyle\left[\sum_{ij}F_{\text{L/R},ij}(\mathbf{k},i\nu)\right]=$ $\displaystyle\frac{-4\Delta_{\text{L/R}}\xi_{3}\left(\xi_{0}+\xi_{1}\right)}{\Gamma(\mathbf{k},i\nu)}.$ (144) From this expression, it is clear that, in the limit of zero intra-orbital hybridization $\xi_{3}=0$, the _total_ Josephson currents in the tunneling limit for either $s\tau_{3}-N-s\tau_{3}$ or $s\tau_{3}-N-s$ junctions vanish identically. 
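For completeness, the algebra behind Eq. 144 is a short check using Eqs. 135-138 (lead indices L/R suppressed): the $(i\nu)^{2}$, $|\Delta|^{2}$, and $\xi^{2}_{1}$ terms cancel pairwise, leaving

```latex
\begin{aligned}
F_{11}+F_{22} &= \frac{\Delta\left[\left(\xi_{0}-\xi_{3}\right)^{2}-\left(\xi_{0}+\xi_{3}\right)^{2}\right]}{\Gamma(\mathbf{k},i\nu)}
               = \frac{-4\Delta\,\xi_{0}\xi_{3}}{\Gamma(\mathbf{k},i\nu)},\\
F_{12}+F_{21} &= \frac{-2\xi_{1}\Delta\left(i\nu+\xi_{3}\right)+2\xi_{1}\Delta\left(i\nu-\xi_{3}\right)}{\Gamma(\mathbf{k},i\nu)}
               = \frac{-4\Delta\,\xi_{1}\xi_{3}}{\Gamma(\mathbf{k},i\nu)},
\end{aligned}
```

whose sum reproduces the factor $-4\Delta\xi_{3}(\xi_{0}+\xi_{1})/\Gamma$ of Eq. 144 and makes the proportionality to the intra-orbital hybridization $\xi_{3}$ explicit.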
This holds even for a finite inter-orbital hybridization $\xi_{1}$ on the L (and, where appropriate, R) lead. The immediate reason for this remarkable result is that the L lead continues to preserve the orbital-exchange symmetry defined in Sec. II.3 _in the bulk_ which ensures that $\displaystyle F_{\text{L},11}(\mathbf{k},i\nu)=$ $\displaystyle- F_{\text{L},22}(\mathbf{k},i\nu)$ (145) $\displaystyle F_{\text{L},12}(\mathbf{k},i\nu)=$ $\displaystyle- F_{\text{L},21}(\mathbf{k},i\nu)$ (146) as long as the intra-orbital hybridization terms $\xi_{3}$ vanish, even though the complete Hamiltonian does not preserve the symmetry for the $s\tau_{3}-N-s$ junction. In the limit considered here, where all of the orbitals couple identically across the junction, this symmetry in the bulk of the L lead ensures that the total current out of the L lead vanishes. Josephson currents which vanish in the limit of zero intra-orbital hybridization terms are completely consistent with the results of Secs. III and IV, where the bound state spectra were shown to be insensitive to the relative phase precisely when the same intra-orbital hybridization terms were ignored. ### F.2 $s\tau_{0}-N-s\tau_{0}$ and $s\tau_{0}-N-s$ junction The expressions for the current and the associated $\theta$ functions in Eqs. 142 and 143 are the same as for the $s\tau_{3}$ cases. 
However, the anomalous Green’s functions for $s\tau_{0}$ are $\displaystyle F_{\text{L/R},11}(\mathbf{k},i\nu)=$ $\displaystyle\frac{\Delta\left[-(i\nu)^{2}+\left(\xi_{0}-\xi_{3}\right)^{2}+\xi^{2}_{1}+|\Delta|^{2}\right]}{\chi(\textbf{k},i\nu)}$ (147) $\displaystyle F_{\text{L/R},12}(\mathbf{k},i\nu)=$ $\displaystyle\frac{2\Delta\xi_{1}\xi_{0}}{\chi(\textbf{k},i\nu)}$ (148) $\displaystyle F_{\text{L/R},21}(\mathbf{k},i\nu)=$ $\displaystyle\frac{2\Delta\xi_{1}\xi_{0}}{\chi(\textbf{k},i\nu)}$ (149) $\displaystyle F_{\text{L/R},22}(\mathbf{k},i\nu)=$ $\displaystyle\frac{\Delta\left[-(i\nu)^{2}+\left(\xi_{0}+\xi_{3}\right)^{2}+\xi^{2}_{1}+|\Delta|^{2}\right]}{\chi(\textbf{k},i\nu)}$ (150) where $\displaystyle\chi(\textbf{k},i\nu)=\left(i\nu- E_{1}\right)\left(i\nu+E_{1}\right)\left(i\nu-E_{2}\right)\left(i\nu+E_{2}\right)$ (151) and $\displaystyle E_{1/2}=$ $\displaystyle\sqrt{\left(\xi_{0}\pm\sqrt{\xi^{2}_{1}+\xi^{2}_{3}}\right)^{2}+|\Delta|^{2}}$ (152) is the bulk BdG spectrum for $s\tau_{0}$ pairing. We calculate $\displaystyle\left[\sum_{ij}F_{\text{L/R},ij}(\mathbf{k},i\nu)\right]=$ $\displaystyle\frac{-2\Delta\left\\{(i\nu)^{2}-\left[(\xi_{0}+\xi_{1})^{2}+\xi^{2}_{3}+|\Delta|^{2}\right]\right\\}}{\chi(\textbf{k},i\nu)}.$ (153) This is clearly distinct from the expression for $s\tau_{3}$ pairing in Eq. 144. Indeed, in the $\xi_{1}=\xi_{3}=0$ limit corresponding to degenerate, decoupled orbitals, this reduces to $\displaystyle\left[\sum_{ij}F_{\text{L/R},ij}(\mathbf{k},i\nu)\right]=$ $\displaystyle\frac{-2\Delta}{\left[(i\nu)^{2}-\xi^{2}_{0}-|\Delta|^{2}\right]},$ (154) which amounts to twice the result for a single-channel junction, due to the presence of the two decoupled orbitals. This is consistent with the dependence of the bound state spectrum on $\phi$ (Appendix C). ## References * Lee (2017) D.-H. Lee, Science 357, 32 (2017). * Mou _et al._ (2011) D. Mou, S. Liu, X. Jia, J. He, Y. Peng, L. Zhao, L. Yu, G. Liu, S. He, X. Dong, J. Zhang, H. Wang, C. Dong, M. Fang, X. 
Wang, Q. Peng, Z. Wang, S. Zhang, F. Yang, Z. Xu, C. Chen, and X. J. Zhou, Phys. Rev. Lett. 106, 107001 (2011). * X.-P. Wang and T. Qian and P. Richard and P. Zhang and J. Dong and H.-D. Wang and C.-H. Dong and M.-H. Fang and H. Ding (2011) X.-P. Wang and T. Qian and P. Richard and P. Zhang and J. Dong and H.-D. Wang and C.-H. Dong and M.-H. Fang and H. Ding, Europhy. Lett. 93, 57001 (2011). * Xu _et al._ (2012) M. Xu, Q. Q. Ge, R. Peng, Z. R. Ye, J. Jiang, F. Chen, X. P. Shen, B. P. Xie, Y. Zhang, A. F. Wang, X. F. Wang, X. H. Chen, and D. L. Feng, Phys. Rev. B 85, 220504 (2012). * Wang _et al._ (2012) X.-P. Wang, P. Richard, X. Shi, A. Roekeghem, Y.-B. Huang, E. Razzoli, T. Qian, E. Rienks, S. Thirupathaiah, H.-D. Wang, C.-H. Dong, M.-H. Fang, M. Shi, and H. Ding, Europhys. Lett. 99, 67001 (2012). * Park _et al._ (2011) J. T. Park, G. Friemel, Y. Li, J.-H. Kim, V. Tsurkan, J. Deisenhofer, H.-A. Krug von Nidda, A. Loidl, A. Ivanov, B. Keimer, and D. S. Inosov, Phys. Rev. Lett. 107, 177005 (2011). * Friemel _et al._ (2012) G. Friemel, J. T. Park, T. A. Maier, V. Tsurkan, Y. Li, J. Deisenhofer, H.-A. Krug von Nidda, A. Loidl, A. Ivanov, B. Keimer, and D. S. Inosov, Phys. Rev. B 85, 140511 (2012). * Eschrig (2006) M. Eschrig, Adv. Phys. 55, 47 (2006). * Stockert _et al._ (2011) O. Stockert, J. Arndt, E. Faulhaber, C. Geibel, H. S. Jeevan, S. Kirchner, M. Loewenhaupt, K. Schmalzl, W. Schmidt, Q. Si, and F. Steglich, Nat. Phys. 7, 119 (2011). * Maier _et al._ (2011) T. A. Maier, S. Graser, P. J. Hirschfeld, and D. J. Scalapino, Phys. Rev. B 83, 100515 (2011). * Dai (2015) P. Dai, Rev. Mod. Phys. 87, 855 (2015). * Si _et al._ (2016) Q. Si, R. Yu, and E. Abrahams, Nature Rev. Mater. 1, 16017 (2016). * Nica _et al._ (2017) E. M. Nica, R. Yu, and Q. Si, npj Quantum Materials 2, 24 (2017), arXiv:1505.04170 . * Nica and Si (2021) E. M. Nica and Q. Si, npj Quantum Materials 6, 3 (2021). * Steglich _et al._ (1979) F. Steglich, J. Aarts, C. D. Bredl, W. Lieke, D. Meschede, W. 
Franz, and H. Schäfer, Phys. Rev. Lett. 43, 1892 (1979). * Kittaka _et al._ (2014) S. Kittaka, Y. Aoki, Y. Shimura, T. Sakakibara, S. Seiro, C. Geibel, F. Steglich, H. Ikeda, and K. Machida, Phys. Rev. Lett. 112, 067002 (2014). * Pang _et al._ (2018) G. Pang, M. Smidman, J. Zhang, L. Jiao, Z. Weng, E. M. Nica, Y. Chen, W. Jiang, Y. Zhang, W. Xie, H. S. Jeevan, H. Lee, P. Gegenwart, F. Steglich, Q. Si, and H. Yuan, Proc. Nat. Acad. Sci. 115, 5343 (2018). * Yamashita _et al._ (2017) T. Yamashita, T. Takenaka, Y. Tokiwa, J. A. Wilcox, Y. Mizukami, D. Terazawa, Y. Kasahara, S. Kittaka, T. Sakakibara, M. Konczykowski, S. Seiro, H. S. Jeevan, C. Geibel, C. Putzke, T. Onishi, H. Ikeda, A. Carrington, T. Shibauchi, and Y. Matsuda, Sci. Adv. 3, e1601667 (2017). * Yu _et al._ (2014) R. Yu, J.-X. Zhu, and Q. Si, Phys. Rev. B 89, 024509 (2014). * Yin _et al._ (2014) Z. P. Yin, K. Haule, and G. Kotliar, Nat. Phys. 10, 845 (2014). * Ong _et al._ (2016) T. Ong, P. Coleman, and J. Schmalian, Proc. Nat. Acad. Sci. 113, 5486 (2016). * Sprau _et al._ (2017) P. O. Sprau, A. Kostin, A. Kreisel, A. E. Böhmer, V. Taufour, P. C. Canfield, S. Mukherjee, P. J. Hirschfeld, B. M. Andersen, and J. C. S. Davis, Science 357, 75 (2017). * Hu _et al._ (2018) H. Hu, R. Yu, E. M. Nica, J.-X. Zhu, and Q. Si, Phys. Rev. B 98, 220503 (2018). * Yu _et al._ (2018) R. Yu, J.-X. Zhu, and Q. Si, Phys. Rev. Lett. 121, 227003 (2018). * Raghu _et al._ (2008) S. Raghu, X.-L. Qi, C.-X. Liu, D. J. Scalapino, and S.-C. Zhang, Phys. Rev. B 77, 220503 (2008). * Si and Abrahams (2008) Q. Si and E. Abrahams, Phys. Rev. Lett. 101, 076401 (2008). * Daghofer _et al._ (2010) M. Daghofer, A. Nicholson, A. Moreo, and E. Dagotto, Phys. Rev. B 81, 014511 (2010). * Sauls (2018) J. A. Sauls, Philos. Trans. A Math. Phys. Eng. Sci. 376, 20180140 (2018). * Kane and Mele (2005) C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 146802 (2005). * Mahan (2000) G. D. Mahan, _Many-particle Physics_ (Plenum, New York, 2000).
# Gradient Descent Temporal Difference-difference Learning Rong J.B. Zhu Institute of Science and Technology for Brain-inspired Intelligence Fudan University, Shanghai 200433, People’s Republic of China <EMAIL_ADDRESS> James M. Murray Institute of Neuroscience University of Oregon, Eugene, OR 97403, USA <EMAIL_ADDRESS> ###### Abstract Off-policy algorithms, in which a behavior policy differs from the target policy and is used to gain experience for learning, have proven to be of great practical value in reinforcement learning. However, even for simple convex problems such as linear value function approximation, these algorithms are not guaranteed to be stable. To address this, alternative algorithms that are provably convergent in such cases have been introduced, the best known being gradient descent temporal difference (GTD) learning. This algorithm and others like it, however, tend to converge much more slowly than conventional temporal difference learning. In this paper we propose gradient descent temporal difference-difference (Gradient-DD) learning in order to improve GTD2, a GTD algorithm [1], by introducing second-order differences in successive parameter updates. We investigate this algorithm in the framework of linear value function approximation, theoretically proving its convergence by applying the theory of stochastic approximation. Studying the model empirically on the random walk task, the Boyan-chain task, and Baird’s off-policy counterexample, we find substantial improvement over GTD2 and, in several cases, better performance even than conventional TD learning. ## 1 Introduction Off-policy algorithms for value function learning enable an agent to use a behavior policy that differs from the target policy in order to gain experience for learning. 
However, because off-policy methods learn a value function for a target policy given data due to a different behavior policy, they are slower to converge than on-policy methods and may even diverge when applied to problems involving function approximation [2, 3]. Two general approaches have been investigated to address the challenge of developing stable and effective off-policy temporal-difference algorithms. One approach is to use importance sampling methods to warp the update distribution so that the expected value is corrected [4, 5]. This approach is useful for the convergence guarantee, but it does not address stability issues. The second main approach to addressing the challenge of off-policy learning is to develop true gradient descent-based methods that are guaranteed to be stable regardless of the update distribution. [6, 1] proposed the first off-policy gradient-descent-based temporal difference algorithms (GTD and GTD2, respectively). These algorithms are guaranteed to be stable, with computational complexity scaling linearly with the size of the function approximator. Empirically, however, their convergence is much slower than conventional temporal difference (TD) learning, limiting their practical utility [7, 8]. Building on this work, extensions to the GTD family of algorithms (see [9] for a review) have allowed for incorporating eligibility traces [10, 11], non-linear function approximation such as with a neural network [12], and reformulation of the optimization as a saddle point problem [13, 14]. However, due to their slow convergence, none of these stable off- policy methods are commonly used in practice. In this work, we introduce a new gradient descent algorithm for temporal difference learning with linear value function approximation. This algorithm, which we call gradient descent temporal difference-difference (Gradient-DD) learning, is an acceleration technique that employs second-order differences in successive parameter updates. 
The basic idea of Gradient-DD is to modify the error objective function by additionally considering the prediction error obtained in the last time step, then to derive a gradient-descent algorithm based on this modified objective function. In addition to exploiting the Bellman equation to get the solution, this modified error objective function avoids drastic changes in the value function estimate by encouraging local search around the current estimate. Algorithmically, the Gradient-DD approach only adds an additional term to the update rule of the GTD2 method, and the extra computational cost is negligible. We prove its convergence by applying the theory of stochastic approximation. This result is supported by numerical experiments, which also show that Gradient-DD obtains better convergence in many cases than conventional TD learning. ### 1.1 Related Work In approaches related to ours, some previous studies have attempted to improve Gradient-TD algorithms by adding regularization terms to the objective function. These approaches have used $l_{1}$ regularization on weights to learn sparse representations of value functions [Liu:12], or $l_{2}$ regularization on weights [Ghiassian:20]. Our work is different from these approaches in two ways. First, whereas these previous studies investigated a variant of TD learning with gradient corrections, we take the GTD2 algorithm as our starting point. Second, unlike these previous approaches, our approach modifies the error objective function by using a distance constraint rather than a penalty on weights. The distance constraint works by restricting the search to some region around the evaluation obtained in the most recent time step. With this modification, our method provides a learning rule that contains second-order differences in successive parameter updates. 
Our approach is similar to trust region policy optimization [Schulman:15] or relative entropy policy search [Peters:10], which penalize large changes being learned in policy learning. In these methods, constrained optimization is used to update the policy by considering the constraint on some measure between the new policy and the old policy. Here, however, our aim is to find the optimal value function, and the regularization term uses the previous value function estimate to avoid drastic changes in the updating process. Our approach bears similarity to the natural gradient approach widely used in reinforcement learning [Amari:98, Bh:09, Degris:12, DT:14, Thomas:16], which also features a constrained optimization form. However, Gradient-DD is distinct from the natural gradient. The essential difference is that, unlike the natural gradient, Gradient-DD is a trust region method, which defines the trust region according to the difference between the current value and the value obtained from the previous step. From the computational cost viewpoint, unlike natural TD [DT:14], which needs to update an estimate of the metric tensor, the computational cost of Gradient-DD is essentially the same as that of GTD2. ## 2 Gradient descent method for off-policy temporal difference learning ### 2.1 Problem definition and background In this section, we formalize the problem of learning the value function for a given policy under the Markov decision process (MDP) framework. In this framework, the agent interacts with the environment over a sequence of discrete time steps, $t=1,2,\ldots$. At each time step the agent observes a state $s_{t}\in\mathcal{S}$ and selects an action $a_{t}\in\mathcal{A}$. In response, the environment emits a reward $r_{t}\in\mathbb{R}$ and transitions the agent to its next state $s_{t+1}\in\mathcal{S}$. The state and action sets are finite. State transitions are stochastic and dependent on the immediately preceding state and action. 
Rewards are stochastic and dependent on the preceding state and action, as well as on the next state. The process generating the agent’s actions is termed the behavior policy. In off-policy learning, this behavior policy is in general different from the target policy $\pi:\mathcal{S}\rightarrow\mathcal{A}$. The objective is to learn an approximation to the state-value function under the target policy in a particular environment: $V(s)=\text{E}_{\pi}\left[\sum\nolimits_{t=1}^{\infty}\gamma^{t-1}r_{t}|s_{1}=s\right],$ (1) where $\gamma\in[0,1)$ is the discount rate. In problems for which the state space is large, it is practical to approximate the value function. In this paper we consider linear function approximation, where states are mapped to feature vectors with fewer components than the number of states. Specifically, for each state $s\in\mathcal{S}$ there is a corresponding feature vector $\mathbf{x}(s)\in\mathbb{R}^{p}$, with $p\leq|\mathcal{S}|$, such that the approximate value function is given by $V_{\mathbf{w}}(s):=\mathbf{w}^{\top}\mathbf{x}(s).$ (2) The goal is then to learn the parameters $\mathbf{w}$ such that $V_{\mathbf{w}}(s)\approx V(s)$. ### 2.2 Gradient temporal difference learning A major breakthrough for the study of the convergence properties of MDP systems came with the introduction of the GTD and GTD2 learning algorithms [SSM09, SMP09]. We begin by briefly recapitulating the GTD algorithms, which we will then extend in the following sections. To begin, we introduce the Bellman operator $\mathbf{B}$ such that the true value function $\mathbf{V}\in\mathbb{R}^{|\mathcal{S}|}$ satisfies the Bellman equation: $\mathbf{V}=\mathbf{R}+\gamma\mathbf{P}\mathbf{V}=:\mathbf{B}\mathbf{V},$ where $\mathbf{R}$ is the reward vector with components $\text{E}(r_{n}|s_{n}=s)$, and $\mathbf{P}$ is a matrix of the state transition probabilities under the behavior policy. 
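As a minimal numerical sketch of the linear approximation in Eqn. (2) (the state count, feature values, and parameters below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical feature map for a 4-state MDP with p = 2 features per state.
# Row s of X is the feature vector x(s); Eqn. (2) gives V_w(s) = w^T x(s).
X = np.array([[1.0, 0.0],
              [1.0, 0.5],
              [0.5, 1.0],
              [0.0, 1.0]])

w = np.array([0.2, -0.4])  # parameter vector to be learned

def approx_value(s, w):
    """Approximate state value V_w(s) = w^T x(s)."""
    return float(X[s] @ w)

V = X @ w  # all state values at once: V_w = X w
```

Because $p\leq|\mathcal{S}|$, learning adjusts only the $p$ entries of $\mathbf{w}$ rather than one value per state.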
In temporal difference methods, an appropriate objective function should minimize the difference between the approximate value function and the solution to the Bellman equation. Having defined the Bellman operator, we next introduce the projection operator $\mathbf{\Pi}$, which takes any value function $\mathbf{V}$ and projects it to the nearest value function within the space of approximate value functions of the form Eqn. (2). Letting $\mathbf{X}$ be the matrix whose rows are $\mathbf{x}(s)$, the approximate value function can be expressed as $\mathbf{V}_{\mathbf{w}}=\mathbf{X}\mathbf{w}$. The projection operator is then given by $\mathbf{\Pi}=\mathbf{X}(\mathbf{X}^{\top}\mathbf{D}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{D},$ where the matrix $\mathbf{D}$ is diagonal, with each diagonal element $d_{s}$ corresponding to the probability of visiting state $s$. The natural measure of how closely the approximation $\mathbf{V}_{\mathbf{w}}$ satisfies the Bellman equation is the mean-squared Bellman error: $\text{MSBE}(\mathbf{w})=\|\mathbf{V}_{\mathbf{w}}-\mathbf{B}\mathbf{V}_{\mathbf{w}}\|_{\mathbf{D}}^{2},$ (3) where the norm is weighted by $\mathbf{D}$, such that $\|\mathbf{V}\|^{2}_{\mathbf{D}}=\mathbf{V}^{\top}\mathbf{D}\mathbf{V}$. However, because the Bellman operator follows the underlying state dynamics of the Markov chain, irrespective of the structure of the linear function approximator, $\mathbf{B}\mathbf{V}_{\mathbf{w}}$ will typically not be representable as $\mathbf{V}_{\mathbf{w}}$ for any $\mathbf{w}$. An alternative objective function, therefore, is the mean squared projected Bellman error (MSPBE), which we define as $\displaystyle J(\mathbf{w})=\|\mathbf{V}_{\mathbf{w}}-\mathbf{\Pi}\mathbf{B}\mathbf{V}_{\mathbf{w}}\|_{\mathbf{D}}^{2}.$ (4) Following [SMP09], our objective is to minimize this error measure. 
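The MSPBE of Eqn. (4) can be evaluated directly on a small chain. The sketch below uses a hypothetical 3-state cyclic MDP with illustrative features; the TD fixed point, at which the projected error vanishes, solves the associated linear system.

```python
import numpy as np

gamma = 0.9
# Hypothetical 3-state chain: deterministic cycle 0 -> 1 -> 2 -> 0.
P = np.array([[0.0, 1.0, 0.0],        # state-transition matrix
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
R = np.array([0.0, 0.0, 1.0])         # reward vector, E(r_n | s_n = s)
D = np.eye(3) / 3.0                   # diagonal of visitation probabilities d_s
X = np.array([[1.0, 0.0],             # feature matrix, rows x(s)
              [0.5, 0.5],
              [0.0, 1.0]])

# Projection operator Pi = X (X^T D X)^{-1} X^T D.
Pi = X @ np.linalg.inv(X.T @ D @ X) @ X.T @ D

def mspbe(w):
    """Mean squared projected Bellman error, Eqn. (4)."""
    V = X @ w                  # V_w = X w
    BV = R + gamma * P @ V     # Bellman operator B V_w
    err = V - Pi @ BV
    return float(err @ D @ err)

# TD fixed point: X^T D (B V_w - V_w) = 0, i.e. the linear system A w = b.
A = X.T @ D @ (X - gamma * P @ X)
b = X.T @ D @ R
w_star = np.linalg.solve(A, b)
```

At `w_star` the projected Bellman image of $\mathbf{V}_{\mathbf{w}}$ coincides with $\mathbf{V}_{\mathbf{w}}$ itself, so the MSPBE is zero up to round-off.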
As usual in stochastic gradient descent, the weights at each time step are then updated by $\Delta\mathbf{w}=-\alpha\nabla J(\mathbf{w})$, where $\alpha>0$, and $\displaystyle-\frac{1}{2}\nabla J(\mathbf{w})$ $\displaystyle=-\text{E}[(\gamma\mathbf{x}_{n+1}-\mathbf{x}_{n})\mathbf{x}_{n}^{\top}][\text{E}(\mathbf{x}_{n}\mathbf{x}_{n}^{\top})]^{-1}\text{E}(\delta_{n}\mathbf{x}_{n}).$ (5) For notational simplicity, we have denoted the feature vector associated with $s_{n}$ as $\mathbf{x}_{n}=\mathbf{x}({s_{n}})$. We have also introduced the temporal difference error $\delta_{n}=r_{n}+(\gamma\mathbf{x}_{n+1}-\mathbf{x}_{n})^{\top}\mathbf{w}_{n}$. Let $\boldsymbol{\eta}_{n}$ denote the estimate of $[\text{E}(\mathbf{x}_{n}\mathbf{x}_{n}^{\top})]^{-1}\text{E}(\delta_{n}\mathbf{x}_{n})$ at time step $n$. Because the factors in Eqn. (5) can be directly sampled, the resulting updates in each step are $\displaystyle\delta_{n}=$ $\displaystyle r_{n}+(\gamma\mathbf{x}_{n+1}-\mathbf{x}_{n})^{\top}\mathbf{w}_{n}$ $\displaystyle\boldsymbol{\eta}_{n+1}=$ $\displaystyle\boldsymbol{\eta}_{n}+\beta_{n}(\delta_{n}-\mathbf{x}_{n}^{\top}\boldsymbol{\eta}_{n})\mathbf{x}_{n}$ $\displaystyle\mathbf{w}_{n+1}=$ $\displaystyle\mathbf{w}_{n}-\alpha_{n}(\gamma\mathbf{x}_{n+1}-\mathbf{x}_{n})(\mathbf{x}_{n}^{\top}\boldsymbol{\eta}_{n}).$ (6) These updates define the GTD2 learning algorithm, which we will build upon in the following section. ## 3 Gradient descent temporal difference-difference learning In this section we modify the objective function by additionally considering the difference between $\mathbf{V}_{\mathbf{w}}$ and $\mathbf{V}_{\mathbf{w}_{n-1}}$, which denotes the value function estimate at step $n-1$ of the optimization. We propose a new objective $J_{\text{GDD}}(\mathbf{w}|\mathbf{w}_{n-1})$, where the notation “$\mathbf{w}|\mathbf{w}_{n-1}$” in parentheses means that the objective is defined given $\mathbf{V}_{\mathbf{w}_{n-1}}$ of the previous time step $n-1$.
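As an illustrative check of the GTD2 updates in Eqn. (6), the sketch below runs them on a hypothetical two-state chain with a single scalar feature per state, replacing the sampled factors with their exact expectations so that the iteration is deterministic; all numerical values are assumptions for illustration, not settings from the paper. With scalar features the relevant expectations reduce to scalars $A=\text{E}[x_{n}(x_{n}-\gamma x_{n+1})]$, $b=\text{E}(r_{n}x_{n})$, and $C=\text{E}(x_{n}^{2})$, and the iteration converges to the TD fixed point $w^{*}=b/A$.

```python
# Deterministic (expected-update) GTD2 on a hypothetical 2-state chain with
# one scalar feature per state; all numbers are illustrative assumptions.
P = [[0.9, 0.1], [0.2, 0.8]]
R = [1.0, 0.0]
x = [1.0, 0.5]              # scalar feature of each state
d = [2.0 / 3.0, 1.0 / 3.0]  # stationary distribution of P (diagonal of D)
gamma = 0.5

# Scalar expectations: A = E[x_n (x_n - gamma x_{n+1})], b = E[r_n x_n], C = E[x_n^2].
Px = [sum(P[s][t] * x[t] for t in range(2)) for s in range(2)]
A = sum(d[s] * x[s] * (x[s] - gamma * Px[s]) for s in range(2))
b = sum(d[s] * x[s] * R[s] for s in range(2))
C = sum(d[s] * x[s] ** 2 for s in range(2))

alpha = beta = 0.1
w = eta = 0.0
for _ in range(2000):
    # eta step of Eqn. (6): in expectation, E[delta_n x_n] = b - A * w and E[x_n^2] = C.
    eta += beta * ((b - A * w) - C * eta)
    # w step of Eqn. (6): -E[(gamma x_{n+1} - x_n) x_n] * eta = A * eta.
    w += alpha * A * eta

# At convergence eta = 0 and A * w = b, i.e. w equals the TD fixed point b / A.
```

The fixed point of the coupled iteration satisfies $\boldsymbol{\eta}=0$ and $Aw=b$, which is exactly the TD fixed point used in Theorem 1 below.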
Specifically, we modify Eqn. (4) as follows: $J_{\text{GDD}}(\mathbf{w}|\mathbf{w}_{n-1})=J(\mathbf{w})+\kappa\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2},$ (7) where $\kappa\geq 0$ is the regularization parameter. We show in Section A.1 of the appendix that minimizing Eqn. (7) is equivalent to the following optimization $\arg\min\limits_{\mathbf{w}}J(\mathbf{w})\text{ s.t. }\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}\leq\mu$ (8) where $\mu>0$ is a parameter which becomes large when $\kappa$ is small, so that the MSPBE objective is recovered as $\mu\to\infty$, equivalent to $\kappa\to 0$ in Eqn. (7). Figure 1: Schematic diagram of Gradient-DD learning with $\mathbf{w}\in\mathbb{R}^{2}$. Rather than updating $\mathbf{w}$ directly along the gradient of the MSPBE (black arrow), the update rule selects $\mathbf{w}_{n}$ (red arrow) that minimizes the MSPBE while satisfying the constraint $\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}\leq\mu$ (shaded ellipse). Rather than simply minimizing the MSPBE, the agent uses its most recent estimate as an anchor, choosing a $\mathbf{w}$ that minimizes the MSPBE subject to the constraint that the estimated value function should not change too greatly, as illustrated in Fig. 1. In effect, the regularization term encourages searching around the estimate from the previous time step, which is especially helpful when the state space is large. Eqn. (8) shows that the regularized objective implements a trust-region approach, which seeks a direction that attains the best improvement possible subject to the distance constraint. The trust region is defined by the value distance rather than the weight distance, meaning that Gradient-DD also makes use of the natural gradient of the objective around $\mathbf{w}_{n-1}$ rather than around $\mathbf{w}_{n}$ (see Section A.2 of the appendix for details).
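The trust-region effect of the regularizer can be visualized in one dimension. The sketch below evaluates $J_{\text{GDD}}(w|w_{\text{prev}})$ from Eqn. (7) on a hypothetical two-state chain with one scalar feature per state (the chain, features, and distribution are illustrative assumptions, not values from the paper): the minimizer of the regularized objective lands strictly between the previous iterate and the MSPBE minimizer, with $\kappa$ controlling the pull.

```python
# Scalar sketch of the regularized objective J_GDD(w | w_prev) of Eqn. (7);
# the chain, features, and distribution are illustrative assumptions.
P = [[0.9, 0.1], [0.2, 0.8]]
R = [1.0, 0.0]
x = [1.0, 0.5]
d = [2.0 / 3.0, 1.0 / 3.0]  # stationary distribution of P (diagonal of D)
gamma = 0.5

def mspbe(w):
    """J(w) = || V_w - Pi B V_w ||_D^2 with p = 1, so Pi is a scalar projection."""
    V = [w * xs for xs in x]
    B = [R[s] + gamma * sum(P[s][t] * V[t] for t in range(2)) for s in range(2)]
    coef = sum(d[s] * x[s] * B[s] for s in range(2)) / sum(d[s] * x[s] ** 2 for s in range(2))
    return sum(d[s] * (V[s] - coef * x[s]) ** 2 for s in range(2))

def j_gdd(w, w_prev, kappa):
    """Eqn. (7): MSPBE plus the D-weighted squared distance to the previous iterate."""
    penalty = sum(d[s] * ((w - w_prev) * x[s]) ** 2 for s in range(2))
    return mspbe(w) + kappa * penalty

# Grid search: the regularizer pulls the minimizer back toward w_prev,
# implementing the trust region of Eqn. (8).
grid = [i * 0.001 for i in range(3001)]
w_star = min(grid, key=mspbe)                                  # MSPBE minimizer
w_prev = 0.5
w_gdd = min(grid, key=lambda w: j_gdd(w, w_prev, kappa=1.0))   # regularized minimizer
```

As $\kappa\to 0$ the penalty vanishes and the regularized minimizer recovers the MSPBE minimizer, matching the limit discussed after Eqn. (8).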
In this sense, our approach can be interpreted as a trust-region method that makes use of natural gradient information to prevent the estimated value function from changing too drastically. For comparison with related approaches using natural gradients, in Fig. 8 of the appendix we compare the empirical performance of our algorithm with natural GTD2 and natural TDC [DT:14] using the random walk task introduced below in Section 5. With these considerations in mind, the negative gradient of $J_{\text{GDD}}(\mathbf{w}|\mathbf{w}_{n-1})$ is $\displaystyle-\frac{1}{2}\nabla J_{\text{GDD}}(\mathbf{w}|\mathbf{w}_{n-1})$ $\displaystyle=$ $\displaystyle-\text{E}[(\gamma\mathbf{x}_{n+1}-\mathbf{x}_{n})\mathbf{x}_{n}^{\top}][\text{E}(\mathbf{x}_{n}\mathbf{x}_{n}^{\top})]^{-1}\text{E}(\delta_{n}\mathbf{x}_{n})-\kappa\text{E}[(\mathbf{x}_{n}^{\top}\mathbf{w}_{n}-\mathbf{x}_{n}^{\top}\mathbf{w}_{n-1})\mathbf{x}_{n}].$ (9) Because the terms in Eqn. (9) can be directly sampled, the stochastic gradient descent updates are given by $\displaystyle\delta_{n}=$ $\displaystyle r_{n}+(\gamma\mathbf{x}_{n+1}-\mathbf{x}_{n})^{\top}\mathbf{w}_{n}$ $\displaystyle\boldsymbol{\eta}_{n+1}=$ $\displaystyle\boldsymbol{\eta}_{n}+\alpha_{n}(\delta_{n}-\mathbf{x}_{n}^{\top}\boldsymbol{\eta}_{n})\mathbf{x}_{n}$ $\displaystyle\mathbf{w}_{n+1}=$ $\displaystyle\mathbf{w}_{n}-\kappa\alpha_{n}(\mathbf{x}_{n}^{\top}\mathbf{w}_{n}-\mathbf{x}_{n}^{\top}\mathbf{w}_{n-1})\mathbf{x}_{n}-\alpha_{n}(\gamma\mathbf{x}_{n+1}-\mathbf{x}_{n})(\mathbf{x}_{n}^{\top}\boldsymbol{\eta}_{n}).$ (10) These update equations define the Gradient-DD method, in which the GTD2 update equations in Eqn. (6) are generalized by including a second-order update term in the third update equation, where this term originates from the squared bias term in the objective (7). Since Gradient-DD is not sensitive to the step size of updating $\boldsymbol{\eta}$ (see Fig.
7 in the appendix), the updates of Gradient-DD only have a single shared step size $\alpha_{n}$, rather than the two step sizes $\alpha_{n},\beta_{n}$ used by GTD2 and TDC. It is worth noting that the computational cost of our algorithm is essentially the same as that of GTD2. In the following sections, we shall analytically and numerically investigate the convergence and performance of Gradient-DD learning. ## 4 Convergence Analysis In this section we establish the convergence of Gradient-DD learning. Denote $\mathbf{G}_{n}=\left[\begin{array}[]{cc}\mathbf{x}_{n}\mathbf{x}_{n}^{\top}&\mathbf{x}_{n}(\mathbf{x}_{n}-\gamma\mathbf{x}_{n+1})^{\top}\\\ -(\mathbf{x}_{n}-\gamma\mathbf{x}_{n+1})\mathbf{x}_{n}^{\top}&\mathbf{0}\end{array}\right],\text{ and }\mathbf{H}_{n}=\left[\begin{array}[]{cc}\mathbf{0}&\mathbf{0}\\\ \mathbf{0}&\mathbf{x}_{n}\mathbf{x}_{n}^{\top}\end{array}\right]$. We rewrite the update rules in Eqn. (10) as a single iteration in a combined parameter vector with $2p$ components, $\boldsymbol{\rho}_{n}=(\boldsymbol{\eta}_{n}^{\top},\mathbf{w}_{n}^{\top})^{\top}$, and a new reward-related vector with $2p$ components, $\mathbf{g}_{n+1}=(r_{n}\mathbf{x}_{n}^{\top},\mathbf{0}^{\top})^{\top}$, as follows: $\displaystyle\boldsymbol{\rho}_{n+1}=$ $\displaystyle\boldsymbol{\rho}_{n}-\kappa\alpha_{n}\mathbf{H}_{n}(\boldsymbol{\rho}_{n}-\boldsymbol{\rho}_{n-1})+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}),$ (11) ###### Theorem 1. Consider the update rules (11) with step-size sequence $\alpha_{n}$. Let the TD fixed point be $\mathbf{w}^{*}$, such that $\mathbf{V}_{\mathbf{w}^{*}}=\mathbf{\Pi}\mathbf{B}\mathbf{V}_{\mathbf{w}^{*}}$. Suppose that (A0) $\alpha_{n}\in(0,1)$, $\sum\nolimits_{n=1}^{\infty}\alpha_{n}=\infty$, $\sum\nolimits_{n=1}^{\infty}\alpha_{n}^{2}<\infty$, (A1) $(\mathbf{x}_{n},r_{n},\mathbf{x}_{n+1})$ is an i.i.d.
sequence with uniformly bounded second moments, (A2) $\text{E}[(\mathbf{x}_{n}-\gamma\mathbf{x}_{n+1})\mathbf{x}_{n}^{\top}]$ and $\text{E}(\mathbf{x}_{n}\mathbf{x}_{n}^{\top})$ are non-singular, (A3) $\sup_{n}\|\boldsymbol{\rho}_{n+1}-\boldsymbol{\rho}_{n}\|$ is bounded in probability, (A4) $\kappa\in[0,\infty)$. Then as $n\rightarrow\infty$, $\mathbf{w}_{n}\rightarrow\mathbf{w}^{*}$ with probability 1. Due to the second-order difference term in Eqn. (11), the analysis framework in [BM:00] does not directly apply to the Gradient-DD algorithm when (A0) holds, i.e., when the step size is tapered. Likewise, the two-timescale convergence analysis [Bh:09] is also not directly applicable. Defining $\mathbf{u}_{n+1}=\boldsymbol{\rho}_{n+1}-\boldsymbol{\rho}_{n}$, we rewrite the iterative process in Eqn. (11) into two parallel processes which are given by $\displaystyle\boldsymbol{\rho}_{n+1}$ $\displaystyle=\boldsymbol{\rho}_{n}-\kappa\alpha_{n}\mathbf{H}_{n}\mathbf{u}_{n}+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}),$ (12) $\displaystyle\mathbf{u}_{n+1}$ $\displaystyle=-\kappa\alpha_{n}\mathbf{H}_{n}\mathbf{u}_{n}+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}).$ (13) We analyze the parallel processes Eqns. (12) and (13) instead of directly analyzing Eqn. (11). Our proof has three steps. First, we show that $\sup_{n}\|\boldsymbol{\rho}_{n}\|$ is bounded by applying the stability result for stochastic approximation [BM:00] to the recursion Eqn. (12). Second, based on this result, we show that $\mathbf{u}_{n}$ goes to 0 in probability by analyzing the recursion Eqn. (13). Finally, using the result that $\mathbf{u}_{n}$ goes to 0 in probability and applying the convergence result for stochastic approximation [BM:00] to the recursion Eqn. (12), we show that $\boldsymbol{\rho}_{n}$ goes to the TD fixed point, which is given by the solution of $\mathbf{G}\boldsymbol{\rho}+\mathbf{g}=0$.
The details are provided in Section A.3 of the Appendix. Theorem 1 shows that, under these mild conditions, Gradient-DD retains the convergence guarantee of GTD2. The assumptions (A0), (A1), and (A2) are standard conditions in the convergence analysis of Gradient TD learning algorithms [SSM09, SMP09, Maei:11]. Assumption (A3) is weak, since it requires only that the incremental update in each step is bounded in probability. Assumption (A4) requires that $\kappa$ is a constant, meaning $\kappa=O(1)$. Given this assumption, the contribution of the term $\kappa\mathbf{H}_{n}\mathbf{u}_{n}$ is controlled by $\alpha_{n}$ as $n\rightarrow\infty$. ## 5 Empirical Study In this section, we assess the practical utility of the Gradient-DD method in numerical experiments. To validate the performance of Gradient-DD learning, we compare it with GTD2 learning, TDC learning (TD with gradient correction [SMP09]), TDRC learning (TDC with regularized correction [Ghiassian:20]), and conventional TD learning, using both tabular and linear representations. We consider three tasks: a simple random walk task, the Boyan-chain task, and Baird’s off-policy counterexample. In each task, we evaluate the performance of a learning algorithm by the empirical root-mean-squared (RMS) error: $\sqrt{\sum\nolimits_{s\in\mathcal{S}}d_{s}(V_{\mathbf{w}_{n}}(s)-V(s))^{2}}$. We choose the empirical RMS error, rather than the root projected mean-squared error or the other measures used in [Gh:18, Ghiassian:20], because it directly measures the quantity of practical concern. We set $\kappa=1$. We also investigate the sensitivity to $\kappa$ in the Appendix, where we show that $\kappa=1$ is a good and natural choice in our empirical studies, although tuning $\kappa$ may be necessary for some tasks in practice. ### 5.1 Random walk task As a first test of Gradient-DD learning, we conducted a simple random walk task [Sutton:2018].
The random walk task has a linear arrangement of $m$ states plus an absorbing terminal state at each end. Thus there are $m+2$ sequential states, $S_{0},S_{1},\cdots,S_{m},S_{m+1}$, where $m=$ 10, 20, or 40. Every walk begins in the center state. At each step, the walk moves to a neighboring state, either to the right or to the left with equal probability. If either edge state ($S_{0}$ or $S_{m+1}$) is entered, the walk terminates. A walk’s outcome is defined to be $r=0$ at $S_{0}$ and $r=1$ at $S_{m+1}$. Our aim is to learn the value of each state $V(s)$, where the true values are $(1,\cdots,m)/(m+1)$. In all cases the approximate value function is initialized to the intermediate value $V_{0}(s)=0.5$. We first consider a tabular representation of the value function. We also consider a linear function approximation and obtain similar results, which are reported in Fig. 10 of the appendix. The learning rate $\alpha_{n}$ is tapered according to the schedule $\alpha_{n}=\alpha(10^{3}+1)/(10^{3}+n)$. We tune $\alpha\in\\{10^{-12/4},10^{-11/4},\cdots,10^{-1/4},1\\}$. We also consider constant step sizes and obtain similar results, which are reported in Fig. 9 of the appendix. For GTD2 and TDC, we set $\beta_{n}=\zeta\alpha_{n}$ with $\zeta\in\\{1/64,1/16,1/4,1,4\\}$. We first compare the methods by plotting the empirical RMS error, averaged over the final 100 episodes, as a function of step size $\alpha$ in Fig. 2, where 20,000 episodes are used. We also plot the average error of all episodes during training and report these results in Fig. 5 of the Appendix. From these figures, we can make several observations. (1) Gradient-DD clearly performs better than the GTD2 and TDC methods. This advantage is consistent across settings and grows as the state space becomes larger.
(2) Gradient-DD performs similarly to TDRC and conventional TD learning, with a similar dependence on $\alpha$, although Gradient-DD exhibits greater sensitivity to the value of $\alpha$ in the log domain than these other algorithms. In summary, Gradient-DD exhibits clear advantages over the GTD2 algorithm, and its performance is also as good as TDRC and conventional TD learning. Figure 2: The random walk task with tabular representation and tapering step size $\alpha_{n}=\alpha(10^{3}+1)/(10^{3}+n)$. Upper: Performance as a function of $\alpha$; Lower: performance over episodes. In each row, state space size 10 (left), 20 (middle), or 40 (right). The curves are averaged over 50 runs, with error bars denoting the standard error of the mean, though most are vanishingly small. Next we look closely at the performance during training in Fig. 2. For each method, we tuned $\alpha\in\\{10^{-12/4},\cdots,10^{-1/4},1\\}$ by minimizing the average error of the last 100 episodes. We also compare the performance when $\alpha$ is tuned by minimizing the average error over all episodes, and report the results in Fig. 5 of the appendix. From these results, we draw several observations. (1) For all conditions tested, Gradient-DD converges much more rapidly than GTD2 and TDC. The results also indicate that Gradient-DD even converges as fast as TDRC and conventional TD learning. (2) The advantage of Gradient-DD learning over GTD2 grows as the state space increases in size. (3) Gradient-DD performs consistently well under both the constant and the tapered step-size settings. In summary, the Gradient-DD learning curves in this task show improvements over other gradient-based methods and performance that matches conventional TD learning. Like TDRC, the updates of Gradient-DD only have a single shared step size $\alpha_{n}$, i.e., $\beta_{n}=\alpha_{n}$, rather than two independent step sizes $\alpha_{n}$ and $\beta_{n}$ as in the GTD2 and TDC algorithms.
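The tabular experiment above can be sketched by specializing the Gradient-DD updates of Eqn. (10) to one-hot features, where each update touches only the visited states. This is an illustrative reimplementation under stated assumptions (here $m=10$, $\gamma=1$ for the episodic task, $\kappa=1$, and the tapered schedule above), not the authors' code.

```python
import random

# Tabular Gradient-DD (Eqn. 10) on the m = 10 random walk; one-hot features
# make every inner product a single array lookup. Illustrative sketch only.
random.seed(0)
m, gamma, kappa = 10, 1.0, 1.0
true_v = [(i + 1) / (m + 1) for i in range(m)]  # true values (1, ..., m)/(m+1)
w = [0.5] * m        # current weights w_n, initialized to 0.5 as in the text
w_prev = w[:]        # previous iterate w_{n-1}
eta = [0.0] * m

for episode in range(20000):
    alpha = 0.1 * (10**3 + 1) / (10**3 + episode + 1)  # tapered schedule
    s = m // 2                                         # start near the center
    while True:
        s2 = s + random.choice((-1, 1))
        terminal = s2 < 0 or s2 >= m
        r = 1.0 if s2 >= m else 0.0                    # outcome reward
        delta = r + (0.0 if terminal else gamma * w[s2]) - w[s]
        eta_s = eta[s]                                 # w-update uses eta_n (pre-update)
        eta[s] += alpha * (delta - eta_s)
        new_w = w[:]
        new_w[s] += alpha * eta_s - kappa * alpha * (w[s] - w_prev[s])
        if not terminal:
            new_w[s2] -= alpha * gamma * eta_s
        w_prev, w = w, new_w
        if terminal:
            break
        s = s2

rms = (sum((w[i] - true_v[i]) ** 2 for i in range(m)) / m) ** 0.5
```

After training, the empirical RMS error is far below its initial value of about 0.26, and the learned values ramp upward toward the rewarded terminal.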
A possible concern is that the performance gains of our second-order algorithm could just as easily be obtained with existing methods by adopting this two-timescale approach, where the value function weights are updated with a smaller step size than the second set of weights. Hence, in addition to investigating the effects of the learning rate, the size of the state space, and the magnitude of the regularization parameter, we also investigate the effect of using distinct values for the two learning rates $\alpha_{n}$ and $\beta_{n}$, rather than our default $\beta_{n}=\alpha_{n}$ (i.e., $\zeta=1$). To do this, we set $\beta_{n}=\zeta\alpha_{n}$ for Gradient-DD, with $\zeta\in\\{1/64,1/16,1/4,1,4\\}$, and report the results in Fig. 7 of the appendix. The results show that Gradient-DD performs comparably well across these values of $\zeta$, providing evidence that the second-order difference term in our approach provides an improvement beyond what can be obtained with previous gradient-based methods using the two-timescale approach. ### 5.2 Boyan-chain task We next investigate Gradient-DD learning on the Boyan-chain problem, which is a standard task for testing linear value-function approximation [Boyan02]. This task has $4p-3$ states, each of which is represented by a $p$-dimensional feature vector, with $p=20,50,$ or $100$. The $p$-dimensional representation for every fourth state from the start is $[1,0,\cdots,0]$ for state $s_{1}$, $[0,1,0,\cdots,0]$ for $s_{5}$, $\cdots$, and $[0,0,\cdots,0,1]$ for the terminal state $s_{4p-3}$. The representations for the remaining states are obtained by linearly interpolating between these. The optimal coefficients of the feature vector are $(-4(p-1),-4(p-2),\cdots,0)/5$. In each state, except for the last one before the end, there are two possible actions: move forward one step or move forward two steps, where each action occurs with probability 0.5. Both actions lead to a reward of -0.3.
The last state before the end has a single action, moving forward to the terminal state with reward -0.2. We tune $\alpha\in\\{10^{-2},10^{-1.5},10^{-1},10^{-3/4},10^{-1/2},10^{-1/4},10^{-1/8},1,10^{1/8},10^{1/4}\\}$ for each method by minimizing the average error of the final 100 episodes. The step size is tapered according to the schedule $\alpha_{n}=\alpha(2\times 10^{3}+1)/(2\times 10^{3}+n)$. For GTD2 and TDC, we set $\beta_{n}=\zeta\alpha_{n}$ with $\zeta\in\\{1/64,1/16,1/4,1,4\\}$. In this task, we set $\gamma=1$. We report the performance as a function of $\alpha$ and the performance over episodes in Fig. 3, where $\alpha$ is tuned by minimizing the average error of the last 100 episodes. We also compare the performance based on the average error of all episodes during training and report the results in Fig. 11 of the appendix. These figures lead to conclusions similar to those already drawn in the random walk task. (1) Gradient-DD converges much faster than GTD2 and TDC, and generally converges to better values. (2) Gradient-DD is competitive with TDRC and conventional TD learning, despite being somewhat slower in the early episodes. (3) The improvement over GTD2 or TDC grows as the state space becomes larger. (4) Gradient-DD performs better with a tapered step size than with a constant one. Figure 3: The Boyan Chain task with linear approximation and tapering step size $\alpha_{n}=\alpha(2\times 10^{3}+1)/(2\times 10^{3}+n)$. Upper: Performance as a function of $\alpha$; Lower: performance over episodes. In each row, the feature size is 20 (left), 50 (middle), or 100 (right). The curves are averaged over 50 runs, with error bars denoting the standard error of the mean, though most are vanishingly small across runs.
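The interpolated feature construction described above can be sketched directly; this is an illustrative reconstruction from the description, not the authors' code. Every fourth state is one-hot, and each intermediate state $s_{4j+1+i}$ mixes the neighboring anchors with weights $1-i/4$ and $i/4$.

```python
# Boyan-chain features: 4p - 3 states, p-dimensional vectors, every fourth
# state one-hot, intermediate states linearly interpolated between anchors.
def boyan_features(p):
    n_states = 4 * p - 3
    feats = []
    for k in range(n_states):   # k = 0 corresponds to state s_1
        j, i = divmod(k, 4)     # state s_{4j+1+i} sits between anchors j and j+1
        phi = [0.0] * p
        phi[j] = 1.0 - i / 4.0
        if i > 0:
            phi[j + 1] = i / 4.0
        feats.append(phi)
    return feats

feats = boyan_features(20)      # p = 20 gives 77 states
```

Each feature vector sums to 1, $s_{1}$ maps to $[1,0,\cdots,0]$, and the terminal state $s_{4p-3}$ maps to $[0,\cdots,0,1]$, matching the description above.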
### 5.3 Baird’s off-policy counterexample We also verify the performance of Gradient-DD on Baird’s off-policy counterexample [Baird:95, Sutton:2018], illustrated schematically in Fig. 4, for which TD learning famously diverges. We show results from Baird’s counterexample with $N=7,20$ states. The reward is always zero, and the agent learns a linear value approximation with $N+1$ weights $w_{1},\cdots,w_{N+1}$: the estimated value of the $j$-th state is $2w_{j}+w_{N+1}$ for $j\leq N-1$ and that of the $N$-th state is $w_{N}+2w_{N+1}$. In this task, the behavior policy takes the dashed action with probability $(N-1)/N$ and the solid action with probability $1/N$, so the importance sampling ratio for the solid action is $N$. Thus, comparing different state sizes illustrates the impact of importance sampling ratios on these algorithms. The initial weight values are $(1,\cdots,1,10,1)^{\top}$. A constant step size $\alpha$ is used in this task. We set $\gamma=0.99$. For TDC and GTD2, we tune $\zeta\in\\{4^{-2},4^{-1},1,4^{2},4^{3}\\}$. We also tune $\alpha$ for TDC over a wider range. For Gradient-DD, we tune $\kappa\in\\{4^{-1},1,4\\}$. We tune $\alpha$ separately for each algorithm by minimizing the average error from the final 100 episodes. Fig. 4 demonstrates that Gradient-DD works better on this counterexample than GTD2, TDC, and TDRC. It is worth observing that when the state size is 20, TDRC becomes unstable, suggesting that a severe imbalance of importance sampling ratios may destabilize TDRC. We also note that, because the linear approximation leaves a residual error in the value estimation due to the projection error, the RMS errors of GTD2, TDC, and TDRC in this task do not go to zero. In contrast, the errors from our Gradient-DD converge to zero. (a) Illustration of the extended Baird’s off-policy counterexample. The “solid” action usually goes to the $N$-th state, and the “dashed” action usually goes to one of the other $N-1$ states, each with equal probability. (b) The performance of various algorithms.
Figure 4: Baird’s off-policy counterexample. Upper in (b): Performance as a function of $\alpha$; Lower in (b): performance over episodes. From left to right in (b): 7-state and 20-state. ## 6 Conclusion and discussion In this work, we have proposed Gradient-DD learning, a new gradient-descent-based TD learning algorithm. The algorithm is based on a modification of the projected Bellman error objective function for value function approximation, introducing a second-order difference term. The algorithm significantly improves upon existing methods for gradient-based TD learning, obtaining better convergence performance than conventional linear TD learning. Since GTD learning was originally proposed, the Gradient-TD family of algorithms has been extended to incorporate eligibility traces and learning optimal policies [Maei:10, GS:14], as well as for application to neural networks [Maei:11]. Additionally, many variants of the vanilla Gradient-TD methods have been proposed, including HTD [Hackman12] and Proximal Gradient-TD [Liu16]. Because Gradient-DD simply modifies the GTD2 objective by adding a squared-bias term, it may be extended and combined with these other methods, potentially broadening its utility for more complicated tasks. One potential limitation of our method is that it introduces an additional hyperparameter relative to similar gradient-based algorithms, which increases the computational requirements for hyperparameter optimization. This is somewhat mitigated by our finding that the algorithm’s performance is not particularly sensitive to the value of $\kappa$, and that $\kappa\sim 1$ was found to be a good choice for the range of environments that we considered.
Another potential limitation is that we have focused on value function prediction in the two simple cases of tabular representations and linear approximation; this has enabled us to provide convergence guarantees, but not yet for the case of nonlinear function approximation. An especially interesting direction for future study will be the application of Gradient-DD learning to tasks requiring more complex representations, including neural network implementations. Such approaches are especially useful in cases where state spaces are large, and indeed we have found in our results that Gradient-DD seems to confer the greatest advantage over other methods in such cases. Intuitively, we expect that this is because the difference between the optimal update direction and that chosen by gradient descent becomes greater in higher-dimensional spaces (cf. Fig. 1). This performance benefit in large state spaces suggests that Gradient-DD may be of practical use for these more challenging cases. ## Acknowledgments and Disclosure of Funding This work was partially supported by the National Natural Science Foundation of China (No.11871459) and by the Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01). Support for this work was also provided by the National Science Foundation NeuroNex program (DBI-1707398) and the Gatsby Charitable Foundation. ## References * [1] R.S. Sutton, H.R. Maei, D. Precup, S. Bhatnagar, D. Silver, Cs. Szepesvári, and E. Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th International Conference on Machine Learning, 2009. * [2] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Machine Learning, pages 30–37, 1995. * [3] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. * [4] D.
Precup, R. S. Sutton, and S. Singh. Eligibility traces for off-policy policy evaluation. In Proceedings of the 17th International Conference on Machine Learning, 2000. * [5] A. R. Mahmood, H. van Hasselt, and R. S. Sutton. Weighted importance sampling for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems 27, 2014. * [6] R. S. Sutton, Cs. Szepesvári, and H. R. Maei. A convergent O(n) algorithm for off-policy temporal difference learning with linear function approximation. In Advances in Neural Information Processing Systems 21, 2009. * [7] S. Ghiassian, A. Patterson, S. Garg, D. Gupta, A. White, and M. White. Gradient temporal-difference learning with regularized corrections. In International Conference on Machine Learning, 2020. * [8] A. White and M. White. Investigating practical linear temporal difference learning. In International Conference on Autonomous Agents and Multi-Agent Systems, 2016. * [9] S. Ghiassian, A. Patterson, M. White, R.S. Sutton, and A. White. Online off-policy prediction. arXiv:1811.02597, 2018. * [10] H.R. Maei and R.S. Sutton. $\text{GQ}(\lambda)$: A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In Proceedings of the 3rd Conference on Artificial General Intelligence, pages 91–96, 2010. * [11] M. Geist and B. Scherrer. Off-policy learning with eligibility traces: A survey. Journal of Machine Learning Research, 15:289–333, 2014. * [12] H.R. Maei. Gradient temporal-difference learning algorithms. PhD thesis, University of Alberta, Edmonton, 2011. * [13] B. Liu, J. Liu, M. Ghavamzadeh, S. Mahadevan, and M. Petrik. Finite-sample analysis of proximal gradient TD algorithms. In Proceedings of the 31st International Conference on Uncertainty in Artificial Intelligence, pages 504–513, 2015. * [14] S. S. Du, J. Chen, L. Li, L. Xiao, and D. Zhou. Stochastic variance reduction methods for policy evaluation.
In Proceedings of the 34th International Conference on Machine Learning, 2017. * [15] B. Liu, S. Mahadevan, and J. Liu. Regularized off-policy TD-learning. In Advances in Neural Information Processing Systems, 2012. * [16] John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning, 2015. * [17] J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010. * [18] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10:251–276, 1998. * [19] S. Bhatnagar, R.S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, 45:2471–2482, 2009. * [20] T. Degris, P.M. Pilarski, and R.S. Sutton. Model-free reinforcement learning with continuous action in practice. In Proceedings of the 2012 American Control Conference, 2012. * [21] W. Dabney and P. Thomas. Natural temporal difference learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2014. * [22] P. Thomas, B.C. Silva, C. Dann, and E. Brunskill. Energetic natural gradient descent. In Proceedings of the 33rd International Conference on Machine Learning, 2016. * [23] V.S. Borkar and S.P. Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447–469, 2000. * [24] Justin A. Boyan. Technical update: least-squares temporal difference learning. Machine Learning, 49:233–246, 2002. * [25] L. Hackman. Faster gradient-TD algorithms. Master’s thesis, University of Alberta, Edmonton, 2012. * [26] B. Liu, J. Liu, M. Ghavamzadeh, S. Mahadevan, and M. Petrik. Proximal gradient temporal difference learning algorithms. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16), 2016. * [27] P. Thomas.
GeNGA: A generalization of natural gradient ascent with positive and negative convergence results. In Proceedings of the 31st International Conference on Machine Learning, 2014. ## Appendix A Appendix ### A.1 On the equivalence of Eqns. (7) & (8) The Karush-Kuhn-Tucker conditions of Eqn. (8) are the following system of equations $\begin{cases}\frac{d}{d\mathbf{w}}J(\mathbf{w})+\kappa\frac{d}{d\mathbf{w}}(\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}-\mu)=0;\\\ \kappa(\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}-\mu)=0;\\\ \|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}\leq\mu;\\\ \kappa\geq 0.\end{cases}$ These equations are equivalent to $\begin{cases}\frac{d}{d\mathbf{w}}J(\mathbf{w})+\kappa\frac{d}{d\mathbf{w}}\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}=0\text{ and }\kappa>0,\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ if }\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}=\mu;\\\ \frac{d}{d\mathbf{w}}J(\mathbf{w})=0\text{ and }\kappa=0,\text{ if }\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}<\mu.\end{cases}$ Thus, for any $\mu>0$, there exists a $\kappa\geq 0$ such that $\frac{d}{d\mathbf{w}}J(\mathbf{w})+\kappa\frac{d}{d\mathbf{w}}\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}=0$. ### A.2 The relation to natural gradients In this section, we shall show that Gradient-DD is related to, but distinct from, the natural gradient. We thank a reviewer for pointing out the connection between Gradient-DD and the natural gradient. Following [Amari:98] or [Thomas:14], the natural gradient of $J(\mathbf{w})$ is the direction obtained by solving the following optimization: $\lim_{\epsilon\rightarrow 0}\arg\min\limits_{\Delta}J(\mathbf{w}+\epsilon\Delta)\text{ s.t.
}\epsilon^{2}\Delta^{\top}\mathbf{X}^{\top}\mathbf{D}\mathbf{X}\Delta\leq\mu.$ (A.1) We can note that this corresponds to the ordinary gradient in the case where the metric tensor $\mathbf{X}^{\top}\mathbf{DX}$ is proportional to the identity matrix. Now we rewrite Eqn. (8) as $\displaystyle\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}=(\mathbf{w}-\mathbf{w}_{n-1})^{\top}\mathbf{X}^{\top}\mathbf{D}\mathbf{X}(\mathbf{w}-\mathbf{w}_{n-1}).$ Denote $\epsilon\Delta=\mathbf{w}-\mathbf{w}_{n-1}$, where $\epsilon$ is the radius of the circle of $\mathbf{w}$ around $\mathbf{w}_{n-1}$ and $\Delta$ is a unit vector. Thus, we have $\displaystyle\|\mathbf{V}_{\mathbf{w}}-\mathbf{V}_{\mathbf{w}_{n-1}}\|_{\mathbf{D}}^{2}=\epsilon^{2}\Delta^{\top}\mathbf{X}^{\top}\mathbf{D}\mathbf{X}\Delta.$ For the MSPBE objective, we have $\displaystyle J(\mathbf{w})=J(\mathbf{w}_{n-1}+\mathbf{w}-\mathbf{w}_{n-1})=J(\mathbf{w}_{n-1}+\epsilon\Delta).$ Minimizing Eqn. (8) is equivalent to the following optimization $\arg\min\limits_{\Delta}J(\mathbf{w}_{n-1}+\epsilon\Delta)\text{ s.t. }\epsilon^{2}\Delta^{\top}\mathbf{X}^{\top}\mathbf{D}\mathbf{X}\Delta\leq\mu.$ (A.2) In the limit as $\epsilon\rightarrow 0$, the above optimization is equivalent to $\arg\min\limits_{\Delta}\Delta^{\top}\nabla J(\mathbf{w}_{n-1})\text{ s.t. }\epsilon^{2}\Delta^{\top}\mathbf{X}^{\top}\mathbf{D}\mathbf{X}\Delta\leq\mu.$ Thus, given the metric tensor $\mathbf{G}=\mathbf{X}^{\top}\mathbf{D}\mathbf{X}$, $-\mathbf{G}^{-1}\nabla J(\mathbf{w}_{n-1})$ is the direction of steepest descent, i.e. the natural gradient, of $J(\mathbf{w}_{n-1})$. The natural gradient of $J(\mathbf{w})$, on the other hand, is the direction of steepest descent at $\mathbf{w}$, rather than at $\mathbf{w}_{n-1}$. Therefore, our Gradient-DD approach makes use of the natural gradient of the objective around $\mathbf{w}_{n-1}$ rather than around $\mathbf{w}_{n}$ in Eqn. (A.1). 
This explains how the updates of our Gradient-DD approach differ from those obtained by directly applying the natural gradient of the objective at $\mathbf{w}$. ### A.3 Proof of Theorem 1 We introduce an ODE result on stochastic approximation in the following lemma, then show the convergence of our GDD approach in Theorem 1 by applying this result. ###### Lemma 1. (Theorems 2.1 & 2.2 of [BM:00]) Consider the stochastic approximation algorithm described by the $d$-dimensional recursion $\mathbf{y}_{n+1}=\mathbf{y}_{n}+\alpha_{n}[f(\mathbf{y}_{n})+\mathbf{M}_{n+1}].$ Suppose the following conditions hold: (c1) The sequence $\\{\alpha_{n}\\}$ satisfies $0<\alpha_{n}<1$, $\sum_{n=1}^{\infty}\alpha_{n}=\infty$, $\sum_{n=1}^{\infty}\alpha_{n}^{2}<\infty$. (c2) The function $f$ is Lipschitz, and there exists a function $f_{\infty}$ such that $\lim_{r\rightarrow\infty}f_{r}(\mathbf{y})=f_{\infty}(\mathbf{y})$, where the scaled function $f_{r}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ is given by $f_{r}(\mathbf{y})=f(r\mathbf{y})/r$. (c3) The sequence $\\{\mathbf{M}_{n},\mathcal{F}_{n}\\}$, with $\mathcal{F}_{n}=\sigma(\mathbf{y}_{i},\mathbf{M}_{i},i\leq n)$, is a martingale difference sequence. (c4) For some $c_{0}<\infty$ and any initial condition $\mathbf{y}_{0}$, $\text{E}(\|\mathbf{M}_{n+1}\|^{2}|\mathcal{F}_{n})\leq c_{0}(1+\|\mathbf{y}_{n}\|^{2})$. (c5) The ODE $\dot{\mathbf{y}}=f_{\infty}(\mathbf{y})$ has the origin as a globally asymptotically stable equilibrium. (c6) The ODE $\dot{\mathbf{y}}(t)=f(\mathbf{y}(t))$ has a unique globally asymptotically stable equilibrium $\mathbf{y}^{*}$. Then (1) under the assumptions (c1-c5), $\sup_{n}\|\mathbf{y}_{n}\|<\infty$ in probability; (2) under the assumptions (c1-c6), as $n\rightarrow\infty$, $\mathbf{y}_{n}$ converges to $\mathbf{y}^{*}$ with probability 1. Now we investigate the stochastic gradient descent updates in Eqn.
(11), which is recalled as follows: $\displaystyle\boldsymbol{\rho}_{n+1}$ $\displaystyle=\boldsymbol{\rho}_{n}-\kappa\alpha_{n}\mathbf{H}_{n}(\boldsymbol{\rho}_{n}-\boldsymbol{\rho}_{n-1})+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}).$ (A.3) The iterative process in Eqn. (A.3) can be rewritten as $\displaystyle(\boldsymbol{\rho}_{n+1}-\boldsymbol{\rho}_{n})$ $\displaystyle=-\kappa\alpha_{n}\mathbf{H}_{n}(\boldsymbol{\rho}_{n}-\boldsymbol{\rho}_{n-1})+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}).$ (A.4) Defining $\mathbf{u}_{n+1}=\boldsymbol{\rho}_{n+1}-\boldsymbol{\rho}_{n}$, Eqn. (A.4) becomes $\displaystyle\mathbf{u}_{n+1}$ $\displaystyle=-\kappa\alpha_{n}\mathbf{H}_{n}\mathbf{u}_{n}+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}).$ Thus, the iterative process in Eqn. (A.3) is rewritten as two parallel processes that are given by $\displaystyle\boldsymbol{\rho}_{n+1}$ $\displaystyle=\boldsymbol{\rho}_{n}-\kappa\alpha_{n}\mathbf{H}_{n}\mathbf{u}_{n}+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}),$ (A.5) $\displaystyle\mathbf{u}_{n+1}$ $\displaystyle=-\kappa\alpha_{n}\mathbf{H}_{n}\mathbf{u}_{n}+\alpha_{n}(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}).$ (A.6) Our proof has three steps. First, we shall show that $\sup_{n}\|\boldsymbol{\rho}_{n}\|$ is bounded by applying the ordinary differential equation approach of stochastic approximation (the 1st result of Lemma 1) to the recursion Eqn. (A.5). Second, based on this result, we shall show that $\mathbf{u}_{n}$ goes to 0 in probability by analyzing the recursion Eqn. (A.6). Finally, along with the result that $\mathbf{u}_{n}$ goes to 0 in probability, by applying the 2nd result of Lemma 1 to the recursion Eqn. (A.5), we show that $\boldsymbol{\rho}_{n}$ goes to the TD fixed point, which is given by the solution of $\mathbf{G}\boldsymbol{\rho}+\mathbf{g}=0$.
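The two parallel processes (A.5)-(A.6) can be simulated directly. The sketch below uses a noiseless toy system ($\mathbf{M}_{n+1}=0$, with illustrative choices of $\mathbf{G}$, $\mathbf{g}$, $\mathbf{H}$, and $\kappa$; none of these come from an actual MDP) and checks that $\mathbf{u}_{n}\rightarrow 0$ while $\boldsymbol{\rho}_{n}$ approaches the solution of $\mathbf{G}\boldsymbol{\rho}+\mathbf{g}=0$:

```python
import numpy as np

# Hedged sketch of the two parallel processes (A.5)-(A.6) in a noiseless
# toy setting.  G is chosen Hurwitz so the ODE rho' = G rho + g is stable;
# all quantities below are illustrative placeholders.
G = -np.eye(2)                  # toy Hurwitz matrix
g = np.array([1.0, 2.0])
H = 0.1 * np.eye(2)             # toy positive-definite H_n (held constant)
kappa = 0.5

rho = np.zeros(2)
u = np.zeros(2)
for n in range(200_000):
    alpha = 1.0 / (n + 5)       # step sizes satisfying condition (c1)
    rho_new = rho - kappa * alpha * H @ u + alpha * (G @ rho + g)  # (A.5)
    u = rho_new - rho           # u_{n+1} = rho_{n+1} - rho_n, i.e. (A.6)
    rho = rho_new

rho_star = np.linalg.solve(G, -g)   # TD fixed point: G rho + g = 0
assert np.linalg.norm(u) < 1e-3                 # u_n -> 0
assert np.linalg.norm(rho - rho_star) < 1e-2    # rho_n -> rho*
```

With $\mathbf{G}=-\mathbf{I}$ the fixed point is simply $\boldsymbol{\rho}^{*}=\mathbf{g}$, which the iterates reach while the difference sequence $\mathbf{u}_{n}$ vanishes, mirroring the three-step argument above.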
First, we shall show $\sup_{n}\|\boldsymbol{\rho}_{n}\|$ is bounded. Eqn. (A.5) is rewritten as $\boldsymbol{\rho}_{n+1}=\boldsymbol{\rho}_{n}+\alpha_{n}(f(\boldsymbol{\rho}_{n})+\mathbf{M}_{n+1}),$ (A.7) where $f(\boldsymbol{\rho}_{n})=(\mathbf{G}\boldsymbol{\rho}_{n}+\mathbf{g})-\kappa\mathbf{H}\mathbf{u}_{n}$ and $\mathbf{M}_{n+1}=((\mathbf{G}_{n}-\mathbf{G})\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1}-\mathbf{g})-\kappa(\mathbf{H}_{n}-\mathbf{H})\mathbf{u}_{n}$. Let $\mathcal{F}_{n}=\sigma(\mathbf{u}_{0},\boldsymbol{\rho}_{0},\mathbf{M}_{0},\mathbf{u}_{1},\boldsymbol{\rho}_{1},\mathbf{M}_{1},\cdots,\mathbf{u}_{n},\boldsymbol{\rho}_{n},\mathbf{M}_{n})$ be the $\sigma$-field generated by the quantities $\mathbf{u}_{i},\boldsymbol{\rho}_{i},\mathbf{M}_{i}$, $i\leq n$. Now we verify the conditions (c1-c5) of Lemma 1. Condition (c1) is satisfied under the assumption on the step sizes. Clearly, $f(\boldsymbol{\rho})$ is Lipschitz and $f_{\infty}(\boldsymbol{\rho})=\mathbf{G}\boldsymbol{\rho}$, meaning Condition (c2) is satisfied. Condition (c3) is satisfied by noting that $(\mathbf{M}_{n},\mathcal{F}_{n})$ is a martingale difference sequence, i.e., $\text{E}(\mathbf{M}_{n+1}|\mathcal{F}_{n})=0$. We next investigate $\text{E}(\|\mathbf{M}_{n+1}\|^{2}|\mathcal{F}_{n})$. From the triangle inequality, we have that $\displaystyle\|\mathbf{M}_{n+1}\|^{2}\leq 2\|(\mathbf{G}_{n}-\mathbf{G})\|^{2}\|\boldsymbol{\rho}_{n}\|^{2}+2\|\kappa(\mathbf{H}_{n}-\mathbf{H})\|^{2}\|\mathbf{u}_{n}\|^{2}.$ (A.8) From Assumption A3 in Theorem 1 that $\|\mathbf{u}_{n}\|$ is bounded and Assumption A1 that ($\mathbf{x}_{n},r_{n},\mathbf{x}_{n+1}$) is an i.i.d. sequence with uniformly bounded second moments, there exists some constant $c_{0}$ such that $\displaystyle\|\mathbf{G}_{n}-\mathbf{G}\|^{2}\leq c_{0}/2,\text{ and }\|\kappa(\mathbf{H}_{n}-\mathbf{H})\|^{2}\|\mathbf{u}_{n}\|^{2}\leq c_{0}/2.$ Thus, Condition (c4) is satisfied.
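The norm bound behind (A.8) is the elementary inequality $\|\mathbf{a}+\mathbf{b}\|^{2}\leq 2\|\mathbf{a}\|^{2}+2\|\mathbf{b}\|^{2}$ combined with submultiplicativity of the spectral norm. A quick numerical check on random placeholder matrices (not quantities from the algorithm):

```python
import numpy as np

# Numerical check of the (A.8)-style bound for M = A rho - kappa B u,
# with A, B playing the roles of G_n - G and H_n - H:
#   ||M||^2 <= 2 ||A||^2 ||rho||^2 + 2 ||kappa B||^2 ||u||^2.
# All matrices and vectors here are random placeholders.
rng = np.random.default_rng(1)
kappa = 0.7
for _ in range(1000):
    A = rng.normal(size=(3, 3))
    B = rng.normal(size=(3, 3))
    rho = rng.normal(size=3)
    u = rng.normal(size=3)
    M = A @ rho - kappa * B @ u
    lhs = np.linalg.norm(M) ** 2
    rhs = 2 * np.linalg.norm(A, 2) ** 2 * np.linalg.norm(rho) ** 2 \
        + 2 * np.linalg.norm(kappa * B, 2) ** 2 * np.linalg.norm(u) ** 2
    assert lhs <= rhs + 1e-9
```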
Note that $\mathbf{G}$ is defined here with opposite sign relative to $\mathbf{G}$ in [Maei:11]. From [SSM09] and [Maei:11], the eigenvalues of the matrix $-\mathbf{G}$ are strictly negative under Assumption A2. Therefore, Condition (c5) is satisfied. Thus, applying the 1st part of Lemma 1 shows that $\sup_{n}\|\boldsymbol{\rho}_{n}\|$ is bounded in probability. Second, we investigate the recursion Eqn. (A.6). Let $\mathbf{y}_{n+1}=(\mathbf{G}_{n}\boldsymbol{\rho}_{n}+\mathbf{g}_{n+1})$. Then $\displaystyle\mathbf{u}_{n+1}$ $\displaystyle=\alpha_{n}[-\kappa\mathbf{H}_{n}\mathbf{u}_{n}+\mathbf{y}_{n+1}]$ $\displaystyle=\alpha_{n}\mathbf{y}_{n+1}+\alpha_{n}\alpha_{n-1}(-\kappa\mathbf{H}_{n})\mathbf{y}_{n}+\alpha_{n}\alpha_{n-1}\alpha_{n-2}(-\kappa\mathbf{H}_{n})(-\kappa\mathbf{H}_{n-1})\mathbf{y}_{n-1}$ $\displaystyle\quad+\cdots+\alpha_{n}\prod_{k=0}^{n-1}\alpha_{k}(-\kappa\mathbf{H}_{k+1})\mathbf{y}_{1}+\prod_{k=0}^{n}\alpha_{k}(-\kappa\mathbf{H}_{k})\mathbf{u}_{0}.$ (A.9) Without loss of generality, we assume that $\|\mathbf{x}_{n}\|\leq 1/\kappa$, so that $\|\mathbf{H}_{n}\|\leq 1/\kappa$; moreover, there exists a constant $c$ such that $\|\boldsymbol{\rho}_{n}\|\leq c$ due to the above result that $\sup_{n}\|\boldsymbol{\rho}_{n}\|<\infty$ in probability. Eqn. (A.9) then implies that $\displaystyle\|\mathbf{u}_{n+1}\|$ $\displaystyle\leq c\left(\alpha_{n}+\alpha_{n}\alpha_{n-1}+\alpha_{n}\alpha_{n-1}\alpha_{n-2}+\cdots+\prod_{k=0}^{n}\alpha_{k}\right)+\prod_{k=0}^{n}\alpha_{k}\|\mathbf{u}_{0}\|.$ (A.10) Under Assumption A0, $\alpha_{n}\rightarrow 0$ as $n\rightarrow\infty$. Based on this, Lemma 2 (given in the following section) tells us that $\alpha_{n}+\alpha_{n}\alpha_{n-1}+\alpha_{n}\alpha_{n-1}\alpha_{n-2}+\cdots+\prod_{k=0}^{n}\alpha_{k}\rightarrow 0$ as $n\rightarrow\infty$. Thus, Eqn. (A.10) implies that $\mathbf{u}_{n}\rightarrow 0$ in probability.
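The vanishing of the bracketed sum in (A.10) is exactly the content of Lemma 2 (given in the following section); it can be checked numerically via the recursion $\epsilon_{n}=\alpha_{n}(1+\epsilon_{n-1})$, here with the illustrative step sizes $\alpha_{n}=1/(n+1)$:

```python
# Numerical illustration of the claim used above: with
# eps_n = alpha_n + alpha_n*alpha_{n-1} + ... + alpha_n*...*alpha_0,
# the recursion eps_n = alpha_n * (1 + eps_{n-1}) holds, and eps_n -> 0
# whenever alpha_n -> 0.  The step sizes 1/(n+1) are illustrative only.
eps = 0.0
for n in range(100_000):
    alpha = 1.0 / (n + 1)
    eps = alpha * (1.0 + eps)

assert eps < 1e-3   # the weighted sum has indeed decayed to (near) zero
```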
Finally, for applying the 2nd part of Lemma 1, we just need to verify Condition (c6). Because $\mathbf{u}_{n}$ goes to 0 with probability 1, Eqn. (A.7) tells us that the associated ODE corresponds to $\mathbf{G}\boldsymbol{\rho}+\mathbf{g}=0.$ Thus, Condition (c6) is satisfied. The theorem is proved. ### A.4 A Lemma ###### Lemma 2. Denote $\epsilon_{n}=\alpha_{n}+\alpha_{n}\alpha_{n-1}+\cdots+\alpha_{n}\alpha_{n-1}\cdots\alpha_{0}$. If $\alpha_{n}\rightarrow 0$ as $n\rightarrow\infty$, then $\epsilon_{n}\rightarrow 0$ as $n\rightarrow\infty$. ###### Proof. Because $\alpha_{n}\rightarrow 0$ as $n\rightarrow\infty$, there exists $\alpha\in(0,1)$ and some integer $N$ such that $\alpha_{n}\leq\alpha<1$ when $n\geq N$. Define a sequence $\varepsilon_{n}$ such that $\displaystyle\varepsilon_{n}$ $\displaystyle=1+\alpha\varepsilon_{n-1}\text{ for }n\geq N+1;$ $\displaystyle\varepsilon_{N}$ $\displaystyle=\epsilon_{N}.$ Obviously, $\epsilon_{n}\leq\varepsilon_{n},\forall n\geq N.$ (A.11) Now we investigate the sequence $\varepsilon_{n}$. $\displaystyle\varepsilon_{n}$ $\displaystyle=1+\alpha\varepsilon_{n-1}=1+\alpha(1+\alpha\varepsilon_{n-2})=\cdots=1+\alpha+\cdots+\alpha^{n-N-1}+\alpha^{n-N}\varepsilon_{N}$ $\displaystyle\leq\sum\nolimits_{k=0}^{\infty}\alpha^{k}+\alpha^{n-N}\varepsilon_{N}=1/(1-\alpha)+\alpha^{n-N}\varepsilon_{N}.$ Thus, we have that $\sup_{n\geq N}\varepsilon_{n}<\infty.$ (A.12) From Eqns. (A.11) & (A.12), we have $\sup_{n\geq 0}\epsilon_{n}<\infty.$ (A.13) From the definition of $\epsilon_{n}$, we have that $\epsilon_{n}=\alpha_{n}+\alpha_{n}\epsilon_{n-1}$. It follows that $\alpha_{n}=\frac{\epsilon_{n}}{1+\epsilon_{n-1}}\geq\frac{\epsilon_{n}}{1+\sup_{k\geq 0}\epsilon_{k}}.$ From the assumption $\alpha_{n}\rightarrow 0$ as $n\rightarrow\infty$ and Eqn. (A.13), we have $\epsilon_{n}\rightarrow 0$ as $n\rightarrow\infty$. ∎ ### A.5 Additional empirical results Figure 5: The random walk task with tabular representation. The setting is similar to Fig. 
2, but the performance is evaluated by the average error of all episodes, and $\alpha$ is tuned by minimizing the average error of all episodes. Upper: Performance as a function of $\alpha$; Lower: performance over episodes. From left to right: state space size 10 (left), 20 (middle), or 40 (right). Figure 6: Performance of Gradient-DD in the random walk task in the tabular representation with $\kappa\in\\{0.25,0.5,1,2,4\\}$. From left to right: state space size 10 (left), 20 (middle), or 40 (right). In each figure, $\alpha$ is tuned for each algorithm by minimizing the average error of the last 100 episodes. Results are averaged over 50 runs, with error bars denoting standard error of the mean. Figure 7: The random walk task in the tabular representation. Performance for various $\beta_{n}=\zeta\alpha_{n}$, with $\zeta\in\\{4^{-3},4^{-2},4^{-1},1,4\\}$. From left to right in each row: the size of the state space is $m=10$, $m=20$, and $m=40$. In each case $\alpha$ is tuned for each algorithm by minimizing the average error of the last 100 episodes. Results are averaged over 50 runs, with error bars denoting standard error of the mean. Figure 8: Performance of natural TDC and natural GTD2 in the random walk task with the tabular representation and $m=20$. Performance for various $\beta_{n}=\zeta\alpha_{n}$, with $\zeta\in\\{4^{-3},4^{-2},4^{-1},1,4\\}$. In each case $\alpha$ is tuned for each algorithm by minimizing the average error of the last 100 episodes. “NGTD” and “NTDC” denote the natural gradient-based variants of GTD2 and TDC, respectively. Results are averaged over 50 runs, with error bars denoting standard error of the mean. Figure 9: The random walk task with the tabular representation and constant step size. The state size is 20. The curves are averaged over 50 runs, with error bars denoting the standard error of the mean, though most are vanishingly small.
Figure 10: The random walk task with linear value-function approximation, where $\alpha_{n}=\alpha(10^{3}+1)/(10^{3}+n)$. The state size is $m=20$. Each state is represented by a $p$-dimensional feature vector with $p=5$, corresponding to the $m$-state chain with $m=20$. The $p$-dimensional representation for every fourth state from the start is $[1,0,\cdots,0]$ for state $s_{1}$, $[0,1,0,\cdots,0]$ for $s_{5}$, $\cdots$, and $[0,0,\cdots,0,1]$ for state $s_{4p+1}$. The representations for the intermediate states are obtained by linearly interpolating between these. The sequential states, $S_{1},\cdots,S_{m}$, are obtained by using the first $m$ states above. Left: Performance as a function of $\alpha$; Right: performance over episodes. The curves are averaged over 50 runs. Figure 11: The Boyan Chain task with linear approximation. The setting is similar to Fig. 3, but we evaluate the performance by the average error of all episodes, and $\alpha$ is tuned by minimizing the average error of all episodes. Upper: Performance as a function of $\alpha$; Lower: performance over episodes. Figure 12: Performance of Gradient-DD in the Boyan Chain task with $\kappa\in\\{2^{-2},2^{-1},1,2,2^{2}\\}$ and tapered $\alpha_{n}$. “GDD($\kappa$)” denotes Gradient-DD with regularization parameter $\kappa$.
Bayesian prediction regions and density estimation with type-2 censored data. Akbar Asgharzadeh$^{a}$, Éric Marchand$^{b}$ & Ali Saadati Nik$^{a}$ $^{a}$ Department of Statistics, University of Mazandaran, P.O. Box 47146-1407, Babolsar, IRAN $^{b}$ Département de mathématiques, Université de Sherbrooke, Sherbrooke Qc, CANADA, J1K 2R1 (e-mails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>). Summary For exponentially distributed lifetimes, we consider the prediction of future order statistics based on having observed the first $m$ order statistics. We focus on the previously less explored aspects of predicting: (i) an arbitrary pair of future order statistics such as the next and last ones, as well as (ii) the next $N$ future order statistics. We provide explicit and exact Bayesian credible regions associated with Gamma priors, constructed by identifying a region with a given credibility $1-\lambda$ under the Bayesian predictive density. For (ii), the HPD region is obtained, while a two-step algorithm is given for (i). The predictive distributions are represented as mixtures of bivariate Pareto distributions, as well as multivariate Pareto distributions. For the non-informative prior density choice, we demonstrate that a resulting Bayesian credible region has matching frequentist coverage probability, and that the resulting predictive density possesses the optimality properties of best invariance and minimaxity. AMS 2020 subject classifications: 62F15, 62N01, 62N05, 62C10, 62C20. Keywords and phrases: Bayesian predictive density; Credibility; Coverage probability; Mixtures; Multivariate Pareto; Prediction region; Type-2 censoring. ## 1 Introduction Predictive analysis based on censored data in life testing experiments is fundamental and leads to interesting challenges. In this work, we are concerned with prediction regions for future order statistics based on the first $m$ order statistics generated by exponentially distributed data.
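As a concrete illustration of this data-generating setup (with arbitrary, hypothetical parameter values), type-II censoring amounts to observing only the $m$ smallest of $n$ exponential lifetimes:

```python
import numpy as np

# Illustrative sketch of the type-II censoring setup: n exponential
# lifetimes are put on test, and only the first m order statistics
# X_{1:n}, ..., X_{m:n} are observed.  n, m, theta are arbitrary choices.
rng = np.random.default_rng(0)
n, m, theta = 10, 4, 2.0
lifetimes = rng.exponential(scale=1.0 / theta, size=n)
observed = np.sort(lifetimes)[:m]      # the type-II censored sample

assert observed.shape == (m,)
assert np.all(np.diff(observed) >= 0)  # the observations arrive ordered
```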
There has been some previous work on such problems, but we focus here on the less explored: (a) multivariate aspects, and (b) use of Bayesian predictive densities to generate prediction regions and their related theoretical properties. We consider an i.i.d. sample of size $n$ from an exponential distribution with density $\theta e^{-\theta t}\mathbb{I}_{(0,\infty)}(t)$, but we are only able to observe the first $m$ order statistics $X_{1:n},\ldots,X_{m:n}$, commonly referred to as type-II censoring. Our objectives include: 1. (I) the joint prediction of two future order statistics $X_{r:n}$ and $X_{s:n}$, with $m<r<s\leq n$, as well as 2. (II) the joint prediction of the next $N$ order statistics with $2\leq N\leq n-m$. Scenario (II) seems a natural one to consider and includes the case of all future order statistics with $N=n-m$. Scenario (I) is more relevant in situations where the focus is for instance on the next and last order statistics (i.e., $r=m+1,s=n$), or the last two order statistics (i.e., $r=n-1,s=n$). Obviously, both scenarios overlap for $N=2$. Surprisingly perhaps, for the Gamma priors which we consider, the specification of Bayesian predictive densities and regions leads to more complex representations in the bivariate case. The bivariate case (I) was considered recently by Bagheri et al. (2022) and we refer to this work for further motivation and historical aspects of the problem. Their prediction regions are, however, non-Bayesian and based on a pair of pivotal quantities which will not arise as a Bayesian solution. It seems natural to generate prediction regions for future order statistics via a Bayesian predictive density, but such problems seem to have been relatively unexplored. An exception is given by Dunsmore (1974), but his work concerns a single order statistic. We proceed in doing so for Gamma prior densities as well as the non-informative density $\pi_{0}(\theta)=\frac{1}{\theta}\,\mathbb{I}_{(0,\infty)}(\theta)$.
The obtained expressions are quite tractable and, interestingly, bring into play finite mixtures of bivariate Pareto distributions, and multivariate Pareto distributions. The obtained mixtures are “non-probabilistic” in the sense that the coefficients $a_{i}$ in $\sum_{i}a_{i}f_{i}(t)$ (the $f_{i}$’s being densities) take on both positive and negative values. This aspect is less appreciated than “probabilistic mixtures”, but it does not hinder the usefulness of the representation for computational purposes involving moments and cumulative distribution functions. For a given credibility $1-\lambda$, we fully describe the highest posterior density (HPD) prediction region for scenario (II), while we propose an algorithm to generate exact prediction regions for the bivariate scenario (I), based on a natural decomposition of the joint predictive density into its marginal and conditional parts. The analysis is carried out, and much facilitated, by considering the prediction of spacings between future order statistics, which can be converted back to the prediction of the order statistics themselves. Many of the recent studies on predictive density estimation focus on theoretical properties of the density itself in a decision-theoretic context. For scenarios (I) and (II), we report on the best invariant and minimax properties for Kullback-Leibler divergence loss of the Bayesian predictive density associated with the usual non-informative prior density $\pi(\theta)=\frac{1}{\theta}$, attributable to the work of Liang & Barron (2004). Moreover, we show that the frequentist probability of coverage for a prediction region generated by such a Bayesian predictive density matches its exact credibility. Therefore, such a Bayesian density not only possesses optimality properties in a decision-theoretic framework, but also provides a satisfactory frequentist option that compares favourably with previous solutions (e.g., Bagheri et al., 2022). The organization of the manuscript is as follows.
The preliminary results of Section 2 cover model densities for scenarios (I) and (II), some general aspects on Bayesian predictive densities, and multivariate Pareto distributions. Bayesian predictive densities and regions are derived and illustrated in Section 3. In Section 4, we demonstrate in a more general context including ours that a Bayesian credible region with credibility $1-\lambda$ associated with the non-informative prior density $\pi_{0}$ yields matching frequentist coverage probability $1-\lambda$ for all $\theta>0$. Finally in Section 5, we concisely review optimality properties of the Bayesian predictive densities again associated with density $\pi_{0}$ and pertaining to the best invariant and minimaxity properties. ## 2 Preliminary results ### 2.1 Model densities For the first $m$ order statistics $X_{1:n},\ldots,X_{m:n}$ among $n$ generated from i.i.d. $Exp(\theta)$ data, it is well known (e.g., Lawless (1971)) that $X=\sum_{i=1}^{m}X_{i:n}+(n-m)X_{m:n}$ is a sufficient statistic and Gamma distributed $\mathcal{G}(m,\theta)$. Hereafter, we therefore consider such a summary and corresponding density, that is $X\sim\frac{\theta^{m}}{\Gamma(m)}\,x^{m-1}\,e^{-\theta x}\,\mathbb{I}_{(0,\infty)}(x).$ (2.1) For the prediction of $X_{r:n}$ and $X_{s:n}$ based on $X$, it is convenient and equivalent to consider $Y=(Y_{1},Y_{2})^{\top}\hbox{ with }Y_{1}\,=\,X_{r:n}-X_{m:n},\,Y_{2}\,=\,X_{s:n}-X_{r:n}.$ (2.2) The equivalence stems from the correspondence between a prediction region $R$ for $Y$ and the inverse mapping $\\{(y_{1}+x_{m:n},y_{1}+y_{2}+x_{m:n}):(y_{1},y_{2})\in R\\}$ as a prediction region for $(X_{r:n},X_{s:n})$. For the multivariate version where the objective is to predict jointly the $N$ next future order statistics, it is analogously useful to consider $Z=(Z_{1},\ldots,Z_{N})\,,\hbox{ with }\,Z_{i}=X_{m+i:n}-X_{m+i-1:n}\hbox{ for }i\in\\{1,\ldots,N\\},2\leq N\leq n-m\,,$ (2.3) as the objects of prediction.
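A quick Monte Carlo sanity check of (2.1), with arbitrary illustrative parameters, confirms that $X$ behaves as a $\mathcal{G}(m,\theta)$ variable, with mean $m/\theta$ and variance $m/\theta^{2}$:

```python
import numpy as np

# Monte Carlo check of (2.1): X = sum_{i<=m} X_{i:n} + (n-m) X_{m:n}
# should be Gamma(m, theta) distributed, so E[X] = m/theta and
# Var[X] = m/theta^2.  Sample sizes and parameters are arbitrary.
rng = np.random.default_rng(0)
n, m, theta, reps = 10, 4, 2.0, 200_000
samples = np.sort(rng.exponential(scale=1.0 / theta, size=(reps, n)), axis=1)
X = samples[:, :m].sum(axis=1) + (n - m) * samples[:, m - 1]

assert abs(X.mean() - m / theta) < 0.02      # E[X] = m/theta = 2.0
assert abs(X.var() - m / theta**2) < 0.05    # Var[X] = m/theta^2 = 1.0
```

The check works because the normalized spacings $(n-i+1)(X_{i:n}-X_{i-1:n})$, $i=1,\ldots,m$, are i.i.d. $Exp(\theta)$, and their sum is exactly $X$.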
As described with the next result, both transformations (2.2) and (2.3) lead to convenient underlying model densities for $Y$ or $Z$. ###### Lemma 2.1. Given the set-up (2.1)-(2.3) and fixed $\theta$: 1. (a) $Y$ and $X$ are independently distributed; 2. (b) $Z$ and $X$ are independently distributed; 3. (c) $Y_{1}$ and $Y_{2}$ are independently distributed with joint density on $\mathbb{R}_{+}^{2}$ given by $q_{\theta}(y)\,=\,\theta^{2}\,q_{1}(\theta y_{1},\theta y_{2})$ and with $q_{1}(y)\,=\,\frac{(n-m)!}{(r-m-1)!\,(s-r-1)!\,(n-s)!}\,\big{(}1-e^{-y_{1}}\big{)}^{r-m-1}\,\big{(}1-e^{-y_{2}}\big{)}^{s-r-1}\,e^{-(n-r+1)y_{1}-(n-s+1)y_{2}}\,;$ (2.4) 4. (d) $Z_{1},\ldots,Z_{N}$ are independently distributed with $Z_{i}\sim Exp\big{(}(n-m-i+1)\,\theta\big{)}$. Proof. Part (b) follows from a familiar renewal property of the exponential distribution, and furthermore implies (a) since $Y$ is a function $h(Z)$ of $Z$. For (c), since $(U=X_{r:n}-X_{m:n},V=X_{s:n}-X_{m:n})$ and $X_{m:n}$ are independently distributed, the distribution of $(U,V)$ matches that of its conditional distribution given $X_{m:n}$, and the latter joint distribution can be seen to be equivalent, with the above-mentioned renewal property, to that of the $(r-m)^{\hbox{th}}$ and $(s-m)^{\hbox{th}}$ order statistics from a sample of size $n-m$ from an $Exp(\theta)$ distribution. Such a joint density is given by $f(u,v)\,=\,\frac{(n-m)!}{(r-m-1)!\,(s-r-1)!\,(n-s)!}\,\big{(}1-e^{-\theta u}\big{)}^{r-m-1}\,\big{(}e^{-\theta u}-e^{-\theta v}\big{)}^{s-r-1}\,(e^{-\theta v})^{n-s}\,\theta^{2}\,e^{-\theta(u+v)},\,$ (2.5) for $0<u<v$. The result follows by transforming $(U,V)$ to $Y$. 
Finally for (d), since $Z$ and $X_{m:n}$ are independently distributed, the distribution of $Z$ matches that of its conditional distribution given $X_{m:n}$, and as above the latter joint distribution matches that of the distribution of the first $N$ order statistics spacings from a sample of size $n-m$ from an $Exp(\theta)$ distribution. It is well known that such order statistics spacings are independently distributed as $Exp\big{(}(n-m-i+1)\theta\big{)}$ (e.g., Lehmann & Casella, 1998, problem 6.18, page 71) which yields the result. ∎ ###### Remark 2.1. The marginal densities of $Y_{1}$ and $Y_{2}$ in (2.4) can be extracted from part (c) and are of the form $\mathcal{B}^{-1}(c_{1},c_{2})\,(1-e^{-u})^{c_{1}-1}\,e^{-c_{2}u}\,\mathbb{I}_{(0,\infty)}(u)$, with $\mathcal{B}(c_{1},c_{2})$ the Beta function. An alternative and readily verified representation for the distribution of the above $Y_{1}$ and $Y_{2}$ is: $e^{-\theta Y_{1}}\sim\hbox{Beta}(n-r+1,r-m)\,,\,e^{-\theta Y_{2}}\sim\hbox{Beta}(n-s+1,s-r)\,\hbox{ independent}\,.$ ### 2.2 Bayesian predictive densities A general set-up for predictive density estimation relates to the following model density: $(X,Y)|\theta\sim p_{\theta}(x)\,q_{\theta}(y|x)\,,x\in\mathbb{R}^{d_{1}},y\in\mathbb{R}^{d_{2}}.$ We observe $X$ according to $p_{\theta}$ and we wish to estimate the density $q_{\theta}(\cdot|x)$. Except for $\theta$, the densities are known. The observed $X$ provides information about $\theta$ and determines the conditional density $q_{\theta}(\cdot|x)$ to estimate. Much, but not all (e.g., Fourdrinier et al. 2019), previous work on properties of predictive densities focusses on models for which $X$ and $Y$ are conditionally independent, i.e., $q_{\theta}(\cdot|x)\equiv q_{\theta}(\cdot)$ for all $x$, but it is natural in general to estimate the density $q_{\theta}(\cdot|x)$ when there is dependence.
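Part (d) of Lemma 2.1 is easy to check by simulation (the parameters below are arbitrary): the spacings $Z_{i}$ should have means $1/\big{(}(n-m-i+1)\theta\big{)}$:

```python
import numpy as np

# Monte Carlo check of Lemma 2.1(d): the future spacings
# Z_i = X_{m+i:n} - X_{m+i-1:n} are Exp((n-m-i+1) theta) distributed,
# so E[Z_i] = 1 / ((n-m-i+1) theta).  Parameters are arbitrary.
rng = np.random.default_rng(1)
n, m, N, theta, reps = 10, 3, 4, 1.0, 200_000
s = np.sort(rng.exponential(scale=1.0 / theta, size=(reps, n)), axis=1)
Z = np.diff(s[:, m - 1:m + N], axis=1)   # columns are Z_1, ..., Z_N

for i in range(1, N + 1):
    expected = 1.0 / ((n - m - i + 1) * theta)
    assert abs(Z[:, i - 1].mean() - expected) < 0.005
```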
Assume that we have a prior density $\pi$ for $\theta$, and a resulting posterior density $\pi(\cdot|x)$ taken to be absolutely continuous with respect to measure $\nu$. A natural estimator of the density $q_{\theta}(\cdot|x)$ is the Bayes predictive density given by the conditional density $q(\cdot|x)$ of $Y|X=x$. Integrating out $\theta$, we obtain the estimator $\hat{q}_{\pi}(y|x)\,=\,\int_{\Theta}q_{\theta}(y|x)\,\pi(\theta|x)\,d\nu(\theta).$ (2.6) The above is a fully Bayesian procedure that can be used for obtaining predictive point estimates or prediction regions for $Y$. The estimator or density $\hat{q}_{\pi}(\cdot;X)$ will naturally depend on $X$ through a sufficient statistic $T(X)$. ### 2.3 Multivariate Pareto densities The predictive densities that we elicit below bring into play univariate, bivariate, and multivariate type II Pareto distributions, as well as non-probabilistic mixtures of such distributions. Such distributions have been extensively studied (see for instance Section 52.4 of Kotz et al. 2000, or Arnold 2014, and the references therein). With convenient expressions for the moments and cumulative distribution functions (c.d.f.’s) relative to the $f_{i}$’s, the mixture representations clearly facilitate expressions for moments and c.d.f.’s for the full distribution. Multivariate Pareto distributions possess univariate Pareto marginals with parameters $\ell,h>0$, whose densities and survival functions we denote and write as $f_{\ell,h}(t)\,=\,\frac{\ell\,h}{(1+ht)^{\ell+1}}\,\mathbb{I}_{(0,\infty)}(t)\,,\hbox{ and }\bar{F}_{\ell,h}(t)\,=\,(1+ht)^{-\ell}\,,$ (2.7) for $t>0$, respectively. Here is a multivariate Pareto definition, which includes the bivariate case, along with some useful properties. ###### Definition 2.1.
A random vector $Z=(Z_{1},\ldots,Z_{N})^{\top}$ has a multivariate Pareto type II distribution, denoted $Z\sim\mathcal{P}2(m,h_{1},\ldots,h_{N})$, when it has density $g_{m,h_{1},\ldots,h_{N}}(z)\,=\,\frac{(m)_{N}\prod_{i=1}^{N}h_{i}}{(1+\sum_{i=1}^{N}h_{i}z_{i})^{m+N}}\,\mathbb{I}_{\mathbb{R}_{+}^{N}}(z),$ (2.8) for parameters $h_{1},\ldots,h_{N},m>0$, and $(m)_{N}=\frac{\Gamma(m+N)}{\Gamma(m)}$. Such distributions form an $N+1$ parameter family with scale parameters $1/h_{i},i=1,\ldots,N$, and shape parameter $m$. As recorded with the following lemma containing a selection of properties that are known and readily verified, such multivariate Pareto distributions possess univariate Pareto marginals and conditionals, subvectors that are also multivariate Pareto distributed, and a joint survival function having a rather simple form. ###### Lemma 2.2. Consider $Z\sim\mathcal{P}2(m,h_{1},\ldots,h_{N})$. Let $i\neq j$. Then, we have: (i) $Z_{i}\sim f_{m,h_{i}}$, (ii) $Z_{j}|Z_{i}=z_{i}\sim f_{m+1,h_{j}/(1+h_{i}z_{i})}$, (iii) $(h_{1}Z_{1},\ldots,h_{N}Z_{N})$ has density $g_{m,1,\ldots,1}$, (iv) the joint survival function is given by $\mathbb{P}(\cap_{i=1}^{N}\\{Z_{i}\geq z_{i}\\})\,=\,\\{(1+\sum_{i=1}^{N}h_{i}z_{i})\\}^{-m}$ for $z_{i}\geq 0,i=1,\ldots,N$, (v) the simple regressions are linear, with $\mathbb{E}(Z_{j}|Z_{i})\,=\,\frac{1+h_{i}Z_{i}}{h_{j}m}\,$, and (vi) the correlation between $Z_{i}$ and $Z_{j}$ is given by $\rho(Z_{i},Z_{j})=\frac{1}{m}$ for $m>2$. Furthermore, we will require the following. ###### Lemma 2.3. Let $Z\sim\mathcal{P}2(m,h_{1},\ldots,h_{N})$ and $W=\sum_{i=1}^{N}h_{i}Z_{i}$. Then $W$ is Beta type-II distributed (denoted $W\sim B2(N,m)$) with p.d.f. $f_{W}(w)\,=\,\frac{\Gamma(N+m)}{\Gamma(N)\Gamma(m)}\frac{w^{N-1}}{(1+w)^{N+m}}$, for $w\in(0,\infty)$. Proof. The result is known but we provide a proof for completeness.
Given the multivariate Pareto representation $(h_{1}Z_{1}\,\ldots,h_{N}Z_{N})=^{d}(\frac{E_{1}}{G},\ldots,\frac{E_{N}}{G})$ with $E_{1},\ldots E_{N},G$ independently distributed, and with $E_{i}\sim Exp(1)$ and $G\sim\mathcal{G}(m,1)$, we see that $W$ is distributed as the ratio of two independent $\mathcal{G}(N,1)$ and $\mathcal{G}(m,1)$ variables, hence Beta type II with the given parameters. ∎ ## 3 Predictive densities and regions Based on $X$ as in (2.1), we provide in this section Bayesian predictive densities and regions for $Y$ and $Z$ as defined in (2.2) and (2.3). We consider Gamma $\mathcal{G}(\alpha,\beta)$ prior densities $\pi_{\alpha,\beta}(\theta)\propto\theta^{\alpha-1}\,e^{-\beta\theta}\,\mathbb{I}_{(0,\infty)}(\theta)$, including the usual non-informative case $\pi_{0}(\theta)\,=\,\frac{1}{\theta}\,\mathbb{I}_{(0,\infty)}(\theta)$ for $\alpha=\beta=0$. ### 3.1 Predictive densities We begin with the future next $N$ order statistics. ###### Theorem 3.1. The Bayes predictive density of $Z$ in (2.3), based on $X\sim\mathcal{G}(m,\theta)$ and prior density $\pi_{\alpha,\beta}$ for $\theta$, is that of $\mathcal{P}2(m+\alpha,h_{1},\ldots,h_{N})$ distribution with $h_{i}=\frac{n-m-i+1}{x+\beta}$ for $i=1,\ldots,N$. Proof. With $q_{\theta}(z|x)\,=\,\frac{\theta^{N}\,(n-m)!}{(n-m-N)!}\,e^{-\theta\sum_{i=1}^{N}(n-m-i+1)z_{i}}$ and $\theta|x\sim\mathcal{G}(\alpha+m,x+\beta)$, we obtain from (2.6): $\displaystyle\hat{q}_{\pi_{\alpha,\beta}}(z|x)\,$ $\displaystyle=$ $\displaystyle\,\frac{(n-m)!\,(\beta+x)^{\alpha+m}}{(n-m-N)!\,\Gamma(\alpha+m)\,}\int_{0}^{\infty}\theta^{N+\alpha+m-1}\,e^{-\theta\big{(}\beta+x+\sum_{i=1}^{N}(n-m-i+1)z_{i}\big{)}}\,d\theta\,$ $\displaystyle=$ $\displaystyle\,\frac{(n-m)!\,(\beta+x)^{\alpha+m}}{(n-m-N)!\,\Gamma(\alpha+m)\,}\,\Gamma(N+\alpha+m)\,\big{\\{}\beta+x+\sum_{i=1}^{N}(n-m-i+1)z_{i}\big{\\}}^{-(N+\alpha+m)},$ which is indeed a $\mathcal{P}2(m+\alpha,h_{1},\ldots,h_{N})$ density. 
∎ Observe that the density has univariate Pareto marginals and multivariate Pareto distributed subvectors in accordance to Lemma 2.2. We now turn to the Bayesian predictive density for two future order statistics and demonstrate a non-probabilistic mixture of bivariate Pareto densities as given in Definition 2.1 for $N=2$. ###### Theorem 3.2. The Bayes predictive density of $Y$ in (2.2), based on $X\sim\mathcal{G}(m,\theta)$ and prior density $\pi_{\alpha,\beta}$ for $\theta$, is given by $\hat{q}_{\pi_{\alpha,\beta}}(y|x)\,=\,\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}\omega_{i,j}\,g_{m+\alpha,\frac{a_{i}}{x+\beta},\frac{b_{j}}{x+\beta}}\big{(}y\big{)}\,,$ (3.9) with $g$ a bivariate Pareto density given in (2.8) for $N=2$, $a_{i}=n-r+i+1,b_{j}=n-s+j+1,\hbox{and }\omega_{i,j}\,=\,\frac{(n-m)!}{(n-s)!}\,\frac{(-1)^{i+j}}{i!\,j!}\,\,\frac{1}{(r-m-i-1)!\,(s-r-j-1)!}\frac{1}{a_{i}b_{j}}.$ Proof. From (2.4) and (2.6), with $\theta|x\sim\mathcal{G}(m+\alpha,x+\beta)$, we obtain $\hat{q}_{\pi_{\alpha,\beta}}(y|x)\,=\,\frac{(n-m)!}{(n-s)!}\frac{\,(x+\beta)^{m+\alpha}}{\Gamma(m+\alpha)}\,\int_{0}^{\infty}\theta^{m+\alpha+1}\,\frac{(1-e^{-\theta y_{1}})^{r-m-1}}{(r-m-1)!}\,\frac{(1-e^{-\theta y_{2}})^{s-r-1}}{(s-r-1)!}e^{-\theta(L(y)+x+\beta)}\,d\theta,$ with $L(y)\,=\,(n-r+1)y_{1}+(n-s+1)y_{2}$. Binomial expansions and an interchange of sum and integral yield $\displaystyle\hat{q}_{\pi_{\alpha,\beta}}(y|x)\,$ $\displaystyle=$ $\displaystyle\,\frac{\,(x+\beta)^{m+\alpha}}{\Gamma(m+\alpha)}\,\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}a_{i}b_{j}\,\omega_{i,j}\,\int_{0}^{\infty}\theta^{m+\alpha+1}\,e^{-\theta(L(y)+iy_{1}+jy_{2}+x+\beta)}\,d\theta\,$ $\displaystyle=$ $\displaystyle\,(m+\alpha)\,(m+\alpha+1)\,(x+\beta)^{m+\alpha}\,\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}a_{i}b_{j}\,\omega_{i,j}\,\big{(}L(y)+iy_{1}+jy_{2}+x+\beta\big{)}^{-(m+\alpha+2)},$ which leads to (3.9). 
∎ We point out the following result for the next two order statistics, which follows immediately from either Theorem 3.1 or Theorem 3.2. ###### Corollary 3.1. For the particular case where $r=m+1,s=m+2$, the predictive density of $Y$ as in (2.2) is that of a $\mathcal{P}2(m+\alpha,h_{1},h_{2})$ distribution (3.9) with $h_{1}=\frac{n-m}{x+\beta}$ and $h_{2}=\frac{n-m-1}{x+\beta}$. ###### Remark 3.2. Interestingly, the above weights $\omega_{i,j}$ arise through a series expansion of the Beta function. Indeed, by the binomial expansion of $(1-t)^{d-1}$ for $d\in\mathbb{N}_{+}$, we obtain for $c>0$ $\displaystyle\int_{0}^{1}t^{c-1}\,(1-t)^{d-1}\,dt\,$ $\displaystyle=$ $\displaystyle\,\frac{\Gamma(c)\Gamma(d)}{\Gamma(c+d)}$ $\displaystyle\Longrightarrow$ $\displaystyle\sum_{k=0}^{d-1}\gamma_{c,d,k}=1\,,$ with $\gamma_{c,d,k}\,=\,\frac{\Gamma(c+d)}{\Gamma(c)}\,\frac{(-1)^{k}}{k!\,(d-1-k)!}\,\frac{1}{c+k}$. Furthermore, observe that $\omega_{i,j}=\omega_{1,i}\,\omega_{2,j}$ with $\omega_{1,i}=\frac{(n-r)!}{(n-s)!}\,\gamma_{n-r+1,r-m,i}$ and $\omega_{2,j}=\frac{(n-s)!}{(n-r)!}\,\gamma_{n-s+1,s-r,j}$, demonstrating alternatively that $\sum_{i,j}\omega_{i,j}=1$. The mixture representation of the predictive density in Theorem 3.2, coupled with the bivariate Pareto properties of Lemma 2.2, facilitates the evaluation of the marginal and conditional distributions associated with (3.9). As shown below, mixture representations of univariate Pareto distributions arise. ###### Corollary 3.2. For $m<r\leq n-1$, the marginal density of $Y_{1}$ associated with the Bayes predictive density (3.9) is given by $\hat{q}_{\pi}(y_{1}|x)\,=\,\sum_{i=0}^{r-m-1}\gamma_{n-r+1,r-m,i}\,f_{m+\alpha,\frac{a_{i}}{x+\beta}}(y_{1})\,,$ (3.10) where $\gamma_{c,d,k}$ is given in Remark 3.2 and $f_{\ell,h}$ is a univariate Pareto density as given in (2.7). In the particular case where $r=m+1$, we have that $\hat{q}_{\pi}(y_{1};x)=f_{m+\alpha,\frac{n-m}{x+\beta}}(y_{1})$ for $y_{1}>0$. Proof.
We have from Theorem 3.2 $\displaystyle\hat{q}_{\pi}(y_{1}|x)\,$ $\displaystyle=$ $\displaystyle\,\int_{0}^{\infty}\hat{q}_{\pi}(y|x)\,dy_{2}$ $\displaystyle=$ $\displaystyle\,\sum_{i=0}^{r-m-1}\gamma_{n-r+1,r-m,i}\sum_{j=0}^{s-r-1}\gamma_{n-s+1,s-r,j}\;\int_{0}^{\infty}g_{m+\alpha,\frac{a_{i}}{x+\beta},\frac{b_{j}}{x+\beta}}\big{(}y\big{)}\,dy_{2},$ which leads to (3.10) since the joint density $g_{m+\alpha,\frac{a_{i}}{x+\beta},\frac{b_{j}}{x+\beta}}$ for $Y$ has marginal $f_{m+\alpha,\frac{a_{i}}{x+\beta}}$ for $Y_{1}$, for all $i,j$ (Lemma 2.2) and since $\sum_{j=0}^{s-r-1}\gamma_{n-s+1,s-r,j}=1$. ∎ The result in itself is not new and was obtained with the univariate analysis carried out by Dunsmore (1974). A similar development establishes that (3.10) holds for the marginal predictive density of $X_{n:n}-X_{m:n}$. For the conditional distributions, we have the following. ###### Corollary 3.3. For the Bayes predictive density (3.9): 1. (a) The conditional density of $Y_{2}$ given $Y_{1}=y_{1}$ is given by the mixture representation $\hat{q}_{\pi}(y_{2}|y_{1};x)\,=\,\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}\alpha_{i}(y_{1})\,\beta_{j}\,f_{m+\alpha+1,\frac{b_{j}}{x\,+\,\beta\,+\,a_{i}\,y_{1}}}(y_{2}),$ with $\alpha_{i}(y_{1})\propto\gamma_{n-r+1,r-m,i}\,f_{m+\alpha,\frac{a_{i}}{x+\beta}}(y_{1})$ such that $\sum_{i=0}^{r-m-1}\alpha_{i}(y_{1})=1$, and $\beta_{j}=\gamma_{n-s+1,s-r,j}\,$; 2. (b) The conditional density of $Y_{1}$ given $Y_{2}=y_{2}$ is given by the mixture representation $\hat{q}_{\pi}(y_{1}|y_{2};x)\,=\,\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}\xi_{j}(y_{2})\,\nu_{i}\,f_{m+\alpha+1,\frac{a_{i}}{x\,+\,\beta\,+\,\beta_{j}\,y_{2}}}(y_{1}),$ with $\xi_{j}(y_{2})\propto\gamma_{n-s+1,s-r,j}\,f_{m+\alpha,\frac{b_{j}}{x+\beta}}(y_{2})$ such that $\sum_{j=0}^{s-r-1}\xi_{j}(y_{2})=1$, and $\nu_{i}=\gamma_{n-r+1,r-m,i}$. Proof. 
Part (a) follows directly with the properties of Lemma 2.2 upon writing $g_{m+\alpha,\frac{a_{i}}{x+\beta},\frac{b_{j}}{x+\beta}}\big{(}y\big{)}\,=f_{m+\alpha,\frac{a_{i}}{x\,+\,\beta}}(y_{1})\,f_{m+\alpha+1,\frac{b_{j}}{x\,+\,\beta\,+\,a_{i}y_{1}}}(y_{2})$. The same approach leads to (b). ∎ ###### Remark 3.3. With the above marginal and conditional distributions expressible as mixtures of univariate Pareto distributions, corresponding moments are readily available. For instance, one obtains $\displaystyle\mu_{1}=\mathbb{E}_{\pi}(Y_{1}|x)$ $\displaystyle=$ $\displaystyle\frac{x+\beta}{m+\alpha-1}\,\sum_{i=0}^{r-m-1}\gamma_{n-r+1,r-m,i}\,\frac{1}{a_{i}}\,$ $\displaystyle\hbox{ and }\mu_{2}(y_{1})\,=\,\mathbb{E}_{\pi}(Y_{2}|Y_{1};x)\,$ $\displaystyle=$ $\displaystyle\,\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}\alpha_{i}(y_{1})\,\beta_{j}\,\frac{x+\beta+a_{i}y_{1}}{(m+\alpha)b_{j}}$ $\displaystyle=$ $\displaystyle\,y_{1}\,\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}\alpha_{i}(y_{1})\,\beta_{j}\,\frac{a_{i}}{(m+\alpha)b_{j}}\,+\,\frac{x+\beta}{m+\alpha}\sum_{j=0}^{s-r-1}\,\,\frac{\beta_{j}}{b_{j}}.$ Observe that the conditional expectation $\mathbb{E}_{\pi}(Y_{2}|y_{1};x)$ is affine linear of the form $C+Dy_{1}$. ### 3.2 HPD Bayesian prediction regions A Bayesian prediction region with credibility $1-\lambda$ for $Z|\theta$ based on an observed value $x$ of $X|\theta$ and for a given prior density $\pi$ is such that $\int_{R(x)}\hat{q}_{\pi}(z|x)\,dz\,=\,1-\lambda,$ (3.11) with $\hat{q}_{\pi}$ given in (2.6). One such choice which minimizes volume is the ubiquitous HPD region which is of the form $R_{HPD}(x)\,=\,\big{\\{}z\in\mathbb{R}_{+}^{N}\,:\,\hat{q}_{\pi}(z|x)\,\geq k\big{\\}},$ (3.12) where $k$ is chosen so that (3.11) is satisfied. Here is an explicit form of the HPD credible region for the future order statistics spacings $Z_{1},\dots,Z_{N}$. ###### Theorem 3.3. Based on $X_{i:n},i=1,\ldots,m$, the first $m$ order statistics among $n$ from i.i.d. 
Exp$(\theta)$ data, setting $X$ as in (2.1) and $Z$ as in (2.3), the HPD region of credibility $1-\lambda$ for $Z$ associated with Gamma prior $\pi_{\alpha,\beta}$ for $\theta$ is given by $R_{HPD}(x)\,=\,\big{\\{}z\in\mathbb{R}_{+}^{N}\,:\,\sum_{i=1}^{N}(n-m-i+1)\,z_{i}\,\leq c_{0}(x+\beta)\big{\\}},$ (3.13) with $c_{0}$ the quantile of order $1-\lambda$ of a $B2(N,m+\alpha)$ distribution. Proof. It follows from Theorem 3.1 that $R_{HPD}$ defined generally in (3.12) is of form (3.13). From Lemma 2.3, the posterior predictive distribution of $W=\sum_{i=1}^{N}\frac{(n-m-i+1)\,z_{i}}{x+\beta}$ is $B2(N,m+\alpha)$ distributed and the result follows by setting $c_{0}$ such that $\mathbb{P}(W\leq c_{0})=1-\lambda$. ∎ ###### Remark 3.4. With simple integral or finite sum forms for the $B2(N,m+\alpha)$ c.d.f., the evaluation of the above quantiles is rather straightforward. For instance in the bivariate case, we have $\mathbb{P}(W\leq c)=1-\frac{1+c\,(m+\alpha+1)}{(1+c)^{m+\alpha+1}}$, so that $c_{0}$ is expressible as the solution in $c>0$ of $\frac{1+c\,(m+\alpha+1)}{(1+c)^{m+\alpha+1}}\,=\,\lambda.$ ###### Remark 3.5. HPD credible regions are in general not invariant with respect to transformations, but they are whenever the transformation is affine linear. Since the original order statistics $T=(X_{m+1:n},\ldots,X_{m+N:n})^{\top}$ are related to the spacings $Z=(Z_{1},\ldots,Z_{N})^{\top}$ by the affine linear transformation $T=b+AZ$, with $b=(X_{m:n},\ldots,X_{m:n})^{\top}$ and $A$ a lower triangular matrix with non-zero elements $a_{i,j}=1$ for $j\leq i$, the transformation of $R_{HPD}(x)$ given by (3.13) to the order statistics $X_{m+1:n},\ldots,X_{m+N:n}$ is also HPD. ### 3.3 Prediction regions: an algorithm for the bivariate case We present here a general bivariate case solution to obtain a prediction region with a given credibility based on Theorem 3.2’s predictive density for $Y$ as in (2.2).
There are several options available, but we opt for a rather explicit strategy to construct a Bayesian credible region. It can be applied for any choice of $(r,s,m,n)$, prior $\pi_{\alpha,\beta}$ such that $m+\alpha>1$, and the targeted credibility $1-\lambda$. We refer to notation used throughout this paper, namely $a_{i}$ and $b_{j}$ as in Theorem 3.2, $\gamma_{c,d,k}$ as in Remark 3.2, $f_{l,h}$ and $\bar{F}_{l,h}$ as the univariate Pareto density and survival functions in (2.7), $\beta_{j}$ and $\alpha_{i}(y_{1})$ as in Corollary 3.3, and $\mu_{1}$ and $\mu_{2}(y_{1})$ as in Remark 3.3. 1. Step 1. Select a prediction region $A$ for $Y_{1}$ of credibility $\sqrt{1-\lambda}$ based on the marginal density in Corollary 3.2. Such a choice would desirably be an interval with relatively high levels of the predictive density. A suitable choice is: $A\,=\,\big{[}\mu_{1}-\Delta_{1},\mu_{1}+\Delta_{1}\big{]}\,\cap[0,\infty)\,,$ with $\Delta_{1}>0$ uniquely chosen such that $\displaystyle\int_{\mu_{1}-\Delta_{1}}^{\mu_{1}+\Delta_{1}}\hat{q}_{\pi_{\alpha,\beta}}(y_{1}|x)\,dy_{1}\,=\,\sqrt{1-\lambda}$ $\displaystyle\Longleftrightarrow$ $\displaystyle\sum_{i=0}^{r-m-1}\gamma_{n-r+1,r-m,i}\,\Big{(}\bar{F}_{m+\alpha,\frac{a_{i}}{x+\beta}}(\mu_{1}-\Delta_{1})_{+}\,-\,\bar{F}_{m+\alpha,\frac{a_{i}}{x+\beta}}(\mu_{1}+\Delta_{1})\Big{)}\,=\,\sqrt{1-\lambda},$ with $z_{+}=\max\\{0,z\\}$. Since this last expression is strictly increasing in $\Delta_{1}$, it is rather straightforward to approach $\Delta_{1}$ numerically. 2. Step 2. For each $y_{1}\in A$, select a prediction region $B(y_{1})$ of conditional credibility $\sqrt{1-\lambda}$ for $y_{2}$ based on the conditional density of $Y_{2}|Y_{1}=y_{1}$ given in part (a) of Corollary 3.3. The challenge here is similar to the one in Step 1 but to be repeated for all $y_{1}$. 
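Step 1 reduces to one-dimensional root finding in $\Delta_{1}$, and Step 2 repeats the same search for each $y_{1}$. A sketch for the simplest case $r=m+1$ (a single Pareto component; the survival form $\bar{F}_{l,h}(y)=(1+hy)^{-l}$ is inferred from the proof of Theorem 3.2, not quoted from (2.7)); the numbers reproduce the interval $A=[0,0.722]$ of the example in Section 3.4:

```python
from math import sqrt

def pareto_sf(l, h, y):
    """Survival function (1 + h*y)^(-l) of the Pareto density f_{l,h}
    (form inferred from the proof of Theorem 3.2); equals 1 for y <= 0."""
    return (1 + h * max(y, 0.0)) ** (-l)

def step1_interval(n, m, alpha, beta, x, lam, tol=1e-10):
    """Step 1 for the case r = m + 1: interval A of credibility sqrt(1 - lam)."""
    l, h = m + alpha, (n - m) / (x + beta)            # marginal of Y1 is f_{l,h}
    mu1 = (x + beta) / ((m + alpha - 1) * (n - m))    # predictive mean of Y1
    target = sqrt(1 - lam)
    lo, hi = 0.0, 1e6
    while hi - lo > tol:            # the credibility is increasing in Delta1
        d = 0.5 * (lo + hi)
        cred = pareto_sf(l, h, mu1 - d) - pareto_sf(l, h, mu1 + d)
        lo, hi = (d, hi) if cred < target else (lo, d)
    d = 0.5 * (lo + hi)
    return max(mu1 - d, 0.0), mu1 + d

A = step1_interval(n=30, m=20, alpha=0, beta=0, x=35.79, lam=0.05)
print(round(A[0], 3), round(A[1], 3))  # → 0.0 0.722
```

For general $r$ the single survival term is replaced by the $\gamma$-weighted sum displayed above, with the same monotone bisection.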
Analogously to Step 1, $B(y_{1})$ can reasonably be constructed pivoting around the mean as $B(y_{1})\,=\,\big{[}\mu_{2}(y_{1})-\Delta_{2}(y_{1}),\mu_{2}(y_{1})+\Delta_{2}(y_{1})\big{]}\,\cap[0,\infty),$ with $\Delta_{2}(y_{1})>0$ chosen such that $\displaystyle\\!\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}\alpha_{i}(y_{1})\,\beta_{j}\,\int_{\mu_{2}(y_{1})-\Delta_{2}(y_{1})}^{\mu_{2}(y_{1})+\Delta_{2}(y_{1})}\,f_{m+\alpha+1,\frac{b_{j}}{x+\beta+a_{i}y_{1}}}(y_{2})\,dy_{2}\,=\,\sqrt{1-\lambda}$ $\displaystyle\Longleftrightarrow\\!$ $\displaystyle\sum_{i=0}^{r-m-1}\sum_{j=0}^{s-r-1}\alpha_{i}(y_{1})\,\beta_{j}\,\,\Big{(}\bar{F}_{m+\alpha+1,\frac{b_{j}}{x+\beta+a_{i}y_{1}}}(\mu_{2}(y_{1})-\Delta_{2}(y_{1}))_{+}\,-\,\bar{F}_{m+\alpha+1,\frac{b_{j}}{x+\beta+a_{i}y_{1}}}(\mu_{2}(y_{1})+\Delta_{2}(y_{1}))\Big{)}\,\,$ $\displaystyle=\sqrt{1-\lambda}.$ The resulting prediction region $R=\\{(y_{1},y_{2}):y_{1}\in A,y_{2}\in B(y_{1})\\}$ has credibility $1-\lambda$ indeed since $\displaystyle\mathbb{P}_{\pi}\big{(}Y_{1}\in A,Y_{2}\in B(y_{1})|x\big{)}$ $\displaystyle=$ $\displaystyle\int_{A}\hat{q}_{\pi}(y_{1}|x)\big{\\{}\int_{B(y_{1})}\hat{q}_{\pi}(y_{2}|y_{1};x)\,dy_{2}\big{\\}}\,dy_{1}$ $\displaystyle=$ $\displaystyle\int_{A}\hat{q}_{\pi}(y_{1}|x)\,(\sqrt{1-\lambda})\,dy_{1}$ $\displaystyle=$ $\displaystyle 1-\lambda.$ ###### Remark 3.6. Other regions can also be selected. For instance, one-sided choices with $A$ of the form $[0,\bar{\Delta}_{1}]$ or $[\underline{\Delta}_{1},\infty)$, and with $B$ of the form $[0,\bar{\Delta}_{2}(y_{1})]$ or $[\underline{\Delta}_{2}(y_{1}),\infty)$. Another alternative would be to aim for different credibilities $1-\lambda_{1}$ and $1-\lambda_{2}$ in Steps 1 and 2, respectively, such that $(1-\lambda_{1})(1-\lambda_{2})=1-\lambda$. ### 3.4 Example The following dataset from Murthy et al. 
(2004) shows $n=30$ ordered failure times for repairable items:
0.11 | 0.30 | 0.40 | 0.45 | 0.59 | 0.63 | 0.70 | 0.71 | 0.74 | 0.77
0.94 | 1.06 | 1.17 | 1.23 | 1.23 | 1.24 | 1.43 | 1.46 | 1.49 | 1.74
1.82 | 1.86 | 1.97 | 2.23 | 2.37 | 2.46 | 2.63 | 3.46 | 4.36 | 4.73
For the purpose of illustration, suppose that only the $m=20$ first values are observed and that we wish to predict either: (i) the next two order statistics spacings $(Z_{1},Z_{2})\,=\,\big{(}X_{21:30}-X_{20:30},X_{22:30}-X_{21:30}\big{)}$, or (ii) the next and last order statistics $\big{(}X_{21:30},X_{30:30}\big{)}$. In both cases, we use the non-informative prior density $\pi_{0}(\theta)\,=\,\frac{1}{\theta}\,\mathbb{I}_{(0,\infty)}(\theta)$, and consider credibility $1-\lambda=0.95$. As expanded upon in the next section, the frequentist coverage of such a prediction region matches the credibility for all $\theta>0$. The data yield $x\,=\,\sum_{i=1}^{20}x_{i:30}\,+\,10\,x_{20:30}\,=\,35.79$. For case (i), Theorem 3.3 applies and the HPD credible region for $(Z_{1},Z_{2})$ is given by $\displaystyle R_{\textrm{HPD}}(35.79)\,=\,\bigl{\\{}(z_{1},z_{2})\in\mathbb{R}_{+}^{2}\,:\,0.2794\,z_{1}\,+\,0.2515\,z_{2}\,\leq 0.2606\big{\\}},$ $c_{0}\,=\,0.2606$ being the quantile of order $0.95$ of a $B2(2,20)$ distribution. For case (ii), we illustrate the use of Section 3.3’s algorithm which is prescribed for $(Y_{1},Y_{2})$ with $Y_{1}\,=\,X_{21:30}-X_{20:30}$ and $Y_{2}\,=\,X_{30:30}-X_{21:30}$. Figure 1 (a) presents the resulting Bayesian prediction region along with the regression function or conditional expectation $\mathbb{E}(Y_{2}|y_{1},x=35.79)$ as a function of $y_{1}$. The first step yields $A=[0,0.722]$, while the $B(y_{1})$ intervals are shown in the figure. For example, $B(0.50)=\,[0,13.059]$.
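The numbers in case (i) can be reproduced directly from the data, using the c.d.f. expression of Remark 3.4 to check the $B2(2,20)$ quantile:

```python
data = [0.11, 0.30, 0.40, 0.45, 0.59, 0.63, 0.70, 0.71, 0.74, 0.77,
        0.94, 1.06, 1.17, 1.23, 1.23, 1.24, 1.43, 1.46, 1.49, 1.74]
n, m = 30, 20

# x = sum of the m observed order statistics + (n - m) * x_{m:n}
x = sum(data) + (n - m) * data[-1]
print(round(x, 2))  # → 35.79

# coefficients (n - m)/x and (n - m - 1)/x of the HPD region (beta = 0)
print(round((n - m) / x, 4), round((n - m - 1) / x, 4))  # → 0.2794 0.2515

# check that c0 = 0.2606 has B2(2, 20) probability 0.95 (Remark 3.4)
c0 = 0.2606
cred = 1 - (1 + c0 * (m + 1)) / (1 + c0) ** (m + 1)
print(round(cred, 3))  # → 0.95
```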
Observe that a left neighbourhood of $0$ is included in $B(y_{1})$ for all $y_{1}\in A$ as the second step credibility, equal to $(0.95)^{1/2}\approx 0.9747$, is quite large. In contrast, the Bayesian prediction region with credibility $0.80$ is displayed in Figure 1 (b) resulting in $A=[0,0.4258]$ and intervals $B(y_{1})$ centered at $\mathbb{E}(Y_{2}|y_{1},x=35.79)$ which exclude values close to $0$ for all $y_{1}\in A$. Finally, the corresponding prediction regions for $\big{(}X_{21:30},X_{30:30}\big{)}\,$ and credibilities $0.95$ and $0.80$ are obtained as $\\{(y_{1}+x_{20:30},y_{1}+y_{2}+x_{20:30}):(y_{1},y_{2})\in R\\}$ and displayed in Figure 2.
Figure 1: Bayesian prediction regions and $\mathbb{E}(Y_{2}|y_{1},x=35.79)$ (dashed); (a) $1-\lambda=0.95$, (b) $1-\lambda=0.80$.
Figure 2: Bayesian prediction regions for $(X_{21:30},X_{30:30})$ in case (ii); (a) $1-\lambda=0.95$, (b) $1-\lambda=0.80$.
## 4 Frequentist coverage and credibility Through Theorem 3.2’s Bayesian predictive densities, the prediction regions for future order statistics described in the previous section are constructed in order to attain exact Bayesian credibility. This includes the non-informative prior prediction densities $\hat{q}_{\pi_{0}}(\cdot;X)$ given by Theorems 3.1 and 3.2 for $\alpha=\beta=0$. Moreover, prediction regions based on $\hat{q}_{\pi_{0}}(\cdot;X)$ also lead to exact frequentist coverage as expanded on below. In fact, we cast the result in a more general scale parameter family model setting with scale parameter densities $p_{\sigma}(x)=\frac{1}{\sigma}\,p_{1}(\frac{x}{\sigma})\hbox{ and }q_{\sigma}(y^{\prime})=\frac{1}{\sigma^{d_{2}}}\,q_{1}(\frac{y_{1}^{\prime}}{\sigma},\ldots,\frac{y_{d_{2}}^{\prime}}{\sigma}).$ (4.14) We will make use of the following intermediate result, a univariate version of which was first given by L’Moudden et al. (2017) (i.e., $d_{2}=1$). ###### Lemma 4.4.
Under model (4.14) and prior density $\pi_{0}(\sigma)\,=\,\frac{1}{\sigma}\,\mathbb{I}_{(0,\infty)}(\sigma)$, the Bayesian predictive density $\hat{q}_{\pi_{0}}$ is given by $\hat{q}_{\pi_{0}}(y^{\prime}|x)\,=\,\frac{1}{x^{d_{2}}}\,h(\frac{y^{\prime}}{x}),$ (4.15) where $h$ is the frequentist density of $R=\frac{Y^{\prime}}{X}=(R_{1},\ldots,R_{d_{2}})$, which is free of $\sigma$, and given by $h(r)\,=\,\int_{0}^{\infty}u^{d_{2}}\,q_{1}(r_{1}u,\ldots,r_{d_{2}}u)\,p_{1}(u)\,du.$ (4.16) Furthermore, it is the case that $\big{(}\frac{Y_{1}^{\prime}}{x},\ldots,\frac{Y^{\prime}_{d_{2}}}{x}\big{)}\big{|}x\,=^{d}\,\big{(}\frac{Y_{1}^{\prime}}{X},\ldots,\frac{Y^{\prime}_{d_{2}}}{X}\big{)}\big{|}\sigma\,\hbox{ for all }x,\sigma,$ (4.17) i.e., the posterior predictive and frequentist distributions of $R$ match and are furthermore independent of the observed value of $x$ and the parameter value of $\sigma$. Proof. Identity (4.17) follows from the first part of the lemma. A direct evaluation yields (4.16) as the density of $R|\sigma$. Finally, the posterior density of $\sigma$ is given by $\pi(\sigma|x)\,=\,\frac{x}{\sigma^{2}}\,p_{1}(\frac{x}{\sigma})$, which leads to the predictive density $\displaystyle\hat{q}_{\pi_{0}}(y^{\prime}|x)\,$ $\displaystyle=$ $\displaystyle\,\int_{0}^{\infty}\frac{1}{\sigma^{d_{2}}}\,q_{1}(\frac{y_{1}^{\prime}}{\sigma},\ldots,\frac{y_{d_{2}}^{\prime}}{\sigma})\,\frac{x}{\sigma^{2}}\,p_{1}(\frac{x}{\sigma})\,d\sigma$ $\displaystyle=$ $\displaystyle\frac{1}{x^{d_{2}}}\int_{0}^{\infty}u^{d_{2}}q_{1}\big{(}\frac{y_{1}^{\prime}u}{x},\ldots,\frac{y_{d_{2}}^{\prime}u}{x}\big{)}\,p_{1}(u)\,du$ $\displaystyle=$ $\displaystyle\frac{1}{x^{d_{2}}}\,h(\frac{y^{\prime}}{x})\,.\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\qed$ ###### Remark 4.7. Identity (4.17) is quite general and clarifies why Bayesian analysis with respect to the prior density $\pi_{0}$ matches pivotal based analysis that stems from the right-hand side.
In the univariate case and for the prediction of a single future order statistic, as remarked upon by Dunsmore (1974), the identity explains why his Bayesian solutions match the pivotal based solution of Lawless (1971). ###### Theorem 4.4. Consider model (4.14) and a Bayesian prediction region $R(X)$ with credibility $1-\lambda$ associated with the prior density $\pi_{0}(\theta)=\frac{1}{\theta}\,\mathbb{I}_{(0,\infty)}(\theta)$. Then, $R(X)$ has exact frequentist coverage probability, i.e., $\mathbb{P}(R(X)\ni Y^{\prime}|\theta)\,=\,1-\lambda$ for all $\theta>0$. Proof. Since $R(X)$ has credibility $1-\lambda$, we have $\mathbb{P}\big{(}Y^{\prime}\in R(x)|x\big{)}\,=\,\mathbb{P}\big{(}\frac{Y^{\prime}}{x}\in R^{*}(x)\big{|}x\big{)}\,=\,1-\lambda,$ where $R^{*}(x)\,=\,\big{\\{}r\in\mathbb{R}_{+}^{d_{2}}:x\,r\in R(x)\big{\\}}$. Now, since the predictive distribution of $\frac{Y^{\prime}}{x}$ is free of $x$ with p.d.f. $h$ (Lemma 4.4), it follows that $R^{*}(x)$ is free of $x$ such that $\int_{R^{*}(x)}h(z)\,dz\,=\,1-\lambda$. On the other hand, the frequentist coverage of $R(X)$ is given by $\mathbb{P}(Y^{\prime}\in R(X)|\,\theta\,)\,=\,\mathbb{P}\big{(}\frac{Y^{\prime}}{X}\in R^{*}(X)\big{|}\theta\big{)}\,=\,\int_{R^{*}(x)}h(r)\,dr\,=\,1-\lambda,$ since $h$ is also the density of $\frac{Y^{\prime}}{X}|\theta$ (Lemma 4.4). ∎ To conclude, the above (with $Y^{\prime}=Y$ or $Y^{\prime}=Z$) applies to our set-ups as follows. ###### Corollary 4.4. Based on $X$ as in (2.1) and the non-informative prior density $\pi_{0}(\theta)=\frac{1}{\theta}\,\mathbb{I}_{(0,\infty)}(\theta)$, Bayesian predictive regions for $Z$ or $Y$ with credibility $1-\lambda$, based on the predictive densities given in Theorems 3.1 and 3.2, have matching frequentist coverage probability $1-\lambda$ for all $\theta>0$.
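Corollary 4.4's credibility/coverage match can be illustrated by simulation: draw exponential samples (with $\theta=1$, which is without loss of generality by scale invariance), form Theorem 3.3's HPD region with $\alpha=\beta=0$, and count how often it captures the next two spacings. A sketch:

```python
import random

random.seed(42)
n, m, lam = 30, 20, 0.05

# c0: the 1 - lam quantile of B2(2, m), via bisection on Remark 3.4's c.d.f.
def lhs(c):
    return (1 + c * (m + 1)) / (1 + c) ** (m + 1)

lo, hi = 0.0, 10.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) > lam else (lo, mid)
c0 = 0.5 * (lo + hi)

# Monte Carlo frequentist coverage at theta = 1
trials, hits = 20000, 0
for _ in range(trials):
    xs = sorted(random.expovariate(1.0) for _ in range(n))
    x = sum(xs[:m]) + (n - m) * xs[m - 1]           # statistic X of (2.1)
    z1, z2 = xs[m] - xs[m - 1], xs[m + 1] - xs[m]   # spacings Z1, Z2
    if (n - m) * z1 + (n - m - 1) * z2 <= c0 * x:   # HPD region of Theorem 3.3
        hits += 1
print(round(hits / trials, 2))  # close to 0.95, as the corollary asserts
```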
## 5 Best invariance and minimaxity As seen in Section 3, Bayesian predictive densities such as those given in Theorem 3.2 facilitate the construction of a prediction region for future values of $Y$ or $Z$ with a given credibility. Non-Bayesian predictive densities are also available, such as plug-in densities and those obtained by likelihood or pivotal-based methods. Kullback-Leibler (KL) divergence loss and accompanying risk can be used to evaluate the frequentist performance of density estimators $\hat{q}(\cdot;X)$. For our problem, these are given by $L_{KL}(\theta,\hat{q}(\cdot;x))\,=\,\int q_{\theta}(t)\,\log\big{(}\frac{q_{\theta}(t)}{\hat{q}(t;x)}\big{)}\,dt\,$ (assuming Lebesgue densities), and $R_{KL}(\theta,\hat{q})\,=\,\mathbb{E}_{\theta}\big{\\{}L_{KL}(\theta,\hat{q}(\cdot;X))\big{\\}}.$ It is of interest to assess the efficiency of predictive densities for KL risk, and decision-theoretic properties of invariance and minimaxity applicable to our contexts are reviewed in this section. An early contribution to the determination of a best invariant density is due to Murray (1977), while a more exhaustive treatment of best invariant densities, as well as minimaxity in predictive density estimation, appears in Liang & Barron (2004). A density $\hat{q}_{m}$ is minimax whenever $\hat{q}_{m}$ minimizes among all densities the supremum frequentist risk, i.e., in our cases when $\sup_{\theta>0}R_{KL}(\theta,\hat{q}_{m})\,=\,\inf_{\hat{q}}\sup_{\theta>0}R_{KL}(\theta,\hat{q})\,.$ For our problems, and more generally for model (4.14) with $\sigma=1/\theta$ and Kullback-Leibler divergence loss, a predictive density $\hat{q}$ is invariant under changes of scale whenever it satisfies the scale parameter family property $\hat{q}(y^{\prime};cx)\,=\,\frac{1}{c^{d_{2}}}\,\hat{q}(\frac{y^{\prime}}{c};x)\,,y^{\prime}\in\mathbb{R}_{+}^{d_{2}}\,,$ (5.18) for all $c,x>0$.
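Property (5.18) is easy to verify numerically for a plug-in density built from a scale-equivariant estimator; a sketch for $d_{2}=1$ with a standard exponential base density (an illustrative choice, not the model of Sections 2-3):

```python
from math import exp

def q1(t):
    """Base density q_1 of (4.14): standard exponential (illustrative choice)."""
    return exp(-t) if t >= 0 else 0.0

def plug_in(yp, x, k=0.1):
    """Plug-in predictive density q_{sigma_hat(x)}(y') with sigma_hat(x) = k*x."""
    s = k * x
    return q1(yp / s) / s

# property (5.18): q_hat(y'; c*x) == (1/c) * q_hat(y'/c; x) for all c, x > 0
for c in (0.5, 2.0, 7.3):
    for yp, x in ((0.4, 1.0), (2.0, 3.5), (5.0, 0.8)):
        assert abs(plug_in(yp, c * x) - plug_in(yp / c, x) / c) < 1e-12
print("scale invariance (5.18) holds for the plug-in density")
```

The check succeeds for any equivariant $\hat{\sigma}(x)=kx$, since both sides reduce algebraically to $q_{1}(y^{\prime}/(ckx))/(ckx)$.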
The class of invariant densities here includes $\hat{q}_{\pi_{0}}$; as can be verified directly from the expressions given in Theorems 3.1 and 3.2 with $\alpha=\beta=0$, or even by (4.15); as well as plug-in densities $q_{\hat{\sigma}}$ with $\hat{\sigma}(x)=kx$, i.e., a scale invariant point estimator $\hat{\sigma}$ of $\sigma$ satisfying $\hat{\sigma}(cx)\,=\,c\,\hat{\sigma}(x)$ for $c,x>0$; such as the maximum likelihood choice with $k=1/m$. The present invariance structure implies that an invariant density has constant risk as a function of $\theta$ as long as it is finite, from which it follows that there exists an optimal choice among invariant densities. The risk constancy follows for vastly more general settings (e.g., Berger, 1985), but can also be derived directly as follows: $\displaystyle R_{KL}(c\sigma,\hat{q})\,$ $\displaystyle=$ $\displaystyle\int_{(0,\infty)}p_{c\sigma}(x)\,\int_{(0,\infty)^{d_{2}}}q_{c\sigma}(y^{\prime})\,\log\big{(}\frac{q_{c\sigma}(y^{\prime})}{\hat{q}(y^{\prime};x)}\big{)}\,dy^{\prime}\,dx$ $\displaystyle=$ $\displaystyle\int_{(0,\infty)}\frac{1}{c}\,p_{\sigma}(\frac{x}{c})\,\int_{(0,\infty)^{d_{2}}}q_{\sigma}(\frac{y^{\prime}}{c})\frac{1}{c^{d_{2}}}\,\log\big{(}\,\frac{q_{\sigma}(\frac{y^{\prime}}{c})}{\hat{q}(\frac{y^{\prime}}{c};\frac{x}{c})}\big{)}\,dy^{\prime}\,dx$ $\displaystyle=$ $\displaystyle\int_{(0,\infty)}p_{\sigma}(t)\,\int_{(0,\infty)^{d_{2}}}q_{\sigma}(u)\,\log\big{(}\frac{q_{\sigma}(u)}{\hat{q}(u;t)}\big{)}\,du\,dt$ $\displaystyle=$ $\displaystyle\,R_{KL}(\sigma,\hat{q}),$ for $c,\sigma>0$, with the change of variables $(t,u)\,=\,\frac{1}{c}(x,y^{\prime})$, and by making use of (5.18). Under general conditions for problems that are invariant, which are met here, a best invariant procedure exists and coincides with the generalized Bayes estimator associated with a (right) invariant prior density (e.g., Berger, 1985, section 6.6.2).
In our set-up, such a prior density is the non-informative $\pi_{0}$, and it leads to the best invariant property of the density $\hat{q}_{\pi_{0}}$. Furthermore, $\hat{q}_{\pi_{0}}$ is minimax (see Liang & Barron (2004), Theorem 2 and Proposition 3). We conclude this section by summarizing the above as applicable (with $Y^{\prime}=Y$ or $Y^{\prime}=Z$) to the problems at hand. ###### Theorem 5.5. Consider $X_{i:n},i=1,\ldots,m$, the first $m$ order statistics among $n$ from i.i.d. Exp$(\theta)$ data, and $X$, $Y$, and $Z$ as in (2.1), (2.2) and (2.3). Then the best invariant densities for estimating the densities of $Y$ and $Z$, respectively, under KL divergence loss are $\hat{q}_{\pi_{0}}$, as given in Theorems 3.1 and 3.2 for $\alpha=\beta=0$. Furthermore, their KL risk is constant as a function of $\theta$ and $\hat{q}_{\pi_{0}}$ is minimax. ## 6 Concluding remarks We have illustrated the natural usage of Bayesian credible regions for the prediction of future order statistics under a type-2 censoring scheme with exponentially distributed data. We have emphasized multivariate aspects and provided explicit expressions for Bayesian predictive densities and resulting prediction credible regions. We have also addressed optimality properties achieved with the non-informative prior density choice, such as the matching of Bayesian credibility and frequentist probability coverage, as well as the best invariant and minimax properties of the corresponding Bayesian predictive density. It would be interesting to extend the analysis more broadly to other types of probability models and censoring schemes. We have provided a Bayesian HPD credible region (Theorem 3.3) for the next $N$ future order statistics. However, for the general bivariate case, such a solution is lacking. We do not know for instance if the Bayesian predictive density is unimodal which would facilitate the determination of such a region.
## Acknowledgements The authors are thankful to the Université de Sherbrooke for its financial support awarded as part of its visiting researcher’s program. Éric Marchand’s research is supported in part by the NSERC of Canada. ## References * Arnold (2014) Arnold, B.C. (2014). Univariate and multivariate Pareto models. Journal of Statistical Distributions and Applications, 1, article 11. * Bagheri et al. (2022) Bagheri, S.F., Asgharzadeh, A., Fernández, A.J., & Pérez-Gonzàlez, C.P. (2022). Joint prediction for future failure times under type-II censoring. IEEE Transactions on Reliability, 71, 100–110. * Berger (1985) Berger, J. (1985). Statistical Decision Theory and Related Topics. Springer Texts in Statistics. Springer-Verlag, New York. Second edition. * Dunsmore (1974) Dunsmore, I.R. (1974). The Bayesian prediction problem in life testing models. Technometrics, 16, 455–460. * Fourdrinier et al. (2019) Fourdrinier, D., Marchand, É. & Strawderman, W.E. (2019). On efficient prediction and predictive density estimation for spherically symmetric models. Journal of Multivariate Analysis, 173, 18–25. * Kotz et al. (2000) Kotz, S., Balakrishnan, N. & Johnson, N.L. (2000). Multivariate continuous distributions: volume 1, Wiley. * Lawless (1971) Lawless, J.F. (1971). A prediction problem concerning samples from the exponential distributions, with application in life testing. Technometrics, 13, 725–730. * Lehmann & Casella (1998) Lehmann, E.L. & Casella, G. (1998). Theory of Point Estimation. Second edition, Springer, New York. * Liang & Barron (2004) Liang, F. & Barron, A. (2004). Exact minimax strategies for predictive density estimation, data compression, and model selection. IEEE Transactions on Information Theory, 62, 2708–2726. * L’Moudden et al. (2017) L’Moudden, A., Marchand, É., Kortbi, O. & Strawderman, W.E. (2017). On predictive density estimation for Gamma models with parametric constraints. Journal of Statistical Planning and Inference, 185, 56–68.
* Murray (1977) Murray, G.D. (1977). A note on the estimation of probability density functions. Biometrika, 64, 150–152. * Murthy et al. (2004) Murthy, D.P., Xie, M. & Jiang, R. (2004). Weibull models. John Wiley & Sons.
# A New Look at the YY CrB Binary System Somayeh Soomandar Independent astrophysics researcher, Kerman, Iran Atila Poro Astronomy Department of the Raderon Lab., BC., Burnaby, Canada ###### Abstract This study presented a new analysis of the TESS-observed W Ursae Majoris (W UMa) binary star YY Coronae Borealis (YY CrB). The light curve was analyzed with the PHysics Of Eclipsing BinariEs (PHOEBE) Python version together with the Markov chain Monte Carlo (MCMC) method. The light curve solutions required a hot spot and $l_{3}$. New eclipse times were extracted from the TESS observations, and the O-C curves of primary and secondary minima showed anti-correlated behavior. In order to study the O-C curve of minima, minima times between 1991 and 2023 were collected. This investigation reported a new linear ephemeris and, by fitting a quadratic function to the O-C curve of minima, calculated an orbital period change rate of $\dot{P}\approx 5.786\times{10^{-8}}\,\frac{{day}}{{year}}$. Assuming mass conservation, a mass exchange rate of $\dot{M}_{2}=2.472\times{10^{-8}}$ from the more massive component to the less massive one was calculated. Then, by using the light travel time function, a possible third body was modeled, and its mass was derived as $0.498{M_{\odot}}$ with a period of $\simeq 7351.018$ days. The O-C curve analysis and the derived mass indicate that the presence of a third body is unlikely. This binary is expected to evolve into a broken-contact phase and is a good case to support the thermal relaxation oscillation model. binaries: eclipsing – method: photometric ## 1 Introduction W UMa-type systems are recognised by their eclipsing light curves with almost equal minima and a short orbital period.
These stars have spectral types ranging from A to middle K, and the convective atmosphere is the main reason for chromospheric activity, as well as starspots, which are signs of the existence of dynamo-generated magnetic activity. YY CrB (HIP 77598, TIC 29287800) is a W UMa binary system discovered by Hipparcos (ESA 1997). This system has been studied in several works. First, Rucinski, et al. (2000) revealed the spectral type F8V for the two components. Vaňko et al. (2004) found the light curves to be asymmetric and mentioned the existence of starspots on the components. Gazeas et al. (2005) analyzed the light curve, derived the geometric and photometric parameters, and concluded that this target is a contact binary with weak magnetic activity. Essam et al. (2010) combined photometric and spectroscopic solutions and calculated a fill-out factor approximately equal to 64 percent and a mass ratio of 0.241. In addition, they studied the changes in the orbital period using the O-C diagram and concluded that the orbital period is decreasing. Yu, Xiang, & Xiao (2015) studied the orbital period changes, reported a decreasing period rate, and concluded that the sinusoidal oscillation can be interpreted as magnetic activity. The results of Essam et al. (2010) and Yu, Xiang, & Xiao (2015) on the rate of period decrease demonstrate that the decrease rate is lowering progressively, indicating that this system is going through an orbital expansion stage of thermal relaxation oscillation (TRO) cycles. Also, understanding the evolutionary status of this target could prove invaluable. Using new space-based data, we have re-analyzed the light curve solution and studied the O-C curve in detail. Moreover, we studied the possibility of a third body in this interesting system. The structure of the paper is as follows: Section 2 provides information on TESS observations and the data reduction process.
The light curve solution and the estimation of absolute parameters are included in Sections 3 and 4 respectively, the orbital period variation analysis is presented in Section 5, and finally, Section 6 contains the discussion and conclusion. ## 2 Observation and Data Reduction YY CrB was observed by TESS during sectors 24 and 51 (April 16, 2020-May 13, 2020, and April 22, 2022-May 18, 2022) on Cameras 1 and 3. There are two-minute cadence data for sector 24 that are processed by the Science Processing Operations Center (SPOC) pipeline (Jenkins et al. (2016); Jenkins (2015)). Photometric images were downloaded using the Lightkurve package (Lightkurve Collaboration et al., 2018), which provides the functions to download TESS data from the public data archive at MAST (https://mast.stsci.edu). For sector 24, we used the Pre-search Data Conditioning flux of the Simple Aperture Photometry (PDCSAP). There is no detrended light curve for sector 51. Therefore, we downloaded the TESS Full Frame Images (FFIs) from the MAST and used the Lightkurve package to extract the SAP light curve with a mask defined by the pixels shown in the left panel of Figure 1. We used the create_threshold_mask function to produce an aperture mask with a threshold equal to 10. This function identifies the pixels in the target pixel file whose median flux is brighter than the overall median by the threshold times the standard deviation. The right panel of Figure 1 shows the phased light curve that was produced. Figure 1: Left panel: TESS target pixel file of YY CrB in sector 51. The pixels included in the computation of the SAP are shown as red-bordered pixels. Right panel: phased light curve during sector 51. ## 3 Light curve solution Essam et al. (2010) calculated the optimal parameters by combining simultaneous radial velocity and light curve solutions. We started with the initial values of the parameters taken from the solution of the Essam et al. (2010) study.
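Producing the phased light curve described in Section 2 amounts to folding the observation time stamps on the orbital period; a minimal pure-Python sketch (the 0.4-day period below is a placeholder for illustration, not the measured period of YY CrB):

```python
def phase_fold(times, period, t0=0.0):
    """Fold observation times (in days) on a trial period; phases lie in [0, 1)."""
    return [((t - t0) / period) % 1.0 for t in times]

cadence = 2.0 / (24 * 60)                    # two-minute cadence, in days
times = [i * cadence for i in range(720)]    # one day of observations
phases = phase_fold(times, period=0.4)       # placeholder 0.4-day period
print(all(0.0 <= p < 1.0 for p in phases))   # → True
```

In practice the fluxes are then sorted (or binned) by phase to give the folded curve shown in the right panel of Figure 1.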
One day of data from TESS sector 24 was utilized for the light curve solution. The 2-min cadence observations help to better analyze the effect of spots on the components. Photometric analysis of the YY CrB system was carried out using the PHOEBE 2.4.9 version, the TESS filter of the code, and the MCMC approach (Prša & Zwitter 2005, Prša et al. 2016, Conroy et al. 2020, Poro et al. 2022). We selected the TESS passband from the code and chose the contact binary mode in PHOEBE based on the light curve’s shape and the solutions of previous studies. The initial and input parameters were as follows: the mass ratio $q=0.241$ and the effective temperature of the primary component ${T_{1}}=5819$ K are from Essam et al. (2010); the gravity darkening coefficients are ${g_{1}}={g_{2}}=0.32$ and the albedo coefficients ${A_{1}}={A_{2}}=0.5$ (Lucy 1967, Ruciński 1969). The limb-darkening coefficients were employed as free parameters, and the Castelli & Kurucz (2004) method was used to model the stellar atmosphere. The parameters searched in the MCMC include: the orbital inclination $i$, the mean temperatures of the stars $T_{1,2}$, the mass ratio $q$, the fillout factor $f$, the bandpass luminosity of the primary star (${L_{1}}$), and the third light in total light (${l_{3}}$). We applied 46 walkers and 1000 iterations to each walker in the MCMC. Given the asymmetry in the brightness of the maxima in the light curve of this close eclipsing binary, the solution requires the assumption of a hot spot on the primary component (O’Connell 1951). Based on the observational and theoretical light curves in this study, it has not been possible to provide the solution without considering $l_{3}$. The theoretical fit to the observational light curve for the YY CrB system is given in Figure 2. The corner plot that the MCMC produced is displayed in Figure 3.
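The sampling loop behind such an analysis can be illustrated without PHOEBE: a toy Metropolis MCMC recovering a single eclipse-depth parameter from a synthetic light curve (everything below — the Gaussian eclipse shape, noise level, and proposal width — is illustrative, not the actual PHOEBE model or its parameters):

```python
import math, random

random.seed(1)

# synthetic "light curve": a Gaussian-shaped eclipse of depth 0.3 plus noise
true_depth, width, mid = 0.3, 0.05, 0.5
phases = [i / 200 for i in range(200)]
def model(depth, p):
    return 1.0 - depth * math.exp(-((p - mid) / width) ** 2)
fluxes = [model(true_depth, p) + random.gauss(0.0, 0.01) for p in phases]

def log_like(depth):
    return -sum((f - model(depth, p)) ** 2
                for p, f in zip(phases, fluxes)) / (2 * 0.01 ** 2)

# Metropolis random walk over the single free parameter "depth"
depth = 0.5
ll = log_like(depth)
chain = []
for _ in range(5000):
    prop = depth + random.gauss(0.0, 0.02)
    ll_prop = log_like(prop)
    if math.log(random.random()) < ll_prop - ll:   # accept/reject step
        depth, ll = prop, ll_prop
    chain.append(depth)

est = sum(chain[1000:]) / len(chain[1000:])        # posterior mean after burn-in
print(abs(est - true_depth) < 0.02)  # → True
```

PHOEBE's MCMC does the same thing with an ensemble of walkers (46 here) over the full physical parameter set, with the forward model replaced by the binary-star light curve synthesis.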
Also, the geometrical structure is plotted in Figure 4, which shows a lower temperature at the point of contact between the companion stars due to gravity darkening (Prša et al. 2016). The calculated parameters, together with the values obtained by Essam et al. (2010), are listed in Table 1. Figure 2: Light curve solution of the eclipsing binary YY CrB. Observational light curve (open circles), synthetic light curve (solid red line), and the residuals (blue circles). Figure 3: The corner plots of the light curve solution. Figure 4: The geometrical structure of YY CrB. Table 1: The parameters of the eclipsing binary YY CrB. Parameter | This study | Essam et al. (2010) ---|---|--- $q=M_{2}/M_{1}$ | $0.2498_{\rm-0.0024}^{+0.0031}$ | $0.241\pm 0.002$ $T_{1}$ (K) | $5621_{\rm-3}^{+3}$ | 5819 $T_{2}$ (K) | $5944_{\rm-8}^{+6}$ | $6010\pm 72$ $i$ (deg) | $81.50_{\rm-0.29}^{+0.36}$ | $80.26\pm 0.05$ ${\Omega_{1}=\Omega_{2}}$ | $2.295\pm 0.079$ | 2.237 $l_{1}/l_{tot}$ | $0.730_{\rm-0.001}^{+0.001}$ | $0.7508\pm 0.0154$ $l_{2}/l_{tot}$ | $0.264\pm 0.001$ | 0.2492 $l_{3}/l_{tot}$ | $0.006_{\rm-0.001}^{+0.001}$ | $f$ | $0.363_{\rm-0.031}^{+0.025}$ | 0.64 ${r_{1}}_{mean}$ | $0.522\pm 0.018$ | 0.537 ${r_{2}}_{mean}$ | $0.287\pm 0.028$ | 0.282 Phase shift | $0.08\pm 0.005$ | Spot on the star 1: | | Colatitude $\theta$ | $99\pm 1$ | 90 Longitude $\lambda$ | $325\pm 1$ | $11.25\pm 1.638$ Angular radii $\gamma$ | $18\pm 1$ | $5.250\pm 0.573$ ${T_{star}}/{T_{spot}}$ | $1.04\pm 0.02$ | 0.750 Spot on the star 2: | | Colatitude $\theta$ | | $99.487\pm 3.919$ Longitude $\lambda$ | | 325 Angular radii $\gamma$ | | $16.300\pm 1.326$ ${T_{star}}/{T_{spot}}$ | | 1.351 ## 4 Absolute parameters The absolute parameters of the binary system, including ${M_{v1,2}}$, ${M_{bol1,2}}$, $L_{1,2}$, $R_{1,2}$, $M_{1,2}$, $log(g)_{1,2}$, and $a$, were calculated using the Gaia DR3 parallax and the parameters of the light curve solution of this study, following the same method as Poro et al. (2022). 
First, the absolute magnitude ${M_{v}}$ of the system was calculated with Equation (1), ${M_{v(system)}}=V_{max}-5\log(d)+5-{A_{v}}$ (1) where the distance of the system, $d=90.07\pm 0.1$ pc, was derived from Gaia DR3 and $V_{max}=8.64\pm 0.08$ comes from the VSX database (https://www.aavso.org/vsx/). The extinction coefficient ${A_{v}}=0.015\pm 0.002$ was obtained using the DUST-MAPS package in Python (Green et al., 2019). Equation (2) can then be utilized to determine the absolute magnitudes of the primary and secondary components, ${M_{v1,2}}-{M_{vtot}}=-2.5\log(\frac{{{l_{1,2}}}}{{{l_{tot}}}})$ (2) The bolometric magnitude ${M_{bol}}$ of each component was obtained from Equation (3), ${M_{bol}}={M_{v}}+BC$ (3) where the effective temperatures of the stars are used to obtain the bolometric corrections for the primary and secondary components, $BC_{1}=-0.111$ and $BC_{2}=-0.052$, respectively (Flower, 1996). The bolometric correction is given by the polynomial fit in Equation (4), $BC=a+b(\log{T_{eff}})+c{(\log{T_{eff}})^{2}}+d{(\log{T_{eff}})^{3}}+e{(\log{T_{eff}})^{4}}$ (4) Then, the luminosity of the two components is determined from Pogson’s relation (Pogson, 1856), ${M_{bol}}-{M_{bol\odot}}=-2.5\log(\frac{L}{{{L_{\odot}}}})$ (5) where ${M_{bol\odot}}$ is taken as $4.73^{mag}$ from Torres (2010). The radii of the primary and secondary components are calculated from Equation (6), $R={(\frac{L}{{4\pi\sigma{T^{4}}}})^{1/2}}$ (6) where $\sigma$ is the Stefan-Boltzmann constant and $T$ is the effective temperature of each component. Additionally, using ${r_{mean1,2}}$ and $a=\frac{R}{{{r_{mean}}}}$, we calculated the separation $a$ as the average of ${a_{1}}$ and ${a_{2}}$. The resulting parameters, together with the values obtained by the Essam et al. (2010) study, are listed in Table 2. Table 2: The absolute parameters of YY CrB. Absolute parameters | This study | Essam et al. 
(2010) ---|---|--- ${M_{bol1}}(mag)$ | $4.083\pm 0.074$ | 3.939 ${M_{bol2}}(mag)$ | $5.246\pm 0.072$ | 5.173 ${L_{1}}({L_{\odot}})$ | $1.832\pm 0.121$ | 2.580 ${L_{2}}({L_{\odot}})$ | $0.628\pm 0.041$ | 0.668 ${R_{1}}({R_{\odot}})$ | $1.430\pm 0.049$ | 1.427 ${R_{2}}({R_{\odot}})$ | $0.749\pm 0.026$ | 0.757 $a({R_{\odot}})$ | $2.674\pm 0.080$ | 2.64 $M_{1}({M_{\odot}})$ | $1.448\pm 0.131$ | $1.467$ $M_{2}({M_{\odot}})$ | $0.362\pm 0.037$ | $0.357$ $log(g)_{1}(cgs)$ | $4.288\pm 0.008$ | 4.295 $log(g)_{2}(cgs)$ | $4.248\pm 0.012$ | 4.232 ## 5 The Orbital Period Changes To calculate the eclipse times of minima, we used the same method as Soomandar & Abedi (2020). First, we split the detrended light curves into individual eclipses and fitted a Lorentzian function to each eclipse by the least-squares method, using the scipy.optimize.curve_fit routine in Python; the standard deviation errors of the fitted parameters were computed as np.sqrt(np.diag(cov)) from the covariance matrix. The TESS observations yielded a total of 220 primary and secondary minima, displayed in Table 3. We then performed an analysis of observed (O) minus calculated (C) eclipse times (Sterken, 2005), computing the O-C values with the following linear ephemeris (Kreiner 2004; Yu, Xiang, & Xiao 2015): $Min.I=2452500.1757+0.3765545\times E$ (7) The O-C curves of the primary and secondary minima for sectors 24 and 51 are plotted in Figure 5. The anti-correlated behavior of the primary and secondary minima is obvious, which confirms the presence of spots on the components of the contact binary (Tran et al. 2013; Balaji et al. 2015). We averaged the primary and secondary minima to eliminate this anti-correlated effect when analyzing the orbital period changes (Balaji et al., 2015). The calculated values are shown in Table 3. 
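The per-eclipse Lorentzian fitting step described above can be sketched as follows, using scipy.optimize.curve_fit on a synthetic eclipse. The inverted-Lorentzian parameterization and all numbers below are illustrative assumptions, not the actual TESS data:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(t, t0, gamma, depth, baseline):
    """Inverted Lorentzian: a dip of the given depth centred at t0."""
    return baseline - depth * gamma**2 / ((t - t0)**2 + gamma**2)

# Synthetic single eclipse (all numbers illustrative, not TESS data).
rng = np.random.default_rng(1)
t = np.linspace(-0.05, 0.05, 200)          # days around the minimum
true_t0 = 0.0123
flux = lorentzian(t, true_t0, 0.01, 0.4, 1.0) + rng.normal(0.0, 0.002, t.size)

# Least-squares fit; start t0 at the faintest point for robustness.
p0 = [t[np.argmin(flux)], 0.01, 0.3, 1.0]
popt, pcov = curve_fit(lorentzian, t, flux, p0=p0)
perr = np.sqrt(np.diag(pcov))              # np.sqrt(np.diag(cov)) as in the text

t_min, t_min_err = popt[0], perr[0]        # time of minimum and its 1-sigma error
```

Repeating this fit for each detrended eclipse yields the list of observed minima times and their errors used in the O-C analysis.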
A total of 109 observational minima times of YY CrB have been recorded in the literature over a 31-year period; the collected data, with their uncertainties, are listed in the Appendix. All observational minima times were converted to BJD_TDB (https://astroutils.astronomy.osu.edu/time/hjd2bjd.html). The left panel of Figure 6 depicts the O-C curve of the minima. We present a new ephemeris for this target, Equation (8), obtained by fitting a linear function to the O-C curve of the primary minima: $Min.I=2458955.8598(\pm 1.1\times 10^{-4})+0.3765581(\pm 1.1\times 10^{-7})\times E$ (8) Figure 5: Left panel: primary and secondary O-C curves of minima for sector 24. Right panel: primary and secondary minima for sector 51 (primary O-C curve in black circles and secondary O-C curve in red circles). The O-C curve of minima calculated with the new ephemeris shows the same shape as the left panel of Figure 6. We fitted a quadratic function to the O-C curve in order to investigate the variations of the orbital period: ${T_{mid}}(E)={T_{0}}+PE+\frac{1}{2}\frac{{dP}}{{dt}}{E^{2}}$ (9) where ${T_{mid}}$ is the mid-eclipse time, ${T_{0}}$ the reference mid-eclipse time, $P$ the orbital period, and $E$ the epoch of the eclipse (Patra et al., 2017). The quadratic fit shows a decrease in the period, corresponding to the quadratic curve in the left panel of Figure 6. We determined the rate of period decline from the quadratic coefficient of the model, as in Equation (10): $\dot{P}=\frac{{2\times(-1.124\times{{10}^{-11}})}}{{0.3765545}}=-5.786\times{10^{-8}}\pm 9.965\times{10^{-9}}\frac{{day}}{{year}}$ (10) Considering ${M_{1}}=1.448{M_{\odot}}$ for the primary and ${M_{2}}=0.362{M_{\odot}}$ for the secondary, as calculated in this study, and using Equation (11) with mass conservation, the rate of mass exchange between the primary and secondary components was estimated. 
$\frac{{\dot{P}}}{P}=-3\frac{{{{\dot{M}}_{2}}({M_{1}}-{M_{2}})}}{{{M_{1}}{M_{2}}}}\Rightarrow{\dot{M}_{2}}=+2.472\times{10^{-8}}\pm 0.190\times{10^{-8}}\,{M_{\odot}}\,y{r^{-1}}$ (11) The positive sign indicates mass transfer from the more massive to the less massive component. The residuals of the quadratic fit show cyclic changes. We therefore investigated the Light Travel Time Effect (LTTE) as a possible cause of the O-C curve variations. A periodogram analysis of the residuals of the quadratic fit was performed with the Period04 software (Lenz & Breger, 2005); the dominant peak corresponds to a period of 7351.018 days. We then used the least-squares method to fit the Light Travel Time (LTT) formula (Irwin, 1952) to the O-C curve: ${(O-C)_{LTT}}=A\times\left({\frac{{1-{e^{2}}}}{{1+e\cos\upsilon}}\sin(\upsilon+\omega)+e\sin(\omega)}\right)$ (12) where $A=\frac{{{a_{12}}\sin i}}{c}$, ${a_{12}}$ is the semi-major axis of the relative orbit of the eclipsing system around the center of mass (in au), $i$ is the inclination of the third-body orbit, $e$ is the eccentricity of the supposed third body, $\omega$ is the longitude of periastron passage in the plane of the orbit, and $\upsilon$ is the true anomaly. To fit the LTT function to the residuals of the O-C curve, we have to convert the epoch to the true anomaly; Kepler’s equation provides the link between the eccentric anomaly and the observed eclipse time: ${E_{3}}-e\sin{E_{3}}=\frac{{2\pi}}{{{P_{3}}}}(t-{T_{0}})$ (13) Equation (13) was solved with the Newton-Raphson method for every eclipse time of minimum, and using Equation (14) the epochs were converted to the true anomaly. 
$\begin{array}[]{l}\tan\frac{\nu}{2}={(\frac{{1+e}}{{1-e}})^{1/2}}\tan\frac{{{E_{3}}}}{2}\\\ t={T_{0}}+epoch\times{P_{binary}}\end{array}$ (14) where ${P_{binary}}$, $t$, ${P_{3}}$, ${E_{3}}$, and ${T_{0}}$ are the orbital period of the binary, the time of the observed minimum, the period of the third body, the eccentric anomaly, and the time of periastron passage, respectively. Assuming a coplanar orbit ($i=90^{\circ}$), we determined the lower limit for the mass of the third body. The calculated parameters of the third body are listed in Table 4, and the corresponding curve is plotted in the right panel of Figure 6. Figure 6: Left panel: the O-C data points of the minima (black circles) and the polynomial fit (red line). Right panel: LTTE fit to the residuals of the polynomial fit; LTTE (black line), O-C data points after subtracting the polynomial fit (red circles), and the residuals of the LTT fit (blue circles). Table 3: Times of minima extracted from the TESS observations. All times of minima have been reduced to 2450000, and the error of each minimum is 0.0001. Min. | Epoch | O-C | Min. | Epoch | O-C | Min. | Epoch | O-C | Min. 
| Epoch | O-C ---|---|---|---|---|---|---|---|---|---|---|--- 8955.8597 | 17144 | 0.0337 | 8966.5927 | 17172.5 | 0.0348 | 8978.8326 | 17205 | 0.0368 | 9702.0170 | 19125.5 | 0.0482 8956.0504 | 17144.5 | 0.0360 | 8966.7802 | 17173 | 0.0340 | 8979.0187 | 17205.5 | 0.0345 | 9702.2047 | 19126 | 0.0351 8956.2360 | 17145 | 0.0336 | 8966.9693 | 17173.5 | 0.0348 | 8979.2092 | 17206 | 0.0368 | 9702.2047 | 19126 | 0.0476 8956.6126 | 17146 | 0.0334 | 8967.1567 | 17174 | 0.0340 | 8979.3952 | 17206.5 | 0.0344 | 9702.3936 | 19126.5 | 0.0482 8956.8034 | 17146.5 | 0.0359 | 8967.3459 | 17174.5 | 0.0349 | 8979.5859 | 17207 | 0.0369 | 9702.5813 | 19127 | 0.0476 8956.9892 | 17147 | 0.0334 | 8967.5334 | 17175 | 0.0341 | 8979.7718 | 17207.5 | 0.0345 | 9702.7702 | 19127.5 | 0.0482 8957.1799 | 17147.5 | 0.0359 | 8967.7225 | 17175.5 | 0.0349 | 8979.9626 | 17208 | 0.0370 | 9702.9578 | 19128 | 0.0475 8957.3657 | 17151 | 0.0002 | 8967.9099 | 17176 | 0.0340 | 8980.1483 | 17208.5 | 0.0344 | 9703.1468 | 19128.5 | 0.0482 8957.3657 | 17148 | 0.0334 | 8968.0990 | 17176.5 | 0.0349 | 8980.3392 | 17209 | 0.0371 | 9703.3343 | 19129 | 0.0476 8957.5565 | 17148.5 | 0.0359 | 8969.4165 | 17180 | 0.0344 | 8980.5249 | 17209.5 | 0.0344 | 9703.5233 | 19129.5 | 0.0482 8957.7421 | 17149 | 0.0333 | 8969.6050 | 17180.5 | 0.0348 | 8980.9014 | 17210.5 | 0.0345 | 9703.7111 | 19130 | 0.0477 8957.9331 | 17149.5 | 0.0359 | 8969.7931 | 17181 | 0.0345 | 8981.2781 | 17211.5 | 0.0346 | 9703.8998 | 19130.5 | 0.0482 8958.1187 | 17150 | 0.0333 | 8969.9817 | 17181.5 | 0.0348 | 8981.4691 | 17212 | 0.0373 | 9704.0876 | 19131 | 0.0478 8958.3096 | 17150.5 | 0.0359 | 8970.1697 | 17182 | 0.0346 | 9692.9793 | 19101.5 | 0.0478 | 9704.2763 | 19131.5 | 0.0481 8958.4952 | 17151 | 0.0332 | 8970.3582 | 17182.5 | 0.0348 | 9693.1681 | 19102 | 0.0482 | 9704.4642 | 19132 | 0.0478 8958.6861 | 17151.5 | 0.0358 | 8970.5464 | 17183 | 0.0347 | 9693.3556 | 19102.5 | 0.0475 | 9704.6531 | 19132.5 | 0.0484 8958.8717 | 17152 | 0.0332 | 8970.7348 | 17183.5 
| 0.0348 | 9693.5445 | 19103 | 0.0481 | 9705.9703 | 19136 | 0.0476 8959.0624 | 17152.5 | 0.0356 | 8970.9231 | 17184 | 0.0348 | 9693.7323 | 19103.5 | 0.0476 | 9706.1590 | 19136.5 | 0.048 8959.2482 | 17153 | 0.0332 | 8971.1114 | 17184.5 | 0.0349 | 9693.9211 | 19104 | 0.0482 | 9706.3469 | 19137 | 0.0478 8959.4390 | 17153.5 | 0.0356 | 8971.2997 | 17185 | 0.0349 | 9694.1092 | 19104.5 | 0.0479 | 9706.5354 | 19137.5 | 0.0479 8959.6249 | 17154 | 0.0333 | 8971.4880 | 17185.5 | 0.0349 | 9694.2977 | 19105 | 0.0482 | 9706.7234 | 19138 | 0.0477 8959.8155 | 17154.5 | 0.0355 | 8971.6763 | 17186 | 0.0349 | 9694.4855 | 19105.5 | 0.0478 | 9706.9121 | 19138.5 | 0.0481 8960.0014 | 17155 | 0.0332 | 8971.8645 | 17186.5 | 0.0348 | 9695.2398 | 19107.5 | 0.0489 | 9707.1000 | 19139 | 0.0477 8960.1919 | 17155.5 | 0.0354 | 8972.0531 | 17187 | 0.0351 | 9695.4272 | 19108 | 0.0481 | 9707.2889 | 19139.5 | 0.0483 8960.3778 | 17156 | 0.0331 | 8972.2411 | 17187.5 | 0.0348 | 9695.6154 | 19108.5 | 0.048 | 9707.4766 | 19140 | 0.0477 8960.5685 | 17156.5 | 0.0354 | 8972.4297 | 17188 | 0.0352 | 9695.8037 | 19109 | 0.0480 | 9708.2306 | 19142 | 0.0486 8960.7544 | 17157 | 0.0331 | 8972.6176 | 17188.5 | 0.0348 | 9695.9922 | 19109.5 | 0.0483 | 9709.5480 | 19145.5 | 0.0481 8960.9450 | 17157.5 | 0.0355 | 8972.8063 | 17189 | 0.0353 | 9696.1804 | 19110 | 0.0482 | 9710.4893 | 19148 | 0.048 8961.1310 | 17158 | 0.0331 | 8972.9941 | 17189.5 | 0.0348 | 9696.3688 | 19110.5 | 0.0483 | 9710.6776 | 19148.5 | 0.0480 8961.3215 | 17158.5 | 0.0354 | 8973.1830 | 17190 | 0.0354 | 9696.5567 | 19111 | 0.0479 | 9710.8660 | 19149 | 0.0481 8961.5077 | 17159 | 0.0333 | 8973.3707 | 17190.5 | 0.0348 | 9697.3101 | 19113 | 0.0482 | 9711.0543 | 19149.5 | 0.0481 8961.6980 | 17159.5 | 0.0354 | 8973.5597 | 17191 | 0.0355 | 9697.4983 | 19113.5 | 0.0481 | 9711.2425 | 19150 | 0.0481 8961.8842 | 17160 | 0.0332 | 8973.7471 | 17191.5 | 0.0347 | 9697.6865 | 19114 | 0.0480 | 9711.4307 | 19150.5 | 0.0481 8962.0746 | 17160.5 | 0.0354 | 8973.9364 | 
17192 | 0.0357 | 9697.8749 | 19114.5 | 0.0482 | 9711.6190 | 19151 | 0.048 8962.2608 | 17161 | 0.0331 | 8974.1236 | 17192.5 | 0.0346 | 9697.8749 | 19114.5 | 0.0482 | 9711.8073 | 19151.5 | 0.0481 8962.4511 | 17161.5 | 0.0353 | 8974.3130 | 17193 | 0.0375 | 9698.0631 | 19115 | 0.0481 | 9711.9956 | 19152 | 0.0481 8962.6374 | 17162 | 0.0333 | 8974.5001 | 17193.5 | 0.0345 | 9698.2516 | 19115.5 | 0.0483 | 9712.1838 | 19152.5 | 0.048 8962.8276 | 17162.5 | 0.0353 | 8975.0664 | 17195 | 0.0360 | 9698.4395 | 19116 | 0.0479 | 9712.3722 | 19153 | 0.0482 8963.0140 | 17163 | 0.0334 | 8975.2533 | 17195.5 | 0.0347 | 9698.6281 | 19116.5 | 0.0482 | 9712.5602 | 19153.5 | 0.0479 8963.2041 | 17163.5 | 0.0352 | 8975.4430 | 17196 | 0.0360 | 9698.8160 | 19117 | 0.0478 | 9712.7487 | 19154 | 0.0481 8963.3905 | 17164 | 0.0333 | 8975.6298 | 17196.5 | 0.0346 | 9699.0047 | 19117.5 | 0.0483 | 9712.9368 | 19154.5 | 0.0479 8963.5807 | 17164.5 | 0.0351 | 8975.8196 | 17197 | 0.0361 | 9699.1926 | 19118 | 0.0479 | 9713.1252 | 19155 | 0.0481 8963.7673 | 17165 | 0.0335 | 8976.0063 | 17197.5 | 0.0346 | 9699.3812 | 19118.5 | 0.0483 | 9713.5019 | 19156 | 0.0481 8963.9570 | 17165.5 | 0.035 | 8976.1962 | 17198 | 0.0362 | 9699.5691 | 19119 | 0.0479 | 9713.6898 | 19156.5 | 0.0478 8964.1438 | 17166 | 0.0335 | 8976.3828 | 17198.5 | 0.0346 | 9699.7578 | 19119.5 | 0.0482 | 9713.8784 | 19157 | 0.0481 8964.3335 | 17166.5 | 0.035 | 8976.5729 | 17199 | 0.0363 | 9699.9456 | 19120 | 0.0479 | 9714.0664 | 19157.5 | 0.0478 8964.5204 | 17167 | 0.0336 | 8976.7594 | 17199.5 | 0.0345 | 9700.1344 | 19120.5 | 0.0483 | 9714.2550 | 19158 | 0.0482 8964.7101 | 17167.5 | 0.0349 | 8976.9495 | 17200 | 0.0364 | 9700.3221 | 19121 | 0.0478 | 9714.4429 | 19158.5 | 0.0477 8964.8970 | 17168 | 0.0336 | 8977.1360 | 17200.5 | 0.0345 | 9700.3222 | 19122 | 0.004 | 9714.6315 | 19159 | 0.0481 8965.0865 | 17168.5 | 0.0349 | 8977.3260 | 17201 | 0.0009 | 9700.5108 | 19121.5 | 0.0482 | 9714.8195 | 19159.5 | 0.0478 8965.2736 | 17169 | 0.0337 | 8977.5125 | 
17201.5 | 0.0345 | 9700.6986 | 19122 | 0.0477 | 9715.0082 | 19160 | 0.0482 8965.4631 | 17169.5 | 0.0349 | 8977.7028 | 17202 | 0.0364 | 9700.8874 | 19122.5 | 0.0482 | 9715.1961 | 19160.5 | 0.0478 8965.6503 | 17170 | 0.0338 | 8977.8891 | 17202.5 | 0.0345 | 9701.0752 | 19123 | 0.0478 | 9715.1961 | 19160.5 | 0.0478 8965.8397 | 17170.5 | 0.0349 | 8978.0794 | 17203 | 0.0366 | 9701.2639 | 19123.5 | 0.0482 | 9715.3847 | 19161 | 0.0482 8966.0269 | 17171 | 0.0338 | 8978.2656 | 17203.5 | 0.0345 | 9701.4517 | 19124 | 0.0478 | 9715.5727 | 19161.5 | 0.0479 8966.2162 | 17171.5 | 0.0349 | 8978.4560 | 17204 | 0.0367 | 9701.6405 | 19124.5 | 0.0481 | 9715.7612 | 19162 | 0.0482 8966.4035 | 17172 | 0.0339 | 8978.6422 | 17204.5 | 0.0345 | 9701.8282 | 19125 | 0.0476 | | | Table 4: Parameters of the third body. Parameters of third body | | Value ---|---|--- Eccentricity ($e$) | | $0.689\pm 0.005$ The longitude of periastron passage ($\omega$) | | $57.6\pm 1.8$ Period (days) | | 7351.018 Amplitude (minutes) | | $17.28\pm 1.44$ The time of periastron passage ($T_{0}$) | | 2460201 Projected semi-major axis$\times\sin i$ | | $2.11\pm 0.17$ Mass function (${f_{m}}$) | | $0.023\pm 0.006$ ${M_{3}}\sin i$ ($i=90^{\circ}$, ${M_{\odot}}$) | | 0.498 $\sum{{{(O-C)}^{2}}}$ | | 0.001 Assuming the third body is a main-sequence star, this mass corresponds to the M1V spectral type with a luminosity of $0.041{L_{\odot}}$ (http://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt), or $0.016$ of the total luminosity, which does not agree with the value of $l_{3}$ determined in Section 3. We are not certain that the $l_{3}$ obtained from the TESS light curve analysis represents a valid detection of flux from a third body in the system, despite our modeling of the third-body effect, especially as there is no precise radial-velocity curve. 
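The epoch-to-true-anomaly conversion of Equations (13)-(14) and the LTT term of Equation (12) can be sketched as follows. The orbital parameters plugged in are the Table 4 values; the function names, the mean-anomaly wrapping, and the single test epoch are our own choices:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M (Eq. 13) by Newton-Raphson."""
    M = math.remainder(M, 2.0 * math.pi)   # wrap the mean anomaly into [-pi, pi]
    E = M                                  # starting guess (adequate for moderate e)
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def ltt_term(t, P3, T0, e, omega, A):
    """Light-travel-time delay of Eq. (12), in days, at eclipse time t."""
    M = 2.0 * math.pi * (t - T0) / P3      # right-hand side of Eq. (13)
    E3 = solve_kepler(M, e)                # eccentric anomaly
    # Eq. (14): eccentric anomaly -> true anomaly
    nu = 2.0 * math.atan(math.sqrt((1.0 + e) / (1.0 - e)) * math.tan(E3 / 2.0))
    return A * ((1.0 - e**2) / (1.0 + e * math.cos(nu)) * math.sin(nu + omega)
                + e * math.sin(omega))

# Third-body parameters from Table 4; amplitude 17.28 min converted to days.
e, omega = 0.689, math.radians(57.6)
P3, T0 = 7351.018, 2460201.0
A = 17.28 / (24.0 * 60.0)
delta = ltt_term(2458955.86, P3, T0, e, omega, A)   # delay at one arbitrary epoch
```

Evaluating `ltt_term` at each observed minimum and least-squares fitting its parameters against the quadratic-fit residuals reproduces the procedure described above.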
We suspect that this value is most likely due to systematic errors in the background flux level of the TESS images and/or an underestimation of the contamination of the photometric aperture by other stars in the field. Therefore, we investigated the Applegate effect as a plausible explanation for the cyclic fluctuations in the O-C curve. We calculated the observed relative change of the orbital period over one modulation cycle using the previously obtained modulation period and the O-C amplitude from the third-body orbit modeled in the previous section: $\frac{{\Delta P}}{P}=2\pi\frac{{(O-C)}}{{{P_{mod}}}}=1.025\times{10^{-5}}$ (Applegate, 1992). This value of $\frac{{\Delta P}}{P}$ suggests that the Applegate effect can explain the cyclic changes in the O-C curve of the minima. This system shows unequal maxima, known as the O’Connell effect (O’Connell, 1951), due to the presence of hot spots (Wilsey & Beaky, 2009). This difference implies that one hemisphere of a component emits a different amount of radiation than the other. Such systems have active chromospheres because of the existence of large spots (Knote et al., 2022). Starspots can alter the depths of the minima and have a clear effect on the eclipse light curve (Han, Muirhead, & Swift, 2019). To investigate the effect of the spots on the light curve over both sectors of the TESS observations, we calculated the difference between the depths of the primary and secondary minima, considering the relative fluxes at phases 0 and 0.5 for every complete individual light curve. The resulting curves are displayed in Figure 7. YY CrB shows Depth I - Depth II values as large as about $10\%$ of the light variation amplitude, which is possible because the migration and evolution of spots on the surfaces of the two components cause cyclic magnetic activity. 
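The Applegate estimate quoted above is a one-line computation; as a quick check, using the modulation period from the periodogram and the Table 4 O-C amplitude (17.28 min converted to days):

```python
import math

P_mod = 7351.018                   # modulation period from the periodogram (days)
oc_amp = 17.28 / (24.0 * 60.0)     # O-C amplitude: 17.28 min in days

# Applegate (1992): relative period change over one modulation cycle.
dP_over_P = 2.0 * math.pi * oc_amp / P_mod
print(f"dP/P = {dP_over_P:.3e}")   # ~1.026e-05, consistent with the quoted 1.025e-5
```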
Figure 7: Right panel: the difference between the depths of the primary and secondary minima over sector 24 of the TESS observations. Left panel: the same difference over sector 51. ## 6 Discussion and conclusion Based on the estimated mass ratio, fill-out factor, and inclination angle, YY CrB is an overcontact binary with a decreasing orbital period. Essam et al. (2010) calculated a period decrease rate of $1.194\times{10^{-6}}\frac{{day}}{{year}}$. Yu, Xiang, & Xiao (2015) considered all of the minima times published until 2013 and calculated a secular period decrease with a rate of $6.727\times{10^{-7}}\frac{{day}}{{year}}$. In this study, the period decrease rate of $5.786\times{10^{-8}}\frac{{day}}{{year}}$ indicates that the rate of period change has itself decreased. When mass conservation is taken into account, the mass transfer rate from the Roche-lobe-filling primary component to the secondary component is ${\dot{M}_{2}}=2.472\times{10^{-8}}{M_{\odot}}y{r^{-1}}$. Comparing the mass transfer value of this study with the results stated in Yu, Xiang, & Xiao (2015), it is clear that the mass transfer rate has decreased; the distance between the two components is growing while the fill-out factor decreases (compare the fill-out factor values in Table 1). This target may evolve into a shallow-contact binary via the thermal relaxation oscillation (TRO) model (Flannery 1976; Robertson & Eggleton 1977) and ultimately reach a broken-contact phase (Lucy 1976). The mass ratio of the components, which is related to the mass transfer, is the crucial parameter in the evolution of close binary stars. Table 5 lists contact binaries with mass ratios $<0.25$. In order to illustrate the evolutionary status of the YY CrB system, we provide the mass-luminosity ($M-L$) diagram displayed in Figure 8. 
The Zero-Age Main Sequence (ZAMS) and the Terminal-Age Main Sequence (TAMS) are plotted along with the selected low mass ratio contact binaries. It is obvious that the more massive primary components lie near the ZAMS line, meaning they are unevolved or only slightly evolved. The less massive secondary components, however, have evolved away from the main sequence and are over-luminous compared to main-sequence stars of the same mass. In addition, the orbital angular momentum of YY CrB is $\log J_{0}=51.585\pm 0.067$. The $logJ_{0}-logM$ diagram shows the position of the system (Figure 9), which lies in the region of contact binary systems. Table 5: Absolute parameters for low mass ratio contact binaries. System | $q$ | ${M_{1}}({M_{\odot}})$ | ${M_{2}}({M_{\odot}})$ | ${R_{1}}({R_{\odot}})$ | ${R_{2}}({R_{\odot}})$ | ${L_{1}}({L_{\odot}})$ | ${L_{2}}({L_{\odot}})$ | Reference ---|---|---|---|---|---|---|---|--- V429 Cam | 0.206 | 1.36(12) | 0.28(3) | 1.55(3) | 0.78(2) | 3.56(9) | 0.85(2) | Li et al. (2021) V830 Cep | 0.23 | 0.84(5) | 0.19(1) | 0.91(1) | 0.47(1) | 0.98(1) | 0.29(1) | Li et al. (2021) FP Boo | 0.096 | 1.614(52) | 0.154(21) | 2.310(25) | 0.774(8) | 11.193(99) | 0.920(13) | Gazeas et al. (2006) DN Boo | 0.103 | 1.428(39) | 0.148(6) | 1.710(67) | 0.670(110) | 3.750(280) | 0.560(170) | Şenavcı et al. (2008) FG Hya | 0.112 | 1.444(25) | 0.161(7) | 1.405(9) | 0.591(8) | 2.158(86) | 0.412(17) | Qian & Yang (2005) CK Boo | 0.0108 | 1.442(14) | 0.154(2) | 1.453(3) | 0.577(10) | 2.74(1) | 0.47(2) | Kalci & Derman (2005) GR Vir | 0.122 | 1.37(16) | 0.17(6) | 1.42(7) | 0.61(4) | 2.87(28) | 0.48(6) | Qian & Yang (2004) CSS J234807.2$+$193717 | 0.176 | 1.19(4) | 0.21(3) | 1.36 (2) | 0.66 (1) | 1.45(24) | 0.42(5) | Christopoulou et al. (2022) J170307 | 0.092 | 1.134(253) | 0.105(24) | 1.204(120) | 0.436(48) | 1.874(572) | 0.271(72) | Liu et al. 
(2023) J1641000 | 0.095 | 1.402(287) | 0.133(28) | 1.580(144) | 0.577(58) | 3.912(1.109) | 0.512(142) | Liu et al. (2023) J223837 | 0.093 | 1.541(306) | 0.144(30) | 1.784(159) | 0.646(64) | 5.463(1.534) | 0.704(192) | Liu et al. (2023) CSS J222607.8$+$062107 | 0.221 | 1.49(3) | 0.33(13) | 1.51(5) | 0.81(2) | 3.35(65) | 1.15(24) | Sun et al. (2020) CSS J012559.7$+$203404 | 0.231 | 1.38(3) | 0.32(12) | 1.42(4) | 0.77(2) | 2.46(56) | 0.81(18) | Sun et al. (2020) CSS J153855.6$+$042903 | 0.187 | 1.44(5) | 0.27(12) | 1.37(4) | 0.66(2) | 2.94(96) | 0.30(10) | Sun et al. (2020) CSS J141923.2$-$013522 | 0.168 | 1.31(5) | 0.22(11) | 1.23(4) | 0.57(2) | 1.97(68) | 0.33(11) | Sun et al. (2020) CSS J130111.2$-$132012 | 0.108 | 1.38(3) | 0.15(12) | 1.49(5) | 0.61(2) | 2.49(57) | 0.40(9) | Sun et al. (2020) CSS J165813.7$+$390911 | 0.183 | 1.09(3) | 0.20(9) | 1.05(3) | 0.49(1) | 0.92(24) | 0.24(6) | Sun et al. (2020) V870 Ara | 0.082 | 1.546(54) | 0.127(37) | 1.64(6) | 0.63(5) | 2.64(17) | 0.42(6) | Poro et al. (2021) TYC 6995-813-1 | 0.111 | 1.23(1) | 0.135(1) | 1.46(1) | 0.60(1) | 2.293(4) | 0.58(2) | Wadhwa et al. (2021) NSVS 13602901 | 0.171 | 1.19(2) | 0.203(10) | 1.69(1) | 0.79(1) | 2.05(4) | 0.58(2) | Wadhwa et al. (2021) NSVS 5029961 | 0.151 | 1.872(468) | 0.284(72) | 1.573(119) | 0.680(53) | 3.403(14) | 0.610(26) | Zheng et al. (2021) CSS J022914.4$+$044340 | 0.201 | 1.44(25) | 0.29(5) | 1.26(8) | 0.65(4) | 1.718(191) | 0.416(50) | Liu & Li (2021) HV Aqr | 0.15 | 1.240(28) | 0.186(17) | 1.456(12) | 0.601(5) | 3.326(213) | 0.638(44) | Gazeas et al. (2021) ZZ PsA | 0.078 | 1.213(8) | 0.095(1) | 1.422(4) | 0.559(4) | 2.20(4) | 0.63(4) | Wadhwa et al. (2021) NSVS 1926064 | 0.160 | 1.558(38) | 0.249(6) | 1.605(13) | 0.755(42) | 3.91(28) | 0.641(33) | Kjurkchieva et al. (2020) Figure 8: $M-L$ diagram of selected contact binaries with low mass ratio. The primary and secondary components of YY CrB are plotted in red and blue colors, respectively. 
Figure 9: The location of YY CrB on the $logJ_{0}-logM$ diagram. The quadratic line is based on the study by Eker et al. (2006). Following Yu, Xiang, & Xiao (2015), and motivated by the periodic changes in the residuals of the quadratic fit to the O-C curve, the possible existence of a third body was investigated. Fitting the light-time function suggested a body with a minimum mass of $0.498{M_{\odot}}$. This corresponds to 0.016 of the total brightness, which differs from the value of $l_{3}$ determined in Section 3. We explored the Applegate effect as a possible explanation for the fluctuations in the O-C curve, because there is no precise radial-velocity curve. YY CrB is a contact binary with a mass ratio of less than $0.3$, so it is necessary to examine its stability with Hut’s criterion (Hut, 1980). We used Equation (15) to calculate the ratio of the spin angular momentum to the orbital angular momentum (Yang & Qian, 2015): $\frac{{{J_{s}}}}{{{J_{o}}}}=\frac{{q+1}}{q}[{({k_{1}}{r_{1}})^{2}}+{({k_{2}}{r_{2}})^{2}}q]$ (15) where $r_{1}$ and $r_{2}$ are the relative radii of the primary and secondary components and ${k^{2}}_{1,2}=0.06$ (Li & Zhang, 2006) are the squared dimensionless gyration radii. The calculated value, $\frac{{{J_{s}}}}{{{J_{o}}}}=0.087$, is below the threshold value; therefore, this system is stable. This target shows a secular period decrease, which is attributed to mass transfer. According to the existing data and the analyses performed in this study, the existence of a third body is unlikely for this system, and detailed spectroscopic and photometric observations over a longer time span are required for a definitive determination. ## Acknowledgements This manuscript has made use of data from the TESS mission. Funding for the TESS mission is provided by the NASA Science Mission Directorate. This research has made use of the SIMBAD and VIZIER databases, operated at CDS, Strasbourg, France. 
The times of minima data from the Variable Star Observers League in Japan (VSOLJ) website proved invaluable for the assessment of potential period changes of this variable star. The authors would like to thank Marco Brentel for his help. We are grateful to Ehsan Paki from the BSN project (https://bsnp.info/) for providing Figure 4 of this manuscript, which also shows the color-temperature scale. ## ORCID iDs Somayeh Soomandar: 0000-0002-9520-9573 Atila Poro: 0000-0002-0196-9732 ## Appendix A Available Minima Times The appendix table lists the minima times with their errors in the first column, followed by the epochs, the O-C values, and the references. Table 1: Available mid-eclipse times of the YY CrB system. All minima have been reduced to 2450000. Min.($BJD_{TDB}$) | Epoch | O-C | Reference | Min.($BJD_{TDB}$) | Epoch | O-C | Reference ---|---|---|---|---|---|---|--- 955.8695(12) | -4101 | -0.0562 | Pribulla et al. (2001) | 4308.3899(2) | 4802 | -0.0005 | Parimucha et al. (2009) 955.8718(6) | -4101 | -0.0539 | Rucinski, et al. (2000) | 4564.2595 | 5481.5 | 0.0003 | Nagai (2009) 1318.4993 | -3138 | -0.04847 | Essam et al. (2010) | 4604.36215(1) | 5588 | -0.0001 | Yilmaz et al. (2009) 1318.5001 | -3138 | -0.0475 | Essam et al. (2010) | 4605.4917(2) | 5591 | -0.0002 | Yilmaz et al. (2009) 1361.4275(3) | -3024 | -0.0475 | Keskin et al. (2000) | 4628.4657(2) | 5652 | 0.0039 | Parimucha et al. (2009) 1361.4275(2) | -3024 | -0.0474 | Keskin et al. (2000) | 4632.4144(3) | 5662.5 | -0.0012 | Parimucha et al. (2009) 1368.3965(4) | -3005.5 | -0.0447 | Keskin et al. (2000) | 4648.4191(30) | 5706 | 0.0000 | Parimucha et al. (2009) 1368.3966(4) | -3005.5 | -0.0446 | Keskin et al. (2000) | 4648.4201(10) | 5705 | 0.0009 | Hubscher et al. (2009) 1370.4659(6) | -3000 | -0.0463 | Keskin et al. (2000) | 4688.3352(3) | 5811 | 0.0013 | Yilmaz et al. (2009) 1372.3494(3) | -2995 | -0.0457 | Keskin et al. 
(2000) | 4931.4009(1) | 6456.5 | 0.0011 | Hubscher et al. (2011) 1668.3359 | -2209 | -0.0309 | Essam et al. (2010) | 4931.5883(1) | 6457 | 0.0001 | Hubscher et al. (2011) 1669.4602 | -2206 | -0.0363 | Essam et al. (2010) | 4958.5136(20) | 6528.5 | 0.0018 | Hubscher et al. (2011) 1670.3976 | -2203.5 | -0.0403 | Essam et al. (2010) | 4983.7401(3) | 6595.5 | -0.0009 | Diethelm (2009) 1670.3984 | -2203.5 | -0.0395 | Essam et al. (2010) | 5017.44086(3) | 6685 | -0.0017 | Parimucha et al. (2009) 1674.3548 | -2193 | -0.0369 | Karska & Maciejewski (2003) | 5213.6297(3) | 7206 | 0.0022 | Parimucha et al. (2011) 1692.4299 | -2144 | -0.0365 | Essam et al. (2010) | 5219.6532(1) | 7222 | 0.0009 | Parimucha et al. (2011) 1692.4319 | -2145 | -0.0345 | Essam et al. (2010) | 5261.8279(1) | 7334 | 0.0014 | Dvorak (2011) 1975.6050 | -1392 | -0.0304 | Pribulla, et al. (2003) | 5264.4630(3) | 7341 | 0.0006 | Parimucha et al. (2011) 1975.6061(1) | -1392 | -0.0293 | Pribulla et al. (2001) | 5311.3444(2) | 7465.5 | 0.001 | Parimucha et al. (2011) 1975.6064(1) | -1393 | -0.0290 | Pribulla et al. (2001) | 5311.5317(2) | 7466 | 0.000 | Parimucha et al. (2011) 1975.6108(7) | -1392 | -0.0246 | Pribulla & Vanko (2002) | 5351.4463(1) | 7572 | -0.0002 | Hubscher, et al. (2012) 2029.4398(7) | -1250 | -0.0426 | Pribulla & Vanko (2002) | 5354.4587(2) | 7580 | -0.0002 | Parimucha et al. (2011) 2031.5168(7) | -1244.5 | -0.0369 | Pribulla & Vanko (2002)) | 5420.3573(3) | 7755 | 0.0137 | Parimucha et al. (2011)) 2045.4589 | -1207.5 | -0.0273 | Essam et al. (2010) | 5652.8828(30) | 8372.5 | 0.0045 | Diethelm (2011) 2060.3320 | -1167 | -0.0281 | Essam et al. (2010) | 5665.4931(3) | 8406 | 0.0088 | Parimucha et al. (2013) 2060.3352 | -1167 | -0.0249 | Essam et al. (2010) | 5705.4093(23) | 8512 | 0.0016 | Hubscher, et al. (2012) 2400.1804(2) | -265.5 | -0.0202 | Pribulla et al. (2002) | 6011.5475(2) | 9325 | 0.0051 | Parimucha et al. 
(2013) 2400.3660 | -264 | -0.0227 | Karska & Maciejewski (2003) | 55987.6371(2) | 9261.5 | 0.0018 | Parimucha et al. (2013) 2469.4699(4) | -81.5 | -0.017 | Demircan et al. (2003) | 5987.6371(2) | 9261.5 | 0.0018 | Parimucha et al. (2013) 2472.2898(2) | -74 | -0.0209 | Petropoulou et al. (2015) | 5992.5319(2) | 9274.5 | 0.0014 | Parimucha et al. (2013) 2473.4197(2) | -71 | -0.0206 | Petropoulou et al. (2015) | 6005.5237(2) | 9309 | 0.0021 | Parimucha et al. (2013) 2473.4247(4) | -71 | -0.0156 | Demircan et al. (2003) | 6005.5237(2) | 9309 | 0.0021 | Parimucha et al. (2013) 2500.1757 | 0 | 0.000 | Kreiner (2004) | 6149.3679(2) | 9691 | 0.0026 | Parimucha et al. (2013) 2719.3200 | 582 | -0.0104 | Nagai (2004) | 2456199.26169(2) | 9823.5 | 0.0028 | Parimucha et al. (2013) 2764.5082(23) | 702 | -0.0088 | Hubscher et al. (2005) | 6742.4439(14) | 11266 | 0.00513 | Hubscher & Lehmann (2015) 2786.3500(2) | 761 | -0.0071 | Ak & Filiz (2003) | 6749.4115(5) | 11284.5 | 0.0066 | Hubscher & Lehmann (2015) 2793.5038(2) | 779 | -0.0079 | Ak & Filiz (2003) | 6011.5483(2) | 9325 | 0.0018 | Parimucha et al. (2013) 7074.9466(1) | 12149 | 0.010 | Nelson (2016) | 6754.4970(35) | 11298 | 0.0086 | Hubscher & Lehmann (2015) 2793.5042(21) | 779 | -0.0075 | Hubscher et al. (2005) | 7074.9466(1) | 12149 | 0.010 | Nelson (2016) 2814.4011(1) | 834.5 | -0.0093 | Selam et al. (2003) | 7084.1734 | 12173.5 | 0.0114 | Nagai (2016) 3151.0470 | 1728.5 | -0.0032 | Nagai (2005) | 7123.5238(28) | 12278 | -0.011 | Hubscher (2017)) 3458.8835(2) | 2546 | 0.000 | Dvorak (2006) | 6749.6002(8) | 11285 | 0.0068 | Hubscher & Lehmann (2015) 3466.41483(1) | 2566 | 0.000 | Pribulla et al. (2005) | 7513.0723 | 13312.5 | 0.0147 | Nagai (2017) 4201.4509(16) | 4518 | 0.0020 | Hubscher (2007) | 7489.5378(18) | 13250 | 0.015 | Hubscher (2017) 4201.6350(17) | 4518.5 | -0.0022 | Hubscher (2007) | 9024.3943 | 17326 | 0.0353 | Nagai (2021) 4245.5055(9) | 4635 | -0.0003 | Brát, et al. 
(2007) | 9037.3884 | 17360.5 | 0.0382 | Nagai (2021) 4224.4161(2) | 4579 | -0.0028 | Parimucha et al. (2009) | 9269.5368(10) | 17977 | 0.0408 | Paschke (2021) 4300.4828(1) | 4781 | 0.0000 | Parimucha et al. (2009) | 9328.4664(7) | 18133.5 | 0.0398 | Lienhard (2022) 4500.6209(3) | 5312.5 | -0.0006 | Parimucha et al. (2009) | 9605.99382(1) | 18870.5 | 0.0464 | Nelson & Alton (2022) 4504.5751(8) | 5323 | -0.0002 | Parimucha et al. (2009) | 10064.4578(30) | 20088 | 0.0553 | Paschke (2023) 14513.6131(6) | 5347 | 0.0005 | Parimucha et al. (2009) | | | | 14560.1297 | 5470.5 | 0.0126 | Nagai (2009) | | | | ## References * Ak & Filiz (2003) Ak H., Filiz N., 2003,Photoelectric Minimum Times of Some Eclipsing Binary Stars, IBVS, 5462, 1 * Applegate (1992) Applegate J. H., 1992, A Mechanism for Orbital Period Modulation in Close Binaries, ApJ, 385, 621. doi:10.1086/170967 * Balaji et al. (2015) Balaji B., Croll B., Levine A. M., Rappaport S., 2015, Tracking the stellar longitudes of starspots in short-period Kepler binaries, MNRAS, 448, 429. doi:10.1093/mnras/stv031 * Brát, et al. (2007) Brát L., Zejda M., Svoboda P., 2007, B.R.N.O. Contributions 34, OEJV, 0074, 1 * Castelli & Kurucz (2004) Castelli F., Kurucz R. L., 2004, Is missing Fe I opacity in stellar atmospheres a significant problem?, A&A, 419, 725. doi:10.1051/0004-6361:20040079 * Conroy et al. (2020) Conroy K. E., Kochoska A., Hey D., Pablo H., Hambleton K. M., Jones D., Giammarco J., et al., 2020, Physics of Eclipsing Binaries. V. General Framework for Solving the Inverse Problem, ApJS, 250, 34. doi:10.3847/1538-4365/abb4e2 * Christopoulou et al. (2022) Christopoulou P.-E., Lalounta E., Papageorgiou A., Ferreira Lopes C. E., Catelan M., Drake A. J., 2022,New low mass ratio contact binaries in the Catalina Sky Survey, MNRAS, 512, 1244. doi:10.1093/mnras/stac534 * Demircan et al. 
(2003) Demircan O., Erdem A., Ozdemir S., Cicek C., Bulut I., Soydugan F., Soydugan E., et al., 2003, The First Eclipsing Binary Observations at the Ulupinar Astrophysics Observatory, IBVS, 5364, 1 * Diethelm (2009) Diethelm R., 2009, Timings of Minima of Eclipsing Binaries, IBVS, 5894, 1 * Diethelm (2011) Diethelm R., 2011, Timings of Minima of Eclipsing Binaries, IBVS, 5960, 1 * Dvorak (2006) Dvorak S. W., 2006, Times of Minima for Neglected Eclipsing Binaries in 2005, IBVS, 5677, 1 * Dvorak (2011) Dvorak S. W., 2011, Times of Minima for Eclipsing Binaries 2010, IBVS, 5974, 1 * Eker et al. (2006) Eker Z., Demircan O., Bilir S., Karataş Y., 2006, Dynamical evolution of active detached binaries on the logJo-logM diagram and contact binary formation, MNRAS, 373, 1483. doi:10.1111/j.1365-2966.2006.11073.x * ESA (1997) ESA, 1997, The HIPPARCOS and TYCHO catalogues. Astrometric and photometric star catalogues derived from the ESA HIPPARCOS Space Astrometry Mission, ESASP, 1200 * Essam et al. (2010) Essam A., Saad S. M., Nouh M. I., Dumitrescu A., El-Khateeb M. M., Haroon A., 2010, Photometric and spectroscopic analysis of YY CrB, NewA, 15, 227. doi:10.1016/j.newast.2009.07.006 * Flannery (1976) Flannery B. P., 1976, A Cyclic Thermal Instability in Contact Binary Stars, ApJ, 205, 217. doi:10.1086/154266 * Flower (1996) Flower P. J., 1996,Transformations from Theoretical Hertzsprung-Russell Diagrams to Color-Magnitude Diagrams: Effective Temperatures, B-V Colors, and Bolometric Corrections, ApJ, 469, 355. doi:10.1086/177785 * Gazeas et al. (2005) Gazeas K. D., Baran A., Niarchos P., Zola S., Kreiner J. M., Ogloza W., Rucinski S. M., et al., 2005, Physical Parameters of Components in Close Binary Systems: IV, AcA, 55, 123 * Gazeas et al. (2006) Gazeas K. D., Niarchos P. G., Zola S., Kreiner J. M., Rucinski S. M., 2006, Physical Parameters of Components in Close Binary Systems: VI AcA, 56, 127. doi:10.48550/arXiv.0903.1364 * Green et al. (2019) Green G. 
M., Schlafly E., Zucker C., Speagle J. S., Finkbeiner D., 2019, A 3D Dust Map Based on Gaia, Pan-STARRS 1, and 2MASS, ApJ, 887, 93. doi:10.3847/1538-4357/ab5362 * Han, Muirhead, & Swift (2019) Han E., Muirhead P. S., Swift J. J., 2019, Magnetic Inflation and Stellar Mass. IV. Four Low-mass Kepler Eclipsing Binaries Consistent with Non-magnetic Stellar Evolutionary Models, AJ, 158, 111. doi:10.3847/1538-3881/ab2ed7 * Hubscher & Lehmann (2015) Hubscher J., Lehmann P. B., 2015, BAV-Results of observations - Photoelectric Minima of Selected Eclipsing Binaries and Maxima of Pulsating Stars, IBVS, 6149, 1 * Hubscher et al. (2005) Hubscher J., Paschke A., Walter F., 2005, Photoelectric Minima of Selected Eclipsing Binaries and Maxima of Pulsating Stars, IBVS, 5657, 1 * Hubscher et al. (2006) Hubscher J., Paschke A., Walter F., 2006, Photoelectric Minima of Selected Eclipsing Binaries and Maxima of Pulsating Stars, IBVS, 5731, 1 * Hubscher et al. (2009) Hubscher J., Steinbach H.-M., Walter F., 2009, BAV-Results of observations - Photoelectric Minima of Selected Eclipsing Binaries and Maxima of Pulsating Stars, IBVS, 5874, 1 * Hubscher et al. (2011) Hubscher J., Lehmann P. B., Monninger G., Steinbach H.-M., Walter F., 2011,VizieR Online Data Catalog: Minima and maxima of 452 variables, yCatp, 0185, J/other/IBVS/5918 * Hubscher, et al. (2012) Hubscher J., Lehmann P. B., Walter F., 2012,BAV-Results of observations - Photoelectric Minima of Selected Eclipsing Binaries and Maxima of Pulsating Stars, IBVS, 6010, 1 * Hubscher (2007) Hubscher J., 2007, Photoelectric Minima of Selected Eclipsing Binaries and Maxima of Pulsating Stars, IBVS, 5802, 1 * Hubscher (2017) Hubscher J., 2017, BAV-Results of observations - Photoelectric Minima of Selected Eclipsing Binaries and Maxima of Pulsating Stars, IBVS, 6196, 1. doi:10.22444/IBVS.6196 * Hut (1980) Hut P., 1980, Stability of tidal equilibrium, A&A, 92, 167 * Irwin (1952) Irwin J. 
B., 1952,The Determination of a Light-Time Orbit, ApJ, 116, 211. doi:10.1086/145604 * Jenkins et al. (2016) Jenkins J. M., Twicken J. D., McCauliff S., Campbell J., Sanderfer D., Lung D., Mansouri-Samani M., et al., 2016, The TESS science processing operations center, SPIE, 9913, 99133E. doi:10.1117/12.2233418 * Jenkins (2015) Jenkins J. M., 2015, Overview of the TESS Science Pipeline, ESS * Kalci & Derman (2005) Kalci R., Derman E., 2005,CK Bootis: a W UMa system with a small mass ratio, AN, 326, 342. doi:10.1002/asna.200510361 * Karska & Maciejewski (2003) Karska A., Maciejewski G., 2003, CCD Times of Minima of Some Eclipsing Binaries in 2002, IBVS, 5380, 1 * Keskin et al. (2000) Keskin V., Yasarsoy B., Sipahi E., 2000, Times of Minima of Some Eclipsing Binaries, IBVS, 4855, 1 * Knote et al. (2022) Knote M. F., Caballero-Nieves S. M., Gokhale V., Johnston K. B., Perlman E. S., 2022, Characteristics of Kepler Eclipsing Binaries Displaying a Significant O’Connell Effect, ApJS, 262, 10. doi:10.3847/1538-4365/ac770f * Kreiner (2004) Kreiner J. M., 2004, Up-to-Date Linear Elements of Eclipsing Binaries, AcA, 54, 207 * Lenz & Breger (2005) Lenz P., Breger M., 2005, Period04 User Guide, CoAst, 146, 53. doi:10.1553/cia146s53 * Li & Zhang (2006) Li L., Zhang F., 2006, The dynamical stability of W Ursae Majoris-type systems, MNRAS, 369, 2001. doi:10.1111/j.1365-2966.2006.10462.x * Li et al. (2021) Li K., Xia Q.-Q., Kim C.-H., Gao X., Hu S.-M., Guo D.-F., Gao D.-Y., et al., 2021,Photometric Study and Absolute Parameter Estimation of Six Totally Eclipsing Contact Binaries, AJ, 162, 13. doi:10.3847/1538-3881/abfc53 * Lightkurve Collaboration et al. (2018) Lightkurve Collaboration, Cardoso J. V. de M., Hedges C., Gully-Santiago M., Saunders N., Cody A. M., Barclay T., et al., 2018, Lightkurve: Kepler and TESS time series analysis in Python, ascl.soft. ascl:1812.013 * Lucy (1967) Lucy L. 
B., 1967, Gravity-Darkening for Stars with Convective Envelopes, ZA, 65, 89 * Lucy (1976) Lucy L. B., 1976, W Ursae Majoris systems with marginal contact, ApJ, 205, 208. doi:10.1086/154265 * Nagai (2004) Nagai, K., 2004, Visual and CCD minima of eclipsing binaries during 2003, Variable Star Bulletin, No.42, 1 * Nagai (2005) Nagai, K., 2005, Visual and CCD minima of eclipsing binaries during 2004, Variable Star Bulletin, No. 43, 1 * Nagai (2009) Nagai, K., 2009, Visual and CCD minima of eclipsing binaries during 2008, Variable Star Bull. No. 48, 1 * Nagai (2016) Nagai, K., 2016, Visual,CCD and DSLR minima of eclipsing binaries during 2015, Variable Star Bulletin, No. 61, 1 * Nagai (2017) Nagai, K., 2017, Visual,CCD and DSLR minima of eclipsing binaries during 2016, Variable Star Bulletin, No. 63, 1 * Nagai (2021) Nagai, K., 2021, Visual,CCD and DSLR minima of eclipsing binaries during 2020, Variable Star Bulletin, No. 69, 1 * Nelson (2016) Nelson R. H., 2016,CCD Minima for Selected Eclipsing Binaries in 2015, IBVS, 6164, 1 * O’Connell (1951) O’Connell D. J. K., 1951,The so-called periastron effect in close eclipsing binaries ; New variable stars (fifth list), PRCO, 2, 85 * Parimucha et al. (2009) Parimucha S., Dubovsky P., Baludansky D., Pribulla T., Hambalek L., Vanko M., Ogloza W., 2009, Minima Times of Selected Eclipsing Binaries, IBVS, 5898, 1 * Parimucha et al. (2011) Parimucha S., Dubovsky P., Vanko M., Pribulla T., Kudzej I., Barsa R., 2011,Minima Times of Selected Eclipsing Binaries, IBVS, 5980, 1 * Parimucha et al. (2013) Parimucha S., Dubovsky P., Vanko M., 2013, Minima Times of Selected Eclipsing Binaries, IBVS, 6044, 1 * Patra et al. (2017) Patra K. C., Winn J. N., Holman M. J., Yu L., Deming D., Dai F., 2017, The Apparently Decaying Orbit of WASP-12b, AJ, 154, 4. doi:10.3847/1538-3881/aa6d75 * Petropoulou et al. 
(2015) Petropoulou M., Gazeas K., Tzouganatos L., Karampotsiou E., 2015, 110 minima timings of eclipsing binaries, IBVS, 6153, 1 * Pogson (1856) Pogson N., 1856, Magnitudes of Thirty-six of the Minor Planets for the first day of each month of the year 1857, MNRAS, 17, 12. doi:10.1093/mnras/17.1.12 * Poro et al. (2022) Poro A., Sarabi S., Zamanpour S., Fotouhi S., Davoudi F., Khakpash S., Salehian S. R., et al., 2022, Investigation of the orbital period and mass relations for W UMa-type contact systems, MNRAS, 510, 5315. doi:10.1093/mnras/stab3775 * Poro et al. (2022) Poro A., Paki E., Blackford M. G., Davoudi F., Aladag Y., Zamanpour S., Sarabi S., et al., 2022, The Photometric Study of SixW UMa Systems and Investigation of the Mass-Radius Relations for Contact Binary Stars, PASP, 134, 064201. doi:10.1088/1538-3873/ac71cd * Prša et al. (2016) Prša A., Conroy K. E., Horvat M., Pablo H., Kochoska A., Bloemen S., Giammarco J., et al., 2016, Physics Of Eclipsing Binaries. II. Toward the Increased Model Fidelity, ApJS, 227, 29. doi:10.3847/1538-4365/227/2/29 * Prša & Zwitter (2005) Prša A., Zwitter T., 2005, A Computational Guide to Physics of Eclipsing Binaries. I. Demonstrations and Perspectives, ApJ, 628, 426. doi:10.1086/430591 * Pribulla & Vanko (2002) Pribulla T., Vanko M., 2002, Photoelectric photometry of eclipsing contact binaries: U Peg, YY CrB, OU Ser and EQ Tau, CoSka, 32, 79 * Pribulla et al. (2001) Pribulla T., Vanko M., Parimucha S., Chochol D., 2001, New Photoelectric Minima and Updated Ephemerides of Selected Eclipsing Binaries, IBVS, 5056, 1 * Pribulla et al. (2002) Pribulla T., Vanko M., Parimucha S., Chochol D., 2002, IBVS, 5341, 1 * Pribulla, et al. (2003) Pribulla T., Kreiner J. M., Tremko J., 2003, New Photoelectric and CCD Minima and Updated Ephemerides of Selected Eclipsing Binaries, CoSka, 33, 38 * Pribulla et al. 
(2005) Pribulla T., Baludansky D., Chochol D., Chrastina M., Parimucha S., Petrik K., Szasz G., et al., 2005, New Minima of Selected Eclipsing Close Binaries, IBVS, 5668, 1 * Qian & Yang (2004) Qian S.-B., Yang Y.-G., 2004, GR Virginis: A Deep Overcontact Binary, AJ, 128, 2430. doi:10.1086/425051 * Qian & Yang (2005) Qian S., Yang Y., 2005, Improved astrophysical parameters for the overcontact binary FG Hydrae, MNRAS, 356, 765. doi:10.1111/j.1365-2966.2004.08497.x * Robertson & Eggleton (1977) Robertson J. A., Eggleton P. P., 1977,The evolution of W Ursae Majoris systems, MNRAS, 179, 359. doi:10.1093/mnras/179.3.359 * Ruciński (1969) Ruciński S. M., 1969,The Proximity Effects in Close Binary Systems. II. The Bolometric Reflection Effect for Stars with Deep Convective Envelopes, AcA, 19, 245 * Rucinski, et al. (2000) Rucinski S. M., Lu W., Mochnacki S. W., 2000, Radial Velocity Studies of Close Binary Stars. III , AJ, 120, 1133. doi:10.1086/301458 * Selam et al. (2003) Selam S. O., Albayrak B., Senavci H. V., Tanriverdi T., Elmasli A., Kara A., Aksu O., et al., 2003, Photoelectric Minima of Some Eclipsing Binary Stars, IBVS, 5471, 1 * Şenavcı et al. (2008) Şenavcı H. V., Nelson R. H., Özavcı İ., Selam S. O., Albayrak B., 2008, 2008NewA…13..468S, NewA, 13, 468. doi:10.1016/j.newast.2008.01.001 * Sterken (2005) Sterken, C. 2005, The Light-Time Effect in Astrophysics: Causes and cures of the O-C diagram, 335, 3 * Soomandar & Abedi (2020) Soomandar S., Abedi A., 2020, First study of a low-amplitude eclipsing binary KIC11496078, NewA, 80, 101394. doi:10.1016/j.newast.2020.101394 * Sterken (2005) Sterken C., 2005,Binary Pulsars, General Relativity and Light-Time Effects, ASPC, 335, 215 * Torres (2010) Torres G., 2010, On the Use of Empirical Bolometric Corrections for Stars, AJ, 140, 1158. doi:10.1088/0004-6256/140/5/1158 * Tran et al. 
(2013) Tran K., Levine A., Rappaport S., Borkovits T., Csizmadia S., Kalomeni B., 2013, The Anticorrelated Nature of the Primary and Secondary Eclipse Timing Variations for the Kepler Contact Binaries, ApJ, 774, 81. doi:10.1088/0004-637X/774/1/81 * Vaňko et al. (2004) Vaňko M., Parimucha Š., Pribulla T., Chochol D., 2004, New Parameters of the Contact Binary Systems YY CRB and EQ Tau, BaltA, 13, 151 * Wilsey & Beaky (2009) Wilsey N. J., Beaky M. M., 2009, Revisiting the O’Connell Effect in Eclipsing Binary Systems, SASS, 28, 107 * Yang & Qian (2015) Yang Y.-G., Qian S.-B., 2015, Deep, Low Mass Ratio Overcontact Binary Systems. XIV. A Statistical Analysis of 46 Sample Binaries, AJ, 150, 69. doi:10.1088/0004-6256/150/3/69 * Yilmaz et al. (2009) Yilmaz M., Basturk O., Alan N., Senavci H. V., Tanriverdi T., Kilicoglu T., Caliskan S., et al., 2009, New Times of Minima of Some Eclipsing Binary Stars and Maxima of Pulsating Stars, IBVS, 5887, 1 * Yu, Xiang, & Xiao (2015) Yu Y.-X., Xiang F.-Y., Xiao T.-Y., 2015, Orbital period changes of YY Coronae Borealis, PASJ, 67, 42. doi:10.1093/pasj/psv014 * Liu et al. (2023) Liu X.-Y., Li K., Michel R., Gao X., Gao X., Liu F., Yin S.-P., et al., 2023, The study of 11 contact binaries with mass ratios less than 0.1, MNRAS, 519, 5760. doi:10.1093/mnras/stad026 * Sun et al. (2020) Sun W., Chen X., Deng L., de Grijs R., 2020, Physical Parameters of Late-type Contact Binaries in the Northern Catalina Sky Survey, ApJS, 247, 50. doi:10.3847/1538-4365/ab7894 * Poro et al. (2021) Poro A., Blackford M. G., Davoudi F., Mohandes A., Madani M., Rezaei S., Bozorgzadeh E., 2021, The New Ephemeris and Light Curve Analysis of V870 Ara by the Ground-Based and TESS Data, OAst, 30, 37. doi:10.1515/astro-2021-0004 * Wadhwa et al. (2021) Wadhwa S. S., Tothill N. F. H., DeHorta A. Y., Filipović M., 2021,Photometric analysis of two extreme low mass ratio contact binary systems, RAA, 21, 235. doi:10.1088/1674-4527/21/9/235 * Zheng et al. 
(2021) Zheng S.-Y., Li K., Xia Q.-Q., 2021, The first photometric and spectroscopic analysis of the extremely low mass-ratio contact binary NSVS 5029961, MNRAS, 506, 4251. doi:10.1093/mnras/stab1829 * Liu & Li (2021) Liu L., Li X.-Z., 2021, The deep and low-mass-ratio contact binary CSS J022914.4+044340 with a luminous additional companion, RAA, 21, 180. doi:10.1088/1674-4527/21/7/180 * Gazeas et al. (2021) Gazeas K., Zola S., Liakos A., Zakrzewski B., Rucinski S. M., Kreiner J. M., Ogloza W., et al., 2021, Physical parameters of close binary systems: VIII, MNRAS, 501, 2897. doi:10.1093/mnras/staa3753 * Kjurkchieva et al. (2020) Kjurkchieva D. P., Popov V. A., Petrov N. I., 2020, Global parameters of the totally-eclipsing W UMa stars NSVS 6673994, NSVS 4316778, PP Lac and NSVS 1926064, NewA, 77, 101352. doi:10.1016/j.newast.2019.101352 * Wadhwa et al. (2021) Wadhwa S. S., De Horta A., Filipović M. D., Tothill N. F. H., Arbutina B., Petrović J., Djurašević G., 2021, ZZ Piscis Austrinus (ZZ PsA): a bright red nova progenitor and the instability mass ratio of contact binary stars, MNRAS, 501, 229. doi:10.1093/mnras/staa3637 * Paschke (2021) Paschke, A., 2021, A LIST OF MINIMA AND MAXIMA TIMINGS, BAV Journal. No. 55, 1 * Paschke (2023) Paschke, A., 2023, A LIST OF MINIMA AND MAXIMA TIMINGS BAV Journal. No. 79, 1 * Lienhard (2022) Lienhard, P., 2022, BAV-Results of observations - Photoelectric Minima/Maxima of Selected Eclipsing Binaries and Maxima/Minima of Pulsating Stars, BAV Journal. No. 60, 1 * Nelson & Alton (2022) Nelson R. H., Alton K. B., 2022, CCD Minima for Selected Eclipsing Binaries in 2022, OEJV, 234, 1. doi:10.5817/OEJV2022-0234
# Effects of strangeness on the chiral pseudo-critical line Mahammad Sabir Ali<EMAIL_ADDRESS>School of Physical Sciences, National Institute of Science Education and Research, An OCC of Homi Bhabha National Institute, Jatni-752050, India Deeptak Biswas<EMAIL_ADDRESS>School of Physical Sciences, National Institute of Science Education and Research, An OCC of Homi Bhabha National Institute, Jatni-752050, India Amaresh Jaiswal <EMAIL_ADDRESS>School of Physical Sciences, National Institute of Science Education and Research, An OCC of Homi Bhabha National Institute, Jatni-752050, India Institute of Theoretical Physics, Jagiellonian University, ul. St. Łojasiewicza 11, 30-348 Krakow, Poland Hiranmaya Mishra <EMAIL_ADDRESS>School of Physical Sciences, National Institute of Science Education and Research, An OCC of Homi Bhabha National Institute, Jatni-752050, India ###### Abstract Within a 2+1 flavor Nambu–Jona-Lasinio model, we have calculated the curvature coefficients and checked them against available lattice QCD estimations. With the observation that the flavor mixing due to the 't Hooft determinant term significantly affects $\kappa_{2}^{S}$, we explore the effect of $\mu_{S}$ on the $T-\mu_{B}$ cross-over lines. With the novel determination of a negative $\kappa_{2}^{B}$ at large $\mu_{S}$, we advocate studying this quantity in lattice QCD as well. ## I Introduction The phase diagram of strongly interacting matter necessitates the determination of the chiral transition line in the high-density and high-temperature regions. Chiral symmetry is broken in the low-density (low-temperature) phase of quantum chromodynamics (QCD) and gets restored as the temperature and/or density increases. At vanishing baryon density, the restoration of chiral symmetry is determined to be a cross-over with a pseudo-critical temperature $T_{pc}=156.5\pm 1.5~\text{MeV}$ [1].
On the other hand, the transition is expected to be first order at high density, connected to the cross-over line through a critical endpoint (CEP). Although the determination of the cross-over line for small values of the baryon chemical potential ($\mu_{B}$) is quite settled with the recent advancements of lattice QCD (LQCD) calculations [2, 1], its extension to finite $\mu_{B}$ suffers from the infamous sign problem, which leads to oscillatory behaviour in the Monte Carlo sampling method. For small chemical potential ($\mu_{X}$), the pseudo-critical line can be Taylor expanded at the lowest orders in $\mu_{X}^{2}$, where one defines the line with the following ansatz [2, 3, 1], $\frac{T_{pc}(\mu_{X})}{T_{pc}(0)}=1-\kappa_{2}^{X}\left(\frac{\mu_{X}}{T_{pc}(0)}\right)^{2}-\kappa_{4}^{X}\left(\frac{\mu_{X}}{T_{pc}(0)}\right)^{4}~{}.$ (1) Here, $\mu_{X}$ corresponds to the chemical potential associated with a conserved charge, such as baryon number $B$, electric charge $Q$, or strangeness $S$. Such a parametrization allows results from different models and lattice QCD calculations to be compared within the same baseline. The curvature coefficients $\kappa_{2}$ and $\kappa_{4}$ have been examined by the Taylor expansion method on the lattice [4, 5, 1]. Another standard approach relies on performing the calculations at imaginary chemical potential, followed by analytic continuation to the real plane [2, 6, 7]. The above-mentioned results are in good agreement with each other within the respective uncertainties. Similar studies have been performed within perturbative QCD [8], in the ideal and mean-field hadron resonance gas (HRG) models [9, 10], and in the quark-meson model [11, 12, 13, 14, 15, 16]. Moreover, the Nambu–Jona-Lasinio (NJL) model has also been employed in this context [17], considering two light quark flavors.
Effective models, built on the symmetries of the QCD Lagrangian, enable one to probe matter at extreme conditions such as high temperature and/or density, even in the presence of a magnetic field, to understand the phases of QCD matter and provide a bulk description [18, 19]. The NJL model relies on chiral symmetry and provides a qualitative description of QCD matter in terms of the pseudo-scalar mesons [20, 21]. Despite its analytical simplicity and its dependence on the chosen parameter set, the estimations made with the NJL model are quite robust [22, 23]. It acts as a suitable alternative for benchmark estimation in the high-density and low-temperature region [24], as there is no restriction on the applicability of this model at finite density. Over the past few decades, LQCD and NJL studies have complemented each other in broadening our understanding of the strong interaction in various scenarios. For example, magnetic catalysis (MC) was first shown within an NJL framework [25, 26]. Two decades later, lattice QCD not only examined the MC feature [27, 28, 29, 30, 31] but also observed inverse magnetic catalysis (IMC) around the cross-over temperature [31]. These observations led to improved versions of NJL models with nonlocal interactions [32] and with interaction strengths depending on external agents [33, 34]. Further, in NJL-like models, the anomalous breaking of $U(1)_{A}$ symmetry is addressed by explicitly adding the 't Hooft determinant interaction (characterized by the coupling $G_{d}$), which also represents flavor mixing. Recently, Refs. [35, 36] have explored the effect of $G_{d}$ on isospin-sensitive observables in the 2-flavor NJL model and constrained $G_{d}$ using the corresponding LQCD results. In the context of the 3-flavor NJL model, $G_{d}$ is the most ill-constrained parameter, with a large allowed range that still reproduces acceptable values of physical observables [22, 23].
In this letter, for the first time in the $2+1$ flavor case with isospin symmetry, the effect of $G_{d}$ is explored by incorporating a finite strangeness chemical potential. This provides an opportunity to study the effect of a large $\mu_{S}$ on the pseudo-critical line and to provide novel estimations. Although the large variation of $\mu_{S}$ ($0-200$ MeV) is beyond the scope of the freeze-out lines in heavy-ion collisions owing to strangeness neutrality, the present investigation is of particular interest for extending the NJL model to very high density. We have organized the letter as follows: in Sec. II we describe the model formalism for a $2+1$ flavor NJL model with isospin symmetry; we present our results in Sec. III and summarize our findings in Sec. IV. ## II Formalism The $2+1$-flavor NJL model Lagrangian is given by ${\cal L}_{\text{NJL}}=\bar{\psi}\left(i\gamma_{\mu}\partial^{\mu}-\hat{m}\right)\psi+{\cal L}_{\text{S}}+{\cal L}_{\text{D}},$ (2) where the four- and six-point interaction terms are given by $\begin{split}{\cal L}_{\text{S}}&=G_{s}\sum_{a=0}^{8}\left[\left(\bar{\psi}\lambda_{a}\psi\right)^{2}+\left(\bar{\psi}\,i\gamma_{5}\lambda_{a}\psi\right)^{2}\right],\\\ {\cal L}_{\text{D}}&=G_{d}\left[\det\bar{\psi}_{i}(1-\gamma_{5})\psi_{j}+\det\bar{\psi}_{i}(1+\gamma_{5})\psi_{j}\right].\end{split}$ (3) Here, $\psi^{\text{T}}=(u,d,s)$ is the quark triplet in flavor space, with up, down and strange quarks, and $\hat{m}=\text{diag}(m_{u},m_{d},m_{s})$ is the current quark mass matrix. In the interaction terms, the $\lambda$'s are the Gell-Mann matrices, and in ${\cal L}_{\text{D}}$ the determinant is taken in flavor space. ${\cal L}_{\text{S}}$ represents the four-quark interaction, with coupling strength $G_{s}$, which is symmetric under the $U(3)\times U(3)$ symmetry. On the other hand, ${\cal L}_{\text{D}}$, with coupling strength $G_{d}$, describes the six-quark interaction known as the 't Hooft determinant.
${\cal L}_{\text{D}}$ is included to break the $U(1)_{A}$ symmetry explicitly, as $U(1)_{A}$ is anomalous in the quantum theory. To obtain the free energy, it is standard to introduce auxiliary fields using the Hubbard-Stratonovich transformation [37], which makes the Lagrangian quadratic in the fermion fields. Within the mean-field approximation, these auxiliary fields can acquire nonzero vacuum expectation values. In the absence of any other external agents (like a magnetic field, isospin chemical potential, etc.), symmetry only allows the $\bar{\psi}\psi$ channel to acquire a nonzero vacuum expectation value. The mean-field Lagrangian then becomes ${\cal L}_{\text{MFA}}=\bar{\psi}\left(i\gamma_{\mu}\partial^{\mu}-\hat{M}\right)\psi-2G_{s}\sum_{i}\sigma_{i}^{2}+4G_{d}\prod_{i}\sigma_{i},$ (4) where $\hat{M}$ is the constituent mass matrix, and the constituent masses are given by [22] $M_{i}=m_{i}-4G_{s}\sigma_{i}+2G_{d}\,\sigma_{j}\sigma_{k},$ (5) with $(i,j,k)$ a cyclic permutation of the flavors and $\sigma_{i}=\langle\bar{\psi}_{i}\psi_{i}\rangle$ the condensate that works as the order parameter of chiral symmetry breaking. As is evident from the above equation, $G_{d}$ mixes different flavors. It is straightforward to integrate out the fermionic degrees of freedom from Eq. (4) to obtain the free energy. To introduce temperature ($T$) and chemical potentials ($\mu_{f}$), it is customary to perform the following transformations [38] $p_{0}\rightarrow ip_{4}-\mu_{f},\qquad p_{4}=(2n+1)\pi T.$ (6) With the above transformation, the integration over $p_{0}$ gets replaced by a sum over the Matsubara frequencies, $n$.
Moreover, the free energy is given by [39, 40] $\Omega=\Omega_{\text{MF}}+\Omega_{\text{Vac}}+\Omega_{\text{Th}},$ (7) where $\displaystyle\Omega_{\text{MF}}$ $\displaystyle=$ $\displaystyle 2G_{s}\sum_{i}\sigma_{i}^{2}-4G_{d}\prod_{i}\sigma_{i},$ (8) $\displaystyle\Omega_{\text{Vac}}$ $\displaystyle=$ $\displaystyle-2N_{c}\sum_{i}\int^{\Lambda}\frac{d^{3}p}{(2\pi)^{3}}\,\varepsilon_{i}(p),$ (9) $\displaystyle\Omega_{\text{Th}}$ $\displaystyle=$ $\displaystyle-2N_{c}T\sum_{i}\int\frac{d^{3}p}{(2\pi)^{3}}\left[\ln{\left(1+e^{-(\varepsilon_{i}(p)-\mu_{i})/T}\right)}\right.$ (10) $\displaystyle+\left.\ln{\left(1+e^{-(\varepsilon_{i}(p)+\mu_{i})/T}\right)}\right].$ Here, $N_{c}=3$ is the number of colors, $\varepsilon_{i}(p)=\sqrt{\vec{p}^{2}+M_{i}^{2}}$ is the energy of the $i$-th flavor quark, and $\Lambda$ is the three-momentum cutoff. To obtain the ground state, one minimizes the free energy defined in Eq. (7) by solving the following gap equations simultaneously $\displaystyle\frac{\partial\Omega}{\partial\sigma_{u}}=\frac{\partial\Omega}{\partial\sigma_{d}}=\frac{\partial\Omega}{\partial\sigma_{s}}=0.$ (11) In this study, we have considered the isospin symmetric case; in other words, the electric charge and the associated chemical potential ($\mu_{Q}$) are ignored, which implies $\sigma_{u}=\sigma_{d}=\sigma_{l}$. The quark chemical potentials can be written in terms of the baryon and strangeness chemical potentials as $\begin{split}\mu_{u}&=\mu_{d}=\frac{1}{3}\,\mu_{B},\\\ \mu_{s}&=\frac{1}{3}\,\mu_{B}-\mu_{S}.\end{split}$ (12) Finally, for fixed $\mu_{B}$ and $\mu_{S}$, we define the pseudo-critical temperature ($T_{pc}$) as the inflection temperature where the curvature of $\sigma_{l}$ changes sign [15]. In the context of LQCD, $T_{pc}$ is generally determined from the maximum of the chiral susceptibility [15].
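In the vacuum ($T=\mu=0$) limit, $\Omega_{\text{Th}}$ drops out and the gap equations (11) reduce to the self-consistency of Eq. (5) with $\sigma_{i}=-2N_{c}\int^{\Lambda}\frac{d^{3}p}{(2\pi)^{3}}\,M_{i}/\varepsilon_{i}(p)$, which has a closed form for the sharp three-momentum cutoff. The Python sketch below (not part of the paper; it assumes the set II parameter values of Table 1) solves this system and recovers the quoted vacuum constituent masses $M_{l}\approx 368$ MeV and $M_{s}\approx 549$ MeV:

```python
import numpy as np
from scipy.optimize import fsolve

# Parameter set II (Rehberg, Klevansky & Hufner), Table 1; units: MeV
LAM, NC = 602.3, 3
GS = 1.835 / LAM**2          # G_s, from G_s * Lambda^2 = 1.835
GD = 12.36 / LAM**5          # G_d, from G_d * Lambda^5 = 12.36
ML0, MS0 = 5.5, 140.7        # current quark masses m_l, m_s

def condensate(M):
    """Vacuum condensate sigma = -2 Nc \\int^Lambda d^3p/(2pi)^3 M/eps(p),
    evaluated in closed form for a sharp cutoff."""
    E = np.hypot(LAM, M)     # sqrt(Lambda^2 + M^2)
    integral = 0.5 * (LAM * E - M**2 * np.log((LAM + E) / M))
    return -NC * M / np.pi**2 * integral

def gap(M):
    """Residuals of Eq. (5): M_i = m_i - 4 G_s sigma_i + 2 G_d sigma_j sigma_k."""
    Ml, Ms = M
    sl, ss = condensate(Ml), condensate(Ms)
    return [Ml - (ML0 - 4*GS*sl + 2*GD*sl*ss),
            Ms - (MS0 - 4*GS*ss + 2*GD*sl*sl)]

Ml, Ms = fsolve(gap, x0=[350.0, 550.0])   # -> roughly 368 and 549 MeV
```

The light condensate at the solution comes out near $(-242~\text{MeV})^{3}$, the familiar vacuum value for this parameter set.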
| $\Lambda$ (MeV) | $G_{s}\Lambda^{2}$ | $G_{d}\Lambda^{5}$ | $m_{l}$(MeV) | $m_{s}$(MeV) ---|---|---|---|---|--- set I | 631.4 | 1.835 | 9.29 | 5.5 | 135.7 set II | 602.3 | 1.835 | 12.36 | 5.5 | 140.7 Table 1: Parameter sets of the NJL model used in the present work. Set I and set II are from Hatsuda and Kunihiro [22] and Rehberg, Klevansky, and Hufner [23], respectively. Let us note that there are five parameters in this three-flavor NJL model, namely the current quark masses of the light and strange quarks ($m_{l}$ and $m_{s}$), the two couplings $G_{s}$ and $G_{d}$, and the three-momentum cutoff $\Lambda$. After choosing $m_{l}=5.5$ MeV, consistent with chiral perturbation theory [41], the remaining four parameters are fixed by fitting the pion decay constant and the masses of the pion, kaon, and $\eta^{\prime}$ [22, 23] to their empirical values. With the parametrization of set I, the mass of the $\eta$ meson is underestimated by $11\%$, while for set II it is underestimated by $6\%$ [17]; with set II, the constituent masses in the vacuum turn out to be $M_{l}=368$ MeV and $M_{s}=549$ MeV. As may be seen from Tab. 1, the dimensionless coupling $G_{d}\Lambda^{5}$ differs by $30\%$ between the two widely used sets, while the value of $G_{d}$ itself differs by about $70\%$, making it the most poorly determined parameter; in this work we intend to prescribe a way to constrain it more precisely. ## III Results For given values of $\mu_{B}$ and $\mu_{S}$, the pseudo-critical temperature ($T_{pc}$) is defined to be the inflection point of the light quark condensate (the order parameter of chiral symmetry breaking) as a function of temperature.
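Numerically, the inflection point just defined can be located on a temperature grid as the point where $|d\sigma_{l}/dT|$ is maximal, i.e. where $d^{2}\sigma_{l}/dT^{2}$ changes sign. A minimal sketch, using a synthetic tanh-shaped condensate purely for illustration (the 156.5 MeV midpoint and 10 MeV width are placeholder values, not model output):

```python
import numpy as np

def pseudo_critical_T(T, sigma):
    """T_pc = inflection point of sigma_l(T) on a grid: the temperature
    where |d sigma/dT| peaks, i.e. where the curvature changes sign."""
    dsig = np.gradient(sigma, T)          # central finite differences
    return T[np.argmax(np.abs(dsig))]

# Synthetic cross-over: a smoothed step centred at 156.5 MeV
T = np.linspace(100.0, 220.0, 1201)       # 0.1 MeV spacing
sigma = -0.5 * (1.0 - np.tanh((T - 156.5) / 10.0))
Tpc = pseudo_critical_T(T, sigma)         # -> 156.5
```

In practice one would feed in the $\sigma_{l}(T)$ values obtained from solving the gap equations at each temperature.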
Before proceeding to investigate the effect of finite $\mu_{S}$ on the $T-\mu_{B}$ line, it is essential to check the model estimations against the available lattice QCD results for the curvature coefficients ($\kappa_{2,4}$). Considering $\mu_{Q}=0$, we have first investigated the $T-\mu_{B}$ ($\mu_{S}=0$) and $T-\mu_{S}$ ($\mu_{B}=0$) planes, and find $\kappa_{2,4}$ by parameterizing the respective pseudo-critical lines with the ansatz of Eq. (1) for the range $\mu_{B,S}/T_{pc}(0)\leq 1.0$, with $T_{pc}(0)=171.1$ and $173.4$ MeV for parameter sets I and II, respectively. We have tabulated our estimations for the curvature coefficients $\kappa_{2}$ and $\kappa_{4}$ in Tab. 2 and Tab. 3, respectively. The lattice results are taken from the HotQCD collaboration [1] and the WB collaboration [2]. There is excellent agreement with the LQCD estimations for $\kappa_{2}$ on both the $T-\mu_{B}$ and $T-\mu_{S}$ lines. We want to emphasize that the values of $\kappa_{2}^{B}$ are similar for both parameter sets, which indicates that the large difference in $G_{d}$ between the two sets does not influence the $T-\mu_{B}$ phase line. On the contrary, in the $\mu_{B}=0$ plane, $\kappa_{2}^{S}$ is distinctly different for the two parameter sets. Although both $\kappa_{2}^{S}$ values match the lattice estimations within the uncertainties, the estimation with parameter set II agrees better with the mean value. The difference in $\kappa_{2}^{S}$ is attributed to $G_{d}$, which brings the influence of the strange quark to the light quarks, as pointed out in Eq. (5). This motivates us to examine the effect of $G_{d}$ on $\kappa_{2}^{B}$ by exploring the $T-\mu_{B}$ line at various values of $\mu_{S}$.
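The parameterization step above is an ordinary linear least-squares problem, since Eq. (1) is linear in $\kappa_{2}$ and $\kappa_{4}$ once $x=\mu_{X}/T_{pc}(0)$ is formed. A sketch (the sampled $T_{pc}$ values below are synthetic, generated from illustrative coefficients rather than the model output):

```python
import numpy as np

def fit_curvature(mu, tpc, tpc0):
    """Least-squares fit of the ansatz of Eq. (1):
    T_pc(mu)/T_pc(0) = 1 - k2 x^2 - k4 x^4, with x = mu/T_pc(0).
    The model is linear in (k2, k4), so plain lstsq suffices."""
    x = np.asarray(mu) / tpc0
    y = 1.0 - np.asarray(tpc) / tpc0      # y = k2 x^2 + k4 x^4
    A = np.column_stack([x**2, x**4])
    k2, k4 = np.linalg.lstsq(A, y, rcond=None)[0]
    return k2, k4

# Synthetic pseudo-critical line over the fit range mu/T_pc(0) <= 1;
# 0.016 and 0.001 are illustrative input coefficients, not model values.
tpc0 = 171.1                              # MeV, the set I T_pc(0) quoted above
mu = np.linspace(0.0, tpc0, 20)
tpc = tpc0 * (1.0 - 0.016*(mu/tpc0)**2 - 0.001*(mu/tpc0)**4)
k2, k4 = fit_curvature(mu, tpc, tpc0)     # -> (0.016, 0.001)
```
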
| | $\kappa_{2}^{B}$ ($\mu_{S}=0$) | $\kappa_{2}^{S}$ ($\mu_{B}=0$) | $\kappa_{2}^{B,n_{S}=0}$ |
|---|---|---|---|
| NJL, set I | 0.01627 | 0.01345 | 0.01478 |
| NJL, set II | 0.01619 | 0.01719 | 0.01350 |
| Lattice QCD | 0.016(6) [1] | 0.017(5) [1] | 0.012(4) [1] |
| | | | 0.0153(18) [7] |

Table 2: Estimations of $\kappa_{2}$ for the two parameter sets. The lattice QCD results are taken from Refs. [1, 7].

Figure 1: Phase line in the $T-\mu_{B}$ plane for different $\mu_{S}$, evaluated with parameter set I. For presentation purposes, the axes are scaled by the respective $T_{pc}(\mu_{B}=0)$. The continuous lines represent our estimations for various values of $\mu_{S}$. The data and band are from HotQCD [1]. The dashed line corresponds to the strangeness neutral case.

To appreciate the effects arising from the strange sector, we have restricted this study to smaller values of the baryon chemical potential ($\mu_{B}/T_{pc}(0)\leq 1.0$) and varied $\mu_{S}$ from 0 to 200 MeV. We have restricted $\mu_{S}$ to within half the kaon mass to exclude the possibility of kaon condensation [42]. Due to the difference in the magnitude of $T_{pc}$ between NJL and lattice studies, we have scaled the results by the respective $T_{pc}(\mu_{B}=0)$, as shown in Fig. 1. As may be observed, for finite values of $\mu_{S}$, $T_{pc}$ initially increases with $\mu_{B}$ and then decreases. For smaller values of $\mu_{B}$, a finite $\mu_{S}$ decreases the thermal weight in the strange sector ($\mu_{S}$ comes with a negative sign in the strange thermal distribution, see Eq. (12)) and therefore leads to a higher value of $T_{pc}/T_{pc}(0)$ at the same $\mu_{B}$, as shown in Fig. 1. As $\mu_{B}$ increases further, this rise in $T_{pc}$ saturates and eventually turns into a decrease. Such a prominent increase in the pseudo-critical temperature ($T_{pc}$) along the $T-\mu_{B}$ line, arising from a finite strangeness chemical potential, is observed here for the first time.
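The statement that a finite $\mu_{S}$ suppresses the strange-sector thermal weight can be checked directly. The sketch below assumes the standard Fermi-Dirac distribution and an effective strange-quark chemical potential $\mu_{s}=\mu_{B}/3-\mu_{S}$; this sign convention matches the text's remark about Eq. (12), but since that equation is not reproduced here, the specific form is an assumption, and the numerical values of $T$ and $E$ are purely illustrative:

```python
import math

def fermi_dirac(E, mu, T):
    """Thermal occupancy of a fermionic mode of energy E (MeV) at temperature T (MeV)."""
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

def strange_weight(mu_B, mu_S, T=160.0, E=600.0):
    """Occupancy of a representative strange-quark mode; mu_S enters with a
    negative sign in the effective strange chemical potential (assumed)."""
    mu_s = mu_B / 3.0 - mu_S
    return fermi_dirac(E, mu_s, T)

if __name__ == "__main__":
    # Occupancy drops monotonically as mu_S is switched on at fixed mu_B.
    for mu_S in (0.0, 100.0, 200.0):
        print(f"mu_S = {mu_S:5.1f} MeV -> weight = {strange_weight(150.0, mu_S):.4f}")
```

This monotonic suppression is the mechanism the text invokes for the initial rise of $T_{pc}/T_{pc}(0)$ at small $\mu_{B}$ when $\mu_{S}$ is finite.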
This trend was not observed in earlier studies within LQCD [1, 43] and HRG [10], as most of them were performed along the $\mu_{S}=0$ line or along the freeze-out line, where strangeness neutrality sets the limit $\mu_{S}\leq\mu_{B}/3$ [43].

Figure 2: $\kappa_{2}^{B}$ as a function of $\mu_{S}$ in MeV. The red (dashed) and green (dash-dotted) lines are the central values for sets I and II, respectively. The blue and cyan bands are associated with a $\pm 10\%$ change in $G_{d}$ and a $\pm 5\%$ change in $G_{s}$, respectively, for parameter set I.

To quantify the increase of $T_{pc}$ with $\mu_{B}$ for a given value of $\mu_{S}$, we have used the ansatz of Eq. (1) to extract the curvature coefficients. We present the variation of $\kappa_{2}^{B}$ with $\mu_{S}$ in Fig. 2 for both parameter sets. The curvature coefficient $\kappa_{2}^{B}$ starts from a positive value at $\mu_{S}=0$, decreases as the strangeness chemical potential increases, and eventually becomes negative at some $\mu_{S}=\mu_{S}^{c}$. This negative sign of $\kappa_{2}^{B}$ is one of the novel results of the present investigation; it was not observed in earlier studies of the pseudo-critical line [1, 7]. One important observation is that the values of $\mu_{S}^{c}$ are distinctly different for the two parameter sets. Since $G_{s}\Lambda^{2}$ is the same for both sets, this difference in $\mu_{S}^{c}$ is essentially due to the difference in $G_{d}$. A large $G_{d}$ provides a stronger influence of the strange quark sector on the light quarks, resulting in a faster decrease of $\kappa_{2}^{B}$. At this juncture, it is instructive to check $\kappa_{2}^{B}$ along the strangeness neutrality line ($n_{S}=0$). A finite $\mu_{B}$ requires a strangeness chemical potential $\mu_{S}\neq 0$ to achieve zero net strangeness.
This corresponds to $\mu_{S}=\mu_{B}/3$, since we are considering $\mu_{Q}=0$ and there is no vector interaction in the present model. We find $\kappa_{2}^{B,n_{S}=0}<\kappa_{2}^{B,\mu_{S}=0}$, as listed in Table 2. The decrease of $\kappa_{2}^{B}$ in the strangeness neutral case is commensurate with the lattice estimations [1, 43] and in accordance with our finding that $\kappa_{2}^{B}$ is reduced with increasing $\mu_{S}$. We would like to comment here that, while the behaviour of $\kappa_{2}^{B}$ ($\mu_{S}\neq 0$) is similar to the lattice QCD calculations, the lattice estimations at $n_{S}=0$ correspond to values of $\mu_{S}$ which are not large enough to constrain the flavor mixing determinant coupling. This necessitates LQCD simulations at larger values of $\mu_{S}$. It would be interesting to check the robustness of this negative $\kappa_{2}^{B}$ against the parametrization of the NJL model itself. For this purpose, we have varied $G_{s}$ and $G_{d}$ by $5\%$ and $10\%$, respectively, and examined the effect on the variation of $\kappa_{2}^{B}$, as shown in Fig. 2. As discussed earlier, a larger value of $G_{d}$ increases the coupling between the light and strange sectors, resulting in a faster decrease of $\kappa_{2}^{B}$. Needless to say, $\kappa_{2}^{B}$ becomes independent of $\mu_{S}$ at $G_{d}=0$, as the strange and light quark sectors then decouple, which is evident from the Lagrangian of the NJL model. On the contrary, the variation of $G_{s}$ has a weaker effect on the features mentioned above.

| | $\kappa_{4}^{B}$ ($\mu_{S}=0$) | $\kappa_{4}^{S}$ ($\mu_{B}=0$) | $\kappa_{4}^{B,n_{S}=0}$ |
|---|---|---|---|
| NJL, set I | 0.00006 | 0.001477 | 0.000081 |
| NJL, set II | 0.00005 | 0.001892 | 0.000742 |
| Lattice QCD | 0.001(7) [1] | 0.004(6) [1] | 0.000(4) [1] |
| | | | 0.00032(67) [7] |

Table 3: Values of $\kappa_{4}$ for the pseudo-critical line in the different cases.

Figure 3: $\kappa_{4}^{B}$ as a function of $\mu_{S}$ in MeV. The color code is the same as in Fig. 2.
Within LQCD, the numerical value of $\kappa_{4}^{B}$ is consistent with zero [1, 7], since for small values of $\mu_{B,S}/T$ the fourth-order coefficient of the $\mu_{X}/T$ expansion has a weaker effect on the $T-\mu_{X}$ line. In the present study, we find $\kappa_{4}$ to be in good agreement with the lattice results for the cases (${\mu_{B}\neq 0,\ \mu_{S}=0}$) and (${\mu_{B}=0,\ \mu_{S}\neq 0}$), as shown in Table 3. It is then essential to investigate the same for the $T-\mu_{B}$ line at various $\mu_{S}$. For larger values of $\mu_{S}$, we find $\kappa_{4}$ to be finite (see Fig. 3) for both parameter sets, as mentioned earlier. These findings suggest that, even within the small-$\mu_{B}$ range, a non-zero $\kappa_{4}$ is possible by switching on a finite strangeness chemical potential $\mu_{S}$, which is relevant in the context of lattice simulations.

## IV Summary and conclusion

In this letter, we explore the chiral phase boundary of QCD matter within a $2+1$ flavor Nambu–Jona-Lasinio model, with special emphasis on the effect of strangeness on the curvature coefficients $\kappa_{2}^{B}$ and $\kappa_{4}^{B}$. To our knowledge, this is the first such exploration within a $2+1$ flavor NJL model. We have considered the isospin symmetric case and $\mu_{Q}=0$. To have better control over the lowest-order coefficients ($\kappa_{2}^{X}$), we have limited the study to the range $\mu_{X}/T\leq 1$. As a benchmark, we have first estimated $\kappa_{2,4}^{X}$ for three separate cases: 1) the $T-\mu_{B}$ plane ($\mu_{S}=0$), i.e., $\kappa_{2,4}^{B}$; 2) the $T-\mu_{S}$ plane ($\mu_{B}=0$), i.e., $\kappa_{2,4}^{S}$; and 3) along the strangeness neutrality line, $\kappa_{2,4}^{B,n_{S}=0}$. We have used two standard parametrizations of the $2+1$ flavor NJL model that differ significantly in the flavor mixing determinant interaction.
Although we have excellent agreement of $\kappa_{2}^{B}$ with the available LQCD findings for both parameter sets, we observed that $\kappa_{2}^{S}$ has a strong dependence on the flavor mixing and $U(1)_{A}$ breaking 't Hooft interaction. Of the two parameter sets used, set II, with the higher value of $G_{d}$, better reproduces the lattice estimation of $\kappa_{2}^{S}$. To explore the effects of flavor mixing through $G_{d}$, it is instructive to study the $\mu_{S}$ dependence of the $T-\mu_{B}$ lines, which we have quantified by estimating $\kappa_{2}^{B}$ as a function of $\mu_{S}$. We observe that within the model $\kappa_{2}^{B}$ decreases with $\mu_{S}$ and becomes negative for sufficiently large values of $\mu_{S}$. Further, the values of $\mu_{S}$ at which $\kappa_{2}^{B}$ vanishes differ between the two parameter sets. This difference is attributed to the fact that a larger value of $G_{d}$ strengthens the strange contribution to the light sector, resulting in a faster decrease. This novel observation provides a direct measure of the flavor mixing through the 't Hooft interaction. We expect that outcomes from LQCD investigations of $\kappa_{2}^{B}$ at large enough $\mu_{S}$ will assist in better constraining the 't Hooft coupling $G_{d}$, thereby enhancing our understanding of effective models like the NJL model and of QCD.

## References

* Bazavov _et al._ [2019] A. Bazavov _et al._ (HotQCD), Chiral crossover in QCD at zero and non-zero chemical potentials, Phys. Lett. B 795, 15 (2019), arXiv:1812.08235 [hep-lat] . * Bellwied _et al._ [2015] R. Bellwied, S. Borsanyi, Z. Fodor, J. Günther, S. D. Katz, C. Ratti, and K. K. Szabo, The QCD phase diagram from analytic continuation, Phys. Lett. B 751, 559 (2015), arXiv:1507.07510 [hep-lat] . * Bonati _et al._ [2018] C. Bonati, M. D’Elia, F. Negro, F. Sanfilippo, and K. Zambello, Curvature of the pseudocritical line in QCD: Taylor expansion matches analytic continuation, Phys. Rev.
D 98, 054510 (2018), arXiv:1805.02960 [hep-lat] . * Gavai and Gupta [2003] R. V. Gavai and S. Gupta, Pressure and nonlinear susceptibilities in QCD at finite chemical potentials, Phys. Rev. D 68, 034506 (2003), arXiv:hep-lat/0303013 . * Gavai and Gupta [2005] R. V. Gavai and S. Gupta, The Critical end point of QCD, Phys. Rev. D 71, 114014 (2005), arXiv:hep-lat/0412035 . * Bonati _et al._ [2015] C. Bonati, M. D’Elia, M. Mariti, M. Mesiti, F. Negro, and F. Sanfilippo, Curvature of the chiral pseudocritical line in QCD: Continuum extrapolated results, Phys. Rev. D 92, 054503 (2015), arXiv:1507.03571 [hep-lat] . * Borsanyi _et al._ [2020] S. Borsanyi, Z. Fodor, J. N. Guenther, R. Kara, S. D. Katz, P. Parotto, A. Pasztor, C. Ratti, and K. K. Szabo, QCD Crossover at Finite Chemical Potential from Lattice Simulations, Phys. Rev. Lett. 125, 052001 (2020), arXiv:2002.02821 [hep-lat] . * Haque and Strickland [2021] N. Haque and M. Strickland, Next-to-next-to leading-order hard-thermal-loop perturbation-theory predictions for the curvature of the QCD phase transition line, Phys. Rev. C 103, 031901 (2021), arXiv:2011.06938 [hep-ph] . * Biswas _et al._ [2022] D. Biswas, P. Petreczky, and S. Sharma, Chiral condensate from a hadron resonance gas model, Phys. Rev. C 106, 045203 (2022), arXiv:2206.04579 [hep-ph] . * Biswas _et al._ [2024] D. Biswas, P. Petreczky, and S. Sharma, Chiral condensate and the Equation Of State at non-zero baryon density from hadron resonance gas model with repulsive mean-field, (2024), arXiv:2401.02874 [hep-ph] . * Fu _et al._ [2020] W.-j. Fu, J. M. Pawlowski, and F. Rennecke, QCD phase structure at finite temperature and density, Phys. Rev. D 101, 054032 (2020), arXiv:1909.02991 [hep-ph] . * Schaefer and Wambach [2005] B.-J. Schaefer and J. Wambach, The Phase diagram of the quark meson model, Nucl. Phys. A 757, 479 (2005), arXiv:nucl-th/0403039 . * Braun _et al._ [2012] J. Braun, B. Klein, and B.-J. 
Schaefer, On the Phase Structure of QCD in a Finite Volume, Phys. Lett. B 713, 216 (2012), arXiv:1110.0849 [hep-ph] . * Fischer and Luecker [2013] C. S. Fischer and J. Luecker, Propagators and phase structure of Nf=2 and Nf=2+1 QCD, Phys. Lett. B 718, 1036 (2013), arXiv:1206.5191 [hep-ph] . * Pawlowski and Rennecke [2014] J. M. Pawlowski and F. Rennecke, Higher order quark-mesonic scattering processes and the phase structure of QCD, Phys. Rev. D 90, 076002 (2014), arXiv:1403.1179 [hep-ph] . * Fischer _et al._ [2014] C. S. Fischer, J. Luecker, and C. A. Welzbacher, Phase structure of three and four flavor QCD, Phys. Rev. D 90, 034022 (2014), arXiv:1405.4762 [hep-ph] . * Buballa [2005] M. Buballa, NJL model analysis of quark matter at large density, Phys. Rept. 407, 205 (2005), arXiv:hep-ph/0402234 . * Ghosh _et al._ [2006] S. K. Ghosh, T. K. Mukherjee, M. G. Mustafa, and R. Ray, Susceptibilities and speed of sound from PNJL model, Phys. Rev. D 73, 114007 (2006), arXiv:hep-ph/0603050 . * Pereira [2021] R. C. Pereira, _Quantum Chromodynamics Phase Diagram Under Extreme Conditions_ , Ph.D. thesis, Coimbra U. (2021). * Nambu and Jona-Lasinio [1961a] Y. Nambu and G. Jona-Lasinio, Dynamical Model of Elementary Particles Based on an Analogy with Superconductivity. 1., Phys. Rev. 122, 345 (1961a). * Nambu and Jona-Lasinio [1961b] Y. Nambu and G. Jona-Lasinio, Dynamical model of elementary particles based on an analogy with superconductivity. II., Phys. Rev. 124, 246 (1961b). * Hatsuda and Kunihiro [1994] T. Hatsuda and T. Kunihiro, QCD phenomenology based on a chiral effective Lagrangian, Phys. Rept. 247, 221 (1994), arXiv:hep-ph/9401310 . * Rehberg _et al._ [1996] P. Rehberg, S. P. Klevansky, and J. Hufner, Hadronization in the SU(3) Nambu-Jona-Lasinio model, Phys. Rev. C 53, 410 (1996), arXiv:hep-ph/9506436 . * Mishra and Mishra [2004] A. Mishra and H. Mishra, Chiral symmetry breaking, color superconductivity and color neutral quark matter: A Variational approach, Phys. 
Rev. D 69, 014014 (2004), arXiv:hep-ph/0306105 . * Klevansky and Lemmer [1989] S. P. Klevansky and R. H. Lemmer, Chiral symmetry restoration in the Nambu-Jona-Lasinio model with a constant electromagnetic field, Phys. Rev. D 39, 3478 (1989). * Gusynin _et al._ [1994] V. P. Gusynin, V. A. Miransky, and I. A. Shovkovy, Catalysis of dynamical flavor symmetry breaking by a magnetic field in (2+1)-dimensions, Phys. Rev. Lett. 73, 3499 (1994), [Erratum: Phys.Rev.Lett. 76, 1005 (1996)], arXiv:hep-ph/9405262 . * Buividovich _et al._ [2010a] P. V. Buividovich, M. N. Chernodub, E. V. Luschevskaya, and M. I. Polikarpov, Numerical study of chiral symmetry breaking in non-Abelian gauge theory with background magnetic field, Phys. Lett. B 682, 484 (2010a), arXiv:0812.1740 [hep-lat] . * Buividovich _et al._ [2010b] P. V. Buividovich, M. N. Chernodub, E. V. Luschevskaya, and M. I. Polikarpov, Chiral magnetization of non-Abelian vacuum: A Lattice study, Nucl. Phys. B 826, 313 (2010b), arXiv:0906.0488 [hep-lat] . * Braguta _et al._ [2012] V. V. Braguta, P. V. Buividovich, T. Kalaydzhyan, S. V. Kuznetsov, and M. I. Polikarpov, The Chiral Magnetic Effect and chiral symmetry breaking in SU(3) quenched lattice gauge theory, Phys. Atom. Nucl. 75, 488 (2012), arXiv:1011.3795 [hep-lat] . * D’Elia and Negro [2011] M. D’Elia and F. Negro, Chiral Properties of Strong Interactions in a Magnetic Background, Phys. Rev. D 83, 114028 (2011), arXiv:1103.2080 [hep-lat] . * Bali _et al._ [2012] G. S. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. D. Katz, and A. Schafer, QCD quark condensate in external magnetic fields, Phys. Rev. D 86, 071502 (2012), arXiv:1206.4205 [hep-lat] . * Pagura _et al._ [2017] V. P. Pagura, D. Gomez Dumm, S. Noguera, and N. N. Scoccola, Magnetic catalysis and inverse magnetic catalysis in nonlocal chiral quark models, Phys. Rev. D 95, 034013 (2017), arXiv:1609.02025 [hep-ph] . * Farias _et al._ [2014] R. L. S. Farias, K. P. Gomes, G. I. Krein, and M. B. 
Pinto, Importance of asymptotic freedom for the pseudocritical temperature in magnetized quark matter, Phys. Rev. C 90, 025203 (2014), arXiv:1404.3931 [hep-ph] . * Ferreira _et al._ [2014] M. Ferreira, P. Costa, O. Lourenço, T. Frederico, and C. Providência, Inverse magnetic catalysis in the (2+1)-flavor Nambu-Jona-Lasinio and Polyakov-Nambu-Jona-Lasinio models, Phys. Rev. D 89, 116011 (2014), arXiv:1404.5577 [hep-ph] . * Ali _et al._ [2021] M. S. Ali, C. A. Islam, and R. Sharma, Studying explicit U(1)A symmetry breaking in a hot and magnetized two flavor nonlocal NJL model constrained using lattice results, Phys. Rev. D 104, 114026 (2021), arXiv:2009.13563 [hep-ph] . * Ali _et al._ [2023] M. S. Ali, C. A. Islam, and R. Sharma, The role of U(1)A symmetry breaking in the QCD corrections to the pion mass difference, J. Phys. G 50, 115003 (2023), arXiv:2103.15849 [hep-ph] . * Hubbard [1959] J. Hubbard, Calculation of partition functions, Phys. Rev. Lett. 3, 77 (1959). * Mustafa [2023] M. G. Mustafa, An introduction to thermal field theory and some of its application, Eur. Phys. J. ST 232, 1369 (2023), arXiv:2207.00534 [hep-ph] . * Gastineau _et al._ [2002] F. Gastineau, R. Nebauer, and J. Aichelin, Thermodynamics of the three flavor NJL model: Chiral symmetry breaking and color superconductivity, Phys. Rev. C 65, 045204 (2002), arXiv:hep-ph/0101289 . * Kohyama _et al._ [2015] H. Kohyama, D. Kimura, and T. Inagaki, Regularization dependence on phase diagram in Nambu–Jona-Lasinio model, Nucl. Phys. B 896, 682 (2015), arXiv:1501.00449 [hep-ph] . * Gasser and Leutwyler [1982] J. Gasser and H. Leutwyler, Quark Masses, Phys. Rept. 87, 77 (1982). * Barducci _et al._ [2005] A. Barducci, R. Casalbuoni, G. Pettini, and L. Ravagli, Pion and kaon condensation in a 3-flavor NJL model, Phys. Rev. D 71, 016011 (2005), arXiv:hep-ph/0410250 . * Ding _et al._ [2024] H. T. Ding, O. Kaczmarek, F. Karsch, P. Petreczky, M. Sarkar, C. Schmidt, and S. 
Sharma, Curvature of the chiral phase transition line from the magnetic equation of state of (2+1)-flavor QCD, (2024), arXiv:2403.09390 [hep-lat] .
###### Abstract

The Cosmic Ray Extremely Distributed Observatory (CREDO) is a newly formed, global collaboration dedicated to observing and studying cosmic rays (CR) and cosmic ray ensembles (CRE): groups of at least two CR with a common primary interaction vertex or the same parent particle. The CREDO program embraces testing known CR and CRE scenarios and preparing to observe unexpected physics; it is also suitable for multi-messenger and multi-mission applications. Perfectly matched to CREDO capabilities, CRE could be formed both within classical models (e.g. as products of photon-photon interactions) and in exotic scenarios (e.g. as results of the decay of Super Heavy Dark Matter particles). Their fronts might be significantly extended in space and time, and they might include cosmic rays of energies spanning the whole cosmic ray energy spectrum, with a footprint composed of at least two extensive air showers with correlated arrival directions and arrival times. Since CRE are mostly expected to be spread over large areas and, because of the expected wide energy range of the contributing particles, CRE detection might only be feasible when using the available cosmic ray infrastructure collectively, i.e. as a globally extended network of detectors. Thus, with this review article, the CREDO Collaboration invites the astroparticle physics community to actively join or to contribute to the research dedicated to CRE, and in particular to share any cosmic ray data useful for the specific CRE detection strategies.

###### keywords: cosmic rays; cosmic ray ensembles; extensive air showers; large scale cosmic ray correlations; CREDO Collaboration

Cosmic Ray Extremely Distributed Observatory

Piotr Homola1*, Dmitriy Beznosko2, Gopal Bhatta3, Łukasz Bibrzycki4, Michalina Borczyńska5, Łukasz Bratek6, Nikolai Budnev7, Dariusz Burakowski8, David E.
Alvarez-Castillo1,9, Kevin Almeida Cheminant1, Aleksander Ćwikła10, Punsiri Dam-o11, Niraj Dhital12, Alan R. Duffy13, Piotr Głownia6, Krzysztof Gorzkiewicz1, Dariusz Góra1, Alok C. Gupta14, Zuzana Hlávková15, Martin Homola15, Joanna Jałocha6, Robert Kamiński1, Michał Karbowiak16, Marcin Kasztelan17, Renata Kierepko1, Marek Knap8, Péter Kovács18, Szymon Kuliński5, Bartosz Łozowski19, Marek Magryś20, Mikhail V. Medvedev21,22, Justyna Mędrala23, Jerzy W. Mietelski1, Justyna Miszczyk1, Alona Mozgova24, Antonio Napolitano25, Vahab Nazari1,9, Y. Jack Ng26, Michał Niedźwiecki10, Cristina Oancea27,28, Bogusław Ogan8, Gabriela Opiła23, Krzysztof Oziomek20, Maciej Pawlik20,23, Marcin Piekarczyk4, Bożena Poncyljusz5, Jerzy Pryga29, Matías Rosas30, Krzysztof Rzecki23, Jilberto Zamora-Saa31, Katarzyna Smelcerz10, Karel Smolek32, Weronika Stanek23, Jarosław Stasielak1, Sławomir Stuglik1, Jolanta Sulma33, Oleksandr Sushchov1, Manana Svanidze34, Kyle Tam35, Arman Tursunov36, José M. Vaquero37, Tadeusz Wibig16, Krzysztof W. Woźniak1

Correspondence: <EMAIL_ADDRESS> Tel.: +48-12-662-8341

## Introduction

While state-of-the-art cosmic ray research to date has been focused on the detection and analysis of cosmic particles observed through individual detectors or arrays, correlated observations of cosmic rays on the global scale remain insufficiently explored, yet no less promising. This collaborative perspective could provide deeper insight into the physical processes within energy ranges rarely considered, including the highest energies known. Here we discuss a general approach to research dedicated to detecting and studying the astroparticle physics phenomena called Cosmic Ray Ensembles (CRE), defined as groups of at least two cosmic rays, correlated spatially or temporally, with a common primary interaction vertex or the same parent particle. Such particles, the constituents of CRE, are messengers of the primary physical processes: probes of physics that happened even billions of years ago, at energies even millions of times higher than those to which we can accelerate particles using man-made infrastructure.

Figure 1: Cosmic-Ray Ensembles: a novelty in cosmic-ray research and in multi-messenger astroparticle physics.
Armed with the particle physics background telling us that cosmic ray particles are expected to interact with radiation and matter on their way through the Cosmos and give birth to CRE, we ask not whether CRE exist, but under which circumstances and with which conditions they can reach the Earth and be detected with the available or possible infrastructure. As illustrated in Fig. 1, the signatures of CRE might be spread over very large surfaces which might make them hardly detectable by existing detector systems operating individually. On the other hand, if these detector systems operate under a planetary (and beyond) network, as proposed by The Cosmic-Ray Extremely Distributed Observatory (CREDO) Homola and et al. (2018) (CREDO Collab.); Góra and et al. (2018) (CREDO Collab.); Homola and et al. (2020) (CREDO Collab.), the chances for detection of CRE will naturally increase. The components of CRE might have energies that practically span the whole cosmic-ray energy spectrum. Thus, all the cosmic ray detectors working in this range, beginning from smartphones (e.g. DECO Vandenbroucke and et al (2016), CRAYFIS Whiteson et al. (2016), CREDO Detector cre ; Bibrzycki and et al. (2020) (CREDO Collab.); Niedzwiecki and et al. (2019) (CREDO Collab.), Cosmic Ray App cos ) and pocket size open-hardware scintillator detectors (e.g. Cosmic Watch Axani et al. (2018); Schaub et al. (2018) or CosmicPi cos ), through numerous larger educational detectors and arrays (e.g. HiSPARC Colle et al. (2007); Fokkema (2012), QuarkNet Bardeen et al. (2018), Showers of Knowledge Bychkov and Guskov (2012), CZELTA Smolek and et al. (2008)) to the professional infrastructure that receives or will receive cosmic rays as a signal or as a background Pierre Auger Observatory Pierre Auger Collaboration (2015), Telescope Array Kawai and et al. (2008), JEM-EUSO Adams and et al. (2015a) (JEM-EUSO Collab.); Adams and et al. (2015b) (JEM-EUSO Collab.), HAWC DeYoung (2012), MAGIC Ferenc (2005), H.E.S.S. 
Vasileiadis and et al. (2005), VERITAS Krennrich and et al. (2004); Park and et al. (2015), IceCube Aartsen and et al. (2017) (IceCube Collab.), Baikal-GVD D and et al. (2016) (Baikal-GVD Collab.), ANTARES Telescope Ageron and et al. (2011) (ANTARES Collab), European Southern Observatory Franza and Wilson (1982), other astronomical observatories, underground observatories, and accelerator experiments in the off-beam mode) could contribute to a common effort towards a hunt for CRE. Therefore it is not only desirable, but also feasible, to put CRE research into routine implementation. So far, the experimental searches for cosmic-ray correlations have been realized on different scales with arrays of detectors located at schools and universities. However, all those efforts were dedicated to very specific scenarios, concerning mostly the fragmentation of nuclei in background electromagnetic radiation fields Gerasimova and Zatsepin (1960), which limits the number of particles in the group (ensemble) to just a few. Some of these projects include e.g. CHICOS Carlson et al. (2005) in the U.S., ALTA Pospıšil et al. (2009) in Canada, GELATICA (Svanidze et al., 2011) in Georgia, EEE Abbrescia and et al (2016) in Italy, LAAS Ochi and et al (2003); Matsumoto and et al (2019) in Japan, and the aforementioned CZELTA in the Czech Republic. Time correlation of registered showers was studied at distances from 100 m to 7000 km, and in some cases evidence for unexpected coincidences has been found. However, these were without any convincing follow-up studies and data taking campaigns, which are hard to organize without global coordination. Only very recently has the idea of looking for large scale correlations in a general and global way taken shape with the CREDO Collaboration, formalized in September 2019 Collaboration .
CREDO is meant to be a multi-technique (different detector types) and doubly open (both for data upload and for offering access) infrastructure enabling global research programs concerning radiation (both cosmic and terrestrial), with a number of multi-messenger, multi-mission and transdisciplinary opportunities. With this review article we invite the community both to benefit from the openness of CREDO and to contribute to its program with research dedicated to a general search for ensembles of cosmic rays, especially photons of different energies. Since the concept of CREDO assumes openness to independent data streams, it is expected that the projects mentioned above, as well as other cosmic ray infrastructure including private, widely spread detectors such as smartphones, will have a direct interest in connecting to the global CREDO system, thus reinforcing its scientific attractiveness and the prospects of individual research programs. As this article is meant to serve as a review of the current status and perspectives of CREDO and the research related to CRE, its structure follows the scientific and logical roadmap to observing and studying cosmic ray ensembles. We begin with a general description of the field in Section 1 “Foundations of the CREDO methodology” and Section 2 “CREDO within the cosmic ray landscape”, and theoretical modeling of CRE sources in Section 3 “UHECR sources and cosmic ray ensembles”. Then we present and discuss example CRE scenarios with an emphasis on simulations of the propagation of CRE components through the Cosmos and within the Earth’s atmosphere (Section 4 “CRE simulations”), describe the status of the observational efforts (Sections 5 “CREDO detectors: cloud of clouds”, 6 “Data management and analysis”, and 7 “Building the scale: public engagement as a scientific need”), and conclude the article with an outlook, summary and conclusions (Sections 8 and 9).
## 1 Foundations of the CREDO methodology

The CREDO experiment, by its very idea of making discoveries and approaching truth in various research areas, touches what one may call the borderline of the current state of our knowledge. As explained below, there are reasons to anticipate that the experiment has the potential to falsify some of the adopted models. In the context of CREDO one can even ask questions about science itself and its methodology. As an example one can consider exotic QED processes which are potentially within the observational scope of CREDO, namely pair creation or photon splitting in strong magnetic fields. These predictions of QED can now be tested in the context of cosmic ray physics under extreme astrophysical conditions met in pulsar magnetospheres (with typical magnetic field strengths of $10^{12}$ Gauss, or even $10^{14}$ Gauss in magnetars). In such conditions there are several observational signatures of the two processes Usov (2002); Harding and et al (1996). What is more, the photon splitting phenomenon (in exotic scenarios) fits very well in the context of cosmic ray ensembles (CRE), as long as the opening angle of the secondary photons is not too large. Therefore, the CREDO experiment opens up new opportunities to test the well-established QED theory as well as to consider more exotic scenarios. The experiment is potentially capable of changing our view of the basis of science, specifically by providing a new type of means to falsify generally accepted theories in an energy range that has not been available to observatories to date. In this way one arrives at a strictly philosophical question that naturally arises in the context of CREDO: the issue of parsimony of the scientific method, or Occam’s razor.
It has been accepted that the scientific method of explaining and understanding facts should follow the principle of not multiplying entities without necessity. Explaining all new phenomena should be based on what one has already had, namely, based on the theories and models that successfully worked so far, and only if this attempt is found to have failed one may consider rejecting these theories and models. Occasionally, the parsimony principle, when understood in a fundamentalistic manner, would stand in the way of knowledge. This kind of epistemological attitude might lead to supplementing old models and theories with new “epicycles”, rather than encouraging one to humbly admit that some of the assumptions being adopted so far need to be rejected as inconsistent with reality. A solution to this state of things is to presume that the parsimony principle, although an invaluable component of the scientific method, is not something the scientific method entirely stands on. Sometimes one has to bypass the strict rule (which in practice is done by the majority of the scientific community). Accordingly, if the CREDO experiment (or some other experiment) comes up with an anomalous result, not falling within the framework of accepted models and theories, first the possibility of an error should be considered, be it in the measurement or interpretation side. However, one should also not be afraid of pushing the limits and going beyond the realm of modern knowledge by extending or redefining the adopted concepts as well as the language with which we describe reality. The CREDO initiative inspires one to deliberate on falsifiability, another important element of the scientific method. It is often said that scientific statements should be falsifiable – refutable by contradicting evidence. This is not definitely true. 
For example, general statements about existence are most often unfalsifiable; nevertheless they can easily be verified empirically, which makes such statements quite scientific. Even though science is not entirely determined by falsifiability, this attribute is very advantageous and often characterizes statements formulated by science, either as models or as scientific theories. Classical electrodynamics provides a good example of a falsifiable theory. For the purpose of illustration, imagine that CREDO (or another experiment of this kind) comes up with some evidence for spontaneous photon splitting (to stay in touch with the experimental scope of CREDO pointed out earlier), a phenomenon for which there is no place in linear Maxwell electrodynamics. There are effective terms that could be incorporated into the electromagnetic Lagrangian to mimic such effects at the classical level. To be more specific, some QED vacuum polarization effects such as photon-photon scattering or photon splitting in external fields can be effectively described by a classical theory (the Euler-Heisenberg Lagrangian is used in this context); however, these nonlinear phenomena are merely interaction effects arising upon quantization of linear Maxwell fields in accordance with the methods of quantum field theory, and are therefore already explainable within the current paradigm. The real change in the electromagnetic theory required to incorporate new phenomena, such as spontaneous photon splitting occurring in free space and observed through cosmic ray ensemble evolution, would be to replace the Maxwell Lagrangian by an alternative one resulting in nonlinear equations before quantization, or perhaps even to alter the quantization method as such. Does this mean that Maxwell's electrodynamics would simply be falsified by an observation of spontaneous photon splitting?
Stated clearly, it is not that simple: scientific theories are deeply founded, routinely withstanding repeated attempts at falsification. In the first place, one should try to explain a particular observational result within the current theory. Only if such attempts turn out absolutely unsuccessful, and a growing number of anomalies are observed at the same time, would one conclude that the theory has been falsified, though the old theory will probably still survive as a good approximation (as was the case with Newtonian and relativistic mechanics). Thus, falsifying a theory is not an easy task, and certainly not possible based merely on a single observation or measurement. This being said, experiments such as CREDO appear even more valuable, since for either verification or falsification to be feasible one needs as many channels as possible through which the Universe is looked at.

## 2 CREDO within the cosmic ray landscape

Although CRE detectable on or around the Earth can be initiated by particles spanning a wide range of energies, it is not a priori evident whether the chances of observing a CRE increase with the energy of the primary particle, so one should stay open-minded about the energy regimes of investigation. Here, for clarity, we chose to focus on the cosmic rays of the highest energies known, $E>10^{18}$ eV, hereafter referred to as ultra-high energy cosmic rays (UHECR), capable of initiating CRE composed of billions of component particles which can propagate over large astrophysical distances unaffected, as in the case of GeV-TeV photons (see e.g. Risse and Homola (2007)), of which an observable fraction may reach the human technosphere. The surprisingly constant, power-law character of the energy spectrum of cosmic radiation, observed over more than ten orders of magnitude with an almost constant exponent of about $-3$, could in principle continue without any obstacles.
At least until the mid-1960s there was no reason to expect any definite end. The sources and mechanisms of acceleration of single elementary particles to energies of a dozen joules were unknown both then and now (the search is still ongoing). A review of objects in the sky that would potentially come into question, their spatial sizes ($L$) and magnetic fields ($B$), suggests that their capabilities ($E_{max}<ZeBL$) do not reach far beyond the limit of $10^{20}$ eV. It should be remembered that any acceleration mechanism by its nature would have to be of a statistical character and therefore, when analysing the average or typical parameters of primary cosmic ray particles, it is difficult to exclude the occurrence of large and extremely large fluctuations Tsallis et al. (2003). The situation changed substantially after the discovery of the cosmic microwave background (CMB) radiation. If we assume that the primary particles of cosmic radiation are protons, then, as Greisen (1966) and Zatsepin and Kuzmin (1966) noticed immediately, a sudden end of the spectrum must be caused by their collisions with CMB photons and the resonant production of the $\Delta^{+}$ particle. The $\Delta^{+}$ decays instantly back into a nucleon whose energy is on average about 20% less than before the collision. This process continues as long as the nucleons have enough energy, i.e. down to about $5$-$6\times 10^{19}$ eV. However, the particles of cosmic radiation do not have to be protons. Observations suggest (e.g. Wibig and Wolfendale (2005)) that heavier nuclei with a higher charge ($Ze$), and thus easier to accelerate, may entirely dominate the highest energy cosmic radiation flux. For nuclei with energies of about $10^{20}$ eV, when energies are calculated separately for each nucleon, the GZK mechanism does not work, but this does not mean that the Universe remains transparent for them.
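The proton energy degradation described above is easy to sketch numerically. In the toy estimate below, the ~20% average loss per $\Delta^{+}$-resonance interaction is taken from the text, while the ~6 Mpc interaction length above threshold is an assumed round number for illustration only:

```python
# Toy sketch of GZK energy degradation for protons (illustrative only).
E_THRESHOLD_EV = 5e19     # approximate photopion-production threshold
LOSS_FRACTION = 0.20      # average energy lost per p + gamma_CMB -> Delta+
MEAN_FREE_PATH_MPC = 6.0  # assumed interaction length above threshold

def gzk_horizon(e0_ev):
    """Return (interactions, distance in Mpc) needed for a proton of
    initial energy e0_ev to fall below the photopion threshold."""
    energy, n = e0_ev, 0
    while energy > E_THRESHOLD_EV:
        energy *= 1.0 - LOSS_FRACTION
        n += 1
    return n, n * MEAN_FREE_PATH_MPC

n, dist = gzk_horizon(1e21)
print(n, dist)  # 14 interactions, ~84 Mpc
```

Even a proton starting at $10^{21}$ eV falls below the threshold within roughly a hundred megaparsecs, which is why sources of trans-GZK protons must be relatively nearby.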
In the centre-of-mass system, when colliding with photons of intergalactic radiation (mostly infrared), such nuclei are excited to a giant dipole resonance and then fragment, most often emitting a few neutrons. Although the fragments retain the same energy per nucleon, as a whole they have correspondingly reduced their total energy. Searching for the end of the cosmic radiation spectrum, according to what has just been said, could thus lead to solving the problem of the mass composition of cosmic radiation of the highest energies, and so bring us closer to identifying its sources (and acceleration mechanisms). Unfortunately, however, both the GZK cut-off and the photodisintegration of nuclei start to work effectively in a similar energy range, just where today's observations of cosmic radiation end because of the scarce cosmic ray flux. In addition to the limited statistics, another circumstance is that the two giant experiments in operation today which statistically dominate the measurements of giant showers in the highest energy range, the Pierre Auger Observatory in Argentina and the Telescope Array in Utah, US, while claiming a general agreement of the energy spectra within experimental uncertainties up to 10 EeV, admit the need for a non-linearity to bring the spectra into agreement at higher energies and within the range of common declinations. However, the sources of this non-linearity have not been identified yet Deligny (2020). It turns out, then, that the current status of the UHECR observations does not allow definite conclusions about the exact location and nature of the observed cut-off of the energy spectrum, implying uncertainty about the cosmic ray composition at the highest energies known. Either way, the results of the leading UHECR observatories and their conclusions from widely quoted publications (Abbasi et al. (2008) (HiRes Collaboration); Abbasi et al. (2014); Abraham et al.
(2008) (Pierre Auger Collaboration); Valino (2016)) indicate that the spectrum of cosmic radiation is significantly suppressed at the highest energies, although it is not known at which energy, and whether this energy depends e.g. on the kind of the sources. Such a picture is widely accepted and, despite minor scratches, small inconsistencies and doubts, it seems that we understand it. There should not be many particles above the cut-off, and in fact this is the case. However, among the aforementioned minor doubts there remain the events recorded in the last century by several (in fact almost all) large and significant giant air shower experiments, initiated by particles with estimated energies exceeding $10^{20}$ eV. The first, historical, Volcano Ranch event was recorded by Linsley in 1962 with an energy of $10^{20}$ eV Linsley (1963). In the 1980s the Haverah Park experiment reported a significant increase in the number of showers with energies exceeding $\sim 10^{20}$ eV Cunningham et al. (1980); Watson (1991); Lawrence et al. (1991). In the Yakutsk array, the record energy was rated at $1.5\times 10^{20}$ eV Afanasiev et al. (1995). The Japanese AGASA experiment published the event from December 3, 1993, which had an energy of $2\times 10^{20}$ eV Teshima et al. (2000). However, the world record was set in the USA in the Fly's Eye experiment: $3.2\times 10^{20}$ eV Bird et al. (1995). These cases have not been discussed in the literature recently, but it seems that they require some explanation. The first, more straightforward explanation is that the experimenters might have been misled about the energy reconstruction of their record events by the imperfectness of the tools available at that time, with particular emphasis on numerical tools.
Thankfully, the Monte Carlo methods developed in the 21st century are certainly more precise, and today they allow making more adequate corrections than decades ago, in particular in experimental procedures such as the localization of the shower axis or the estimation of the shower energy, whether by collecting fluorescence light or by determining the charged particle density distribution on the ground. And this explanation could be enough, provided one does not discuss in detail the statistics of the "above $10^{20}$ eV" cosmic ray detections of 20th century experiments in contemporary analyses. But, as already mentioned, the two great experiments, the Pierre Auger Observatory and the Telescope Array, have also reported around 20 UHECR events with energies above $10^{20}$ eV observed already with the newest tools and methods (see e.g. Ref. Aab et al. (2020) (Pierre Auger Collaboration), where 15 events with energies above $10^{20}$ eV are mentioned collectively). To list just a few such events in detail, we mention the Pierre Auger Observatory measurements which contain an event with energy $1.4\times 10^{20}$ eV Abreu et al. (2010) (Pierre Auger Collaboration) – or $1.3\times 10^{20}$ eV Aab et al. (2015) (Pierre Auger Collaboration) – and the Telescope Array announcement concerning an event of similar energy, $1.39\times 10^{20}$ eV Abbasi et al. (2014). It is of course clear that if the cosmic ray energy spectrum breaks around $5$-$6\times 10^{19}$ eV, this is most likely a (gradual) suppression rather than an abrupt cut-off, so there must be a few events exceeding $10^{20}$ eV. However, the current statistics of these events do not allow one to tell whether they are compatible with the spectrum cut-off or not. It is a great achievement that today we can say that the UHECR spectrum has been quite well measured – see for example two recent Pierre Auger Observatory papers Aab et al.
(2020) (Pierre Auger Collaboration), and that a lot of effort and resources are currently being invested into explaining the nature of the observed spectrum suppression (GZK-like versus acceleration limit). Nevertheless, the current results are still inconclusive, and it is therefore advisable to continue research in the highest energy regime of cosmic rays and to try alternative methods whenever possible. In other words, the status of the dispute in the UHECR area encourages a closer look at the field and a readiness for a major revision or breakthrough in the understanding of physics at the highest energies known. The CREDO initiative, with its objectives dedicated to going beyond studying individual cosmic rays and taking under investigation also UHECR products, cosmic ray ensembles, may provide a precious complementary approach to UHECR studies.

## 3 UHECR sources and cosmic ray ensembles

The Standard Model (SM) of particle physics predicts that if cosmic rays (CR) are primarily composed of protons, there should exist a limit on the maximum energy of the CR coming from distances farther than $\sim 100$ Mpc. This bound is called the GZK limit and was presented in Refs. Greisen (1966); Zatsepin and Kuzmin (1966); Stecker (1968). Despite this, UHECR with energies above the GZK limit have been reported by experimentalists from directions where there are no nearby sources (the current experimental data suggest an extragalactic origin for UHECRs with energies above the GZK cutoff Aab et al.) Zotov et al. (2017); Verzi et al. (2017). Therefore, it seems that there is a missing piece in our understanding of the sourcing, nature, and/or propagation of the CR. The cosmic ray background has been simulated by means of numerical propagation codes Kalashev et al. (2001); Aloisio et al. (2012), and the results have shown that it is very unlikely for CR with energies greater than the GZK limit to be photons Aab et al.
(2017) (Pierre Auger Collaboration). In general, one can distinguish two qualitatively different approaches to unveiling the physics of UHECRs: theoretical models assuming interactions of exotic super-heavy matter (including extra dimensions, Lorentz invariance violation, cosmic strings, the existence of new particles, etc. Weiler (1982, 1984); Aloisio et al. (2015); Tyler et al. (2001); Domokos and Kovesi-Domokos (1999); Coleman and Glashow (1997); Bhattacharjee and Sigl (2000); Bietenholz (2011); Scully and Stecker (2009); Gorham et al. (2012); Rubtsov et al. (2014); Tasson (2014); Rubtsov et al. (2017); Mohanmurthy et al. (2016); Klinkhamer and Schreck (2008); Alcantara et al. (2019); Aloisio and Tortorici (2008); Supanitsky and Medina-Tanco (2019)), and acceleration scenarios describing processes in which the particles are accelerated by a particular astrophysical object (shocks in relativistic plasma jets, unipolar induction mechanisms, second-order Fermi acceleration, etc.). Acceleration scenarios rely on the existence of powerful astrophysical sources with available energy sufficient for the energy transfer from these objects to cosmic ray particles. In the age of multi-wavelength and multi-messenger astronomy, transient astronomical objects are of great interest for UHECR emission. There are several classes of astronomical objects which can be prime targets for UHECR observations, e.g. gamma-ray bursts (GRBs); supernovae (SN); fast radio bursts (FRBs); various classes of active galactic nuclei (AGN) such as Seyfert galaxies, radio galaxies and blazars; and possible neutrino-emitting blazars. In the recent past, evidence has accumulated that these objects emit or can emit UHECRs. A 5-millisecond bright fast radio burst of extragalactic origin was detected (Lorimer et al., 2007), and it has been claimed that radio galaxies emit UHECRs (Nagar and Matulich, 2008).
There is evidence of hadronic $\gamma$-ray emission from supernova remnants (Moskalenko et al., 2008). Blazars are among the most prominent candidate sources of UHECR emission. Several papers have estimated or predicted neutrino and UHECR emission from blazars (e.g., Murase et al., 2012; Rodrigues et al., 2018, and references therein). About two decades ago, it was predicted that some very bright high-energy-peaked blazars would be neutrino loud (Neronov and Semikoz, 2002). Recently, the IceCube Collaboration found evidence of neutrino emission from the blazar TXS 0506$+$056 (IceCube Collaboration, 2018), which opened a new window to search for other such blazars.

### 3.1 Supermassive black holes as UHECR sources

Among the most powerful astrophysical sources, one can highlight supermassive black holes (SMBHs), located in the centers of most galaxies, due to their compactness and the enormous energy available for extraction. Below we briefly show the capability of SMBHs to power the UHECRs in a given model. It appears that up to $29\%$ of the total energy ($M_{\rm BH}c^{2}$) of a black hole is rotational energy available for extraction (Bekenstein, 1973). Nowadays, all tests of general relativity indicate that astrophysical black holes can be fully characterized by their masses and spins, while other properties of black holes are hidden inside the event horizon and unavailable to the external observer. For a typical SMBH of $10^{9}$ solar masses the extractable energy is of the order of $10^{74}$ eV, turning SMBHs into the Universe's largest and most compact energy reservoirs. Therefore, it is important to search for the processes which tap these enormous energy sources in the most efficient way. Attempts to tap the energy of black holes started in 1969 with Roger Penrose Penrose (1969), followed by many authors (see, e.g., references Blandford and Znajek (1977); Wagh et al. (1985); Parthasarathy et al. (1986); Dadhich et al.
(2018) and references therein), who used the existence of negative energy states of particles with respect to a stationary observer at infinity. If a particle decays into two fragments near a black hole, with one of the fragments attaining negative energy, the other fragment (respecting energy conservation) may escape from the black hole with greater energy than that of the primary particle. The remarkable property of a rotating black hole is the existence of a special region outside the event horizon called the ergosphere, where the energy of a particle relative to infinity can be negative. However, negative energy states may also appear due to purely electromagnetic interactions between the SMBH and surrounding matter, without the need for an ergosphere (Tursunov and Dadhich, 2019; Tursunov et al., 2020). Black holes, like any astrophysical objects, are immersed in magnetic fields. Rotation of a black hole in an external magnetic field leads to the twisting of magnetic field lines due to the frame-dragging effect. Similar to a classical Faraday unipolar inductor, a black hole rotating in a magnetic field attains a non-zero induced electric charge (Wald, 1974). This charge has been shown to be non-negligible for any astrophysical black hole and is potentially observationally measurable (Zajacek et al., 2018). Since the induced charge of the black hole is coupled and proportional to the black hole spin, its discharge is equivalent to slowing down the black hole and decreasing the black hole's rotational energy. To demonstrate the process of black hole energy extraction and the resulting acceleration of particles to ultra-high energy, one can consider the ionization or decay of a neutral particle into charged fragments in the vicinity of a rotating SMBH immersed in an external magnetic field.
If the black hole possesses an induced electric charge, the energy of the charged fragments after the decay of the neutral particle obtains a strong Coulombic contribution in addition to the gravitational negative energy in the ergosphere. Since the induced charge of SMBHs is most likely positive (Zajacek et al., 2018), it is the protons which would escape from the black hole with tremendous energy (Tursunov et al., 2020). This is the ultra-efficient regime of the so-called magnetic Penrose process, which can serve as a possible mechanism for the acceleration of UHECRs when applied to SMBH candidates. The energy of an escaping UHE particle depends on its charge-to-mass ratio, the strength of the external magnetic field and the mass of the black hole. For protons accelerated in this mechanism, the maximum energy is predicted to be $E_{\rm p}=1.3\times 10^{21}\,{\rm eV}~\frac{Z\,m_{p}}{m}~\frac{B}{10^{4}\,{\rm G}}~\frac{M}{10^{10}M_{\odot}}.$ (1) In Figure 2 we demonstrate the acceleration of UHE protons resulting from hydrogen ionization in the SMBH vicinity. Similar results are obtained for neutron beta decay ($n\rightarrow p^{+}+e^{-}+\bar{\nu}_{\rm e}$). Here we provide verifiable constraints on the SMBH mass and magnetic field as a UHECR source. As examples we indicate a few famous nearby SMBH candidates which are capable of producing UHE protons of certain energies. It is interesting to note that the black hole located at the centre of our Galaxy can accelerate particles up to the energies corresponding to the knee of the cosmic ray spectrum. On the right side of Figure 2 we demonstrate the results of a numerical simulation of the magnetic Penrose process (for details, see Tursunov et al., 2020; Stuchlík et al., 2020; Tursunov and Dadhich, 2019). Remarkably, the mechanism operates under viable physical conditions for typical SMBHs, without the need for a large acceleration zone or exotic assumptions about black holes.
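The scaling of Eq. (1) is easy to evaluate numerically. The sketch below quotes the formula as given; the example field and mass values are illustrative assumptions (in particular the $B\sim 10^{2}$ G field assumed for the Galactic Centre black hole), not fitted source data:

```python
# Numerical evaluation of Eq. (1): maximum proton energy in the
# ultra-efficient magnetic Penrose process. Formula quoted from the text;
# the (B, M) example values below are illustrative assumptions only.
def penrose_max_energy_ev(b_gauss, m_bh_solar, z=1, m_over_mp=1.0):
    """E_p = 1.3e21 eV * (Z m_p / m) * (B / 1e4 G) * (M / 1e10 M_sun)."""
    return 1.3e21 * (z / m_over_mp) * (b_gauss / 1e4) * (m_bh_solar / 1e10)

# A typical SMBH: M = 1e9 M_sun, B = 1e4 G -> ~1.3e20 eV (trans-GZK)
print(f"{penrose_max_energy_ev(1e4, 1e9):.2e} eV")

# Sgr A* (M ~ 4e6 M_sun) with an assumed B ~ 1e2 G -> ~5e15 eV,
# i.e. around the knee of the cosmic ray spectrum, as noted in the text.
print(f"{penrose_max_energy_ev(1e2, 4e6):.2e} eV")
```

The linear dependence on $B$ and $M$ is what produces the diagonal constraint bands shown in the left panel of Figure 2.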
Figure 2: Left: constraints on the SMBH mass and magnetic field for UHE protons and chosen nearby sources (at distances $<20$ Mpc from the Solar system) fitting UHECRs of certain energies. Data are taken from (Baczko et al., 2016; Eckart et al., 2012; Doeleman et al., 2012; Kino et al., 2015; Daly, 2019; Piotrovich et al., 2020); the tSMBH source corresponds to a typical SMBH of mass $10^{9}M_{\odot}$ and magnetic field $10^{4\pm 0.5}$ G. Right: numerical simulation of the decay (ionization) of a freely falling neutral particle (grey thick) into two charged fragments in the vicinity of a rotating black hole immersed in an external magnetic field. The positively charged fragment (blue) is accelerated by the black hole and escapes to infinity along the symmetry axis. The negatively charged fragment (red) ultimately falls into the black hole, extracting its rotational energy (see details in Tursunov et al., 2020).

### 3.2 Axion-like particles as UHECRs

A different approach to explaining the observation of UHECR is to avoid the GZK cutoff via particles beyond the SM. In such a case UHECR must be composed of new particles that fulfill the following conditions: (i) they are stable enough to reach the Earth from cosmological distances; (ii) they interact very weakly with the cosmic microwave background and the extragalactic magnetic fields, so that they do not lose too much energy; (iii) they have a significant flux at their origin; and (iv) they interact strongly enough in the near galaxy, with the Sun or with the Earth's magnetic field. One of the most popular candidates is the axion, first proposed by Peccei and Quinn by introducing an additional global axial symmetry to the standard model Lagrangian Peccei and Quinn (1977a, b), allowing the strong CP problem to be solved dynamically. Additionally, axions were proposed as candidates to avoid the GZK cutoff in references Mirizzi et al. (2008); Csaki et al. (2003).
However, it has been shown that it is extremely unlikely that axion production, together with the axions' conversion to photons by the galactic magnetic field, accounts for UHECR within the present exclusion limits Gorbunov et al. (2001). A different scenario comes from the consideration of particles with features similar to those of axions, called axion-like particles (ALPs). The general Lagrangian involving ALPs reads $\mathscr{L}_{\textsc{alp}}=\frac{1}{2}\partial_{\mu}A\ \partial^{\mu}A-\frac{1}{2}m_{A}^{2}A^{2}-\frac{g_{\textsc{alp}}}{4}AF_{\mu\nu}\tilde{F}^{\mu\nu},$ (2) where $A$ is the ALP field, $m_{A}$ is the ALP mass, $g_{\textsc{alp}}$ is a model-dependent coupling between ALPs and photons, and $F_{\mu\nu}$ and $\tilde{F}_{\mu\nu}$ are the electromagnetic field strength and its dual, respectively. The Lagrangian in Eq. (2) provides a decay channel for ALPs into photons ($A\to\gamma\gamma$), which plays a fundamental role in experimental searches. To leading order, the width for this decay is given by Beringer et al. (2012); Cadamuro and Redondo (2012) $\Gamma_{A}\left(A\to\gamma\gamma\right)=\frac{g_{\textsc{alp}}^{2}m_{A}^{3}}{64\pi}.$ (3) An important feature of ALPs is that they can be converted into photons (and vice versa) by means of the Primakoff effect Halprin et al. (1966); this effect occurs when a strong external magnetic field exists in the region where the ALPs propagate. The Primakoff effect can induce an ALP-photon oscillation, similar to neutrino flavor oscillation Sikivie (1983). This ALP-photon oscillation changes the polarization of photons traveling in external magnetic fields, providing an additional mechanism for detecting these pseudoscalar particles Maiani et al. (1986). The ALP-photon oscillation could also produce an apparent dimming of distant sources, affecting the luminosity-redshift relation of type Ia supernovae, the dispersion of quasar spectra, and the spectrum of the CMB Mirizzi et al.
(2008). On the other hand, due to the weak interaction between ALPs and the CMB, ALPs can travel across the Universe essentially without decaying. The decay length of a particle is given by $\lambda_{A}=\frac{E_{A}}{\Gamma_{A}m_{A}},$ (4) where $E_{A}$ is the energy of the ALP. If we require the decay length to be at least of the order of magnitude of the size of the observable universe $R_{U}$, as considered in Ref. Gorbunov et al. (2001), one finds the following condition $R_{U}\lesssim\lambda_{A}\equiv\frac{E_{A}}{\Gamma_{A}m_{A}}\quad\Rightarrow\quad\Gamma_{A}\lesssim\frac{E_{A}}{R_{U}m_{A}}.$ (5) Eq. (5) allows one to establish a restriction on the ALP coupling which allows the ALP to reach the Earth from distances beyond the $R_{\textsc{gzk}}$ radius: $g_{\textsc{alp}}\lesssim\left(\frac{64\pi E_{A}}{R_{U}m_{A}^{4}}\right)^{1/2}.$ (6) This scenario has been studied in Refs. Gorbunov et al. (2001); Csaki et al. (2003); Fairbairn et al. (2011), constraining the parameter space according to current experimental data.

### 3.3 Dark Matter as a source of UHECR

Two long-standing problems of contemporary astrophysics can be formulated as: (i) What is the nature of Dark Matter? and (ii) How can we explain the existence of cosmic rays with energies greater than $10^{20}$ eV? A plausible hypothesis is that these two mysteries of science can be explained within just one scenario. This is in agreement with "Occam's razor", which favors the smallest possible set of solutions to a collection of seemingly independent problems. According to this scenario, Super Heavy Dark Matter (SHDM) may decay or be destroyed via annihilation (see e.g., (Chung et al., 1999)). This implies the production of super massive particles, with rest energies of $E\geq 10^{23}$ eV, in the early Universe during the inflation phase. Such particles could annihilate or decay presently, leading to the production of jets containing copious amounts of less massive secondaries, possibly or even mainly photons.
The energies of these secondaries could easily be of the order of $10^{20}$ eV, a value that seems to be out of reach for the acceleration processes in the potential sources. The key prediction of the scenarios in the SHDM group is that the UHECR flux observed on Earth should be dominated by photons (see e.g., (Rubtsov et al., 2006)). On the other hand, the highest energy events observed by the leading collaborations, the Pierre Auger Observatory and the Telescope Array, are not considered photon candidates if the present state-of-the-art air shower reconstruction procedures are applied. In fact there are no photon candidates within the energy range where the SHDM model should give a photon flux, i.e., for energies above $10^{18}$ eV, and this non-observation leads to very stringent upper limits on photon fluxes (Petrera, 2019; Aab et al. (2019), Pierre Auger Collaboration; Abu-Zayyad et al. (2013), Telescope Array Collaboration). However, there are two main concerns about such conclusions. First, the present state-of-the-art analysis does not take into account mechanisms that could potentially lead to hadronic air showers being closely mimicked by showers induced by photons, e.g., a very efficient splitting of the primary energy into secondary photons/particles and an underestimation of photonuclear interaction cross-sections. Second, the present state-of-the-art analysis does not take into account mechanisms that could lead to an efficient screening or cascading of the very-high or ultra-high energy photons on their way to the Earth, e.g., interactions under special cases of Lorentz invariance violation (Klinkhamer and Schreck, 2008), so that the products of such screening or cascading are spread over large areas and thus out of reach of the presently operating observatories, which is then interpreted as a non-observation of ultra-high energy (UHE) photons.
If the first possibility occurs, this would mean that UHE photons might already have been detected and might be in the data, just not properly identified. If the second possibility takes place, namely that the real properties of cosmic rays and the relevant propagation mechanisms are not well understood, then one has little or no chance to detect most of the very high energy photons that travel towards us. Both possibilities obviously question the conclusion about the limitations imposed on the SHDM scenarios by the presently accepted upper limits on photon fluxes. Furthermore, such a conclusion can be accepted only if both concerns above are alleviated. A detailed study of this issue is imperative in the forthcoming years. An inherent part of the SHDM scenario is the very existence of such DM particles. It is not an overstatement that science currently has no clue what DM particles are, nor what their properties would be. For a long time, the Weakly Interacting Massive Particles (WIMPs) were the most favored candidate. There was a reason for that: the "WIMP miracle". The number density of the particles that freeze out in a certain epoch of the early universe is set by the balance of two rates: the production-annihilation rate $n_{X}\langle\sigma v\rangle$ and the Universe's expansion rate $H$ (the Hubble constant at that epoch). Note that $H$ characterizes the temperature of the plasma in the universe $T$, so the number density of particles which are in thermal equilibrium can drop exponentially fast with lowering temperature if the particles are heavy, $m_{X}c^{2}\gg k_{B}T$. On the other hand, the cross-section depends on the coupling constant (i.e., the type of interaction) and the particle's mass. Next, the mass density of DM in the universe depends on both the number density and the mass, $\Omega_{X}\propto n_{X}m_{X}$, where $\Omega_{X}$ is the ratio (at present) of the particle-"X" mass density to the critical density of the universe.
Furthermore, the coupling constant describing weak interactions, the Fermi constant $G_{F}\approx 1.1\times 10^{-5}$ GeV$^{-2}$, naturally introduces a new mass scale of about $100$ GeV. Finally, if the cross-section is the weak cross-section and the particle's mass is about $m_{X}\sim 100$ GeV, then the abundance of "$X$" is $\Omega_{X}\propto\langle\sigma v\rangle^{-1}\sim m^{2}_{X}/g_{X}^{4}$ and $g_{X}\sim 0.6$, so $\Omega_{X}\sim 0.1$. Thus, "$X$" is Dark Matter. This is the "WIMP miracle": particle physics independently predicts particles with the right mass density to be Dark Matter. In this scenario, $\langle\sigma v\rangle\simeq 3\times 10^{-26}\textrm{ cm}^{3}\textrm{s}^{-1}$; since $v\leq c$, the cross-section should be $\sigma\geq 10^{-36}\textrm{ cm}^{2}$. Numerous ongoing direct-detection DM experiments have reached sensitivity levels that correspond to $\sigma\sim 10^{-44}-10^{-45}\textrm{ cm}^{2}$ in the mass range of interest, from tens of GeV to a few TeV, without a statistically significant and reproducible detection. Thus, the WIMP miracle is at odds with experiment. The dismissal of the most favorable WIMP miracle in Dark Matter theory has ignited great interest in alternative scenarios. At present, all bets regarding the non-standard DM scenarios are on the table. There is no deficit of putative candidates, including the super-heavy dark sector candidates. Here are some examples. (1) Magnetic monopoles ('t Hooft, 1974; Polyakov, 1974) are topological defects that naturally appear in Grand Unified Theories (GUT) and carry a magnetic charge. The natural mass scale for them is the GUT scale, $\sim 10^{25}$ eV. However, the actual mass of a monopole is not constrained, and all mass scales above the one that can be probed in an experiment are considered. Being topological defects, monopoles are copiously produced in a GUT phase transition, creating a severe over-closing problem, $\Omega_{X}\gg 1$.
This problem is remedied by inflation, which dilutes the monopole abundance enough not to contradict observational and recent cosmological constraints (Medvedev and Loeb, 2017). (2) Wimpzillas (Kolb and Long, 2017) are superheavy DM particles whose mass scale is many orders of magnitude above the conventional WIMP scale, possibly as large as the GUT scale. The WIMP mass cannot exceed hundreds of TeV; otherwise, heavier WIMPs would over-close the universe, $\Omega_{X}>1$. Therefore, Wimpzillas, like monopoles, are not ‘thermal relics’. They emerge out of thermal equilibrium right after inflation, and their density is not determined by the balance of the production-annihilation rate and the universe’s expansion rate. (3) Planckian-scale particles (Garny et al., 2016, 2018, 2019) are a whole class of candidates that appears in a minimal scenario of DM, where only gravitational interactions with the standard model are assumed. There is only one parameter in the scenario – the particle mass, which ranges from TeV up to the GUT scale. These particles are assumed to be produced by gravitational scattering in the thermal plasma of the Standard Model sector after inflation. For example, the Kaluza-Klein excitation of the graviton in string theory is one of the realizations of this scenario. The above discussion does not present a complete list of SHDM candidates, of course, but just a few popular examples; there are more theoretically predicted candidates. The search for signatures of propagation and/or decay of particle primaries beyond the Standard Model is to be taken seriously and thoroughly. In this regard, the CREDO observatory can serve as an instrument of indirect search for one of the non-standard Dark Matter candidates: super-heavy particles with masses equal to or exceeding $10^{23}$ eV.
Such particles could be produced in the early Universe around the inflation phase and decay or annihilate at present, leading to observable products in the UHE range (Bhattacharjee, 2000). The interest in the SHDM scenario is also supported by certain difficulties that the theory of electromagnetic acceleration faces in explaining UHECR, discussed below in Section 3.4.

### 3.4 Constraint on electromagnetic acceleration of UHECR

An electrically charged UHECR traversing a region of size $R$ filled with a magnetic field $B$, assumed to be uniform, loses energy to synchrotron radiation (Medvedev, 2003) according to $\frac{dE}{dx}=F_{rad}=-\frac{2}{3}\left(\frac{Ze}{Am_{p}c^{2}}\right)^{4}B^{2}E^{2},$ (7) where $A$ and $Z$ are the atomic mass and charge of the accelerated nucleus, and $m_{p}$ is the proton mass. The solution to this equation, evaluated at $x=R$, is trivial: $E=\left(E_{0}^{-1}+E_{cr}^{-1}\right)^{-1}.$ (8) Thus, for an arbitrarily large initial energy, $E_{0}\to\infty$, the particle energy cannot exceed the ‘critical energy’ threshold: $E_{cr}=\frac{3}{2}\left(\frac{Am_{p}c^{2}}{Ze}\right)^{4}\left({B^{2}R}\right)^{-1}\sim 3\times 10^{16}\frac{(A/Z)^{4}}{B_{G}^{2}R_{kpc}}~{}\textrm{eV},$ (9) where $B_{G}$ is in Gauss and $R_{kpc}$ is in kiloparsecs. Furthermore, even if one continuously accelerates the particle along its curved path via an inductive electric force $F_{EM}=Ze\,E_{ind}\simeq Ze(v/c)B\sim Ze\,B$, one has the following energy evolution: $\frac{dE}{dx}=F_{EM}+F_{rad}=ZeB-\frac{2}{3}\left(\frac{Ze}{Am_{p}c^{2}}\right)^{4}B^{2}E^{2}.$ (10) It has a beautiful solution (assuming the initial energy is small compared to the final energy): $E=\sqrt{E_{acc}E_{cr}}\tanh\sqrt{E_{acc}/E_{cr}},$ (11) where $E_{acc}=Ze\,B\,R\sim 9\times 10^{23}Z\,B_{G}\,R_{kpc}$ (12) is the maximum energy the accelerated particle can reach without losses. This solution has two obvious asymptotic scalings.
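As a numerical cross-check of Eqns. (9), (11), and (12), the short sketch below (function names are ours, not from the source) evaluates the solution for a proton in a kiloparsec-scale, Gauss-strength field and recovers both asymptotic regimes:

```python
import math

def e_cr(A, Z, B_G, R_kpc):
    """Critical energy set by radiative losses, Eqn. (9), in eV."""
    return 3e16 * (A / Z) ** 4 / (B_G ** 2 * R_kpc)

def e_acc(Z, B_G, R_kpc):
    """Loss-free electromagnetic acceleration limit, Eqn. (12), in eV."""
    return 9e23 * Z * B_G * R_kpc

def e_final(A, Z, B_G, R_kpc):
    """Final energy with continuous acceleration and losses, Eqn. (11), in eV."""
    ea, ec = e_acc(Z, B_G, R_kpc), e_cr(A, Z, B_G, R_kpc)
    return math.sqrt(ea * ec) * math.tanh(math.sqrt(ea / ec))

# Proton (A = Z = 1) in B = 1 G over R = 1 kpc: losses dominate
# (E_acc >> E_cr), so the energy saturates at sqrt(E_acc * E_cr),
# i.e. the ~1e20 eV scale of Eqn. (13).
print(f"{e_final(1, 1, 1.0, 1.0):.2e} eV")

# Weak-field, small-size case: losses are negligible, and the
# Hillas limit E ~ E_acc is recovered.
print(f"{e_final(1, 1, 1e-6, 1e-3):.2e} eV")
```

For the first case the result is $\sqrt{9\times 10^{23}\cdot 3\times 10^{16}}\approx 1.6\times 10^{20}$ eV, consistent with Eqn. (13).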
If $E_{acc}\ll E_{cr}$, i.e., losses are small, one recovers the Hillas constraint (Hillas, 1984), $E\simeq E_{acc}$, and in the opposite limit one has $E_{\rm max}\simeq\sqrt{E_{acc}E_{cr}}\sim 10^{20}A^{2}Z^{-3/2}B^{-1/2}\ ~{}\textrm{eV}.$ (13) Figure 3: The $B$-$R$ “diagram of state” for UHECR acceleration (Medvedev, 2003). The long-dashed line is the original Hillas relation, Eqn. (12), for a proton of energy $3\times 10^{20}$ eV. The short-dashed and dot-dashed lines represent the radiative cooling constraints given by Eqns. (9) and (13), for the same proton energy. Above the dot-dashed line, the force of radiative friction dominates over the electromagnetic force. The solid lines represent the boundaries of the allowed parameter regions for protons with energies $3\times 10^{20}$, $10^{22}$, and $3\times 10^{23}$ eV, from the outermost to the innermost “wedge”, respectively. The dotted lines are the same boundaries for an iron nucleus with energy $3\times 10^{20}$ eV. Only those astrophysical objects which fall inside the “wedges” are, in principle, capable of accelerating particles to such energies. The grey vertical lines mark two characteristic spatial scales: the GZK attenuation distance ($\sim 20$ Mpc) and the Hubble horizon size ($\sim 4$ Gpc). For GRB sources we took into account that the Lorentz boost changes with radius. These results are summarized in Figure 3. All the curves, except for the two dotted ones, correspond to protons ($A=Z=1$), and the dotted ones are for iron nuclei ($A=56,\,Z=26$). The details are discussed in the figure caption. Here we make a few important conclusions. First, there is an absolute maximum energy of UHECR accelerated electromagnetically and subject to radiative losses (e.g., in dipolar magnetospheres, galactic and extragalactic shocks, and such), which is given by the balance of the acceleration and losses. This places an upper limit on the magnetic field strength, given by the dot-dashed horizontal line and by Eq.
(13): $B\lesssim E_{20}^{-2}A^{4}Z^{-3}\textrm{ Gauss},$ (14) where $E_{20}=E/(10^{20}\textrm{ eV})$. Second, there is a corresponding limit on the size of an accelerator. This is the size that corresponds to the “tip of the wedges”, which can be obtained, for instance, from Equations (13) and (12), namely $R\gtrsim 6\times 10^{-2}E_{20}^{3}A^{-4}Z^{2}\textrm{ pc}.$ (15) This constraint effectively rules out compact accelerators, such as neutron stars, white dwarfs, and the like; large objects, such as galactic halos, radio lobes, and galaxy clusters, become favorable. Third, electromagnetic acceleration of UHECR beyond $\sim 3\times 10^{22}$ eV is hard because the size of the accelerator becomes comparable to the GZK distance. Furthermore, acceleration beyond $\sim 3\times 10^{23}$ eV would require an accelerator of the size of the observable universe, which is questionable. Fourth, our analysis above does not take into account that the source may be moving relativistically with a large Lorentz factor, as in a gamma-ray burst outflow, for example. Accounting for this relaxes the size and field-strength constraints, but not very dramatically as far as UHECR are concerned. Fifth, our analysis excludes special arrangements such as linear accelerators. If one can arrange particle acceleration along a straight path, e.g., strictly along a static straight magnetic field line, then the particle experiences no synchrotron energy loss, regardless of the field strength (as long as it is still well below the QED ‘Schwinger’ field strength). In this case, our analysis is inapplicable.

### 3.5 SQM objects

Recently, strange quark matter (SQM) objects (either in the form of stars or planets) have been shown to efficiently convert mechanical energy into hadronic energy when they oscillate (Kutschera et al., 2020).
This is possible thanks to the property that the mass density at the edge of SQM objects, $4.7\times 10^{14}\,\mathrm{g}/\mathrm{cm}^{3}$, is the critical density below which SQM is unstable with respect to decay into photons, hadrons, and leptons. Either compression or expansion of an SQM object, such as oscillation-induced deformations, releases energy. Oscillations of SQM objects could be induced in stellar or planetary systems, where tidal interactions are ubiquitous. The excitation energy is converted into electromagnetic energy in a short time of $1\,\mathrm{ms}$, during a few oscillations. Higher-amplitude oscillations result in faster energy release, which could even lead to fragmentation or dissolution of SQM objects. In the context of CREDO, it would be interesting to observe periodic signatures of such events. SQM stars and planets are very sensitive to radial oscillations. The amplitude of oscillations is physically determined by the excitation process in close encounters between an SQM object and another compact star or black hole. The energy transferred to the oscillations can reach $10^{53}\,\mathrm{erg}$ when the closest approach distance is 3 times the star radius. By mode couplings, monopole (that is, spherically symmetric radial) oscillations would also be excited. For violent encounters, the induced oscillations of the radius result in excitation energy in the surface layer of every SQM object, equally for stars of pulsar masses and planetary-mass SQM objects. Fractional amplitudes of radial oscillations $\chi=10^{-6}$ are quite possible. The corresponding deposited energy to be radiated away for a star of mass $1.4\,\mathrm{M}_{\odot}$ is estimated to be $6.6\times 10^{36}\,\mathrm{erg}$. For more violent encounters (such as the inspiral of a tightly bound binary system), one can expect amplitudes even of $\chi={10}^{-3}$ and the corresponding energy orders of magnitude higher.
A tightly bound binary system could induce periodic distortions of an SQM companion and associated periodic bursts of intense radiation (together with gravitational radiation). In the evolution of binary systems with SQM objects, the energy loss due to excitations of SQM stars and planets must be accounted for, and this can significantly change the predictions obtained with unexcited SQM objects (this would be particularly relevant to binary gravitational-wave sources, whose template calculations should account for excitations of SQM objects). To understand the possible radiation scenarios, one may consider the following estimates taken from Kutschera et al. (2020). For small SQM objects, called planets, the excited energy is deposited in the whole volume of the object. The energy then scales quadratically with the fractional amplitude of radial oscillations $\chi$ – for planets, $E=7.4\times 10^{48}\cdot\frac{M}{\mathrm{M}_{\oplus}}\chi^{2}\,\mathrm{erg}$. In particular, $E=7.4\times 10^{38}\,\mathrm{erg}$ for an Earth-mass SQM planet undergoing a radial deformation of amplitude $\chi=10^{-5}$. As the mass of the considered objects increases, the deposition zone shrinks toward the surface and the energy law changes. In the intermediate region of masses, the energy scaling law can be obtained only numerically. For masses on the order of the solar mass, the energy deposition zone becomes a thin layer and the total energy to be eventually radiated away can again be estimated analytically; it scales cubically with the fractional amplitude of oscillations $\chi$ (the resulting formula is given in Kutschera et al. (2020)). Furthermore, one also has to take into account the more complicated mass-radius relation for relativistic SQM stars implied by the numerical solution for the equilibrium state.
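The quadratic planet scaling can be checked directly; a minimal sketch (helper name ours), using the numbers quoted from Kutschera et al. (2020):

```python
def sqm_planet_energy(mass_earth, chi):
    """Excitation energy of an SQM planet,
    E = 7.4e48 * (M / M_Earth) * chi^2 erg,
    the quadratic scaling quoted from Kutschera et al. (2020)."""
    return 7.4e48 * mass_earth * chi ** 2

# Earth-mass SQM planet with fractional radial amplitude chi = 1e-5:
print(f"{sqm_planet_energy(1.0, 1e-5):.1e} erg")  # 7.4e+38 erg
```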
For the representative star model of $M=1.4\,\mathrm{M}_{\odot}$, the radius is $R=10.3\,\mathrm{km}$ and the energy to be released behaves as $E=6.6\times 10^{54}\chi^{3}\,\mathrm{erg}$. For the amplitude $\chi=0.001$, this gives $E=6.6\times 10^{45}\,\mathrm{erg}$ in a $20\,\mathrm{m}$-wide deposition shell. The process of excitation of quark matter is an irreversible one: the excitation energy will eventually be dissipated into heat and radiation. In the intermediate phase, the excitation energy is converted into electromagnetic energy. The generation rate of radial oscillations of the SQM star can be estimated by assuming that the timescale of radial oscillations is comparable to that already known in the model without the excitation energy, which is $T=0.37\times 10^{-3}\,\mathrm{s}$. This period is much larger than the timescale for electromagnetic interactions, $10^{-16}\,\mathrm{s}$, so the excitation energy can be assumed to be converted into electromagnetic energy inside the SQM star instantaneously. The rate of electromagnetic energy generation within a quarter of the oscillation period is $7.2\times 10^{49}\,\mathrm{erg}/\mathrm{s}$ for $\chi=10^{-3}$ (and it increases with $\chi$, e.g., $6.7\times 10^{55}\,\mathrm{erg}/\mathrm{s}$ for $\chi=0.1$). Further investigations are required to find how much of this energy will be radiated away and how fast this will proceed. A rough conservative estimate made in Kutschera et al. (2020) shows that the luminosity for the energy radiated away by the outermost shell, of thickness on the order of the photon mean free path, would amount to $L=1.3\times 10^{34}\,\mathrm{erg}/\mathrm{s}$ (for $\chi=10^{-3}$), with a corresponding effective temperature of $2.0\times 10^{6}\,\mathrm{K}$. The total energy of $E=6.6\times 10^{45}\,\mathrm{erg}$ released in the star would thus sustain the radiation for $1.6\times 10^{4}$ years. However, neutrino cooling will switch on in $10^{-6}\,\mathrm{s}$ and the star will cool fast.
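The $1.6\times 10^{4}$-year figure follows directly from dividing the released energy by the shell luminosity; a quick arithmetic check (constant names ours):

```python
E_TOTAL = 6.6e45     # erg, energy released in the star for chi = 1e-3
LUMINOSITY = 1.3e34  # erg/s, outermost-shell luminosity (Kutschera et al. 2020)
YEAR_S = 3.156e7     # s, one year

# Time over which the released energy can sustain the quoted luminosity.
sustain_time_yr = E_TOTAL / LUMINOSITY / YEAR_S
print(f"{sustain_time_yr:.1e} yr")  # ~1.6e+04 yr
```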
The estimated neutrino cooling time for the whole volume of the star, $1.4\times 10^{3}\,\mathrm{s}$, is $3.5\times 10^{8}$ times shorter than the electromagnetic radiation time (respectively, $1.5\times 10^{5}\,\mathrm{s}$ for the volume of the initial deposition shell, which is $3.4\times 10^{6}$ times shorter). With the considered excitation mechanism accounted for, extreme oscillations, e.g. with the large amplitude $\chi=0.1$, are rather excluded in nature, since the energy of such oscillations would be less than the excitation energy generated during a single oscillation; the oscillations would therefore be damped within the first cycle. For lower amplitudes, the energy generation time is longer. More accurate calculations would require taking into account the evolution of the temperature of the SQM star. The damping of radial oscillations by weak quark processes, although efficient, is not expected to determine the discussed strong-interaction phase, though it might dominate the thermalization of excited SQM.

### 3.6 Neutron star collapse to third family

Massive neutron stars may support in their cores exotic states of matter beyond the hadronic one, such as deconfined quark matter; such stars are denoted hybrid stars. In the case of a strong first-order phase transition inside compact stars, their mass-radius relation presents a gap of unstable configurations between the pure hadronic and hybrid stars. The latter populate the so-called “third branch”, following the hadronic neutron star and white dwarf branches. The transition scenarios may correspond to a neutron star configuration lying at the top end of the hadronic branch, whose central density is right below the critical value for deconfinement. There are several ways the central density of such a neutron star can increase, thus triggering the transition to a third-branch configuration.
For instance, a fast rotating neutron star may lose energy by dipole emission, resulting in spin-down, or a neutron star in a binary system may undergo accretion-induced spin-up by matter from a companion. The latter case has recently been proposed as an explanation for the eccentric orbits of binary systems where the neutron star is able to accrete matter from a circumbinary disk created after a low-mass X-ray episode. Consequently, the neutrino burst which accompanies the deconfinement phase transition in the neutron star interior may trigger a pulsar kick producing the observed eccentric orbit Alvarez-Castillo et al. (2019). A pair of compact stars of about the same mass, each lying in a different branch of the mass-radius relation, are usually called “twin stars”, see Benic et al. (2015); Alvarez-Castillo and Blaschke (2017); Blaschke et al. (2020). The aforementioned dynamical scenarios, where one of the neutron stars collapses into its twin star, may conserve the total baryon number, resulting in a mass defect of less than a tenth of a solar mass for state-of-the-art equations of state, which corresponds to an energy of a few $10^{51}$ erg Alvarez-Castillo et al. (2015). It is therefore feasible that a deconfinement phase transition in compact stars produces energetic emissions, for instance acting as an engine for Gamma Ray Bursts, with an accompanying characteristic neutrino signal as well as high energy cosmic rays. An analogous situation can be discussed in the framework of neutron star mergers, where both electromagnetic and gravitational radiation are emitted, together with cosmic rays. For the GW170817 event, it has been estimated that the cosmic-ray flux is able to support the population of cosmic rays detected at Earth below the “ankle” Rodrigues et al. (2019).

### 3.7 UHECR as the spacetime structure probe?

The CRE hypothesis can be considered a candidate scenario for a number of unexplained cosmic-ray measurements.
The examples include Smith et al. (1983) (a 32 TeV EAS within 5 minutes, while only one such event would have been expected) and Fegan et al. (1983) (a simultaneous increase in the cosmic-ray shower rate at two recording stations 250 km apart). On the one hand, the mentioned measurements were single observations, not seen by other groups; on the other, nobody has checked further on a global scale. We are going to do so within this project – using the CREDO meta-structure composed of the detectors operated by numerous research groups in all the available energy ranges. This will offer a chance to confirm or question the aforementioned historical reports, potentially leading to the observation of New Physics effects, including probing the spacetime structure. Although the expectations concerning potential observations of spacetime structure manifestations through effects accumulated over large astrophysical distances are highly uncertain, due to the missing physics formalism below the Planck scale, as stated e.g. in Ng, Y Jack (2003); Carlip (2015), the accumulation of time delays of high-energy photons travelling astrophysical distances is conceivable, especially if one considers photons of different energies emitted simultaneously. The cumulative time delay of higher-energy photons with respect to the lower-energy ones might or might not depend on photon energies – it is hard to tell, due to the missing formalism regarding the proper averaging of spacetime foam fluctuations Ng, Y Jack (2003). (Due to quantum fluctuations, spacetime, when probed at very small scales, will appear very complicated – something akin in complexity to a chaotic turbulent froth, which physicist John Wheeler dubbed quantum foam, also known as spacetime foam.) If the cumulative effects of spacetime foam fluctuations are independent of photon energies, then they might be too small to be observed.
On the other hand, if they depend on energy, one cannot exclude time delays even as large as of the order of minutes – comparable to the observations mentioned above. Therefore if, as hoped in CREDO, we can observe CRE composed of high energy photons of different energies, possibly spanning the whole cosmic ray energy spectrum, i.e. from GeV to ZeV, then in any case we will have a new “spacetime probe” at hand – bringing new input for tuning the existing, or even building new, spacetime models, be it delay evidence or null results that further constrain the available theories. Thus we are entitled to expect that within this proposal we will contribute in a novel way to understanding the cumulative effects of spacetime foam fluctuations, which would mean a contribution to the foundations of science. While proper averaging methods for cumulative spacetime foam fluctuations are yet to be developed (as stated in Ng, Y Jack (2003)), and despite the uncertainty concerning the energy dependence of such effects, not exploring the new research opportunity offered by CREDO would be a methodological mistake. The research in the proposed direction should begin with a wide review of spacetime structure models which predict differences in time delays between the arrival times of high energy photons of different energies. With such a review, the experimental efforts dedicated to CRE could be prioritized accordingly, so that appropriate detection and monitoring algorithms could be developed, including alerting mechanisms on a subthreshold level to be analysed in accordance with multi-messenger astrophysics strategies. The latter direction gives promising perspectives for new advances in astrophysics, reflected also in the new attention of theorists, including those addressing questions related to spacetime structure (see e.g. Carlip (2019); Wang and Unruh (2019); Ng, Y Jack (2019)), and private interest in discussions (P. Homola’s private communication with S. Carlip, Y. J. Ng, and Q. Wang).

## 4 CRE simulations

High energy particles that propagate through the Universe unavoidably interact with background radiation, magnetic fields, and matter, initiating cascades of secondary particles which might be observed with available techniques – as in artificial particle colliders, but on a much larger scale. While an interaction of a particle during its propagation through the Cosmos is commonly approximated with extinction, one has to admit that the logically correct question about the chances of observing CRE is not whether they exist, but when and how we could observe them. This argument is illustrated with Fig. 4. Let us consider a CRE initiated near the Earth, taking as an example the preshower effect Erber (1966); McBreen and Lambert (1981); Homola et al. (2005); Dhital et al. (2018) (CREDO Collab.), i.e. an interaction of a UHE photon and subsequent secondary electrons with the geomagnetic field above the Earth’s atmosphere. The word “preshower” is meant to describe the result of the initial interaction: “pre” emphasizes the location of the interaction vertex above the atmosphere, i.e. before the extensive air showers (EAS) are initiated; “shower” refers to the number of particles that emerge instead of one primary UHE photon. Since the distribution of preshower particles (mostly photons) above the atmosphere is confined to a very small region (a fraction of cm$^{2}$), the resultant extensive air shower will have properties very similar to air showers initiated by single primary particles of the energies corresponding to the photon which initiated the preshower. Thus the observation of a preshower-like CRE will require nothing more than a standard infrastructure dedicated to the detection of UHECR, and in Fig. 4 (left panel) we name this scenario an “obvious detection” example.
On the other hand, if a CRE particle distribution is so sparse that only one particle can reach an ideal detector system at a time, there is no technical chance to interpret this individual particle as a component of a CRE. Thus we consider this scenario an obvious technical limitation for CRE observation, named “obvious extinction” in Fig. 4 (right panel). Consequently, the focus of any research dedicated to CRE is limited to the scenarios that can result in particle distributions less sparse than the “obvious extinction” limit, as also illustrated in Fig. 4 with the “obvious between” CRE case (middle panel), so far unexplored with a globally coordinated observational effort. Figure 4: Detection of Cosmic Ray Ensembles – obviously within experimental reach, although not yet probed – using ultra-high energy photons as example primary particles. As far as CRE scenarios are concerned, the ongoing simulation studies include synchrotron radiation of high energy electrons in the presence of planetary, stellar and galactic magnetic fields. As shown e.g. in Ref. Risse and Homola (2007), in the case of the preshower effect occurring due to UHE photon interactions with the geomagnetic field, the resultant EAS can hardly be distinguished from those initiated by individual particles based on standard air shower observables like the atmospheric depth of shower maximum development or the muon content in the particle distribution on the ground – unless a large statistics of events is available. Given the stringent UHE photon limits, one does not expect many photon- or preshower-induced events to be observed even with the largest air shower detector arrays like the Pierre Auger Observatory or the Telescope Array. Instead, one might consider alternative observables and alternative infrastructures, more sensitive to electromagnetic multi-primary EAS origins, e.g. those connected with Cherenkov emission induced by air shower particles and observed by gamma-ray telescopes.
A study in this direction was presented in Ref. Almeida Cheminant et al. (2020) (CREDO Collab.), where the authors analyze the feasibility of detecting preshower-induced EAS using Monte-Carlo simulations of nearly horizontal air showers for the example of the La Palma site of the Cherenkov Telescope Array (CTA), as illustrated in Fig. 5. Figure 5: An ultra-high energy photon interacting with the transverse component of the geomagnetic field produces an $e^{+}/e^{-}$ pair ${\sim}$1000 km above sea level, which emits bremsstrahlung photons. As such a process can repeat itself for some of these photons, a collection of particles (mainly photons and a few $e^{+}$ and $e^{-}$) reaches the top of the atmosphere. Consequently, atmospheric air showers are produced, and in the case of nearly horizontal showers only the muonic component reaches the Imaging Atmospheric Cherenkov Telescopes (IACTs) on the ground, which detect the Cherenkov emission of this component. Almeida Cheminant et al. (2020) (CREDO Collab.) It was demonstrated that there is a realistic chance of identifying preshowers induced by 40 EeV photons coming from an astrophysical point source during 30 hours of observation. This result confirms that the Imaging Atmospheric Cherenkov Telescope (IACT) technique can be used to probe physical phenomena not only in the TeV domain but also in the EeV regime, with a particular connection to CRE-related physics. Although, as inferred from the upper limits to UHE photons, the rate of expected preshower events for the studied configuration of the observatory is quite low, the gamma/hadron separation obtained by adopting the nearly-horizontal observation mode allows strong filters to be applied in order to identify such events with a high degree of confidence. Searches for particles with low expected flux using IACTs have been performed previously, as is the case for the tau neutrino Ahnen et al.
(2018) or UHE cosmic rays, as demonstrated in Neronov et al. (2016). Moreover, multimessenger alerts obtained from other operating observatories may allow fast pointing of the telescopes towards sources suspected of being capable of producing UHE photons, e.g. via interactions between UHE cosmic rays possibly produced by AGNs and the cosmic microwave background, or during gamma-ray bursts, which would significantly increase the chances of observing UHE photons and preshower-like CRE. Such potential is well illustrated by the correlation of the arrival direction of a 290 TeV neutrino observed by IceCube with gamma emissions from the blazar TXS 0506+056 observed by MAGIC and FERMI-LAT IceCube Collaboration et al. (2018). Moreover, a program of observation could be run on catalogs of high energy sources, with an observation time significantly longer than the 30 h presented in Ref. Almeida Cheminant et al. (2020) (CREDO Collab.). Figure 6: An example energy spectrum of a cosmic ray ensemble generated by an ultra-high energy photon in the vicinity of the Sun, after the interaction with the solar magnetic field (primary photon energy: $10^{20}$ eV, heliocentric latitude of “the closest approach point”: $0^{\circ}$, impact parameter: 3 $\mathrm{R}_{\odot}$). Dhital et al. (2018) (CREDO Collab.) To generalize the notion of a “preshower”, the term “super-preshower” was introduced (see e.g. Dhital et al. (2018) (CREDO Collab.)). Super-preshowers (SPS) are cascades of electromagnetic particles – thus a subset of the potentially wider CRE family – originated above the Earth’s atmosphere, regardless of the initiating process and the distance to the Earth. Super-preshowers can be classified with respect to their potentially principal observable properties: spread in space and arrival times. For example, a cascade initiated due to the preshower effect in the geomagnetic field (even at altitudes as high as 10000 km a.s.l.) is expected to have a lateral spread above the atmosphere (100 km a.s.l.)
on the order of millimeters, and a negligible spread of arrival times Homola et al. (2005); Homola et al. (2013). If the preshower effect were to occur in the vicinity of the Sun, one would still expect a negligible spread of arrival times, but the lateral spread might then reach the size of the Earth Dhital et al. (2018) (CREDO Collab.) or even larger. The resultant SPS signature is then expected to be composed of even hundreds of thousands of extensive air showers, forming a characteristic, very thin (order of centimeters) and elongated (up to millions of kilometers) pattern. This example shows that by analyzing the properties of CRE/SPS one might approach attributing a non-trivial physical scenario to the observable event category, provided the underlying uncertainties are properly understood and quantified – as planned by CREDO. The few calculations performed so far in this direction can still serve only as qualitative indications concerning potential CRE/SPS observables. For instance, in Ref. Bednarek (1999) the lateral spread of an SPS originated near the Sun is simulated using a private code and assuming that the SPS is composed only of photons of energies larger than $E=10^{17}$ eV, while from the more detailed calculation done in Ref. Dhital et al. (2018) (CREDO Collab.), obtained with the open source public code PRESHOWER Homola et al. (2005); Homola et al. (2013), one learns that the energy spectrum of SPS particles might extend over a wide range, down to TeV and lower, as illustrated in Fig. 6. TeV photons would certainly induce air showers that would contain particles, mostly muons, observable on the Earth’s surface.
While in large cosmic-ray observatories these muons are treated as background, the complete CRE/SPS-oriented research we propose here has to include proper handling of this “unwanted muon background”, which can be processed to extract a signal induced by ensembles of low energy air showers arriving simultaneously at the detector – a very clear, unique, and so far untested signature. We make one step further and propose a global analysis of the data from the available detectors to search for extremely spread CRE/SPS events, inaccessible to the largest observatories taken individually. The experimental question which can be addressed in this regard is: “which CRE/SPS fronts, and in which circumstances, can be detected by a network of devices located on Earth or around it?”. It is a question about the SPS/CRE particle density at the top of the atmosphere (or, speaking more generally, on top of and within the technosphere) and the particle density, or any other observable information, e.g. Cherenkov light, related to the resultant air shower ensembles. Another type of CRE scenario, where electromagnetic particles play a role, is the propagation of very high energy electrons through intergalactic and galactic magnetic fields. One of the analyses in this direction currently being carried out within the CREDO Collaboration concerns the propagation of electrons of energies between $10^{17}$ eV and $10^{19}$ eV within the Galaxy, using CRPropa Batista et al. (2016) – a Monte Carlo simulation of cosmic ray propagation – and state-of-the-art modeling of the galactic magnetic fields to quantify the chances of observing a CRE on Earth. The intermediate results indicate qualitatively that one might have a chance of observing a CRE originating from synchrotron radiation occurring within the Galaxy, or perhaps even at some other extragalactic sources.
Quantification of the observational chances is still on the way, but already the qualitative study conveys a very important message: even “conventional” and abundant electromagnetic processes like synchrotron radiation in galactic magnetic fields are expected to generate CRE reaching the Earth. At the end of this section it is worth emphasizing that the experimental strategies dedicated to CRE do not need to rely on specific theoretical scenarios – one might also “fish” for clearly non-random cosmic-ray global footprints. This approach is justified by the ultra-high energy physics uncertainties we are aware of – large enough to admit that we might not be able to imagine all the possible physics scenarios predicting a CRE signal that would be observable on Earth. It is therefore sensible “just” to fully explore the potential of the infrastructure we have at hand and go fishing for signatures we are not able to predict, but which we can distinguish from the diffuse (random) cosmic-ray background.
## 5 CREDO detectors: cloud of clouds
As explained above, any experimental strategy oriented towards the observation and investigation of large-scale cosmic-ray correlations in the form of widely defined CRE requires a global approach to cosmic-ray research. Since both extensive air showers and incoherent cosmic rays might originate from a CRE, one realizes that in fact any detector capable of detecting cosmic ray signals, whether on the ground or on satellites, can potentially serve as a valuable component of a global data acquisition system. In other words, the chances for CRE observation increase with any single detector or observatory joining the collective observational effort. Within this general concept, schematically depicted in Fig. 
7, the role of CREDO can be understood as an umbrella research program that enables a collaborative effort dedicated to CRE with the use of existing infrastructure and expertise, and with openness to designing and building complementary detectors or arrays – if required and justified by specific CRE models and research plans. Figure 7: The concept of the Cosmic Ray Extremely Distributed Observatory (CREDO): open on two ends (data upload and access), using both professional, dedicated cosmic ray infrastructure and off-the-shelf detection solutions, including smartphones. Widening the participation in the CREDO program increases its scientific potential. The CREDO program benefits all participants by being as open as possible through the following: removing or reducing, as far as possible, the non-scientific barriers that block findability, availability and interoperability of contributing data sets, and facilitating the usage of the systems, taking into account the different needs and levels of expertise of the individual participants and institutional stakeholders. To illustrate the technological diversity behind the general CREDO concept, in this section we briefly sketch a landscape of the detection techniques used in state-of-the-art cosmic ray research, with particular emphasis on the current situation, needs and interests of the CREDO Collaboration.
### 5.1 Cosmic ray detection techniques
The current CREDO collaboration with existing experiments and all the future planned detector installations allows for the extension of the widely used EAS (Extensive Air Shower) detection and analysis techniques. EAS detector systems are typically designed as a grid of individual detectors that spreads over a large area. This is due to two reasons: the disk of the EAS at the observation level can be up to a few kilometers in diameter for ultra-high energy events, and the frequency of such events per km$^2$ is low, so a larger grid area means a higher event rate. 
The most common technique that is applied as part of data analysis uses the particle density distribution within the EAS disk. The average distribution of the particle density in the disk varies with the distance from the shower axis in a known way; it additionally depends on the energy of the primary, $E_0$. Figure 8 Beisembaev et al. (2019) illustrates the distribution of the particle densities in simulated EAS disks at different distances from the axis of the simulated showers. This is shown for several values of $E_0$ of the primary protons. The simulations were done using the CORSIKA Heck et al. (1998) software package. Note that the particle density distribution is non-linear (approximately inverse quadratic far from the axis). Additionally, each individual detector within the grid provides the time of the hit, i.e. the timing information of when the signal in this detector went over a preset threshold. This information is used to determine the EAS arrival direction. Figure 8: Particle density distribution in simulated EAS disk vs. distance from axis for different $E_0$ Beisembaev et al. (2019). However, there is an unused piece of information here. As the EAS disk passes the detection level, the detectors can measure not only the integral number of detected particles but their distribution over time as well, getting an effective width of the disk Beznosko et al. (2020). This technique was first developed and used at the Horizon-T Beisembaev et al. (2020) experiment that holds close ties with the CREDO collaboration. This approach uses the width of the EAS disk as measured by different detectors during the event. Figure 9 Beisembaev et al. (2019) illustrates the widths of the same simulated EAS from Figure 8. The grey lines of the plot show approximated widths at the distances from the axis where the particle density is too low for reliable determination. 
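The timing-based direction reconstruction mentioned above can be sketched as a least-squares fit of a plane shower front to the hit times recorded on a detector grid. The layout, units and noiseless timing below are illustrative assumptions, not the procedure of any specific array:

```python
import math

C = 299792458.0  # speed of light in m/s

def fit_arrival_direction(hits):
    """Least-squares fit of a plane shower front t = t0 + (l*x + m*y)/c
    to hits given as (x, y, t) for detectors on a flat plane (z = 0).
    Returns the direction cosines (l, m) of the shower axis."""
    # Normal equations S p = r for the parameters p = (t0, l/c, m/c)
    S = [[0.0] * 3 for _ in range(3)]
    r = [0.0] * 3
    for x, y, t in hits:
        row = (1.0, x, y)
        for i in range(3):
            for j in range(3):
                S[i][j] += row[i] * row[j]
            r[i] += row[i] * t
    # Solve the 3x3 system by Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(S[k][col]))
        S[col], S[piv] = S[piv], S[col]
        r[col], r[piv] = r[piv], r[col]
        for k in range(col + 1, 3):
            f = S[k][col] / S[col][col]
            for j in range(col, 3):
                S[k][j] -= f * S[col][j]
            r[k] -= f * r[col]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        p[i] = (r[i] - sum(S[i][j] * p[j] for j in range(i + 1, 3))) / S[i][i]
    return p[1] * C, p[2] * C

# Synthetic shower: zenith 30 deg, azimuth 45 deg, noiseless hit times
theta, phi = math.radians(30.0), math.radians(45.0)
l_true = math.sin(theta) * math.cos(phi)
m_true = math.sin(theta) * math.sin(phi)
grid = [(float(x), float(y)) for x in (0, 100, 200) for y in (0, 100, 200)]
hits = [(x, y, (l_true * x + m_true * y) / C) for x, y in grid]
l_fit, m_fit = fit_arrival_direction(hits)
zenith = math.degrees(math.asin(math.hypot(l_fit, m_fit)))
print(round(zenith, 3))
```

With noiseless synthetic times the fit recovers the injected zenith angle; real reconstructions additionally weight the hits and account for the curvature of the shower front.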
The width behaves in a more linear way when compared to the particle density, specifically closer to the axis, and is weakly dependent on the primary particle energy $E_0$. Using all available information from arrival time, particle density distribution and disk width, one gains advantages such as the ability to do some analysis on events with the axis outside of the detector active area. This ability is due to the fact that the approximate position of the axis can be reasonably estimated from the disk width information. However, such detectors, and specifically their electronics, are more costly, as a fast Flash ADC and a fast PMT with a matching detection medium are required Beznosko et al. (2017). Figure 9: Simulated EAS disk width vs. distance from axis for different $E_0$ Beisembaev et al. (2019).
### 5.2 Overview of the different EAS detection techniques – scintillator, water Cherenkov, CMOS/CCD, air fluorescence, radio
Numerous methods are used by existing experiments to detect charged particles from the EAS disk. As we cannot see the particles themselves, the principle is to convert the passage of a particle through the detector into a signal that is easy to process, or to observe the result of the particle's interaction with the detection medium.
### CMOS/CCD
The most direct method involves using a silicon-based pixelated sensor. The most commonly found detectors of this type are in everyone’s cell phones and photo cameras – they are the CMOS/CCD photosensors (more information is available in Fossum and Hondongwa (2014)). On passage through a pixel, charged particles as well as gamma and x-rays affect CMOS/CCD sensors in a similar way as light does. If the camera cap is closed (or the cell phone lens is well covered) so that the sensor is in the dark, the pixels affected, or hit, by a particle passage will produce a response as if they were exposed to light. If more than a single pixel is hit, a part of the particle track may be detected as well, as shown in Figure 10. 
This method is currently used by the CREDO Collaboration using cell phone cameras and a special app Bibrzycki and et al. (2020) (CREDO Collab.); Niedzwiecki and et al. (2019) (CREDO Collab.), extendable or portable to a browser application. Figure 10: A sample event of a cosmic ray leaving a track on a CCD sensor Niedzwiecki and et al. (2019) (CREDO Collab.).
### Water-based Cherenkov detector
Another detection method based on observing the results of particle-matter interaction relies on the detection of Cherenkov light – an electromagnetic shock wave from a charged particle passing through a transparent medium (water, air, glass, plastic) faster than the speed of light inside that medium (read more on Cherenkov radiation in Watson (2011)). The shock wave consists of thousands of photons per 1 cm of particle path in the medium, which can be detected using fast photodetectors with internal signal amplification, such as a PMT (Photo Multiplier Tube). The design of the water-based Cherenkov detector is very simple – an insulated volume of water and at least one PMT that is optically connected with this volume. A charged particle creates a cone of Cherenkov light that is detected by the PMT. If the volume is very large and many PMTs are used, the Cherenkov light cone can be imaged and used for measuring particle momentum and direction: this technique is used, for example, by the Super-Kamiokande detector (for more information see Walter (2008)). The advantages of water-based detectors are the detection speed (on the order of one nanosecond Beznosko et al. (2017)) and the relatively lower cost, since the detection medium is water (typically, ultra-pure). However, the Cherenkov signal is relatively weak. Other disadvantages are low to no mobility and the need to insulate high-voltage electric components such as the PMT from the water. 
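As a back-of-the-envelope illustration of the Cherenkov condition described above (emission requires $\beta > 1/n$), a short script can compute the detection threshold for electrons and the emission angle in water; the refractive index is the standard textbook value, assumed wavelength-independent:

```python
import math

M_E_MEV = 0.511   # electron rest energy in MeV
N_WATER = 1.33    # refractive index of water (assumed constant)

# Cherenkov light is emitted only while beta = v/c exceeds 1/n
beta_thr = 1.0 / N_WATER
gamma_thr = 1.0 / math.sqrt(1.0 - beta_thr ** 2)
kinetic_thr = (gamma_thr - 1.0) * M_E_MEV   # threshold kinetic energy, MeV

# Emission (cone) angle for an ultra-relativistic particle, beta ~ 1
theta_c_deg = math.degrees(math.acos(1.0 / N_WATER))

print(f"electron threshold: {kinetic_thr:.3f} MeV, angle: {theta_c_deg:.1f} deg")
```

The resulting threshold of roughly a quarter of an MeV explains why essentially all relativistic shower particles entering the water volume radiate Cherenkov light.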
### Scintillator-based particle detectors
Scintillators are materials that produce so-called scintillation light in response to a charged particle passage, in addition to Cherenkov light. The main advantage of scintillators over water or glass as a detection medium is that they produce about 10–15 times more scintillation photons than Cherenkov radiation. Also, most scintillators are solid, so the detector is easily movable in most cases and does not require a lot of additional external support, though there are liquid and even water-based scintillators Bignell et al. (2015). Specifically, plastic scintillators are lightweight and inert solids that can be used for the production of detectors suitable for installation at schools. In such cases, a PMT cannot be used due to the high voltage being a hazard, but there are silicon versions of the PMT that work at a safe voltage, such as SiPM, MPPC, MRS, etc. Duspayev et al. (2016) Beznosko (2009). Thus, scintillator-based detectors are best suited for CREDO's long-term outreach and educational goals. The main disadvantages are the cost and a slower response when compared to Cherenkov light Beznosko et al. (2017) – on the order of 10 ns for plastic and liquid scintillators, and on the order of 100 ns for inorganic ones.
### Air fluorescence detectors
As the EAS disk travels through the air, the charged particles that comprise it ionize the air molecules along the entire EAS path. As the electrons return to their ground state, the energy is released as light. This light is typically collected by an arrangement of mirrors with multiple PMTs as light detectors (various designs and detectors are in use). The main advantage of this method is that, with some luck, when viewed at the right angle, the actual development of an EAS can be captured and accurate estimates of the properties of the parent particle can be made. The main drawback is that the observations can be done only during Moonless and clear nights. 
Fluorescence detectors are often combined with scintillator or water-Cherenkov detectors. More information can be found in Tkachev and et al (2013).
### Radio signal detectors
As the EAS disk moves in the Earth’s magnetic field, the positive and negative charges within it experience the Lorentz force in opposite directions and radiate in a radio frequency range of roughly 30–80 MHz Aab and et al. (2016) (Pierre Auger Collab.). This effect is strongest for EAS closer to the horizon and moving perpendicular to the Earth’s magnetic field lines. While many experiments are actively trying to use this method, it is a supplement to all the other detection methods listed.
### 5.3 The CREDO extension proposals
The involvement of non-scientists and outreach is one of the pillars of the CREDO collaboration. There are currently proposals to design and produce portable, most likely scintillator-based EAS detectors that could be used at educational establishments (schools, universities) as both demonstrators and part of the education process. The designs are all being proposed with flash ADC capabilities to utilize the expanded set of analysis techniques that involve using advanced EAS timing information, and possibly to extend the searches for new phenomena within EAS, such as the ‘unusual’ events described in Beisembaev et al. (2019) Beisembaev and et al (2019). One such design is CREDO-Maze, a project that will create a global, unique physical apparatus, which will consist of a network of local (school) measuring stations. The concept of the CREDO-Maze array was developed based on the 20-year-old Roland Maze Project Gawin et al. (2002); Feder et al. (2005). Technology has developed greatly since then, and the local shower array idea of Linsley Linsley (1985) can now be implemented much more easily and, critically, much more cheaply. 
Eventually it is planned to equip high schools with sets of at least four professional portable cosmic ray detectors connected locally and forming a small school EAS array. The feasibility of EAS detection with such a mini-network was demonstrated in Ref. Karbowiak et al. (2020), where pocket-size (sensitive surface of $\sim 25\,cm^{2}$), affordable (cost $\sim$ 100 USD per piece) scintillator detectors Axani et al. (2018) were used. The CREDO-Maze project uses technologically sophisticated measuring equipment in extracurricular activities: detectors of charged relativistic elementary particles will be made of small ($0.02\,m^{2}$) plastic scintillators. The light pulses will be collected by optical fibers shifting the wavelength from ultraviolet to green, and the light will then be converted into electrical signals by Silicon Photomultipliers. Further electronics will be based on high-speed digital circuits and microcontrollers to connect to higher-level servers via the Internet and WiFi links. One of the important parameters of the proposed equipment is the cost. It is easy to build expensive and complicated, 100$\%$ effective professional arrays. We are on the way to building a prototype whose cost (including scintillators, SiPMs, trigger electronics, storage and data transmission micro-computer) is below 200 USD (compared to 3000 € per detector for the $\mu$Cosmics detector in Petropoulos et al. (2020)). Prototypes of individual elements of the apparatus have been largely developed independently in several partner academic centres: the University of Lodz, the National Centre for Nuclear Studies and the Institute of Nuclear Physics in Poland, the Institute of Experimental and Applied Physics in the Czech Republic, and the Swinburne University of Technology in Australia. Completion of the whole and its technical adaptation for replication will be one of the interesting tasks of the project. 
It is an interesting concept to deliver to some of the end users (schools) kits which are adapted for this purpose, pre-assembled only in the basic, skill-intensive parts. This would allow students in their local project groups to build and assemble from them a fully operational and efficient whole, under the supervision, of course, of the staff of the institutes managing the local project networks. The independent construction of operating scientific equipment is an additional motivating element and undoubtedly increases the involvement of young people and the general interest of those not participating in the project. These effects were observed in previous attempts to implement similar activities on a smaller scale. On the other hand, it should be mentioned that the proposed devices are designed and implemented in such a way that, while maintaining high standards, they are as inexpensive as possible. Technologies will be developed to ensure that the measurement kits can be duplicated and distributed to end users as “self-assembly kits” with different degrees of sophistication of the finished components. As potential business projects they will be able, together with educational material packages and software, to provide a ready-made market product. With positive recommendations based on our research results, the potential market, i.e. the demand from educational institutions, seems to be quite considerable. The creation of local structures comprising young people involved and organised in research groups (led by teachers/educators), using network communication and based at science centres such as higher education institutions and universities, is an important step in the development of the institutional activities of research performing organisations as well as research funding organisations. The proposed actions open up new areas of innovation in non-formal, out-of-school education. 
Creating a model system of social communication networks and demonstrating its effectiveness in the proposed field, being an element of STEM, will allow us to plan and create similar networks in other areas of education. There are no contraindications for such networks to cover various groups of young people and research centres. Another concept for building a large-scale cosmic-ray network is based on engaging the wide community, thanks to the attractive properties and portability of devices such as pixel cameras. A benchmark standard here is the Minipix Timepix-EDU, a hybrid semiconductor pixel detector with a silicon sensor of various thicknesses (e.g. 300, 500 $\mu m$) provided by ADVACAM adv , using the technology developed within the Medipix Collaboration at CERN med . This detector provides specific characteristics: fast data acquisition; it is portable, lightweight, easy to use and easy to place in difficult-to-access environments, and can be operated remotely. Minipix Timepix EDU is a simplified and cost-effective version of the Minipix Timepix detector. It was designed and created with the purpose of making science accessible to the public (schools, scientific research centers, non-commercial institutions). The simplified version of the software Pixet, with predefined settings, makes it possible to operate the detector without advanced training in data acquisition, just by connecting the device to the USB port of a PC. Figure 11: Illustration of the Minipix Timepix EDU detector dedicated for educational purposes, www.advacam.com, accessed on 22.09.2020. The ASIC read-out chip contains a matrix of 256 by 256 pixels (65,536 independent channels in total) and an active sensor area of 14 mm by 14 mm, where one pixel corresponds to 55 $\mu m$, see Fig. 11. The low dark-noise signal enables measurements of low-Linear Energy Transfer (LET) particles with high precision. Thanks to the per-pixel calibration, an adjustable threshold is set in each pixel. 
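As a quick consistency check of the quoted sensor geometry (256 pixels per side at a 55 $\mu m$ pitch against the stated 14 mm active area and 65,536 channels):

```python
pixels_per_side = 256
pitch_um = 55            # pixel pitch in micrometres

side_mm = pixels_per_side * pitch_um / 1000.0   # edge of the active area, mm
channels = pixels_per_side ** 2                  # independent channels

print(side_mm, channels)
```

The computed edge length of 14.08 mm matches the quoted "14 mm by 14 mm" to the precision given, and the channel count matches exactly.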
This allows a detection efficiency close to 100% for heavy charged particles, making these detectors unique and suitable for the detection of cosmic rays.
### Online track visualization and processing software
The software package Pixet Pro (D. Turecek, J. Jakubek, 2020, Advacam s.r.o., Prague, Czech Republic) is used to operate the detector and to control the readout, data acquisition and recording. The detectors can be connected to a standard notebook using a USB 2.0 port. PC connectivity and cross-platform operating system compatibility include Windows, Linux and Mac OS. Figure 12: An example of per-pixel energy deposited by various particles in a mixed radiation field measured using a Minipix Timepix detector with a silicon sensor. The detector was operated in ToT-frame mode with an acquisition time of 10 ms. Low-energy, narrow, curly tracks are typical for electrons; high-energy, wide, straight tracks for energetic heavy charged particles such as protons. An example of a mixed-field frame can be seen in Figure 12. Large roundish blobs are created by alpha particles, long straight tracks by cosmic muons, curving tracks by electrons, and small dots by gamma or X-rays. Moreover, rare and exotic events can be observed: delta electrons, recoiled nuclei, cascades of two or more nuclear transitions, proton tracks. The acquisition time can be set according to the frame occupancy. Other parameters, such as the threshold, are already predefined. Further, the data can be processed into dose rates, absorbed dose, fluence maps, deposited energy and LET spectra Granja and et al (2016). A notable role in the implementation of the CRE-oriented detection strategies is also going to be played by professional, medium-size detectors giving a high-quality signal. 
Since 2018, a low-background digital gamma-ray spectrometer with an active shield has been operating in a ground-level laboratory in the Department of Nuclear Physical Chemistry, Institute of Nuclear Physics Polish Academy of Sciences (IFJ PAN), Krakow, Poland. The spectrometer is equipped with a Broad Energy Germanium detector BE5030 (Canberra, USA), a multi-layer passive shield and five large plastic scintillators (Scionix, Netherlands), playing the role of an additional active shield (veto system). The areas of the scintillation detectors range from ca. 0.14 up to 0.49 m$^2$. Data acquisition as well as signal processing are performed by means of a digital analyzer (so-called digitizer) DT5725 (CAEN, Italy) Gorzkiewicz et al. (2019). The main role of both the passive and active shielding is the reduction of the radiation background of the germanium detector. However, thanks to digital data acquisition it has become possible to expand the research potential of the constructed spectrometer. Registering and storing the data generated by all of the spectrometer’s detectors and using manifold, off-line data exploration techniques allowed one to initiate continuous monitoring of the cosmic-ray muon flux. From the CREDO Project point of view, such a device may be used as a reference detector, because the scintillators register several dozen muons per second and, thanks to their non-collinear orientation (Fig. 1 in Gorzkiewicz et al. (2019)), it is possible to detect at least 3 muons correlated both geometrically and in time. Preliminary research showed that during a single gamma-ray spectrometry measurement lasting 426721 s (about 5 days), the scintillators registered 329 events of 5-fold detection coincidences of muons (Fig. 13) that could originate from air showers. Figure 13: Distribution of maximum time intervals of 329 events of five-fold coincidences registered by the spectrometer’s active shield during a single gamma-ray spectrometry measurement (time of measurement 426721 s). 
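From the numbers quoted above one can derive the average rate of such five-fold coincidences; this is simple arithmetic on the reported counts, not an analysis from the cited work:

```python
events = 329          # five-fold muon coincidences registered
duration_s = 426721   # measurement time in seconds (about 5 days)

rate_per_day = events / duration_s * 86400       # average coincidences per day
mean_interval_min = duration_s / events / 60     # mean spacing between events

print(round(rate_per_day, 1), round(mean_interval_min, 1))
```

On average this corresponds to roughly 67 five-fold coincidences per day, i.e. one candidate air-shower event every 20-odd minutes from a single laboratory setup.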
An additional advantage of the continuous monitoring of the cosmic-ray muon flux is the possibility to investigate correlations between changes in its intensity and earthquakes that modulate the local geomagnetic field Kovalyov and Kovalyov (2014).
### 5.4 Inter-detector communication
An important aspect of R$\&$D concerning the CRE detection feasibility is communication between the detectors constituting an array. It requires attention especially in deployment areas with limited access to the internet and/or electricity, where data received by an individual detector cannot be transferred directly to the central acquisition system. Communication might also be critically important in situations which require some preprocessing of the data collected by a subset of the array before sending a trigger message to the central system. The communication issues concerning the CRE-related applications were addressed in Ref. Smelcerz and et al. (2020) (CREDO Collab.). They are the subject of further ongoing investigations and engineering works; below we briefly summarize the current status of these efforts. When designing the network, we focus on a scenario which assumes that the detectors are mobile devices that can be located in hard-to-reach areas, such as deserts or forests, as well as in highly urbanized areas, i.e. cities, and even inside buildings (Fig. 14). 
Such assumptions force the network to have the following features:
* scalability – it is easy to connect new devices to the network; the network should be self-configuring
* low energy consumption during data transmission – an indispensable parameter to ensure support for mobile devices (without access to mains power)
* universality – the network must operate both in dense urban buildings and in desert or forest areas
* wireless operation – provides the ability to collect data without the need for expensive infrastructure in the form of cables and without manual labor (manual data collection from SD cards); in other words, it significantly reduces the cost of maintaining the project
* as long a range as possible – guarantees stable transmission without the need to use re-transmitters or the uneconomical use of too many gateways

Figure 14: Symbolic diagram for the CREDO network scenario.
#### 5.4.1 Solutions available on the market in wireless networks
In order to choose the right solution for wireless transmission in the CREDO network, the solutions already existing on the market were compared in terms of the parameters important for the CREDO project. The results are presented in Table 1. The table completely omits Bluetooth technology due to its very small range (10–50 m) blu , which does not meet the long-range criterion.

Feature | ZigBee Zig | LoRa LoR | SigFox Sig | WiFi WiF | GPRS GPR
---|---|---|---|---|---
Frequency | 868/915 MHz and 2.4 GHz | 100 MHz to 1.67 GHz | 868/915 MHz | 2.4 GHz | 900–1800 MHz
Power consumption (tx) | 37 mW | 100 mW | 122 mW | 835 mW | 560 mW
Range | 100 m | 5 km | 10 km | 100 m | 1–10 km
Cons | Requires infrastructure | Available gateways on the market are only for 438 and 868 MHz | Requires infrastructure similar to GSM (masts and receiving stations) | No Internet connection in non-urbanized areas; high power consumption | No access in non-urbanized areas; high power consumption
Suitable for battery devices? | Yes | Yes | Yes | No, too much power consumption | No, too much power consumption

Table 1: Comparison of wireless communication solutions available on the market. Based on the table above, ZigBee, LoRa and SigFox were selected for testing and deeper analysis. It is worth noting that the ranges given in the table are best-case results, often achievable only in an open area, not in built-up areas. It was decided that all tests would be carried out in built-up areas, considering city centers as the most problematic environment for establishing long-range wireless communication. No retransmitters were used in the tests. The first tests were conducted for the ZigBee network. We managed to achieve a range of about 30 m, similar to that presented by Kuzminykh et al. (2017). Then, tests were carried out for the SigFox network Dis (a). Due to the necessity of using the ready-made SigFox infrastructure and the still incomplete coverage of the area with this infrastructure, further development work using this standard was abandoned. Finally, tests were carried out on LoRa modules Dis (a, b). We did not manage to achieve satisfactory results (100–300 m). Experiments carried out by Centenaro et al. (2016) in built-up areas in cities, which used the actual LoRa network and its ready infrastructure, also did not reach the maximum range of 5 km. After the tests, it became clear to us that in order to obtain good coverage both in the city without the use of repeaters and in open areas where there is no network infrastructure, we need to develop our own system. We immediately decided to work at lower frequencies (169 MHz) so as to minimize the attenuation of the waves by obstacles Saunders and Aragón-Zavala (2007).
#### 5.4.2 Definition of CREDO Wireless Sensor Detector Network
The CREDO Wireless Sensor Detector Network (CREDO WSDN) is a fusion of the well-known Wireless Sensor Network (WSN) currently used most frequently in solutions for the Internet of Things (IoT). 
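The rationale for moving to a lower carrier frequency can be illustrated with the standard free-space path-loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. Real urban attenuation is far higher than free space, so this is only an indicative sketch of the frequency dependence:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB for distance in km and frequency in MHz."""
    return 20.0 * math.log10(distance_km) + 20.0 * math.log10(freq_mhz) + 32.44

d_km = 5.0
loss_169 = fspl_db(d_km, 169.0)   # the band chosen for the CREDO network
loss_868 = fspl_db(d_km, 868.0)   # a typical sub-GHz LoRa/SigFox band
advantage_db = loss_868 - loss_169

print(round(loss_169, 1), round(loss_868, 1), round(advantage_db, 1))
```

At any fixed distance the 169 MHz band enjoys about a 14 dB advantage over 868 MHz from the frequency term alone, before counting the additional benefit of better diffraction around obstacles at lower frequencies.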
CREDO WSDN is a wireless network, organized in a star topology and based on radio waves at a frequency of 169 MHz, used to transmit information from dedicated mobile cosmic ray detectors to the Sink collecting station. The Sink has a COM connection to a PC station, which in turn is connected to the Internet. The Internet provides the end user (e.g. a scientist) with access to the collected data. In addition, auxiliary sensors, such as humidity or temperature sensors, can be connected to the CREDO WSDN to investigate the correlation between the events detected by the detectors and other parameters in the area. Fig. 15 presents the CREDO WSDN topology. Figure 15: CREDO WSDN topology. Since the network is to be specially adapted to communication with mobile devices (mobile detectors), in the current version of the system communication is one-way only, i.e. the data from the detector is sent to the Sink. The radio transmitter will be integrated with the detector itself, not a separate device as it was in the first version of the system. These measures are primarily intended to reduce the power consumption of the detector itself, which is to operate on battery power. The CREDO network consists of several main elements, such as radio communication at 169 MHz, a Sink station, a microcontroller and a power source. The features and role of the most important elements of the entire system are discussed below.
### Radio communication
Radio communication takes place at 169 MHz, unidirectionally, in a star topology. These features ensure not only excellent coverage in built-up areas or inside buildings, but also the optimization of energy consumption – only the Sink station must remain listening all the time. At the moment, the best result that we have achieved is a range of 8 km in built-up areas.
### Transmitter
We plan to introduce a new version of the transmitter. This year, a new STM32WL chip is to appear on the market – it is a microcontroller integrated with the radio STM . 
It has excellent sensitivity and transmitter power. At the same time, its energy consumption is half that of the previous solution, thanks to the use of a built-in switched-mode converter. Usually, such a solution significantly worsens the sensitivity of the receiver, but in this case it does not have that effect. An additional feature will be that the same chip will count the pulses caused by particle hits.
### Receiving station (Sink)
The current version of the receiving station already has very good sensitivity Smelcerz and et al. (2020) (CREDO Collab.); however, as we plan to upgrade the transmitters, a prototype of a new receiving station is currently under testing as well. The new version, with a new LNA (Low Noise Amplifier) with lower noise and much lower distortion, has so far increased the range by 4 km compared to the previous version (from 4 km to 8 km). The new version will also be tested with several antennas to increase the sensitivity.
### Mobile Detectors
The reason why the network was created was the need to handle transmission between battery-powered devices, and more specifically mobile cosmic ray detectors. At the moment, the first prototype of the scintillation detector has been developed, the tests are showing promising results, and the device is already adapted to battery power.
### Microcontroller
The microcontroller acts as the brain of the entire system: it makes decisions about how to configure the network and when to transmit the collected data, and it informs the Sink about the transmitter battery status. It is found in the Sink, in the transmitting stations and in the detector.
#### 5.4.3 CREDO network tests in the field
During the tests of the first version, it was possible to achieve a satisfactory range of 4 km in built-up areas Smelcerz and et al. (2020) (CREDO Collab.). Currently, after changes to the design of the transmitter and the collecting station (Sink), the range has increased to 8 km. 
There was also a significant improvement in range in elevated areas (mountains). With the new version of the receiving station, we managed to establish communication on a cloudy day in an area where a mountain was located. An interesting phenomenon that we were able to observe was the reflection of the wave from the stratosphere on a cloudy day (Fig. 16) Saunders and Aragón-Zavala (2007). Figure 16: Reflection of the radio wave. Saunders and Aragón-Zavala (2007)
#### 5.4.4 Future work and conclusion
The CREDO communication system, thanks to its scalability and universality, can be used in many areas of IoT and can be an alternative to existing solutions in places where, due to the lack of infrastructure or the excessive cost of deploying it, those solutions cannot be launched. There are still many challenges ahead of us: we are constantly working on increasing the range, and maintaining security in the network, possibly with data encryption, is also a challenge. The phenomenon that we want to study better is the reflection of waves from the stratosphere. We will also be conducting tests with the new mobile scintillation detectors in the near future, which will be immediately integrated with the transmitting stations.
### 5.5 Detection efficiency
Independently of the available cosmic-ray infrastructures and expertise that is or might be contributing to the implementation of the CREDO strategies, the novel hardware extensions of the global network of detectors require new studies and tools providing information on detection efficiency at different identification and reconstruction levels: a) individual particles and the corresponding detection rates, b) extensive air showers and the corresponding cosmic-ray fluxes, and finally c) Cosmic Ray Ensembles. Level a) is addressed elsewhere within this Special Issue Bibrzycki et al. (2020) (CREDO Collab.), considerations of level c) were initiated with Ref.
Verbetsky and Svanidze (2020), and a study dedicated to arrays of portable detectors like CREDO-Maze, described above, is under preparation.
## 6 Data management and analysis
### 6.1 CREDO IT Infrastructure
From its very beginning, the CREDO initiative assumed the necessity of a scalable data acquisition and processing infrastructure. While most of the infrastructure is supplied by volunteers, in the form of smartphones and other detectors, there still exists a need for a central repository of detection events. Information about each individual detection event is recorded first by the end device and then transferred to the central repository. This repository allows information to be managed and shared among interested parties, and it serves as the basis for CREDO data analysis. It also fulfills the role of the system’s central point, providing APIs (Application Programming Interfaces) and information about the stored data itself. The CREDO IT ecosystem, with some of the implementation details, is depicted in Figure 17. Figure 17: Diagram representing the architecture of the storage application. The CREDO data repository uses a dedicated API, tailored to the project’s needs. The object structure used in the API closely models the information about physical detection events and additionally contains metadata, e.g. which detector was used or what the operating time of the detector was. Data storage in the CREDO project is provided with respect to the FAIR principles Wilkinson et al. (2016): we provide means to Find the data, Access it, and enable Interoperability and Reuse of the data. One of the means to achieve this goal is the API, which was implemented as a set of REST services adhering to the OpenAPI standard ope . OpenAPI is a widely accepted service standard, which includes community-developed guidelines for developing interfaces exposing data and metadata.
This standard focuses on data openness, technical accessibility, and discoverability, which allow for effortless integration. The stored data is publicly accessible, and it will remain so in the future, which should enable the reuse of information and the verification of experiments based on the given data. Central processing for CREDO data storage was implemented as an extensible database, a storage system running in a containerized environment, with the aim of enhancing reliability, portability, and security. The main component, responsible for gathering and managing data, was implemented as a Django dja web application. User devices communicate with the API exposed by the main application; the data is subjected to basic filtering and then placed in the storage back end. The storage is supplied by a MariaDB mar relational database cluster. Redis red , an in-memory key-value object store, is used as a temporary cache, implemented in order to speed up queries and data presentation. An external service providing S3-compatible access is used for backups and for exporting larger chunks of data prepared beforehand. Due to the nature of the observed physical phenomena and the community factor, the characteristics of the data stream are often difficult to predict. Additionally, data analysis is done mainly in an exploratory manner, so the underlying hardware infrastructure needs to be flexible enough to adapt to the requirements of the moment. The presented system has been deployed in a production environment (in this case, virtual machines hosted in the cloud provided by ACC Cyfronet AGH) and is continuously gathering information about detection events from the CREDO detector network. Several performance metrics, e.g. CPU usage and request latency, are constantly monitored, allowing us to determine whether the current hardware configuration delivers the required performance.
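In miniature, the ingest path just described (API call, basic filtering, storage back end, with a cache in front of queries) can be sketched as follows. All names, and the required-field set, are hypothetical stand-ins for the Django/MariaDB/Redis stack, not CREDO code:

```python
import json
from typing import Optional

# Toy stand-ins for the storage back end and the query cache.
store: dict = {}   # plays the role of the relational database cluster
cache: dict = {}   # plays the role of the in-memory cache

REQUIRED_FIELDS = {"id", "user_id", "timestamp", "latitude", "longitude"}

def ingest(raw: str) -> bool:
    """Basic filtering: reject records missing required metadata, then store."""
    record = json.loads(raw)
    if not REQUIRED_FIELDS <= record.keys():
        return False
    store[record["id"]] = record
    cache.pop(record["id"], None)  # invalidate any stale cached copy
    return True

def query(detection_id: int) -> Optional[dict]:
    """Serve repeated queries from the cache, falling back to the store."""
    if detection_id in cache:
        return cache[detection_id]
    record = store.get(detection_id)
    if record is not None:
        cache[detection_id] = record
    return record

ok = ingest('{"id": 7, "user_id": 1, "timestamp": 1599000000000,'
            ' "latitude": 50.06, "longitude": 19.94}')
print(ok, query(7)["latitude"])  # True 50.06
```

The point of the sketch is only the ordering of responsibilities: validation happens before the write, and the cache is invalidated on every accepted record so queries never see stale data.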
In the case of inadequate resources, the cloud management system is able to autonomously and proactively spawn additional service instances, which allows for load distribution and handling of the data influx. It is important to emphasize that the adopted solutions and protocols enable open access at both ends: data collected by various detectors can be transmitted to the central system provided their format is kept compatible with the CREDO API structure, and access to the centrally stored data is granted to everybody upon request with a sensible motivation. The web-based monitor of the CREDO data acquisition system, including basic user and detection statistics, is publicly available api , and the technical information regarding full data access is provided upon approval of individual requests.
### 6.2 The current data set
Regardless of the ongoing efforts concerning the FAIR principles, the CREDO data set is continuously available on request to individual users if a sensible motivation for its usage is presented. In addition, all the currently available scripts and tools facilitating data access, selection, and further processing are made freely available in the public CREDO Collaboration repository cre as a project that can be imported into the PyCharm integrated development environment Pyc . The CREDO data set is divided into three basic categories:

1. Detections – a set of detections containing detailed information about individual events on all devices;
2. Pings – activity logs of devices, including information about their connections to the database and the time spent working in detecting mode;
3. Mappings – three collections containing information about users, devices, and teams.

Most of the data collected by CREDO to date comes from smartphones with the CREDO Detector app, operating on the Android system.
The data statistics from the premiere of the app until September 1, 2020 include:

* 11,150 users (unique accounts) registered;
* 15,739 devices were used for particle detection;
* 4,941,133 candidate detections were registered;
* the total operating time of the devices exceeds 379,629 days (over 1039 years).

The raw data files currently occupy 44 GB (detections: 39.3 GB; pings: 1.6 GB; mappings: 3.1 GB). Assuming a single CMOS camera sensor has a diagonal of $1/3^{\prime\prime}$ (on average), the total area of all the presently registered devices is 0.56 m2. The daily average number of detections per device, calculated from the number of candidate detections registered and the total operating time of the devices, is about 13 candidate detections per day. Standard data filtering and clusterization are currently performed continuously, and very early results show a perspective for more advanced image processing and classification using machine learning or deep learning techniques to classify candidate detections more efficiently. A single detection record is stored in the JavaScript Object Notation (JSON) open standard data format and contains the following information:

1. user – detection user information: “team_id”, “user_id”;
2. location – geographical coordinates: “latitude”, “longitude”;
3. time – detection time information: “timestamp” (detection unix time in milliseconds), “time_received” (reception time in the database);
4. picture – detection image information: “id” (unique detection identification), “frame_content”: the image (a fragment of the snapshot, typically containing a margin of 30 pixels around the brightest pixel position – see below) encoded in base64, “height”: the “vertical” resolution, “width”: the “horizontal” resolution;
5. server-side visibility – “visible” (tells whether a detection passes the server-side filters, sensitive e.g.
to repeatedly flashing pixels or incompatible versions of the applications sending the data); 6. brightest pixel position – “x”, “y” (row and column number of the brightest pixel). It often happens that a single picture taken by a smartphone in detection mode contains more than one pixel that fulfills the trigger conditions and can be classified as a detection. If these pixels are located sufficiently far from one another, i.e. further apart than the extraction margin explained in point 4 above, then they are considered separate detections and each of them is assigned an individual detection record. In this way, clearly distinguishable particle hits collected in one shot are easily identifiable as detections with the same “timestamp”. An example set of particle track candidates collected by CREDO Detector is presented in Fig. 18. Figure 18: Example particle candidate tracks recorded with a smartphone using the CREDO Detector mobile application. [source: the CREDO Collaboration materials and measurements] The nature of penetrating-radiation measurements made with CMOS sensors assumes the identification of pixels that are significantly brighter than the background. The currently used algorithms allow such identification only when visible light does not reach the sensor, i.e. with the smartphone camera covered tightly. Intentional or unintentional uncovering of the smartphone camera may result in collecting images generated by visible light, sometimes hardly distinguishable from signal excesses induced by penetrating particles (see Fig. 19 for some examples). Figure 19: Example artifact tracks recorded with a smartphone using the CREDO Detector mobile application. [source: the CREDO Collaboration materials and measurements] One might apply a specific software filter to remove such contamination from the data sample being analysed.
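A record in the layout listed above can be consumed with a few lines of standard-library Python. The record below is fabricated for illustration, and for simplicity the payload is treated as raw grayscale bytes rather than an encoded image:

```python
import base64
import json

# Fabricated detection record following the field layout described in the text.
sample = json.dumps({
    "user": {"team_id": 3, "user_id": 42},
    "location": {"latitude": 50.06, "longitude": 19.94},
    "time": {"timestamp": 1599000000123, "time_received": 1599000000500},
    "picture": {"id": 1001,
                "frame_content": base64.b64encode(b"\x00\x10\xff\x20").decode(),
                "height": 2, "width": 2},
    "visible": True,
    "x": 17, "y": 25,
})

record = json.loads(sample)                  # parse as it would arrive from the API
payload = base64.b64decode(record["picture"]["frame_content"])
brightest = max(payload)                     # brightest value in the raw payload
timestamp_s = record["time"]["timestamp"] / 1000.0  # unix milliseconds -> seconds
print(len(payload), brightest)               # 4 255
```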
The internally available preliminary studies in this direction show that the number of bright pixels composing the image may be a sufficient data-quality measure. The rule of thumb currently used within the CREDO Collaboration is based on the data collected by a number of “trustable” devices (operated by non-anonymous team members engaged in the application development); it shows that one does not expect detections of penetrating particles that generate tracks composed of more than 70 pixels with brightness above 70, on the 0-255 scale. Another type of contamination is electronic noise, which typically gives images composed of only individual (or very few) bright pixels that are recorded very frequently compared to the expected detection frequency of the cosmic and local radiation. These electronic artifacts can be “detected” e.g. if the pixel brightness threshold is improperly set, and such a situation can be identified by monitoring the detection frequency and comparing it to the rate expected at 100$\%$ efficiency for the sensor in use. For example, a typical CMOS sensor in a smartphone, with a surface of 0.2 cm2, can see at most one cosmic background muon in 5 minutes (the expected integrated background muon flux is 1/cm2/minute). Assuming that local radioactive sources might induce a signal at most $\sim$10 times more often, one does not expect detection rates larger than 2/minute, and this is in fact the case when looking at the statistics collected from the reference (“trustable”) devices. An example of the corresponding filter used within the CREDO Collaboration requires that the detection frequency be less than 10 detections with different timestamps per minute.
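The two rules of thumb above (track size and brightness, and the per-minute detection rate) can be expressed as simple predicates. The thresholds follow the text; the function names and frame representation are ours:

```python
# A frame is a flat list of 0-255 pixel brightness values.
BRIGHTNESS_THRESHOLD = 70   # pixels brighter than this count toward the track
MAX_TRACK_PIXELS = 70       # larger tracks are treated as visible-light artifacts
MAX_RATE_PER_MINUTE = 10    # distinct-timestamp detections allowed per minute

def looks_like_particle(frame: list) -> bool:
    """Reject frames whose bright-pixel count exceeds the track-size rule."""
    bright = sum(1 for p in frame if p > BRIGHTNESS_THRESHOLD)
    return 0 < bright <= MAX_TRACK_PIXELS

def device_rate_ok(timestamps_ms: list) -> bool:
    """Reject devices with 10 or more distinct-timestamp detections in a minute."""
    ts = sorted(set(timestamps_ms))
    for i, start in enumerate(ts):
        if sum(1 for t in ts[i:] if t - start < 60_000) >= MAX_RATE_PER_MINUTE:
            return False
    return True

track = [0] * 95 + [200] * 5   # small bright cluster: plausible particle hit
glow = [150] * 100             # whole frame bright: uncovered camera artifact
print(looks_like_particle(track), looks_like_particle(glow))  # True False
```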
Despite the fact that, with full access to the raw data sets, users are able and welcome to apply their own filters and perform their own analyses, the CREDO Collaboration will periodically release its official data sets and recommendations concerning data quality, based on appropriate and publicly available studies. When it comes to scientific results, researchers will be expected either to refer to the official data sets or to describe and justify their own selections. Examples of already ongoing analyses of the CREDO data set include image classification based on the shapes of particle track candidates Niedzwiecki et al. (2019) (CREDO Collab.), monitoring detections collected by individual devices in search of temporal event clustering in 5-minute intervals Eur , and muon identification and zenith-angle reconstruction based on the lengths of the rectilinear particle tracks Karbowiak et al. (2020) (CREDO Collaboration). One example of a novel and promising direction of analysis related to the CREDO data is the exploitation of the techniques of cyclostationary signal processing and its generalizations Napolitano (2012), Napolitano (2019). Cyclostationarity is a statistical property of science data generated by the combination/interaction of periodic and random phenomena. That is, these data have second- or higher-order statistical functions that are periodic functions of time. More general models can account for the presence of multiple, possibly incommensurate, and irregular periodicities (Napolitano, 2019, Chapters 1,2). The observed signals are not periodic, but the hidden periodicity can be recovered by estimating statistical functions from the data. These statistical functions contain information about the generating mechanism of the data that cannot be extracted within the classical stationary modeling of the observed signals.
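The hidden periodicity described above can be illustrated numerically in a few lines of NumPy. This is our own toy example with arbitrary parameter values, not an analysis of CREDO data: an amplitude-modulated noise signal is not periodic, yet its lag-zero cyclic autocorrelation at twice the modulation frequency is large, while for plain stationary noise it vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 1000.0, 50.0, 200_000       # sample rate, modulation frequency, length
t = np.arange(n) / fs
noise = rng.standard_normal(n)
x = np.cos(2 * np.pi * f0 * t) * noise  # cyclostationary: E[x(t)^2] is periodic
y = noise                               # stationary reference signal

def cyclic_autocorr(sig, alpha, fs):
    """Magnitude of the time-averaged lag-zero cyclic autocorrelation at cycle
    frequency alpha (a Fourier coefficient of the time-varying power)."""
    t = np.arange(sig.size) / fs
    return np.abs(np.mean(sig * sig * np.exp(-2j * np.pi * alpha * t)))

# The cyclic feature at alpha = 2*f0 reveals the hidden 50 Hz modulation.
print(cyclic_autocorr(x, 2 * f0, fs) > 10 * cyclic_autocorr(y, 2 * f0, fs))  # True
```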
The extensions of cyclostationarity can be of interest if time dilation effects have to be accounted for (Napolitano, 2012, Chapter 7). In particular, the effects of a constant relative radial speed and/or a constant relative radial acceleration between the cosmic source and the receiver can be suitably modeled by exploiting the generalizations of cyclostationarity. General time-warping of the source signal can be modeled by new and recently proposed signal models (Napolitano, 2012, Chapter 6), (Napolitano, 2019, Chapters 12-14). Source location and parameter estimation problems based on measurements taken at widely separated sensors are very interference tolerant if the
# Continuous Differentiability of the Value Function of Semilinear Parabolic Infinite Time Horizon Optimal Control Problems on $L^{2}(\Omega)$ under Control Constraints ††thanks: The authors were supported by the ERC advanced grant 668998 (OCLOC) under the EU’s H2020 research program. Karl Kunisch Institute for Mathematics and Scientific Computing, University of Graz, Heinrichstrasse 36, A-8010 Graz, Austria, and Radon Institute, Austrian Academy of Science, Linz, Austria. ([email protected]). Buddhika Priyasad Institute for Mathematics and Scientific Computing, University of Graz, Heinrichstrasse 36, A-8010 Graz, Austria. ([email protected]). ###### Abstract An abstract framework is established guaranteeing the local continuous differentiability of the value function associated with optimal stabilization problems subject to abstract semilinear parabolic equations with a norm constraint on the controls. It guarantees that the value function satisfies the associated Hamilton-Jacobi-Bellman equation in the classical sense. The applicability of the developed framework is demonstrated for specific semilinear parabolic equations. ## 1 Introduction. Continuous differentiability of the value function with respect to the initial datum is an important problem in optimal feedback control theory. Indeed, if the value function is $C^{1}$ then it is the solution of a Hamilton-Jacobi-Bellman (HJB) equation and its negative gradient can be used to define an optimal state feedback law. The subject matter of this paper is the local continuous differentiability of the value function $\mathcal{V}$ for infinite horizon optimal control problems subject to semilinear parabolic equations and norm constraints on the control. Such problems are intimately related to stabilization problems, which are often cast as infinite horizon optimal control problems. Investigating infinite horizon problems constitutes one of the specificities of this paper.
Another one is the fact that we focus on the differentiability of $\mathcal{V}$ on (subsets of) $L^{2}(\Omega)$. Thus we need to consider the semilinear equations with initial data $y_{0}\in L^{2}(\Omega)$. As a consequence, the solutions of the semilinear equations only enjoy low Sobolev-space regularity. This restricts the class of nonlinearities, compared to those which are admissible if the states are in $L^{\infty}((0,\infty)\times\Omega)$, which is the situation typically addressed in the literature on optimal control [Cas] and [Tro2]. The latter necessitates taking the initial conditions in spaces strictly smaller than $L^{2}(\Omega)$. Here we consider $L^{2}(\Omega)$, first because of intrinsic interest, and secondly because ultimately the HJB equation should be solved numerically, which is easier in an $L^{2}(\Omega)$ setting than in other topologies, like $H^{1}(\Omega)$. Let us also recall that one of the approaches to solving the HJB equation is policy iteration, which assumes that the value function is $C^{1}$. The underlying analysis demands stability and sensitivity analysis of infinite dimensional optimal control problems subject to nonlinear equations. For this purpose we utilize the theory of generalized equations as established by [Don] and [Rob]. It involves first order approximations of the state and adjoint equations, which lead to restrictions on the class of nonlinearities which can be admitted. We refer to the section on examples in this respect. The current investigations are, to some degree, a continuation of the first author’s work on optimal feedback control for infinite dimensional systems. In [BKP1, BKP2, BKP3] Taylor approximations of the value function were investigated for problems with a concrete structure, namely bilinear control systems and the Navier-Stokes equations, and differentiability of the value function was obtained as a by-product. In these investigations norm constraints were not considered.
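To make the role of the policy iteration mentioned above concrete, the following toy sketch (ours, not from the paper, and without a control constraint) runs it on the simplest smooth case: scalar dynamics $\dot x = ax+bu$ with cost $\int_0^\infty (qx^2+ru^2)\,dt$ and value function $V(x)=px^2$. Each sweep evaluates the current feedback $u=-kx$ through a scalar Lyapunov equation and then improves the gain from the gradient of $V$:

```python
import math

a, b, q, r = 1.0, 1.0, 1.0, 1.0      # illustrative problem data

k = 2.0 * a / b + 1.0                # any initially stabilizing gain (b*k > a)
for _ in range(50):
    p = (q + r * k * k) / (2.0 * (b * k - a))   # policy evaluation (Lyapunov)
    k = b * p / r                                # policy improvement step
p_riccati = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
print(abs(p - p_riccati) < 1e-10)    # True: iterates reach the Riccati solution
```

The iteration is well defined precisely because $V$ stays smooth ($C^1$, indeed quadratic) at every sweep, which is the property the abstract framework of the paper establishes in the infinite dimensional, constrained setting.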
Here we admit norm constraints and we focus on semilinear equations. Let us also notice that the systems investigated in [BKP1, BKP2, BKP3] share the property that the second derivatives with respect to the state variable of the nonlinearity in the state equation do not depend on the state itself anymore. Let us also compare our work to the developments in the field of parametric sensitivity analysis of semilinear parabolic equations under control constraints. There are many papers focusing on stability and sensitivity analysis of finite time horizon problems with pointwise control constraints, see e.g. [BM, GHH, Gri, GV, Mal, MT, Tro1, Wac], and the literature cited there. First, none of these papers, except for [GV], considers the case with initial data in $L^{2}(\Omega)$. In [GV] again the third derivative of the nonlinearity is zero. Secondly, all of them consider the finite horizon case. Since we treat infinite horizon problems we have to guarantee stabilizability (for small initial data) under control constraints. Then we use a fixed point argument to obtain well-posedness of the system. Well-posedness and stability with respect to parameters of the adjoint equation are significantly more involved for infinite horizon problems than for finite horizon problems. This requires techniques different from those used in the finite horizon case. Another aspect is the proper characterization of the adjoint state at $t=\infty$. In the finite dimensional case there is, of course, a tremendous amount of work on the treatment of the value function if it is not $C^{1}$. Fewer papers concentrate on the case where the value function enjoys smoothness properties. We mention [Goe] and [CF] in this respect. In order to achieve our goal, we lay out the following setup. In Section 2, we consider an abstract parametric optimization problem with an equality constraint and a further convex constraint.
Existence of an optimal solution and of a multiplier associated to the equality constraint, and Lipschitz stability of the component of the state variable which lies in the complement of the kernel of the linearized constraint, will be established. This result is necessary but not sufficient for the further developments, since stability is obtained in a norm which is too weak, and since the stability estimate does not yet involve the component in the kernel of the linearized constraint or the multiplier, i.e. the adjoint states. At the level of Section 2 this remains as Assumption (H7). In Section 3 we specify the concrete optimal stabilization problem and a set of conditions, most importantly on the nonlinearity of the state equation, under which Assumption (H7) can be established for initial data $y_{0}\in L^{2}(\Omega)$. Section 3 also contains a summary of the main results of this paper. They are stated as theorems with slightly stronger assumptions than eventually necessary, for the sake of easing the presentation. Section 4 is dedicated to verifying the assumptions of the general setup of Section 2 for the concrete optimal control problem stated in Section 3. As a conclusion we obtain the Lipschitz continuity, in the appropriate norms, of all the variables appearing in the optimality system with respect to the parameter of interest, which in our case is the initial condition $y_{0}$. Since our analysis is a local one involving second order optimality conditions, solutions to the optimality system are related to local solutions to the optimal control problem. As a corollary to these results we obtain that the local value function is Fréchet differentiable. In Section 5, we show that in the neighborhood of global solutions the value function $\mathcal{V}$ satisfies the Hamilton-Jacobi-Bellman (HJB) equation in the strong sense.
Finally, Section 6 is devoted to demonstrating that the developed framework is applicable to some concrete examples, namely linear systems, Fisher’s equation, and parabolic equations with globally Lipschitz nonlinearities. All our results require a smallness assumption on the initial condition $y_{0}$. Two aspects need to be taken into consideration in this respect. First, $y_{0}$ has to be sufficiently small so that the controlled system is stable. Secondly, a second order optimality condition is needed. A sufficient condition for the latter is provided by smallness of the adjoint state, which in turn can be implied by smallness of $y_{0}$. We stress that these two issues are of a related, but independent, nature. ## 2 Lipschitz stability for an abstract optimization problem. Here we present a stability result for an abstract, infinite dimensional optimization problem, which will be the building block for the results below. This result is geared towards exploiting the specific nature of optimization problems with differential equations as constraints. First, the existence of a dual variable will result from a regular point condition. Subsequently the Lipschitz stability result is obtained in two steps. In the first one, we rely on the relationship between the linearized optimality conditions and an associated linear-quadratic optimization problem with an extra convex constraint. This approach is useful since it provides the existence of solutions to the linearized system on the basis of variational techniques. However it dictates certain norms for the involved quantities. These norms are too weak for our goal of obtaining Lipschitz continuity of the adjoint variables in such a manner that differentiability of the cost with respect to the initial conditions can be argued.
Therefore, in a second step, we exploit the specific structure of the optimality system, using the fact that it is related to a parabolic optimal control problem, to obtain the Lipschitz continuity in the stronger norms. This two step approach is also present in some of the earlier work on stability and sensitivity analysis quoted in the introduction. But, due to the fact that these papers considered finite horizon problems, it came as a byproduct which improved the regularity of the adjoints. In our work it is essential to reach our goal. This is why we decided to formalize this two step approach, which was not done in earlier work. Concretely, we consider the optimization problem $\begin{cases}\min\ f(x)\\\ e(x,q)=0,\ x\in C.\end{cases}$ ($P_{q}$) with a parameter dependent equality constraint, and a general constraint described by $x\in C$, where $C$ is a closed convex subset of a real Hilbert space $X$. Further, $W$ is a real Hilbert space and $P$ is a normed linear space. In the application that we have in mind, the parameter $q$ will appear as the initial condition in the dynamical system. The following Assumption (H1) is assumed to hold throughout. ###### Assumption H1. $q_{0}\in P$ is a nominal reference parameter, $x_{0}$ is a local solution of ($P_{q_{0}}$), $f:X\longrightarrow\mathbb{R}^{+}$ is twice continuously differentiable in a neighborhood of $x_{0}$, and $e:X\times P\longrightarrow W$ is continuous and twice continuously differentiable w.r.t. $x$, with first and second derivatives Lipschitz continuous in a neighborhood of $(x_{0},q_{0})$. The derivatives with respect to $x$ will be denoted by primes, and the derivatives w.r.t. $y$ and $u$, later on, are denoted by subscripts. They are all considered in the sense of Lebesgue derivatives.
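For orientation, a minimal finite-dimensional instance of the abstract problem (an illustration of ours, not an example from the paper) is obtained by taking $X=W=\mathbb{R}^{n}$, $P=\mathbb{R}^{m}$, a matrix $M\in\mathbb{R}^{m\times n}$, $f(x)=\tfrac12\lvert x\rvert^{2}$, $e(x,q)=Mx-q$ and $C=\{x:\lvert x\rvert\le 1\}$, so that ($P_{q}$) and its Lagrangian read

```latex
% Illustrative finite-dimensional instance of (P_q); not taken from the paper.
\begin{cases}
  \min\ \tfrac{1}{2}\lvert x\rvert^{2}\\
  Mx = q,\quad \lvert x\rvert \le 1,
\end{cases}
\qquad
\mathcal{L}(x,q,\lambda) = \tfrac{1}{2}\lvert x\rvert^{2} + \lambda^{\top}(Mx - q).
```

Here $e^{\prime}(x_{0},q_{0})=M$, so the regular point condition below asks that $M(C-x_{0})$ contain a neighborhood of $0$, which holds e.g. when $M$ is surjective and $x_{0}$ lies in the interior of $C$; and since the Hessian of $\mathcal{L}$ in $x$ is the identity, the positive definiteness condition below holds on $\ker M$ with $\kappa=1$.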
We introduce the Lagrangian ${\mathcal{L}}:X\times P\times W^{*}\longrightarrow\mathbb{R}$ associated to ($P_{q}$) by ${\mathcal{L}}(x,q,\lambda)=f(x)+\langle\lambda,e(x,q)\rangle_{W^{*},W}.$ (2.1) Next further relevant assumptions are introduced: ###### Assumption H2 (regular point condition). $0\in\text{int }e^{\prime}(x_{0},q_{0})(C-x_{0}),$ where $int$ denotes the interior in the $W$ topology. This regularity condition implies the existence of a Lagrange multiplier $\lambda_{0}\in W^{*}$, see e.g. [MZ] such that the following first order condition holds: $\begin{cases}\langle{\mathcal{L}}^{\prime}(x_{0},q_{0},\lambda_{0}),c-x_{0}\rangle_{X^{*},X}\geq 0,\ \forall c\in C,\\\ e(x_{0},q_{0})=0.\end{cases}$ (2.2) It is equivalent to $\displaystyle\begin{cases}0\in{\mathcal{L}}^{\prime}(x_{0},q_{0},\lambda_{0})+\partial\mathbf{I}_{C}(x_{0}),\quad&\text{in }X^{*},\\\ e(x_{0},q_{0})=0,\quad&\text{in }W,\end{cases}$ (2.3) where $\partial\mathbf{I}_{C}(x)$ denotes the subdifferential of the indicator function of the set $C$ at $x\in X$. Let $\displaystyle A\in{\mathcal{L}}(X,X^{*})$ denote the operator representation of $\displaystyle{\mathcal{L}}^{\prime\prime}(x_{0},q_{0},\lambda_{0})$, i.e. $\langle Ax_{1},x_{2}\rangle_{X^{*},X}={\mathcal{L}}^{\prime\prime}(x_{0},q_{0},\lambda_{0})(x_{1},x_{2})$ (2.4) and define $E=e^{\prime}(x_{0},q_{0})\in{\mathcal{L}}(X,W).$ (2.5) We further require ###### Assumption H3 (positive definiteness). 
$\exists\kappa>0:\ \langle Ax,x\rangle_{X^{*},X}\geq\kappa\left\lVert x\right\rVert^{2}_{X},\ \forall x\in\text{ker }E.$ The stability result of $(x_{0},\lambda_{0})$ with respect to perturbations of $q$ at $q_{0}$ will be based on Robinson’s strong regularity condition, which involves the following linearized form of the optimality condition, $\displaystyle\begin{cases}0\in{\mathcal{L}}^{\prime}(x_{0},q_{0},\lambda_{0})+A(x-x_{0})+E^{*}(\lambda-\lambda_{0})+\partial\mathbf{I}_{C}(x)&\text{in }X^{*},\\\ 0=e(x_{0},q_{0})+E(x-x_{0})&\text{in }W.\end{cases}$ (2.6) We define a multivalued operator $\displaystyle{\mathcal{T}}:X\times W^{*}\longrightarrow X^{*}\times W$ by ${\mathcal{T}}\begin{pmatrix}x\\\ \lambda\end{pmatrix}=\begin{pmatrix}A&E^{*}\\\ E&0\end{pmatrix}\begin{pmatrix}x\\\ \lambda\end{pmatrix}+\begin{pmatrix}f^{\prime}(x_{0})-Ax_{0}\\\ -Ex_{0}\end{pmatrix}+\begin{pmatrix}\partial\mathbf{I}_{C}(x)\\\ 0\end{pmatrix},$ (2.7) and observe that (2.6) is equivalent to $\displaystyle 0\in{\mathcal{T}}\begin{pmatrix}x\\\ \lambda\end{pmatrix}.$ Here it is understood that ${\mathcal{T}}$ is evaluated at $(x_{0},q_{0},\lambda_{0})\in X\times P\times W^{*}$. But ${\mathcal{T}}$ is not yet the mapping for which we need to verify the Robinson-Dontchev strong regularity condition in our context. This relates to the fact that we must treat the multiplier $\lambda$ in a smaller space than $W^{*}$. Before we can properly specify this condition, some additional preparation is necessary. We first introduce Banach spaces: $\underline{X}\subset X,\ \underline{W^{*}}\subset W^{*},\ \underline{X^{*}}\subset X^{*},$ (2.8) with continuous injections. A restriction of ${\mathcal{T}}$ will be defined as a multivalued operator $\displaystyle\underline{{\mathcal{T}}}:\underline{X}\times\underline{W^{*}}\to\underline{X^{*}}\times W$. Indeed, in applications to optimal control problems extra regularity of multipliers can be obtained by investigating the solutions of (2.3), see e.g. Section 3.
In the context of optimal stabilization problems this structural property will become transparent in Proposition 4.1 and Proposition 4.2, see also [BKP3, Proposition 15]. It will turn out to be essential for our purposes. But the situation where the multiplier has extra regularity is also of abstract interest. When studying stability in this setting, the second coordinate of the domain of ${\mathcal{T}}$ needs to be changed from $W^{*}$ to $\underline{W^{*}}$. This entails that the range space of ${\mathcal{T}}$ has to be modified appropriately, in order to obtain stability of the $\lambda$ coordinate. For this purpose we introduce $\underline{X^{*}}\subset X^{*}$. The reason for further restricting $X$ to $\underline{X}$ will become evident in the proof of Proposition 4.2. It is related to the fact that we consider infinite horizon problems. Now we adapt the conditions on $f$ and $e$ to the choice of the spaces in (2.8). ###### Assumption H4. There exists a neighborhood $\displaystyle\widetilde{U}_{1}\times\widetilde{U}_{2}\subset\underline{X}\times P$ of $(x_{0},q_{0})$ such that 1. (i) the restriction of $x\mapsto f^{\prime}(x)$ to $\underline{X}$ defines a mapping $\underline{f^{\prime}}(x)$ from $\widetilde{U}_{1}\subset\underline{X}$ to $\underline{X^{*}}$, 2. (ii) the restriction of $e^{\prime}(x,q)^{*}\in{\mathcal{L}}(W^{*},X^{*})$ to $\underline{W^{*}}$ defines operators $\underline{e^{\prime}(x,q)^{*}}\in{\mathcal{L}}(\underline{W^{*}},\underline{X^{*}})$ for every $(x,q)\in\widetilde{U}_{1}\times\widetilde{U}_{2}$.
With these assumptions holding we define the restricted linearized Lagrangian $\underline{{\mathcal{L}}^{\prime}}:\widetilde{U}_{1}\times\widetilde{U}_{2}\times\underline{W^{*}}\subset\underline{X}\times P\times\underline{W^{*}}\longrightarrow\underline{X^{*}}\quad\text{by}\quad\underline{{\mathcal{L}}^{\prime}}(x,q,\lambda)=\underline{f^{\prime}}(x)+\underline{e^{\prime}(x,q)^{*}}\lambda.$ (2.9) Next we adapt $\partial\mathbf{I}_{C}\subset X^{*}$ to the situation of (2.8) and define for $x\in\underline{X}$ the set valued mapping ${\underline{\partial\mathbf{I}_{C}}(x)}=\left\\{y\in\underline{X^{*}}:\langle y,v-x\rangle_{X^{*},X}\leq 0,\ \forall v\in C\cap\underline{X}\right\\}\subset\underline{X^{*}}.$ (2.10) We henceforth assume that $(x_{0},\lambda_{0})\in\underline{X}\times\underline{W^{*}}$; this will also follow as a special case of (H7) below. The following assumption will guarantee that the restriction $\underline{{\mathcal{T}}}$ of ${\mathcal{T}}$ is well-defined as an operator from $\underline{X}\times\underline{W^{*}}$ to $\underline{X^{*}}\times W$, and the subsequent one is needed for Lipschitz continuous dependence of local solutions to ($P_{q}$) with respect to $q$. ###### Assumption H5. $\displaystyle\underline{{\mathcal{L}}^{\prime}}:\widetilde{U}_{1}\times\widetilde{U}_{2}\times\underline{W^{*}}\subset\underline{X}\times P\times\underline{W^{*}}\longrightarrow\underline{X^{*}}$ is Fréchet differentiable with respect to $x$, and $(\underline{{\mathcal{L}}^{\prime}})^{\prime}$ is continuous at $(x_{0},q_{0},\lambda_{0})\in\underline{X}\times P\times\underline{W^{*}}$. ###### Assumption H6.
There exists $\nu>0$ such that: $\displaystyle\left\lVert e(x,q_{1})-e(x,q_{2})\right\rVert_{W}$ $\displaystyle\leq\nu\left\lVert q_{1}-q_{2}\right\rVert_{P},\ \forall(x,q_{1})\text{ and }(x,q_{2})\in\widetilde{U}_{1}\times\widetilde{U}_{2},$ (2.11a) $\displaystyle\left\lVert{\underline{e^{\prime}(x,q_{1})^{*}}-\underline{e^{\prime}(x,q_{2})^{*}}}\right\rVert_{{\mathcal{L}}(\underline{W^{*}},\underline{X^{*}})}$ $\displaystyle\leq\nu\left\lVert q_{1}-q_{2}\right\rVert_{P},\ \forall(x,q_{1})\text{ and }(x,q_{2})\in\widetilde{U}_{1}\times\widetilde{U}_{2}.$ (2.11b) Let us further set $\underline{E^{*}}=\underline{e^{\prime}(x_{0},q_{0})^{*}}\in{\mathcal{L}}(\underline{W^{*}},\underline{X^{*}})\text{ and }\underline{A}=(\underline{{\mathcal{L}}^{\prime}}(x_{0},q_{0},\lambda_{0}))^{\prime}\in{\mathcal{L}}(\underline{X},\underline{X^{*}}).$ With Assumptions (H1)-(H5) holding, (2.3) can be expressed as $\displaystyle\begin{cases}0\in\underline{{\mathcal{L}}^{\prime}}(x_{0},q_{0},\lambda_{0})+\underline{\partial\mathbf{I}_{C}}(x_{0}),\quad&\text{in }\underline{X^{*}},\\\ e(x_{0},q_{0})=0,\quad&\text{in }W.\end{cases}$ (2.12) Moreover (2.6) restricted to $\underline{X}\times\underline{X^{*}}$ results in: $\displaystyle 0\in\begin{cases}\underline{{\mathcal{L}}^{\prime}}(x_{0},q_{0},\lambda_{0})+\underline{A}(x-x_{0})+\underline{E^{*}}(\lambda-\lambda_{0})+\underline{\partial\mathbf{I}_{C}}(x)&\text{ in }\underline{X^{*}},\\\ e(x_{0},q_{0})+E(x-x_{0})&\text{ in }W,\end{cases}$ (2.13) and the multivalued operator $\displaystyle\underline{{\mathcal{T}}}:\underline{X}\times\underline{W^{*}}\longrightarrow\underline{X^{*}}\times W$ related to (2.7) is defined as $\underline{{\mathcal{T}}}\begin{pmatrix}x\\\ \lambda\end{pmatrix}=\begin{pmatrix}\underline{A}&\underline{E^{*}}\\\ E&0\end{pmatrix}\begin{pmatrix}x\\\ \lambda\end{pmatrix}+\begin{pmatrix}\underline{f^{\prime}}(x_{0})-\underline{A}x_{0}\\\ -Ex_{0}\end{pmatrix}+\begin{pmatrix}\underline{\partial\mathbf{I}_{C}}(x)\\\ 
0\end{pmatrix}.$ (2.14) Observe that (2.13) is equivalent to $\displaystyle 0\in\underline{{\mathcal{T}}}\begin{pmatrix}x\\\ \lambda\end{pmatrix}.$ Existence and Lipschitz continuity of solutions in a neighborhood of $(x_{0},q_{0},\lambda_{0})$ will follow from the strong regularity assumption, which requires us to show that there exist neighborhoods $\hat{V}\subset\underline{X^{*}}\times W$ of $0$ and $\hat{U}=\hat{U}_{1}\times\hat{U}_{2}\subset\underline{X}\times\underline{W^{*}}$ of $(x_{0},\lambda_{0})$ such that $\underline{{\mathcal{T}}}^{-1}$ has the properties that $\underline{{\mathcal{T}}}^{-1}(\hat{V})\cap\hat{U}$ is single-valued and that it is Lipschitz continuous from $\hat{V}$ to $\hat{U}$, see [Don], (and also [Rob], [IK, Definition 2.2, p 31], in case $\underline{X}=X,\ \underline{W^{*}}=W^{*},\ \underline{X^{*}}=X^{*}$). We approach the strong regularity assumption in two steps. In the first one we argue invertibility of ${\mathcal{T}}$ and Lipschitz continuity of the variable $x$ in $X$. For this purpose we exploit the symmetry of ${\mathcal{T}}$ and consider an associated variational problem. In our specific situation the inverse of ${\mathcal{T}}$, and consequently of $\underline{{\mathcal{T}}}$, is single-valued and thus the restriction to the neighborhood $\hat{U}$ is not needed. Existence and Lipschitz continuity of $\lambda$ as well as Lipschitz continuity of $x$ in the small space $\underline{X}\times\underline{W^{*}}$ remains an assumption in the generality of problem ($P_{q}$). It will be verified in a second step for the optimal stabilization problems in the following sections. ###### Assumption H7. For $(\beta_{1},\beta_{2})\in\hat{V}\subset\underline{X^{*}}\times W$, the solution $\displaystyle\left(x_{(\beta_{1},\beta_{2})},\lambda_{(\beta_{1},\beta_{2})}\right)$ to $\displaystyle{\mathcal{T}}\begin{pmatrix}x\\\ \lambda\end{pmatrix}=\begin{pmatrix}\beta_{1}\\\ \beta_{2}\end{pmatrix}$ lies in $\underline{X}\times\underline{W^{*}}$.
Moreover there exists a constant $k>0$ such that $\left\lVert x_{(\beta_{1},\beta_{2})}-x_{(\hat{\beta}_{1},\hat{\beta}_{2})}\right\rVert_{\underline{X}}+\left\lVert\lambda_{(\beta_{1},\beta_{2})}-\lambda_{(\hat{\beta}_{1},\hat{\beta}_{2})}\right\rVert_{\underline{W^{*}}}\leq k\left[\left\lVert(\beta_{1},\beta_{2})-(\hat{\beta}_{1},\hat{\beta}_{2})\right\rVert_{\underline{X^{*}}\times W}+\left\lVert x_{(\beta_{1},\beta_{2})}-x_{(\hat{\beta}_{1},\hat{\beta}_{2})}\right\rVert_{X}\right]$ for all $(\beta_{1},\beta_{2})\in\hat{V},(\hat{\beta}_{1},\hat{\beta}_{2})\in\hat{V}$. This condition is used after the existence of $x_{\beta}=x_{(\beta_{1},\beta_{2})}$ has already been established. Note that for $(\beta_{1},\beta_{2})^{T}=0$ we have $(x_{(0,0)},\lambda_{(0,0)})=(x_{0},\lambda_{0})$ and hence (H7) in particular implies that $(x_{0},\lambda_{0})\in\underline{X}\times\underline{W^{*}}$. We arrive at the announced stability result. ###### Theorem 2.1. Assume that (H1)-(H7) hold at a local solution $x_{0}$ of ($P_{q_{0}}$). Then there exist a neighborhood $U=U(x_{0},\lambda_{0})\subset\underline{X}\times\underline{W^{*}}$, a neighborhood $N=N(q_{0})\subset P$, and a constant $\mu$ such that for all $q\in N$ there exists a unique $(x(q),\lambda(q))\in U$ satisfying $\displaystyle 0\in\begin{cases}\underline{{\mathcal{L}}^{\prime}}(x(q),q,\lambda(q))+\underline{\partial\mathbf{I}_{C}}(x(q)),\quad&\text{ in }\underline{X^{*}},\\\ e(x(q),q),\quad&\text{ in }W,\end{cases}$ (2.15) and $\left\lVert(x(q_{1}),\lambda(q_{1}))-(x(q_{2}),\lambda(q_{2}))\right\rVert_{\underline{X}\times\underline{W^{*}}}\leq\mu\left\lVert q_{1}-q_{2}\right\rVert_{P},\ \forall q_{1},q_{2}\in N.$ (2.16) In addition there exists a nontrivial neighborhood $\widetilde{N}\subset N$ of $q_{0}$ such that $x(q)$ is a local solution of ($P_{q}$) for $q\in\widetilde{N}$. For the proof we shall employ the following lemma, in which $A\in{\mathcal{L}}(X,X^{*})$ and $E\in{\mathcal{L}}(X,W)$ denote generic operators.
For the sake of completeness we also include its proof. ###### Lemma 2.1. Let $(\tilde{a},\tilde{b})\in X^{*}\times W$, assume that $A\in{\mathcal{L}}(X,X^{*})$ is self-adjoint and satisfies (H3), and that the set $\displaystyle S(\tilde{b})=\\{x\in C:\ Ex=\tilde{b}\\}$ is nonempty. Then the problem $\begin{cases}\min_{x\in C}\tilde{J}(x)=\min_{x\in C}\frac{1}{2}\langle Ax,x\rangle_{X^{*},X}+\langle\tilde{a},x\rangle_{X^{*},X},\\\ Ex=\tilde{b},\end{cases}$ (2.17) admits a unique solution $x=x(\tilde{a},\tilde{b})$ satisfying $\langle Ax+\tilde{a},v-x\rangle_{X^{*},X}\geq 0,\ \text{for all }v\in S(\tilde{b}).$ (2.18) If moreover the regular point condition $0\in\text{int }E(C-x(\tilde{a},\tilde{b}))$ holds, then there exists $\lambda=\lambda(\tilde{a},\tilde{b})\in W^{*}$ such that $0\in\begin{cases}\begin{pmatrix}A&E^{*}\\\ E&0\end{pmatrix}\begin{pmatrix}x\\\ \lambda\end{pmatrix}+\begin{pmatrix}\tilde{a}\\\ -\tilde{b}\end{pmatrix}+\begin{pmatrix}\partial\mathbf{I}_{C}(x)\\\ 0\end{pmatrix}.\end{cases}$ (2.19) ###### Proof. Since $C$ is closed and convex, $S(\tilde{b})$ is closed and convex. By assumption $S(\tilde{b})$ is nonempty. Hence there exists an $x\in C$ such that $Ex=\tilde{b}$. Note that $x$ can be uniquely decomposed as $x=w+y$, with $y\in\text{ker }E,\,w\in\text{ker }E^{\perp}$ and $Ew=\tilde{b}$. By (H3) the functional $\tilde{J}$ is bounded from below and coercive on $S(\tilde{b})$. Hence there exists a minimizing sequence $\\{x_{n}\\}$ in $S(\tilde{b})$ such that $\displaystyle\lim_{n\rightarrow\infty}\tilde{J}(x_{n})=\inf_{x\in S(\tilde{b})}\tilde{J}(x)$. Each $x_{n}$ can be decomposed as $x_{n}=w+y_{n}$, with $y_{n}\in\text{ker }E$. By (H3) the sequences $\displaystyle\\{y_{n}\\}_{n=1}^{\infty}$ and hence $\displaystyle\\{x_{n}\\}_{n=1}^{\infty}$ are bounded. Thus there exists a subsequence $\displaystyle\\{x_{n_{k}}\\}$ with weak limit $x=x(\tilde{a},\tilde{b})$ in $S(\tilde{b})$.
Since $\tilde{J}$ is weakly lower semi-continuous, we have that $\displaystyle\tilde{J}(x)\leq\liminf_{k\rightarrow\infty}\tilde{J}(x_{n_{k}})$ and $x$ minimizes $\tilde{J}$ over $S(\tilde{b})$. This further implies that $\displaystyle\langle Ax+\tilde{a},v-x\rangle_{X^{*},X}\geq 0$ for all $v\in S(\tilde{b})$. Uniqueness of $x$ follows from (H3). The regular point condition implies the existence of a multiplier $\lambda=\lambda(\tilde{a},\tilde{b})\in W^{*}$ such that (2.19) holds. See e.g. [IK, Theorem 1.6]. ∎ Proof of Theorem 2.1. 1. (i) The proof of the first assertion of Theorem 2.1 is based on the implicit function theorem of Dontchev for generalized equations, see [Don, Theorem 2.4, Remark 2.5]. We introduce the mapping $\underline{{\mathcal{F}}}:\underline{X}\times P\times\underline{W^{*}}\longrightarrow\underline{X^{*}}\times W$ given by $\underline{{\mathcal{F}}}(x,q,\lambda)=\begin{pmatrix}\underline{{\mathcal{L}}^{\prime}}(x,q,\lambda)\\\ e(x,q)\end{pmatrix},$ and observe that Assumption (H6) implies that for all $(x,q_{1},\lambda)$ and $(x,q_{2},\lambda)\in\widetilde{U}_{1}\times\widetilde{U}_{2}\times\underline{W^{*}}$ $\left\lVert\underline{{\mathcal{F}}}(x,q_{1},\lambda)-\underline{{\mathcal{F}}}(x,q_{2},\lambda)\right\rVert_{\underline{X^{*}}\times W}\leq\nu\left(1+\left\lVert\lambda\right\rVert_{\underline{W^{*}}}\right)\left\lVert q_{1}-q_{2}\right\rVert_{P}.$ (2.20) By (H1) and (H5), and using the integral mean value theorem it can be argued that $\begin{pmatrix}x\\\ \lambda\end{pmatrix}\to\begin{pmatrix}\underline{A}&\underline{E^{*}}\\\ E&0\end{pmatrix}\begin{pmatrix}x\\\ \lambda\end{pmatrix}+\begin{pmatrix}\underline{f^{\prime}}(x_{0})-\underline{A}x_{0}\\\ -Ex_{0}\end{pmatrix}$ strongly approximates $\underline{{\mathcal{F}}}$ at $(x_{0},q_{0},\lambda_{0})$, [Don]. In the next two steps the strong regularity condition for $\underline{{\mathcal{T}}}$ will be verified. 2. (ii) (Existence).
Let, at first, $(\beta_{1},\beta_{2})\in X^{*}\times W$ and consider $\displaystyle{\mathcal{T}}\begin{pmatrix}x\\\ \lambda\end{pmatrix}=\begin{pmatrix}\beta_{1}\\\ \beta_{2}\end{pmatrix}$ which is equivalent to $0\in\begin{pmatrix}a\\\ -b\end{pmatrix}+\begin{pmatrix}A&E^{*}\\\ E&0\end{pmatrix}\begin{pmatrix}x\\\ \lambda\end{pmatrix}+\begin{pmatrix}\partial\mathbf{I}_{C}(x)\\\ 0\end{pmatrix},$ (2.21) with $a=f^{\prime}(x_{0})-Ax_{0}-\beta_{1},\ b=Ex_{0}+\beta_{2}$, and $A,E$ defined in (2.4), (2.5). To solve (2.21) we consider $\begin{cases}\min_{x\in C}\ \frac{1}{2}\langle Ax,x\rangle_{X^{*},X}+\langle a,x\rangle_{X^{*},X},\\\ Ex=b.\end{cases}$ (2.22) This corresponds to (2.17) with $\tilde{a}=a$, $\tilde{b}=b$ and feasible set $\displaystyle S(\beta_{2})=\\{x\in C:Ex=b\\}$. Clearly $\displaystyle x_{0}\in S(0)=\\{x\in C:Ex=Ex_{0}\\}$. By (H2) and [IK, Theorem I.2.8], there exists a neighborhood of the origin $\displaystyle\tilde{V}\subset X^{*}\times W$ such that $S(\beta_{2})$ is nonempty for all $(\beta_{1},\beta_{2})\in\tilde{V}$. Thus by Lemma 2.1 there exists a unique solution $x=x(\beta_{1},\beta_{2})$ to (2.22) for each $(\beta_{1},\beta_{2})\in\tilde{V}$. By [IK, Theorem I.2.11, I.2.12, I.2.15], possibly after reducing $\tilde{V}$, these solutions depend Hölder continuously on $(\beta_{1},\beta_{2})\in\tilde{V}\subset X^{*}\times W$, with exponent $\frac{1}{2}$. The regular point condition for the solution $x(\beta_{1},\beta_{2})$ is $0\in\text{int }E(C-x(\beta_{1},\beta_{2}))=\text{int }E(C-x_{0})-\beta_{2},$ which is satisfied due to (H2), possibly after again reducing $\tilde{V}$. Hence there exists a Lagrange multiplier $\lambda=\lambda(\beta_{1},\beta_{2})$ associated to $Ex=b$, and (2.21) admits a unique solution $(x(\beta_{1},\beta_{2}),\lambda(\beta_{1},\beta_{2}))$ since it is the first order optimality condition for (2.22). 3.
(iii) (Uniqueness and Lipschitz continuity) Let $(\beta_{1},\beta_{2})\in\tilde{V}$ and $(\hat{\beta_{1}},\hat{\beta_{2}})\in\tilde{V}$ with corresponding solutions $(x,\lambda)\in X\times W^{*}$ and $(\hat{x},\hat{\lambda})\in X\times W^{*}$. This implies that $\begin{cases}\langle a+Ax+E^{*}\lambda,c-x\rangle_{X^{*},X}\geq 0,\ \forall c\in C,\\\ Ex=b,\text{ with }a=f^{\prime}(x_{0})-Ax_{0}-\beta_{1},\ b=Ex_{0}+\beta_{2},\end{cases}$ (2.23) and $\begin{cases}\langle\hat{a}+A\hat{x}+E^{*}\hat{\lambda},c-\hat{x}\rangle_{X^{*},X}\geq 0,\ \forall c\in C,\\\ E\hat{x}=\hat{b},\text{ with }\hat{a}=f^{\prime}(x_{0})-Ax_{0}-\hat{\beta_{1}},\ \hat{b}=Ex_{0}+\hat{\beta_{2}}.\end{cases}$ (2.24) By the first equations in (2.23) and (2.24) we obtain that $\langle a+Ax+E^{*}\lambda,\hat{x}-x\rangle_{X^{*},X}\geq 0,\ \langle\hat{a}+A\hat{x}+E^{*}\hat{\lambda},x-\hat{x}\rangle_{X^{*},X}\geq 0,\ x,\hat{x}\in C.$ (2.25) Combining these inequalities, we have that $\langle a-\hat{a}+A(x-\hat{x})+E^{*}(\lambda-\hat{\lambda}),x-\hat{x}\rangle_{X^{*},X}\leq 0.$ (2.26) The second equalities in (2.23) and (2.24) imply that $E(x-\hat{x})=b-\hat{b}.$ (2.27) Let us set $\delta x=\hat{x}-x,\ \delta\lambda=\hat{\lambda}-\lambda,\ \delta a=\hat{a}-a,\ \delta b=\hat{b}-b.$ Then $\delta\beta_{1}=\hat{\beta_{1}}-\beta_{1}=-\delta a$ and $\delta\beta_{2}=\hat{\beta_{2}}-\beta_{2}=\delta b$, and (2.26), (2.27) result in $\langle\delta x,A\delta x\rangle_{X,X^{*}}+\langle\delta\lambda,E\delta x\rangle_{W^{*},W}-\langle\delta\beta_{1},\delta x\rangle_{X^{*},X}\leq 0,$ (2.28) and $E\delta x=\delta b.$ (2.29) By (H2) the operator $E$ is surjective. Hence, by the closed range theorem, we may express $\delta x=\delta v+\delta w\in\text{ker }E+\text{range }E^{*}$, which implies that $\displaystyle E\delta x=E\delta w=\delta\beta_{2}$.
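In finite dimensions the splitting used here is explicit: with Euclidean inner products, the range-$E^{T}$ component of $\delta x$ is $\delta w=E^{T}(EE^{T})^{-1}E\,\delta x$, so that $E\delta w=E\delta x$ and $\left\lVert\delta w\right\rVert\leq k_{1}\left\lVert E\delta x\right\rVert$ with $k_{1}=(EE^{T})^{-1/2}$ when $W$ is one-dimensional. A minimal sketch (data invented; $X=\mathbb{R}^{2}$, $W=\mathbb{R}$):

```python
import math

# Invented toy data: X = R^2, W = R, E = [1 2] is surjective.
E = (1.0, 2.0)
EEt = E[0] ** 2 + E[1] ** 2  # E E^T = 5

def decompose(dx):
    """Split dx = dv + dw with dv in ker E and dw in range E^T."""
    Edx = E[0] * dx[0] + E[1] * dx[1]
    dw = (E[0] * Edx / EEt, E[1] * Edx / EEt)   # minimal-norm preimage of E dx
    dv = (dx[0] - dw[0], dx[1] - dw[1])
    return dv, dw, Edx

dx = (0.7, -0.4)
dv, dw, dbeta2 = decompose(dx)
print(E[0] * dv[0] + E[1] * dv[1])  # E dv = 0 up to rounding
k1 = 1.0 / math.sqrt(EEt)
print(math.hypot(*dw) <= k1 * abs(dbeta2) + 1e-12)  # True: the bound (2.30)
```

Here $\delta w$ realizes the constant $k_{1}$ of the closed range theorem as the norm of the right inverse $E^{T}(EE^{T})^{-1}$.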
Again by the closed range theorem there exists $k_{1}>0$: $\left\lVert\delta w\right\rVert_{X}\leq k_{1}\left\lVert\delta\beta_{2}\right\rVert_{W}.$ (2.30) From the first equation in (2.21) we have $Ax+E^{*}\lambda- Ax_{0}+f^{\prime}(x_{0})-\beta_{1}\in-\partial{\mathbf{I}}_{C}(x).$ Next we restrict the perturbation parameters to satisfy $(\beta_{1},\beta_{2})\in(\underline{X^{*}}\times W)\cap\tilde{V}$. Due to Assumptions (H4) and (H7) we have $(x,\lambda)\in\underline{X}\times\underline{W^{*}}$, $\underline{A}x+{\underline{E^{*}}}\lambda-\underline{A}x_{0}+\underline{f^{\prime}}(x_{0})-\beta_{1}\in\underline{X^{*}}$ and hence $\underline{A}x+{\underline{E^{*}}}\lambda-\underline{A}x_{0}+\underline{f^{\prime}}(x_{0})-\beta_{1}\in-\underline{\partial\mathbf{I}_{C}}(x).$ The analogous equation holds with $(x,\lambda,\beta_{1})$ replaced by $(\hat{x},\hat{\lambda},\hat{\beta}_{1})$. By (H3), (2.28) and Assumption (H7) we find $\displaystyle\kappa\left\lVert\delta v\right\rVert^{2}$ $\displaystyle\leq\langle\delta v,A\delta v\rangle_{X,X^{*}}=\langle\delta x,A\delta x\rangle_{X,X^{*}}-2\langle\delta v,A\delta w\rangle_{X,X^{*}}-\langle\delta w,A\delta w\rangle_{X,X^{*}}$ (2.31) $\displaystyle\leq-\langle\delta\lambda,E\delta x\rangle_{W^{*},W}+\langle\delta\beta_{1},\delta x\rangle_{X,X^{*}}-2\langle\delta v,A\delta w\rangle_{X,X^{*}}-\langle\delta w,A\delta w\rangle_{X,X^{*}}$ $\displaystyle\leq\tilde{k}\left\lVert\delta\lambda\right\rVert_{\underline{W}^{*}}\left\lVert\delta\beta_{2}\right\rVert_{W}+\left\lVert\delta\beta_{1}\right\rVert_{X^{*}}\left\lVert\delta x\right\rVert_{X}+\left\lVert A\right\rVert\left\lVert\delta w\right\rVert_{X}\left(2\left\lVert\delta v\right\rVert_{X}+\left\lVert\delta w\right\rVert_{X}\right)$ $\displaystyle\leq\tilde{k}k(\left\lVert\delta w\right\rVert_{X}+\left\lVert\delta v\right\rVert_{X}+\left\lVert\delta\beta\right\rVert_{\underline{X}^{*}\times 
W})\left\lVert\delta\beta_{2}\right\rVert_{W}+(\left\lVert\delta\beta_{1}\right\rVert_{X^{*}}+2\left\lVert A\right\rVert\left\lVert\delta w\right\rVert_{X})(\left\lVert\delta v\right\rVert_{X}+\left\lVert\delta w\right\rVert_{X}),$ where $\tilde{k}$ denotes the embedding constant of $\underline{W}^{*}$ into $W^{*}$. Using (2.30) and rearranging terms there exists a constant $k_{2}>0$ such that $\left\lVert\delta v\right\rVert_{X}\leq k_{2}\left(\left\lVert\delta\beta_{1}\right\rVert_{\underline{X}^{*}}+\left\lVert\delta\beta_{2}\right\rVert_{W}\right).$ (2.32) Applying (2.30) again this implies the existence of $k_{3}$ such that $\left\lVert\delta x\right\rVert_{X}\leq k_{3}\left(\left\lVert\delta\beta_{1}\right\rVert_{\underline{X}^{*}}+\left\lVert\delta\beta_{2}\right\rVert_{W}\right)\text{ for all }(\beta_{1},\beta_{2})\in(\underline{X^{*}}\times W)\cap\tilde{V}.$ (2.33) Another application of (H7) and (2.33) implies the existence of a constant $k_{4}$ and a neighborhood $\hat{V}$ of the origin in $\underline{X^{*}}\times W$ such that the desired Lipschitz stability estimate for $(\underline{{\mathcal{T}}})^{-1}$ $\left\lVert\delta x\right\rVert_{\underline{X}}+\left\lVert\delta\lambda\right\rVert_{\underline{W^{*}}}\leq k_{4}\left(\left\lVert\delta\beta_{1}\right\rVert_{\underline{X^{*}}}+\left\lVert\delta\beta_{2}\right\rVert_{W}\right)\text{ for all }(\beta_{1},\beta_{2})\in\hat{V}\subset\underline{X^{*}}\times W$ (2.34) holds. 4. (iv) As a consequence of the previous two steps $\underline{{\mathcal{T}}}$ is strongly regular at $(x_{0},q_{0},\lambda_{0})$. Together with step (i), Dontchev’s theorem is applicable [Don, Theorem 2.4, Remark 2.5], and (2.15) and (2.16) follow. 5. (v) (Local solution to ($P_{q}$)) Now we show that there exists a neighborhood $\tilde{N}$ of $q_{0}$ such that for $q\in\tilde{N}$ the second order sufficient optimality condition is satisfied at $x(q)$, so that $x(q)$ is a local solution of ($P_{q}$) by e.g. [IK, Theorem 2.12, p42].
Due to (H3) and regularity of $f,e$ we obtain ${\mathcal{L}}^{\prime\prime}(x(q),q,\lambda(q))(h,h)\geq\frac{\kappa}{2}\left\lVert h\right\rVert^{2},\ \text{for all }h\in\text{ker }E,\ \text{if }q\in N(q_{0}).$ (2.35) Let us define $\displaystyle E_{q}=e_{x}(x(q),q)$ for $q\in N(q_{0})$. By the surjectivity of $E_{q_{0}}$ and regularity of $e$ there exists a neighborhood $\tilde{N}\subset N(q_{0})$ such that $E_{q}$ is surjective for all $q\in\tilde{N}$. There exist $\delta_{0},\gamma>0$ such that ${\mathcal{L}}^{\prime\prime}(x(q),q,\lambda(q))(h+z,h+z)\geq\delta_{0}\left\lVert h+z\right\rVert^{2},\ \text{for all }h\in\text{ker }E,z\in X$ (2.36) satisfying $\left\lVert z\right\rVert\leq\gamma\left\lVert h\right\rVert$ by [IK, Lemma 2.13, p43]. Let us define the orthogonal projection onto ker$\ E_{q}$ given by $\displaystyle P_{\text{ker }E_{q}}=I-E_{q}^{*}(E_{q}E_{q}^{*})^{-1}E_{q}$. We choose $\tilde{N}$ so that $\left\lVert E_{q}^{*}(E_{q}E_{q}^{*})^{-1}E_{q}-E_{q_{0}}^{*}(E_{q_{0}}E_{q_{0}}^{*})^{-1}E_{q_{0}}\right\rVert\leq\frac{\gamma}{1+\gamma}$ for all $q\in\tilde{N}$. For $x\in\text{ker }E_{q}$, we have $x=h+z$ for $h\in\text{ker }E,\ z\in(\text{ker }E)^{\perp}$ and $\left\lVert x\right\rVert^{2}=\left\lVert h\right\rVert^{2}+\left\lVert z\right\rVert^{2}$. Thus, $\left\lVert z\right\rVert\leq\left\lVert E_{q}^{*}(E_{q}E_{q}^{*})^{-1}E_{q}x-E_{q_{0}}^{*}(E_{q_{0}}E_{q_{0}}^{*})^{-1}E_{q_{0}}x\right\rVert\leq\frac{\gamma}{1+\gamma}\big{(}\left\lVert h\right\rVert+\left\lVert z\right\rVert\big{)}$ and hence $\left\lVert z\right\rVert\leq\gamma\left\lVert h\right\rVert$. From (2.36) this implies ${\mathcal{L}}^{\prime\prime}(x(q),q,\lambda(q))(x,x)\geq\delta_{0}\left\lVert x\right\rVert^{2},\ \text{for all }x\in\text{ker }E_{q}.$ This concludes the proof. ∎ ## 3 Differentiability of the value function for optimal stabilization subject to semi-linear parabolic equations. Here we describe the optimal control problems which we shall analyze and state the main results.
### 3.1 Notation Let $\Omega$ be an open connected bounded subset of $\mathbb{R}^{d}$ with a Lipschitz continuous boundary $\Gamma$. The associated space-time cylinder is denoted by $Q=\Omega\times(0,\infty)$ and the associated lateral boundary by $\Sigma=\Gamma\times(0,\infty)$. We define the Hilbert spaces $Y=L^{2}(\Omega),\quad V=H^{1}_{0}(\Omega),\text{ and }\quad U=L^{2}(0,\infty;\,{\mathcal{U}}),$ where ${\mathcal{U}}$ is a Hilbert space which will be identified with its dual. Observe that the embedding $V\subset Y$ is dense and compact. Further $V\subset Y\subset V^{*}$ is a Gelfand triple. Here $V^{*}$ denotes the topological dual of $V$ with respect to the pivot space $Y$. For any $T\in(0,\infty)$ we define the space $W(0,T)=\bigg{\\{}y\in L^{2}(0,T;V);\ \frac{dy}{dt}\in L^{2}(0,T;V^{*})\bigg{\\}},$ and for $T=\infty$, we write $W_{\infty}$ and $I=(0,\infty)$. We further set $W^{0}_{\infty}=\\{y\in W_{\infty}:y(0)=0\\}$. We also set $W(T,\infty)=\bigg{\\{}y\in L^{2}(T,\infty;V);\ \frac{dy}{dt}\in L^{2}(T,\infty;V^{*})\bigg{\\}}.$ We shall frequently use that $W_{\infty}$ embeds continuously into $C([0,\infty),Y)$, see e.g. [LM, Theorem 4.2], and that $\displaystyle\lim_{t\to\infty}y(t)=0$ for $y\in W_{\infty}$, see e.g. [CK]. The set of admissible controls $U_{ad}$ is chosen to be $U_{ad}\subset\\{u\in U:\|u(t)\|_{{\mathcal{U}}}\leq\eta,\text{ for a.e. }t>0\\}$ (3.1) where $\eta$ is a positive constant. We further set ${\mathcal{U}}_{ad}=\\{v\in{\mathcal{U}}:\|v\|_{{\mathcal{U}}}\leq\eta\\}$ and denote by $\displaystyle\mathbb{P}_{{\mathcal{U}}_{ad}}$ the projection of ${\mathcal{U}}$ onto ${\mathcal{U}}_{ad}$. For this choice of admissible controls the dynamical system can be stabilized for all sufficiently small initial conditions in $Y$, see Corollary 4.3 and Remark 4.1.
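For the ball ${\mathcal{U}}_{ad}$ the projection $\mathbb{P}_{{\mathcal{U}}_{ad}}$ has the explicit radial form $v\mapsto\min(1,\eta/\left\lVert v\right\rVert_{{\mathcal{U}}})\,v$. A minimal sketch with ${\mathcal{U}}=\mathbb{R}^{2}$ (data invented for illustration):

```python
import math

def project_ball(v, eta):
    """Projection of v (here in U = R^2) onto the closed ball of radius
    eta: return v if ||v|| <= eta, else eta * v / ||v||."""
    n = math.hypot(*v)
    if n <= eta:
        return v
    s = eta / n
    return (s * v[0], s * v[1])

eta = 1.0
print(project_ball((0.3, 0.4), eta))   # inside the ball: returned unchanged
print(project_ball((3.0, 4.0), eta))   # outside: rescaled onto the boundary
```

Applied pointwise in $t$, this realizes $\mathbb{P}_{{\mathcal{U}}_{ad}}$ for the constraint set (3.1).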
For $\delta>0$ and $\bar{y}\in Y$, we define the open neighborhoods $B_{Y}(\delta)=\left\\{y\in Y:\left\lVert y\right\rVert_{Y}<\delta\right\\},$ and $\quad B_{Y}(\bar{y},\delta)=\left\\{y\in Y:\left\lVert y-\bar{y}\right\rVert_{Y}<\delta\right\\}$. ### 3.2 Problem formulation and assumptions. We focus on the stabilization problem for an abstract semi-linear parabolic equation formulated as infinite horizon optimal control problem under control constraints: $\displaystyle(\mathcal{P})\qquad\mathcal{V}(y_{0})=\min_{(y,u)\in W_{\infty}\times U_{ad}}\ J(y,u)$ $\displaystyle=\min_{(y,u)\in W_{\infty}\times U_{ad}}\ \frac{1}{2}\int_{0}^{\infty}\left\lVert y(t)\right\rVert^{2}_{Y}dt+\frac{\alpha}{2}\int_{0}^{\infty}\left\lVert u(t)\right\rVert^{2}_{{\mathcal{U}}}dt,$ (3.2a) subject to the semilinear parabolic equation $\displaystyle y_{t}$ $\displaystyle={\mathcal{A}}y+{\mathcal{F}}(y)+Bu\quad\text{ in }L^{2}(I;V^{*})$ (3.2b) $\displaystyle y(0)$ $\displaystyle=y_{0}\quad\text{ in }Y.$ (3.2c) Throughout ${\mathcal{F}}$ is the substitution operator associated to a mapping $\mathfrak{f}:\mathbb{R}\to\mathbb{R}$ so that $({\mathcal{F}}y)(t)=\mathfrak{f}(y(t))$. Sufficient conditions which guarantee the existence of solutions to (3.2b), (3.2c), as well as of solutions $(\bar{y},\bar{u})$ to ($\mathcal{P}$), for $y_{0}\in Y$ sufficiently small, will be given below. We shall also make use of the adjoint equation associated to an optimal state $\bar{y}$, given by $-p_{t}-{\mathcal{A}}^{*}p-{\mathcal{F}}^{\prime}(\bar{y})^{*}p=-\bar{y}\quad\text{ in }L^{2}(I;V^{*}).$ (3.2d) The adjoint state $p$ will be considered in $L^{2}(I;V)$ or in $W_{\infty}$. The following assumptions will be essential. #### 3.2.1 Assumptions A. * A1 The operator ${\mathcal{A}}$ with domain ${\mathcal{D}}({\mathcal{A}})\subset Y$ and range in $Y$, generates a strongly continuous analytic semigroup $\displaystyle e^{{\mathcal{A}}t}$ on $Y$ and can be extended to ${\mathcal{A}}\in{\mathcal{L}}(V,V^{*})$.
* A2 $B\in{\mathcal{L}}({\mathcal{U}},Y)$ and there exists a stabilizing feedback operator $K\in{\mathcal{L}}(Y,{\mathcal{U}})$ such that the semigroup $\displaystyle e^{({\mathcal{A}}-BK)t}$ is exponentially stable on $Y$. * A3 The nonlinearity $\displaystyle{\mathcal{F}}:W_{\infty}\to L^{2}(I;V^{*})$ is twice continuously Fréchet differentiable, with second Fréchet derivative ${\mathcal{F}}^{\prime\prime}$ bounded on bounded subsets of $W_{\infty}$, and ${\mathcal{F}}(0)=0$. * A4 ${\mathcal{F}}:W(0,T)\to L^{1}(0,T;{\mathcal{H}}^{*})$ is weak-to-weak continuous for every $T>0$, for some Hilbert space ${\mathcal{H}}$ which embeds densely in $V$. Note that $\displaystyle\left(L^{1}(0,T;{\mathcal{H}}^{*})\right)^{*}=L^{\infty}(0,T;{\mathcal{H}})$, see [Emm, Theorem 7.1.23(iv), p 164]. Moreover, $L^{\infty}(0,T;{\mathcal{H}})$ is dense in $L^{2}(0,T;V)$, see [MS, Lemma A.1, p 2231]. * A5 ${\mathcal{F}}^{\prime}(\bar{y})\in{\mathcal{L}}{(L^{2}(I;V),L^{2}(I;V^{*}))}$. ###### Remark 3.1. The requirement that $\mathcal{F}(0)=0$ in (A3) is consistent with the fact that we focus on the stabilization problem with $0$ as steady state for (3.2b). Without loss of generality we further assume that $\mathcal{F}^{\prime}(0)=0,$ (3.3) which can always be achieved by absorbing $\mathcal{F}^{\prime}(0)$ into $\mathcal{A}$ as a perturbation. ###### Remark 3.2. Let us assume that (A3) holds. Then in view of the fact that ${\mathcal{F}}$ is a substitution operator we have $[{\mathcal{F}}^{\prime}(y)v](t)=\mathfrak{f}^{\prime}(y(t))v(t)$ for $y$ and $v$ in $W_{\infty}$, and ${\mathcal{F}}^{\prime}(y)\in{\mathcal{L}}(W_{\infty},L^{2}(I;V^{*}))$. Its adjoint satisfies $[{\mathcal{F}}^{\prime}(y)^{*}v](t)=\mathfrak{f}^{\prime}(y(t))v(t)$ for $v\in L^{2}(I;V)$, with ${\mathcal{F}}^{\prime}(y)^{*}\in{\mathcal{L}}(L^{2}(I;V),W_{\infty}^{*})$. It has a natural restriction to an operator $\underline{{\mathcal{F}}^{\prime}(y)^{*}}\in{\mathcal{L}}(W_{\infty},L^{2}(I;V^{*}))$.
With (A3) holding it is differentiable and $[\underline{{\mathcal{F}}^{\prime}(y)}^{*}]^{\prime}$ is a bilinear mapping on $W_{\infty}\times W_{\infty}$ with values in $L^{2}(I;V^{*})$. #### 3.2.2 Abstract setup. Here we relate problem ($\mathcal{P}$) to the abstract problem ($P_{q}$), which is used with the following spaces: $\begin{array}[]{l}X=W_{\infty}\times U,\;W=L^{2}(I;V^{*})\times Y,\;P=Y,\;C=U_{ad},\;X^{*}=W_{\infty}^{*}\times U,\;W^{*}=L^{2}(I;V)\times Y,\\\\[7.3194pt] \underline{X}=W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}})),\quad\underline{X^{*}}=L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}})),\quad\underline{W^{*}}=\widetilde{W}_{\infty},\end{array}$ (3.4) where $\widetilde{W}_{\infty}=\\{(\varphi,\varphi(0)):\varphi\in W_{\infty}\\}$, endowed with the norm of $W_{\infty}$. At times we identify $\widetilde{W}_{\infty}$ with $W_{\infty}$. We recall that the dual space of $\displaystyle W_{\infty}=L^{2}(I;V)\cap W^{1,2}(I;V^{*})$ is $\displaystyle W^{*}_{\infty}=L^{2}(I;V^{*})+(W^{1,2}(I;V^{*}))^{*}$, endowed with the norm $\displaystyle\left\lVert z\right\rVert_{W^{*}_{\infty}}=\inf_{z=z_{1}+z_{2}}\left\lVert z_{1}\right\rVert_{L^{2}(I;V^{*})}+\left\lVert z_{2}\right\rVert_{W^{1,2}(I;V^{*})^{*}}$, where $\displaystyle z_{1}\in L^{2}(I;V^{*}),z_{2}\in(W^{1,2}(I;V^{*}))^{*}$. To express ($P_{q}$) for the present case, we set $x=(y,u)\in W_{\infty}\times U$, and the parameter $q$ becomes the initial condition $y_{0}\in Y$.
Further $f:W_{\infty}\times U\longrightarrow\mathbb{R}$ is given by $f(y,u)=\frac{1}{2}\int_{0}^{\infty}\left\lVert y(t)\right\rVert^{2}_{Y}dt+\frac{\alpha}{2}\int_{0}^{\infty}\left\lVert u(t)\right\rVert^{2}_{{\mathcal{U}}}dt,$ (3.5) and $e(x,q)=e(y,u,y_{0})$ is $e(y,u,y_{0})=\begin{pmatrix}y_{t}-{\mathcal{A}}y-{\mathcal{F}}(y)-Bu\\\ y(0)-y_{0}\end{pmatrix}:W_{\infty}\times U\times Y\longrightarrow L^{2}(I;V^{*})\times Y.$ (3.6) By (A3) the mapping $e$ is Fréchet differentiable with respect to $\displaystyle x=(y,u)\in W_{\infty}\times U$ and thus for $(y,u,y_{0})\in W_{\infty}\times U\times Y$ we have $e^{\prime}(y,u,y_{0})(v,w)=\begin{pmatrix}v_{t}-{\mathcal{A}}v-{\mathcal{F}}^{\prime}(y)v-Bw\\\ v(0,\cdot)\end{pmatrix}:W_{\infty}\times U\longrightarrow L^{2}(I;V^{*})\times Y.$ (3.7) The Lagrange functional ${\mathcal{L}}:W_{\infty}\times U\times Y\times L^{2}(I;V)\times Y\longrightarrow\mathbb{R}$ corresponding to our optimal control problem is given by ${\mathcal{L}}(y,u,y_{0},p,p_{1})=J(y,u)+\int_{0}^{\infty}\langle p,y_{t}-{\mathcal{A}}y-{\mathcal{F}}(y)-Bu\rangle_{V,V^{*}}dt+(p_{1},y(0)-y_{0})_{Y},$ where $(p,p_{1})\in L^{2}(I;V)\times Y$ corresponds to the abstract Lagrange multiplier $\lambda\in W^{*}$. In the remainder of this subsection we specify the mappings ${\mathcal{T}}$ and $\underline{{\mathcal{T}}}$ for problem ($\mathcal{P}$). This will facilitate the proofs of the main results further below. At first we take a closer look at the adjoint $\displaystyle\widetilde{E}^{*}:=e^{\prime}(y,u,y_{0})^{*}\in{\mathcal{L}}(L^{2}(I;V)\times Y,W^{*}_{\infty}\times U)$ at a generic element $(y,u,y_{0})\in W_{\infty}\times U\times Y$.
It is characterized by the property that for all $(v,w)\in W_{\infty}\times U,\ (p,p_{1})\in L^{2}(I;V)\times Y$ we have $\langle\widetilde{E}(v,w),(p,p_{1})\rangle_{L^{2}(I;V^{*})\times Y,L^{2}(I;V)\times Y}=\langle v,\widetilde{E}_{1}^{*}(p,p_{1})\rangle_{W_{\infty},W^{*}_{\infty}}+(w,\widetilde{E}_{2}^{*}(p,p_{1}))_{U}$ where $\langle v,\widetilde{E}_{1}^{*}(p,p_{1})\rangle_{W_{\infty},W^{*}_{\infty}}=\langle v_{t}-{\mathcal{A}}v-{\mathcal{F}}^{\prime}(y)v,p\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}+(v(0),p_{1})_{Y},$ and $(w,\widetilde{E}_{2}^{*}(p,p_{1}))_{U}=-(w,B^{*}p)_{U}.$ If for some $\widetilde{\beta}_{1}\in L^{2}(I;V^{*})$ the pair $(p,p_{1})\in L^{2}(I;V)\times Y$ is a solution to $\widetilde{E}_{1}^{*}(p,p_{1})=\widetilde{\beta}_{1}$ then for all $v\in W_{\infty}$: $\langle v_{t}-{\mathcal{A}}v-{\mathcal{F}}^{\prime}(y)v,p\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}+(v(0),p_{1})_{Y}=\langle\widetilde{\beta}_{1},v\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}.$ (3.8) Now we assume that ${\mathcal{F}}^{\prime}(y)$ is not only an element of $\displaystyle{\mathcal{L}}(W_{\infty},L^{2}(I;V^{*}))$ but rather that it can be extended to an operator ${{\mathcal{F}}^{\prime}(y)}\in{\mathcal{L}}(L^{2}(I;V),L^{2}(I;V^{*}))$. This is guaranteed by (A5) at minimizers $\bar{y}$. Then (3.8) implies that $p\in W_{\infty}$, and $p_{1}=p(0)$. In particular $(p,p_{1})=(p,p(0))\in\widetilde{W}_{\infty}$, and (3.8) can equivalently be expressed as $\langle v,\widetilde{E}_{1}^{*}(p,p_{1})\rangle_{L^{2}(I;V),L^{2}(I;V^{*})}=\langle v_{t}-{\mathcal{A}}v-{\mathcal{F}}^{\prime}(y)v,p\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}=\langle v,\widetilde{\beta}_{1}\rangle_{L^{2}(I;V),L^{2}(I;V^{*})},$ (3.9) for all $v\in L^{2}(I;V)$, where we assumed that $\widetilde{\beta}_{1}\in L^{2}(I;V^{*})$. Conversely, of course, if $p\in\widetilde{W}_{\infty}$, then $\displaystyle\widetilde{E}^{*}_{1}(p,p(0))=-p_{t}-{\mathcal{A}}^{*}p-{\mathcal{F}}^{\prime}(y)^{*}p\in L^{2}(I;V^{*})$.
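The identification $\widetilde{E}^{*}_{1}(p,p(0))=-p_{t}-{\mathcal{A}}^{*}p-{\mathcal{F}}^{\prime}(y)^{*}p$ rests on integration by parts in time, with the boundary contribution $(v(0),p_{1})_{Y}$. As a minimal discrete sketch (invented data), summation by parts on a grid mirrors the identity $\langle v_{t},p\rangle=-\langle v,p_{t}\rangle+$ boundary terms:

```python
import random

# Discrete integration by parts (toy illustration, invented data):
# sum_{i=0}^{N-1} (v_{i+1}-v_i) p_i
#   = v_N p_{N-1} - v_0 p_0 - sum_{i=1}^{N-1} v_i (p_i - p_{i-1}),
# the discrete analogue of <v_t, p> = -<v, p_t> - (v(0), p(0)) for
# functions decaying at the right endpoint.
random.seed(0)
N = 50
v = [random.uniform(-1, 1) for _ in range(N + 1)]
p = [random.uniform(-1, 1) for _ in range(N + 1)]

lhs = sum((v[i + 1] - v[i]) * p[i] for i in range(N))
rhs = (v[N] * p[N - 1] - v[0] * p[0]
       - sum(v[i] * (p[i] - p[i - 1]) for i in range(1, N)))
print(abs(lhs - rhs) < 1e-12)  # True: the identity holds exactly
```

The term $-v_{0}p_{0}$ is the discrete counterpart of the pairing $(v(0),p_{1})_{Y}$ that forces $p_{1}=p(0)$ above.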
From now on, let $q_{0}=\bar{y}_{0}$ denote a reference (or nominal) parameter with associated solution $x_{0}=(\bar{y},\bar{u})$. In Proposition 4.1 we shall argue that the regular point condition Assumption (H2) is satisfied and that consequently there exists a Lagrange multiplier $(\bar{p},\bar{p}_{1})$ such that the pair $(x_{0},\lambda_{0})=(\bar{y},\bar{u},\bar{p},\bar{p}_{1})$ satisfies (2.3). Moreover, it will turn out that $\bar{p}\in W_{\infty}$, $\bar{p}_{1}=\bar{p}(0)$, and that $\bar{u}\in U\cap C(I;{\mathcal{U}})$. For convenience let us present (2.3) for the present case $\displaystyle 0\in\begin{cases}\bar{y}+E^{*}_{1}(\bar{p},\bar{p}(0)),\\\ \alpha\bar{u}-B^{*}\bar{p}+\partial\mathbf{I}_{U_{ad}}(\bar{u}),\\\ \bar{y}_{t}-{\mathcal{A}}\bar{y}-{\mathcal{F}}(\bar{y})-B\bar{u},\\\ \bar{y}(0)-\bar{y}_{0},\end{cases}$ (3.10) where $\displaystyle E=\begin{pmatrix}E_{1}\\\ E_{2}\end{pmatrix}=e^{\prime}(\bar{y},\bar{u},\bar{y}_{0})$. We stress that while the Lagrange multiplier $\bar{p}\in W_{\infty}$, the operator $E^{*}_{1}$ in (3.9) is still considered as an element of ${\mathcal{L}}(L^{2}(I;V)\times Y,W^{*}_{\infty})$.
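For the ball constraint, the second row of (3.10) is equivalent to the pointwise projection formula $\bar{u}(t)=\mathbb{P}_{{\mathcal{U}}_{ad}}\big(\frac{1}{\alpha}B^{*}\bar{p}(t)\big)$. A minimal finite-dimensional sketch (${\mathcal{U}}=\mathbb{R}^{2}$, data invented for illustration):

```python
import math

def optimal_control(Bstar_p, alpha, eta):
    """From 0 in alpha*u - B*p + dI_{U_ad}(u): the pointwise value is
    u(t) = P_{U_ad}(B*p(t)/alpha), i.e. the scaled adjoint clipped to
    the ball of radius eta."""
    v = tuple(c / alpha for c in Bstar_p)
    n = math.hypot(*v)
    return v if n <= eta else tuple(eta * c / n for c in v)

alpha, eta = 0.5, 1.0
u = optimal_control((0.3, 0.0), alpha, eta)   # norm 0.6 <= 1: unconstrained
print(u)  # (0.6, 0.0)
u = optimal_control((3.0, 4.0), alpha, eta)   # norm 10 > 1: clipped
print(u)  # (0.6, 0.8)
```

When the constraint is inactive this reduces to the unconstrained relation $\alpha\bar{u}=B^{*}\bar{p}$; when active, the control sits on the boundary of ${\mathcal{U}}_{ad}$.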
We are now prepared to specify the multivalued operators $\displaystyle{\mathcal{T}}:$ $\displaystyle W_{\infty}\times U\times L^{2}(I;V)\times Y\longrightarrow W_{\infty}^{*}\times U\times L^{2}(I;V^{*})\times Y,\quad\text{and}$ (3.11) $\displaystyle\underline{{\mathcal{T}}}:$ $\displaystyle W_{\infty}\times(U\cap C(I;{\mathcal{U}}))\times\widetilde{W}_{\infty}\longrightarrow L^{2}(I;V^{*})\times(U\cap C(I;{\mathcal{U}}))\times L^{2}(I;V^{*})\times Y$ (3.12) corresponding to (2.7) and (2.14) by ${\mathcal{T}}\begin{pmatrix}y\\\ u\\\ p\\\ p_{1}\end{pmatrix}=\begin{pmatrix}E^{*}_{1}(p,p_{1})+y-[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}(y-\bar{y})\\\ \alpha u-B^{*}p\\\ y_{t}-{\mathcal{A}}y-Bu-{\mathcal{F}}^{\prime}(\bar{y})(y-\bar{y})-{\mathcal{F}}(\bar{y})\\\ y(0)-{\bar{y}_{0}}\end{pmatrix}+\begin{pmatrix}0\\\ \partial\mathbf{I}_{U_{ad}}(u)\\\ 0\\\ 0\end{pmatrix},$ (3.13) and $\underline{{\mathcal{T}}}\begin{pmatrix}y\\\ \underline{u}\\\ \underline{p}\\\ \underline{p}(0)\end{pmatrix}=\begin{pmatrix}-\underline{p}_{t}-{\mathcal{A}}^{*}\underline{p}-\underline{{\mathcal{F}}^{\prime}(\bar{y})^{*}}\,\underline{p}+y-[\underline{{\mathcal{F}}^{\prime}(\bar{y})^{*}}\,\underline{\bar{p}}]^{\prime}(y-\bar{y})\\\ \alpha\underline{u}-B^{*}\underline{p}\\\ y_{t}-{\mathcal{A}}y-B\underline{u}-{\mathcal{F}}^{\prime}(\bar{y})(y-\bar{y})-{\mathcal{F}}(\bar{y})\\\ y(0)-{\bar{y}_{0}}\end{pmatrix}+\begin{pmatrix}0\\\ \underline{\partial\mathbf{I}_{U_{ad}}}(\underline{u})\\\ 0\\\ 0\end{pmatrix}.$ (3.14) where $\underline{\partial\mathbf{I}_{U_{ad}}}(\underline{u})=\left\\{\widetilde{u}\in U\cap C(I;{\mathcal{U}}):\ (\widetilde{u}(t),v-\underline{u}(t))_{\mathcal{U}}\leq 0,\ \forall t\in I,\ v\in B_{\eta}(0)\right\\},$ (3.15) with $\displaystyle B_{\eta}(0)=\left\\{v\in{\mathcal{U}};\left\lVert v\right\rVert_{{\mathcal{U}}}\leq\eta\right\\}$. In (3.14), we underline the elements which are taken from different domains when compared to (3.13). 
The range of the first two coordinates of $\underline{{\mathcal{T}}}$ is smaller than that of ${\mathcal{T}}$. Accordingly we can make use of (3.9) when moving from the first row of (3.13) to the first row of (3.14). For convenience, for the subsequent work, we recall that the strong regularity condition introduced below (2.14) requires us to find neighborhoods of $0$ and $(\bar{y},\bar{u},\bar{p},\bar{p}(0))$ of the form $\hat{V}\subset L^{2}(I;V^{*})\times(U\cap C(\overline{I};{\mathcal{U}}))\times L^{2}(I;V^{*})\times Y$ and $\hat{U}\subset W_{\infty}\times(U\cap C(\overline{I};{\mathcal{U}}))\times\widetilde{W}_{\infty}$, such that for all $\boldsymbol{\beta}=(\beta_{1},\beta_{2},\beta_{3},\beta_{4})\in\hat{V}$ the equation $\underline{{\mathcal{T}}}\left(y,\underline{u},\underline{p},\underline{p}(0)\right)^{T}=\left(\beta_{1},\beta_{2},\beta_{3},\beta_{4}\right)^{T}$ (3.16) admits a unique solution $(y,\underline{u},\underline{p},\underline{p}(0))\in\hat{U}$ depending Lipschitz-continuously on $\boldsymbol{\beta}$. ###### Remark 3.3. We observe that as a consequence of (A3) and Remark 3.2 the operator $\underline{{\mathcal{T}}}$ is continuous. Subsequently we shall frequently refrain from the underline-notation since the meaning should be clear from the context. ### 3.3 Main Theorems. In this subsection, we present the main theorems of this paper. The first theorem asserts local continuous differentiability of the value function $\mathcal{V}$ w.r.t. $y_{0}$, with $y_{0}$ small enough. The second theorem establishes that $\mathcal{V}$ satisfies the HJB equation in the classical sense. The proof of the first theorem is based on Theorem 2.1. It will be given in Section 4 below. For this purpose it will be shown that assumptions (A) imply (H1)-(H7). Moreover we need to verify the underlying assumption that problem ($\mathcal{P}$) is well-posed. This will lead to a smallness assumption on the initial states $y_{0}$. 
Consequently it would suffice to assume that (A3) and (A4) only hold locally in a neighborhood of the origin. Concerning (A5) observe that it is not implied by (A3). It is vacuously satisfied for $\bar{y}=0$, which is the case for $y_{0}=0$, since then $\mathcal{F}^{\prime}(0)=0$, see (3.3). We invoke Theorem 2.1 to assert the Lipschitz continuity of the state, the adjoint state, and the control with respect to the initial condition $y_{0}\in Y$ in the neighborhood of a locally optimal solution $(\bar{y},\bar{u})$ corresponding to a sufficiently small reference initial state $\bar{y}_{0}$. This will imply the differentiability of the value function associated to local minima. We shall refer to the value function associated to local minima as the 'local value function'. ###### Theorem 3.1. Let the assumptions (A) hold. Then associated to each local solution $(\bar{y}(y_{0}),\bar{u}(y_{0}))$ of ($\mathcal{P}$) there exists a neighborhood $U(y_{0})$ of $y_{0}$ such that the local value function ${\mathcal{V}}:U(y_{0})\subset Y\to\mathbb{R}$ is continuously differentiable, provided that $y_{0}$ is sufficiently close to the origin in $Y$. To obtain an HJB equation we require additionally that $t\to({\mathcal{F}}(\bar{y}))(t)$ is continuous with values in $Y$ for global solutions $(\bar{y},\bar{u})$ to ($\mathcal{P}$), with $y_{0}\in{\mathcal{D}}({\mathcal{A}})$. In view of the fact that for $y_{0}\in V$ we can typically expect that the solutions of semilinear parabolic equations satisfy $y\in L^{2}(I;{\mathcal{D}}({\mathcal{A}}))\cap W^{1,2}(I;Y)\subset C([0,\infty),V)$, this is not a restrictive assumption beyond what is already assumed in (A3). ###### Theorem 3.2. Let the assumptions (A) hold, and let $(\bar{y}(y_{0}),\bar{u}(y_{0}))$ denote a global solution of ($\mathcal{P}$), for $y_{0}\in{\mathcal{D}}({\mathcal{A}})$ with sufficiently small norm in $Y$. Assume that there exists $T_{y_{0}}>0$ such that ${\mathcal{F}}(\bar{y})\in C([0,T_{y_{0}});Y)$. 
Then the following Hamilton-Jacobi-Bellman equation holds at $y_{0}$: $\mathcal{V}^{\prime}(y)({\mathcal{A}}y+{\mathcal{F}}(y))+\frac{1}{2}\left\lVert y\right\rVert^{2}_{Y}+\frac{\alpha}{2}\left\lVert\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}(y)\right)\right\rVert^{2}_{Y}+\left\langle B^{*}\mathcal{V}^{\prime}(y),\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}(y)\right)\right\rangle_{Y}=0.$ (3.17) Moreover the optimal feedback law is given by ${\bar{u}}(0)=\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}({\bar{y}}(0))\right).$ (3.18) The condition on the smallness of $y_{0}$ will be discussed in Remark 4.2 below. Roughly it involves well-posedness of the optimality system and second order sufficient optimality at local solutions. More detailed, and in part stronger, statements of Theorems 3.1 and 3.2 will be given in Theorem 4.1 and Theorem 5.1 below. The regularity assumption ${\mathcal{F}}(\bar{y})\in C([0,T_{y_{0}});Y)$ of Theorem 3.2 will be addressed in Section 6. ## 4 Proof of Theorem 3.1. In this section we give the proof of Theorem 3.1. Many of the technical difficulties arise from the fact that we are working with an infinite horizon optimal control problem. In this respect we can profit from techniques which were developed in [BKP3], which, however, do not include the case of constraints on the norm. Throughout we assume that assumptions (A1)-(A4) hold. ### 4.1 Well-posedness of problem ($\mathcal{P}$). Here we prove well-posedness of ($\mathcal{P}$) for small initial data. First, we recall two consequences of the assumption that ${\mathcal{A}}$ is the generator of an analytic semigroup. ###### Consequence 1. 
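For intuition, the feedback law (3.18) can be evaluated in a finite-dimensional stand-in setting. Assuming, as in the setting of (3.15), that ${\mathcal{U}}_{ad}$ is the ball of radius $\eta$, the metric projection $\mathbb{P}_{{\mathcal{U}}_{ad}}$ is radial scaling; the matrix $B$, the gradient vector, and all parameter values below are illustrative only.

```python
import numpy as np

def project_ball(v, eta):
    """Metric projection onto the ball {v : ||v|| <= eta}: radial scaling."""
    n = np.linalg.norm(v)
    return v if n <= eta else (eta / n) * v

def feedback(grad_V, B, alpha, eta):
    """u = P_{U_ad}(-(1/alpha) B^T grad_V), a discrete analogue of (3.18)."""
    return project_ball(-(1.0 / alpha) * B.T @ grad_V, eta)

# illustrative data
B = np.array([[1.0, 0.0], [0.0, 2.0]])
grad_V = np.array([0.3, -0.4])        # stand-in for V'(y(0))
u = feedback(grad_V, B, alpha=0.5, eta=1.0)
print(u, np.linalg.norm(u))           # the control respects ||u|| <= eta
```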
Since ${\mathcal{A}}$ generates a strongly continuous analytic semigroup on $Y$, there exist $\rho\geq 0$ and $\theta>0$ such that $\langle(\rho I-{\mathcal{A}})v,v\rangle_{V^{*},V}\geq\theta\left\lVert v\right\rVert^{2}_{V}\quad\text{for all }v\in V.$ See [BPDM, Part II, Chapter 1, p. 115], [Paz, Theorem 4.2, p. 14]. ###### Consequence 2. For all $y_{0}\in Y,f\in L^{2}(0,T;V^{*})$, and $T>0$, there exists a unique solution $y\in W(0,T)$ to $\dot{y}={\mathcal{A}}y+f,\quad y(0)=y_{0}.$ (4.1) Furthermore, $y$ satisfies $\left\lVert y\right\rVert_{W(0,T)}\leq c(T)\Big{(}\left\lVert y_{0}\right\rVert_{Y}+\left\lVert f\right\rVert_{L^{2}(0,T;V^{*})}\Big{)}$ (4.2) for a continuous function $c$. Assuming that $y\in L^{2}(0,\infty;Y)$, consider the equation $\dot{y}=\underbrace{({\mathcal{A}}-\rho I)y}_{{\mathcal{A}}_{\rho}}+\underbrace{\rho y+f}_{f_{\rho}},\quad y(0)=y_{0},$ where $f_{\rho}\in L^{2}(I;V^{*})$. Then the operator ${\mathcal{A}}_{\rho}$ generates a strongly continuous analytic semigroup on $Y$ which is exponentially stable, see [BPDM, p. 115, Theorem II.1.2.12]. It follows that $y\in W_{\infty}$, that there exists $M_{\rho}$ such that $\left\lVert y\right\rVert_{W_{\infty}}\leq M_{\rho}\Big{(}\left\lVert y_{0}\right\rVert_{Y}+\left\lVert f_{\rho}\right\rVert_{L^{2}(I;V^{*})}\Big{)},$ (4.3) and that $y$ is the unique solution to (4.1) in $W_{\infty}$, see [BKP3, Section 2.2]. ###### Lemma 4.1. There exists a constant $C>0$, such that for all $\delta\in(0,1]$ and for all $y_{1}$ and $y_{2}$ in $W_{\infty}$ with $\displaystyle\left\lVert y_{1}\right\rVert_{W_{\infty}}\leq\delta$ and $\displaystyle\left\lVert y_{2}\right\rVert_{W_{\infty}}\leq\delta$, it holds that $\left\lVert{\mathcal{F}}(y_{1})-{\mathcal{F}}(y_{2})\right\rVert_{L^{2}(I;V^{*})}\leq\delta C\left\lVert y_{1}-y_{2}\right\rVert_{W_{\infty}}.$ (4.4) ###### Proof. Let $y_{1},y_{2}$ be as in the statement of the lemma. 
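The shift argument in Consequence 2 has a transparent finite-dimensional analogue: the eigenvalues of ${\mathcal{A}}-\rho I$ are those of ${\mathcal{A}}$ shifted left by $\rho$, so any $\rho$ larger than the spectral abscissa yields an exponentially stable generator. A sketch with a random matrix as an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))                    # stand-in for the generator A
s = max(ev.real for ev in np.linalg.eigvals(A))    # spectral abscissa of A

rho = s + 1.0                                      # any rho > s works
A_rho = A - rho * np.eye(4)                        # shifted generator A_rho = A - rho*I
s_rho = max(ev.real for ev in np.linalg.eigvals(A_rho))

print(s, s_rho)   # s_rho = s - rho < 0: the shifted semigroup is exponentially stable
```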
Using Remark 3.1 we obtain the estimate $\displaystyle\left\lVert{\mathcal{F}}(y_{1})-{\mathcal{F}}(y_{2})\right\rVert_{L^{2}(I,V^{*})}\leq\int_{0}^{1}\left\lVert{\mathcal{F}}^{\prime}(y_{1}+t(y_{2}-y_{1}))-{\mathcal{F}}^{\prime}(0)\right\rVert_{{\mathcal{L}}(W_{\infty},L^{2}(I,V^{*}))}\,dt\,\|y_{2}-y_{1}\|_{W_{\infty}}$ $\displaystyle\leq\int_{0}^{1}\int_{0}^{1}\left\lVert{\mathcal{F}}^{\prime\prime}(s(y_{1}+t(y_{2}-y_{1})))(ty_{2}+(1-t)y_{1})\right\rVert_{{\mathcal{L}}(W_{\infty},L^{2}(I,V^{*}))}\,ds\,dt\,\|y_{2}-y_{1}\|_{W_{\infty}}.$ Now the claim follows by assumption (A3). ∎ ###### Lemma 4.2. Let $\displaystyle{\mathcal{A}}_{s}$ be the generator of an exponentially stable analytic semigroup $\displaystyle e^{{\mathcal{A}}_{s}t}$ on $Y$. Let $C$ denote the constant from Lemma 4.1. Then there exists a constant $M_{s}$ such that for all $y_{0}\in Y$ and $f\in L^{2}(I;V^{*})$ with $\tilde{\gamma}=\left\lVert y_{0}\right\rVert_{Y}+\left\lVert f\right\rVert_{L^{2}(I;V^{*})}\leq\frac{1}{4CM^{2}_{s}}$ the system $y_{t}={\mathcal{A}}_{s}y+{\mathcal{F}}(y)+f,\quad y(0)=y_{0}$ (4.5) has a unique solution $y\in W_{\infty}$, which satisfies $\left\lVert y\right\rVert_{W_{\infty}}\leq 2M_{s}\tilde{\gamma}.$ With Lemma 4.1 holding, this lemma can be verified in the same manner as [BKP3, Lemma 5, p. 6]. In the following corollary we shall use Lemma 4.2 with ${\mathcal{A}}_{s}={\mathcal{A}}-BK$, and the constant corresponding to $M_{s}$ will be denoted by $M_{K}$. Further $\|{\mathcal{I}}\|$ denotes the norm of the embedding of $W_{\infty}$ into $C(I;Y)$, $\|i\|$ the norm of the embedding of $V$ into $Y$, and we recall the constant $\eta$ from (3.1). ###### Corollary 4.3. 
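The mechanism behind Lemma 4.1 — a Lipschitz constant proportional to the size $\delta$ of the arguments, which stems from ${\mathcal{F}}^{\prime}(0)=0$ — can be checked numerically for the model quadratic nonlinearity ${\mathcal{F}}(y)=y^{2}$, for which $|y_{1}^{2}-y_{2}^{2}|=|y_{1}+y_{2}|\,|y_{1}-y_{2}|\leq 2\delta\,|y_{1}-y_{2}|$; the grid discretization and the constant $C=2$ are illustrative assumptions.

```python
import numpy as np

# F(y) = y**2 is a simple nonlinearity with F'(0) = 0, sampled on a grid.
rng = np.random.default_rng(1)
delta = 0.3
y1 = delta * rng.uniform(-1, 1, 1000)            # ||y1||_inf <= delta
y2 = delta * rng.uniform(-1, 1, 1000)            # ||y2||_inf <= delta

lhs = np.max(np.abs(y1**2 - y2**2))              # ||F(y1) - F(y2)||_inf
rhs = 2 * delta * np.max(np.abs(y1 - y2))        # delta * C * ||y1 - y2||_inf, C = 2
print(lhs, rhs)                                  # the small-ball Lipschitz bound holds
```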
For all $\displaystyle y_{0}\in Y$ with $\left\lVert y_{0}\right\rVert_{Y}\leq\min\left\\{\frac{1}{4CM^{2}_{K}},\frac{\eta}{2M_{K}\left\lVert K\right\rVert_{{\mathcal{L}}(Y)}\left\lVert{\mathcal{I}}\right\rVert}\right\\}$ there exists a control $u\in{U_{ad}}$ such that the system $y_{t}={\mathcal{A}}y+{\mathcal{F}}(y)+Bu,\quad y(0)=y_{0}$ (4.6) has a unique solution $y\in W_{\infty}$ satisfying $\left\lVert y\right\rVert_{W_{\infty}}\leq 2M_{K}\|y_{0}\|_{Y}\quad\text{ and }\quad\left\lVert u\right\rVert_{U}\leq\|K\|_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\|{\mathcal{I}}\|\|y\|_{W_{\infty}}\leq 2M_{K}\|y_{0}\|_{Y}\|K\|_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\|{\mathcal{I}}\|.$ (4.7) ###### Proof. By Assumption (A2), there exists $K$ such that ${\mathcal{A}}-BK$ generates an exponentially stable analytic semigroup on $Y$. Taking $u=-Ky$, equation (4.6) becomes $y_{t}=({\mathcal{A}}-BK)y+{\mathcal{F}}(y),\quad y(0)=y_{0}.$ (4.8) Then by Lemma 4.2 with $\tilde{\gamma}=\|y_{0}\|_{Y}$ there exists $M_{K}$ such that (4.8) has a solution $y\in W_{\infty}$ satisfying $\left\lVert y\right\rVert_{W_{\infty}}\leq 2M_{K}\|y_{0}\|_{Y},$ and thus the first inequality in (4.7) holds. For every $t\in I$ we have $\|u(t)\|_{{\mathcal{U}}}=\|Ky(t)\|_{{\mathcal{U}}}\leq\|K\|_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\|y(t)\|_{Y}\leq\|K\|_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\|{\mathcal{I}}\|\|y\|_{W_{\infty}}\leq 2M_{K}\|y_{0}\|_{Y}\|K\|_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\|{\mathcal{I}}\|,$ (4.9) and thus the second inequality in (4.7) holds. We still need to assert that $u\in U_{ad}$. This follows from the second smallness condition on $\left\lVert y_{0}\right\rVert_{Y}$ and (4.9). ∎ ###### Remark 4.1. In the above proof stabilization was achieved by the feedback control $u=-Ky$. For this $u$ to be admissible we need that ${\mathcal{U}}_{ad}$ has nonempty interior. The upper bound $\eta$ could be allowed to be time dependent as long as it satisfies $\displaystyle\inf_{t\geq 0}|\eta(t)|>0$. ###### Corollary 4.4. 
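Corollary 4.3 stabilizes with the linear feedback $u=-Ky$ and uses smallness of $y_{0}$ to keep the control inside the bound $\eta$. A scalar toy sketch of the closed loop (4.8), with hypothetical coefficients $a,b,K$ and the quadratic nonlinearity $y^{2}$, integrated by explicit Euler:

```python
# Scalar toy model y' = a*y + y**2 + b*u with feedback u = -K*y, cf. (4.8).
a, b, K = 1.0, 1.0, 3.0    # closed-loop rate a - b*K = -2 < 0: stable
eta = 0.5                  # admissibility bound |u(t)| <= eta
y0 = 0.1                   # "small" initial state
dt, T = 1e-3, 10.0

y, umax = y0, 0.0
for _ in range(int(T / dt)):
    u = -K * y
    umax = max(umax, abs(u))
    y += dt * (a * y + y**2 + b * u)   # explicit Euler step of y' = -2y + y**2

print(abs(y), umax)   # the state decays while the control stays admissible
```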
Let $\displaystyle y_{0}\in Y$ and let $u\in U_{ad}$ be such that the system $y_{t}={\mathcal{A}}y+{\mathcal{F}}(y)+Bu,\quad y(0)=y_{0}$ (4.10) has a unique solution $y\in L^{2}(I;Y)$. If $\displaystyle\gamma:=\left\lVert y_{0}\right\rVert_{Y}+\left\lVert\rho y+Bu\right\rVert_{L^{2}(I;V^{*})}\leq\min\left\\{\frac{1}{4CM^{2}_{\rho}},\frac{\eta}{2M_{\rho}\left\lVert K\right\rVert_{{\mathcal{L}}(Y)}\left\lVert{\mathcal{I}}\right\rVert}\right\\},$ then $y\in W_{\infty}$ and it holds that $\left\lVert y\right\rVert_{W_{\infty}}\leq 2M_{\rho}\gamma.$ ###### Proof. Since $y\in L^{2}(I;Y)$, we can apply Lemma 4.2 to the equivalent system $y_{t}=({\mathcal{A}}-\rho I)y+{\mathcal{F}}(y)+\tilde{f},$ where $\tilde{f}=\rho y+Bu$. This proves the assertion. ∎ ###### Lemma 4.5. There exists $\delta_{1}>0$ such that for all $\displaystyle y_{0}\in B_{Y}(\delta_{1})$, problem ($\mathcal{P}$) possesses a solution $(\bar{y},\bar{u})\in W_{\infty}\times U_{ad}$. Moreover, there exists a constant $M>0$ independent of $y_{0}$ such that $\max\big{\\{}\left\lVert\bar{y}\right\rVert_{W_{\infty}},\left\lVert\bar{u}\right\rVert_{U}\big{\\}}\leq M\left\lVert y_{0}\right\rVert_{Y}.$ (4.11) ###### Proof. The proof of this lemma follows by argumentation analogous to that of [BKP3, Lemma 8]. Let us choose $\delta_{1}\leq\min\left\\{\frac{1}{4CM^{2}_{K}},\frac{\eta}{2M_{K}\left\lVert K\right\rVert_{{\mathcal{L}}(Y)}\left\lVert{\mathcal{I}}\right\rVert}\right\\}$, where $C$ is as in Lemma 4.1 and $M_{K}$ is the constant from Corollary 4.3. We obtain that for each $y_{0}\in B_{Y}(\delta_{1})$, there exists a control $u\in U_{ad}$ with associated state $y$ satisfying $\max\big{\\{}\left\lVert u\right\rVert_{U},\left\lVert y\right\rVert_{W_{\infty}}\big{\\}}\leq\tilde{M}\left\lVert y_{0}\right\rVert_{Y},$ (4.12) where $\displaystyle\tilde{M}=2M_{K}\text{ max }\big{(}1,\left\lVert i\right\rVert\left\lVert K\right\rVert_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\big{)}$. 
We can thus consider a minimizing sequence $\displaystyle(y_{n},u_{n})_{n\in\mathbb{N}}\in W_{\infty}\times U_{ad}$ with $\displaystyle J(y_{n},u_{n})\leq\frac{1}{2}\tilde{M}^{2}\left\lVert y_{0}\right\rVert^{2}_{Y}(1+\alpha)$. It follows for all $n\in\mathbb{N}$ that $\left\lVert y_{n}\right\rVert_{L^{2}(I;Y)}\leq\tilde{M}\left\lVert y_{0}\right\rVert_{Y}\sqrt{1+\alpha}\quad\text{and}\quad\left\lVert u_{n}\right\rVert_{L^{2}(I;\,{\mathcal{U}})}\leq\tilde{M}\left\lVert y_{0}\right\rVert_{Y}\sqrt{\frac{1+\alpha}{\alpha}}.$ (4.13) We set $\eta(\alpha,\tilde{M})=\Big{[}1+\tilde{M}\|i\|\ \sqrt{(1+\alpha)}\Big{(}\rho+\frac{\left\lVert B\right\rVert_{{\mathcal{L}}({\mathcal{U}},Y)}}{\sqrt{\alpha}}\Big{)}\Big{]}$. Then we have $\left\lVert y_{0}\right\rVert_{Y}+\left\lVert\rho y_{n}+Bu_{n}\right\rVert_{L^{2}(I;V^{*})}\leq\eta(\alpha,\tilde{M})\left\lVert y_{0}\right\rVert_{Y}$. After further reduction of $\delta_{1}$, we obtain with $M_{\rho}$ from Corollary 4.4: $\left\lVert y_{0}\right\rVert_{Y}+\left\lVert\rho y_{n}+Bu_{n}\right\rVert_{L^{2}(I;V^{*})}\leq\frac{1}{4CM^{2}_{\rho}}.$ It follows from this corollary that the sequence $\\{y_{n}\\}_{n\in\mathbb{N}}$ is bounded in $W_{\infty}$ with $\sup_{n\in\mathbb{N}}\left\lVert y_{n}\right\rVert_{W_{\infty}}\leq 2M_{\rho}(1+\eta(\alpha,\tilde{M}))\left\lVert y_{0}\right\rVert_{Y}.$ (4.14) Extracting a subsequence if necessary, there exists $\displaystyle(\bar{y},\bar{u})\in W_{\infty}\times U$ such that $\displaystyle({y}_{n},{u}_{n})\rightharpoonup(\bar{y},\bar{u})$ in $W_{\infty}\times U$, and $(\bar{y},\bar{u})$ satisfies the bounds (4.13) and (4.14). Let us prove that $(\bar{y},\bar{u})$ is feasible and optimal. Since $U_{ad}$ is weakly sequentially closed and $u_{n}\in U_{ad}$, we find $\bar{u}\in U_{ad}$. 
For each fixed $T>0$ and arbitrary $z\in L^{\infty}(0,T;{\mathcal{H}})\subset L^{2}(0,T;V)$, see (A4), we have for all $\displaystyle n\in\mathbb{N}$ that $\int_{0}^{T}\langle\dot{y}_{n}(t),z(t)\rangle_{V^{*},V}dt=\int_{0}^{T}\langle{\mathcal{A}}y_{n}(t)+{\mathcal{F}}(y_{n}(t))+Bu_{n}(t),z(t)\rangle_{V^{*},V}dt.$ (4.15) Since $\dot{y}_{n}\rightharpoonup\dot{\bar{y}}$ in $L^{2}(0,T;V^{*})$, we can pass to the limit in the l.h.s. of the above equality. Moreover, since ${\mathcal{A}}y_{n}\rightharpoonup{\mathcal{A}}\bar{y}$ in $L^{2}(0,T;V^{*})$, $\int_{0}^{T}\langle{\mathcal{A}}y_{n}(t),z(t)\rangle_{V^{*},V}dt\xrightarrow[n\rightarrow\infty]{}\int_{0}^{T}\langle{\mathcal{A}}\bar{y}(t),z(t)\rangle_{V^{*},V}dt.$ Analogously, we obtain that $\int_{0}^{T}\langle Bu_{n}(t),z(t)\rangle_{V^{*},V}dt\xrightarrow[n\rightarrow\infty]{}\int_{0}^{T}\langle B\bar{u}(t),z(t)\rangle_{V^{*},V}dt.$ Furthermore, since $z\in L^{\infty}(0,T;{\mathcal{H}})$, we use (A4) to assert $\int_{0}^{T}\langle{\mathcal{F}}(y_{n}(t))-{\mathcal{F}}(\bar{y}(t)),z(t)\rangle_{V^{*},V}dt=\int_{0}^{T}\langle{\mathcal{F}}(y_{n}(t))-{\mathcal{F}}(\bar{y}(t)),z(t)\rangle_{{\mathcal{H}}^{*},{\mathcal{H}}}dt\xrightarrow[n\rightarrow\infty]{}0.$ Thus we have for all $z\in L^{\infty}(0,T;{\mathcal{H}})$ $\int_{0}^{T}\langle\dot{\bar{y}}(t)-{\mathcal{A}}\bar{y}(t)-B\bar{u}(t),z(t)\rangle_{V^{*},V}dt=\int_{0}^{T}\langle{\mathcal{F}}(\bar{y}(t)),z(t)\rangle_{V^{*},V}dt.$ (4.16) Since $\displaystyle\dot{\bar{y}}-{\mathcal{A}}\bar{y}-B\bar{u}\in L^{2}(0,T;V^{*})$ and $L^{\infty}(0,T;{\mathcal{H}})$ is dense in $L^{2}(0,T;V)$ we conclude that (4.16) holds for all $z\in L^{2}(0,T;V)$ and $T>0$. This yields $e(\bar{y},\bar{u})=(0,0)$, and thus $(\bar{y},\bar{u})$ is feasible. By weak lower semicontinuity of norms it follows that $\displaystyle J(\bar{y},\bar{u})\leq\liminf_{n\rightarrow\infty}J({y}_{n},{u}_{n})$, which proves the optimality of $(\bar{y},\bar{u})$, and (4.11) follows from (4.13). 
∎ For the derivation of the optimality system for ($\mathcal{P}$), we need the following lemma which is taken from [BKP1, Lemma 2.5]. ###### Lemma 4.6. Let $\displaystyle G\in{\mathcal{L}}(W_{\infty},L^{2}(I;V^{*}))$ such that $\displaystyle\left\lVert G\right\rVert<\frac{1}{M_{K}}$, where $\left\lVert G\right\rVert$ denotes the operator norm of $G$. Then for all $\displaystyle f\in L^{2}(I;V^{*})$ and $y_{0}\in Y$, there exists a unique solution to the problem: $y_{t}=({\mathcal{A}}-BK)y(t)+(Gy)(t)+f(t),\quad y(0)=y_{0}.$ Moreover, $\left\lVert y\right\rVert_{W_{\infty}}\leq\frac{M_{K}}{1-{M_{K}\|G\|}}\left(\left\lVert f\right\rVert_{L^{2}(I;V^{*})}+\left\lVert y_{0}\right\rVert_{Y}\right).$ We close this section by deriving the optimality conditions for ($\mathcal{P}$). ###### Proposition 4.1. Let the assumptions (A1)-(A4) hold. Then there exists $\delta_{2}\in(0,\delta_{1}]$ such that each local solution $(\bar{y},\bar{u})$ with $y_{0}\in B_{Y}(\delta_{2})$ is a regular point, i.e. (2.3) is satisfied, and there exists an adjoint state $(\bar{p},\bar{p}_{1})\in L^{2}(I;V)\times Y$ satisfying $\displaystyle\langle v_{t}-{\mathcal{A}}v-{\mathcal{F}}^{\prime}(\bar{y})v,\bar{p}\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}+(v(0),\bar{p}_{1})_{Y}+(\bar{y},v)_{L^{2}(I;Y)}$ $\displaystyle=0,\quad\text{for all }v\in W_{\infty},$ (4.17) $\displaystyle\langle\alpha\bar{u}-B^{*}\bar{p},u-\bar{u}\rangle_{U}$ $\displaystyle\geq 0,\quad\text{for all }u\in U_{ad}.$ (4.18) If the assumption (A5) is satisfied, then $-\bar{p}_{t}-{\mathcal{A}}^{*}\bar{p}-{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}=-\bar{y}\quad\text{ in }L^{2}(I;V^{*}),$ and hence $\bar{p}\in W_{\infty}$ and $\lim_{t\rightarrow\infty}\bar{p}(t)=0.$ (4.19) Moreover, there exists $\widetilde{M}>0$, independent of $y_{0}\in B_{Y}(\delta_{2})$, such that $\left\lVert\bar{p}\right\rVert_{W_{\infty}}\leq\widetilde{M}\left\lVert y_{0}\right\rVert_{Y},\text{ and }\bar{u}\in C(\overline{I},{\mathcal{U}}).$ (4.20) ###### Proof. 
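The estimate of Lemma 4.6 is a Neumann-series (small-perturbation) bound: as long as the product of the solution-operator norm and $\|G\|$ stays below one, the perturbed equation remains uniquely solvable with a quantitative bound. A finite-dimensional analogue, with random matrices standing in for the solution operator $M$ and the perturbation $G$, can be checked directly:

```python
import numpy as np

# Solve y = M(G y + f), i.e. (I - M G) y = M f, assuming ||M|| * ||G|| < 1.
rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
M /= 2 * np.linalg.norm(M, 2)        # normalize: spectral norm ||M|| = 1/2
G = rng.standard_normal((n, n))
G /= np.linalg.norm(G, 2)            # ||G|| = 1 < 1/||M|| = 2
f = rng.standard_normal(n)

y = np.linalg.solve(np.eye(n) - M @ G, M @ f)
bound = (np.linalg.norm(M, 2) / (1 - np.linalg.norm(M, 2) * np.linalg.norm(G, 2))
         * np.linalg.norm(f))        # Neumann-series bound, cf. Lemma 4.6
print(np.linalg.norm(y), bound)      # ||y|| is controlled by the bound
```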
To verify the regular point condition, we evaluate $e$ defined in (3.6) at $(\bar{y},\bar{u},y_{0})$. To check the claim on the range of $e^{\prime}(\bar{y},\bar{u},y_{0})$ we consider for arbitrary $\displaystyle(r,s)\in L^{2}(I,V^{*})\times Y$ the equation $z_{t}-{\mathcal{A}}z-{\mathcal{F}}^{\prime}(\bar{y})z-B(w-\bar{u})=r,\ z(0)=s,$ (4.21) for unknowns $(z,w)\in W_{\infty}\times{U_{ad}}$. By taking $w=-Kz\in U$ we obtain $z_{t}-({\mathcal{A}}-BK)z-{\mathcal{F}}^{\prime}(\bar{y})z+B\bar{u}=r,\ z(0)=s.$ We apply Lemma 4.6 to this equation with $\displaystyle G={\mathcal{F}}^{\prime}(\bar{y})$ and $f=r-B\bar{u}$. By Lemma 4.5 and (3.3) in Remark 3.1 there exists $\delta_{2}\in(0,\delta_{1}]$ such that $\|{\mathcal{F}}^{\prime}(\bar{y})\|_{{\mathcal{L}}(W_{\infty},L^{2}(I;V^{*}))}\leq\frac{1}{2M_{K}}$. Consequently by Lemma 4.6 there exists $\widetilde{M}$ such that $\displaystyle\left\lVert z\right\rVert_{W_{\infty}}$ $\displaystyle\leq\widetilde{M}\big{(}\left\lVert r\right\rVert_{L^{2}(I;V^{*})}+\left\lVert s\right\rVert_{Y}+\left\lVert B\right\rVert_{{\mathcal{L}}({\mathcal{U}},Y)}\left\lVert\bar{u}\right\rVert_{U}\big{)}$ $\displaystyle\leq\widetilde{M}\big{(}\left\lVert r\right\rVert_{L^{2}(I;V^{*})}+\left\lVert s\right\rVert_{Y}+\left\lVert B\right\rVert_{{\mathcal{L}}({\mathcal{U}},Y)}M\left\lVert y_{0}\right\rVert_{Y}\big{)},$ (4.22) with $M$ as in (4.11). We shall need to check whether $w=-Kz$ is feasible, which will be the case if $\left\lVert w(t)\right\rVert_{{\mathcal{U}}}\leq\eta$ for a.e. $t\in I$. 
Indeed we have $\left\lVert w(t)\right\rVert_{{\mathcal{U}}}\leq\left\lVert K\right\rVert_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\left\lVert z(t)\right\rVert_{Y}\leq\left\lVert K\right\rVert_{{\mathcal{L}}(Y,\,{\mathcal{U}})}\left\lVert{\mathcal{I}}\right\rVert\widetilde{M}\big{(}\left\lVert r\right\rVert_{L^{2}(I;V^{*})}+\left\lVert s\right\rVert_{Y}+\left\lVert B\right\rVert_{{\mathcal{L}}({\mathcal{U}},Y)}M\left\lVert y_{0}\right\rVert_{Y}\big{)}.$ Consequently, possibly after further reducing $\delta_{2}$, and choosing $\tilde{\delta}>0$ sufficiently small we have $\left\lVert w\right\rVert_{L^{\infty}(I;{\mathcal{U}})}\leq\eta\text{ for all }\displaystyle y_{0}\in B_{Y}(\delta_{2})\text{ and all }(r,s)\text{ satisfying }\left\lVert(r,s)\right\rVert_{L^{2}(I;V^{*})\times Y}\leq\tilde{\delta}.$ (4.23) Consequently the regular point condition is satisfied. Hence there exists a multiplier $\displaystyle\lambda=(\bar{p},\bar{p}_{1})\in L^{2}(I;V)\times Y$ satisfying $\langle{\mathcal{L}}_{y}(\bar{y},\bar{u},y_{0},\bar{p},\bar{p}_{1}),v\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}=0,\quad\langle{\mathcal{L}}_{u}(\bar{y},\bar{u},y_{0},\bar{p},\bar{p}_{1}),u-\bar{u}\rangle_{U}\geq 0,\ \forall u\in U_{ad},$ (4.24) where ${\mathcal{L}}(y,u,y_{0},p,p_{1})=J(y,u)+\int_{0}^{\infty}\langle p,y_{t}-{\mathcal{A}}y-{\mathcal{F}}(y)-Bu\rangle_{V,V^{*}}dt+(p_{1},y(0)-y_{0})_{Y}.$ This implies that (4.17) holds. Now, if we impose the additional assumption (A5), we have ${\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}\in L^{2}(I;V^{*})$. Thus $-{\mathcal{A}}^{*}\bar{p}-{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}+\bar{y}\in L^{2}(I;V^{*})$ and the previous identity implies that $\bar{p}\in W_{\infty}$. Thus we derive $-\bar{p}_{t}-{\mathcal{A}}^{*}\bar{p}-{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}=-\bar{y}\ \text{in }L^{2}(I;V^{*})\text{ and }\lim_{t\rightarrow\infty}\bar{p}(t)=0,$ i.e. (4.19) holds. 
Testing the first identity in (4.24) with $v\in W_{\infty}$ we also have $\bar{p}_{1}=\bar{p}(0)$, which is well-defined since $\bar{p}\in W_{\infty}\subset C(I;Y)$. The second identity in (4.24) gives (4.18). It remains to estimate $\left\lVert\bar{p}\right\rVert_{W_{\infty}}$. Let $\displaystyle r\in L^{2}(I;V^{*})$ with $\displaystyle\left\lVert r\right\rVert_{L^{2}(I;V^{*})}\leq\tilde{\delta}$ and consider $z_{t}-{\mathcal{A}}z-{\mathcal{F}}^{\prime}(\bar{y})z-B(w-\bar{u})=-r,\ z(0)=0.$ (4.25) Arguing as in (4.21)-(4.22) there exists a solution to (4.25) with $w=-Kz$ such that $\left\lVert z\right\rVert_{W_{\infty}}\leq\widetilde{M}\big{(}\tilde{\delta}+\left\lVert B\right\rVert_{{\mathcal{L}}({\mathcal{U}},Y)}M\left\lVert y_{0}\right\rVert_{Y}\big{)}\leq\widetilde{M}\big{(}\tilde{\delta}+\left\lVert B\right\rVert_{{\mathcal{L}}({\mathcal{U}},Y)}M\delta_{2}\big{)}=:C_{1}.$ (4.26) From (4.23) we have that $\displaystyle\left\lVert w\right\rVert_{L^{\infty}(I,\,{\mathcal{U}})}\leq\eta$. Let us now observe that $\displaystyle\langle\bar{p},r\rangle_{L^{2}(I,V),L^{2}(I,V^{*})}$ $\displaystyle=\langle\bar{p},-z_{t}+{\mathcal{A}}z+{\mathcal{F}}^{\prime}(\bar{y})z\rangle_{L^{2}(I,V),L^{2}(I,V^{*})}+\langle\bar{p},B(w-\bar{u})\rangle_{L^{2}(I;Y)}$ $\displaystyle=\langle\bar{p}_{t}+{\mathcal{A}}^{*}\bar{p}+{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p},z\rangle+\langle B^{*}\bar{p},w-\bar{u}\rangle_{U},$ where we have used that $z(0)=0$ and $\displaystyle\lim_{t\rightarrow\infty}\bar{p}(t)=0$, since $\bar{p}\in W_{\infty}$. 
We next estimate using (4.17), (4.19) and (4.26) $\displaystyle\langle\bar{p},r\rangle_{L^{2}(I,V),L^{2}(I,V^{*})}$ $\displaystyle\leq\left\lVert\bar{y}\right\rVert_{L^{2}(I,V^{*})}\left\lVert z\right\rVert_{L^{2}(I,V)}+\alpha\langle\bar{u},w-\bar{u}\rangle_{U}$ $\displaystyle\leq\left(\left\lVert\bar{y}\right\rVert_{L^{2}(I,V^{*})}+\alpha\left\lVert\bar{u}\right\rVert_{U}\right)\left(C_{1}+\eta+\left\lVert\bar{u}\right\rVert_{U}\right).$ By (4.11), this implies the existence of a constant $C_{2}$ such that $\sup_{\left\lVert r\right\rVert_{L^{2}(I,V^{*})}\leq\tilde{\delta}}\langle\bar{p},r\rangle_{L^{2}(I,V),L^{2}(I,V^{*})}\leq C_{2}\left\lVert y_{0}\right\rVert_{Y}$ and thus $\left\lVert\bar{p}\right\rVert_{L^{2}(I,V)}\leq\frac{C_{2}}{\tilde{\delta}}\left\lVert y_{0}\right\rVert_{Y},\quad\text{for all }y_{0}\in B_{Y}(\delta_{2}).$ (4.27) Now we estimate, again using (A5), $\displaystyle\left\lVert\bar{p}_{t}\right\rVert_{L^{2}(I;V^{*})}$ $\displaystyle\leq\left\lVert{\mathcal{A}}^{*}\bar{p}+{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}-\bar{y}\right\rVert_{L^{2}(I;V^{*})}\leq C_{3}\left\lVert\bar{p}\right\rVert_{L^{2}(I,V)}+C_{4}\left\lVert\bar{p}\right\rVert_{L^{2}(I,V)}+\left\lVert\bar{y}\right\rVert_{L^{2}(I;V^{*})}.$ By (4.11) and (4.27) we obtain $\left\lVert\bar{p}_{t}\right\rVert_{L^{2}(I;V^{*})}\leq C_{5}\left\lVert y_{0}\right\rVert_{Y}.$ Combining this estimate with (4.27) yields (4.20). Finally, by (4.18) we find $\displaystyle\bar{u}(t)=\mathbb{P}_{{\mathcal{U}}_{ad}}\left(\frac{1}{\alpha}B^{*}\bar{p}(t)\right)$. Since $\bar{p}\in C(\bar{I};Y)$ and $B^{*}\in{\mathcal{L}}(Y,{\mathcal{U}})$ this implies that $\bar{u}\in C(\bar{I};{\mathcal{U}})$. ∎ ### 4.2 Verification of (H1)-(H6). In this section we specialize the abstract results proved in Section 2 to the semilinear parabolic setting. We start with the following lemma, which shows that assumptions (A) imply (H1)-(H6). ###### Lemma 4.7. Consider problem ($\mathcal{P}$) with assumptions (A1)-(A4) holding. 
Then (H1)–(H4), (H6) are satisfied for ($\mathcal{P}$) uniformly for all $y_{0}\in B_{Y}(\widetilde{\delta}_{2})$ for some $\displaystyle\widetilde{\delta}_{2}\in(0,\delta_{2}]$. If, moreover, (A5) holds, then (H5) holds as well. ###### Proof. Throughout $y_{0}\\!\in\\!B_{Y}(\delta_{2})$, $(\bar{y},\bar{u})$ denotes a local solution to ($\mathcal{P}$), and $(\bar{p},\bar{p}_{1})\in L^{2}(I;V)\times Y$ the associated Lagrange multiplier. (i) Verification of (H1): The initial condition $y_{0}$ is our nominal reference parameter $q$. Lemma 4.5 guarantees the existence of a local solution $(\bar{y},\bar{u})\sim x_{0}$ to ($\mathcal{P}$)$\sim$ ($P_{q_{0}}$). Clearly $f$ defined in (3.5) satisfies the required regularity assumptions. Moreover $e$ satisfies the regularity assumptions as a consequence of (A3). (ii) Verification of (H2): Proposition 4.1 implies that $(\bar{y},\bar{u})$ is a regular point. (iii) Verification of (H3): The second derivative of $e$ is given by $e^{\prime\prime}(\bar{y},\bar{u},y_{0})((v_{1},w_{1}),(v_{2},w_{2}))=\begin{pmatrix}-{\mathcal{F}}^{\prime\prime}(\bar{y})(v_{1},v_{2})\\\ 0\end{pmatrix},\quad\forall\ v_{1},v_{2}\in W_{\infty},\ \forall w_{1},w_{2}\in U.$ (4.28) For the second derivative of ${\mathcal{L}}$ w.r.t. $(y,u)$, we find ${\mathcal{L}}^{\prime\prime}(\bar{y},\bar{u},y_{0},\bar{p},\bar{p}_{1})((v_{1},w_{1}),(v_{2},w_{2}))=\int_{0}^{\infty}(v_{1},v_{2})_{Y}dt+\alpha\int_{0}^{\infty}(w_{1},w_{2})_{Y}dt\\\ -\int_{0}^{\infty}\langle\bar{p},{\mathcal{F}}^{\prime\prime}(\bar{y})(v_{1},v_{2})\rangle_{V,V^{*}}dt.$ (4.29) By (A3) for ${\mathcal{F}}^{\prime\prime}$ and Lemma 4.5, there exists $\widetilde{M}_{1}$ such that $\Big{|}\int_{0}^{\infty}\langle\bar{p},{\mathcal{F}}^{\prime\prime}(\bar{y})(v,v)\rangle_{V,V^{*}}dt\Big{|}\leq\widetilde{M}_{1}\|\bar{p}\|_{L^{2}(I;V)}\left\lVert v\right\rVert^{2}_{W_{\infty}},\quad\forall\ v\in W_{\infty},$ (4.30) for each solution $(\bar{y},\bar{u})$ of ($\mathcal{P}$) with $y_{0}\in B_{Y}(\delta_{2})$. 
Then we obtain ${\mathcal{L}}^{\prime\prime}(\bar{y},\bar{u},y_{0},\bar{p},\bar{p}_{1})((v,w),(v,w))\geq\int_{0}^{\infty}\left\lVert v\right\rVert_{Y}^{2}dt+\alpha\int_{0}^{\infty}\left\lVert w\right\rVert^{2}_{{\mathcal{U}}}dt-\widetilde{M}_{1}\|\bar{p}\|_{L^{2}(I;V)}\left\lVert v\right\rVert^{2}_{W_{\infty}}.$ (4.31) Now let $0\neq(v,w)\in\ker E\subset W_{\infty}\times U$, where $E$ as defined in (3.7) is evaluated at $(\bar{y},\bar{u})$. Then, $v_{t}-{\mathcal{A}}v-{\mathcal{F}}^{\prime}(\bar{y})v-Bw=0,\quad v(0)=0.$ Next choose $\rho>0$ such that the semigroup generated by $({\mathcal{A}}-\rho I)$ is exponentially stable. This is possible due to (A1). We equivalently write the system in the previous equation as $v_{t}-({\mathcal{A}}-\rho I)v-{\mathcal{F}}^{\prime}(\bar{y})v-\rho v-Bw=0,\quad v(0)=0.$ Now, we invoke Lemma 4.6 with ${\cal A}-BK$ replaced by ${\mathcal{A}}-\rho I$, $\displaystyle G={\mathcal{F}}^{\prime}(\bar{y})$, and $f(t)=\rho v(t)+Bw(t)$, and the role of the constant $M_{K}$ will now be assumed by a parameter $M_{\rho}$. By selecting $\widetilde{\delta}_{2}\in(0,\delta_{2}]$ such that $\left\lVert\bar{y}\right\rVert_{W_{\infty}}$ is sufficiently small, we can guarantee that $\displaystyle\left\lVert{\mathcal{F}}^{\prime}(\bar{y})\right\rVert_{{\mathcal{L}}(W_{\infty};L^{2}(I;V^{*}))}\leq\frac{1}{2M_{\rho}}$, see (4.11) and (3.3) in Remark 3.1. Then the following estimate holds, $\left\lVert v\right\rVert_{W_{\infty}}\leq 2{M_{\rho}}\left\lVert\rho v+Bw\right\rVert_{L^{2}(I;V^{*})}.$ This implies that $\left\lVert v\right\rVert^{2}_{W_{\infty}}\leq\widetilde{M}_{2}(\left\lVert v\right\rVert^{2}_{L^{2}(I;Y)}+\left\lVert w\right\rVert^{2}_{L^{2}(I;Y)})$ (4.32) for a constant $\widetilde{M}_{2}$ depending on $\rho$, $M_{\rho}$, $\|B\|$, and the embedding of $Y$ into $V^{*}$. 
These preliminaries allow the following lower bound on ${\mathcal{L}}^{\prime\prime}$: $\displaystyle{\mathcal{L}}^{\prime\prime}(\bar{y},\bar{u},y_{0},\bar{p},\bar{p}_{1})((v,w),(v,w))\geq\int_{0}^{\infty}\left\lVert v\right\rVert_{Y}^{2}dt+\alpha\int_{0}^{\infty}\left\lVert w\right\rVert^{2}_{Y}dt-\widetilde{M}_{1}\|\bar{p}\|_{L^{2}(I;V)}\left\lVert v\right\rVert^{2}_{W_{\infty}}$ $\displaystyle\geq\int_{0}^{\infty}\left\lVert v\right\rVert_{Y}^{2}dt+\alpha\int_{0}^{\infty}\left\lVert w\right\rVert^{2}_{Y}dt-\widetilde{M}_{1}\widetilde{M}_{2}\|\bar{p}\|_{L^{2}(I;V)}\left[\left\lVert v\right\rVert^{2}_{L^{2}(I;Y)}+\left\lVert w\right\rVert^{2}_{L^{2}(I;Y)}\right]\quad\text{by (4.32)}$ $\displaystyle=\left(1-\widetilde{M}_{1}\widetilde{M}_{2}\|\bar{p}\|_{L^{2}(I;V)}\right)\left\lVert v\right\rVert^{2}_{L^{2}(I;Y)}+\left(\alpha-\widetilde{M}_{1}\widetilde{M}_{2}\|\bar{p}\|_{L^{2}(I;V)}\right)\left\lVert w\right\rVert^{2}_{L^{2}(I;Y)}$ $\displaystyle\geq\tilde{\gamma}\left[\left\lVert v\right\rVert^{2}_{L^{2}(I;Y)}+\left\lVert w\right\rVert^{2}_{L^{2}(I;Y)}\right]$ (4.33) where $\displaystyle\tilde{\gamma}=\min\left\\{1-\widetilde{M}_{1}\widetilde{M}_{2}\|\bar{p}\|_{L^{2}(I;V)},\alpha-\widetilde{M}_{1}\widetilde{M}_{2}\|\bar{p}\|_{L^{2}(I;V)}\right\\}$. By possible further reduction of $\tilde{\delta}_{2}$ it can be guaranteed that $\tilde{\gamma}>0$, see (4.27). 
Then by (4.32), we obtain, $\displaystyle{\mathcal{L}}^{\prime\prime}(\bar{y},\bar{u},y_{0},\bar{p},\bar{p}_{1})((v,w),(v,w))$ $\displaystyle\geq\frac{\tilde{\gamma}}{2}\left[\left\lVert v\right\rVert^{2}_{L^{2}(I;Y)}+\left\lVert w\right\rVert^{2}_{L^{2}(I;Y)}\right]+\frac{\tilde{\gamma}}{2\widetilde{M}_{2}}\left\lVert v\right\rVert^{2}_{W_{\infty}},$ $\displaystyle\geq\frac{\tilde{\gamma}}{2\widetilde{M}_{2}}\left\lVert v\right\rVert^{2}_{W_{\infty}}+\frac{\tilde{\gamma}}{2}\left\lVert w\right\rVert^{2}_{L^{2}(I;Y)}.$ By selecting $\displaystyle\bar{\gamma}=\min\left\\{\frac{\tilde{\gamma}}{2\widetilde{M}_{2}},\frac{\tilde{\gamma}}{2}\right\\}$, we obtain the positive definiteness of ${\mathcal{L}}^{\prime\prime}$, i.e. ${\mathcal{L}}^{\prime\prime}(\bar{y},\bar{u},y_{0},\bar{p},\bar{p}_{1})((v,w),(v,w))\geq\bar{\gamma}\left\lVert(v,w)\right\rVert^{2}_{W_{\infty}\times U},\ y_{0}\in B_{Y}(\widetilde{\delta}_{2}),\ (v,w)\in\text{ker }E.$ (4.34) Thus (H3) is satisfied. 4. (iv) Verification of (H4): It can easily be checked that $f^{\prime}(y,u)$ can be extended to an element in $\underline{X^{*}}=L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}}))$ for each $(y,u)\in X=W_{\infty}\times U$. We refer to Remark 3.2 to show that the restriction of $e^{\prime}(y,u,y_{0})^{*}$ to $\underline{W^{*}}$ satisfies $\underline{e^{\prime}(y,u,y_{0})^{*}}\in{\mathcal{L}}(\underline{W^{*}},\underline{X^{*}})={\mathcal{L}}(W_{\infty}\times Y,L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}})))$. 5. (v) Verification of (H6): This is trivially satisfied. Thus we have proved that assumptions (A1)-(A4) imply (H1)-(H4), and (H6) for all $\displaystyle y_{0}\in B_{Y}(\widetilde{\delta}_{2})$. 6. (vi) Verification of (H5): Here we use (A5) and have $(p,p_{1})=(p,p(0))\in\widetilde{W}_{\infty}$. 
Observe that $\underline{{\mathcal{L}}^{\prime}}:W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}}))\times Y\times W_{\infty}\to L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}}))$ evaluated at $(\bar{y},\bar{u},y_{0},p)$ is given by $\underline{{\mathcal{L}}^{\prime}}(\bar{y},\bar{u},y_{0},\bar{p})=\begin{pmatrix}\bar{y}+\underline{e^{\prime}(\bar{y},\bar{u},y_{0})^{*}}\bar{p}\\\ \alpha\bar{u}-B^{*}\bar{p}\end{pmatrix}=\begin{pmatrix}\bar{y}-\bar{p}_{t}-{\mathcal{A}}^{*}\bar{p}-{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}\\\ \alpha\bar{u}-B^{*}\bar{p}\end{pmatrix}.$ Further for $(v,w)\in W_{\infty}\times U$ we have $(\underline{{\mathcal{L}}^{\prime}})^{\prime}(\bar{y},\bar{u},y_{0},\bar{p})(v,w)=\begin{pmatrix}v-[{\mathcal{F}}^{\prime}(\bar{y})^{*}]^{\prime}(\bar{p},v)\\\ \alpha w\end{pmatrix}\in L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}})).$ By (A3) and Remark 3.2 we have that $\underline{{\mathcal{L}}^{\prime}}$ and $(\underline{{\mathcal{L}}^{\prime}})^{\prime}$ are continuous as mappings from $W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}}))\times Y\times W_{\infty}$ to $L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}}))$, respectively to ${\mathcal{L}}(W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}}));L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}})))$. This proves the lemma. ∎ ###### Remark 4.2. Let us summarize our findings so far. There exists $\tilde{\delta}_{2}$ such that for each $y_{0}\in B_{Y}(\tilde{\delta}_{2})$ problem ($\mathcal{P}$) possesses a solution $(\bar{y},\bar{u})\in W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}}))$, with an adjoint $\bar{p}\in\widetilde{W}_{\infty}$. Further (A1)-(A5) imply (H1)-(H6) for ($\mathcal{P}$) with $y_{0}\in B_{Y}(\tilde{\delta}_{2})$.
As a consequence, for each $y_{0}\in B_{Y}(\tilde{\delta}_{2})$ and each associated local solution $(\bar{y},\bar{u})$ there exists a neighborhood $\hat{V}$ of the origin in ${\mathcal{Y}}:=L^{2}(I;V^{*})\times(U\cap C(\bar{I};{\mathcal{U}}))\times L^{2}(I;V^{*})\times Y$ such that for each $\boldsymbol{\beta}\in\hat{V}$ there exists a unique solution $\displaystyle\left(y_{(\boldsymbol{\beta})},u_{(\boldsymbol{\beta})},p_{(\boldsymbol{\beta})},{p_{1}}_{(\boldsymbol{\beta})}\right)\in W_{\infty}\times U\times L^{2}(I;V)\times Y$ to ${\mathcal{T}}(y,u,p,p_{1})=\boldsymbol{\beta}$, see step (ii) of the proof of Theorem 2.1. To verify the remaining assumption (H7) we need to argue that $\left(u_{(\boldsymbol{\beta})},p_{(\boldsymbol{\beta})}\right)\in(U\cap C(\bar{I};{\mathcal{U}}))\times W_{\infty}$ and that $\displaystyle\boldsymbol{\beta}\mapsto\left(y_{(\boldsymbol{\beta})},u_{(\boldsymbol{\beta})},p_{(\boldsymbol{\beta})}\right)$ is Lipschitz continuous from $\hat{V}\subset\mathcal{Y}$ to $W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}}))\times W_{\infty}$. ###### Remark 4.3. Here we remark on the smallness assumption on $y_{0}$ expressed by $\delta_{2}$, respectively $\tilde{\delta}_{2}$. The condition $y_{0}\in B_{Y}(\delta_{2})$ guarantees the well-posedness of ($\mathcal{P}$), existence and boundedness of adjoint states as expressed in Proposition 4.1. The additional condition $y_{0}\in B_{Y}(\tilde{\delta}_{2})$ implies that the second order optimality condition (H3) is satisfied, for each local solution associated to an initial condition $y_{0}\in B_{Y}(\delta_{2})$. In what follows we formulate the results for all $y_{0}\in B_{Y}(\tilde{\delta}_{2})$. Alternatively we could narrow down the claims to neighborhoods of single local solutions $(\bar{y},\bar{u})$ with $y_{0}\in B_{Y}(\delta_{2})$ and additionally assume that the second order condition is satisfied at $(\bar{y},\bar{u})$.
Concerning the second order condition itself, in some publications, see e.g. [Gri], it is required to hold only for elements $x=(y,u)\in\text{ker}E$ and $u=u_{1}-u_{2}$, with $u_{1},u_{2}$ in $U_{ad}$. By a scaling argument it can easily be seen that this condition is equivalent to the one we use: for $u=u_{1}-u_{2}$ with $u_{1},u_{2}\in U_{ad}$ and $t\in(0,1]$, the element $tu=(tu_{1}+(1-t)u_{2})-u_{2}$ is again a difference of elements of the convex set $U_{ad}$, and the quadratic form is positively homogeneous of degree two. ### 4.3 Verification of (H7) and Lipschitz stability of the linearized problem. Throughout the remainder, we assume that (A1)-(A5) are satisfied and that $y_{0}\in B_{Y}(\tilde{\delta}_{2})$ so that Proposition 4.1 and Lemma 4.7 are applicable. In the following, the triple $(y,u,p)$ refers to the solution of ${\mathcal{T}}(y,u,p,p_{1})=\boldsymbol{\beta}$. Throughout, without loss of generality, we also assume that $\hat{V}$ is bounded. ###### Lemma 4.8. Let assumptions (A) hold and let $(\bar{y},\bar{u})$ and $\bar{p}$ denote a local solution and associated adjoint state to ($\mathcal{P}$) corresponding to an initial datum $y_{0}\in B_{Y}(\tilde{\delta}_{2})$. Then the mapping $\boldsymbol{\beta}\mapsto p_{(\boldsymbol{\beta})}$ is continuous from $\hat{V}$ to $W_{\infty}$. ###### Proof. Step 1: For $\boldsymbol{\beta}\in\hat{V}$, with $\hat{V}$ as in Remark 4.2, let $\displaystyle(y_{(\boldsymbol{\beta})},u_{(\boldsymbol{\beta})},p_{(\boldsymbol{\beta})},{p_{1}}_{(\boldsymbol{\beta})})$ be the solution to ${\mathcal{T}}(y,u,p,p_{1})=\boldsymbol{\beta}$. As a consequence of (A5) it is also a solution to $\underline{{\mathcal{T}}}(y,u,p,p(0))=\boldsymbol{\beta}$ with $p\in W_{\infty}$.
Thus the first two equations of this latter equality can be expressed as $\displaystyle- p_{t}-{\mathcal{A}}^{*}p-{\mathcal{F}}^{\prime}(\bar{y})^{*}p+y-[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}(y-\bar{y})$ $\displaystyle=\beta_{1}\quad\text{ in }L^{2}(I;V^{*}),$ (4.35a) $\displaystyle\langle\alpha u-B^{*}p-\beta_{2},w-u\rangle_{U}$ $\displaystyle\geq 0\quad\text{for all }w\in U_{ad},$ (4.35b) where we dropped the dependence of $\displaystyle\left(y_{(\boldsymbol{\beta})},u_{(\boldsymbol{\beta})},p_{(\boldsymbol{\beta})}\right)$ on $\displaystyle\boldsymbol{\beta}$. Since $p\in\widetilde{W}_{\infty}\subset C(\bar{I};Y)$ inequality (4.35b) implies that $u\in C(\bar{I};{\mathcal{U}})$. Step 2: (Boundedness of $\\{p_{(\boldsymbol{\beta})}:\boldsymbol{\beta}\in\hat{V}\\}$). Since $\hat{V}$ is assumed to be bounded, the discussion in Remark 4.2 shows that there exists a constant $M>0$ such that $\left\lVert y_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert u_{(\boldsymbol{\beta})}\right\rVert_{U}\leq M\quad\text{for all }\boldsymbol{\beta}\in\hat{V}.$ To argue the boundedness of $p_{(\boldsymbol{\beta})}$, we use similar techniques as in the proof of Proposition 4.1. In the following $\tilde{\delta},z$ and $C_{1}$ are taken as in the proof of that proposition. For arbitrary $r\in R=\left\\{r\in L^{2}(I;V^{*}):\left\lVert r\right\rVert_{L^{2}(I;V^{*})}\leq\tilde{\delta}\right\\}$, $z$ denotes the solution to $z_{t}-{\mathcal{A}}z-{\mathcal{F}}^{\prime}(\bar{y})z-B(w-u)=-r,\ z(0)=0,\ w=-Kz.$ (4.36) We know that $\left\lVert z\right\rVert_{W_{\infty}}\leq C_{1}$ and $\left\lVert w\right\rVert_{L^{\infty}(I;{\mathcal{U}})}\leq\eta$ independently of $r\in R$. 
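The duality mechanism behind this technique is worth recording explicitly; this is a standard fact, stated here for convenience, with $\tilde{\delta}$ the radius of the ball $R$:

```latex
\sup_{r\in R}\,\langle p,r\rangle_{L^{2}(I;V),L^{2}(I;V^{*})}
  \;=\; \sup_{\lVert r\rVert_{L^{2}(I;V^{*})}\le\tilde{\delta}}\,\langle p,r\rangle
  \;=\; \tilde{\delta}\,\lVert p\rVert_{L^{2}(I;V)}.
```

Hence any bound on the left-hand side that is uniform in $\boldsymbol{\beta}$ yields boundedness of $\lVert p_{(\boldsymbol{\beta})}\rVert_{L^{2}(I;V)}$, which is how the present step concludes.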
Due to (4.35a), we have $\displaystyle\langle p,r\rangle_{L^{2}(I;V),L^{2}(I;V^{*})}$ $\displaystyle=\langle p,-z_{t}+{\mathcal{A}}z+{\mathcal{F}}^{\prime}(\bar{y})z\rangle_{L^{2}(I;V),L^{2}(I;V^{*})}+\langle B^{*}p,w-u\rangle_{U},$ $\displaystyle=\langle y-[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}(y-\bar{y})-\beta_{1},z\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}+\langle\alpha u-\beta_{2},w-u\rangle_{U},$ (4.37) where we used (4.35b) and the feasibility of $w\in U_{ad}$. Consequently $\langle p,r\rangle_{L^{2}(I;V),L^{2}(I;V^{*})}\leq\left\lVert z\right\rVert_{L^{2}(I;V)}\left(\left\lVert y\right\rVert_{L^{2}(I;V^{*})}+\left\lVert\beta_{1}\right\rVert_{L^{2}(I;V^{*})}+\left\lVert[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}(y-\bar{y})\right\rVert_{L^{2}(I;V^{*})}\right)\\\ +\left(\alpha\left\lVert u\right\rVert_{U}+\left\lVert\beta_{2}\right\rVert_{U}\right)\left\lVert w-u\right\rVert_{U}.$ The right hand side is uniformly bounded for $\boldsymbol{\beta}$ in the bounded set $\hat{V}$ and w.r.t. $r\in R$. Hence, taking the supremum w.r.t. $r\in R$, we verify that $\displaystyle\left\\{\left\lVert p_{(\boldsymbol{\beta})}\right\rVert_{L^{2}(I;V)}:\boldsymbol{\beta}\in\hat{V}\right\\}$ is bounded. Boundedness of $\displaystyle\left\\{\left\lVert p_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}:\boldsymbol{\beta}\in\hat{V}\right\\}$ follows from (4.35a). Step 3: (Continuity of $p_{(\boldsymbol{\beta})}$ in $W_{\infty}$). Let $\\{\boldsymbol{\beta}_{n}\\}$ be a convergent sequence in $\hat{V}$ with limit $\boldsymbol{\beta}$. Since $\displaystyle\left\\{\left\lVert p_{(\boldsymbol{\beta}_{n})}\right\rVert_{W_{\infty}}:n\in\mathbb{N}\right\\}$ is bounded, there exists a subsequence $\\{\boldsymbol{\beta}_{n_{k}}\\}$ such that $\displaystyle p_{(\boldsymbol{\beta}_{n_{k}})}\rightharpoonup\tilde{p}$ weakly in $W_{\infty}$ and strongly in $L^{2}(I;Y)$. Here we need the compactness of the embedding of $W_{\infty}$ into $L^{2}(I;Y)$, see e.g. [Emm, Satz 8.1.12, pg 213].
Passing to the limit in the variational form of $-\partial_{t}p_{(\boldsymbol{\beta}_{n_{k}})}-{\mathcal{A}}^{*}p_{(\boldsymbol{\beta}_{n_{k}})}-{\mathcal{F}}^{\prime}(\bar{y})^{*}p_{(\boldsymbol{\beta}_{n_{k}})}+y_{(\boldsymbol{\beta}_{n_{k}})}-[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}\left(y_{(\boldsymbol{\beta}_{n_{k}})}-\bar{y}\right)=(\boldsymbol{\beta}_{n_{k}})_{1},$ we obtain $-\partial_{t}\tilde{p}-{\mathcal{A}}^{*}\tilde{p}-{\mathcal{F}}^{\prime}(\bar{y})^{*}\tilde{p}+y_{(\boldsymbol{\beta})}-[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}\left(y_{(\boldsymbol{\beta})}-\bar{y}\right)=(\boldsymbol{\beta})_{1}.$ (4.38) Since the solution to this equation is unique we have $\displaystyle p_{(\boldsymbol{\beta}_{n})}\rightharpoonup p_{(\boldsymbol{\beta})}$ weakly in $W_{\infty}$. To obtain strong convergence we set $\delta\boldsymbol{\beta}=\boldsymbol{\beta}_{n}-\boldsymbol{\beta},\ \delta p=p_{(\boldsymbol{\beta}_{n})}-p_{(\boldsymbol{\beta})},\ \delta y=y_{(\boldsymbol{\beta}_{n})}-y_{(\boldsymbol{\beta})}$. From (4.35a) we derive that $-\partial_{t}(\delta p)-{\mathcal{A}}^{*}(\delta p)-{\mathcal{F}}^{\prime}(\bar{y})^{*}(\delta p)+(I-[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime})(\delta y)=(\delta\boldsymbol{\beta})_{1}.$ (4.39) Consider again (4.36) with $r\in R$.
Then we obtain $\langle\delta p,r\rangle_{L^{2}(I;V),L^{2}(I;V^{*})}=\langle(I-[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime})(\delta y)-(\delta\boldsymbol{\beta})_{1},z\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}+\langle B^{*}\delta p,w-u\rangle_{U}.$ This implies that for some $C_{2}>0$, $\sup_{r\in R}\ \langle\delta p,r\rangle_{L^{2}(I;V),L^{2}(I;V^{*})}\leq C_{2}\left(\left\lVert\delta y\right\rVert_{W_{\infty}}+\left\lVert\delta p\right\rVert_{L^{2}(I;Y)}+\left\lVert(\delta\boldsymbol{\beta})_{1}\right\rVert_{L^{2}(I;V^{*})}\right).$ (4.40) Since $\left\lVert\delta y\right\rVert_{W_{\infty}}\to 0,\ \left\lVert\delta p\right\rVert_{L^{2}(I;Y)}\to 0,\ \left\lVert(\delta\boldsymbol{\beta})_{1}\right\rVert_{L^{2}(I;V^{*})}\to 0$ as $n\to\infty$, this implies that $\left\lVert\delta p\right\rVert_{L^{2}(I;V)}\to 0$. Together with (4.39) this implies that $\displaystyle\lim_{n\rightarrow\infty}\left\lVert\delta p\right\rVert_{W_{\infty}}=0$. ∎ ###### Proposition 4.2. Let assumptions (A) hold and let $(\bar{y},\bar{u})$ and $\bar{p}$ denote a local solution and associated adjoint state to ($\mathcal{P}$) corresponding to an initial condition $\bar{y}_{0}\in B_{Y}(\tilde{\delta}_{2})$. Then there exist $\varepsilon>0$ and $C>0$ such that for all $\boldsymbol{\hat{\beta}}$ and $\boldsymbol{\beta}\in\hat{V}\cap B_{{\mathcal{Y}}}(\varepsilon)$ $\left\lVert\hat{y}_{(\hat{\boldsymbol{\beta}})}-y_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert\hat{p}_{(\hat{\boldsymbol{\beta}})}-p_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert u_{(\hat{\boldsymbol{\beta}})}-u_{(\boldsymbol{\beta})}\right\rVert_{C(\bar{I};{\mathcal{U}})}\\!\\\ \leq C\left(\left\lVert\hat{y}_{(\hat{\boldsymbol{\beta}})}-y_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert u_{(\hat{\boldsymbol{\beta}})}-u_{(\boldsymbol{\beta})}\right\rVert_{U}+\left\lVert\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right\rVert_{{\mathcal{Y}}}\right)$ (4.41) holds. ###### Proof.
Since $\bar{p}\in W_{\infty}$, there exists $T>0$ such that $\frac{1}{\alpha}\left\lVert B^{*}\bar{p}(t)\right\rVert_{Y}\leq\frac{\eta}{2},\quad\forall t>T.$ Since $p_{(\boldsymbol{0})}=\bar{p}$ and since by Lemma 4.8, $\displaystyle\boldsymbol{\beta}\in{\mathcal{Y}}\mapsto p_{(\boldsymbol{\beta})}\in W_{\infty}\subset C(\bar{I};Y)$ is continuous, there exists $\varepsilon>0$ such that $\frac{1}{\alpha}\left\lVert B^{*}p_{(\boldsymbol{\beta})}(t)+\beta_{2}(t)\right\rVert_{Y}\leq\frac{3\eta}{4},\quad\forall t\geq T,\ \forall\boldsymbol{\beta}\in\hat{V}\cap B_{{\mathcal{Y}}}(\varepsilon).$ Consequently the constraints are inactive for these parameter values, i.e. we have $u_{(\boldsymbol{\beta})}(t)=\frac{1}{\alpha}\left[B^{*}p_{(\boldsymbol{\beta})}(t)+\beta_{2}(t)\right],\ \left\lVert u_{(\boldsymbol{\beta})}(t)\right\rVert_{Y}\leq\eta,\quad\forall t\geq T,\ \forall\boldsymbol{\beta}\in\hat{V}\cap B_{{\mathcal{Y}}}(\varepsilon).$ (4.42) Let us henceforth set $(y,u,p)=\left(y_{(\boldsymbol{\beta})},u_{(\boldsymbol{\beta})},p_{(\boldsymbol{\beta})}\right)$, and $\left(\hat{y},\hat{u},\hat{p}\right)=\left(\hat{y}_{(\boldsymbol{\hat{\beta}})},\hat{u}_{(\boldsymbol{\hat{\beta}})},\hat{p}_{(\boldsymbol{\hat{\beta}})}\right)$. We shall use that $\left\lVert\hat{p}-p\right\rVert_{L^{2}(T,\infty;V)}=\sup_{\left\lVert r\right\rVert_{L^{2}(T,\infty;V^{*})}\leq 1}\int_{T}^{\infty}\langle\hat{p}(t)-p(t),r(t)\rangle_{V,V^{*}}dt.$ Let $z\in W(T,\infty)$ be the solution of $z_{t}-({\mathcal{A}}-BK)z-{\mathcal{F}}^{\prime}(\bar{y})z=r,\ z(T)=0.$ From Lemma 4.6, see also the proof of Proposition 4.1, we know that there exists a constant $C_{1}>0$ such that $\displaystyle\left\lVert z\right\rVert_{W(T,\infty)}\leq C_{1}\left\lVert r\right\rVert_{L^{2}(T,\infty;V^{*})}$.
Then we can estimate $\displaystyle\left\lVert\hat{p}-p\right\rVert_{L^{2}(T,\infty;V)}$ $\displaystyle=\sup_{\left\lVert r\right\rVert_{L^{2}(T,\infty;V^{*})}\leq 1}\int_{T}^{\infty}\langle\hat{p}-p,r\rangle_{V,V^{*}}dt$ $\displaystyle\leq\sup_{\left\lVert r\right\rVert\leq 1}\int_{T}^{\infty}-\langle(\hat{p}_{t}-p_{t})+{\mathcal{A}}^{*}(\hat{p}-p)+{\mathcal{F}}^{\prime}(\bar{y})^{*}(\hat{p}-p),z\rangle_{V^{*},V}dt$ $\displaystyle\hskip 199.16928pt+\sup_{\left\lVert r\right\rVert\leq 1}\int_{T}^{\infty}\langle B^{*}(\hat{p}-p),Kz\rangle_{V,V^{*}}dt.$ In the following, $C_{i}$ denote constants independent of $\boldsymbol{\hat{\beta}}\text{ and }\boldsymbol{\beta}\in\hat{V}\cap B_{{\mathcal{Y}}}(\varepsilon)$. From (4.35a) and (4.42) we obtain, for $C_{2}>0$, $\left\lVert\hat{p}-p\right\rVert_{L^{2}(T,\infty;V)}\leq C_{2}\sup_{\left\lVert r\right\rVert\leq 1}\int_{T}^{\infty}\left[\left\lVert\hat{y}-y\right\rVert_{V^{*}}+\left\lVert[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}(\hat{y}-y)\right\rVert_{V^{*}}+\left\lVert\hat{\beta}_{1}-\beta_{1}\right\rVert_{V^{*}}\right.\\\ \left.+\left\lVert\hat{\beta}_{2}-\beta_{2}\right\rVert_{U\cap C(I;{\mathcal{U}})}+\alpha\left\lVert B^{*}\right\rVert\left\lVert K\right\rVert\left\lVert\hat{u}-u\right\rVert\right]\left\lVert z\right\rVert_{V}dt.$ From (A3) recall that $\displaystyle\left\lVert[{\mathcal{F}}^{\prime}(\bar{y})^{*}\bar{p}]^{\prime}(\hat{y}-y)\right\rVert_{L^{2}(I;V^{*})}\leq C_{3}\left\lVert\hat{y}-y\right\rVert_{W_{\infty}}$. This gives the following estimate for $C_{4}>0$, $\left\lVert\hat{p}-p\right\rVert_{L^{2}(T,\infty;V)}\leq C_{4}\left(\left\lVert\hat{y}-y\right\rVert_{W_{\infty}}+\left\lVert\hat{u}-u\right\rVert_{U}+\left\lVert\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right\rVert_{{\mathcal{Y}}}\right).$ (4.43) By (4.35a) we have $\displaystyle\left(\hat{p}_{t}-p_{t}\right)\in L^{2}(T,\infty;V^{*})$. Then we obtain $\hat{p}-p\in W(T,\infty)$. 
Then there exists $C_{5}>0$ independent of $\boldsymbol{\hat{\beta}}\text{ and }\boldsymbol{\beta}\in\hat{V}\cap B_{{\mathcal{Y}}}(\varepsilon)$ such that, $\left\lVert\hat{p}-p\right\rVert_{W(T,\infty)}\leq C_{5}\left(\left\lVert\hat{y}-y\right\rVert_{W_{\infty}}+\left\lVert\hat{u}-u\right\rVert_{U}+\left\lVert\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right\rVert_{{\mathcal{Y}}}\right).$ (4.44) By the embedding $\displaystyle W(T,\infty)\subset C(T,\infty;Y)$, there exists a constant $C_{6}>0$: $\left\lVert\hat{p}-p\right\rVert_{C([T,\infty);Y)}\leq C_{6}\left(\left\lVert\hat{y}-y\right\rVert_{W_{\infty}}+\left\lVert\hat{u}-u\right\rVert_{U}+\left\lVert\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right\rVert_{{\mathcal{Y}}}\right).$ (4.45) Similarly, we estimate on $[0,T]$: $\left\lVert\hat{p}-p\right\rVert_{L^{2}(0,T;V)}=\sup_{\left\lVert r\right\rVert_{L^{2}(0,T;V^{*})}\leq 1}\int_{0}^{T}\langle\hat{p}-p,r\rangle_{V,V^{*}}dt.$ (4.46) Choose $z$ as $z_{t}-\left({\mathcal{A}}z+{\mathcal{F}}^{\prime}(\bar{y})z\right)=r,\ z(0)=0,$ Then there exists $\displaystyle C_{7}>0$ such that $\displaystyle\left\lVert z\right\rVert_{W(0,T)}\leq C_{7}\left\lVert r\right\rVert_{L^{2}(0,T;V^{*})}$ by Lemma 4.6. Note that $C_{7}$ depends on $T$, but $T$ is fixed. 
We obtain the following estimate, $\left\lVert\hat{p}-p\right\rVert_{L^{2}(0,T;V)}\leq\sup_{\left\lVert r\right\rVert\leq 1}\int_{0}^{T}-\langle(\hat{p}_{t}-p_{t})+{\mathcal{A}}^{*}(\hat{p}-p)+{\mathcal{F}}^{\prime}(\bar{y})^{*}(\hat{p}-p),z\rangle_{V^{*},V}dt\\\ +\left\lVert\hat{p}(T)-p(T)\right\rVert_{Y}\left\lVert z(T)\right\rVert_{Y}.$ Then by a computation similar to that for the case $t\in[T,\infty)$, we obtain $\left\lVert\hat{p}-p\right\rVert_{L^{2}(0,T;V)}\leq C_{8}\left(\left\lVert\hat{y}-y\right\rVert_{W_{\infty}}+\left\lVert\hat{\beta}_{1}-\beta_{1}\right\rVert_{L^{2}(I;V^{*})}\right)+\left\lVert\hat{p}(T)-p(T)\right\rVert_{Y}\left\lVert z(T)\right\rVert_{Y}.$ (4.47) By (4.45) with $\left\lVert z(T)\right\rVert_{Y}\leq C_{9}$, we obtain $\left\lVert\hat{p}(T)-p(T)\right\rVert_{Y}\left\lVert z(T)\right\rVert_{Y}\leq C_{6}C_{9}\left(\left\lVert\hat{y}-y\right\rVert_{W_{\infty}}+\left\lVert\hat{u}-u\right\rVert_{U}+\left\lVert\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right\rVert_{{\mathcal{Y}}}\right).$ Combining this estimate with (4.43) and (4.47), we obtain for some $C_{10}>0$ $\left\lVert\hat{p}_{(\hat{\boldsymbol{\beta}})}-p_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}\leq C_{10}\left(\left\lVert\hat{y}_{(\hat{\boldsymbol{\beta}})}-y_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert\hat{u}_{(\hat{\boldsymbol{\beta}})}-u_{(\boldsymbol{\beta})}\right\rVert_{U}+\left\lVert\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right\rVert_{{\mathcal{Y}}}\right).$ (4.48) We also have, cf. (4.35b) and (4.42), $u_{(\boldsymbol{\beta})}=\mathbb{P}_{\mathcal{U}_{ad}}\left[\frac{1}{\alpha}\left(B^{*}p_{(\boldsymbol{\beta})}+\beta_{2}\right)\right]\in U\cap C(\bar{I};{\mathcal{U}}),$ and thus $\displaystyle\left\lVert\hat{u}_{(\hat{\boldsymbol{\beta}})}(t)-u_{(\boldsymbol{\beta})}(t)\right\rVert_{{\mathcal{U}}}$
$\displaystyle\leq\left\lVert\mathbb{P}_{\mathcal{U}_{ad}}\left[\frac{1}{\alpha}\left(B^{*}\hat{p}_{(\hat{\boldsymbol{\beta}})}(t)+\hat{\beta}_{2}(t)\right)\right]-\mathbb{P}_{\mathcal{U}_{ad}}\left[\frac{1}{\alpha}\left(B^{*}p_{(\boldsymbol{\beta})}(t)+\beta_{2}(t)\right)\right]\right\rVert_{{\mathcal{U}}}$ $\displaystyle\leq\frac{1}{\alpha}\left(\left\lVert B^{*}\right\rVert\left\lVert\hat{p}_{(\hat{\boldsymbol{\beta}})}(t)-p_{(\boldsymbol{\beta})}(t)\right\rVert_{Y}+\left\lVert\hat{\beta}_{2}(t)-\beta_{2}(t)\right\rVert_{{\mathcal{U}}}\right),$ where we used that the projection onto the convex set $\mathcal{U}_{ad}$ is Lipschitz continuous with constant $1$. This yields $\left\lVert\hat{u}_{(\hat{\boldsymbol{\beta}})}-u_{(\boldsymbol{\beta})}\right\rVert_{C(\bar{I};{\mathcal{U}})}\leq C_{11}\left(\left\lVert\hat{p}_{(\hat{\boldsymbol{\beta}})}-p_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert\hat{\beta}_{2}-\beta_{2}\right\rVert_{C(\bar{I};{\mathcal{U}})}\right),$ (4.49) and (4.41) follows. Combining Remark 4.2 and (4.41), there exists a constant $L$ such that $\left\lVert\hat{y}_{(\hat{\boldsymbol{\beta}})}-y_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert\hat{p}_{(\hat{\boldsymbol{\beta}})}-p_{(\boldsymbol{\beta})}\right\rVert_{W_{\infty}}+\left\lVert\hat{u}_{(\hat{\boldsymbol{\beta}})}-u_{(\boldsymbol{\beta})}\right\rVert_{U\cap C(\bar{I};{\mathcal{U}})}\leq L\left\lVert\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\right\rVert_{{\mathcal{Y}}}$ (4.50) for all $\displaystyle\boldsymbol{\hat{\beta}}$ and $\displaystyle\boldsymbol{\beta}\in\hat{V}\cap B_{{\mathcal{Y}}}(\varepsilon)$. Here and in the following the $p_{1}$ coordinate of the adjoint state coincides with $p(0)$; therefore it is not indicated. ∎ With the above proposition, the verification of (H1)–(H7) is concluded. We also obtain the following corollary to Theorem 2.1. ###### Corollary 4.9. Let assumptions (A) hold and let $(\bar{y},\bar{u})$ be a local solution of ($\mathcal{P}$) corresponding to an initial datum $\bar{y}_{0}\in B_{Y}(\tilde{\delta}_{2})$.
Then there exist $\delta_{3}>0$, a neighborhood $\hat{U}=\hat{U}(\bar{y},\bar{u},p)\subset W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}}))\times W_{\infty}$, and a constant $\mu>0$ such that for each $y_{0}\in B_{Y}(\bar{y}_{0},\delta_{3})$ there exists a unique $(y(y_{0}),u(y_{0}),p(y_{0}))\in\hat{U}$ satisfying the first order condition, and $\left\lVert\left(y(\hat{y}_{0}),u(\hat{y}_{0}),p(\hat{y}_{0})\right)-\left(y(\tilde{y}_{0}),u(\tilde{y}_{0}),p(\tilde{y}_{0})\right)\right\rVert_{W_{\infty}\times(U\cap C(\bar{I};{\mathcal{U}}))\times W_{\infty}}\leq\mu\left\lVert\hat{y}_{0}-\tilde{y}_{0}\right\rVert_{Y},$ (4.51) for all $\hat{y}_{0},\tilde{y}_{0}\in B_{Y}(\bar{y}_{0},\delta_{3})$, and $\left(y(y_{0}),u(y_{0})\right)$ is a local solution of ($\mathcal{P}$). Next we obtain one of the main results of this paper, the Fréchet differentiability of the local value function associated to ($\mathcal{P}$). By local value function we refer to the fact that local solutions for some $y_{0}\in B_{Y}(\tilde{\delta}_{2})$ may not be unique. But since, due to the second order optimality condition, local solutions are locally unique under small perturbations of $y_{0}$, there is a well-defined local value function. We continue to use the notation for $\hat{U}$ and $B_{Y}(\bar{y}_{0},\delta_{3})$ of Corollary 4.9. ###### Theorem 4.1. (Sensitivity of Cost) Let assumptions (A) hold and let $(\bar{y},\bar{u})$ be a local solution of ($\mathcal{P}$) corresponding to an initial datum $\bar{y}_{0}\in B_{Y}(\tilde{\delta}_{2})$. Then for each $y_{0}\in B_{Y}(\bar{y}_{0},\delta_{3})$ the local value function $\mathcal{V}$ associated to ($\mathcal{P}$) is Fréchet differentiable with derivative given by $\mathcal{V}^{\prime}(y_{0})=-p(0;y_{0}).$ (4.52) ###### Proof. Let $\bar{y}_{0}\in B_{Y}(\tilde{\delta}_{2}),y_{0}\in B_{Y}(\bar{y}_{0},\delta_{3})$, and choose $\delta y_{0}$ sufficiently small so that $y_{0}+\delta y_{0}\in B_{Y}(\bar{y}_{0},\delta_{3})$ as well.
Following Corollary 4.9 let $(\tilde{y}(y_{0}+s(\delta y_{0})),\tilde{u}(y_{0}+s(\delta y_{0})),\tilde{p}(y_{0}+s(\delta y_{0})))\in\hat{U}$ for $s\in[0,1]$ be solutions of the optimality system with $(\tilde{y}(y_{0}+s(\delta y_{0})),\tilde{u}(y_{0}+s(\delta y_{0})))$ local solutions to ($\mathcal{P}$). We obtain $\mathcal{V}(y_{0}+s(\delta y_{0}))-\mathcal{V}(y_{0})=\left(\frac{1}{2}\left\lVert\tilde{y}\right\rVert^{2}_{L^{2}(I,Y)}+\frac{\alpha}{2}\left\lVert\tilde{u}\right\rVert^{2}_{U}\right)-\left(\frac{1}{2}\left\lVert y\right\rVert^{2}_{L^{2}(I,Y)}+\frac{\alpha}{2}\left\lVert u\right\rVert^{2}_{U}\right)\\\ =\langle y,\tilde{y}-y\rangle_{L^{2}(I,Y)}+\alpha\langle u,\tilde{u}-u\rangle_{U}+\frac{1}{2}\left\lVert\tilde{y}-y\right\rVert^{2}_{L^{2}(I,Y)}+\frac{{\alpha}}{2}\left\lVert\tilde{u}-u\right\rVert^{2}_{U}.$ (4.53) Observe the identity $\langle y,\tilde{y}-y\rangle_{L^{2}(I,Y)}+\alpha\langle u,\tilde{u}-u\rangle_{U}={-}(p(0),s(\delta y_{0}))_{Y}{-}\langle(\tilde{y}_{t}-y_{t})-{\mathcal{A}}(\tilde{y}-y)-{\mathcal{F}}^{\prime}(y)(\tilde{y}-y),p\rangle+\alpha\langle u,\tilde{u}-u\rangle_{U}\\\ ={-}(p(0),s(\delta y_{0}))_{Y}{-}\langle{\mathcal{F}}(\tilde{y})-{\mathcal{F}}(y)-{\mathcal{F}}^{\prime}(y)(\tilde{y}-y),p\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}+\langle\alpha u-B^{*}p,\tilde{u}-u\rangle_{U},$ where $p=p(y_{0})$. 
Now we have for $\displaystyle\mathcal{V}(y_{0}+s(\delta y_{0}))-\mathcal{V}(y_{0})$, $\mathcal{V}(y_{0}+{s}(\delta y_{0}))-\mathcal{V}(y_{0})={-}(p(0),{s}(\delta y_{0}))_{Y}+\langle{\mathcal{F}}(y)-{\mathcal{F}}(\tilde{y})+{\mathcal{F}}^{\prime}(y)(\tilde{y}-y),p\rangle_{L^{2}(I,V^{*}),L^{2}(I,V)}\\\ +\langle\alpha u-B^{*}p,\tilde{u}-u\rangle_{U}+\frac{1}{2}\left\lVert\tilde{y}-y\right\rVert^{2}_{L^{2}(I,Y)}+\frac{{\alpha}}{2}\left\lVert\tilde{u}-u\right\rVert^{2}_{U}.$ (4.54) Since $p\in L^{2}(I;V)$, $\left\lVert\tilde{y}-y\right\rVert_{W_{\infty}}=O(s)$, and by the continuous Fréchet differentiability of ${\mathcal{F}}$ due to (A3), we have $\left\lvert\langle{\mathcal{F}}(\tilde{y})-{\mathcal{F}}(y)-{\mathcal{F}}^{\prime}(y)(\tilde{y}-y),p\rangle_{L^{2}(I,V^{*}),L^{2}(I,V)}\right\rvert=o(s).$ (4.55) Let $s_{n}\to 0$ be an arbitrary convergent sequence. By Corollary 4.9 we have that $\left\lVert\tilde{u}(y_{0}+s_{n}(\delta y_{0}))-u(y_{0})\right\rVert_{U}\leq\mu s_{n}\left\lVert\delta y_{0}\right\rVert_{Y}$ for all $s_{n}$ sufficiently small. Hence there exist a subsequence, denoted by the same symbols, and some $\dot{u}$ such that $s_{n}^{-1}\left(\tilde{u}(y_{0}+s_{n}(\delta y_{0}))-u(y_{0})\right)\rightharpoonup\dot{u}\text{ weakly in }U.$ Using (4.18), we have $\lim_{n\rightarrow\infty}s_{n}^{-1}\langle\alpha u-B^{*}p,\tilde{u}-u\rangle_{U}=\langle\alpha u-B^{*}p,\dot{u}\rangle_{U}\geq 0.$ Analogously $\lim_{n\rightarrow\infty}s_{n}^{-1}\langle\alpha\tilde{u}-B^{*}p,u-\tilde{u}\rangle_{U}=-\langle\alpha u-B^{*}p,\dot{u}\rangle_{U}\geq 0,$ and hence $\langle\alpha u-B^{*}p,\dot{u}\rangle_{U}=0$.
Since the sequence $\\{s_{n}\\}$ is arbitrary, we obtain $\langle\alpha u-B^{*}p,\tilde{u}-u\rangle_{U}=o(s).$ (4.56) Corollary 4.9 yields, $\left\lVert\tilde{y}(y_{0}+{s_{n}}(\delta y_{0}))-y(y_{0})\right\rVert^{2}_{L^{2}(I;Y)}+\alpha\left\lVert\tilde{u}(y_{0}+{s_{n}}(\delta y_{0}))-u(y_{0})\right\rVert^{2}_{L^{2}(I;Y)}=o(s_{n}).$ (4.57) Combining (4.55), (4.56), and (4.57) we obtain $\lim_{s\rightarrow 0^{+}}{s^{-1}}\left(\mathcal{V}(y_{0}+s(\delta y_{0}))-\mathcal{V}(y_{0})\right)={-}(p(0),(\delta y_{0}))_{Y}.$ (4.58) This implies the Gateaux differentiability. Since $y_{0}\to p(y_{0})$ is continuous from $B_{Y}(\bar{y}_{0};\delta_{3})$ to $C(\bar{I},Y)$ the mapping $y_{0}\to\mathcal{V}(y_{0})$ is Fréchet differentiable in $B_{Y}(\bar{y}_{0};\delta_{3})$. ∎ ###### Remark 4.4 (Sensitivity w.r.t. other parameters). We have developed a technique to verify the continuous differentiability of the local value function $\mathcal{V}$ pertaining to a semilinear parabolic equation on infinite time horizon subject to control constraints with respect to small initial data $y_{0}\in Y$. Thus the parameter $q$ in ($P_{q}$) is the initial condition $y_{0}$. The reason to focus on this case is due to feedback control. Without much additional effort the sensitivity analysis of the value function could be carried out with respect to other parameters as for instance additive noise on the right hand side of the state equation. The papers cited in the introduction, see e.g. [GHH], [GV], consider such situations for the finite horizon case. ## 5 Proof of Theorem 3.2: Derivation of the HJB Equation. Utilizing the results established so far we now verify that the (global) value function $\mathcal{V}$ (i.e. the value function associated to global minima) is a solution to a Hamilton-Jacobi-Bellman equation. The initial conditions will be chosen from the neighborhood $Y_{0}$ of the origin in $Y$ so that the assertions of Theorem 4.1 and Corollary 4.9 are available. 
It will be convenient to recall the dynamic programming principle for the infinite time horizon problem: let $y_{0}$ be an initial condition for which a solution to ($\mathcal{P}$) exists. Then for all $\tau>0$, we have $\mathcal{V}(y_{0})=\inf_{u\in L^{2}(0,\tau;\mathcal{U}_{ad})}\int_{0}^{\tau}\ell(S(u,y_{0};t),u(t))dt+\mathcal{V}(S(u,y_{0};\tau)),$ (5.1) where $\displaystyle\ell(y,u)=\frac{1}{2}\left\lVert y\right\rVert^{2}_{Y}+\frac{\alpha}{2}\left\lVert u\right\rVert^{2}_{{\mathcal{U}}}$, and $S(u,y_{0};t)$ denotes the solution to (3.2b), (3.2c) on $(0,\tau]$. For convenience we restate Theorem 3.2. Utilizing the notation that we have already established, we can now slightly ease the assumption on the regularity of ${\mathcal{F}}(\bar{y})$. ###### Theorem 5.1. Let assumptions (A) hold and let $(\bar{y},\bar{u})$ be a global solution of ($\mathcal{P}$) corresponding to an initial datum $\bar{y}_{0}\in B_{Y}(\tilde{\delta}_{2})$. Let $Y_{0}$ denote the subset of initial conditions in $B_{Y}(\bar{y}_{0},\delta_{3})$ which allow global solutions in $\hat{U}$, and assume that for each $y_{0}\in{\mathcal{D}}({\mathcal{A}})\cap Y_{0}$ there exists $T_{y_{0}}>0$ such that ${\mathcal{F}}(\bar{y})\in C([0,T_{y_{0}});Y)$. Then the following Hamilton-Jacobi-Bellman equation holds at $y_{0}$: $\mathcal{V}^{\prime}(y)({\mathcal{A}}y+{\mathcal{F}}(y))+\frac{1}{2}\left\lVert y\right\rVert^{2}_{Y}+\frac{\alpha}{2}\left\lVert\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}(y)\right)\right\rVert^{2}_{Y}+\left\langle B^{*}\mathcal{V}^{\prime}(y),\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}(y)\right)\right\rangle_{Y}=0.$ (5.2) If for the optimal trajectory $\bar{y}(t)\in B_{Y}(\bar{y}_{0},\delta_{3})\cap{\mathcal{D}}({\mathcal{A}})$ for a.a. $t\in(0,\infty)$ and $T_{y_{0}}=\infty$, then (5.2) holds at a.a.
$t\in(0,\infty)$ and ${\bar{u}}(t)=\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}({\bar{y}}(t))\right).$ (5.3) ###### Proof. The proof is similar to that of [BKP1, Proposition 10]. For the sake of completeness, and since it requires some changes, we provide it here. Choose and fix some $y_{0}\in{\mathcal{D}}({\mathcal{A}})\cap Y_{0}$. Then the existence of a (globally) optimal pair $(\hat{y},\hat{u})\in W_{\infty}\times U_{ad}$ to ($\mathcal{P}$) and of an associated adjoint state $\hat{p}\in W_{\infty}$ with $(\hat{y},\hat{u},\hat{p})\in\hat{U}$ are guaranteed, see Corollary 4.9. In particular we have that $\displaystyle\hat{u}(t)=\mathbb{P}_{\mathcal{U}_{ad}}\left(\frac{1}{\alpha}B^{*}{\hat{p}}(t)\right)$, and since $\displaystyle\hat{p}\in C([0,\infty);Y)$ we have that $\displaystyle\hat{u}\in C([0,\infty);Y)$. Let $u_{0}$ denote the limit of $\hat{u}$ as time $t$ tends to $0$. Since $\hat{y}\in C([0,\infty);Y)$ and since $B_{Y}(y_{0},\delta_{3})$ is open there exists $\tau_{y_{0}}>0$ such that $\hat{y}(t)\in B_{Y}(y_{0},\delta_{3})$, for all $t\in[0,\tau_{y_{0}})$. Step 1: Let us first prove that $\mathcal{V}^{\prime}(y_{0})\big{(}{\mathcal{A}}y_{0}+{\mathcal{F}}(y_{0})+Bu_{0}\big{)}+\ell(y_{0},u_{0})=0.$ (5.4) For this purpose we invoke the dynamic programming principle: We have $\frac{1}{\tau}\int_{0}^{\tau}\ell(\hat{y}(s),\hat{u}(s))ds+\frac{1}{\tau}\big{(}\mathcal{V}(\hat{y}(\tau))-\mathcal{V}(y_{0})\big{)}=0,$ (5.5) where we choose $\tau\in(0,\min(T_{y_{0}},\tau_{y_{0}}))$. By continuity of $\hat{y}$ and $\hat{u}$ at time $0$, the first term converges to $\displaystyle\ell(y_{0},u_{0})$ as $\tau\to 0$.
To take $\tau\to 0$ in the second term, we first consider $\frac{1}{\tau}\big{(}\hat{y}(\tau)-y_{0}\big{)}=\frac{1}{\tau}\big{(}e^{{\mathcal{A}}\tau}y_{0}-y_{0}\big{)}+\frac{1}{\tau}\int_{0}^{\tau}e^{{\mathcal{A}}(\tau-s)}\big{[}{\mathcal{F}}(\hat{y}(s))+B\hat{u}(s)\big{]}ds.$ (5.6) Using the facts that $y_{0}\in{\mathcal{D}}({\mathcal{A}})$, that the terms in square brackets are continuous with values in $Y$, and that ${\mathcal{A}}$ generates a strongly continuous semigroup on $Y$, we can pass to the limit in (5.6) to obtain that $\lim_{\tau\to 0^{+}}\frac{1}{\tau}\big{(}\hat{y}(\tau)-y_{0}\big{)}={\mathcal{A}}y_{0}+{\mathcal{F}}(y_{0})+Bu_{0}\text{ in }Y.$ (5.7) Now we return to the second term in (5.5) which we express as $\frac{1}{\tau}\big{(}\mathcal{V}(\hat{y}(\tau))-\mathcal{V}(y_{0})\big{)}=\int_{0}^{1}\mathcal{V}^{\prime}\big{(}y_{0}+s(\hat{y}(\tau)-y_{0})\big{)}ds\;\frac{1}{\tau}(\hat{y}(\tau)-y_{0}).$ (5.8) Using (5.7) and since $y\mapsto\mathcal{V}^{\prime}(y)$ is continuously differentiable at $y_{0}$, we can pass to the limit in (5.8) to obtain $\lim_{\tau\to 0^{+}}\frac{1}{\tau}\big{(}\mathcal{V}(\hat{y}(\tau))-\mathcal{V}(y_{0})\big{)}=\mathcal{V}^{\prime}(y_{0})\big{(}{\mathcal{A}}y_{0}+{\mathcal{F}}(y_{0})+Bu_{0}\big{)}.$ (5.9) Now we can pass to the limit in (5.5) and obtain (5.4). Step 2: For $u\in\mathcal{U}_{ad}$ we define $\tilde{u}\in U_{ad}$ by $\widetilde{u}(\tau,x)=\begin{cases}u\quad\text{for }\tau\in(0,1)\\\ 0\quad\text{ for }\tau\in[1,\infty)\end{cases}$ and define $\tilde{y}=S(\tilde{u},y_{0};\cdot)$ as the solution to (3.2b), (3.2c). Then $\tilde{y}(t)\in B_{Y}(\bar{y}_{0},\delta_{3})$, for all $t$ sufficiently small, and by (5.1) we have $\frac{1}{\tau}\int_{0}^{\tau}\ell(\tilde{y}(s),u(s))ds+\frac{1}{\tau}\big{(}\mathcal{V}(\tilde{y}(\tau))-\mathcal{V}(y_{0})\big{)}\geq 0,$ for all $\tau$ sufficiently small.
We pass to the limit $\tau\to 0^{+}$ with the same arguments as in Step 1 and obtain $\mathcal{V}^{\prime}(y_{0})\big{(}{\mathcal{A}}y_{0}+{\mathcal{F}}(y_{0})+Bu\big{)}+\ell(y_{0},u)\geq 0.$ (5.10) This inequality becomes an equality if $u=u_{0}$, and thus the quadratic function on the left-hand side of (5.10) reaches its minimum $0$ at $u=u_{0}$. This implies that $u_{0}=\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}(y_{0})\right).$ Inserting this expression into (5.4) we obtain $\mathcal{V}^{\prime}(y_{0})({\mathcal{A}}y_{0}+{\mathcal{F}}(y_{0}))+\frac{1}{2}\left\lVert y_{0}\right\rVert^{2}_{Y}+\frac{\alpha}{2}\left\lVert\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}(y_{0})\right)\right\rVert^{2}_{Y}+\left\langle B^{*}\mathcal{V}^{\prime}(y_{0}),\mathbb{P}_{\mathcal{U}_{ad}}\left(-\frac{1}{\alpha}B^{*}\mathcal{V}^{\prime}(y_{0})\right)\right\rangle_{Y}=0.$ (5.11) Under the additional assumptions on the trajectory, (5.3) follows. ∎ ## 6 Some Applications In this section we discuss the applicability of the framework in two specific cases. It should be noted that even for linear state equations, the sensitivity result for the constrained infinite horizon optimal control problem may be new.
### 6.1 Fisher’s Equation We consider the optimal stabilization problem for the Fisher equation in an open connected bounded domain $\Omega$ in $\mathbb{R}^{d},\ d\in\\{1,2,3,4\\}$, with Lipschitzian boundary $\Gamma=\partial\Omega$: $\displaystyle(\mathcal{P}_{Fis})\qquad\mathcal{V}(y_{0})=\min_{\begin{matrix}(y,u)\in W_{\infty}\times U_{ad}\end{matrix}}\ \frac{1}{2}\int_{0}^{\infty}\left\lVert y\right\rVert^{2}_{Y}dt+\frac{\alpha}{2}\int_{0}^{\infty}\left\lVert u\right\rVert^{2}_{{\mathcal{U}}}dt,$ (6.1a) subject to $\displaystyle y_{t}$ $\displaystyle=\Delta y+y(1-y)+Bu\quad$ $\displaystyle\text{ in }Q=(0,\infty)\times\Omega$ (6.1b) $\displaystyle y$ $\displaystyle=0\quad$ $\displaystyle\text{ on }\Sigma=(0,\infty)\times\Gamma$ (6.1c) $\displaystyle y(0)$ $\displaystyle=y_{0}\quad$ $\displaystyle\text{ in }\Omega.$ (6.1d) where ${\mathcal{U}}$ and $U_{ad}$ are as in Section 3.1, $B\in{\mathcal{L}}({{\mathcal{U}},Y})$, with $Y=L^{2}(\Omega)$ and $V=H^{1}_{0}(\Omega)$. To further cast this problem in the framework of Section 3, we define the operator ${\mathcal{A}}y=(\Delta+\mathbf{I})y\quad\text{and}\quad y|_{\Gamma}=0,\quad{\mathcal{D}}({\mathcal{A}})=H^{2}(\Omega)\cap V.$ Clearly ${\mathcal{A}}$ has an extension as operator ${\mathcal{A}}\in{\mathcal{L}}(V,V^{*})$. Moreover it generates an analytic semigroup on $Y$. Thus (A1) holds. For ${\mathcal{U}}=Y$ and $B=\mathbf{I}$, condition (A2) is trivially satisfied. Feedback stabilization by finite dimensional controllers was analyzed in [Tri], for example. It can readily be checked that the nonlinearity ${\mathcal{F}}(y)=-y^{2}$ is twice continuously differentiable as mapping ${\mathcal{F}}:W_{\infty}\to L^{2}(I;V^{*})$. We note that the second derivative is constant, so the required boundedness of the second derivative is automatic. For the sake of illustration we verify the boundedness of the bilinear form of the second derivative on $W_{\infty}\times W_{\infty}$. 
For this purpose, for arbitrary $y\in W_{\infty},v_{1},v_{2}\in W_{\infty},\phi\in L^{2}(I;V)$ we estimate $\displaystyle\int_{0}^{\infty}\langle{\mathcal{F}}^{\prime\prime}({y})(v_{1},v_{2}),\phi\rangle_{V^{*},V}dt$ $\displaystyle\leq 2\int_{0}^{\infty}\int_{\Omega}v_{1}v_{2}\phi\ dxdt\leq 2\int^{\infty}_{0}\left\lVert v_{1}\right\rVert_{L^{2}(\Omega)}\left\lVert v_{2}\right\rVert_{L^{4}(\Omega)}\left\lVert\phi\right\rVert_{L^{4}(\Omega)}\,dt,$ (6.2) $\displaystyle\leq C_{1}\left\lVert v_{1}\right\rVert_{W_{\infty}}\int^{\infty}_{0}\left\lVert v_{2}\right\rVert_{V}\left\lVert\phi\right\rVert_{V}\,dt\leq C_{2}\left\lVert v_{1}\right\rVert_{W_{\infty}}\left\lVert v_{2}\right\rVert_{L^{2}(I;V)}\left\lVert\phi\right\rVert_{L^{2}(I;V)},$ $\displaystyle\leq C_{3}\left\lVert v_{1}\right\rVert_{W_{\infty}}\left\lVert v_{2}\right\rVert_{W_{\infty}}\left\lVert\phi\right\rVert_{L^{2}(I;V)},$ where $C_{i}$ are embedding constants, independent of $y\in W_{\infty},v\in W_{\infty},\phi\in L^{2}(I;V)$. We use that $V$ embeds continuously into $L^{4}(\Omega)$ in dimension up to 4. This implies that $\left\lVert{\mathcal{F}}^{\prime\prime}({y})(v_{1},v_{2})\right\rVert_{L^{2}(I;V^{*})}\leq C_{3}\left\lVert v_{1}\right\rVert_{W_{\infty}}\left\lVert v_{2}\right\rVert_{W_{\infty}}$. Finally we have ${\mathcal{F}}(0)={\mathcal{F}}^{\prime}(0)=0$ and thus (A3) and (3.3) are satisfied. Turning to (A4), we show that ${\mathcal{F}}:W(0,T)\rightarrow L^{1}(0,T;V^{*})$ is continuous for every $T>0$. We consider the sequence $\displaystyle y_{n}\rightharpoonup\hat{y}$ in $W_{\infty}$ and let $z\in L^{\infty}(0,T;V)$ be given.
Then we estimate $\displaystyle\int_{0}^{T}\langle{\mathcal{F}}(y_{n})-{\mathcal{F}}(\hat{y}),z\rangle_{V^{*},V}dt$ $\displaystyle=\int_{0}^{T}\langle\hat{y}^{2}-y_{n}^{2},z\rangle_{V^{*},V}\,dt=\int_{0}^{T}\int_{\Omega}(\hat{y}-y_{n})(\hat{y}+y_{n})z\ dxdt$ $\displaystyle\leq C_{4}\int_{0}^{T}\left\lVert y_{n}-\hat{y}\right\rVert_{Y}\left\lVert y_{n}+\hat{y}\right\rVert_{L^{4}(\Omega)}\left\lVert z\right\rVert_{L^{4}(\Omega)}\ dt$ $\displaystyle\leq C_{4}\left\lVert y_{n}-\hat{y}\right\rVert_{L^{2}(0,T;Y)}\left[\left\lVert y_{n}\right\rVert_{L^{2}(0,T;V)}+\left\lVert\hat{y}\right\rVert_{L^{2}(0,T;V)}\right]\left\lVert z\right\rVert_{L^{\infty}(0,T;V)}.$ Since $V$ is compactly embedded in $Y$, we obtain by the Aubin-Lions lemma that $\displaystyle\left\lVert y_{n}-\hat{y}\right\rVert_{L^{2}(0,T;Y)}\to 0$ for $n\to\infty$. This implies $\int_{0}^{T}\langle{\mathcal{F}}(y_{n})-{\mathcal{F}}(\hat{y}),z\rangle_{V^{*},V}dt\xrightarrow[n\rightarrow\infty]{}0,$ and (A4) follows. It is simple to check that $\displaystyle{\mathcal{F}}^{\prime}(\bar{y})=-2\bar{y}\in{\mathcal{L}}(L^{2}(I;V),L^{2}(I;V^{*}))$ and thus (A5) holds as well. We turn to the assumption ${\mathcal{F}}(\bar{y})\in C([0,T_{y_{0}});Y)$, for $y_{0}\in{\mathcal{D}}({\mathcal{A}})$ and some $T_{y_{0}}$, arising in Theorem 3.2. Utilizing the fact that $V$ embeds continuously into $L^{4}(\Omega)$ in dimension $d\leq 4$ and $\bar{y}\in L^{2}(I;V)$, we have ${\mathcal{F}}(\bar{y})\in L^{2}(I;Y)$. Hence parabolic regularity theory implies that $\bar{y}\in C([0,\infty);V)$ for $y_{0}\in V$, and ${\mathcal{F}}(\bar{y})\in C([0,\infty);Y)$ follows. ###### Remark 6.1. The specificity of this example rests in the fact that the second derivative is independent of the point where it is taken. Other nontrivial cases of analogous structure are reaction-diffusion systems with bilinear coupling, see [Gri] where the finite horizon case was treated.
Even the case of the Navier-Stokes equations falls in this category. Sensitivity for the infinite horizon problem was treated by independent techniques in [BKP3]. ### 6.2 Nonlinearities induced by functions with globally Lipschitz continuous second derivative. Consider the system ($\mathcal{P}$) with ${\mathcal{A}}$ associated to a strongly elliptic second order operator with domain $H^{2}(\Omega)\cap H^{1}_{0}(\Omega)$, so that (A1)-(A2) are satisfied. Let ${\mathcal{F}}:W_{\infty}\to L(I;V^{*})$ be the Nemytskii operator associated to a mapping $\mathfrak{f}:\mathbb{R}\to\mathbb{R}$ which is assumed to be $C^{2}(\mathbb{R})$ with first and second derivatives globally Lipschitz continuous, and second derivative globally bounded. The regularity assumption ${\mathcal{F}}(\bar{y})\in C([0,T_{y_{0}});Y)$ for $y_{0}\in V=H^{1}_{0}(\Omega)$ is satisfied by parabolic regularity theory. We discuss assumptions (A3)-(A5) for such an ${\mathcal{F}}$, and show that they are satisfied for dimensions $d\in\\{1,2\\}$. For the finite horizon problem it will turn out that $d=3$ is also admissible. By direct calculation it can be checked that ${\mathcal{F}}$ is continuously Fréchet differentiable for $d\in\\{1,2,3\\}$. We leave this part to the reader and immediately turn to the second derivative. We proceed by considering the general dimension $d$ to highlight how the restrictions on the dimension arise. Thus let $d\in\mathbb{N}$ with $d>1$. The case $d=1$ can be treated with minor modifications from those in the following steps. #### 6.2.1 Second derivative of ${\mathcal{F}}(y)$.
For $y,h_{1},h_{2}\in W_{\infty}$ the relevant expression is given by $\left\lVert{\mathcal{F}}^{\prime}(y+h_{2})h_{1}-{\mathcal{F}}^{\prime}(y)h_{1}-{\mathcal{F}}^{\prime\prime}(y)(h_{1},h_{2})\right\rVert_{L^{2}(I;V^{*})}\\\ \quad=\sup_{\left\lVert\varphi\right\rVert_{L^{2}(I;V)}\leq 1}\langle{\mathcal{F}}^{\prime}(y+h_{2})h_{1}-{\mathcal{F}}^{\prime}(y)h_{1}-{\mathcal{F}}^{\prime\prime}(y)(h_{1},h_{2}),\varphi\rangle_{L^{2}(I;V^{*}),L^{2}(I;V)}\\\ \quad=\sup_{\left\lVert\varphi\right\rVert_{L^{2}(I;V)}\leq 1}\int_{0}^{\infty}\int_{\Omega}(\mathfrak{f}^{\prime}(y(t,x)+h_{2}(t,x))-\mathfrak{f}^{\prime}(y(t,x))-\mathfrak{f}^{\prime\prime}(y(t,x))h_{2}(t,x))h_{1}(t,x)\varphi(t,x)dxdt\\\ \quad=\sup_{\left\lVert\varphi\right\rVert_{L^{2}(I;V)}\leq 1}\int_{0}^{\infty}\int_{\Omega}g(t,x)\ h_{2}(t,x)h_{1}(t,x)\varphi(t,x)dxdt,$ where $\displaystyle g(t,x)=\int_{0}^{1}(\mathfrak{f}^{\prime\prime}(y(t,x)+sh_{2}(t,x))-\mathfrak{f}^{\prime\prime}(y(t,x)))ds$. Note that $g$ is bounded on $I\times\Omega$ and $g\in W_{\infty}$. Here we use that $\mathfrak{f}^{\prime\prime}$ is globally Lipschitz continuous and that $h_{1}\in W_{\infty}$. Henceforth we let $r\in(1,\frac{2d}{d-2}]$ so that $W^{1,2}(\Omega)\subset L^{r}(\Omega)$ continuously. Let $r^{\prime}$ denote the conjugate of $r$ so that $r^{\prime}\in[\frac{2d}{d+2},\infty)$ for $d>2$ and $r^{\prime}\in(1,\infty)$ for $d=2$. We further choose $\rho>1,\sigma>2$
# Modelling the development of counting with memory-augmented neural networks Zack Dulberg <EMAIL_ADDRESS> Princeton Neuroscience Institute, Princeton, NJ Taylor Webb <EMAIL_ADDRESS> University of California Los Angeles, Los Angeles, CA Jonathan Cohen <EMAIL_ADDRESS> Princeton Neuroscience Institute, Princeton, NJ ###### Abstract Learning to count is an important example of the broader human capacity for systematic generalization, and the development of counting is often characterized by an inflection point when children rapidly acquire proficiency with the procedures that support this ability. We aimed to model this process by training a reinforcement learning agent to select $N$ items from a binary vector when instructed (known as the give-$N$ task). We found that a memory-augmented modular network architecture based on the recently proposed Emergent Symbol Binding Network (ESBN) exhibited an inflection during learning that resembled human development. This model was also capable of systematic extrapolation outside the range of its training set - for example, trained only to select between 1 and 10 items, it could succeed at selecting 11 to 15 items as long as it could make use of an arbitrary count sequence of at least that length. The close parallels to child development and the capacity for extrapolation suggest that our model could shed light on the emergence of systematicity in humans. Keywords: counting; development; give-$N$; reinforcement learning; memory-augmented neural networks ## Introduction Humans are capable of systematic generalization, that is, performing well outside the range of values on which they were trained (?, ?, ?). For example, a human could fetch 12 apples if asked, despite having only ever grabbed up to 9 apples in the past. Although this capacity falls short of perfect systematicity (?, ?), artificial neural networks have much greater difficulty performing well in contexts outside the convexity of their training data (?, ?, ?).
Learning to count is one of the earliest systematic behaviours acquired in human development, and is foundational with respect to further development of abstract procedures like mathematics. Here, we present a counting model that exhibits both a developmental trajectory similar to that of humans and systematicity. The development of counting in childhood has previously been summarized by the knower-level theory (?, ?, ?, ?, ?). This theory suggests a number of distinct stages in acquiring an understanding of the cardinal meaning of numbers. A child begins as a pre-numeral knower, with no understanding of cardinality, and then becomes a subset-knower, learning subsequent numbers in order (i.e., becomes a one-knower, then a two-knower, then a three-knower, etc). Around the time a child becomes a five-knower, there appears to be an inductive transition, after which the child becomes a cardinal-principle-knower (CP-knower), and understands the cardinal meaning of numbers as high as they can count. Some data have supported this view, though children pass through knower-levels at different rates (?, ?), and early stages may be more approximate than previously thought (?, ?), while other data have called into question whether a true semantic induction underlies this apparent transition (?, ?). The knower-level theory has generally been based on data from the widely used give-$N$ task (?, ?, ?, ?). In this task, a child is instructed to 'give $N$ objects', and is typically considered an $N$-knower if they can select the correct number of objects twice as often as they make an error (66% accuracy). Our goal was to train a neural network to perform the give-$N$ task and examine its performance and developmental trajectory. We suggest the give-$N$ task is most realistically modelled using a reinforcement learning framework. First, the task can be successfully completed in many different ways (i.e.
by selecting objects in various orders), so a reinforcement rather than a more specific instructive signal is appropriate. Furthermore, children receive a substantial amount of reinforcement from adults when learning tasks like these ("good job!"). Finally, the give-$N$ task can easily be expressed as a decision process with discrete actions, which therefore lends itself naturally to reinforcement learning. We report simulations of a model that, trained to perform the give-$N$ task using reinforcement learning, displays both an inflection point consistent with the human developmental trajectory and the capacity for systematic extrapolation when presented with stimuli outside the range of its training. The model extended the Emergent Symbol Binding Network (ESBN) (?, ?), in which internal representations of a control network were separated from task inputs, interacting only through binding in a differentiable external memory module. We developed a variant of the ESBN within a reinforcement learning framework to address how its capacity for symbol-like behavior might support the development of systematic counting proficiency. We also softened the separation constraint, giving the ESBN access to both internal and input streams (and thus the option to over-fit to training inputs), and it still exhibited symbol-like behavior and the capacity for systematic generalization. Baseline models did not display inflection or extrapolation, suggesting variable binding and dot-product similarity evaluation as the relevant architectural inductive biases that promoted systematicity and emulated human development. This work also supports the broader idea that the development of abstract conceptual knowledge may be supported by the learning of systematic procedures. ## Related Work There have been various attempts to model counting.
One line of inquiry investigated the ability of neural networks to estimate numerosity by, for example, looking at an image of objects (?, ?, ?, ?). Another study used counting equivariance relations to support the learning of visual representations (?, ?). However, such perceptual tasks do not address the capacity for systematic counting behavior. Other work has shown that recurrent neural networks can keep track of counts in order to predict the next character in a string derived from some grammar (?, ?, ?). Though accuracy was limited, some of these networks could extrapolate to longer strings than those on which they were trained. However, this was a character prediction task rather than a counting task, making the parallel to human counting development unclear. Finally, Bayesian models have been proposed to address the development of systematic counting in humans (?, ?, ?). These models generally assumed knower-levels (including CP-knower) as primitive hypotheses. The model we propose does not include such primitives, seeking to explain systematicity as an emergent property that arises through learning. Some neural network modeling studies have directly addressed learning to count. ? (?) introduced a feed-forward network with a visual attention mechanism that was trained through a combination of reinforcement learning and ‘social scaffolding’ (demonstrations from a teacher) to perform the how-many task (a simpler counting task that children often master before they can perform the give-$N$ task). ? (?) developed an extension of this model that used a recurrent network and operated over more realistic two-dimensional inputs. Although this network learned numbers in the correct order (passing appropriately through subset-knower levels), it did not display an inflection point for higher numbers, and was not tested on extrapolation for numbers outside the range on which it was trained. ? (?) 
expanded further on this work by training a recurrent convolutional network to perform a variety of counting tasks, including give-$N$. However, performance on the give-$N$ task did not display a familiar developmental trajectory (for example, the model learned to give-1 last rather than first), and this model was also not tested on numbers outside the training range. A common element of these studies was the use of teacher-guided learning; that is, networks were trained to imitate a specific way of solving the task, rather than learning from reinforcement alone. Imitation has in fact been shown to improve learning of multiple tasks in a rich 3D environment, but interestingly, in that work, agents still struggled most with learning to count (?, ?). Although imitation likely plays a role in the development of many skills, here we investigated to what extent a model trained from reinforcement alone could account for the relevant developmental phenomena. ## Methods ### Environment We represented a set of objects in the give-$N$ task as a binary vector in which 1’s corresponded to the presence of an object at a given location. For example, the vector [0,1,0,1,1,0] indicated that there were objects at locations 2, 4, and 5. Vectors were of length 40 so that the space of possible object arrangements was combinatorially large ($2^{40}$ states), in order to prevent memorization. All models were trained on the give-$N$ task as follows: given an initial object vector $\bm{o_{0}}$, and an instruction to give $N$ objects (represented by the one-hot vector $\bm{x_{N}}$), the model should select $N$ unique objects one at a time from the object vector to produce a correct response. If a location with an object was selected at time $t-1$, the value at that location changed from 1 to 0 in $\bm{o_{t}}$.
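These environment dynamics can be sketched as follows. This is our own minimal illustration, not the authors' code: the class and method names are ours, and the reward values follow the scheme specified later in the Methods (0 for a valid pick, -1 for an empty pick, +5 or $-|N-n|$ on the done action).

```python
import numpy as np

class GiveNEnv:
    """Sketch of the give-N environment: objects are 1s in a binary vector;
    selecting an occupied location flips it to 0."""

    def __init__(self, n_locations=40, rng=None):
        self.n_locations = n_locations
        self.rng = rng if rng is not None else np.random.default_rng()

    def reset(self, n_target, n_objects):
        self.n_target, self.n_selected = n_target, 0
        self.objects = np.zeros(self.n_locations, dtype=int)
        occupied = self.rng.choice(self.n_locations, size=n_objects, replace=False)
        self.objects[occupied] = 1
        return self.objects.copy()

    def step(self, action):
        if action == self.n_locations:            # the extra "done" action
            correct = self.n_selected == self.n_target
            reward = 5.0 if correct else -abs(self.n_target - self.n_selected)
            return self.objects.copy(), reward, True
        if self.objects[action] == 1:
            self.objects[action] = 0              # selected object disappears
            self.n_selected += 1
            return self.objects.copy(), 0.0, False
        return self.objects.copy(), -1.0, False   # picked an empty location
```

For instance, after `env.reset(3, 15)`, selecting three occupied locations and then the done action (index 40) would yield the terminal reward of +5.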
The binary object vector can be interpreted as the output of an object segmentation model; focusing on this intermediate level of abstraction was motivated by the desire to understand the acquisition of counting competency separately from the details of sensory processing and object segmentation. ### Model (a) Counter (b) ESBN (c) Dot-product (d) LSTM (e) Transformer Figure 1: Schematics. Temporal count sequence shown in (a), spatial architecture of models in (b-e). The counter (a) consisted of the encoder $e$ (which translated one-hot vectors $\bm{x_{N}}$ / $\bm{x_{1}}$ into count embeddings $\bm{z_{0}}$ / $\bm{z_{1}}$), and the successor function $s$ (which iteratively produced $\bm{z_{t}}$ from $\bm{z_{t-1}}$ starting with $\bm{z_{1}}$). The ESBN model (b) consisted of an LSTM controller which at each time step received a concatenation of key $\bm{k_{r_{t}}}$ (read from memory), object vector $\bm{o_{t}}$ and count embedding $\bm{z_{t}}$ as inputs, wrote key $\bm{k_{w_{t}}}$ to memory, and selected action $\bm{a_{t}}$. The count embeddings also interacted with the controller via a key/value memory system indicated by matrices $\bm{M_{k}}$/$\bm{M_{v}}$. In the dot product model (c), only the cosine similarity (dot symbol in brackets) of each count embedding $\bm{z_{t}}$ with the instruction embedding $\bm{z_{0}}$ was input to the LSTM controller along with $\bm{o_{t}}$. In the LSTM baseline (d), $\bm{z_{t}}$ and $\bm{o_{t}}$ were input directly into the LSTM controller. In the transformer baseline (e), the full history of count embeddings and object vectors up until time $t$ was input into a transformer layer. First, we pre-trained a counter module that consisted of an encoding function $e$ (a 128 unit linear projection), a successor function $s$ (another 128 unit linear projection), and a decoding function $d$ (a 15 unit linear projection). 
Given a 15-unit one-hot vector $\bm{x_{n}}$ representing an integer $n\in\\{1,15\\}$, the counter was trained to produce an embedding $\bm{z}$ such that $\bm{z}=e(\bm{x_{n}})$, $\bm{x_{n}}=d(e(\bm{x_{n}}))$, and $\bm{x_{n+1}}=d(s(e(\bm{x_{n}})))$. In this way, $s$ was a successor operation that could iterate through a learned sequence of $\bm{z}$ embeddings, $e$ was an encoder that could translate a one-hot instruction into one of the embeddings in that sequence, and $d$ was a decoder that could translate embeddings back into one-hots (in our models, $d$ only played a role during pre-training). These components implemented a counting capability assumed to have been acquired by children at the time they are asked to perform the give-$N$ task. The temporal structure of the counter outputs was the same for all models (Fig. 1(a)), where $\bm{z_{t}}$ represented the embedding produced by the counter and passed to the networks at each time step $t$. At $t=0$, the task instruction $\bm{x_{N}}$ was passed through the encoder $e$ to generate $\bm{z_{0}}$. At $t=1$, the one-hot representing the start of the count sequence, $\bm{x_{1}}$, was passed through $e$ to generate $\bm{z_{1}}$. For each subsequent time step, the embedding from the previous time step $\bm{z_{t-1}}$ was passed through the successor function $s$ to generate the embedding for the next number in the count list $\bm{z_{t}}=s(\bm{z_{t-1}})$ (representing the counter reciting its learned count sequence). The ESBN model consisted of a set of components outlined in Fig. 1(b). The controller was an LSTM (?, ?) augmented with a differentiable (external) memory separated into keys ($\bm{M_{k}}$) and values ($\bm{M_{v}}$). The memory was initialized with one learned key/value pair so it did not start out empty, and an additional key/value pair ($\bm{k_{w_{t}}}$ / $\bm{z_{t}}$) was written to memory at each subsequent time-step. Note the value here was the $\bm{z_{t}}$ just described.
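The counter's composition $d(s^{i}(e(\bm{x_{n}})))$ can be sketched with plain linear maps. Note this is an unofficial illustration: the matrices below are untrained placeholders standing in for the pre-trained projections, so only the shapes and the composition structure match the text; after pre-training, the output would reconstruct $\bm{x_{n+i}}$.

```python
import numpy as np

# Shapes follow the text: 15-way one-hots, 128-unit embeddings.
rng = np.random.default_rng(0)
N_NUMBERS, DIM = 15, 128
E = rng.standard_normal((DIM, N_NUMBERS)) * 0.1   # encoder e
S = rng.standard_normal((DIM, DIM)) * 0.1         # successor s
D = rng.standard_normal((N_NUMBERS, DIM)) * 0.1   # decoder d

def counter_output(x_n, i):
    """Compute d(s^i(e(x_n))): encode, apply the successor i times, decode."""
    z = E @ x_n                  # z = e(x_n)
    for _ in range(i):
        z = S @ z                # z <- s(z)
    return D @ z                 # logits for x_{n+i}
```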
As in the original ESBN, the LSTM controller had a single layer with 512 units. The controller received a concatenation of three vectors as input at each time step: $\bm{k_{r_{t}}}$, $\bm{z_{t}}$, and $\bm{o_{t}}$. The input $\bm{k_{r_{t}}}$ was a key initialized to zeros at $t=0$, and retrieved from memory at each subsequent time-step. To retrieve $\bm{k_{r_{t}}}$, the cosine similarity was calculated between $\bm{z_{t}}$ and all previous values in memory $\bm{M_{v_{t-1}}}$, passed through a softmax to produce a set of weights $\bm{w_{k_{t}}}$, and used to calculate a weighted sum over $\bm{M_{k_{t-1}}}$, the keys in memory. The second input to the controller, $\bm{z_{t}}$, was the count embedding itself (so that the decision to use $\bm{k_{r_{t}}}$, $\bm{z_{t}}$, or some combination thereof to solve the task was left up to the model). The third input was the object vector $\bm{o_{t}}$. The ESBN model had two output heads. The action head was a fully-connected layer with input size $512$ and output size 41 (corresponding to the 40 possible object locations, plus an additional done action). A softmax activation was applied to the output logits to produce a vector of action probabilities. The controller also had a key output head (256 units with ReLU nonlinearities) that produced the key $\bm{k_{w_{t}}}$ written to memory at each time step. We compared the ESBN model to a set of models that lacked key/value memory modules. The dot-product model (Fig. 1(c)) was meant to elucidate the role of similarity-based memory retrieval in the success of the ESBN model, albeit in a manner that is specifically tailored to the counting task (unlike the original ESBN). We computed the cosine similarity between $\bm{z_{t}}$ and $\bm{z_{0}}$ directly, so the controller only received $\bm{o_{t}}$ and the scalar similarity score as inputs. In the LSTM baseline (Fig. 1(d)), the controller simply received $\bm{o_{t}}$ and $\bm{z_{t}}$ as inputs.
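The similarity-based read operation described above (cosine similarity against stored values, softmax, weighted sum over stored keys) can be sketched as follows; this is our own minimal illustration, not the authors' implementation:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def retrieve_key(z_t, M_k, M_v, eps=1e-8):
    """ESBN-style read: cosine similarity of z_t against stored values M_v,
    softmax to weights, then a weighted sum over the stored keys M_k."""
    M_k, M_v = np.stack(M_k), np.stack(M_v)
    sims = (M_v @ z_t) / (np.linalg.norm(M_v, axis=1) * np.linalg.norm(z_t) + eps)
    return softmax(sims) @ M_k
```

The dot-product model corresponds to keeping only the similarity computation, between $\bm{z_{t}}$ and the single vector $\bm{z_{0}}$, without the key/value memory.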
Finally, in the transformer baseline, the controller was a single transformer layer (?, ?) (8 self-attention heads, 512-unit MLP, positional encoding) which at each time step received the entire past sequence of $\bm{o_{0..t}}$ and $\bm{z_{0..t}}$, concatenated as shown in Fig. 1(e). ### Training #### Pre-training Counter The counter was pre-trained on its auto-encoding and successor functions. Given a one-hot input vector $\bm{x_{n}}$ where $n\sim$ Uniform$(1,15)$, the counter produced an output of $d(s^{i}(e(\bm{x_{n}})))$. $s^{i}$ represented iterating $s$, the successor function, $i$ times in a row, with $i\sim$ Uniform$(0,15-n)$. This allowed for interleaved learning of both encoding and decoding between one-hot space and embedding space, as well as iterating through the learned embeddings sequentially with the successor function. The loss was computed as the mean-squared-error between the output vector and the desired one-hot vector $\bm{x_{n+i}}$. As well, a similarity penalty on the embeddings was added to the loss (for $i\neq 0$) as the dot product $e(\bm{x_{n}})\cdot s^{i}(e(\bm{x_{n}}))$. Without this penalty, repeated applications of the successor function caused embeddings to drift apart from their corresponding one-hot encodings. The Adam optimizer (?, ?) was used to perform weight updates on mini-batches of 1 with a learning rate of $10^{-4}$. The weights of the counter network were frozen before being used in the subsequent reinforcement learning task. #### Reinforcement Learning of give-$N$ Task During training, action $a_{t}$ was sampled at each time-step from a categorical distribution using action probabilities produced by our models. We used a two-step training curriculum. In the first step, agents were trained only to select 1s and not 0s. The object vector was the only input (all other input units were set to zero), and the agent received a reward of 0 if it selected an object slot that contained a 1 and a reward of -1 otherwise. 
Episodes ended after 20 time-steps. In the second step, we switched to the give-$N$ task. Each training episode started by randomly selecting an integer $N$ between 1 and $N_{max}$, represented as the one-hot instruction vector $\bm{x_{N}}$. $N_{max}$ was initialized to 1, and incremented by 1 once the network achieved at least $66\%$ accuracy on give-$N_{max}$. This curriculum progressed until $N_{max}$ was fixed at 10. Once $N$ was selected for a given episode, the object vector was populated with $j$ objects, where $j\sim$ Uniform$(N+10,35)$. This was done to reduce the correlation between $N$ and the number of objects in the object vector, while keeping the space of possible object arrangements very large. If the agent selected an object location containing a $1$, that object was replaced with a $0$ on the next time step, and the agent received a reward of $0$. If the agent selected a location containing a $0$, it received a reward of $-1$. Finally, if the agent selected done (ending the episode), it received a reward of $+5$ if it had by that time selected exactly $N$ objects, and otherwise a reward of $-|N-n|$ if it had selected $n\neq N$ objects. At the end of each episode, one gradient descent step in weight space was performed according to the REINFORCE policy gradient algorithm (?, ?) (chosen because it was the simplest algorithm capable of learning the task). The Adam optimizer with a learning rate of $5\times 10^{-5}$ was used for a total of 500,000 episodes (50,000 episodes on step 1 and the remaining 450,000 episodes on give-$N$). A set of 30 randomly initialized models was trained for each condition. ### Testing In order to track developmental trajectories, models were tested on give-$N$ at check-points during training every 1000 episodes. At each check-point, an accuracy score was produced by calculating the proportion of correct responses out of 30 new object vectors (unseen during training) for each requested $N\in\\{1,10\\}$.
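The check-point scoring just described can be sketched as follows. The helper name and the `run_episode` callback are ours: `run_episode(n)` stands in for rolling out the trained model on a fresh, unseen object vector with instruction give-$n$ and returning whether the full episode was correct.

```python
import numpy as np

def checkpoint_accuracy(run_episode, n_max=10, n_eval=30):
    """For each requested N in 1..n_max, run n_eval fresh episodes and
    return the proportion answered correctly."""
    return {n: float(np.mean([run_episode(n) for _ in range(n_eval)]))
            for n in range(1, n_max + 1)}
```

A model would then count as having crossed threshold on give-$N$ at the first check-point where the entry for that $N$ exceeds 0.66.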
Actions were selected during testing based on the maximum action probability rather than categorical sampling. A correct response was defined as receiving no negative rewards during the complete episode (based on our reward scheme, this meant the agent selected exactly $N$ objects, and did not select any empty locations, before indicating done). The episode at which training accuracy exceeded a threshold of $66\%$ was recorded for each $N\in\\{1,10\\}$. This produced a developmental trajectory for the accuracy of each model, and we compared how well these trajectories were fit by linear, exponential, logarithmic or sigmoidal functions using the Bayesian information criterion (?, ?). (a) ESBN Model (b) Dot-product Model (c) LSTM Baseline (d) Transformer Baseline Figure 2: Accuracy on the give-$N$ task for all trained models. Two bars are displayed for each $N$: the point during training with the highest average accuracy across N from 1 to 10 (dark blue) and the best accuracy for each N at any point in training (light blue, left out for extrapolation to avoid using extrapolation performance to select when to test the model). Bars in each subplot represent mean and standard error across an ensemble of 30 trained models. The red line separates the training regime (left) from the extrapolation regime (right). Finally, best performance as well as extrapolation performance was determined by calculating the accuracy of each model at the episode during training that had the highest average accuracy across $N\in\\{1,10\\}$. The extrapolation set was defined as $N=\\{11,15\\}$, a set of instructions for give-$N$ that was never presented to the model during training, but was nevertheless tested for accuracy at each checkpoint. Occasionally, our models exhibited unstable behavior (i.e., initially learning the task well, but then dropping in performance prior to the final episode). 
This required us to select an appropriate testing point; we intentionally did not use accuracy on the extrapolation set to determine this point, in order to avoid test-set leakage into our results. To evaluate more modest success, particularly for the baseline models, we also report the best performance for each $N\in\\{1,\dots,10\\}$ at any point during training individually. (a) ESBN Model (b) Dot-product Model (c) LSTM Baseline (d) Transformer Baseline Figure 3: Developmental trajectories for all models. For each requested $N$ on the x-axis, the episode at which a threshold of $66\%$ accuracy was crossed is displayed on the y-axis (which begins after 50,000 episodes of the step-1 curriculum). Insets show zoomed-in trajectories, colours represent individual models, and error bars represent standard deviation across models. Out of a training ensemble of 30 models, only those that reached threshold performance for all $N$ at some point during training were included (n=30/30 for ESBN and dot-product models, n=25/30 for the LSTM baseline, and n=3/30 for the transformer baseline). ## Results Overall performance and extrapolation performance are shown in Figure 2. The ESBN model learned the task well, and achieved significant extrapolation. The dot-product model did even better in these respects. In contrast, while the baseline models performed well for $N$ up to around 5, performance degraded in various ways past that point. For example, unlike the ESBN and dot-product models, the LSTM baseline struggled to perform well for higher values of $N$ simultaneously, as indicated by the difference between best accuracy at any time during training and best accuracy when the model was doing its best on average. The transformer baseline struggled as $N$ increased, with only 3 models ever crossing threshold on give-10. The baseline models were also incapable of extrapolation ($N=11$-$15$; Fig. 2(c) and 2(d)). The developmental trajectory of performance on the training set is shown in Figure 3.
In order to be included in the plot, a model had to have crossed threshold at some point during training for all $N$ displayed. The ESBN and dot-product models achieved criterial accuracy (66%) for all values of $N$ for 30/30 models; in contrast, only 25/30 LSTM baseline and 3/30 transformer baseline models met this criterion. Though not shown, the transformer model also failed when we input the object vector after rather than before the transformer layer. All models that met criterion displayed sequential learning, with higher values of $N$ crossing the threshold later than lower values of $N$ (this occurred even without being enforced by curriculum training, but those results are not shown). The ESBN and dot-product models achieved these thresholds much earlier in training ($\sim 100k$ episodes) than the baseline models ($\sim 450k$ episodes). However, only the ESBN displayed an inflection point, past which criterial performance for higher values of $N$ was reached after many fewer episodes of training, and sometimes almost immediately. This was confirmed using the Bayesian information criterion to compare linear, exponential, logarithmic, and sigmoidal fits to the developmental trajectory of the ESBN model. The sigmoidal function fit best and, when fit to the trajectories of individual instances of the model to quantify the $N$ at inflection, exhibited a mean of $4.38\pm 0.39$ (mean $\pm$ sd). ## Discussion We showed that a model of counting based on the ESBN architecture, and trained with reinforcement learning, exhibited a developmental trajectory qualitatively similar to the one observed in humans learning to count, as well as the capacity for systematic extrapolation. A model that implemented only the retrieval operation required for the counting task (the dot-product similarity operation) displayed good performance and extrapolation, but not a clear inflection in its developmental trajectory.
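The inflection estimate reported in the Results ($N\approx 4.38$) comes from fitting sigmoids to per-model trajectories. A hedged sketch with toy data, using a coarse grid search rather than whatever optimizer the authors used, is:

```python
import numpy as np

def logistic(N, L, k, N0):
    # N0 is the inflection point of the sigmoid
    return L / (1.0 + np.exp(-k * (N - N0)))

# Toy trajectory generated from a known sigmoid so the recovered inflection
# can be checked; real data would come from trained models.
N = np.arange(1, 11, dtype=float)
y = logistic(N, 100.0, 1.5, 4.4)

# Coarse grid search; for each (k, N0) the scale L has the closed-form
# least-squares solution L = (b.y)/(b.b), with b the unit-scale sigmoid.
best_sse, best_N0 = np.inf, None
for N0 in np.arange(1.0, 10.01, 0.1):
    for k in np.arange(0.5, 3.01, 0.1):
        b = logistic(N, 1.0, k, N0)
        L_hat = float(b @ y) / float(b @ b)
        sse = float(np.sum((y - L_hat * b) ** 2))
        if sse < best_sse:
            best_sse, best_N0 = sse, N0
```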
Baseline models using either an LSTM or transformer as a controller, but without the external memory component, displayed much slower learning overall, no inflection in the learning trajectory, and no capacity for extrapolation. The transformer model did particularly poorly, possibly because standard transformers are ill-suited to processing adjacent time-steps (?, ?). One explanation for the success of the ESBN model was identified where it was first described (?, ?). There, the authors argued that because the information stream accessible to the controller was isolated from the incoming data stream by the key/value memory, the controller was free to produce and respond to abstract representations needed to perform the task, without being shaped or tied to individual items (tokens); these could be thought of as fulfilling the role of symbols in traditional architectures. Here, the key associated with (i.e. bound to) the instruction embedding $\bm{z_{0}}$ was this symbol, which functioned as $N$ in the give-$N$ task. Since the controller’s job was to report when the correct count was reached, it simply had to recognize when $\bm{k_{r}}$ was close enough to this key. Once learned, it could quickly gain the capacity to give any $N$ for as high as it could count. Here, we relaxed the isolation of the controller from the data, giving it access to both the key and value streams at every time-point, and allowed it to learn which source of information was most useful. In principle, the network could have ignored the input from its external memory (the retrieved keys), performing the task only on the basis of the count embeddings that it received directly as input. However, this likely would have resulted in overfitting to the count embeddings observed in the training set, preventing extrapolation to new count embeddings, as was observed for the LSTM and transformer baseline models. 
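The retrieve-and-compare mechanism described here can be sketched with plain dot-product attention over a key/value memory; the dimensions, seed, and final comparison below are our own illustrative choices, not the paper's hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

d_key, d_val, steps = 8, 16, 5
M_k = rng.normal(size=(steps, d_key))   # keys written so far, one per step
M_v = rng.normal(size=(steps, d_val))   # bound values (e.g. count embeddings)

def retrieve_key(query_value, M_k, M_v):
    """Attend over stored values; return the similarity-weighted key k_r."""
    logits = M_v @ query_value
    w = np.exp(logits - logits.max())   # softmax over memory slots
    w /= w.sum()
    return w @ M_k

# The controller can then compare k_r against the key bound to the
# instruction embedding z_0 (here, slot 0) and report "done" when the
# dot-product similarity exceeds a learned threshold.
k_r = retrieve_key(M_v[3], M_k, M_v)
similarity = float(k_r @ M_k[0])
```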
Surprisingly, the ESBN did not display this overfitting, suggesting that it was indeed performing the task on the basis of information retrieved from its external memory. We hypothesize that this occurred because the gradients associated with the direct input of count embeddings at the beginning of an episode tended to vanish over the course of the episode, whereas the information retrieved from external memory was available at the time point immediately before the relevant action (done) was taken. This suggests that the strict architectural separation in the original ESBN model might not be necessary to achieve systematic behavior. It is also interesting to note that both baseline models had good performance until around $N=5$, past which performance degraded, suggesting that increased task difficulty beyond this point may have pressured the transition from a specific to a systematic solution in the ESBN model. It could be that a similar mechanism is responsible for the transition seen around $N=5$ in children. We found that a simpler version of the model, incorporating only the dot-product similarity operation, displayed a comparable level of task performance and extrapolation. This suggests that, in the context of this task, this dot-product operation was the key inductive bias that enabled the ESBN to display systematic counting behavior. However, this simpler version of the model was specifically designed for the counting task, since the relevant similarity value (between the instruction and the count at each time step) was passed directly to the controller, rather than being embedded in a more general-purpose memory architecture. Furthermore, this version of the model did not exhibit an inflection in its learning trajectory, suggesting that this inflection may reflect the difficulty of learning to interact with memory.
For these reasons, this model should be viewed as an attempt to better understand the operations of the ESBN, rather than as a competing theoretical account. One concern with the developmental trajectory displayed by the ESBN might be that, although it exhibits an inflection, the transition is softer than the more discrete shift to the CP-knower stage suggested by some developmental data (?, ?). It is worth emphasizing that these data were collected at longitudinal intervals of a minimum of 5-8 weeks, and so may not have adequate temporal resolution to clearly distinguish between these two possibilities. Future work involving more fine-grained longitudinal evaluation might test whether this transition is truly discrete, or closer to the sigmoidal trajectory displayed by the ESBN. We exploited a form of curriculum learning in the present work, first pre-training networks to iterate through the count sequence, then training them to select objects, and finally to perform the give-$N$ task. It was possible for networks to learn all of these tasks at the same time, but we chose a curricular approach to mirror the fact that children are typically able to memorize arbitrary sequences early on, and can often count to numbers much higher than they are able to successfully employ in tasks such as give-$N$ (?, ?). Importantly, the pre-learned count sequence could be used as scaffolding to support extrapolation to numbers outside the range of training on the give-$N$ task. Some have argued that the transition to being a CP-knower, as measured by the give-$N$ task, marks a more general semantic induction of abstract number concepts, as measured by other related tasks (such as the ability to judge which of two numbers is greater) (?, ?).
Our model, which was only trained to perform the give-$N$ task, offers an alternative interpretation: the developmental trajectory observed in this task might reflect the learning of a systematic, but narrow, procedure, rather than a general understanding of cardinality. In line with this view, there is some evidence to suggest that children can often succeed on the give-$N$ task while failing to perform closely related tasks (?, ?). Thus, similar to our model, children may indeed undergo a phase in which their ability to perform related counting tasks is not yet integrated, and is better characterized as a set of ‘blind’ procedures specific to each task, despite being able to perform those procedures in a systematic manner. This is in line with the ‘knowledge-in-pieces’ view of development, whereby early concepts are not immediately integrated into a coherent whole (?, ?). #### Limitations and Future Work Although we trained our models on only one task, the ESBN architecture may facilitate the learning of systematic procedures not only in specific tasks (as demonstrated here) but also in a multi-task learning context. In future work, we plan to train networks on multiple related tasks (e.g. the how-many task, unit task, and direction task (?, ?)) and study whether the ESBN affords a similar benefit in terms of the ability to perform these tasks systematically, and in a manner that mirrors the human developmental trajectory. We are also interested in allowing the controller to select the most useful stream of data to bind to its memory depending on its goal, so that we do not need to hard-code its use of the pre-trained counter. Preliminary data suggest the model is capable of making this selection. Additionally, some instances of the baseline models failed to learn the full task, and some of the ESBN models were unstable, learning the task well initially but then having performance drop as training progressed.
This could have been due to the high variance of the policy gradient estimator (the REINFORCE algorithm). We tried to replicate the human ability to learn from a single episode at a time, but off-policy learning methods (e.g. replay) might be required to ensure all networks reliably learn the task. More advanced policy gradient algorithms might also improve the performance of our models; we leave these to future work. Finally, our model was restricted in that it could only count as high as the length of the count sequence it was trained to memorize, representing an intermediate level of human development (i.e. the stage when children cannot count higher than their memorized count sequence, despite being presumed CP-knowers). Most adults are capable of counting to arbitrarily large numbers, using a recursive, hierarchical algorithm; that is, cycling through digits and keeping track of place values. We believe that introducing hierarchical structure, context segmentation and normalization (?, ?), together with recursive application of the counter’s ability to iterate through a fixed sequence, might permit the capacity for unbounded counting. This offers the promise of providing a neurally plausible mechanism for symbolic counting, which lies at the heart of many powerful cognitive functions of which humans are capable, including mathematical reasoning. ## References * Abramson, J., Ahuja, A., Brussee, A., Carnevale, F., Cassin, M., Clark, S., … others (2020). Imitating interactive intelligence. _arXiv preprint arXiv:2012.05672_. * Barrett, D., Hill, F., Santoro, A., Morcos, A., Lillicrap, T. (2018). Measuring abstract reasoning in neural networks. In _International conference on machine learning_ (pp. 511–520). * Carey, S. (2001). Cognitive foundations of arithmetic: Evolution and ontogenesis. _Mind & Language_, _16_(1), 37–55.
* Chen, S., Zhou, Z., Fang, M., McClelland, J. (2018). Can generic neural networks estimate numerosity like humans? In _CogSci_. * Chollet, F. (2019). On the measure of intelligence. _arXiv preprint arXiv:1911.01547_. * Davidson, K., Eng, K., Barner, D. (2012). Does learning to count involve a semantic induction? _Cognition_, _123_(1), 162–173. * DiSessa, A. (2014). _A history of conceptual change research: Threads and fault lines_. * Fang, M., Zhou, Z., Chen, S., McClelland, J. (2018). Can a recurrent neural network learn to count things? In _CogSci_. * Frye, D., Braisby, N., Lowe, J., Maroudas, C., Nicholls, J. (1989). Young children’s understanding of counting and cardinality. _Child development_, 1158–1171. * Fuson, K., Richards, J., Briars, D. (1982). The acquisition and elaboration of the number word sequence. In _Children’s logical and mathematical cognition_ (pp. 33–92). Springer. * Hochreiter, S., Schmidhuber, J. (1997). Long short-term memory. _Neural computation_, _9_(8), 1735–1780. * Kingma, D., Ba, J. (2014). Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_. * Lake, B., Baroni, M. (2018). Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In _International conference on machine learning_ (pp. 2873–2882). * Lake, B., Linzen, T., Baroni, M. (2019). Human few-shot learning of compositional instructions. _arXiv preprint arXiv:1901.04587_. * Lee, M., Sarnecka, B. (2010). A model of knower-level behavior in number concept development. _Cognitive science_, _34_(1), 51–67.
* Lu, Q., McClelland, J. (2016). Teaching a neural network to count: reinforcement learning with “social scaffolding”. * Marcus, G. (2001). The algebraic mind. * Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P. (2017). A simple neural attentive meta-learner. _arXiv preprint arXiv:1707.03141_. * Noroozi, M., Pirsiavash, H., Favaro, P. (2017). Representation learning by learning to count. In _Proceedings of the IEEE international conference on computer vision_ (pp. 5898–5906). * Piantadosi, S., Tenenbaum, J., Goodman, N. (2012). Bootstrapping in a language of thought: A formal model of numerical concept learning. _Cognition_, _123_(2), 199–217. * Rodriguez, P. (2001). Simple recurrent networks learn context-free and context-sensitive languages by counting. _Neural computation_, _13_(9), 2093–2118. * Rodriguez, P., Wiles, J., Elman, J. (1999). A recurrent neural network that learns to count. _Connection Science_, _11_(1), 5–40. * Sabathiel, S., McClelland, J., Solstad, T. (2020). A computational model of learning to count in a multimodal, interactive environment. In _Proceedings of the meeting of the cognitive science society_. * Sarnecka, B., Carey, S. (2006). _The development of human conceptual representations_. * Sarnecka, B., Carey, S. (2008). How counting represents number: What children must learn and when they learn it. _Cognition_, _108_(3), 662–674. * Sarnecka, B., Lee, M. (2009). Levels of number knowledge during early childhood. _Journal of experimental child psychology_, _103_(3), 325–337. * Schwarz, G. (1978). Estimating the dimension of a model.
_Annals of statistics_, _6_(2), 461–464. * Stoianov, I., Zorzi, M. (2012). Emergence of a ‘visual number sense’ in hierarchical generative models. _Nature neuroscience_, _15_(2), 194–196. * Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., … Polosukhin, I. (2017). Attention is all you need. _arXiv preprint arXiv:1706.03762_. * Wagner, K., Chu, J., Barner, D. (2019). Do children’s number words begin noisy? _Developmental science_, _22_(1), e12752. * Webb, T., Dulberg, Z., Frankland, S., Petrov, A., O’Reilly, R., Cohen, J. (2020). Learning representations that support extrapolation. In _International conference on machine learning_ (pp. 10136–10146). * Webb, T., Sinha, I., Cohen, J. (2021). Emergent symbols through binding in external memory. In _International conference on learning representations_. * Williams, R. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine learning_, _8_(3-4), 229–256. * Wynn, K. (1990). Children’s understanding of counting. _Cognition_, _36_(2), 155–193. * Wynn, K. (1992). Children’s acquisition of the number words and the counting system. _Cognitive psychology_, _24_(2), 220–251. * Zorzi, M., Testolin, A. (2018). An emergentist perspective on the origin of number sense. _Philosophical Transactions of the Royal Society B: Biological Sciences_, _373_(1740), 20170043. ## Acknowledgements This project / publication was made possible through the support of a grant from the John Templeton Foundation. Thank you to Steven Frankland, Simon Segert, Randall O’Reilly, and Alexander Petrov for their helpful discussions.
# Time-dependent orbital-optimized coupled-cluster method families for fermion-mixture dynamics Haifeng Lang, Department of Nuclear Engineering and Management, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan Takeshi Sato, Department of Nuclear Engineering and Management, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; Photon Science Center, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; Research Institute for Photon Science and Laser Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan ###### Abstract Five time-dependent orbital-optimized coupled-cluster (TD-ooCC) methods, of which four can converge to the complete-active-space self-consistent-field method, are presented for fermion mixtures with arbitrary kinds and numbers of fermions. Truncation schemes maintaining the intragroup orbital rotation invariance, as well as equations of motion for the CC amplitudes and orbitals, are derived. The present methods are compact CC-parameterized alternatives to the time-dependent multiconfiguration self-consistent-field method for systems consisting of arbitrarily different kinds and numbers of interacting fermions. Theoretical analyses of applications of the present methods to various chemical systems are reported.
## I Introduction Simulating the dynamics of fermions and fermion mixtures quantitatively is essential for understanding and predicting strong-field and ultrafast phenomena [Protopapas, Keitel, and Knight (1997); Agostini and DiMauro (2004); Krausz and Ivanov (2009); Gallmann, Cirelli, and Keller (2013); Nisoli _et al._ (2017)] of molecular systems. (Note that the Hilbert spaces of vibrational systems and photons are isomorphic to those of fermion mixtures with one occupation in each kind [Mordovina _et al._ (2020); Christiansen (2004); Hansen _et al._ (2019); Christiansen (2004); Sverdrup Ofstad _et al._ (2023)].) However, solving the time-dependent Schrödinger equation (TDSE) exactly is impractical for even moderate-size systems due to the factorial growth of the computational cost. Commonly used benchmark methods for such dynamics are the multiconfiguration time-dependent Hartree-Fock (MCTDHF) method [Zanghellini _et al._ (2003); Kato and Kono (2004); Caillat _et al._ (2005); Nest, Klamroth, and Saalfrank (2005); Kato and Kono (2008); Hochstuhl and Bonitz (2011); Haxton, Lawler, and McCurdy (2012); Haxton and McCurdy (2014)], as well as its extensions for fermion mixtures [Lode _et al._ (2020); Alon, Streltsov, and Cederbaum (2007)]. In these methods, the wavefunction is expanded in a full configuration-interaction (CI) form inside an active space whose single-particle orbitals are time-dependent. Equations of motion (EoMs) of the parameters are determined by the time-dependent variational principle (TDVP). Owing to the compactness of the ansatz, the number of optimized time-dependent orbitals is usually much smaller than the number of grid points in the TDSE, hugely reducing the computational cost. However, the full CI expansion still suffers from factorial growth of the computational cost with respect to the number of fermions, so these methods can only be applied to moderate-size systems.
The more general time-dependent multiconfiguration self-consistent-field (TD-MCSCF) method [Nguyen-Dang _et al._ (2007); Ishikawa and Sato (2015); Anzaki, Sato, and Ishikawa (2017)] further restricts the configurations in the wavefunction expansion, bridging the gap between time-dependent Hartree-Fock and MCTDHF. One of the attractive TD-MCSCF methods, the time-dependent complete-active-space self-consistent-field (TD-CASSCF) method [Sato and Ishikawa (2013); Li, Sato, and Ishikawa (2021)], significantly reduces the cost by assigning some occupied orbitals as cores (which must be occupied in all configurations within the expansion), resulting in a computational cost that scales factorially with respect to the number of fermions in the active orbitals rather than the total number of fermions. It is also possible to restrict the configurations in the active space by excitation level with respect to the reference state, for instance MCTDH[n] [Madsen _et al._ (2020a)] and multireference MCTDH[n] [Madsen _et al._ (2020b)] for vibrational dynamics, and the time-dependent restricted active space self-consistent field (TD-RASSCF) [Miyagi and Madsen (2013, 2014); Haxton and McCurdy (2015)] and time-dependent occupation-restricted multiple active space (TD-ORMAS) [Sato and Ishikawa (2015)] methods for electron dynamics, to achieve polynomial cost scaling [Haxton and McCurdy (2015); Sato _et al._ (2016); Sawada, Sato, and Ishikawa (2016); Omiste, Li, and Madsen (2017)], with the drawback of not being size extensive [Helgaker, Jørgensen, and Olsen (2002)]. Recently, the polynomial-scaling and size-extensive CC parameterization [Szabo and Ostlund (1996); Helgaker, Jørgensen, and Olsen (2002); Kümmel (2003); Shavitt and Bartlett (2009)] in the time-dependent active space has gained considerable popularity. In the CC parameterization, the ket vector is described by an exponential of a truncated excitation operator acting on the reference state, while the bra vector is not the Hermitian conjugate of the ket vector.
For this reason, the time-dependent bivariational principle [Arponen (1983)] and biorthogonal orbitals are also widely used in addition to the TDVP and orthogonal orbitals. Many methods have been developed and successfully implemented, e.g., the time-dependent optimized coupled-cluster (TD-OCC) method [Sato _et al._ (2018); Pathak, Sato, and Ishikawa (2020a, b, 2021)] and the orbital-adapted time-dependent coupled-cluster (OATDCC) method [Kvaal (2012)] for electron dynamics, and the time-dependent vibrational coupled cluster with time-dependent modals (TDMVCC) families [Madsen _et al._ (2020c); Højlund, Zoccante, and Christiansen (2024); Højlund and Christiansen (2024); Sverdrup Ofstad _et al._ (2023); Højlund _et al._ (2022); Jensen _et al._ (2023)] for vibrational dynamics. These methods can be considered time-dependent extensions of stationary OCC [G. E. Scuseria and H. F. Schaefer III (1987); Sherrill _et al._ (1998); Krylov _et al._ (1998); Köhn and Olsen (2005)] and nonorthogonal orbital-optimized coupled cluster (NOCC) [Pedersen, Fernández, and Koch (2001)]. Despite the fruitful results of the CC parameterization of the active-space wavefunction, it is still less explored than the CI parameterization. First, previous works mainly focus on either electron dynamics (one kind of fermion) or vibrational dynamics (fermion mixtures with only one occupation in each fermion kind); a general theory for arbitrary occupations of fermion mixtures is still lacking. Second, numerically stable and convergent [Köhn and Olsen (2005); Myhre (2018)] (in the sense of converging to CASSCF/TD-CASSCF) CC parameterizations using orthogonal orbitals, for instance the standard CC [Huber and Klamroth (2011); D. R. Nascimento and A. E. DePrince III (2016)] and Brueckner coupled-cluster (BCC) [Raghavachari _et al._ (1990); Hampel, Peterson, and Werner (1992); Chiles and Dykstra (1981); Handy _et al._ (1989)] types of parameterization in the active space, are unknown.
In this paper, we advance in these directions by presenting five time-dependent orbital-optimized coupled-cluster (TD-ooCC) methods for systems composed of arbitrarily different kinds and numbers of interacting fermions (with particle-number conservation). This paper is organized as follows. In the theory section, Sec. II, we first analyze the condition under which time-dependent CC with time-varying orthogonal orbitals converges to TD-CASSCF. We then introduce, for the wavefunction ansatz of the TD-ooCC methods, the selections of both the single CC (de)excitation operators ($\hat{T}_{1}$ and $\hat{\Lambda}_{1}$, defined in Eqs. (3) and (4)), which determine the type of the method, and the non-single CC (de)excitation operators ($\hat{T}_{0}$, $\hat{T}_{2}$, $\cdots$, $\hat{\Lambda}_{0}$, $\hat{\Lambda}_{2}$, $\cdots$, defined in Eqs. (3) and (4)), which can potentially reduce the required computational resources. The equations of motion (EoMs) for the CC amplitudes and orbitals are presented. Finally, we discuss their advantages and potential drawbacks. In the application section, Sec. III, we discuss three major applications of our methods to chemical systems: electronic dynamics, vibrational dynamics, and non-adiabatic dynamics. Sec. IV summarizes our theories. Hartree atomic units are used throughout unless otherwise noted. ## II Theory ### II.1 Full coupled cluster expansion in the complete active space Following the CASSCF method, general orthogonal time-dependent orbitals $\mu^{m},\nu^{m},\cdots$ of fermion kind $m$ are classified into virtual orbitals $\alpha^{m},\beta^{m},\cdots$ and occupied orbitals $\bar{p}^{m},\bar{q}^{m},\cdots$; the latter are further classified into frozen-cores $i^{\prime\prime m},j^{\prime\prime m},\cdots$, dynamical-cores $i^{\prime m},j^{\prime m},\cdots$, and active orbitals $t^{m},u^{m},\cdots$, which comprise hole orbitals $i^{m},j^{m},\cdots$ and particle orbitals $a^{m},b^{m},\cdots$.
Here, the hole orbitals and particle orbitals refer to active orbitals that are occupied and unoccupied, respectively, in the reference state. EoMs of orbitals can be formally expressed as $i|\dot{\psi}_{\nu^{m}}\rangle=i|\psi_{\mu^{m}}\rangle X^{\mu^{m}}_{\nu^{m}}\,,$ (1) where $X^{\mu^{m}}_{\nu^{m}}=\braket{\psi_{\mu^{m}}}{\dot{\psi}_{\nu^{m}}}$ is anti-Hermitian to enforce the orthonormality of the orbitals, $\langle\psi_{\mu^{m}}(t)|\psi_{\nu^{m}}(t)\rangle=\delta^{\mu^{m}}_{\nu^{m}}$, at all times; these matrix elements are called the constraints of the orbital rotations. There are two types of orbital constraints: the first type comprises pre-determined constraints, namely the intragroup rotation constraints and the frozen-core-related orbital constraints; the second type comprises the remaining elements, which are variational parameters to be determined by the time-dependent variational principle. In this article, we use the Einstein summation convention except for the fermion kind indices $m,n,\cdots$. For convenience, we use the notations $p^{m},q^{m},\cdots$ to label the time-dependent occupied orbitals except for the frozen-cores, since the frozen-cores are pre-determined by the gauge choice [Sato _et al._ (2016)]. Additionally, $\hat{c}_{\mu^{m}}^{\dagger}$ and $\hat{c}_{\mu^{m}}$ are defined as the creation and annihilation operators for $\psi_{\mu^{m}}$ (and similarly for other orbital labels).
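The anti-Hermiticity of $X^{\mu^{m}}_{\nu^{m}}$ follows in one line from orbital orthonormality; with $X^{\mu^{m}}_{\nu^{m}}=\braket{\psi_{\mu^{m}}}{\dot{\psi}_{\nu^{m}}}$ as above,

```latex
0=\frac{d}{dt}\braket{\psi_{\mu^{m}}}{\psi_{\nu^{m}}}
 =\braket{\dot{\psi}_{\mu^{m}}}{\psi_{\nu^{m}}}
 +\braket{\psi_{\mu^{m}}}{\dot{\psi}_{\nu^{m}}}
 =\left(X^{\nu^{m}}_{\mu^{m}}\right)^{*}+X^{\mu^{m}}_{\nu^{m}}
\quad\Longrightarrow\quad
X^{\mu^{m}}_{\nu^{m}}=-\left(X^{\nu^{m}}_{\mu^{m}}\right)^{*}.
```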
The most general ansatz of the CC wave function with time-dependent orbital can be expressed as $\ket{\Psi_{\rm R}}=e^{\hat{T}}\ket{\Phi}\,,\quad\bra{\Psi_{\rm L}}=\bra{\Phi}\hat{\Lambda}e^{-\hat{T}}\,,$ (2) where $\ket{\Phi}$ and $\bra{\Phi}$ are the reference states, i.e., tensor product of Slater determinants of all fermion kinds $\ket{\Phi}=\prod_{m}\prod_{i^{\prime\prime m}i^{\prime m}i^{m}}\hat{c}^{\dagger}_{i^{\prime\prime m}}\hat{c}^{\dagger}_{i^{\prime m}}\hat{c}^{\dagger}_{i^{m}}\ket{0}$, and $\displaystyle\hat{T}$ $\displaystyle=\hat{T}_{0}+\hat{T}_{1}+\hat{T}_{2}+\hat{T}_{3}+\cdots$ (3) $\displaystyle=\tau_{0}\hat{E}_{0}+\sum_{m}\tau_{i^{m}}^{a^{m}}\hat{E}_{i^{m}}^{a^{m}}+\sum_{m_{1}m_{2}}\tau_{i^{m_{1}}j^{m_{2}}}^{a^{m_{1}}b^{m_{2}}}\hat{E}_{i^{m_{1}}j^{m_{2}}}^{a^{m_{1}}b^{m_{2}}}$ $\displaystyle+\sum_{m_{1}m_{2}m_{3}}\tau_{i^{m_{1}}j^{m_{2}}k^{m_{3}}}^{a^{m_{1}}b^{m_{2}}c^{m_{3}}}\hat{E}_{i^{m_{1}}j^{m_{2}}k^{m_{3}}}^{a^{m_{1}}b^{m_{2}}c^{m_{3}}}+\cdots$ $\displaystyle=\tau^{\mathring{A}}_{\mathring{I}}\hat{E}^{\mathring{A}}_{\mathring{I}}=\sum_{m}\tau_{i^{m}}^{a^{m}}\hat{E}_{i^{m}}^{a^{m}}+\tau^{{A}}_{{I}}\hat{E}^{{A}}_{{I}}\,,$ $\displaystyle\hat{\Lambda}$ $\displaystyle=\hat{\Lambda}_{0}+\hat{\Lambda}_{1}+\hat{\Lambda}_{2}+\hat{\Lambda}_{3}+\cdots$ (4) $\displaystyle=\lambda_{0}\hat{E}_{0}+\sum_{m}\lambda^{i^{m}}_{a^{m}}\hat{E}^{i^{m}}_{a^{m}}+\sum_{m_{1}m_{2}}\lambda^{i^{m_{1}}j^{m_{2}}}_{a^{m_{1}}b^{m_{2}}}\hat{E}^{i^{m_{1}}j^{m_{2}}}_{a^{m_{1}}b^{m_{2}}}$ $\displaystyle+\sum_{m_{1}m_{2}m_{3}}\lambda^{i^{m_{1}}j^{m_{2}}k^{m_{3}}}_{a^{m_{1}}b^{m_{2}}c^{m_{3}}}\hat{E}^{i^{m_{1}}j^{m_{2}}k^{m_{3}}}_{a^{m_{1}}b^{m_{2}}c^{m_{3}}}+\cdots$ $\displaystyle=\lambda_{\mathring{A}}^{\mathring{I}}\hat{E}_{\mathring{A}}^{\mathring{I}}=\sum_{m}\lambda^{i^{m}}_{a^{m}}\hat{E}^{i^{m}}_{a^{m}}+\lambda_{{A}}^{{I}}\hat{E}_{{A}}^{{I}}\,,$ Here, $\mathring{I}=ijk\cdots$ and $\mathring{A}=abc\cdots$ (can be empty) denote hole and particle orbital strings to make the 
notation brief. The general-rank excitation (deexcitation) operators and amplitudes are $\hat{E}^{\mathring{A}}_{\mathring{I}}$ ($\hat{E}_{\mathring{A}}^{\mathring{I}}$) and $\tau^{\mathring{A}}_{\mathring{I}}$ ($\lambda_{\mathring{A}}^{\mathring{I}}$), where ${\hat{E}}^{\mu_{1}^{m_{1}}\mu_{2}^{m_{2}}\mu_{3}^{m_{3}}\cdots}_{\nu_{1}^{m_{1}}\nu_{2}^{m_{2}}\nu_{3}^{m_{3}}\cdots}=\hat{c}^{\dagger}_{\mu_{1}^{m_{1}}}\hat{c}^{\dagger}_{\mu_{2}^{m_{2}}}\hat{c}^{\dagger}_{\mu_{3}^{m_{3}}}\cdots{\hat{c}}_{\nu_{3}^{m_{3}}}{\hat{c}}_{\nu_{2}^{m_{2}}}{\hat{c}}_{\nu_{1}^{m_{1}}}$, and $\hat{E}_{0}=\hat{I}$ for the zero-rank excitation (empty strings $\mathring{I}$ and $\mathring{A}$). We will also use the notations $I$ and $A$ to label non-single hole and particle orbital strings (strings whose length is not one) in this article. Before applying the time-dependent variational principle, we stress that the single-amplitude terms $\hat{T}_{1}$ and $\hat{\Lambda}_{1}$ are included in the ansatz for the convenience of the theoretical analysis, although they can cause numerical instability in practical simulations and were omitted in previous work [Sato _et al._ (2018); Köhn and Olsen (2005)]. Additionally, the zero-rank amplitudes $\tau_{0}$ and $\lambda_{0}$ are explicitly included in the ansatz to recover the CASSCF wavefunction in the full CC expansion limit. The normalization condition keeps $\lambda_{0}\equiv 1$ conserved, and $\tau_{0}$ has no impact on observables but is essential for autocorrelations.
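The role of $\lambda_{0}$ can be made explicit: every (de)excitation term of nonzero rank in $\hat{\Lambda}$ has vanishing reference expectation value, so

```latex
\braket{\Psi_{\rm L}}{\Psi_{\rm R}}
 =\bra{\Phi}\hat{\Lambda}e^{-\hat{T}}e^{\hat{T}}\ket{\Phi}
 =\bra{\Phi}\hat{\Lambda}\ket{\Phi}
 =\lambda_{0}\,,
```

and the normalization $\braket{\Psi_{\rm L}}{\Psi_{\rm R}}=1$ fixes $\lambda_{0}\equiv 1$, while $\tau_{0}$ contributes only the factor $e^{\tau_{0}}$ to $\ket{\Psi_{\rm R}}$, which cancels in expectation values but survives in autocorrelation functions.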
Following the previous work Pedersen, Koch, and Hättig (1999); Kvaal (2012); Helgaker, Jørgensen, and Olsen (2002); Sato _et al._ (2018), we begin with the coupled-cluster Lagrangian $L$ and the real action functional $S$, $\displaystyle L$ $\displaystyle=\langle\Psi_{\rm L}|(\hat{H}-i{\partial_{t}})|\Psi_{\rm R}\rangle\,,$ (5) $\displaystyle S$ $\displaystyle=\Re\int_{t_{0}}^{t_{1}}Ldt=\frac{1}{2}\int_{t_{0}}^{t_{1}}\left(L+L^{*}\right)dt\,.$ (6) Here, the Hamiltonian $\hat{H}$ can be an arbitrary particle-number-conserving Hamiltonian. The EoMs of the parameters are given by the stationary condition of the action with respect to the variation of all parameters of the left- and right-state wavefunctions (with necessary constraints), $\displaystyle 2\delta S$ $\displaystyle=$ $\displaystyle\delta\tau^{\mathring{A}}_{\mathring{I}}\left\\{\langle\Psi_{\rm L}|[\hat{H}-i\hat{X},\hat{E}^{\mathring{A}}_{\mathring{I}}]|\Psi_{\rm R}\rangle+i\dot{\lambda}_{\mathring{A}}^{\mathring{I}}\right\\}+\delta\lambda_{\mathring{A}}^{\mathring{I}}\left\\{\langle\Phi^{\mathring{A}}_{\mathring{I}}|e^{-\hat{T}}(\hat{H}-i\hat{X})|\Psi_{\rm R}\rangle-i\dot{\tau}^{\mathring{A}}_{\mathring{I}}\right\\}+{\rm c.c.}$ (7) $\displaystyle+$ $\displaystyle\sum_{m}\Delta^{\mu^{m}}_{\nu^{m}}\left\\{\langle\Psi_{\rm L}|[\hat{H}-i(\partial^{\prime}_{t}\hat{T})-i\hat{X},\hat{E}^{\mu^{m}}_{\nu^{m}}]|\Psi_{\rm R}\rangle+i\langle\Phi|(\partial^{\prime}_{t}\hat{\Lambda})e^{-\hat{T}}\hat{E}^{\mu^{m}}_{\nu^{m}}|\Psi_{\rm R}\rangle\right.$ $\displaystyle\left.-\langle\Psi_{\rm L}|[\hat{H}-i(\partial^{\prime}_{t}\hat{T})-i\hat{X},\hat{E}_{\mu^{m}}^{\nu^{m}}]|\Psi_{\rm R}\rangle^{*}+i\langle\Phi|(\partial^{\prime}_{t}\hat{\Lambda})e^{-\hat{T}}\hat{E}_{\mu^{m}}^{\nu^{m}}|\Psi_{\rm R}\rangle^{*}\right\\}\,,$ where $\partial^{\prime}_{t}\hat{T}=\dot{\tau}^{\mathring{A}}_{\mathring{I}}\hat{E}^{\mathring{A}}_{\mathring{I}}$, $\partial^{\prime}_{t}\hat{\Lambda}=\dot{\lambda}_{\mathring{A}}^{\mathring{I}}\hat{E}_{\mathring{A}}^{\mathring{I}}$, $\hat{X}=\sum_{m}X^{\mu^{m}}_{\nu^{m}}\hat{E}^{\mu^{m}}_{\nu^{m}}$, $\hat{\Delta}=\sum_{m}\Delta^{\mu^{m}}_{\nu^{m}}\hat{E}^{\mu^{m}}_{\nu^{m}}$, and $\Delta^{\mu^{m}}_{\nu^{m}}=\braket{\psi_{\mu^{m}}}{\delta{\psi}_{\nu^{m}}}$. Like the matrices $X^{\mu^{m}}_{\nu^{m}}$, the $\Delta^{\mu^{m}}_{\nu^{m}}$ are anti-Hermitian, which enforces the orthonormality of the orbitals. Meanwhile, $\Delta^{\mu^{m}}_{i^{\prime\prime m}}\equiv 0$ because the EoMs of the frozen cores are not determined by the variational principle. Additionally, we only consider ansatz that satisfy the intragroup rotation invariance, including dynamical-core-dynamical-core, hole-hole, particle-particle, and virtual-virtual rotation invariance, i.e., the wavefunction is invariant up to a phase factor under these orbital rotations combined with the corresponding inverse rotations of the CC amplitudes. These invariances make $X^{i^{\prime m}}_{j^{\prime m}}$, $X_{i^{m}}^{j^{m}}$, $X_{a^{m}}^{b^{m}}$, and $X_{\alpha^{m}}^{\beta^{m}}$ redundant. For simplicity, we always set all of them to zero. The details of the wavefunction ansatz that maintain the hole-hole, particle-particle, and virtual-virtual rotation invariance will be discussed in Sec. II.2.2 and Appendix B.
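The anti-Hermiticity of $X^{\mu^{m}}_{\nu^{m}}$ and $\Delta^{\mu^{m}}_{\nu^{m}}$ is what preserves orbital orthonormality: the exponential of an anti-Hermitian matrix is unitary. A minimal pure-Python sketch with illustrative $2\times 2$ numbers (not tied to any particular system):

```python
# 2x2 complex matrices as nested lists, enough to illustrate the algebra.
def dagger(m):
    n = len(m)
    return [[m[j][i].conjugate() for j in range(n)] for i in range(n)]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def scale(a, s):
    return [[s * x for x in row] for row in a]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

M = [[1 + 2j, 0.5 - 1j], [2 + 0j, -1 + 1j]]       # arbitrary matrix
X = scale(add(M, scale(dagger(M), -1)), 0.5)       # anti-Hermitian part: X† = -X
assert all(abs(X[i][j] + X[j][i].conjugate()) < 1e-12 for i in range(2) for j in range(2))

# exp(X) by a truncated power series; the exponential of an anti-Hermitian
# matrix is unitary, so the orbital rotation it generates is norm-preserving.
U = [[1 + 0j, 0j], [0j, 1 + 0j]]
term = [[1 + 0j, 0j], [0j, 1 + 0j]]
for n in range(1, 40):
    term = scale(matmul(term, X), 1.0 / n)
    U = add(U, term)
UdU = matmul(dagger(U), U)
assert all(abs(UdU[i][j] - (1 if i == j else 0)) < 1e-10 for i in range(2) for j in range(2))
```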
Variations with respect to the CC amplitudes, $\delta S/\delta\lambda_{\mathring{A}}^{\mathring{I}}=0$ and $\delta S/\delta\tau^{\mathring{A}}_{\mathring{I}}=0$, give $\displaystyle i\dot{\tau}^{A}_{I}$ $\displaystyle=\langle\Phi^{{A}}_{{I}}|e^{-\hat{T}}(\hat{H}-i\hat{X})|\Psi_{\rm R}\rangle\,,$ (8) $\displaystyle i\dot{\tau}_{i^{m}}^{a^{m}}$ $\displaystyle=\bra{\Phi_{i^{m}}^{a^{m}}}e^{-\hat{T}}(\hat{H}-i\hat{X})\ket{\Psi_{\rm R}}\,,$ (9) $\displaystyle-i\dot{\lambda}^{I}_{A}$ $\displaystyle=\langle\Psi_{\rm L}|e^{-\hat{T}}[\hat{H}-i\hat{X},\hat{E}^{A}_{I}]|\Psi_{\rm R}\rangle\,,$ (10) $\displaystyle-i\dot{\lambda}^{i^{m}}_{a^{m}}$ $\displaystyle=\bra{\Psi_{\rm L}}[\hat{H}-i\hat{X},\hat{E}_{i^{m}}^{a^{m}}]\ket{\Psi_{\rm R}}\,.$ (11) Here, we have separated the EoMs of the CC amplitudes into a non-single-amplitude part and a single-amplitude part. The variation with respect to $\Delta_{a^{m}}^{i^{m}}$ gives $\displaystyle\bra{\Psi_{\rm L}}[\hat{H}-i\hat{X}-i(\partial_{t}^{\prime}\hat{T}),\hat{E}^{i^{m}}_{a^{m}}]\ket{\Psi_{\rm R}}$ (12) $\displaystyle+$ $\displaystyle i\bra{\Phi}(\partial_{t}^{\prime}\hat{\Lambda})e^{-\hat{T}}\hat{E}^{i^{m}}_{a^{m}}|\Psi_{\rm R}\rangle$ $\displaystyle-$ $\displaystyle\bra{\Psi_{\rm L}}[\hat{H}-i\hat{X}-i(\partial_{t}^{\prime}\hat{T}),\hat{E}_{i^{m}}^{a^{m}}]\ket{\Psi_{\rm R}}^{*}$ $\displaystyle+$ $\displaystyle i\bra{\Phi}(\partial_{t}^{\prime}\hat{\Lambda})e^{-\hat{T}}\hat{E}_{i^{m}}^{a^{m}}|\Psi_{\rm R}\rangle^{*}=0\,.$ Substituting Eq. (8,10) into Eq.
(12) yields $\displaystyle i(\delta_{b^{m}}^{a^{m}}D_{i^{m}}^{j^{m}}-D_{b^{m}}^{a^{m}}\delta_{i^{m}}^{j^{m}})(\dot{\tau}_{j^{m}}^{b^{m}}+X_{j^{m}}^{b^{m}})+i\frac{1}{2}(\dot{\lambda}^{i^{m}}_{a^{m}})^{*}=$ (13) $\displaystyle B_{i^{m}}^{a^{m}}-i\frac{1}{2}\sum_{n}\left\\{\langle\Phi_{j^{n}}^{b^{n}}|e^{-\hat{T}}\hat{E}^{i^{m}}_{a^{m}}|\Psi_{\rm R}\rangle\dot{\lambda}^{j^{n}}_{b^{n}}+A^{a^{m}j^{n}}_{i^{m}b^{n}}(X^{b^{n}}_{j^{n}})^{*}\right\\}\,,$ where $\displaystyle A^{a^{m}j^{n}}_{i^{m}b^{n}}$ $\displaystyle=\bra{\Psi_{\rm L}}[\hat{E}^{A}_{I},\hat{E}^{j^{n}}_{b^{n}}]\ket{\Psi_{\rm R}}\langle\Phi^{A}_{I}|e^{-\hat{T}}\hat{E}^{i^{m}}_{a^{m}}\ket{\Psi_{\rm R}}$ $\displaystyle-\bra{\Psi_{\rm L}}[\hat{E}^{A}_{I},\hat{E}^{i^{m}}_{a^{m}}]\ket{\Psi_{\rm R}}\langle\Phi_{I}^{A}|e^{-\hat{T}}\hat{E}^{j^{n}}_{b^{n}}\ket{\Psi_{\rm R}},$ (14) $\displaystyle B_{i^{m}}^{a^{m}}$ $\displaystyle=-\frac{1}{2}\braket{\Psi_{\rm L}}{[\hat{H},\hat{E}^{i^{m}}_{a^{m}}]}{\Psi_{\rm R}}-\frac{1}{2}\braket{\Psi_{\rm R}}{[\hat{H},\hat{E}^{i^{m}}_{a^{m}}]}{\Psi_{\rm L}}$ $\displaystyle-\frac{1}{2}\bra{\Psi_{\rm L}}[\hat{E}^{A}_{I},\hat{H}]\ket{\Psi_{\rm R}}\langle\Phi^{A}_{I}|e^{-\hat{T}}\hat{E}^{i^{m}}_{a^{m}}\ket{\Psi_{\rm R}}$ $\displaystyle+\frac{1}{2}\bra{\Psi_{\rm L}}[\hat{E}^{A}_{I},\hat{E}^{i^{m}}_{a^{m}}]\ket{\Psi_{\rm R}}\langle\Phi_{I}^{A}|e^{-\hat{T}}\hat{H}\ket{\Psi_{\rm R}}\,.$ (15) Solving the coupled equations Eq. (9,11,13), the EoMs of the single amplitudes and particle-hole orbital rotations, $\dot{\tau}_{i^{m}}^{a^{m}}$, $\dot{\lambda}^{i^{m}}_{a^{m}}$, and $X_{i^{m}}^{a^{m}}$, can be obtained. Substituting $X_{i^{m}}^{a^{m}}$ into Eq. (8,10), the EoMs of the non-single amplitudes can be solved. In the full CC expansion limit, the particle-hole orbital rotations $X_{i^{m}}^{a^{m}}$ are redundant and can be freely selected. For an explicit proof, see Appendix B. The simplest and most common choice is to set them to zero. However, the coupled equations Eq.
(9,11,13) suggest that the redundancy of $X_{i^{m}}^{a^{m}}$ can be transferred to $\lambda^{i^{m}}_{a^{m}}$ or $\tau_{i^{m}}^{a^{m}}$. For instance, one can set $\lambda_{a^{m}}^{i^{m}}\equiv 0$ as the redundant parameters; this type of wavefunction parameterization is known as BCC Raghavachari _et al._ (1990); Hampel, Peterson, and Werner (1992); Chiles and Dykstra (1981); Handy _et al._ (1989). In fact, one can arbitrarily select one of $X_{i^{m}}^{a^{m}}$, $\tau_{i^{m}}^{a^{m}}$, and ${\lambda}^{i^{m}}_{a^{m}}$ as the redundant parameter for each kind of fermion and set it to zero. In Appendix B, more rigorous arguments on the transfer of redundancy and on the preparation of wavefunctions are presented.

### II.2 Wavefunction ansatz of time-dependent orbital-optimized coupled-cluster families

#### II.2.1 Selection of single amplitudes

In principle, the redundancy caused by including all of $X_{i^{m}}^{a^{m}}$, $\tau_{i^{m}}^{a^{m}}$, and $\lambda^{i^{m}}_{a^{m}}$ occurs only in the full CC expansion limit. Nonetheless, our experience shows that including them all is numerically unstable even when only double (rank-two) excitations are considered, which reflects the compactness of the CC wavefunction parameterization. One has to exclude (at least) one of $X_{i^{m}}^{a^{m}}$, $\tau_{i^{m}}^{a^{m}}$, and $\lambda^{i^{m}}_{a^{m}}$ in practical simulations. The first method we consider is TD-OCC (terminology adapted from the reference Sato _et al._ (2018)), whose ansatz includes only $X_{i^{m}}^{a^{m}}$. This method fails to converge to the CASSCF limit Köhn and Olsen (2005); Myhre (2018); Højlund, Zoccante, and Christiansen (2024), but shows acceptable accuracy in ultrafast electron dynamics Sato _et al._ (2018) and vibrational systems Højlund, Zoccante, and Christiansen (2024) according to previous reports.
Specifically, TD-OCC with double and triple excitations (TD-OCCDT) is in nearly perfect agreement with the CASSCF results in benchmarks of ultrafast electron dynamics Sato _et al._ (2018). We will also present three methods in which one of $X_{i^{m}}^{a^{m}}$, $\tau_{i^{m}}^{a^{m}}$, and $\lambda^{i^{m}}_{a^{m}}$ is absent from the ansatz; all three can therefore converge to the CASSCF limit. The details of the four methods are summarized as follows:

(a) TD-OCC: only $X_{i^{m}}^{a^{m}}$ are included in the ansatz, or equivalently $\tau_{i^{m}}^{a^{m}}=\lambda^{i^{m}}_{a^{m}}=\delta\tau_{i^{m}}^{a^{m}}=\delta\lambda^{i^{m}}_{a^{m}}\equiv 0$.

(b) TD-OCCT1: $\tau_{i^{m}}^{a^{m}}$ and $X_{i^{m}}^{a^{m}}$ are included in the ansatz, or equivalently $\lambda^{i^{m}}_{a^{m}}=\delta\lambda^{i^{m}}_{a^{m}}\equiv 0$.

(c) TD-BCC: $X_{i^{m}}^{a^{m}}$ and $\lambda^{i^{m}}_{a^{m}}$ are included in the ansatz, or equivalently $\tau_{i^{m}}^{a^{m}}=\delta\tau_{i^{m}}^{a^{m}}\equiv 0$.

(d) TD-OCCX0: $\tau_{i^{m}}^{a^{m}}$ and $\lambda^{i^{m}}_{a^{m}}$ are included in the ansatz, or equivalently $X_{i^{m}}^{a^{m}}=\Delta_{i^{m}}^{a^{m}}\equiv 0$.

Here, the latter definition of each method can be regarded as a set of constraints, in the sense that some parameters are pre-determined (to be zero). We stress that the absence of the particle-hole orbital rotations $X_{i^{m}}^{a^{m}}$ in TD-OCCX0 is only justified for dynamics without external fields. In the presence of external fields, $X_{i^{m}}^{a^{m}}$ has to be assigned different values to ensure gauge invariance, akin to the treatment of frozen cores Sato _et al._ (2016). Additionally, we will consider the most general hybrid form of wavefunction parameterization, in which the parameter selection (a), (b), (c), or (d) is separately and flexibly adopted for each fermion kind; this hybrid is named TD-OCCH and will be discussed in full detail in Appendix D.
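The four selections (a)-(d) can be tabulated compactly. The sketch below (Python, hypothetical names) records which single-excitation parameters are active in each method and checks the CASSCF-convergence criterion discussed above, namely that among these four methods exactly one of the three parameter sets is constrained to zero:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SinglesSelection:
    X: bool    # particle-hole orbital rotations X_i^a active?
    tau: bool  # single cluster amplitudes tau_i^a active?
    lam: bool  # single de-excitation amplitudes lambda_a^i active?

METHODS = {
    "TD-OCC":   SinglesSelection(X=True,  tau=False, lam=False),
    "TD-OCCT1": SinglesSelection(X=True,  tau=True,  lam=False),
    "TD-BCC":   SinglesSelection(X=True,  tau=False, lam=True),
    "TD-OCCX0": SinglesSelection(X=False, tau=True,  lam=True),
}

def converges_to_casscf(sel):
    # Among the four selections considered here, exactly one constrained
    # parameter set allows the redundancy transfer, so the full CC expansion
    # recovers CASSCF; TD-OCC constrains two and fails to converge.
    return [sel.X, sel.tau, sel.lam].count(False) == 1

assert not converges_to_casscf(METHODS["TD-OCC"])
assert all(converges_to_casscf(METHODS[m]) for m in ("TD-OCCT1", "TD-BCC", "TD-OCCX0"))
```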
#### II.2.2 Selection of non-single excitations

Inspired by TD-MCSCF, one can restrict the non-single excitations in TD-ooCC to make the ansatz more compact. To ensure the intragroup rotation invariance of particle and hole orbitals, one can select non-single excitations that satisfy the property $\displaystyle\hat{E}^{\mu^{m}}_{\nu^{m}}\ket{\Psi_{\rm R}}=e^{\hat{T}}\hat{\Pi}e^{-\hat{T}}\hat{E}^{\mu^{m}}_{\nu^{m}}\ket{\Psi_{\rm R}}\,,$ (16) $\displaystyle\bra{\Psi_{\rm L}}\hat{E}^{\mu^{m}}_{\nu^{m}}=\bra{\Psi_{\rm L}}\hat{E}^{\mu^{m}}_{\nu^{m}}e^{\hat{T}}\hat{\Pi}e^{-\hat{T}}\,,$ where $\hat{\Pi}=\ket{\Phi_{\mathring{I}}^{\mathring{A}}}\bra{\Phi_{\mathring{I}}^{\mathring{A}}}$. Here $\mu^{m}$ and $\nu^{m}$ should both be particle indices or both be hole indices. For the details of the proof, see Appendix B. One of the simplest selection schemes is the $[k]$-truncation scheme, i.e., a non-single excitation is included in the ansatz if and only if its rank is not larger than a given truncation order $k$; this scheme has been successfully implemented in TD-OCC. It is also possible to assign different truncation orders to different kinds to reflect their contributions to the dynamics. Here, we propose two possible schemes. The first, the weighted excitation truncation scheme, is a modified version of the truncation scheme reported in Ref. [Madsen _et al._ , 2020d]. Define the nonnegative excitation weight and the excitation number of each kind as $w^{m}$ and $n_{\rm ext}^{m}$, respectively. The generalized excitation level of a given excitation amplitude is $w=\sum_{m}n_{\rm ext}^{m}w^{m}\,,$ (17) and all non-single excitation amplitudes that satisfy $w\leq k$ should be included in the ansatz, where $k$ is the maximum generalized excitation level. Here, different spin-fermion configurations can be regarded as different “fermion kinds” for both unrestricted and restricted spin orbitals if each spin component is conserved.
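Eq. (17) and the $w\leq k$ criterion translate directly into code. A minimal sketch (the dictionary-based encoding and function names are illustrative assumptions):

```python
def generalized_excitation_level(n_ext, weights):
    # w = sum_m n_ext^m * w^m, Eq. (17); both dicts are keyed by fermion kind.
    return sum(n_ext[kind] * weights[kind] for kind in n_ext)

def include_amplitude(n_ext, weights, k_max):
    # A non-single amplitude is kept iff its generalized level does not exceed k_max.
    return generalized_excitation_level(n_ext, weights) <= k_max

# With all weights equal to 1 the criterion reduces to the [k]-truncation scheme:
assert include_amplitude({"m1": 2, "m2": 1}, {"m1": 1, "m2": 1}, 3)
assert not include_amplitude({"m1": 2, "m2": 2}, {"m1": 1, "m2": 1}, 3)
# A zero-weight kind is left unconstrained by the criterion:
assert include_amplitude({"m1": 5, "m2": 2}, {"m1": 0, "m2": 1}, 2)
```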
This truncation scheme reduces to the scheme of Ref. [Madsen _et al._ , 2020d] for vibrational systems, in which $n_{\rm ext}^{m}$ can only be 0 or 1. One can set the weights of important kinds to relatively small numbers, even zero. When all $w^{m}=1$, the weighted excitation truncation scheme reduces to the $[k]$-truncation scheme. The second, the group excitation truncation scheme, is as follows: (1) all kinds of fermions are divided into several groups A, B, C, $\cdots$, and their truncation orders of excitations are assigned as $n^{A}_{A}$, $n^{B}_{B}$, $n^{C}_{C}$, $\cdots$. Here, $n=1$ if only zero excitations (and single excitations, when included) are included in the ansatz. All non-single excitations beyond the truncation order must be excluded, while all others must be included in the ansatz. The different spin components of a fermion can also be regarded as different kinds if they are conserved; (2) for each type of excitation across groups, truncation orders for the total excitation and for each individual group should be assigned; for instance, $(n_{AB},n_{AB}^{A},n_{AB}^{B})$ should be assigned for excitations across groups $A$ and $B$, and $(n_{ABC},n_{ABC}^{A},n_{ABC}^{B},n_{ABC}^{C})$ for excitations across groups $A$, $B$, and $C$. An excitation is included in the ansatz iff both its total excitation number and the excitation number of each group are not larger than the corresponding truncation parameters; (3) the $n$ must satisfy the inequalities: for all $X$, $n^{X}_{Y}\leq n^{X}_{Z}$ if the number of groups in $Y$ is larger than in $Z$. We now give two explicit examples for electron dynamics, taking the $\alpha$ spin and the $\beta$ spin as two different groups.
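The membership test implied by rules (1) and (2) can be sketched as follows; the dictionary encoding and the function name are illustrative assumptions:

```python
def include_cross_excitation(n_ext, group_of, limits):
    """n_ext: {kind: excitation number}; group_of: {kind: group label};
    limits: {frozenset(active groups): {"total": cap, group: per-group cap}}."""
    per_group = {}
    for kind, n in n_ext.items():
        if n > 0:
            g = group_of[kind]
            per_group[g] = per_group.get(g, 0) + n
    lim = limits[frozenset(per_group)]
    if sum(per_group.values()) > lim["total"]:
        return False
    return all(per_group[g] <= lim[g] for g in per_group)

# Two spin groups with n_a^a = n_b^b = 2, n_ab = 3, and per-group caps of 2:
groups = {"alpha": "a", "beta": "b"}
limits = {
    frozenset({"a"}): {"total": 2, "a": 2},
    frozenset({"b"}): {"total": 2, "b": 2},
    frozenset({"a", "b"}): {"total": 3, "a": 2, "b": 2},
}
assert include_cross_excitation({"alpha": 2, "beta": 1}, groups, limits)      # mixed triple
assert not include_cross_excitation({"alpha": 3, "beta": 0}, groups, limits)  # pure-spin triple
assert not include_cross_excitation({"alpha": 2, "beta": 2}, groups, limits)  # quadruple
```

With these parameters, pure-spin doubles and mixed $2{+}1$ triples pass while pure-spin triples and quadruples are rejected.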
The first example is $n_{\alpha}^{\alpha}=n_{\beta}^{\beta}=n_{\alpha\beta}^{\alpha}=n_{\alpha\beta}^{\beta}=2$ and $n_{\alpha\beta}=3$, which means that all double excitations of the $\alpha$ and $\beta$ spins are included, and all $2\alpha 1\beta$- and $2\beta 1\alpha$-type triple excitations are included. The second is $n_{\alpha}^{\alpha}=n_{\beta}^{\beta}=n_{\alpha\beta}^{\alpha}=n_{\alpha\beta}^{\beta}=1$ and $n_{\alpha\beta}=2$, which means that only $1\alpha 1\beta$ excitations are included. For a slightly more complicated example, see Fig. (1). Figure 1: Illustration of the truncation of non-single CC amplitudes (cores and virtual orbitals are neglected). In the example of this figure, three kinds of fermions ($m_{1}$, $m_{2}$, and $m_{3}$) are classified into two groups: group A containing $m_{1}$ and $m_{2}$, and group B containing $m_{3}$. We set $n_{A}^{A}=n_{B}^{B}=n_{AB}^{B}=2$, $n_{AB}=3$, and $n_{AB}^{A}=1$. Fig. (1a) is the reference configuration, where the dashed line is the fermion surface. Figs. (1b,c) show two of the allowed CC excitations across the groups, where blue rectangles circle the excitation kinds. Fig. (1d) shows the forbidden excitations, where red rectangles circle the excitation kinds. In Fig. (1d), although the total excitation and the group-B excitation satisfy the conditions, $3=n_{AB}$ and $1<n_{AB}^{B}$, the excitation is still forbidden due to the violation of the group-A condition, $2>n_{AB}^{A}$.

### II.3 Equations of motion of time-dependent optimized coupled-cluster families

The EoMs of the single amplitudes and particle-hole orbital rotations of the five methods can be obtained by imposing the corresponding constraints on Eq. (9,11,13); they will be presented in the corresponding subsections and the Appendix. The EoMs of the other variables are the same for all five methods [Eq. (8,10)].
EoMs of intergroup rotations $X^{\alpha^{m}}_{p^{m}}$ and $X^{t^{m}}_{i^{\prime m}}$ are obtained from the variation with respect to $\Delta_{\alpha^{m}}^{p^{m}}$ and $\Delta_{t^{m}}^{i^{\prime m}}$, $\displaystyle\bra{\Psi_{\rm L}}[\hat{H}-i\hat{X},\hat{E}_{\alpha^{m}}^{p^{m}}]\ket{\Psi_{\rm R}}+\bra{\Psi_{\rm R}}[\hat{H}-i\hat{X},\hat{E}_{\alpha^{m}}^{p^{m}}]\ket{\Psi_{\rm L}}=0\,,$ (18) $\displaystyle\bra{\Psi_{\rm L}}[\hat{H}-i\hat{X},\hat{E}_{t^{m}}^{i^{\prime m}}]\ket{\Psi_{\rm R}}+\bra{\Psi_{\rm R}}[\hat{H}-i\hat{X},\hat{E}_{t^{m}}^{i^{\prime m}}]\ket{\Psi_{\rm L}}=0\,,$ which can be further simplified as $\displaystyle 2iX^{u^{m}}_{i^{\prime m}}$ $\displaystyle(2\delta_{u^{m}}^{t^{m}}-D_{u^{m}}^{t^{m}})=$ $\displaystyle\braket{\Psi_{\rm L}}{\hat{E}_{t^{m}}^{i^{\prime m}}\hat{H}}{\Psi_{\rm R}}+\braket{\Psi_{\rm R}}{\hat{E}_{t^{m}}^{i^{\prime m}}\hat{H}}{\Psi_{\rm L}}\,,$ (19) $\displaystyle 2iX^{\alpha^{m}}_{q^{m}}$ $\displaystyle D^{q^{m}}_{p^{m}}=\braket{\Psi_{\rm L}}{\hat{E}_{\alpha^{m}}^{p^{m}}\hat{H}}{\Psi_{\rm R}}+\braket{\Psi_{\rm R}}{\hat{E}_{\alpha^{m}}^{p^{m}}\hat{H}}{\Psi_{\rm L}}\,,$ (20) with $\displaystyle\rho_{q^{m}}^{p^{m}}$ $\displaystyle=\bra{\Psi_{\rm L}}\hat{E}^{q^{m}}_{p^{m}}\ket{\Psi_{\rm R}}\,,\hskip 5.0ptD_{q^{m}}^{p^{m}}=\frac{1}{2}(\rho_{q^{m}}^{p^{m}}+(\rho^{q^{m}}_{p^{m}})^{*})$ (21) In practical simulations, virtual orbitals are never propagated. 
Instead, one can convert Beck _et al._ (2000); Anzaki, Sato, and Ishikawa (2017) the propagation of the occupied orbitals arising from the virtual-occupied orbital rotations to $\displaystyle i\braket{\psi_{\bar{\mu}^{m}}}{\psi_{\alpha^{m}}}X^{\alpha^{m}}_{p^{m}}=(D^{-1})_{q^{m}}^{p^{m}}$ (22) $\displaystyle\times$ $\displaystyle\Big{\\{}\braket{\Psi_{\rm L}}{\hat{E}_{\bar{\mu}^{m}}^{q^{m}}\hat{H}}{\Psi_{\rm R}}-\braket{\Psi_{\rm L}}{\hat{E}_{\bar{o}^{m}}^{q^{m}}\hat{H}}{\Psi_{\rm R}}\braket{\psi_{\bar{\mu}^{m}}}{\psi_{\bar{o}^{m}}}$ $\displaystyle+\braket{\Psi_{\rm R}}{\hat{E}_{\bar{\mu}^{m}}^{q^{m}}\hat{H}}{\Psi_{\rm L}}-\braket{\Psi_{\rm R}}{\hat{E}_{\bar{o}^{m}}^{q^{m}}\hat{H}}{\Psi_{\rm L}}\braket{\psi_{\bar{\mu}^{m}}}{\psi_{\bar{o}^{m}}}\Big{\\}}\,,$ where $\bar{\mu}^{m}$ labels the primary time-independent basis used to expand the orbitals, which can be a discrete variable representation Beck _et al._ (2000), grid points Li, Sato, and Ishikawa (2021), etc.

#### II.3.1 TD-OCC

Setting $\tau_{i^{m}}^{a^{m}}={\lambda}^{i^{m}}_{a^{m}}\equiv 0$ in Eq. (13), $i(\delta_{b^{m}}^{a^{m}}D_{i^{m}}^{j^{m}}-D_{b^{m}}^{a^{m}}\delta_{i^{m}}^{j^{m}})X_{j^{m}}^{b^{m}}+i\sum_{n}\frac{1}{2}A^{a^{m}j^{n}}_{i^{m}b^{n}}(X^{b^{n}}_{j^{n}})^{*}=B_{i^{m}}^{a^{m}}\,.$ (23) In the case of one occupation of each fermion kind and of a single fermion kind, the method reduces to oTDMVCC Højlund, Zoccante, and Christiansen (2024) for vibrational systems and to TD-OCC Sato _et al._ (2018) for electrons, respectively.

#### II.3.2 TD-OCCT1

Setting ${\lambda}^{i^{m}}_{a^{m}}\equiv 0$ in Eq.
(11,13), $i\left(\rho_{j^{m}}^{i^{m}}\delta_{b^{m}}^{a^{m}}-\rho_{a^{m}}^{b^{m}}\delta_{j^{m}}^{i^{m}}\right)X^{j^{m}}_{b^{m}}=\bra{\Psi_{\rm L}}[\hat{H},\hat{E}_{i^{m}}^{a^{m}}]\ket{\Psi_{\rm R}}\,,$ (24) $\displaystyle i(\delta_{b^{m}}^{a^{m}}D_{i^{m}}^{j^{m}}-D_{b^{m}}^{a^{m}}\delta_{i^{m}}^{j^{m}})\dot{\tau}_{j^{m}}^{b^{m}}=$ (25) $\displaystyle B_{i^{m}}^{a^{m}}-i(\delta_{b^{m}}^{a^{m}}D_{i^{m}}^{j^{m}}-D_{b^{m}}^{a^{m}}\delta_{i^{m}}^{j^{m}})X_{j^{m}}^{b^{m}}-i\sum_{n}\frac{1}{2}A^{a^{m}j^{n}}_{i^{m}b^{n}}(X^{b^{n}}_{j^{n}})^{*}\,.$ Interestingly, TD-OCCT1 is mathematically equivalent to split OATDCC (sOATDCC, or equivalently, sTDMVCC) Højlund and Christiansen (2024), which uses orthogonal virtual orbitals and biorthogonal active orbitals without single CC amplitudes, and which has recently been reported for vibrational systems and electron dynamics. The biorthogonal orbitals in sOATDCC play the same role as $\hat{T}_{1}$ in TD-OCCT1; however, they give different $X_{i^{m}}^{j^{m}}$ and $X_{a^{m}}^{b^{m}}$ constraints, which might lead to different numerical performance. For more details, see Appendix C. When virtual orbitals do not exist, TD-OCCT1 and sOATDCC are equivalent to TDMVCC Madsen _et al._ (2020c) and to OATDCC Kvaal (2012) (NOCC) Pedersen, Fernández, and Koch (2001) for vibrational systems and electron dynamics, respectively.

#### II.3.3 TD-BCC

Setting ${\tau}^{a^{m}}_{i^{m}}\equiv 0$ in Eq.
(9,13), $\displaystyle iX_{i^{m}}^{a^{m}}+\sum_{n}i$ $\displaystyle\langle\Phi_{i^{m}}^{a^{m}}|e^{-\hat{T}}\hat{E}^{j^{n}}_{b^{n}}|\Psi_{\rm R}\rangle X^{j^{n}}_{b^{n}}=\langle\Phi_{i^{m}}^{a^{m}}|e^{-\hat{T}}\hat{H}|\Psi_{\rm R}\rangle\,,$ (26) $\displaystyle i\frac{1}{2}\left\\{\sum_{n}\langle\Phi_{j^{n}}^{b^{n}}|e^{-\hat{T}}\hat{E}^{i^{m}}_{a^{m}}|\Psi_{\rm R}\rangle\dot{\lambda}^{j^{n}}_{b^{n}}+(\dot{\lambda}^{i^{m}}_{a^{m}})^{*}\right\\}=$ $\displaystyle B_{i^{m}}^{a^{m}}-i(\delta_{b^{n}}^{a^{m}}D_{i^{m}}^{j^{n}}-D_{b^{n}}^{a^{m}}\delta_{i^{m}}^{j^{n}})X_{j^{n}}^{b^{n}}-i\sum_{n}\frac{1}{2}A^{a^{m}j^{n}}_{i^{m}b^{n}}(X^{b^{n}}_{j^{n}})^{*}\,.$ (27)

#### II.3.4 TD-OCCX0

TD-OCCX0 can be considered the most straightforward extension of TD-CC when virtual orbitals are considered, owing to the similar parameterization of the CC-amplitude parts and the absence of particle-hole orbital rotations. The EoMs of the single amplitudes of TD-OCCX0 are given by Eq. (9,11). The non-variational treatment of the particle-hole orbital rotations makes the method not fully variational. Therefore, the Ehrenfest theorem must be modified Sato _et al._ (2016) even when frozen cores are excluded, $\displaystyle 2i\frac{d}{dt}\braket{\hat{O}}$ $\displaystyle=\braket{\Psi_{\rm L}}{[\hat{O},\hat{H}]}{\Psi_{\rm R}}+\braket{\Psi_{\rm R}}{[\hat{O},\hat{H}]}{\Psi_{\rm L}}$ (28) $\displaystyle+\sum_{m}O_{t^{m}}^{u^{m}}\\{\braket{\Psi_{\rm L}}{(\hat{H}-i\hat{X})e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}\hat{E}_{t^{m}}^{u^{m}}}{\Psi_{\rm R}}$ $\displaystyle-\braket{\Psi_{\rm L}}{\hat{E}_{t^{m}}^{u^{m}}e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}(\hat{H}-i\hat{X})}{\Psi_{\rm R}}$ $\displaystyle+\braket{\Psi_{\rm R}}{(\hat{H}-i\hat{X})e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}\hat{E}_{t^{m}}^{u^{m}}}{\Psi_{\rm L}}$ $\displaystyle-\braket{\Psi_{\rm R}}{\hat{E}_{t^{m}}^{u^{m}}e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}(\hat{H}-i\hat{X})}{\Psi_{\rm L}}\\}\,,$ where $\bar{\hat{\Pi}}=\hat{I}-\hat{\Pi}$.
Similarly, TD-OCCH is not fully variational either when a TD-OCCX0-type parameterization is adopted, and its modified Ehrenfest theorem (as well as the scenario including frozen cores) will be presented in Appendix D. By comparison, the Ehrenfest theorem holds for the other three methods since they are fully variational. When frozen orbitals are included, the modified Ehrenfest theorems Sato _et al._ (2016) are necessary for all five methods: two additional terms, obtained by replacing $t^{m}$, $u^{m}$ with $i^{\prime\prime m}$, $\mu^{m}$ and with $\mu^{m}$, $i^{\prime\prime m}$ in the last term of Eq. (28), should be added in the case of TD-OCCX0, and one replaces $t^{m}$, $u^{m}$ with $i^{\prime\prime m}$, $\mu^{m}$ and with $\mu^{m}$, $i^{\prime\prime m}$ in the last term of Eq. (28) in the case of TD-BCC, TD-OCC, and TD-OCCT1. The modified Ehrenfest theorem of TD-OCCH with frozen cores will be reported in Appendix D.

### II.4 Discussion

One of the major problems of TD-CC is that the growth of the CC amplitudes can cause numerical instability Pedersen and Kvaal (2019); Kristiansen _et al._ (2020). In all five methods considered in this article, this can be mitigated by rotations of the reference determinants, which can compensate for excitations and reduce the norms of the amplitudes $\tau$ and $\lambda$. However, one must be careful when using TD-OCCX0, since such compensation might be limited when the virtual orbital subspaces are relatively small, due to the absence of particle-hole orbital rotations. Although the absence of particle-hole orbital rotations in TD-OCCX0 poses inconveniences for observable evaluation and numerical stability, it does save computing resources, especially for vibrational systems. The matrix inversions required to solve for $X_{i^{m}}^{a^{m}}$ in the other methods are unnecessary for TD-OCCX0. Comparable methods in this regard are TD-OCCT1, in which $X_{i^{m}}^{a^{m}}$ [Eq. (24)] and $\tau_{i^{m}}^{a^{m}}$ [Eq.
(25)] of different modes do not couple with each other, and TD-OCCH with few fermion kinds using BCC- and OCC-type parameterizations. For details, see Appendix D. In both methods, only several small-dimensional matrix inversions are required, which is much cheaper than the TD-OCC [Eq. (23)] and TD-BCC [Eq. (26,II.3.3)] cases, which require large-dimensional matrix inversions. However, when strong external fields exist, the advantage of TD-OCCX0 and TD-OCCH can be diminished by the unfavorable cost of evaluating position matrix elements Sato _et al._ (2016); Sato, Teramura, and Ishikawa (2018), which will be discussed in Sec. III.1 and Appendix D.

## III Applications

In this section, we discuss three major applications of our methods that cover most chemical systems: electron dynamics, nuclear dynamics, and coupled nuclear-electron dynamics.

### III.1 Electron dynamics in a strong laser field

For a many-electron system in external semi-classical electromagnetic fields, the Hamiltonian in second quantization can be expressed as $\displaystyle\hat{H}(t)=h^{\mu}_{\nu}(t)\hat{E}^{\mu}_{\nu}+\frac{1}{2}g^{\mu\gamma}_{\nu\delta}\hat{E}^{\mu\gamma}_{\nu\delta},$ (29) $\displaystyle h^{\mu}_{\nu}(t)=\int dx\psi^{*}_{\mu}[h_{0}+V_{\rm ext}(t)]\psi_{\nu},$ (30) $\displaystyle g^{\mu\gamma}_{\nu\delta}=\int\int dx_{1}dx_{2}\psi^{*}_{\mu}(x_{1})\psi^{*}_{\gamma}(x_{2})r^{-1}_{12}\psi_{\nu}(x_{1})\psi_{\delta}(x_{2}),$ (31) where $x\\!=\\!(\boldsymbol{r},\sigma)$ labels the spatial-spin coordinate. Here, we drop the kind label for simplicity (although one can treat different spin components as different kinds and use the group excitation truncation scheme to save computational resources). $V_{\rm ext}$ is the electron-laser interaction term, and $h_{0}$ is the one-electron field-free Hamiltonian that contains the kinetic term and the Coulomb terms induced by the fixed nuclei.
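As a quick numerical illustration of Eq. (31), a toy one-dimensional discretization with a soft-Coulomb kernel $1/\sqrt{(x_{1}-x_{2})^{2}+1}$ standing in for $r^{-1}_{12}$ (an assumption of this sketch, as are the two Gaussian-type orbitals) exhibits the permutational symmetry $g^{\mu\gamma}_{\nu\delta}=g^{\gamma\mu}_{\delta\nu}$ that follows from relabeling the integration variables $x_{1}\leftrightarrow x_{2}$:

```python
import math

# Toy 1D grid quadrature of the two-electron integral of Eq. (31).
xs = [-2.0 + 0.25 * i for i in range(17)]
dx = 0.25
orbs = [lambda x: math.exp(-x * x),            # hypothetical real orbitals
        lambda x: x * math.exp(-x * x)]

def g(mu, gam, nu, dlt):
    s = 0.0
    for x1 in xs:
        for x2 in xs:
            s += (orbs[mu](x1) * orbs[gam](x2)
                  * orbs[nu](x1) * orbs[dlt](x2)
                  / math.sqrt((x1 - x2) ** 2 + 1.0))  # soft-Coulomb kernel
    return s * dx * dx

# Relabeling x1 <-> x2 swaps (mu, nu) with (gam, dlt):
assert abs(g(0, 1, 1, 0) - g(1, 0, 0, 1)) < 1e-12
```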
Under the electric dipole approximation, $V_{\rm ext}=\boldsymbol{E}(t)\cdot\boldsymbol{r}$ in the length gauge (LG) or $V_{\rm ext}=-i\boldsymbol{A}(t)\cdot\boldsymbol{\nabla}_{\boldsymbol{r}}$ in the velocity gauge (VG), with $\boldsymbol{E}(t)$ and $\boldsymbol{A}(t)=-\int^{t}dt^{\prime}\boldsymbol{E}(t^{\prime})$ being the electric field and the vector potential, respectively. In principle, the two gauges are equivalent in the basis-set limit (as the number of basis functions tends to infinity), but they are inequivalent in practical numerical simulations due to the different convergence speeds of the basis sets. Empirically, the VG is more suitable for ultrastrong-field scenarios, while the LG is more suitable for other cases Han and Madsen (2010). When methods are fully variational, their gauge invariance is satisfied automatically. Otherwise, the non-variational part of the orbital rotations should be adjusted to ensure gauge invariance. For the methods considered in this article, the frozen-core part of $\hat{X}$ needs to be selected as $X_{\mu}^{i^{\prime\prime}}=i\boldsymbol{E}(t)\cdot\braket{\psi_{i^{\prime\prime}}}{\boldsymbol{r}}{\psi_{\mu}}\,,$ (32) and the particle-hole orbital rotations of TD-OCCX0 should be $X_{i}^{a}=i\boldsymbol{E}(t)\cdot\braket{\psi_{a}}{\boldsymbol{r}}{\psi_{i}}\,,$ (33) in the VG Sato _et al._ (2016). These adjustments are also necessary in many-kind fermion systems, where the kind label should be explicitly included. Specifically, Eq. (33) should be applied to all kinds with TD-OCCX0-type parameterization in the TD-OCCH method; see also Appendix D.

### III.2 Vibrational systems

The application of our methods to vibrational problems is straightforward. A vibrational system with $m$ vibrational modes is isomorphic to an $m$-species fermion system with exactly one fermion occupying each species. We point out that quantum photons, which are essential for molecules in cavities, can also be described in the same framework.
Previous works on the TDVCC families Højlund, Zoccante, and Christiansen (2024); Højlund and Christiansen (2024); Højlund _et al._ (2022); Hansen _et al._ (2019); Madsen _et al._ (2020c) focus mainly on the $[k]$-truncation scheme, which is sufficient for ordinary systems. Nonetheless, for systems with double-well potentials and for vibronic coupling systems (see the next subsection), the weighted or group excitation truncation scheme is necessary for faithful simulations, since the degree of freedom (DOF) with a double-well potential contributes more to the dynamics than the other DOFs. A practical weighted-excitation-truncation strategy for such a system is to set the weight of the DOF with the double-well potential to 0 and those of the other DOFs to 1. Currently, special truncation schemes have been successfully implemented in TDVCC Madsen _et al._ (2020d) and TDMVCC Jensen _et al._ (2023); TDH, but a theoretical discussion of orbital rotation invariance has been lacking. The proof of the intragroup rotation invariance of particle and hole orbitals in Appendix B also holds for the biorthogonal-orbital scenario. Therefore, we show that the weighted excitation truncation scheme can be safely used in the TDVCC families without compromising the intragroup rotation invariance of particle and hole orbitals, which benefits future numerical implementations. Beyond the existing TDVCC families and weighted excitation truncation schemes, our work provides the new ansatz TD-OCCT1, TD-BCC, TD-OCCX0, and TD-OCCH, and the group excitation truncation scheme, as candidates for future implementations. Additionally, although TD-OCCT1 and sTDMVCC are mathematically equivalent, the different constraints on $X_{i^{m}}^{j^{m}}$ and $X_{a^{m}}^{b^{m}}$, as well as the use of orthogonal or nonorthogonal orbitals, might give different performance in practical implementations. Therefore, they should be benchmarked systematically.

### III.3 Vibronic coupling systems

There are two ways to simulate vibronic coupling systems.
Traditionally, the electronic DOF is described as an $S$-level system, so that a vibronic coupling system is isomorphic to a vibrational system. The major differences are: (1) almost always, all $S$ orbitals of the electronic DOF are active in the wavefunction ansatz (no virtual orbitals); (2) the vibrational contributions to the dynamics associated with different electronic levels are comparable Worth and Cederbaum (2004); Köppel, Domcke, and Cederbaum (1984); Bao, Raymond, and Nooijen (2024). These features make TD-OCCX0 and TD-OCC unsuitable for vibronic coupling simulations, and one should also avoid OCCX0- and OCC-type parameterizations for the electronic DOF in TD-OCCH. Additionally, the same weighted excitation truncation scheme as for systems with double-well potentials (proposed in the last subsection) should be used. In recent years, treating the electronic and nuclear DOFs on an equal footing has attracted increasing attention due to advancements in theory Sibaev _et al._ (2020); Muolo _et al._ (2020); Sasmal and Vendrell (2020, 2022); Sasmal, Schröder, and Vendrell (2024). The five CC ansatz and the two special truncation schemes proposed in this article can be regarded as prototypes for future advanced CC simulations.

## IV Summary

We have formulated five new orbital-optimized time-dependent coupled-cluster methods for arbitrary fermion mixtures, distinguished by whether the single amplitudes $\tau_{i^{m}}^{a^{m}}$ and $\lambda_{a^{m}}^{i^{m}}$ and the particle-hole orbital rotations $X_{i^{m}}^{a^{m}}$ are constrained. When one of $\tau_{i^{m}}^{a^{m}}$, $\lambda_{a^{m}}^{i^{m}}$, and $X_{i^{m}}^{a^{m}}$ is constrained for each fermion kind $m$, the method can converge to the CASSCF limit. Two advanced truncation schemes for higher-order amplitudes that maintain the intragroup rotation invariance are introduced. Applications to electronic dynamics, vibrational dynamics, and non-adiabatic dynamics are also discussed.
Our methods are more compact CC-parameterization alternatives to the CI parameterization of the TD-MCSCF method, and would shed light on high-accuracy numerical simulations of large systems. The numerical implementations on the electronic dynamics will be presented in a forthcoming article.

###### Acknowledgements.

This research was supported in part by a Grant-in-Aid for Scientific Research Grant Number JP22H05025 from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. This research was also partially supported by MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0118067246 and JST COI-NEXT Grant Number JPMJPF2221. This work is partly supported by the IBM-UTokyo lab.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

Haifeng Lang: Conceptualization (equal); Methodology (lead); Investigation (lead); Writing – original draft (lead); Writing – review & editing (equal). Takeshi Sato: Conceptualization (equal); Methodology (supporting); Funding acquisition (lead); Writing – review & editing (equal).

## DATA AVAILABILITY

The data that support the findings of this study are available within the article.

## Appendix A Redundancy transfer of $X_{i^{m}}^{a^{m}}$, $\tau_{i^{m}}^{a^{m}}$ and ${\lambda}^{i^{m}}_{a^{m}}$ in the full CC limit

In this section, we will demonstrate that the redundancy of $X_{i^{m}}^{a^{m}}$ can be transferred to $\lambda^{i^{m}}_{a^{m}}$ or $\tau_{i^{m}}^{a^{m}}$, and that the preparation of the exact wavefunction without $\lambda^{i^{m}}_{a^{m}}$ or $\tau_{i^{m}}^{a^{m}}$ is possible in the full CC limit. The analyses of $\lambda^{i^{m}}_{a^{m}}$ and $\tau_{i^{m}}^{a^{m}}$ are extremely similar; for convenience, we will consider the case of all-$\lambda^{i^{m}}_{a^{m}}$ redundancy first and briefly discuss the most general scenario afterwards.
Mathematically, the redundancy of $X_{i^{m}}^{a^{m}}$ means that the coefficient of $\Delta_{i^{m}}^{a^{m}}$ is identically zero, i.e., Eq. (13) is an identity regardless of the choice of $X_{i^{m}}^{a^{m}}$. This can be rigorously proved using the same method as in the next section. Now we assume that the time-dependent trajectories of $\lambda^{i^{m}}_{a^{m}}$ are arbitrarily given. Eq. (11) then becomes a set of equations for $X_{i^{m}}^{a^{m}}$. If the solution of Eq. (11) exists and is unique, Eq. (9,13) should hold due to the redundancy of $X_{i^{m}}^{a^{m}}$. Further, if one treats Eq. (13) as equations for $\dot{\tau}_{i^{m}}^{a^{m}}$ and assumes the solution exists and is unique, the redundancy of $X_{i^{m}}^{a^{m}}$ has been successfully transferred to $\lambda^{i^{m}}_{a^{m}}$. In general, it is very hard to prove the existence and uniqueness of the solution even without the constraint $X_{a^{m}}^{b^{m}}=X_{i^{m}}^{j^{m}}\equiv 0$, but one can expect that the statement is true, at least when the state is dominantly of single-reference character Myhre (2018). To completely exclude $\lambda^{i^{m}}_{a^{m}}$ in the time-dependent full CC expansion, one needs to prepare the initial state with the parameterization $\lambda^{i^{m}}_{a^{m}}(0)=0$ and use the constraint $\dot{\lambda}^{i^{m}}_{a^{m}}\equiv 0$. Here, we provide two possible methods to convert the parameterization of an arbitrary state $\ket{\Psi}$ into the vanishing-$\lambda^{i^{m}}_{a^{m}}$ form. The first is to prepare the initial state as an arbitrary Slater determinant $\ket{\Phi_{0}}$, and let it evolve from $t=0$ to $t=\pi/2$ under the Hamiltonian $\hat{H}=-i(\ket{\Phi_{0}}\bra{\Psi}-\ket{\Psi}\bra{\Phi_{0}})$ with $\dot{\lambda}^{i^{m}}_{a^{m}}\equiv 0$. The second is to let the state $\ket{\Psi}$ in an arbitrary parameterization form evolve from $t=0$ to $t=1$ under the Hamiltonian $\hat{H}=0$ with $\dot{\lambda}^{i^{m}}_{a^{m}}\equiv-\lambda^{i^{m}}_{a^{m}}(0)$.
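As a quick consistency check of the first preparation method (a hedged illustration; we assume for simplicity that $\ket{\Psi}$ is normalized and $\braket{\Phi_{0}}{\Psi}=0$), the dynamics is a two-level rotation:

```latex
\hat{H}\ket{\Phi_{0}}
  = -i\left(\ket{\Phi_{0}}\braket{\Psi}{\Phi_{0}}
            -\ket{\Psi}\braket{\Phi_{0}}{\Phi_{0}}\right)
  = i\ket{\Psi}\,,\qquad
\hat{H}\ket{\Psi} = -i\ket{\Phi_{0}}\,,
```

so $i\partial_{t}\ket{\psi}=\hat{H}\ket{\psi}$ with $\ket{\psi(0)}=\ket{\Phi_{0}}$ is solved by $\ket{\psi(t)}=\cos t\,\ket{\Phi_{0}}+\sin t\,\ket{\Psi}$, which reaches $\ket{\Psi}$ at $t=\pi/2$. The second method is immediate: under $\hat{H}=0$ with $\dot{\lambda}^{i^{m}}_{a^{m}}\equiv-\lambda^{i^{m}}_{a^{m}}(0)$, integrating from $t=0$ to $t=1$ gives $\lambda^{i^{m}}_{a^{m}}(1)=0$.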
Now we are in a position to give the most general redundancy analysis of the parameters. Assuming one of $X_{i^{m}}^{a^{m}}$, $\lambda^{i^{m}}_{a^{m}}$, and $\tau_{i^{m}}^{a^{m}}$ is pre-determined for each kind of fermion, the equations corresponding to the variation coefficients in $\delta S$ should be removed from the coupled equations Eq. (9,11,13). If the solution of the remaining equations exists and is unique, the removed equations are identities and the redundancy of the parameters has been proved. To exclude these redundant parameters in the time-dependent full CC expansion, one may follow the same procedures as for the $\lambda^{i^{m}}_{a^{m}}$-redundancy scenario described above, replacing the corresponding variables with the pre-determined ones (constraints).

## Appendix B Redundancy proof of particle-particle and hole-hole orbital rotations

We will explicitly show the redundancy of $X_{i^{m}}^{j^{m}}$ and $X_{a^{m}}^{b^{m}}$ from $\delta S$. For the scenarios $\mu^{m},\nu^{m}\in\\{i^{m}\\}$ and $\mu^{m},\nu^{m}\in\\{a^{m}\\}$, $\displaystyle\langle\Psi_{\rm L}|[\hat{H}-i(\partial^{\hskip 0.20999pt\prime}_{t}\hat{T})-i\hat{X},\hat{E}^{\mu^{m}}_{\nu^{m}}]|\Psi_{\rm R}\rangle$ $\displaystyle+$ $\displaystyle i\langle\Phi|(\partial^{\hskip 0.20999pt\prime}_{t}\hat{\Lambda})e^{-\hat{T}}\hat{E}^{\mu^{m}}_{\nu^{m}}|\Psi_{\rm R}\rangle$ $\displaystyle=$ $\displaystyle\langle\Psi_{\rm L}|[\hat{H}-i\hat{X},\hat{E}^{\mu^{m}}_{\nu^{m}}]|\Psi_{\rm R}\rangle$ $\displaystyle-$ $\displaystyle i\dot{\tau}_{\mathring{I}}^{\mathring{A}}\langle\Psi_{\rm L}|[\hat{E}_{\mathring{I}}^{\mathring{A}},\hat{E}^{\mu^{m}}_{\nu^{m}}]|\Psi_{\rm R}\rangle+i\dot{\lambda}^{\mathring{I}}_{\mathring{A}}\langle\Phi_{\mathring{I}}^{\mathring{A}}|e^{-\hat{T}}\hat{E}^{\mu^{m}}_{\nu^{m}}|\Psi_{\rm R}\rangle$ (34) $\displaystyle=$ $\displaystyle\braket{\Psi_{\rm L}}{\hat{E}_{\mathring{I}}^{\mathring{A}}(\hat{H}-i\hat{X})}{\Psi_{\rm R}}\langle\Phi_{\mathring{I}}^{\mathring{A}}|e^{-\hat{T}}\hat{E}^{\mu^{m}}_{\nu^{m}}|\Psi_{\rm R}\rangle$ $\displaystyle-$ $\displaystyle\braket{\Phi_{\mathring{I}}^{\mathring{A}}}{e^{-\hat{T}}(\hat{H}-i\hat{X})}{\Psi_{\rm R}}\langle\Psi_{\rm L}|\hat{E}_{\mathring{I}}^{\mathring{A}}\hat{E}^{\mu^{m}}_{\nu^{m}}|\Psi_{\rm R}\rangle\,,$ (35) Here, the time derivatives of all single amplitudes are included in Eq. (34). To convert Eq. (34) to Eq. (35), one needs to replace all EoMs of the CC amplitudes with Eq. (8,9,10,11) and use Eq. (16). The equations are straightforward for TD-OCCX0, and we will explain more on the other methods. We first consider TD-OCCT1. $\lambda_{a^{m}}^{i^{m}}$ in the ansatz should be understood as a constraint $\lambda_{a^{m}}^{i^{m}}\equiv 0$. Therefore, Eq. (34) holds. Noticing that the multiplier of $\dot{\tau}^{a^{m}}_{i^{m}}$ in Eq. (34) vanishes, one can safely replace $\dot{\tau}^{a^{m}}_{i^{m}}$ with Eq. (9). Additionally, Eq. (11) also ensures $\dot{\lambda}_{a^{m}}^{i^{m}}\equiv 0$, and this completes the proof of the equations for TD-OCCT1. For TD-BCC, one just needs to swap $\lambda_{a^{m}}^{i^{m}}$ and $\tau^{a^{m}}_{i^{m}}$ in the proof for TD-OCCT1. For the TD-OCC method, both $\lambda_{a^{m}}^{i^{m}}$ and $\tau^{a^{m}}_{i^{m}}$ should be regarded as constraints and their multipliers in Eq. (34) are always zero, and thus the equations hold. Additionally, the equations of TD-OCCH automatically hold when the equations of the four other methods hold. Notice that $\langle\Phi^{\mathring{B}}_{\mathring{J}}|\hat{E}_{\mathring{I}}^{\mathring{A}}$ is not zero iff $\mathring{A}$ and $\mathring{I}$ are substrings of $\mathring{B}$ and $\mathring{J}$, respectively. For convenience, we use ${\mathring{B}}-{\mathring{A}}$ to express the absolute complementary string of $\mathring{A}$ with respect to $\mathring{B}$. Expanding $\bra{\Psi_{\rm L}}$ in Eq. (35) as $\lambda_{\mathring{B}}^{\mathring{J}}\bra{\Phi^{\mathring{B}}_{\mathring{J}}}$, Eq.
(35) can be converted to $\displaystyle\lambda_{\mathring{B}}^{\mathring{J}}$ $\displaystyle\big{(}\bra{\Phi^{\mathring{B}-\mathring{A}}_{\mathring{J}-\mathring{I}}}(\hat{H}-i\hat{X})\ket{\Psi_{\rm R}}{\bra{\Phi^{\mathring{A}}_{\mathring{I}}}}e^{-\hat{T}}\hat{E}_{\nu^{m}}^{\mu^{m}}\ket{\Psi_{\rm R}}$ (36) $\displaystyle-{\bra{\Phi^{\mathring{A}}_{\mathring{I}}}}(\hat{H}-i\hat{X})\ket{\Psi_{\rm R}}\bra{\Phi^{\mathring{B}-\mathring{A}}_{\mathring{J}-\mathring{I}}}e^{-\hat{T}}\hat{E}_{\nu^{m}}^{\mu^{m}}\ket{\Psi_{\rm R}}\big{)}\,.$ Resumming the second term by swapping ${\mathring{B}}-{\mathring{A}}$, ${\mathring{J}}-{\mathring{I}}$ and ${\mathring{A}}$, ${\mathring{I}}$, one immediately obtains that Eq. (35) is zero, which completes the proof that $X_{i^{m}}^{j^{m}}$ and $X_{a^{m}}^{b^{m}}$ are redundant. Following the same discussion, one can also explicitly show that $\hat{X}$ is redundant for TD-ooCC with $\hat{T}_{1}$ and $\hat{\Lambda}_{1}$ in the CASSCF limit. In this case, it is easy to check that Eq. (34) and Eq. (35) also hold, and applying the same resummation argument as in the proof of the redundancy of $X_{i^{m}}^{j^{m}}$ and $X^{b^{m}}_{a^{m}}$, the proof of the redundancy of $X_{i^{m}}^{a^{m}}$ can be obtained.

## Appendix C OATDCC and OATDCC with split orthogonal and biorthogonal orbitals

Before presenting the equivalence proof of sTDMVCC Højlund and Christiansen (2024) (sOATDCC) and TD-OCCT1, we will briefly review the TDVCC families, including TDVCC Hansen _et al._ (2019), TDMVCC, restricted polar TDMVCC (rpTDMVCC) Højlund _et al._ (2022), orthogonal TDMVCC (oTDMVCC) Højlund, Zoccante, and Christiansen (2024), and sTDMVCC Højlund and Christiansen (2024). First, we will focus on the EoMs and the parameterization of the single amplitudes of the CC coefficients and orbitals. In TDVCC, all orbitals are active, and TD-OCCX0 is equivalent to TDVCC in this case. The fast growth of the norms of the CC amplitudes, arising from the absence of orbital rotations, limits the applications of TDVCC.
TDVCC is the vibrational analog of TD-CC. To resolve this problem, orbital rotations and virtual orbitals are introduced. The other four methods, TDMVCC, rpTDMVCC, sTDMVCC and oTDMVCC, have formally the same (but not equivalent, due to orthogonal vs. biorthogonal orbitals) CC-part parameterizations and EoMs, and their major differences are the parameterization of the orbitals and their EoMs. oTDMVCC is completely identical to the TD-OCC Sato, Teramura, and Ishikawa (2018) of this article, which uses orthogonal orbitals but suffers from a huge matrix inversion and from nonconvergence to the CASSCF. Instead of using orthogonal orbitals, TDMVCC, which is the vibrational analog of OATDCC, solves the matrix-inversion and convergence problems by using biorthogonal orbitals, but the non-orthogonality causes numerical instabilities in long-time simulations. One can force the active orbitals to lie in conjugate ket and bra spaces to mitigate the instabilities, which is implemented by using orthogonal virtual orbitals and biorthogonal active orbitals in rpTDMVCC and sTDMVCC. The EoMs of the active-virtual orbital rotations are determined by the variational principle and by projection in sTDMVCC and rpTDMVCC, respectively, while the EoMs of the CC amplitudes and particle-hole rotations of the two methods are completely identical. In fact, the contribution of the biorthogonal active orbitals is equivalent to the one-rank excitation in TD-OCCT1, and the TDVP ensures the equivalence of TD-OCCT1 and sTDMVCC. For convenience, we only consider models without a dynamical core in this Appendix. For models with a dynamical core, the treatment of the dynamical-core orbitals is almost identical to that of the virtual orbitals, except that they should be evolved explicitly. We will first review the standard OATDCC, which uses biorthogonal bra and ket orbitals rather than the orthogonal orbitals of TD-OCC.
The highly flexible variational space choices could cause serious numerical instabilities, arising from the different spaces spanned by the ket and bra active orbitals, in practical implementations Højlund _et al._ (2022). Next, we will review sOATDCC, i.e., OATDCC in which the ket and bra virtual orbitals are orthogonal in the Lagrangian and the active orbitals are biorthogonal. Such a parameterization forces the spaces spanned by the ket and bra active orbitals to be identical, which eliminates the numerical instability. The modifications only appear in the intergroup rotation between active orbitals and other types of orbitals. The OATDCC and sOATDCC reviewed in this Appendix describe fermion mixtures with arbitrary occupations, which is more general than the results in the references. When sOATDCC is applied to the vibrational problem, it becomes sTDMVCC and inherits the advantages of TDMVCC Madsen _et al._ (2020c); Højlund, Zoccante, and Christiansen (2024): the EoMs of different vibrational modes are separate, and the method converges to the CASSCF. In contrast to rpTDMVCC Højlund _et al._ (2022), this approach is fully variational. We further prove that this approach is equivalent to TD-OCCT1. We define $\ket{\phi_{\mu^{m}}}$ and $\bra{\tilde{\phi}_{\mu^{m}}}$ as the biorthogonal bases of OATDCC, and their corresponding creation (annihilation) operators are denoted as $\hat{a}_{\mu^{m}}^{\dagger}$ ($\hat{a}_{\mu^{m}}$) and $\tilde{\hat{a}}_{\mu^{m}}^{\dagger}$ ($\tilde{\hat{a}}_{\mu^{m}}$). They satisfy the following relations, $\displaystyle\braket{\tilde{\phi}_{\mu^{m}}}{\phi_{\nu^{m}}}=\delta_{\mu^{m}\nu^{m}}\,,$ (37) $\displaystyle\\{\hat{a}_{\nu^{m}}^{\dagger},\tilde{\hat{a}}_{\mu^{m}}\\}=\braket{\tilde{\phi}_{\mu^{m}}}{\phi_{\nu^{m}}}=\delta_{\mu^{m}\nu^{m}}\,.$ (38) In the standard OATDCC, $\hat{a}_{\mu^{m}}$ and $\tilde{\hat{a}}_{\mu^{m}}^{\dagger}$ are never used; nonetheless, they play important roles in sOATDCC.
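Numerically, the biorthogonality condition Eq. (37) amounts to choosing the bra-orbital matrix as the inverse of the ket-orbital matrix. The following minimal sketch (in a finite primitive basis; matrix names are ours) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of C are the ket orbitals |phi_mu> expanded in a primitive basis;
# a random matrix is almost surely invertible.
C = rng.standard_normal((4, 4))

# Rows of C_tilde are the bra orbitals <tilde-phi_mu|; choosing
# C_tilde = C^{-1} enforces <tilde-phi_mu|phi_nu> = delta_{mu nu}, Eq. (37).
C_tilde = np.linalg.inv(C)

overlap = C_tilde @ C  # matrix of <tilde-phi_mu|phi_nu>
assert np.allclose(overlap, np.eye(4))
```

Note that the ket orbitals themselves need not be mutually orthogonal; only the mixed bra-ket overlaps form an identity.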
We also introduce the operator $\tilde{\hat{E}}^{\mu_{1}^{m_{1}}\mu_{2}^{m_{2}}\mu_{3}^{m_{3}}\cdots}_{\nu_{1}^{m_{1}}\nu_{2}^{m_{2}}\nu_{3}^{m_{3}}\cdots}=\hat{a}^{\dagger}_{\mu_{1}^{m_{1}}}\hat{a}^{\dagger}_{\mu_{2}^{m_{2}}}\hat{a}^{\dagger}_{\mu_{3}^{m_{3}}}\cdots\tilde{\hat{a}}_{\nu_{3}^{m_{3}}}\tilde{\hat{a}}_{\nu_{2}^{m_{2}}}\tilde{\hat{a}}_{\nu_{1}^{m_{1}}}$, and, similarly, all other quantities in OATDCC are denoted as the corresponding quantities in TD-OCC with tildes. $m_{1},m_{2},m_{3},\cdots$ can be the same index of fermion kind. The ket and bra reference wavefunctions of OATDCC are tensor products of Slater determinants of $\ket{\phi_{\mu^{m}}}$ and $\bra{\tilde{\phi}_{\mu^{m}}}$ for all $m$, denoted as $\ket{{\Phi}}$ and $\bra{\tilde{\Phi}}$, respectively. Wavefunctions, their time derivatives and variations, as well as the Lagrangian in OATDCC are formally identical to the TD-OCC correspondence with tilde replacements. Due to the loss of orthogonality, $(\tilde{\hat{E}}_{I}^{A})^{\dagger}\neq\tilde{\hat{E}}_{A}^{I}$, $(\tilde{X}^{\mu^{m}}_{\nu^{m}})^{*}\neq\tilde{X}_{\mu^{m}}^{\nu^{m}}$, and $(\tilde{\Delta}^{\mu^{m}}_{\nu^{m}})^{*}\neq\tilde{\Delta}_{\mu^{m}}^{\nu^{m}}$ in general. Also, the anti-Hermiticity of $\tilde{\hat{X}}$ and $\tilde{\hat{\Delta}}$ cannot be expected. Therefore, all variables and their complex conjugates are independent quantities in OATDCC.
The action of OATDCC is a complex functional $\displaystyle\tilde{S}$ $\displaystyle=$ $\displaystyle\int_{t_{0}}^{t_{1}}dt\tilde{L}\,,$ (39) and we require the action to be stationary $\displaystyle\delta\tilde{S}$ $\displaystyle=$ $\displaystyle\langle\delta\tilde{\Psi}_{\rm L}|\hat{H}|\tilde{\Psi}_{\rm R}\rangle+\langle\tilde{\Psi}_{\rm L}|\hat{H}|\delta\tilde{\Psi}_{\rm R}\rangle$ (40) $\displaystyle-$ $\displaystyle i\langle\delta\tilde{\Psi}_{\rm L}|\dot{\tilde{\Psi}}_{\rm R}\rangle+i\langle\dot{\tilde{\Psi}}_{\rm L}|\delta\tilde{\Psi}_{\rm R}\rangle=0\,.$ The non-existence of complex-conjugate variables in the OATDCC action makes the action complex analytic and ensures the existence of the solution. The even complex dimension of the variational manifold of OATDCC ensures the uniqueness of the solution. OATDCC converges to the CASSCF when all excitation and deexcitation operators except for the single excitation and deexcitation ones are included. We also point out that the real-action choice will lead to the same solutions due to the independence of all variables and their complex conjugates.
Working out the variations explicitly gives $\displaystyle\delta\tilde{S}$ $\displaystyle=$ $\displaystyle\delta\tilde{\tau}^{A}_{I}\left\\{\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{A}_{I}]|\tilde{\Psi}_{\rm R}\rangle+i\dot{\tilde{\lambda}}^{I}_{A}\right\\}+\delta\tilde{\lambda}^{I}_{A}\left\\{\langle\tilde{\Phi}^{A}_{I}|e^{-\tilde{\hat{T}}}(\hat{H}-i\tilde{\hat{X}})e^{\tilde{\hat{T}}}|\tilde{\Phi}\rangle-i\dot{\tilde{\tau}}^{A}_{I}\right\\}$ (41) $\displaystyle+$ $\displaystyle\tilde{\Delta}^{\mu^{m}}_{\nu^{m}}\left\\{\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i(\partial^{\hskip 0.20999pt\prime}_{t}\tilde{\hat{T}})-i\tilde{\hat{X}},\tilde{\hat{E}}^{\mu^{m}}_{\nu^{m}}]|\tilde{\Psi}_{\rm R}\rangle+i\langle\tilde{\Phi}|(\partial^{\hskip 0.20999pt\prime}_{t}\tilde{\hat{\Lambda}})e^{-\tilde{\hat{T}}}\tilde{\hat{E}}^{\mu^{m}}_{\nu^{m}}e^{\tilde{\hat{T}}}|\tilde{\Phi}\rangle\right\\}\,,$ EoMs of the CC coefficients, $\displaystyle i\dot{\tilde{\tau}}^{A}_{I}$ $\displaystyle=$ $\displaystyle\langle\tilde{\Phi}_{I}^{A}|e^{-\tilde{\hat{T}}}(\hat{H}-i\tilde{\hat{X}})e^{\tilde{\hat{T}}}|\tilde{\Phi}\rangle\,,$ (42) $\displaystyle-i\dot{\tilde{\lambda}}^{I}_{A}$ $\displaystyle=$ $\displaystyle\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{A}_{I}]|\tilde{\Psi}_{\rm R}\rangle\,,$ (43) and EoMs of the orbital rotations, $\displaystyle\bra{\tilde{\Psi}_{\rm L}}[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{\alpha^{m}}_{p^{m}}]\ket{\tilde{\Psi}_{\rm R}}=0\,,$ (44) $\displaystyle\bra{\tilde{\Psi}_{\rm L}}[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}_{\alpha^{m}}^{p^{m}}]\ket{\tilde{\Psi}_{\rm R}}=0\,,$ (45) $\displaystyle\bra{\tilde{\Psi}_{\rm L}}[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}_{i^{m}}^{a^{m}}]\ket{\tilde{\Psi}_{\rm R}}=0\,,$ (46) $\displaystyle\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i(\partial^{\hskip 0.20999pt\prime}_{t}\tilde{\hat{T}})-i\tilde{\hat{X}},\tilde{\hat{E}}_{a^{m}}^{i^{m}}]|\tilde{\Psi}_{\rm R}\rangle$ $\displaystyle+$
$\displaystyle i\langle\tilde{\Phi}|(\partial^{\hskip 0.20999pt\prime}_{t}\tilde{\hat{\Lambda}})e^{-\tilde{\hat{T}}}\tilde{\hat{E}}_{a^{m}}^{i^{m}}e^{\tilde{\hat{T}}}|\tilde{\Phi}\rangle=0\,,$ (47) can be obtained. Similar to the TD-OCC scenario, all intragroup rotations are redundant and thus should be pre-selected, for instance by setting all the redundant rotations to zero. In principle, $\\{\ket{\phi_{p^{m}}}\\}$ and $\\{\ket{\tilde{\phi}_{p^{m}}}\\}$ can span different spaces, which can make the propagation unstable. To resolve this problem, we adopt the real action with the constraint that the virtual orbitals are orthogonal, $\displaystyle S_{s}$ $\displaystyle=$ $\displaystyle\frac{1}{2}\int_{t_{0}}^{t_{1}}dt\left(\tilde{L}+\tilde{L}^{*}\right),$ (48) We still use the notations of OATDCC in sOATDCC when there is no ambiguity. In this case, the corresponding ket and bra virtual orbitals are identical, $\ket{\phi_{\alpha^{m}}}^{*}=\bra{\tilde{\phi}_{\alpha^{m}}}$, $\hat{a}_{\alpha^{m}}=\tilde{\hat{a}}_{\alpha^{m}}$ and $\hat{a}_{\alpha^{m}}^{\dagger}=\tilde{\hat{a}}_{\alpha^{m}}^{\dagger}$. Therefore, $\\{\ket{\phi_{p^{m}}}\\}$ and $\\{\ket{\tilde{\phi}_{p^{m}}}\\}$ span the same space, and the following relations hold, $\displaystyle\ket{\tilde{\phi}_{p^{m}}}=\braket{\tilde{\phi}_{q^{m}}}{\tilde{\phi}_{p^{m}}}\ket{\phi_{q^{m}}}\,,\quad\tilde{\hat{a}}_{p^{m}}^{\dagger}=\braket{\tilde{\phi}_{q^{m}}}{\tilde{\phi}_{p^{m}}}\hat{a}_{q^{m}}^{\dagger}\,,$ (49) $\displaystyle\bra{\phi_{p^{m}}}=\bra{\tilde{\phi}_{q^{m}}}\braket{\phi_{p^{m}}}{\phi_{q^{m}}}\,,\quad\hat{a}_{p^{m}}=\braket{\phi_{p^{m}}}{\phi_{q^{m}}}\tilde{\hat{a}}_{q^{m}}\,.$ The constraints also force $\ket{\delta\phi_{\alpha^{m}}}\equiv\ket{\delta\tilde{\phi}_{\alpha^{m}}}$ and $\ket{\dot{\phi}_{\alpha^{m}}}\equiv\ket{\dot{\tilde{\phi}}_{\alpha^{m}}}$.
Using the above relations, one finds that $(\tilde{X}^{\alpha^{m}}_{p^{m}})^{*}$ and $\tilde{X}_{\alpha^{m}}^{p^{m}}$, as well as $(\tilde{\Delta}^{\alpha^{m}}_{q^{m}})^{*}$ and $\tilde{\Delta}_{\alpha^{m}}^{q^{m}}$, are not independent, $\displaystyle(\tilde{X}^{\alpha^{m}}_{p^{m}})^{*}=-\tilde{X}_{\alpha^{m}}^{q^{m}}\braket{\phi_{p^{m}}}{\phi_{q^{m}}}\,,$ (50) $\displaystyle\tilde{X}_{\alpha^{m}}^{p^{m}}=-(\tilde{X}^{\alpha^{m}}_{q^{m}})^{*}\braket{\tilde{\phi}_{p^{m}}}{\tilde{\phi}_{q^{m}}}\,,$ (51) $\displaystyle(\tilde{\Delta}^{\alpha^{m}}_{p^{m}})^{*}=-\tilde{\Delta}_{\alpha^{m}}^{q^{m}}\braket{\phi_{p^{m}}}{\phi_{q^{m}}}\,,$ (52) $\displaystyle\tilde{\Delta}_{\alpha^{m}}^{p^{m}}=-(\tilde{\Delta}^{\alpha^{m}}_{q^{m}})^{*}\braket{\tilde{\phi}_{p^{m}}}{\tilde{\phi}_{q^{m}}}\,,$ (53) $\displaystyle(\tilde{X}^{\alpha^{m}}_{p^{m}}\tilde{\hat{E}}^{\alpha^{m}}_{p^{m}})^{\dagger}=-\tilde{X}_{\alpha^{m}}^{p^{m}}\tilde{\hat{E}}_{\alpha^{m}}^{p^{m}}\,,$ (54) $\displaystyle(\tilde{\Delta}^{\alpha^{m}}_{p^{m}}\tilde{\hat{E}}^{\alpha^{m}}_{p^{m}})^{\dagger}=-\tilde{\Delta}_{\alpha^{m}}^{p^{m}}\tilde{\hat{E}}_{\alpha^{m}}^{p^{m}}\,.$ (55) Therefore, the explicit form of the variation with independent variational variables is $\displaystyle 2\delta{S}_{s}$ $\displaystyle=$ $\displaystyle\delta\tilde{\tau}^{A}_{I}\left\\{\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{A}_{I}]|\tilde{\Psi}_{\rm R}\rangle+i\dot{\tilde{\lambda}}^{I}_{A}\right\\}+\delta\tilde{\lambda}^{I}_{A}\left\\{\langle\tilde{\Phi}^{A}_{I}|e^{-\tilde{\hat{T}}}(\hat{H}-i\tilde{\hat{X}})e^{\tilde{\hat{T}}}|\tilde{\Phi}\rangle-i\dot{\tilde{\tau}}^{A}_{I}\right\\}+c.c.$ (56) $\displaystyle+$ $\displaystyle\tilde{\Delta}^{p^{m}}_{q^{m}}\left\\{\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i(\partial^{\hskip 0.20999pt\prime}_{t}\tilde{\hat{T}})-i\tilde{\hat{X}},\tilde{\hat{E}}^{p^{m}}_{q^{m}}]|\tilde{\Psi}_{\rm R}\rangle+i\langle\tilde{\Phi}|(\partial^{\hskip
0.20999pt\prime}_{t}\tilde{\hat{\Lambda}})e^{-\tilde{\hat{T}}}\tilde{\hat{E}}^{p^{m}}_{q^{m}}e^{\tilde{\hat{T}}}|\tilde{\Phi}\rangle\right\\}\,+c.c.$ $\displaystyle+$ $\displaystyle\tilde{\Delta}^{p^{m}}_{\alpha^{m}}\left\\{\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{p^{m}}_{\alpha^{m}}]|\tilde{\Psi}_{\rm R}\rangle-\braket{\phi_{q^{m}}}{\phi_{p^{m}}}\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}_{q^{m}}^{\alpha^{m}}]|\tilde{\Psi}_{\rm R}\rangle^{*}\right\\}$ $\displaystyle+$ $\displaystyle\tilde{\Delta}_{p^{m}}^{\alpha^{m}}\left\\{\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}_{p^{m}}^{\alpha^{m}}]|\tilde{\Psi}_{\rm R}\rangle-\braket{\tilde{\phi}_{p^{m}}}{\tilde{\phi}_{q^{m}}}\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{q^{m}}_{\alpha^{m}}]|\tilde{\Psi}_{\rm R}\rangle^{*}\right\\}\,.$ Compared with Eq. (41), it is clear that the constraints do not change the EoMs of the coefficients and of the intergroup orbital rotations between particle and hole orbitals. The variations with respect to $\Delta_{\alpha^{m}}^{p^{m}}$ give the EoMs of the intergroup rotations between active and virtual orbitals $\displaystyle\bra{\tilde{\Psi}_{\rm L}}[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}_{\alpha^{m}}^{p^{m}}]\ket{\tilde{\Psi}_{\rm R}}+\bra{\tilde{\Psi}_{\rm R}}[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}_{\alpha^{m}}^{p^{m}}]\ket{\tilde{\Psi}_{\rm L}}=0\,,$ (57) $\displaystyle\bra{\tilde{\Psi}_{\rm L}}[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{\alpha^{m}}_{p^{m}}]\ket{\tilde{\Psi}_{\rm R}}+\bra{\tilde{\Psi}_{\rm R}}[\hat{H}-i\tilde{\hat{X}},\tilde{\hat{E}}^{\alpha^{m}}_{p^{m}}]\ket{\tilde{\Psi}_{\rm L}}=0\,.$ (58) sOATDCC is equivalent to TD-OCCT1. To see this, one just needs to prove that their manifolds are identical up to normalization factors, which corresponds to a point transformation in Lagrangian mechanics Goldstein (1980).
More explicitly speaking, for any states $\ket{\tilde{\Psi}_{\rm R}}$ and $\bra{\tilde{\Psi}_{\rm L}}$ parameterized via the sOATDCC formalism, one can always find states parameterized via the TD-OCCT1 formalism, $\ket{\Psi_{\rm R}}$ and $\bra{\Psi_{\rm L}}$, that satisfy $\ket{\tilde{\Psi}_{\rm R}}=A\ket{\Psi_{\rm R}}$ and $\bra{\tilde{\Psi}_{\rm L}}=A^{-1}\bra{\Psi_{\rm L}}$, and vice versa. Here, $A$ is a normalization factor and only gives a trivial total derivative in the Lagrangian. In both transformations, one can always choose $\ket{\psi_{\alpha}}\equiv\ket{\phi_{\alpha}}$, and they only appear in the time derivatives of the wavefunctions. We will first consider converting TD-OCCT1 to sOATDCC. The transformation is given by $\displaystyle\ket{\phi_{i^{m}}}=\ket{\psi_{i^{m}}}+\tau_{i^{m}}^{a^{m}}\ket{\psi_{a^{m}}}\,,\quad\hat{a}^{\dagger}_{i^{m}}=\hat{c}_{i^{m}}^{\dagger}+\tau_{i^{m}}^{a^{m}}\hat{c}^{\dagger}_{a^{m}}\,,$ (59) $\displaystyle\bra{\tilde{\phi}_{a^{m}}}=\bra{\psi_{a^{m}}}-\tau_{i^{m}}^{a^{m}}\bra{\psi_{i^{m}}}\,,\quad\tilde{\hat{a}}_{a^{m}}=\hat{c}_{a^{m}}-\tau_{i^{m}}^{a^{m}}\hat{c}_{i^{m}}\,,$ (60) $\displaystyle\ket{\phi_{a^{m}}}=\ket{\psi_{a^{m}}}\,,\quad\hat{a}^{\dagger}_{a^{m}}=\hat{c}_{a^{m}}^{\dagger}\,,$ (61) $\displaystyle\bra{\tilde{\phi}_{i^{m}}}=\bra{\psi_{i^{m}}}\,,\quad\tilde{\hat{a}}_{i^{m}}=\hat{c}_{i^{m}}\,,$ (62) $\displaystyle\tilde{\tau}_{I}^{A}=\tau_{I}^{A}\,,\quad\tilde{\lambda}_{A}^{I}=\lambda_{A}^{I}\,,\quad\ket{\tilde{\Psi}_{\rm R}}=\ket{\Psi_{\rm R}}\,,\quad\bra{\tilde{\Psi}_{\rm L}}=\bra{\Psi_{\rm L}}\,.$ (63) These identities are ensured by the invariance of the particle-hole orbital rotation in the OATDCC parameterization: $\displaystyle\hat{a}_{p^{m}}^{\dagger}\rightarrow e^{\tilde{\hat{T}}_{1}}\hat{a}_{p^{m}}^{\dagger}e^{-\tilde{\hat{T}}_{1}}\,,\quad\tilde{\hat{a}}_{p^{m}}\rightarrow e^{\tilde{\hat{T}}_{1}}\tilde{\hat{a}}_{p^{m}}e^{-\tilde{\hat{T}}_{1}}\,,$ (64)
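One can verify numerically that the $T_{1}$-dressed orbitals of Eq. (59)–(62) indeed form a biorthogonal set, i.e., that Eq. (37) holds after the transformation. The sketch below uses random single amplitudes; dimensions and variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n_occ, n_vir = 2, 3
n = n_occ + n_vir

psi = np.eye(n)                              # orthonormal |psi_p>, as columns
tau = rng.standard_normal((n_occ, n_vir))    # single amplitudes tau_i^a

# Ket orbitals, Eq. (59) and (61):
#   |phi_i> = |psi_i> + tau_i^a |psi_a>,   |phi_a> = |psi_a>
phi = psi.copy()
phi[:, :n_occ] += psi[:, n_occ:] @ tau.T

# Bra orbitals, Eq. (60) and (62), as rows:
#   <phi~_a| = <psi_a| - tau_i^a <psi_i|,  <phi~_i| = <psi_i|
phi_t = psi.T.copy()
phi_t[n_occ:, :] -= tau.T @ psi.T[:n_occ, :]

# Biorthogonality, Eq. (37): <phi~_mu|phi_nu> = delta_{mu nu}
assert np.allclose(phi_t @ phi, np.eye(n))
```

The cancellation in the virtual-occupied block, $\tau_{i}^{a}-\tau_{i}^{a}=0$, is exactly what makes the $T_{1}$ dressing of the kets compatible with the counter-dressing of the bras.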
$\displaystyle\bra{\tilde{\Phi}}\rightarrow\bra{\tilde{\Phi}}e^{-\tilde{\hat{T}}_{1}}=\bra{\tilde{\Phi}}\,,\quad\ket{{\Phi}}\rightarrow e^{\tilde{\hat{T}}_{1}}\ket{{\Phi}}\,,$ $\displaystyle\bra{\tilde{\Psi}_{\rm L}}\rightarrow\bra{\tilde{\Phi}}(1+e^{\tilde{\hat{T}}_{1}}\tilde{\hat{\Lambda}}e^{-\tilde{\hat{T}}_{1}})e^{-\tilde{\hat{T}}+\tilde{\hat{T}}_{1}}=\bra{\tilde{\Psi}_{\rm L}}\,,$ $\displaystyle\ket{\tilde{\Psi}_{\rm R}}\rightarrow e^{\tilde{\hat{T}}-\tilde{\hat{T}}_{1}}e^{\tilde{\hat{T}}_{1}}\ket{\tilde{\Phi}}=\ket{\tilde{\Psi}_{\rm R}}\,,$ where $\tilde{\hat{T}}_{1}=\tilde{\tau}_{i^{m}}^{a^{m}}\tilde{\hat{E}}_{i^{m}}^{a^{m}}$ is an arbitrary single excitation operator. Such a rotation invariance causes the $\tilde{\hat{T}}_{1}$ redundancy in OATDCC and has been extensively discussed in the supplementary material of the original OATDCC paper Kvaal (2012). Converting sOATDCC wavefunctions to their TD-OCCT1 correspondence requires two steps. The first step is to orthonormalize $\ket{\phi_{a^{m}}}$ and $\bra{\tilde{\phi}_{i^{m}}}$, $\ket{\phi_{a^{m}}^{\prime}}=G_{b^{m}}^{a^{m}}\ket{\phi_{b^{m}}}\,,\quad\bra{\tilde{\phi}_{i^{m}}^{\prime}}=\tilde{G}_{j^{m}}^{i^{m}}\bra{\tilde{\phi}_{j^{m}}}\,,$ (65) where $\ket{\phi_{a^{m}}^{\prime}}$ and $\bra{\tilde{\phi}_{i^{m}}^{\prime}}$ are orthonormalized vectors, and $G$ and $\tilde{G}$ are transformation matrices. We also denote quantities after the orthonormalization as the original quantities with primes. Dual vectors of $\ket{\phi_{a^{m}}^{\prime}}$ and $\bra{\tilde{\phi}_{i^{m}}^{\prime}}$ are $\ket{\phi_{i^{m}}^{\prime}}=(\tilde{G}^{-1})_{i^{m}}^{j^{m}}\ket{\phi_{j^{m}}}\,,\quad\bra{\tilde{\phi}_{a^{m}}^{\prime}}=({G}^{-1})_{a^{m}}^{b^{m}}\bra{\tilde{\phi}_{b^{m}}}\,.$ (66) We also define the general-rank transformations of $G$ and $\tilde{G}$ as $G_{A}^{B}$ and $\tilde{G}_{I}^{J}$, which are the tensor products of the single-rank transformations $G_{a^{m}}^{b^{m}}$ and $\tilde{G}_{i^{m}}^{j^{m}}$, respectively.
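The orthonormalization in the first step, Eq. (65), can be realized, for instance, by a Löwdin (symmetric) transformation $G=S^{-1/2}$, with $S$ the overlap matrix of the $\ket{\phi_{a^{m}}}$. This is one possible choice among many; the sketch below and its names are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

# Columns of V: linearly independent but non-orthonormal orbitals |phi_a>,
# expanded in a 6-dimensional primitive basis.
V = rng.standard_normal((6, 3))

# Loewdin symmetric orthonormalization: G = S^{-1/2}, |phi'_a> = G_b^a |phi_b>
S = V.T @ V                      # overlap matrix <phi_a|phi_b>
s, U = np.linalg.eigh(S)         # S is symmetric positive definite
G = U @ np.diag(s ** -0.5) @ U.T

V_prime = V @ G                  # columns are the orthonormalized |phi'_a>
assert np.allclose(V_prime.T @ V_prime, np.eye(3))
```

The new overlap is $G^{\mathsf{T}} S G = S^{-1/2} S S^{-1/2} = \mathbb{1}$, confirming orthonormality; any other invertible $G$ with this property (e.g. from a QR decomposition) would serve equally well.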
Wavefunctions as well as the relevant quantities under such a transformation are given by $\displaystyle\tilde{\hat{E}}_{I}^{A}\rightarrow(\tilde{\hat{E}}_{I}^{A})^{\prime}=G^{A}_{B}\tilde{\hat{E}}_{J}^{B}\tilde{G}^{I}_{J}\,,$ (67) $\displaystyle\tilde{\hat{E}}_{A}^{I}\rightarrow(\tilde{\hat{E}}_{A}^{I})^{\prime}=(\tilde{G}^{-1})_{I}^{J}\tilde{\hat{E}}_{B}^{J}({G}^{-1})_{A}^{B}\,,$ $\displaystyle\tilde{{\tau}}_{I}^{A}\rightarrow(\tilde{\tau}_{I}^{A})^{\prime}=(G^{-1})_{A}^{B}\tilde{\tau}_{J}^{B}(\tilde{G}^{-1})_{I}^{J}\,,$ $\displaystyle\tilde{{\lambda}}_{A}^{I}\rightarrow(\tilde{\lambda}_{A}^{I})^{\prime}=\tilde{G}^{I}_{J}\tilde{\lambda}_{B}^{J}{G}_{B}^{A}\,,$ $\displaystyle\ket{\tilde{\Psi}_{\rm R}}\rightarrow\ket{\tilde{\Psi}_{\rm R}^{\prime}}=\prod_{m}(\det{(\tilde{G}_{j^{m}}^{i^{m}})})^{-1}\ket{\tilde{\Psi}_{\rm R}}\,,$ $\displaystyle\bra{\tilde{\Psi}_{\rm L}}\rightarrow\bra{\tilde{\Psi}_{\rm L}^{\prime}}=\prod_{m}\det{(\tilde{G}_{j^{m}}^{i^{m}})}\bra{\tilde{\Psi}_{\rm L}}\,.$ In fact, if one replaces the single-rank $G$ and $\tilde{G}$ with arbitrary invertible matrices, the above transformation invariance still holds, and that is the reason for the redundancy of $\tilde{X}_{i^{m}}^{j^{m}}$ and $\tilde{X}_{a^{m}}^{b^{m}}$ Kvaal (2012).
The second step is to express $\ket{{\Psi}_{\rm R}}$ and $\bra{{\Psi}_{\rm L}}$ via $\ket{\tilde{\Psi}_{\rm R}^{\prime}}$ and $\bra{\tilde{\Psi}_{\rm L}^{\prime}}$, $\displaystyle\ket{\psi_{a^{m}}}=\ket{\phi_{a^{m}}}\,,\quad\hat{c}_{a^{m}}^{\dagger}=(\hat{a}^{\dagger}_{a^{m}})^{\prime}\,,$ (68) $\displaystyle\bra{\psi_{i^{m}}}=\bra{\tilde{\phi}_{i^{m}}}\,,\quad\hat{c}_{i^{m}}=(\tilde{\hat{a}}_{i^{m}})^{\prime}\,,$ (69) $\displaystyle\tau_{I}^{A}=(\tilde{\tau}_{I}^{A})^{\prime}\,,\quad\lambda_{A}^{I}=(\tilde{\lambda}_{A}^{I})^{\prime}\,,\quad\tau_{i^{m}}^{a^{m}}=\braket{\phi_{a^{m}}^{\prime}}{\phi_{i^{m}}^{\prime}}\,,$ (70) $\displaystyle\ket{\Psi_{\rm R}}=\ket{\tilde{\Psi}_{\rm R}^{\prime}}\,,\quad\bra{\Psi_{\rm L}}=\bra{\tilde{\Psi}_{\rm L}^{\prime}}\,.$ (71) Now we have finished the proof of the equivalence of sOATDCC and TD-OCCT1. Although it is unnecessary for the proof of the equivalence of the two methods, we will present the explicit form of the transformation and show that the EoMs of the two methods coincide for the scenario of converting TD-OCCT1 to sOATDCC, since this gives readers a straightforward understanding.
Useful transformations are $\displaystyle\tilde{\hat{E}}_{i^{m}}^{a^{m}}=\hat{E}_{i^{m}}^{a^{m}}\,,\quad\tilde{X}_{i^{m}}^{a^{m}}=X_{i^{m}}^{a^{m}}+\dot{\tau}_{i^{m}}^{a^{m}}-\tau_{j^{m}}^{a^{m}}\tau_{i^{m}}^{b^{m}}X^{j^{m}}_{b^{m}}\,,$ (72) $\displaystyle\tilde{\hat{E}}_{i^{m}}^{j^{m}}=\hat{E}_{i^{m}}^{j^{m}}+\tau_{j^{m}}^{a^{m}}\hat{E}_{i^{m}}^{a^{m}}\,,\quad\tilde{X}_{i^{m}}^{j^{m}}=X_{a^{m}}^{j^{m}}\tau_{i^{m}}^{a^{m}}\,,$ $\displaystyle\tilde{\hat{E}}_{a^{m}}^{b^{m}}=\hat{E}_{a^{m}}^{b^{m}}-\tau_{i^{m}}^{a^{m}}\hat{E}_{i^{m}}^{b^{m}}\,,\quad\tilde{X}_{a^{m}}^{b^{m}}=-\tau_{i^{m}}^{b^{m}}X^{i^{m}}_{a^{m}}\,,$ $\displaystyle\tilde{\hat{E}}^{i^{m}}_{a^{m}}=\hat{E}^{i^{m}}_{a^{m}}-\tau_{j^{m}}^{a^{m}}\hat{E}_{j^{m}}^{i^{m}}+\tau_{i^{m}}^{b^{m}}\hat{E}_{a^{m}}^{b^{m}}-\tau_{j^{m}}^{a^{m}}\tau_{i^{m}}^{b^{m}}\hat{E}_{j^{m}}^{b^{m}}\,,$ $\displaystyle\tilde{\hat{E}}_{\alpha^{m}}^{i^{m}}=\hat{E}_{\alpha^{m}}^{i^{m}}+\tau_{i^{m}}^{a^{m}}\hat{E}_{\alpha^{m}}^{a^{m}}\,,\quad\tilde{X}_{\alpha^{m}}^{i^{m}}=X_{\alpha^{m}}^{i^{m}}\,,\quad\tilde{X}^{i^{m}}_{a^{m}}=X^{i^{m}}_{a^{m}}\,,$ $\displaystyle\tilde{\hat{E}}_{\alpha^{m}}^{a^{m}}=\hat{E}_{\alpha^{m}}^{a^{m}}\,,\quad\tilde{X}_{\alpha^{m}}^{a^{m}}=X_{\alpha^{m}}^{a^{m}}-\tau_{i^{m}}^{a^{m}}{X}_{\alpha^{m}}^{i^{m}}\,,$ $\displaystyle\tilde{\hat{E}}^{\alpha^{m}}_{i^{m}}=\hat{E}^{\alpha^{m}}_{i^{m}}\,,\quad\tilde{X}^{\alpha^{m}}_{i^{m}}=X^{\alpha^{m}}_{i^{m}}+\tau_{i^{m}}^{a^{m}}{X}^{\alpha^{m}}_{a^{m}}\,,$ $\displaystyle\tilde{\hat{E}}^{\alpha^{m}}_{a^{m}}=\hat{E}^{\alpha^{m}}_{a^{m}}-\tau_{i^{m}}^{a^{m}}\hat{E}^{\alpha^{m}}_{i^{m}}\,,\quad\tilde{X}^{\alpha^{m}}_{a^{m}}=X^{\alpha^{m}}_{a^{m}}\,,$ $\displaystyle\tilde{\hat{X}}=\hat{X}+(\partial^{\hskip 0.20999pt\prime}_{t}{\hat{T}_{1}})\,,$ where we have already set $X_{i^{m}}^{j^{m}}$ and $X_{a^{m}}^{b^{m}}$ to zero.
We stress that the constraint selection of $\tilde{X}_{i^{m}}^{j^{m}}$ and $\tilde{X}_{a^{m}}^{b^{m}}$ in sOATDCC is in this case no longer zero, which is not the usual choice in practical simulations. It is straightforward to verify that the EoMs of the active-virtual orbital rotations and of the CC coefficients are satisfied. To show that the EoMs of the intergroup rotations of the active orbitals also hold, one notices that their EoMs can be expressed uniformly as $\displaystyle\langle\tilde{\Psi}_{\rm L}|[\hat{H}-i(\partial^{\hskip 0.20999pt\prime}_{t}\tilde{\hat{T}})-i\tilde{\hat{X}},\tilde{\hat{E}}_{p^{m}}^{q^{m}}]|\tilde{\Psi}_{\rm R}\rangle$ (73) $\displaystyle+$ $\displaystyle i\langle\tilde{\Phi}|(\partial^{\hskip 0.20999pt\prime}_{t}\tilde{\hat{\Lambda}})e^{-\tilde{\hat{T}}}\tilde{\hat{E}}_{p^{m}}^{q^{m}}e^{\tilde{\hat{T}}}|\tilde{\Phi}\rangle=0\,,$ which also holds, since it is a weighted summation of $\displaystyle\langle\Psi_{\rm L}|[\hat{H}-i(\partial^{\hskip 0.20999pt\prime}_{t}\hat{T})-i\hat{X},\hat{E}^{q^{m}}_{p^{m}}]|\Psi_{\rm R}\rangle$ (74) $\displaystyle+$ $\displaystyle i\langle\Phi|(\partial^{\hskip 0.20999pt\prime}_{t}\hat{\Lambda})e^{-\hat{T}}\hat{E}^{q^{m}}_{p^{m}}|\Psi_{\rm R}\rangle=0\,.$

## Appendix D EoMs of TD-OCCH

In this section, we will present the EoMs of the single amplitudes and particle-hole orbital rotations of TD-OCCH. The EoMs of the other parameters of the method have already been presented in Sec. II.3. We will use the notations $m_{\rm O}$, $m_{\rm T}$, $m_{\rm B}$, and $m_{\rm X}$ to label the fermion kinds with the parameterization of TD-OCC, TD-OCCT1, TD-BCC, and TD-OCCX0, respectively. $m$ without any subscript represents an arbitrary fermion kind. $X_{i^{m_{\rm X}}}^{a^{m_{\rm X}}}$ are set as Eq. (33) in the VG and to zero in the LG.
The EoMs of $\lambda^{i^{m_{\rm X}}}_{a^{m_{\rm X}}}$ and $X_{i^{m_{\rm T}}}^{a^{m_{\rm T}}}$ are decoupled from the other EoMs: $i\left(\rho_{j^{m_{\rm T}}}^{i^{m_{\rm T}}}\delta_{b^{m_{\rm T}}}^{a^{m_{\rm T}}}-\rho_{a^{m_{\rm T}}}^{b^{m_{\rm T}}}\delta_{j^{m_{\rm T}}}^{i^{m_{\rm T}}}\right)X_{b^{m_{\rm T}}}^{j^{m_{\rm T}}}=\bra{\Psi_{\rm L}}[\hat{H},\hat{E}_{i^{m_{\rm T}}}^{a^{m_{\rm T}}}]\ket{\Psi_{\rm R}}\,,$ (75) $\displaystyle-i\dot{\lambda}^{i^{m_{\rm X}}}_{a^{m_{\rm X}}}=$ (76) $\displaystyle\bra{\Psi_{\rm L}}[\hat{H},\hat{E}_{i^{m_{\rm X}}}^{a^{m_{\rm X}}}]\ket{\Psi_{\rm R}}-i\left(\rho_{j^{m_{\rm X}}}^{i^{m_{\rm X}}}\delta_{b^{m_{\rm X}}}^{a^{m_{\rm X}}}-\rho_{a^{m_{\rm X}}}^{b^{m_{\rm X}}}\delta_{j^{m_{\rm X}}}^{i^{m_{\rm X}}}\right)X_{b^{m_{\rm X}}}^{j^{m_{\rm X}}}\,.$ After obtaining $\dot{\lambda}^{i^{m_{\rm X}}}_{a^{m_{\rm X}}}$ and $X_{i^{m_{\rm T}}}^{a^{m_{\rm T}}}$, one can solve the following coupled equations to get $\dot{\lambda}^{i^{m_{\rm B}}}_{a^{m_{\rm B}}}$, $X_{i^{m_{\rm B}}}^{a^{m_{\rm B}}}$, and $X_{i^{m_{\rm O}}}^{a^{m_{\rm O}}}$. 
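Equation (75) is, for each kind, a linear equation of Sylvester type for the unknown matrix of particle-hole rotations. The sketch below is purely illustrative of this kind of solve: the Hermitian blocks `rho_o` and `rho_v` are random stand-ins for one-body density-matrix blocks, and the right-hand side `B` is likewise arbitrary (none of these are the working quantities of any particular system).

```python
import numpy as np

rng = np.random.default_rng(1)


def random_hermitian(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2


n_o, n_v = 3, 5  # hypothetical numbers of occupied / virtual orbitals
rho_o = random_hermitian(n_o)  # stand-in for the occupied-occupied density block
rho_v = random_hermitian(n_v)  # stand-in for the virtual-virtual density block
B = rng.standard_normal((n_o, n_v)) + 1j * rng.standard_normal((n_o, n_v))

# Generic shape of Eq. (75): i (rho_o X - X rho_v) = B for the unknown matrix X.
# Vectorize row-major using vec(A X C) = (A kron C^T) vec(X).
L = 1j * (np.kron(rho_o, np.eye(n_v)) - np.kron(np.eye(n_o), rho_v.T))
X = np.linalg.solve(L, B.reshape(-1)).reshape(n_o, n_v)

# the solution satisfies the Sylvester-type equation
assert np.allclose(1j * (rho_o @ X - X @ rho_v), B)
```

The vectorized operator is singular exactly when an eigenvalue of `rho_o` coincides with one of `rho_v`, so that $i(\lambda-\mu)=0$; an actual propagation code would have to treat this degenerate case separately.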
$\displaystyle i\frac{1}{2}\left\\{\sum_{n_{\rm B}}\langle\Phi_{j^{n_{\rm B}}}^{b^{n_{\rm B}}}|e^{-\hat{T}}\hat{E}^{i^{m_{\rm B}}}_{a^{m_{\rm B}}}|\Psi_{\rm R}\rangle\dot{\lambda}^{j^{n_{\rm B}}}_{b^{n_{\rm B}}}+(\dot{\lambda}^{i^{m_{\rm B}}}_{a^{m_{\rm B}}})^{*}+\sum_{n\in n_{\rm B},n_{\rm O}}A^{a^{m_{\rm B}}j^{n}}_{i^{m_{\rm B}}b^{n}}(X^{b^{n}}_{j^{n}})^{*}\right\\}+i(\delta_{b^{m_{\rm B}}}^{a^{m_{\rm B}}}D_{i^{m_{\rm B}}}^{j^{m_{\rm B}}}-D_{b^{m_{\rm B}}}^{a^{m_{\rm B}}}\delta_{i^{m_{\rm B}}}^{j^{m_{\rm B}}})X_{j^{m_{\rm B}}}^{b^{m_{\rm B}}}=$ $\displaystyle B_{i^{m_{\rm B}}}^{a^{m_{\rm B}}}-i\sum_{n\in n_{\rm T},n_{\rm X}}\frac{1}{2}A^{a^{m_{\rm B}}j^{n}}_{i^{m_{\rm B}}b^{n}}(X^{b^{n}}_{j^{n}})^{*}-i\frac{1}{2}\sum_{n_{\rm X}}\langle\Phi_{j^{n_{\rm X}}}^{b^{n_{\rm X}}}|e^{-\hat{T}}\hat{E}^{i^{m_{\rm B}}}_{a^{m_{\rm B}}}|\Psi_{\rm R}\rangle\dot{\lambda}^{j^{n_{\rm X}}}_{b^{n_{\rm X}}}\,,$ (77) $\displaystyle i\frac{1}{2}\left\\{\sum_{n_{\rm B}}\langle\Phi_{j^{n_{\rm B}}}^{b^{n_{\rm B}}}|e^{-\hat{T}}\hat{E}^{i^{m_{\rm O}}}_{a^{m_{\rm O}}}|\Psi_{\rm R}\rangle\dot{\lambda}^{j^{n_{\rm B}}}_{b^{n_{\rm B}}}+\sum_{n\in n_{\rm B},n_{\rm O}}A^{a^{m_{\rm O}}j^{n}}_{i^{m_{\rm O}}b^{n}}(X^{b^{n}}_{j^{n}})^{*}\right\\}+i(\delta_{b^{m_{\rm O}}}^{a^{m_{\rm O}}}D_{i^{m_{\rm O}}}^{j^{m_{\rm O}}}-D_{b^{m_{\rm O}}}^{a^{m_{\rm O}}}\delta_{i^{m_{\rm O}}}^{j^{m_{\rm O}}})X_{j^{m_{\rm O}}}^{b^{m_{\rm O}}}=$ $\displaystyle B_{i^{m_{\rm O}}}^{a^{m_{\rm O}}}-i\sum_{n\in n_{\rm T},n_{\rm X}}\frac{1}{2}A^{a^{m_{\rm O}}j^{n}}_{i^{m_{\rm O}}b^{n}}(X^{b^{n}}_{j^{n}})^{*}-i\frac{1}{2}\sum_{n_{\rm X}}\langle\Phi_{j^{n_{\rm X}}}^{b^{n_{\rm X}}}|e^{-\hat{T}}\hat{E}^{i^{m_{\rm O}}}_{a^{m_{\rm O}}}|\Psi_{\rm R}\rangle\dot{\lambda}^{j^{n_{\rm X}}}_{b^{n_{\rm X}}}\,,$ (78) $\displaystyle iX_{i^{m_{\rm B}}}^{a^{m_{\rm B}}}+\sum_{n\in n_{\rm B},n_{\rm O}}i\langle\Phi_{i^{m_{\rm B}}}^{a^{m_{\rm B}}}|e^{-\hat{T}}\hat{E}^{j^{n}}_{b^{n}}|\Psi_{\rm R}\rangle X^{j^{n}}_{b^{n}}=\langle\Phi_{i^{m_{\rm 
B}}}^{a^{m_{\rm B}}}|e^{-\hat{T}}\hat{H}|\Psi_{\rm R}\rangle-\sum_{n\in n_{\rm T},n_{\rm X}}i\langle\Phi_{i^{m_{\rm B}}}^{a^{m_{\rm B}}}|e^{-\hat{T}}\hat{E}^{j^{n}}_{b^{n}}|\Psi_{\rm R}\rangle X^{j^{n}}_{b^{n}}\,.$ (79) Finally, $\dot{\tau}_{i^{m_{\rm T}}}^{a^{m_{\rm T}}}$ and $\dot{\tau}_{i^{m_{\rm X}}}^{a^{m_{\rm X}}}$ can be solved for: $\displaystyle i(\delta_{b^{m_{\rm T}}}^{a^{m_{\rm T}}}D_{i^{m_{\rm T}}}^{j^{m_{\rm T}}}-D_{b^{m_{\rm T}}}^{a^{m_{\rm T}}}\delta_{i^{m_{\rm T}}}^{j^{m_{\rm T}}})\dot{\tau}_{j^{m_{\rm T}}}^{b^{m_{\rm T}}}=B_{i^{m_{\rm T}}}^{a^{m_{\rm T}}}$ $\displaystyle-$ $\displaystyle i(\delta_{b^{m_{\rm T}}}^{a^{m_{\rm T}}}D_{i^{m_{\rm T}}}^{j^{m_{\rm T}}}-D_{b^{m_{\rm T}}}^{a^{m_{\rm T}}}\delta_{i^{m_{\rm T}}}^{j^{m_{\rm T}}})X_{j^{m_{\rm T}}}^{b^{m_{\rm T}}}-\sum_{n}\frac{i}{2}A^{a^{m_{\rm T}}j^{n}}_{i^{m_{\rm T}}b^{n}}(X^{b^{n}}_{j^{n}})^{*}$ $\displaystyle-$ $\displaystyle\frac{i}{2}\sum_{n\in n_{\rm B},n_{\rm X}}\langle\Phi_{j^{n}}^{b^{n}}|e^{-\hat{T}}\hat{E}^{i^{m_{\rm T}}}_{a^{m_{\rm T}}}|\Psi_{\rm R}\rangle\dot{\lambda}^{j^{n}}_{b^{n}}\,,$ (80) $i\dot{\tau}_{i^{m_{\rm X}}}^{a^{m_{\rm X}}}=\bra{\Phi_{i^{m_{\rm X}}}^{a^{m_{\rm X}}}}e^{-\hat{T}}(\hat{H}-i\hat{X})\ket{\Psi_{\rm R}}\,.$ (81) It is straightforward to verify that TD-OCCH reduces to the other four methods when the corresponding parameterization is used for all fermion kinds. It is also possible to use only two or three types of parameterization in the TD-OCCH method; one then simply removes the unused types of parameters from the above EoMs. An attractive parameterization is the mixed OCCT1-OCCX0 one, in which all single amplitudes and particle-hole rotations of different kinds are decoupled. One can also assign a few fermion kinds the OCC- or BCC-type parameterization, in which case only a few of the coupled equations need to be solved. As in the TD-OCCX0 method, the non-variance of $X_{i^{m_{\rm X}}}^{a^{m_{\rm X}}}$ means that the Ehrenfest theorem Sato _et al._ (2016) is modified even without frozen cores. 
One only needs to account for the fermion kinds with TD-OCCX0-type parameterization $\displaystyle 2i\frac{d}{dt}\braket{\hat{O}}$ $\displaystyle=\braket{\Psi_{\rm L}}{[\hat{O},\hat{H}]}{\Psi_{\rm R}}+\braket{\Psi_{\rm R}}{[\hat{O},\hat{H}]}{\Psi_{\rm L}}$ (82) $\displaystyle+\sum_{m\in m_{\rm X}}O_{t^{m}}^{u^{m}}\\{\braket{\Psi_{\rm L}}{(\hat{H}-i\hat{X})e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}\hat{E}_{t^{m}}^{u^{m}}}{\Psi_{\rm R}}$ $\displaystyle-\braket{\Psi_{\rm L}}{\hat{E}_{t^{m}}^{u^{m}}e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}(\hat{H}-i\hat{X})}{\Psi_{\rm R}}$ $\displaystyle+\braket{\Psi_{\rm R}}{(\hat{H}-i\hat{X})e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}\hat{E}_{t^{m}}^{u^{m}}}{\Psi_{\rm L}}$ $\displaystyle-\braket{\Psi_{\rm R}}{\hat{E}_{t^{m}}^{u^{m}}e^{\hat{T}}\bar{\hat{\Pi}}e^{-\hat{T}}(\hat{H}-i\hat{X})}{\Psi_{\rm L}}\\}\,.$ When frozen orbitals are included, two additional terms that replace $t^{m}$, $u^{m}$ with $i^{\prime\prime m}$, $\mu^{m}$ and $\mu^{m}$, $i^{\prime\prime m}$ in the last term of Eq. (82) should be added. Additionally, the summation for the new terms should run over all fermion kinds. ## References * Protopapas, Keitel, and Knight (1997) M. Protopapas, C. H. Keitel, and P. L. Knight, Rep. Prog. Phys. 60, 389 (1997). * Agostini and DiMauro (2004) P. Agostini and L. F. DiMauro, Rep. Prog. Phys. 67, 813 (2004). * Krausz and Ivanov (2009) F. Krausz and M. Ivanov, Rev. Mod. Phys. 81, 163 (2009). * Gallmann, Cirelli, and Keller (2013) L. Gallmann, C. Cirelli, and U. Keller, Annu. Rev. Phys. Chem. 63, 447 (2013). * Nisoli _et al._ (2017) M. Nisoli, P. Decleva, F. Calegari, A. Palacios, and F. Martín, Chem. Rev. 117, 10760 (2017). * Mordovina _et al._ (2020) U. Mordovina, C. Bungey, H. Appel, P. J. Knowles, A. Rubio, and F. R. Manby, Physical Review Research 2, 023262 (2020). * Christiansen (2004) O. Christiansen, The Journal of chemical physics 120, 2149 (2004). * Hansen _et al._ (2019) M. B. Hansen, N. K. Madsen, A. Zoccante, and O. 
Christiansen, The Journal of Chemical Physics 151 (2019). * Sverdrup Ofstad _et al._ (2023) B. Sverdrup Ofstad, E. Aurbakken, Ø. Sigmundson Schøyen, H. E. Kristiansen, S. Kvaal, and T. B. Pedersen, Wiley Interdisciplinary Reviews: Computational Molecular Science 13, e1666 (2023). * Zanghellini _et al._ (2003) J. Zanghellini, M. Kitzler, C. Fabian, T. Brabec, and A. Scrinzi, Laser Physics 13, 1064 (2003). * Kato and Kono (2004) T. Kato and H. Kono, Chem. Phys. Lett. 392, 533 (2004). * Caillat _et al._ (2005) J. Caillat, J. Zanghellini, M. Kitzler, O. Koch, W. Kreuzer, and A. Scrinzi, Phys. Rev. A 71, 012712 (2005). * Nest, Klamroth, and Saalfrank (2005) M. Nest, T. Klamroth, and P. Saalfrank, J. Chem. Phys. 122, 124102 (2005). * Kato and Kono (2008) T. Kato and H. Kono, J. Chem. Phys. 128, 184102 (2008). * Hochstuhl and Bonitz (2011) D. Hochstuhl and M. Bonitz, J. Chem. Phys. 134, 084106 (2011). * Haxton, Lawler, and McCurdy (2012) D. J. Haxton, K. V. Lawler, and C. W. McCurdy, Phys. Rev. A 86, 013406 (2012). * Haxton and McCurdy (2014) D. J. Haxton and C. W. McCurdy, Phys. Rev. A 90, 053426 (2014). * Lode _et al._ (2020) A. U. Lode, C. Lévêque, L. B. Madsen, A. I. Streltsov, and O. E. Alon, Reviews of Modern Physics 92, 011001 (2020). * Alon, Streltsov, and Cederbaum (2007) O. E. Alon, A. I. Streltsov, and L. S. Cederbaum, The Journal of chemical physics 127 (2007). * Nguyen-Dang _et al._ (2007) T. T. Nguyen-Dang, M. Peters, S.-M. Wang, E. Sinelnikov, and F. Dion, J. Chem. Phys. 127, 174107 (2007). * Ishikawa and Sato (2015) K. L. Ishikawa and T. Sato, IEEE J. Sel. Topics Quantum Electron 21, 8700916 (2015). * Anzaki, Sato, and Ishikawa (2017) R. Anzaki, T. Sato, and K. L. Ishikawa, Phys. Chem. Chem. Phys. 19, 22008 (2017). * Sato and Ishikawa (2013) T. Sato and K. L. Ishikawa, Phys. Rev. A 88, 023402 (2013). * Li, Sato, and Ishikawa (2021) Y. Li, T. Sato, and K. L. Ishikawa, Physical Review A 104, 043104 (2021). * Madsen _et al._ (2020a) N. K. Madsen, M. B. 
Hansen, G. A. Worth, and O. Christiansen, The Journal of Chemical Physics 152 (2020a). * Madsen _et al._ (2020b) N. K. Madsen, M. B. Hansen, G. A. Worth, and O. Christiansen, Journal of Chemical Theory and Computation 16, 4087 (2020b). * Miyagi and Madsen (2013) H. Miyagi and L. B. Madsen, Phys. Rev. A 87, 062511 (2013). * Miyagi and Madsen (2014) H. Miyagi and L. B. Madsen, Phys. Rev. A 89, 063416 (2014). * Haxton and McCurdy (2015) D. J. Haxton and C. W. McCurdy, Phys. Rev. A 91, 012509 (2015). * Sato and Ishikawa (2015) T. Sato and K. L. Ishikawa, Phys. Rev. A 91, 023417 (2015). * Sato _et al._ (2016) T. Sato, K. L. Ishikawa, I. Břesinová, F. Lackner, and J. Burgdörfer, Phys. Rev. A 94, 023405 (2016). * Sawada, Sato, and Ishikawa (2016) R. Sawada, T. Sato, and K. L. Ishikawa, Phys. Rev. A 93, 023434 (2016). * Omiste, Li, and Madsen (2017) J. J. Omiste, W. Li, and L. B. Madsen, Phys. Rev. A 95, 053422 (2017). * Helgaker, Jørgensen, and Olsen (2002) T. Helgaker, P. Jørgensen, and J. Olsen, _Molecular Electronic-Structure Theory_ (Wiley, 2002). * Szabo and Ostlund (1996) A. Szabo and N. S. Ostlund, _Modern Quantum Chemistry_ (Dover, Mineola, 1996). * Kümmel (2003) H. G. Kümmel, Int. J. Mod. Phys. B 17, 5311 (2003). * Shavitt and Bartlett (2009) I. Shavitt and R. J. Bartlett, _Many-body methods in chemistry and physics: MBPT and coupled-cluster theory_ (Cambridge university press, 2009). * Arponen (1983) J. Arponen, Ann. Phys. 151, 311 (1983). * Sato _et al._ (2018) T. Sato, H. Pathak, Y. Orimo, and K. L. Ishikawa, The Journal of chemical physics 148 (2018). * Pathak, Sato, and Ishikawa (2020a) H. Pathak, T. Sato, and K. L. Ishikawa, The Journal of chemical physics 152 (2020a). * Pathak, Sato, and Ishikawa (2020b) H. Pathak, T. Sato, and K. L. Ishikawa, The Journal of Chemical Physics 153 (2020b). * Pathak, Sato, and Ishikawa (2021) H. Pathak, T. Sato, and K. L. Ishikawa, The Journal of Chemical Physics 154 (2021). * Kvaal (2012) S. Kvaal, J. Chem. Phys. 
136, 194109 (2012). * Madsen _et al._ (2020c) N. K. Madsen, M. B. Hansen, O. Christiansen, and A. Zoccante, The Journal of Chemical Physics 153 (2020c). * Højlund, Zoccante, and Christiansen (2024) M. G. Højlund, A. Zoccante, and O. Christiansen, The Journal of Chemical Physics 160 (2024). * Højlund and Christiansen (2024) M. G. Højlund and O. Christiansen, arXiv preprint arXiv:2402.11378 (2024). * Højlund _et al._ (2022) M. G. Højlund, A. B. Jensen, A. Zoccante, and O. Christiansen, The Journal of Chemical Physics 157 (2022). * Jensen _et al._ (2023) A. B. Jensen, M. G. Højlund, A. Zoccante, N. K. Madsen, and O. Christiansen, The Journal of Chemical Physics 159 (2023). * G. E. Scuseria and H. F. Schaefer III (1987) G. E. Scuseria and H. F. Schaefer III, Chem. Phys. Lett. 142, 354 (1987). * Sherrill _et al._ (1998) C. D. Sherrill, A. I. Krylov, E. F. C. Byrd, and M. Head-Gordon, J. Chem. Phys. 109, 4171 (1998). * Krylov _et al._ (1998) A. I. Krylov, C. D. Sherrill, E. F. C. Byrd, and M. Head-Gordon, J. Chem. Phys. 109, 10669 (1998). * Köhn and Olsen (2005) A. Köhn and J. Olsen, J. Chem. Phys. 122, 084116 (2005). * Pedersen, Fernández, and Koch (2001) T. B. Pedersen, B. Fernández, and H. Koch, The Journal of chemical physics 114, 6983 (2001). * Myhre (2018) R. H. Myhre, The Journal of chemical physics 148 (2018). * Huber and Klamroth (2011) C. Huber and T. Klamroth, J. Chem. Phys. 134, 054113 (2011). * D. R. Nascimento and A. E. DePrince III (2016) D. R. Nascimento and A. E. DePrince III, J. Chem. Theory Comput. 12, 5834 (2016). * Raghavachari _et al._ (1990) K. Raghavachari, J. A. Pople, E. S. Replogle, M. Head-Gordon, and N. C. Handy, Chemical physics letters 167, 115 (1990). * Hampel, Peterson, and Werner (1992) C. Hampel, K. A. Peterson, and H.-J. Werner, Chemical physics letters 190, 1 (1992). * Chiles and Dykstra (1981) R. A. Chiles and C. E. Dykstra, The Journal of Chemical Physics 74, 4544 (1981). * Handy _et al._ (1989) N. C. Handy, J. A. Pople, M. 
Head-Gordon, K. Raghavachari, and G. W. Trucks, Chemical physics letters 164, 185 (1989). * (61) The pre-determined constraints can be further classified into those that remove the redundancy of the wavefunction parameterization, which are intra-group orbital rotations, and those that restrict the variational space of the wavefunction, which are frozen-core related orbital rotations. Strictly speaking, the first subclass of the first type of constraint is the true constraint, the second subclass of the first type of constraint is the restriction, and the second type of constraint is the variational parameter. However, we do not distinguish these three types of constraints since “constraint” is widely used in referencesMadsen _et al._ (2020c); Højlund, Zoccante, and Christiansen (2024); Højlund and Christiansen (2024); Sverdrup Ofstad _et al._ (2023); Højlund _et al._ (2022); Jensen _et al._ (2023). * Pedersen, Koch, and Hättig (1999) T. B. Pedersen, H. Koch, and C. Hättig, J. Chem. Phys. 110, 8318 (1999). * Madsen _et al._ (2020d) N. K. Madsen, A. B. Jensen, M. B. Hansen, and O. Christiansen, The Journal of Chemical Physics 153 (2020d). * Beck _et al._ (2000) M. H. Beck, A. Jäckle, G. A. Worth, and H.-D. Meyer, Physics reports 324, 1 (2000). * Pedersen and Kvaal (2019) T. B. Pedersen and S. Kvaal, The Journal of chemical physics 150 (2019). * Kristiansen _et al._ (2020) H. E. Kristiansen, Ø. S. Schøyen, S. Kvaal, and T. B. Pedersen, The Journal of chemical physics 152 (2020). * Sato, Teramura, and Ishikawa (2018) T. Sato, T. Teramura, and K. L. Ishikawa, Applied Sciences 8, 433 (2018). * Han and Madsen (2010) Y.-C. Han and L. B. Madsen, Physical Review A 81, 063430 (2010). * (69) The truncation in TDMVCC is a TDH-TDMVCC hybrid form, which is equivalent to setting the weights of the TDH kinds as infinity. * Worth and Cederbaum (2004) G. A. Worth and L. S. Cederbaum, Annu. Rev. Phys. Chem. 55, 127 (2004). * Köuppel, Domcke, and Cederbaum (1984) H. Köuppel, W. 
Domcke, and L. S. Cederbaum, Advances in Chemical Physics 57, 59 (1984). * Bao, Raymond, and Nooijen (2024) S. Bao, N. Raymond, and M. Nooijen, The Journal of Chemical Physics 160 (2024). * Sibaev _et al._ (2020) M. Sibaev, I. Polyak, F. R. Manby, and P. J. Knowles, The Journal of Chemical Physics 153 (2020). * Muolo _et al._ (2020) A. Muolo, A. Baiardi, R. Feldmann, and M. Reiher, The Journal of chemical physics 152 (2020). * Sasmal and Vendrell (2020) S. Sasmal and O. Vendrell, The Journal of Chemical Physics 153 (2020). * Sasmal and Vendrell (2022) S. Sasmal and O. Vendrell, The Journal of Chemical Physics 157 (2022). * Sasmal, Schröder, and Vendrell (2024) S. Sasmal, M. Schröder, and O. Vendrell, The Journal of Chemical Physics 160 (2024). * Goldstein (1980) H. Goldstein, _Classical Mechanics (2nd ed.)_ (Addison-Wesley, 1980).
$\mathcal{W}$, characterized by $\mathbf{n}\in\mathcal{W}_{i}\Leftrightarrow n_{i}=n_{i+1}$. Then (8.2.1) is immediate from (8.1). Hence we may assume that $d\geq 2$ and work in our usual framework using $d$-diagrams. 2. (ii) We will show more precisely: The map (8.2.2) $\displaystyle\delta\colon\operatorname{diag}^{(d)}(\mathbf{n})$ $\displaystyle\longrightarrow\operatorname{diag}^{(d)}(\mathbf{n}\hat{})$ $\displaystyle(i,v)$ $\displaystyle\longmapsto(r+1-i,n_{1}+d-1-v)$ is well-defined and bijective, and sends (8.2.3) $B_{k}^{(d)}(\mathbf{n})\text{ to }B_{rd+1-k}^{(d)}(\mathbf{n}\hat{}).$ 3. (iii) As both sets $\operatorname{diag}^{(d)}(\mathbf{n})$ and $\operatorname{diag}^{(d)}(\mathbf{n}\hat{})$ have the same cardinality, it suffices for (8.2.2) to show that $\delta$ is well-defined and satisfies $\delta^{2}=\operatorname{id}$. Now $\displaystyle(i,v)\in\operatorname{diag}^{(d)}(\mathbf{n})\Longleftrightarrow n_{i}\leq v<n_{i}+d$ $\displaystyle(1\leq i<r)$ and, after a little calculation, this turns out to be equivalent to $\mathbf{n}\hat{}_{r+1-i}\leq n_{1}+d-1-v<\mathbf{n}\hat{}_{r+1-i}+d$ for all $i$; that is, $\delta$ does in fact map into $\operatorname{diag}^{(d)}(\mathbf{n}\hat{})$. The equation $\delta^{2}=\operatorname{id}$ is trivial. 4. (iv) Recall the order relation “$\leq$” on $\operatorname{diag}^{(d)}(\mathbf{n})$ given by (4.1.1). It is obvious from the definition of $\delta$ that it reverses this order, which gives (8.2.3). ∎ ### 8.3. Coming back to the arithmetic significance of $\mathcal{BT}(d,k)=\mathcal{BT}({}_{a}\ell_{k})=\lambda(\Omega({}_{a}\ell_{k}))$, where $a\in A=\mathds{F}[T]$ has degree $d$, we find the following relationship. Let $a\in A$ of degree $d$ be given, and consider for $1\leq k<rd$ the modular forms ${}_{a}\ell_{k}$. Then the zero sets $\Omega({}_{a}\ell_{k})$ of ${}_{a}\ell_{k}$ and $\Omega({}_{a}\ell_{rd-k})$ of ${}_{a}\ell_{rd-k}$ are related by (8.3.1) $\lambda(\Omega({}_{a}\ell_{k}))\hat{}=\lambda(\Omega({}_{a}\ell_{rd-k})).$ ### 8.4. 
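The "little calculation" behind (8.2.2) can be checked mechanically. The sketch below takes two assumptions as given, since their definitions lie in earlier sections not reproduced here: the normalization $n_{r}=0$ of vertices, and the formula $\mathbf{n}\hat{}_{i}=n_{1}-n_{r+1-i}$ for the involution.

```python
import itertools


def hat(n):
    # assumed formula for the involution on vertices: n^_i = n_1 - n_{r+1-i}
    r = len(n)
    return tuple(n[0] - n[r - 1 - k] for k in range(r))


r, d = 4, 3
# all decreasing integer vectors n with n_1 <= 4 and the normalization n_r = 0
vertices = {tuple(sorted(v, reverse=True)) + (0,)
            for v in itertools.product(range(5), repeat=r - 1)}

for n in vertices:
    # (.)^ is an involution and fixes n_1, which gives delta^2 = id
    assert hat(hat(n)) == n and hat(n)[0] == n[0]
    for i in range(1, r):
        for v in range(-2, n[0] + d + 2):
            in_diag = n[i - 1] <= v < n[i - 1] + d
            w = n[0] + d - 1 - v  # second coordinate of delta(i, v)
            in_diag_hat = hat(n)[r - i] <= w < hat(n)[r - i] + d
            # the equivalence used to show delta maps into diag(n^)
            assert in_diag == in_diag_hat
print("ok")
```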
Note that the considerations of this section also apply for $r=2$. Here $\mathcal{BT}(d,k)=\mathcal{BT}^{2}(d,k)$ is a finite set of vertices of the Bruhat-Tits tree $\mathcal{BT}=\mathcal{BT}^{2}$. The involution $(\,.\,)\hat{}$ is trivial, and (8.2.1) resp. (8.3.1) degenerate to $\mathcal{W}(d,k)=\mathcal{W}(d,k)\hat{}$ resp. $\lambda(\Omega({}_{a}\ell_{k}))=\lambda(\Omega({}_{a}\ell_{2d-k}))$. This is in accordance with the symmetry property in Theorem 5.1 of [Gekeler2011]. ## 9\. Examples and concluding remarks Note that our Theorems 5.17, 6.10, 7.4 and 8.2 along with Proposition 3.8 assert all the ingredients of the Main Theorem 1.8, which thereby is proved. Here we present examples for the simplicial complexes $\mathcal{W}(d,k)$. For reasons of presentation, we restrict to the case $r=3$, where $\mathcal{W}(d,k)$ has dimension 1, i.e., is a graph embedded into $\mathcal{W}=\mathcal{W}^{3}$ (see 3.2 and the legend given there). ###### Example 9.1 ($r=3$, the graphs $\mathcal{W}(2,k)$, $1\leq k\leq 5$): [Figure: the graphs $\mathcal{W}(2,k)$ drawn in $\mathcal{W}=\mathcal{W}^{3}$, with legend entries $\mathcal{W}_{1}=\mathcal{W}(2,5)$, $\mathcal{W}_{2}=\mathcal{W}(2,1)$, $\mathcal{W}(2,2)$, $\mathcal{W}(2,3)$, $\mathcal{W}(2,4)$, the triangle $\sigma$, and the vertices $x$ and $x\hat{}$.] Note that $\mathcal{W}(2,1)=\mathcal{W}(1)=$ the wall $\mathcal{W}_{2}$ and $\mathcal{W}(2,2)$ have been drawn in [V] Figure 1. By symmetry, this gives $\mathcal{W}(2,5)=\mathcal{W}(2,1)\hat{}=\mathcal{W}_{1}$ and $\mathcal{W}(2,4)=\mathcal{W}(2,2)\hat{}$. Let $a\in A$ have degree 2, e.g., $a=T^{2}$. Then $f\vcentcolon={}_{a}\ell_{1}{}_{a}\ell_{2}{}_{a}\ell_{3}$ has the property: The edges of the leftmost triangle $\sigma$ of $\mathcal{W}$ belong to $\mathcal{W}(f)$, while $\overset{\circ}{\sigma}(\mathds{Q})\cap\mathcal{W}(f)=\varnothing$. Hence the zero set $\mathcal{W}(f)$ is not a full subcomplex of $\mathcal{W}$ and $f$ is not simplicial. Examples of modular forms of this shape that are not simplicial are abundant. 
###### Example 9.2 ($r=3$, the graphs $\mathcal{W}(3,k)$, $k=3$ or $4$): [Figure: the graphs $\mathcal{W}(3,3)=\mathcal{W}(3)$ and $\mathcal{W}(3,4)$ drawn in $\mathcal{W}=\mathcal{W}^{3}$.] As $\mathcal{W}(3,k)=\mathcal{W}(k)$ for $k=1,2$ has been given in 9.1 and $\mathcal{W}(3,k)\hat{}=\mathcal{W}(3,9-k)$, we may restrict to $k=3$ or $4$. ###### Example 9.3 ($r=3$, the graphs $\mathcal{W}(4,k)$, $k=4,5,6$): [Figure: the graphs $\mathcal{W}(4,4)=\mathcal{W}(4)$, $\mathcal{W}(4,5)$ and $\mathcal{W}(4,6)$ drawn in $\mathcal{W}=\mathcal{W}^{3}$.] As $\mathcal{W}(4,k)=\mathcal{W}(k)$ for $k=1,2,3$ is in 9.1 and 9.2 and $\mathcal{W}(4,k)\hat{}=\mathcal{W}(4,12-k)$, we restrict to $k=4,5,6$. Note that the graph $\mathcal{W}(4,6)$ has a non-trivial cycle. ###### Remark 9.4 (about the involution $(\,.\,)\hat{}$): A very satisfactory explanation/interpretation of Theorem 8.2 would be given by the existence of a lift of $(\,.\,)\hat{}$ to $\Omega$, i.e., an involution $\boldsymbol{\omega}\mapsto\boldsymbol{\omega}\boldsymbol{\hat{}}$ on $\Omega$ such that the square (9.4.1) [Diagram: top row $\Omega\xrightarrow{(\,.\,)\boldsymbol{\hat{}}}\Omega$, bottom row $\mathcal{BT}\xrightarrow{(\,.\,)\hat{}}\mathcal{BT}$, with vertical maps $\lambda$ on both sides] commutes. In view of (8.1), a natural candidate for $(\,.\,)\boldsymbol{\hat{}}$ would be the map (9.4.2) $\boldsymbol{\omega}=(\omega_{1},\dots,\omega_{r-1},1)\longmapsto\left(\omega_{1},\frac{\omega_{1}}{\omega_{r-1}},\dots,\frac{\omega_{1}}{\omega_{2}},1\right).$ This however fails for a number of reasons. First, and most important, $\Omega$ is not stable under the map described in (9.4.2) (I owe this hint to Andreas Schweizer). Second, a reasonable lift $(\,.\,)\boldsymbol{\hat{}}$ as in (9.4.1) would have to interchange the zeroes of ${}_{a}\ell_{k}$ with those of ${}_{a}\ell_{rd-k}$, which however is excluded for “weight reasons”. 
Let ${}_{d}L_{k}$ be the $\mathds{Q}$-valued function on $\mathcal{BT}(\mathds{Q})$ given by (9.4.3) ${}_{d}L_{k}(\boldsymbol{x})\vcentcolon=\log_{q}\lVert{}_{a}\ell_{k}\rVert_{\boldsymbol{x}},$ where $\lVert f\rVert_{\boldsymbol{x}}$ is the spectral norm $\sup_{\boldsymbol{\omega}\in\lambda^{-1}(\boldsymbol{x})}\lvert f(\boldsymbol{\omega})\rvert$ on the affinoid $\lambda^{-1}(\boldsymbol{x})$ (see [I], [II], [IV]). (Actually, $\lVert{}_{a}\ell_{k}\rVert_{\boldsymbol{x}}$ depends only on $d=\deg a$, see [V] Proposition 3.8.) Now, as ${}_{a}\ell_{k}$ has turned out to be simplicial, ${}_{d}L_{k}$ encodes the zero set $\mathcal{BT}({}_{a}\ell_{k})=\mathcal{BT}(d,k)$, see Proposition 1.8 in [V]. Therefore, as a substitute for some diagram (9.4.1), we expect(?) a simple relation between ${}_{d}L_{k}\hat{}$ and ${}_{d}L_{rd-k}$ or perhaps between the van der Put transforms $P({}_{a}\ell_{k})\hat{}$ and $P({}_{a}\ell_{rd-k})$ (see [V] Section 1) that implies Theorem 8.2. ### 9.5. Similar to $\mathcal{BT}(d,k)$, which presents a coarse picture of the zero locus $\Omega({}_{a}\ell_{k})$ of ${}_{a}\ell_{k}$ in $\Omega$, the object $\Gamma\backslash\mathcal{BT}(d,k)$ may be seen as a coarse picture of the quotient $\Gamma\backslash\Omega({}_{a}\ell_{k})$ in $\Gamma\backslash\Omega$, which correspond under the canonical isomorphisms to $M^{r}({}_{a}\ell_{k})$ in $M^{r}$. Here $M^{r}$ is (the set of $C_{\infty}$-points of) the moduli scheme of rank-$r$ Drinfeld $A$-modules. Now, despite the fact that $\Gamma$ acts simplicially on $\mathcal{BT}$ and its subcomplex $\mathcal{BT}(d,k)$, the quotient modulo $\Gamma$ does not inherit the structure of a simplicial complex. This comes from the (well-known but annoying) fact that, e.g. in the case of dimension 1, endpoints of $1$-simplices $\sigma,\tau$ are possibly identified under $\Gamma$, but $\sigma$ and $\tau$ are not. This leads to double edges between $[\sigma]$ and $[\tau]$ and similar phenomena in $\Gamma\backslash\mathcal{BT}$. 
Hence the topological space $\Gamma\backslash\mathcal{BT}(d,k)(\mathds{R})$ has only the weaker structure of a cell complex. Nevertheless, the next topics to deal with should be: * • Study the spaces $\Gamma\backslash\mathcal{BT}(d,k)(\mathds{R})$, the natural map $\mathcal{W}(d,k)(\mathds{R})\to\Gamma\backslash\mathcal{BT}(d,k)(\mathds{R})$, their Betti numbers, etc.; * • Work out the relationship between, say, the $\ell$-adic cohomology of the moduli scheme $M({}_{a}\ell_{k})$ and the (co-)homology of $\Gamma\backslash\mathcal{BT}(d,k)(\mathds{R})$; * • Instead of the zero sets of one modular form ${}_{a}\ell_{k}$ in $\Omega$, $\mathcal{BT}$, $\Gamma\backslash\mathcal{BT}$, study the simultaneous zero sets of families of modular forms in $\Omega$, $\mathcal{BT}$ and their quotients modulo $\Gamma$, or modulo congruence subgroups $\Gamma^{\prime}$ of $\Gamma$. For the relatively simple case of the family $\\{g_{k}\mid 1\leq k<r,k\neq j\\}$ where $j$ is fixed, see [Gekeler2019]. ### 9.6. As the reader will have noticed, $q=\\#(\mathds{F})$ has completely vanished from the largest part of the paper. This is due to the fact that we mainly worked with $\mathcal{A}$ and $\mathcal{W}$ and the combinatorics of their subcomplexes. These are independent of $q$ and are entirely determined by the root system $\Phi$ and its Weyl group $W$. The quantity $q$ occurs only in how the different apartments $\mathcal{A}^{\prime}\cong\mathcal{A}$ are glued together, i.e., in how many neighbors $v^{\prime}\in\mathcal{BT}(\mathds{Z})$ there are for a given vertex $v=\mathbf{n}$ in $\mathcal{A}$. Therefore, the simplicial complexes $\mathcal{A}(k)$, $\mathcal{A}(d,k)$, $\mathcal{BT}(k)$, $\mathcal{BT}(d,k)$ are fundamental structures for $\Phi$, and should play a role beyond their describing the zero sets of the Drinfeld modular forms ${}_{a}\ell_{k}$. ## References
# Bijective proof of a conjecture on unit interval posets Wenjie Fang LIGM, Univ Gustave Eiffel, CNRS, ESIEE Paris, F-77454 Marne-la-Vallée, France ###### Abstract In a recent preprint, Matherne, Morales and Selover conjectured that two different representations of unit interval posets are related by the famous zeta map in $q,t$-Catalan combinatorics. This conjecture was proved recently by Gélinas, Segovia and Thomas using induction. In this short note, we provide a bijective proof of the same conjecture with a reformulation of the zeta map using left-aligned colored trees, first proposed in the study of parabolic Tamari lattices. ## 1 Introduction The study of unit interval posets started in statistics (see [WF57]) and psychology (see [Sco64]). However, unit interval posets have since been connected to other objects in algebraic combinatorics, such as $(3+1)$-free posets [SR03] and positroids [CG18]. The connection to $(3+1)$-free posets is particularly important, as these posets are at the center of the Stanley-Stembridge conjecture [SS93], which states that the chromatic symmetric function of the incomparability graph of a $(3+1)$-free poset has only positive coefficients when expanded in the basis of elementary symmetric functions. Unit interval posets have thus received some attention, as they can be used to study the structure of $(3+1)$-free posets [SR03, GPMR14]. It is known that unit interval posets are counted by Catalan numbers [WF57], and researchers have given two different bijections to represent a unit interval poset by Dyck paths [SR03, GPMR14]. In a recent preprint of Matherne, Morales and Selover [MMS], it was conjectured that the two bijections are related by the famous zeta map in $q,t$-Catalan combinatorics (see [Hag08]), which was first given in [AKOP02]. In this short note, we settle this conjecture using bijections (see 4.6), in contrast to a recent inductive proof of the same conjecture by Gélinas, Segovia and Thomas [GST]. 
To this end, we first introduce a bijection $\Xi_{\mathrm{poset}}$ between unit interval posets and plane trees. We then show that the two different known bijections from unit interval posets are conjugates of special cases of some bijections $\Xi_{\mathrm{steep}}$ and $\Xi_{\mathrm{bounce}}$ in [CFM20] constructed for the study of parabolic Tamari lattices. Using the link between $\Xi_{\mathrm{steep}},\Xi_{\mathrm{bounce}}$ and the zeta map also established in the same article, we conclude our proof. The rest of the article is organized as follows. In Section 2 we give the basic definitions. Then we give our bijection between unit interval posets and plane trees in Section 3, along with the related bijections in [CFM20]. Finally in Section 4 we recall the two known bijections between unit interval posets and Dyck paths and establish their link with the bijections in Section 3, leading to a bijective proof of our main result (4.6). ### Acknowledgment We thank Adrien Segovia for bringing this conjecture to our attention. This work is not supported by any funding with a precise predefined goal, but it is supported by the publicly funded laboratory LIGM of Université Gustave Eiffel. ## 2 Preliminary For convenience, we write $[n]$ for the set $\\{1,2,\ldots,n\\}$. We consider finite partially ordered sets (or _posets_ for short) of the form $P=(P_{\mathrm{elem}},\preceq)$, where $P_{\mathrm{elem}}$ is a finite set and $\preceq$ a partial order on $P_{\mathrm{elem}}$. Given a set $S$ of reals $x_{1}<x_{2}<\cdots<x_{n}$, we define a partial order $\preceq_{S}$ on $[n]$ by taking $i\preceq_{S}j$ if and only if $x_{i}+1<x_{j}$. The order $\preceq_{S}$ can also be seen as defined on intervals of unit length starting at the $x_{i}$’s, where $i\preceq_{S}j$ if the interval starting at $x_{i}$ lies to the left of the one starting at $x_{j}$, without overlap. 
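The defining relation $i\preceq_{S}j\Leftrightarrow x_{i}+1<x_{j}$ can be sketched in a few lines; the concrete starting set below is hypothetical, chosen only to illustrate the definition.

```python
def unit_interval_order(S):
    """Relation i <=_S j as a set of 1-based index pairs: i precedes j iff
    x_i + 1 < x_j, i.e. the unit interval starting at x_i lies strictly to
    the left of the one starting at x_j."""
    xs = sorted(S)
    n = len(xs)
    return {(i + 1, j + 1) for i in range(n) for j in range(n)
            if xs[i] + 1 < xs[j]}


# four overlapping unit intervals; only those separated by a gap are comparable
rel = unit_interval_order([0.0, 0.6, 1.2, 2.5])
assert rel == {(1, 3), (1, 4), (2, 4), (3, 4)}

# the relation is transitive, as a partial order must be
assert all((i, l) in rel
           for (i, j) in rel for (k, l) in rel if j == k)
```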
A poset $P$ with $n$ elements is a _unit interval order_ if there is some $S\subset\mathbb{R}$ with $n$ elements such that $P\cong([n],\preceq_{S})$. In this case, we call $S$ a _starting set_ of $P$. We sometimes represent unit interval orders as $([n],\preceq_{S})$ for some $S$ hereinafter. We have the following characterization of unit interval orders. ###### Theorem 2.1 (Theorem 2.1 of [Sco64]). A poset $P$ is a unit interval poset if and only if it is $(3+1)$-free and $(2+2)$-free, that is, the order induced on any four elements cannot be a chain of $3$ elements plus an incomparable element, or two disjoint chains each containing $2$ elements. Unit interval orders are counted by Catalan numbers. ###### Proposition 2.2 ([WF57]). The number of unit interval posets with $n$ elements is the $n$-th Catalan number $\mathrm{Cat}_{n}=\frac{1}{2n+1}\binom{2n+1}{n}$. By definition, we may represent a unit interval poset $P$ by a set of reals $S$, though the choice of $S$ is clearly not unique. In the following, for convenience, we use this perspective of (starting points of) intervals, which is arguably easier to manipulate. We denote by $\mathcal{P}_{n}$ the set of unit interval posets with $n$ elements. There are many other families of combinatorial objects counted by Catalan numbers, such as Dyck paths and plane trees. A _Dyck path_ is a lattice path formed by north steps $\uparrow=(0,1)$ and east steps $\rightarrow=(1,0)$, starting at $(0,0)$ and ending on the diagonal $y=x$ while staying weakly above it. The _size_ of a Dyck path is the number of its north steps. We denote by $\mathcal{D}_{n}$ the set of Dyck paths of size $n$. We define plane trees recursively: a _plane tree_ $T$ is either a single node (a _leaf_) or an _internal node_ $u$ linked by edges to a sequence of plane trees called _sub-trees_. In the latter case, the node $u$ is also called the _root_ of $T$, and the roots of the sub-trees are called _children_ of $u$. 
We denote by $\mathcal{T}_{n}$ the set of plane trees with $n$ non-root nodes. For a node $u$ in a plane tree $T$, its _depth_ is the distance (number of edges) between $u$ and the root of $T$. ## 3 Unit interval posets and plane trees We start with a new bijection between unit interval posets and plane trees. ###### Construction 3.1. Let $S=\\{x_{1}<\cdots<x_{n}\\}$ be a starting set of a unit interval poset $P=([n],\preceq_{S})$. We define a plane tree $T$ as follows (see Figure 1 for an example). We set $x_{0}=x_{1}-2$. We denote by $v_{i}$ the node of $T$ corresponding to $x_{i}$. For $i\in[n]$, the parent of $v_{i}$ is $v_{j}$ if and only if $j$ is the largest index such that $j\preceq_{S}i$. By the definition of $x_{0}$, the parent of each $v_{i}$ is well-defined, and all nodes are descendants of $v_{0}$. We then order children of the same node from left to right with _decreasing_ index. We thus conclude that $T$ is a well-defined plane tree. We note that $T$ depends only on $\preceq_{S}$. We define $\Lambda_{\mathrm{poset}}(P)=T$. Figure 1: Example of $\Lambda_{\mathrm{poset}}$ from a unit interval poset defined by a set of unit intervals to a plane tree ###### Construction 3.2. Given a plane tree $T$ with $n$ non-root nodes, we define an order relation $\preceq_{S}$ on $[n]$ induced by some $S\subset\mathbb{R}$. Let $m$ be the maximal arity of $T$, and $r_{T}$ the root of $T$. Given a node $u$ in $T$, if it is the $i$-th child of its parent _from right to left_, then we define $c(u)=i$. We observe that $0<c(u)\leq m$. For a non-root node $u$ in $T$, let $u_{0}=r_{T},u_{1},\ldots,u_{\ell}=u$ be the nodes on the unique path from the root $u_{0}=r_{T}$ to $u_{\ell}=u$. We then define a real number $x_{u}$ associated to $u$ using base $(m+2)$: $x_{u}=\ell+(0.c(u_{1})c(u_{2})\cdots c(u_{\ell}))_{(m+2)}=\ell+\sum_{i=1}^{\ell}c(u_{i})(m+2)^{-i}.$ (1) It is clear from (1) that $x_{u}$ is strictly decreasing from left to right for nodes of the same depth in $T$. 
Let $S$ be the set of $x_{u}$ for non-root nodes $u$ in $T$. We define $\Xi_{\mathrm{poset}}(T)=([n],\preceq_{S})$. The following lemma gives a direct connection between a plane tree and its corresponding unit interval poset obtained from $\Xi_{\mathrm{poset}}$. ###### Lemma 3.3. For $T\in\mathcal{T}_{n}$, we take $S$ to be the set constructed in 3.2, and $x_{u}$ the number constructed from a non-root node $u$ of $T$. Then for non-root nodes $u,v$, the parent of $u$ in $T$ is $v$ if and only if $x_{v}$ is the largest element in $S$ smaller than $x_{u}-1$. ###### Proof. Suppose that $T$ has maximal arity $m$. Let $v^{\prime}$ be the parent of $u$. By (1), we have $1<x_{u}-x_{v^{\prime}}<1+(m+2)^{-\ell+1}$, with $\ell$ the distance from $v^{\prime}$ to the root. Then, $x_{v}$ satisfying $x_{v^{\prime}}<x_{v}<x_{u}-1$ means $0<x_{v}-x_{v^{\prime}}<(m+2)^{-\ell+1}$, which means $v=v^{\prime}$ by (1), as it requires $v$ and $v^{\prime}$ to have the same distance to the root, but in this case $|x_{v}-x_{v^{\prime}}|$ cannot be smaller than $(m+2)^{-\ell+1}$ if $v\neq v^{\prime}$. We thus have the equivalence. ∎ We now show that $\Lambda_{\mathrm{poset}}$ and $\Xi_{\mathrm{poset}}$ are bijections. ###### Proposition 3.4. For all $n\geq 1$, the map $\Lambda_{\mathrm{poset}}$ is a bijection from $\mathcal{P}_{n}$ to $\mathcal{T}_{n}$, with $\Xi_{\mathrm{poset}}$ its inverse. ###### Proof. It is well-known that there are $\mathrm{Cat}_{n}$ elements in $\mathcal{T}_{n}$. By 2.2, we have $|\mathcal{P}_{n}|=|\mathcal{T}_{n}|$. We thus only need to show that $\Lambda_{\mathrm{poset}}\circ\Xi_{\mathrm{poset}}=\operatorname{id}$. Given a plane tree $T\in\mathcal{T}_{n}$, let $P=([n],\preceq_{S})=\Xi_{\mathrm{poset}}(T)$ and $T^{\prime}=\Lambda_{\mathrm{poset}}(P)$. Let $S=\\{x_{1}<\cdots<x_{n}\\}$ be the set of real numbers constructed in 3.2 for $\preceq_{S}$. 
Given $i\in[n]$, let $u_{i}$ be the node in $T$ that gives rise to $x_{i}$ in $S$, and $u^{\prime}_{i}$ the node in $T^{\prime}$ that represents $x_{i}$. By 3.3 and 3.1, for any $i,j\in[n]$, the node $u_{i}$ is the parent of $u_{j}$ in $T$ if and only if $u^{\prime}_{i}$ is the parent of $u^{\prime}_{j}$ in $T^{\prime}$. Then, $T^{\prime}$ has the same order of siblings as $T$, as in both we order siblings in decreasing order of their corresponding reals in $S$. We thus conclude that $T=T^{\prime}$. ∎ We now restate two previously known bijections between plane trees and Dyck paths that will be used to prove our main result. ###### Construction 3.5. For $n\geq 1$ and $T\in\mathcal{T}_{n}$, we construct a Dyck path $D$ by taking the _clockwise_ contour walk starting from the top of the root of $T$. During the walk, when we pass an edge for the first (resp. second) time, we append $\uparrow$ (resp. $\rightarrow$) to $D$. We define $\Xi_{\mathrm{steep}}(T)=D$. The map $\Xi_{\mathrm{steep}}$ is a bijection from $\mathcal{T}_{n}$ to $\mathcal{D}_{n}$ for all $n\geq 1$, and we denote its inverse by $\Lambda_{\mathrm{steep}}$. ###### Construction 3.6. For $n\geq 1$ and $T\in\mathcal{T}_{n}$, we construct a Dyck path $D$ by specifying the number of north steps at each $x$-coordinate. Let $\alpha_{\ell}$ be the number of nodes of depth $\ell$ (thus at distance $\ell$ from the root), and $\ell_{\max}$ the maximal depth of nodes in $T$. We take $s_{\ell}=\alpha_{1}+\cdots+\alpha_{\ell}$. Given $1\leq k\leq n-1$, there is a unique way to write $k=s_{\ell}-r$ with $0\leq r<\alpha_{\ell}$ and $1\leq\ell\leq\ell_{\max}$. In this case, the number of north steps in $D$ on $x=k$ is the number of children of the $(r+1)$-st node of depth $\ell$. The number of north steps in $D$ on $x=0$ is the number of children of the root. We define $\Xi_{\mathrm{bounce}}(T)=D$. 
The map $\Xi_{\mathrm{bounce}}$ is a bijection from $\mathcal{T}_{n}$ to $\mathcal{D}_{n}$ for all $n\geq 1$ (see Lemma 3.18 in [CFM20]), and we denote its inverse by $\Lambda_{\mathrm{bounce}}=\Xi_{\mathrm{bounce}}^{-1}$. ###### Proposition 3.7 (Theorem IV of [CFM20]). Let $\zeta$ be the zeta map from $\mathcal{D}_{n}$ to $\mathcal{D}_{n}$ with $n\geq 1$. We have $\zeta=\Xi_{\mathrm{bounce}}\circ\Lambda_{\mathrm{steep}}$. See Figure 2 for an illustration of 3.7. Figure 2: The zeta map as composition of bijections mediated by trees. The nodes of the same depth of the tree are grouped together. ###### Remark 1. Although we use the same notation, the maps $\Xi_{\mathrm{steep}}$ and $\Xi_{\mathrm{bounce}}$ are only special cases of the ones in [CFM20, Construction 3.10 and Equation 14]. More precisely, the central objects in [CFM20] are called _LAC trees_, which are plane trees with non-root nodes divided into regions algorithmically. We are only using the special case where the region $d$ consists of nodes of depth $d$, which is also the one used in [CFM20, Section 3.3] to prove 3.7. ###### Remark 2. The bijection $\Xi_{\mathrm{steep}}$ is classical, except that we do the contour walk from right to left instead of from left to right. The bijection $\Xi_{\mathrm{bounce}}$ is quite close to the classical bijection between plane trees and Łukasiewicz words (see [FS09, Section I.5.3]), which can be seen as sequences $(x_{0},x_{1},\ldots,x_{n-1})$ with $x_{i}\geq-1$ such that $\sum_{i=0}^{k}x_{i}\geq 0$ for $0\leq k<n-1$, and the sum of all $x_{i}$’s is $-1$. However, in the classical bijection of Łukasiewicz, nodes are processed in a depth-first order, while in $\Xi_{\mathrm{bounce}}$ they are processed in a breadth-first order. ## 4 Unit interval posets and the zeta map In the following, we give the two representations of unit interval posets as Dyck paths, and we explain how they relate to the bijections presented in Section 3, leading to a proof of our main result (4.6). 
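Both constructions are easy to implement, which gives a concrete way to compute the zeta map via Proposition 3.7. The following Python sketch (an illustration with ad hoc encodings, not part of the paper) represents a plane tree as a nested list of children, ordered from left to right, and a Dyck path as a word over N (north) and E (east):

```python
def steep_inverse(word):
    """Λ_steep: Dyck word → plane tree. The contour walk of Construction
    3.5 is clockwise, so the child created first is the rightmost one;
    hence the insert at position 0."""
    root = []
    stack = [root]
    for step in word:
        if step == "N":              # first traversal of an edge: go down
            child = []
            stack[-1].insert(0, child)
            stack.append(child)
        else:                        # second traversal: go back up
            stack.pop()
    return root

def bounce(tree):
    """Ξ_bounce: plane tree → Dyck word, reading off children counts level
    by level, right to left within each depth (Construction 3.6)."""
    counts, level = [], [tree]
    while level:
        for node in reversed(level):
            counts.append(len(node))
        level = [child for node in level for child in node]
    n = sum(counts)                  # number of non-root nodes
    # The last entry belongs to the deepest leftmost node, a leaf (count 0).
    return "".join("N" * c + "E" for c in counts[:n])

def zeta(word):
    """The zeta map, computed as Ξ_bounce ∘ Λ_steep (Proposition 3.7)."""
    return bounce(steep_inverse(word))

print(zeta("NNEE"))    # NENE
print(zeta("NNENEE"))  # NENNEE
```

For size $2$, the sketch exchanges NNEE and NENE, matching the fact that the zeta map sends a path with dinv $d$ to a path with area $d$.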
We start with the first representation, which was first defined implicitly using anti-adjacency matrices of unit interval posets in [SR03], involving only the poset structure. The following form was first given in [CG18]. ###### Construction 4.1. Given $P=([n],\preceq_{S})$ a unit interval poset with $S\subset\mathbb{R}$, we define a Dyck path $D$ as follows. Let $S^{+}=\\{x+1\mid x\in S\\}$. Without loss of generality, we can choose $S$ such that $S\cap S^{+}=\varnothing$. Then suppose that $S\cup S^{+}=\\{y_{1}<\cdots<y_{2n}\\}$. For $i\in[2n]$, the $i$-th step of $D$ is $\uparrow$ if $y_{i}\in S$, and is $\rightarrow$ otherwise. We define $\varphi(P)=D$. See Figure 3 for an example. Although defined here using the set $S$, the map $\varphi$ in 4.1 does not depend on the choice of $S$; this form was given in [CG18] and proved in Lemma 5.7 therein to be equivalent to the original definition. Figure 3: Example of the bijections $\varphi$ and $\psi$ ###### Proposition 4.2. We have $\varphi\circ\Xi_{\mathrm{poset}}=\Xi_{\mathrm{bounce}}$, and $\varphi$ is a bijection from $\mathcal{P}_{n}$ to $\mathcal{D}_{n}$ for all $n\geq 1$. ###### Proof. Let $T\in\mathcal{T}_{n}$ and $D=\Xi_{\mathrm{bounce}}(T)$. We take $P=\Xi_{\mathrm{poset}}(T)=([n],\preceq_{S})\in\mathcal{P}_{n}$ with $S=\\{x_{1}<\cdots<x_{n}\\}$ to be that given in 3.2. Let $D^{\prime}=\varphi(P)$. To show that $D^{\prime}=D$, we first notice that a Dyck path is determined by the number of north steps on each line $x=k$ for $0\leq k\leq n-1$. Thus, we only need to show that for each $k$, $D^{\prime}$ has the same number of north steps on $x=k$ as in $D$ given in 3.6. For the line $x=0$, we observe that the elements of $S$ strictly smaller than $x_{1}+1$ must be those with integer part equal to $1$, thus those coming from the children of the root of $T$. 
For $1\leq k\leq n-1$, from 4.1, we know that the number of north steps on the line $x=k$ is the number of elements $x_{i}$ such that $x_{k}+1<x_{i}<x_{k+1}+1$, as $x_{k}+1$ corresponds to the $k$-th east step in $D$. Let $u_{k}$ be the node in $T$ corresponding to $x_{k}$. By 3.3, the nodes in $T$ corresponding to such $x_{i}$ are children of $u_{k}$, meaning that the total number of north steps on $x=k$ is the number of children of $u_{k}$. Furthermore, suppose that $u_{k}$ is of depth $\ell$; then we can write $k=s_{\ell}-r$ with $0\leq r<\alpha_{\ell}$ as in 3.6. We know that $u_{k}$ is the $(r+1)$-st node of depth $\ell$ in $T$, as in 3.1 the values corresponding to nodes of the same depth are decreasing from left to right. We thus conclude that $D=D^{\prime}$, and $\varphi\circ\Xi_{\mathrm{poset}}=\Xi_{\mathrm{bounce}}$. As both $\Xi_{\mathrm{bounce}}$ (Lemma 3.18 in [CFM20]) and $\Xi_{\mathrm{poset}}$ (3.4) are bijections, we conclude that $\varphi$ is also a bijection. ∎ The second representation was first defined in [GPMR14] for $(3+1)$-free posets, and it takes a simpler form for unit interval posets. ###### Construction 4.3. For $n\geq 1$, let $D\in\mathcal{D}_{n}$ be a Dyck path of size $n$. Its _area vector_, denoted by $\operatorname{Area}(D)$, is the vector $(a_{1},\ldots,a_{n})$ with $a_{i}$ the number of full unit squares with the upper edge on $y=i$ between $D$ and the diagonal $y=x$. We then define a poset $P=([n],\preceq)$ by taking $i\prec j$ for $i,j\in[n]$ such that * • Either $a_{i}+2\leq a_{j}$; * • Or $a_{i}+1=a_{j}$ and $i<j$. We define $\psi(D)=P$, and denote its inverse by $\psi^{-1}$. See Figure 3 for an example. The following result ensures that $\psi$ is well-defined. ###### Proposition 4.4 (Special case of Theorem 5.2 in [MMS]). The map $\psi$ is a bijection from Dyck paths to unit interval posets preserving sizes. ###### Proposition 4.5. We have $\psi=\Xi_{\mathrm{poset}}\circ\Lambda_{\mathrm{steep}}$. ###### Proof. 
Let $P=\psi(D)$, $T=\Lambda_{\mathrm{steep}}(D)$ and $P^{\prime}=\Xi_{\mathrm{poset}}(T)$. We write $P=([n],\preceq)$ and $P^{\prime}=([n],\preceq_{S})$, with $S$ constructed in 3.2. For $i\in[n]$, let $u_{i}$ be the $i$-th non-root node in $T$ that is visited in the clockwise contour walk of $T$. Suppose that $\operatorname{Area}(D)=(a_{1},\ldots,a_{n})$. By 3.5, the depth of $u_{i}$ is $a_{i}+1$. We now define a partial order $\preceq_{T}$ on non-root nodes in $T$ by taking $u_{i}\preceq_{T}u_{j}$ if and only if $i\preceq j$ in $P$. We denote by $d(v)$ the depth of $v$ in $T$. By 4.3, two non-root nodes $v,w$ in $T$ satisfy $v\prec_{T}w$ if and only if * • Either $d(v)+2\leq d(w)$; * • Or $d(v)+1=d(w)$, and the parent of $w$ is either $v$ or on the left of $v$. Now, for $v$ a non-root node in $T$, we take $x_{v}$ defined in (1). Suppose that there are two non-root nodes $v,w$ of $T$ such that $x_{v}+1<x_{w}$. Then we have $d(v)+1\leq d(w)$. There are two possibilities: either $d(v)+1=d(w)$, or $d(v)+2\leq d(w)$. In the first case, let $w^{\prime}$ be the parent of $w$; we have $x_{w^{\prime}}+1<x_{w}$. By 3.3, we know that $x_{v}\leq x_{w^{\prime}}$. From (1), we know that either $v=w^{\prime}$ or $v$ is on the right of $w^{\prime}$, as $x_{v}$ is decreasing from left to right for nodes in $T$ of the same depth. Combining with the second case, we conclude that $x_{v}+1<x_{w}$ if and only if $v\prec_{T}w$. By 3.2, we have $P\cong P^{\prime}$, which concludes the proof. ∎ ###### Theorem 4.6 (Conjecture 7.1 in [MMS]). We have $\varphi\circ\psi=\zeta$. ###### Proof. This is a consequence of 3.7, 4.2 and 4.5. ∎ ## References * [AKOP02] G. E. Andrews, C. Krattenthaler, L. Orsina, and P. Papi. ad-nilpotent $\mathfrak{b}$-ideals in ${\rm sl}(n)$ having a fixed class of nilpotence: Combinatorics and enumeration. Trans. Amer. Math. Soc., 354(10):3835–3853, 2002. * [CFM20] C. Ceballos, W. Fang, and H. Mühle. The Steep-Bounce Zeta Map in Parabolic Cataland. J. Combin. 
Theory Ser. A, 172:105210, 2020. * [CG18] A. Chavez and F. Gotti. Dyck paths and positroids from unit interval orders. J. Combin. Theory Ser. A, 154:507–532, 2018. * [FS09] Ph. Flajolet and R. Sedgewick. Analytic combinatorics. Cambridge University Press, Cambridge, 2009. * [GPMR14] M. Guay-Paquet, A. H. Morales, and E. Rowland. Structure and Enumeration of $(3+1)$-Free Posets. Ann. Comb., 18(4):645–674, 2014. * [GST] F. Gélinas, A. Segovia, and H. Thomas. Proof of a conjecture of Matherne, Morales, and Selover on encodings of unit interval orders. arXiv:2212.12171 [math.CO]. * [Hag08] J. Haglund. The $q,t$-Catalan Numbers and the Space of Diagonal Harmonics, volume 41. American Mathematical Society, Providence, RI, 2008. * [MMS] J. P. Matherne, A. H. Morales, and J. Selover. The Newton polytope and Lorentzian property of chromatic symmetric functions. arXiv:2201.07333 [math.CO]. * [Sco64] D. Scott. Measurement structures and linear inequalities. J. Math. Psychol., 1(2):233–247, 1964. * [SR03] M. Skandera and B. Reed. Total nonnegativity and $(3+1)$-free posets. J. Combin. Theory Ser. A, 103(2):237–256, 2003. * [SS93] R. P. Stanley and J. R. Stembridge. On immanants of Jacobi-Trudi matrices and permutations with restricted position. J. Combin. Theory Ser. A, 62(2):261–279, 1993. * [WF57] R. L. Wine and J. E. Freund. On the Enumeration of Decision Patterns Involving $n$ Means. Ann. Math. Stat., 28(1):256–259, 1957.
# Experimental Contexts _Can_ Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently Kanishka Misraτ Allyson Ettingerα Kyle Mahowaldτ τThe University of Texas at Austin αAllen Institute for Artificial Intelligence ###### Abstract Recent zero-shot evaluations have highlighted important limitations in the abilities of language models (LMs) to perform meaning extraction. However, it is now well known that LMs can demonstrate radical improvements in the presence of experimental contexts such as in-context examples and instructions. How well does this translate to previously studied meaning-sensitive tasks? We present a case study on the extent to which experimental contexts can improve LMs’ robustness in performing property inheritance—predicting semantic properties of novel concepts, a task that they have been previously shown to fail on. Upon carefully controlling the nature of the in-context examples and the instructions, our work reveals that they can indeed lead to non-trivial property inheritance behavior in LMs. However, this ability is inconsistent: with a minimal reformulation of the task, some LMs were found to pick up on shallow, non-semantic heuristics from their inputs, suggesting that the computational principles of semantic property inference are yet to be mastered by LMs. ## 1 Introduction Carefully controlled behavioral analyses on meaning-sensitive tasks have revealed holes in the ability of language models (LMs) to demonstrate robust meaning extraction and use (Pandia and Ettinger, 2021; Elazar et al., 2021; Schuster and Linzen, 2022; Misra et al., 2023; Kim and Schuster, 2023, i.a.). 
However, since a large subset of these investigations uses zero-shot evaluation as the primary methodology, there are growing concerns that they do not paint a complete picture of LMs’ abilities (Lampinen, 2022; Sinclair et al., 2022; Sinha et al., 2023). Conclusions that LMs lack a particular ability may be overhasty if it turns out the ability is easily accessed through in-context learning, different question formulations, or particular instructions (Lampinen, 2022; Wei et al., 2022). Figure 1: We prompt LMs with in-context examples that are compatible with both robust property inheritance and position-based heuristics. At test time, we evaluate the model on cases where the heuristics support desirable behavior and on cases where they do not. We use stimuli from comps and its reformulation as a QA task (comps-qa). Our focus in this paper is a particularly challenging dataset for meaning-sensitive behavior: comps (Misra et al., 2023), a dataset of minimal pair sentences that tests the ability of LMs on property knowledge of everyday concepts (a beaver/gorilla has a flat tail) and their inheritance for novel concepts (a wug is a beaver/gorilla. therefore a wug has a flat tail). Contemporary LMs failed miserably on the hardest subset of the comps stimuli, the examples of which contain two novel concepts (wug vs. dax), where only one of them inherits the target property (has a flat tail): * (1) A wug is a beaver. A dax is a gorilla. Therefore, a wug/dax has a flat tail. Given the success of LMs on a wide variety of complicated tasks, their utter failure on this seemingly straightforward task remains puzzling. Here, we systematically explore comps on five modern LMs ranging from 1.5B to 13B parameters, varying (a) whether models are evaluated zero-shot or with multiple examples and (b) whether or not instructions are present. 
Unlike other minimal-pair datasets, using comps in an in-context learning setting is non-trivial (and thus potentially informative). This is because the task can be solved using a position-based heuristic. For example, in one subset of comps, the target property is always attached to the first novel concept—like in (1). Importantly, models’ failures on comps were shown to be in part a result of their tendencies towards heuristic behavior: the performance of LMs is particularly bad when the distractor (a dax is a gorilla) is recent—i.e., autoregressive LMs show a recency bias in attributing properties to novel concepts. In that sense, comps follows a rich body of work in which tasks are set up such that two types of generalization mechanisms can lead to the same prediction, but only one of them is desirable (McCoy et al., 2019, 2020; Warstadt et al., 2020b; Mueller et al., 2022; Si et al., 2023; Mueller et al., 2023). We find that experimental contexts, as operationalized using in-context examples and instructions, can in fact lead to robust improvements in LMs’ property inheritance behavior as measured by the stimuli in comps. However, this improvement comes with a caveat: with a minimal reformulation of comps into a QA task, where there is a direct link between the LMs’ output space and the features of the input that control the heuristic, LMs show a strong preference towards the heuristic, and are therefore at chance. This discrepancy suggests that the improvements on the original task do not necessarily indicate that the models have successfully mastered the reasoning ability required to perform property inheritance, which remains a key challenge for them. ## 2 Methodology #### Dataset We use the most difficult subset of the comps dataset (Misra et al., 2023)—comps-wugs-dist—for our experiments. This dataset contains 13,828 sentence pairs of the form similar to (1), constructed using 152 animal concepts and 991 properties. 
#### Stimuli re-design We take a number of steps to minimize noise from other (likely uninterpretable) heuristics beyond the ones we have set out to target. First, we enforce that the concepts and properties that appear in the in-context examples are disjoint from the ones that are used in tests. To this end, we sample 50 concepts and their relevant properties and reserve them for our in-context examples, leaving the rest to be sampled for our test set. We also enforce this constraint for our novel concepts—i.e., all in-context examples contain different nonce words, and the collection of nonce words for the in-context examples and the test set is disjoint. Furthermore, we counterbalance the nonce words in the test set such that having a bias towards one of them would lead to chance performance. We additionally use multiple different sets of in-context examples, to add variability and to ensure that the results are not only due to one particular choice of in-context examples. In total, we use 10 different in-context learning example sets, each containing 6 different comps stimuli. For our test set, we use a constant set of 256 unique pairs sampled from our pool of stimuli containing unused concepts and properties. #### Heuristics Our most important design decision is to consider two distinct sets of stimuli—each separately making available one of the two types of heuristics that the LMs could rely upon: first-correct and recent-correct, where the property is inherited by the first and the most recent novel concept, respectively. That is, for the same set of in-context examples, we have a version where the first concept is correct, like in (1), and one where the most recent concept is correct: * (2) A wug is a gorilla. A dax is a beaver. Therefore, a wug/dax has a flat tail. For each type of in-context stimuli, we similarly have two versions of test stimuli: one that is consistent with the target heuristic, and one that is not. 
That is, a test example that is consistent with the first-correct heuristic will also have its first concept be the one that inherits the property in question, while one that is inconsistent will have the most recent concept be the inheritor of the property. Therefore, a model that shows a preference for a given heuristic will succeed only on one test set and fail on the other, while a model that is robust to the heuristics will succeed on both. #### Reformulation into QA The original comps stimuli test for property inheritance using declarative statements, where models are tested for the log-probability they assign to the property (has a flat tail) given either of the two concepts (wug vs. dax). Here we additionally consider an alternate formulation of comps as a question answering task (comps-qa), where we make the property explicit in the prompt to the model and instead ask which of the two concepts possesses it: * (3) A wug is a beaver. A dax is a gorilla. Question: Which one of them has a flat tail? Answer: wug/dax Since the shallow heuristics we consider are controlled by the relative ordering of the novel concepts, this formulation of the task directly allows us to link the models’ output space (the novel concepts) to the heuristics (positions). #### Testing setup For the original comps setting we follow Misra et al. (2023) and compare the log-probability of the property phrase given the correct vs. the incorrect prefix. For comps-qa, however, since we have a constant prefix (same premises and question), we evaluate the relative log-probability of the two novel concepts, only one of which is the correct answer. Accuracy in both cases is the proportion of cases in which the correct surface form is assigned the higher log-probability. Since we use pairwise comparisons throughout, chance performance is 50%. 
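To make the testing setup concrete, here is a minimal Python sketch of the pairwise evaluation and of counterbalanced stimulus construction. It is an illustration, not the paper's code (which scores Hugging Face models); the templates, item keys, and the toy recency-biased scorer are our own assumptions:

```python
def evaluate_pairwise(items, score):
    """Pairwise accuracy for comps-style minimal pairs: an item counts as
    correct when the correct surface form receives the higher score.
    `score(prefix, continuation)` is any callable returning a
    log-probability-like number (e.g., backed by an LM)."""
    correct = 0
    for item in items:
        good = score(item["prefix_good"], item["continuation_good"])
        bad = score(item["prefix_bad"], item["continuation_bad"])
        correct += good > bad
    return correct / len(items)

def make_comps_item(nonce_a, nonce_b, cat_a, cat_b, prop, first_correct):
    """One comps minimal pair: the property phrase is scored after the
    correct vs. the incorrect novel concept (illustrative templates)."""
    premises = f"A {nonce_a} is a {cat_a}. A {nonce_b} is a {cat_b}. "
    winner, loser = (nonce_a, nonce_b) if first_correct else (nonce_b, nonce_a)
    return {
        "prefix_good": premises + f"Therefore, a {winner}",
        "prefix_bad": premises + f"Therefore, a {loser}",
        "continuation_good": f" {prop}.",
        "continuation_bad": f" {prop}.",
    }

# A toy scorer standing in for an LM: it implements the recency heuristic,
# always preferring the most recently mentioned novel concept.
def recency_scorer(prefix, continuation):
    premises, concept = prefix.rsplit("Therefore, a ", 1)
    return premises.rindex(f"A {concept} is")  # later mention → higher score

items_first = [make_comps_item("wug", "dax", "beaver", "gorilla",
                               "has a flat tail", first_correct=True)]
items_recent = [make_comps_item("wug", "dax", "gorilla", "beaver",
                                "has a flat tail", first_correct=False)]
# The heuristic-driven scorer fails when the first concept is correct and
# succeeds when the recent one is — averaging to chance when counterbalanced.
print(evaluate_pairwise(items_first, recency_scorer),
      evaluate_pairwise(items_recent, recency_scorer))  # 0.0 1.0
```

Counterbalancing the two heuristic-consistent test sets, as in the design above, is exactly what forces a purely position-driven model to 50% on average.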
#### Instructions We consider four different kinds of instruction templates, with varying levels of detail (see the appendix) per formulation (comps and comps-qa). In our experiments we report results on the instruction that gives the best average performance for a given model. #### LMs tested We evaluated 5 different open-source LMs, all of which are decoder-only, and were accessed using the huggingface hub (Wolf et al., 2020): GPT-2 XL (Radford et al., 2019); OPT-6.7b (Zhang et al., 2022); Llama-2 (we used the 7b and the 13b versions; Touvron et al., 2023); and Mistral-7b (Jiang et al., 2023). Details about the models can be found in the appendix. Figure 2: Overall results from our experiments testing LMs on comps and comps-qa using in-context examples, with and without instructions. Results are aggregated across both heuristics: first-correct and recent-correct. Error bars are over different sets of in-context examples. All models start off near chance in the 0-shot case, but many improve as more examples are given. Some (e.g., GPT-2 XL on comps-qa) show strongly heuristically driven behavior, as evidenced by the diverging performance on items where heuristics work and those where they do not. Figures 3 and 4 show fine-grained results. ## 3 Analyses and Results We evaluate on comps and comps-qa, with and without instructions. In each case, we progressively supply 0 through 6 in-context examples, allowing us to track the dynamics of the models’ performance with an increasing number of demonstrations. Together with our separate types of test sets and heuristics encoded in the in-context examples, along with five different instruction settings (four with and one without), we run 2420 experiments per LM. We hypothesize that LMs would be more sensitive to the positional heuristics in comps-qa because of the clear link between their output space and the relative position of the novel concepts—the feature that controls our target heuristics. 
Figure 2 shows accuracies of the tested LMs on our four different comps settings as a function of the number of in-context examples provided to them, both for cases where the heuristics are consistent with success on the test set and for cases where they are not. We also show an additional curve denoting the average performance across the two types of test sets to paint an overall picture of the models’ performance. In this figure, the extent to which a model relies on heuristics is indicated by the gap between the dotted ($\newmoon$) and the dashed ($\blacktriangle$) lines. A model that is robust to the heuristics will have curves of both colors rise above chance, with no gap between the two, while one that is prone to using heuristics will have its dotted ($\newmoon$) curve be substantially greater than its dashed ($\blacktriangle$) curve. #### Experimental context can improve attribution of properties to concepts… On comps, models unsurprisingly start off at chance performance on average, corroborating the previous findings of Misra et al. (2023). However, in the presence of in-context examples and instructions, they are able to improve monotonically as the number of in-context examples increases. It is worth noting that Llama-2-13b does occasionally show a slight preference for heuristics in the absence of instructions (e.g., 84% vs. 62% when prompted with 2 examples). An intermediate conclusion that we draw here is that LMs can indeed demonstrate non-trivial property inheritance on observing a few examples that reflect that behavior. #### …but not the attribution of concepts to properties While experimental context seems to aid models in attributing properties to the right concept in context, the same does not hold on comps-qa. 
Similar to comps, models start off at chance performance on average in the zero-shot setup; however, unlike in the case of comps, LMs seem to consistently prefer the heuristics available in the prompt, showing worse-than-chance performance on cases where the test set does not follow the heuristic. This is most apparent for GPT-2 XL, OPT-6.7b, and Llama-2-7b—here the gap between the accuracy for cases where heuristics support performance on the test set and the accuracy for cases where they do not almost always grows with an increase in the number of exemplars, on average. This is especially notable for OPT-6.7b, which attains perfect performance on cases where the heuristics match up with the test set while at the same time being close to 0% on cases where they do not. A notable exception to this trend is Mistral-7b, which seems to be resilient to the spurious heuristics, showing a net-positive improvement from the zero-shot case, especially in the presence of instructions. Nevertheless, in the absence of instructions, it too shows a slight preference for position-based heuristics—for instance, its accuracy with 6 in-context exemplars is 82% when the heuristics support success on the test set and 65% when they oppose it. Our results suggest that LMs are more likely to show behavior that is compatible with the use of positional heuristics when their output space (choice between the two novel concepts) has a clear connection with positional artifacts in their input (relative ordering of the novel concepts). This is consistent with our hypothesis in about 8 out of 10 cases. When this link is not clear and models must instead predict likely properties given a novel concept (i.e., in comps), instructions and in-context examples do seem to lead to robust performance. 
It is important to note that instructions alone do not always account for the observed improvement—LMs’ performance in zero-shot settings is consistently still at chance in all cases, suggesting that it is the in-context examples that critically alter models’ output distributions to support desirable property inference behavior. ## 4 Conclusion In this work, we investigated the extent to which in-context examples and instructions—key components that drive impressive performance in contemporary LMs—can overcome important limitations of LMs on tests that have poked holes in their ability to extract conceptual meaning from text. As a case study, we analyzed how well such experimental contexts can improve LM abilities to perform property inheritance (Murphy, 2002; Misra et al., 2023) in context—binding novel concepts to existing concepts, and endowing them with valid property inferences as a result. Our findings suggest that mastery of this ability has yet to be robustly achieved, and that LMs in general are still prone to using shallower patterns in their context (when available) rather than systematically extracting conceptual meaning. At the same time, exploring precisely what makes Mistral less susceptible to heuristics will be useful for designing more robust LMs, which we leave for future work. ## 5 Limitations #### Single dataset A clear limitation of this work is that it exclusively focuses on a single dataset: comps (Misra et al., 2023). So, a question that arises here is to what extent our findings are localized to the chosen dataset vs. meaning-sensitive evaluations in general. 
This would require a further non-trivial, non-straightforward amount of work, since: (1) different meaning-sensitive evaluations focus on different (though equally useful) operationalizations of meaning; and more importantly (2) not all prior work in this area focuses on a standardized and well-defined usage of heuristics that is directly transferable to the experimental setup we have used in this work (following McCoy et al., 2019, 2020; Warstadt et al., 2020b; Mueller et al., 2022; Si et al., 2023). We do hope that our work contributes to the larger-scale vision of carefully benchmarking different types of meaning extraction abilities in LMs in a controlled manner. #### Lack of mechanistic insight Our work continues the long-standing precedent of using carefully constructed behavioral experiments to draw conclusions about the competence of LMs (Linzen et al., 2016; Gulordava et al., 2018; Futrell et al., 2019; Ettinger, 2020; Warstadt et al., 2020a). However, recent works have made impressive strides in localizing the kinds of computations that give rise to the observed behavior in LMs (Hanna et al., 2023; Wang et al., 2023, i.a.). Therefore, it is entirely possible that our conclusions about the precise nature of computations carried out by LMs can be greatly strengthened when supplemented by the methods developed in these aforementioned works. #### Single Language Finally, this work only focuses on property inheritance problems stated in the English language. This does little to contribute towards diversity in NLP research. ## 6 Acknowledgments We thank Tom McCoy and Andrew Lampinen for providing comments on a previous version of the draft. (KM)² acknowledge funding from NSF Grant 2104995 awarded to Kyle Mahowald. ## References * Elazar et al. (2021) Yanai Elazar, Hongming Zhang, Yoav Goldberg, and Dan Roth. 2021. Back to square one: Artifact detection, training and commonsense disentanglement in the Winograd schema. 
In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 10486–10500, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. * Ettinger (2020) Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. _Transactions of the Association for Computational Linguistics_ , 8:34–48. * Futrell et al. (2019) Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 32–42, Minneapolis, Minnesota. Association for Computational Linguistics. * Gulordava et al. (2018) Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1195–1205, New Orleans, Louisiana. Association for Computational Linguistics. * Hanna et al. (2023) Michael Hanna, Ollie Liu, and Alexandre Variengien. 2023. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. In _Thirty-seventh Conference on Neural Information Processing Systems_. * Hu et al. (2020) Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 1725–1744, Online. Association for Computational Linguistics. * Jiang et al. 
(2023) Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. _arXiv preprint arXiv:2310.06825_. * Kim and Schuster (2023) Najoung Kim and Sebastian Schuster. 2023. Entity tracking in language models. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 3835–3855, Toronto, Canada. Association for Computational Linguistics. * Lampinen (2022) Andrew Kyle Lampinen. 2022. Can language models handle recursively nested grammatical structures? a case study on comparing models and humans. _arXiv preprint arXiv:2210.15303_. * Linzen et al. (2016) Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. _Transactions of the Association for Computational Linguistics_ , 4:521–535. * McCoy et al. (2020) R Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? sources of hierarchical inductive bias in sequence-to-sequence networks. _Transactions of the Association for Computational Linguistics_ , 8:125–140. * McCoy et al. (2019) Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3428–3448, Florence, Italy. Association for Computational Linguistics. * Misra (2022) Kanishka Misra. 2022. minicons: Enabling flexible behavioral and representational analyses of transformer language models. _arXiv preprint arXiv:2203.13112_. * Misra et al. (2023) Kanishka Misra, Julia Rayz, and Allyson Ettinger. 2023. COMPS: Conceptual minimal pair sentences for testing robust property knowledge and its inheritance in pre-trained language models. 
In _Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 2928–2949, Dubrovnik, Croatia. Association for Computational Linguistics. * Mueller et al. (2022) Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster. 2022. Coloring the blank slate: Pre-training imparts a hierarchical inductive bias to sequence-to-sequence models. In _Findings of the Association for Computational Linguistics: ACL 2022_ , pages 1352–1368, Dublin, Ireland. Association for Computational Linguistics. * Mueller et al. (2023) Aaron Mueller, Albert Webson, Jackson Petty, and Tal Linzen. 2023. In-context learning generalizes, but not always robustly: The case of syntax. _arXiv preprint arXiv:2311.07811_. * Murphy (2002) Gregory L Murphy. 2002. _The Big Book of Concepts_. MIT press. * Pandia and Ettinger (2021) Lalchand Pandia and Allyson Ettinger. 2021. Sorting through the noise: Testing robustness of information processing in pre-trained language models. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1583–1596, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. * Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. _OpenAI_. * Schuster and Linzen (2022) Sebastian Schuster and Tal Linzen. 2022. When a sentence does not introduce a discourse entity, transformer-based models still sometimes refer to it. In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 969–982, Seattle, United States. Association for Computational Linguistics. * Si et al. (2023) Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, and He He. 2023. Measuring inductive biases of in-context learning with underspecified demonstrations. 
In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 11289–11310, Toronto, Canada. Association for Computational Linguistics. * Sinclair et al. (2022) Arabella Sinclair, Jaap Jumelet, Willem Zuidema, and Raquel Fernández. 2022. Structural persistence in language models: Priming as a window into abstract language representations. _Transactions of the Association for Computational Linguistics_ , 10:1031–1050. * Sinha et al. (2023) Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, Roger Levy, and Adina Williams. 2023. Language model acceptability judgements are not always robust to context. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 6043–6063, Toronto, Canada. Association for Computational Linguistics. * Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_. * Wang et al. (2023) Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. 2023. Interpretability in the wild: a circuit for indirect object identification in GPT-2 small. In _The Eleventh International Conference on Learning Representations_. * Warstadt et al. (2020a) Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020a. BLiMP: The benchmark of linguistic minimal pairs for English. _Transactions of the Association for Computational Linguistics_ , 8:377–392. * Warstadt et al. (2020b) Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020b. Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually). 
In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 217–235, Online. Association for Computational Linguistics. * Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information Processing Systems_ , 35:24824–24837. * Wilcox et al. (2019) Ethan Wilcox, Roger Levy, and Richard Futrell. 2019. Hierarchical representation in neural language models: Suppression and recovery of expectations. In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 181–190, Florence, Italy. Association for Computational Linguistics. * Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics. * Zhang et al. (2022) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. OPT: Open pre-trained transformer language models. _arXiv preprint arXiv:2205.01068_. ## Appendix 0.A Dataset and implementation details Our experiments use the stimuli from comps, released with an MIT License by Misra et al. (2023), but with a modification that involves changing of the nonce words to obey the constraint that the in-context examples all have different nonce word pairs. 
To this end, we use the following nonce words:

* • In-context examples: wug, dax, fep, zek, blick, toma, kiki, glorp, bova, zup, tufa, flib (counter-balanced)
* • Test examples: gek, wif (counter-balanced)

### 0.A.1 Methodological details

Following comps, as well as the precedent set by a number of previous minimal pair analyses (Linzen et al., 2016; Gulordava et al., 2018; Futrell et al., 2019; Wilcox et al., 2019; Warstadt et al., 2020a; Hu et al., 2020), we use a forced choice task to evaluate our LM subjects. Like in comps, we compare the log-probability of the property phrase (here, _has a flat tail_) given the choice of left contexts (which indicate whether the right vs. the wrong concept has the property). For example, we measure:

$\log P_{\theta}(\textit{has a flat tail}\mid\textit{a gek is a beaver. a wif is a gorilla. therefore, a gek/wif}),$

and for comps-qa, we compare the relative probabilities of the two novel concepts given a fixed left prefix which contains a question about the property. For example, we measure:

$\log P_{\theta}(\textit{gek/wif}\mid\textit{a gek is a beaver. a wif is a gorilla. Question: Which one of them has a flat tail? Answer:}).$

In both cases above, gek is the concept that should inherit the property.
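The forced-choice comparison above reduces to checking which of two contexts yields the higher log-probability for the shared continuation. A minimal sketch of that decision rule, using made-up log-probabilities in place of real model scores (our experiments obtained these from minicons):

```python
def choose(logp_correct: float, logp_incorrect: float) -> bool:
    """Forced choice: the model 'succeeds' when it assigns a higher
    log-probability to the continuation paired with the correct concept."""
    return logp_correct > logp_incorrect

# Hypothetical log-probabilities for illustration (not real model outputs):
# log P(has a flat tail | ... therefore, a gek) vs. (... therefore, a wif)
trials = [(-12.3, -14.1), (-9.8, -9.9), (-11.0, -10.2)]
accuracy = sum(choose(c, i) for c, i in trials) / len(trials)
print(accuracy)  # fraction of trials where the correct concept wins
```

Accuracy over many such minimal pairs is the quantity plotted in our figures.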
While these examples show the zero-shot case, cases with in-context examples and instructions simply add more context to the prefix; therefore, the surface form of the output space remains the same regardless of the number of in-context examples or the presence of instructions. Log-probabilities for all models were accessed using minicons (Misra, 2022; https://github.com/kanishkamisra/minicons), a library that wraps around transformers (Wolf et al., 2020) by Hugging Face and is written in PyTorch. For our experiments with Llama-2-13b, we quantize the model to 4 bits in order to fit it onto a single GPU. All experiments were run on a cluster with 4 NVIDIA A40 GPUs, though each individual experiment on a model was computed on a single A40 GPU.

### 0.A.2 Model Metadata

Model | Params | Pre-training Tokens | Vocab size
---|---|---|---
GPT-2 XL | 1.5B | 8B | 50,257
OPT-6.7b | 6.7B | 180B | 50,272
Llama-2-7b | 7B | 2T | 32,000
Llama-2-13b | 13B | 2T | 32,000
Mistral-7b | 7B | ? | 32,000

Table 1: Overview of the LMs used in this work. ‘?’ indicates that the given value was not made available in the LM’s release.

Table 1 shows the LMs used in this work, along with their total parameters, tokens encountered during training, and vocabulary size.

## Appendix 0.B Instructions

Tables 2, 3, 4, 5 show our instruction templates.
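Before listing the templates, here is a hedged sketch of how the `{exemplars}` and `{test-stimulus}` slots are filled in; `build_prompt` is a hypothetical helper for illustration, not part of our released code, and the strings are abbreviated:

```python
def build_prompt(instruction: str, exemplars: list, test_stimulus: str) -> str:
    """Assemble a prompt: instruction, optional in-context examples, test item.
    In the zero-shot case the exemplar list is simply empty."""
    parts = [instruction]
    parts.extend(exemplars)          # empty list -> zero-shot
    parts.append(test_stimulus)
    return "\n\n".join(parts)

prompt = build_prompt(
    "Given a pair of statements that introduce novel entities as types of "
    "real world animals, write a true statement about the properties of the "
    "novel entities:",
    ["a wug is a beaver. a dax is a gorilla. therefore, a wug has a flat tail."],
    "a gek is a beaver. a wif is a gorilla. therefore, a",
)
print(prompt.count("\n\n"))  # 2 separators: instruction | exemplar | test
```

Adding more in-context examples only lengthens the middle list; the output space seen by the LM is unchanged.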
comps version | Instruction Template
---|---
comps | Given a pair of statements that introduce novel entities as types of real world animals, write a true statement about the properties of the novel entities: {exemplars} (omitted in zero-shot) {test-stimulus}
comps-qa | Given a pair of statements that introduce novel entities as types of real world animals, answer the question that follows: {exemplars} (omitted in zero-shot) {test-stimulus}

Table 2: Instructions for comps and comps-qa with instruction type: “minimal”

comps version | Instruction Template
---|---
comps | Some aliens have come to earth, and it turns out they have their own language for talking about our animals here on Earth. Your job is to help the aliens learn about our Earthling animals by giving them some information about the animals. Let’s get started: {exemplars} (omitted in zero-shot) {test-stimulus}
comps-qa | Some aliens have come to earth, and it turns out they have their own language for talking about our animals here on Earth. Your job is to help the aliens learn about our Earthling animals by answering some questions about them. Let’s get started: {exemplars} (omitted in zero-shot) {test-stimulus}

Table 3: Instructions for comps and comps-qa with instruction type: “aliens”

comps version | No. of shots | Instruction Template
---|---|---
comps | Zero-shot | It is important to know and reason about the properties of entities in the world. The following is a pair of premise statements that introduce novel entities as new types of real world animals. Your task is to make a conclusion about the properties of one of the entities by reasoning over the premise statements. {test_stimulus}
comps | Few-shot | It is important to know and reason about the properties of entities in the world. The following example(s) show a pair of premise statements that introduce novel entities as new types of real world animals, followed by another statement that attributes a property to one of the entities introduced in the premise statements. Examples: {examples} Here is another pair of premise statements. Your task is to make a conclusion about the properties of one of the entities by reasoning over the premise statements. {test_stimulus}
comps-qa | Zero-shot | It is important to know and reason about the properties of entities in the world. The following is a pair of premise statements that introduce novel entities as new types of real world objects. The statements are followed by a question that asks which novel entity in the premise a specific property can be attributed to. Answer the question by reasoning over the premise statements. {test_stimulus}
comps-qa | Few-shot | It is important to know and reason about the properties of entities in the world. The following example(s) show a pair of premise statements that introduce novel entities as new types of real world animals. The statements are followed by a question that asks which novel entity in the premise a specific property can be attributed to, and the answer to the question, obtained by reasoning over the premise statements. Examples: {examples} Here is another pair of premise statements. Answer the question that follows. {test_stimulus}

Table 4: Instructions for comps and comps-qa with instruction type: “Detailed-1”

comps version | No. of shots | Instruction Template
---|---|---
comps | Zero-shot | It is important to know and reason about the properties of entities in the world. The following is a pair of premise statements that introduce novel entities as new types of real world animals. Your task is to write a true statement about the properties of the novel entities. {test_stimulus}
comps | Few-shot | It is important to know and reason about the properties of entities in the world. The following example(s) show a pair of premise statements that introduce novel entities as new types of real world animals, followed by another statement that attributes a property to one of the entities introduced in the premise statements. Examples: {examples} Here is another pair of premise statements. Your task is to write a true statement about the properties of the novel entities. {test_stimulus}
comps-qa | Zero-shot | It is important to know and reason about the properties of entities in the world. The following is a pair of premise statements that introduce novel entities as new types of real world animals. The statements are followed by a question that asks which of the introduced entities a specific property can be attributed to. Answer the question by reasoning over the premise statements. {test_stimulus}
comps-qa | Few-shot | It is important to know and reason about the properties of entities in the world. The following example(s) show a pair of premise statements that introduce novel entities as new types of real world animals. The statements are followed by a question that asks which of the introduced entities a specific property can be attributed to, and the answer to the question, obtained by reasoning over the premise statements. Examples: {examples} Here is another pair of premise statements. Answer the question that follows. {test_stimulus}

Table 5: Instructions for comps and comps-qa with instruction type: “Detailed-2”

## Appendix 0.C Fine-grained results

While Figure 2 shows results aggregated over both types of heuristics that we have used in this work, we additionally display finer-grained, heuristics-wise results in this section. Again, in each of these plots, the extent to which a model relies on a heuristic is indicated by the gap between the dotted ($\newmoon$) and the dashed ($\blacktriangle$) lines. This is now separately shown for each of our heuristics.
Figure 3 shows results on comps with and without instructions for both heuristics, and Figure 4 similarly shows results on comps-qa with and without instructions for both heuristics.

(a) comps (b) comps with Instructions
Figure 3: Fine-grained results on COMPS as a function of the number of in-context examples (with and without instructions).

(a) comps-qa (b) comps-qa with Instructions
Figure 4: Fine-grained results on COMPS-QA as a function of the number of in-context examples (with and without instructions).
# Mixing matters

Douglas Rennehan1

1Department of Physics & Astronomy, University of Victoria, BC V8P 5C2, Canada E-mail<EMAIL_ADDRESS>(DR)

(Accepted XXX. Received YYY; in original form ZZZ)

###### Abstract

All hydrodynamical simulations of turbulent astrophysical phenomena require sub-grid scale models to properly treat energy dissipation and metal mixing. We present the first implementation and application of an anisotropic eddy viscosity and metal mixing model in Lagrangian astrophysical simulations, including a dynamic procedure for the model parameter. We compare these two models directly to the commonly used Smagorinsky model and its dynamic variant. Using the mesh-free finite mass method as an example, we show that the anisotropic model is best able to reproduce the proper Kolmogorov inertial range scaling in homogeneous, isotropic turbulence. Additionally, we provide a method to calibrate the metal mixing rate that ensures numerical convergence. In our first application to cosmological simulations, we find that all models strongly impact the early evolution of galaxies, leading to differences in enrichment and thermodynamic histories. The anisotropic model has the strongest impact, with little difference between the dynamic and the constant-coefficient variant. We also find that the metal distribution functions in the circumgalactic gas are significantly tighter at all redshifts, with the anisotropic model providing the tightest distributions. This is contrary to a recent study that found metal mixing to be relatively unimportant on cosmological scales. In all of our experiments the constant-coefficient Smagorinsky and anisotropic models rivaled their dynamic counterparts, suggesting that the computationally inexpensive constant-coefficient models are viable alternatives in cosmological contexts.
###### keywords: methods: numerical – turbulence – hydrodynamics – galaxies: evolution – galaxies: general

(pubyear: 2021; pagerange: Mixing matters–B)

## 1 Introduction

Galaxies form and evolve in tempestuous gaseous environments where hydrodynamics, radiative cooling, and gravity synergize to produce rich emergent phenomena on a myriad of spatial scales. The immense dynamic range of scales involved and their interconnectedness prove to be limiting factors in advancing our understanding of the complete picture of galaxy evolution (see Naab & Ostriker 2017 for an excellent review). At the forefront of the issue is hydrodynamical turbulence, as it is a multi-scale, non-linear phenomenon that occurs in almost all galactic environments — directly impacting our theoretical understanding of galaxy evolution. While the importance of turbulence in the interstellar medium of galaxies has long been recognized (see Elmegreen & Scalo 2004 and Scalo & Elmegreen 2004 for reviews), only recently has the role of turbulence in halo gas come under careful consideration. Indeed, both the circumgalactic medium (CGM) of $L^{*}$ galaxies and the intracluster medium (ICM) of groups and clusters of galaxies show signs of turbulence playing an important role in their evolution (see, for example, Prasad et al. 2018 and Wang et al. 2020). Observationally, there is evidence of complex kinematic structure in the CGM of $L^{*}$ galaxies that emerged through the revolutionary Cosmic Origins Spectrograph halo survey on the Hubble Space Telescope (COS-halos; Tumlinson et al. 2013, 2017). For instance, Werk et al. (2016) found that turbulent velocities of $50$-$75$ $\mathrm{km}\,\mathrm{s}^{-1}$ explain the broadening of absorption lines in the CGM that is not otherwise explainable, a result subsequently confirmed by numerical studies (Buie et al., 2020).
Moving up in mass scale, the ICM also shows evidence of turbulence through indirect observational methods such as X-ray surface brightness and Sunyaev-Zeldovich fluctuations (Zhuravleva et al., 2014; Pinto et al., 2015; Zhuravleva et al., 2015; Khatri & Gaspari, 2016; Zhuravleva et al., 2018). There are two main drivers of turbulence in galactic environments: (a) global outflows that emerge from star formation processes and supermassive black holes (SMBHs) within galaxies (Prasad et al., 2015, 2018; Karen Yang & Reynolds, 2016; Bourne & Sijacki, 2017; Fielding et al., 2017; Fielding et al., 2018; Sokołowska et al., 2018; Li et al., 2020) and (b) shearing motion driven by gas in-fall during structure formation (Dekel et al., 2009; Vazza et al., 2010, 2012; Vazza et al., 2017; Wittor et al., 2017; Bennett & Sijacki, 2020), mergers (ZuHone et al., 2013), and ram-pressure stripping of galaxies moving through the ICM (Ruggiero & Lima Neto, 2017; Simons et al., 2020). In both the CGM and ICM, turbulence could provide additional pressure support (Poole et al., 2006; Vazza et al., 2018; Lochhaas et al., 2020) that prevents the gas from collapsing and rapidly converting into stars as well as a physical mechanism to transport energy and metals throughout gas directly – impacting the cooling profile, star formation cycle, and metal distribution functions (Shen et al., 2010; Shen et al., 2012, 2013; Brook et al., 2014; Sokołowska et al., 2018; Escala et al., 2018; Tremmel et al., 2019; Rennehan et al., 2019; Hafen et al., 2019, 2020). Therefore, understanding the nature of turbulence is imperative to understand the complete picture of galaxy evolution. In a general sense, turbulence results from a large-scale injection of kinetic energy into a gas, followed by the eventual transformation of that kinetic energy into thermal energy through a small-scale viscous dissipation process. 
Physically, the injected kinetic energy on large scales breaks the gas into eddies of smaller and smaller sizes until the energy thermalises at the dissipation scale. This transfer of kinetic energy from the large scale to the small scale is the turbulent cascade, and the ratio between the energy injection scale $L$ and the viscous dissipation scale, $\eta$, determines the dynamic range of scales within the cascade. The dissipation scale $\eta$ increases with the viscosity of the gas; a more viscous fluid has a larger $\eta$ and a smaller range over which the turbulent cascade may exist (Landau & Lifshitz, 1987). It is in the cascade that the redistribution of energy and metals occurs, as the eddies transport fluid properties and drive mixing toward a homogeneous state. Understanding galaxy evolution requires treating turbulence simultaneously with the complexity of the astrophysics, a fact that demands the use of numerical simulations. Contemporary computational power limits both the spatial resolution, $h$, of cosmological simulations and the volume of the Universe, $V$, that we may simulate. Obtaining a statistically sound sample of galaxies requires a representative (large) volume, while capturing the relevant astrophysics, such as star formation and stellar and supermassive black hole feedback, requires a small spatial resolution. Therefore, the dynamic range of a simulation, $L/h$ (where $L\sim V^{1/3}$), is the limiting factor for cosmological simulations. As a result, those important processes related to star formation and supermassive black holes usually occur on scales much smaller than $h$ and are, therefore, unresolved in most simulations.
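The link between viscosity and the dissipation scale can be made concrete with the standard Kolmogorov estimate $\eta=(\nu^{3}/\varepsilon)^{1/4}$, where $\nu$ is the kinematic viscosity and $\varepsilon$ the energy dissipation rate. The numbers below are purely illustrative, not astrophysical values:

```python
def kolmogorov_scale(nu: float, epsilon: float) -> float:
    """Kolmogorov dissipation scale eta = (nu^3 / epsilon)^(1/4):
    a larger viscosity nu gives a larger eta and hence a narrower cascade."""
    return (nu**3 / epsilon) ** 0.25

# Illustrative numbers: doubling the viscosity shrinks the cascade's
# dynamic range L/eta.
L = 1.0                      # energy-injection scale
for nu in (1e-6, 2e-6):      # kinematic viscosity
    eta = kolmogorov_scale(nu, epsilon=1e-3)
    print(f"nu={nu:.0e}  eta={eta:.3e}  L/eta={L / eta:.1f}")
```

The ratio $L/\eta$ printed here is exactly the dynamic range of scales within the cascade referred to above.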
That fact necessitates sub-grid astrophysical models in cosmological simulations that reduce the relevant small-scale physics into a coarse-grained representation, allowing the physics below a scale $h$ to interact with those on scales $\ell$, where $L>\ell>h$ (for a more in-depth overview, see Somerville & Davé 2015). While there are many successful simulations that use a variety of sub-grid assumptions to broadly reproduce galaxy populations (e.g. Guedes et al. 2011; Hopkins et al. 2014; Vogelsberger et al. 2014; Schaye et al. 2015; Genel et al. 2014; Davé et al. 2017; Davé et al. 2019; Tremmel et al. 2019; Pillepich et al. 2018), one aspect that is often overlooked is the treatment of sub-grid turbulence. Therein lies a problem: in hydrodynamical simulations, the physical dissipation scale $\eta$ is almost always much smaller than the resolution scale, $\eta\ll h$ (Pope, 2000). For that reason, the kinetic energy that is flowing in the turbulent cascade reaches some scale $H\gtrsim h$ where it may no longer progress. If the numerical viscosity of the hydrodynamical method is not sufficiently strong to thermalise the kinetic energy, there will be a build-up of kinetic energy at that scale $H\gtrsim h\gg\eta$ (Garnier et al., 2009). The kinetic energy build-up is a completely unphysical representation of turbulence and impacts not only the energetics, but also the large-scale flow properties such as the redistribution of metals. Although not explicitly stated, many cosmological hydrodynamical simulation studies implicitly assume that numerical dissipation is sufficient to mimic sub-grid turbulence. However, numerical dissipation may not be sufficient to reduce the kinetic energy build-up and may not represent turbulent flow statistics in all cases (Sagaut, 2006).
This has led to the development of sub-grid turbulence models whose purpose is to eliminate the build-up of kinetic energy at the resolution scale by providing a new viscous dissipation mechanism (or transport mechanism, in the case of metals). In simulations that use Eulerian (i.e. grid-based) hydrodynamics, extensive effort has been devoted to developing models for sub-grid turbulence (see Sagaut 2006; Garnier et al. 2009 and Schmidt 2015 for extensive reference lists). For astrophysically-relevant work, we refer the reader to Scannapieco & Brüggen (2008), Pan et al. (2013), Federrath (2013), Schmidt et al. (2014), and Semenov et al. (2016) (for a review, see Schmidt 2015). In this paper, we focus on Lagrangian hydrodynamical methods. In Lagrangian hydrodynamics, the fluid equations are approximated via fluid elements that move with the flow. There are three main approaches that are most common in cosmological simulations: smoothed particle hydrodynamics (SPH) (Gingold & Monaghan, 1977; Lucy, 1977; Hernquist & Katz, 1989; Hopkins, 2013), the moving-mesh method (MM) (Springel, 2010), and mesh-free methods (MF) (Lanson & Vila, 2008a, b; Gaburov & Nitadori, 2011; Hopkins, 2015). Each of these methods tracks fluid elements using different discretisation techniques that lead to different levels of numerical dissipation. In SPH, there is no inherent numerical dissipation and it, counter-intuitively, produces a deficit of kinetic energy near the resolution scale rather than a build-up (Bauer & Springel, 2012; Price, 2012). The MM and MF methods use Riemann solvers to approximate the fluid equations of motion between neighbouring fluid elements. Riemann solvers are generally diffusive due to their approximate nature, and the build-up of kinetic energy is present in both methods on scales up to $\sim 10$ times the resolution scale (Bauer & Springel, 2012; Hopkins, 2015).
A solution to kinetic energy build-up is to model the action of turbulent eddies as a viscous process that diffuses momentum (and metals) and dissipates kinetic energy via a diffusion equation and a source term in the energy equation, respectively. Usually the assumption is that the viscosity, or diffusivity, depends on velocity fluctuations $v_{\mathrm{eddy}}$ near the resolution scale $h$. The resulting diffusivity is $D\propto hv_{\mathrm{eddy}}$, where $v_{\mathrm{eddy}}$ may be a characteristic velocity, or velocity difference, within the neighbourhood of a fluid element (Wadsley et al., 2008; Greif et al., 2009). One particularly important choice of $v_{\mathrm{eddy}}$ is the Smagorinsky model (Smagorinsky, 1963), which assumes that velocity shear fluctuations drive dissipation and mixing through $v_{\mathrm{eddy}}\sim h|S^{*}|$, where $|S^{*}|$ is the magnitude of the trace-free shear tensor. Wadsley et al. (2008) showed, using SPH, that a Smagorinsky-like model for thermal energy mixing improved resolved turbulent mixing in their hydrodynamical experiments – a problem that plagued SPH for decades (Agertz et al., 2007). The Smagorinsky model has been successfully used to treat metal and thermal energy mixing in SPH (Wadsley et al., 2008; Shen et al., 2010; Shen et al., 2012; Brook et al., 2014; Williamson et al., 2016; Tremmel et al., 2017; Wadsley et al., 2017; Su et al., 2017) and has been extended to other Lagrangian hydrodynamical methods, such as the MFM method (Rennehan et al., 2019). In particular, the Smagorinsky model produces a reasonable description of turbulence in isotropic scenarios (Colbrook et al., 2017), and has proven essential for understanding metal distribution functions of stellar populations in dwarf galaxies (Escala et al., 2018) and gas flows in the CGM (Hafen et al., 2019, 2020).
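A minimal sketch of the isotropic Smagorinsky diffusivity $D=(C_{s}h)^{2}|S^{*}|$, computed from the trace-free symmetric part of the velocity-gradient tensor; the coefficient value here is illustrative, not a calibrated one, and the magnitude convention $|S^{*}|=\sqrt{2S^{*}_{ij}S^{*}_{ij}}$ is one common choice:

```python
import numpy as np

def smagorinsky_diffusivity(grad_u: np.ndarray, h: float, C_s: float = 0.1) -> float:
    """Isotropic Smagorinsky diffusivity D = (C_s h)^2 |S*|, with S* the
    trace-free symmetric part of the 3x3 velocity-gradient tensor grad_u.
    C_s = 0.1 is an illustrative coefficient, not a calibrated value."""
    S = 0.5 * (grad_u + grad_u.T)                 # symmetric shear tensor
    S -= np.trace(S) / 3.0 * np.eye(3)            # remove compression (trace)
    S_mag = np.sqrt(2.0 * np.sum(S * S))          # |S*| = sqrt(2 S*_ij S*_ij)
    return (C_s * h) ** 2 * S_mag

# Pure shear flow u = (y, 0, 0): grad_u has a single off-diagonal entry.
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 1.0
print(smagorinsky_diffusivity(grad_u, h=1.0))  # -> 0.01
```

Note that this diffusivity is a single scalar, acting equally in every direction; this isotropy is precisely the assumption questioned next.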
While the Smagorinsky model improves mixing in Lagrangian simulations, it is important to consider two assumptions in the model: (a) that shear fluctuations (i.e., changes in $|S^{*}|$) always represent turbulence and (b) that the diffusive process acts isotropically through the magnitude of the trace-free (i.e. ignoring compression) shear tensor over the scale $h$. Are either of these assumptions reasonable? To address point (a), consider two incompressible fluids of different densities in contact and in parallel opposing motion. Will one fluid encroach on the other? Will the flow become turbulent? Such a situation results in the Kelvin-Helmholtz instability and, given certain conditions on the densities and velocities, we know that perturbations on the contact interface will grow exponentially and mix the fluids, generating vorticity and turbulence (Chandrasekhar, 1961). However, if the instability condition is not met, then there remains a large $|S^{*}|$ at the interface without turbulence. If we used the simple assumption of a diffusivity $D\propto h|S^{*}|$ in a simulation, we would diffuse rapidly at the interface even though there are no turbulent eddies! Piomelli & Liu (1995) recognized the problem and presented a solution: a method of dynamically calculating the model coefficient at simulation time. Rennehan et al. (2019) employed that method for the first time in Lagrangian hydrodynamics and found that it significantly reduced over-diffusion in non-turbulent shear flows, such as in rotating galactic disks and the Kelvin-Helmholtz instability. The second point (b) concerns the isotropy in $D$ and the discounting of compression in $|S^{*}|$. Recently, Hu & Chiang (2020) showed that a better representation of sub-grid scale turbulence is obtained by treating the full velocity tensor $\boldsymbol{\nabla}\otimes\boldsymbol{u}$ for the diffusivity $\boldsymbol{D}$, now a tensor.
This is the gradient model (Clark et al., 1979), and it represents a major change, since diffusion now depends on the directionality encoded in $\boldsymbol{\nabla}\otimes\boldsymbol{u}$ rather than acting equally in every spatial direction. The trace of $\boldsymbol{\nabla}\otimes\boldsymbol{u}$ is included automatically and, therefore, compression is handled naturally (the trace of $\boldsymbol{\nabla}\otimes\boldsymbol{u}$ is $\boldsymbol{\nabla}\cdot\boldsymbol{u}$, the divergence of the flow) – an important point for highly-compressible turbulence in cosmological flows. However, Hu & Chiang (2020) used a common approach from the fluid mechanics community to post-process driven turbulence simulations in order to check if the model would have improved the results. Our goal is to implement the model for the mesh-free finite mass method, a method growing in popularity, and determine its feasibility at simulation time. We introduce an implementation of the gradient model for Lagrangian astrophysical simulations and additionally provide methods for computing the model coefficient at simulation time. In Section 2 we provide a derivation of the model, as well as a derivation of the dynamic procedure in Section 2.1 that allows calculation of the model coefficient at simulation time. Section 3 describes driven turbulence validation tests of the gradient model, included at run time, for both eddy viscosity and metal mixing. As a first application, we describe the qualitative impact of the eddy viscosity and metal mixing model on cosmological gas phases in Section 4. We present our conclusions and recommendations in Section 5.

## 2 The gradient model

In finite-mass Lagrangian hydrodynamics, such as the mesh-free finite mass (MFM) method (Lanson & Vila, 2008a, b; Gaburov & Nitadori, 2011; Hopkins, 2015), the build-up of kinetic energy at the resolution scale demands an additional dissipation mechanism.
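For concreteness, one common closed form of the Clark et al. (1979) gradient model writes the sub-grid flux tensor as $F_{ij}\propto\rho h^{2}\,\partial_{k}u_{i}\,\partial_{k}u_{j}$. The sketch below uses the textbook $1/12$ prefactor for a box/Gaussian filter as a stand-in for the dynamically computed coefficient; it is an illustration of the anisotropic structure, not this paper's implementation:

```python
import numpy as np

def gradient_model_flux(grad_u: np.ndarray, rho: float, h: float,
                        C: float = 1.0 / 12.0) -> np.ndarray:
    """Anisotropic gradient-model sub-grid flux (Clark et al. 1979 form):
    F_ij ~ C rho h^2 (du_i/dx_k)(du_j/dx_k).
    C = 1/12 is the textbook box/Gaussian-filter value; in practice the
    coefficient would be computed dynamically at simulation time."""
    return C * rho * h**2 * grad_u @ grad_u.T

# Compressive flow u = (-x, -y, -z): the flux tensor keeps a non-zero trace,
# unlike the trace-free Smagorinsky shear tensor.
grad_u = -np.eye(3)
F = gradient_model_flux(grad_u, rho=1.0, h=1.0)
print(np.trace(F))  # non-zero trace: compression is retained
```

Because $F$ is built from the full tensor $\boldsymbol{\nabla}\otimes\boldsymbol{u}$, its eigenstructure follows the local flow geometry instead of being a single scalar diffusivity.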
Additionally, metals follow the fluid mass elements throughout the simulation volume and, therefore, the exchange of metals between fluid elements due to sub-grid scale turbulent motion does not occur. The crux of the issue is that discretisation of the fluid field leads to damping out of the high-frequency turbulent fluctuations that should continue down to the physical dissipation scale. It is useful to think of the damping action as a low-pass filter acting on the fluid equations of motion. By applying a general filter to the momentum conservation equation, it is possible to derive the correct level of mixing that should occur between fluid elements due to unresolved turbulence. In general, the filtering action over a scalar field $f_{i}(\boldsymbol{r})$ can be represented as, $\overline{f}_{i}(\boldsymbol{r})=\int_{D}f_{i}(\boldsymbol{r}^{\prime})G(|\boldsymbol{r}-\boldsymbol{r}^{\prime}|,h)d\boldsymbol{r}^{\prime},$ (1) where $h$ is the smoothing scale over the domain $D$. We apply this to the momentum equation to determine the correction terms due to unresolved turbulence, $\frac{\partial(\rho\boldsymbol{u})}{\partial t}+\nabla\cdot(\rho\boldsymbol{u}\otimes\boldsymbol{u}+P\boldsymbol{I})=0,$ (2) where $\rho$ is the gas density, $P$ is the pressure, and $\boldsymbol{u}$ is the velocity vector. 
If we apply equation (1) to equation (2) it follows that, assuming the filtering operation and derivatives commute, $\frac{\partial(\overline{\rho\boldsymbol{u}})}{\partial t}+\nabla\cdot(\overline{\rho\boldsymbol{u}\otimes\boldsymbol{u}}+\overline{P}\boldsymbol{I})=0.$ (3) For simplicity, we switch to density-weighted variables such that $\widetilde{\boldsymbol{u}}\equiv\overline{\rho\boldsymbol{u}}/\overline{\rho}$ and $\frac{\partial(\overline{\rho}\widetilde{\boldsymbol{u}})}{\partial t}+\nabla\cdot(\overline{\rho}\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}+\overline{P}\boldsymbol{I})=0.$ (4) The term $\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}$ is unknown at simulation time because it relies on information below the resolution scale. To put the equation in a more manageable form, we add $\nabla\cdot(\overline{\rho}[\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}-\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}])$ and rearrange, $\frac{\partial(\overline{\rho}\widetilde{\boldsymbol{u}})}{\partial t}+\nabla\cdot(\overline{\rho}\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}+\overline{P}\boldsymbol{I})=\nabla\cdot(\overline{\rho}[\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}-\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}]).$ (5) Therefore, we define the sub-grid momentum flux $\boldsymbol{F}$, $\boldsymbol{F}\equiv\overline{\rho}(\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}-\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}),$ (6) and retrieve a new equation, $\frac{\partial(\overline{\rho}\widetilde{\boldsymbol{u}})}{\partial t}+\nabla\cdot(\overline{\rho}\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}+\overline{P}\boldsymbol{I})=-\nabla\cdot\boldsymbol{F}.$ (7) The sub-grid scale momentum flux $\boldsymbol{F}$ is unknown at simulation time and must be modelled, yet it is widely ignored in cosmological simulation studies which usually focus only on the thermal energy and metal fluxes via the 
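The closure problem in equation (6) can be seen numerically: a smoothing filter does not commute with products, so $\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}-\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}$ is generically non-zero. A minimal 1D sketch, using a Gaussian kernel as a stand-in for the generic filter in equation (1) (the kernel choice and field are illustrative, not the paper's actual filter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
u = rng.standard_normal(4096)            # a 1D "velocity" field

def bar(f, h=8.0):
    """Stand-in for the filtering operation in equation (1):
    a periodic Gaussian smoothing on scale h."""
    return gaussian_filter1d(f, sigma=h, mode='wrap')

# 1D analogue of equation (6), with the density factor dropped:
# bar(u*u) - bar(u)*bar(u) does not vanish.
F = bar(u * u) - bar(u) * bar(u)
print(np.max(np.abs(F)) > 0)   # the residual is generically non-zero
```

The non-vanishing residual is exactly the term that must be modelled, since at simulation time only the filtered field (here `bar(u)`) is available.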
Smagorinsky model. There are a myriad of models in the literature for $\boldsymbol{F}$ (see Garnier et al. 2009 for extensive lists) but here we take the direct approach of using a Taylor series approximation following Clark et al. (1979) and Hu & Chiang (2020). We expand $\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}$ via a Taylor expansion (see Appendix A for a full derivation) as $\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}\approx\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}+\epsilon\nabla^{2}(\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}})$ (8) and, therefore, the flux becomes $\boldsymbol{F}=\overline{\rho}(\widetilde{\boldsymbol{u}\otimes\boldsymbol{u}}-\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}})\approx\overline{\rho}\epsilon\nabla^{2}(\widetilde{\boldsymbol{u}}\otimes\widetilde{\boldsymbol{u}}),$ (9) where $\epsilon$ is a constant that depends on the kernel scale $h$ as $\epsilon\propto h^{2}$ (Monaghan, 1989, 2002, 2011). Expanding the term on the right-hand side of equation (9) and keeping only the first derivative terms we find $\boldsymbol{F}=2\overline{\rho}Ch^{2}(\nabla\otimes\widetilde{\boldsymbol{u}})(\nabla\otimes\widetilde{\boldsymbol{u}})^{\mathrm{T}},$ (10) where $C$ is our model parameter. A similar result emerges when considering the mass flux of metals in a fluid, $\frac{\partial(\rho Z)}{\partial t}+\boldsymbol{\nabla}\cdot(\rho\boldsymbol{u}Z)=0.$ (11) Applying the same approach as before gives an equation for the sub-grid flux of metals, $\boldsymbol{F}=2\overline{\rho}C_{\mathrm{Z}}h^{2}(\nabla\otimes\widetilde{\boldsymbol{u}})\cdot\mathbf{\nabla}Z.$ (12) The method for solving equation (7) is detailed in Hopkins (2017). From their equation (2), $\boldsymbol{F}=\boldsymbol{K}\cdot(\nabla\otimes\boldsymbol{q}),$ (13) where $\boldsymbol{K}$ is the tensor describing the diffusive strength, and $\boldsymbol{q}$ is the fluid field property. 
Therefore, we identify, $\boldsymbol{K}\equiv 2\overline{\rho}Ch^{2}(\nabla\otimes\widetilde{\boldsymbol{u}}),$ (14) $\boldsymbol{q}\equiv\widetilde{\boldsymbol{u}}.$ (15) The model in equation (10) is known to lead to numerical instability due to locally stretching flows. Balarac et al. (2013) showed that ignoring the positive eigenvalues of $\boldsymbol{S}$ (the stretching effect) ensures the model is always well behaved. Therefore, we follow Balarac et al. (2013) and only keep the contribution due to the negative eigenvalues of the shear tensor. However, we must first decompose $\nabla\otimes\widetilde{\boldsymbol{u}}$ into the symmetric and anti-symmetric parts, $\nabla\otimes\widetilde{\boldsymbol{u}}=\frac{1}{2}(\nabla\otimes\widetilde{\boldsymbol{u}}+[\nabla\otimes\widetilde{\boldsymbol{u}}]^{\mathrm{T}})+\frac{1}{2}(\nabla\otimes\widetilde{\boldsymbol{u}}-[\nabla\otimes\widetilde{\boldsymbol{u}}]^{\mathrm{T}})\equiv\boldsymbol{S}+\boldsymbol{\Omega}.$ (16) We further decompose $\boldsymbol{S}$ into the contribution from the positive eigenvalues and negative eigenvalues as $\boldsymbol{S}\equiv\boldsymbol{S}_{\oplus}+\boldsymbol{S}_{\ominus}$. It then follows that $\boldsymbol{S}_{\ominus}\equiv\sum_{k=1}^{3}\mathrm{min}(0,\lambda^{(k)})\boldsymbol{e}^{(k)}\otimes\boldsymbol{e}^{(k)},$ (17) where $\lambda^{(k)}$ is the $k$th eigenvalue and $\boldsymbol{e}^{(k)}$ is the corresponding eigenvector. Therefore, the new diffusion coefficient tensor is $\boldsymbol{K}=2\overline{\rho}Ch^{2}\boldsymbol{S}_{\ominus},$ (18) where we have reduced the shear contribution into $\boldsymbol{S}_{\ominus}$. Removing the positive eigenvalues appears not to be necessary for the metal mixing case, as we found no cases of numerical instability in all of our hydrodynamical tests and cosmological simulations. However, we find that it is absolutely necessary in cosmological simulations for the momentum flux. 
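The decomposition in equations (16) and (17) can be sketched directly. The snippet below (an illustrative implementation, not the simulation code) builds $\boldsymbol{S}_{\ominus}$ from an arbitrary velocity gradient tensor via an eigendecomposition of its symmetric part:

```python
import numpy as np

def negative_shear(grad_u):
    """Symmetric part S of grad_u, equation (16), reduced to its
    negative-eigenvalue contribution S_minus, equation (17)."""
    S = 0.5 * (grad_u + grad_u.T)
    lam, e = np.linalg.eigh(S)           # eigenvalues, eigenvectors
    lam_minus = np.minimum(lam, 0.0)     # drop the stretching modes
    # Sum over k of min(0, lambda_k) e_k (outer) e_k:
    return (e * lam_minus) @ e.T

# An arbitrary (hypothetical) velocity gradient tensor.
grad_u = np.array([[ 0.5,  1.0, 0.0],
                   [-0.2, -0.8, 0.3],
                   [ 0.1,  0.0, 0.2]])
S_minus = negative_shear(grad_u)
# All eigenvalues of S_minus are <= 0 by construction.
print(np.all(np.linalg.eigvalsh(S_minus) <= 1e-12))
```

By construction $\boldsymbol{S}_{\oplus}+\boldsymbol{S}_{\ominus}=\boldsymbol{S}$, so the discarded part is exactly the stretching contribution that Balarac et al. (2013) identify as the source of instability.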
It is important to note that our choice of $h$ differs from that in Hopkins et al. (2018). We choose $h$ as the kernel radius of compact support as we found better results in our tests in Section 3. Our value is approximately twice that found in the FIRE studies (Su et al., 2017; Escala et al., 2018; Hafen et al., 2019, 2020), but produces $4$ times as much dissipation and metal mixing since the dependence is squared in equation (18). Our choice of $h$ is the same as that of Wadsley et al. (2017), who use the radius of compact support for turbulent mixing in the GASOLINE-2 code. ### 2.1 The dynamic gradient model We apply the same procedure as in Balarac et al. (2013) combined with the density-weighted filtering procedure in Rennehan et al. (2019). Following their notation, we replace $\tilde{f}$ with $\overline{f}$ since fluid properties are inherently density-weighted in the mesh-free finite mass method. Although we focus on the velocity fluctuations in the following procedure, we note that it applies equally to the metal field. The resolved fluctuations in the flow are, $\boldsymbol{\mathcal{L}}=\widehat{\boldsymbol{\overline{u}}\otimes\overline{\boldsymbol{u}}}-\widehat{\boldsymbol{\overline{u}}}\otimes\widehat{\overline{\boldsymbol{u}}},$ (19) where $\widehat{\overline{f}}$ represents the quantity filtered on twice the resolution scale ($\sim 2\overline{h}$, see Section 2.4 of Rennehan et al. 2019). If we use only resolved quantities (i.e. 
doubly-filtered) then the gradient model in equation (10) should reproduce $\boldsymbol{\mathcal{L}}$, $\boldsymbol{\mathcal{L}}=\widehat{\boldsymbol{\overline{u}}\otimes\overline{\boldsymbol{u}}}-\widehat{\boldsymbol{\overline{u}}}\otimes\widehat{\overline{\boldsymbol{u}}}=2C\widehat{\overline{h}}^{2}\widehat{\overline{\boldsymbol{S}}}_{\ominus}(\boldsymbol{\nabla}\otimes\widehat{\overline{\boldsymbol{u}}})^{\mathrm{T}}.$ (20) The above equation results in a solution for the one unknown parameter $C$, $C=\frac{\boldsymbol{\mathcal{L}}\cdot\big{[}2\widehat{\overline{h}}^{2}\widehat{\overline{\boldsymbol{S}}}_{\ominus}(\boldsymbol{\nabla}\otimes\widehat{\overline{\boldsymbol{u}}})^{\mathrm{T}}\big{]}}{||2\widehat{\overline{h}}^{2}\widehat{\overline{\boldsymbol{S}}}_{\ominus}(\boldsymbol{\nabla}\otimes\widehat{\overline{\boldsymbol{u}}})^{\mathrm{T}}||^{2}}=\frac{1}{2}\frac{\boldsymbol{\mathcal{L}}\cdot\boldsymbol{\alpha}}{||\boldsymbol{\alpha}||^{2}},$ (21) where we have defined $\boldsymbol{\alpha}\equiv\widehat{\overline{h}}^{2}\widehat{\overline{\boldsymbol{S}}}_{\ominus}(\boldsymbol{\nabla}\otimes\widehat{\overline{\boldsymbol{u}}})^{\mathrm{T}}$. With this value of $C$, we use equation (18) as the diffusivity tensor in the additional flux term from equation (10). Note that $\overline{h}$ is the resolution scale and not the radius of compact support of the kernel. The resolution scale is approximately half of the radius of compact support, or the mean interparticle spacing. Applying the same procedure to the metal field yields a separate equation for $C_{\mathrm{Z}}$. The resolved fluctuations take a similar form to equation (19), $\boldsymbol{\mathcal{L}}_{\mathrm{Z}}=\widehat{\boldsymbol{\overline{u}}\,\overline{Z}}-\widehat{\boldsymbol{\overline{u}}}\,\widehat{\overline{Z}}.$ (22) Note that $\boldsymbol{\mathcal{L}}$ is now a vector rather than a rank-2 tensor. 
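The contraction in equation (21) amounts to a scalar least-squares fit of the model tensor against the resolved fluctuations, and the metal-field version has the same form. A sketch with synthetic arrays (the input values are hypothetical; at simulation time $\boldsymbol{\mathcal{L}}$ and $\boldsymbol{\alpha}$ come from the doubly-filtered fields):

```python
import numpy as np

def dynamic_coefficient(L, alpha):
    """Least-squares solution C = (L : alpha) / (2 ||alpha||^2),
    as in equation (21); L and alpha are same-shape arrays."""
    norm2 = np.sum(alpha * alpha)
    return 0.5 * np.sum(L * alpha) / norm2

# Synthetic model tensor and resolved fluctuations constructed so
# that the model with C = 0.22 matches them exactly.
rng = np.random.default_rng(1)
alpha = rng.standard_normal((3, 3))
L = 2.0 * 0.22 * alpha
print(round(dynamic_coefficient(L, alpha), 2))  # recovers C = 0.22
```

In practice the fit is not exact, and the recovered $C$ varies in space and time, which is the point of the dynamic procedure.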
Following the procedure we outline above results in an equation for $C_{\mathrm{Z}}$, $C_{\mathrm{Z}}=\frac{\boldsymbol{\mathcal{L}}_{\mathrm{Z}}\cdot\big{[}2\widehat{\overline{h}}^{2}(\boldsymbol{\nabla}\otimes\widehat{\overline{\boldsymbol{u}}})\cdot\boldsymbol{\nabla}\widehat{\overline{Z}}\big{]}}{||2\widehat{\overline{h}}^{2}(\boldsymbol{\nabla}\otimes\widehat{\overline{\boldsymbol{u}}})\cdot\boldsymbol{\nabla}\widehat{\overline{Z}}||^{2}}=\frac{1}{2}\frac{\boldsymbol{\mathcal{L}}_{\mathrm{Z}}\cdot\boldsymbol{\beta}}{||\boldsymbol{\beta}||^{2}},$ (23) where we have defined $\boldsymbol{\beta}\equiv\widehat{\overline{h}}^{2}(\boldsymbol{\nabla}\otimes\widehat{\overline{\boldsymbol{u}}})\cdot\boldsymbol{\nabla}\widehat{\overline{Z}}$. ### 2.2 Comparison to the Smagorinsky model The gradient model differs from the widely used Smagorinsky model in a subtle yet important way. Returning to the definition of the sub-grid flux in equation (10), the Smagorinsky model represents $\boldsymbol{F}$ as, $\boldsymbol{F}=2\overline{\rho}(C_{\mathrm{s}}h)^{2}||\boldsymbol{S}^{*}||\boldsymbol{S}^{*},$ (24) where $\boldsymbol{S}^{*}$ is the trace-free symmetric shear tensor, $\boldsymbol{S}^{*}\equiv\boldsymbol{S}-\frac{1}{3}\mathrm{tr}(\boldsymbol{S})\boldsymbol{I}$. The diffusivity tensor is isotropic, $\boldsymbol{K}_{\mathrm{Smag}}\equiv 2\overline{\rho}(C_{\mathrm{s}}h)^{2}||\boldsymbol{S}^{*}||\,\boldsymbol{I},$ (25) with $||\boldsymbol{K}_{\mathrm{Smag}}||=2\overline{\rho}(C_{\mathrm{s}}h)^{2}||\boldsymbol{S}^{*}||$ and $C_{\mathrm{s}}\sim 0.15$. The constant nature of $C_{\mathrm{s}}$ implies that the diffusivity scales with the magnitude of the symmetric shear or, more directly, that the strength of turbulent fluctuations is only determined by fluctuations in the shear strength. That is a good assumption in purely turbulent flows but fails dramatically in laminar shear flows, where the shear is not a good indicator of the presence of turbulence. 
One solution to over-diffusion in the Smagorinsky model is to dynamically calculate the coefficient $C_{\mathrm{s}}$ at simulation time based on the local fluid properties. We showed in Rennehan et al. (2019) that the dynamic procedure predicts much lower values of $C_{\mathrm{s}}$ in the majority of our simple hydrodynamical tests. However, we did not consider the impact of altering the isotropic nature of the diffusivity. The gradient model has $\boldsymbol{K}_{\mathrm{Grad}}\propto\boldsymbol{S}_{\ominus}$ with $||\boldsymbol{K}_{\mathrm{Grad}}||=2\overline{\rho}Ch^{2}||\boldsymbol{S}_{\ominus}||$. The constant $C$ is yet to be determined but the direction differs from the dynamic and non-dynamic Smagorinsky models. The diffusivity itself no longer acts in each direction equally but acts in the direction of the eigenvectors of $\boldsymbol{S}_{\ominus}$. However, in simple incompressible, low-Mach-number turbulent flows we expect that $||\boldsymbol{K}_{\mathrm{Smag}}||\sim||\boldsymbol{K}_{\mathrm{Grad}}||$ given that the velocity derivative tensor $\boldsymbol{\nabla}\otimes\boldsymbol{\tilde{u}}$ is approximately isotropic in that regime. Our application of the dynamic method (Piomelli & Liu, 1995) to the gradient model simultaneously allows $C=C(\boldsymbol{x},t)$ and the diffusivity to be anisotropic, $\boldsymbol{K}_{\mathrm{Grad}}\propto\boldsymbol{S}_{\ominus}$. This should be important for any complicated astrophysical flows such as those we investigate in the following sections. ### 2.3 Models Table 1: Turbulence models and parameters. The dynamic models calculate the model parameters at simulation time and, therefore, the values listed below are the forced upper limits.

Label | Dynamic | (Max.) Parameter Value | Anisotropic
---|---|---|---
None | N/A | N/A | N/A
Smag. | $\times$ | 0.15 | $\times$
Dyn. Smag. | ✓ | 0.20 | $\times$
FIRE | $\times$ | 0.05 | $\times$
Grad. | $\times$ | 0.22 | ✓
Dyn. Grad. | ✓ | 1.00 | ✓

Table 1 contains a compact description of our model set. In all cases where there is a sub-grid turbulence model, we treat both metals and viscosity simultaneously. There are three categories of models: no sub-grid model (None), the Smagorinsky model, and the gradient model. The dynamic procedure allows us to extend the Smagorinsky and gradient models with a model parameter that depends on spatio-temporal coordinates. Additionally, we test the only other calibration of the Smagorinsky model in the mesh-free finite mass method from the FIRE collaboration (Escala et al., 2018). For the Smagorinsky model, Smag., we use the theoretical value of $C_{\mathrm{s}}\sim 0.15$ and limit $C_{\mathrm{s}}$ to $0.20$ for the dynamic Smagorinsky model (Dyn. Smag.) to avoid numerical instability. The FIRE calibration of the Smagorinsky model (labelled FIRE) is $C_{\mathrm{s}}\approx 0.046$ and we adopt $C_{\mathrm{s}}=0.05$ for simplicity. For these three versions of the Smagorinsky model the value of $C_{\mathrm{s}}$ is the same for both metals and eddy-viscosity. In the new gradient model we use fixed values of $C=0.22$ and $C_{\mathrm{Z}}=0.22$ for the baseline comparison and label these as Grad. In our other tests, we use the dynamic procedure outlined in Section 2.1 and label these tests as Dyn. Grad. We derive our fixed values of $C$ and $C_{\mathrm{Z}}$ from the approximate median value predicted by the dynamic procedure in the driven turbulence tests in Section 3. ## 3 Homogeneous Turbulence Turbulence is ubiquitous in astrophysical flows on a myriad of scales and Mach numbers. Therefore, in this section, we investigate the impact of the gradient model on the velocity statistics and metal distributions in homogeneous, isotropic, driven turbulence at Mach numbers $\mathcal{M}\in\\{0.3,0.7,2.1\\}$. 
For each $\mathcal{M}$, our control (i.e., no sub-grid turbulence model) simulation set comprises $5$ simulations with particle counts $N\in\\{64^{3},128^{3},256^{3},512^{3},768^{3}\\}$ within a box of side length $L=1$ with initial pressure, density, and specific internal energy of $P=1$, $\rho=1$, $u=1000$, respectively (we use arbitrary code units in all of our hydrodynamical tests). Initially, we place equal mass particles on a uniform grid and then subsequently mix the gas via the prescription of Bauer & Springel (2012) and Hopkins (2015). We list our turbulent driving parameters for GIZMO in Table 2 (cf. Table 1 in Bauer & Springel 2012). For more details of our approach, see Section 3.1 of Rennehan et al. (2019). Table 2: Parameters for our driven turbulence experiments. Units are arbitrary code units.

$\sim\mathcal{M}$ | $\sigma$ | $\Delta t$ | $t_{\mathrm{s}}$ | $k_{\mathrm{min}}$ | $k_{\mathrm{max}}$ | $\tau_{\mathrm{mix}}$
---|---|---|---|---|---|---
0.3 | 0.014 | 0.005 | 1 | 6.27 | 12.57 | 3.33
0.7 | 0.045 | 0.005 | 1 | 6.27 | 12.57 | 1.67
2.1 | 0.21 | 0.005 | 0.5 | 6.27 | 12.57 | 0.50

Our interest lies in measuring the impact of (1) the eddy-viscosity model on the velocity power spectra of these driven turbulence volumes and (2) the convergence of metal distribution functions in these volumes. However, these properties rely on driven turbulence volumes that are in statistical equilibrium. To gauge whether our simulations are in equilibrium, we define a mixing timescale $\tau_{\mathrm{mix}}\equiv L/\langle v\rangle$, where $\langle v\rangle$ is the expected average velocity of the particles in each volume. Here, $\langle v\rangle=v_{\mathrm{s}}\mathcal{M}$, where $v_{\mathrm{s}}=1$ is the isothermal sound speed of the gas and, additionally, $L=1$. Therefore, $\tau_{\mathrm{mix}}=1/\mathcal{M}$. 
We evolved each control simulation for several mixing timescales ($\sim 4\tau_{\mathrm{mix}}$) to ensure that the gas is in steady-state statistical equilibrium and we confirmed the stability of the Mach number. The mixing timescale and steady-state Mach numbers are listed in Table 2. We measure the velocity power spectra following the same method as Bauer & Springel (2012), which is available in the public version of GIZMO. It may seem out of place to study low resolutions, such as $64^{3}$ and $128^{3}$, when it is clearly possible to study resolutions up to $768^{3}$. The ultimate goal of our work is to apply the model to cosmological simulations which have a huge dynamic range and, therefore, low resolutions in individual galaxies. For example, IllustrisTNG (Pillepich et al., 2018) and RomulusC (Tremmel et al., 2019) both have particle mass resolutions of $\sim 10^{5}$ $\mathrm{M}_{\odot}$ for each gas particle, at their best. Consider that in an $L^{*}$-galaxy, we expect perhaps $\lesssim 10^{10}$ $\mathrm{M}_{\odot}$ of hot gas in the circumgalactic medium (Anderson & Bregman, 2010). At the best resolutions we have today, that gives $\sim 10^{5}$ particles per $L^{*}$-halo or $\sim 50^{3}$ particles. Evidently, contemporary cosmological simulations that capture both hundreds of $\mathrm{Mpc}$ on the large-scale as well as individual galaxies are far off from $\gtrsim 256^{3}$ per galactic halo. ### 3.1 Velocity power spectra A standard measure to determine if eddy viscosity models improve the accuracy of hydrodynamical simulations is whether the velocity power spectra reproduce the theoretically predicted Kolmogorov scaling, $E(k)\sim k^{-5/3}$. That scaling holds for incompressible, low Mach number turbulence ($\mathcal{M}\lesssim 1$) but is shallower than the apparent scaling in supersonic turbulence, $E(k)\sim k^{-2}$ (Federrath, 2013). 
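A shell-averaged spectrum $E(k)$ can be sketched for a single gridded velocity component as follows. This is a simplified illustration (unnormalised, one component, integer-radius shells), not the Bauer & Springel (2012) measurement routine shipped with public GIZMO:

```python
import numpy as np

def energy_spectrum(u_grid):
    """Shell-averaged power spectrum of one velocity component on a
    periodic N^3 grid; returns (k bins, E(k))."""
    N = u_grid.shape[0]
    power = np.abs(np.fft.fftn(u_grid))**2 / N**6
    k = np.fft.fftfreq(N) * N                    # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int)
    E = np.bincount(k_mag.ravel(), weights=power.ravel())
    return np.arange(E.size), E

# Sanity check: a single Fourier mode at k = 4 puts all of its
# power into the k = 4 shell.
N = 32
x = np.arange(N) / N
u = np.sin(2 * np.pi * 4 * x)[None, None, :] * np.ones((N, N, N))
k_bins, E = energy_spectrum(u)
print(int(np.argmax(E[1:])) + 1)   # -> 4
```

For a turbulent field, plotting `E` against `k_bins` (compensated by $k$) would reveal the inertial-range slope and the small-scale "bump" discussed below.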
In physical turbulence, the dissipation scale is demarcated by a sharp decline from the Kolmogorov slope on the smallest scales. In simulations, the resolution scale forces dissipation to occur on a much larger scale than what would occur in nature as the physical dissipation scale is unresolved. If the numerical viscosity of a hydrodynamical method cannot rapidly dissipate that energy, there will be a build-up of kinetic energy near the resolution scale that causes an unphysical representation of turbulence. Eddy-viscosity models introduce additional dissipation in the gas by accounting for the unrepresented scales in the flow, or equivalently by minimizing the error from the missing terms in the equations of motion. The build-up of kinetic energy is usually observed as a "bump" in the velocity power spectra where there is artificial correlation in the velocities on small scales. However, before we discuss the impact of eddy viscosity models on the power spectrum we must first test the convergence of the mesh-free finite mass (MFM) method in simulations without eddy viscosity. Figure 1: The velocity power spectra of our simulations with no sub-grid eddy viscosity model at three Mach numbers from top to bottom, respectively: $\mathcal{M}=0.3$, $0.7$, and $2.1$. We compensate the power spectra by $k$ for easy comparison with Bauer & Springel 2012. The coloured lines show resolutions $64^{3}$, $128^{3}$, $256^{3}$, $512^{3}$, and $768^{3}$ from lightest to darkest, respectively. In each panel, we show the predicted scaling with a dotted line of arbitrary normalisation. Only in the highest Mach number case do we see many of the scales in the inertial range represented and clear convergence of the hydrodynamical method. Fig. 1 shows the velocity power spectra for our set of simulations with particle counts $64^{3}$, $128^{3}$, $256^{3}$, $512^{3}$, and $768^{3}$ coloured by lines from lightest to darkest, respectively. 
The power spectra are compensated by $k$ for easy comparison to Bauer & Springel (2012). The panels are ordered from lowest to highest Mach number from top to bottom — $\mathcal{M}\sim 0.3$, $0.7$ and $2.1$, respectively. In each panel, the dotted line represents the predicted scaling but at an arbitrary normalisation. From the top panel of Fig. 1, it is apparent that $64^{3}$ and $128^{3}$ do not faithfully represent a turbulent gas as they are dominated by the bump. More precisely, the $E(k)$ scaling is much too shallow compared to Kolmogorov turbulence for a wide range of $k$. Our $256^{3}$ simulation shows an inkling of the inertial range scaling but is slightly too steep below $k\lesssim 20$ and dominated by the bump at $k\gtrsim 30$. As we move up in resolution, the inertial range only grows slightly. At our highest resolution, the inertial range spans $k\sim 40$ to $k\sim 60$ and the bump dominates the small scales. We skip discussion of the middle panel as the results are qualitatively equivalent between $\mathcal{M}\sim 0.3$ and $0.7$. In the bottom panel of Fig. 1, we show the compensated power spectra for supersonic turbulence at $\mathcal{M}\sim 2.1$. It is immediately evident that the simulations converge much more rapidly to the proper scaling than in subsonic turbulence. Still, at $64^{3}$ resolution the power spectrum is dominated by the build-up of kinetic energy near the resolution scale, with the inertial range only beginning to appear at $128^{3}$ resolution. At our highest resolution, $768^{3}$, there is arguably an entire order-of-magnitude resolved in the inertial range before the build-up of kinetic energy dominates at the smallest scales. Figure 2: The velocity power spectrum of our turbulence volumes with various eddy viscosity models. Rows are Mach numbers $0.3$, $0.7$, and $2.1$ from top to bottom, respectively. 
The left column has no additional increase in dissipation whereas the right column has an order-of-magnitude boost in dissipation on particles with $\mathcal{M}<1$. The black curve shows the $256^{3}$ simulation with no eddy viscosity model. The other solid lines show the Smag. and Dyn. Smag. models, and the dashed and dotted lines show the Grad. and Dyn. Grad. models, respectively. While all of the models improve the inertial scaling, an additional boost factor ($\gamma\sim 10$) for subsonic particles is required to reproduce the proper scaling at all Mach numbers. Evidently, sub-grid eddy viscosity models are required for the mesh-free finite mass (MFM) method at any resolution that may be used in astrophysical environments. We show the impact of our eddy viscosity models in Fig. 2. The left column shows the compensated power spectra, $E(k)$, as a function of wavenumber $k$. The rows represent the same three Mach numbers $\mathcal{M}=0.3$, $0.7$, and $2.1$ from top to bottom, respectively. The dotted black line shows the Kolmogorov scaling at each Mach number. All of the simulations were run at $256^{3}$ resolution since that is the resolution where, with no eddy viscosity model, we begin to see an extended inertial range and distinguish the kinetic energy "bump". The coloured curves show the eddy viscosity models, shaded from lightest to darkest: Dyn. Smag. (solid salmon), Smag. (solid magenta), Dyn. Grad. (dotted purple), and Grad. (dashed purple). To explain the right column, we must first explain the results in the left column. The left column of Fig. 2 shows the velocity power spectra of the turbulent gas in our simulated volumes with the same models presented in Section 2. The black curve shows the control experiment at $256^{3}$ resolution, i.e. the simulation with no eddy viscosity model and only numerical dissipation as in Fig. 1. 
All Mach numbers show the same trend: the eddy viscosity models have little impact on reducing the build-up of kinetic energy at small scales. Especially important is that the sub-sonic ($\mathcal{M}\lesssim 1$) simulations are much less improved than the supersonic case. However, the new gradient model variants, Grad. and Dyn. Grad., dissipate slightly more rapidly and allow for a steeper slope closer to the Kolmogorov scaling. We did not expect a priori that all of the eddy viscosity implementations would fail to reduce the kinetic energy build-up. The fact that there is not enough dissipation suggests that some physical property in the diffusion tensor was assigned incorrectly. As we outlined in Section 2.2, the diffusion strength for the Smagorinsky and the gradient model classes should effectively scale with each other ($||\boldsymbol{K}_{\mathrm{Smag}}||\sim||\boldsymbol{K}_{\mathrm{Grad}}||$) in isotropic, homogeneous turbulence. Therefore, in both classes of models, there are only two physically-motivated quantities that control the strength of diffusion: the length-scale $h$ and the velocity tensor $\boldsymbol{\nabla}\otimes\boldsymbol{u}$. The velocity tensor should not be the issue since it has been verified through the hydrodynamical tests in Hopkins (2015) and would cause the MFM method to fail drastically if the velocity gradients were incorrect. That leaves $h$ as the issue, suggesting that our estimate of the scale over which the eddy viscosity interactions propagate is underestimated (recall that our definition of $h$ is already twice as large as in Hopkins 2017, leading to $4$ times as much dissipation and mixing). Therefore, we introduce a boost factor $\gamma$ to the diffusion tensor $||\boldsymbol{K}||$ (i.e., $||\boldsymbol{K}||\rightarrow\gamma||\boldsymbol{K}||$) in order to get a more reasonable scaling in the inertial range. 
We determined the boost factor by running a series of driven turbulence tests with discrete $\gamma$ from $\gamma=1$ to $\gamma=100$. We did not perform a quantitative fit to the Kolmogorov scaling as our interest is in the approximate offset required to improve the inertial scaling and the exact $\gamma$ is unimportant for the statistics of the flow. We additionally found that we only need to correct the diffusion strength in particles that are subsonic, $\mathcal{M}<1$, with the Mach number for each particle derived from the current velocity of the particle divided by its thermal sound speed. The right column of Fig. 2 shows the velocity power spectra of our simulations with a dissipation boost factor of $\gamma\sim 10$ on only the subsonic particles in each simulation. At all Mach numbers the additional dissipation causes the kinetic energy to convert into thermal energy much more readily, causing the disappearance of the additional power at small scales, $k\gtrsim 40$. The gradient models Grad. and Dyn. Grad. perform better than the Smagorinsky variants, but only slightly. That is expected in isotropic homogeneous turbulence since the dissipation strength is effectively the same, and confirms that our implementation of the gradient model is a successful eddy viscosity model. We emphasise that the results we observe in Fig. 1 are very similar to those found in Bauer & Springel (2012) and Hopkins (2015) for the moving-mesh method (MM; as implemented in AREPO) and the mesh-free finite mass method (MFM; as implemented in GIZMO), respectively. The only difference is that we show results for much higher resolutions where we begin to see an extended inertial range. Both the MFM and MM hydrodynamical methods produce the same kinetic energy bump that exists in grid-based methods and, therefore, we would expect an eddy viscosity model to also solve the problem in the MM method, although we do not test that in this work. 
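The subsonic-only correction described above reduces to a per-particle mask on the diffusion strength. A minimal sketch (function and variable names are illustrative, not from the simulation code):

```python
import numpy as np

def boost_diffusivity(K_norm, v, c_s, gamma=10.0):
    """Apply the boost ||K|| -> gamma * ||K|| only where the particle
    Mach number v / c_s is subsonic."""
    mach = v / c_s
    return np.where(mach < 1.0, gamma * K_norm, K_norm)

# Three hypothetical particles: one subsonic, two supersonic.
K_norm = np.array([1.0, 1.0, 1.0])
v = np.array([0.5, 1.5, 3.0])     # particle speeds
c_s = np.array([1.0, 1.0, 1.0])   # thermal sound speeds
boosted = boost_diffusivity(K_norm, v, c_s)
# Only the first (subsonic) particle is boosted by gamma = 10.
```

The gate is evaluated per particle and per timestep, so gas that transitions across $\mathcal{M}=1$ smoothly picks up or drops the extra dissipation.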
To reiterate: hydrodynamical simulations must resolve the inertial range, followed by a sharp drop-off in power; otherwise they do not reproduce what we physically observe as turbulence below the scale where the build-up of kinetic energy begins to dominate. ### 3.2 Metal mixing After each simulation reached $\sim 4\tau_{\mathrm{mix}}$, we treated each steady-state volume as new initial conditions for our metal mixing study. In each volume, we gave the densest $50\%$ of particles a metal mass fraction of $Z=1$ while keeping the rest at $Z=0$. The metals in our simulations act as passive scalars and have no impact on the flow properties. We will test the model with more realistic metal distributions in Section 4. First, we must determine if our simulations converge toward a solution for the metal distribution as we increase resolution. We ran each of the metal-enriched volumes for an additional $4\tau_{\mathrm{mix}}$ to sample a wide variety of metal distribution states. We expect a priori that by $\sim 2\tau_{\mathrm{mix}}$ the metal-enriched particles should be scattered approximately homogeneously since a particle with the typical velocity $\langle v\rangle$ should have crossed the volume twice in that time. Although that is true for all resolutions, how can we compare each resolution on equal footing after it has reached equilibrium? The appropriate comparison involves smoothing the spatial distribution of metals on the same scale in all of our simulations. The main assumption is that our simulations with particle counts $\geqslant 128^{3}$ contain more accurate information on the scales equivalent to our $64^{3}$ simulations. Equivalently, if we degrade the resolution of the highest resolution simulations to the lowest resolution, we should hope to obtain a result similar to the lowest resolution simulation. 
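This equal-footing comparison can be sketched as follows, assuming the particle data have already been kernel-weighted onto a grid (the grid sizes and field values here are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smoothed_metal_std(Z_grid, n_grid_low=64):
    """Smooth a gridded metal field to the effective resolution of
    the lowest-resolution (64^3) run with a periodic top-hat, then
    measure the width sigma_Z of the distribution."""
    w = max(int(round(Z_grid.shape[0] / n_grid_low)), 1)
    Z_smooth = uniform_filter(Z_grid, size=w, mode='wrap')
    return np.std(Z_smooth)

# A perfectly mixed field has sigma_Z = 0; an unmixed bimodal field
# (half Z = 1, half Z = 0) stays wide after smoothing.
rng = np.random.default_rng(2)
Z_bimodal = rng.integers(0, 2, size=(128, 128, 128)).astype(float)
Z_mixed = np.full((128, 128, 128), 0.5)
print(smoothed_metal_std(Z_bimodal) > smoothed_metal_std(Z_mixed))
```

Shrinking $\sigma_{\mathrm{Z}}$ with increasing resolution (or with a mixing model switched on) is then a direct, resolution-fair measure of how homogenised the metals have become.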
To degrade the resolution for each simulation, we first kernel-weight the particle data onto a grid with resolution twice as fine as the minimum smoothing length in the simulation, $\Delta x_{\mathrm{sim,i}}$. Next, we smooth the grid data on a physical scale equivalent to our $64^{3}$ simulation using a uniform top-hat filter (specifically, the uniform_filter function from the scipy package in Python, with periodicity enabled) with width $w=\Delta x_{\mathrm{low}}/\Delta x_{\mathrm{sim,i}}$, where $\Delta x_{\mathrm{low}}$ is always $\Delta x_{\mathrm{low}}\equiv 1/64$ since our box has length $L=1$. See Appendix B for a comparison of the smoothed metal distributions. Figure 3: The standard deviation of the Gaussian fit to the metal distributions in our driven turbulence experiments as a function of resolution, $N_{\mathrm{x}}$. Mach numbers are $\mathcal{M}=0.3$, $0.7$, and $2.1$ from top to bottom, respectively. The stars show the simulations with no sub-grid metal mixing model at resolutions given by the number associated with each point. The simulations at $64^{3}$ resolution with sub-grid metal mixing models are given by the figure labels. The dotted line shows an exponential fit to the no metal mixing model simulations. Clearly, the gradient model allows us, at $64^{3}$ resolution, to reproduce the spatial redistribution of metals due to turbulence at a level better than $4-8$ times the resolution. Fig. 3 shows the standard deviation ($\sigma_{\mathrm{Z}}$) of the smoothed metal distribution as a function of $N_{\mathrm{x}}=\\{64,128,256,512,768\\}$ in our simulations at $t\sim 2\tau_{\mathrm{mix}}$, for Mach numbers $0.3$, $0.7$, and $2.1$ from top to bottom, respectively. The stars correspond to the simulations without a sub-grid metal mixing model at the resolution given by their labels. The remaining symbols in the legend correspond to the simulations at $64^{3}$ with a sub-grid metal mixing model (see Table 1 for a description). 
The dotted line shows an exponential decay fit used to test convergence in the simulations without metal mixing. We expect decreasing $\sigma_{\mathrm{Z}}$ with increasing resolution, since hydrodynamical mixing is better resolved and the metal value in each grid cell approaches the mean. At $\mathcal{M}\sim 0.3$, in the top panel of Fig. 3, $\sigma_{\mathrm{Z}}$ follows an exponentially decreasing trend with resolution in the simulations without metal mixing, as expected. However, there is no evidence for strong convergence at our highest resolution, although the curve appears to be beginning to flatten. The inverted triangle shows the results for the Dyn. Smag. model and the left-pointing triangle shows the result for the Smag. model. Both the dynamic and standard Smagorinsky models predict a more reasonable $\sigma_{\mathrm{Z}}$, reproducing a metal distribution closer to a resolution of $512^{3}$. The FIRE calibration of the Smagorinsky model, marked as $\times$, shows little improvement in $\sigma_{\mathrm{Z}}$; the result is effectively equivalent to having no model at all. The Dyn. Grad. and Grad. models are marked by a plus sign and diamond, respectively. The gradient model clearly improves $\sigma_{\mathrm{Z}}$ and can, at $64^{3}$ resolution, also reproduce a metal distribution equivalent to a resolution of $512^{3}$. The trends are equivalent for $\mathcal{M}\sim 0.7$ turbulence, so we continue to the next panel. The bottom panel of Fig. 3 shows how the sub-grid metal mixing models impact supersonic turbulence at $\mathcal{M}\sim 2.1$. There is much better convergence of the metal distribution at $t\sim 2\tau_{\mathrm{mix}}$ than in subsonic turbulence at this time. The qualitative trend remains the same for the $64^{3}$ simulations with sub-grid metal mixing: the gradient and Smagorinsky models perform equally well. However, the widths $\sigma_{\mathrm{Z}}$ of the distributions are much larger in $\mathcal{M}\sim 2.1$ turbulence.
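The exponential convergence fit used for the dotted line can be reproduced, to first approximation, by linear regression in log space. This sketch assumes a pure decay $\sigma_{\mathrm{Z}}(N)=a\,e^{-bN}$ with no convergence floor, which is our simplifying assumption rather than the paper's exact fitting form:

```python
import numpy as np

def fit_exponential_decay(N, sigma):
    """Fit sigma(N) = a * exp(-b * N) via least squares on log(sigma).
    Returns (a, b); assumes sigma > 0 and ignores any resolution floor."""
    slope, intercept = np.polyfit(np.asarray(N, dtype=float), np.log(sigma), 1)
    return float(np.exp(intercept)), float(-slope)
```

A flattening of the measured $\sigma_{\mathrm{Z}}$ above such a fit at high $N_{\mathrm{x}}$ is exactly the behaviour described in the text for the $\mathcal{M}\sim 0.3$ case.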
With the exception of the FIRE calibration, the metal mixing models at $64^{3}$ resolution predict metal distributions similar to $256^{3}$. Our turbulence tests with metals demonstrate that sub-grid metal mixing models are necessary in the mesh-free finite mass (MFM) method if one desires more accurate metal distributions. The method we provide for calibrating metal mixing models is important and, we argue, should be applied whenever one implements a new metal mixing model into any hydrodynamics solver. In particular, if a dynamic procedure is not applied, one must calibrate the model parameter at the resolution desired for the mixing model. That is an important point: the dynamic procedure (Dyn. Smag. and Dyn. Grad.) allows us to approximate the calibrated model parameter for the corresponding model (Smag. and Grad.) without carrying out the calibration. However, the true power of the dynamic procedure is in simulations with mixtures of non-turbulent and turbulent gas at various Mach numbers. In those complex environments the dynamic procedure automatically adjusts the model parameter and, as we showed in Rennehan et al. (2019) for the Dyn. Smag. model, drastically alters the resulting metal distributions. As expected, in pure homogeneous, isotropic turbulence all of the sub-grid metal mixing models improved the accuracy of the metal distributions when properly calibrated. That is expected in the homogeneous, isotropic case, since all of the models effectively act in the same way on average. The power of the dynamic procedure and the new anisotropic model lies in cosmological simulations, where many complex flows interact.

### 3.3 Applicability to other hydrodynamical methods

While our interest lies in the mesh-free finite mass (MFM) method, the models in Section 2 and experiments in Section 3 are applicable to other hydrodynamical solvers. Specifically, our results extrapolate with minor modification to grid-based hydrodynamics.
Only slight changes to the filtering method must be implemented, as outlined in Schmidt (2015). More care must be taken when applying the model to smoothed particle hydrodynamics (SPH). However, as we outline below, there is much broader applicability to the moving-mesh (MM) method. First we consider eddy viscosity. In SPH, it is well established that there is a deficit in power near the resolution scale rather than a build-up of kinetic energy (Bauer & Springel, 2012; Price, 2012; Hopkins, 2013, 2015). That fact suggests that SPH reproduces turbulence better than the MFM or MM methods, but produces results at a much lower effective resolution. The lack of power, rather than an overabundance of power, implies that an eddy viscosity model would only further degrade the resolution of SPH results and would not improve the inertial scaling. We therefore do not recommend eddy viscosity models for SPH; instead, we point to the work of Di Mascio et al. (2017), who recently provided an SPH equivalent. For the MM method, the build-up of kinetic energy at the resolution scale is present (Bauer & Springel, 2012) and of equivalent magnitude to our results in the MFM method. Therefore, we recommend investigation into eddy viscosity models for the MM method, as they could improve the inertial scaling. In terms of implementation, all of the derivations in Section 2 apply to the MM method. Metal mixing using the Smagorinsky model has been studied in cosmological simulations involving SPH but widely ignored in the MM method. However, no calibration technique has been provided by the SPH community; the calibrations usually follow the theoretical value for the Smagorinsky model (e.g., Shen et al. 2010; Williamson et al. 2016) or depend on sub-grid astrophysics models (e.g., Wadsley et al. 2017; Escala et al. 2018).
The metal mixing calibration technique in Section 3 is completely applicable to SPH, since metals are treated equivalently to the MFM method and both are constant-mass methods. Additionally, our calibration technique does not depend on the uncertainties within astrophysical sub-grid models, only on pure hydrodynamics. For the MM method, all of Section 2 is applicable for metal mixing, since the MM method relies on transport equations such as equation (2) for advecting metals throughout the fluid (Springel, 2010). In fact, Balarac et al. (2013) find that the gradient model improves the inertial scaling of the power in the metal field through identical transport equations. However, convergence tests such as those in Fig. 3 are necessary to determine whether such models are truly required, or whether numerical dissipation is adequate.

## 4 Cosmological Simulations

Understanding the evolution of galaxies is a complex enterprise involving highly non-linear, coupled physical processes. Stellar feedback and active galactic nuclei produce powerful outflows that drive turbulence not only locally in the interstellar media of galaxies but also in the gas reservoirs surrounding them. Turbulence also appears through the Kelvin-Helmholtz instability in the ram pressure stripping of galaxies moving through a hot medium, and through the stellar winds carrying material from galaxies into the circumgalactic medium. The physical processes above occur on spatial scales much smaller than can currently be resolved in the average contemporary cosmological simulation. For that reason, the majority of the astrophysics in cosmological simulations is encoded in parametrised sub-grid models that use information on the largest scales to predict what occurs below the resolution of the simulation. The resulting calculations usually indicate how much mass and energy should be injected into (or removed from) the large-scale gas and stellar components.
However, there is no one correct way to approximate the astrophysics on the sub-grid scale, since it depends strongly on the maximum possible resolution, the hydrodynamical method, and other complex numerical effects. These issues with numerics and missing physics usually end up in one or more tunable free parameters in the model. Assuming such a sub-grid astrophysical model is developed, how do we know that it is correct, or at least approximating reality? Normally, one or more trusted astronomical observations are used to test the validity of all of the sub-grid astrophysics that may exist. Common examples are the galaxy stellar mass function and the $M_{\mathrm{BH}}$-$\sigma_{*}$ relationship that links supermassive black hole masses to the stellar velocity dispersions of their host galaxies. However, two different hydrodynamical methods may require different parameter values for the same sub-grid astrophysical models. Additionally, there may be two completely different approaches to modelling the same physical phenomenon with no clear mapping between free parameters! Calibrating sub-grid astrophysical models is obviously a complicated endeavour and must be built on a strong hydrodynamics base. How can we begin to trust that our understanding of small-scale astrophysics approximates reality if the hydrodynamics, as we showed in Section 3, does not reproduce reality? Our goal is to determine whether the converged and separately calibrated sub-grid turbulence models we presented in Section 2 have any significant impact over the sub-grid astrophysical models that we use in large-volume cosmological simulations. As a first step, we only investigate the broad, qualitative impact on the gas properties in gaseous halos for a single set of sub-grid astrophysical models. We stress that we do not intend to reproduce the full galaxy population in a calibrated and predictive sense.
Additionally, we note that more testing is required across all of the sub-grid astrophysical models that exist in the literature, as the turbulent mixing models may interact with them in unexpected ways due to the non-linearity of the problem.

### 4.1 The simulations

For our comparison, we choose to use the SIMBA galaxy formation model. SIMBA includes robust sub-grid models of star formation, cooling, stellar feedback, chemical enrichment, active galactic nuclei feedback, and dust evolution, all evolved in concert with the mesh-free finite mass (MFM) method (Davé et al., 2016, 2019). For this study, we implemented the SIMBA models into the public version of GIZMO as described in Davé et al. (2019), and we point the interested reader to that study for the details of the sub-grid models. We follow the approach of Schaye et al. (2015) and, for the purposes of this study, calibrate our implementation of the SIMBA model only to the galaxy stellar mass function and the $M_{\mathrm{BH}}$-$M_{*}$ relationship at $z=0$.

Table 3: Cosmological parameters and simulation information for our simulation set. Our parameters follow Davé et al. (2019), except that we begin at a higher redshift.

| Cosmological Parameters | |
|---|---|
| $\Omega_{\mathrm{m,0}}$ | $0.3$ |
| $\Omega_{\mathrm{\Lambda,0}}$ | $0.7$ |
| $\Omega_{\mathrm{b,0}}$ | $0.048$ |
| $h$ | $0.68$ |
| $\sigma_{\mathrm{8}}$ | $0.82$ |
| $n_{\mathrm{spec}}$ | $0.97$ |
| Simulation Information | |
| $z_{\mathrm{begin}}$ | $249$ |
| $N_{\mathrm{part}}$ | $2\times 256^{3}$ |
| $L_{\mathrm{side}}$ | $25$ $\mathrm{cMpc}\,h^{-1}$ |
| $m_{\mathrm{part,gas}}$ | $1.26\times 10^{7}$ $\mathrm{M}_{\odot}h^{-1}$ |
| $m_{\mathrm{part,dark}}$ | $6.88\times 10^{7}$ $\mathrm{M}_{\odot}h^{-1}$ |
| $\epsilon_{\mathrm{soft,min}}$ | $0.5$ $\mathrm{kpc}\,h^{-1}$ |

We run 6 cosmological-scale volumes of side length $L=25$ $\mathrm{cMpc}\,h^{-1}$ ($\sim 37$ $\mathrm{cMpc}$) in order to compare our various mixing models.
The simulations begin from initial conditions generated with MUSIC (Hahn & Abel, 2011) at a redshift of $z=249$, with a standard $\Lambda$ Cold Dark Matter cosmology (see Table 3 for values). The mass resolutions in gas and dark matter follow the SIMBA simulations, with $m_{\mathrm{part,gas}}=1.26\times 10^{7}$ $\mathrm{M}_{\odot}h^{-1}$ and $m_{\mathrm{part,dark}}=6.88\times 10^{7}$ $\mathrm{M}_{\odot}h^{-1}$, respectively. We use adaptive gravitational softening (Hopkins, 2015; Hopkins et al., 2018) to compute the softening lengths of all of our particles, and enforce a minimum softening length of $\epsilon_{\mathrm{soft,min}}=0.5$ $\mathrm{kpc}\,h^{-1}$.

### 4.2 Global metal mixing

While sub-grid scale turbulence models maximally impact the smallest scales in a cosmological simulation, their integrated effect impacts the properties of the largest scales, such as global metal distribution functions (Shen et al., 2010; Escala et al., 2018; Rennehan et al., 2019). Therefore, in this Section, we examine the impact of the gradient model on the metal distribution functions (MDFs) in the circumgalactic medium (CGM) and warm-hot intergalactic medium (WHIM), both known to be turbulent environments (Iapichino et al., 2011; Iapichino et al., 2013; Tumlinson et al., 2017). Our definition of the CGM and WHIM depends on separating gas that is bound to halos from that which is unbound, at a given epoch. A good estimate comes from Davé et al. (2010),

$\frac{\rho_{\mathrm{bound}}(z)}{\Omega_{\mathrm{b}}(z)\rho_{\mathrm{c}}(z)}=6\pi^{2}\bigg{(}1+0.4093\bigg{(}\frac{1}{\Omega_{\mathrm{m}}(z)}-1\bigg{)}^{0.9052}\bigg{)}-1,$ (26)

where $\Omega_{\mathrm{b}}(z)$ is the baryon fraction as a function of redshift, $\Omega_{\mathrm{m}}(z)$ the matter fraction, $\rho_{\mathrm{c}}(z)=3(H(z))^{2}/(8\pi G)$ the critical density, and $H(z)$ the redshift-dependent Hubble function. We consider all gas above $\rho_{\mathrm{bound}}(z)$ to be bound to halos, and we have confirmed that the approximation holds well.
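Equation (26) is straightforward to evaluate given a background cosmology. A sketch in cgs units, using the parameter values from Table 3 and assuming a flat $\Lambda$CDM background in which the baryons follow the matter scaling (the function and constant names are ours, not from the paper's code):

```python
import numpy as np

G_CGS = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
H0_100 = 3.2407789e-18    # 100 km/s/Mpc expressed in s^-1

def rho_bound(z, Om0=0.3, OL0=0.7, Ob0=0.048, h=0.68):
    """Density threshold separating halo-bound from unbound gas,
    equation (26) of the text (Davé et al. 2010), returned in g cm^-3."""
    E2 = Om0 * (1.0 + z) ** 3 + OL0          # (H(z)/H0)^2, flat LCDM
    H = h * H0_100 * np.sqrt(E2)             # H(z) in s^-1
    rho_c = 3.0 * H ** 2 / (8.0 * np.pi * G_CGS)
    Om = Om0 * (1.0 + z) ** 3 / E2           # Omega_m(z)
    Ob = Ob0 * (1.0 + z) ** 3 / E2           # Omega_b(z), baryons trace matter
    delta = 6.0 * np.pi ** 2 * (1.0 + 0.4093 * (1.0 / Om - 1.0) ** 0.9052) - 1.0
    return delta * Ob * rho_c
```

At $z=0$ with these parameters, the right-hand side of equation (26) evaluates to a characteristic overdensity of $\sim 110$ relative to $\Omega_{\mathrm{b}}\rho_{\mathrm{c}}$, and the threshold rises steeply toward higher redshift with the $(1+z)^{3}$ scaling of the physical density.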
We define the CGM to be all gas in the volume that is above $\rho_{\mathrm{bound}}(z)$ in equation (26) and below the star formation density threshold, $\rho_{\mathrm{*,crit}}=4.4\times 10^{-25}$ $\mathrm{g\,cm^{-3}}$, at any temperature. That includes gas in the intragroup medium of our most massive halos in the $(25\,\mathrm{cMpc}\,h^{-1})^{3}$ volumes. The WHIM is all gas that is below $\rho_{\mathrm{bound}}(z)$ and above a temperature of $T=10^{5}$ $\mathrm{K}$. Fig. 4 shows the metal distribution functions (MDFs) for our two gas phases in columns: WHIM (left) and CGM (right), at $z=6$, $4$, $2$, and $0$ in rows from top to bottom, respectively. These are probability density functions, constructed by binning the particle metallicities logarithmically in the range $10^{-6}<Z/Z_{\odot}<10^{1}$, where $Z_{\odot}=0.0134$ (Asplund et al., 2009). The black curves show the control simulation, None, with no sub-grid metal mixing. The coloured curves show the simulations with sub-grid metal mixing and are, from lightest to darkest: Dyn. Smag. (solid salmon), Smag. (solid magenta), FIRE (dotted magenta), Dyn. Grad. (dotted purple), and Grad. (dashed purple). See Table 1 for more details.

Figure 4: The metal distribution function across global phases: the warm-hot intergalactic medium (WHIM; left column) and the circumgalactic medium (CGM; right column). Redshifts are given by the rows from top to bottom: $z=6$, $4$, $2$, and $0$, respectively. The gradient model variants mix metals much more rapidly in the early stages of the simulation, suggesting that the choice of model will impact studies that focus on the enrichment timing of halo gas.

First we focus on the WHIM. At $z=6$, there are two distinct components across all of our model variants. The peak at $Z\sim 10^{-1}$ $\mathrm{Z}_{\mathrm{\odot}}$ is the highly enriched interstellar medium (ISM) gas that recently joined the WHIM via stellar winds from the integrated star formation in the early universe.
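Under these definitions, the phase selection reduces to two boolean masks per snapshot. A sketch with illustrative array names (densities in $\mathrm{g\,cm^{-3}}$, temperatures in K; not the paper's analysis code):

```python
import numpy as np

RHO_SF_CRIT = 4.4e-25  # star formation density threshold, g cm^-3
T_WHIM = 1.0e5         # WHIM temperature floor, K

def classify_cgm_whim(rho, T, rho_bound_z):
    """Return (cgm, whim) boolean masks: the CGM is bound gas below the
    star formation threshold at any temperature; the WHIM is unbound gas
    hotter than 1e5 K."""
    rho = np.asarray(rho)
    T = np.asarray(T)
    cgm = (rho > rho_bound_z) & (rho < RHO_SF_CRIT)
    whim = (rho <= rho_bound_z) & (T > T_WHIM)
    return cgm, whim
```

The MDFs are then simply histograms of $Z/Z_{\odot}$ over each mask, normalised to probability densities.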
The lower distribution is gas that has mixed into the WHIM but did not recycle through the ISM, missing the opportunity for further enrichment via supernova feedback. It is important to note that since there is no mixing in the None case, a particle that leaves the ISM cannot change its metallicity. The models that include sub-grid mixing show varying spread in the MDFs, with the FIRE and Dyn. Smag. models showing the widest spread. The Dyn. Grad. and Grad. models show the tightest distributions, with the two peaks in the distributions seemingly merging at $Z\sim 10^{-1.5}$ $\mathrm{Z}_{\mathrm{\odot}}$. The Smag. model matches the Dyn. Smag. model at $Z\gtrsim 10^{-3}$ $\mathrm{Z}_{\mathrm{\odot}}$, but is biased toward higher metallicities below that threshold. The next 3 panels in the left column of Fig. 4 show the MDFs in the WHIM for $z=4$, $2$, and $0$, from top to bottom, respectively. The trend for all models is to approach a singly peaked distribution as the simulation evolves through cosmic time. Most of the evolution in the MDFs occurs from $z=6$ to $z=2$, after which the distributions are mostly stationary. The transition from $z=6$ to $z=4$ demonstrates how rapidly the WHIM evolves at high redshift, and how each sub-grid metal mixing model impacts the MDFs with a different mixing rate. Specifically, the Dyn. Grad. and Grad. models predict similar distributions at $z=4$, and produce the tightest MDFs of all of the models. In fact, the same trend holds at $z=4$ as at $z=6$: the gradient model variants (Dyn. Grad. and Grad.) predict tighter MDFs, followed by wider distributions in the Smagorinsky models (Smag., Dyn. Smag., and FIRE, respectively). There are similar trends in the CGM, as we see in the right column of Fig. 4. To reiterate, the panels show redshifts $z=6$, $4$, $2$, and $0$ from top to bottom, respectively.
At $z=6$, there is a clear distinction between the distributions at $Z\lesssim 10^{-2}$ $\mathrm{Z}_{\mathrm{\odot}}$ and $Z\gtrsim 10^{-2}$ $\mathrm{Z}_{\mathrm{\odot}}$ in the None case and in the Smagorinsky variants (Smag., Dyn. Smag., and FIRE). Stellar feedback drives the peak at $Z\sim 10^{-1}$ $\mathrm{Z}_{\mathrm{\odot}}$, similarly to the WHIM at this redshift, whereas the distribution at $Z\lesssim 10^{-2}$ $\mathrm{Z}_{\mathrm{\odot}}$ comes from the very first generations of stars. By this time the Dyn. Grad. and Grad. models have mixed rapidly enough to create a single broad distribution in their MDFs. All of the models with sub-grid metal mixing have much more gas mass enriched above $Z>10^{-6}$ $\mathrm{Z}_{\mathrm{\odot}}$ than the None case, especially compared to the deficit at $Z\sim 10^{-1.5}$ $\mathrm{Z}_{\mathrm{\odot}}$ in the None case. The Smagorinsky models vary in mixing rate as Smag., Dyn. Smag., and FIRE, from fastest to slowest, respectively. The MDFs in the CGM at redshifts $z=4$ to $z=0$ demonstrate the same trends as in the WHIM phase at the same redshifts: the gradient models mix much more rapidly at early stages than the Smagorinsky models. At $z=0$ the Dyn. Grad., Grad., and Smag. models predict the same distribution in the global CGM phase, whereas the Dyn. Smag. and FIRE models predict slightly less enriched gas. The MDFs in the turbulent WHIM and CGM show the importance of sub-grid metal mixing models in cosmological simulations, as well as the importance of model choice. In all cases where we include metal mixing, the MDFs are significantly tighter at all redshifts we measure, and especially tighter in the CGM at $z=0$. This is contrary to Su et al. (2017), who found metal mixing to be relatively unimportant on cosmological scales. However, we find that the FIRE calibration is much too low to reproduce the correct converged hydrodynamical mixing of metals (see Section 3.2).
With our new calibrations of the Smagorinsky model, Smag., and the new gradient models, Dyn. Grad. and Grad., we see significant differences at all redshifts. A common theme in theoretical galaxy evolution is that equivalent results between different models at $z=0$ do not necessarily imply a similar integrated history. The evolutionary paths for each sub-grid metal mixing model evolve at slightly different rates, as we would expect based on their diffusivities from Section 2. The lesson from our results is that the choice of metal mixing model impacts the early development phases of galaxies rather than the long-term equilibrium stages. At higher redshifts, $z\gtrsim 2$, gas is collapsing to form galaxies, while stellar feedback and supermassive black holes are driving outflows out of the potential wells and forcing turbulence. Our inclusion of the full diffusion tensor in the Dyn. Grad. and Grad. models allows gas that is compressing from feedback and infall to further mix its metal mass with nearby neighbours, tightening the MDFs. The Smagorinsky variants also improve the results and allow metal mixing between gas particles, but produce broader distributions, notwithstanding the good matches of the Dyn. Smag. and Smag. models in the convergence tests of Section 3.2. This is an important point: the simple turbulence tests in Section 3.2 showed agreement between the gradient and Smagorinsky models (except for the lower FIRE calibration), but we now see disagreement in complex cosmological environments. Evidently, ignoring the trace of the velocity tensor is not the correct approach in cosmological contexts, and we recommend either the Grad. or Dyn. Grad. models, as we see no difference when the dynamic procedure is applied to the constant-coefficient gradient case.
### 4.3 The impact of eddy viscosity

While the velocity power spectra results in Section 3.1 show that eddy viscosity is required in Lagrangian finite mass methods, we find no significant impact of eddy viscosity on the average gas and galaxy properties in our cosmological simulations. However, we should expect these results, considering that the eddy viscosity models we tested in Section 3.1 had no impact on the largest scales of the simulation (as they should not). In terms of global gas properties, we investigated the vorticity, temperature, and density distributions of the warm-hot intergalactic medium (WHIM) and circumgalactic medium (CGM) phases (definitions in Section 4.2) and found only minor differences in data binned by galaxy stellar mass. Additionally, there was little difference between the stellar mass distributions in our small-volume, low-resolution tests. To compare the impact of eddy viscosity between the models, we need to investigate the small scales of the simulations in a controlled manner. For that reason, we use the most massive galaxy in our cosmological simulations as a qualitative case study of the impact of the eddy viscosity model on halo gas, since this galaxy represents the same system in each simulation we ran. First, we investigate the temperature projections of the most massive (in stellar mass) galaxy at redshifts $z=2$, $1$, and $0$. We confirmed that the most massive galaxy at $z=2$ ends up as part of the most massive galaxy at $z=1$ and $z=0$. We restrict our analysis to a radius of $500$ $\mathrm{kpc}$ (physical) for the purposes of this introductory study. Additionally, given that the results in Section 4.2 point to the Smag., Grad., and Dyn. Grad. models as the best choices, we restrict our analysis to the None, Smag., and Grad. models. The Grad. model is much less computationally expensive than the Dyn. Grad.
model and is, therefore, the better choice for cosmological simulations (further study is required for tests that rely on capturing the small-scale structure as accurately as possible, such as cold clouds interacting with a hot medium).

Figure 5: Temperature projections of the most massive halo in three of our cosmological simulations at redshifts $z=2$, $1$, and $0$ in rows from top to bottom, respectively. The columns show our None, Smag., and Grad. models from left to right, respectively. Each of the panels represents a $1$ $\mathrm{Mpc}$ by $1$ $\mathrm{Mpc}$ (physical) region centred on the most massive galaxy at each redshift. The Smag. simulation shows the most small-scale structure at all redshifts, and a smoother distribution of temperature at high redshift compared to the None case. The Grad. model produces less small-scale structure than any other model, and much more hot gas at high redshift. Surprisingly, the Grad. model also produces more extended tails from sub-structure moving through the hot halo at lower redshift, suggesting it may impact future studies of jellyfish galaxies.

Figure 6: Velocity magnitude projections of the most massive halo at $z=0$ in our cosmological simulations. From left to right we show the None, Smag., and Grad. simulations, respectively. The images are centred on the most massive galaxy and each shows a $1$ $\mathrm{Mpc}$ by $1$ $\mathrm{Mpc}$ (physical) region. The Smag. model shows more substructure than the None and Grad. models. The Grad. model shows many extended tails from substructure moving through the hot halo compared to the control and Smag. models, in addition to an overall smoother distribution in velocity space.

Fig. 5 shows the density-weighted temperature projections of the most massive galaxy at $z=2$, $1$, and $0$ in rows from top to bottom, respectively. The columns show the None, Smag., and Grad. models from left to right, respectively. First, we compare the results at $z=2$ across the mixing model variants.
The dark clumps in the None case are individual cold galaxies that are being fed into the main structure via filaments. Ongoing stellar and AGN feedback leads to the temperature increase in the central region, while the extended $T\sim 10^{6}$ $\mathrm{K}$ halo is a mixture of gravitationally heated gas and gas heated by previous generations of stellar feedback and AGN. There are many filamentary structures visible feeding the main galaxy (centred), and there is some small-scale structure visible surrounding the galaxies. The Smag. case resembles the None case, except that there is less cold gas in the upper in-falling structure. Additionally, the satellite galaxies have more small-scale structure in the cold gas surrounding them. Importantly, the hotter gas appears more smoothly distributed throughout the volume due to an increased conversion of kinetic energy into thermal energy via the eddy viscosity. The temperature projection of the Grad. model simulation is strikingly different at $z=2$ from both the Smag. and None cases. While in the None and Smag. cases there appears to be large-scale gas at $T\lesssim 10^{5}$ $\mathrm{K}$ at the boundary of the $500$ $\mathrm{kpc}$ radius, there is no such gas in the Grad. case. There is much less small-scale structure in the Grad. case, and the filaments are much more smoothly distributed in space. We conclude that stellar feedback, gravitational heating, and AGN feedback are much more effective at heating the gas on small scales in the Grad. simulation, since the eddy viscosity is the strongest. At $z=1$, in the middle row of Fig. 5, a similar picture emerges. The Smag. case has much more small-scale structure than the None case, as is evident in the central region and in the leftmost satellite galaxy structure. There is still a much smoother distribution of cold gas extending outwards from the satellite galaxies. There are only a few centrally located cold gas clumps in the Grad. case compared with the Smag.
case, and there is effectively no identifiable fragmentation in the satellite galaxy structure to the left of the panel. The last row of Fig. 5 shows $z=0$ across the three model comparisons. By this redshift, there is little structure remaining in the group-sized halo and the temperature distribution of the intragroup gas is very smooth. Similarly to the other redshifts, the Smag. case shows the most fragmentation of cold gas, followed by the None case, and then the Grad. case. All three show a satellite being stripped of gas in the upper right of the panels, but the Smag. and Grad. models each differ in an important, unique way from the None case. It is much easier to see the differences in ram pressure stripping in velocity space rather than temperature space. Fig. 6 shows the density-weighted velocity magnitude projections of the most massive galaxy at $z=0$ for the three eddy viscosity models None, Smag., and Grad. from left to right, respectively. It is apparent across all three cases that the substructure is moving at least a factor of $\approx 5$ faster than the background gas, yet only the Grad. case shows clearly defined, long stripping tails from the cold gas in the galaxies. The Smag. case does not have the cold gas in the satellite galaxy marked by the arrows in the None and Grad. cases, as the gas has been completely stripped away. The Grad. case retains the cold gas in that satellite galaxy, and the tail is much more extended than in the None case. In fact, it is evident upon close inspection of the cold gas structures in the halo that the Grad. case produces tails from the cold gas in the galaxies more readily than the None or Smag. cases. That has implications for the study of ram pressure stripping in general, and should be further investigated in the future.
## 5 Conclusions

Turbulence is a key physical process in the study of galaxy evolution and one of many highly complex, non-linear interactions that must be understood to advance our knowledge of the Universe. The complexity demands the use of simulations that combine astrophysical sub-grid models with hydrodynamics and gravity in an expanding universe. All hydrodynamical simulations are known to require additional sub-grid models to accurately treat the impact of sub-grid turbulence, yet these models have been widely ignored in the astrophysical community. We have, for the first time in Lagrangian hydrodynamics, implemented and studied the gradient model (Clark et al., 1979) – an anisotropic sub-grid turbulence model for viscosity and metal mixing (Hu & Chiang, 2020). The model is based on directly modelling the error terms that arise from the discretisation of the fluid field via a Taylor series expansion, including the compression terms that are missing from the standard Smagorinsky model. We additionally implemented a dynamic procedure that computes the model parameter on-the-fly for the gradient model, following the approach of Rennehan et al. (2019). We used the mesh-free finite mass method in the GIZMO code (Hopkins, 2015) as our numerical hydrodynamics solver for all of our experiments. We ran driven turbulence simulations at Mach numbers $\mathcal{M}=0.3$, $0.7$, and $2.1$ to validate the gradient model and compare with the popular Smagorinsky model. Hu & Chiang (2020) recently showed, by post-processing their driven turbulence simulations, that the gradient model should be able to reduce the build-up of kinetic energy near the resolution scale in isotropic, homogeneous turbulence and better reproduce the sub-grid metal flux. We confirmed these results in Section 3 by using the gradient model at simulation time. Our analysis of the velocity power spectra in driven turbulence produced unexpected results.
We found that the gradient and Smagorinsky models, and their dynamic variants, predicted insufficient dissipation to reduce the artificial build-up of kinetic energy near the grid scale in our $256^{3}$ simulations. For that reason we introduced a boost factor $\gamma$ for the dissipation strength (i.e. $||\boldsymbol{K}||\rightarrow\gamma||\boldsymbol{K}||$) and experimented with various values in the range $\gamma\in[1,100]$. We found that a factor of $\gamma\sim 10$ is sufficient to reduce the build-up of kinetic energy and is required for all of the Smagorinsky and gradient model variants. Additionally, we found that the boost factor only needs to be applied to the subsonic ($\mathcal{M}<1$) particles in our simulations to produce the correct statistics in supersonic turbulence (our $\mathcal{M}\sim 2.1$ test). The true boost factor is higher, since we used the maximum interaction distance between neighbouring gas particles to calculate the diffusion tensor, leading to a $\sim 4$ times additional boost (total $\sim 40$) over the default implementation in GIZMO (Hopkins et al., 2018). Our converged metal mixing simulations in Section 3.2 show that when we use the gradient and Smagorinsky models at lower resolution ($64^{3}$ particles) we are able to produce MDFs that are equivalent to $4$ to $12$ times the resolution. That is true for both the constant-coefficient and dynamic variants of the gradient and Smagorinsky models with standard parameter values, with the additional factor of $\sim 4$ boost from using the maximum interaction distance in the kernel. However, lower calibrations such as those from the FIRE simulations do not produce the correct rate of mixing, as they are at least a factor of $\sim 20$ too low. We posit that this is because of the common calibration approach in cosmological simulations: calibration in tandem with the full suite of astrophysical sub-grid models.
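The Mach-gated boost described above is a simple conditional scaling of the dissipation norm. A sketch with assumed per-particle arrays (illustrative names; this is not the GIZMO implementation itself):

```python
import numpy as np

def boost_dissipation(K_norm, mach, gamma=10.0):
    """Apply ||K|| -> gamma * ||K|| only for subsonic (M < 1) particles,
    leaving supersonic particles unboosted, as described in the text."""
    K_norm = np.asarray(K_norm, dtype=float)
    return np.where(np.asarray(mach) < 1.0, gamma * K_norm, K_norm)
```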
We argue that calibration of fundamental hydrodynamics models, such as the metal mixing model here, _must be done in the absence of sub-grid astrophysical models_. There must be a strong hydrodynamics base before the complexity of astrophysics is built on top. Note that our dynamic Smagorinsky and dynamic gradient models _do not require calibration_ and produce accurate predictions for the rate of metal mixing in isotropic, homogeneous turbulence. As a first application of the new gradient model, we investigated a set of cosmological simulations to determine if there is any dominant impact on the galaxy and gas properties. We investigated the metal mixing and eddy viscosity separately in Sections 4.2 & 4.3, respectively. We found that the choice of sub-grid metal mixing model strongly impacts the MDF evolution in the warm-hot intergalactic medium (WHIM) and circumgalactic medium (CGM). We found that the gradient and dynamic gradient models mix metals much more rapidly than the Smagorinsky variants and produce tighter MDFs up until $z\sim 1$ when they approach a similar distribution down to $z=0$. In our simulation without sub-grid metal mixing, the MDFs in the WHIM and CGM are significantly broader than any of the simulations with sub-grid metal mixing demonstrating that, at the very minimum, including any sub-grid metal mixing model is an improvement. The most important result we discovered is that the metal mixing models are most impactful during the tempestuous early stages of galaxy evolution. On very long timescales, the equilibrium distributions match quite closely across the models. Including eddy viscosity in our cosmological simulations did not significantly impact the galaxy properties we investigated when averaging in bins of stellar mass after $z=2$. The galaxy stellar mass function was relatively unchanged, along with only slight variations in the stellar mass to halo mass function. 
We also found that including eddy viscosity did not significantly impact the averaged gas distributions of vorticity, temperature, and density across galaxies of similar stellar mass. We expected a priori that the large-scale properties of galaxies would be unaffected as the sub-grid eddy viscosity mainly impacts the small scales. For that reason, we investigated a single system that could be linked across all of our cosmological simulations to gain a qualitative view of the impact. In Section 4.3 we showed the temperature projections of the most massive halo traced from $z=2$, $1$ and $0$ in our cosmological simulations for three eddy viscosity models: no model, the standard Smagorinsky model, and the new constant-coefficient gradient model. We found that the Smagorinsky model produced much more fragmentation in the halo gas of the most massive galaxy on the small scale compared to having no eddy viscosity, at all redshifts. We also found that the spatial temperature distribution was much smoother at $z=2$ when stellar and active galactic nuclei feedback was much stronger, showing that the small-scale kinetic energy was being efficiently converted into thermal energy. Although the constant-coefficient gradient model seemingly dissipates faster based on our results in Section 3.1, we observed that its inherent anisotropy does not lead to the same fragmentation we saw in the Smagorinsky model. At high redshift ($z=2$), the gradient model produced much more widespread hot gas. The filamentary structure at all redshifts was much smoother and, after $z=1$, the satellite galaxies in the halo had many more clearly defined tails due to an improved treatment of ram-pressure stripping. Sub-grid metal mixing and eddy viscosity models have a strong impact on galaxy evolution simulations.
In this work, we showed in the simplest case of isotropic, homogeneous turbulence that all of the models tested here improved the accuracy of metal mixing and turbulent kinetic energy dissipation in the mesh-free finite mass method. The most significant differences between model choice appeared at high redshift in the early stages of galaxy evolution, before any equilibrium is reached. Given that contemporary cosmological simulations have resolutions of less than $\sim 50^{3}$ particles in a typical galaxy, we recommend that future studies at least use the constant-coefficient gradient model as it is (a) computationally inexpensive compared to the dynamic version, while producing similar results and (b) it includes the full velocity tensor in the diffusion tensor to give the most accurate solution for sub-grid turbulence. Our recommendation is especially pertinent given the recent push to study higher redshift systems driven by the upcoming launch of the James Webb Space Telescope, as our theoretical understanding of enrichment and thermodynamic histories will depend directly on sub-grid turbulence model choice.

## Acknowledgements

This research was enabled in part by support provided by WestGrid and Compute/Calcul Canada. The simulations in this research were made possible by SciNet and the Niagara supercomputing cluster. DR acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number 534263] and through the Discovery Grant program. DR thanks Arif Babul, Belaid Moa, Ondrea Clarkson, Austin Davis, Drummond Fielding, Phil Hopkins, and Wolfram Schmidt for useful advice during the course of this research. DR gives special acknowledgement to Arif Babul and Belaid Moa for providing invaluable support to this research, without which it would not have been possible. Our analysis was performed using the Python programming language (Python Software Foundation, https://www.python.org).
The following packages were used throughout the analysis: h5py (Collette, 2013), numpy (Harris et al., 2020), scipy (Virtanen et al., 2020), yt (Turk et al., 2011), and matplotlib (Hunter, 2007). Prototyping of the analysis scripts was performed in the IPython environment (Perez & Granger, 2007).

## Data availability

The data underlying this article will be shared on reasonable request to the author.

## References

* Agertz et al. (2007) Agertz O., et al., 2007, Monthly Notices of the Royal Astronomical Society, 380, 963 * Anderson & Bregman (2010) Anderson M. E., Bregman J. N., 2010, The Astrophysical Journal, 714, 320 * Asplund et al. (2009) Asplund M., Grevesse N., Sauval A. J., Scott P., 2009, Annual Review of Astronomy and Astrophysics, 47, 481 * Balarac et al. (2013) Balarac G., Le Sommer J., Meunier X., Vollant A., 2013, Physics of Fluids, 25 * Bauer & Springel (2012) Bauer A., Springel V., 2012, Monthly Notices of the Royal Astronomical Society, 423, 2558 * Bennett & Sijacki (2020) Bennett J. S., Sijacki D., 2020, Monthly Notices of the Royal Astronomical Society, 499, 597 * Bourne & Sijacki (2017) Bourne M. A., Sijacki D., 2017, Monthly Notices of the Royal Astronomical Society, 472, 4707 * Brook et al. (2014) Brook C. B., Stinson G., Gibson B. K., Shen S., Macciò A. V., Obreja A., Wadsley J., Quinn T., 2014, Monthly Notices of the Royal Astronomical Society, 443, 3809 * Buie et al. (2020) Buie E., Fumagalli M., Scannapieco E., 2020, The Astrophysical Journal, 890, 33 * Chandrasekhar (1961) Chandrasekhar S., 1961, Hydrodynamic and hydromagnetic stability. Oxford University Press, London and New York * Clark et al. (1979) Clark R. A., Ferziger J. H., Reynolds W. C., 1979, Journal of Fluid Mechanics, 91, 1 * Colbrook et al. (2017) Colbrook M. J., Ma X., Hopkins P. F., Squire J., 2017, Monthly Notices of the Royal Astronomical Society, 467, 2421 * Collette (2013) Collette A., 2013, Python and HDF5. O’Reilly * Davé et al. (2010) Davé R., Oppenheimer B.
D., Katz N., Kollmeier J. A., Weinberg D. H., 2010, Monthly Notices of the Royal Astronomical Society, 408, 2051 * Davé et al. (2016) Davé R., Thompson R., Hopkins P. F., 2016, Monthly Notices of the Royal Astronomical Society, 462, 3265 * Davé et al. (2017) Davé R., Rafieferantsoa M. H., Thompson R. J., Hopkins P. F., 2017, Monthly Notices of the Royal Astronomical Society, 467, stx108 * Davé et al. (2019) Davé R., Anglés-Alcázar D., Narayanan D., Li Q., Rafieferantsoa M. H., Appleby S., 2019, Monthly Notices of the Royal Astronomical Society, 486, 2827 * Dekel et al. (2009) Dekel A., Sari R., Ceverino D., 2009, The Astrophysical Journal, 703, 785 * Di Mascio et al. (2017) Di Mascio A., Antuono M., Colagrossi A., Marrone S., 2017, Physics of Fluids, 29, 035102 * Elmegreen & Scalo (2004) Elmegreen B. G., Scalo J., 2004, Annu. Rev. Astron. Astrophys, 42, 211 * Escala et al. (2018) Escala I., et al., 2018, Monthly Notices of the Royal Astronomical Society, 474, 2194 * Federrath (2013) Federrath C., 2013, Monthly Notices of the Royal Astronomical Society, 436, 1245 * Fielding et al. (2017) Fielding D., Quataert E., McCourt M., Thompson T. A., 2017, Monthly Notices of the Royal Astronomical Society, 466, 3810 * Fielding et al. (2018) Fielding D., Quataert E., Martizzi D., 2018, Monthly Notices of the Royal Astronomical Society, 481, 3325 * Gaburov & Nitadori (2011) Gaburov E., Nitadori K., 2011, Monthly Notices of the Royal Astronomical Society, 414, 129 * Garnier et al. (2009) Garnier E., Adams N., Sagaut P., 2009, Large Eddy Simulation for Compressible Flows. Scientific Computation, Springer Netherlands, Dordrecht, doi:10.1007/978-90-481-2819-8 * Genel et al. (2014) Genel S., et al., 2014, Monthly Notices of the Royal Astronomical Society, 445, 175 * Gingold & Monaghan (1977) Gingold R. A., Monaghan J. J., 1977, Monthly Notices of the Royal Astronomical Society, 181, 375 * Greif et al. (2009) Greif T. H., Glover S. C. O., Bromm V., Klessen R. 
S., 2009, Monthly Notices of the Royal Astronomical Society, 392, 1381 * Guedes et al. (2011) Guedes J., Callegari S., Madau P., Mayer L., 2011, The Astrophysical Journal, 742, 76 * Hafen et al. (2019) Hafen Z., et al., 2019, Monthly Notices of the Royal Astronomical Society, 488, 1248 * Hafen et al. (2020) Hafen Z., et al., 2020, Monthly Notices of the Royal Astronomical Society, 494, 3581 * Hahn & Abel (2011) Hahn O., Abel T., 2011, Monthly Notices of the Royal Astronomical Society, 415, 2101 * Harris et al. (2020) Harris C. R., et al., 2020, Nature, 585, 357 * Hernquist & Katz (1989) Hernquist L., Katz N., 1989, The Astrophysical Journal Supplement Series, 70, 419 * Hopkins (2013) Hopkins P. F., 2013, Monthly Notices of the Royal Astronomical Society, 428, 2840 * Hopkins (2015) Hopkins P. F., 2015, Monthly Notices of the Royal Astronomical Society, 450, 53 * Hopkins (2017) Hopkins P. F., 2017, Monthly Notices of the Royal Astronomical Society, 466, 3387 * Hopkins et al. (2014) Hopkins P. F., Kereš D., Oñorbe J., Faucher-Giguère C.-A., Quataert E., Murray N., Bullock J. S., 2014, Monthly Notices of the Royal Astronomical Society, 445, 581 * Hopkins et al. (2018) Hopkins P. F., et al., 2018, Monthly Notices of the Royal Astronomical Society, 480, 800 * Hu & Chiang (2020) Hu C.-Y., Chiang C.-T., 2020, The Astrophysical Journal, 900, 29 * Hunter (2007) Hunter J. D., 2007, Computing in Science & Engineering, 9, 90 * Iapichino et al. (2011) Iapichino L., Schmidt W., Niemeyer J. C., Merklein J., 2011, Monthly Notices of the Royal Astronomical Society, 414, 2297 * Iapichino et al. (2013) Iapichino L., Viel M., Borgani S., 2013, Monthly Notices of the Royal Astronomical Society, 432, 2529 * Karen Yang & Reynolds (2016) Karen Yang H.-Y., Reynolds C. 
S., 2016, The Astrophysical Journal, 829, 90 * Khatri & Gaspari (2016) Khatri R., Gaspari M., 2016, Monthly Notices of the Royal Astronomical Society, 463, 655 * Landau & Lifshitz (1987) Landau L., Lifshitz E., 1987, Course of theoretical physics. Vol. 6: Fluid Mechanics. Pergamon Press * Lanson & Vila (2008a) Lanson N., Vila J.-P., 2008a, SIAM Journal on Numerical Analysis, 46, 1912 * Lanson & Vila (2008b) Lanson N., Vila J.-P., 2008b, SIAM Journal on Numerical Analysis, 46, 1935 * Li et al. (2020) Li M., Li Y., Bryan G. L., Ostriker E. C., Quataert E., 2020, The Astrophysical Journal, 898, 23 * Lochhaas et al. (2020) Lochhaas C., Bryan G. L., Li Y., Li M., Fielding D., 2020, Monthly Notices of the Royal Astronomical Society, 493, 1461 * Lucy (1977) Lucy L. B., 1977, The Astronomical Journal, 82, 1013 * Monaghan (1989) Monaghan J. J., 1989, Journal of Computational Physics, 82, 1 * Monaghan (2002) Monaghan J. J., 2002, Monthly Notices of the Royal Astronomical Society, 335, 843 * Monaghan (2011) Monaghan J. J., 2011, European Journal of Mechanics, B/Fluids, 30, 360 * Naab & Ostriker (2017) Naab T., Ostriker J. P., 2017, Annual Review of Astronomy and Astrophysics, 55, 59 * Pan et al. (2013) Pan L., Scannapieco E., Scalo J., 2013, The Astrophysical Journal, 775, 111 * Perez & Granger (2007) Perez F., Granger B. E., 2007, Computing in Science & Engineering, 9, 21 * Pillepich et al. (2018) Pillepich A., et al., 2018, Monthly Notices of the Royal Astronomical Society, 475, 648 * Pinto et al. (2015) Pinto C., et al., 2015, Astronomy & Astrophysics, 575, A38 * Piomelli & Liu (1995) Piomelli U., Liu J., 1995, Physics of Fluids, 7, 839 * Poole et al. (2006) Poole G. B., Fardal M. A., Babul A., McCarthy I. G., Quinn T., Wadsley J., 2006, Monthly Notices of the Royal Astronomical Society, 373, 881 * Pope (2000) Pope S. B., 2000, Turbulent Flows, doi:10.1017/CBO9780511840531 * Prasad et al. 
(2015) Prasad D., Sharma P., Babul A., 2015, The Astrophysical Journal, 811, 108 * Prasad et al. (2018) Prasad D., Sharma P., Babul A., 2018, The Astrophysical Journal, 863, 62 * Price (2012) Price D. J., 2012, Monthly Notices of the Royal Astronomical Society: Letters, 420, 33 * Rennehan et al. (2019) Rennehan D., Babul A., Hopkins P. F., Davé R., Moa B., 2019, Monthly Notices of the Royal Astronomical Society, 483, 3810 * Ruggiero & Lima Neto (2017) Ruggiero R., Lima Neto G. B., 2017, Monthly Notices of the Royal Astronomical Society, 468, 4107 * Sagaut (2006) Sagaut P., 2006, Large Eddy Simulation for Incompressible Flows. Scientific Computation, Springer-Verlag, Berlin/Heidelberg, doi:10.1007/b137536 * Scalo & Elmegreen (2004) Scalo J., Elmegreen B. G., 2004, Annual Review of Astronomy and Astrophysics, 42, 275 * Scannapieco & Brüggen (2008) Scannapieco E., Brüggen M., 2008, The Astrophysical Journal, 686, 927 * Schaye et al. (2015) Schaye J., et al., 2015, Monthly Notices of the Royal Astronomical Society, 446, 521 * Schmidt (2015) Schmidt W., 2015, Living Reviews in Computational Astrophysics, 1, 64 * Schmidt et al. (2014) Schmidt W., et al., 2014, Monthly Notices of the Royal Astronomical Society, 440, 3051 * Semenov et al. (2016) Semenov V. A., Kravtsov A. V., Gnedin N. Y., 2016, The Astrophysical Journal, 826, 200 * Shen et al. (2010) Shen S., Wadsley J., Stinson G., 2010, Monthly Notices of the Royal Astronomical Society, 407, 1581 * Shen et al. (2012) Shen S., Madau P., Aguirre A., Guedes J., Mayer L., Wadsley J., 2012, The Astrophysical Journal, 760, 50 * Shen et al. (2013) Shen S., Madau P., Guedes J., Mayer L., Prochaska J. X., Wadsley J., 2013, The Astrophysical Journal, 765, 89 * Simons et al. (2020) Simons R. C., et al., 2020, The Astrophysical Journal, 905, 167 * Smagorinsky (1963) Smagorinsky J., 1963, Monthly Weather Review, 91, 99 * Sokołowska et al. 
(2018) Sokołowska A., Babul A., Mayer L., Shen S., Madau P., 2018, The Astrophysical Journal, 867, 73 * Somerville & Davé (2015) Somerville R. S., Davé R., 2015, Annu. Rev. Astron. Astrophys, 53, 51 * Springel (2010) Springel V., 2010, Monthly Notices of the Royal Astronomical Society, 401, 791 * Su et al. (2017) Su K.-Y., Hopkins P. F., Hayward C. C., Faucher-Giguère C.-A., Kereš D., Ma X., Robles V. H., 2017, Monthly Notices of the Royal Astronomical Society, 471, 144 * Tremmel et al. (2017) Tremmel M., Karcher M., Governato F., Volonteri M., Quinn T. R., Pontzen A., Anderson L., Bellovary J., 2017, Monthly Notices of the Royal Astronomical Society, 470, 1121 * Tremmel et al. (2019) Tremmel M., et al., 2019, Monthly Notices of the Royal Astronomical Society, 483, 3336 * Tumlinson et al. (2013) Tumlinson J., et al., 2013, The Astrophysical Journal, 777, 59 * Tumlinson et al. (2017) Tumlinson J., Peeples M. S., Werk J. K., 2017, Annual Review of Astronomy and Astrophysics, 55, 389 * Turk et al. (2011) Turk M. J., Smith B. D., Oishi J. S., Skory S., Skillman S. W., Abel T., Norman M. L., 2011, The Astrophysical Journal Supplement Series, 192, 9 * Vazza et al. (2010) Vazza F., Brunetti G., Gheller C., Brunino R., 2010, New Astronomy, 15, 695 * Vazza et al. (2012) Vazza F., Roediger E., Brüggen M., 2012, Astronomy & Astrophysics, 544, A103 * Vazza et al. (2017) Vazza F., Jones T. W., Brüggen M., Brunetti G., Gheller C., Porter D., Ryu D., 2017, Monthly Notices of the Royal Astronomical Society, 464, 210 * Vazza et al. (2018) Vazza F., Angelinelli M., Jones T. W., Eckert D., Brüggen M., Brunetti G., Gheller C., 2018, Monthly Notices of the Royal Astronomical Society: Letters, 481, L120 * Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261 * Vogelsberger et al. (2014) Vogelsberger M., et al., 2014, Monthly Notices of the Royal Astronomical Society, 444, 1518 * Wadsley et al. (2008) Wadsley J. W., Veeravalli G., Couchman H. M. 
P., 2008, Monthly Notices of the Royal Astronomical Society, 387, 427 * Wadsley et al. (2017) Wadsley J. W., Keller B. W., Quinn T. R., 2017, Monthly Notices of the Royal Astronomical Society, 471, 2357 * Wang et al. (2020) Wang C., Ruszkowski M., Pfrommer C., Oh S. P., Yang H. Y. K., 2020, arXiv preprint arXiv:2012.11085 * Werk et al. (2016) Werk J. K., et al., 2016, The Astrophysical Journal, 833, 1 * Williamson et al. (2016) Williamson D., Martel H., Kawata D., 2016, The Astrophysical Journal, 822, 91 * Wittor et al. (2017) Wittor D., Jones T., Vazza F., Brüggen M., 2017, Monthly Notices of the Royal Astronomical Society, 471, 3212 * Zhuravleva et al. (2014) Zhuravleva I., et al., 2014, Nature, 515, 85 * Zhuravleva et al. (2015) Zhuravleva I., et al., 2015, Monthly Notices of the Royal Astronomical Society, 450, 4184 * Zhuravleva et al. (2018) Zhuravleva I., Allen S. W., Mantz A., Werner N., 2018, The Astrophysical Journal, 865, 53 * ZuHone et al. (2013) ZuHone J. A., Markevitch M., Ruszkowski M., Lee D., 2013, The Astrophysical Journal, 762, 69 ## Appendix A FILTERING APPROXIMATION Consider a scalar field $f_{i}(\boldsymbol{r})$ where $\boldsymbol{r}=(x,y,z)$ in a general coordinate system. We expand $f_{i}(\boldsymbol{r}^{\prime})$ about $\boldsymbol{r}$ in equation (1) via a Taylor expansion, $\begin{split}f_{i}(\boldsymbol{r}^{\prime})=f_{i}(\boldsymbol{r})+[(\Delta\boldsymbol{R})\cdot\boldsymbol{\nabla}f(\boldsymbol{r})]+\frac{1}{2}[(\Delta\boldsymbol{R})\otimes(H(\boldsymbol{r})\cdot(\Delta\boldsymbol{R}))]\\\ +\mathcal{O}(|\Delta\boldsymbol{R}|^{3}),\end{split}$ (27) where $\Delta\boldsymbol{R}=\boldsymbol{r}^{\prime}-\boldsymbol{r}$ and $H(\boldsymbol{r})$ is the Hessian matrix. 
Putting this into equation 1 gives, $\begin{split}\overline{f}_{i}(\boldsymbol{r})=f_{i}(\boldsymbol{r})+\frac{1}{2}H(\boldsymbol{r})\int_{\mathrm{kern}}((\Delta\boldsymbol{R})\otimes(\Delta\boldsymbol{R}))G(|\Delta\boldsymbol{R}|,h)d\boldsymbol{r}^{\prime}\\\ +\mathcal{O}(|\Delta\boldsymbol{R}|^{4}).\end{split}$ (28) using the fact that the kernel function is normalized and the integral of an odd function over the domain is zero. The integral in equation 28 can be tabulated for a specific function $G=G(|\Delta\boldsymbol{R}|,h)$, so let us define, $\epsilon(\boldsymbol{r})\equiv\frac{1}{2}\int_{\mathrm{kern}}((\Delta\boldsymbol{R})\otimes(\Delta\boldsymbol{R}))G(|\Delta\boldsymbol{R}|,h)d\boldsymbol{r}^{\prime}.$ (29) We then have, $\overline{f}_{i}(\boldsymbol{r})\approx f_{i}(\boldsymbol{r})+\epsilon(\boldsymbol{r})\cdot H(\boldsymbol{r}).$ (30) However, $\epsilon(\boldsymbol{r})$ is isotropic, $\epsilon(\boldsymbol{r})=\epsilon\,\boldsymbol{I}$, so we can write $\overline{f}_{i}(\boldsymbol{r})\approx f_{i}(\boldsymbol{r})+\epsilon\boldsymbol{\nabla}^{2}f_{i}(\boldsymbol{r}).$ (31)

## Appendix B METAL DISTRIBUTIONS

Fig. 7 shows the normalised histograms of the filtered metal field from our driven turbulence simulations in Section 3.2. The panels are Mach numbers $\mathcal{M}=0.3$, $0.7$, and $2.1$ from top to bottom, respectively. The black curves with markers show the convergence of the filtered metal field for resolutions $64^{3}$, $128^{3}$, $256^{3}$, $512^{3}$, and $768^{3}$. The coloured lines show, from lightest to darkest: Dyn. Smag (solid salmon), Smag. (solid magenta), FIRE (dotted magenta), Dyn. Grad. (dashed purple), and Grad. (solid purple) at $64^{3}$ resolution. We obtained all of the information for this figure after approximately two mixing timescales, $t\sim 2\,\tau_{\mathrm{mix}}$, where $\tau_{\mathrm{mix}}=1/\mathcal{M}$ (see Section 3.2 for more details). In Fig. 3 we used the Gaussian fits to these distributions and found the widths of the distributions, $\sigma_{\mathrm{Z}}$. Figure 7: The normalised histograms of the filtered metal mass fraction field from the simulations in Section 3.2. The panels show Mach numbers $\mathcal{M}=0.3$, $0.7$, and $2.1$ from top to bottom, respectively after two mixing timescales. The black curves show the simulations with no sub-grid metal mixing. The coloured curves show the simulations at $64^{3}$ resolution with a sub-grid metal mixing model given by the label. All of the sub-grid metal mixing models show improvement except for FIRE which lags due to the lower calibration coefficient.
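As a quick numerical sanity check of the filtering approximation in Appendix A (our own sketch, not part of the paper's analysis pipeline): in one dimension with a Gaussian kernel of width $\sigma$, the tabulated constant is $\epsilon=\sigma^{2}/2$, and the filtered field should satisfy $\overline{f}\approx f+\epsilon\boldsymbol{\nabla}^{2}f$ up to fourth-order terms.

```python
import numpy as np

# periodic 1D grid and a smooth test field
N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
f = np.sin(3.0 * x)

# apply the Gaussian filter exactly in Fourier space:
# filtering is multiplication by exp(-k^2 sigma^2 / 2)
sigma = 0.05
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=dx)
fbar = np.fft.irfft(np.fft.rfft(f) * np.exp(-0.5 * (k * sigma) ** 2), n=N)

# second-order approximation: fbar ~ f + eps * f'' with eps = sigma^2 / 2
eps = 0.5 * sigma**2
approx = f + eps * np.gradient(np.gradient(f, dx), dx)

err = np.max(np.abs(fbar - approx))   # residual, O(sigma^4)
lead = np.max(np.abs(fbar - f))       # size of the leading-order correction
```

For a single Fourier mode the check is analytic: filtering $\sin(3x)$ multiplies it by $e^{-9\sigma^{2}/2}$, while the approximation gives $1-9\sigma^{2}/2$, so the residual is indeed fourth order in $\sigma$.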
# Elementary symmetric polynomials and martingales for Heckman-Opdam processes Margit Rösler Institut für Mathematik, Universität Paderborn, Warburger Str. 100, D-33102 Paderborn, Germany<EMAIL_ADDRESS>and Michael Voit Fakultät Mathematik, Technische Universität Dortmund, Vogelpothsweg 87, D-44221 Dortmund, Germany<EMAIL_ADDRESS> ###### Abstract. We consider the generators $L_{k}$ of Heckman-Opdam diffusion processes in the compact and non-compact case in $N$ dimensions for root systems of type $A$ and $B$, with a multiplicity function of the form $k=\kappa k_{0}$ with some fixed value $k_{0}$ and a varying constant $\kappa\in\,[0,\infty[$. Using elementary symmetric functions, we present polynomials which are simultaneous eigenfunctions of the $L_{k}$ for all $\kappa\in\,]0,\infty[$. This leads to martingales associated with the Heckman-Opdam diffusions $(X_{t,1},\ldots,X_{t,N})_{t\geq 0}$. As our results extend to the freezing case $\kappa=\infty$ with a deterministic limit after some renormalization, we find formulas for the expectations $\mathbb{E}(\prod_{j=1}^{N}(y-X_{t,j})),$ $y\in\mathbb{C}$. ###### 2010 Mathematics Subject Classification: 60J60, 33C67, 82C23, 60B20 ## 1\. Introduction In the theory of classical random matrix ensembles there exist several formulas regarding determinants as follows: Let $X$ be a random variable with values in some space of $N\times N$-matrices over $\mathbb{F}=\mathbb{R},\mathbb{C},\mathbb{H}$. Then the expectations $\mathbb{E}(\det(X-yI_{N}))$ are classical orthogonal polynomials of degree $N$ in $y\in\mathbb{C}$. Such results for the Hermite, Laguerre, and Jacobi ensembles can for example be found in [DG, FG, A], where the expectations above can be expressed via classical Hermite, Laguerre and Jacobi polynomials of degree $N$. 
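To illustrate the Hermite case concretely (a hedged Monte Carlo sketch of our own, not taken from [DG, FG, A]): if $X$ is a GUE matrix normalised so that each entry has unit variance, then $\mathbb{E}(\det(yI_{N}-X))$ is the monic (probabilists') Hermite polynomial $\mathrm{He}_{N}(y)$; for $N=2$ this is $y^{2}-1$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, batch = 2, 200_000

# GUE: real diagonal ~ N(0,1), complex off-diagonal entries of unit variance
a = rng.normal(size=(batch, N, N)) + 1j * rng.normal(size=(batch, N, N))
H = (a + a.conj().transpose(0, 2, 1)) / 2.0
idx = np.arange(N)
H[:, idx, idx] = rng.normal(size=(batch, N))

# Monte Carlo estimate of E[det(y I - X)] at a few points y
ys = np.array([-1.0, 0.0, 1.5])
est = np.array([np.linalg.det(y * np.eye(N) - H).real.mean() for y in ys])

he2 = ys**2 - 1.0  # monic Hermite polynomial He_2(y)
```

With $2\times 10^{5}$ samples the estimates agree with $\mathrm{He}_{2}(y)$ to within a few times $10^{-2}$; the analogous statement for Laguerre and Jacobi ensembles is what the results cited above generalise.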
These results were extended to the spectra of $\beta$-ensembles associated with $N$-dimensional time-homogeneous diffusion processes in [KVW, V] by using some martingales constructed from these diffusions via elementary symmetric polynomials. The generators of the diffusions were Dunkl-Bessel Laplacians and (symmetric) Heckman-Opdam Laplacians in the compact $BC$ case. In the present paper we study corresponding results for Heckman-Opdam Laplacians in the non-compact $BC$ case as well as Heckman-Opdam Laplacians of type $A$ in the compact and non-compact setting. Together with [KVW, V], the present paper covers the most important examples related to Calogero-Moser-Sutherland particle models. The basic ideas here are similar to those in [KVW, V]; however, while the approach in [KVW, V] is mainly based on Ito calculus, we now focus on an algebraic point of view. The idea is as follows. Let $(X_{t})_{t\geq 0}$ be a time-homogeneous diffusion on a suitable closed set $C\subset\mathbb{R}^{N}$ (such as a Weyl chamber or a fundamental alcove), where the paths are reflected at the boundary $\partial C$. Then the generator $L$ of the associated transition semigroup is an elliptic partial differential operator whose domain is contained in the space of functions on $\mathbb{R}^{N}$ which admit corresponding symmetries on $\partial C$. We are now interested in functions $f:[0,\infty[\times\mathbb{R}^{N}\to\mathbb{C}$ for which $(f(t,X_{t}))_{t\geq 0}$ is a martingale (w.r.t. the canonical filtration), which essentially means that $f$ is $L$-space-time-harmonic, i.e., $(\frac{\partial}{\partial t}+L)f=0$; see Section III.10 of [RW]. Examples of such harmonic functions can be given in terms of eigenfunctions of $L$. For a general background in stochastic analysis we also recommend [P].
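As a toy illustration of this space-time harmonicity condition (our own example, not from the paper): for one-dimensional Brownian motion the generator is $L=\frac{1}{2}\frac{d^{2}}{dx^{2}}$, and $f(t,x)=x^{2}-t$ satisfies $(\frac{\partial}{\partial t}+L)f=0$, so $(B_{t}^{2}-t)_{t\geq 0}$ is a martingale whose expectation stays at its initial value.

```python
import numpy as np

rng = np.random.default_rng(1)
paths, steps, T = 50_000, 50, 1.0
dt = T / steps

# simulate standard Brownian motion started at 0
increments = rng.normal(scale=np.sqrt(dt), size=(paths, steps))
B = np.cumsum(increments, axis=1)

# f(t, x) = x^2 - t is space-time harmonic for L = (1/2) d^2/dx^2
t = dt * np.arange(1, steps + 1)
means = (B**2 - t).mean(axis=0)  # E[f(t, B_t)] should stay ~ f(0, 0) = 0
```

The eigenfunction construction in the paper follows the same pattern in $N$ dimensions: an eigenfunction of $L$ with eigenvalue $c$ is turned into a space-time harmonic function by the factor $e^{-ct}$.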
In the framework of Heckman-Opdam theory (see [HS], [HO]), we fix some crystallographic root system $R$ (with associated Weyl group $W$) in $\mathbb{R}^{N}$ and choose $C$ as an associated Weyl chamber or fundamental alcove. We consider a non-negative multiplicity function $k\geq 0$ on $R$ of the form $k=\kappa\cdot k_{0}$ with some fixed multiplicity $k_{0}$ and a constant $\kappa>0$. We now study the $W$-invariant Heckman-Opdam Laplace operators $L_{\kappa}:=L_{\kappa k_{0}}$ as generators of diffusions (see [Sch1, Sch2, RR1]), where the parameter $\kappa$ is varying. We also study the renormalized generators $\widetilde{L}_{\kappa}:=\frac{1}{\kappa}L_{\kappa}$ of the renormalized diffusions $(\widetilde{X}_{t}:=X_{t/\kappa})_{t\geq 0}$. In suitable coordinates, these renormalized generators then have the form $\widetilde{L}_{\kappa}f=\frac{1}{\kappa}\Delta f+Hf$ with some second-order differential operator $\Delta$ (often a classical Laplacian) and some first-order drift operator $H$, where both operators are independent of $\kappa$. This also works for $\kappa=\infty$ where $(\widetilde{X}_{t})_{t\geq 0}$ is the deterministic solution of some ODE associated with $H$. We now use elementary symmetric polynomials to construct simultaneous eigenfunctions $f$ of $\widetilde{L}_{\kappa}$ for all $\kappa\in]0,\infty]$. In the next step we use martingales associated with these functions $f$ together with information in the deterministic case $\kappa=\infty$ in order to derive formulas for (1.1) $\mathbb{E}(\prod_{j=1}^{N}(\widetilde{X}_{t,j}-y))\quad\quad(y\in\mathbb{C}).$ For some values of $\kappa$, the operators $L_{\kappa}$ admit an interpretation as Laplace-Beltrami operators on symmetric spaces. In these cases, (1.1) leads to determinantal formulas for Brownian motions on the symmetric spaces which are closely related, for instance, to some identities in [R].
Considering $t\to\infty$ in the compact cases, this also leads to determinantal formulas for the uniform distributions on compact symmetric spaces. The paper is organized as follows: In Section 2 we first recapitulate some well-known facts on Heckman-Opdam hypergeometric functions and polynomials, and the associated Laplacians in the compact and non-compact setting. We there also prove that in the non-compact crystallographic case, the Heckman-Opdam processes admit arbitrary exponential moments for arbitrary deterministic starting conditions. In the main part of the paper, we then concentrate on specific root systems. Section 3 is devoted to the compact case of type $A$, which is related to Calogero-Moser-Sutherland particle models on the torus. Here, for instance, our results lead to determinantal formulas for Brownian motions and the uniform distribution on the unitary groups $U(N)$ and $SU(N)$. In Section 4 we study the non-compact case of type $A$. Section 5 then contains the non-compact case of type $BC$. ## 2\. Heckman-Opdam theory Here we briefly collect some facts from Heckman-Opdam theory; see [HS, HO] for a general background. Let $(\mathfrak{a},\langle\,.\,,\,.\,\rangle)$ be a Euclidean space of dimension $N$ with norm $|x|=\sqrt{\langle x,x\rangle}.$ We identify $\mathfrak{a}$ with its dual space via the given scalar product. Let $R$ be a crystallographic, possibly not reduced root system in $\mathfrak{a}$ with associated finite reflection group $W$. Thus in particular, $R$ spans $\mathfrak{a}.$ We fix a positive subsystem $R_{+}\subset R$ and a $W$-invariant multiplicity $k:R\to[0,\infty[$. 
The Cherednik operators associated with $R_{+}$ and $k$ are defined as (2.1) $D_{\xi}(k)f(x)=\partial_{\xi}f(x)+\sum_{\alpha\in R_{+}}\frac{k_{\alpha}\langle\alpha,\xi\rangle}{1-e^{-\langle\alpha,x\rangle}}(f(x)-f(\sigma_{\alpha}(x))-\langle\rho(k),\xi\rangle f(x)$ for $\xi\in\mathfrak{a},$ with the (weighted) Weyl vector $\rho(k):=\frac{1}{2}\sum_{\alpha\in R_{+}}k_{\alpha}\alpha$. The operators $D_{\xi}(k),\,\xi\in\mathfrak{a}\,$ commute, and there is a $W$-invariant tubular neighbourhood $U$ of $\mathfrak{a}$ in $\mathfrak{a}_{\mathbb{C}}=\mathfrak{a}+i\mathfrak{a}$ and a unique analytic function $(\lambda,z)\mapsto G(\lambda,k;z)$ on $\mathfrak{a}_{\mathbb{C}}\times U$, the so called Opdam-Cherednik kernel, which satisfies (2.2) $G(\lambda,k;0)=1\quad\text{ and}\quad D_{\xi}(k)G(\lambda,k;\,.\,)=\langle\lambda,\xi\rangle\,G(\lambda,k;\,.\,)\quad\text{ for all }\,\xi\in\mathfrak{a}.$ The hypergeometric function associated with $R$ is defined by (2.3) $F(\lambda,k;z)=\frac{1}{|W|}\sum_{w\in W}G(\lambda,k;w^{-1}z).$ It is $W$-invariant in $\lambda$ and $z.$ To introduce the associated Heckman- Opdam polynomials, we need the weight lattice and the set of dominant weights, $P=\\{\lambda\in\mathfrak{a}:\langle\lambda,\alpha^{\vee}\rangle\in\mathbb{Z}\>\>\forall\alpha\in R\,\\},\quad P_{+}=\\{\lambda\in P:\langle\lambda,\alpha^{\vee}\rangle\geq 0\,\,\forall\alpha\in R_{+}\,\\}\supset R_{+},$ where $\alpha^{\vee}=\frac{2\alpha}{\langle\alpha,\alpha\rangle}.$ $P_{+}$ carries the usual dominance order. Let $\mathcal{T}:=\text{span}_{\mathbb{C}}\\{e^{i\lambda},\,\lambda\in P\\}$ be the space of trigonometric polynomials associated with $R$. The orbit sums $M_{\lambda}=\sum_{\mu\in W\\!\lambda}e^{i\mu}\,,\quad\lambda\in P_{+}$ form a basis of the subspace $\mathcal{T}^{W}$ of $W$-invariant polynomials in $\mathcal{T}$. 
For $Q^{\vee}:=\text{span}_{\mathbb{Z}}\\{\alpha^{\vee},\,\alpha\in R\\}$, consider the compact torus $\,T=\mathfrak{a}/2\pi Q^{\vee}$ with the weight function (2.4) $\delta_{k}(x):=\prod_{\alpha\in R_{+}}\Bigl{|}\sin\Bigl{(}\frac{\langle\alpha,x\rangle}{2}\Bigr{)}\Bigr{|}^{2k_{\alpha}}.$ The Heckman-Opdam polynomials associated with $R_{+}$ and $k$ are given by $P_{\lambda}(k;z)=M_{\lambda}(z)+\sum_{\nu<\lambda}c_{\lambda\nu}(k)M_{\nu}(z)\quad(\lambda\in P_{+}\,,z\in\mathfrak{a}_{\mathbb{C}})$ where the coefficients $c_{\lambda\nu}(k)\in\mathbb{R}$ are uniquely determined by the condition that $P_{\lambda}(k;\,.\,)$ is orthogonal to $M_{\nu}$ in $L^{2}(T,\delta_{k})$ for all $\nu\in P_{+}$ with $\nu<\lambda$. It is known that $\\{P_{\lambda}(k,\,.\,),\lambda\in P_{+}\,\\}$ is an orthogonal basis of the space $L^{2}(T,\delta_{k})^{W}$ of all $W$-invariant functions in $L^{2}(T,\delta_{k})$. According to [HS], the normalized polynomials $R_{\lambda}(k,z):=P_{\lambda}(k;z)/P_{\lambda}(k;0)$ can be expressed in terms of the hypergeometric function as (2.5) $R_{\lambda}(k,z)=F(\lambda+\rho(k),k;iz).$ Note that our notation slightly differs from [HS, HO], where the polynomials $P_{\lambda}$ are defined as exponential polynomials on the torus $i\mathfrak{a}/2\pi iQ^{\vee}$. We next introduce the Heckman-Opdam Laplacian $\Delta_{k}:=\sum_{j=1}^{N}D_{\xi_{j}}(k)^{2}\,-\,|\rho(k)|^{2}$ with some orthonormal basis $\xi_{1},\ldots,\xi_{N}$ of $\mathfrak{a}$. The operator $\Delta_{k}$ is independent of the choice of this basis. Denote by $L_{k}$ the restriction of $\Delta_{k}$ to $W$-invariant functions.
According to [Sch2], (2.6) $L_{k}f(x)=\Delta f(x)+\sum_{\alpha\in R_{+}}k_{\alpha}\coth\Bigl{(}\frac{\langle\alpha,x\rangle}{2}\Bigr{)}\cdot\partial_{\alpha}f(x).$ We notice that by construction, for all $\lambda\in\mathfrak{a}_{\mathbb{C}}$, the hypergeometric functions $F_{\lambda}:=F(\lambda,k;\,.\,)$ are eigenfunctions of $L_{k}$ with eigenvalues $\sum_{j=1}^{N}\langle\lambda,\xi_{j}\rangle^{2}-|\rho(k)|^{2}.$ The operator $L_{k}$ is independent of the choice of $R_{+}$ and generates a Feller diffusion on the closed Weyl chamber $\overline{\mathfrak{a}_{+}}\subset\mathfrak{a}$ associated with $R_{+}$ (called a radial Heckman-Opdam process), where the paths are reflected at the boundaries, see [Sch1, Sch2]. The transition probabilities of this process, with starting point $y\in\overline{\mathfrak{a}_{+}}$, are given by $p_{t}^{W}\\!(x,y)d\mu(x)$ with the $W$-invariant heat kernel $p_{t}^{W}\\!(x,y)=\int_{i\mathfrak{a}}e^{-\frac{1}{2}(|\lambda|^{2}+|\rho|^{2})}F_{\lambda}(x)F_{\lambda}(-y)d\nu^{\prime}(\lambda),$ where $\rho=\rho(k),$ $\nu^{\prime}$ is the symmetric Plancherel measure as in [Sch2], and $d\mu(x)=d\mu_{k}(x)=\prod_{\alpha\in R_{+}}\big{|}2\sinh\langle\frac{\alpha}{2},x\rangle\big{|}^{2k_{\alpha}}\,dx.$ In the compact case we take the factor $i$ in (2.5) into account as in [RR1] and consider the operator (2.7) $\widehat{L}_{k}f(x):=\Delta f(x)+\sum_{\alpha\in R_{+}}k_{\alpha}\cot\Bigl{(}\frac{\langle\alpha,x\rangle}{2}\Bigr{)}\cdot\partial_{\alpha}f(x).$ $\widehat{L}_{k}$ generates a Feller diffusion on a compact fundamental alcove of the affine Weyl group $W_{\\!\mathit{aff}}=W\ltimes 2\pi Q^{\vee}$ in $\mathfrak{a}$, again with reflecting boundaries. The trigonometric polynomials $P_{\lambda}$ ($\lambda\in P_{+}$) are eigenfunctions of $\widehat{L}_{k}$ with eigenvalues $-\langle\lambda,\lambda+2\rho(k)\rangle\leq 0.$ They were used in [RR1] to construct the transition densities of the diffusions with generators $\widehat{L}_{k}$. 
For root systems of type $BC$, the $P_{\lambda}$ are multivariate Jacobi polynomials, and the associated Feller diffusions are multivariate Jacobi processes which were studied in [Dem]. Before turning to details for root systems of type $A$ and $BC$, we conclude this section with an integrability result which ensures the existence of exponential moments for the radial Heckman-Opdam processes in the non-compact, crystallographic case. This will be needed in Sections 4 and 5. ###### Lemma 2.1. For each $y\in\mathfrak{a}$ and $\,\beta\in\mathfrak{a},$ (2.8) $\int_{\mathfrak{a}}e^{\langle\beta,x\rangle}\,p_{t}^{W}\\!(x,y)d\mu(x)\,<\infty.$ ###### Proof. For $y=0$ this is obvious from Theorem 5.2 of [Sch2]. For general $y$, we employ the $L^{p}$-theory for the hypergeometric transform developed in [NPP]. For suitable functions on $\mathfrak{a}$ and on $i\mathfrak{a}$ respectively, the hypergeometric transform and the inverse hypergeometric transform are given by $\mathcal{H}f(\lambda)=\int_{\mathfrak{a}}f(x)F_{\lambda}(-x)d\mu(x),\quad\mathcal{I}(g)(x)=\int_{i\mathfrak{a}}g(\lambda)F_{\lambda}(x)d\nu^{\prime}(\lambda).$ We denote by $\mathbb{C}[\mathfrak{a}_{\mathbb{C}}]$ the space of polynomial functions on $\mathfrak{a}_{\mathbb{C}}$ and by $\partial(q)$ the constant coefficient differential operator associated with $q\in\mathbb{C}[\mathfrak{a}_{\mathbb{C}}].$ Moreover, for $x\in\mathfrak{a}$ we denote by $C(x):=co(W.x)$ the convex hull of the $W$-orbit of $x$ in $\mathfrak{a}.$ For an exponent $0<p\leq 2$, the $W$-invariant $L^{p}$-Schwartz space is given by $\mathcal{C}^{p}(\mathfrak{a})^{W}=\Big{\\{}f\in C^{\infty}(\mathfrak{a})^{W}:\,\sup_{x\in\mathfrak{a}}\,\frac{(1+|x|)^{n}}{F_{0}(x)^{2/p}}\,\big{|}\partial(q)f(x)\big{|}\,<\infty\,\,\forall\,n\in\mathbb{N}_{0},\,q\in\mathbb{C}[\mathfrak{a}_{\mathbb{C}}]\Big{\\}}.$ Moreover, we consider the $W$-invariant Schwartz space $\mathcal{S}(\mathfrak{a}_{\epsilon_{p}})^{W}$ with $\epsilon_{p}=\frac{2}{p}-1,$ which 
consists of all $W$-invariant continuous functions on the closed tube $\mathfrak{a}_{\epsilon_{p}}=C(\epsilon_{p}\rho)+i\mathfrak{a}\subset\mathfrak{a}_{\mathbb{C}}$ which are holomorphic in its interior and satisfy (2.9) $\sup_{\lambda\in\mathfrak{a}_{\epsilon_{p}}}\,(1+|\lambda|)^{n}\big{|}\partial(q)g(\lambda)\big{|}\,<\infty\quad\text{for all }n\in\mathbb{N}_{0},\,q\in\mathbb{C}[\mathfrak{a}_{\mathbb{C}}].$ By Theorem 5.6 of [NPP], the hypergeometric transform $\mathcal{H}$ is a topological isomorphism from $\mathcal{C}^{p}(\mathfrak{a})^{W}$ onto $\mathcal{S}(\mathfrak{a}_{\epsilon_{p}})^{W}$ with inverse $\mathcal{I}$. We claim that for fixed $y\in\mathfrak{a},$ the function $g(\lambda):=e^{\frac{1}{2}\langle\lambda,\lambda\rangle}F_{\lambda}(-y)$ belongs to $\mathcal{S}(\mathfrak{a}_{\epsilon_{p}})$ for each $p\in]0,2];$ here $\langle\,.\,,\,.\,\rangle$ denotes the bilinear extension of the given scalar product to $\mathfrak{a}_{\mathbb{C}}.$ As soon as this is proven, it will follow that $\,p_{t}^{W}(\,.\,,y)=e^{-\frac{1}{2}|\rho|^{2}}\,\mathcal{I}(g)\,$ belongs to $\mathcal{C}^{p}(\mathfrak{a})^{W}$ for each $p\in]0,2].$ In order to check that $g\in\mathcal{S}(\mathfrak{a}_{\epsilon_{p}}),$ we only have to verify the growth condition (2.9). Let $q\in\mathbb{C}[\mathfrak{a}_{\mathbb{C}}].$ Then in view of Theorem 3.4 (and Remark 3.2) of [Sch2], $|\partial_{\lambda}(q)F_{\lambda}(-y)|\,\leq C(1+|y|)^{\deg q}F_{0}(-y)\cdot e^{\max_{w\in W}\text{Re}\langle w\lambda,-y\rangle}\quad(\lambda\in\mathfrak{a}_{\mathbb{C}}).$ This shows that $\partial_{\lambda}(q)F_{\lambda}(-y)$ is bounded as a function of $\lambda$ on $\mathfrak{a}_{\epsilon_{p}}$. 
Moreover, $\partial(q)e^{\frac{1}{2}\langle\lambda,\lambda\rangle}=\widetilde{q}(\lambda)e^{\frac{1}{2}\langle\lambda,\lambda\rangle}$ with some polynomial $\widetilde{q}$, and $\big{|}e^{\frac{1}{2}\langle\lambda,\lambda\rangle}\big{|}\,\asymp\,e^{-\frac{1}{2}|\lambda|^{2}}\quad\text{on }\mathfrak{a}_{\epsilon_{p}}.$ Therefore $\partial(q)g(\lambda)$ decays exponentially as $|\lambda|\to\infty$ within $\mathfrak{a}_{\epsilon_{p}}.$ It follows that $g\in\mathcal{S}(\mathfrak{a}_{\epsilon_{p}})$ and thus $\,p_{t}^{W}\\!(\,.\,,y)\in\mathcal{C}^{p}(\mathfrak{a})^{W}.$ In particular, for each $p\in]0,2]$ there exists a constant $C_{p}>0$ such that $p_{t}^{W}\\!(x,y)\leq C_{p}F_{0}(x)^{2/p}\quad\text{ for all }x\in\mathfrak{a}.$ From [Sch2] we know that in the closed chamber $\overline{\mathfrak{a}_{+}}\,,$ $\,F_{0}(x)\asymp q_{0}(x)e^{-\langle\rho,x\rangle}\,$ with a certain positive polynomial $q_{0}$. Hence there exists a nonnegative polynomial $q_{p}$ (depending on $p$), such that $p_{t}^{W}\\!(x,y)\leq q_{p}(x)e^{-\frac{2}{p}\langle\rho,x\rangle}\quad\text{for all }x\in\overline{\mathfrak{a}_{+}}.$ Now fix $\beta\in\mathfrak{a}$ and note that $\rho$ is contained in the open chamber $\mathfrak{a}_{+}.$ Choosing $p>0$ small enough, we therefore obtain $\int_{\overline{\mathfrak{a}_{+}}}e^{\langle\beta,x\rangle}p_{t}^{W}\\!(x,y)d\mu(x)\,\leq\int_{\overline{\mathfrak{a}_{+}}}q_{p}(x)e^{\langle\beta,x\rangle-\frac{2}{p}\langle\rho,x\rangle+2\langle\rho,x\rangle}dx\,<\infty.$ This yields the assertion. ∎ ## 3\. The compact case of type $A_{N-1}$ In this section we study Heckman-Opdam processes of type $A_{N-1}$ in the compact setting. The generators of these processes are the Hamiltonians of interacting particle models of Calogero-Sutherland type with $N$ particles on the torus $\mathbb{T}:=\\{z\in\mathbb{C}:\>|z|=1\\}$; see [LV] for the background. These processes are diffusions on some fundamental domain of $W=S_{N}$ in $\mathbb{T}^{N}$. 
It will however be convenient to consider also associated diffusions on $\mathbb{R}^{N}$ with $2\pi$-periodicity such that the diffusions on $\mathbb{T}^{N}$ appear as images under $x\mapsto e^{ix}.$ To introduce the processes on $\mathbb{R}^{N}$, we consider the root system $R=A_{N-1}=\\{\pm(e_{i}-e_{j}):1\leq i<j\leq N\\}$ in $\mathbb{R}^{N}$ with positive subsystem $R_{+}=\\{e_{i}-e_{j}:i<j\\}$ and fix a multiplicity parameter $k\in\,]0,\infty[$. Let $\omega:=(1,\ldots,1)^{T}\in\mathbb{R}^{N}.$ Then $Q^{\vee}=\mathbb{Z}^{N}\cap(\mathbb{R}\omega)^{\perp},$ and a fundamental domain for the action of $W_{\\!\mathit{aff}}=W\ltimes 2\pi Q^{\vee}$ in $\mathbb{R}^{N}$ is given by $\displaystyle C_{N}$ $\displaystyle=\\{x\in\mathbb{R}^{N}:0\leq\langle\alpha,x\rangle\leq 2\pi\,\,\forall\alpha\in R_{+}\\}$ $\displaystyle=\,\\{x\in\mathbb{R}^{N}:\,x_{1}\leq x_{2}\leq\ldots\leq x_{N}\leq x_{1}+2\pi\\}.$ We consider the $W$-invariant Heckman-Opdam Laplacian (3.1) $\widehat{L}_{k}f(x)=\Delta f(x)+k\sum_{j=1}^{N}\sum_{l\neq j}\cot\Bigl{(}\frac{x_{j}-x_{l}}{2}\Bigr{)}\frac{\partial}{\partial x_{j}}f(x)$ with reflecting boundaries, i.e. with domain $D(\widehat{L}_{k})=\\{f|_{C_{N}}:f\in C^{2}(\mathbb{R}^{N})\text{ invariant under }W_{\\!\mathit{aff}}\\}.$ $\widehat{L}_{k}$ is the generator of a Feller semigroup of transition operators on $C_{N}$, cf. [RR1]. Associated Feller diffusions $(X_{t,k})_{t\geq 0}$ with continuous paths (which are reflected at the boundary of $C_{N}$) are called Heckman-Opdam processes of type $A_{N-1}$ on $C_{N}$. 
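As an aside (our sketch, not from the text), the diffusion generated by (3.1) can be simulated with a plain Euler scheme for interior starting points: the drift of the $j$-th particle is $k\sum_{l\neq j}\cot((x_{j}-x_{l})/2)$ and the diffusion coefficient is $\sqrt{2}$. The sketch ignores the boundary reflection and relies on the repulsive drift; all function names are ours.

```python
import numpy as np

def drift(x, k):
    """Drift of the generator (3.1): k * sum_{l != j} cot((x_j - x_l)/2)."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.pi)   # dummy value: cot(pi/2) = 0, so l = j drops out
    return k * (1.0 / np.tan(diff / 2.0)).sum(axis=1)

def simulate(x0, k, dt=1e-4, steps=2000, seed=0):
    """Euler scheme for dX = drift(X) dt + sqrt(2) dB; no boundary handling."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        x = x + drift(x, k) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(x.size)
        path.append(x.copy())
    return np.array(path)

x_eq = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])  # equidistant start in C_3
path = simulate(x_eq, k=50.0)
```

For the equidistant configuration the drift vanishes up to rounding (anticipating Lemma 3.1 below), and for large $k$ the gaps stay close to $2\pi/N$.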
We also consider the renormalized generators $\widetilde{L}_{k}:=\frac{1}{k}\widehat{L}_{k}=\frac{1}{k}\Delta+\sum_{j=1}^{N}\sum_{l\neq j}\cot\Bigl{(}\frac{x_{j}-x_{l}}{2}\Bigr{)}\frac{\partial}{\partial x_{j}}$ which degenerate for $k\to\infty$ into $\widetilde{L}_{\infty}=\sum_{j=1}^{N}\sum_{l\neq j}\cot\Bigl{(}\frac{x_{j}-x_{l}}{2}\Bigr{)}\frac{\partial}{\partial x_{j}}.$ For $k\in]0,\infty[$, the process $\widetilde{X}_{k}:=(\widetilde{X}_{t,k})_{t\geq 0}$ with $\widetilde{X}_{t,k}:=X_{t/k,k}$ is a Feller diffusion associated with $\widetilde{L}_{k}$. It can also be described as the solution of the SDE (3.2) $d\widetilde{X}_{t,k,j}=\frac{\sqrt{2}}{\sqrt{k}}dB_{t,j}+\sum_{l\neq j}\cot\Bigl{(}\frac{\widetilde{X}_{t,k,j}-\widetilde{X}_{t,k,l}}{2}\Bigr{)}dt\quad\quad(j=1,\ldots,N)$ with some $N$-dimensional Brownian motion $(B_{t,1},\ldots,B_{t,N})_{t\geq 0}$. Moreover, for deterministic starting conditions, $\widetilde{L}_{\infty}$ is the generator of a deterministic process whose paths are the solution of some initial value problem for the ODE (3.3) $\frac{dx_{j}}{dt}(t)=\sum_{l\neq j}\cot\Bigl{(}\frac{x_{j}(t)-x_{l}(t)}{2}\Bigr{)}\quad\quad(j=1,\ldots,N).$ As for Dunkl processes in [AV], one can show that for initial data in the interior of $C_{N}$, the solution $(\widetilde{X}_{t,\infty})_{t\geq 0}$ of this ODE exists for all $t\geq 0$ in the interior of $C_{N}$. We also point out that in the Dunkl setting, the ODE analogous to (3.3) has unique solutions for all $t\geq 0$ even for starting points at the boundary of the chamber, see [VW]. We expect that such a result is also true in the present setting. We need the following stationary solutions of (3.3): ###### Lemma 3.1. For each $x_{1}\in\mathbb{R}$, $\Bigl{(}x_{1},x_{1}+\frac{1}{N}2\pi,x_{1}+\frac{2}{N}2\pi,\ldots,x_{1}+\frac{N-1}{N}2\pi\Bigr{)}\in C_{N}$ is a stationary solution of (3.3). ###### Proof. Assume first that $N$ is odd. 
As the cotangent is odd and $\pi$-periodic, we have for $j=1,\ldots,N$ that $\sum_{l\neq j}\cot\Bigl{(}\frac{(j-l)\pi}{N}\Bigr{)}=\sum_{l=1}^{N-1}\cot\Bigl{(}\frac{l\pi}{N}\Bigr{)}=\sum_{l=1}^{(N-1)/2}\Bigl{(}\cot\Bigl{(}\frac{l\pi}{N}\Bigr{)}+\cot\Bigl{(}\frac{(N-l)\pi}{N}\Bigr{)}\Bigr{)}=0.$ If $N$ is even, then our computation leads to the additional term $\cot((N\pi/2)/N)=0$ in the last sum and thus to the same result. This yields the claim. ∎ The generator $\widetilde{L}_{k}$ and the associated diffusion $\widetilde{X}_{k}$ on $\mathbb{R}^{N}$ can be decomposed into two independent parts, namely the center of gravity and the process of the distances of neighboring particles. This reflects the fact that the usual representation of the symmetric group $S_{N}$ on $\mathbb{R}^{N}$ decomposes into two irreducible components. More precisely, consider the center-of-gravity process $\widetilde{X}_{k}^{\mathit{cg}}:=(\widetilde{X}_{t,k}^{\mathit{cg}})_{t\geq 0}$ with $\widetilde{X}_{t,k}^{cg}:=\frac{1}{N}(\widetilde{X}_{t,k,1}+\ldots+\widetilde{X}_{t,k,N})\cdot\omega$ which is the orthogonal projection of $\widetilde{X}_{t,k}$ onto $\mathbb{R}\omega$. Then the diffusion $\widetilde{X}_{k}^{\mathit{diff}}:=\widetilde{X}_{k}-\widetilde{X}_{k}^{\mathit{cg}}$ lives on the orthogonal complement $(\mathbb{R}\omega)^{\perp}\subset\mathbb{R}^{N}$. ###### Lemma 3.2. Let $k\in\,]0,\infty[$. If $\widetilde{X}_{k}$ starts in some deterministic point, then the processes $\widetilde{X}_{k}^{\mathit{diff}}$ and $\widetilde{X}_{k}^{\mathit{cg}}$ are stochastically independent. ###### Proof. 
By the SDE (3.2) we have (3.4) $d\widetilde{X}_{t,k,j}^{\mathit{cg}}=\frac{\sqrt{2}}{N\sqrt{k}}d\Bigl{(}\sum_{l=1}^{N}B_{t,l}\Bigr{)}=:\frac{\sqrt{2}}{\sqrt{Nk}}\,dB_{t}$ with some one-dimensional Brownian motion $(B_{t})_{t\geq 0}$ while $\widetilde{X}_{k}^{\mathit{diff}}$ satisfies $d\widetilde{X}_{t,k,j}^{\mathit{diff}}=\frac{\sqrt{2}}{\sqrt{k}}\,d\Bigl{(}B_{t,j}-\frac{1}{N}\sum_{l=1}^{N}B_{t,l}\Bigr{)}+F_{j}(\widetilde{X}_{t,k}^{\mathit{diff}})\,dt\quad(j=1,\ldots,N)$ where the processes $(B_{t,j}-\frac{1}{N}\sum_{l=1}^{N}B_{t,l})_{t\geq 0}\,$ are stochastically independent of $(B_{t})_{t\geq 0}$, and the $F_{j}$ are concrete continuous functions. This implies the claim by the very definition of solutions of SDEs. ∎ We next transfer all data from $\mathbb{R}^{N}$ to the torus $\mathbb{T}^{N}$ via $z_{j}:=e^{ix_{j}}$ for $j=1,\ldots,N$, or for short, $z:=e^{ix}\in\mathbb{T}^{N}$. A short computation shows that in $z$-coordinates, the operator $\widehat{L}_{k}$ is given by the diffusion operator (3.5) $\displaystyle H_{k}=$ $\displaystyle-\sum_{j=1}^{N}\Bigl{(}z_{j}\frac{\partial}{\partial z_{j}}\Bigr{)}^{2}-k\sum_{j=1}^{N}\sum_{l\neq j}\frac{z_{j}+z_{l}}{z_{j}-z_{l}}\cdot z_{j}\frac{\partial}{\partial z_{j}}$ $\displaystyle=$ $\displaystyle-\sum_{j=1}^{N}z_{j}^{2}\frac{\partial^{2}}{\partial z_{j}^{2}}-(1-k(N-1))\sum_{j=1}^{N}z_{j}\frac{\partial}{\partial z_{j}}-2k\sum_{j=1}^{N}\sum_{l\neq j}\frac{z_{j}^{2}}{z_{j}-z_{l}}\frac{\partial}{\partial z_{j}}\,,$ acting on permutation invariant functions from $C^{2}(\mathbb{T}^{N})$. The operator $H_{k}$ appears in a prominent way in the particle models of Calogero-Sutherland type on $\mathbb{T}$; see Section 2 of [LV]. It is obtained from the Calogero-Sutherland Hamiltonian by conjugation with the ground state. 
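The "short computation" behind (3.5) uses $(z_{j}\partial_{j})^{2}=z_{j}^{2}\partial_{j}^{2}+z_{j}\partial_{j}$ and $\frac{z_{j}+z_{l}}{z_{j}-z_{l}}=\frac{2z_{j}}{z_{j}-z_{l}}-1$. The following sympy snippet (ours, not from the text) confirms that the two lines of (3.5) define the same operator for $N=3$:

```python
import sympy as sp

k = sp.Symbol('k', positive=True)
z = sp.symbols('z1 z2 z3')
N = len(z)
f = z[0]**3 * z[1] + z[2]**2 + z[0]*z[1]*z[2]   # arbitrary test polynomial

def form1(f):
    """First line of (3.5): -(z_j d_j)^2 - k (z_j+z_l)/(z_j-z_l) z_j d_j."""
    out = 0
    for j in range(N):
        out -= z[j] * sp.diff(z[j] * sp.diff(f, z[j]), z[j])
        for l in range(N):
            if l != j:
                out -= k * (z[j] + z[l]) / (z[j] - z[l]) * z[j] * sp.diff(f, z[j])
    return out

def form2(f):
    """Second line of (3.5)."""
    out = -sum(z[j]**2 * sp.diff(f, z[j], 2) for j in range(N))
    out -= (1 - k*(N - 1)) * sum(z[j] * sp.diff(f, z[j]) for j in range(N))
    for j in range(N):
        for l in range(N):
            if l != j:
                out -= 2*k * z[j]**2 / (z[j] - z[l]) * sp.diff(f, z[j])
    return out

assert sp.simplify(form1(f) - form2(f)) == 0
```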
The operator $H_{k}$ is the generator of the Feller diffusions $Z_{k}:=(Z_{t,k}:=e^{iX_{t,k}})_{t\geq 0}$ on the alcove $\mathbb{A}_{N}:=\\{e^{ix}:\>x\in C_{N}\\}\subset\mathbb{T}^{N}.$ Further, the operators $\widetilde{H}_{k}:=\frac{1}{k}H_{k}$ generate the diffusions $\widetilde{Z}_{k}:=(\widetilde{Z}_{t,k}:=e^{i\widetilde{X}_{t,k}})_{t\geq 0}$ for $k\in]0,\infty[$. Clearly, this also works for $k=\infty$ where $\widetilde{Z}_{\infty}$ is deterministic for deterministic initial data. ###### Remark 3.3. Besides the generators in (3.5), the operators (3.6) $D_{k}:=\sum_{j=1}^{N}z_{j}^{2}\frac{\partial^{2}}{\partial z_{j}^{2}}+2k\sum_{j=1}^{N}\sum_{l\neq j}\frac{z_{j}^{2}}{z_{j}-z_{l}}\frac{\partial}{\partial z_{j}}$ also appear in the literature; see e.g. [F, St, OO]. Here $-H_{k}=D_{k}+E_{k}\,$ with the Euler operator (3.7) $E_{k}:=(1-k(N-1))\sum_{j=1}^{N}z_{j}\frac{\partial}{\partial z_{j}}$ which commutes with $D_{k}$. The $-D_{k}$ are generators of diffusions with additional drift on $\mathbb{T}$ which rotates the complete system at some fixed speed. Clearly, the subsequent results can be easily translated to $-D_{k}$. From a stochastic point of view, the diffusions associated with the operators $H_{k}$ seem to be the most natural ones. We recall that the (symmetric) eigenfunctions of $H_{k}$ are Jack polynomials. To be precise, we introduce the following notation. We write $\Lambda_{N}^{+}=\\{\lambda\in\mathbb{Z}_{+}^{N}:\lambda_{1}\geq\cdots\geq\lambda_{N}\\}$ for the set of partitions of length at most $N$. Denote by $C_{\lambda}^{\alpha}\,,\,\lambda\in\Lambda_{N}^{+},$ the Jack polynomials of index $\alpha>0$ in $N$ variables with the normalization (3.8) $(z_{1}+\cdots+z_{N})^{m}=\sum_{|\lambda|=m}C_{\lambda}^{\alpha}(z)\quad(m\in\mathbb{Z}_{+});$ see [St, BF]. The $C_{\lambda}^{\alpha}$ are symmetric and homogeneous of degree $|\lambda|:=\lambda_{1}+\cdots+\lambda_{N}$. 
Moreover, by [St, BF], the $C_{\lambda}^{\alpha}$ with index $\alpha=1/k$ are eigenfunctions of $D_{k}$ with eigenvalues $d_{\lambda}(k)=\sum_{j=1}^{N}\lambda_{j}(\lambda_{j}-1+2k(N-j)).$ In addition, as $C_{\lambda}^{\alpha}$ is homogeneous of degree $|\lambda|$, $E_{k}C_{\lambda}^{\alpha}=(1-k(N-1))|\lambda|\,C_{\lambda}^{\alpha}.$ In summary we obtain: ###### Lemma 3.4. For $\lambda\in\Lambda_{N}^{+}$, $k\in]0,\infty[$ and $\alpha=1/k$, $C_{\lambda}^{\alpha}$ is an eigenfunction of $\widetilde{H}_{k}=\frac{1}{k}H_{k}$ with eigenvalue $-\sum_{j=1}^{N}\lambda_{j}\bigl{(}\frac{\lambda_{j}}{k}+N+1-2j\bigr{)}\,\leq 0.$ It is well-known (cf. [HO]) that the polynomials $C_{\lambda}^{1/k},\,\lambda\in\Lambda_{N}^{+}\,$ form a complete orthogonal system of $L^{2}(\mathbb{T}^{N},\mu_{k})^{W}$ with the probability measure (3.9) $d\mu_{k}(z)=\phi_{k}(z)dz,\,\,\,\phi_{k}(z):=\,c_{k}\cdot\\!\prod_{j,l:l\neq j}|z_{j}-z_{l}|^{k}\cdot{\bf 1}_{\mathbb{A}_{N}}(z),$ where $c_{k}>0$ is a normalization constant and $dz$ denotes the Haar measure on $\mathbb{T}^{N}$. Notice that these measures appear also in the context of circular $\beta$-ensembles in random matrix theory. We also notice that the elementary symmetric polynomials $e_{l}$ ($l=0,\ldots,N$) in $N$ variables, which are determined by $\prod_{j=1}^{N}(y-x_{j})\,=\sum_{l=0}^{N}(-1)^{l}e_{l}(x)\,y^{N-l}\quad(y\in\mathbb{C}),$ are Jack polynomials for all $k>0$ up to normalization. More precisely, by (3.8), (3.10) $e_{l}(x)=\frac{1}{l!}C_{\lambda}^{\alpha}(x)\>\>\text{with}\>\>\lambda=1^{l}=(\underbrace{1,\ldots,1}_{l\text{ times}},0,\ldots,0)\quad(l=0,\ldots,N).$ Thus in view of Lemma 3.4, the polynomials $e_{l}$ are eigenfunctions of $\widetilde{H}_{k}$ with eigenvalues $-l\bigl{(}\frac{1}{k}+N-l\bigr{)}$. 
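Since the $e_{l}$ are Jack polynomials up to normalization, the eigenvalue $-l(1/k+N-l)$ can also be verified symbolically for small $N$; a sketch (ours) applying $\widetilde{H}_{k}=\frac{1}{k}H_{k}$ in the first form of (3.5) to $e_{1},e_{2},e_{3}$ for $N=3$:

```python
import sympy as sp

k = sp.Symbol('k', positive=True)
z = sp.symbols('z1 z2 z3')
N = len(z)

def H_tilde(f):
    """(1/k) H_k, using the first line of (3.5)."""
    out = 0
    for j in range(N):
        out -= z[j] * sp.diff(z[j] * sp.diff(f, z[j]), z[j]) / k   # -(1/k)(z_j d_j)^2
        for l in range(N):
            if l != j:
                out -= (z[j] + z[l]) / (z[j] - z[l]) * z[j] * sp.diff(f, z[j])
    return out

# elementary symmetric polynomials in three variables
e = {1: z[0] + z[1] + z[2],
     2: z[0]*z[1] + z[0]*z[2] + z[1]*z[2],
     3: z[0]*z[1]*z[2]}

for l, el in e.items():
    eigenvalue = -l * (1/k + N - l)
    assert sp.simplify(H_tilde(el) - eigenvalue * el) == 0
```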
Moreover, by a continuity argument or by direct computation, the $e_{l}$ are also eigenfunctions of $\widetilde{H}_{\infty}=(N-1)\sum_{j=1}^{N}z_{j}\frac{\partial}{\partial z_{j}}-2\sum_{j=1}^{N}\sum_{l\neq j}\frac{z_{j}^{2}}{z_{j}-z_{l}}\frac{\partial}{\partial z_{j}}$ with eigenvalues $-l(N-l).$ We now use these properties to construct martingales from our processes $\widetilde{Z}_{k}$ for $k\in]0,\infty[$. We recall that by Dynkin’s formula (see e.g. Section III.10 of [RW]), the following holds for the generator $L$ of a Feller semigroup and an arbitrary associated Feller process $(X_{t})_{t\geq 0}$: if $f$ is a bounded eigenfunction of $L$ with eigenvalue $r\in\mathbb{R}$, then the process $(e^{-rt}f(X_{t}))_{t\geq 0}$ is a martingale w.r.t. the canonical filtration. We thus have: ###### Corollary 3.5. For $k\in]0,\infty[$ and $z\in\mathbb{A}_{N}$ consider the diffusion $(\widetilde{Z}_{t,k})_{t\geq 0}$ on $\mathbb{A}_{N}$ with start in $z$. Then, for $l=0,1,\ldots,N$, the process $(e^{l(1/k+N-l)t}e_{l}(\widetilde{Z}_{t,k}))_{t\geq 0}$ is a martingale. In particular, for $t\geq 0$, $\mathbb{E}(e_{l}(\widetilde{Z}_{t,k}))=e^{-l(1/k+N-l)t}\,e_{l}(z).$ This statement also holds in the deterministic case $k=\infty$, where we also obtain additional information for $t\to\infty$: ###### Corollary 3.6. For each starting point $z$ in the interior of $\mathbb{A}_{N}$, the deterministic process $(\widetilde{Z}_{t,\infty})_{t\geq 0}$ satisfies (3.11) $e_{l}(\widetilde{Z}_{t,\infty})=e^{-l(N-l)t}e_{l}(z)\quad\quad(l=0,1,\ldots,N).$ Moreover, the limit $Z:=\lim_{t\to\infty}\widetilde{Z}_{t,\infty}\in\mathbb{T}^{N}$ exists and is given by (3.12) $Z=(Z_{1},Z_{1}e^{2\pi i/N},\,\ldots,Z_{1}e^{2\pi i(N-1)/N})$ where $Z_{1}\in\mathbb{T}$ is as follows: If $z=(e^{ix_{1}},\ldots,e^{ix_{N}})$ with $(x_{1},\ldots,x_{N})\in C_{N}$, then (3.13) $Z_{1}=e^{ix_{0}}\quad\text{with}\quad x_{0}=\frac{x_{1}+\ldots+x_{N}-\pi(N-1)}{N}.$ ###### Proof. Eq. (3.11) is obvious. 
As each $\zeta\in\mathbb{A}_{N}$ is uniquely determined by the elementary symmetric functions $e_{l}(\zeta)$, (3.11) and a continuity argument imply that $Z=(Z_{1},\ldots,Z_{N}):=\lim_{t\to\infty}\widetilde{Z}_{t,\infty}\in\mathbb{A}_{N}$ exists, and that $\prod_{j=1}^{N}(y-Z_{j})=y^{N}+(-1)^{N}e_{N}(Z)=y^{N}+(-1)^{N}z_{1}z_{2}\cdots z_{N}\,.$ This shows that $Z$ has the form as stated in (3.12) for some $Z_{1}\in\mathbb{T}$ with $Z_{1}^{N}=(-1)^{N-1}z_{1}\cdots z_{N}\,.$ To identify $Z_{1}$, we write the initial condition as $z=(e^{ix_{1}},\ldots,e^{ix_{N}})$ with $(x_{1},\ldots,x_{N})\in C_{N}$. In the $x$-coordinates, our process $(\widetilde{X}_{t,\infty}=(x_{t,1},\ldots,x_{t,N}))_{t\geq 0}$ satisfies the ODE (3.3). This ODE yields that $x_{t,1}+\ldots+x_{t,N}$ is independent of $t\in[0,\infty]$. Therefore, the form of $Z=(e^{ix_{\infty,1}},\ldots,e^{ix_{\infty,N}})$ in (3.12) yields $x_{1}+\ldots+x_{N}=x_{\infty,1}+\ldots+x_{\infty,N}=\sum_{j=0}^{N-1}(x_{\infty,1}+j\cdot 2\pi/N)=Nx_{\infty,1}+(N-1)\pi.$ This implies (3.13). ∎ In the next step we use the decomposition of the processes $\widetilde{X}_{k}=\widetilde{X}_{k}^{\mathit{diff}}+\widetilde{X}_{k}^{\mathit{cg}}.$ Eq. (3.4) and the formula $\mathbb{E}(e^{zB_{t}})=e^{z^{2}t/2}$ for a standard Brownian motion $(B_{t})_{t\geq 0}$ imply (3.14) $\mathbb{E}(e^{-il\widetilde{X}_{t,k,j}^{\mathit{cg}}})=\mathbb{E}\left(e^{-\frac{il\sqrt{2}}{\sqrt{Nk}}B_{t}-il\widetilde{X}_{0,k,j}^{\mathit{cg}}}\right)=e^{-\frac{tl^{2}}{Nk}-il\widetilde{X}_{0,k,j}^{\mathit{cg}}}\quad(j,l=1,\ldots,N).$ This yields: ###### Corollary 3.7. Let $k\in\,]0,\infty[$ and $x\in C_{N}$ with $x_{1}+\ldots+x_{N}=0.$ Let $z=e^{ix}\in\mathbb{A}_{N}$, and consider the diffusion $(\widetilde{Z}_{t,k})_{t\geq 0}$ on $\mathbb{A}_{N}$ with start in $z$. Then, for $t\geq 0$ and $l=1,\ldots,N$, $\mathbb{E}(e_{l}(e^{i\widetilde{X}_{t,k}^{\mathit{diff}}}))=e^{-l(N-l+(N+l)/(Nk))t}e_{l}(z).$ ###### Proof. By our initial conditions and Eq. 
(3.4) we have $\widetilde{X}_{t,k,j}^{\mathit{cg}}=\widetilde{X}_{t,k,1}^{\mathit{cg}}$ for all $t,k$ and $j=1,\ldots,N$. Hence, the stochastic independence of $\widetilde{X}_{k}^{\mathit{diff}},\widetilde{X}_{k}^{\mathit{cg}}$, Corollary 3.5, (3.14), and the initial condition imply $\displaystyle\mathbb{E}(e_{l}(e^{i\widetilde{X}_{t,k}^{\mathit{diff}}}))$ $\displaystyle=\mathbb{E}(e_{l}(e^{i\widetilde{X}_{t,k}-i\widetilde{X}_{t,k}^{\mathit{cg}}}))=\mathbb{E}(e_{l}(\widetilde{Z}_{t,k})\cdot e^{-il\widetilde{X}_{t,k,1}^{\mathit{cg}}})$ $\displaystyle=\mathbb{E}(e_{l}(\widetilde{Z}_{t,k}))\cdot\mathbb{E}(e^{-il\widetilde{X}_{t,k,1}^{\mathit{cg}}})=e^{-l(1/k+N-l)t}\,e_{l}(z)\cdot e^{-\frac{tl^{2}}{Nk}}$ as claimed. ∎ ###### Example 3.8. For the starting configuration $x=(0,2\pi/N,\ldots,(N-1)2\pi/N)$ and $z=e^{ix}$, we have $e_{1}(z)=\ldots=e_{N-1}(z)=0$ and $e_{N}(z)=(-1)^{N-1}$. Hence, by Corollary 3.7, $\mathbb{E}\bigl{(}e_{l}(e^{i\widetilde{X}_{t,k}^{\mathit{diff}}})\bigr{)}=0\quad(l=1,\ldots,N-1),\quad\text{and}\quad\mathbb{E}\bigl{(}e_{N}(e^{i\widetilde{X}_{t,k}^{\mathit{diff}}})\bigr{)}=(-1)^{N-1}e^{-2Nt/k}$ for all $t\geq 0$. As $e_{0}=1$, we conclude that in this case for all $y\in\mathbb{C}$ and $t\geq 0$, (3.15) $\mathbb{E}\bigl{(}\prod_{j=1}^{N}(y-e^{i\widetilde{X}_{t,k,j}^{\mathit{diff}}})\bigr{)}=\mathbb{E}\bigl{(}\sum_{l=0}^{N}y^{N-l}(-1)^{l}e_{l}(e^{i\widetilde{X}_{t,k}^{\mathit{diff}}})\bigr{)}=y^{N}-e^{-2Nt/k}.$ We point out that this result differs from the case $k=\infty$ where $x$ is a stationary solution of the associated ODE by Lemma 3.1, and thus $\mathbb{E}\bigl{(}\prod_{j=1}^{N}(y-e^{i\widetilde{X}_{t,\infty,j}^{\mathit{diff}}})\bigr{)}=y^{N}-1$ is independent of $t$. We next study the limit $t\to\infty$ for $k\in]0,\infty[$ similar to the limit results for $k=\infty$ in Corollary 3.6. 
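The deterministic relation (3.11) is also easy to test numerically (our sketch, assuming numpy and scipy are available): integrate the ODE (3.3) from an interior point of $C_{3}$ and compare $e_{l}(e^{ix(t)})$ with $e^{-l(N-l)t}e_{l}(e^{ix(0)})$.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 3

def rhs(t, x):
    """ODE (3.3): dx_j/dt = sum_{l != j} cot((x_j - x_l)/2)."""
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.pi)      # cot(pi/2) = 0, so the l = j term drops out
    return (1.0 / np.tan(diff / 2.0)).sum(axis=1)

def e_values(x):
    """e_1, ..., e_N at z = exp(ix), read off from np.poly: c[l] = (-1)^l e_l."""
    c = np.poly(np.exp(1j * x))
    return np.array([(-1) ** l * c[l] for l in range(1, N + 1)])

x0 = np.array([0.3, 1.1, 3.0])          # interior point of C_3
t1 = 1.0
sol = solve_ivp(rhs, (0.0, t1), x0, rtol=1e-10, atol=1e-12)
xT = sol.y[:, -1]

# Eq. (3.11): e_l(exp(i x(t))) = exp(-l(N-l)t) * e_l(exp(i x(0)))
for l in range(1, N + 1):
    predicted = np.exp(-l * (N - l) * t1) * e_values(x0)[l - 1]
    assert abs(e_values(xT)[l - 1] - predicted) < 1e-6
```

Note that $e_{N}$ is conserved along the flow (the exponent vanishes for $l=N$), in line with the proof of Corollary 3.6.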
We need the following well-known result, which follows easily for instance from the explicit formulas for the densities of our diffusions in [RR1]: ###### Lemma 3.9. Let $k\in\,]0,\infty[$. Then for each starting point in $\mathbb{A}_{N}$, the process $(\widetilde{Z}_{t,k})_{t\geq 0}$ converges in distribution for $t\to\infty$ to the probability measure $\mu_{k}$ on $\mathbb{A}_{N}$ from (3.9). This observation and Corollary 3.5 for $t\to\infty$ imply: ###### Corollary 3.10. Let $Z=(Z_{1},\ldots,Z_{N})$ be an $\mathbb{A}_{N}$-valued random variable with the distribution $\mu_{k}$ of a circular $\beta$-ensemble. Then for each $y\in\mathbb{C}$, $\mathbb{E}\bigl{(}\prod_{j=1}^{N}(y-Z_{j})\bigr{)}=y^{N}.$ ###### Proof. By Lemma 3.9 and Corollary 3.5, $\mathbb{E}(e_{l}(Z))=\lim_{t\to\infty}\mathbb{E}(e_{l}(\widetilde{Z}_{t,k}))=0$ for $l=1,\ldots,N$. Hence $\mathbb{E}\bigl{(}\prod_{j=1}^{N}(y-Z_{j})\bigr{)}=\mathbb{E}\bigl{(}\sum_{l=0}^{N}y^{N-l}(-1)^{l}e_{l}(Z)\bigr{)}=y^{N}.$ ∎ A corresponding result can also be stated under the condition that $Z$ only takes values in the alcove $\mathbb{A}_{N}^{1}:=\\{z\in\mathbb{A}_{N}:z_{1}\cdots z_{N}=1\\}.$ For this consider the process $\widetilde{X}_{k}^{\mathit{diff}}$ in the decomposition $\widetilde{X}_{k}=\widetilde{X}_{k}^{\mathit{diff}}+\widetilde{X}_{k}^{\mathit{cg}}$ above. Then $e^{i\widetilde{X}_{k}^{\mathit{diff}}}$ is a diffusion on $\mathbb{A}_{N}^{1}$ which converges for $t\to\infty$ in distribution to the conditional probability measure $\mu_{k}^{1}\in M^{1}(\mathbb{A}_{N}^{1})$ of $\mu_{k}$ under the condition $\mathbb{A}_{N}^{1}$. (This is, up to normalization, the measure with the density (2.4)). Eq. (3.15) now leads to: ###### Corollary 3.11. Let $Z$ be an $\mathbb{A}_{N}^{1}$-valued random variable with distribution $\mu_{k}^{1}$. Then for each $y\in\mathbb{C}$, $\mathbb{E}\bigl{(}\prod_{j=1}^{N}(y-Z_{j})\bigr{)}=y^{N}.$ ###### Remark 3.12. 
It can be shown and is well known that the measures $\mu_{k}$ tend for $k\to\infty$ weakly to the probability measure $\mu_{\infty}$ which appears as image of the uniform distribution on $\mathbb{T}$ under the mapping $\mathbb{T}\to\mathbb{A}_{N},\quad z\to(z,z\cdot e^{2\pi i/N},\ldots,z\cdot e^{2\pi i(N-1)/N}).$ Corollary 3.10 and continuity show that a random variable $Z=(Z_{1},\ldots,Z_{N})$ on $\mathbb{A}_{N}$ with distribution $\mu_{\infty}$ also satisfies $\mathbb{E}\bigl{(}\prod_{j=1}^{N}(y-Z_{j})\bigr{)}=y^{N}$. A corresponding result holds also in the situation of Corollary 3.11. Notice that this differs from the situation at the end of Example 3.8 for $k=\infty$ where we have a purely deterministic situation and also a slightly different result. Corollaries 3.5, 3.7, 3.10, and 3.11 have applications to Brownian motions and uniform probabilities on compact symmetric spaces of type $A.$ For a first example, consider the space $C(U(N))$ of all conjugacy classes of $U(N)$, which can be identified with $\mathbb{A}_{N}$ up to the cyclic group $\mathbb{Z}_{N}$, i.e., $C(U(N))\sim\mathbb{A}_{N}/\mathbb{Z}_{N}$. In fact, the conjugacy classes are characterized via the ordered spectra of matrices from $U(N)$ where, say, the eigenvalue with the smallest nonnegative argument has the first position. On the other hand, elements in $\mathbb{A}_{N}$ describe configurations of ordered points on $\mathbb{T}$ where the position of the first entry is arbitrary. It is also well known that the pushforward of the uniform distribution (i.e., the normalized Haar measure) of $U(N)$ under the natural projection $U(N)\to C(U(N))\sim\mathbb{A}_{N}/\mathbb{Z}_{N}$ agrees with the pushforward of $\mu_{k}$ with $k=1$ under the canonical mapping $\mathbb{A}_{N}\to\mathbb{A}_{N}/\mathbb{Z}_{N}$. Corollary 3.10 thus leads to the following. ###### Corollary 3.13. Let $Z$ be a uniformly distributed $U(N)$-valued random variable. 
Then for each $y\in\mathbb{C}$, $\,\mathbb{E}\bigl{(}\det(yI_{N}-Z)\bigr{)}=y^{N}.$ The same procedure also works for $SU(N),$ where the space $C(SU(N))$ of conjugacy classes corresponds to $\mathbb{A}_{N}^{1}/\mathbb{Z}_{N}$, and the pushforward of the uniform distribution corresponds to $\mu_{k}^{1}\in M^{1}(\mathbb{A}_{N}^{1})$ with $k=1$. Hence, by Corollary 3.11: ###### Corollary 3.14. Let $Z$ be an $SU(N)$-valued random variable which is uniformly distributed. Then for $y\in\mathbb{C}$, $\mathbb{E}\bigl{(}\det(yI_{N}-Z)\bigr{)}=y^{N}$. Corollaries 3.13 and 3.14 are special cases of well-known general formulas for integrals of polynomials on unitary groups w.r.t. uniform distributions in [CS]. On the other hand, our approach leads to generalizations of these formulas for Brownian motions on $U(N)$ and $SU(N)$. For $k=1/2$ or $k=2$, our results above are related to the compact symmetric spaces $U(N)/O(N)$ and $U(2N)/Sp(N)$ associated with the root system $A_{N-1}$. ## 4\. The non-compact case of type $A_{N-1}$ In this section we start with the $W$-invariant Heckman-Opdam Laplacians (4.1) $L_{k}f(x)=\Delta f(x)+k\sum_{j=1}^{N}\sum_{l\neq j}\coth\Bigl{(}\frac{x_{j}-x_{l}}{2}\Bigr{)}\frac{\partial}{\partial x_{j}}f(x)$ for $k\in[0,\infty[$ on the Weyl chamber $C_{N}^{A}:=\\{x\in\mathbb{R}^{N}:\>x_{1}\geq x_{2}\geq\ldots\geq x_{N}\\}$ of type $A_{N-1}$. As in Section 3, $L_{k}$ is the generator of a Feller diffusion $(X_{t,k})_{t\geq 0}$ on $C_{N}^{A}$ with reflecting boundaries. 
Again we study the renormalized generators $\widetilde{L}_{k}:=\frac{1}{k}L_{k}$ which degenerate for $k\to\infty$ into $\widetilde{L}_{\infty}=\sum_{j=1}^{N}\sum_{l\neq j}\coth\Bigl{(}\frac{x_{j}-x_{l}}{2}\Bigr{)}\frac{\partial}{\partial x_{j}}.$ The process $\widetilde{X}_{k}:=(\widetilde{X}_{t,k}:=X_{t/k,k})_{t\geq 0}$ for $k\in]0,\infty[$ then solves the SDE (4.2) $d\widetilde{X}_{t,k,j}=\frac{\sqrt{2}}{\sqrt{k}}dB_{t,j}+\sum_{l\neq j}\coth\Bigl{(}\frac{\widetilde{X}_{t,k,j}-\widetilde{X}_{t,k,l}}{2}\Bigr{)}dt\quad\quad(j=1,\ldots,N)$ which degenerates for $k=\infty$ into the ODE (4.3) $\frac{dx_{j}}{dt}(t)=\sum_{l\neq j}\coth\Bigl{(}\frac{x_{j}(t)-x_{l}(t)}{2}\Bigr{)}\quad\quad(j=1,\ldots,N).$ Again, for initial data in the interior of the chamber $C_{N}^{A}$, the solution $(\widetilde{X}_{t,\infty})_{t\geq 0}$ of these differential equations exists for all $t\geq 0$ in the interior of $C_{N}^{A}$. As in the preceding section, we next decompose the diffusions $\widetilde{X}_{k}$ into the center-of-gravity process $\widetilde{X}_{k}^{\mathit{cg}}:=(\widetilde{X}_{t,k}^{\mathit{cg}})_{t\geq 0}$ with $\widetilde{X}_{t,k}^{\mathit{cg}}:=\frac{1}{N}(\widetilde{X}_{t,k,1}+\ldots+\widetilde{X}_{t,k,N})\cdot\omega$ and the process $\widetilde{X}_{k}^{\mathit{diff}}:=\widetilde{X}_{k}-\widetilde{X}_{k}^{\mathit{cg}}\,$ on the Weyl chamber $C_{N,0}^{A}:=\\{x\in C_{N}^{A}:x_{1}+\ldots+x_{N}=0\\}$ which describes the distances of the particles. As in the proof of Lemma 3.2, the processes $\widetilde{X}_{k}^{\mathit{diff}}$ and $\widetilde{X}_{k}^{\mathit{cg}}$ are stochastically independent. In the next step we observe that the processes $\widetilde{X}$, $\widetilde{X}_{k}^{\mathit{diff}}$, $\widetilde{X}_{k}^{\mathit{cg}}$ admit arbitrary exponential moments for arbitrary deterministic starting points. In fact, for $\widetilde{X}_{k}^{\mathit{diff}}$ this follows from Lemma 2.1. 
Moreover, $\widetilde{X}_{k}^{\mathit{cg}}$ is a classical one-dimensional Brownian motion up to scaling and has therefore arbitrary exponential moments. Finally, the independence of $\widetilde{X}_{k}^{\mathit{diff}}$ and $\widetilde{X}_{k}^{\mathit{cg}}$ ensures that $\widetilde{X}_{k}=\widetilde{X}_{k}^{\mathit{diff}}+\widetilde{X}_{k}^{\mathit{cg}}\,$ has this property as well. With the existence of exponential moments in mind, we now follow Section 3 and observe that the trigonometric elementary symmetric polynomials $\widetilde{e}_{l}(x):=e_{l}(e^{x})\quad(l=0,\ldots,N,\,\,x\in C_{N}^{A})$ are eigenfunctions of $\widetilde{L}_{k}$ for all $k$ with eigenvalues $l\bigl{(}\frac{1}{k}+N-l\bigr{)}\,\geq 0$. This leads to martingales for the diffusions $(\widetilde{X}_{t,k})_{t\geq 0}$ on $C_{N}^{A},$ similar to Corollaries 3.5 and 3.6 as follows: ###### Corollary 4.1. For $k\in\,]0,\infty[$ and $x\in C_{N}^{A}$ consider the diffusion $(\widetilde{X}_{t,k})_{t\geq 0}$ on $C_{N}^{A}$ with start in $x$. Then, for $l=0,1,\ldots,N$, the process $\bigl{(}e^{-l(1/k+N-l)t}\,\widetilde{e}_{l}(\widetilde{X}_{t,k})\bigr{)}_{t\geq 0}$ is a martingale. In particular, for $t\geq 0$, $\mathbb{E}\bigl{(}\widetilde{e}_{l}(\widetilde{X}_{t,k})\bigr{)}=e^{l(1/k+N-l)t}\,\widetilde{e}_{l}(x).$ This result also holds for $k=\infty$. More precisely, the solution $(\widetilde{X}_{t,\infty})_{t\geq 0}$ of the ODE (4.3) with start $x$ in the interior of $C_{N}^{A}$ satisfies (4.4) $\widetilde{e}_{l}(\widetilde{X}_{t,\infty})=e^{l(N-l)t}\,\widetilde{e}_{l}(x)\quad\quad(l=0,1,\ldots,N).$ Using the independence of the processes $\widetilde{X}_{k}^{\mathit{diff}}$ and $\widetilde{X}_{k}^{\mathit{cg}}$, we also obtain the following analog of Corollary 3.7. ###### Corollary 4.2. For $k\in\,]0,\infty[$ and $x\in C_{N,0}^{A}$ consider the diffusion $\widetilde{X}_{k}^{\mathit{diff}}$ on the chamber $C_{N,0}^{A}$ with start in $x$. 
Then, for $t\geq 0$ and $l=1,\ldots,N$, $\mathbb{E}\bigl{(}\widetilde{e}_{l}(\widetilde{X}_{t,k}^{\mathit{diff}})\bigr{)}=e^{l(N-l+(N+l)/(Nk))t}\,\widetilde{e}_{l}(x).$ This holds also for $k=\infty$ where the expectations can be omitted. Corollary 4.2 can be restated as: ###### Corollary 4.3. For $k\in\,]0,\infty[$ and $x\in C_{N,0}^{A}$ consider the diffusion $\widetilde{X}_{k}^{\mathit{diff}}$ on $C_{N,0}^{A}$ with start in $x$. Then, for $t\geq 0$ and $y\in\mathbb{C}$, $\mathbb{E}\bigl{(}\prod_{j=1}^{N}(y-e^{\widetilde{X}_{t,k,j}^{\mathit{diff}}})\bigr{)}=P_{t,N,k,x}(y)$ with the polynomial $P_{t,N,k,x}(y):=\sum_{l=0}^{N}y^{N-l}(-1)^{l}e^{l(N-l+(N+l)/(Nk))t}\,\widetilde{e}_{l}(x).$ For general starting points $x\in C_{N,0}^{A}$, these polynomials do not seem to have particularly nice properties. For the starting configuration $x=0\in C_{N,0}^{A}$, which is of particular interest here, we get $P_{t,N,k,0}(y)=\sum_{l=0}^{N}{N\choose l}(-1)^{l}y^{N-l}e^{l(N-l+(N+l)/(Nk))t}.$ We do not have much information about these polynomials. This is in contrast to the Bessel processes of type $A$ where in a corresponding formula classical Hermite polynomials appear; see [KVW]. ###### Example 4.4. 1. (1) Let $N=2$. We try to solve the ODE (4.3) with the singular starting point $x=0\in C_{2}^{A}.$ In fact, (4.3) yields that $x_{1}(t)=-x_{2}(t)$ and that $y(t):=x_{1}(t)-x_{2}(t)$ satisfies $\frac{dy}{dt}(t)=2\coth(y(t)/2)$. On the other hand, (4.4) for $N=2$, $l=1$ suggests that $e^{x_{1}(t)}+e^{-x_{1}(t)}=2e^{t}$ and thus $x_{1}(t)={\rm arcosh}(e^{t})$. It is now easily checked that, in fact, $t\mapsto({\rm arcosh}(e^{t}),-{\rm arcosh}(e^{t}))$ is continuous on $[0,\infty)$ and solves (4.3) for $t>0.$ Moreover, for $k=\infty$ we have $x^{\mathit{diff}}\\!(t)=x(t)$, and $P_{t,2,\infty,0}(y)=y^{2}-2e^{t}\,y+1.$ These polynomials have the zeros $e^{\pm x_{1}(t)}$ as claimed. 2. (2) Let $N=3.$ We again try to solve the ODE (4.3) with $x=0\in\partial C_{3}^{A}$.
Here symmetry arguments imply that the solution of the ODE (4.3) must have the form $x(t)=(x_{1}(t),0,-x_{1}(t))$. On the other hand, formula (4.4) with $N=3$, $l=1,2$ suggests that $e^{x_{1}(t)}+e^{-x_{1}(t)}+1=3e^{2t}$ and thus $\,x_{1}(t)={\rm arcosh}\bigl{(}(3e^{2t}-1)/2\bigr{)}.\,$ This indeed gives a solution as in example (1). Moreover, the polynomial $P_{t,3,\infty,0}(y)=y^{3}-3e^{2t}y^{2}+3e^{2t}y-1$ has the zeros $1$ and $e^{\pm x_{1}(t)}$ as claimed. Unfortunately, we have no closed formulas for $x(t)$ for general dimension $N$, even for the starting point $x=0\in C_{N}^{A}$. We finally mention that Corollaries 4.2 and 4.3 for $k=1/2,1,2$ have applications to Brownian motions on the noncompact symmetric spaces of type A, i.e. on $GL(N,\mathbb{F})/U(N,\mathbb{F})$ for $\mathbb{F}=\mathbb{R},\mathbb{C},\mathbb{H}$, similar to the results at the end of Section 3. ## 5\. The non-compact case of type $BC_{N}$ We here start with the nonreduced root system $R=BC_{N}=\\{\pm e_{i},\pm 2e_{i},\pm(e_{i}\pm e_{j});\>\>1\leq i<j\leq N\\}\subset\mathbb{R}^{N}$ for $N\geq 2$ with the multiplicity $k=(k_{1},k_{2},k_{3}),$ where $k_{1},k_{2}\geq 0$, $k_{3}>0$ are the values on the roots $e_{i},2e_{i},e_{i}\pm e_{j}$. As indicated in the introduction, we now reparametrize the multiplicity $k$. This may not seem natural at first glance, but it will turn out to be useful in the end. We here mainly follow the notations in [Dem, V] and define (5.1) $\kappa=k_{3},\quad q=N-1+\frac{1+2k_{1}+2k_{2}}{2k_{3}},\quad p=N-1+\frac{1+2k_{2}}{2k_{3}}.$ Then $\kappa>0$, $q\geq p\geq N-1+1/{2\kappa}$, and (5.2) $k=(k_{1},k_{2},k_{3})=\kappa\cdot\Bigl{(}q-p,k_{0,2},1\Bigr{)}\quad\text{with}\quad k_{0,2}:=p-(N-1)-\frac{1}{2\kappa}.$ We now regard $p,q$ as fixed parameters and $\kappa>0$ as a varying parameter, and denote the multiplicity $k$ in (5.2) by $k_{\kappa}$.
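As a quick consistency check (our own sketch, not part of the original text), the reparametrization (5.1)–(5.2) can be verified symbolically with sympy, treating $k_{1},k_{2},k_{3}$ as free positive parameters:

```python
import sympy as sp

k1, k2, k3, N = sp.symbols('k1 k2 k3 N', positive=True)

# Reparametrization (5.1)
kappa = k3
q = N - 1 + (1 + 2*k1 + 2*k2) / (2*k3)
p = N - 1 + (1 + 2*k2) / (2*k3)
k02 = p - (N - 1) - 1 / (2*kappa)

# Recover the multiplicity (5.2): k = kappa * (q - p, k_{0,2}, 1)
assert sp.simplify(kappa * (q - p) - k1) == 0
assert sp.simplify(kappa * k02 - k2) == 0
assert sp.simplify(kappa * 1 - k3) == 0
print("reparametrization (5.1)-(5.2) is consistent")
```

In particular $\kappa(q-p)=k_{1}$ and $\kappa k_{0,2}=k_{2}$, so (5.2) indeed inverts (5.1).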
The associated $W$-invariant Heckman-Opdam Laplacian (2.6) is (5.3) $L_{\kappa}=\Delta+\sum_{i=1}^{N}\kappa\Bigl{(}(q-p)\coth\bigl{(}\frac{x_{i}}{2}\bigr{)}+2k_{0,2}\coth(x_{i})+\sum_{j:j\neq i}\bigl{(}\coth\bigl{(}\frac{x_{i}-x_{j}}{2}\bigr{)}+\coth\bigl{(}\frac{x_{i}+x_{j}}{2}\bigr{)}\bigr{)}\Bigr{)}\partial_{i}$ on the Weyl chamber $C_{N}^{B}:=\\{x\in\mathbb{R}^{N}:\>x_{1}\geq x_{2}\geq\ldots\geq x_{N}\geq 0\\}$ of type $B_{N}$, associated with $R_{+}=\\{e_{i},\,2e_{i},\,e_{i}\pm e_{j}:1\leq i<j\leq N\\}.$ As in the preceding sections, the operators $L_{\kappa}$ are the generators of diffusions $(X_{t,\kappa})_{t\geq 0}$ on $C_{N}^{B}$ where the paths are reflected at $\partial C_{N}^{B}$. The renormalized operators $\widetilde{L}_{\kappa}:=\frac{1}{\kappa}L_{\kappa}\,$ then are the generators of the diffusions $(\widetilde{X}_{t,\kappa}:=X_{t/\kappa,\kappa})_{t\geq 0}$ which may be regarded as solutions of the SDE (5.4) $d\widetilde{X}_{t,\kappa,j}=\frac{\sqrt{2}}{\sqrt{\kappa}}dB_{t,j}+\sum_{l\neq j}\Bigl{(}\coth\Bigl{(}\frac{\widetilde{X}_{t,\kappa,j}-\widetilde{X}_{t,\kappa,l}}{2}\Bigr{)}+\coth\Bigl{(}\frac{\widetilde{X}_{t,\kappa,j}+\widetilde{X}_{t,\kappa,l}}{2}\Bigr{)}\Bigr{)}dt+\Bigl{(}(q-p)\coth\bigl{(}\frac{\widetilde{X}_{t,\kappa,j}}{2}\bigr{)}+2k_{0,2}\coth(\widetilde{X}_{t,\kappa,j})\Bigr{)}dt\quad\quad(j=1,\ldots,N).$ For $\kappa\to\infty$, the generator degenerates into (5.5) $\widetilde{L}_{\infty}=\sum_{i=1}^{N}\Bigl{(}(q-p)\coth\bigl{(}\frac{x_{i}}{2}\bigr{)}+2(p-(N-1))\coth(x_{i})+\sum_{j:j\neq i}\Bigl{(}\coth\bigl{(}\frac{x_{i}-x_{j}}{2}\bigr{)}+\coth\bigl{(}\frac{x_{i}+x_{j}}{2}\bigr{)}\Bigr{)}\Bigr{)}\partial_{i},$ and (5.4) becomes the ODE (5.6) $\frac{dx_{j}}{dt}(t)=\sum_{l\neq j}\Bigl{(}\coth\Bigl{(}\frac{x_{j}(t)-x_{l}(t)}{2}\Bigr{)}+\coth\Bigl{(}\frac{x_{j}(t)+x_{l}(t)}{2}\Bigr{)}\Bigr{)}+(q-p)\coth\bigl{(}\frac{x_{j}(t)}{2}\bigr{)}+2\bigl{(}p-(N-1)\bigr{)}\coth x_{j}(t)\quad\quad(j=1,\ldots,N).$ Again, for initial data in the interior of the chamber $C_{N}^{B}$, the solution $(\widetilde{X}_{t,\infty})_{t\geq 0}$ of these differential equations exists for all $t\geq 0$ in the interior of $C_{N}^{B}$. We now turn to eigenfunctions of the operators $\widetilde{L}_{\kappa}$ and consider the Heckman-Opdam hypergeometric functions $F_{BC}(\lambda,k_{\kappa};x)$ associated with $BC_{N}$ in the variable $x\in\mathbb{R}^{N},$ with spectral parameter $\lambda\in\mathbb{C}^{N}$ and multiplicity $k_{\kappa}$. By Section 2, for $\kappa\in\,]0,\infty[$, the functions $x\mapsto F_{BC}(\lambda,k_{\kappa};x)$ are eigenfunctions of the renormalized Laplacian $\widetilde{L}_{\kappa}$ with the eigenvalues $r_{\lambda,\kappa}:=\frac{1}{\kappa}\Bigl{(}\sum_{j=1}^{N}\lambda_{j}^{2}-|\rho(\kappa)|^{2}\Bigr{)},$ where (5.7) $\rho(\kappa)=\frac{1}{2}\sum_{\alpha\in R_{+}}k_{\kappa}(\alpha)\alpha\quad\text{with}\quad\rho(\kappa)_{j}=\frac{\kappa}{2}\bigl{(}(q-p)+2k_{0,2}+2(N-j)\bigr{)}.$ We further consider the associated (normalized) Heckman-Opdam polynomials $R_{\lambda}(k,\,.\,)=R_{\lambda}^{BC}(k,\,.\,),$ as introduced in (2.5), which are indexed by $P_{+}=\Lambda_{N}^{+}$ (the set of partitions of length at most $N$). These are multivariate generalizations of the classical Jacobi polynomials which are well studied in the literature; see for instance [BO, L, RR2].
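The coordinate formula for $\rho(\kappa)_{j}$ in (5.7) follows by summing $k_{\kappa}(\alpha)\alpha$ over the positive roots $e_{i}$, $2e_{i}$, $e_{i}\pm e_{j}$ ($i<j$). A small sympy sketch (our own check, for a fixed $N$) confirms this:

```python
import sympy as sp

N = 4
k1, k2, k3 = sp.symbols('k1 k2 k3', positive=True)

def rho():
    # rho = (1/2) * sum over positive roots of k(alpha) * alpha, with
    # R_+ = {e_i, 2e_i, e_i - e_j, e_i + e_j : 1 <= i < j <= N}
    r = [sp.Integer(0)] * N
    for i in range(N):
        r[i] += k1 * 1 + k2 * 2          # roots e_i and 2*e_i
    for i in range(N):
        for j in range(i + 1, N):
            r[i] += k3 * 1 + k3 * 1      # e_i - e_j and e_i + e_j, coord i
            r[j] += -k3 * 1 + k3 * 1     # same two roots, coord j
    return [sp.Rational(1, 2) * c for c in r]

# Closed form: rho_j = (1/2)(k1 + 2 k2 + 2 k3 (N - j)), which for k = k_kappa
# equals (kappa/2)((q - p) + 2 k_{0,2} + 2(N - j)) as in (5.7).
closed = [sp.Rational(1, 2) * (k1 + 2*k2 + 2*k3*(N - (j + 1))) for j in range(N)]
assert all(sp.simplify(u - v) == 0 for u, v in zip(rho(), closed))
print("rho matches the closed form of (5.7) for N =", N)
```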
The polynomials $\widetilde{R}_{\lambda}$ defined by $\widetilde{R}_{\lambda}(\cos x):=R_{\lambda}(k;x)$ form an orthogonal basis of $L^{2}(\mathbb{A}_{N},w_{k})$ on $\mathbb{A}_{N}:=\\{y\in\mathbb{R}^{N}:\>-1\leq y_{1}\leq\ldots\leq y_{N}\leq 1\\}$ with the weight function (5.8) $w_{k}(y):=\prod_{i=1}^{N}(1-y_{i})^{k_{1}+k_{2}-1/2}(1+y_{i})^{k_{2}-1/2}\cdot\prod_{i<j}|y_{i}-y_{j}|^{2k_{3}}.$ Here we are mainly interested in the fact that for $k=k_{\kappa}$ and $\lambda\in\Lambda_{N}^{+}$, the exponential polynomials $H_{\lambda}(x):=\widetilde{R}_{\lambda}(\cosh x)=F_{BC}(\lambda+\rho(k),k;x)$ are eigenfunctions of $\widetilde{L}_{\kappa}$ with the eigenvalues (5.9) $r_{\lambda}=\frac{1}{\kappa}\langle\lambda,\lambda+2\rho(k)\rangle=\frac{1}{\kappa}\sum_{j=1}^{N}\lambda_{j}(\lambda_{j}+k_{1}+2k_{2}+2k_{3}(N-j))=\sum_{j=1}^{N}\lambda_{j}\Bigl{(}\frac{\lambda_{j}-1}{\kappa}+p+q+2-2j\Bigr{)}.$ We now consider the partitions $\lambda(n):=1^{n}\in\Lambda_{N}^{+}$ for $n=0,\ldots,N.$ It is known (see Section 5 of [V] and in particular Lemma 5.1 there) that the Jacobi polynomials $\widetilde{R}_{\lambda(n)}$ are of the form (5.10) $\widetilde{R}_{\lambda(n)}=\sum_{l=0}^{n}c_{n,l}(p,q)\cdot e_{l}\quad\text{with }\>c_{n,n}(p,q)\neq 0,$ where the $e_{l}$ are again the elementary symmetric polynomials in $N$ variables and the coefficients $c_{n,l}(p,q)\in\mathbb{R}$ depend on $p,q$ only and not on $\kappa$. This observation will be crucial in the following and is the reason for our parametrization of $k$ by $p,q,\kappa$ above. For more details on the $c_{n,l}(p,q)$ we refer to [V]. In summary, the functions $H_{\lambda(n)}$ with $n=0,\ldots,N$ are independent of $\kappa$ and simultaneous eigenfunctions of the operators $\widetilde{L}_{\kappa}$ for all $\kappa\in\,]0,\infty[$ with the eigenvalues $r_{n}=n(p+q-n+1).$ Clearly, this observation also holds for $\kappa=\infty$. This implies: ###### Lemma 5.1.
For each starting point $x\in C_{N}^{B}$ of the processes $(\widetilde{X}_{t,\kappa})_{t\geq 0}$ with $\kappa\in\,]0,\infty]$, the processes (5.11) $\Bigl{(}e^{-r_{n}t}\cdot H_{\lambda(n)}\bigl{(}\widetilde{X}_{t,\kappa}\bigr{)}\Bigr{)}_{t\geq 0}$ are martingales for $n=0,\ldots,N$, where the numbers $r_{n}$ and the functions $H_{\lambda(n)}$ do not depend on $\kappa$. In particular, for $x$ in the interior of $C_{N}^{B}$, the solution $(\widetilde{X}_{t,\infty})_{t\geq 0}$ of the ODE (5.6) with $\widetilde{X}_{0,\infty}=x$ satisfies $H_{\lambda(n)}(\widetilde{X}_{t,\infty})=e^{r_{n}t}H_{\lambda(n)}(x)\quad\text{for}\quad t\geq 0,\>n=0,\ldots,N.$ ###### Proof. This follows from our preceding considerations and the fact that the random variables $H_{\lambda(n)}(\widetilde{X}_{t,\kappa})$ are integrable for $\kappa<\infty$ and $t>0$ by Lemma 2.1. ∎ We may invert (5.10) and write the elementary symmetric polynomials $e_{l}$ as linear combinations of the $\widetilde{R}_{\lambda(n)}$ for $l,n=0,\ldots,N$ with coefficients independent of $\kappa$. Lemma 5.1 thus implies: ###### Corollary 5.2. Fix some deterministic starting point $x\in C_{N}^{B}$ as well as the parameters $p,q$. Consider the associated diffusions $(\widetilde{X}_{t,\kappa})_{t\geq 0}$ for $\kappa\in\,]0,\infty[$. Then there are coefficients $a_{n,l}\in\mathbb{R}$ for $0\leq l\leq n\leq N$ such that $\mathbb{E}\bigl{(}e_{n}(\cosh(\widetilde{X}_{t,\kappa}))\bigr{)}=\sum_{l=0}^{n}a_{n,l}\,e^{r_{l}t}$ with $r_{0}=0$ where the coefficients $a_{n,l}$ and the exponents $r_{l}$ depend on $p,q$ and $x$ only and not on $\kappa$. The same holds for $\kappa=\infty$ and starting points $x$ in the interior of $C_{N}^{B}.$ ###### Example 5.3. For $x=0\in C_{N}^{B}$, we have $H_{\lambda(n)}(0)=1\,$ and thus $\mathbb{E}\bigl{(}H_{\lambda(n)}(\widetilde{X}_{t,\kappa})\bigr{)}=e^{r_{n}t}\quad\quad(n=0,\ldots,N,\>t\geq 0,\,\kappa\in\,]0,\infty[).$ We finally turn to an application concerning a determinantal formula. 
Again we fix some starting point $x\in C_{N}^{B}$ as well as $p,q$ and consider the associated diffusions $(\widetilde{X}_{t,\kappa})_{t\geq 0}$ for $\kappa\in\,]0,\infty[$. Then by Corollary 5.2, for all $t\geq 0$ and $y\in\mathbb{C}$, (5.12) $\mathbb{E}\Bigl{(}\prod_{j=1}^{N}\bigl{(}y-\cosh(\widetilde{X}_{t,\kappa,j})\bigr{)}\Bigr{)}=\sum_{n=0}^{N}(-1)^{n}\,\mathbb{E}\bigl{(}e_{n}(\cosh(\widetilde{X}_{t,\kappa}))\bigr{)}\cdot y^{N-n}=\sum_{n=0}^{N}(-1)^{n}\,e_{n}\bigl{(}\cosh(\widetilde{X}_{t,\infty})\bigr{)}\cdot y^{N-n}=\prod_{j=1}^{N}\,\bigl{(}y-\cosh(\widetilde{X}_{t,\infty,j})\bigr{)}=:D_{t,x}(y).$ It is an interesting task to find particularly nice starting points $x\in C_{N}^{B}$ for which $D_{t,x}(y)$ can be determined explicitly. For instance, in the setting of multivariate Bessel processes of types $A_{N-1}$ or $B_{N}$ and start in the origin, $D_{t,0}(y)$ is a classical one-dimensional Hermite or Laguerre polynomial of degree $N$ in $y$, which is scaled by a factor $\sqrt{t}$; for the details see [KVW]. Moreover, in the setting of Heckman-Opdam Jacobi processes of type $BC$ on the compact alcove $\mathbb{A}_{N}$, and with the same parametrization of the multiplicity $k$ by $p,q,\kappa$ as here, there is a (unique) stationary solution $x_{0}$ of the ODE in the interior of $\mathbb{A}_{N}$, whose coordinates are the ordered zeroes of some classical one-dimensional Jacobi polynomial of degree $N$. The indices of this Jacobi polynomial are determined by $p,q$. This means that for this particular starting point $x_{0}$, the function $D_{t,x_{0}}(y)$ is just this specific Jacobi polynomial and is independent of $t\geq 0$ (due to stationarity). We refer to [V] for further details.
We expect that in our non-compact $BC$ setting, $D_{t,x}(y)$ should be of particular interest when the associated ODE (5.6) starts in $x=0\in\partial C_{N}^{B}$, where the corresponding initial value problem should be uniquely solvable, as in the Dunkl setting in [VW]. It seems that the explicit solution of this initial value problem is more involved than in the cases considered in [KVW, V]. ## References * [A] K. Aomoto, Jacobi polynomials associated with Selberg integrals. SIAM J. Math. Anal. 18 (1987), 545-549. * [AV] S. Andraus, M. Voit, Limit theorems for multivariate Bessel processes in the freezing regime. Stoch. Proc. Appl. 129 (2019), 4771-4790. * [BF] T.H. Baker, P.J. Forrester, The Calogero-Sutherland model and generalized classical polynomials. Comm. Math. Phys. 188 (1997), 175–216. * [BO] R.J. Beerends, E.M. Opdam, Certain hypergeometric series related to the root system $BC$. Trans. Amer. Math. Soc. 339 (1993), 581–609. * [CS] B. Collins, P. Sniady, Integration with respect to the Haar measure on unitary, orthogonal and symplectic groups. Comm. Math. Phys. 264 (2006), 773-795. * [Dem] N. Demni, $\beta$-Jacobi processes. Adv. Pure Appl. Math. 1 (2010), 325-344. * [DG] P. Diaconis, A. Gamburd, Random matrices, magic squares and matching polynomials. Electron. J. Combin. 11 (2004/06), no. 2, Research Paper 2, 26 pp. * [F] P. Forrester, Log Gases and Random Matrices. London Mathematical Society, London, 2010. * [FG] P. Forrester, A. Gamburd, Counting formulas associated with some random matrix averages. J. Combin. Theory A 113 (2006), 934–951. * [HO] G. Heckman, E. Opdam, Jacobi polynomials and hypergeometric functions associated with root systems. In: Encyclopedia of Special Functions, Part II: Multivariable Special Functions, eds. T.H. Koornwinder, J.V. Stokman, Cambridge University Press, Cambridge, 2021. * [HS] G. Heckman, H. Schlichtkrull, Harmonic Analysis and Special Functions on Symmetric Spaces, Part I.
Perspectives in Mathematics, Vol. 16, Academic Press, 1994. * [HW] D. Hobson, W. Werner, Non-colliding Brownian motions on the circle. Bull. London Math. Soc. 28 (1996), 643–650. * [KN] R. Killip, I. Nenciu, Matrix models for circular ensembles. Int. Math. Res. Not. 50 (2004), 2665–2701. * [KVW] M. Kornyik, M. Voit, J. Woerner, Some martingales associated with multivariate Bessel processes. Acta Math. Hungarica 163 (2021), 194-212. * [L] M. Lassalle, Polynômes de Jacobi généralisés, C. R. Acad. Sci. Paris Ser. I Math. 312, (1991), 425-428. * [LV] L. Lapointe, L. Vinet, Exact operator solution of the Calogero-Sutherland model. Comm. Math. Phys. 178 (1996), 425-452. * [NPP] E.K. Narayanan, A. Pasquale, S. Pusti, Asymptotics of Harish-Chandra expansions, bounded hypergeometric functions associated with root systems, and applications. Adv. Math. 252 (2014), 227–259. * [OO] A. Okounkov, G. Olshanski, Asymptotics of Jack polynomials as the number of variables goes to infinity. Int. Math. Res. Not. 1998, No. 13, 641-682. arXiv:math/9912124. * [P] P.E. Protter, Stochastic Integration and Differential Equations. A New Approach. Springer, Berlin, 2003. * [R] E.M. Rains, Combinatorial properties of Brownian motion on the compact classical groups. J. Theor. Probab. 10 (1997), 659-679. * [RR1] H. Remling, M. Rösler, The heat semigroup in the compact Heckman-Opdam setting and the Segal-Bargmann transform. Int. Math. Res. Not. 2011, No. 18, 4200-4225. * [RR2] H. Remling, M. Rösler, Convolution algebras for Heckman-Opdam polynomials derived from compact Grassmannians. J. Approx. Theory 197 (2015), 30–48. * [RW] L.C.G. Rogers, D. Williams, Diffusions, Markov Processes and Martingales, Vol. 1 Foundations. Cambridge University Press 2000. * [Sch1] B. Schapira, The Heckman-Opdam Markov processes. Probab. Theory Rel. Fields 138 (2007), 495-519. * [Sch2] B. Schapira, Contribution to the hypergeometric function theory of Heckman and Opdam: sharp estimates, Schwarz space, heat kernel. 
Geom. Funct. Anal. 18 (2008), 222-250. * [St] R.P. Stanley, Some combinatorial properties of Jack symmetric functions. Adv. Math. 77 (1989), 76-115. * [V] M. Voit, Some martingales associated with multivariate Jacobi processes and Aomoto’s Selberg integral. Indag. Math. 31 (2020), 398-410. * [VW] M. Voit, J.H.C. Woerner, The differential equations associated with Calogero-Moser- Sutherland particle models in the freezing regime. Hokkaido Math. J. 2021, to appear, arXiv:1910.07888
On Malyshev’s method of automorphic functions in diffraction by wedges Dedicated to the memory of Vadim Malyshev A.I. Komech Faculty of Mathematics, Vienna University <EMAIL_ADDRESS> A.E. Merzon111The research was supported by CONACYT-México and CIC-UMSNH, México. Institute of Physics and Mathematics University of Michoacan de San Nicolas de Hidalgo, Morelia, Mexico <EMAIL_ADDRESS> ###### Abstract We describe Malyshev’s method of automorphic functions in application to boundary value problems in angles and to diffraction by wedges. We give a concise survey of related results of A. Sommerfeld, S.L. Sobolev, J.B. Keller, G.E. Shilov and others. MSC: 35J25; 30F10; 35J05; 35A30; 11F03; 35Q15; 78A45. Keywords: elliptic equation; Helmholtz equation; boundary value problem; plane angle; Fourier transform; analytic function; Riemann surface; characteristics; covering map; automorphic function; Riemann–Hilbert problem; diffraction; wedge; limiting absorption principle; limiting amplitude principle; limiting amplitude; the Sommerfeld radiation condition. ###### Contents 1. 1 Introduction 2. 2 Diffraction by wedges and radar/sonar detection 3. 3 Stationary diffraction and boundary value problems in angles 4. 4 Malyshev’s method of automorphic functions 1. 4.1 Reduction to undetermined algebraic equation on the Riemann surface 1. 4.1.1 Fourier–Laplace transform 2. 4.1.2 Undetermined algebraic equation on the Riemann surface 2. 4.2 Method of automorphic functions 1. 4.2.1 Covering maps 2. 4.2.2 Shift equation 3. 4.3 Reduction to the Riemann–Hilbert problem 5. 5 Nonconvex angles of magnitude $\Phi>\pi$ 6. 6 Time-dependent diffraction by wedge 7. 7 Limiting amplitude principle 8. 8 The Sommerfeld diffraction theory and related results ## 1 Introduction Vadim Malyshev was a very talented and versatile mathematician. He obtained significant results in probability theory and Gibbs fields, Markov processes and Euclidean quantum field theory.
He also possessed outstanding organizational skills; in particular, he founded the successful and respected mathematical journal “Markov Processes and Related Fields”. In 1970, V. Malyshev invented the method of automorphic functions [36] and applied it to random walks on the lattice in the quarter-plane. Later on, he applied the method to queueing systems and analytic combinatorics [16]. In 1972–2022, the method was extended to boundary value problems for partial differential equations in angles [22, 32] and to diffraction by wedges [30]. The main steps of Malyshev’s method are as follows: I. Undetermined algebraic equation on the Riemann surface and analytic continuation. II. Elimination of one unknown function using covering automorphisms. III. Reduction to the Riemann–Hilbert problem. Malyshev’s method has played a crucial role in the progress of the theory of diffraction by wedges with general boundary conditions since 1972. The problem was stated by M.I. Vishik in the summer of 1967. In 1969–1971, one of the authors (AK) tried to solve this problem while preparing his PhD Thesis. As a result of these three-year efforts, the problem was reduced to an undetermined algebraic equation on the Riemann surface [23], though the next steps remained obscure. Fortunately, at the end of 1971, AK received an impetus from his friend Alexander Shnirelman, who noticed something similar in Malyshev’s book [36], which he had recently reviewed at the request of M.I. Vishik. AK did not understand this book completely, but discovered two pages which could have contained a creative idea. The book, opened on these pages, lay on his desk for about two or three months, until AK pinned down two lines with the key idea of automorphicity. The remaining work took about six months… The extension of the research to diffraction problems was done in an intensive collaboration of both authors, and took about 50 years.
The main results of the collaboration were the limiting absorption principle [44, 45], the proof of the completeness of Ursell’s trapping modes [31], the extension to nonconvex angles [25, 30], and the Sommerfeld representation [24]. Moreover, our general methods [30] allowed us to reproduce the formulas obtained by Sommerfeld, Sobolev and Keller [28, 29, 46]. These identifications justify those formulas as the limiting amplitudes in diffraction. In the present paper, we give a concise survey of the development of Malyshev’s method of automorphic functions since 1972 in the context of i) boundary value problems in angles for elliptic partial differential equations, and ii) the theory of stationary and time-dependent diffraction by wedges. We focus on the principal ideas, omitting nonessential technical details. All the details can be found in [30]. ## 2 Diffraction by wedges and radar/sonar detection The radar or sonar emits the incident wave, which generates the reflected and diffracted waves (the latter in green color) as shown in Fig. 1. Here $W$ denotes a conducting wedge (for example, the edge of an airplane wing), and $Q=\mathbb{R}^{2}\setminus W$ is an angle of magnitude $\Phi$. The incident wave reaches the wedge and generates the reflected and diffracted waves. The diffracted wave is defined as the total wave minus the incident and reflected waves. The reflected wave is defined by geometric optics and is absorbed by the ground. On the other hand, the diffracted wave spreads in all directions, and only this part of the radiation returns to the radar, which allows one to detect the airplane’s location. Figure 1: Incident, reflected, and diffracted waves (the latter in green color).
## 3 Stationary diffraction and boundary value problems in angles The stationary diffraction by a wedge is described by the boundary value problem for the Helmholtz equation in an angle $Q\subset\mathbb{R}^{2}$ of magnitude $\Phi\in(0,2\pi]$: $\left\\{\begin{array}[]{rcll}(\Delta+\omega^{2})u(x)&=&0,&x\in Q\\\ \\\ B_{l}u(x)&=&f_{l}(x),&x\in\Gamma_{l},\quad l=1,2\end{array}\right|,$ (3.1) where $\Gamma_{1}$ and $\Gamma_{2}$ denote the sides of the angle, the functions $f_{l}$ are defined by the incident wave, and $\omega\in\mathbb{R}$ is its frequency, see Fig. 1 and (7.60). The operators $B_{l}$ in the boundary conditions correspond to the material properties of the wedge (conductor, insulator, ferromagnetic, etc.). The relation of the stationary problem (3.1) to time-dependent diffraction is highly nontrivial. The key issue is that for $\omega\in\mathbb{R}$, the problem admits an infinite number of linearly independent solutions. We discuss this issue in detail in Section 7. The stationary diffraction problem (3.1) with the Dirichlet and Neumann boundary conditions ($B_{l}=1$ or $B_{l}=\frac{\partial}{\partial n}$) was solved in 1896–1912 for $\Phi=2\pi$ by A. Sommerfeld [60]–[65] (the detailed exposition and comments can be found in [48]). The extension to all $\Phi\in(0,2\pi)$ was obtained in 1920 by H.S. Carslaw [8], in 1932–1937 by V.I. Smirnov and S.L. Sobolev [55, 56, 57, 58, 59], and in 1951 by J.B. Keller and A. Blank [21]. In 1958, G.D. Malujinetz solved the problem for all $\Phi\in(0,2\pi)$ with the impedance (Leontovich) boundary condition $\frac{\partial u(x)}{\partial n}+ib_{l}u(x)=f_{l}(x),\,\,x\in\Gamma_{l}$; see [34, 35]. The detailed exposition of all these results can be found in [2] and [30].
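For the degenerate case $\Phi=\pi$ (a flat boundary) with the Dirichlet condition $B_{l}=1$, the geometric-optics reflected wave already solves (3.1) exactly, with no diffracted part. The following sympy sketch (our own illustration, not from the survey) checks this for an incident plane wave hitting the boundary $x_{2}=0$ at angle $\alpha$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
w, al = sp.symbols('omega alpha', real=True, positive=True)

# Incident plane wave plus the geometric-optics reflected wave
# for the half-plane x2 > 0 with Dirichlet condition u = 0 on x2 = 0:
incident  = sp.exp(sp.I * w * (x1 * sp.cos(al) + x2 * sp.sin(al)))
reflected = -sp.exp(sp.I * w * (x1 * sp.cos(al) - x2 * sp.sin(al)))
u = incident + reflected

# u solves the Helmholtz equation from (3.1) ...
helmholtz = sp.diff(u, x1, 2) + sp.diff(u, x2, 2) + w**2 * u
assert sp.simplify(helmholtz) == 0
# ... and vanishes on the boundary x2 = 0:
assert sp.simplify(u.subs(x2, 0)) == 0
print("incident + reflected wave solves the Dirichlet problem for Phi = pi")
```

For wedge angles $\Phi\neq\pi$ this simple reflection no longer suffices, which is exactly where the diffracted wave and the methods surveyed below enter.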
The mixed boundary value problems of type $\left\\{\begin{array}[]{rcll}Au(x)&=&0,&x\in Q\\\ \\\ B_{l}u(x)&=&f_{l}(x),&x\in\Gamma_{l},\quad l=1,2\end{array}\right|,\qquad A=\sum_{|\alpha|\leq m}a_{\alpha}\partial^{\alpha},\quad B_{l}=\sum_{|\alpha|\leq n_{l}}b_{l\alpha}\partial^{\alpha}$ (3.2) were considered in 1958 by S.L. Sobolev [59] and in 1960–1961 by G.E. Shilov [53, 54] in the quadrant $x_{1}>0$, $x_{2}>0$ for the case of an operator $A$ hyperbolic in the variable $x_{2}$ and with the Cauchy initial conditions at $x_{2}=0$. For strongly elliptic second order operators $A$ and general differential boundary operators $B_{l}$, the problem (3.2) was solved in 1972 in convex angles $Q$ of magnitude $\Phi\in(0,\pi)$, see [22, 23]. Strong ellipticity means that $|\hat{A}(z)|\geq\varkappa(|z|^{2}+1),\qquad z\in\mathbb{R}^{2},$ (3.3) where the symbol $\hat{A}(z):=\sum_{|\alpha|\leq 2}a_{\alpha}(-iz)^{\alpha}$ and $\varkappa>0$. In particular, the operator $A=-\Delta+1$ with the symbol $\hat{A}(z)=z^{2}+1$ is strongly elliptic, and the Helmholtz operator $H=\Delta+\omega^{2}$ from (3.1) is also strongly elliptic for ${\rm Im{\hskip 1.42262pt}}\omega\neq 0$. The method of [22, 23] relies on Malyshev’s ideas of automorphic functions [36], which are presented in the next section. The extension of this result to nonconvex angles of magnitudes $\Phi\in(\pi,2\pi)$ was done in 1992 by the authors [25]. Let us note that the Helmholtz operator $A=\Delta+\omega^{2}$ is not strongly elliptic if $\omega\in\mathbb{R}$ since its symbol has the form $\hat{A}(z)=-z^{2}+\omega^{2}$. Problem (3.2) for the Helmholtz operator in convex angles was solved in 1972–1977 by A.E.
Merzon [44, 45], who proved that for real $\omega$, the problem admits only a finite number of solutions satisfying the limiting absorption principle: $u_{\omega}(x)=\lim_{\varepsilon\to 0+}u_{\omega+i\varepsilon}(x),\qquad x\in Q,$ (3.4) where $u_{\omega+i\varepsilon}$ denotes a suitable solution to (3.1) with $\omega+i\varepsilon$ instead of $\omega$. The application of these results to time-dependent diffraction by wedges was done in 2006–2019 by the authors [26, 27, 30], where, in particular, the limiting amplitude principle (7.54) was established as well as (3.4). Another approach to the construction of solutions to (3.2) has been suggested by Maz’ya and Plamenevskii [37, 38]. However, this approach is applicable only to equations with real coefficients, which is not sufficient for applications to diffraction problems. Many works published since the 1980s concern a wide spectrum of properties of solutions to the boundary problems of type (3.2) in different regions with angles, see Grisvard [18], Costabel and Stephan [10], Dauge [12], Bernard [3, 4], Nazarov and Plamenevskii [49], Bonnet-Ben Dhia and Joly [6], Bonnet-Ben Dhia, Dauge and Ramdani [5], Meister with collaborators [39]–[43], Penzel and Teixeira [50], Castro and Kapanadze [9], and others. The detailed survey can be found in [30]. Note that Malyshev’s method plays an important role in the theory of Queueing Systems and Analytic Combinatorics [16]. Another important area of application of Malyshev’s method is the linear theory of water waves. In particular, the method was applied in 1996–2002 by the authors together with P.N. Zhevandrov to trapped modes on a sloping beach. As a result, the long-standing problem of the completeness of Ursell’s modes was solved [31, 32, 47]. This progress is due to the fact that the method allows one to obtain all solutions of the boundary value problems in angles.
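The limiting absorption principle (3.4) can be illustrated by a one-dimensional toy example (our own sketch, not from the survey): for the equation $u''+\omega^{2}u=0$ on the half-line, replacing $\omega$ by $\omega+i\varepsilon$ damps the outgoing wave $e^{i\omega x}$ and blows up the incoming one, so the limit $\varepsilon\to 0+$ selects the outgoing solution.

```python
import numpy as np

# 1D illustration of (3.4): among the two solutions exp(+-i w x) of
# u'' + w^2 u = 0 on x >= 0, only the outgoing one, exp(i w x), stays
# bounded after the substitution w -> w + i*eps, and u_{w+i eps} -> u_w.
w = 2.0
x = np.linspace(0.0, 20.0, 401)
outgoing = np.exp(1j * w * x)
for eps in (0.1, 0.01, 0.001):
    u_eps = np.exp(1j * (w + 1j * eps) * x)
    assert np.all(np.abs(u_eps) <= 1.0 + 1e-12)   # damped for eps > 0
err = np.max(np.abs(np.exp(1j * (w + 1e-6j) * x) - outgoing))
assert err < 1e-4
print("u_{w + i eps} converges to the outgoing wave as eps -> 0+")
```

The genuine wedge problem is of course far subtler, since infinitely many solutions exist for real $\omega$, but the selection mechanism is the same.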
We expect that the method can yield valuable progress in diffraction by ferromagnetic wedges, which is a challenging open problem of radar detection. In this case, the operators $B_{l}$ in (3.2) are nonlocal pseudodifferential operators. ## 4 Malyshev’s method of automorphic functions In this section, we present the basic steps of the method [22], which relies on Malyshev’s ideas of automorphic functions [36]. Note that in the case of rational angles $\Phi=\pi/n$ and the Dirichlet and Neumann boundary conditions, the boundary value problem (3.1) can be easily solved by reflections in the sides of the angle. This method was well known at least since Gauss’s theory of electrostatics [17]. For $\Phi\neq\pi/n$ the reflections do not give a solution, and for irrational $\Phi/\pi$, the method suggests reflections on a “Riemann surface” formed by the reflected angles. This was the original step of the Sommerfeld approach, which led him to the famous “Sommerfeld integral representation” for solutions [60]. The reflection on the Riemann surface and the theory of branching solutions to the wave equation were developed later by Sobolev [57] and [58, Chapter XII]. Very surprisingly, the method of automorphic functions [22, 36] also relies on reflections on a suitable Riemann surface $V$. However, in this approach, $V$ is a surface in the Fourier space, contrary to the original ideas of Sommerfeld. Namely, $V$ is the Riemann surface of complex characteristics of the elliptic operator $A$: $V=\\{z\in\mathbb{C}^{2}:\hat{A}(z)=0\\}.$ (4.5) ###### Remark 4.1. The main idea of the Malyshev approach is the invariance of the Cauchy data of solutions under covering maps of the Riemann surface $V$, see Remark 4.7. In [22], the problem (3.2) with strongly–elliptic operators $A$ in convex angles $Q$ is solved in the following steps: 1\. Reduction to an undetermined algebraic equation with two unknown functions on the Riemann surface $V$. 2\.
Elimination of one unknown function using its invariance with respect to the covering map of the Riemann surface. 3\. Reduction of the obtained equation with one unknown function to the Riemann–Hilbert problem on $V$. Below in this section, we describe some details. ### 4.1 Reduction to undetermined algebraic equation on the Riemann surface As an example, we consider the Dirichlet boundary value problem in the quadrant $Q=\mathbb{R}^{+}\times\mathbb{R}^{+}$: $\left\\{\begin{array}[]{rcl}Au(x_{1},x_{2})&=&0\\\ \\\ u(x_{1},0)&=&f_{1}(x_{1}),\,\,u(0,x_{2})=f_{2}(x_{2})\end{array}\right|,\,\,\quad x_{1}>0,\,\,\,x_{2}>0.$ (4.6) #### 4.1.1 Fourier–Laplace transform We assume that the solution satisfies $u\in C^{2}(\overline{Q})$ and is bounded by a polynomial: $|u(x)|+|\nabla u(x)|\leq C(1+|x|)^{p},\qquad x\in\mathbb{R}^{2}.$ (4.7) Denote $\mathbb{C}^{+}=\\{\zeta\in\mathbb{C}:{\rm Im{\hskip 1.42262pt}}\zeta>0\\}$ and $Z^{+}=\mathbb{C}^{+}\times\mathbb{C}^{+}$, and consider the complex Fourier–Laplace transform of the solution $\hat{u}(z)=\int_{0}^{\infty}\int_{0}^{\infty}e^{izx}u(x)dx_{1}dx_{2},\qquad z=(z_{1},z_{2})\in Z^{+}.$ (4.8) By (4.7), this integral is absolutely convergent, and hence it is an analytic function of two complex variables (this is a particular case of the Paley–Wiener Theorem). Let us denote the Neumann data of the solution as $\varphi_{1}(x_{1})=\partial_{2}u(x_{1},0),\,\,\,x_{1}\geq 0;\qquad\varphi_{2}(x_{2})=\partial_{1}u(0,x_{2}),\,\,\,x_{2}\geq 0.$ (4.9) It is well known that the solution $u(x)$ can be expressed via the Dirichlet and Neumann data $f_{1},f_{2},\varphi_{1},\varphi_{2}$ by the Green integral formula [11]. In our case, it is useful to obtain this formula in terms of the Fourier transform. For this purpose, multiply the first equation in (3.2) by $e^{izx}$ and integrate over $Q$.
Integrating by parts, we immediately obtain $0=\int_{0}^{\infty}\int_{0}^{\infty}e^{izx}Au(x)dx_{1}dx_{2}=\hat{A}(z)\hat{u}(z)+F(z),\qquad z\in Z^{+},$ (4.10) where $F(z)=P_{1}(z)\hat{f}_{1}(z_{1})+P_{2}(z)\hat{f}_{2}(z_{2})+S_{1}(z)\hat{\varphi}_{1}(z_{1})+S_{2}(z)\hat{\varphi}_{2}(z_{2}),\qquad z\in Z^{+},$ (4.11) and the functions $P_{l}$ and $S_{l}$ are polynomials. #### 4.1.2 Undetermined algebraic equation on the Riemann surface Rewrite (4.10) as $\hat{A}(z)\hat{u}(z)=-F(z),\qquad z\in Z^{+}.$ (4.12) Now (4.5) implies the identity $F(z)=0,\qquad z\in V^{+}:=V\cap Z^{+}$ (4.13) since all the functions $\hat{A}(z),\hat{u}(z),F(z)$ are analytic in the domain $Z^{+}$! ###### Remark 4.2. Note that the set of complex characteristics $V$ is nonempty even for strongly elliptic operators (3.3), though its intersection with the real plane $\mathbb{R}^{2}$ is empty; see Example 4.5 below. The identity (4.13) can be rewritten as an undetermined linear algebraic equation $S_{1}(z)\hat{\varphi}_{1}(z_{1})+S_{2}(z)\hat{\varphi}_{2}(z_{2})=G(z),\qquad z\in V^{+}$ (4.14) with two unknown functions $\hat{\varphi}_{1}(z_{1})$, $\hat{\varphi}_{2}(z_{2})$ and the known right-hand side: $G(z):=-P_{1}(z)\hat{f}_{1}(z_{1})-P_{2}(z)\hat{f}_{2}(z_{2}),\qquad z\in V^{+}.$ (4.15) ###### Remark 4.3. The identity (4.12) implies the formula for the solution $u(x)=-\Big{[}{\cal F}^{-1}\frac{F(z)}{\hat{A}(z)}\Big{]}(x),\qquad x\in Q,$ (4.16) where ${\cal F}^{-1}$ denotes the inverse to the Fourier–Laplace transform (4.8), and the right-hand side is well defined due to (3.3). The formula (4.16) can be transformed into the well-known Green formula, which expresses the solution $u(x)$ via its Cauchy data.
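These identities can be checked directly for the model operator $A=-\Delta+1$ (see Example 4.5 below). Integrating by parts as in (4.10) gives, for this operator, $S_{1}=S_{2}=1$, $P_{1}(z)=-iz_{2}$, $P_{2}(z)=-iz_{1}$; these polynomials are not written out in the text, so treat them as a derived assumption of this sketch. The function $u(x)=e^{-ax_{1}-bx_{2}}$ with $a^{2}+b^{2}=1$ solves $Au=0$ on the quadrant and has closed-form transforms, so (4.10) and (4.13) can be verified numerically:

```python
import cmath

# Model operator A = -Delta + 1, symbol A_hat(z) = z1^2 + z2^2 + 1.
# Explicit solution of Au = 0 on the quadrant: u(x) = exp(-a x1 - b x2), a^2 + b^2 = 1.
a, b = 0.6, 0.8

A_hat = lambda z1, z2: z1**2 + z2**2 + 1
u_hat = lambda z1, z2: 1 / ((a - 1j*z1) * (b - 1j*z2))   # transform (4.8), closed form

# Transforms of the Cauchy data: f1 = u(.,0), f2 = u(0,.) (Dirichlet) and
# phi1 = d2 u(.,0), phi2 = d1 u(0,.) (Neumann data (4.9))
f1_hat = lambda z1: 1 / (a - 1j*z1)
f2_hat = lambda z2: 1 / (b - 1j*z2)
phi1_hat = lambda z1: -b / (a - 1j*z1)
phi2_hat = lambda z2: -a / (b - 1j*z2)

def F(z1, z2):   # (4.11) with P1 = -i z2, P2 = -i z1, S1 = S2 = 1 (derived by parts)
    return (-1j*z2)*f1_hat(z1) + (-1j*z1)*f2_hat(z2) + phi1_hat(z1) + phi2_hat(z2)

# (4.10): A_hat(z) u_hat(z) + F(z) = 0 at a generic point of Z+
z1, z2 = 0.3 + 1.0j, -0.2 + 0.5j
assert abs(A_hat(z1, z2)*u_hat(z1, z2) + F(z1, z2)) < 1e-12

# (4.13): F vanishes on V+, here parametrized by z1 = i cos w, z2 = i sin w
w = 0.7
z1, z2 = 1j*cmath.cos(w), 1j*cmath.sin(w)
assert abs(A_hat(z1, z2)) < 1e-12 and abs(F(z1, z2)) < 1e-12
print("identities (4.10) and (4.13) verified")
```

On $V^{+}$ the individual terms of $F$ are nonzero, yet their sum cancels exactly, which is the content of (4.13).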
### 4.2 Method of automorphic functions #### 4.2.1 Covering maps Denote $\psi_{1}(z)=\hat{\varphi}_{1}(z_{1}),\quad\psi_{2}(z)=\hat{\varphi}_{2}(z_{2}),\qquad\hat{g}_{1}(z)=\hat{f}_{1}(z_{1}),\quad\hat{g}_{2}(z)=\hat{f}_{2}(z_{2}).$ (4.17) Now (4.14) becomes $S_{1}(z)\psi_{1}(z)+S_{2}(z)\psi_{2}(z)=G(z),\qquad z\in V^{+},$ (4.18) where $G(z):=-P_{1}(z)\hat{g}_{1}(z)-P_{2}(z)\hat{g}_{2}(z).$ (4.19) Of course, this equation is not equivalent to (4.14). To keep the equivalence, we need an additional characterisation of the functions $\psi_{l}(z)$. The key observation of Malyshev is that these functions are automorphic with respect to appropriate groups of transformations of the Riemann surface $V$. First, consider the coordinate projections $p_{l}:V\to\mathbb{C}$ defined by $p_{1}(z_{1},z_{2})=z_{1},\qquad p_{2}(z_{1},z_{2})=z_{2}.$ (4.20) These projections are two-sheeted since, for example, $p_{1}(z_{1},z_{2})=z_{1}$ means that $z_{2}$ is a root of the quadratic equation $\hat{A}(z_{1},z_{2})=0$. Accordingly, the inverse maps $p_{l}^{-1}:\mathbb{C}\to V$ are double-valued: for $z_{1},z_{2}\in\mathbb{C}$, $p_{1}^{-1}(z_{1})=\\{\zeta_{1}^{-},\zeta_{1}^{+}\\},\qquad p_{2}^{-1}(z_{2})=\\{\zeta_{2}^{-},\zeta_{2}^{+}\\},$ (4.21) and at the branching points of $p_{l}^{-1}$, the two points $\zeta_{l}^{\pm}\in V$ coincide. ###### Definition 4.4. Covering maps $h_{1},h_{2}:V\to V$ are defined as follows: for any $z_{1},z_{2}\in\mathbb{C}$, $h_{1}\zeta_{1}^{\pm}=\zeta_{1}^{\mp},\qquad\qquad h_{2}\zeta_{2}^{\pm}=\zeta_{2}^{\mp}.$ (4.22) ###### Example 4.5. For the strongly-elliptic operator $A=-\Delta+1$, the corresponding Riemann surface $V\\!\\!:z_{1}^{2}+z_{2}^{2}+1=0$ is shown in Fig. 2 in projection onto the plane ${\rm Im{\hskip 1.42262pt}}z_{1},{\rm Im{\hskip 1.42262pt}}z_{2}$.
It is easy to see that this projection does not cover the circle $|{\rm Im{\hskip 1.42262pt}}z_{1}|^{2}+|{\rm Im{\hskip 1.42262pt}}z_{2}|^{2}<1$, and it covers twice each point with $|{\rm Im{\hskip 1.42262pt}}z_{1}|^{2}+|{\rm Im{\hskip 1.42262pt}}z_{2}|^{2}>1$. The surface consists of two sheets, shown in Fig. 2, glued along the cuts. Thus, $h_{1}$ permutes the points $\zeta_{1}^{\pm}\in V$ with the identical projections $z_{1}=p_{1}\zeta_{1}^{\pm}$, and similarly, $h_{2}$ permutes the points $\zeta_{2}^{\pm}\in V$ with the identical projections $z_{2}=p_{2}\zeta_{2}^{\pm}$ (see Fig. 2): $p_{1}h_{1}\zeta_{1}^{\pm}=p_{1}\zeta_{1}^{\mp}=z_{1},\qquad p_{2}h_{2}\zeta_{2}^{\pm}=p_{2}\zeta_{2}^{\mp}=z_{2}.$ (4.23) The maps $h_{l}:V\to V$ with $l=1,2$ define the corresponding automorphisms of the ring of (meromorphic) functions $\psi(z)$ on the Riemann surface $V$: $\psi^{h_{l}}(z):=\psi(h_{l}z),\qquad z\in V.$ (4.24) Figure 2: Riemann surface $V\\!\\!:z_{1}^{2}+z_{2}^{2}+1=0$ in projection onto the plane ${\rm Im{\hskip 1.42262pt}}z_{1},{\rm Im{\hskip 1.42262pt}}z_{2}$. Figure 2 shows that $\begin{array}[]{ll}p_{1}\zeta_{1}^{+}=z_{1}=p_{1}\zeta_{1}^{-},&{\rm so}\,\,\,\psi_{1}(\zeta_{1}^{+})=\hat{\varphi}_{1}(z_{1})=\psi_{1}(\zeta_{1}^{-}),\\\ \\\ p_{2}\zeta_{2}^{+}=z_{2}=p_{2}\zeta_{2}^{-},&{\rm so}\,\,\,\psi_{2}(\zeta_{2}^{+})=\hat{\varphi}_{2}(z_{2})=\psi_{2}(\zeta_{2}^{-}).\end{array}$ Now it is clear that the functions $\psi_{l}(z):=\hat{\varphi}_{l}(z_{l})$ with $l=1,2$ are invariant with respect to the automorphisms $h_{l}$: $\,\,\,\,\,\psi_{l}^{h_{l}}(z)=\psi_{l}(z),\qquad\,\,\,\,\,\,\,z\in V^{+}.$ (4.25) In other words, the functions $\psi_{l}$ are automorphic, and the automorphisms defined by $h_{l}$ belong to the corresponding Galois groups of extensions of the ring of functions of $z_{l}$.
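For the operator of Example 4.5, the fiber of $p_{1}$ over a point $z_{1}$, the covering maps of Definition 4.4, and the invariance (4.25) can be verified in a few lines; the sample function of $z_{1}$ below is an arbitrary illustrative choice:

```python
import cmath

# V = {z1^2 + z2^2 + 1 = 0}: fiber of p1 over z1, covering maps h1, h2,
# and the automorphy of a function depending on z1 alone (cf. (4.25)).
def fiber_p1(z1):
    r = cmath.sqrt(-1 - z1**2)     # z2 solves the quadratic A_hat(z1, z2) = 0
    return (z1, r), (z1, -r)

def h1(z):                         # swaps the two points of the fiber of p1
    return (z[0], -z[1])

def h2(z):                         # swaps the two points of the fiber of p2
    return (-z[0], z[1])

z1 = 0.4 + 0.9j
zp, zm = fiber_p1(z1)
assert abs(zp[0]**2 + zp[1]**2 + 1) < 1e-12     # the fiber lies on V
assert h1(zp) == zm and h1(h1(zp)) == zp        # h1 permutes the fiber; h1 o h1 = id
assert h1(zp)[0] == zp[0]                       # p1 o h1 = p1, cf. (4.23)

phi1 = lambda w: 1 / (0.6 - 1j*w)               # sample function of z1 alone
psi1 = lambda z: phi1(z[0])                     # psi1(z) depends only on p1(z)
assert psi1(h1(zp)) == psi1(zp)                 # h1-invariance as in (4.25)
print("covering-map identities verified")
```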
#### 4.2.2 Shift equation Applying formally $h_{1}$ to (4.18), and using (4.25) with $l=1$, we get a new equation for the same unknown functions: $S_{1}^{h_{1}}(z)\psi_{1}(z)+S_{2}^{h_{1}}(z)\psi_{2}^{h_{1}}(z)=G^{h_{1}}(z).$ (4.26) The problem is that $\psi_{2}^{h_{1}}(z)=\psi_{2}(h_{1}z)$ and $G^{h_{1}}(z)=G(h_{1}z)$ are not defined generally for $z\in V^{+}$ since $V^{+}$ is not invariant with respect to the covering map $h_{1}$. In particular, we have by (4.19), $G^{h_{1}}(z):=-P_{1}^{h_{1}}(z)\hat{g}_{1}(z)-P_{2}^{h_{1}}(z)\hat{g}_{2}^{h_{1}}(z),$ (4.27) where $\hat{g}_{2}^{h_{1}}(z)=\hat{g}_{2}(h_{1}z)$ is not defined generally for $z\in V^{+}$. To remedy the situation, consider the case $f_{2}=0$. Then $\hat{g}_{2}=0$, and now the right-hand side of equation (4.26) is well defined for $z\in V^{+}$. It is important that in this case $\psi_{2}^{h_{1}}(z)$ is also well defined [30, Ch. 14]. The case $f_{1}=0$ can be considered similarly. ###### Remark 4.6. The function $\psi_{2}(z)$ admits an analytic continuation outside the region $V^{+}_{2}:=\\{z\in V:{\rm Im{\hskip 1.42262pt}}z_{2}>0\\}$ on the Riemann surface $V$, see [30, Ch. 14]. Let us stress that this is analytic continuation along the surface $V$. Now we can eliminate the function $\psi_{1}$ from (4.18) and (4.26). As a result, we obtain an algebraic equation with a shift for one unknown function $R_{1}(z)\psi_{2}^{h_{1}}(z)-R_{2}(z)\psi_{2}(z)=H(z),\quad z\in V^{+}.$ (4.28) Finally, using (4.25) with $l=2$, we get $R_{1}(z)\psi_{2}^{h}(z)-R_{2}(z)\psi_{2}(z)=H(z),\quad z\in V^{+};\qquad h=h_{2}h_{1}.$ (4.29) ###### Remark 4.7. The elimination of unknown functions using their invariance with respect to suitable “reflections” is the main idea of Malyshev’s method. ### 4.3 Reduction to the Riemann–Hilbert problem Let us illustrate the reduction of equation (4.28) to the Riemann–Hilbert problem for the particular case of the strongly elliptic operator $A=-\Delta+1$.
Its symbol is $\hat{A}(z)=z_{1}^{2}+z_{2}^{2}+1$, so $V=\\{(z_{1},z_{2})\in\mathbb{C}^{2}:z_{1}^{2}+z_{2}^{2}=-1\\}$ and the covering maps are $h_{1}(z_{1},z_{2})=(z_{1},-z_{2}),\qquad h_{2}(z_{1},z_{2})=(-z_{1},z_{2}).$ (4.30) Introduce the coordinate $w$ on the universal covering $\hat{V}=\mathbb{C}$ of the surface $V$ by $z_{1}=i\cos w,\qquad z_{2}=i\sin w.$ (4.31) The maps (4.30) can be lifted to $\hat{V}$ as $\hat{h}_{1}w=-w,\qquad\hat{h}_{2}w=-w+\pi.$ (4.32) Now $\hat{h}w=w+\pi$, so (4.29) becomes $\tilde{R}_{1}(w)\tilde{\psi}_{2}(w+\pi)-\tilde{R}_{2}(w)\tilde{\psi}_{2}(w)=\tilde{H}(w),$ (4.33) where $\tilde{R}_{1}$, etc., denote the liftings of the corresponding functions to the universal covering. The equation (4.33) holds for an appropriate region of $w\in\mathbb{C}$. Restricted to the strip ${\rm Re{\hskip 1.42262pt}}w\in[0,\pi]$, this equation is a Riemann–Hilbert problem which can be solved in quadratures [30, Chs 17 and 18]. Let us recall some details. The function $t=e^{2iw}$ analytically transforms the strip to the plane with the cut $[0,\infty)$. Denote the functions $\check{\psi}_{2}(t)=\tilde{\psi}_{2}(w)$, $\check{H}(t)=\tilde{H}(w)$ and $\check{R}_{k}(t)=\tilde{R}_{k}(w)$ for $k=1,2$. Then relation (4.33) becomes $\check{R}_{1}(t)\check{\psi}_{2}(t-i0)-\check{R}_{2}(t)\check{\psi}_{2}(t+i0)=\check{H}(t),\qquad t>0.$ (4.34) As the first step of the Riemann–Hilbert method, one must solve the corresponding homogeneous problem: $\check{R}_{1}(t)T(t-i0)-\check{R}_{2}(t)T(t+i0)=0,\qquad t>0.$ (4.35) Equivalently, $\frac{T(t+i0)}{T(t-i0)}=q(t):=\frac{\check{R}_{1}(t)}{\check{R}_{2}(t)},\qquad t>0.$ (4.36) The solution to this equation depends on zeros of the functions $\check{R}_{1}(t)$ and $\check{R}_{2}(t)$ for $t>0$.
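The solvability of the multiplicative jump problem (4.36) rests on the Sokhotski–Plemelj formula: the Cauchy-type integral $\Phi(t)=\frac{1}{2\pi i}\int_{0}^{\infty}\frac{\phi(s)}{s-t}\,ds$ satisfies $\Phi(t+i0)-\Phi(t-i0)=\phi(t)$ for $t>0$. A standalone numerical check, with the illustrative jump $\phi(s)=\log q(s)=s/(1+s^{2})$ (an assumed sample choice with $q(0)=q(\infty)=1$ and $q$ nonvanishing):

```python
import math

# Sokhotski-Plemelj check: Phi(t) = (1/2*pi*i) * int_0^inf phi(s)/(s - t) ds
# jumps by phi(t) across the cut [0, inf).  Sample jump: phi = log q,
# log q(s) = s/(1+s^2), so q(0) = q(inf) = 1 and q is nonvanishing.
def log_q(s):
    return s / (1 + s**2)

def cauchy(t):                    # midpoint-rule approximation of Phi(t)
    h, total, s = 1e-3, 0j, 5e-4
    while s < 400.0:              # truncate the integral at s = 400
        total += log_q(s) / (s - t) * h
        s += h
    return total / (2j * math.pi)

t0, eps = 1.0, 0.02
jump = cauchy(t0 + 1j*eps) - cauchy(t0 - 1j*eps)
# the jump tends to log q(t0) = 0.5 as eps -> 0 (here accurate to O(eps))
assert abs(jump - log_q(t0)) < 0.02
print("Plemelj jump:", jump.real)
```

With $\varepsilon=0.02$ the computed jump agrees with $\log q(t_{0})=0.5$ to within a few percent, and the agreement improves as $\varepsilon\to 0$.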
Let us consider the simplest case when such zeros do not exist, and moreover, $q(0)=q(\infty)=1.$ (4.37) Then the equation is equivalent to $\log T(t+i0)-\log T(t-i0)=\log q(t),\qquad t>0.$ (4.38) The solution is given by the Cauchy-type integral $\log T(t)=\frac{1}{2\pi i}\int_{0}^{\infty}\frac{\log q(s)}{s-t}ds,\qquad t\in\mathbb{C}\setminus[0,\infty).$ (4.39) It is important that $T(t)$ is analytic and nonvanishing in the region $\mathbb{C}\setminus[0,\infty)$. Now the nonhomogeneous problem (4.34) can be solved as follows. First, (4.34) and (4.35) imply $\frac{\check{\psi}_{2}(t-i0)}{T(t-i0)}-\frac{\check{\psi}_{2}(t+i0)}{T(t+i0)}=\frac{\check{H}(t)}{\check{R}_{1}(t)T(t-i0)},\qquad t>0.$ (4.40) Therefore, similarly to (4.38), $\frac{\check{\psi}_{2}(t)}{T(t)}=-\frac{1}{2\pi i}\int_{0}^{\infty}\frac{\check{H}(s)}{\check{R}_{1}(s)T(s-i0)(s-t)}ds,\qquad t\in\mathbb{C}\setminus[0,\infty)$ (4.41) since the function $\frac{\check{\psi}_{2}(t)}{T(t)}$ is analytic in $\mathbb{C}\setminus[0,\infty)$. Thus, we have calculated the function $\check{\psi}_{2}(t)$, and hence $\psi_{2}(z)$. Now $\psi_{1}(z)$ can be obtained from the relation (4.26). Hence, the functions $\hat{\varphi}_{1}(z_{1})$ and $\hat{\varphi}_{2}(z_{2})$ are known. It remains to substitute the obtained functions into the formula (4.11) for the function $F$. Then the solution to (3.2) is expressed by (4.16), which can be reduced to the integral of Sommerfeld type [24]. ###### Remark 4.8. Equation (4.28) is obtained using the invariance (4.25) with $l=1$, while (4.29) uses also $l=2$. Note that the equation (4.28) now reads $\tilde{R}_{1}(w)\tilde{\psi}_{2}(-w)-\tilde{R}_{2}(w)\tilde{\psi}_{2}(w)=\tilde{H}(w),$ (4.42) which in general cannot be reduced to a nonsingular Riemann–Hilbert problem, see [33]. Thus, both invariance conditions (4.25) are necessary for the reduction. ###### Remark 4.9.
For the random walks studied in [36, 16], the corresponding Riemann surface and the covering maps $h_{l}$ can be more complicated than for second-order elliptic operators, which requires more sophisticated methods of Galois theory. ## 5 Nonconvex angles of magnitude $\Phi>\pi$ The extension of the theory outlined above to the case of a nonconvex angle $Q$ differs drastically from the convex one. As an example, consider the Dirichlet boundary value problem in the angle $Q=\mathbb{R}^{2}\setminus\mathbb{R}^{+}\times\mathbb{R}^{+}$: $\left\\{\begin{array}[]{rcll}Au(x)&=&0,\qquad x\in Q&\\\ \\\ u(x_{1},0)&=&f_{1}(x_{1}),\,\,x_{1}>0;&u(0,x_{2})=f_{2}(x_{2}),\,\,x_{2}>0.\end{array}\right|.$ (5.43) Note that the relations (4.10) and (4.12), (4.16) remain true in this case, but now the function (4.11) is changed to its negative: $F(z)=-P_{1}(z)\hat{f}_{1}(z_{1})-P_{2}(z)\hat{f}_{2}(z_{2})-S_{1}(z)\hat{\varphi}_{1}(z_{1})-S_{2}(z)\hat{\varphi}_{2}(z_{2}),\qquad z\in\mathbb{R}^{2}.$ (5.44) On the other hand, the key relation (4.13) is no longer well defined, in contrast to the case when the support of $u$ belongs to a convex angle. This is due to the fact that the Fourier–Laplace transform (4.8) of the function $u$ with support in a nonconvex angle generally does not admit an analytic continuation to a region of $\mathbb{C}^{2}$. Nevertheless, the function (5.44) in this case is analytic in the same region $Z^{+}=\mathbb{C}^{+}\times\mathbb{C}^{+}$ as the function (4.11). The answer to this riddle was found in [25].
First, the function (5.44) admits the splitting $F(z)=\gamma_{1}(z)+\gamma_{2}(z),\quad\gamma_{1}(z)=-P_{1}(z)\hat{f}_{1}(z_{1})-S_{1}(z)\hat{\varphi}_{1}(z_{1}),\quad\gamma_{2}(z)=-P_{2}(z)\hat{f}_{2}(z_{2})-S_{2}(z)\hat{\varphi}_{2}(z_{2}),$ (5.45) where the functions $\gamma_{l}(z)$ are analytic in the regions $V_{l}^{+}=\\{z\in V:{\rm Im{\hskip 1.42262pt}}z_{l}>0\\},\qquad l=1,2.$ (5.46) Second, as shown in [25] (see also [30, Theorem 20.1]), each function $\gamma_{l}$ admits an analytic continuation from $V_{l}^{+}$ to the region $V^{-}:=\\{z\in V:{\rm Im{\hskip 1.42262pt}}z_{1}<0,\,{\rm Im{\hskip 1.42262pt}}z_{2}<0\\}$, and the following identity holds: $\gamma_{1}(z)+\gamma_{2}(z)=0,\qquad z\in V^{-}.$ (5.47) This identity formally coincides with the undetermined equation (4.14), and it allows us to calculate both unknown functions $\hat{\varphi}_{l}$ by methods of Sections 4.2 and 4.3. ## 6 Time-dependent diffraction by a wedge The time-dependent diffraction by a wedge $W$ is described by the solution of the wave equation in the plane angle $Q=\mathbb{R}^{2}\setminus W$ with appropriate boundary conditions. For example, consider the Dirichlet boundary conditions $\left\\{\begin{array}[]{rcl}\ddot{u}(x,t)&=&\Delta u(x,t),\quad x\in Q\\\ \\\ u(x,t)&=&0,\quad x\in\Gamma_{1}\cup\Gamma_{2}\end{array}\right|,\quad t\in\mathbb{R}.$ (6.48) The incident wave is defined by the initial condition $u(x,t)=u^{in}(x,t),\qquad u^{in}(x,t):=f(kx-\omega_{0}t)e^{i(kx-\omega_{0}t)},\qquad t<0,$ (6.49) where the frequency $\omega_{0}\in\mathbb{R}$ and $k\in\mathbb{R}^{2}$ is the wave vector. The incident wave $u^{in}(x,t)$ must be a solution to (6.48) for $t<0$: $\left\\{\begin{array}[]{rcl}\ddot{u}^{in}(x,t)&=&\Delta u^{in}(x,t),\quad x\in Q\\\ \\\ u^{in}(x,t)&=&0,\quad x\in\Gamma_{1}\cup\Gamma_{2}\end{array}\right|,\quad t<0.$ (6.50) The wave equation in (6.50) holds for any function $f(s)$ for all $t\in\mathbb{R}$ if $|k|=|\omega_{0}|$.
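The last claim is easy to test by finite differences: writing $u^{in}=g(k\cdot x-\omega_{0}t)$ with $g(s)=f(s)e^{is}$, one has $\ddot{u}^{in}=\omega_{0}^{2}g''$ and $\Delta u^{in}=|k|^{2}g''$, so the wave equation holds for an arbitrary profile exactly when $|k|=|\omega_{0}|$. The Gaussian profile below is an illustrative choice, not the profile of the text (which must vanish for $s>0$):

```python
import cmath, math

# Incident wave (6.49): u(x,t) = f(k.x - w0 t) e^{i(k.x - w0 t)} = g(k.x - w0 t).
f = lambda s: math.exp(-s**2)                  # illustrative profile
g = lambda s: f(s) * cmath.exp(1j*s)

def d2(F, a, h=1e-3):                          # central second difference
    return (F(a + h) - 2*F(a) + F(a - h)) / h**2

def wave_residual(k, w0, x1, x2, t):
    """Numerical value of  u_tt - Laplacian(u)  at (x1, x2, t)."""
    phase = lambda y1, y2, tt: k[0]*y1 + k[1]*y2 - w0*tt
    u_tt = d2(lambda tt: g(phase(x1, x2, tt)), t)
    u_11 = d2(lambda y: g(phase(y, x2, t)), x1)
    u_22 = d2(lambda y: g(phase(x1, y, t)), x2)
    return u_tt - u_11 - u_22

k = (0.6, 0.8)                                 # |k| = 1
assert abs(wave_residual(k, 1.0, 0.3, -0.2, 0.1)) < 1e-4   # |w0| = |k|: solves
assert abs(wave_residual(k, 2.0, 0.3, -0.2, 0.1)) > 0.1    # |w0| != |k|: fails
print("wave equation holds iff |k| = |w0|")
```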
The boundary condition in (6.50) can be satisfied only in the case of a nonconvex angle $Q$ of magnitude $\Phi>\pi$ and the wave vector $k$ satisfying the inequalities $k\cdot x\geq 0$ for $x\in W=\mathbb{R}^{2}\setminus Q$. Then for $\omega_{0}>0$ the boundary condition holds if $f(s)=0,\qquad s>0.$ (6.51) ## 7 Limiting amplitude principle Let us assume that there exists the limit $f(-\infty):=\lim_{s\to-\infty}f(s),$ (7.52) and the convergence is sufficiently fast, for example, $f(s)=\theta(-s)$. Then the incident wave (6.49) admits the long-time asymptotics $u^{in}(x,t)\sim f(-\infty)e^{ikx}e^{-i\omega_{0}t},\qquad t\to\infty$ (7.53) which suggests similar asymptotics of the solution $u(x,t)\sim a_{\omega_{0}}(x)e^{-i\omega_{0}t},\qquad t\to\infty.$ (7.54) Such asymptotics are known as the limiting amplitude principle. Determination of the limiting amplitudes $a_{\omega_{0}}(x)$ for different diffraction processes is the main goal of the theory of diffraction [7, 65] (see also [30]), while the rigorous proof of the asymptotics is the central problem of the mathematical theory of diffraction. For diffraction by wedges, these asymptotics were established for the first time in [26]. Formal substitution of the asymptotics (7.54) into (6.48) gives a problem of type (3.1): $\left\\{\begin{array}[]{rcl}-\omega_{0}^{2}a_{\omega_{0}}(x)&=&\Delta a_{\omega_{0}}(x),\quad x\in Q\\\ \\\ a_{\omega_{0}}(x)&=&0,\quad x\in\Gamma_{1}\cup\Gamma_{2}\end{array}\right|.$ (7.55) However, this boundary problem is ill-posed since it admits an infinite number of linearly independent solutions for real $\omega_{0}\in\mathbb{R}$. Thus, this problem does not allow us to find the limiting amplitude. This fact is the main peculiarity of the diffraction theory. This can be easily checked in the case $\Phi=\pi$ when the angle $Q$ is the half-plane, so all solutions can be calculated by the Fourier transform along the boundary $\partial Q$.
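The nonuniqueness for $\Phi=\pi$ is easy to exhibit concretely: for the half-plane $\\{x_{2}>0\\}$, every function $a_{k}(x)=e^{ik_{1}x_{1}}\sin(k_{2}x_{2})$ with $k_{1}^{2}+k_{2}^{2}=\omega_{0}^{2}$ solves the analogue of (7.55), giving a continuum of linearly independent solutions. A quick finite-difference check of this standard family:

```python
import cmath, math

# Half-plane {x2 > 0} with Dirichlet condition on x2 = 0:
# a_k(x) = e^{i k1 x1} sin(k2 x2), k1^2 + k2^2 = w0^2, solves
# Laplacian(a) + w0^2 a = 0 for every k1 in [-w0, w0].
w0 = 1.0

def a_k(k1, x1, x2):
    k2 = math.sqrt(w0**2 - k1**2)
    return cmath.exp(1j*k1*x1) * math.sin(k2*x2)

def helmholtz_residual(k1, x1, x2, h=1e-3):
    """Numerical value of  Laplacian(a) + w0^2 a  at (x1, x2)."""
    lap = (a_k(k1, x1 + h, x2) + a_k(k1, x1 - h, x2)
         + a_k(k1, x1, x2 + h) + a_k(k1, x1, x2 - h)
         - 4*a_k(k1, x1, x2)) / h**2
    return lap + w0**2 * a_k(k1, x1, x2)

for k1 in (0.0, 0.5, 0.9):
    assert abs(helmholtz_residual(k1, 0.7, 1.3)) < 1e-4    # Helmholtz equation
    assert abs(a_k(k1, 0.7, 0.0)) < 1e-15                  # Dirichlet boundary
print("ill-posedness: a solution for every k1 in [-w0, w0]")
```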
For the problems of type (7.55) in convex angles of magnitude $\Phi<\pi$, this nonuniqueness was discovered in 1973 by one of the authors [45]. Let us recall how to prove the asymptotics (7.54) and how to calculate the limiting amplitudes $a_{\omega_{0}}(x)$. First, note that for the incident wave $u^{in}(x,t)$ the asymptotics of type (7.54) holds by (7.52): $u^{in}(x,t)\sim f(-\infty)e^{ikx}e^{-i\omega_{0}t},\qquad t\to\infty.$ (7.56) The reflected wave is defined by geometric optics, and its main properties are as follows: $u^{r}(x,t)=-u^{in}(x,t),\quad x\in\partial Q;\qquad u^{r}(x,t)\sim a^{r}(x)e^{-i\omega_{0}t},\qquad t\to\infty.$ (7.57) The diffracted wave $u^{d}(x,t)$ is defined by splitting the total solution as $u(x,t)=u^{in}(x,t)+u^{r}(x,t)+u^{d}(x,t).$ (7.58) Hence, it remains to calculate the corresponding asymptotics for the diffracted wave $u^{d}(x,t)\sim a^{d}_{\omega_{0}}(x)e^{-i\omega_{0}t},\qquad t\to\infty.$ (7.59) Substituting (7.58) into (6.48), using (7.57) and the fact that the wave equation in (6.50) holds for all $t\in\mathbb{R}$, we get the boundary problem for the diffracted wave $\left\\{\begin{array}[]{rcl}\ddot{u}^{d}(x,t)&=&\Delta u^{d}(x,t)+F(x,t),\quad x\in Q\\\ \\\ u^{d}(x,t)&=&0,\quad x\in\Gamma_{1}\cup\Gamma_{2}\end{array}\right|,\qquad F(x,t):=(\partial_{t}^{2}-\Delta)u^{r}(x,t)\sim b(x)e^{-i\omega_{0}t},\,\,\,t\to\infty.$ (7.60) Formal substitution of the asymptotics (7.59) into (7.60) gives the boundary problem $\left\\{\begin{array}[]{rcl}-\omega_{0}^{2}a^{d}_{\omega_{0}}(x)&=&\Delta a^{d}_{\omega_{0}}(x)+b(x),\quad x\in Q\\\ \\\ a^{d}_{\omega_{0}}(x)&=&0,\quad x\in\Gamma_{1}\cup\Gamma_{2}\end{array}\right|.$ (7.61) For $\omega_{0}\in\mathbb{R}$, this system also admits an infinite number of linearly independent solutions, as does (7.55). A similar problem of nonuniqueness arises in every diffraction problem in unbounded regions.
The problem of nonuniqueness was resolved by the discovery of additional features of the limiting amplitude $a^{d}_{\omega_{0}}(x)$. The key discovery was the limiting absorption principle (3.4) for the limiting amplitude of the diffracted wave. In application to problem (7.61), we have $a^{d}_{\omega_{0}}(x)=\lim_{\varepsilon\to 0+}a^{d}_{\omega_{0}+i\varepsilon}(x),\qquad x\in Q,$ (7.62) where $a^{d}_{\omega_{0}+i\varepsilon}$ denotes a solution to (7.61) with $\omega_{0}+i\varepsilon$ instead of $\omega_{0}$. ###### Remark 7.1. The convergence (7.62) holds for the limiting amplitude $a^{d}_{\omega_{0}}(x)$ of the diffracted wave $u^{d}_{\omega_{0}}(x,t)$ (a formal proof can be found in [30, Section 4.1]). However, it does not hold for the limiting amplitude $a(x)$ of the total solution $u(x,t)$, although these amplitudes satisfy quite similar equations (7.61) and (7.55). The difference is that the initial state of the diffracted wave $u^{d}_{\omega_{0}}(x,0)$ is of finite energy (in our case zero), while for the total solution the initial state $(u(x,0),\dot{u}(x,0))$ is the plane wave (6.49) and its derivative in time at $t=0$. The limiting absorption principle was introduced for the first time in 1905 by W. Ignatowsky [19]. Rigorous proofs of this principle for limiting amplitudes of solutions with finite energy initial states were achieved much later. The results for the wave and Schrödinger equations in the entire space and for diffraction problems with smooth boundaries were obtained by Agmon [1], Eidus [13, 14, 15], Jensen and Kato [20], A.Ya. Povzner [51], B.R. Vainberg [66] and others.
The convergence (7.62) for stationary diffraction problems was established for the first time in 1977 by one of the authors [45]: it was proven that the stationary problem (7.61) and the problems (3.2) with $A=\Delta+\omega^{2}$ and general boundary conditions in convex angles of magnitude $\Phi<\pi$: i) for complex $\omega\not\in\mathbb{R}$ admit only a finite number of linearly independent solutions in an appropriate class of functions; ii) for real $\omega\in\mathbb{R}$ admit an infinite number of linearly independent solutions; iii) for real $\omega\in\mathbb{R}$ admit only a finite number of linearly independent solutions satisfying (7.62). For the time-dependent diffraction problem (6.48), (6.49), the limiting absorption principle (7.62) and the limiting amplitude principle (7.54) were justified in 2006 by the authors [26]. The proofs rely on the analysis of the Fourier–Laplace transform in time: $\tilde{u}(x,\omega)=\int_{0}^{\infty}e^{i\omega t}u(x,t)dt,\qquad\omega\in\mathbb{C}^{+}.$ (7.63) The function $\tilde{u}(x,\omega)$ satisfies a boundary value problem of type (7.61) with complex $\omega\not\in\mathbb{R}$. In this case the Helmholtz operator $A=\Delta+\omega^{2}$ is strongly elliptic. Hence, $\tilde{u}(x,\omega)$ can be calculated and analysed by the methods described in previous sections. The limiting amplitude is calculated in [26] using the limit (7.62). ###### Remark 7.2. In 1912, A. Sommerfeld discovered the Sommerfeld radiation condition [62] (see also [52]), which provides the uniqueness of the solution to the boundary problem of type (7.61) in the case when $Q$ is the exterior of a bounded region in $\mathbb{R}^{3}$. This condition is more practical for numerical calculation of the limiting amplitudes than (7.62). ## 8 The Sommerfeld diffraction theory and related results For the angle $\Phi=2\pi$, A. Sommerfeld constructed in 1896 a solution $a(x)$ to the stationary diffraction problem of type (7.55) with the Dirichlet and Neumann boundary conditions.
In this case the wedge is the half-plane, which is represented by the semi-axis $[0,\infty)$ in the corresponding 2D problem. The main ideas were i) to treat the semi-axis as the cut on an appropriate Riemann surface, and ii) to extend the known method of reflections to Riemann surfaces. As a result, A. Sommerfeld constructed a universal integral representation of a class of branching solutions of the Helmholtz equation on the Riemann surface in the form of the Sommerfeld integral with a fixed integral kernel and a suitable density function. Further, A. Sommerfeld chose appropriate densities to satisfy the boundary conditions. Sommerfeld’s strategy of constructing the solution remains a mysterious riddle to this day. This approach is reproduced with some comments in [30, Ch. 5], see also [48]. However, the Sommerfeld integral representation turned out to be extremely fruitful, and in particular, was used by G.D. Malujinetz to solve the problem with the Leontovich boundary condition [34, 35], see also [2]. For any angle $\Phi\in(0,2\pi)$, the stationary diffraction problem (7.55) with the Dirichlet and Neumann boundary conditions was solved by other methods in 1920 by H.S. Carslaw [8], in 1932–1937 by V.I. Smirnov and S.L. Sobolev [55, 56, 57, 58, 59], and in 1951 by J.B. Keller and A. Blank [21]. ###### Remark 8.1. i) In all the works cited above, the limiting amplitude principle was not established, and the choice of a suitable solution of the ill-posed problem (7.55) was not rigorously clarified. Nevertheless, as shown in [28, 29, 46], all the obtained solutions coincide with the limiting amplitudes calculated in [26] and admit the Sommerfeld representation. ii) S.L. Sobolev mentions, in the articles cited above, that the functions of type (6.49) must be solutions to the wave equation even if the amplitude $f(s)$ is a discontinuous function. These remarks later inspired the theory of weak derivatives of S.L.
Sobolev and the theory of distributions of L. Schwartz. ## References * [1] S. Agmon, Spectral properties of Schrödinger operator and scattering theory, Ann. Scuola Norm. Sup. Pisa, Ser. IV 2 (1975), 151–218. * [2] V. M. Babich, M. A. Lyalinov, V. E. Grikurov, The Sommerfeld–Malyuzhinets Technique in Diffraction Theory, Alpha Science International, Oxford, 2007. * [3] J.-M. L. Bernard, Diffraction by a metallic wedge covered with a dielectric material, Wave Motion 9 (1987), 543–561. * [4] J.-M. L. Bernard, A spectral approach for scattering by impedance polygons, Quarterly Journal of Mechanics and Applied Mathematics 59 (2006), no. 4, 517–550. * [5] A.-S. Bonnet-Bendhia, M. Dauge, K. Ramdani, Spectral analysis and singularities of a non-coercive transmission problem, C. R. Acad. Sci. Paris, Sér. I, Math. 328 (1999), no. 8, 717–720. * [6] A.-S. Bonnet-Ben Dhia, P. Joly, Mathematical analysis of guided water waves, SIAM J. Appl. Math. 53 (1993), no. 6, 1507–1550. * [7] M. Born, E. Wolf, Principles of Optics, Cambridge University Press, Cambridge, 1966. * [8] H.S. Carslaw, Diffraction of waves by a wedge of any angle, Lond. M. S. Proc. (Series 2) 18 (1920), no. 1, 291–306. * [9] L. P. Castro, D. Kapanadze, Wave diffraction by wedges having arbitrary aperture angle, Mathematical Methods in the Applied Sciences 421 (2015), no. 2, 1295–1314. * [10] M. Costabel, E. Stephan, Boundary integral equations for mixed boundary value problems in polygonal domains and Galerkin approximation, Banach Center Publicat. 15 (1985), 175–251. * [11] R. Courant, D. Hilbert, Methods of Mathematical Physics, Vol. I, Interscience Publishers, New York, 1953; Vol. II, Interscience Publishers, New York, 1962. * [12] M. Dauge, Elliptic Boundary Value Problems on Corner Domains. Smoothness and Asymptotics of Solutions, Lecture Notes in Mathematics 1341, Springer, Berlin, 1988. * [13] D. M. Eidus, The principle of limiting absorption, Am. Math. Soc., Transl., II. Ser. 47 (1965), 157–191. * [14] D. M.
Eidus, The principle of limit amplitude, Russ. Math. Surv. 24 (1969), no. 3, 97–167. * [15] D. M. Eidus, The limiting amplitude principle for the Schrödinger equation in domains with unbounded boundaries, Asymptotic Anal. 2 (1989), no. 2, 95–99. * [16] G. Fayolle, R. Iasnogorodski, V.A. Malyshev, Random Walks in the Quarter Plane, Applications to Queueing Systems and Analytic Combinatorics, Springer, 2017. * [17] C.F. Gauss, Intensitas vis magneticae terrestris ad mensuram absolutam revocata, Dieterich, Göttingen, 1833. * [18] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman, Boston, 1985. * [19] W. Ignatowsky, Reflexion elektromagnetischer Wellen an einem Drahte, Ann. der Physik 18 (1905), no. 13, 495–522. * [20] A. Jensen, T. Kato, Spectral properties of Schrödinger operators and time-decay of the wave functions, Duke Math. J. 46 (1979), 583–611. * [21] J. Keller, A. Blank, Diffraction and reflection of pulses by wedges and corners, Comm. Pure and Appl. Math. 4 (1951), no. 1, 75–95. * [22] A. I. Komech, Elliptic boundary value problems on manifolds with piecewise smooth boundary, Math. USSR Sbornik 21 (1973), no. 1, 91–135. * [23] A. I. Komech, Elliptic differential equations with constant coefficients in a cone, Moscow Univ. Math. Bull. 29 (1974), no. 2, 140–145. * [24] A. I. Komech, N. J. Mauser, A. E. Merzon, On Sommerfeld representation and uniqueness in scattering by wedges, Math. Methods in Appl. Sci. 28 (2005), no. 2, 147–183. * [25] A. I. Komech, A. E. Merzon, General boundary value problems in regions with corners, pp. 171–183 in Operator Theory. Advances and Applications, Vol. 57, Birkhäuser, Basel, 1992. * [26] A. I. Komech, A. E. Merzon, Limiting amplitude principle in the scattering by wedges, Math. Methods in Appl. Sci. 29 (2006), no. 10, 1147–1185. * [27] A. I. Komech, A. E. Merzon, Relation between Cauchy data for the scattering by a wedge, Russian J. Math. Phys. 14 (2007), no. 3, 279–303. * [28] A. I. Komech, A. E.
Merzon, On uniqueness and stability of Sobolev’s solution in scattering by wedges, J. Appl. Math. Phys. (ZAMP) 66 (2015), no. 5, 2485–2498. * [29] A. I. Komech, A. E. Merzon, A. Esquivel Navarrete, J. E. De La Paz Méndez, T. J. Villalba Vega, Sommerfeld’s solution as the limiting amplitude and asymptotics for narrow wedges, Math. Methods in Appl. Sci. (2018) 1–14. https://doi.org/10.1002/mma.5075. * [30] A.I. Komech, A.E. Merzon, Stationary Diffraction by Wedges. Method of Automorphic Functions on Complex Characteristics, Springer, 2019. * [31] A. I. Komech, A. E. Merzon, P. N. Zhevandrov, On completeness of Ursell’s trapping modes, Russian J. Math. Phys. 4 (1996), no. 4, 55–85. * [32] A.I. Komech, A.E. Merzon, P.N. Zhevandrov, A method of complex characteristics for elliptic problems in angles and its applications, pp 125–159 in: Translations. Series 2. AMS 206, Providence, RI, 2002. * [33] G. S. Litvinchuk, Solvability Theory of Boundary Value Problems and Singular Integral Equations with Shift, Springer Netherlands, 2000. * [34] G. D. Malyuzhinets, Inversion formula for the Sommerfeld integral, Soviet Phys. Dokl. 3 (1958), 52–56. * [35] G. D. Malujinetz, Excitation, reflection and emission of surface waves from a wedge with given face impedances, Soviet Phys. Dokl. 3 (1959), 752–755. * [36] V.A. Malyshev, Random Walks, Wiener–Hopf Equations in the Quadrant of Plane, Galois Automorphisms, Moscow University, 1970. * [37] V. G. Maz’ya, B. A. Plamenevskii, Problems with oblique derivatives in regions with piecewise smooth boundaries, Func. Anal. Appl. 5 (1971), 256–258. * [38] V. G. Maz’ya, B. A. Plamenevskii, On boundary value problems for second-order elliptic equations in a domain with wedges, Vest. Leningr. Univ., Math. 1 (1975), 102–108. * [39] E. Meister, Some solved and unsolved canonical problems of diffraction theory, pp 320–336 in: Lecture Notes in Math. 1285, 1987. * [40] E. Meister, A. Passow, K.
Rottbrand, New results on wave diffraction by canonical obstacles, Operator Theory: Adv. Appl. 110 (1999), 235–256. * [41] E. Meister, F. Penzel, F. O. Speck, F. S. Teixeira, Some interior and exterior boundary-value problems for the Helmholtz equations in a quadrant, Proc. Roy. Soc. Edinburgh Sect. A 123 (1993), no. 2, 275–294. * [42] E. Meister, F. Penzel, F.-O. Speck, F. S. Teixeira, Two canonical wedge problems for the Helmholtz equation, Math. Methods Appl. Sci. 17 (1994), 877–899. * [43] E. Meister, F. O. Speck, F.S. Teixeira, Wiener-Hopf-Hankel operators for some wedge diffraction problems with mixed boundary conditions, J. Integral. Equat. and Applications 4 (1992), no. 2, 229–255. * [44] A. E. Merzon, On the solvability of differential equations with constant coefficients in a cone, Soviet Math. Dokl. 14 (1973), no. 4, 1012–1015. * [45] A. E. Merzon, General boundary value problems for the Helmholtz equations in a plane angle, Uspekhi Math. Nauk 32 (1977), no. 2, 219–220. [Russian] * [46] A. E. Merzon, A. I. Komech, J. E. De la Paz Mendez, T. J. Villalba, On the Keller solution to the scattering problem of pulses by wedges, Math. Methods Appl. Sci. 38 (2015), no. 10, 2035–2040. * [47] A. E. Merzon, P. N. Zhevandrov, High-frequency asymptotics of edge waves on a beach of nonconstant slope, SIAM J. Appl. Math 59 (1998), no. 2, 529–546. * [48] R. J. Nagem, M. Zampolli, G. Sandri, Arnold Sommerfeld: Mathematical Theory of Diffraction, Progress in Mathematical Physics, Vol. 35, Springer Science+Business Media, New York; originally published by Birkhäuser Boston, 2004. * [49] S.A. Nazarov, B.A. Plamenevskij, Elliptic Problems in Domains with Piecewise Smooth Boundaries, De Gruyter, Berlin, 1994. * [50] F. Penzel, F.S. Teixeira, The Helmholtz equation in a quadrant with Robin’s conditions, Math. Methods Appl. Sci. 22 (1999), 201–216. * [51] A. Ya. Povzner, On the expansion of arbitrary functions in characteristic functions of the operator $-\Delta u+cu$, Mat.
Sbornik N.S. 32(74) (1953), 109–156 [Russian]. * [52] S.H. Schot, Eighty years of Sommerfeld’s radiation condition, Historia Mathematica 19 (1992), no. 4, 385–401. * [53] G. E. Shilov, On the boundary value problems in a quadrant for partial differential equations with constant coefficients, Uspekhi Matem. Nauk 15 (1960), no. 4, 218–220. [Russian] * [54] G. E. Shilov, On boundary value problems in a quarter plane for partial differential equations with constant coefficients, Sib. Math. J. 11 (1961), no. 1, 144–160 [Russian]. * [55] V. I. Smirnov, S. L. Sobolev, Sur une méthode nouvelle dans le probléme plan des vibrations élastiques, Trudy Seismological Institute, Acad. Nauk SSSR 20 (1932), 1–37. * [56] S. L. Sobolev, Theory of diffraction of plane waves, Proceedings of Seismological Institute, no. 41, Russian Academy of Science, Leningrad, 1934. * [57] S. L. Sobolev, General theory of diffraction of waves on Riemann surfaces, Tr. Fiz.-Mat. Inst. Steklova 9 (1935), 39–105. [Russian] (English translation: S.L. Sobolev, General theory of diffraction of waves on Riemann surfaces, p. 201–262 in: Selected Works of S.L. Sobolev, Vol. I, Springer, New York, 2006.) * [58] S. L. Sobolev, Some questions in the theory of propagations of oscillations, pp 468–617 in: F. Frank and P. Mizes (eds), Differential and Integral Equations of Mathematical Physics, Leningrad-Moscow, 1937. [Russian] * [59] S. L. Sobolev, On mixed problem for partial differential equations with two independent variables, Doklady Ac. Sci. USSR 122 (1958), no. 4, 555–558. [Russian] * [60] A. Sommerfeld, Mathematische Theorie der Diffraction, Math. Ann. 47 (1896), 317–341. * [61] A. Sommerfeld, Theoretisches ueber die Beugung der Roentgenstrahlen (German), Z. Math. Phys. 46 (1901), 11–97. * [62] A. Sommerfeld, Die Greensche Funktion der Schwingungsgleichung, Jahresbericht der Deutschen Mathematiker-Vereinigung 21 (1912), 309–353. Reprinted in Gesammelte Schriften, Vol. 1, pp. 272–316. * [63] A.
Sommerfeld, Partial Differential Equations in Physics, Academic Press, New York, New York, 1949. * [64] A. Sommerfeld, Autobiographische Skizze. In Gesammelte Schriften, Vol. 4, pp. 673–682. * [65] A. Sommerfeld, Lectures on theoretical physics Vol. 4, Optics, New York, 1954. * [66] B. R. Vainberg, Principles of radiation, limit absorption and limit amplitude in the general theory of partial differential equations, Russ. Math. Surv. 21 (1966), 115–193.
As in the proof of Proposition \ref{prop:Proofs:GConvergence:WellPosed:liminf}, without loss of generality, assume that $\cF_{n,\eps_n}((\nu_n,v_n)) \leq C$ for some $C > 0$. In particular, this implies that $\nu_n = \mu_n$, and by Proposition \ref{prop:Back:TLp}, $\mu_n$ converges weakly to $\mu$. Since $\nu_n \to \nu$ weakly, the uniqueness of weak limits implies that $\nu = \mu$. We have \[ C\geq \liminf_{n \to \infty} \mathcal{F}_{n,\eps_n}( (\nu_n,v_n) ) \geq \liminf_{n \to \infty} \EnergySnepsn(v_n) \geq \EnergyS(v) = \mathcal{G}((\nu,v)) \] where the last inequality follows from Theorem \ref{thm:Proofs:GConvergence:withoutConstraints}. \end{proof} The $\limsup$ inequality requires two computational lemmas. \begin{lemma} \label{lem:Proofs:GConvergence:IllPosed:diracEnergies} Energy estimates of Dirac deltas. Assume that Assumptions~\ref{ass:Main:Ass:S1}, \ref{ass:Main:Ass:M1}, \ref{ass:Main:Ass:M2}, \ref{ass:Main:Ass:D1}, \ref{ass:Main:Ass:W1}, \ref{ass:Main:Ass:W2} and~\ref{ass:Main:Ass:L1} hold and that $\eps_n$ satisfies~\eqref{eq:Main:Ass:epsLBIllPosed}. Then, there exists $C > 0$ such that, $\bbP$-a.s., for $n$ large enough we have \begin{equation} \label{eq:Proofs:GConvergence:IllPosed:diracEnergies} \EnergySnepsn(\delta_{x_i}) \leq \frac{C}{n \eps_n^{2s}} \end{equation} for all $x_i \in \Omega_n$. \end{lemma} \begin{proof} With probability one, we can assume that the conclusion of Theorem \ref{thm:Back:TLp:LinftyMapsRate} holds. By assumption~\eqref{eq:Main:Ass:epsLBIllPosed} and Theorem \ref{thm:Back:TLp:LinftyMapsRate}, we can apply \cite[Lemma 22]{Stuart} for $n$ large enough. We recall the variational definition of the largest eigenvalue: \[ \lambda_{n,n}^s = \sup_{\Vert u \Vert_{\Lp{2}} = 1,\: u \in \mathbb{R}^n} \langle u, \Delta_{n,\eps_n}^s u \rangle_{\Lp{2}(\mu_n)}.
\] Furthermore, for $x_i \in \Omega_n$, \[ \Vert \sqrt{n}\delta_{x_i} \Vert_{\Lp{2}} = \sqrt{\frac{1}{n} \sum_{j=1}^n n \delta_{x_i}(x_j)^2} = 1 \] so that we can estimate: \begin{align} \EnergySnepsn(\delta_{x_i}) &= \langle \delta_{x_i}, \Delta_{n,\eps_n}^s \delta_{x_i} \rangle_{\Lp{2}(\mu_n)} \notag\\ &= \frac{1}{n} \langle \sqrt{n} \delta_{x_i} , \Delta_{n,\eps_n}^s \sqrt{n}\delta_{x_i} \rangle_{\Lp{2}(\mu_n)} \notag\\ &\leq \frac{1}{n}\sup_{\Vert u \Vert_{\Lp{2}} = 1,\: u \in \mathbb{R}^n} \langle u, \Delta_{n,\eps_n}^s u \rangle_{\Lp{2}(\mu_n)} \notag\\ &= \frac{\lambda_{n,n}^s}{n} \notag\\ &\leq \frac{C}{n\eps_n^{2s}}\label{eq:Proofs:GConvergence:IllPosed:dirac:eigenvalueBound} \end{align} where we used \cite[Lemma 22]{Stuart} for \eqref{eq:Proofs:GConvergence:IllPosed:dirac:eigenvalueBound}. \end{proof} \begin{proposition} \label{prop:Proofs:GConvergence:IllPosed:limsup} $\limsup$-inequality in the ill-posed case. Assume that Assumptions~\ref{ass:Main:Ass:S1}, \ref{ass:Main:Ass:M1}, \ref{ass:Main:Ass:M2}, \ref{ass:Main:Ass:D1}, \ref{ass:Main:Ass:W1}, \ref{ass:Main:Ass:W2} and~\ref{ass:Main:Ass:L1} hold and that $\eps_n$ satisfies~\eqref{eq:Main:Ass:epsLBIllPosed}. Assume that $n\eps_n^{2s} \to \infty$ and $\rho \in \Ck{\infty}$. Let $(\nu,v) \in \TLp{2}(\Omega)$. Then, $\bbP$-a.s., there exists $\{(\nu_n,v_n)\} \subseteq \TLp{2}(\Omega)$ such that $(\nu_n,v_n) \to (\nu,v)$ in $\TLp{2}$ and \begin{equation}\label{eq:Proofs:GConvergence:IllPosed:limsup} \limsup_{n\to \infty} \cF_{n,\eps_n}((\nu_n,v_n)) \leq \cG((\nu,v)). \end{equation} \end{proposition} \begin{proof} In the proof, $C>0$ denotes a constant, independent of $n$, that can be arbitrarily large and may change from line to line. With probability one, we can assume that the conclusions of Theorem \ref{thm:Back:TLp:LinftyMapsRate} and Lemma \ref{lem:Proofs:GConvergence:IllPosed:diracEnergies} hold. We note that \eqref{eq:Proofs:GConvergence:IllPosed:limsup} is trivial if $\mathcal{G}((\nu,v)) = \infty$.
Hence, we may assume that $\mathcal{G}((\nu,v)) < \infty$; in particular, this implies that $\nu = \mu$ and $\mathcal{G}((\nu,v)) = \EnergyS(v)$. We therefore need to prove \eqref{eq:Proofs:GConvergence:IllPosed:limsup} on $\{\mu\}\times\mathcal{H}^s(\Omega) \subseteq \TLp{2}$. We start by assuming that $v \in \Ck{\infty}$, which is dense in $\mathcal{H}^s(\Omega)$. Let $\bar{v}_n$ be the restriction of $v$ to $\Omega_n$ and consider the recovery sequence $\{(\mu_n,\bar{v}_n)\}_{n=1}^\infty \subseteq \TLp{2}(\Omega)$. By Theorem \ref{thm:Back:TLp:LinftyMapsRate}, we get transport maps $\{T_n\}_{n=1}^\infty$ from $\mu$ to $\mu_n$. Repeating the proof of Proposition \ref{prop:Proofs:GConvergence:WellPosed:limsup}, we can show that $(\mu_n,\bar{v}_n) \to (\mu,v)$ in $\TLp{2}$ and that $\limsup_{n \to \infty} \EnergySnepsn(\bar{v}_n) \leq \EnergyS(v)$. We define the functions \[ v_n(x_i) = \begin{cases} \bar{v}_n(x_i) &\text{if $i \geq N + 1$,}\\ \ell_i & \text{if $i \leq N$,} \end{cases} \] and estimate as follows: \[ \int_\Omega \vert v_n \circ T_n - v \vert^2 \,\dd\mu \leq 2 \int_\Omega \vert v_n \circ T_n - v \circ T_n \vert^2 \,\dd\mu + 2 \int_\Omega \vert v \circ T_n - v \vert^2 \,\dd\mu =: 2(A + B). \] As in the proof of Proposition \ref{prop:Proofs:GConvergence:WellPosed:liminf}, we see that $B \to 0$. For the $A$ term, we have \begin{align} A &\leq \int_{\{x \spaceBar T_n(x) \neq x_i \text{ for $i \leq N$}\}} \vert v_n \circ T_n - v \circ T_n \vert^2 \, \dd\mu + \sum_{i=1}^N \int_{\{x \spaceBar T_n(x) = x_i\}} \vert v_n \circ T_n - v \circ T_n \vert^2 \, \dd\mu \notag\\ &=\sum_{i=1}^N \vert v(x_i) - \ell_i \vert^2 \mu(\{x \spaceBar T_n(x) = x_i\}) \notag\\ &= \sum_{i=1}^N \vert v(x_i) - \ell_i \vert^2 \mu_n(\{x_i\}) \notag\\ &= \sum_{i=1}^N \frac{\vert v(x_i) - \ell_i \vert^2}{n} \notag \end{align} where the first integral vanishes since $v_n = \bar{v}_n = v$ on $\Omega_n \setminus \{x_1,\dots,x_N\}$. As $N$ is fixed, $A \to 0$, from which we deduce that $(\mu_n,v_n) \to (\mu,v)$ in $\TLp{2}$. Let $n$ be large enough so that \eqref{eq:Proofs:GConvergence:IllPosed:diracEnergies} holds.
We now show that \[ \limsup_{n \to \infty} \mathcal{F}_{n,\eps_n}((\mu_n,v_n)) = \limsup_{n \to \infty} \EnergySnepsn(v_n) \leq \EnergyS(v). \] To this purpose, we note that \[ v_n = \bar{v}_n + \sum_{i = 1}^N \delta_{x_i} (\ell_i - \bar{v}_n(x_i)) \] and applying Lemma \ref{lem:Proofs:GConvergence:IllPosed:Minkowski}, we obtain: \begin{align} \sqrt{\EnergySnepsn(v_n)} &\leq \sqrt{\EnergySnepsn(\bar{v}_n)} + \sum_{i=1}^N \vert \ell_i - \bar{v}_n(x_i) \vert \sqrt{\EnergySnepsn(\delta_{x_i})} \notag\\ &\leq \sqrt{\EnergySnepsn(\bar{v}_n)} + C \sum_{i=1}^N \sqrt{\EnergySnepsn(\delta_{x_i})} \label{eq:Proofs:GConvergence:IllPosed:limsup:linfty}\\ &\leq \sqrt{\EnergySnepsn(\bar{v}_n)} + C \l \frac{1}{n\eps_n^{2s}} \r^{1/2} \label{eq:Proofs:GConvergence:IllPosed:limsup:energy} \end{align} where we used the fact that $v$ is bounded in $\Lp{\infty}$ on $\Omega$ (and hence so is its restriction $\bar{v}_n$ to $\Omega_n$) for \eqref{eq:Proofs:GConvergence:IllPosed:limsup:linfty}, and \eqref{eq:Proofs:GConvergence:IllPosed:diracEnergies} for \eqref{eq:Proofs:GConvergence:IllPosed:limsup:energy}. Letting $n \to \infty$ in the latter, we have \[ \limsup_{n \to \infty} \sqrt{\EnergySnepsn(v_n)} \leq \limsup_{n \to \infty} \sqrt{\EnergySnepsn(\bar{v}_n)} + \limsup_{n \to \infty} C \l \frac{1}{n\eps_n^{2s}} \r^{1/2} \leq \sqrt{\EnergyS(v)} \] since by assumption $n\eps_n^{2s} \to \infty$ and $\limsup_{n \to \infty} \EnergySnepsn(\bar{v}_n) \leq \EnergyS(v)$. Having shown \eqref{eq:Proofs:GConvergence:IllPosed:limsup} on $\{\mu\}\times \Ck{\infty}$, we conclude by noting that it is sufficient to establish the existence of a recovery sequence on a dense subset \cite[Remark 2.7]{Trillos3}. \end{proof} \subsection{Bounds on Minimizers} \label{subsec:Proofs:Linfty} \begin{lemma} \label{lem:Proofs:Bounds:UniformBounds} Uniform bound of energies for minimizers.
Assume that Assumptions~\ref{ass:Main:Ass:S1}, \ref{ass:Main:Ass:M1}, \ref{ass:Main:Ass:M2}, \ref{ass:Main:Ass:D1}, \ref{ass:Main:Ass:W1}, \ref{ass:Main:Ass:W2} and~\ref{ass:Main:Ass:L1} hold. Assume $\rho \in \Ck{\infty}$. Let $\{(\mu_n,u_n)\}_{n=1}^\infty$ be the minimizers of $\cF_{n,\eps_n}(\cdot)$. Then, there exists $C >0$ such that, $\bbP$-a.s., we have \[ \sup_{n} \cF_{n,\eps_n}( (\mu_n,u_n) ) \leq C. \] \end{lemma} \begin{proof} In the proof, $C>0$ denotes a constant, independent of $n$, that can be arbitrarily large and may change from line to line. With probability one, we can assume that the conclusion of Proposition~\ref{prop:Proofs:GConvergence:WellPosed:limsup} holds. Let $v \in C^\infty(\Omega)$ be a function that interpolates all the points $\{(x_i,\ell_i)\}_{i=1}^N$. Since $\rho \in C^\infty$, we have that $\Delta^s_\rho v \in C^\infty$, implying that $v \Delta^s_\rho v \in C^\infty$. We have \[ \EnergyS(v) = \int_\Omega v \Delta^s_\rho v \,\dd\mu < K \] for some $K > 0$. By Proposition \ref{prop:Proofs:GConvergence:WellPosed:limsup}, there exists a sequence $\{v_n\}$ converging to $v$ such that \[ \lim_{n \to \infty} h_n := \lim_{n \to \infty} \sup_{m \geq n } \mathcal{E}^{(s)}_{m,\eps_m}(v_m) = \limsup_{n \to \infty} \EnergySnepsn(v_n) = \limsup_{n \to \infty} \mathcal{F}_{n,\eps_n}((\mu_n,v_n)) \leq \EnergyS(v) < K. \] Let $h := \limsup_{n \to \infty} \EnergySnepsn(v_n)$ and let $\bar{\eps} = K - h > 0$. Then, there exists $n_0$ such that for all $n \geq n_0$, $\vert h_n - h \vert < \bar{\eps}/2$, so that $h_n = \sup_{m \geq n} \mathcal{E}^{(s)}_{m,\eps_m}(v_m) < h + \bar{\eps}/2 < K$.
Using the latter, we have \[ \sup_{n} \EnergySnepsn(v_n) = \max\{\mathcal{E}_{1,\eps_1}^{(s)}(v_1),\dots,\mathcal{E}_{n_0,\eps_{n_0}}^{(s)}(v_{n_0}),\sup_{n \geq n_0} \EnergySnepsn(v_n)\} \leq C. \] Since $\{u_n\}_{n=1}^\infty$ are minimizers, we have $\EnergySnepsn(u_n) \leq \EnergySnepsn(v_n)$, which implies \[ \sup_n \mathcal{F}_{n,\eps_n}((\mu_n,u_n)) = \sup_n \EnergySnepsn(u_n) \leq \sup_n \EnergySnepsn(v_n) \leq C. \qedhere \] \end{proof} \subsection{Proof of Theorem~\ref{thm:Main:Res:ConsFracLap}} \begin{proof}[Proof of Theorem \ref{thm:Main:Res:ConsFracLap}] In the proof, $C>0$ denotes a constant, independent of $n$, that can be arbitrarily large and may change from line to line. \paragraph{Well-Posed Case.} With probability one, we can assume that the conclusions of Lemma \ref{lem:Proofs:Bounds:UniformBounds}, Proposition \ref{prop:Proofs:Bounds:Poincaré}, Theorem \ref{thm:Proofs:GConvergence:withoutConstraints}, Proposition \ref{prop:Proofs:Compactness:L2Compactness} and Proposition \ref{prop:Proofs:GConvergence:WellPosed:liminf} hold. Using Lemma \ref{lem:Proofs:Bounds:UniformBounds} and Proposition \ref{prop:Proofs:Bounds:Poincaré}, we have \[ \Vert u_n \Vert_{\Lp{\infty}} \leq \Vert u_n - \frac{1}{N} \sum_{i=1}^N u_n(x_i) \Vert_{\Lp{\infty}} + \Vert \frac{1}{N} \sum_{i=1}^N u_n(x_i) \Vert_{\Lp{\infty}} \leq C \sqrt{\EnergySnepsn(u_n)} + \frac{1}{N} \sum_{i=1}^N \vert\ell_i\vert \leq C.
\] Hence the minimizers are bounded in $\TLp{2}$ and $\max\l \sup_{n} \Vert u_{n} \Vert_{\Lp{\infty}}, \sup_{n} \EnergySnepsn(u_{n}) \r \leq C$. By Proposition~\ref{prop:Proofs:Compactness:L2Compactness} there exist $u$ and a subsequence of $\{u_n\}$ converging to $u$ uniformly and in $\TLp{2}$. By Propositions~\ref{prop:Proofs:GConvergence:WellPosed:liminf} and \ref{prop:Proofs:GConvergence:WellPosed:limsup} it follows that $(\mu,u)$ is a minimizer of $\cF$. By uniqueness of the minimizer of $\cF$ it follows that the whole sequence $\{( \mu_n,u_n)\}$ converges in $\TLp{2}$ to $(\mu,u)$. \paragraph{Ill-Posed Case.} With probability one, we can assume that the conclusions of Theorem \ref{thm:Proofs:GConvergence:withoutConstraints}, Proposition \ref{prop:Proofs:GConvergence:IllPosed:liminf} and Proposition \ref{prop:Proofs:GConvergence:IllPosed:limsup} hold. By Propositions \ref{prop:Proofs:GConvergence:IllPosed:liminf} and \ref{prop:Proofs:GConvergence:IllPosed:limsup}, we know that $\mathcal{F}_{n,\eps_n}$ $\Gamma$-converges to $\mathcal{G}(\cdot)$. By Theorem \ref{thm:Proofs:GConvergence:withoutConstraints}, $\mathcal{F}_{n,\eps_n}(\cdot)$ satisfies the compactness property. Hence, by Proposition~\ref{prop:Back:Gamma:minimizers} we can conclude the result. \end{proof} \section{Numerical Experiments} \label{sec:NumExp} \begin{figure} \centering \includegraphics[width = \textwidth]{minimizers.png} \caption{Plots of the discrete and continuum minimizers with the setting described in Section \ref{subsec:criticalBoundary}. The values at the points $(0.1,0.1)$ and $(0.9,0.9)$ are marked with black squares. \textit{Left}: Discrete minimizer computed with $n = 1733$ points. \textit{Right}: Continuum minimizer.} \end{figure} \subsection{Critical boundary between well-posed and ill-posed regimes} \label{subsec:criticalBoundary} In this section, we will investigate the gap in the upper bound alluded to in Remark \ref{rem:Main:Res:Relationship}.
In particular, we will rely on the same methodology as in [53]. To test the gap between the well-posed and ill-posed regimes, i.e. $n^{-\frac{2}{s-1}}\lesssim \eps_n\lesssim n^{-\frac{1}{2s}}$ (by Corollary~\ref{cor:Main:Res:ConsFracLap} we know that $\eps_n\lesssim n^{-\frac{2}{s-1}}$ implies asymptotic well-posedness and $\eps_n\gg n^{-\frac{1}{2s}}$ implies asymptotic ill-posedness), we consider the following setting: we choose the uniform measure on $[0,1]^2$ with periodic boundary conditions, the kernel function $\eta(t) = 1$ if $t\leq 1$ and $\eta(t) = 0$ otherwise, and $s = 16$. This choice of $s$ satisfies the constraint $s > 2d + 9 = 13$ from Remark \ref{rem:Main:Res:LowerBoundS}. The training set consists of the points $(0.1,0.1)$ and $(0.9,0.9)$, labelled $0$ and $1$ respectively. We vary the number of points in our graph from $n = 100$ to $n = 5000$. For each $n$, we also consider a wide range of $\eps_n$, ranging from roughly the connectivity radius to above $n^{-1/(2s)}$. For each combination of $(n,\eps_n)$, we then compute the discrete minimizer $u_n$ using Lagrange multipliers and compare it to the continuum solution $u$. The continuum solution is computed using a finite-difference scheme on a regular grid of 10000 points on $[0,1]^2$. The error considered is \begin{equation} \label{eq:numExp:error} \text{err}(n,\eps_n,u_n) = \Vert u_n - u \Vert_{\Lp{2}(\mu_n)} \end{equation} where, in order to evaluate $u$ on the graph, we use spline interpolation. Finally, by re-sampling the points for each $n$, we repeat the experiments one hundred times and average the error. We are interested in finding the value of $\eps_n$ at which fractional Laplacian learning switches from the well-posed to the ill-posed regime.
In order to compute the latter, we start by smoothing the function $\eps_n \mapsto \text{err}(n,\eps_n,u_n)$ and compute the maximizer of its first derivative and the minimizer of its second derivative, which we denote by $\hat{\eps}_n$ and $\eps_n^{*}$ respectively. We restrict both $\hat{\eps}_n$ and $\eps_n^*$ to be greater than the minimizer of $\text{err}(n,\cdot,u_n)$. Both $\hat{\eps}_n$ and $\eps_n^*$ could be taken as reasonable definitions of the transition point between the well-posed and ill-posed regimes, and it is therefore interesting to understand how they scale with $n$. Using the five largest values of $n$, the best linear fits between $\log(\hat{\eps}_n)$, $\log(\eps_n^*)$ and $\log(n)$ yield \[ \hat{\eps}_n \approx \frac{0.6541}{n^{0.05}} \quad \text{and} \quad \eps_n^* \approx \frac{0.7312}{n^{0.06}}. \] For $s = 16$, we have $1/(2s) = 0.03125$ and $2/(s-1) \approx 0.134$. Given the plots in Figure \ref{fig:error}, we observe that both $\hat{\eps}_n$ and $\eps_n^*$ seem to scale with powers that are different from $1/(2s)$. In fact, we note that $0.05 \approx 0.06 \approx 1/s = 0.0625$. On the one hand, this indicates that we should be able to extend the well-posed regime of Theorem \ref{thm:Main:Res:ConsFracLap} to \[ \l \frac{1}{n} \r^{2/(s-1)} \ll \eps_n \ll \l \frac{1}{n}\r^{1/s} \] and, by Remark \ref{rem:Main:Res:LowerBoundS}, we could relax our assumption of $s > 2d + 9$ to $s > d + 4$. If we were also able to tackle the lower bound gap (see Remark \ref{rem:Main:Res:LowerBoundGap}), we could furthermore allow $s > d$, in contrast to the $s > d/2$ conjectured in Remark \ref{rem:Main:Res:Sobolev}. On the other hand, we are not able to fully confirm the conjecture made in Remark \ref{rem:Main:Res:Relationship} numerically, and it remains an open question to accurately characterize the behaviour of fractional Laplacian semi-supervised learning when \[ \l \frac{1}{n} \r^{1/s} \ll \eps_n \ll \l \frac{1}{n}\r^{1/(2s)}. \] \begin{figure}[t!]
\setlength\figureheight{0.25\textwidth} \setlength\figurewidth{0.3\textwidth} \centering \scriptsize \begin{subfigure}[t]{0.3\textwidth} \centering \scriptsize \input{LinearFitEpsIll2.tikz} \caption{The blue line is the fit $\frac{0.6541}{n^{0.05}}$ and the red line is $\hat{\eps}_n$.} \end{subfigure} \hspace*{0.04\textwidth} \begin{subfigure}[t]{0.3\textwidth} \centering \scriptsize \input{LinearFitEpsIll.tikz} \caption{The blue line is the fit $\frac{0.7312}{n^{0.06}}$ and the red line is $\eps_n^*$.} \end{subfigure} \hspace*{0.04\textwidth} \begin{subfigure}[t]{0.3\textwidth} \centering \scriptsize \input{Error5000.tikz} \caption{Error \eqref{eq:numExp:error} for $n = 5000$. The blue line is the average $\text{err}(n,\eps_n,u_n)$ and the dashed black lines are the 10\% and 90\% quantiles. The red and black vertical lines respectively indicate $\hat{\eps}_n$ and $\eps_n^*$.} \end{subfigure} \caption{Plots of the errors between the discrete and continuum minimizers with the setting described in Section \ref{subsec:criticalBoundary}.} \label{fig:error} \end{figure} \subsection{Uniform Bounds on Eigenfunctions} \label{subsec:linear} Proposition~\ref{prop:Proofs:Back:DisReg:EVecAlt} implies that we can upper bound the $\Lp{\infty}$ norm of the discrete eigenvectors $\psi_{n,k}$ by \begin{equation} \label{eq:numExp:lambda} \|\psi_{n,k}\|_{\Lp{\infty}}\leq C\lambda_{k}^{d+1} \end{equation} when $k\in\{1,\hdots,\ceil{K_n}\}$ where $K_n=\alpha \eps_n^{-d/2} + 1$. We now investigate the following: on the one hand, we want to see if the power of $\eps_n$ in the definition of $K_n$, namely $-d/2$, is the lowest we can get; on the other hand, we are interested in the optimal power of $\lambda_{k}$ in \eqref{eq:numExp:lambda}. We choose the uniform measure on $[0,1]^2$ with periodic boundary conditions and the kernel function $\eta(t) = 1$ if $t\leq 1$ and $\eta(t) = 0$ otherwise.
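The quantities in \eqref{eq:numExp:lambda} can be read off an eigendecomposition of the graph Laplacian. A minimal sketch of the periodic graph construction (our own illustration, with arbitrary choices of $n$ and $\eps_n$, not the experiment code itself) is:

```python
import numpy as np

# Sketch (our illustration): eps-graph Laplacian on n uniform points in
# [0,1]^2 with periodic boundary conditions and kernel eta(t) = 1_{t <= 1};
# we record the eigenvalues lambda_k and sup-norms of eigenvectors psi_{n,k}.
rng = np.random.default_rng(0)
n, eps = 400, 0.15
x = rng.random((n, 2))

# Flat-torus distances: take the shorter way around in each coordinate.
d = np.abs(x[:, None, :] - x[None, :, :])
d = np.minimum(d, 1.0 - d)
dist = np.sqrt((d ** 2).sum(-1))

W = (dist <= eps).astype(float)
np.fill_diagonal(W, 0.0)
L = (np.diag(W.sum(1)) - W) * 2.0 / (n * eps ** 4)   # d = 2, scaling eps^(d+2)

lam, psi = np.linalg.eigh(L)         # lam sorted increasingly, lam[0] ~ 0
psi *= np.sqrt(n)                    # normalise: ||psi_{n,k}||_{L^2(mu_n)} = 1
sup_norms = np.abs(psi).max(axis=0)  # ||psi_{n,k}||_{L^infty}
```

Since $\Vert \psi_{n,k} \Vert_{\Lp{2}(\mu_n)} = 1$ means $\frac{1}{n}\sum_i \psi_{n,k}(x_i)^2 = 1$, every entry of `sup_norms` is at least $1$; the experiment tracks how much larger than $1$ these sup-norms become as $k$ grows.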
We proceed as follows: given a number of points ranging from $n = 400$ to $n = 5200$, we seek the smallest $\eps_n$ such that the graph is connected. We then compute $\lb \log\l \lambda_{k} \r, \log \l \|\psi_{n,k}\|_{\Lp{\infty}} \r \rb_{k=1}^{k_n^{(i)}}$ where $k_n^{(1)} = \alpha_1 \eps_n^{-d/4}$, $k_n^{(2)} = \alpha_2 \eps_n^{-d/2}$, $k_n^{(3)} = \alpha_3 \eps_n^{-d}$, $k_n^{(4)} = n$ and $\alpha_i$ for $1 \leq i \leq 3$ are constants. While we took the arbitrary choice of $\alpha_i = 4$ for $1 \leq i \leq 3$, empirical trials have shown that the overall conclusion does not change significantly. By re-sampling the points for each $n$, we repeat the experiments one hundred times and average the results. \begin{figure}[t!] \setlength\figureheight{0.35\textwidth} \setlength\figurewidth{0.47\textwidth} \centering \scriptsize \begin{subfigure}[t]{0.47\textwidth} \centering \scriptsize \input{eigenPairs1200.tikz} \caption{$n = 1200$} \end{subfigure} \hspace*{0.04\textwidth} \begin{subfigure}[t]{0.47\textwidth} \centering \scriptsize \input{eigenPairs5200.tikz} \caption{$n = 5200$} \end{subfigure} \caption{Plots of $\log \l \|\psi_{n,k}\|_{\Lp{\infty}} \r$ versus $\log\l \lambda_{k} \r$ for $1 \leq k \leq n$ with the setting described in Section \ref{subsec:linear}. The blue line corresponds to the average of $\log \l \|\psi_{n,k}\|_{\Lp{\infty}} \r$, the dashed black lines are the mean plus/minus the standard deviation and the red vertical lines indicate the values of $\log \l \lambda_{k_n^{(i)}} \r$ for $1 \leq i \leq 4$ from left to right.} \label{fig:eigenvalues} \end{figure} \begin{figure}[t!] \setlength\figureheight{0.25\textwidth} \setlength\figurewidth{0.3\textwidth} \centering \scriptsize \begin{subfigure}[t]{0.3\textwidth} \centering \scriptsize \input{linear_k2.tikz} \caption{The blue line is the fit $4.05 - 0.63 \log\l \lambda_{k_n^{(2),*}} \r$ and the red line is $\log\l \Vert \psi_{n,k_n^{(2),*}} \Vert_{\Lp{\infty}} \r$.}
\end{subfigure} \hspace*{0.04\textwidth} \begin{subfigure}[t]{0.3\textwidth} \centering \scriptsize \input{linear_k3.tikz} \caption{The blue line is the fit $0.2 - 0.08 \log\l \lambda_{k_n^{(3),*}} \r$ and the red line is $\log\l \Vert \psi_{n,k_n^{(3),*}} \Vert_{\Lp{\infty}} \r$.} \end{subfigure} \hspace*{0.04\textwidth} \begin{subfigure}[t]{0.3\textwidth} \centering \scriptsize \input{linear_k4.tikz} \caption{The blue line is the fit $-0.55 + 0.01 \log\l \lambda_{k_n^{(4),*}} \r$ and the red line is $\log\l \Vert \psi_{n,k_n^{(4),*}} \Vert_{\Lp{\infty}} \r$.} \end{subfigure} \caption{Plots of $\log \l \|\psi_{n,k_n^{(i),*}}\|_{\Lp{\infty}} \r$ versus $\log\l \lambda_{k_n^{(i),*}} \r$ for $2 \leq i \leq 4$ with the setting described in Section \ref{subsec:linear}.} \label{fig:linear} \end{figure} From Figure \ref{fig:eigenvalues}, there appear to be different regimes of growth for the eigenpairs depending on the value of $1 \leq k \leq n$. In fact, we see that below $\log(\lambda_{k_n^{(3)}})$, $\log(\Vert \psi_{n,k} \Vert_{\Lp{\infty}})$ seems to be increasing monotonically, while it unexpectedly first decreases and then sharply increases from $\log(\lambda_{k_n^{(3)}})$ to $\log(\lambda_{k_n^{(4)}})$. It is the subject of future research to provide explanations for these phenomena. Let us define \[ k_n^{(i),*} = \argmax_{1 \leq k \leq k_n^{(i)}} \log \l \Vert \psi_{n,k} \Vert_{\Lp{\infty}} \r \] and, using the seven largest values of $n$, for $1 \leq i \leq 4$, we compute the best linear fits for the points \[ \lb \log \l \lambda_{k_n^{(i),*}} \r, \log \l \Vert \psi_{n,k_n^{(i),*}} \Vert_{\Lp{\infty}} \r \rb_{n=400}^{5200}. \] For $k_n^{(i)}$ with $1 \leq i \leq 2$, the linear fits in the log-log domain appear to be very accurate, as depicted in Figure \ref{fig:linear}, so we are able to confirm the theoretical guarantees of Proposition \ref{prop:Proofs:Back:DisReg:EVecAlt}.
In particular, we obtain \[ \Vert \psi_{n,k_n^{(2),*}} \Vert_{\Lp{\infty}} \approx C \lambda_{k_n^{(2),*}}^{-0.63} \] and $\lambda_{k_n^{(2),*}}^{-0.63} \leq \lambda_{k_n^{(2),*}}^{d+1}$ since $\lambda_k \geq 1$ (in fact, the nonzero eigenvalues are of the form $4 \pi^2 \vert k \vert^2$ with $k \in \bbZ^2$ in this particular setting). However, this shows that the bound is not sharp on the flat torus. The situation for the regime $k_n^{(3)}$ is different and we obtain \[ \Vert \psi_{n,k_n^{(3),*}} \Vert_{\Lp{\infty}} \approx C \lambda_{k_n^{(3),*}}^{-0.08}. \] The much smaller exponent compared to the $k_n^{(2),*}$ setting indicates that we seem to be switching from one regime of growth of $\Vert \psi_{n,k} \Vert_{\Lp{\infty}}$ to another. Finally, we note that the linear fit in Figure \ref{fig:linear} for the $k_n^{(4)}$ regime yields \[ \Vert \psi_{n,k_n^{(4),*}} \Vert_{\Lp{\infty}} \approx C \lambda_{k_n^{(4),*}}^{0.01} \] and, making the crude approximation $0.01 \approx 0$, this confirms our intuition that on the flat torus, the graph Laplacian should have uniformly bounded $\Lp{\infty}$-norms of eigenfunctions: indeed, the continuum eigenfunctions are uniformly bounded in $\Lp{\infty}$, so we expect the same behaviour for their discrete counterparts. This suggests that one should be able to pick $\alpha = 0$ in Theorem \ref{thm:Main:Res:ConsFracLap}, yielding the (almost) optimal Sobolev bound $s > d/2 + 2$ as discussed in Remark~\ref{rem:Main:Res:Sobolev}. This conclusion also suggests that, on the flat torus at least, one should be able to go above $\eps_n^{-d/2}$ in Proposition \ref{prop:Proofs:Back:DisReg:EVecAlt}. \section*{Acknowledgements} The authors would like to thank Nicol\'as Garc\'ia Trillos for his comments and insights on this paper. MT was supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme Grant Agreement No. 777826 (NoMADS).
\begin{thebibliography}{10} [1] Mikhail {Belkin} and Partha {Niyogi}. \newblock Using manifold structure for partially labelled classification. \newblock In {\em Advances in Neural Information Processing Systems}, pages 953--960, 2002. [2] Mikhail {Belkin} and Partha {Niyogi}. \newblock Semi-supervised learning on {R}iemannian manifolds. \newblock {\em Machine Learning}, 56(1):209--239, 2004. [3] Mikhail {Belkin} and Partha {Niyogi}. \newblock Convergence of {L}aplacian eigenmaps. \newblock In {\em Advances in Neural Information Processing Systems}, 2007. [4] Andrea~L. {Bertozzi} and Arjuna {Flenner}. \newblock Diffuse interface models on graphs for classification of high dimensional data. \newblock {\em SIAM Review}, 58(2):293--328, 2016. [5] Jean {Bourgain}, Haim {Brezis}, and Petru {Mironescu}. \newblock Another look at {S}obolev spaces. \newblock In {\em Optimal Control and Partial Differential Equations}, pages 439--455, 2001. [6] Andrea {Braides}. \newblock {\em $\Gamma$-convergence for Beginners}. \newblock Oxford University Press, 2002. [7] Leon {Bungert}, Jeff {Calder}, and Tim {Roith}. \newblock Uniform convergence rates for {L}ipschitz learning on graphs. \newblock {\em IMA Journal of Numerical Analysis}, 2022. \newblock drac048. [8] Jeff {Calder}. \newblock The game theoretic $p$-{L}aplacian and semi-supervised learning with few labels. \newblock {\em Nonlinearity}, 32(1):301--330, 2018. [9] Jeff {Calder}. \newblock Consistency of {L}ipschitz learning with infinite unlabeled data and finite labeled data. \newblock {\em SIAM Journal on Mathematics of Data Science}, 1(4):780--812. [10] Jeff {Calder}, Brendan {Cook}, Matthew {Thorpe}, and Dejan {Slep{\v c}ev}. \newblock Poisson learning: Graph based semi-supervised learning at very low label rates. \newblock In {\em Proceedings of the International Conference on Machine Learning}, pages 1283--1293, 2020. [11] Jeff {Calder} and Mahmood {Ettehad}.
\newblock {H}amilton--{J}acobi equations on graphs with applications to semi-supervised learning and data depth. \newblock {\em Journal of Machine Learning Research}, 23(318):1--62, 2022. [12] Jeff {Calder} and Nicol\'as {Garc\'ia Trillos}. \newblock Improved spectral convergence rates for graph {L}aplacians on $\eps$-graphs and $k$-{NN} graphs. \newblock {\em Applied and Computational Harmonic Analysis}, 60:123--175, 2022. [13] Jeff {Calder}, Nicol\'as {Garc\'ia Trillos}, and Marta {Lewicka}. \newblock Lipschitz regularity of graph {L}aplacians on random data clouds. \newblock {\em SIAM Journal on Mathematical Analysis}, 54(1):1169--1222, 2022. [14] Jeff {Calder} and Dejan {Slep{\v c}ev}. \newblock Properly-weighted graph {L}aplacian for semi-supervised learning. \newblock {\em Applied Mathematics \& Optimization}, 82(3):1111--1159, 2020. [15] Jeff {Calder}, Dejan {Slep{\v c}ev}, and Matthew {Thorpe}. \newblock Rates of convergence for {L}aplacian semi-supervised learning with low labeling rates. \newblock {\em Research in the Mathematical Sciences}, 10, 2023. [16] Marco {Caroccia}, Antonin {Chambolle}, and Dejan {Slep{\v{c}}ev}. \newblock {M}umford--{S}hah functionals on graphs and their asymptotics. \newblock {\em Nonlinearity}, 33(8):3846--3888, 2020. [17] Isaac {Chavel}. \newblock {\em Eigenvalues in Riemannian Geometry}. \newblock Academic Press, Inc., 1984. [18] Ronald~R. {Coifman} and St\'ephane {Lafon}. \newblock Diffusion maps. \newblock {\em Applied and Computational Harmonic Analysis}, 21(1):5--30, 2006. [19] Mircea {Craioveanu}, Mircea {Puta}, and Themistocles~M. {Rassias}. \newblock {\em Old and New Aspects in Spectral Geometry}, volume 534 of {\em Mathematics and Its Applications}. \newblock Springer Science+Business Media, B.V., 2001. [20] Riccardo {Cristoferi} and Matthew {Thorpe}. \newblock Large data limit for a phase transition model with the $p$-{L}aplacian on point clouds. \newblock {\em European Journal of Applied Mathematics}, 31(2):185--231, 2020. [21] Oliver~M.
Crook, Tim {Hurst}, Carola-Bibiane {Sch\"onlieb}, Matthew {Thorpe}, and Konstantinos~C. {Zygalakis}. \newblock {PDE}-inspired algorithms for semi-supervised learning on point clouds. \newblock {\em preprint arXiv:1909.10221}, 2019. [22] Matthew {Dunlop}, Dejan {Slep{\v c}ev}, Andrew {Stuart}, and Matthew {Thorpe}. \newblock Large data and zero noise limits of graph-based semi-supervised learning algorithms. \newblock {\em Applied and Computational Harmonic Analysis}, 49(2):655--697. [23] Ahmed {El Alaoui}, Xiang {Cheng}, Aaditya {Ramdas}, Martin~J. {Wainwright}, and Michael~I. {Jordan}. \newblock Asymptotic behavior of $\ell_p$-based {L}aplacian regularization in semi-supervised learning. \newblock In {\em Proceedings of the Conference on Learning Theory}, pages 879--906, 2016. [24] Imad {El~Bouchairi}, Jalal {Fadili}, and Abderrahim {Elmoataz}. \newblock Continuum limit of $p$-{L}aplacian evolution problems on graphs: {$L^q$} graphons and sparse graphs. \newblock {\em preprint arXiv:2010.08697}, 2020. [25] Ryszard {Engelking}. \newblock {\em General Topology}. \newblock Sigma Series in Pure Mathematics. Heldermann, 1989. [26] Jalal {Fadili}, Nicolas {Forcadel}, Thi~Tuyen {Nguyen}, and Rita {Zantout}. \newblock Limits and consistency of non-local and graph approximations to the {E}ikonal equation. \newblock {\em preprint arXiv:2105.01977}, 2021. [27] Mauricio {Flores}, Jeff {Calder}, and Gilad {Lerman}. \newblock Analysis and algorithms for $\ell_p$-based semi-supervised learning on graphs. \newblock {\em Applied and Computational Harmonic Analysis}, 60:77--122, 2022. [28] Nicol\'as {Garc\'ia Trillos}, Moritz {Gerlach}, Matthias {Hein}, and Dejan {Slep{\v c}ev}. \newblock Error estimates for spectral convergence of the graph {L}aplacian on random geometric graphs toward the {L}aplace--{B}eltrami operator. \newblock {\em Foundations of Computational Mathematics}, 20:827--887, 2020. [29] Nicol{\'a}s {Garc\'ia Trillos} and Ryan {Murray}.
\newblock A new analytical approach to consistency and overfitting in regularized empirical risk minimization. \newblock {\em European Journal of Applied Mathematics}, 28(6):886--921, 2017. [30] Nicol{\'a}s {Garc{\'i}a Trillos}, Ryan {Murray}, and Matthew {Thorpe}. \newblock From graph cuts to isoperimetric inequalities: Convergence rates of {C}heeger cuts on data clouds. \newblock {\em Archive for Rational Mechanics and Analysis}, 244(3):541--598. [31] Nicol{\'a}s {Garc{\'\i}a Trillos} and Dejan {Slep{\v c}ev}. \newblock Continuum limit of total variation on point clouds. \newblock {\em Archive for Rational Mechanics and Analysis}, 220(1):193--241. [32] Nicol{\'a}s {Garc{\'\i}a Trillos} and Dejan {Slep{\v c}ev}. \newblock A variational approach to the consistency of spectral clustering. \newblock {\em Applied and Computational Harmonic Analysis}, 45(2):239--281. [33] Nicol{\'a}s {Garc\'ia Trillos}, Dejan {Slep{\v{c}}ev}, and James {von Brecht}. \newblock Estimating perimeter using graph cuts. \newblock {\em Advances in Applied Probability}, 49(4):1067--1090, 2017. [34] Nicol{\'a}s {Garc\'ia Trillos}, Dejan {Slep\v{c}ev}, James {von Brecht}, Thomas {Laurent}, and Xavier {Bresson}. \newblock Consistency of {C}heeger and ratio graph cuts. \newblock {\em Journal of Machine Learning Research}, 17(181):1--46, 2016. [35] Evarist {Gin{\'e}} and Vladimir {Koltchinskii}. \newblock {\em Empirical graph Laplacian approximation of Laplace--Beltrami operators: Large sample results}, volume~51 of {\em IMS Lecture Notes Monograph Series}, pages 238--259. \newblock Institute of Mathematical Statistics, 2006. [36] Ashish {Goel}, Sanatan {Rai}, and Bhaskar {Krishnamachari}. \newblock Monotone properties of random geometric graphs have sharp thresholds. \newblock {\em The Annals of Applied Probability}, 15:2535--2552, 2005. [37] Yosra {Hafiene}, Jalal {Fadili}, and Abderrahim {Elmoataz}. \newblock Nonlocal $p$-{L}aplacian evolution problems on graphs.
\newblock {\em SIAM Journal on Numerical Analysis}, 56(2):1064--1090, 2018. [38] Yosra Hafiene, Jalal~M. Fadili, Christophe Chesneau, and Abderrahim Elmoataz. \newblock Continuum limit of the nonlocal $p$-{L}aplacian evolution problem on random inhomogeneous graphs. \newblock {\em ESAIM: Mathematical Modelling and Numerical Analysis}, 54(2):565--589, 2020. [39] Matthias {Hein}. \newblock Uniform convergence of adaptive graph-based regularization. \newblock In {\em Proceedings of the Conference on Learning Theory}, pages 50--64, 2006. [40] Matthias {Hein}, Jean-Yves {Audibert}, and Ulrike {von Luxburg}. \newblock From graphs to manifolds -- weak and strong pointwise consistency of graph {L}aplacians. \newblock In {\em Proceedings of the Conference on Learning Theory}, pages 470--485, 2005. [41] Achim Klenke. \newblock {\em Probability Theory: A Comprehensive Course}. \newblock Universitext. Springer London, 2013. [42] Rasmus {Kyng}, Anup {Rao}, Sushant {Sachdeva}, and Daniel~A. {Spielman}. \newblock Algorithms for {L}ipschitz learning on graphs. \newblock In {\em Proceedings of the Conference on Learning Theory}, pages 1190--1223, 2015. [43] Giovanni {Leoni}. \newblock {\em A First Course in Sobolev Spaces}. \newblock Graduate Studies in Mathematics. American Mathematical Society, 2009. [44] Boaz {Nadler}, Nathan {Srebro}, and Xueyuan {Zhou}. \newblock Semi-supervised learning with the graph {L}aplacian: The limit of infinite unlabelled data. \newblock In {\em Advances in Neural Information Processing Systems}, pages 1330--1338, 2009. [45] Braxton {Osting} and Todd~Harry {Reeb}. \newblock Consistency of {D}irichlet partitions. \newblock {\em SIAM Journal on Mathematical Analysis}, 49(5):4251--4274, 2017. [46] Bruno {Pelletier} and Pierre {Pudlo}. \newblock Operator norm convergence of spectral clustering on level sets. \newblock {\em Journal of Machine Learning Research}, 12(12):385--416, 2011. [47] Mathew~D. Penrose. \newblock {\em Random Geometric Graphs}. 
\newblock Oxford University Press, 2003. [48] Tim {Roith} and Leon {Bungert}. \newblock Continuum limit of {L}ipschitz learning on graphs. \newblock {\em Foundations of Computational Mathematics}, pages 1--39, 2022. [49] Filippo {Santambrogio}. \newblock {\em Optimal Transport for Applied Mathematicians}, volume~87 of {\em Progress in Nonlinear Differential Equations and Their Applications}. \newblock Birkhäuser Basel, 2015. [50] Zuoqiang {Shi}, Stanley {Osher}, and Wei {Zhu}. \newblock Weighted nonlocal {L}aplacian on interpolation from sparse data. \newblock {\em Journal of Scientific Computing}, 73:1164--1177, 2017. [51] Amir {Singer}. \newblock From graph to manifold {L}aplacian: The convergence rate. \newblock {\em Applied and Computational Harmonic Analysis}, 21:128--134, 2006. [52] Amit {Singer} and Hau-Tieng {Wu}. \newblock {Spectral convergence of the connection Laplacian from random samples}. \newblock {\em Information and Inference: A Journal of the IMA}, 6(1):58--123, 2016. [53] Dejan {Slep\v{c}ev} and Matthew {Thorpe}. \newblock Analysis of $p$-{L}aplacian regularization in semisupervised learning. \newblock {\em SIAM Journal on Mathematical Analysis}, 51(3):2085--2120, 2019. [54] Christopher~D Sogge. \newblock Riemannian manifolds with maximal eigenfunction growth. \newblock {\em S{\'e}minaire {\'E}quations aux d{\'e}riv{\'e}es partielles (Polytechnique) dit aussi "S{\'e}minaire Goulaouic-Schwartz"}, pages 1--16. [55] Terence Tao. \newblock {\em An Introduction to Measure Theory}. \newblock Graduate Studies in Mathematics. American Mathematical Society, 2011. [56] Matthew {Thorpe} and Florian {Theil}. \newblock Asymptotic analysis of the {G}inzburg--{L}andau functional on point clouds. \newblock {\em Proceedings of the Royal Society of Edinburgh Section A: Mathematics}, 149(2):387--427, 2019. [57] Daniel Ting, Ling Huang, and Michael~I. Jordan. \newblock An analysis of the convergence of graph {L}aplacians. 
\newblock In {\em Proceedings of the International Conference on Machine Learning}, pages 1079--1086, 2010. [58] Yves {van Gennip} and Andrea {Bertozzi}. \newblock Gamma-convergence of graph {G}inzburg--{L}andau functionals. \newblock {\em Advances in Differential Equations}, 17(11--12):1115--1180, 2012. [59] C{\'e}dric Villani. \newblock {\em Optimal Transport: Old and New}, volume 338. \newblock Springer-Verlag Berlin Heidelberg, 2009. [60] Ulrike {von Luxburg}, Mikhail {Belkin}, and Olivier {Bousquet}. \newblock {Consistency of spectral clustering}. \newblock {\em The Annals of Statistics}, 36(2):555--586, 2008. [61] Xu {Wang}. \newblock Spectral convergence rate of graph {L}aplacian. \newblock {\em preprint arXiv:1510.08110}, 2015. [62] Xueyuan {Zhou} and Mikhail {Belkin}. \newblock Semi-supervised learning by higher order regularization. \newblock In Geoffrey Gordon, David Dunson, and Miroslav Dudík, editors, {\em Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics}, volume~15 of {\em Proceedings of Machine Learning Research}, pages 892--900, Fort Lauderdale, FL, USA, 11--13 Apr 2011. PMLR. [63] Xiaojin {Zhu}, Zoubin {Ghahramani}, and John {Lafferty}. \newblock Semi-supervised learning using {G}aussian fields and harmonic functions. \newblock In {\em Proceedings of the International Conference on Machine Learning}, 2003. \end{thebibliography} \bibliographystyle{plain} \end{document}
# Pushing the precision frontier for gravitational waves

Michèle Levi (https://sites.google.com/view/levim)

Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen, Denmark (August 6, 2021)

###### Abstract

Pushing the precision frontier for gravitational waves is one of the most urgent tasks in theoretical physics today, in light of the increasing influx of data from a rapidly growing worldwide network of gravitational-wave detectors. Levi will analytically predict gravitational radiation from compact binaries and further develop the use of quantum field theory advances to study gravity. She seeks to uncover profound duality relations between gauge theories and gravity. International Editor Clifford Holt spoke to Levi about her research, the importance of analytical high-precision predictions for real-world gravitational-wave data, and potential challenges as she works to develop a better idea of universal commonalities across classical and quantum field theories.

Firstly, congratulations on being awarded the Ernest Rutherford Fellowship. How important are funding mechanisms such as these, and what will the Fellowship mean for your own research and career?

Thank you! A funding mechanism like this prestigious early-career award from the STFC (one of the seven UK research councils under UKRI) is very important and unique. This is the most competitive scheme in the UK for early-career researchers in my domain, devised to ensure that those 10 carefully selected awardees, who are recognised as the very top international talent and leadership ‘material’ in science early on in their career, will be integrated into top academic positions in the leading research institutions in the UK. 
This will enable the awardees to play a key role in the coming decades in the scientific and transformative leadership at the national level in the UK and will also represent the UK at the international level, solidifying its position as a scientific world-leader for the next generations. As a public funder, UKRI selects these future leaders to shape strategic UK policies, to bolster the UK’s scientific place in the world, and to bring about a radical shift in the current research culture in terms of the way that science is conducted, namely its standards and its values, but also its role and commitment within society. Currently, many academics have sadly forgotten that their raison d’être is to serve the public, that they have the privilege of expanding the frontiers of knowledge mainly thanks to the public money that is funding their research. We must insist that science that is peer-reviewed and approved for publication and funding lives up to the standards of the FAIR (Findability, Accessibility, Interoperability, Reusability) principles, namely of responsible research of integrity, that is publicly available, widely accessible, and reproducible. We should also advocate for equality, diversity, and inclusion in Science, which should properly represent Society, and engage with the public. It is very clear that UKRI as a public funder has a strong commitment to realising these aspirations, which are unfortunately still overlooked by the majority of private funders. I am excited and honoured to transfer and base my cutting-edge research programme in the UK, and to be part of making the UK a world-leading force in the timely domain of gravitational waves with the most innovative theoretical approaches and computational tools applied to gravity. 
I am also incredibly aligned with the aspirations of UKRI to transform the current research culture, and as I myself belong to an overlap of under-represented and marginalised groups in science, I am particularly aware of how systemic inequalities affect the ability of students and researchers to truly realise their potential and excel, or even to just conform to the common skewed metrics of academic merit. I adamantly champion and pursue equality, diversity, and inclusion in every aspect of my everyday scientific activities. I believe and I see that it is only through these consistent everyday actions and through my being a role model that I can ultimately translate my research work to further concrete scientific and societal impact. I also adamantly advance the broad communication of science across the research community and through public outreach, which I regard as a social imperative. I also find it extremely rewarding to share my knowledge with as many people as possible.

Your project aims to deliver analytical high-precision predictions for real world gravitational wave data. Can you tell me more about this and why it is important?

Gravitational radiation is a probing prediction of any complete theory of gravity. Currently, we still do not have a complete theory of gravity, and this is true for both ends of the scale, since we do not know how to describe gravity on quantum scales, which are the smallest scales, and we do not have a theory that captures gravity on the largest scales – cosmological scales, since we do not know how to explain the puzzle of dark matter or the accelerated expansion of the Universe that was discovered in 1998. Understanding how the force of gravity works on these more ‘extreme’ scales is possibly the deepest and most fundamental longstanding open problem in physics. 
The first Earth-shaking gravitational-wave (GW) detection announced in 2016 launched a new era of gravitational-wave astronomy and precision gravity, which is only in its infancy, with vast future impacts in astrophysics and cosmology and, most notably, in fundamental physics. Since this first milestone detection, ground-based detectors keep multiplying, and many more are planned for the coming years. They are designed to measure various frequency ranges, which will enable a richer array of sources and events that generate gravitational radiation to be captured. The technology of measurements is constantly upgraded, either in currently operating detectors like LIGO or VIRGO, or in upcoming and planned ones, like KAGRA in Japan, and others in the US, Europe, India, and Korea. Furthermore, there are also concrete planned space-based experiments from Europe, China, and Japan that will also target smaller frequencies and other types of sources. All in all, in the coming years and decades we are headed towards a growing influx of gravitational-wave data of higher and higher quality. Let me stress what, I believe, much of the community in Europe and the US does not fully recognise yet: that Asia is about to emerge as possibly the major player in gravitational-wave astronomy in the coming years: in Japan, KAGRA is already operational; in India, IndiGO will become operational in just a few years; and in Korea, SOGRO is planned to uniquely target mid-range frequencies from Earth, while China and Japan are also planning space-borne detectors, TianQin and DECIGO, respectively.

How will your research help to inform the activities of detectors such as VIRGO and LIGO, as well as perhaps LISA? How could your theoretical work come to complement future observations/detections?

My research generates theoretical data from which a huge bank of templates is created to be matched in the detectors by the real data that is measured. 
The GW signal is weak and is covered by many sources of noise, so in order to identify it the matched-filtering technique is used for detection, which requires the most accurate theoretical signal templates possible. When a match is found, a detection is announced. Yet lots of real-world data, even if it is great data, is just data without interpretation. What transforms data into knowledge is theory and analysis. This is especially true for the gravitational-wave signal since it is so weak, so it is really important to separate the noise from the true signal. For the most part, the observable differences among the plethora of candidate theories for gravity suggested over the years have been insanely tiny. In most cases, all of these theories seem grossly similar within gravitational-wave measurements. So it is only the theoretical resolution into various physical effects at very high precision, such as those that have been computed in my research since my graduate studies, that will enable us to really distinguish among the various suggested theories of gravity, and to close in on which is the viable one. Another feature which is rather unique to my own theoretical framework is that it provides predictions for generic sources of compact binaries, not only for binaries of two black holes. As the technology in upcoming detectors advances, and as the sensitivity improves, we are able to observe more and more neutron stars, for example, in the radiating binaries. Whereas from black holes we can ‘only’ learn about gravity theories, neutron stars also tell us about the theory of QCD (Quantum Chromodynamics) in extreme conditions that no human-built lab could produce. With future detectors, we also expect to observe white dwarfs, and learn more about them.

Have you yet been able to identify any potential challenges that you expect to have to overcome as you work to develop a better idea of universal commonalities across classical and quantum field theories? 
Right now, we see some analogies between interactions of classical extended gravitating objects like the huge non-rotating black holes in radiating binaries, and elementary particles of quantum nature, which we call ‘scalar particles’ – these are boson particles of zero quantum spin. But the precise mapping between them, and more importantly its generalisations for all types of elementary particle, is poorly understood. In particular, my theoretical work pushed the understanding of spinning gravitating objects, which would also be analogous to higher-spin particles. Currently, the highest-spin particle that has been observed in nature is of spin one, and we also believe there could be particles of spin 3/2 according to supersymmetric theories, as well as our theoretically predicted graviton – a massless particle of spin two. But what about particles with spins larger than two? Do they exist? Could they be elementary? Or only show up as what we call ‘composite’ particles, which must be composed of elementary ones? At the moment, we cannot even manage to give well-defined predictions of observables of quantum field theories that capture such higher-spin particles, and we are certainly not able to write down such quantum field theories explicitly. Furthermore, we are not certain whether this fundamental issue exists already at the classical level, i.e. with the type of two-body interactions that my work is tackling for spinning gravitating objects. The big challenge for my research programme, therefore, would be to more rigorously and generally link the classical and quantum sides of these analogies and to clarify whether there is indeed a fundamental issue with higher spin – and, if so, whether it originates in the quantisation of the theory, or if it already exists at the classical level. 
This all relates to possibly the biggest open question in theoretical physics of how to complete the theory of gravity at high energies, and therefore it would indeed be a breakthrough for my upcoming research programme to be able to provide any further insights on that. What are your short/long term hopes and ambitions? Since my graduate studies, I have had the privilege of pioneering and leading innovative analytical work and high-precision computation on the highly intricate problem of compact binaries that source gravitational radiation. At the time, I did not believe that I would live to see actual detections of gravitational waves from these sources, since at that time the detectors had been operational for more than a decade without any successful detection. Yet, the detection came in February 2016, and I can still remember it as though it were yesterday; I was overwhelmed with excitement and wonder for what was to come when the first observation of a gravitational-wave signal by LIGO was announced that day. Since then, the scientific prospects of gravitational-wave science have been continuously improving and exceeding all expectations. The future truly looks symphonic, filled with gravitational-wave echoes. With this increasing influx of data of an ever improving quality, we have a chance to really push theoretical predictions and to test so much more theory in gravity and even in QCD. My research programme is exactly aimed at maximising the potential of what we can learn from gravitational-wave measurements on the theory of gravity across all scales. I hope to engage the broader scientific community with this exciting new domain of research, such that it really crosses over all sub-fields of theoretical physics, especially since this is a great scientific challenge that can benefit from so many different specific perspectives. I believe that every one of these perspectives can bring in something fresh and interesting to the mix. 
I also believe that, to some extent, I have already contributed to the widespread attention that this type of research has obtained in the traditional high energy physics community. In the UK, I hope to engage more people and institutions with the research in this field, as the UK really has the potential to become a superpower in this domain, especially in the type of analytical work that I am involved in, which has been traditionally dominated by France. I am also a big believer in open code and open data, and I really want to see individual researchers, big experimental collaborations, scientific journals, and academic institutions all persistently moving towards this in the coming years. The scientific challenges at hand are too big, and we need to focus on the science and make our best joint efforts to make breakthroughs, instead of yielding to the old zero-sum game mentality that keeps theoretical physics stagnant. Simply put, I just hope that, together, we can all increase our knowledge of the world around us. Isn’t that the greatest privilege of our society?

## Acknowledgements

ML received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant 847523. Her upcoming research programme will be supported by the STFC Rutherford grant ST/V003895/1 “Harnessing QFT for Gravity”.
# Relativistic corrections to exclusive decays $\psi(n)\to\rho\pi$ and possible solution to the “$\rho\pi$-puzzle”

Nikolay Kivel

Physik-Department, Technische Universität München, James-Franck-Str. 1, 85748 Garching, Germany

###### Abstract

We study relativistic corrections to exclusive $S$-wave charmonium decays into $\rho\pi$ and $\gamma\pi$ final states. The contribution of relative order $v^{2}$ and the set of associated higher-order corrections are calculated using the NRQCD and collinear factorisation frameworks. Numerical estimates show that the dominant effect is provided by the corrections of relative order $v^{2}$. The numerical values of these contributions are of the same order as the leading-order ones. These results suggest that the sum of relativistic and radiative QCD corrections can potentially explain the “$\rho\pi$-puzzle”.

## 1 Introduction

A description of $S$-wave charmonium decays into the $\rho\pi$ final state is a long-standing problem in QCD phenomenology. The branching ratios for $J/\psi$ and the excited state $\psi(3686)\equiv\psi^{\prime}$ are measured sufficiently accurately, and their ratio is found to be very small [1] $\displaystyle Q_{\rho\pi}=\frac{Br[\psi^{\prime}\to\rho\pi]}{Br[J/\psi\to\rho\pi]}\approx 0.20\times 10^{-2}.$ (1) This corresponds to a strong violation of the “13%-rule”, which suggests that $Q_{\rho\pi}\approx Q_{e^{+}e^{-}}\simeq 0.13$. The latter is valid only if the decay amplitudes of $S$-wave charmonium are dominated by the leading-order contribution in the QCD factorisation framework (pQCD). Therefore the disagreement between the data and the qualitative theoretical expectation indicates large dynamical effects that are not accounted for by the leading-order approximation of pQCD. The problem has attracted a lot of attention, and many different qualitative ideas and phenomenological models have been proposed in order to understand the small value of $Q_{\rho\pi}$. 
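As a rough numerical illustration of the “13%-rule” quoted above, the dileptonic ratio $Q_{e^{+}e^{-}}$ can be reproduced from the leptonic branching fractions. The sketch below uses approximate PDG-like values that are my own assumption (they are not quoted in this paper):

```python
# Illustrative check of the "13%-rule": Q_ee = Br[psi' -> e+e-] / Br[J/psi -> e+e-].
# Branching fractions below are approximate PDG averages (assumed, not from the text).
br_jpsi_ee = 5.97e-2   # Br[J/psi -> e+ e-] ~ 5.97%
br_psip_ee = 7.9e-3    # Br[psi'  -> e+ e-] ~ 0.79%

q_ee = br_psip_ee / br_jpsi_ee
print(f"Q_ee ~ {q_ee:.3f}")          # close to the canonical 0.13

# The measured hadronic ratio of Eq. (1) violates this expectation badly:
q_rhopi = 0.20e-2
print(f"suppression factor ~ {q_ee / q_rhopi:.0f}")
```

The two-orders-of-magnitude gap between $Q_{e^{+}e^{-}}$ and $Q_{\rho\pi}$ printed by this snippet is exactly the “$\rho\pi$-puzzle” the paper addresses.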
Almost all of the proposed explanations invoke different ideas about long-distance QCD dynamics; a comprehensive overview of the topic can be found in Refs. [2, 3]. The dominant role of some nonperturbative dynamics is related to the fact that the QCD helicity selection rule suppresses the valence contribution to the decay amplitude. Therefore, it is necessary to take into account, for one of the outgoing mesons, a non-valence component of the wave function, which is suppressed by an additional power of $\Lambda/m_{c}$. However, it was found long ago in Refs. [4, 5] that the pQCD framework yields a reliable leading-order estimate for the $J/\psi$ branching ratio. In Ref. [4] the non-valence contributions are described by the three-particle twist-3 light-cone distribution amplitudes (LCDAs). These nonperturbative functions are process independent, and the first few moments of these functions can be estimated using QCD sum rules. By now the corresponding matrix elements have been studied and revised for various mesons; see the updates in Refs. [6, 7, 8]. Therefore it is reasonable to believe that the pQCD description is a good starting point for developing a systematic description of the process within the effective field theory framework. Following this path, one faces the problem of describing $\psi^{\prime}\to\rho\pi$, which must be strongly suppressed relative to $J/\psi\to\rho\pi$ in order to get the small ratio (1). There are various assumptions about possible dynamical origins of this suppression. Often they are related to the fact that the mass of the excited state $\psi^{\prime}$ is close to the open-charm threshold, which can lead to dynamical effects that provide the crucial difference between $J/\psi$ and $\psi^{\prime}$ decays. 
The possible scenarios include: destructive interference of the large non-valence and valence contributions [4, 9]; suppression of the colour-singlet $c\bar{c}$ wave function at the origin for $\psi^{\prime}$ and the dominance of the colour-octet state [10]; cancellation between the $c\bar{c}$ and $D\bar{D}$ components of $\psi^{\prime}$ [11]; cancellation between the $S$- and $D$-wave components of $\psi^{\prime}$ [12]; and others [2]. On the other hand, the potential of the effective field theory framework to study the problem has not been fully exploited yet. It is especially interesting to study the higher-order corrections, which are different for $J/\psi$ and $\psi^{\prime}$. In this way a natural violation of the “13% rule” can be related to relativistic corrections in NRQCD [13]. In fact, already the order-$v^{2}$ nonrelativistic QCD matrix elements $\left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon\ }{\boldsymbol{\nabla}}^{2}\psi\left|\psi(n,\boldsymbol{\epsilon})\right\rangle$ have very different values for $J/\psi$ and $\psi^{\prime}$, which was noticed long ago [14]. Recently, the relativistic corrections to exclusive $\psi(n)\to p\bar{p}$ decays have been studied in Ref. [15]. It was found that corrections of relative order $v^{2}$ are large and comparable with the leading-order contribution. This effect is closely related to the structure of the integrand in the collinear convolution integral describing the decay amplitude. This observation holds for both states $J/\psi$ and $\psi^{\prime}$, but for the excited state the absolute effect is larger because the corresponding matrix element is larger. A similar mechanism may also be relevant for other hadronic decay channels, including $\psi(n)\to\rho\pi$ decays. Therefore, the main purpose of this paper is to calculate the relativistic corrections to $\psi(n)\to\rho\pi$ and to study their numerical effect. 
As a first step in this direction we calculate the correction of relative order $v^{2}$, combining the NRQCD expansion with the leading-order collinear expansion. We use the NRQCD projection technique developed in Refs. [16, 17, 18, 19], which is also effective for calculations of exclusive amplitudes. This technique also allows one to resum a part of the higher-order corrections, namely those related to the corrections to the quark-antiquark wave function in the potential model [19]. Such a consideration is also useful because it provides an estimate of possible effects from higher-order contributions.

## 2 Relativistic corrections to $\psi(n)\to\rho\pi$ and $\psi(n)\to\gamma\pi$ decays

To describe the $J/\psi(P)\rightarrow\rho(p)\pi(p^{\prime})$ decay amplitude we use the charmonium rest frame and assume that the outgoing momenta are directed along the $z$-axis. The amplitude is defined as $\left\langle\rho(p)\pi(p^{\prime})\right|iT\left|\psi(n)\right\rangle=i(2\pi)^{4}\delta(p+p^{\prime}-P)i\epsilon_{\alpha\beta\mu\nu}\epsilon^{\alpha}e^{\ast\beta}\frac{p^{\prime\mu}p^{\nu}}{(pp^{\prime})}A_{\rho\pi},$ (2) where $\epsilon$ and $e^{\ast}$ denote the polarisation vectors of $\psi$ and the $\rho$-meson, respectively. The amplitude $A_{\rho\pi}$ can be written as a convolution of a hard kernel with nonperturbative matrix elements describing the long-distance coupling to hadronic states. In order to calculate the hard kernel, we perform an NRQCD matching, which is combined with the collinear light-cone expansion for the light quarks. 
This technique allows one to perform the matching at the amplitude level and to find the hard kernels for the corrections associated with the specific set of higher-order NRQCD matrix elements [19] $\left\langle{v}^{2n}\right\rangle=\frac{\left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon\ }\left(-\frac{i}{2}\overleftrightarrow{\boldsymbol{D}}\right)^{2n}\psi\left|\psi(n,\boldsymbol{\epsilon})\right\rangle}{m_{c}^{2n}\ \left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}\psi\left|\psi(n,\boldsymbol{\epsilon})\right\rangle}\simeq\left\langle{v}^{2}\right\rangle^{n},$ (3) where the last equality is valid up to corrections $\mathcal{O}(v^{2})$ [20]. The diagrams describing the decay amplitude are shown schematically in Fig. 1. Figure 1: a) Typical diagrams describing the subprocess $Q\bar{Q}\rightarrow VP$, where $V=\rho,\gamma$. The blobs denote the light-cone matrix elements; see the explanation in the text. b) An example of a diagram describing the contribution with the perturbative photon coupling. The long-distance hadronisation dynamics of the outgoing mesons is described by the twist-2 and twist-3 light-cone distribution amplitudes (LCDAs). Various properties of, and models for, the required LCDAs can be found in Refs. [7, 8]. The twist-2 light-cone matrix elements read (for simplicity, we do not explicitly show the gauge links in the light-cone operators, assuming the appropriate light-cone gauge) 
$\left\langle\pi^{+}(p^{\prime})\right|\bar{u}(z_{1+})\slashed{\bar{n}}\gamma_{5}d(z_{2+})\left|0\right\rangle=-if_{\pi}\left(p^{\prime}\bar{n}\right)\ \int_{0}^{1}du\ e^{iu(p^{\prime}\bar{n})(z_{1}n)/2+i(1-u)(p^{\prime}\bar{n})(z_{2}n)/2}\ \phi_{2\pi}(u),$ (4) $\left\langle\rho^{-}(p)\right|\bar{d}(z_{1-})\gamma_{\bot}^{\mu}\slashed{n}u(z_{2-})\left|0\right\rangle=if_{\rho}^{\bot}e_{\bot}^{\ast\mu}(pn)\ \int_{0}^{1}dy\ e^{iy(pn)(z_{1}\bar{n})/2+i(1-y)(pn)(z_{2}\bar{n})/2}\ \phi_{2\rho}^{\bot}(y),$ (5) where we use the auxiliary light-cone vectors $\displaystyle n=(1,0,0,-1),\ \bar{n}=(1,0,0,1),\ g^{\mu\nu}_{\bot}=g^{\mu\nu}-\frac{1}{2}(n^{\mu}\bar{n}^{\nu}+n^{\nu}\bar{n}^{\mu}),$ (6) $\displaystyle p^{\prime}=(p^{\prime}\bar{n})\frac{n}{2}+\frac{m^{2}_{\pi}}{(p^{\prime}\bar{n})}\frac{\bar{n}}{2},\ p=(pn)\frac{\bar{n}}{2}+\frac{m^{2}_{\rho}}{(pn)}\frac{n}{2},\ (p^{\prime}\bar{n})\sim(pn)\sim m_{c},$ (7) and the short notation for the arguments of the quark fields $q(z_{i+})\equiv q((z_{i}n)\bar{n}/2),\ q(z_{i-})\equiv\ q((z_{i}\bar{n})n/2).$ (8) The required twist-3 three-particle LCDAs are defined as $\left\langle\pi^{+}(p^{\prime})\right|\bar{u}(z_{1+})\slashed{\bar{n}}\gamma_{\bot}^{\mu}\gamma_{5}gG_{\bar{n}\mu}(z_{3+})d(z_{2+})\left|0\right\rangle=-2f_{3\pi}\left(p^{\prime}\bar{n}\right)^{2}\,\text{FT}\left[\phi_{3\pi}(u_{i})\right],$ (9) $\left\langle\rho^{-}(p,e)\right|\bar{d}(z_{1-})\slashed{n}gG_{\mu n}(z_{2-})u(z_{3-})\left|0\right\rangle=-\ f_{\rho}m_{\rho}(pz)^{2}e_{\bot}^{\ast\mu}\ \text{FT}\left[\phi_{3\rho}(y_{i})\right],$ (10) $\left\langle\rho^{-}(p,e)\right|\bar{d}(z_{1})\slashed{n}\gamma_{5}g\tilde{G}_{\mu n}(\lambda z)u(-z)\left|0\right\rangle=-if_{\rho}m_{\rho}\zeta_{3}(pz)^{2}e_{\bot}^{\ast\mu}\ 
\text{FT}\left[\tilde{\phi}_{3\rho}(y_{i})\right],$ (11) where $G_{\mu n}\equiv G_{\mu\nu}n^{\nu}$ and $\text{FT}\left[f(u_{i})\right]=\int Du_{i}\ e^{iu_{1}(p^{\prime}\bar{n})(z_{1}n)/2+iu_{2}(p^{\prime}\bar{n})(z_{2}n)/2+iu_{3}(p^{\prime}\bar{n})(z_{3}n)/2}f(u_{1},u_{2},u_{3}),$ (12) with $Du_{i}=du_{1}du_{2}du_{3}\delta(1-u_{1}-u_{2}-u_{3}).$ (13) The $\text{FT}\left[\phi_{3\rho}(y_{i})\right]$ is defined analogously but with $y_{i}(pn)(z_{i}\bar{n})$ in the Fourier exponents. The normalisation constants $f_{\pi,\rho}$, $\zeta_{3},\ f_{3\pi}$ and the models for the various LCDAs will be discussed below. The expression for the amplitude can be written as $A_{\rho\pi}=\left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}\psi\left|\psi(n,\boldsymbol{\epsilon})\right\rangle\frac{\sqrt{2M_{\psi}}}{2E}\ \frac{1}{4\pi}\int d\Omega\ \text{Tr}\left[\Pi_{1}\hat{A}_{Q}\right],$ (14) where $\hat{A}_{Q}$ describes the subprocess $Q\bar{Q}\rightarrow\rho\pi$ with the quark-antiquark pair in the initial state. The heavy-quark projector onto the triplet spin state $\Pi_{1}$ reads [19] $\Pi_{1}=\frac{-1}{2\sqrt{2}E(E+m)}\left(\frac{1}{2}\slashed{P}+m+\slashed{q}\right)\frac{\slashed{P}+2E}{4E}\slashed{\epsilon}\left(\frac{1}{2}\slashed{P}-m-\slashed{q}\right)\otimes\frac{\mathbf{1}}{\sqrt{N_{c}}},$ (15) and is normalised as $\displaystyle\text{Tr}\left[\Pi_{1}\Pi_{1}^{{\dagger}}\right]=4E^{2},$ (16) where $E$ is the heavy-quark energy, $p_{Q}=(E,\boldsymbol{q})$, $p_{\bar{Q}}=(E,-\boldsymbol{q})$ and $E=\sqrt{m_{c}^{2}+\boldsymbol{q}^{2}}$. The integration $d\Omega$ over the angles of the relative momentum $\boldsymbol{q}$ in Eq. (14) projects onto the state with $L=0$. 
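As an aside, the light-cone kinematics of Eqs. (6)-(7) can be verified numerically. The following sketch (my own illustrative check, with arbitrary momentum values) confirms that $n^{2}=\bar{n}^{2}=0$, $n\cdot\bar{n}=2$, and that the decomposed pion momentum satisfies $p^{\prime 2}=m_{\pi}^{2}$:

```python
import numpy as np

def mink_dot(a, b):
    """Minkowski product with signature (+,-,-,-)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

n    = np.array([1.0, 0.0, 0.0, -1.0])   # light-cone vector n of Eq. (6)
nbar = np.array([1.0, 0.0, 0.0,  1.0])   # light-cone vector nbar

assert mink_dot(n, n) == 0.0 and mink_dot(nbar, nbar) == 0.0
assert mink_dot(n, nbar) == 2.0

# Decompose p' = (p'.nbar) n/2 + m_pi^2/(p'.nbar) nbar/2 as in Eq. (7), for an
# arbitrary large light-cone component (p'.nbar) ~ m_c; the mass shell follows.
m_pi, p_nbar = 0.14, 1.5                 # GeV; illustrative values only
p_prime = p_nbar * n / 2 + (m_pi**2 / p_nbar) * nbar / 2
assert abs(mink_dot(p_prime, p_prime) - m_pi**2) < 1e-12
```

The last assertion holds for any choice of $(p^{\prime}\bar{n})$, since the two light-cone components multiply to $m_{\pi}^{2}$ by construction.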
Therefore the relevant amplitude $\hat{A}_{Q}$ is a function of the relative momentum squared $\boldsymbol{q}^{2}$ only; in the final expression (14) one substitutes $\boldsymbol{q}^{2}\rightarrow m_{c}^{2}\left\langle{v}^{2}\right\rangle$. Various technical details concerning the NRQCD matching can be found in Refs.[17, 19]. Calculation of the diagrams as in Fig.1 gives $A_{\rho^{-}\pi^{+}}=-\left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}\psi\left|\psi(^{3}S_{1})\right\rangle\sqrt{2M_{\psi}}\left(\pi\alpha_{s}\right)^{2}\frac{10}{27}\left(1+\frac{m_{c}}{E}\right)\frac{f_{\rho}^{\bot}\,f_{3\pi}}{\left[4E^{2}\right]^{2}}\left(J_{\pi}+\frac{\ f_{\rho}m_{\rho}\zeta_{3}\ f_{\pi}}{f_{\rho\bot}f_{3\pi}}\ J_{\rho}\right),$ (17) where the dimensionless collinear convolution integrals $J_{\pi}$ and $J_{\rho}$ describe the contributions with the twist-3 $\pi$- and $\rho$-meson LCDAs, respectively. These integrals also depend on the NRQCD parameter $\left\langle{v}^{2}\right\rangle$. 
In the leading-order limit $\left\langle{v}^{2}\right\rangle\rightarrow 0$, $E\rightarrow m_{c}$, Eq.(17) reproduces the result from Ref.[4] $A_{\rho^{-}\pi^{+}}^{\text{lo}}=-\left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}\psi\left|\psi(n,\boldsymbol{\epsilon})\right\rangle\sqrt{2M_{\psi}}\left(\pi\alpha_{s}\right)^{2}\frac{20}{27}\frac{f_{\rho}^{\bot}f_{3\pi}}{\left[4m_{c}^{2}\right]^{2}}\left(J_{\pi}^{\text{lo}}+\frac{\ f_{\rho}m_{\rho}\zeta_{3}f_{\pi}}{f_{\rho\bot}f_{3\pi}}\ J_{\rho}^{\text{lo}}\right),$ (18) where $\displaystyle J_{\pi}^{\text{lo}}$ $\displaystyle=\int Du_{i}\ \frac{\phi_{3\pi}(u_{i})}{u_{1}u_{2}u_{3}}\int_{0}^{1}dy\ \frac{\phi_{2\rho}^{\bot}(y)}{1-y}\frac{2u_{1}}{\left(y\bar{u}_{2}+u_{2}\bar{y}\right)\left(yu_{1}+\bar{y}\bar{u}_{1}\right)},$ (19) $\displaystyle J_{\rho}^{\text{lo}}$ $\displaystyle=\int_{0}^{1}du\ \frac{\phi_{2\pi}(u)}{1-u}\int Dy_{i}\ \frac{\left(\phi_{3\rho}+\tilde{\phi}_{3\rho}\right)(y_{i})}{y_{1}y_{2}y_{3}}\frac{1}{y_{2}\bar{u}+u\bar{y}_{2}},$ (20) with $\bar{x}=1-x$. The analytical expressions for the integrals $J_{\pi,\rho}$ in Eq.(17) are somewhat lengthy and are presented in the Appendix. In order to estimate these integrals we use the following models for the LCDAs $\phi_{2\rho}^{\bot}(y)=6y(1-y)\left(1+a_{2\rho}\ C_{2}^{3/2}(2y-1)\right),$ (21) $\phi_{2\pi}(u)=6u(1-u)\left(1+a_{2\pi}\ C_{2}^{3/2}(2u-1)\right),$ (22) $\phi_{3\rho}(y_{i})=360y_{1}y_{2}y_{3}^{2}(y_{1}-y_{2})\omega_{3\rho},$ (23) $\tilde{\phi}_{3\rho}(y_{i})=360y_{1}y_{2}y_{3}^{2}\left(1+\frac{\tilde{\omega}_{3\rho}}{\zeta_{3}}\frac{1}{2}\left(7y_{3}-3\right)\right),$ (24) $\phi_{3\pi}(u_{i})=360u_{1}u_{2}u_{3}^{2}\left(1+\omega_{3\pi}\frac{1}{2}(7u_{3}-3)\right).$ (25) The different nonperturbative moments, which enter the definitions (4)-(11) and (21)-(25), were estimated in Refs.[4, 7, 8]. Their values are summarised in Table 1. 
In the numerical estimates we fix the factorisation scale at $\mu=2$ GeV and use $\alpha_{s}\simeq 0.30$. Table 1: The values of the moments which parametrise the hadronic LCDAs. All values are given at the scale $\mu=2$ GeV. The pion moments are taken from Ref.[7], the $\rho$-meson ones from Ref.[8]. $f_{\pi}$, MeV | $f_{\rho}$, MeV | $f_{\rho}^{\bot}$, MeV | $a_{2\pi}$ | $a_{2\rho}$ | $f_{3\pi}$, GeV${}^{2}$ | $\zeta_{3\rho}$ | $\omega_{3\rho}$ | $\tilde{\omega}_{3\rho}$ | $\omega_{3\pi}$ ---|---|---|---|---|---|---|---|---|--- $131$ | $216$ | $143$ | $0.19$ | $0.11$ | $0.31\times 10^{-2}$ | $0.02$ | $0.09$ | $-0.04$ | $-1.1$ All the convolution integrals calculated with the models (21)-(25) are well defined, which confirms that collinear factorisation is also valid beyond the leading-order approximation. As a first step of the numerical analysis let us consider the leading-order estimate for the branching ratio of $J/\psi$. For that purpose we use the estimate of the NRQCD matrix element obtained in Ref.[18] $\left|\left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}\psi\left|J/\psi\right\rangle\right|^{2}\simeq 0.440.$ (26) For the various masses in Eq.(18) we use $M_{\psi}=3.1\ $GeV, $m_{\rho}=775\ $MeV, for the pole $c$-quark mass $m_{c}=1.4\ $GeV and for the total width $\Gamma_{J/\psi}=93\ $keV [1]. Then for the sum of all final states $\rho^{\pm}\pi^{\mp}$ and $\rho^{0}\pi^{0}$ we obtain $\text{Br}[J/\psi\rightarrow\rho\pi]_{\text{lo}}\simeq 1.0\%,$ (27) which is somewhat smaller than the corresponding experimental value $1.69(15)\%$. This updated result confirms the conclusion of Ref.[4] that the LO NRQCD approximation works sufficiently well for the $J/\psi$ decay.\footnote{We assume that a difference of about a factor of two is not a large discrepancy, taking into account various uncertainties from the scale setting, the pole mass $m_{c}$, etc., which we do not consider here.} 
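As a quick sanity check of the models above, one can verify numerically that the Gegenbauer term in Eqs. (21)-(22) integrates to zero against the asymptotic weight, so the two-particle LCDAs remain normalised to unity for the $a_{2\pi}$, $a_{2\rho}$ values of Table 1. A minimal Python sketch (not part of the paper):

```python
# Numerical check that the two-particle LCDA models of Eqs. (21)-(22)
# are normalised to unity for any Gegenbauer coefficient a2: the
# C_2^{3/2} term integrates to zero against the weight 6u(1-u).

def c2_32(x):
    # Gegenbauer polynomial C_2^{3/2}(x) = (3/2)(5x^2 - 1)
    return 1.5 * (5.0 * x * x - 1.0)

def phi2(u, a2):
    # asymptotic form times (1 + a2 * C_2^{3/2}(2u-1)), cf. Eqs. (21)-(22)
    return 6.0 * u * (1.0 - u) * (1.0 + a2 * c2_32(2.0 * u - 1.0))

def integrate(f, n=100_000):
    # simple midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

for a2 in (0.19, 0.11):                 # a_{2pi}, a_{2rho} from Table 1
    print(a2, round(integrate(lambda u: phi2(u, a2)), 6))   # -> 1.0 in both cases
```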
On the other hand, this approximation cannot describe the branching ratio $\psi^{\prime}\to\rho\pi$. Consider now the effect provided by the relativistic corrections in Eq.(17). One part is provided by the resummation of relativistic corrections in the factor $E=m_{c}\sqrt{1+\left\langle{v}^{2}\right\rangle}$ in Eq.(17). This effect can be understood as a transition from the scale $4m_{c}^{2}$ to the scale $M_{\psi}^{2}\simeq 4m_{c}^{2}(1+\left\langle{v}^{2}\right\rangle)$. These corrections reduce the ratio $Q_{\rho\pi}$ due to the factor $(1+\left\langle{v}^{2}\right\rangle_{J/\psi})^{2}/(1+\left\langle{v}^{2}\right\rangle_{\psi^{\prime}})^{2}\sim M_{\psi}^{4}/M_{\psi^{\prime}}^{4}\sim 0.50$. However, this cannot explain the very small value of $Q_{\rho\pi}$ in Eq.(1). The second effect of the relativistic corrections is associated with the modification of the hard kernels in the convolution integrals $J_{\rho,\pi}$. Because these integrals depend on the meson LCDAs, the resulting effect of the relativistic corrections is also sensitive to the hadronic nonperturbative structure. For the numerical calculation we use for $J/\psi$ the estimate from Ref.[18] $\left\langle{v}^{2}\right\rangle_{J/\psi}\approx 0.225,$ (28) and for the excited state $\psi^{\prime}$ we apply the following estimate $\left\langle{v}^{2}\right\rangle_{\psi^{\prime}}=\frac{M_{\psi^{\prime}}-M_{J/\psi}+E_{1}}{m_{c}}\approx 0.64,$ (29) where $E_{1}=\left\langle{v}^{2}\right\rangle_{J/\psi}m_{c}\simeq 315\ $MeV is the binding energy of $J/\psi$. The resulting value of $\left\langle{v}^{2}\right\rangle_{\psi^{\prime}}$ is much larger than $\left\langle{v}^{2}\right\rangle_{J/\psi}$, which can have a significant numerical effect and, therefore, affect the value of $Q_{\rho\pi}$. The given calculation of the relativistic corrections is complete at relative order $v^{2}$ only. 
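The estimate (29) and the mass-scale suppression factor quoted above are easy to reproduce numerically; the sketch below (not part of the paper) assumes $M_{J/\psi}=3.097$ GeV and $M_{\psi^{\prime}}=3.686$ GeV, which are PDG values not listed in the text:

```python
# Reproduces Eq. (29) and the mass-scale suppression factor quoted in the text.
# Assumed inputs: M_J/psi = 3.097 GeV, M_psi' = 3.686 GeV (PDG values,
# not listed in the paper); m_c = 1.4 GeV as in the text.
v2_jpsi = 0.225                        # Eq. (28)
m_c = 1.4                              # pole c-quark mass, GeV
M_jpsi, M_psi2 = 3.097, 3.686          # GeV (assumed PDG values)

E1 = v2_jpsi * m_c                     # binding energy of J/psi, ~0.315 GeV
v2_psi2 = (M_psi2 - M_jpsi + E1) / m_c
print(round(v2_psi2, 2))               # ~0.65, quoted as 0.64 in Eq. (29)

factor = (1 + v2_jpsi) ** 2 / (1 + v2_psi2) ** 2
mass_ratio = (M_jpsi / M_psi2) ** 4
print(round(factor, 2), round(mass_ratio, 2))   # both of order 0.5
```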
The resummation of the higher orders $\left\langle{v}^{2}\right\rangle^{n}$ with $n\geq 2$ describes only the part of the relativistic corrections associated with the quark-antiquark wave function [19]. We use this approximation in order to study a possible effect from higher-order contributions. Therefore, for comparison, we present the values of the integrals in Eq.(17) obtained in the leading-order approximation $J^{\text{lo}}$ ($\left\langle{v}^{2}\right\rangle\rightarrow 0$), in the next-to-leading approximation $J^{\text{nlo}}$, which takes into account the next-to-leading correction, $J^{\text{nlo}}=J^{\text{lo}}+\left\langle{v}^{2}\right\rangle J^{(1)}$, and the integral $J$, which includes all powers $\left\langle{v}^{2}\right\rangle^{n}$: $J=J^{\text{lo}}+\sum_{n\geq 1}\left\langle{v}^{2}\right\rangle^{n}J^{(n)}$. The total integral in Eq.(17) is described by the sum of two contributions with the different LCDAs $J_{\rho\pi}=J_{\pi}+\frac{\ f_{\rho}m_{\rho}\zeta_{3}\ f_{\pi}}{f_{\rho\bot}f_{3\pi}}\ J_{\rho},$ (30) where, schematically, $J_{\pi}=\phi_{3\pi}\ast T_{\pi}\ast\phi_{2\rho}^{\bot}$ and $J_{\rho}=\phi_{3\rho}\ast T_{\rho}\ast\phi_{2\pi}+\tilde{\phi}_{3\rho}\ast\tilde{T}_{\rho}\ast\phi_{2\pi}$ (the asterisks denote convolution integrals; $T_{\pi,\rho}$ are the hard kernels). Using the parameters from Table 1 one finds $\frac{\ f_{\rho}m_{\rho}\zeta_{3}\ f_{\pi}}{f_{\rho\bot}f_{3\pi}}\approx 0.99.$ (31) Therefore the normalisation couplings in the definitions (4)-(11) do not provide any numerical difference between the two terms in Eq.(30). 
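With the central values of Table 1, the combination of couplings in Eq. (31) can be checked directly (a small numerical sketch, not from the paper):

```python
# Check of Eq. (31): f_rho * m_rho * zeta_3 * f_pi / (f_rho_perp * f_3pi),
# using the central values of Table 1 (GeV units).
f_pi, f_rho, f_rho_perp = 0.131, 0.216, 0.143
zeta_3, f_3pi, m_rho = 0.02, 0.31e-2, 0.775

ratio = f_rho * m_rho * zeta_3 * f_pi / (f_rho_perp * f_3pi)
print(round(ratio, 2))   # -> 0.99, as in Eq. (31)
```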
The results for the convolution integrals (30) are presented in Table 2. Table 2: Numerical results for the convolution integrals $J_{\rho\pi}$ | $J_{\rho\pi}^{\text{lo}}$ | $J_{\rho\pi}^{\text{nlo}}/J_{\rho\pi}^{\text{lo}}$ | $\ J_{\rho\pi}/J_{\rho\pi}^{\text{lo}}$ ---|---|---|--- $J/\psi$ | $630$ | $0.53$ | $0.45$ $\psi^{\prime}$ | $630$ | $-0.46$ | $-0.65$ The effect of the relativistic corrections is negative and the values of the LO integrals are substantially reduced. Notice that, neglecting the higher-order corrections in $v^{2}$ in the square of the integral, one gets in the case of $J/\psi$ the strong cancellation $\displaystyle|J_{\rho\pi}^{\text{nlo}}|^{2}=(J_{\rho\pi}^{\text{lo}})^{2}(2J_{\rho\pi}^{\text{nlo}}/J_{\rho\pi}^{\text{lo}}-1)+\mathcal{O}(v^{4})\simeq 0.06(J_{\rho\pi}^{\text{lo}})^{2}.$ (32) Therefore we assume that it is better to take the large NLO correction exactly, i.e. not to expand the square of the integral in powers of $v^{2}$. At the same time, the numerical effect from the other higher-order corrections is already much smaller. For $\psi^{\prime}\to\rho\pi$ the numerical effect is bigger because $\left\langle\boldsymbol{v}^{2}\right\rangle_{\psi^{\prime}}$ is larger. One can also see that the dominant part of the correction is provided by the contribution of relative order $v^{2}$, which is obtained exactly in this calculation. The numerical dominance of this correction can be explained by the numerical enhancement of the corresponding convolution integrals, in the same way as for the baryon decays [15]. Let us assume that the relativistic correction of order $v^{2}$ provides the dominant numerical effect for the $J/\psi$ and $\psi^{\prime}$ states. Then this allows one to suggest a possible explanation of the small $\psi^{\prime}\to\rho\pi$ width, i.e. of the “$\rho\pi$-puzzle”. 
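The cancellation in Eq. (32) follows directly from the $J/\psi$ entry of Table 2; the sketch below (not from the paper) also shows why keeping the square of the NLO integral exactly gives a noticeably different result:

```python
# Check of the cancellation in Eq. (32) using the J/psi entry of Table 2:
# |J_nlo|^2 / (J_lo)^2 = 2*(J_nlo/J_lo) - 1 + O(v^4).
r_nlo = 0.53                 # J_nlo/J_lo for J/psi, Table 2
trunc = 2 * r_nlo - 1        # square truncated at O(v^2), relative to (J_lo)^2
exact = r_nlo ** 2           # square kept exactly
print(round(trunc, 2), round(exact, 2))   # -> 0.06 0.28
```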
The NRQCD description of the decay amplitudes also involves the $\mathcal{O}(\alpha_{s})$ NLO QCD radiative correction, which can also provide a substantial numerical effect. Usually this contribution is considered to be of the same order as the relativistic corrections of relative order $v^{2}$. The value of the radiative correction is the same for the $J/\psi$ and $\psi^{\prime}$ states, except for the NRQCD matrix element. Therefore, if the radiative $\mathcal{O}(\alpha_{s})$ correction is positive and large enough to compensate the negative contribution $J^{\text{nlo}}$ for $\psi^{\prime}$, then this naturally explains the small width for $\psi^{\prime}$. On the other hand, such a positive contribution would improve the description of $J/\psi\to\rho\pi$ by increasing the value of the convolution integral, i.e. in this case the $\mathcal{O}(\alpha_{s})$ correction also compensates the negative effect of the relativistic correction. This is in agreement with the observation that the leading-order description of $J/\psi\to\rho\pi$ provides a qualitatively good estimate. Potentially this analysis can also be applied to other meson decays. A good feature of collinear factorisation is that the hadronic nonperturbative content is described in terms of universal, process-independent LCDAs. Many of these functions have already been studied in the literature. Even if the hard kernels are the same, the differences in the models for the LCDAs can affect the numerical balance and change the value of $Q_{hh^{\prime}}$. Consider, for example, the decay of S-wave charmonia into the $\gamma\pi^{0}$ final state. In this case the decay amplitude is described by the same diagrams as in Fig.1(a), but with the photon LCDAs instead of the $\rho$-meson ones. These diagrams describe the photon as a hadron, i.e. such contributions are sensitive to the nonperturbative components of the photon wave function. 
The contributions with the perturbative photon coupling appear from the diagrams in Fig.1(b) only and are therefore suppressed by the electromagnetic coupling $\alpha$ or by an additional QCD coupling $\alpha_{s}$. In such a situation the contributions with a nonperturbative photon can provide a sizeable impact, see e.g. the discussion in Ref.[21]. The data for the branching fractions $\psi(n)\to\gamma\pi$ are known [1] $\text{Br}[J/\psi\rightarrow\gamma\pi^{0}]=3.56(17)\times 10^{-5},\ \ \text{Br}[\psi^{\prime}\rightarrow\gamma\pi^{0}]=1.04(22)\times 10^{-6},$ (33) which yields $Q_{\gamma\pi}\simeq 0.03.$ (34) The width $\Gamma[J/\psi\rightarrow\gamma\pi^{0}]$ can be well estimated using the data for $\Gamma[J/\psi\rightarrow\rho\pi^{0}]$ and the VDM model [4].\footnote{We assume that the contribution of $J/\psi\to\gamma^{*}\to\gamma\pi$ is subleading, in contrast to the analysis in Ref.[4]. We guess that this contribution is overestimated in [4] due to the specific model of the pion LCDA.} This indirectly supports the picture with the dominant contribution from the nonperturbative photon coupling. However, the ratio $Q_{\gamma\pi}$ is about an order of magnitude larger than $Q_{\rho\pi}$. We will use the models for the photon LCDAs from Ref.[21]. The twist-2 light-cone matrix element reads $\left\langle\gamma(q,e)\right|\bar{q}(z_{1-})\slashed{\bar{n}}\gamma_{\bot}^{\mu}q(z_{2-})\left|0\right\rangle=ie_{q}e\ f_{\gamma}e_{\bot}^{\ast\mu}(qn)\ \int_{0}^{1}dy\ e^{iy(qn)(z_{1}\bar{n})/2+i(1-y)(qn)(z_{2}\bar{n})/2}\ \phi_{2\gamma}^{\bot}(y),$ (35) where $e_{u}=2/3$, $e_{d}=-1/3$ and the electric charge $e=\sqrt{4\pi\alpha}$. 
The model for $\phi_{2\gamma}^{\bot}$ reads [21] $\phi_{2\gamma}^{\bot}(y)\simeq 6y(1-y),\ \ \ \ f_{\gamma}(2\text{GeV})\simeq-47\ \text{MeV}.$ (36) The twist-3 DA matrix elements are defined as $\left\langle\gamma(p)\right|\bar{q}(z_{1-})\slashed{n}gG_{\mu n}(z_{3-})q(z_{2-})\left|0\right\rangle=-\ e_{q}e\ f_{3\gamma}(qn)^{2}\varepsilon_{\bot}^{\ast\mu}\ \text{FT}\left[\phi_{3\gamma}(y_{i})\right],$ (37) $\left\langle\gamma(p)\right|\bar{q}(z_{1-})\slashed{n}\gamma_{5}g\tilde{G}_{\mu n}(z_{2-})q(z_{3-})\left|0\right\rangle=-ie_{q}e\ f_{3\gamma}(qn)^{2}\varepsilon_{\bot}^{\ast\mu}\ \text{FT}\left[\tilde{\phi}_{3\gamma}(y_{i})\right],$ (38) where the Fourier transformation is the same as in Eq.(12). The models for the twist-3 LCDAs read [21] $\phi_{3\gamma}(y_{i})=360y_{1}y_{2}y_{3}^{2}\left(y_{1}-y_{2}\right)\ \omega_{3\gamma},$ (39) $\tilde{\phi}_{3\gamma}(y_{i})=360y_{1}y_{2}y_{3}^{2}\left(1+\tilde{\omega}_{3\gamma}\frac{1}{2}\left(7y_{3}-3\right)\right),$ (40) where $f_{3\gamma}(2\text{GeV})=-0.32\times 10^{-2}\text{GeV}^{2},\ \ \omega_{3\gamma}\approx\omega_{3\rho},\ \ \ \ \tilde{\omega}_{3\gamma}\approx\tilde{\omega}_{3\rho}/\zeta_{3}.$ (41) The $\gamma\pi$ decay amplitude can be obtained from Eq.(17) by substituting the photon LCDAs instead of the $\rho$-meson ones. 
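Using the photon couplings above together with $f_{\pi}$ and $f_{3\pi}$ from Table 1, the photon analogue of the coupling ratio (31) can be evaluated numerically (a sketch, not from the paper); the central values give $\approx 2.88$, close to the $2.92$ quoted in Eq. (43) below, the small difference presumably coming from rounding of the inputs:

```python
# Photon analogue of the coupling ratio (31): f_3gamma*f_pi / (f_3pi*f_gamma),
# with f_gamma and f_3gamma at mu = 2 GeV from Eqs. (36) and (41),
# and f_pi, f_3pi from Table 1 (GeV units).
f_pi, f_3pi = 0.131, 0.31e-2
f_gamma, f_3gamma = -0.047, -0.32e-2

ratio = f_3gamma * f_pi / (f_3pi * f_gamma)
print(round(ratio, 2))   # ~2.88 with these central values; Eq. (43) quotes 2.92
```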
The LO amplitude reads $A_{\gamma\pi^{0}}^{\text{lo}}=-\left\langle 0\right|\chi^{{\dagger}}\boldsymbol{\sigma}\cdot\boldsymbol{\epsilon}\psi\left|\psi(n,\boldsymbol{\epsilon})\right\rangle\sqrt{2M_{\psi}}\left(\pi\alpha_{s}\right)^{2}\sqrt{2\pi\alpha}\frac{20}{27}\frac{f_{\gamma}\,f_{3\pi}}{\left[4m_{c}^{2}\right]^{2}}\left(J_{\gamma\pi}^{\text{lo}}+\frac{\ f_{3\gamma}\ f_{\pi}}{f_{3\pi}f_{\gamma}}\ J_{\gamma\pi}^{\text{lo}}\right).$ (42) The ratio of the normalisation couplings in Eq.(42) yields ($\mu=2$ GeV) $\frac{\ f_{3\gamma}\ f_{\pi}}{f_{3\pi}f_{\gamma}}\approx 2.92,$ (43) which is different from the analogous ratio (31) for the $\rho$-meson. The leading-order numerical estimate gives $\text{Br}[J/\psi\rightarrow\gamma\pi^{0}]_{\text{lo}}\simeq 3.82\times 10^{-5},$ (44) which agrees very well with the data. The results for the total convolution integral $J_{\gamma\pi}$ are presented in Table 3. Table 3: Numerical results for the convolution integrals $J_{\gamma\pi}$ | $J_{\gamma\pi}^{\text{lo}}$ | $\ J_{\gamma\pi}^{\text{nlo}}/J_{\gamma\pi}^{\text{lo}}$ | $\ J_{\gamma\pi}/J_{\gamma\pi}^{\text{lo}}$ ---|---|---|--- $J/\psi$ | $932$ | $0.62$ | $0.45$ $\psi^{\prime}$ | $932$ | $-0.49$ | $-0.74$ Comparing with the analogous results for the $\rho\pi$ channel, one finds that both descriptions are qualitatively similar despite the different ratio (43) and the differences between the LCDAs $\phi_{2\gamma}^{\bot}$ and $\phi_{2\rho}^{\bot}$. Therefore one can again assume that the $\mathcal{O}(\alpha_{s})$ radiative corrections also play a crucial role in understanding the value of the $\gamma\pi$ decay width. Moreover, the contributions with a perturbative photon, as in Fig. 1(b), can probably explain the larger value of $Q_{\gamma\pi}$. ## 3 Conclusions In conclusion, we calculated and investigated the relativistic corrections to the decay amplitudes $\psi(n)\to\rho\pi$ and $\psi(n)\to\gamma\pi$ within the pQCD (NRQCD and collinear factorisation) framework. 
This calculation includes the exact correction of relative order $v^{2}$ and a subset of the higher-order corrections associated with the quark-antiquark wave function. Numerical estimates show that the order-$v^{2}$ correction is large and gives the dominant numerical effect, which can be related to the structure of the collinear integrals. If this observation is not affected by other higher-order relativistic corrections, then one has to consider the relative $v^{2}$ contribution as a special case. The obtained relativistic corrections are negative and large. In the case of $\psi^{\prime}\to\rho\pi$ the relative $v^{2}$ contribution is much larger than the leading-order one. The different effects of the relativistic corrections for $J/\psi\to\rho\pi$ and $\psi^{\prime}\to\rho\pi$ suggest a possible explanation for the $\rho\pi$-puzzle. If the QCD radiative correction is positive and large enough, then it interferes destructively with the relativistic correction for $\psi^{\prime}\to\rho\pi$, giving the small branching fraction. At the same time, such a radiative correction would improve the description of $J/\psi\to\rho\pi$ by reducing the negative effect of the relativistic correction. Therefore, we believe that further investigation of the relative order $v^{4}$ corrections and of the QCD radiative corrections can help to clarify this scenario. We also expect that the same approach can be used for the analysis of other hadronic decay channels. As the simplest example, the decay $\psi(n)\to\gamma\pi$ was also considered. We studied the contribution which is given by similar diagrams but with a nonperturbative photon instead of the $\rho$-meson. Despite the difference between the twist-2 LCDAs of the photon and the $\rho$-meson, the qualitative effect of the relativistic corrections is quite similar to $\psi\to\rho\pi$: they are also large and negative. Therefore one can expect that a similar scenario with radiative corrections is also applicable here. 
The only difference with $\psi(n)\to\rho\pi$ is provided by the contributions with a perturbative photon coupling, which are suppressed by $\mathcal{O}(\alpha_{s})$ or $\mathcal{O}(\alpha)$. Therefore, it may be that these effects are responsible for the larger value of $Q_{\gamma\pi}$. ## 4 Appendix Here we provide the analytical expressions for the integrals $J_{\pi}$ and $J_{\rho}$ introduced in Eq.(17). In order to simplify the notation we use $\left\langle v^{2}\right\rangle\equiv\boldsymbol{v}^{2},\ \ \delta=1-1/\sqrt{1+\boldsymbol{v}^{2}}.$ (45) The first integral in Eq.(17) reads $J_{\pi}\left(\boldsymbol{v}^{2}\right)=\int Du_{i}\ \frac{\phi_{3\pi}(u_{i})}{u_{1}u_{2}u_{3}}\int_{0}^{1}dy\ \frac{\phi_{2\rho}^{\bot}(y)}{y\bar{y}}\left(\frac{2A_{\pi}}{D_{1}D_{3}}+\frac{B_{\pi}}{D_{1}D_{2}}\right),\ \ \bar{y}=1-y,$ (46) where $D_{i}=\delta_{i1}\ \left(y_{1}\bar{u}_{2}+\bar{y}_{1}u_{2}\right)+\delta_{i2}\ \left(y_{2}\bar{u}_{1}+\bar{y}_{2}u_{1}\right)+\delta_{i3}\ u_{3},$ (47) with $y_{1}=y,\ y_{2}=\bar{y}.$ (48) The symbol $\delta_{ik}$ denotes the Kronecker delta. 
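For orientation, the expansion parameter $\delta$ of Eq. (45) stays numerically small even for the rather large $\left\langle v^{2}\right\rangle_{\psi^{\prime}}$ (a quick check, not from the paper):

```python
# delta = 1 - 1/sqrt(1 + v^2), Eq. (45), evaluated for the two states
def delta(v2):
    return 1.0 - (1.0 + v2) ** -0.5

print(round(delta(0.225), 3))   # J/psi, Eq. (28):  -> 0.096
print(round(delta(0.64), 3))    # psi',  Eq. (29):  -> 0.219
```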
The numerators $A_{\pi}$ and $B_{\pi}~{}$are given by the sums $A_{\pi}=\sum_{k=0}^{4}f_{k}^{A}I_{k}[13],\ \ \ B_{\pi}=\sum_{k=0}^{4}f_{k}^{B}I_{k}[12],$ (49) where $I_{k}[ij]=\frac{1}{2}\int_{-1}^{1}d\eta\ \frac{\boldsymbol{v}^{k}\eta^{k}}{\left(1+\boldsymbol{v}\eta\ a_{i}\right)\left(1-\boldsymbol{v}\eta\ a_{j}\right)}=\frac{\boldsymbol{v}^{k}}{a_{i}+a_{j}}\sum_{n=0}^{\infty}\boldsymbol{v}^{n}\frac{a_{j}^{n+1}+(-1)^{n}a_{i}^{n+1}}{n+1+k}\frac{1}{2}\left[1+(-1)^{n+k}\right].\ \ $ (50) with $a_{j}=\delta_{1j}\left(1-\delta\right)\frac{y_{1}-u_{2}}{y_{1}\bar{u}_{2}+\bar{y}_{1}u_{2}}+\delta_{2j}\left(1-\delta\right)\frac{y_{2}-u_{1}}{y_{2}\bar{u}_{1}+\bar{y}_{2}u_{1}}-\delta_{3j}\left(1-\delta\right).$ (51) The coefficients $f_{k}^{A,B}\equiv f_{k}^{A,B}(u_{i},y;\delta)$ in Eq.(49) read $f_{0}^{A}=\frac{\delta}{2}\left(3u_{3}-2-\delta\right),\ \ \ \ f_{1}^{A}=\frac{\delta}{2}\frac{\left(1-\delta\right)}{\left(2-\delta\right)}u_{3},$ (52) $f_{2}^{A}=\frac{1}{2}\frac{\left(1-\delta\right)^{2}}{\left(2-\delta\right)^{2}}\left(4-3(2-\delta)u_{3}+2\delta(1-\delta)\right),$ (53) $f_{3}^{A}=-\frac{1}{2}\frac{\left(1-\delta\right)^{3}}{\left(2-\delta\right)^{2}}u_{3},\ \ f_{4}^{A}=-\frac{1}{2}\frac{\left(1-\delta\right)^{4}}{\left(2-\delta\right)^{2}}.$ (54) $\displaystyle f_{0}^{B}$ $\displaystyle=u_{1}y_{1}+u_{1}y_{2}-\frac{\delta}{2}(u_{1}+u_{2}+y_{1}+y_{2}-\delta)$ (55) $\displaystyle+\frac{\delta}{2-\delta}\left(u_{1}y_{1}+u_{1}y_{2}-(2-\delta)\left(u_{1}+u_{2}+y_{1}+y_{2}\right)+(2-\delta)^{2}\right),$ (56) $f_{1}^{B}=\frac{1}{2}\frac{\left(1-\delta\right)}{\left(2-\delta\right)}\left\\{4(u_{1}y_{1}-u_{2}y_{2})+\delta(u_{1}+y_{1}-u_{2}-y_{2})\right\\},$ (57) $\displaystyle f_{2}^{B}$ $\displaystyle=\frac{1}{2}\frac{\left(1-\delta\right)^{2}}{\left(2-\delta\right)^{2}}\left\\{6\left(u_{1}+u_{2}+y_{1}+y_{2}\right)-2\left(u_{1}y_{1}+u_{1}y_{2}\right)-8\right.$ (58) $\displaystyle\left.+\delta\left(4-3\left(u_{1}+u_{2}+y_{1}+y_{2}\right)\right)+2\delta(2-\delta)\right\\},$ 
(59) $f_{3}^{B}=-\frac{1}{2}\frac{\left(1-\delta\right)^{3}}{\left(2-\delta\right)^{2}}\left(u_{1}+y_{1}-u_{2}-y_{2}\right),\ \ f_{4}^{B}=-\frac{1}{2}\frac{\left(1-\delta\right)^{4}}{\left(2-\delta\right)^{2}}.$ (60) The $\rho$-meson integral in Eq.(17) reads $J_{\rho}=\int_{0}^{1}du\ \frac{\phi_{2\pi}(u)}{u\bar{u}}\int Dy_{i}\ \ \frac{1}{y_{1}y_{2}y_{3}}\left(\frac{2A_{\rho}}{y_{3}D_{2}}+\frac{B_{\rho}}{D_{1}D_{2}}\right).$ (61) The numerators $A_{\rho}$ and $B_{\rho}$ can be written as $A_{\rho}=\phi_{3\rho}(y_{i})\sum_{k=0}^{4}\left[f_{k}^{A}\right]I_{k}[23]+\tilde{\phi}_{3\rho}(y_{i})\sum_{k=0}^{4}\left[\tilde{f}_{k}^{A}\right]I_{k}[23],$ (62) $B_{\rho}=\phi_{3\rho}(y_{i})\sum_{k=0}^{4}\left[f_{k}^{B}\right]I_{k}[12]+\tilde{\phi}_{3\rho}(y_{i})\sum_{k=0}^{4}\left[\tilde{f}_{k}^{B}\right]I_{k}[12],$ (63) where the integrals $I_{k}$ are defined in Eq.(50) with a bit different combination $a_{j}$ $a_{j}=\delta_{1j}\left(1-\delta\right)\frac{y_{1}-u_{2}}{y_{1}\bar{u}_{2}+\bar{y}_{1}u_{2}}+\delta_{2j}\left(1-\delta\right)\frac{y_{2}-u_{1}}{y_{2}\bar{u}_{1}+\bar{y}_{2}u_{1}}+\delta_{3j}\left(1-\delta\right)\ ,$ (64) and we again use for the two-particle LCDA $\ u_{1}=u$, $u_{2}=1-u$. 
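The series representation of the angular integrals $I_{k}[ij]$ in Eq. (50) converges since $|\boldsymbol{v}a_{i,j}|<1$, which is guaranteed by $|a_{j}|\leq 1-\delta$. A standalone numerical cross-check (arbitrary test values for $v$, $a_{i}$, $a_{j}$; not from the paper):

```python
# Cross-check of Eq. (50): the angular integral I_k[ij] equals its power
# series in v, provided |v*a_i|, |v*a_j| < 1. The values of v, ai, aj
# below are arbitrary test inputs, not physical parameters.
def I_direct(k, v, ai, aj, n=100_000):
    # midpoint quadrature of (1/2) * int_{-1}^{1} deta of the integrand
    h = 2.0 / n
    s = 0.0
    for i in range(n):
        eta = -1.0 + (i + 0.5) * h
        s += (v * eta) ** k / ((1 + v * eta * ai) * (1 - v * eta * aj))
    return 0.5 * s * h

def I_series(k, v, ai, aj, nmax=80):
    # right-hand side of Eq. (50), truncated at n = nmax
    total = 0.0
    for n in range(nmax):
        total += (v ** n * (aj ** (n + 1) + (-1) ** n * ai ** (n + 1))
                  / (n + 1 + k) * 0.5 * (1 + (-1) ** (n + k)))
    return v ** k / (ai + aj) * total

v, ai, aj = 0.45, 0.6, 0.3
for k in range(5):
    assert abs(I_direct(k, v, ai, aj) - I_series(k, v, ai, aj)) < 1e-8
print("Eq. (50) series agrees with direct integration for k = 0..4")
```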
The coefficients $f_{k}^{A,B}$ and $\tilde{f}_{k}^{A,B}$ defined in Eqs.(62) and (63) read $\displaystyle f_{0}^{A}$ $\displaystyle=\frac{1}{4}\left(u_{1}(2y_{3}-\delta)+\delta\left(\delta- y_{2}-y_{3}\right)\right)+\frac{\delta^{2}}{2}$ (65) $\displaystyle\ \ \ \ \ \ \ \ \ \ +\frac{1}{4}\frac{\delta}{(2-\delta)}\left(u_{1}\left(6+4y_{3}-3\delta\right)+\left(2-\delta\right)\left(3y_{2}+2y_{3}-2-3\delta\right)\right),$ (66) $\displaystyle f_{1}^{A}$ $\displaystyle=-\frac{1}{4}\frac{1-\delta}{2-\delta}\left(4u_{1}+\delta\right)y_{3},$ (67) $f_{2}^{A}=-\frac{1}{4}\frac{(1-\delta)^{2}}{(2-\delta)^{2}}\left(2u_{1}y_{3}+(2-\delta)\left(2u_{1}+2y_{2}+y_{3}\right)-2(1-\delta)(2-\delta)\right),$ (68) $f_{3}^{A}=\frac{1}{4}\frac{(1-\delta)^{3}}{(2-\delta)^{2}}y_{3},\ \ f_{4}^{A}=0.$ (69) $\displaystyle f_{0}^{B}$ $\displaystyle=-f_{2}^{B}=\frac{\delta}{4}\left(u_{1}-u_{2}-y_{1}+y_{2}\right),\ $ (70) $\displaystyle\ \ f_{1}^{B}$ $\displaystyle=-f_{3}^{B}=-\frac{\delta}{4}(1-\delta)\left(u_{1}+u_{2}-y_{1}-y_{2}\right),\ \ \ f_{4}^{B}=0.$ (71) $\displaystyle\tilde{f}_{0}^{A}$ $\displaystyle=\frac{1}{4}\left(u_{1}(2y_{3}-\delta)+\delta\left(\delta- y_{2}-y_{3}\right)\right)$ (72) $\displaystyle+\frac{1}{4}\frac{\delta}{(2-\delta)}\left(u_{1}\left(2+4y_{3}-\delta\right)+\left(2-\delta\right)\left(2+y_{2}-2y_{3}-3\delta\right)\right),$ (73) $\tilde{f}_{1}^{A}=-\frac{1}{4}\frac{1-\delta}{2-\delta}\left(4u_{1}y_{3}+\delta\left(2u_{1}-2y_{2}+y_{3}\right)\right),$ (74) $\tilde{f}_{2}^{A}=-\frac{1}{4}\frac{(1-\delta)^{2}}{(2-\delta)^{2}}\left(2u_{1}y_{3}-3y_{3}(2-\delta)+\delta(2-\delta)\right),$ (75) $\tilde{f}_{3}^{A}=\frac{1}{4}\frac{(1-\delta)^{3}}{(2-\delta)^{2}}\left(2u_{1}-2y_{2}+y_{3}\right),\ \ \ \tilde{f}_{4}^{A}=-\frac{1}{2}\frac{(1-\delta)^{4}}{(2-\delta)^{2}}.$ (76) $\tilde{f}_{0}^{B}=\frac{\delta}{4}\left(3\left(u_{1}+u_{2}+y_{1}+y_{2}\right)-4-2\delta\right),\ \ \ \tilde{f}_{1}^{B}=-\frac{\delta}{4}\frac{1-\delta}{2-\delta}\left(u_{1}-u_{2}+y_{1}-y_{2}\right),$ (77) 
$\tilde{f}_{2}^{B}=\frac{1}{4}\frac{(1-\delta)^{2}}{(2-\delta)^{2}}\left(8-3\left(2-\delta\right)\left(u_{1}+u_{2}+y_{1}+y_{2}\right)+4\delta(1-\delta)\right),$ (78) $\tilde{f}_{3}^{B}=\frac{1}{4}\frac{(1-\delta)^{3}}{(2-\delta)^{2}}\left(u_{1}-u_{2}+y_{1}-y_{2}\right),\ \ \ \tilde{f}_{4}^{B}=-\frac{1}{2}\frac{(1-\delta)^{4}}{(2-\delta)^{2}}.$ (79) ## References * [1] R. L. Workman et al. [Particle Data Group], PTEP 2022 (2022), 083C01 * [2] N. Brambilla et al. [Quarkonium Working Group], hep-ph/0412158. * [3] X. H. Mo, C. Z. Yuan and P. Wang, Chin. Phys. C 31 (2007), 686-701 [arXiv:hep-ph/0611214 [hep-ph]]. * [4] V. L. Chernyak and A. R. Zhitnitsky, Phys. Rept. 112 (1984) 173. * [5] A. R. Zhitnitsky, I. R. Zhitnitsky and V. L. Chernyak, Yad. Fiz. 41 (1985), 199-208 * [6] P. Ball and V. M. Braun, [arXiv:hep-ph/9808229 [hep-ph]]. * [7] P. Ball, V. M. Braun and A. Lenz, JHEP 05 (2006), 004 [arXiv:hep-ph/0603063 [hep-ph]]. * [8] P. Ball and G. W. Jones, JHEP 03 (2007), 069 [arXiv:hep-ph/0702100 [hep-ph]]. * [9] V. Chernyak, [arXiv:hep-ph/9906387 [hep-ph]]. * [10] Y. Q. Chen and E. Braaten, Phys. Rev. Lett. 80 (1998), 5060-5063 [arXiv:hep-ph/9801226 [hep-ph]]. * [11] M. Suzuki, Phys. Rev. D 63 (2001), 054021 [arXiv:hep-ph/0006296 [hep-ph]]. * [12] J. L. Rosner, Phys. Rev. D 64 (2001), 094002 [arXiv:hep-ph/0105327 [hep-ph]]. * [13] G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D 51 (1995) 1125 [Phys. Rev. D 55 (1997) 5853] [hep-ph/9407339]. * [14] E. Braaten and J. Lee, Phys. Rev. D 67 (2003), 054007 [erratum: Phys. Rev. D 72 (2005), 099901] [arXiv:hep-ph/0211085 [hep-ph]]. * [15] N. Kivel, “A study of relativistic corrections to $J/\psi\rightarrow p\bar{p}$ decay,” [arXiv:2211.13603 [hep-ph]]. * [16] J. H. Kuhn, J. Kaplan and E. G. O. Safiani, Nucl. Phys. B 157 (1979), 125-144 * [17] G. T. Bodwin and A. Petrelli, Phys. Rev. D 66 (2002), 094011 [erratum: Phys. Rev. D 87 (2013) no.3, 039902] [arXiv:hep-ph/0205210 [hep-ph]]. * [18] G. T. Bodwin, H. S. Chung, D. 
Kang, J. Lee and C. Yu, Phys. Rev. D 77 (2008), 094017 [arXiv:0710.0994 [hep-ph]]. * [19] G. T. Bodwin, J. Lee and C. Yu, Phys. Rev. D 77 (2008), 094018 [arXiv:0710.0995 [hep-ph]]. * [20] G. T. Bodwin, D. Kang and J. Lee, Phys. Rev. D 74 (2006), 014014 doi:10.1103/PhysRevD.74.014014 [arXiv:hep-ph/0603186 [hep-ph]]. * [21] P. Ball, V. M. Braun and N. Kivel, Nucl. Phys. B 649 (2003), 263-296 [arXiv:hep-ph/0207307 [hep-ph]].
JUNA Collaboration # Deep underground laboratory measurement of 13C($\alpha$,$n$)16O in the Gamow windows of the s- and i-processes B. Gao Joint department for nuclear physics, Lanzhou University and Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China T.Y. Jiao CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China Y.T. Li CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China H. Chen CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China W.P. Lin Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China Z. An Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China L.H. Ru CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China Z.C. 
Zhang CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China X.D. Tang <EMAIL_ADDRESS>Joint department for nuclear physics, Lanzhou University and Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China X.Y. Wang CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China N.T. Zhang CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China X. Fang Sino-French Institute of Nuclear Engineering and Technology, Sun Yat-sen University, Zhuhai, Guangdong 519082, People’s Republic of China D.H. Xie Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China Y.H. Fan CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China L. 
Ma Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China X. Zhang Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China F. Bai Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China P. Wang Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China Y.X. Fan Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China G. Liu Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China H.X. Huang China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Q. Wu Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China Y.B. Zhu Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China J.L. Chai Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China J. Q. Li Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China L. T. Sun Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China S. Wang Shandong Provincial Key Laboratory of Optical Astronomy and Solar-Terrestrial Environment, Institute of Space Sciences, Shandong University, Weihai 264209, China J.W. 
Cai CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China Y.Z. Li CAS Key Laboratory of High Precision Nuclear Spectroscopy, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 73000, People’s Republic of China School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China J. Su China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Key Laboratory of Beam Technology and Material Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China H. Zhang China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Z. H. Li School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Y. J. Li China Institute of Atomic Energy, Beijing 102413, People’s Republic of China E. T. Li College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China C. Chen China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Y. P. Shen China Institute of Atomic Energy, Beijing 102413, People’s Republic of China G. Lian China Institute of Atomic Energy, Beijing 102413, People’s Republic of China B. Guo China Institute of Atomic Energy, Beijing 102413, People’s Republic of China X. Y. Li Key Laboratory of Beam Technology and Material Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China L. Y. 
Zhang Key Laboratory of Beam Technology and Material Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China J. J. He Key Laboratory of Beam Technology and Material Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China Y. D. Sheng Key Laboratory of Beam Technology and Material Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China Y. J. Chen Key Laboratory of Beam Technology and Material Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China L. H. Wang Key Laboratory of Beam Technology and Material Modification of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China L. Zhang China Institute of Atomic Energy, Beijing 102413, People’s Republic of China F. Q. Cao China Institute of Atomic Energy, Beijing 102413, People’s Republic of China W. Nan China Institute of Atomic Energy, Beijing 102413, People’s Republic of China W. K. Nan China Institute of Atomic Energy, Beijing 102413, People’s Republic of China G. X. Li China Institute of Atomic Energy, Beijing 102413, People’s Republic of China N. Song China Institute of Atomic Energy, Beijing 102413, People’s Republic of China B. Q. Cui China Institute of Atomic Energy, Beijing 102413, People’s Republic of China L. H. Chen China Institute of Atomic Energy, Beijing 102413, People’s Republic of China R. G. Ma China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Z. C. Zhang College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China S. Q. Yan China Institute of Atomic Energy, Beijing 102413, People’s Republic of China J. H. Liao China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Y. 
B. Wang China Institute of Atomic Energy, Beijing 102413, People’s Republic of China S. Zeng China Institute of Atomic Energy, Beijing 102413, People’s Republic of China D. Nan China Institute of Atomic Energy, Beijing 102413, People’s Republic of China Q. W. Fan China Institute of Atomic Energy, Beijing 102413, People’s Republic of China N. C. Qi Yalong River Hydropower Development Company, Chengdu 610051, China W. L. Sun Yalong River Hydropower Development Company, Chengdu 610051, China X. Y. Guo Yalong River Hydropower Development Company, Chengdu 610051, China P. Zhang Yalong River Hydropower Development Company, Chengdu 610051, China Y. H. Chen Yalong River Hydropower Development Company, Chengdu 610051, China Y. Zhou Yalong River Hydropower Development Company, Chengdu 610051, China J. F. Zhou Yalong River Hydropower Development Company, Chengdu 610051, China J. R. He Yalong River Hydropower Development Company, Chengdu 610051, China C. S. Shang Yalong River Hydropower Development Company, Chengdu 610051, China M. C. Li Yalong River Hydropower Development Company, Chengdu 610051, China S. Kubono RIKEN Nishina Center, Wako, Saitama 351-0198, Japan Center for Nuclear Study, University of Tokyo, Wako, Saitama 351-0198, Japan W. P. Liu<EMAIL_ADDRESS>China Institute of Atomic Energy, Beijing 102413, People’s Republic of China R.J. deBoer Department of Physics and the Joint Institute for Nuclear Astrophysics, University of Notre Dame, Notre Dame, IN 46556, USA M. Wiescher Department of Physics and the Joint Institute for Nuclear Astrophysics, University of Notre Dame, Notre Dame, IN 46556, USA Wolfson Fellow of Royal Society, School of Physics and Astronomy, University of Edinburgh, King’s Buildings, Edinburgh EH9 3FD, United Kingdom M. 
Pignatari Konkoly Observatory, Research Centre for Astronomy and Earth Sciences (CSFK), Eötvös Loránd Research Network (ELKH), Konkoly Thege Miklós út 15-17, H-1121 Budapest, Hungary CSFK, MTA Centre of Excellence, Budapest, Konkoly Thege Miklós út 15-17, H-1121, Hungary E. A. Milne Centre for Astrophysics, Department of Physics and Mathematics, University of Hull, HU6 7RX, United Kingdom NuGrid Collaboration, http://nugridstars.org ###### Abstract The 13C($\alpha$,$n$)16O reaction is the main neutron source for the slow-neutron-capture (s-) process in Asymptotic Giant Branch stars and for the intermediate (i-) process. Direct measurements at astrophysical energies in above-ground laboratories are hindered by the extremely small cross sections and vast cosmic-ray induced background. We performed the first consistent direct measurement in the range of $E_{\rm c.m.}=$0.24 MeV to 1.9 MeV using the accelerators at the China Jinping Underground Laboratory (CJPL) and Sichuan University. Our measurement covers almost the entire i-process Gamow window, in which the large uncertainty of the previous experiments has been reduced from 60% down to 15%, eliminates the large systematic uncertainty in the extrapolation arising from the inconsistency of existing data sets, and provides a more reliable reaction rate for the studies of the s- and i-processes, along with the first direct determination of the alpha strength for the near-threshold state. ††preprint: APS/123-QED Low-mass Asymptotic Giant Branch (AGB) stars are the primary sources of elements above iron in the Galaxy via the activation of the slow-neutron-capture process (s-process) Kobayashi _et al._ (2020). In these stars, the bulk of the s-process abundances is created by neutrons from the 13C($\alpha$,$n$)16O reaction at temperatures around $T_{9}$=0.1 in the radiative 13C-pocket, located in the helium-rich layers just below the stellar envelope Gallino _et al._ (1998); Bisterzo _et al._ (2017).
Here $T_{9}$ is defined as the temperature divided by $10^{9}$ Kelvin. The subsequent neutron capture and $\beta$-decay processes transmute lighter elements into heavier ones, with a major production efficiency between Sr and Pb depending on the initial metallicity of the star Käppeler _et al._ (2011). In some AGB simulations, it was also found that part of the 13C is still alive in the 13C-pocket at the onset of the convective thermal pulse: the remaining 13C is mixed at the bottom of the He intershell region, activating 13C($\alpha$,$n$)16O at He-burning temperatures of approximately $T_{9}$=0.2-0.25 Guo _et al._ (2012); Cristallo _et al._ (2018). This scenario tends to be favored by a low 13C($\alpha$,$n$)16O rate Ciani _et al._ (2021), and the anomalous activation of the 13C($\alpha$,$n$)16O reaction together with the 22Ne($\alpha$,$n$)25Mg reaction at the bottom of the convective thermal pulse. This may affect the s-process isotopic pattern near the active s-process branching points Cristallo _et al._ (2018); Ciani _et al._ (2021). The 13C($\alpha$,$n$)16O reaction is also the main neutron source of the intermediate process (i-process) Cowan and Rose (1977), which matches the puzzling abundances observed in some post-AGB stars Herwig _et al._ (2011), in a subset of carbon-enhanced metal-poor (CEMP) stars Hampel _et al._ (2016), in presolar grains Jadhav _et al._ (2013) and in stars in young open clusters Mishenina _et al._ (2015). The i-process can be activated in different stellar environments, including, among others, low-mass AGB stars Cristallo _et al._ (2009); Hampel _et al._ (2016), super-AGB stars Jones _et al._ (2016) and post-AGB stars Herwig _et al._ (2011), massive stars Clarkson _et al._ (2018); Banerjee _et al._ (2018) and rapidly-accreting white dwarfs (WDs) Denissenkov _et al._ (2019). In those models, a small amount of hydrogen is ingested into the convective helium-burning zone underneath the envelope.
Hydrogen reacts with 12C, the primary product of He-burning, making 13N that decays to 13C. The i-process is generated by the 13C($\alpha$,$n$)16O reaction activated at He-burning temperatures of around $T_{9}$=0.2 or above Herwig _et al._ (2011). This results in a neutron density of around 10${}^{14}\sim$10${}^{16}$ cm-3, which is significantly higher than typical values of the s-process (10${}^{6}\sim$10${}^{10}$ cm-3). At least for one-dimensional models of metal-poor low-mass AGB stars, Cristallo _et al._ (2018) showed that after the hydrogen ingestion, the 13C($\alpha$,$n$)16O reaction also becomes a relevant energy source in the He shell: in their calculations even a factor of two variation of the 13C($\alpha$,$n$)16O rate changes the i-process production by orders of magnitude. Such an impact will need to be confirmed for different types of stars by multi-dimensional hydrodynamics simulations, providing guidance for how one-dimensional models should behave once hydrogen has been ingested in the hotter He-burning regions Herwig _et al._ (2014); Denissenkov _et al._ (2019); Clarkson and Herwig (2021). It is clear that the 13C($\alpha$,$n$)16O reaction rate is a fundamental ingredient in the s- and i-process models, determining the neutron density and the final isotopic production. A solid understanding of the reaction cross section is needed at the associated Gamow energies of about $E_{\mathrm{c.m.}}$ = 0.15 to 0.3 MeV and 0.2 to 0.54 MeV, respectively. The reaction cross sections in these energy regions are strongly influenced by the $\alpha$ cluster state near the separation threshold, according to the theory of Ikeda Ikeda _et al._ (1972). Descouvemont made a theoretical prediction of the level structure using a microscopic generator-coordinate method (GCM) and concluded that the theoretical S-factor below $E_{\mathrm{c.m.}}$ = 0.3 MeV increases rapidly relative to extrapolations that ignore this threshold state Descouvemont (1987).
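The Gamow windows quoted above follow from the standard textbook estimates $E_{0}=0.122\,(Z_{1}^{2}Z_{2}^{2}\mu T_{9}^{2})^{1/3}$ MeV and $\Delta=0.2368\,(Z_{1}^{2}Z_{2}^{2}\mu T_{9}^{5})^{1/6}$ MeV Rolfs and Rodney (1988). The short sketch below is illustrative only, not part of the analysis of this work; it simply evaluates these expressions for the 13C+$\alpha$ system:

```python
import math

Z1, Z2 = 2, 6                  # alpha + 13C charge numbers
mu = 4.0 * 13.0 / 17.0         # reduced mass in amu

def gamow_peak(T9):
    """Gamow peak energy E0 and 1/e width Delta (both in MeV) for 13C + alpha,
    using the standard non-resonant estimates (Rolfs and Rodney 1988)."""
    w = Z1**2 * Z2**2 * mu
    E0 = 0.122 * (w * T9**2) ** (1.0 / 3.0)
    delta = 0.2368 * (w * T9**5) ** (1.0 / 6.0)
    return E0, delta

# E0 is about 0.20 MeV at T9 = 0.1 and about 0.32 MeV at T9 = 0.2,
# consistent with the windows quoted in the text.
for T9 in (0.1, 0.2):
    E0, delta = gamow_peak(T9)
    print(T9, round(E0, 3), round(delta, 3))
```

The window boundaries quoted in the text correspond roughly to an interval one to two widths $\Delta$ wide around $E_{0}$.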
Reliable experimental measurements down to energies below 0.3 MeV are called for to confirm the prediction. Considerable efforts have been devoted to pushing the direct measurement of the 13C($\alpha$,$n$)16O reaction cross section ($\sigma$) Sekharan _et al._ (1967); Bair and Haas (1973); Davids (1968); Drotleff _et al._ (1993); Heil _et al._ (2008) down to the stellar energies where $\sigma$ becomes extremely small. Due to the vast cosmic-ray background, direct measurements performed in laboratories on the Earth’s surface stopped at energies above $E_{\mathrm{c.m.}}$ = 0.27 MeV with a lower limit of $\sigma$ = 6(4)$\times$ $10^{-11}$ barn Drotleff _et al._ (1993), unable to effectively constrain the crucial threshold state and provide a reliable extrapolation down to stellar energies. Besides that, the extrapolation accuracy is further limited by large discrepancies among those measurements deBoer _et al._ (2020); Brown _et al._ (2018). A recent breakthrough in the direct measurement of this reaction was reported by the LUNA collaboration Ciani _et al._ (2021) at $E_{\mathrm{c.m.}}$ = 0.23-0.30 MeV, the upper range of the s-process Gamow window. However, they had to rely on other existing data at higher energies, the asymptotic normalization coefficient (ANC) of the threshold state, and the R-matrix analysis to determine the S-factor in the range of 0.15 to 0.5 MeV, most of which is inaccessible with the current LUNA facility. The large discrepancies between Harissopulos _et al._ (2005) and other measurements Drotleff _et al._ (1993); Heil _et al._ (2008) result in $\sim$50% differences in their recommended upper and lower limits for the reaction rate at $T_{9}$=0.1-0.3, leading to significant uncertainties in the production yields of several important isotopes, such as 60Fe and 205Pb, in their AGB model. In this paper, we report the first consistent direct measurement of the 13C($\alpha$,$n$)16O reaction over a wider energy range of $E_{\rm c.m.}$ = 0.24-1.9 MeV with improved precision.
Our measurement reduces the large 60% uncertainty down to 15% at the center of the Gamow window of the i-process, provides the first direct determination of the alpha strength for the near-threshold state, and eliminates the large systematic uncertainty in the extrapolation incurred by the discrepancy of the existing experiments. A new, reliable reaction rate is recommended based on our measurement. The underground experiment was performed in the A1 hall of the China Jinping Underground Laboratory (CJPL) Cheng _et al._ (2017); Liu _et al._ (2016); Zhang _et al._ (2021). High-intensity 4He+ and 4He2+ ions were extracted from 2.45- and 14.5-GHz electron-cyclotron-resonance (ECR) sources, respectively, and accelerated by a 400-kV platform called the Jinping Underground Nuclear Astrophysics experimental facility (JUNA). The highest beam energy of 800 keV was achieved by using 4He2+ ions, allowing comparisons with previous measurements. The acceleration voltage was calibrated using the 12C(p,$\gamma$)13N, 27Al(p,$\gamma$)28Si, 11B(p,$\gamma$)12C and 14N(p,$\gamma$)15O reactions Wang (2021). The absolute beam energy was determined to an accuracy of 0.5 keV with an energy spread of less than 0.2 keV Wang (2021). A 90∘ dipole magnet with a mass resolution of 250 was used together with a set of analyzing slits to eliminate the H${}_{2}^{+}$/D+ contamination in the 4He2+ beam Chen _et al._ (2018). A clear separation of 4He2+ from the other impurities was observed at the slit position, and the count ratio of the inner and outer rings of the 3He detector array indicated that no neutrons came from the deuterium impurity. To avoid the systematic uncertainty incurred by target deterioration in traditional thin-target experiments, we used 2-mm-thick 13C-enriched targets with a purity of 97%. The target was installed on a water-cooled copper backing.
An on-target beam intensity of up to 2.5 particle-mA, the highest $\alpha$ beam intensity among the deep underground laboratories, was achieved. The thick targets turned out to be very stable, and only two targets were used for the whole experiment. A cold trap was installed to reduce the natural carbon buildup on the targets. For the 4He+ runs, the analyzing slits used in the 4He2+ runs were removed to allow maximum transmission efficiency and achieve higher beam intensities. Neutrons from the 13C($\alpha$,$n$)16O reaction were detected by an array consisting of 24 3He-filled proportional counters. By placing 35-cm-thick 7% borated polyethylene blocks and 1-mm-thick cadmium sheets around the detection array, a background of 4.7(2) events/hour was achieved, compared to 1238(11) events/hour measured on the Earth’s surface. The detection efficiency of the array was determined to be 26% for 2.5-MeV neutrons using the 51V(p,n)51Cr reaction together with Geant4 simulations Li _et al._ (2022). The thick-target yield ${\rm Y(E)}$ of the 13C($\alpha$,$n$)16O reaction was measured with beam energies of 0.3 $<$ $E_{\alpha}$ $<$ 0.785 MeV. The beam-induced neutron background (BINB) was estimated by measuring ${\rm Y(E)}$ at $E_{\alpha}$ = 0.25 MeV, where the cross section of the 13C($\alpha$,$n$)16O reaction is negligibly small and all neutrons detected above the environmental background level should be attributed to the BINB. The BINB was determined to be 0.05(8) events/Coulomb, consistent with zero. The cross sections and the corresponding effective energies were extracted by differentiating the thick-target yield Spillane _et al._ (2007); Notani _et al._ (2012). We repeatedly checked the neutron yields at 17 energy points, and the reproducibility was found to be 8% and 3% for the 4He+ and 4He2+ data sets, respectively. This random error likely originates from the beam tuning and the potential carbon buildup.
Therefore, we added this error in quadrature with the statistical error. Another thick-target measurement was performed in the range of $E_{\rm c.m.}$ = 0.75 MeV to 1.9 MeV using the 4He+ beam from the 3 MV Tandetron at Sichuan University (SCU) Han _et al._ (2018) to resolve the discrepancies among the S-factors at higher energies in the previous works Harissopulos _et al._ (2005); Bair and Haas (1973); Drotleff _et al._ (1993). The same detection setup was used to minimize extra systematic uncertainties. The beam energy was calibrated using the 7Li(p,n)7Be reaction, and confirmed by the narrow resonances of the 13C($\alpha$,$n$)16O reaction at $E_{\alpha}$=1055.6, 1334.7 and 1338.8 keV. A thin-target measurement was also carried out in the range of $E_{\mathrm{c.m.}}$=1.6 to 1.9 MeV using a 3.2-$\mu$g/cm2-thick 13C target. The thin-target data are normalized to the thick-target data. The reproducibility of the thick-target and thin-target measurements is estimated to be 3% and 2%, respectively. This uncertainty is included with the statistical error as discussed above. By adopting a compiled angular distribution Brune (2021); Walton _et al._ (1957) in the Geant4 simulation, we corrected our efficiency for angular-distribution effects. These effects were found to change the efficiency by $\pm$2% at $E_{\mathrm{c.m.}}<$0.6 MeV. However, the efficiency at $E_{\mathrm{c.m.}}\sim$0.9 MeV deviates from the nominal efficiency assuming an isotropic distribution by $\sim$5%, which is larger than the statistical uncertainties in the previous measurements Harissopulos _et al._ (2005); Bair and Haas (1973). This deviation becomes even larger at the narrow resonances Li _et al._ (2022). Such an effect was overlooked in these previous works.
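The thick-target method used in both measurements can be illustrated with a small numerical sketch. Every input below (grid, stopping power, cross section) is synthetic; only the relations are standard, $Y(E)=\int_{0}^{E}\sigma(E')/\epsilon(E')\,\mathrm{d}E'$ and hence $\sigma(E)\approx\epsilon(E)\,\mathrm{d}Y/\mathrm{d}E$ Spillane _et al._ (2007); Notani _et al._ (2012):

```python
import numpy as np

# Thick-target relation: Y(E) = int_0^E sigma(E')/eps(E') dE', so the cross
# section follows from sigma(E) ~ eps(E) * dY/dE.  All inputs are synthetic.
E = np.linspace(0.30, 0.80, 101)                 # beam energies (MeV), hypothetical grid
eps = 1.0e-15 * (1.0 + 0.5 * E)                  # made-up stopping power (MeV cm^2/atom)
sigma_true = 1.0e-9 * np.exp(6.0 * (E - 0.30))   # made-up, steeply rising cross section (barn)

# Build the thick-target yield by trapezoidal cumulative integration of sigma/eps
integrand = sigma_true * 1.0e-24 / eps           # barn -> cm^2
Y = np.concatenate(([0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))))

# Recover the cross section by numerical differentiation of Y(E)
sigma_rec = eps * np.gradient(Y, E) / 1.0e-24    # back to barn

# Away from the grid edges the recovered curve tracks the input closely
err = np.abs(sigma_rec[5:-5] / sigma_true[5:-5] - 1.0)
print(float(err.max()))
```

In the real analysis the differentiation also fixes the effective energy assigned to each point; the sketch simply reuses the grid energies.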
Systematic uncertainties of our measurements at CJPL and SCU are estimated to be 11%, which includes contributions from the beam current integration (5%), detection efficiency (7%), angular distribution (2% for the JUNA underground measurement and 4% for the SCU experiment), and stopping power (6%) Ziegler _et al._ (2010). Our S-factors are converted into bare S-factors after correcting for the screening effect using our fitted screening potential of $U_{e}$ = 0.78 keV, together with the previous measurements Drotleff _et al._ (1993); Heil _et al._ (2008); Harissopulos _et al._ (2005); Ciani _et al._ (2021); Kellogg _et al._ (1989). The results are presented in Fig. 1. It can be seen that our underground data cover the energy range from $E_{\rm c.m.}$ = 0.24 to 0.59 MeV, greatly overlapping with the astrophysical region of $E_{\rm c.m.}$ = 0.15 to 0.5 MeV, with a statistical uncertainty better than 15%. With the unique energy range and ultra-low neutron background in the deep underground lab, we are able to precisely measure the S-factor in the range of astrophysical interest for i-process nucleosynthesis. The center of the Gamow window for the i-process is located at 0.35 MeV, beyond the accessible energy range of LUNA. The two extrapolation scenarios of LUNA, using either the normalization of Heil or Drotleff Drotleff _et al._ (1993); Heil _et al._ (2008); Bair and Haas (1973) or that of Harissopulos _et al._ (2005), resulted in their so-called best fit and “low LUNA” fit, respectively. To be on the safe side, they defined the “low LUNA” fit by taking the 95% confidence level of the lower limit of the fit with the original Harissopulos data. These two fits differ from each other by a factor of 2 at 0.35 MeV. Such a large systematic uncertainty in their extrapolation is eliminated by our consistent measurement, which rules out the lower normalization of Harissopulos _et al._ (2005). Drotleff _et al._ (1993) was the best measurement before ours at energies around 0.35 MeV.
While our data above 0.4 MeV are in good agreement with those of Drotleff, our data around 0.27 MeV are about 50% lower and disagree with the upturning trend in this data set. The nearly 60% uncertainty in Ref. Drotleff _et al._ (1993) within the Gamow window has been reduced to 15%. Figure 1: The S-factor of the 13C($\alpha$,$n$)16O reaction. The uncertainties from the fit to the JUNA+SCU data are indicated by dotted lines. The best fit and lower limit recommended by LUNA Ciani _et al._ (2021) are shown as black and blue dashed lines, respectively. The S-factors have been corrected with the screening potential $U_{e}$ = 0.78 keV. The temperatures in $T_{9}$ on the top correspond to the center energy of the Gamow window on the bottom. The S-factor at $E_{\rm c.m.}<$0.24 MeV was obtained from an $R$-matrix analysis Lane and Thomas (1958) of the data in the range of $E_{\rm c.m.}$=0.24 to 1.9 MeV using the code AZURE2 Azuma _et al._ (2010); Uberseder and deBoer (2015). In our analysis, we included only our measurements of the 13C($\alpha$,$n$)16O cross section and the 16O+$n$ total cross section Fowler _et al._ (1973), to eliminate the systematic uncertainty of the inconsistent data sets. Our best fit is shown together with its estimated uncertainty in Fig. 1. The screening potential ($U_{e}$) is fitted to be 0.78$\pm$0.43 keV. It agrees with the theoretical prediction of $U_{e}$=0.937 keV using the adiabatic limit, while ruling out the larger prediction of $U_{e}$=2 keV Trippella and Cognata (2017). Our fit is about 15% systematically higher than the LUNA measurement Ciani _et al._ (2021). The reduced-$\chi^{2}$ of the LUNA data is 25 using their best fit. It drops to 1.02 with our fit after the normalization and excluding the point at $E_{\rm c.m.}$=0.29 MeV, which is 5$\sigma$ lower than our best fit.
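The size of the screening correction implied by the fitted $U_{e}$ = 0.78 keV can be sketched with the standard enhancement factor $f(E)\simeq\exp(\pi\eta U_{e}/E)$, where the Sommerfeld parameter follows from $2\pi\eta=31.29\,Z_{1}Z_{2}\sqrt{\mu/E}$ with $E$ in keV. This is an illustrative calculation, not the fit performed in this work:

```python
import math

def screening_factor(E_keV, Ue_keV, Z1=2, Z2=6, mu=4.0 * 13.0 / 17.0):
    """Laboratory screening enhancement f = sigma_screened / sigma_bare,
    f(E) ~ exp(pi * eta * Ue / E), with 2*pi*eta = 31.29*Z1*Z2*sqrt(mu/E[keV])."""
    two_pi_eta = 31.29 * Z1 * Z2 * math.sqrt(mu / E_keV)
    return math.exp(0.5 * two_pi_eta * Ue_keV / E_keV)

# With Ue = 0.78 keV the correction is a ~7% effect at the lowest measured
# energy (E_cm ~ 0.24 MeV) and drops below 1% at the top of the range.
print(screening_factor(240.0, 0.78))
print(screening_factor(1900.0, 0.78))
```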
Although the LUNA measurement agrees with ours within the quoted errors, the inconsistency between the measurement of Harissopulos _et al._ (2005) and other measurements at higher energies leads to a $\sim$50% difference between the upper and the lower limits of the reaction rate recommended by LUNA at $T_{9}$=0.1-0.3. This demonstrates a key limitation of the LUNA measurement: its limited energy range did not allow for a direct comparison with higher-energy data. Using our consistent measurement over a broad energy range, the uncertainty of our fit is reliably constrained to the level of $<$16% at the Gamow windows of the s- and i-processes. The extrapolated S-factor towards lower energy is dominated by the $\alpha$ reduced width $\gamma_{\alpha}$ or the Coulomb renormalized asymptotic normalization coefficient ($\tilde{C}^{2}$) of the 1/2+ threshold state. The R-matrix analysis performed in previous works involved fixing the ANC of the threshold state to values obtained from indirect measurements. However, the uncertainties in these ANCs often suffer from difficult-to-quantify systematic effects of the models used to obtain them. The lower and upper limits of the measured $\tilde{C}^{2}$ differ from each other by a factor of $\sim$5 deBoer _et al._ (2020). These systematic uncertainties have been eliminated in our fit by treating the $\Gamma_{\alpha}$ of this state as a free parameter. The reduced width $\gamma_{\alpha}$ obtained from our best $R$-matrix analysis is -0.14(2) MeV1/2 with a channel radius of 6.684 fm and $E_{x}$=6.3772 MeV, corresponding to an ANC of $\tilde{C}^{2}$=2.1(5) fm-1 with $E_{x}$=6.356 MeV Mukhamedzhanov _et al._ (2017); Brune (2020). Our value is slightly lower than the indirect measurement of 3.6(7) fm-1 Avila _et al._ (2015) and agrees with 2.7(8) fm-1 Guo _et al._ (2012); Shen _et al._ (2019) and 4.5(2.2) fm-1 Pellegriti _et al._ (2008).
For the first time, we not only validate, with the direct measurement, the $\alpha$ width of the threshold state obtained with the indirect method, but also determine the interference pattern in the $R$-matrix analysis. As LUNA used the higher $\tilde{C}^{2}$ from Avila _et al._ (2015) to constrain their extrapolation towards lower energies, our best fit is 23% lower than their best fit at $E_{\rm c.m.}$=0.19 MeV, the center of the Gamow window for $T_{9}$=0.1 (see Fig. 2). At the same energy, with the combination of a larger reduced width Avila _et al._ (2015) and the cross section of Harissopulos _et al._ (2005), the “low LUNA” fit is 11% lower than our best fit. Figure 2: The Gamow function of 13C($\alpha$,$n$)16O at $T_{9}$ = 0.1 and 0.2. Color coding is identical to Fig. 1. The 13C($\alpha$,$n$)16O reaction rate is calculated by numerical integration of the standard reaction rate equation Rolfs and Rodney (1988): $\langle\sigma v\rangle=\Big{(}\frac{8}{\pi\mu}\Big{)}^{1/2}\frac{1}{(kT)^{3/2}}\int_{0}^{\infty}\sigma(E)E\exp\Big{(}-\frac{E}{kT}\Big{)}\mathrm{d}E$ (1) To highlight the important stellar energy range for typical helium-burning temperatures of $T_{9}$ = 0.1 and 0.2, the integrand of Eq. (1) (the Gamow function) is computed and shown in Fig. 2. At $T_{9}$ = 0.1, the temperature of the 13C pocket in the AGB model, our extrapolation is lower than the best fit of LUNA and tends to agree better with their “low LUNA” fit. At $T_{9}$ = 0.2, which is of importance for both the i-process and s-process nucleosynthesis in the thermal pulse in the AGB model, our measurement covers nearly the entire Gamow function with significantly improved uncertainties. This is a substantial improvement compared to previous measurements, as the ground-based measurements from Ref.
Drotleff _et al._ (1993); Heil _et al._ (2008) covered only the upper part of the Gamow window with large uncertainties, while the LUNA extrapolation suffered from the inconsistencies in the absolute cross section of the higher-energy measurements. At the center of the Gamow window of $T_{9}$ = 0.2, our result agrees with the best fit of LUNA within our quoted uncertainty, but rules out the “low LUNA” fit, reflecting a significant difference in the shape of our extrapolation from that of LUNA. The reaction rate is calculated with the JUNA fit shown in Fig. 1. Comparisons of the reaction rates are shown in Fig. 3. Our reaction rate agrees well with previous Reaclib compilations based on the ANC method Guo _et al._ (2012) and NACRE-II Xu _et al._ (2013) at $T_{9}\geq$ 0.1. The nearly 50% difference between the upper limit and lower limit (“low LUNA”) of LUNA and the even larger uncertainty in NACRE-II have been improved significantly. At $T_{9}$=0.1-0.3, the typical temperatures for the s- and i-processes, we have reached an uncertainty of 13%-16%. Figure 3: Selected reaction rates normalized by the rate determined in this work. The uncertainties of our new rate and the LUNA rate, based on their best fit, are indicated by the red hatched area and yellow shaded area, respectively. For comparison, we also show the rates from NACRE-II Xu _et al._ (2013), LUNA Ciani _et al._ (2021), and JINA Reaclib Guo _et al._ (2012); Cyburt _et al._ (2010). It has been shown that the “low LUNA” rate increases the survivability of 13C in the 13C-pocket of an AGB star, and that it burns at a high temperature in the subsequent thermal pulse Ciani _et al._ (2021). Compared with the “low LUNA” rate, our recommended rate is slightly higher (within 15$\%$) at temperatures typical of the 13C-pocket, and about 30$\%$-40% higher at thermal-pulse temperatures.
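As a short numerical aside, the statement that the measurement covers nearly the entire Gamow function at $T_{9}$ = 0.2 can be checked by evaluating the integrand of Eq. (1). The sketch below assumes a constant S-factor, i.e. $\sigma(E)\propto\exp(-2\pi\eta)/E$, so the peak position differs slightly from the curves in Fig. 2, which include the measured energy dependence of $S(E)$:

```python
import numpy as np

# Integrand of Eq. (1) for a non-resonant reaction with constant S-factor:
# g(E) ~ exp(-2*pi*eta(E)) * exp(-E/kT), 2*pi*eta = 0.98951*Z1*Z2*sqrt(mu/E[MeV]).
Z1, Z2 = 2, 6
mu = 4.0 * 13.0 / 17.0             # reduced mass of 13C + alpha (amu)
k = 0.086173                       # Boltzmann constant (MeV per GK)

def gamow_function(E, T9):
    two_pi_eta = 0.98951 * Z1 * Z2 * np.sqrt(mu / E)
    return np.exp(-two_pi_eta - E / (k * T9))

E = np.linspace(1e-3, 2.5, 50000)  # MeV; upper limit far beyond the peak
g = gamow_function(E, 0.2)

peak = E[np.argmax(g)]                      # near 0.32 MeV for constant S
measured = (E >= 0.24) & (E <= 0.59)        # JUNA energy range
coverage = g[measured].sum() / g.sum()      # fraction of the integral covered
print(round(peak, 3), round(coverage, 2))
```

Under these assumptions the measured window captures roughly 90% of the integral; the residual sits mostly below 0.24 MeV, which is the part supplied by the $R$-matrix extrapolation.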
Therefore, we expect our rate to produce effects similar to those discussed by Ciani _et al._ (2021), although a detailed follow-up study on AGB stellar models is needed. Concerning the impact on the i-process nucleosynthesis, future models based on the next generation of multi-dimensional hydrodynamics simulations will be more predictive thanks to the more reliable reaction rate uncertainties provided by this work. In summary, we have performed a direct measurement of the 13C($\alpha$,$n$)16O reaction cross section over the range of $E_{\rm c.m.}$ = 0.24-1.9 MeV using the most intense $\alpha$ beam available in the deep underground laboratories, with the highest precision to date. Our consistent measurement, covering a wide energy range, reduces the large uncertainty in the reaction rate down to 13% to 16% for i- and s-process nucleosynthesis. Our reaction rate is similar to the “low LUNA” rate at typical 13C-pocket temperatures, favoring the release of more neutrons from the 13C($\alpha$,$n$)16O reaction during the thermal-pulse phase. Our direct measurement eliminates an important systematic uncertainty in the R-matrix extrapolation by resolving the inconsistency among the data sets at higher energies Bair and Haas (1973); Harissopulos _et al._ (2005); Kellogg _et al._ (1989). For the first time, we determine the ANC of the threshold state using the direct measurement, fix the interference pattern, and determine the screening potential using the R-matrix analysis. ###### Acknowledgements. The authors would like to thank Tsinghua University and Prof. Jianping Cheng for the support of the laboratory infrastructure, the China National Nuclear Corporation for the financial support, the CDEX and PandaX collaborations for their kind help, L. He, Y. Wang, P. Wang and J.K. Liu for their help in setting up the experimental terminal, and Prof. Z.G. Wang for developing the instruments that were used to fabricate the thick 13C targets.
The authors also acknowledge the assistance of Carl Brune in estimating neutron angular distribution corrections. X.T. thanks A. Best and D. Rapagnani for their helpful discussions throughout the project and for providing their original data sets. This work was supported by the National Natural Science Foundation of China under Grant No. 11490564, the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB34020000, and the National Key Research and Development Program (MOST 2016YFA0400501). S. W. was supported by the National Natural Science Foundation of China under Grants No. 11775133 and 11405096. B. G. was supported by the National Natural Science Foundation of China under Grant No. 12125509. W.P. L. was supported by the National Natural Science Foundation of China under Grant No. 11805138. X. F. was supported by the National Natural Science Foundation of China under Grant No. 11875329. R.J.D. acknowledges the use of resources from the Notre Dame Center for Research Computing. R.J.D. and M.W. were supported by the National Science Foundation through Grant No. PHY-2011890, and the Joint Institute for Nuclear Astrophysics through Grant No. PHY-1430152 (JINA Center for the Evolution of the Elements). MP acknowledges support to NuGrid from STFC (through the University of Hull’s Consolidated Grant ST/R000840/1) and ongoing access to viper, the University of Hull High Performance Computing Facility. MP also acknowledges support from the ERC Consolidator Grant (Hungary) funding scheme (Project RADIOSTAR, G.A. n. 724560), the ChETEC COST Action (CA16117), supported by the European Cooperation in Science and Technology, the IReNA network supported by NSF AccelNet, and the European Union ChETEC-INFRA (project no. 101008324). ## References * Kobayashi _et al._ (2020) C. Kobayashi, A. I. Karakas, and M. Lugaro, The Astrophysical Journal 900, 179 (2020), arXiv:2008.04660 [astro-ph.GA] . * Gallino _et al._ (1998) R. Gallino, C. Arlandini, M. Busso, M. Lugaro, C.
Travaglio, O. Straniero, A. Chieffi, and M. Limongi, The Astrophysical Journal 497, 388 (1998). * Bisterzo _et al._ (2017) S. Bisterzo, C. Travaglio, M. Wiescher, F. Käppeler, and R. Gallino, The Astrophysical Journal 835, 97 (2017). * Käppeler _et al._ (2011) F. Käppeler, R. Gallino, S. Bisterzo, and W. Aoki, Reviews of Modern Physics 83, 157 (2011), arXiv:1012.5218 [astro-ph.SR] . * Guo _et al._ (2012) B. Guo, Z. H. Li, M. Lugaro, J. Buntain, D. Y. Pang, Y. J. Li, J. Su, S. Q. Yan, X. X. Bai, Y. S. Chen, Q. W. Fan, S. J. Jin, A. I. Karakas, E. T. Li, Z. C. Li, G. Lian, J. C. Liu, X. Liu, J. R. Shi, N. C. Shu, B. X. Wang, Y. B. Wang, S. Zeng, and W. P. Liu, The Astrophysical Journal 756, 193 (2012). * Cristallo _et al._ (2018) S. Cristallo, M. La Cognata, C. Massimi, A. Best, S. Palmerini, O. Straniero, O. Trippella, M. Busso, G. F. Ciani, F. Mingrone, L. Piersanti, and D. Vescovi, Astrophys. J. 859, 105 (2018), arXiv:1804.10751 [astro-ph.SR] . * Ciani _et al._ (2021) G. F. Ciani, L. Csedreki, D. Rapagnani, M. Aliotta, J. Balibrea-Correa, F. Barile, D. Bemmerer, A. Best, A. Boeltzig, C. Broggini, C. G. Bruno, A. Caciolli, F. Cavanna, T. Chillery, P. Colombetti, P. Corvisiero, S. Cristallo, T. Davinson, R. Depalo, A. Di Leva, Z. Elekes, F. Ferraro, E. Fiore, A. Formicola, Z. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Gyürky, G. Imbriani, M. Junker, M. Lugaro, P. Marigo, E. Masha, R. Menegazzo, V. Mossa, F. R. Pantaleo, V. Paticchio, R. Perrino, D. Piatti, P. Prati, L. Schiavulli, K. Stöckel, O. Straniero, T. Szücs, M. P. Takács, F. Terrasi, D. Vescovi, and S. Zavatarelli (LUNA Collaboration), Phys. Rev. Lett. 127, 152701 (2021). * Cowan and Rose (1977) J. J. Cowan and W. K. Rose, The Astrophysical Journal 212, 149 (1977). * Herwig _et al._ (2011) F. Herwig, M. Pignatari, P. R. Woodward, D. H. Porter, G. Rockefeller, C. L. Fryer, M. Bennett, and R. Hirschi, The Astrophysical Journal 727, 89 (2011). * Hampel _et al._ (2016) M. Hampel, R. J. Stancliffe, M. 
Lugaro, and B. S. Meyer, The Astrophysical Journal 831, 171 (2016). * Jadhav _et al._ (2013) M. Jadhav, M. Pignatari, F. Herwig, E. Zinner, R. Gallino, and G. R. Huss, The Astrophysical Journal Letters 777, L27 (2013), arXiv:1310.2679 [astro-ph.EP] . * Mishenina _et al._ (2015) T. Mishenina, M. Pignatari, G. Carraro, V. Kovtyukh, L. Monaco, S. Korotin, E. Shereta, I. Yegorova, and F. Herwig, Monthly Notices of the Royal Astronomical Society 446, 3651 (2015), arXiv:1411.1422 [astro-ph.SR] . * Cristallo _et al._ (2009) S. Cristallo, L. Piersanti, O. Straniero, R. Gallino, I. Domínguez, and F. Käppeler, Publications of the Astronomical Society of Australia 26, 139 (2009), arXiv:0904.4173 [astro-ph.SR] . * Jones _et al._ (2016) S. Jones, C. Ritter, F. Herwig, C. Fryer, M. Pignatari, M. G. Bertolli, and B. Paxton, Monthly Notices of the Royal Astronomical Society 455, 3848 (2016), arXiv:1510.07417 [astro-ph.SR] . * Clarkson _et al._ (2018) O. Clarkson, F. Herwig, and M. Pignatari, Monthly Notices of the Royal Astronomical Society 474, L37 (2018), arXiv:1710.01763 [astro-ph.SR] . * Banerjee _et al._ (2018) P. Banerjee, Y.-Z. Qian, and A. Heger, The Astrophysical Journal 865, 120 (2018). * Denissenkov _et al._ (2019) P. A. Denissenkov, F. Herwig, P. Woodward, R. Andrassy, M. Pignatari, and S. Jones, Monthly Notices of the Royal Astronomical Society 488, 4258 (2019), arXiv:1809.03666 [astro-ph.SR] . * Herwig _et al._ (2014) F. Herwig, P. R. Woodward, P.-H. Lin, M. Knox, and C. Fryer, The Astrophysical Journal Letters 792, L3 (2014), arXiv:1310.4584 [astro-ph.SR] . * Clarkson and Herwig (2021) O. Clarkson and F. Herwig, Monthly Notices of the Royal Astronomical Society 500, 2685 (2021), arXiv:2005.07748 [astro-ph.SR] . * Ikeda _et al._ (1972) K. Ikeda, T. Marumori, R. Tamagaki, and H. Tanaka, Prog. Theor. Phys. Suppl. 52, 1 (1972), http://oup.prod.sis.lan/ptps/article-pdf/doi/10.1143/PTPS.52.1/5357826/52-1.pdf . * Descouvemont (1987) P. 
Descouvemont, Physical Review C 36, 2206 (1987). * Sekharan _et al._ (1967) K. K. Sekharan, A. S. Divatia, M. K. Mehta, S. S. Kerekatte, and K. B. Nambiar, Physical Review 156, 1187 (1967). * Bair and Haas (1973) J. K. Bair and F. X. Haas, Physical Review C 7, 1356 (1973). * Davids (1968) C. N. Davids, Nuclear Physics A 110, 619 (1968). * Drotleff _et al._ (1993) H. W. Drotleff, A. Denker, H. Knee, M. Soine, G. Wolf, J. W. Hammer, U. Greife, C. Rolfs, and H. P. Trautvetter, The Astrophysical Journal 414, 735 (1993). * Heil _et al._ (2008) M. Heil, R. Detwiler, R. E. Azuma, A. Couture, J. Daly, J. Görres, F. Käppeler, R. Reifarth, P. Tischhauser, C. Ugalde, and M. Wiescher, Physical Review C 78, 025803 (2008). * deBoer _et al._ (2020) R. J. deBoer, C. R. Brune, M. Febrarro, J. Görres, I. J. Thompson, and M. Wiescher, Phys. Rev. C 101, 045802 (2020). * Brown _et al._ (2018) D. Brown, M. Chadwick, R. Capote, A. Kahler, A. Trkov, M. Herman, A. Sonzogni, Y. Danon, A. Carlson, M. Dunn, D. Smith, G. Hale, G. Arbanas, R. Arcilla, C. Bates, B. Beck, B. Becker, F. Brown, R. Casperson, J. Conlin, D. Cullen, M.-A. Descalle, R. Firestone, T. Gaines, K. Guber, A. Hawari, J. Holmes, T. Johnson, T. Kawano, B. Kiedrowski, A. Koning, S. Kopecky, L. Leal, J. Lestone, C. Lubitz, J. Márquez Damián, C. Mattoon, E. McCutchan, S. Mughabghab, P. Navratil, D. Neudecker, G. Nobre, G. Noguere, M. Paris, M. Pigni, A. Plompen, B. Pritychenko, V. Pronyaev, D. Roubtsov, D. Rochman, P. Romano, P. Schillebeeckx, S. Simakov, M. Sin, I. Sirakov, B. Sleaford, V. Sobes, E. Soukhovitskii, I. Stetcu, P. Talou, I. Thompson, S. van der Marck, L. Welser-Sherrill, D. Wiarda, M. White, J. Wormald, R. Wright, M. Zerkle, G. Žerovnik, and Y. Zhu, Nuclear Data Sheets 148, 1 (2018), special Issue on Nuclear Reaction Data. * Harissopulos _et al._ (2005) S. Harissopulos, H. W. Becker, J. W. Hammer, A. Lagoyannis, C. Rolfs, and F. Strieder, Physical Review C 72, 062801 (2005). * Cheng _et al._ (2017) J.-P. 
Cheng, K.-J. Kang, J.-M. Li, J. Li, Y.-J. Li, Q. Yue, Z. Zeng, Y.-H. Chen, S.-Y. Wu, X.-D. Ji, and H. T. Wong, Annual Review of Nuclear and Particle Science 67, 231 (2017), https://doi.org/10.1146/annurev-nucl-102115-044842 . * Liu _et al._ (2016) W. Liu, Z. Li, J. He, X. Tang, G. Lian, Z. An, J. Chang, H. Chen, Q. Chen, X. Chen, Z. Chen, B. Cui, X. Du, C. Fu, L. Gan, B. Guo, G. He, A. Heger, S. Hou, H. Huang, N. Huang, B. Jia, L. Jiang, S. Kubono, J. Li, K. Li, T. Li, Y. Li, M. Lugaro, X. Luo, H. Ma, S. Ma, D. Mei, Y. Qian, J. Qin, J. Ren, Y. Shen, J. Su, L. Sun, W. Tan, I. Tanihata, S. Wang, P. Wang, Y. Wang, Q. Wu, S. Xu, S. Yan, L. Yang, Y. Yang, X. Yu, Q. Yue, S. Zeng, H. Zhang, H. Zhang, L. Zhang, N. Zhang, Q. Zhang, T. Zhang, X. Zhang, X. Zhang, Z. Zhang, W. Zhao, Z. Zhao, C. Zhou, and J. U. N. A. Collaboration, Science China Physics, Mechanics & Astronomy 59, 642001 (2016). * Zhang _et al._ (2021) L. Y. Zhang, J. Su, J. J. He, M. Wiescher, R. J. deBoer, D. Kahl, Y. J. Chen, X. Y. Li, J. G. Wang, L. Zhang, F. Q. Cao, H. Zhang, Z. C. Zhang, T. Y. Jiao, Y. D. Sheng, L. H. Wang, L. Y. Song, X. Z. Jiang, Z. M. Li, E. T. Li, S. Wang, G. Lian, Z. H. Li, X. D. Tang, H. W. Zhao, L. T. Sun, Q. Wu, J. Q. Li, B. Q. Cui, L. H. Chen, R. G. Ma, B. Guo, S. W. Xu, J. Y. Li, N. C. Qi, W. L. Sun, X. Y. Guo, P. Zhang, Y. H. Chen, Y. Zhou, J. F. Zhou, J. R. He, C. S. Shang, M. C. Li, X. H. Zhou, Y. H. Zhang, F. S. Zhang, Z. G. Hu, H. S. Xu, J. P. Chen, and W. P. Liu, Phys. Rev. Lett. 127, 152702 (2021). * Wang (2021) S. Wang, “Measurements of the proton beam characteristics of juna 400 kv accelerator,” (2021), unpublished Manuscript. * Chen _et al._ (2018) H. Chen, S. Xu, N. Zhang, J. Hu, K. Li, S. Ma, X. Ruan, X. Tang, and L. Zhang, Science China Physics, Mechanics, and Astronomy 61, 52021 (2018). * Li _et al._ (2022) Y. T. Li, W. P. Lin, B. Gao, H. Chen, H. Huang, Y. Huang, T. Y. Jiao, K. A. Li, X. D. Tang, X. Y. Wang, X. Fang, H. X. Huang, J. Ren, L. H. Ru, X. C. Ruan, N. T. 
Zhang, and Z. C. Zhang, Nuclear Science and Techniques 33, 41 (2022), arXiv:2111.12552 . * Spillane _et al._ (2007) T. Spillane, F. Raiola, C. Rolfs, D. Schürmann, F. Strieder, S. Zeng, H.-W. Becker, C. Bordeanu, L. Gialanella, M. Romano, and J. Schweitzer, Physical Review Letters 98, 122501 (2007). * Notani _et al._ (2012) M. Notani, H. Esbensen, X. Fang, B. Bucher, P. Davies, C. L. Jiang, L. Lamm, C. J. Lin, C. Ma, E. Martin, K. E. Rehm, W. P. Tan, S. Thomas, X. D. Tang, and E. Brown, Physical Review C 85, 014607 (2012). * Han _et al._ (2018) J. Han, Z. An, G. Zheng, F. Bai, Z. Li, P. Wang, X. Liao, M. Liu, S. Chen, M. Song, and J. Zhang, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 418, 68 (2018). * Brune (2021) C. Brune, Private communication (2021). * Walton _et al._ (1957) R. B. Walton, J. D. Clement, and F. Boreli, Phys. Rev. 107, 1065 (1957). * Ziegler _et al._ (2010) J. F. Ziegler, M. Ziegler, and J. Biersack, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 268, 1818 (2010). * Kellogg _et al._ (1989) S. Kellogg, R. Vogelaar, and R. Kavanagh, Bull. Am. Phys. Soc. 34, 1192 (1989). * Lane and Thomas (1958) A. M. Lane and R. G. Thomas, Rev. Mod. Phys. 30, 257 (1958). * Azuma _et al._ (2010) R. E. Azuma, E. Uberseder, E. C. Simpson, C. R. Brune, H. Costantini, R. J. de Boer, J. Görres, M. Heil, P. J. LeBlanc, C. Ugalde, and M. Wiescher, Phys. Rev. C 81, 045805 (2010). * Uberseder and deBoer (2015) E. Uberseder and R. J. deBoer, _AZURE2 User Manual_ (2015). * Fowler _et al._ (1973) J. L. Fowler, C. H. Johnson, and R. M. Feezel, Phys. Rev. C 8, 545 (1973). * Trippella and Cognata (2017) O. Trippella and M. L. Cognata, The Astrophysical Journal 837, 41 (2017). * Mukhamedzhanov _et al._ (2017) A. M. Mukhamedzhanov, Shubhchintak, and C. A. Bertulani, Phys. Rev. C 96, 024623 (2017). * Brune (2020) C. R. Brune, Phys. Rev. C 102, 034328 (2020). 
* Avila _et al._ (2015) M. L. Avila, G. V. Rogachev, E. Koshchiy, L. T. Baby, J. Belarge, K. W. Kemper, A. N. Kuchera, and D. Santiago-Gonzalez, Physical Review C 91, 048801 (2015). * Shen _et al._ (2019) Y. Shen, B. Guo, T. Ma, D. Pang, D. Ni, Z. Ren, Y. Li, Z. An, J. Su, J. Liu, Q. Fan, Z. Han, X. Li, Z. Li, G. Lian, Y. Su, Y. Wang, S. Yan, S. Zeng, and W. Liu, Physics Letters B 797, 134820 (2019). * Pellegriti _et al._ (2008) M. G. Pellegriti, F. Hammache, P. Roussel, L. Audouin, D. Beaumel, P. Descouvemont, S. Fortier, L. Gaudefroy, J. Kiener, A. Lefebvre-Schuhl, M. Stanoiu, V. Tatischeff, and M. Vilmay, Phys. Rev. C 77, 042801 (2008). * Rolfs and Rodney (1988) C. Rolfs and W. Rodney, _Cauldrons in the Cosmos_ (The University of Chicago Press, Chicago and London, 1988). * Xu _et al._ (2013) Y. Xu, K. Takahashi, S. Goriely, M. Arnould, M. Ohta, and H. Utsunomiya, Nuclear Physics A 918, 61 (2013). * Cyburt _et al._ (2010) R. H. Cyburt, A. M. Amthor, R. Ferguson, Z. Meisel, K. Smith, S. Warren, A. Heger, R. D. Hoffman, T. Rauscher, A. Sakharuk, H. Schatz, F. K. Thielemann, and M. Wiescher, The Astrophysical Journal Supplement Series 189, 240 (2010).
Differential identities of matrix algebras

Jose Brox, Departamento de Álgebra, Análisis Matemático, Geometría y Topología, Universidad de Valladolid, Palacio de Santa Cruz, 47002, Valladolid, Spain

Carla Rizzo, Dipartimento di Matematica e Informatica, Università degli Studi di Palermo, via Archirafi 34, 90123, Palermo, Italy

2020 Mathematics Subject Classification: Primary 16R10, 16R50, 17B10; Secondary 16W25, 16P90, 16S30, 17B35, 17B20, 16G30, 15B30

This work was partially supported by the Centre for Mathematics of the University of Coimbra - UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES. Jose Brox was first supported by the Portuguese Government through grant SFRH/BPD/118665/2016 (FCT/Centro 2020/Portugal 2020/ESF), later by a postdoctoral fellowship “Convocatoria 2021” funded by Universidad de Valladolid and partially supported by grant PID2022-137283NB-C22 funded by MCIN/AEI/10.13039/501100011033 and ERDF “A way of making Europe”.

We study the differential identities of the algebra $\M$ of $k\times k$ matrices over a field $F$ of characteristic zero when its full Lie algebra of derivations, $L=\Der(\M)$, acts on it. We determine a set of 2 generators of the ideal of differential identities of $\M$ for $k\geq 2$. Moreover, we obtain the exact values of the corresponding differential codimensions and differential cocharacters. Finally, we prove that, unlike in the ordinary case, the variety of differential algebras with $L$-action generated by $\M$ has almost polynomial growth for all $k\geq 2$.

§ INTRODUCTION

Let $A$ be an associative algebra over a field $F$ of characteristic zero, $F\langle X \rangle$ be the free associative algebra freely generated over $F$ by an infinite countable set $X$, and $\I(A)\subset F\langle X \rangle$ be the $T$-ideal of all polynomial identities of $A$. By a celebrated theorem of Kemer, in characteristic zero every $T$-ideal is finitely generated (see [46]). 
The proof given by Kemer is not constructive, and finding an explicit finite basis of the $T$-ideal of polynomial identities of an algebra is, in general, an extremely hard task. Indeed, there is only a handful of nontrivial examples of algebras for which this problem is completely solved. These include the algebra $UT_k(F)$ of upper triangular matrices ([51]), the infinite-dimensional Grassmann algebra $G$ ([49]), and the tensor product $G\otimes G$ of Grassmann algebras ([60]). If one adds to the above the full matrix algebra $M_2(F)$ of order $2$ (see [15, 62]), one essentially gets the complete list of algebras for which the identities are known. In fact, even the description of the $T$-ideal of $3\times 3$ matrices is still an open problem with no solution in sight. Since finding the exact form of the polynomial identities satisfied by a given algebra is a goal that seems too hard to achieve for the vast majority of relevant algebras, one is led to the study of identities of algebras with additional structure, such as algebras with a trace, group-graded algebras, algebras with involution, algebras with a Lie algebra action induced by derivations and, more generally, algebras with a Hopf algebra action. Such theories of identities include the theory of ordinary ones as a special case and, overall, their study tends to be less challenging. In this paper we focus our attention on algebras with derivations, i.e., associative algebras with a Lie algebra action by derivations. If $L$ is such a Lie algebra, then its action can be naturally extended to an action of its universal enveloping algebra $U(L)$, and we say that $A$ is an algebra with derivations from $L$, or an $L$-algebra. In this context the differential identities of $A$ are defined as the polynomials vanishing on $A$ in the variables $x^u:=u(x)$ with $u\in U(L)$, i.e., coming from the free $L$-algebra $\FL$. 
Notice that the theory of differential identities generalizes the theory of ordinary polynomial identities, since any algebra $A$ can be regarded as an $L$-algebra for the trivial Lie algebra $L$ acting trivially on $A$, in which case $U(L)\cong F$. Differential identities were introduced by Kharchenko in [41] (see also [42]) and, in later years, relevant work by Gordienko and Kochetov ([32]) has motivated a growing interest in them. The $T_L$-ideals of differential identities of some important algebras have been determined: in [22, 70], Giambruno and Rizzo gave a complete description of the differential identities of the algebra $UT_2(F)$ of $2\times 2$ upper triangular matrices endowed with all possible actions of Lie algebras by derivations; in [11], Di Vincenzo and Nardozza determined the generators of the $T_L$-ideal of the algebras $UT_k(F)$ under the action of the nonabelian two-dimensional Lie algebra; in [69], Rizzo studied the differential identities of $G$ with the action of a finite-dimensional abelian Lie algebra of inner derivations. We also refer the interested reader to [55, 58] for more results on differential identities of other interesting algebras. Since the base field $F$ is of characteristic zero, as in ordinary PI theory, the $T_L$-ideal $\I^L(A)$ is completely determined by its multilinear elements. Recall from ordinary PI theory that the codimension sequence $\{c_n(A)\}_{n\in\N}$ of an algebra $A$ is defined by taking $c_n(A)$ as the dimension of the space $P_n$ of multilinear polynomials of degree $n$ modulo $\I(A)$. The codimension sequence is also hard to compute, in the sense that, quoting Regev (<cit.>), in general there is no hope of finding a closed formula for $c_n(A)$. Therefore one resorts to studying the growth of the sequence as $n$ tends to infinity. 
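As a toy illustration of the codimension sequence just defined (our example, not taken from the paper): for a commutative algebra such as $F$ itself, every multilinear monomial collapses to a single one, giving the extreme case of constant growth.

```latex
% Since [x,y] = xy - yx is an identity of F, every multilinear monomial
% x_{\sigma(1)} \cdots x_{\sigma(n)} is congruent to x_1 \cdots x_n modulo \I(F), so
c_n(F) \;=\; \dim \frac{P_n}{P_n \cap \I(F)} \;=\; 1 \quad \text{for all } n \geq 1,
% i.e., the codimension sequence of a commutative algebra is polynomially
% (indeed constantly) bounded.
```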
In the late nineties Giambruno and Zaicev ([24, 25]) proved that if $A$ is an algebra satisfying a nontrivial polynomial identity, then the limit $\exp(A):=\lim_{n\to \infty}\sqrt[n]{c_n(A)}$ exists and is always a nonnegative integer, called the (ordinary) exponent of $A$. As a consequence, it follows that the codimensions of an algebra are either polynomially bounded or grow exponentially. Given a variety $\textsc V$, its growth is defined as the growth of $c_n(A)$ for any $A$ generating $\textsc V$. One then says that the variety $\textsc V$ has almost polynomial growth if its growth is not polynomially bounded but every proper subvariety of $\textsc V$ has polynomial growth. In the ordinary setting, $G$ and $UT_2(F)$ are the only algebras generating varieties of almost polynomial growth ([45]). Analogous definitions of $P_n^L$, $c_n^L(A)$ and $\exp^L(A)$ can be given in the differential setting. In [31] Gordienko proved that, in case $A$ is finite dimensional, $\exp^L(A)$ indeed exists and is a nonnegative integer, called the $L$-exponent of $A$, which likewise allows one to define the concept of almost polynomial growth in this case. Moreover, since $U(L)$ is a unital algebra, we can identify $P_n$ with a subspace of $P_n^L$ in a natural way and hence we have $c_n(A)\leq c_n^L(A)$ for all $n\geq 1$, from which it is clear that $\exp(A)\leq \exp^L(A)$. In [32], Gordienko and Kochetov proved that in case $L$ is a finite-dimensional semisimple Lie algebra then $\exp(A)=\exp^L(A)$; in [72], it was shown that if $A$ is finite dimensional and $L$ is any Lie algebra then $\exp^L(A)=1$ if and only if $\exp(A)=1$; and in [71], the author proved that $\exp^L(A)$ coincides with $\exp(A)$ for any Lie algebra $L$. 
In case $L$ is finite dimensional and solvable, the only two finite-dimensional $L$-algebras generating $L$-varieties of almost polynomial growth are $UT_2(F)$ with trivial $L$-action, and $UT_2(F)^\varepsilon$ with $L$ the one-dimensional Lie algebra generated by the inner derivation $\varepsilon$ induced by the matrix unit $e_{11}$ (<cit.>). The assumption that $L$ is solvable is crucial; in fact, in this paper we present an infinite family of finite-dimensional $L$-algebras of almost polynomial growth for a simple Lie algebra $L$. This shows that the structural properties of the acting Lie algebra deeply affect the growth of the related varieties. As matrix algebras are of great importance for both mathematics and its applications, the identities satisfied by matrices have been an attractive object of study since the very origins of polynomial identity theory. Concerning matrices with additional structure, to the best of our knowledge, so far the only known results are on graded identities of the matrix algebras $\M$ for cross-product gradings ([73, 74] for gradings by $\mathbb{Z}_k$ and $\mathbb{Z}$, [1] for gradings by an arbitrary group), and on the trace identities of the full matrix algebras (see [61, 63]). The main purpose of this paper is to determine the differential identities of the algebra $\M$ of $k\times k$ matrices for $k\geq 2$ over a field $F$ of characteristic zero, when acted on by its Lie algebra of all derivations $\Der(\M)$, making all computations explicit along the way. To do so, in order to have a finite-dimensional algebra at our disposal, for any $L$-algebra $A$ we write $U$ for the image of the representation of $U(L)$ in $\E_F(A)$ and define two related free $L$-algebras, $\FU$ and $\FLU$ (with their corresponding notions of $U$- and $(L,U)$-polynomials and $T_U$- and $T_{L,U}$-ideals). 
These algebras allow us to make computations with $U$, and between the two they model the desired properties of $\FL$: roughly speaking, $\FU$ inherits the linear behavior of $\FL$, while $\FLU$ inherits its $L$-action behavior. In Section <ref> we conduct a careful analysis of these algebras and their relations, and develop the general setting of the variety of $(L,U)$-algebras (which we define), for which $\FLU$ is the free algebra. In this way we show that we can study differential identities, codimensions, and growth by considering $U$- and $(L,U)$-polynomials and the variety of $(L,U)$-algebras. In Section <ref> we particularize to the case $L:=\Der(\M)\cong\Sl_k(F)$ and, via the representation theory of $L$ (see Theorem <ref>), describe the $U$-polynomials of $\FU$ as being composed of variables either of the form $x^{\vp_{ab}}=x^{ab}$ for $a,b$ elements of the standard Cartan-Weyl basis $\mathcal S:=\{h_1,\ldots,h_{k-1},e_{12},\ldots,e_{kk-1}\}$ of $\Sl_k(F)$, or of the form $x^{\vp_{gg}}=x^{gg}$, with $g=I_k$ the identity matrix, where the exponent endomorphisms satisfy $\vp_{ab}\vp_{cd}=\delta_{bc}\vp_{ad}$ and $1=\sum_{a\in\mathcal S}\vp_{aa} + \vp_{gg}\in\E_F(\M)$. It is this partition of unity into orthogonal idempotents that allows us to circumvent the appearance of ordinary PIs in our computations. Moreover, the definitions of the endomorphisms $\vp_{ab}$ allow us to directly translate identities from $\mathcal S\cup\{g\}$ into $U$-identities of $\M$ (see Lemma <ref>), e.g. $e_{12}e_{31}=0$ implies $x^{e_{12}e_{12}}y^{e_{12}e_{31}}=0$, with the second exponent index carrying the weight of the identities. 
In Section <ref> we use this idea together with the linear structure of $\FU$ to exhibit a set of generators of $\I_U(\M)$, in 2 variables and with at most 3 terms, which we afterwards reduce to a minimal set of $4$ generators (in 2 variables and 2 terms) with the aid of the $L$-action of $\FU$, which allows us to modify the second index of an exponent; finally we show, through the result from the representation theory of $L$ that we call the primitive element lemma (Lemma <ref>), that $\I_{L,U}(\M)$ is principal. In order to translate this last result to $\FL$ in an explicit way, if $\phi$ is the homomorphism sending $U(L)$ to $U$, we need to compute some preimages $\phi^{-1}(\vp_{ab})\in U(L)$ of the endomorphisms $\vp_{ab}\in U$, and also some generators of $\ker\phi$, which we also do in Section <ref>. For the preimages, we find explicit expressions given by polynomials of degree at most $6$ in the elements $e_{ij}\in U(L)$. For the kernel, we recall that the center of $U(L)$ is a polynomial ring in $k-1$ indeterminates $c_i$ which on each irreducible representation $\rho$ of $L$ act as scalars $\lambda^\rho_i$, so that each $c_i-\lambda^\rho_i$ lies in the kernel of $\rho$.[We also compute explicitly the values of the eigenvalues of a standard set of Casimir generators of $\Sl_k(F)$ for the adjoint representation, a result which may be of independent interest.] On the other hand, we know that $e_{12}^3\in\ker\phi$ and that $\phi$ is the direct sum of the trivial and the adjoint representations of $L$. From these facts, the algebraic geometry of $U(L)$ (Gröbner bases, primitive spectrum), and the primitive element lemma, we show that $\ker\phi$ is principal (Theorem <ref>). 
Then, as $\FLU\cong\FL/\ILU$ with $\ILU$ the $T_L$-ideal generated by $x^z$ for $z\in\ker\phi$, we obtain as our main result, Theorem <ref>, that the differential identities of $\M$ are generated by 1 identity in 1 variable (coming from $\ker\phi$ and depending on $k$) and 1 identity in 2 variables (coming from $\I_{L,U}(\M)$ and not depending on $k$ except for $k=2$). In addition, in Section <ref> we also establish a special kind of symmetry that holds for the $U$-identities of any $(L,U)$-algebra $A$ and that we use extensively thereafter: roughly speaking, changes in the first exponent index leave $T_U(A)$ invariant. More concretely, consider the space $P^U_{\mathcal I,\mathcal J,(a_1,\ldots,a_{n-r})}$, with $(\mathcal I,\mathcal J)$ a partition of $\{1,\ldots,n\}$ and $|\mathcal I|=r$, of those multilinear $U$-polynomials in which the variable $x_i$ always appears paired with first exponent $g$ for $i\in\mathcal I$ and the variable $x_j$ always appears paired with first exponent $a_j\in\mathcal S$ for $j\in\mathcal J$. Then $P^U_{\mathcal I,\mathcal J,(a_1,\ldots,a_{n-r})}$ is linearly isomorphic to $P^U_{\mathcal I',\mathcal J',(a'_1,\ldots,a'_{n-r'})}$ if and only if $r=r'$, and $U$-identities of $A$ map to $U$-identities. Moreover, defining an action of $S_r\times S_{n-r}$ by permutations of variables together with their first exponents, the linear isomorphisms are in fact isomorphisms of $S_r\times S_{n-r}$-modules. Since all of them are then isomorphic to $P^U_{r,n-r}:=P^U_{\{1,\ldots,r\},\{r+1,\ldots,n\},(a,\ldots,a)}$ for fixed $a$, we can restrict to the study of these $S_r\times S_{n-r}$-modules for each $0\leq r\leq n$. With these ideas at hand, we show a combinatorial formula for the $U$-codimensions (Formula (<ref>)), arising from $P^U_{r,n-r}(A)$, which is used in Section <ref> together with the $U$-identities of $\M$ to find a closed formula for $c^L_n(\M)$ (see Theorem <ref>). 
In particular, the associated generating function is rational; in contrast, when $k\geq3$ is odd, the generating function of the (ordinary) $c_n(\M)$ is not algebraic (<cit.>). As an aside, this proof also constitutes a simple way of showing that $\exp(\M)=k^2$ (as $\exp(\M)=\exp^L(\M)=k^2$), which was originally shown by Regev by resorting to the asymptotics of trace identities ([65]), and can also be proved via Wedderburn's decomposition (<cit.>). Now let $P_{(n;r)}^U(A)$ be the direct sum of all $P_{\mathcal I,\mathcal J,(a_1,\ldots,a_{n-r})}(A)$ such that $|\mathcal I|=r$; it is an $S_r\times S_{n-r}$-module whose character $\chi_{(n;r)}^U(A)$, which we call the $(n,r)$th $U$-cocharacter of $A$, is a multiple of that of $P^U_{r,n-r}(A)$ (Formula (<ref>)). In Section <ref> we show, by a counting argument, that $\chi_{(n;r)}^U(\M)$ is a multiple of the irreducible $S_r\times S_{n-r}$-cocharacter $\chi_{(r)}\otimes\chi_{(n-r)}$ for each $0\leq r\leq n$ (Theorem <ref>). Lastly, in Section <ref> we prove a result which is, in our view, one of the most interesting and unexpected PI results of this paper: unlike in the ordinary case, the variety $\V^L(M_k(F))$ of differential algebras with $L$-action generated by $M_k(F)$ has almost polynomial growth for all $k\geq 2$, i.e., $\V^L(M_k(F))$ has exponential growth but any of its proper $L$-subvarieties has polynomial growth (Theorem <ref>). To show it, we prove that if an $L$-subvariety $\textsc V$ satisfies any $L$-identity not belonging to $T_L(\M)$, then it must satisfy all $U$-identities of the form $x_1^{a_1a_2}\cdots x_t^{a_{2t-1}a_{2t}}$ with $a_i\in\mathcal S$ for some $t$, implying that $c_{r,n-r}^L(\textsc V) = c_{r,n-r}^U(\textsc V)=0$ whenever $n-r\geq N$ for some $N$.

§ GENERAL SETTING

§.§ Preliminaries

Throughout this paper, $F$ will denote a field of characteristic zero, $A$ an associative algebra, and $(L,[\cdot\, ,\cdot]_L)$ a Lie algebra. All algebras and vector spaces have $F$ as their underlying field. 
Although we work with varieties of nonunital associative algebras, all results can be easily adapted to unital associative algebras as well. All notations, once introduced, will maintain their meanings in the ensuing sections of the paper. Associative algebras. Given a set $S\subseteq A$, by $\la S\ra$ we denote the ideal generated by $S$. $A$ is split semisimple (over $F$) if it is a direct sum of matrix algebras over $F$. Given a unital associative algebra $U$ with product $\cdot$, the opposite algebra $U^{\op}$ is the underlying vector space of $U$ endowed with the opposite product $a\cdot^{\op} b:=b\cdot a$ for $a,b\in U$; $U^{\op}$ is antiisomorphic to $U$ as unital associative algebras (with $(U^{\op})^{\op}=U$) through the map $^{\op}:U\to U^{\op}$ such that $a^{\op}:=a$; in particular, any subset of $U$ is mapped to itself. If $\phi:U_1\to U_2$ is a homomorphism of unital associative algebras, then $\phi^{\op}:U_1^{\op}\to U_2^{\op}$ defined by $\phi^{\op}(a):=\phi(a)$ is a homomorphism of unital associative algebras. Given a vector space $V$, a left (resp. right) (algebra) $U$-action of a unital associative algebra $U$ on $V$ is a map $\cdot:U\times V\rightarrow V$ (resp. $\cdot:V\times U\rightarrow V$) such that $1\cdot x=x$, $a\cdot(\lambda x+y)=\lambda(a\cdot x)+a\cdot y$, $(\lambda a+b)\cdot x = \lambda(a\cdot x)+b\cdot x$, and $(ab)\cdot x = a\cdot(b\cdot x)$ (resp. $x\cdot 1=x$, $(\lambda x+y)\cdot a=\lambda(x\cdot a)+y\cdot a$, $x\cdot(\lambda a+b) = \lambda(x\cdot a)+x\cdot b$, and $x\cdot (ab) = (x\cdot a)\cdot b$) for $a,b\in U$, $x,y\in V$ and $\lambda\in F$. Let $\End_F(V)$ be the algebra of the endomorphisms of $V$ acting on the left of $V$. Then a left (resp. right) $U$-action on $V$ produces a left (resp. right) representation of $U$, i.e., a homomorphism of unital algebras $\phi:U\to\End_F(V)$ (resp. $\phi:U^{\op}\to\End_F(V)$) and vice versa. 
Any left action $\cdot$ of $U$ on $V$ has an associated right action $\cdot^{\op}$ of $U^{\op}$ on $V$ given by $x\cdot^{\op}a:=a\cdot x$ for $a\in U^{\op}$, $x\in V$, and vice versa (with $(\cdot^{\op})^{\op}=\cdot$). If there is a (left, right) $U$-action on $A$ we say that $A$ is a (left, right) $U$-algebra (for this action). Throughout this paper we define endomorphisms as acting on the left, but we use exponential notation to denote their actions: hence we see any left $U$-algebra as a right $U^{\op}$-algebra (notice that the associated representation $\phi$ is the same), with exponents living in $U^{\op}$; in addition, we denote the opposite products appearing in exponents just by juxtaposition. Moreover, by abuse of notation we may also denote $\phi^{\op}:U^{\op}\to\End_F(A)^{\op}$ by $\phi$. For example, if $\phi:U\to\End_F(A)$ with $\phi(u_i)=\phi_i$, $u_i\in U$ for $i=1,2$ and associated left action denoted by $\sbullet$, then we write \[(a^{u_1})^{u_2} = a^{u_1u_2} = a^{\phi(u_1)\phi(u_2)} = a^{\phi_1\phi_2} =\phi_2(\phi_1(a)) = u_2\sbullet(u_1\sbullet a) = (u_2u_1)\sbullet a = (u_1\cdot^{\op} u_2)\sbullet a\] for $a\in A$ and, in the exponents, $u_1,u_2\in U^{\op}$, $\phi_1,\phi_2\in\End_F(A)^{\op}$. Since for any set $S\subseteq U$ we have $S^{\op}=S$ inside $U^{\op}$, if no confusion may arise, when picking exponents we may write $s\in S$ instead of $s\in S^{\op}$. Lie algebras. Given $L$, the opposite Lie algebra $L^{\op}$ is the underlying vector space of $L$ endowed with the opposite product $[a,b]_{L^{\op}}:=-[a,b]_L$ ($L^{\op}$ is isomorphic to $L$). If $\vp:L\rightarrow M$ is a homomorphism of Lie algebras, then $\vp^{\op}:L^{\op}\rightarrow M^{\op}$ defined by $\vp^{\op}(a):=\vp(a)$ is a homomorphism of Lie algebras. The underlying vector space of $A$ endowed with the commutator product $[a,b]:=ab-ba$ for all $a,b\in A$ is a Lie algebra, denoted by $A^-$. We have $(A^{\op})^- = (A^-)^{\op}$. 
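As a concrete sanity check of these opposite-product conventions (an illustrative sketch of ours, not part of the paper), one can verify numerically that the commutator taken with respect to the opposite product is the negated commutator, i.e. $(A^{\op})^- = (A^-)^{\op}$, here for $A = M_3(F)$ with random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = (rng.integers(-3, 4, size=(3, 3)).astype(float) for _ in range(2))

br = lambda x, y: x @ y - y @ x              # commutator [x, y] in A
op = lambda x, y: y @ x                      # opposite product x *op y = yx
br_op = lambda x, y: op(x, y) - op(y, x)     # commutator in A^op

# (A^op)^- = (A^-)^op: the opposite commutator is the negated one
assert np.allclose(br_op(a, b), -br(a, b))
```

This is exactly the identity $[a,b]_{L^{\op}}=-[a,b]_L$ used for Lie algebras below.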
A linear endomorphism $\delta:A \to A$ is a derivation of $A$ if it satisfies $(ab)^\delta=a^\delta b + a b^\delta$ for all $a,b\in A$. If $a\in A$, the endomorphism $\ad_a:A\to A$ defined by $\ad_a(b):=[a,b]$ for all $b\in A$ is a derivation of $A$, called the inner derivation induced by $a$. For ease of reading, given an element in $A$ denoted by a lowercase letter, at times we denote the inner derivation this element induces by the corresponding uppercase letter, e.g. $E:=\ad_e\in\End_F(A)$ for $e\in A$. The vector space of all derivations of $A$ endowed with the commutator product is a Lie algebra denoted by $\Der(A)\subseteq\E_F(A)^-$, with the subspace $\ad (A)\subseteq\Der(A)$ of inner derivations of $A$ being a Lie ideal of $\Der(A)$. A semisimple Lie algebra is split (over $F$) if it has a Cartan subalgebra $H$ such that the eigenvalues of $\ad_h$ lie in $F$ for all $h\in H$, called a splitting Cartan subalgebra; hence a split-semisimple Lie algebra has a root system (<cit.>). $L$ is split simple if it is simple and split semisimple. A (left) representation of $L$ on the vector space $V$ is a Lie algebra homomorphism $\rho:L\rightarrow\End_F(V)^-$; we also say that $V$ has a (left) $L$-action given by $d\cdot x:=\rho(d)x$ for $d\in L$, $x\in V$, and that $V$ (together with $\rho$) is a (left) $L$-module. We similarly define the same notions for the right side, with any left $L$-action $\cdot$ giving rise to a right $L^{\op}$-action $\cdot^{\op}$ by the rule $x\cdot^{\op} d:=d\cdot x$; accordingly, we will write left $L$-actions as right $L^{\op}$-actions with exponential notation. The trivial representation on $V$ is given by $\rho_0:=0$. The adjoint representation of $L$ is given by $\ad$ on $L$. A representation $\rho$ on $V$ is irreducible if it has no proper nontrivial subrepresentation $\rho|_W$ on $W\subseteq V$. 
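The derivation definitions above can be checked concretely; the following sketch (ours, not from the paper) verifies with numpy that an inner derivation $\ad_a$ of $M_2(F)$ is linear and satisfies the Leibniz rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def ad(a):
    """Inner derivation induced by a: b -> [a, b] = ab - ba."""
    return lambda b: a @ b - b @ a

a, b, c = (rng.integers(-3, 4, size=(2, 2)).astype(float) for _ in range(3))
d = ad(a)

# Leibniz rule: (bc)^d = b^d c + b c^d
assert np.allclose(d(b @ c), d(b) @ c + b @ d(c))
# ad_a is a linear endomorphism
assert np.allclose(d(2 * b + c), 2 * d(b) + d(c))
```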
Weyl's theorem states that, when $L$ is finite dimensional semisimple, every finite-dimensional $L$-module is completely reducible, i.e., a direct sum of irreducible $L$-modules (<cit.>). For the rest of this paragraph let $L$ be a finite-dimensional split-semisimple Lie algebra (over a field of characteristic $0$), with a fixed splitting Cartan subalgebra $H\subseteq L$ and a fixed set of positive roots; then $L$ has a triangular decomposition $L=N_-\oplus H\oplus N_+$ where $N_{-,+}$ are the linear spans of the negative and positive root spaces, respectively. Any finite-dimensional irreducible $L$-module is absolutely irreducible, i.e., irreducible for any extension of scalars of the base field (<cit.>). A weight of $L$ is an algebra homomorphism $\lambda\in\Hom(H,F)$. A vector $v$ of the $L$-module $V$ is a weight vector of weight $\lambda$ if $\rho(h)v=\lambda(h)v$ for all $h\in H$, a highest weight vector if in addition $\rho(N_+)v=0$, in which case $\lambda$ is a highest weight of $V$. An $L$-module is a (cyclic) highest weight module if it is generated by a single highest weight vector. Given a weight $\lambda$ of $L$ we can build a universal highest weight $L$-module with $\lambda$ as highest weight, the Verma module $W_\lambda$, such that any highest weight $L$-module with highest weight $\lambda$ is a quotient of $W_\lambda$ (<cit.>). Given a representation $\rho:L\to\E_F(V)^-$, the enveloping (associative) algebra of $\rho$ is the unital associative subalgebra of $\E_F(V)$ generated by $\rho(L)$, denoted here by $[\rho(L)]$. The following result is known (see e.g. <cit.> for a close result); we include a proof here since it is a key tool of our paper. Let $L$ be a finite-dimensional split-semisimple Lie algebra over a field of characteristic $0$ and $\rho$ be a finite-dimensional representation of $L$. Then the enveloping associative algebra $[\rho(L)]$ of $\rho$ is split semisimple. 
Moreover, if $\rho$ is irreducible of dimension $d$, then $[\rho(L)]$ is a matrix algebra of dimension $d^2$. Put $\rho:L\to\End_F(V)^-$ with $\dim_F V=d$. Since $\rho$ is finite dimensional, by Weyl's theorem it is completely reducible, so it suffices to assume that $\rho$ is irreducible. Denote $U:=[\rho(L)]$, let $K$ be an algebraic closure of $F$, and consider the following objects produced by extension of scalars: $L_K:=L\otimes_F K, \, V_K:=V\otimes_F K, \E_F(V)\otimes_F K = \E_K(V_K),$ the representation $\rho_K:=\rho\otimes_F K:L_K\rightarrow\E_K(V_K)^-$, and its enveloping algebra $U_K:=[\rho_K(L_K)] = U\otimes_F K$. Since $L$ is a finite-dimensional split-semisimple Lie algebra and $\chr(F)=0$, $\rho$ is absolutely irreducible. Therefore $\rho_K$ is irreducible, hence $V_K$ is a finite-dimensional irreducible $U_K$-module (since $U_K$ is generated by $L_K$) which is also faithful (as $U_K\subseteq\E_K(V_K)$); by Jacobson's density theorem (<cit.>), $U_K=\End_D(V_K)$ with $D:=\End_{U_K}(V_K)$ a finite-dimensional division algebra (by Schur's lemma) over the algebraically closed field $K$, which forces $D=K$ and $U_K = \End_K(V_K)\cong M_d(K)$. Now $U\otimes_F K = U_K = \E_K(V_K) = \E_F(V)\otimes_F K$ implies $U=\E_F(V)$ (since $U\subseteq\E_F(V)$), so $U\cong M_d(F)$. Universal enveloping algebras. The universal enveloping algebra $U(L)$ of $L$ is the quotient of the unital tensor algebra generated by $L$ by the ideal generated by relations $a\otimes b-b\otimes a-[a,b]_L$ for all $a,b\in L$, i.e., for $a,b\in L$ we have $[a,b]_{U(L)}=[a,b]_L$, hence $[a,b]_{\Uop}=-[a,b]_L = [a,b]_{L^{\op}}$. The universal enveloping algebra satisfies the following universal property: there is a monomorphism $\gamma:L\rightarrow U(L)^-$ of Lie algebras such that for any homomorphism $\vp:L\rightarrow A^-$ of Lie algebras there exists a unique homomorphism $\phi:U(L)\rightarrow A$ of associative algebras such that $\vp=\phi\circ\gamma$. 
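As a sanity check of the last statement (a standard computation, added here for illustration), take $L=\mathfrak{sl}_2(F)$ with basis $\{e,h,f\}$ and its natural representation on $V=F^2$:

```latex
\[
\rho(e)=\begin{pmatrix}0&1\\0&0\end{pmatrix},\qquad
\rho(h)=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad
\rho(f)=\begin{pmatrix}0&0\\1&0\end{pmatrix}.
\]
% The enveloping algebra [rho(L)] contains rho(e)rho(f) = E_{11} and
% rho(f)rho(e) = E_{22}, hence all four matrix units, so
\[
[\rho(L)]=\End_F(V)\cong M_2(F),\qquad \dim_F[\rho(L)]=4=2^2.
\]
```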
Since its universal property determines $U(L^{\op})$ up to isomorphism, and $U(L)^{\op}$ is easily seen to satisfy said universal property, we have $U(L^{\op})\cong U(L)^{\op}$ as unital associative algebras. By abuse of notation, we identify $L$ with $\gamma(L)$ inside $U(L)$. Then the Poincaré-Birkhoff-Witt (PBW) theorem asserts that if $\{e_i\}_{i\in I}$ is an ordered basis of $L$, the set $B:=\{e_{i_1}^{k_1}\cdots e_{i_j}^{k_j} \ | \ j\in\N, e_{i_1}<\cdots< e_{i_j}, k_1,\ldots,k_j\in\N^*\}$ is a basis of $U(L)$ (including $1\in B$). In particular, $U(L)=:U^*(L)\oplus F\cdot1$, where $U^*(L)$ is the nonunital universal enveloping algebra of $L$ and the augmentation ideal of $U(L)$ (which is a maximal ideal). Note that $U(L)$ is always an infinite-dimensional algebra, even if $L$ is finite dimensional. Universal enveloping algebras of finite-dimensional Lie algebras are Noetherian rings (<cit.>) and the theory of Gröbner bases is available for $U(L)$ (see <cit.>); we are interested in a particular application. Let $L$ be finite dimensional, fix an ordered basis $\{e_1,\ldots,e_d\}$ with $e_i<e_j$ if $i<j$, and let $B$ denote the corresponding basis of monomials of $U(L)$ given by the PBW theorem. The deglex order extends $<$ to $B$ by the rule $m_1:=e_1^{k_1}\cdots e_d^{k_d}< e_1^{l_1}\cdots e_d^{l_d}=:m_2$ if either $\deg(m_1)<\deg(m_2)$ or $\deg(m_1)=\deg(m_2)$ and the first nonzero entry of $(k_1-l_1,\ldots,k_d-l_d)$ is positive. Given an element $f\in U(L)$, its leading monomial $\LM(f)$ with respect to $<$ is the largest monomial of $f$ with nonzero coefficient. If $I$ is an ideal of $U(L)$, its set of leading monomials is $\LM(I):=\{\LM(f) \ | \ f\in I\}$ and its set of normal words is $N(I):=\{m\in B \ | \ m\not\in\LM(I)\}$. Then $U(L)=I\oplus \spn N(I)$ (<cit.>). 
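A small illustration of the PBW rewriting underlying these notions (for $L=\mathfrak{sl}_2$ with the ordered basis $e<h<f$, a choice made only for this example):

```latex
% PBW monomials of U(sl_2) then have the form e^a h^b f^c. The word fe is not
% a PBW monomial, but the relation [e,f] = h rewrites it as
\[
fe = ef - [e,f] = ef - h,
\]
% a linear combination of the PBW monomials ef and h. In the deglex order,
% the leading monomial of the element ef - h is LM(ef - h) = ef, since
% deg(ef) = 2 > 1 = deg(h).
```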
We can make $U(L)$ an $L$-module by extending the adjoint action of $L$ to $U(L)$, which is the restriction to $L$ of the adjoint action of $U(L)$ on itself (given by $\ad_x(y)=[x,y]$ for $x,y\in U(L)$). By the universal property of $U(L)$, any representation $\rho:L\rightarrow\End_F(V)^-$ of $L$ extends to an associative left $U(L)$-representation $\rho:U(L)\rightarrow\End_F(V)$ (note the abuse of notation). A $U(L)$-representation is irreducible if and only if it is irreducible as an $L$-representation (since $U(L)$ is generated as an algebra by $L$). If $L$ is finite dimensional split semisimple, the Verma module associated to weight $\lambda$ of $L$ can be constructed as $W_\lambda\cong U(L)/I_\lambda$, with $I_\lambda$ the left ideal of $U(L)$ generated by $\{h-\lambda(h)1 \ | \ h\in H\}\cup N_+$ (<cit.>, <cit.>). The center $Z(U(L))$ of $U(L)$ is the set of elements $c\in U(L)$ such that $[c,U(L)]=0$, which are called Casimir elements. For the rest of this paragraph let $L$ be a finite-dimensional semisimple Lie algebra over an algebraically closed field (of characteristic $0$), and denote $Z:=Z(U(L))$. By the Harish-Chandra isomorphism (<cit.>), $Z$ is isomorphic to the algebra of polynomials in rank$(L)$ indeterminates (see also <cit.>); we call any set of rank$(L)$ algebraically independent elements of $Z$ a set of Casimir generators. By Schur's lemma, if $c\in Z$ and $\rho$ is an irreducible representation of $L$ on $V$ then $\rho(c)$ acts as a scalar on $V$. A central character of $L$ is an algebra homomorphism $\chi:Z\rightarrow F$. If $\rho$ is an irreducible representation of $L$ of dimension $d$ then $\chi_\rho(c):=\frac1d\tr(\rho(c))$ for $c\in Z$ is the central character associated to $\rho$; we call $\chi_\rho(c)$ the eigenvalue of $c$ for $\rho$. We clearly have $U(L)\ker\chi_\rho\subseteq\ker\rho$. If $\rho,\rho'$ are two finite-dimensional irreducible representations of $L$ and $\chi_\rho=\chi_{\rho'}$ then $\rho\cong\rho'$ (<cit.>). 
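For example (classical $\mathfrak{sl}_2$ facts, recalled here only for illustration): $\mathfrak{sl}_2$ has rank $1$, and $Z(U(\mathfrak{sl}_2))$ is the polynomial algebra generated by the Casimir element.

```latex
\[
c \;=\; ef + fe + \tfrac{1}{2}h^2 \;\in\; Z(U(\mathfrak{sl}_2)).
\]
% On the (m+1)-dimensional irreducible module of highest weight m, write
% ef = fe + h and act on a highest weight vector v (so ev = 0, hv = mv):
\[
c\,v \;=\; \bigl(2fe+h+\tfrac{1}{2}h^2\bigr)v
\;=\; \bigl(m+\tfrac{1}{2}m^2\bigr)v
\;=\; \tfrac{1}{2}\,m(m+2)\,v,
\]
% so the central character sends c to m(m+2)/2, which indeed distinguishes
% the finite-dimensional irreducible representations of sl_2.
```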
Note that, for a fixed set of Casimir generators, the maximal ideals of $Z$ are in 1-to-1 correspondence with the central characters of $L$. Now fix $F=\Co$, the field of complex numbers. We will need the following results from the algebraic geometry of $U(L)$. Let $\Max(A), \Prim(A)$ respectively denote the sets of maximal and primitive ideals of the algebra $A$, and for $I\in\Prim(U(L))$ let $\pi(I):=I\cap Z$ and say that $I$ lies over $J$ for an ideal $J$ of $Z$ if $I\in\pi^{-1}(J)$. Then $\pi(I)\in\Max(Z)$, and Dixmier's theorem asserts the following: given $M\in\Max(Z)$, the set of primitive ideals of $U(L)$ lying over $M$ is finite and contains a minimal and a maximal element, with $\pi:\Max(U(L))\rightarrow\Max(Z)$ being a bijection ([12, 13], see also <cit.> and <cit.>). Joseph's theorem on principal series submodules states that if the ideal $I$ of $U(L)$ satisfies $I\cap Z(U(L))\in\Max(Z(U(L)))$ then $I$ is the annihilator of the $L$-module $W_\lambda/IW_\lambda$ for some weight $\lambda$ of $L$ (<cit.>). We can say more when the ideal is of finite codimension: Let $L$ be a complex finite-dimensional semisimple Lie algebra and $I\neq U(L)$ be an ideal of finite codimension of $U(L)$ such that $I\cap Z(U(L))\in\Max(Z(U(L)))$. Then $I\in\Max(U(L))$. Denote $Z:=Z(U(L))$. Since $I\cap Z\in\Max(Z)$, $I$ is the annihilator of the $U(L)$-module $N:=W_\lambda/IW_\lambda$ for some weight $\lambda$ of $L$ by Joseph's theorem on principal series submodules. Since $W_\lambda\cong U(L)/I_\lambda$ as $U(L)$-modules, we get $N\cong\frac{U(L)/I_\lambda}{(I+I_\lambda)/I_\lambda}\cong U(L)/(I+I_\lambda)$ as $U(L)$-modules, with $I+I_\lambda$ of finite codimension, proving that $N$ is a finite-dimensional $U(L)$-module. By Weyl's theorem $N$ is completely reducible, $N=\bigoplus_{i=1}^n N_i$ with $N_i$ irreducible and finite dimensional for $1\leq i\leq n$. 
The primitive ideals $M_i:=\Ann N_i$ are maximal by Jacobson's density theorem (<cit.>), in particular pairwise coprime, so $I=\Ann N=\Ann(\bigoplus_{i=1}^n N_i) = \bigcap_{i=1}^n\Ann N_i =\bigcap_{i=1}^n M_i$ and (by the Chinese remainder theorem) $U(L)/I\cong \prod_{i=1}^n U(L)/M_i$ is a sum of $n\geq1$ matrix algebras ($n>0$ because $I\neq U(L)$). Suppose $n\geq2$; then there are at least two different maximal ideals $M_1,M_2$ of $U(L)$ containing $I$, with $I\cap Z$ maximal in $Z$ forcing $M_1\cap Z = I\cap Z = M_2\cap Z$. But Dixmier's theorem implies the uniqueness of the maximal ideal of $U(L)$ lying over $I\cap Z$, so $M_1=M_2$, a contradiction which forces $n=1$. So $U(L)/I$ is simple and $I$ is maximal. Hopf algebras. Consider a Hopf algebra $H$ with comultiplication $\Delta:H\rightarrow H\otimes H$ (in particular $\Delta(xy)=\Delta(x)\Delta(y)$ in $H\otimes H$). $H$ is cocommutative if $\tau\circ\Delta=\Delta$ for the twist map $\tau:H\otimes H\rightarrow H\otimes H$ defined by $\tau(a\otimes b):=b\otimes a$. If $H$ is cocommutative then $H^{\op}$ is also a Hopf algebra with the same comultiplication, counit, and antipode. We write comultiplications in Sweedler's notation, $\Delta(h)=:\sum h_1\otimes h_2$. For $n\in\N^*$, the $(n-1)$th iterated comultiplication is the operation $\Delta_{n-1}:H\rightarrow \overbrace{H\otimes\cdots\otimes H}^n$ given iteratively by \[\Delta_{n-1}(h):=\sum \Delta(h_1)\otimes\cdots\otimes h_{n-1} =: \sum h_1\otimes\cdots\otimes h_n\] (which is well defined by the coassociativity axiom, see <cit.>). Given $H$, we say that $A$ has a right Hopf (algebra) $H$-action, or that $A$ is a right $H$-module algebra, if there is a right algebra action of $H$ on $A$, $\cdot:A\otimes H\rightarrow A$, such that $(ab)\cdot h=\sum(a\cdot h_1)(b\cdot h_2)$ and $1\cdot h=\varepsilon(h)1$, for all $a,b\in A$ and all $h\in H$, with $\Delta(h)=\sum h_1\otimes h_2$. 
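As a routine check of this axiom: if $\delta\in H$ is primitive, i.e. $\Delta(\delta)=\delta\otimes1+1\otimes\delta$, the module-algebra condition specializes to the derivation rule:

```latex
\[
(ab)\cdot\delta \;=\; \sum (a\cdot\delta_1)(b\cdot\delta_2)
\;=\; (a\cdot\delta)(b\cdot1) + (a\cdot1)(b\cdot\delta)
\;=\; (a\cdot\delta)\,b + a\,(b\cdot\delta),
\]
% using a·1 = a (unitality of the algebra action); thus primitive elements
% of H act on A as derivations.
```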
When $A$ has a right Hopf action, then, for $h\in H$, \[(a_1\ldots a_n)^h = \sum a_1^{h_1}\cdots a_n^{h_n}, \, \text{ with }\Delta_{n-1}(h)=\sum h_1\otimes\cdots\otimes h_n.\] The universal enveloping algebra $U(L)$ of $L$ (and hence $\Uop$) becomes a cocommutative Hopf algebra when endowed with comultiplication $\Delta(\delta):=\delta\otimes 1 + 1\otimes \delta$, counit $\varepsilon(\delta):=0$ and antipode $S(\delta):=-\delta$ for $\delta\in L$, and extended to $U(L)$ via the PBW theorem: $\Delta(e_{i_1}^{k_1}\cdots e_{i_j}^{k_j} ):=\Delta(e_{i_1})^{k_1}\cdots\Delta(e_{i_j})^{k_j}$, $S(e_{i_1}^{k_1}\cdots e_{i_j}^{k_j} ):=S(e_{i_1})^{k_1}\cdots S(e_{i_j})^{k_j}$. E.g., for $\delta_1,\delta_2\in L$ we have $\Delta(\delta_1\delta_2)=\delta_1\delta_2\otimes 1 + \delta_1\otimes\delta_2 + \delta_2\otimes\delta_1 + 1\otimes\delta_1\delta_2$ and $\Delta_2(\delta_1\delta_2)= \delta_1\delta_2\otimes 1\otimes 1 + \delta_1\otimes\delta_2\otimes 1 + \delta_1\otimes 1\otimes\delta_2 + \delta_2\otimes\delta_1\otimes 1 + \delta_2\otimes 1\otimes\delta_1 + 1\otimes\delta_1\delta_2\otimes 1 + 1\otimes\delta_1\otimes\delta_2 + 1\otimes\delta_2\otimes\delta_1 + 1\otimes 1\otimes\delta_1\delta_2$. §.§ The variety of $L$-algebras $L$-algebras. Given $L$, we say that $A$ is an $L$-algebra or that $L$ acts on $A$ by derivations, if there exists a homomorphism of Lie algebras $\varphi:L\to\Der(A)$, hence $A$ has a right $L^{\op}$-action satisfying $(a_1a_2)^\delta = a_1^\delta a_2+a_1a_2^\delta$ for $a_1,a_2\in A$, $\delta\in L$. From now on, when we say that $A$ has an $L$-action it will imply that $A$ is an $L$-algebra for that $L$-action. Note that when $L$ is simple either $\varphi=0$ or $\varphi$ is a monomorphism. 
By the universal property of $U(L)$, an $L$-action on $A$ can be uniquely extended to a right Hopf $\Uop$-action (which by abuse of notation we also call an $L$-action), by extending $\varphi$ to the homomorphism of unital associative algebras $\phi:U(L)\to\E_F(A)$ such that $\phi(ab):=\vp(a)\vp(b)$ (recall that we also denote the opposite homomorphism $\phi^{\op}:\Uop\to\E_F(A)^{\op}$ by $\phi$, see <ref>). In this way, $A$ becomes a right $\Uop$-module algebra (with action in exponential notation and opposite product denoted by juxtaposition). More explicitly, the $L$-action on $A$ satisfies \[(a_1a_2\cdots a_n)^\delta =a_1^\delta a_2\cdots a_n + a_1a_2^\delta\cdots a_n + \cdots + a_1a_2\cdots a_n^\delta\] for $a_1,\ldots,a_n\in A$ and $\delta\in L$, and, if $\Delta_{n-1}(u)=\sum u_1\otimes\cdots\otimes u_n$ for $u\in \Uop$, then \[(a_1a_2\cdots a_n)^u=\sum a_1^{u_1}a_2^{u_2}\cdots a_n^{u_n}. \tag{U}\label{formula(U)}\] Note that when $\varphi(L)=0$, then $\phi(U^*(L))=0$, $\phi(1)=1$, and the Hopf $\Uop$-action is just the linear action of $F$. For fixed $L$ the class of $L$-algebras is equational, and so it is a variety in the sense of universal algebra (see e.g. [7]). This variety is nontrivial, as it contains $A$. Ideals of $L$-algebras ($L$-ideals) are understood to be invariant under the $\Uop$-action, and homomorphisms $f:A\rightarrow B$ between $L$-algebras $A,B$ ($L$-homomorphisms) must satisfy $f(a^u)=f(a)^u$ for $a\in A$, $u\in\Uop$. The $L$-ideal generated by elements $a_1,\ldots,a_n\in A$ we denote by $\la a_1,\ldots,a_n\ra_L$. In the next result, which we call the primitive element lemma, it is shown that some $L$-ideals are principal; it is a direct generalization of <cit.> by Catoiu. Let $L$ be a finite-dimensional split-semisimple Lie algebra and $A$ be an $L$-algebra. If the $L$-ideal $I$ of $A$ is generated by weight vectors $a_1,\ldots,a_n\in A$ of different weights, then $I$ is generated by $a_1+\cdots+a_n$. 
Let $H$ be a fixed Cartan subalgebra of $L$ with basis $\{h_1,\ldots,h_m\}$ and let $\lambda_{ij}$ denote the eigenvalue of $h_i$ for $a_j$. We proceed by induction on $n$. If $n=1$ then the result is trivial. Suppose $n>1$ and that the conclusion is true for all $r<n$. Since $a_1,\ldots,a_n$ have different weights, there exists $1\leq i\leq m$ such that $z_{in}:=h_i-\lambda_{in}$ does not kill all $a_j$. Reorder the $a_j$ so that $a_j^{z_{in}}=0$ if and only if $j>r$ for some $1\leq r<n$ (it kills at least $a_n$). Put $a:=a_1+\cdots+a_n$. Then $a^{z_{in}} = \alpha_1a_1+\cdots + \alpha_r a_r\in \la a\ra_L$ with $0\neq\alpha_1,\ldots,\alpha_r\in F$. By the inductive hypothesis, $\la a_1,\ldots,a_r\ra_L = \la \alpha_1a_1,\ldots,\alpha_ra_r\ra_L = \la \alpha_1a_1+\cdots +\alpha_ra_r\ra_L \subseteq \la a\ra_L$, which implies that $a_{r+1}+\cdots+a_n\in \la a\ra_L$ since $a=a_1+\cdots+a_n$. By the inductive hypothesis again, $\la a_{r+1},\ldots,a_n\ra_L = \la a_{r+1}+\cdots+a_n\ra_L\subseteq \la a\ra_L$. Therefore $\la a \ra_L=I$. The variety of $L$-algebras contains the free (nonunital associative) $L$-algebra $\FL$, freely generated by the countably infinite set of variables $X:=\{x_1,x_2, \dots\}$, which satisfies the following universal property: each map $\gamma:X\rightarrow A$ to an $L$-algebra $A$ can be uniquely extended to an $L$-homomorphism $\FL\rightarrow A$, which we call the evaluation of $\FL$ at elements $\gamma(x_1),\gamma(x_2),\ldots$ from $A$. We can describe $\FL$ as follows: $\FL$ is generated as an algebra by the set $\{x_i^u \ | \ i\in \N^*, u\in\Uop\}$, subject to the relations $x_i^1=x_i$, $x_i^{\lambda u+v}=\lambda x_i^u + x_i^v$, $(\lambda x_i+x_j)^u =\lambda x_i^u+x_j^u$ for all $u,v\in\Uop$, $\lambda\in F$ and $i,j\in \N^*$. 
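To illustrate formula (<ref>) (a routine computation): for $\delta\in L$ we have $\Delta(\delta^2)=\Delta(\delta)^2=\delta^2\otimes1+2\,\delta\otimes\delta+1\otimes\delta^2$, so in any $L$-algebra

```latex
\[
(a_1a_2)^{\delta^2} \;=\; a_1^{\delta^2}a_2 \;+\; 2\,a_1^{\delta}a_2^{\delta} \;+\; a_1a_2^{\delta^2},
\]
% the "Leibniz square" rule that reappears in several examples below.
```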
Note that given a basis $\mathcal B :=\{e_i\}_{i\in\mathcal I}$ of $\Uop$, $\FL$ is generated as an algebra by the set $\{x_i^{e_j} \ | \ i\in\N^*, j\in\mathcal I\}$ and, moreover, the set \[\left\{x_{i_1}^{e_{j_1}}\cdots x_{i_n}^{e_{j_n}} \ | \ n\in \N^*, i_1,\ldots,i_n\in \N^*, e_{j_1},\ldots,e_{j_n}\in\mathcal B\right\}\] is a basis of $\FL$. The free $L$-algebra is endowed with an $L$-action determined (as in (<ref>)) by \[(x_{i_1}^{e_{j_1}}\cdots x_{i_n}^{e_{j_n}})^u := \sum x_{i_1}^{e_{j_1}u_1}\cdots x_{i_n}^{e_{j_n}u_n}\] for $e_{j_1},\ldots,e_{j_n}\in\mathcal B$ and $u\in\Uop$ with $\Delta_{n-1}(u)=\sum u_1\otimes\cdots\otimes u_n$. The elements of the free $L$-algebra are called differential polynomials or $L$-polynomials. A $T_L$-ideal of the free $L$-algebra is an $L$-ideal which in addition is invariant under all $L$-endomorphisms of $\FL$ or substitutions, which send $x_i\mapsto f_i$ for $i\in\N^*$ and $f_i\in\FL$; e.g., there is a substitution sending $x_1$ to $x_1x_2$ and $x_i$ to $x_i$ for $i\neq1$. Special substitutions are those mapping $x_i\mapsto x_{\sigma(i)}^{u_i}$ for $i\in I$ and $x_j\mapsto x_j$ for $j\not\in I$, for given $I:=\{i_1,\ldots,i_n\}$, $\sigma\in S_n$ acting on $I$ and $u_i\in\Uop$ for $i\in I$, which we call substitutions swapping variables. When referring to elements of a $T_L$-ideal in at most two variables we write them with “generic” variables $x,y$; since $T_L$-ideals are closed under substitutions, $x,y$ may be replaced by any $L$-polynomials $f,g\in\FL$. Given a set $S\subseteq\FL$, by $\la S\ra_{T_L}$ we denote the smallest $T_L$-ideal containing $S$. §.§ The variety of $(L,U)$-algebras We want to avoid, as much as possible, the infinite dimensionality of $U(L)$ in the determination of the differential identities of an $L$-algebra. For this reason, we introduce $(L,U)$-algebras. Fix $L$, an $L$-algebra $A$, and the homomorphism $\phi: U(L)\to\E_F(A)$ corresponding to the right Hopf $ \Uop$-action on $A$. 
We denote $ U:=\phi(U(L))\subseteq\E_F(A)$; note that $U$ is a unital associative algebra, which is finite dimensional when $A$ is, but not necessarily a Hopf algebra. In the following we omit $\phi$ from the notation, but the reader should be aware of the fact that the concepts defined below depend not only on $L$ and $U$ but also on $\phi$. $(L,U)$-algebras. The $ \Uop$-action on $A$ induces a right action of $ U^{\op}$ as a unital associative algebra on $A$; this action is not necessarily a Hopf action (it is a generalized Hopf algebra action as defined by Gordienko in <cit.>; see also Berele's <cit.>), but satisfies $a^u=a^{\phi(u)}$ for all $u\in\Uop$ and $a\in A$. Accordingly, we say that an $L$-algebra $B$ is an $(L,U)$-algebra if it is endowed with a right algebra $ U^{\op}$-action such that $b^u=b^{\phi(u)}$ for all $u\in\Uop$ and $b\in B$, which we call an $(L,U)$-action. If $B,C$ are two associative algebras and $C$ has a right $ U^{\op}$-action then the associative algebra $B\otimes_F C$ has a right $U^{\op}$-action given by $(b\otimes c)^u:=b\otimes c^u$ for $u\in U^{\op}$, $b\in B$, $c\in C$, while if $\delta$ is a derivation of $C$ then $1\otimes\delta$ is a derivation of $B\otimes_F C$. Therefore, if $C$ is an $(L,U)$-algebra then $B\otimes_F C$ is an $(L,U)$-algebra with $L$-action given by $(b\otimes c)^u:=b\otimes c^u$ for $u\in\Uop$, $b\in B$, $c\in C$. The class of $(L,U)$-algebras is a variety that contains $A$, denoted by $\textsc V^{L,U}$. Ideals of $(L,U)$-algebras are closed under the $ U^{\op}$-action (equivalently, the $L$-action), and homomorphisms $f:B\rightarrow C$ between $(L,U)$-algebras $B,C$ must satisfy $f(b^u)=f(b)^u$ for $b\in B$, $u\in U^{\op}$ (equivalently, $u\in\Uop$). 
The variety of $(L,U)$-algebras contains the free (nonunital associative) $(L,U)$-algebra $\FLU$ freely generated by $X$, which is isomorphic to the quotient $\FL/\ILU$ where \[\ILU:=\langle\{f^z \ | \ f\in\FL, z\in\ker\phi\}\rangle, \tag{I}\label{formula(I)}\] which is a $T_L$-ideal. The structure of $\ILU$ is strongly dependent on the algebraic structure of $U$. For example, if $d:=\phi(\delta)$ with $\delta\in L$ satisfies $d^2=0$, then $x^{d^2}=x^0=0$ for any $x\in\FLU$, and so \[2x_1^dx_2^d = x_1^{d^2}x_2 +2x_1^dx_2^d +x_1x_2^{d^2} = (x_1x_2)^{d^2} = 0\] in $\FLU$. We refer to the elements of $\FLU$ as $(L,U)$-polynomials. The $T_{L,U}$-ideals of $\FLU$ are defined analogously to the $T_L$-ideals of $\FL$, with $\la S\ra_{T_{L,U}}$ denoting the smallest $T_{L,U}$-ideal containing the set $S$. Observe that the free $(L,U)$-algebra is generated as an associative algebra by the set $\{x_i^u \ | \ i\in \N^*, u\in\mathcal U\}$ for a basis $\mathcal U$ of $U$, albeit not freely, and the construction of a well-behaved basis of $\FLU$ may prove challenging. To circumvent this issue we introduce $\FU$, an algebra which is freer than $\FLU$ and whose linear structure more closely parallels that of $\FL$; it is an $L$-algebra but has no specified $U$-action. We call $\FU$ the free algebra with $U$-exponents and define it as follows: $\FU$ is generated as an algebra by the set $\{x_i^u \ | \ i\in \N^*, u\in U^{\op}\}$, subject to the relations $x_i^1=x_i$, $x_i^{\lambda u+v}=\lambda x_i^u + x_i^v$, $(\lambda x_i+x_j)^u =\lambda x_i^u+x_j^u$ for all $u,v\in U^{\op}$, $\lambda\in F$ and $x_i,x_j\in X$. Note that given a basis $\mathcal U:=\{u_1,\ldots,u_N\}$ of $U$, $\FU$ is freely generated as an associative algebra by $\{x_i^{u_j} \ | \ i\in \N^*, u_j\in \mathcal U\}$ (so this is the same algebra considered by Gordienko in <cit.>). 
Moreover, the set \[\{x_{i_1}^{u_{j_1}}\cdots x_{i_n}^{u_{j_n}} \ | \ n\in\N^*, i_1,\ldots,i_n\in \N^*, u_{j_1},\ldots,u_{j_n}\in\mathcal U\}\] is a basis of $\FU$. In addition, setting the subspaces $M_n^U:= \spn\{x_{i_1}^{u_{j_1}}\cdots x_{i_n}^{u_{j_n}} \ | \ i_1,\ldots,i_n\geq1, \ u_{j_1},\ldots,u_{j_n}\in\mathcal U\}$ of $U$-monomials of degree $n$ for $n\in\N^*$, we get the grading $\FU=\bigoplus_{n\in \N^*} M_n^U$. The $L$-action on $\FU$ is determined by \[(x_{i_1}^{u_{j_1}}\cdots x_{i_n}^{u_{j_n}})^v := \sum x_{i_1}^{u_{j_1}\phi(v_1)}\cdots x_{i_n}^{u_{j_n}\phi(v_n)}\] for $u_{j_1},\ldots,u_{j_n}\in\mathcal U$ and $v\in\Uop$ with $\Delta_{n-1}(v)=\sum v_1\otimes\cdots\otimes v_n$. In addition, the operation $(x_i^u)^v:=x_i^{uv}$ for $u,v\in U^{\op}$ is well defined, which allows defining the $L$-endomorphisms of $\FU$ which we call substitutions swapping variables, that map $x_i\mapsto x_{\sigma(i)}^{u_i}$ for $i\in I$ and $x_j\mapsto x_j$ for $j\not\in I$, for given $I:=\{i_1,\ldots,i_n\}$, $\sigma\in S_n$ acting on $I$ and $u_i\in U^{\op}$ for $i\in I$. Although it has no specified $U$-action, and we are not considering any variety of $U$-algebras which would contain it as a free algebra, the algebra $\FU$ satisfies the following universal property: each map $\gamma:X\rightarrow B$ to an algebra $B$ with right $ U^{\op}$-action can be uniquely extended to a homomorphism of associative algebras $\gamma:\FU\rightarrow B$ such that $\gamma(x_i^u)=\gamma(x_i)^u$ for all $x_i\in X$ and $u\in U^{\op}$. More importantly, although $\FU$ is not an $(L,U)$-algebra, it satisfies $x_i^u=x_i^{\phi(u)}$ for all $u\in\Uop$ and $x_i\in X$. 
Therefore $\FU$ also satisfies the following universal property: each map $\gamma:X\rightarrow B$ to an $(L,U)$-algebra $B$ can be uniquely extended to an $L$-homomorphism $\gamma:\FU\rightarrow B$ such that $\gamma(f^u)=\gamma(f)^{\phi(u)}$ for all $f\in\FU$ and $u\in\Uop$, which we call an evaluation of $\FU$ at elements $\gamma(x_1),\gamma(x_2)\ldots$ from $B$. We refer to the elements of $\FU$ as $U$-polynomials. A $T_U$-ideal of $\FU$ is an $L$-ideal (so, invariant under the Hopf $ \Uop$-action) which in addition is invariant under all $L$-endomorphisms of $\FU$; in particular, under all linear substitutions of the form $x_j\mapsto \sum_{i\in I} \alpha_i x_i$ with fixed $j$, finite $I\subseteq\N^*$ and $\alpha_i\in F$ for $i\in I$, and all the substitutions swapping variables. Note that not every substitution of the variables by $U$-polynomials is valid, as not all are $L$-endomorphisms of $\FU$, with this phenomenon depending on the algebraic structure of $U$: e.g., if there is $\delta\in L$ such that $0\neq d:=\phi(\delta)$ satisfies $d^2=0$ then the substitution $\vp$ mapping $x_1\mapsto x_1x_2$ is not an $L$-endomorphism, as $\vp(x_1^{\delta^2}) = \vp(x_1^{d^2}) = 0 \neq 2x_1^dx_2^d = \vp(x_1)^{\delta^2}$. The $T_U$-ideal generated by $f_1,\ldots,f_m\in\FU$ is the set of $U$-polynomials of the form \[\sum_{j=1}^r g_j f_{i_j}^{u_j}(p_{j1},\ldots,p_{jk_j}) h_j = \sum_{j=1}^r g_j f_{i_j}(p_{j1},\ldots,p_{jk_j})^{u_j} h_j\tag{TU}\label{formula(TU)}\] with $r\in\N^*$, $i_j\in\{1,\ldots,m\}$ (where we may have $i_j=i_k$ for $j\neq k$), $u_j\in\Uop$, $g_j,h_j\in\FU$ or $g_j=1$ or $h_j=1$, and $\vp_j(f):=f(p_{j1},\ldots,p_{jk_j})$ with $p_{j1},\ldots,p_{jk_j}\in\FU$ being an $L$-endomorphism of $\FU$ which maps $x_{t_i}\mapsto p_{ji}$ for some $x_{t_i}\in X$ and $1\leq i\leq k_j$. 
When referring to elements of a $T_U$-ideal in at most two variables we write them with “generic” variables $x,y$; since $T_U$-ideals are closed under substitutions swapping variables, $x,y$ may be replaced by any variables $x_i^u,x_j^v$ with $x_i,x_j\in X$ and $u,v\in U^{\op}$. Given a set $S\subseteq\FU$, by $\la S\ra_{T_U}$ we denote the smallest $T_U$-ideal containing $S$. [Action of $L$ on $\FU$] * The free algebra with $U$-exponents is not an $(L,U)$-algebra in general: due to their respective universal properties with respect to $(L,U)$-algebras, if both $\FLU$ and $\FU$ were $(L,U)$-algebras, then they would be isomorphic as $(L,U)$-algebras (see <cit.>), in particular as $L$-algebras, which they are not in general. * Since $\FU$ is not an $(L,U)$-algebra, attention must be paid to the application of the $L$-action: exponents must be in $\Uop$ in general, and can only be taken from $U^{\op}$ when applied directly on “isolated” variables $x_i$. An expression like $(x_1\cdots x_n)^u$ with $n>1$ and $u\in U^{\op}$ makes no sense in $\FU$, and $(x_1\cdots x_n)^u$ with $u\in\Uop$ expands to $\sum x_1^{v_1}\cdots x_n^{v_n}$ for $\Delta_{n-1}(u)=\sum u_1\otimes\cdots\otimes u_n$ and $v_i:=\phi(u_i)$ for $i\in\{1,\ldots,n\}$. * The Hopf $U(L)$-action on $\FU$ does not necessarily produce a $U$-action on $\FU$ via $x^{\phi(u)}:=x^u$ for $x\in\FU$, $u\in\Uop$, as we may have $u,v\in\Uop$ generating different actions on $\FU$ and such that $\phi(u)=\phi(v)$. E.g., if $\delta\in L$ satisfies $\phi(\delta^2)=0=\phi(0)$ but $\phi(\delta)\neq0$, then \[(x_1x_2)^{\delta^2} = 2x_1^{\phi(\delta)}x_2^{\phi(\delta)} \neq 0 = (x_1x_2)^0\] (recall that we have actually designed $\FU$ for this to happen). §.§ Identities and growth $L$-identities, $L$-codimensions and $L$-exponent. 
A differential polynomial $f(x_1, \dots , x_n)\in F\langle X| L\rangle$ is a differential identity or $L$-identity of the $L$-algebra $B$ if $f(b_{1},\dots,b_{n})=0$ for any $b_1,\ldots,b_n\in B$ ($f$ vanishes under all evaluations of $\FL$ at elements from $B$). We denote by $\I^L(B)$ the set of differential identities of $B$, which is a $T_L$-ideal of the free $L$-algebra (in particular $\I^L(B)$ is closed under the Hopf $\Uop$-action and under substitutions). Note that $\I^L(B)$ is the intersection of all kernels of evaluations of $\FL$ from $B$. For $n\geq 1$ we denote by $P_n^L$ the vector space of multilinear differential polynomials in the variables $x_{1},\dots,x_{n}$, so that \[P_n^L:=\spn_F\{x_{\sigma(1)}^{e_{i_1}}\cdots x_{\sigma(n)}^{e_{i_n}} \ | \ \sigma\in S_{n} ,e_{i_1},\ldots,e_{i_n}\in \mathcal{B} \},\] where $S_n$ denotes the symmetric group acting on $\{1,\ldots,n\}$. As in the ordinary case, since $F$ has characteristic zero, a Vandermonde plus linearization argument shows that the $T_L$-ideal $\I^L(B)$ is completely determined by its multilinear $L$-polynomials (see <cit.>). We also consider the vector space \[P_n^L(B):= \dfrac{P_n^L}{P_n^L \cap \I^L(B)}.\] When the action of $\Uop$ is finite dimensional, i.e., when $U$ is a finite-dimensional algebra, the $n$th differential codimension of $B$ is $c_n^L(B):=\dim_F P_n^L(B)$. Moreover, if $B$ is finite dimensional then the limit $\exp^L(B):=\lim_{n\to \infty}\sqrt[n]{c_n^L(B)}$ exists and is a nonnegative integer called the $L$-exponent of $B$ ([31]). Growth of $L$-varieties. Given a variety $\textsc V$ of $L$-algebras, the growth of $\textsc V$ is defined as the growth of the sequence of differential codimensions of any $L$-algebra $B$ generating $\textsc V$, i.e., $\textsc V=\V^L(B)$. In this case we set $c_n^L(\textsc V):=c_n^L(B)$, $n\geq 1$, and $\exp^L(\textsc V):=\exp^L(B)$. 
Then we say that $\textsc V$ has polynomial growth if there exist $C,t>0$ such that $c_n^L(\textsc V) \leq C n^t$, i.e., $\exp^L(\textsc V)\leq 1$, and that $\textsc V$ has almost polynomial growth if $c_n^L(\textsc V)$ is not polynomially bounded, i.e., $\exp^L(\textsc V)>1$, but every proper subvariety of $\textsc V$ has polynomial growth. Analogues for $U$-algebras and $(L,U)$-algebras. Mutatis mutandis, for $B$ an associative algebra with right $ U^{\op}$-action (resp. an $(L,U)$-algebra), inside $\FU$ (resp. $\FLU$) we define the $U$-identities (resp. $(L,U)$-identities) of $B$, the $T_U$-ideal $\I^U(B)$ closed under the Hopf $\Uop$-action and the valid substitutions (resp. the $T_{L,U}$-ideal $\I^{L,U}(B)$ closed under the Hopf $\Uop$-action and substitutions), the vector space of multilinear $U$-polynomials $P_n^U$ (resp. of multilinear $(L,U$)-polynomials $P_n^{L,U}$), the quotient $P_n^U(B)$ (resp. $P_n^{L,U}(B)$), the $n$th $U$-codimension $c_n^U(B)$ (resp. the $(L,U)$-codimension $c_n^{L,U}(B)$) when $U$ is finite dimensional. If $\exp^U(B):=\lim_{n\to\infty}\sqrt[n]{c_n^U(B)}$ (resp. $\exp^{L,U}(B):=\lim_{n\to\infty}\sqrt[n]{c_n^{L,U}(B)}$) exists, we call it the $U$-exponent (resp. $(L,U)$-exponent) of $B$; see Lemma <ref> below. Similarly, for a variety of $(L,U)$-algebras we define the growth and the notions of polynomial growth and almost polynomial growth. §.§ Computing $L$-data through $U$-data In this section we relate $L$-identities to $U$-identities, $L$-varieties to $(L,U)$-varieties, and $L$-cocharacters to $U$-cocharacters. Since $\FU$ is an $L$-algebra and $\FL$ is the free $L$-algebra, we can consider the $L$-homomorphism $\Psi:\FL\to\FU$ sending $x_i\mapsto x_i$, which is defined by $\Psi(x_i^u):=x_i^{\phi(u)}$ for all $x_i\in X$ and $u\in\Uop$. We denote $\IU:=\ker\Psi$. Analogously we have the $L$-homomorphism $\Theta:\FL\to\FLU$ sending $x_i\mapsto x_i$, defined by $\Theta(x_i^u):=x_i^{\phi(u)}$, which satisfies $\ker\Theta=\ILU$. 
We have $\IU\subseteq\ILU$. In addition, since $\FLU$ is an $(L,U)$-algebra, we have the $L$-homomorphism $\Gamma:\FU\to\FLU$ sending $x_i\mapsto x_i$, which satisfies $\Theta=\Gamma\circ\Psi$. Let $B$ be any $(L,U)$-algebra. By definition $\Psi(\I^L(B))=\I^U(B)$, $\IU\subseteq \I^L(B)$, and $\Psi(P_n^L)=P_n^U$. Hence by the isomorphism theorems we get $\dfrac{\I^L(B)}{\IU} \cong \I^U(B)$, $\dfrac{\FL}{\I^L(B)} \cong \dfrac{\FU}{\I^U(B)}$ (as $L$-algebras) and $P_n^L(B)\cong P_n^U(B)$ (as vector spaces). We get analogous results for $\FLU$ from $\Theta$, proving the next elementary lemma. Let $B$ be an $(L,U)$-algebra. * Let $G=\{g_i\}_{i\in\mathcal I}$ be a system of generators of $\I^U(B)$ (resp. $\I^{L,U}(B)$) as a $T_U$-ideal (resp. $T_{L,U}$-ideal), and for each $g_i\in G$ pick a fixed preimage $f_i\in\Psi^{-1}(g_i)$ (resp. $f_i\in\Theta^{-1}(g_i)$). Let $F:=\{f_i\}_{i\in\mathcal I}$. Then $\I^L(B)=\la F\ra_{T_L}+\IU$ (resp. $\I^L(B)=\la F\ra_{T_L}+\ILU$). * If $U$ is finite dimensional, then $c_n^L(B)=c_n^U(B)=c_n^{L,U}(B)$ for all $n\geq 1$. Moreover, if $B$ is finite dimensional, then $\exp^U(B)$, $\exp^{L,U}(B)$ exist and $\exp^L(B)=\exp^U(B)=\exp^{L,U}(B)$. Although $\ILU$ is a $T_L$-ideal, $\IU$ is not a $T_L$-ideal in general: it may not be invariant under the substitution $x_1\mapsto x_1x_2$, $x_i\mapsto x_i$ for $i\neq1$, as $x_1^z\in\IU$ for $z\in\ker\phi$ but $(x_1x_2)^z$ may not belong to $\IU$. E.g., $z:=\delta^2$ with $\delta\in L$, $\delta\not\in\ker\phi$ and $\phi(\delta)^2=0$ satisfies $x_1^z\in\IU$ and $(x_1x_2)^z = x_1^zx_2 + 2x_1^\delta x_2^\delta + x_1x_2^z\not\in\IU$ since $x_1^zx_2, x_1x_2^z\in\IU$ but $x_1^\delta x_2^\delta\not\in\IU$. Nevertheless, $\IU$ is invariant under substitutions swapping variables (since $\FU\cong\FL/\IU$). Therefore, since $\IU\subseteq\I^L(A)$, which is a $T_L$-ideal, the $T_L$-ideal $\langle\IU\rangle_{T_L}$ may contain some interesting $L$-identities of $A$, obtained from substitutions in elements of $\IU$. 
More concretely, we have: * $\IU$ is generated as an associative algebra ideal by the set $\{x_i^z \ | \ x_i\in X, z\in\ker\phi\}$, and as an ideal with substitutions swapping variables by the identities $x^z$ for $z\in\ker\phi$. * $\langle\IU\rangle_{T_L} = \ILU = \displaystyle\bigcap_{B\in\textsc V^{L,U}}\I^L(B)$. * Since the restriction $\phi: U(L)\rightarrow U$ is an epimorphism, we can write $U(L)=V\oplus\ker\phi$ for some $V$ such that the restriction $\phi:V\rightarrow U$ is an isomorphism of vector spaces. Let $\mathcal V$, $\mathcal K$ be bases of $V$ and $\ker\phi$, respectively, and set $\mathcal B:=\mathcal V\cup\mathcal K$, which is a basis of $\Uop$. Then $\mathcal F_L:=\{x_{i_1}^{e_{j_1}}\cdots x_{i_n}^{e_{j_n}} \ | \ n\in \N^*, i_1,\ldots,i_n\in \N^*, e_{j_1},\ldots,e_{j_n}\in\mathcal B\}$ is a basis of $\FL$, and we can write $\mathcal F_L=M^V\cup M^K$, where $M^V$ is the set of monomials whose variables have all its exponents in $V$ and $M^K$ is the set of monomials which have at least one variable with exponent in $\ker\phi$. Hence for $f\in\FL$ we can write $f=\sum_{i\in\mathcal I}\alpha^V_i m^V_i +\sum_{j\in\mathcal J}\alpha^K_j m^K_j$, where $\mathcal I,\mathcal J$ are finite sets, $\alpha^V_i,\alpha^K_j\in F$ for all $i\in\mathcal I, j\in\mathcal J$, $m^V_i\in M^V$ for $i\in\mathcal I$ and $m^K_j\in M^K$ for $j\in\mathcal J$. On the other hand, $\mathcal F_U:=\left\{x_{i_1}^{u_{j_1}}\cdots x_{i_n}^{u_{j_n}} \ | \ n\in \N^*, i_1,\ldots,i_n\in \N^*, u_{j_1},\ldots,u_{j_n}\in\mathcal U\right\}$ is a basis of $\FU$ such that $\Psi(M^V)=\mathcal F_U$. Then, since $\Psi(M^K)=0$, $\Psi(f)= \sum_{i\in\mathcal I}\alpha^V_i\Psi(m^V_i)$ is a linear combination of monomials from the basis $\mathcal F_U$ and thus $\Psi(f)=0$ implies $\alpha^V_i=0$ for all $i\in\mathcal I$, i.e., $f\in\spn M^K$ as we wanted to show. Since $\IU$ is invariant under substitutions swapping variables, the second claim of this item follows. 
* By definition, $\ILU=\langle\{x^z \ | \ z\in\ker\phi\}\rangle_{T_L}$, and clearly $\langle\{x^z \ | \ z\in\ker\phi\}\rangle_{T_L} = \langle\{x_i^z \ | \ x_i\in X, z\in\ker\phi\}\rangle_{T_L} = \langle\IU\rangle_{T_L}$ by item (1). On the other hand, given $f\in\ILU$ in $n$ variables we can write $f=\sum_{i\in\mathcal I} h_if_i^{z_i}g_i$ with $h_i,f_i,g_i\in\FL$ and $z_i\in\ker\phi$ for a finite set $\mathcal I$. Then, for any $(L,U)$-algebra $B$ and any $b_1,\ldots,b_n\in B$ we have $f(b_1,\ldots,b_n)= \sum_{i\in\mathcal I} h_i(b_1,\ldots,b_n)(f_i(b_1,\ldots,b_n))^{\phi(z_i)}g_i(b_1,\ldots,b_n) = 0$ since $\phi(z_i)=0$, which implies $\ILU\subseteq\I^L(B)$. Moreover, we have $\I^L(\FLU) = \ILU$ because $\FLU$ is the free $(L,U)$-algebra. Thus we get \[\ILU\subseteq\bigcap_{\mathclap{B\in\textsc V^{L,U}}}\; \I^L(B)\subseteq\I^L(\FLU) = \ILU.\qedhere\] \[\xymatrix{\FL \ar[r]^-\Psi\ar@(dr,dl)[rr]_-\Theta & \FU\cong \FL/\IU \ar[r]^-\Gamma & \FLU\cong \FL/\ILU}\] \[\ILU = \langle\IU\rangle_{T_L} = \langle\{f^z \ | \ f\in\FL, z\in\ker\phi\}\rangle = \displaystyle\bigcap_{B\in\textsc V^{L,U}}\I^L(B)\] Relationships between the different free algebras From the previous results we derive the following general strategy for computing the differential identities of $A$: * We determine its $U$-identities by exploiting the structure of the finite-dimensional algebra $ U\subseteq\End_F(A)$ and the good behavior of $\FU$. We find a system of generators of $\I^U(A)$ (which gives also a system of generators of $\I^{L,U}(A)$) and reduce it to a system $G$ by resorting to the $L$-action. We consider the system $F$ of some fixed preimages of $G$ in $\I^L(A)$. * We determine a system of generators $Z$ of the ideal $\ker\phi$, with the aid of the representation theory of $L$ applied to $U(L)$ and of the algebraic geometry of $U(L)$. Then $\ILU$ is generated by $K:=\{x^z, z\in Z\}$ by substitutions and the $L$-action, as $x^{uzv}=((x^u)^z)^v$ for $u,v\in\Uop$, $z\in\ker\phi$. 
* We check if any element of $F$ is generated by the others plus $\la\IU\ra_{T_L}=\ILU$. If so, we remove it and check again. * We find the (small) system of generators $F\cup K$ of $\I^L(A)$. Let $B$ be an $(L,U)$-algebra. We not only have $\Psi(\I^L(B))=\I^U(B)$, but also $\Psi^{-1}(\I^U(B))=\I^L(B)$, and similarly $\Theta^{-1}(\I^{L,U}(B))=\I^L(B)$ and $\Gamma^{-1}(\I^{L,U}(B))=\I^U(B)$, since $f(b_1,\ldots,b_n)=\Psi(f)(b_1,\ldots,b_n)=\Theta(f)(b_1,\ldots,b_n)$ for all $f\in\FL$ and all $b_1,\ldots,b_n\in B$. In particular, $f\in \FL$ satisfies $f\in\I^L(B)$ if and only if $\Psi(f)\in\I^U(B)$ if and only if $\Theta(f)\in \I^{L,U}(B)$, and $f\in\FU$ satisfies $f\in\I^U(B)$ if and only if $\Gamma(f)\in\I^{L,U}(B)$. Let $B$ be an $(L,U)$-algebra and $C\in\V^L(B)$. * $C$ is an $(L,U)$-algebra such that $C\in\V^{L,U}(B)$. * $\V^L(B)$ has almost polynomial growth if and only if $\V^{L,U}(B)$ has almost polynomial growth. * $\I^U(B) \subseteq \I^U(C)$, and $\V^{L,U}(C)$ is a proper subvariety of $\V^{L,U}(B)$ if and only if there exists a $U$-polynomial $f\in\I^U(C)\setminus\I^U(B)$. * Since $B$ is an $(L,U)$-algebra, $\ILU\subseteq\I^L(B)$ by Proposition <ref>(2), and since $C\in\V^L(B)$, $\I^L(B)\subseteq \I^L(C)$. Therefore $\ILU\subseteq\I^L(C)$, whence $x^z\in\I^L(C)$ for all $z\in\ker\phi$, so the right $ U^{\op}$-action $c^u:=c^v$ is well defined for any $c\in C$, $u\in U$ and $v\in\phi^{-1}(u)$ ($\phi(v_1)=\phi(v_2)$ implies $v_1-v_2\in\ker\phi$, so $c^{v_1}=c^{v_2}$), and is clearly an $(L,U)$-action. In addition, $\I^{L,U}(B)=\Theta(\I^L(B))\subseteq\Theta(\I^L(C))=\I^{L,U}(C)$, hence $C\in\V^{L,U}(B)$. * By item (1) and the fact that every $(L,U)$-algebra is an $L$-algebra we get $\V^L(B)=\V^{L,U}(B)$ as sets, and $c_n^L(C)=c_n^{L,U}(C)$ for every $C\in\V^{L,U}(B)$ by Lemma <ref>(2), which in particular implies that $C\in\V^L(B)$ generates a proper $L$-subvariety if and only if it generates a proper $(L,U)$-subvariety of $\V^{L,U}(B)$. 
* $\I^{L,U}(B) \subseteq \I^{L,U}(C)$ by item (1), so $\I^{U}(B) \subseteq \I^{U}(C) $ by Remark <ref>. Moreover, $\V^{L,U}(C)$ is a proper subvariety of $\V^{L,U}(B)$ if and only if there exists $g\in \I^{L,U}(C)\setminus \I^{L,U}(B)$, if and only if there exists $f\in\Gamma^{-1}(g)$ such that $f\in\I^U(C)\setminus \I^U(B)$ by Remark <ref>. Therefore we can study the growth of $\V^L(A)$ and its subvarieties by considering $(L,U)$-algebras, $U$-polynomials, and $U$-codimensions. § MATRIX SETTING §.§ Derivations of $\M$ In this section, we describe the enveloping algebra $ U$ of the Lie algebra of derivations of $\M$ for $k\geq 2$. Let $Z_k(F)$ denote the center of $\M$ (i.e., the scalar multiples of the identity matrix $I_k$) and let $\Sl_k(F)$ denote the special Lie algebra of order $k$, that is, the set of traceless matrices inside $\M$ endowed with the bracket product. $\bs{\Der(\M)}$ is isomorphic to $\bs{\SL}$. As a consequence of the Noether-Skolem theorem, all derivations of $\M$ are inner (<cit.>), so $\ad:\M\rightarrow\Der(\M)$ is a surjective linear map between vector spaces. In addition $\ad(A)=\ad(B)$ if and only if $A-B\in Z_k(F)$, so $\ad:\M/Z_k(F)\rightarrow\Der(\M)$ is a linear isomorphism, which moreover satisfies $\ad([A,B])=[\ad_A,\ad_B]$, giving an isomorphism of Lie algebras between $\Der(\M)$ and $\M/Z_k(F)$. On the other hand, since $\chr(F)=0$ we have $\M= Z_k(F)\oplus\SL$ (direct sum of Lie ideals) and hence $\M/Z_k(F)\cong\SL$ as Lie algebras in a natural way. From now on we identify $\Der(\M)$ with $\SL$ as the inner derivations arising from $\SL\subseteq\M$, and we fix $L:=\SL$ for the rest of this paper. Observe that $L$ is split simple (<cit.>). Structure of $\bs{U}$. 
From the exposition of the previous paragraph, we infer that the $L$-action of $\SL$ on $\M= Z_k(F)\oplus\SL$ is the direct sum of the trivial action $\rho_0$ on the center and the adjoint action $\Ad$ on $\SL$, whence the image $U=\phi(U(L))$ of the left representation $\phi$ of $U(L)$ on $\M$ is the direct sum $U=U_1\oplus U_2\subseteq\E_F(\M)$ with $U_1=\E_F(F\cdot I_k)\cong F$ and $U_2$ the enveloping algebra of the adjoint action. Since $\Ad$ is finite dimensional and irreducible (because subrepresentations of $\Ad$ correspond to ideals of $L$, which is simple), by Theorem <ref> we have $U_2=\E_F(\SL)$. Therefore \[U =\E_F(F\cdot I_k)\oplus\E_F(\Sl_k(F)).\] In particular, $U$ is a split-semisimple algebra of dimension $(k^2-1)^2+1$. §.§ Explicit description of $U^{\op}$ In this section and the next we describe how to operate with exponents coming from $U^{\op}$. We denote the product of $U^{\op}$ by juxtaposition. Basis of $\M$. Let $\{e_{ij}\}_{i,j=1}^k$ be the standard matrix units of $\M$ (with 1 as $(i,j)$ entry and $0$ elsewhere) and denote $h_i:=e_{ii}-e_{i+1,i+1}$ for $1\leq i<k$. Then a basis of $\SL$ is \[\mathcal{S}:=\{e_{ij}|1\leq i\neq j\leq k\}\cup\{h_1,\ldots,h_{k-1}\},\] which we expand to a basis $\mathcal{M}$ of $\M=\SL\oplus Z_k(F)$ by appending $g:=I_k$, \[\mathcal M:=\mathcal S\cup\{g\}.\] We will also refer to elements $h_{ij}:=e_{ii}-e_{jj}$ for $i,j\in\{1,\ldots,k\}$, $i\neq j$ (thus $h_i=h_{ii+1}$ for $1\leq i<k$). We have $h_{ij}=-h_{ji}$. Let us write $(-1)^{i>j}:=1$ if $i\leq j$ and $(-1)^{i>j}:=-1$ if $i>j$. Then in basis $\mathcal S$ we have \[h_{ij}=(-1)^{i>j}\sum_{l=\min(i,j)}^{\max(i,j)-1} h_l.\] Basis of $U^{\op}$. Write $x\in\Mat_k(F)$ in basis $\mathcal M$ as $x=\sum_{a\in\mathcal M}\mu_a^x a$, i.e., $\mu_a^x$ denotes the coefficient of $x$ with respect to $a\in\mathcal M$. 
Then, given $a,b\in\mathcal M$ define $\varphi_{ab}\in\E_F(\M)^{\op}$ by \[\varphi_{ab}(x):=\mu_a^x b,\] i.e., $\varphi_{ab}$ is the endomorphism sending basis element $a$ to basis element $b$ and the remaining basis elements to $0$. For example, if $x=e_{12}+2h_1+3h_2\in\Mat_5(F)$ then $x^{\varphi_{h_1e_{23}}}=2e_{23}$, $x^{\varphi_{h_3e_{23}}}=0$ and \[x^{\varphi_{h_1e_{23}}\varphi_{e_{23}h_4}}=(x^{\varphi_{h_1e_{23}}})^{\varphi_{e_{23}h_4}} = (2e_{23})^{\varphi_{e_{23}h_4}} = 2h_4.\] We also define endomorphisms $\vp_{ab}$ for any $a\in\mathcal S$ and $b\in\SL$ by linearity. In particular, for elements $h_{ij}$ and $a\in\mathcal S$, we define \[\vp_{ah_{ij}}:=(-1)^{i>j}\sum_{l=\min(i,j)}^{\max(i,j)-1} \vp_{ah_l}.\] Note that, in $\E_F(\M)^{\op}$, for $a,b,c,d\in\mathcal M$ we have \[\vp_{ab}\vp_{cd}=\delta_{bc}\vp_{ad} \tag{F}\label{formula(F)}\] where $\delta_{bc}$ is Kronecker's delta, since $(x^{\vp_{ab}})^{\vp_{cd}}=(\mu_a^x b)^{\vp_{cd}} = \delta_{bc}\mu_a^x d = \delta_{bc}x^{\vp_{ad}}$ for $x\in\M$. In fact, $\{\varphi_{ab}\}_{a,b\in\mathcal M}$ is nothing else than the standard set of matrix units of $\E _F(\M)^{\op}\cong\Mat_{k^2}(F)$ when basis $\mathcal M$ is fixed for $\M$. With this presentation, $U_2^{\op}\cong\Mat_{k^2-1}(F)$ has $\{\varphi_{ab}\}_{a,b\in\mathcal S}$ as a basis and $U_1^{\op}\cong F$ corresponds to the subspace of endomorphisms spanned by $\varphi_{gg}$. This is the presentation we will use in the following; hence from now on we fix the basis of $U^{\op}$ \[\mathcal U:=\{\vp_{ab}\}_{a,b\in\mathcal S}\cup\{\vp_{gg}\}.\] Notice that $I_{k^2}=\sum_{a\in\mathcal M} \varphi_{aa}$, so in this way we avoid the explicit use of the identity endomorphism and thus the participation of the problematic ordinary polynomial identities. 
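As a numerical sanity check (not part of the paper's argument), the endomorphisms $\varphi_{ab}$, Formula (F) and the worked example above can be reproduced with a short Python sketch; the helper names (`phi`, `act`, `idx_e`, `idx_h`) are ours:

```python
import numpy as np

def e(k, i, j):
    """Matrix unit e_ij of Mat_k (1-based indices)."""
    m = np.zeros((k, k))
    m[i - 1, j - 1] = 1.0
    return m

def h(k, i):
    return e(k, i, i) - e(k, i + 1, i + 1)

def basis(k):
    """Ordered basis M: the e_ij (i != j), then h_1,...,h_{k-1}, then g = I_k."""
    B = [e(k, i, j) for i in range(1, k + 1) for j in range(1, k + 1) if i != j]
    return B + [h(k, i) for i in range(1, k)] + [np.eye(k)]

def idx_e(k, i, j):
    """Position of e_ij in basis(k)."""
    return (i - 1) * (k - 1) + (j - 2 if j > i else j - 1)

def idx_h(k, i):
    """Position of h_i in basis(k)."""
    return k * (k - 1) + i - 1

def phi(k, a, b):
    """phi_ab as a k^2 x k^2 matrix on vectorized Mat_k: x -> mu_a^x * (basis element b)."""
    B = basis(k)
    A = np.column_stack([m.reshape(-1) for m in B])
    mu = np.linalg.inv(A)  # row a of mu extracts the coordinate mu_a^x
    return np.outer(B[b].reshape(-1), mu[a])

def act(op, x):
    """Apply a vectorized endomorphism: x^op."""
    k = x.shape[0]
    return (op @ x.reshape(-1)).reshape(k, k)
```

Composition is written in exponent order, so $x^{\varphi_{ab}\varphi_{cd}}=(x^{\varphi_{ab}})^{\varphi_{cd}}$ corresponds to the matrix product `phi(k, c, d) @ phi(k, a, b)`.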
To prevent the notation from becoming too cumbersome, throughout the rest of the paper we will omit the letters $\varphi$ from the exponents when applying endomorphisms of $U^{\op}$ to $\M$ or writing $U$- or $(L,U$)-polynomials; so, for example, $x^{h_1h_2}$ is shorthand for $x^{\varphi_{h_1h_2}}$. This notation of the form $x^{ab}$ with $a,b\in\SL$ for polynomial $x^{\vp_{ab}}$ in $\FU$ or in $\FLU$ should never be confused with notation for $L$-polynomial $(x^a)^b$ in $\FL$ with $a,b\in L$; we will never write $L$-polynomials in the latter way. §.§ Computations involving $U^{\op}$ Multiplication in $\SL$. The Lie multiplication table of $\mathcal M$ is summarized by the following relations: * $[g,x]=0$ for any $x\in\mathcal M$. * $[e_{ij},e_{kl}]=0$ ($j\neq k, l\neq i$), $[e_{ij},e_{jk}]=e_{ik}$ ($k\neq i$), $[e_{ij},e_{ji}]=h_{ij}$. * $[h_i,e_{ij}] = e_{ij}$ ($j\neq i,i+1$), $[h_{i-1},e_{ij}] =-e_{ij}$ ($j\neq i-1,i$), $[h_{ij},e_{ij}]=2e_{ij}$. * $[h_i,h_j]=0$. Computations involving inner derivations. Recall that for $c\in\SL$ we write $C:=\ad_{c}\in U^{\op}$. Among the elements in $U^{\op}$ we find the inner derivations $E_{ij}$ generated by the elements $e_{ij}$ ($i\neq j$), which will play a special role in some results. Denote $\vp_{h_0a}:=0, \vp_{h_ka}:=0$ for $a\in\mathcal S$. 
Then we can write $E_{ij}$ in basis $\mathcal U$ as \[E_{ij}=\sum_{l\neq i,j}\vp_{e_{jl}e_{il}} -\sum_{l\neq i,j}\vp_{e_{li}e_{lj}} + \vp_{e_{ji}h_{ij}}+\vp_{h_{i-1}e_{ij}}-\vp_{h_{i}e_{ij}}-\vp_{h_{j-1}e_{ij}}+\vp_{h_{j}e_{ij}}.\tag{E}\label{formula(E)}\] By Formulas (<ref>) and (<ref>), the product of two of these inner derivations is given by \begin{align*} E_{ij}E_{rs} =& \boldsymbol{\delta_{is}}\left(\sum_{l\neq i,j,r}\vp_{e_{jl}e_{rl}} + \boldsymbol{(1-\delta_{jr})}(\vp_{e_{jr}h_{ri}}+\vp_{h_{i-1}e_{rj}}-\vp_{h_{i}e_{rj}}-\vp_{h_{j-1}e_{rj}}+\vp_{h_{j}e_{rj}})\right)+\\ +&\boldsymbol{\delta_{jr}}\left(\sum_{l\neq i,j,s}\vp_{e_{li}e_{ls}} - \boldsymbol{(1-\delta_{is})}(\vp_{e_{si}h_{js}}+\vp_{h_{i-1}e_{is}}-\vp_{h_{i}e_{is}}-\vp_{h_{j-1}e_{is}}+\vp_{h_{j}e_{is}})\right)+\\ +&\boldsymbol{\delta_{is}\delta_{jr}}\left(\vp_{h_{i-1}h_{ji}} -\vp_{h_{i}h_{ji}} -\vp_{h_{j-1}h_{ji}}+\vp_{h_{j}h_{ji}}\right)+\\ +&\boldsymbol{\delta_{ji+1}}\left(\boldsymbol{\delta_{ir-1}}\vp_{e_{i+1i}e_{i+1s}} -\boldsymbol{\delta_{ir}}\vp_{e_{i+1i}e_{is}} -\boldsymbol{\delta_{is-1}}\vp_{e_{i+1i}e_{ri+1}} +\boldsymbol{\delta_{is}}\vp_{e_{i+1i}e_{ri}}\right)+\\ \end{align*} In particular, a useful identity derived from (<ref>) is \[E_{ij}^2 = -2\vp_{e_{ji}e_{ij}}.\tag{E2}\label{formula(E2)}\] Observe that $e_{ij}^2=0$ ($i\neq j$) implies $E_{ij}^3=0$, since for all $x\in\M$, \[x^{E_{ij}^3}=[e_{ij},[e_{ij},[e_{ij},x]]] = e_{ij}^3x-3e_{ij}^2xe_{ij}+3e_{ij}xe_{ij}^2 -xe_{ij}^3=0.\] We will also make use of the important bracket formula \[\vp_{ab}C = \vp_{a[c,b]}\tag{B}\label{formula(B)}\] for any $c\in\SL$ and $a,b\in\mathcal S$ or $a=g=b$, which is true because for all $x\in\M$, \[x^{\vp_{ab}C} = (\mu_a^xb)^C = [c,\mu_a^x b] = \mu_a^x [c,b] = x^{\vp_{a[c,b]}}.\] In particular, for any left $U$-algebra $A$, $x\in A$, and $a\in\mathcal S$, we have \[(x^{ae_{ij}})^{-E_{ji}}=x^{ah_{ij}}, \,\, (x^{ah_{ij}})^{-\frac12E_{ij}}=x^{ae_{ij}}.\] The action of the power of a derivation on a product is 
given by Leibniz's rule: for $x,y\in A$, $c\in\SL$ and $p\in\N$, \[(xy)^{C^p} = \sum_{i=0}^p \binom pi x^{C^i}y^{C^{p-i}}.\] As an example, let us compute the action of the square of derivation $E_{ij}$ on a product by using Leibniz's rule, the bracket formula, and Formula (<ref>): \begin{align*} (x^{ab}y^{cd})^{E_{ij}^2} &= x^{(ab)E_{ij}^2}y^{cd} + 2x^{(ab)E_{ij}}y^{(cd)E_{ij}} + x^{ab}y^{(cd)E_{ij}^2} = -2x^{(ab)(e_{ji}e_{ij})}y^{cd} + 2x^{a[e_{ij},b]}y^{c[e_{ij},d]} -2x^{ab}y^{(cd)(e_{ji}e_{ij})} =\\ &= -2(\delta_{be_{ji}} x^{ae_{ij}}y^{cd} - x^{a[e_{ij},b]}y^{c[e_{ij},d]} + \delta_{de_{ji}}x^{ab}y^{ce_{ij}}). \end{align*} The action of a general composition of derivations on a product is given by Formula (<ref>). §.§ Explicit description of $\Uop$ In this section we describe how to operate with exponents coming from $\Uop$. We denote the product of $\Uop$ by juxtaposition. Recalling that $U^{\op}=\phi(U(L)^{\op})\cong U(L)^{\op}/\ker\phi$, fix a unique preimage $\vr_{ab}\in\phi^{-1}(\vp_{ab})$ for each $\vp_{ab}\in\mathcal U$. Then $U(L)^{\op}=V\oplus\ker\phi$, with \[\mathcal V:=\{\vr_{ab}\ | \ a,b\in\mathcal S\text{ or }a=g=b\}\] being a basis of $V$. We extend the notation $\vr_{ab}$ to any $a\in\mathcal S$, $b\in\SL$ by linearity. Preimages of the basis elements. We first show a valid assignment of the elements $\vr_{ab}\in U(L)^{\op}$, formed with polynomials of degree at most $6$ in the elements $e_{ij}\in U(L)^{\op}$. Let $c\cdot v$ denote the scalar product of vectors $c\in F^{k-1}$, $v \in (U(L)^{\op})^{k-1}$ ($c\cdot v:=c_1v_1+\cdots+c_{k-1}v_{k-1}$). 
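Before turning to the explicit preimages, the identities (E2), the vanishing $E_{ij}^3=0$ and Leibniz's rule admit a quick numerical sanity check (a sketch with our own helper names `ad` and `act`; not part of the proofs):

```python
import numpy as np

def e(k, i, j):
    """Matrix unit e_ij of Mat_k (1-based indices)."""
    m = np.zeros((k, k))
    m[i - 1, j - 1] = 1.0
    return m

def ad(k, c):
    """ad_c as a k^2 x k^2 matrix on vectorized Mat_k: x -> [c, x]."""
    A = np.zeros((k * k, k * k))
    for col in range(k * k):
        x = np.zeros(k * k)
        x[col] = 1.0
        y = x.reshape(k, k)
        A[:, col] = (c @ y - y @ c).reshape(-1)
    return A

def act(op, x):
    """Apply a vectorized endomorphism: x^op."""
    k = x.shape[0]
    return (op @ x.reshape(-1)).reshape(k, k)
```

For instance, $E_{12}^2$ acts as $x\mapsto -2x_{21}e_{12}$, which is $-2\varphi_{e_{21}e_{12}}$ since $\mu_{e_{21}}^x=x_{21}$; $E_{12}^3=0$; and Leibniz's rule for $p=3$ holds on random $x,y$.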
Then one valid assignment of $\mathcal V$ is
\begin{align*}
\vr_{e_{rs}e_{ij}} &:= \tfrac12\, e_{sr}^2e_{rj}e_{is} \quad (r\neq j,\ s\neq i;\ r\neq s,\ i\neq j),\\
\vr_{e_{rs}e_{ir}} &:= -\tfrac12\, e_{sr}^2e_{is} \quad (i\neq s;\ r\neq s,\ i\neq r),\\
\vr_{e_{rs}e_{sj}} &:= \tfrac12\, e_{sr}^2e_{rj} \quad (r\neq j;\ r\neq s,\ s\neq j),\\
\vr_{e_{rs}e_{rs}} &:= \tfrac14\, e_{sr}^2e_{rs}^2 \quad (r\neq s),\\
\vr_{e_{rs}e_{sr}} &:= -\tfrac12\, e_{sr}^2 \quad (r\neq s),\\
\vr_{e_{rs}h_i} &:= \tfrac12\, e_{sr}^2e_{i+1s}e_{ri}e_{ii+1} \quad (i\neq r-1;\ i\neq r,\ i\neq s-1;\ s\neq r),\\
\vr_{e_{rs}h_r} &:= -\tfrac12\, e_{sr}^2e_{r+1s}e_{rr+1} \quad (s\neq r+1;\ s\neq r),\\
\vr_{e_{rs}h_{s-1}} &:= \tfrac12\, e_{sr}^2e_{rs-1}e_{s-1s} \quad (s\neq r+1;\ s\neq r),\\
\vr_{e_{rs}h_{r-1}} &:= \tfrac12\, e_{sr}^2e_{r-1s}e_{rr-1} \quad (s\neq r-1;\ s\neq r),\\
\vr_{e_{rr-1}h_{r-1}} &:= \tfrac12\, e_{r-1r}^2e_{rr-1}, \qquad \vr_{e_{r-1r}h_{r-1}} := -\tfrac12\, e_{rr-1}^2e_{r-1r},\\
\vr_{h_ie_{rs}} &:= -\tfrac1{2k}(c_{is}\cdot v_{rs})e_{rs}^2,
\end{align*}
where $v_{rs}:=(v_{rs1},\ldots,v_{rss-1},v_{rss+1},v_{rss+2},\ldots,v_{rsk})$ with $v_{rsj}:=e_{sj}e_{jr}$ for $j\neq r,s$ and $v_{rsr}:=-e_{sr}$, and $c_{pq}:=(\overbrace{-k+p,\ldots,-k+p}^{p-1},m,\overbrace{p,\ldots,p}^{k-p-1})$ with $m:=-k+p$ if $p<q$ and $m:=p$ if $p\geq q$;
\[\vr_{h_ih_j}:= \tfrac1k(c_{ij}\cdot w_j)e_{j+1j}e_{jj+1},\]
where $w_j:=(w_{j1},\ldots,w_{jj},w_{jj+2},w_{jj+3},\ldots,w_{jk})$ with $w_{jr}:=e_{jr}e_{rj}-\tfrac14e_{j+1r}^2e_{rj+1}^2$ for $r\neq j,j+1$ and $w_{jj}:=\tfrac12e_{jj+1}e_{j+1j}$; and
\[\vr_{gg}:= 1-\sum_{a\in\mathcal S}\vr_{aa}.\]
For example, for $k=6$ we have $c_{35}=(-3,-3,-3,3,3)$, $v_{45}=(e_{51}e_{14},e_{52}e_{24},e_{53}e_{34},-e_{54},e_{56}e_{64})$ and \[\vr_{h_3e_{45}}=\frac14(e_{51}e_{14}+e_{52}e_{24}+e_{53}e_{34}+e_{54}-e_{56}e_{64})e_{45}^2.\] Proving the proposition is a matter of verifying that, for each assignment found in the statement of the form $\vr_{ab}:=\sum_{p,\ldots,q}\alpha_{p,\ldots,q} e_p\cdots e_q$ with $\alpha_{p,\ldots,q}\in F$ and $e_p,\ldots,e_q\in\SL$, the identity $\vp_{ab} = \sum_{p,\ldots,q}\alpha_{p,\ldots,q} E_p\cdots E_q$ holds in $U^{\op}$. Accordingly, we skip computations when they are straightforward. * The first 11 assignments of the statement are easily checked with Formulas (<ref>), (<ref>), (<ref>) and (<ref>). 
* For $\vp_{h_ie_{rs}}$, first check that \[V^{rsj}:= E_{sj}E_{jr}E_{rs}^2=2(-\vp_{h_{j-1}e_{rs}}+\vp_{h_je_{rs}}+\vp_{h_{s-1}e_{rs}}-\vp_{h_se_{rs}})\text{ for }j\neq r,s \tag{Ea}\label{formula(Ea)}\] \[V^{rsr}:=-E_{sr}E_{rs}^2=2(-\vp_{h_{r-1}e_{rs}}+\vp_{h_re_{rs}}+\vp_{h_{s-1}e_{rs}}-\vp_{h_se_{rs}}).\tag{Eb}\label{formula(Eb)}\] Then, for fixed $r,s$, write all identities in (<ref>) for $1\leq j\leq k$, $j\neq r,s$ together with (<ref>) as a $(k-1)\times(k-1)$ system of linear equations $\frac12V^{rs}=M(s)\cdot \vp^{rs}$, with vectors $\vp^{rs}:=(\vp_{h_1e_{rs}},\ldots,\vp_{h_{k-1}e_{rs}})$ and $V^{rs}:=(V^{rs1},\ldots,V^{rss-1},V^{rss+1},\ldots,V^{rsk})$. Compute $M(s)^{-1}$ to solve the system and find $\vp^{rs}=\frac12M(s)^{-1}V^{rs}$. The matrix of coefficients $M(s)$ is described as follows: Suppose first $1<s<k$. For $i<s-1$ (corresponding to $j=i$ in (Ea) or $r=i$ in (Eb)), the $i$th row has a $-1$ entry in columns $i-1$ and $s$, a $1$ entry in columns $i$ and $s-1$, and $0$ elsewhere; for $i>s$ (corresponding to $j=i+1$ or $r=i+1$), the $i$th row has a $-1$ entry in columns $i$ and $s$, a $1$ entry in columns $i+1$ and $s-1$, and $0$ elsewhere (entries in columns $0$ or $k$ being dropped). The $(s-1)$th row (corresponding to $j=s-1$ or $r=s-1$) has a $-1$ entry in columns $s-2$ and $s$ and a $2$ entry in column $s-1$. The $s$th row (corresponding to $j=s+1$ or $r=s+1$) has a $1$ entry in columns $s-1$ and $s+1$ and a $-2$ entry in column $s$. 
\[M(s)=\begin{pmatrix*}[r] 1 & 0 & \cdots & \cdots & \cdots & 0 & 1 & -1 & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ -1 & 1 & 0 & \cdots & \cdots & 0 & 1 & -1 & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ 0 &-1 & 1 & 0 & \cdots & 0 & 1 & -1 & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \vdots\\ 0 & \cdots & \cdots & \cdots & 0 & -1 & 2 & -1 & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & 1 & -2 & 1 & 0 & \cdots & \cdots & \cdots & 0\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & 1 & -1 & -1 & 1 & 0 & \cdots & \cdots & 0\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & 1 & -1 & 0 & -1 & 1 & 0 & \cdots & 0\\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots& \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots& 0\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & 1 & -1 & 0 & \cdots & \cdots \cdots & 0 & -1 & 1 \end{pmatrix*}.\] Equivalently, if $i<s-1$ or $i>s$, the $i$th column $C_{is}$ of $M(s)$ has entries $1,-1$ in rows $i,i+1$ (resp. $i-1,i$); if $i=s-1$ (resp. $i=s$), entry $2$ (resp. $-2$) in row $s-1$ (resp. $s$) and $1$ (resp. $-1$) elsewhere. Let us show that $M(s)$ is invertible. It is straightforward to check that the row vector \[c_{is}:=(\overbrace{-k+i,\ldots,-k+i}^{i-1},m,\overbrace{i,\ldots,i}^{k-i-1}), m:=-k+i\text{ if }i<s\text{ and }m:=i\text{ if }i\geq s\] satisfies $c_{is}\cdot C_{js}=-k\delta_{ij}$, whence the matrix with rows $c_{1s},\ldots,c_{k-1s}$ scaled by $-1/k$ is the inverse of $M(s)$. 
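The column description and the inverse claim can be verified numerically (a sketch; `M` and `c_row` are ad-hoc names, and only the generic case $1<s<k$ is built):

```python
import numpy as np

def M(k, s):
    """The (k-1)x(k-1) matrix M(s), built column by column from its description,
    for 1 < s < k."""
    A = np.zeros((k - 1, k - 1))
    for i in range(1, k):              # column index i (1-based)
        if i == s - 1:                 # entry 2 in row s-1, entry 1 elsewhere
            A[:, i - 1] = 1.0
            A[s - 2, i - 1] = 2.0
        elif i == s:                   # entry -2 in row s, entry -1 elsewhere
            A[:, i - 1] = -1.0
            A[s - 1, i - 1] = -2.0
        elif i < s - 1:                # entries 1, -1 in rows i, i+1
            A[i - 1, i - 1], A[i, i - 1] = 1.0, -1.0
        else:                          # i > s: entries 1, -1 in rows i-1, i
            A[i - 2, i - 1], A[i - 1, i - 1] = 1.0, -1.0
    return A

def c_row(k, s, i):
    """The row vector c_is."""
    m = -k + i if i < s else i
    return np.array([-k + i] * (i - 1) + [m] + [i] * (k - i - 1), dtype=float)
```

Stacking the rows $c_{1s},\ldots,c_{k-1s}$ and multiplying by $M(s)$ should give $-k$ times the identity matrix.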
Therefore $\vp_{h_ie_{rs}}=-\frac1{2k}c_{is}\cdot V^{rs}$. In the extreme cases, $s=1$ and $s=k$, the matrix $M(s)$ follows the same pattern with the obvious changes and the same formula gives the inverse. * For $\vp_{h_ih_j}$ check that, for $r\neq j,j+1$, \[E_{jr}E_{rj}E_{j+1j}E_{jj+1} = \vp_{e_{rj+1}e_{rj+1}} +\vp_{h_{r-1}h_j}-\vp_{h_rh_j}-\vp_{h_{j-1}h_j}+\vp_{h_jh_j} \] by a direct computation. Next apply that $\vp_{e_{rj+1}e_{rj+1}}=\frac14E_{j+1r}^2E_{rj+1}^2 = \frac14E_{j+1r}^2E_{rj+1}^2E_{j+1j}E_{jj+1}$ to find \[W^{jr}:=E_{jr}E_{rj}E_{j+1j}E_{jj+1} - \frac14E_{j+1r}^2E_{rj+1}^2E_{j+1j}E_{jj+1} = \vp_{h_{r-1}h_j}-\vp_{h_rh_j}-\vp_{h_{j-1}h_j}+\vp_{h_jh_j}\text{ for }r\neq j,j+1. \tag{Ec}\label{formula(Ec)}\] Check also that \[W^{jj}:=E_{jj+1}E_{j+1j}^2E_{jj+1} = 2(-\vp_{h_{j-1}h_j}+2\vp_{h_jh_j}-\vp_{h_{j+1}h_j}).\tag{Ed}\label{formula(Ed)}\] Now fix $j$ and proceed as in the previous case by solving for $\vp^j:=(\vp_{h_1h_j},\ldots,\vp_{h_{k-1}h_j})$ the $(k-1)\times(k-1)$ system of linear equations $W^j=-M(j)\vp^j$ generated by (<ref>) and (<ref>), where $W^j:=(W^{j1},\ldots,W^{jj-1},\frac12W^{jj},W^{jj+2},\ldots,W^{jk})$ and $M(s)$ is (thankfully!) the matrix described in the previous item. Therefore $\vp_{h_ih_j}=\frac1k c_{ij}\cdot W^j$. * For $\vp_{gg}$ use that $1_{U^{\op}}=\vp_{gg} + \sum_{a\in\mathcal S}\vp_{aa}$. Generator of $\ker\phi$. The ideal $\ker\phi$ of $\Uop$ is infinite dimensional, but is finitely generated; in this subsection, we show, through the primitive element lemma, that it is in fact a principal ideal. For most of this section we work with $U(L)$. Let $\Ad_k$ denote the adjoint representation of $\SL$ and $\chi_k$ its associated central character, and let $\phi_k$ be the representation of $\SL$ on $\M$ given by the action of $\ad$. We have $\phi_k=\Ad_k\oplus\rho_0$ with $\rho_0$ acting on $F\cdot g$. 
Fixing the Cartan subalgebra of diagonal traceless matrices and the set of positive roots giving $N_+=\spn\{e_{12},e_{23},\ldots,e_{k-1k}\}$, the highest weight vector of $\Ad_k$ is $e_{1k}$. Denote $x^i_j:=e_{ij}$ for $1\leq i\neq j\leq k$, and for $1\leq i\leq k$, \[x^i_i:=\frac1k\sum_{j=1}^{k-1}\alpha_{ij}h_j,\,\, \alpha_{ij}:=k-j\text{ if }j\geq i,\,\, \alpha_{ij}:=-j\text{ if }j<i.\tag{X}\label{formula(X)}\] The elements $x^i_j$ form a set of generators of $\SL$ satisfying $[x^i_j,x^r_s]=\delta_{jr}x^i_s-\delta_{is}x^r_j$. Then the Casimir elements \[c_{p,k}:=\sum_{i_1,\ldots,i_p=1}^k x^{i_1}_{i_2}x^{i_2}_{i_3}\cdots x^{i_p}_{i_1}, \,\, 2\leq p\leq k,\tag{Ca}\label{formula(Ca)}\] which have rational coefficients in the PBW basis, form a set of Casimir generators of $Z(U(L))$ (see <cit.>). For example, \begin{align*} c_{3,3}& = (x^1_1)^3+(x^1_1)^2x^1_2+(x^1_1)^2x^1_3+ x^1_1x^1_2x^2_2 +x^1_1x^1_2x^2_3 + \cdots + (x^3_3)^3 =\\ & = 2/9h_1^3 + 1/3h_1^2h_2 - 1/3h_1h_2^2 - 2/9h_2^3 + 2h_1^2 + h_1h_2 + 4h_1 + 2h_2 +\\ & + h_1e_{21}e_{12} + 2h_2e_{21}e_{12} - 2h_1e_{32}e_{23} - h_2e_{32}e_{23} + 3e_{31}e_{12}e_{23} + 3e_{21}e_{32}e_{13} + h_1e_{31}e_{13} - h_2e_{31}e_{13} +\\ & + 6e_{21}e_{12} + 3e_{31}e_{13}. \end{align*} Let $\lambda_{p,k}$ denote the eigenvalue of $c_{p,k}$ for $\Ad_k$. These eigenvalues are the following positive integers. Put $\lambda_{1,k}:=0$ and $\lambda_{p,k}:=\chi_k(c_{p,k})$ for $2\leq p\leq k$. 
Then $\lambda_{p,k}=\lambda_{p-2,k}+k^{p-1}$ with $\lambda_{2,k}=2k$, thus \[\lambda_{p,k}=\left\{\begin{array}{lc}\displaystyle k\left(\frac{k^p-1}{k^2-1} + 1\right), & p\text{ even}\\\displaystyle k^2\frac{k^{p-1}-1}{k^2-1}, & p\text{ odd} \end{array}\right..\] By <cit.> we have $\lambda_{p,k}=\tr(A_k^pE_k)$, where $E_k$ is the $k\times k$ matrix full of ones and $A_k$ is the $k\times k$ upper triangular matrix with $(i,j)$ entries equal to $-1$ when $i<j$ and diagonal \[(m_1+k-1,m_2+k-2,\ldots,m_{k-1}+1,m_k),\] where $m_i$ is the eigenvalue of $x_i^i$ for the highest weight vector of the adjoint representation, i.e., $x_i^ie_{1k}=:m_ie_{1k}$. A straightforward computation with Formula (<ref>) produces $m_1=1$, $m_2=\cdots=m_{k-1}=0$, $m_k=-1$. By induction on $p$ with base case $p=1$ it is proven that $A_k^pE_k=A_k\cdot(A_k^{p-1}E_k)$ equals \[\begin{pmatrix} a_{p,k} & a_{p,k} & \cdots & a_{p,k}\\ 0 & 0 & \cdots & 0\\ \vdots & \ddots & \ddots & \vdots\\ -1 & -1 & \cdots & -1 \end{pmatrix}\] when $p$ is odd and \[\begin{pmatrix} a_{p,k} & a_{p,k} & \cdots & a_{p,k}\\ 1 & 1 & \cdots & 1\\ \vdots & \ddots & \ddots & \vdots\\ 1 & 1 & \cdots & 1 \end{pmatrix}\] when $p$ is even, with $a_{1,k}=1$, \[a_{p,k}=\left\{\begin{array}{lc} ka_{p-1,k}+1, & p\text{ even}\\ ka_{p-1,k}-(k-1), & p>1\text{ odd} \end{array}\right.,\] \[\lambda_{p,k}=\tr(A^p_kE_k)=\left\{\begin{array}{lc} a_{p,k}+k-1, & p\text{ even}\\ a_{p,k}-1, & p>1\text{ odd} \end{array}\right..\] Hence \[\lambda_{p,k}=\left\{\begin{array}{lc} k(a_{p-1,k}+1) = k(\lambda_{p-1,k}+2), & p\text{ even}\\ k(a_{p-1,k}-1) = k(\lambda_{p-1,k}-k), & p>1\text{ odd} \end{array}\right.,\] with $\lambda_{2,k}=2k$, $\lambda_{1,k}=0$. Notice that $\lambda_{p,k}-\lambda_{p-2,k}=k(\lambda_{p-1,k}-\lambda_{p-3,k})$ regardless of whether $p$ is even or odd. 
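The recursion, the trace formula and the closed form can be cross-checked numerically (a sketch; the names `A_mat`, `lam`, `lam_trace` are ours):

```python
import numpy as np

def A_mat(k):
    """Upper triangular A_k: entries -1 above the diagonal and diagonal
    (m_1+k-1, ..., m_{k-1}+1, m_k) with (m_1,...,m_k) = (1,0,...,0,-1)."""
    A = -np.triu(np.ones((k, k)), 1)
    m = [1] + [0] * (k - 2) + [-1]
    np.fill_diagonal(A, [m[i] + k - 1 - i for i in range(k)])
    return A

def lam(p, k):
    """Closed form for lambda_{p,k} (p >= 1)."""
    if p % 2 == 0:
        return k * ((k**p - 1) // (k**2 - 1) + 1)
    return k**2 * (k**(p - 1) - 1) // (k**2 - 1)

def lam_trace(p, k):
    """lambda_{p,k} computed as tr(A_k^p E_k), with E_k the all-ones matrix."""
    Ap = np.linalg.matrix_power(A_mat(k), p)
    return int(round(np.trace(Ap @ np.ones((k, k)))))
```

For instance `lam(2, k)` returns $2k$ and `lam(3, k)` returns $k^2$.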
By recursion $\lambda_{p,k}-\lambda_{p-2,k}=k^{p-3}(\lambda_{3,k}-\lambda_{1,k})=k^{p-3}k^2=k^{p-1}$, hence by recursion again we find \[ \lambda_{p,k}=\left\{\begin{array}{lc}\displaystyle \sum_{i=1}^{p/2-1} k^{2i+1} +2k= k\left(\frac{k^p-1}{k^2-1} + 1\right), & p\text{ even}\\\displaystyle \sum_{i=1}^{(p-1)/2} k^{2i} = k^2\frac{k^{p-1}-1}{k^2-1}, & p\text{ odd} \end{array}\right..\qedhere\] Clearly $c_{p,k}-\lambda_{p,k}\in\ker\chi_k\subseteq\ker\Ad_k$ for $2\leq p\leq k$, and we also have $e_{ij}^3\in\ker\Ad_k$ for $1\leq i\neq j\leq k$. We prove that these elements generate $\ker\Ad_k$ in $U(L)$ and show that $\ker\phi_k$ is a principal ideal. We build on ideas from <cit.>, which solve the problem for $k=3$. Denote $z_{p,k}:=c_{p,k}-\lambda_{p,k}$ for $2\leq p\leq k$ and $z'_{p,k}:=\lambda_{2,k}c_{p,k}-\lambda_{p,k}c_{2,k}$ for $3\leq p\leq k$. * $\ker\Ad_k = \la e_{12}^3, z_{2,k},\ldots,z_{k,k}\ra$. * $\ker\phi_k = \la e_{12}^3,e_{12}z_{2,k}, z'_{3,k},\ldots,z'_{k,k}\ra$. * $\ker\phi_k = \la e_{12}^3+e_{12}z_{2,k} + e_{13}z_{3,k} +\cdots +e_{1k}z_{k,k}\ra$. Let $K$ be a field extension of $F$. Then $\Sl_k(K)=\SL\otimes_F K$, $U(\Sl_k(K))=U(L)\otimes_F K$, the adjoint representation of $\Sl_k(K)$ is $\Ad_k\otimes_F K$, and if $\rho$ is a representation of $\SL$ then $\rho\otimes_F K$ is a representation of $\Sl_k(K)$ such that \[\ker_{U(\Sl_k(K))}(\rho\otimes_F K) = (\ker_{U(L)}\rho)\otimes_F K.\] In addition, if $I$ is an ideal of $U(L)$ such that $I\otimes_F K = \la g_1,\ldots,g_n\ra$ in $U(\Sl_k(K))$ with $g_i\in I$ for $1\leq i\leq n$ then $I=\la g_1,\ldots,g_n\ra$ in $U(L)$. Therefore, by extension and restriction of scalars, we can assume without loss of generality that $F=\Co$. * Clearly $I_k:=\la e_{12}^3, z_{2,k},\ldots,z_{k,k}\ra\subseteq\ker\Ad_k$. Let us show the opposite inclusion. First, we see that $I_k$ has finite codimension. Consider a deglex order on the set of monomials of $U(L)$ with $h_i>e_{ii+1}>e_{i+1i}$ for $1\leq i<k$. 
Since $U(L)=I_k\oplus\spn N(I_k)$, the ideal $I_k$ has finite codimension if and only if $\spn N(I_k)$ has finite dimension, hence if and only if there are $n_{ij},m_l\in\N$ such that $e_{ij}^{n_{ij}},h_l^{m_l}\in\LM(I_k)$ for all $1\leq i\neq j\leq k$ and $1\leq l<k$. In the next identities let $\ad$ denote the adjoint map of $U(L)$; since we have \begin{align*} &e_{21}^3 = -\frac1{6!}\ad_{e_{21}}^6(e_{12}^3)\text{ if } k=2,\\ &e_{ij}^3 = \frac1{6}\ad_{e_{il}}^3(e_{lj}^3),\,\,\,\, e_{ij}^3 = -\frac1{6}\ad_{e_{lj}}^3(e_{il}^3)\,\,\text{ for }i\neq l\neq j\neq i\text{ if }k>2, \end{align*} starting from $e_{12}^3$ we can show $e_{ij}^3\in I_k$ for all $1\leq i\neq j\leq k$, for all $k\geq2$: for $k\geq3$, use the third identity to get $e_{1j}^3$ for all $3\leq j\leq k$ from $e_{12}^3$, the second identity to get $e_{i2}^3$ for all $3\leq i\leq k$ from $e_{12}^3$, then the second identity again to get $e_{21}^3$ from $e_{31}^3$; and so on. Also, since for all $k\geq2$ and all $1\leq i<k$ we have \[\frac16\ad^3_{e_{ii+1}}(e_{i+1i}^3) = h_i^3-6e_{i+1i}e_{ii+1}h_i -3h_i^2 +4h_i,\] we find $h_i^3\in\LM(I_k)$ for $1\leq i<k$. This proves that $I_k$ has finite codimension. Now, since $z_{2,k},\ldots,z_{k,k}$ is a set of Casimir generators, $I_k\cap Z(U(L))$ is a maximal ideal of $Z(U(L))$, and so Lemma <ref> shows that $I_k$ is a maximal ideal of $U(L)$, implying $I_k=\ker\Ad_k$. * The representation $\phi_k$ is the direct sum of the adjoint representation $\Ad_k$ and the trivial representation $\rho_0$, so $\ker\phi_k=\ker\Ad_k\cap\ker\rho_0=I_k\cap U^*(L)$, where $U^*(L)$ is the nonunital universal enveloping algebra of $L$; i.e., $\ker\phi_k$ is formed by those elements of $I_k$ which do not have a nonzero constant term. We first change the set of Casimir generators to get rid of unnecessary constant terms in the generators of $I_k$. 
By Lemma <ref> we have $\lambda_{2,k}=2k\neq0$, hence the matrix \[\pma 1 & 0 & \cdots & \cdots & 0 \\ \lambda_{3,k} & -\lambda_{2,k} & 0 & \cdots & 0 \\ \lambda_{4,k} & 0 & -\lambda_{2,k} & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ \lambda_{k,k} & 0 & \cdots & 0 & -\lambda_{2,k} \epma\] is invertible. Therefore the central elements $c_{2,k}$ and $z'_{p,k}$ for $3\leq p\leq k$ form another set of Casimir generators such that $\chi_k(c_{2,k})=\lambda_{2,k}$, $\chi_k(z'_{p,k})=0$ for $3\leq p\leq k$. Then \[\ker\Ad_k=\la e_{12}^3,z_{2,k}, z'_{3,k},\ldots,z'_{k,k}\ra\] with $e_{12}^3, z'_{3,k},\ldots,z'_{k,k}\in U^*(L)$. Put $I:=\la e_{12}^3, z'_{3,k},\ldots,z'_{k,k}\ra$, $J:=\la z_{2,k}\ra$ and $M:=U^*(L)$. Then $\ker\phi_k=(I+J)\cap M = I+J\cap M$ because $I\subseteq M$. Since $M$ is a maximal ideal and $z_{2,k}\not\in M$, $J+M=U(L)$, whence \[JM\subseteq J\cap M = U(L)(J\cap M) = (J+M)(J\cap M)\subseteq JM + MJ = JM\] since $z_{2,k}$ is central. This shows $J\cap M = JM$. Moreover, since $\SL$ is simple, $U^*(L)=\la e_{12}\ra$, so $JM = \la z_{2,k}\ra\la e_{12}\ra = \la z_{2,k}e_{12}\ra$ because $z_{2,k}$ is central. Therefore \[\ker\phi_k=I+JM = \la e_{12}^3,e_{12}z_{2,k},z_{3,k}',\ldots,z_{k,k}'\ra.\] Now repeat the argument above with $I:=\la e_{12}^3\ra$, $J:=\la z_{2,k},\ldots,z_{k,k}\ra$ to arrive at $\ker\phi_k = \la e_{12}^3,\allowbreak e_{12}z_{2,k},\ldots,e_{12}z_{k,k}\ra$. Since $[x,yz_{p,k}]=[x,y]z_{p,k}$ for all $x,y\in U(L)$ and $\la e_{1p}\ra=U^*(L)$ for all $2\leq p\leq k$, we get $\la e_{12}z_{p,k}\ra=\la e_{1p}z_{p,k}\ra$ for all $3\leq p\leq k$, so $\ker\phi_k = \la e_{12}^3,e_{12}z_{2,k},\ldots,e_{1k}z_{k,k}\ra$. * The elements $e_{1i}$ correspond to different roots $\alpha_i$ of $\SL$ and as such are weight vectors of different weights for the adjoint action of $\SL$ on $U(L)$. 
The identity $[h_i,e_{1p}z_{p,k}]=\alpha_p(h_i)e_{1p}z_{p,k}$ for $1\leq i<k$, $2\leq p\leq k$ shows that the elements $e_{1p}z_{p,k}$ are also weight vectors of $U(L)$ of different weights, which are also different from the weight $3\alpha_2$ of the weight vector $e_{12}^3$. Since ideals of $U(L)$ are $L$-ideals for the adjoint action, by the primitive element lemma (Lemma <ref>) we find $\ker\phi_k=\la e_{12}^3,e_{12}z_{2,k},\ldots,e_{1k}z_{k,k}\ra = \la e_{12}^3+e_{12}z_{2,k}+\cdots+e_{1k}z_{k,k}\ra$. Now we turn back to $\Uop$. \[\ker\phi = \la e_{12}^3+e_{12}z_{2,k} + e_{13}z_{3,k} +\cdots +e_{1k}z_{k,k}\ra.\] We have $\phi=(\phi_k)^{\op}$, so $\ker\phi=\ker\phi_k$ as sets with $z:=e_{12}^3+e_{12}z_{2,k} + e_{13}z_{3,k} +\cdots +e_{1k}z_{k,k}$ generating $\ker\phi_k$ in $U(L)$, and since $z_{p,k}\in Z(U(L))$ for $2\leq p<k$, $z$ generates $\ker\phi$ in $U(L)^{\op}$. §.§ $U$-polynomials and $U$-identities Modifying the first index of an exponent. As we will see below in <ref>, it is the second basis element in the subindex of the endomorphisms $\varphi_{ab}$ which carries the weight of the identities of $\M$. The first basis element is not that relevant, and in fact it can be freely changed by substitution: On the one hand, if $f(x_1,\ldots,x_n)$ is a $U$-identity of $(L,U)$-algebra $A$ and $u_1,\ldots,u_n\in U$, then $f(x_1^{u_1},\ldots,x_n^{u_n})$ is a $U$-identity of $A$. On the other hand, $(x^{ab})^{bc}=x^{ac}$ for $a,b,c\in\mathcal S$. Therefore, if variable $x^{ab}$ with $a,b\in\mathcal S$ features in a $U$-identity of $A$, there is an analogous $U$-identity replacing $x^{ab}$ with variable $x^{cb} = (x^{ca})^{ab}$ for any $c\in\mathcal S$. 
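The first-index modification just described amounts to the composition rule $(x^{\varphi_{ca}})^{\varphi_{ab}}=x^{\varphi_{cb}}$, a special case of Formula (F), which can be sanity-checked numerically (a sketch with ad-hoc helper names, not part of the argument):

```python
import numpy as np

def e(k, i, j):
    """Matrix unit e_ij of Mat_k (1-based indices)."""
    m = np.zeros((k, k))
    m[i - 1, j - 1] = 1.0
    return m

def basis(k):
    """Ordered basis M of Mat_k: e_ij (i != j), h_1..h_{k-1}, g = I_k;
    the elements of S occupy positions 0..k^2-2."""
    B = [e(k, i, j) for i in range(1, k + 1) for j in range(1, k + 1) if i != j]
    B += [e(k, i, i) - e(k, i + 1, i + 1) for i in range(1, k)]
    return B + [np.eye(k)]

def phi(k, a, b):
    """phi_ab as a k^2 x k^2 matrix: x -> mu_a^x * (basis element b)."""
    B = basis(k)
    A = np.column_stack([m.reshape(-1) for m in B])
    return np.outer(B[b].reshape(-1), np.linalg.inv(A)[a])
```

In exponent order, $(x^{\varphi_{ca}})^{\varphi_{ab}}$ is the matrix product `phi(k, a, b) @ phi(k, c, a)`, and it coincides with `phi(k, c, b)` for any indices of elements of $\mathcal S$.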
Hence we can fix one element $a\in\mathcal S$ and assume that each endomorphism appearing as exponent in a $U$-identity is either $\vp_{gg}$ or of the form $\varphi_{ab}$ for some $b\in\mathcal S$; that is, when looking for the generating identities of the $T_U$-ideal of $A$ we may assume that all exponents different from $\vp_{gg}$ start with the same first basis element $a$. Taking this into account, for a fixed and previously specified $a\in\mathcal S$, we will write $x^b$ as a shorthand for $x^{ab}$ ($x^{\varphi_{ab}}$) with $b\in\SL$ (this should not be confused with element $x^b\in\FL$); we will also write $x^g$ as a shorthand for $x^{gg}$ ($x^{\vp_{gg}}$). In particular, in this format, the bracket formula takes the simple form $(x^b)^C = x^{[c,b]}$ for $b,c\in\SL$ and $C=\ad_c$. Fixed-exponents components of $U$-identities. We expand and add rigor to <ref>. Let us write multilinear $U$-polynomials of $P_n^U$ by grouping their terms with respect to the exponents of their variables, with one set of indices $\mathcal I$ for variables of the form $x_i^{gg}$ and another set of indices $\mathcal J$ for variables of the form $x_j^{a_jb_j}$ with $a_j,b_j\in\mathcal S$, and taking into account how the first exponent indices $a_j$ are paired with the variables $x_j$. For $n\geq 1$, let $\stirlingtwo{[n]}{2}$ denote the set of pairs of sets $(\mathcal I,\mathcal J)$ such that $\{\mathcal I,\mathcal J\}$ is a partition of the set $\{1,\dots,n\}$ into two disjoint subsets, one of which may be empty (observe that $(\mathcal I,\mathcal J)$ and $(\mathcal J,\mathcal I)$ are different elements belonging to $\stirlingtwo{[n]}{2}$). 
Then for $f\in P_n^U$ we have the decomposition \[f=\sum_{(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}}\,\sum_{\bs a\in \mathcal{S}^{|\mathcal J|}} f_{\mathcal I, \bs a},\] where for fixed $\mathcal I=\{i_1,\ldots,i_r\}$, $\mathcal J=\{j_1,\ldots,j_{n-r}\}$ and $\bs a=(a_1,\ldots,a_{n-r})$, the $U$-polynomial $f_{\mathcal I, \bs a}$ denotes the sum of all terms of $f$ in which only the variables \[x_{i_1}^{gg}, \dots, x_{i_r}^{gg}\text{ and }x_{j_1}^{a_1b_1}, \dots, x_{j_{n-r}}^{a_{n-r}b_{n-r}}\text{ for any }(b_1,\dots, b_{n-r})\in \mathcal{S}^{n-r}\] appear, in any order. Call $f_{\mathcal I,\bs a}$ the $(\mathcal I,\bs a)$ fixed-exponents component of $f$. Any $U$-polynomial whose only nonzero fixed-exponents component is $(\mathcal I,\bs a)$ is of the form \[f(\mathcal I,\bs a,\{\alpha_{\sigma,\bs b}\}):=\sum_{\sigma\in S_n}\sum_{\tiny\begin{array}{cc}\bs b=(b_1,\ldots,b_n)\in\mathcal M^n\\b_i=g\text{ if and only if }\sigma(i)\in \mathcal I\end{array}}\!\!\!\!\!\!\!\! \alpha_{\sigma,\bs b} \, x_{\sigma(1)}^{a_{\sigma(1)}b_1}\cdots x_{\sigma(n)}^{a_{\sigma(n)}b_n},\] where $a_{i}:=g$ for all $i\in \mathcal I$, $S_n$ is the symmetric group acting on $\{1,\ldots,n\}$, and $\alpha_{\sigma,\bs b}\in F$. If the first exponent index is homogeneous, i.e., $\bs a=(a,\ldots,a)$ for $a\in\mathcal S$, then we write $f(\mathcal I, a,\{\alpha_{\sigma,\bs b}\})$. If $\mathcal I=\{1,\ldots,r\}$ then we write $f(r,\bs a,\{\alpha_{\sigma,\bs b}\})$ and say that $f$ has first $r$ $g$-exponents. We show that to study $P_n^U\cap\I^U(A)$ it is enough to study the $U$-identities with only one nonzero fixed-exponents component, with homogeneous first exponent index and first $r$ $g$-exponents. Let $A$ be an $(L,U)$-algebra, $(\mathcal I,\mathcal J)\in \stirlingtwo{[n]}{2}$ with $\mathcal I=\{i_1,\ldots,i_r\}$ and $\mathcal J=\{j_1,\ldots,j_{n-r}\}$, $\bs a = (a_{j_1},\ldots,a_{j_{n-r}})\in\mathcal S^{n-r}$, and $c\in\mathcal S$. 
* $f\in P_n^U$ is a $U$-identity of $A$ if and only if any nonzero fixed-exponents component of $f$ is a $U$-identity of $A$. * $f(\mathcal I,\bs a,\{\alpha_{\sigma,\bs b}\})$ is a $U$-identity of $A$ if and only if $f(\mathcal I,c,\{\alpha_{\sigma,\bs b}\})$ is a $U$-identity of $A$. * $f(\mathcal I,c,\{\alpha_{\sigma,\bs b}\})$ is a $U$-identity of $A$ if and only if $f(r,c,\{\alpha_{\sigma,\bs b}\})$ is a $U$-identity of $A$. Throughout the proof we repeatedly use that if $f\in \I^U(A)$ and $\rho$ is a substitution endomorphism swapping variables then $\rho(f)\in\I^U(A)$ (see <ref>). * One implication is clear from the definition, since any multilinear $U$-polynomial is the sum of its fixed-exponents components. For the other one, fix $f\in P_n^U \cap \I^U(A)$ and for $j\in\mathcal J$ let $\rho_j$ denote the substitution endomorphism sending variable $x_j$ to $x_j^{a_ja_j}$. Then, since $\vp_{a_ja_j}\vp_{a_jb}=\vp_{a_jb}$ and $\vp_{a_ja_j}\vp_{cd}=0$ for $c\neq a_j$, \[f_{\mathcal I,\bs a} = \rho_{j_1}\circ\cdots\circ\rho_{j_{n-r}}(f).\] * For $j\in\mathcal J$ let $\rho^c_j$ denote the substitution endomorphism sending variable $x_j$ to $x_j^{ca_j}$. Then \[f(\mathcal I,c,\{\alpha_{\sigma,\bs b}\}) = \rho^c_{j_1}\circ\cdots\circ\rho^c_{j_{n-r}}(f(\mathcal I,\bs a,\{\alpha_{\sigma,\bs b}\})).\] Analogously, for $j\in\mathcal J$ let $\rho^j_c$ denote the substitution endomorphism sending variable $x_j$ to $x_j^{a_jc}$. Then \[f(\mathcal I,\bs a,\{\alpha_{\sigma,\bs b}\}) = \rho^{j_1}_c\circ\cdots\circ\rho^{j_{n-r}}_c(f(\mathcal I,c,\{\alpha_{\sigma,\bs b}\})).\] * For $\sigma\in S_n$ let $\rho_\sigma$ denote the substitution endomorphism sending variable $x_i$ to $x_{\sigma(i)}$ for all $1\leq i\leq n$. Let $\tau$ be the permutation sending $i_s$ to $s$ for $1\leq s\leq r$ and $j_s$ to $s+r$ for $1\leq s\leq n-r$.
\[f(r,c,\{\alpha_{\sigma,\bs b}\}) = \rho_\tau(f(\mathcal I,c,\{\alpha_{\sigma,\bs b}\}))\text{ and } f(\mathcal I,c,\{\alpha_{\sigma,\bs b}\}) = \rho_{\tau^{-1}}(f(r,c,\{\alpha_{\sigma,\bs b}\})).\qedhere\] Moreover, it is now clear that, given a set of $U$-identities, we can fix one single element $a\in\mathcal S$ as the common first exponent index of all the $U$-identities in the set, a first exponent index which we may then elide. Formula for the $U$-codimensions. For $(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}$ and $\bs a\in\mathcal{S}^{|\mathcal J|}$, we denote by $P_{\mathcal I,\mathcal J,\bs a}^U$ the subspace of $P_n^U$ composed of the multilinear $U$-polynomials whose only nonzero fixed-exponents component is $(\mathcal I,\bs a)$ (see <ref>). For $0\leq r\leq n$ and $a\in\mathcal{S}$, we denote $P^U_{r,n-r,a}:=P^U_{\{1,\ldots,r\},\{r+1,\ldots,n\},(a,\ldots,a)}$, the vector space of multilinear $U$-polynomials with only one nonzero fixed-exponents component, with homogeneous first exponent index $a$ and first $r$ $g$-exponents. If $A$ is an $(L,U)$-algebra, we denote $P_{r,n-r,a}^U(A):= \displaystyle\dfrac{P_{r,n-r,a}^U}{P_{r,n-r,a}^U \cap \I^U(A)}$ and $c^U_{r,n-r}(A):=\dim_F P^U_{r,n-r,a}(A)$ for any $a\in\mathcal S$ (it is independent of $a$, as deduced from item (1) of the following proposition). We show that to study $P_n^U\cap\I^U(A)$ it is enough to study $P^U_{r,n-r,a}\cap\I^U(A)$ for some fixed $a\in\mathcal S$. Let $A$ be an $(L,U)$-algebra. * For $(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}$ such that $r=|\mathcal I|$ and any $\bs a\in\mathcal S^{n-r}$ and $c\in\mathcal S$ there is a linear isomorphism between $P_{\mathcal I,\mathcal J,\bs a}^U$ and $P_{r,n-r,c}^U$ sending $P_{\mathcal I,\mathcal J,\bs a}^U \cap\I^U(A)$ to $P_{r,n-r,c}^U \cap\I^U(A)$. * \[c_n^U(A)= \sum_{r=0}^{n} \binom{n}{r} (k^2 -1)^{n-r} c^U_{r,n-r}(A).
\tag{C}\label{formula(C)}\] * The proof of item (2) of Lemma <ref> shows a linear isomorphism between $P_{\mathcal I,\mathcal J,\bs a}^U$ and $P_{\mathcal I,\mathcal J,(c,\ldots,c)}^U$ such that $P_{\mathcal I,\mathcal J,\bs a}^U \cap\I^U(A) \cong P_{\mathcal I,\mathcal J,(c,\ldots,c)}^U\cap\I^U(A)$, while the proof of item (3) shows a linear isomorphism between $P_{\mathcal I,\mathcal J,(c,\ldots,c)}^U$ and $P_{r,n-r,c}^U$ such that $P_{\mathcal I,\mathcal J,(c,\ldots,c)}^U\cap\I^U(A) \cong P_{r,n-r,c}^U\cap\I^U(A)$ (the isomorphisms being given by the invertible substitutions specified there). * By definition of fixed-exponents components we have \[P_n^U = \bigoplus_{(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}, \bs a\in \mathcal{S}^{|\mathcal J|}} P_{\mathcal I,\mathcal J,\bs a}^U.\] This identity, combined with Lemma <ref>(1), leads to \[P_n^U\cap\I^U(A) = \bigoplus_{(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}, \bs a\in \mathcal{S}^{|\mathcal J|}} (P_{\mathcal I,\mathcal J,\bs a}^U\cap\I^U(A)).\] \[P_n^U(A) = \dfrac{P_n^U}{P_n^U\cap\I^U(A)} \cong \bigoplus_{(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}, \bs a\in \mathcal{S}^{|\mathcal J|}} \dfrac{P_{\mathcal I,\mathcal J,\bs a}^U}{P_{\mathcal I,\mathcal J,\bs a}^U\cap\I^U(A)}.\] Now fix $c\in\mathcal S$. By item (1), for each $(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}$ such that $|\mathcal I|=r$ and $\bs a\in\mathcal{S}^{|\mathcal J|}$ there is an isomorphism between $\dfrac{P_{\mathcal I,\mathcal J,\bs a}^U}{P_{\mathcal I,\mathcal J,\bs a}^U\cap\I^U(A)}$ and $\dfrac{P_{r,n-r,c}^U}{P_{r,n-r,c}^U\cap\I^U(A)}$. 
Therefore, \[P_n^U(A) \cong \bigoplus_{0\leq r\leq n,\, (\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2} \, | \, |\mathcal I|=r,\, \bs a\in \mathcal{S}^{n-r}} \dfrac{P_{r,n-r,c}^U}{P_{r,n-r,c}^U\cap\I^U(A)} = \bigoplus_{0\leq r\leq n}\binom nr (k^2-1)^{n-r} \dfrac{P_{r,n-r,c}^U}{P_{r,n-r,c}^U\cap\I^U(A)},\] since for any $0\leq r\leq n$ there are $\binom nr$ choices for the set $\mathcal I$ of variables carrying $g$-exponents and $(k^2 -1)^{n-r}$ distinct elements in $\mathcal{S}^{n-r}$. Hence Formula (<ref>) follows. For a fixed $a\in \mathcal{S}$, when no confusion can arise, we will write $P^U_{r,n-r}:=P_{r,n-r,a}^U$, $P^U_{r,n-r}(A):=P_{r,n-r,a}^U(A).$ $U$-cocharacter and its decomposition. In view of the previous sections, instead of considering the usual permutations of variables as the actions of the symmetric groups on the spaces of multilinear polynomials, we consider the actions which permute variables together with their first index exponent, which is more natural in this context. Denoting by $S_{\mathcal I}$ and $S_{\mathcal J}$ the symmetric groups acting on the sets $\mathcal I$ and $\mathcal J$ respectively, the group $S_{\mathcal I} \times S_{\mathcal J}$ acts on $P_{\mathcal I,\mathcal J,\bs a}^U$ on the left in the following way: \[(\sigma,\tau)(x_{\rho(1)}^{a_{\rho(1)}b_{1}}\cdots x_{\rho(n)}^{a_{\rho(n)}b_{n}}) := x_{\pi\rho(1)}^{a_{\pi\rho(1)}b_{1}}\cdots x_{\pi\rho(n)}^{a_{\pi\rho(n)}b_{n}}\] where $\rho\in S_n$, $a_{\rho(i)}=g=b_i$ if $\rho(i)\in \mathcal I$, $(\sigma,\tau)\in S_{\mathcal I}\times S_{\mathcal J}$, and $\pi(i):=\sigma(i)$ if $i\in \mathcal I$ while $\pi(i):=\tau(i)$ if $i\in \mathcal J$. In this way $P_{\mathcal I,\mathcal J,\bs a}^U$ becomes an $S_{\mathcal I} \times S_{\mathcal J}$-module.
If $A$ is an $(L,U)$-algebra, then $P_{\mathcal I,\mathcal J,\bs a}^U\cap \I^U(A)$ is invariant under the $S_{\mathcal I} \times S_{\mathcal J}$-action, making \[P_{\mathcal I,\mathcal J,\bs a}^U(A):=\frac{P_{\mathcal I,\mathcal J,\bs a}^U}{P_{\mathcal I,\mathcal J,\bs a}^U\cap \I^U(A)}\] an $S_{\mathcal I} \times S_{\mathcal J}$-module with the induced action. If $|\mathcal I|=r$, then $S_{\mathcal I} \times S_{\mathcal J}\cong S_{r} \times S_{n-r}$ where $S_r$ and $S_{n-r}$ denote the symmetric groups acting on the sets $\{1,\dots , r\}$ and $\{r+1,\dots, n\}$, respectively. Thus, for any $(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}$ such that $|\mathcal I|=r$ and $\bs a\in\mathcal{S}^{n-r}$, $P_{\mathcal I,\mathcal J,\bs a}^U(A)$ can be regarded as an $S_r\times S_{n-r}$-module. As a consequence, the space \[P_{(n;r)}^U(A):= \bigoplus_{(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2} \, | \, |\mathcal I|=r,\, \bs{a}\in \mathcal{S}^{n-r}}P_{\mathcal I,\mathcal J,\bs a}^U(A)\] is also an $S_r\times S_{n-r}$-module, whose character we denote by $ \chi_{(n;r)}^U(A)$ and call the $(n,r)$th $U$-cocharacter of $A$. Now, for fixed $a\in \mathcal S$, the space $P^U_{r,n-r}(A)$ (defined in <ref>) is an $S_r \times S_{n-r}$-module whose character we denote by $\chi_{r,n-r}^U(A)$ (it is independent of $a$, as deduced from Proposition <ref>(1)). From the same result we know that for each $(\mathcal I,\mathcal J)\in\stirlingtwo{[n]}{2}$ such that $|\mathcal I|=r$ and $\bs a\in\mathcal{S}^{n-r}$ there is an isomorphism of vector spaces between $P_{\mathcal I,\mathcal J,\bs a}^U(A)$ and $P^U_{r,n-r}(A)$, and since the $S_r\times S_{n-r}$-action commutes with the isomorphism, it is in addition an isomorphism of $S_r\times S_{n-r}$-modules. Therefore \begin{equation}\label{eq:P_{(n;r)}^U(A)} P_{(n;r)}^U(A) \underset{S_r\times S_{n-r}}{\cong\phantom{m}}\binom nr (k^2-1)^{n-r} P^U_{r,n-r}(A).
\end{equation} Recall that the irreducible $S_r \times S_{n-r}$-characters are the tensor products $\chi_\lambda \otimes \chi_\mu$ of the irreducible $S_r$- and $S_{n-r}$-characters $\chi_\lambda$ and $\chi_\mu$, where $\lambda\vdash r$ and $\mu \vdash n-r$ are partitions. Since $\chr(F)=0$, by complete reducibility we can write \begin{equation} \label{S_r X S_n-r character} \chi_{r,n-r}^U(A)=\sum_{(\lambda,\mu)\vdash(r,n-r)} m_{\lambda,\mu}\,\chi_\lambda \otimes \chi_\mu, \end{equation} where $\lambda\vdash r$, $\mu \vdash n-r$, and $m_{\lambda, \mu}\geq 0$ is the multiplicity corresponding to $\chi_\lambda \otimes \chi_\mu$. Thus, as a consequence of (<ref>) and (<ref>), the $(n,r)$th $U$-cocharacter of $A$ can be decomposed as \[ \chi_{(n;r)}^U(A)=\sum_{(\lambda,\mu)\vdash(r,n-r)} \binom nr (k^2-1)^{n-r} m_{\lambda,\mu}\,\chi_\lambda \otimes \chi_\mu. \tag{$\chi$}\label{formula(chi)} \] § DIFFERENTIAL IDENTITIES OF $\M$ In this section we determine the $U$-identities, $(L,U)$-identities, and $L$-identities of the algebra $\M$ for $k\geq 2$. §.§ $U$-identities of $\M$ Multiplication table of $\M$. The multiplication table arising from $\mathcal M$ (see <ref>), with results expressed in matrix units, is summarized by the following relations: * $gx=x=xg$ for any $x\in\mathcal M$. * $e_{ij}e_{jk}=e_{ik}$, $e_{ij}e_{kl}=0$ ($j\neq k$). * $h_ie_{ij} = e_{ij}$, $h_ie_{i+1,j}=-e_{i+1,j}$, $e_{ij}h_j = e_{ij}$, $e_{i,j+1}h_{j} = -e_{i,j+1}$, $h_ie_{jk}=0$ ($j\not\in\{i,i+1\}$), $e_{ij}h_k=0$ ($k\not\in\{j-1,j\}$). * $h_i^2=e_{ii}+e_{i+1,i+1}$, $h_ih_{i+1}=-e_{i+1,i+1}=h_{i+1}h_i$, $h_ih_j=0$ ($j\not\in\{i-1,i,i+1\}$). Generating $U$-identities.
Due to the nature of the endomorphisms $\varphi_{ab}$ (see Formula (<ref>)), the identities in the multiplication table of $\M$ (see <ref>) translate well to $U$-identities of $\M$: for example, if $x,y\in\M$ and $a,b\in\mathcal S$ then \[x^{ae_{ij}}y^{be_{lm}}=\mu_a^x\mu_b^ye_{ij}e_{lm}=0\text{ if }j\neq l.\] This idea provides us at once with the following $U$-identities of $\M$ in two variables. Fix $a_1,a_2\in \mathcal M$. For $1\leq i\leq r$, fix $\sigma_i\in S_2$ and let $\alpha_i\in F$ and $m^i_1,m^i_2\in\mathcal M$ be such that $\sum_{i=1}^r \alpha_im^i_1m^i_2=0$ in $\M$, with $a_1=g$ (resp. $a_2=g$) forcing $m^i_{\sigma_i(1)}=g$ (resp. $m^i_{\sigma_i(2)}=g$) for all $1\leq i\leq r$. Then \[\sum_{i=1}^r \alpha_i x^{a_{\sigma_i(1)}m^i_1}_{\sigma_i(1)}x^{a_{\sigma_i(2)}m^i_2}_{\sigma_i(2)}\in\I^U(\M).\] Evaluating in $x_1,x_2\in\M$, by definition of $\vp_{ab}$ we get \[\sum_{i=1}^r \alpha_i x^{a_{\sigma_i(1)}m^i_1}_{\sigma_i(1)}x^{a_{\sigma_i(2)}m^i_2}_{\sigma_i(2)} = \sum_{i=1}^r \mu^{x_{\sigma_i(1)}}_{a_{\sigma_i(1)}}\mu^{x_{\sigma_i(2)}}_{a_{\sigma_i(2)}}\alpha_im^i_1m^i_2 = \mu^{x_1}_{a_1}\mu^{x_2}_{a_2}\sum_{i=1}^r \alpha_im^i_1m^i_2 = 0.\qedhere\] We will show in the following that all $U$-identities of $\M$ can be generated from $U$-identities in two variables, with at most two terms, arising from its multiplication table as in Lemma <ref>. Recall that notation $x^b$ with $x\in\FU$, $b\in\mathcal S$ is shorthand for the element $x^{\vp_{ab}}\in\FU$, for a fixed and elided first exponent index $a\in\mathcal S$ which we will not explicitly mention in the next results (see <ref>). Similarly we write $P^U_{r,n-r}$ instead of $P^U_{r,n-r,a}$ (see <ref>). We start the description of the $U$-identities and codimensions of $M_k(F)$ by appealing to the linear structure of the $T_U$-ideal. We will tackle the simpler case $k=2$ separately. 
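The two-variable relations produced this way can be spot-checked numerically. A small sketch for $k=2$, assuming the standard basis element $h_1=e_{11}-e_{22}$ of $\SL$ (the helper name `vanishes` is illustrative):

```python
import numpy as np

# Matrix units of M_2(F) and the (assumed) standard sl_2 basis
# element h_1 = e_11 - e_22.
e11 = np.array([[1., 0.], [0., 0.]])
e12 = np.array([[0., 1.], [0., 0.]])
e21 = np.array([[0., 0.], [1., 0.]])
e22 = np.array([[0., 0.], [0., 1.]])
h1 = e11 - e22

def vanishes(m):
    return np.allclose(m, np.zeros((2, 2)))

# Products that are zero in M_2(F), hence (by the lemma) give U-identities:
assert vanishes(e12 @ e12)                         # x^c y^c with c = e_12
assert vanishes(h1 @ e12 + e12 @ h1)               # x^{h_1} y^c + x^c y^{h_1}
assert vanishes(e12 @ e21 + e21 @ e12 - h1 @ h1)   # three-term relation
```

Each vanishing product is exactly a relation $\sum_i\alpha_im_1^im_2^i=0$ as in the lemma, with common elided first exponent index.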
The $T_U$-ideal of $U$-identities of $M_2(F)$ is generated by the following $U$-polynomials: \[[x^g,y^g], \ [x^g, y^a],\ x^a y^b - y^a x^b, \ x^c y^c, \ x^{h_1} y^c + x^c y^{h_1}, \ x^{e_{12}} y^{e_{21}} + x^{e_{21}} y^{e_{12}}-x^{h_1} y^{h_1},\] where $a,b \in \{h_1, e_{12}, e_{21}\}$ and $c\in \{e_{12}, e_{21}\}$. In addition, $c_n^U(M_2(F))=4^{n+1}- 3(n+1).$ Firstly, note that $M_2(F)$ has no nontrivial $U$-identities of degree $1$ (as $x^\vp=0$ implies $\vp=0$ for any $\vp\in\E(M_2(F))$), and so $c_1^U(M_2(F)) = \dim_F(P_1^U) = \dim_F(U) =3^2 +1 = 4^2-3\cdot 2$ as needed. Hence in the following, we assume $n\geq2$. Let $I$ be the $T_U$-ideal generated by the $U$-polynomials in the statement of the proposition. We will show that $I=\I^U(M_2(F))$. Recall that $\mathcal S=\{h_1, e_{12}, e_{21}\}$ for $k=2$ by definition. By Lemma <ref> it follows that $I\subseteq \I^U(M_2(F))$. In order to prove the opposite inclusion let $f\in \I^U(M_2(F))$ with $\deg f=n$ and assume, as we may, that $f$ is multilinear and $f\in P^U_{r,n-r}$, where $0\leq r \leq n$ (see <ref> and <ref>). We will prove that $f\equi I 0$. For all $ 1\leq i \leq n-r$, in order to simplify the notation, let us rename $x_{r+i}^a$ to $y_i^a$, $a\in \mathcal S$, so that variables $x_1,\ldots,x_r$ correspond to exponents $g$ and variables $y_1,\ldots,y_{n-r}$ correspond to exponents $a\in\mathcal S$. Since $ [x_1^g,x_2^g], \ [x^g, y^a]\in I$ for all $a\in\mathcal S$, $f$ modulo $I$ is a linear combination of $U$-monomials of type \[x_1^g \cdots x_r^g y_{i_1}^{a_1}\cdots y_{i_{n-r}}^{a_{n-r}}\] where $a_{1},\dots, a_{n-r}\in\mathcal S$. If $r=n$ we have $f\equi I \alpha x_1^g \cdots x^g_n$ for some $\alpha\in F$; by evaluating $x_i=g$ for $1\leq i\leq n$ we find, since $f$ is a $U$-identity of $M_2(F)$, \[0=f(g,\ldots,g)\equi I \alpha(g)^{gg}\cdots (g)^{gg} = \alpha g^n = \alpha g,\] hence $\alpha=0$ and $f\equi I 0$. 
If $r=n-1$ then we have \[f\equi I \alpha_1 x^g_1 \cdots x^g_{n-1}y_1^{h_1}+ \alpha_2 x^g_1 \cdots x^g_{n-1}y_1^{e_{12}}+ \alpha_3 x^g_1 \cdots x^g_{n-1}y_1^{e_{21}}\] for some $\alpha_i\in F$, $1\leq i\leq 3$; by evaluating $x_i=g$ for $1\leq i \leq n-1$ and $y_1=h_1 +e_{12}+ e_{21}$ we analogously get, for some $a\in\{h_1,e_{12},e_{21}\}$, \[0=\alpha_1(g)^{gg}\cdots (g)^{gg}(h_1+e_{12}+e_{21})^{ah_1} + \cdots + \alpha_3(g)^{gg}\cdots (g)^{gg}(h_1+e_{12}+e_{21})^{ae_{21}} = \alpha_1 h_1 + \alpha_2 e_{12} + \alpha_3 e_{21},\] and hence $\alpha_{i}=0$ for $1\leq i \leq 3$, and $f\equi I 0$. Now let us assume that $0\leq r \leq n-2$. Since $y_1^a y_2^b - y_2^a y_1^b\in I$ with $a,b\in\mathcal S$, it is possible to reorder the variables $y_1,\ldots,y_{n-r}$ in each $U$-monomial of $f$ modulo $I$ without reordering their original exponents. Moreover, since $y_1^c y_2^c\in I$ for $c\in \{e_{12}, e_{21}\}$, modulo $I$ the nonzero terms of $f$ do not have two variables with exponent $e_{12}$ nor with exponent $e_{21}$ adjacent to each other. Since in addition $y_1^{h_1} y_2^c + y_1^c y_2^{h_1}\in I$ for $c\in \{e_{12}, e_{21}\}$, we can permute the $h_1$ exponents with the $e_{12},e_{21}$ exponents; it then follows that $f$ modulo $I$ can be written as a linear combination of $U$-monomials of the following forms: \begin{align*} &x^g_1\cdots x^g_r y_{1}^{h_1} \cdots y_{m}^{h_1} y_{m+1}^{e_{12}} y_{m+2}^{e_{21}} \cdots y_{n-r-1}^{e_{12}} y_{n-r}^{e_{21}},\\ &x^g_1\cdots x^g_r y_{1}^{h_1} \cdots y_{m}^{h_1} y_{m+1}^{e_{21}} y_{m+2}^{e_{12}} \cdots y_{n-r-1}^{e_{21}} y_{n-r}^{e_{12}},\\ &x^g_1\cdots x^g_r y_{1}^{h_1} \cdots y_{m}^{h_1} y_{m+1}^{e_{12}} y_{m+2}^{e_{21}} y_{m+3}^{e_{12}}\cdots y_{n-r-1}^{e_{21}} y_{n-r}^{e_{12}},\\ &x^g_1\cdots x^g_r y_{1}^{h_1} \cdots y_{m}^{h_1} y_{m+1}^{e_{21}} y_{m+2}^{e_{12}} y_{m+3}^{e_{21}} \cdots y_{n-r-1}^{e_{12}} y_{n-r}^{e_{21}}, \end{align*} where $0\leq m \leq n-r$.
Since $y_1^{e_{12}} y_2^{e_{21}} + y_1^{e_{21}} y_2^{e_{12}} -y_1^{h_1} y_2^{h_1} \in I$ it follows that, if $n-r\geq 2$, we can write $f$ modulo $I$ as a linear combination of $U$-monomials in which at most one exponent $h_1$ appears. Thus, if $n-r$ is even, we get that \begin{align} \label{n-r even} f\equi I & \alpha_1 x^g_1\cdots x^g_r y_{1}^{h_1} y_{2}^{e_{12}} y_{3}^{e_{21}} y_{4}^{e_{12}}\cdots y_{n-r-1}^{e_{21}} y_{n-r}^{e_{12}}+\alpha_2 x^g_1\cdots x^g_r y_{1}^{h_1} y_{2}^{e_{21}} y_{3}^{e_{12}} y_{4}^{e_{21}} \cdots y_{n-r-1}^{e_{12}} y_{n-r}^{e_{21}}+\\ +& \alpha_3 x^g_1\cdots x^g_r y_{1}^{e_{12}} y_{2}^{e_{21}}\cdots y_{n-r-1}^{e_{12}} y_{n-r}^{e_{21}} + \alpha_4 x^g_1\cdots x^g_r y_{1}^{e_{21}} y_{2}^{e_{12}} \cdots y_{n-r-1}^{e_{21}} y_{n-r}^{e_{12}} \notag \end{align} for some $\alpha_i\in F$, $1\leq i\leq 4$, whereas if $n-r$ is odd, then we have that \begin{align*} f\equi I & \alpha_1 x^g_1\cdots x^g_r y_{1}^{h_1} y_{2}^{e_{12}} y_{3}^{e_{21}} \cdots y_{n-r-1}^{e_{12}} y_{n-r}^{e_{21}}+ \alpha_2 x^g_1\cdots x^g_r y_{1}^{h_1} y_{2}^{e_{21}} y_{3}^{e_{12}} \cdots y_{n-r-1}^{e_{21}} y_{n-r}^{e_{12}}+\\ +& \alpha_3 x^g_1\cdots x^g_r y_{1}^{e_{12}} y_{2}^{e_{21}} y_{3}^{e_{12}}\cdots y_{n-r-1}^{e_{21}} y_{n-r}^{e_{12}}+ \alpha_4 x^g_1\cdots x^g_r y_{1}^{e_{21}} y_{2}^{e_{12}} y_{3}^{e_{21}} \cdots y_{n-r-1}^{e_{12}} y_{n-r}^{e_{21}} \end{align*} for some $\alpha_i\in F$, $1\leq i\leq 4$. Suppose that $f$ is as in (<ref>). By making the evaluation $x_i=g$ for $1\leq i \leq r$ and $y_j=h_1 +e_{12}+ e_{21}$ for $1\leq j \leq n-r$ we get $\alpha_1 e_{12} -\alpha_2 e_{21}+ \alpha_3 e_{11}+ \alpha_4 e_{22}=0$. Thus $\alpha_i=0$ for $1\leq i \leq 4$, and $f$ is the zero $U$-polynomial modulo $I$. One can deal similarly with the other case. Thus $\I^U(M_2(F))=I$. The argument above also proves that \[c^U_{r,n-r}(M_2(F))=\begin{cases} 1, & \mbox{ if } r=n, \\ 3, & \mbox{ if } r=n-1, \\ 4, & \mbox{ if } 0\leq r\leq n-2.
\end{cases}\] Therefore, by Formula (<ref>), \[c_n^U(M_2(F))= \sum_{r=0}^{n} \binom{n}{r} 3^{n-r} c^U_{r,n-r}(M_2(F))=4\sum_{r=0}^{n} \binom{n}{r} 3^{n-r} -3n-3= 4^{n+1}-3(n+1).\qedhere\] The next lemma follows from simple computations. If $k\geq 3$, then: * $x^{e_{il}}y^{e_{lj}} \equiv x^{h_i}y^{e_{ij}} \ (\md \langle x^{e_{i j}}y^{h_{j-1}}+ x^{e_{i l}} y^{e_{lj}}, \ x^{h_i}y^{e_{ij}}+ x^{e_{ij}}y^{h_{j-1}} \rangle_{T_U})$, where $1\leq i \leq k-1$, $2\leq j \leq k$, $ j\neq i$, $1\leq l \leq k$; * $x^{e_{1j}}y^{e_{j1}} \equiv x^{h_1} y^{h_1}+ x^{h_1}y^{h_2} \ (\md \langle x^{e_{1j}} y^{e_{j1}} + x^{e_{2,l}} y^{e_{l,2}}-x^{h_1} y^{h_1}, \ x^{h_{1}} y^{h_2} + x^{e_{2j}} y^{e_{j2}} \rangle_{T_U})$, where $2\leq j \leq k$, $1\leq l \leq k$, $l \neq 2$; * $x^{e_{kj}}y^{e_{jk}} \equiv x^{h_{k-1}} y^{h_{k-1}}+ x^{h_{k-2}}y^{h_{k-1}} \ (\md \langle x^{e_{k-1,j}} y^{e_{j,k-1}} + x^{e_{k,l}} y^{e_{l,k}}-x^{h_{k-1}} y^{h_{k-1}}, \ x^{h_{k-2}} y^{h_{k-1}} + x^{e_{k-1,l}} y^{e_{l,k-1}} \rangle_{T_U})$, where $1\leq j \leq k-1$, $1\leq l \leq k$, $l\neq k-1$.
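The matrix computations behind the last two items of the lemma can be confirmed numerically, assuming the standard Cartan basis $h_i=e_{ii}-e_{i+1,i+1}$ (the helper names `E` and `H` are illustrative):

```python
import numpy as np

def E(i, j, k):
    """Matrix unit e_{ij} in M_k (1-based indices)."""
    m = np.zeros((k, k))
    m[i - 1, j - 1] = 1.0
    return m

def H(i, k):
    """Assumed standard Cartan basis element h_i = e_{ii} - e_{i+1,i+1}."""
    return E(i, i, k) - E(i + 1, i + 1, k)

for k in (3, 4, 5):
    for j in range(2, k + 1):
        # item 2: e_{1j} e_{j1} = h_1 h_1 + h_1 h_2 in M_k
        assert np.allclose(E(1, j, k) @ E(j, 1, k),
                           H(1, k) @ H(1, k) + H(1, k) @ H(2, k))
    for j in range(1, k):
        # item 3: e_{kj} e_{jk} = h_{k-1} h_{k-1} + h_{k-2} h_{k-1} in M_k
        assert np.allclose(E(k, j, k) @ E(j, k, k),
                           H(k - 1, k) @ H(k - 1, k) + H(k - 2, k) @ H(k - 1, k))
```

Since every term carries the same scalar factor $\mu_a^x\mu_a^y$ on evaluation, these matrix equalities are exactly what the congruences assert.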
The $T_U$-ideal of $U$-identities of $\M$, $k\geq 3$, is generated by the following $U$-polynomials: * $[x^g,y^g], \ \ [x^g, y^a], \ \ x^a y^b - y^a x^b$, where $a,b\in \mathcal{S}$; * $x^{e_{ij}}y^{e_{lm}}$, where $1\leq i,j,l,m\leq k$, $j\neq l$; * $x^{h_i}y^{e_{jl}}$, where $1\leq i \leq k-1$, $1\leq j, l \leq k$, $ j\neq i, i+1$, $l\neq j$; * $x^{e_{jl}}y^{h_i}$ where $1\leq i \leq k-1$, $1\leq j,l\leq k$, $l\neq i, i+1$, $j\neq l$; * $x^{h_i}y^{e_{ij}}+ x^{e_{ij}}y^{h_{j-1}} $, where $ 1\leq i \leq k-1$, $ 2\leq j \leq k$, $j\neq i$; * $ x^{h_{i-1}}y^{e_{ij}}+ x^{e_{ij}}y^{h_j}$, where $ 2\leq i \leq k$, $1\leq j \leq k-1$, $ j\neq i$; * $x^{h_{i-1}}y^{e_{i j}}+x^{e_{i l}} y^{e_{lj}}$, where $2\leq i \leq k$, $1\leq j,l\leq k$, $ j\neq i$, $i,j\neq l$; * $x^{h_{i-1}} y^{h_i} + x^{e_{ij}} y^{e_{ji}}$, where $2\leq i \leq k-1$, $1\leq j \leq k$, $j\neq i$; * $x^{h_i}y^{h_j}$, where $1\leq i,j\leq k-1$, $j\neq i-1,i,i+1$, if $k\geq 4$; * $[x^{h_{i}}, y^{h_{i+1}}]$, where $1\leq i \leq k-2$; * $x^{e_{ij}} y^{e_{ji}} + x^{e_{i+1,l}}y^{e_{l,i+1}}-x^{h_i}y^{h_i}$, where $ 1\leq i \leq k-1$, $1\leq j,l\leq k$, $j\neq i$, $ l\neq i+1$; * $x^{e_{i j}}y^{h_{j-1}}+ x^{e_{i l}} y^{e_{lj}}$, where $1\leq i,l\leq k$, $2\leq j \leq k$, $j\neq i$, $i,j\neq l$. In addition, $c_n^U(\M)=k^{2(n+1)}- (k^2-1)(n+1).$ The proof of this result follows a scheme similar to that of Proposition <ref> for $2\times2$ matrices, with different computations. Firstly, note that there are no nontrivial identities of degree $1$ and thus $c_1^U(\M) = \dim_F(U) = (k^2-1)^2 +1 = k^4 - 2(k^2-1)$ as expected, so henceforth we assume $n\geq2$. Let $I$ be the $T_U$-ideal generated by the $U$-polynomials in the statement of the proposition. By Lemma <ref> it follows that $I\subseteq \I^U(\M)$.
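The containment $I\subseteq\I^U(\M)$ ultimately rests on matrix-unit computations, and the codimension formula on elementary counting; both can be spot-checked numerically. A sketch assuming $h_i=e_{ii}-e_{i+1,i+1}$ (the helper names are illustrative):

```python
from math import comb
import numpy as np

def E(i, j, k):
    """Matrix unit e_{ij} in M_k (1-based indices)."""
    m = np.zeros((k, k))
    m[i - 1, j - 1] = 1.0
    return m

def H(i, k):
    """Assumed standard Cartan basis element h_i = e_{ii} - e_{i+1,i+1}."""
    return E(i, i, k) - E(i + 1, i + 1, k)

def check_generators(k):
    """Evaluate two of the generator families in M_k; all must vanish."""
    # x^{h_{i-1}} y^{h_i} + x^{e_{ij}} y^{e_{ji}}  (2 <= i <= k-1, j != i)
    for i in range(2, k):
        for j in range(1, k + 1):
            if j != i:
                assert np.allclose(H(i - 1, k) @ H(i, k) + E(i, j, k) @ E(j, i, k), 0.0)
    # x^{e_{ij}} y^{e_{ji}} + x^{e_{i+1,l}} y^{e_{l,i+1}} - x^{h_i} y^{h_i}
    for i in range(1, k):
        for j in range(1, k + 1):
            for l in range(1, k + 1):
                if j != i and l != i + 1:
                    assert np.allclose(E(i, j, k) @ E(j, i, k)
                                       + E(i + 1, l, k) @ E(l, i + 1, k)
                                       - H(i, k) @ H(i, k), 0.0)

def c_n_U(n, k):
    """Formula (C) with the case values 1 (r=n), k^2-1 (r=n-1), k^2 (else)."""
    piece = lambda r: 1 if r == n else (k * k - 1 if r == n - 1 else k * k)
    return sum(comb(n, r) * (k * k - 1) ** (n - r) * piece(r) for r in range(n + 1))

for k in (3, 4, 5):
    check_generators(k)
    # closed form claimed in the proposition
    assert all(c_n_U(n, k) == k ** (2 * (n + 1)) - (k * k - 1) * (n + 1)
               for n in range(1, 9))
```

The remaining generator families can be checked the same way; each evaluation carries a common scalar factor $\mu_a^x\mu_a^y$, so vanishing of the matrix expression suffices.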
To prove the opposite inclusion, we first find a set of generators of $P^U_{r,n-r}$ modulo $P^U_{r,n-r}\cap I$, for each $n \geq 1$ and $0\leq r\leq n$, and after that we show, by evaluation, that the sets of generators found are actually bases of their corresponding vector spaces. Let $f\in P^U_{r,n-r}$ be a multilinear $U$-polynomial of degree $n$. In order to simplify the notation, for all $ 1\leq i \leq n-r$ let us rename $x_{r+i}^a$ to $y_i^a$, $a\in \mathcal{S}$, so that variables $x_1,\ldots,x_r$ correspond to exponents $g$ and variables $y_1,\ldots,y_{n-r}$ correspond to exponents $a\in\mathcal S$. Since $[x^g_1,x^g_2], \ [x^g, y^a] \in I$ for all $a\in \mathcal{S}$, $f$ modulo $I$ is a linear combination of $U$-monomials of type \[x_1^g \cdots x_r^g y_{i_1}^{a_1}\cdots y_{i_{n-r}}^{a_{n-r}}\] where $a_{1},\dots, a_{n-r}\in \mathcal{S}$. So $P^U_{n,0}$ is generated modulo $P^U_{n,0}\cap I$ by the $U$-monomial $x_1^g \cdots x_n^g$. If $r=n-1$ then $f$ modulo $I$ is a linear combination of $U$-monomials \begin{equation} \label{generatori r=n-1} x^g_1 \cdots x^g_{n-1}y_1^{h_i} \text{ for }1\leq i \leq k-1\text{ and }x^g_1 \cdots x^g_{n-1}y_1^{e_{jl}}\text{ for }1\leq j,l \leq k, j\neq l. \end{equation} It follows that $P^U_{n-1,1}$ is generated modulo $P^U_{n-1,1}\cap I$ by the $U$-monomials in (<ref>). Now suppose that $0\leq r \leq n-2$. By $U$-identities <ref>-<ref>, in each $U$-monomial of $f$ we can move to the left, modulo $I$, all the variables with exponent $h_i$ for $1\leq i \leq k-1$; moreover, since $y_1^a y_2^b - y_2^a y_1^b\in I$ for all $a,b\in \mathcal{S}$, we can always reorder the indices of the variables with exponent in $\mathcal S$. Call this moving and reordering procedure (P1).
From (P1) it follows that $f$ modulo $I$ is a linear combination of $U$-monomials of type \[x^g_1\cdots x^g_r y_{1}^{h_{i_1}} \cdots y_{s}^{h_{i_s}} y_{s+1}^{b_1} \cdots y_{n-r}^{b_{n-r-s}}\] for $0\leq s\leq n-r$, $1\leq i_1, \dots, i_s\leq k-1$ and $b_1,\dots, b_{n-r-s}\in \{e_{ij}\ | \ 1\leq i,j\leq k, i\neq j\}$. Since $y_1^{e_{ij}}y_2^{e_{lm}}\in I$ for $1\leq i,j,l,m\leq k$, $j\neq l$, we can require in addition that the sequence of exponents $b_1,\dots, b_{n-r-s}$ has nonzero product $b_1\cdots b_{n-r-s}$, i.e., if $b_t=e_{ij}$ and $b_{t+1}=e_{lm}$ then $j=l$. Notice that the $U$-identities of Lemma <ref> also belong to $I$, thus by $U$-identities <ref>,<ref> and Lemma <ref> we can reduce each pair $y_t^{e_{il}}y_{t+1}^{e_{lj}}$: * If $i\neq j,k$, to the pair $y_t^{h_i}y_{t+1}^{e_{ij}}$. * If $i=k\neq j$, to the pair $y_t^{h_{k-1}}y_{t+1}^{e_{kj}}$. * if $i=j$, to a linear combination of one or two $U$-monomials of the form $y_t^{h_l}y_{t+1}^{h_m}$ with $1\leq l\leq k-1$ and $m\in\{l,l+1\}$. Call this reduction procedure (P2). By applying (P1) and (P2) repeatedly, we find that $f$ modulo $I$ is a linear combination of $U$-monomials of types \[x^g_1\cdots x^g_r y_{1}^{h_{i_1}} \cdots y_{n-r}^{h_{i_{n-r}}}\text{ and }x^g_1\cdots x^g_r y_{1}^{h_{i_1}} \cdots y_{n-r-1}^{h_{i_{n-r-1}}} y_{n-r}^{e_{ij}}\] for $ 1\leq i_1, \dots, i_{n-r}\leq k-1$ and $1\leq i,j\leq k$, $i\neq j$. By $U$-identity <ref> we can assume that the product of the exponents $h_{i_1}\cdots h_{i_{n-r}}$ is nonzero, and by $U$-identity <ref> we can assume that $i_1 \leq \cdots \leq i_{n-r}$; hence we may assume that $i_{j+1}\in\{i_j,i_j+1\}$ for all $1\leq j<n-r$.
Moreover, since for $k\geq 4$ we have, by $U$-identities <ref>,<ref>, \[y_1^{h_{i}}y_2^{h_{i+1}}y_3^{h_{i+2}} = [y_1^{h_{i}},y_2^{h_{i+1}}]y_3^{h_{i+2}}+y_2^{h_{i+1}}(y_1^{h_{i}}y_3^{h_{i+2}})\in I\] for $1\leq i \leq k-3$, we can assume that, for all $k\geq 3$, in each $U$-monomial of $f$ modulo $I$ there are at most two distinct (and consecutive) exponents $h_i$ ($1\leq i\leq k-1$), the rest of them being copies of one of those. In addition, by $U$-identities <ref> (applied twice) and <ref> we find \begin{align*} &y_1^{h_{i}}y_2^{h_{i+1}} +y_1^{h_{i-1}}y_2^{h_i} + y_1^{h_i}y_2^{h_i} =\\ &(y_1^{h_{i}}y_2^{h_{i+1}} + y_1^{e_{i+1,j}}y_2^{e_{j,i+1}}) + (y_1^{h_{i-1}}y_2^{h_i} + y_1^{e_{ij}}y_2^{e_{ji}})-(y_1^{e_{ij}} y_2^{e_{ji}} + y_1^{e_{i+1,j}} y_2^{e_{j,i+1}}-y_1^{h_i}y_2^{h_i})\in I \end{align*} for all $2\leq i \leq k-2$, so by recursion on $i$ we may suppose that, in each $U$-monomial of $f$, either all exponents $h_i$ are equal (for some $1\leq i\leq k-1$) or there are two distinct exponents, $h_1$ and $h_2$. Now notice that by $U$-identities <ref>,<ref>,<ref> we have \[y_1^{h_1}y_2^{h_2}y_3^{h_2} + y_1^{h_1} y_2^{h_1}y_3^{h_2} = y_1^{h_1}(y_2^{h_2}y_3^{h_2}-y_2^{e_{21}}y_3^{e_{12}}-y_2^{e_{31}}y_3^{e_{13}}) +y_1^{h_1}(y_2^{h_1}y_3^{h_2}+y_2^{e_{21}}y_3^{e_{12}})+ (y_1^{h_1}y_2^{e_{31}})y_3^{e_{13}}\in I,\] so $y_1^{h_1}y_2^{h_2}y_3^{h_2} \equi I -y_1^{h_1} y_2^{h_1}y_3^{h_2}$. Hence $f$ modulo $I$ is a linear combination of $U$-monomials of types \[x^g_1\cdots x^g_r y_1^{h_i} \cdots y_{n-r}^{h_i},\quad x^g_1\cdots x^g_r y_1^{h_1} \cdots y_{n-r-1}^{h_1}y_{n-r}^{h_2},\quad x^g_1\cdots x^g_r y_1^{h_i} \cdots y_{n-r-1}^{h_i} y_{n-r}^{e_{lj}},\quad x^g_1\cdots x^g_r y_1^{h_1} \cdots y_{n-r-2}^{h_1} y_{n-r-1}^{h_2} y_{n-r}^{e_{lj}},\] for $1\leq i \leq k-1$, $1\leq j, l \leq k$, $l\neq j$.
Finally, since $y_1^{h_{i-1}}y_2^{e_{ij}} + y_1^{h_i} y_2^{e_{ij}} \in \langle y_1^{h_{i-1}}y_2^{e_{i j}}+ y_1^{e_{i l}} y_2^{e_{lj}}, \ y_1^{e_{i j}}y_2^{h_{j-1}}+ y_1^{e_{i l}} y_2^{e_{lj}}, \ y_1^{h_i}y_2^{e_{ij}}+ y_1^{e_{ij}}y_2^{h_{j-1}} \rangle_{T_U},$ where $2\leq i \leq k-1$, $1\leq j, l \leq k$, $j\neq i$, $l\neq i,j$, and by $U$-identity <ref>, it follows that $f$ modulo $I$ is a linear combination of the following $U$-monomials: \begin{equation}\label{relativamente libera Mk} \begin{split} & x^g_1\cdots x^g_r y_1^{h_i} \cdots y_{n-r}^{h_i},\ \ x^g_1\cdots x^g_r y_1^{h_1} \cdots y_{n-r-1}^{h_1} y_{n-r}^{h_2},\\ & x^g_1\cdots x^g_r y_1^{h_i} \cdots y_{n-r-1}^{h_i} y_{n-r}^{e_{ij}},\ \ x^g_1\cdots x^g_r y_1^{h_{k-1}} \cdots y_{n-r-1}^{h_{k-1}} y_{n-r}^{e_{kl}}, \end{split} \end{equation} where $1\leq i, l \leq k-1$, $1\leq j \leq k$, $j\neq i$. Thus we have that $P^U_{r, n-r}$ modulo $P^U_{r,n-r}\cap I$ is generated by the $U$-monomials in (<ref>). The $U$-monomial $x_1^g \cdots x_n^g$ can be seen to be nonzero modulo $I$ by evaluating $x_1=\cdots =x_n =g$. We next show that the $U$-monomials in (<ref>) and (<ref>) are linearly independent modulo $\I^U(\M)$ if $r=n-1$ or $0\leq r\leq n-2$, respectively. To that end, let us assume first that $f\in \I^U(\M)$ is a linear combination of $U$-monomials in (<ref>), i.e., \[f=\sum_{1\leq i \leq k-1} \alpha_i x^g_1 \cdots x^g_{n-1} y_1^{h_i}+\sum_{\substack{1\leq j,l \leq k\\ j\neq l}} \beta_{jl} x^g_1 \cdots x^g_{n-1} y_1^{e_{jl}}\] for some $\alpha_i,\beta_{jl}\in F$. From the evaluation $x_1=\cdots=x_{n-1}=g\text{ and } y_1=\sum_{a\in\mathcal S} a$ we get \[\sum_{1\leq i \leq k-1} \alpha_i h_i + \sum_{\substack{1\leq j,l \leq k\\ j\neq l}} \beta_{jl} e_{jl}=0,\] from which it follows, since $\mathcal S$ is a linearly independent set, that $\alpha_i=\beta_{lj}=0$ for all $1\leq i \leq k-1$, $1\leq j , l \leq k$, $j \neq l$. Therefore the $U$-monomials in (<ref>) are linearly independent modulo $\I^U(\M)$. 
Let us assume now that $f\in \I^U(\M)$ is such that \begin{align*} f= & \sum_{1\leq i \leq k-1} \alpha_i x^g_1\cdots x^g_r y_1^{h_i} \cdots y_{n-r}^{h_i} + \beta x^g_1\cdots x^g_r y_1^{h_1} \cdots y_{n-r-1}^{h_1} y_{n-r}^{h_2}+ \sum_{\substack{1\leq i \leq k-1\\ 1\leq j \leq k\\ i\neq j}} \gamma_{ij} x^g_1\cdots x^g_r y_1^{h_i} \cdots y_{n-r-1}^{h_i} y_{n-r}^{e_{ij}}+\\ +&\sum_{1\leq l \leq k-1} \gamma_{kl} x^g_1\cdots x^g_r y_1^{h_{k-1}} \cdots y_{n-r-1}^{h_{k-1}} y_{n-r}^{e_{kl}} . \end{align*} If we evaluate $x_1=\cdots=x_r=g$ and $y_1=\cdots=y_{n-r}=\sum_{a\in\mathcal S} a$, we get \[\sum_{1\leq i \leq k-1} \alpha_i (e_{ii}+(-1)^{n-r} e_{i+1,i+1})+ (-1)^{n-r-1}\beta e_{22} +\sum_{\substack{1\leq i \leq k-1\\ 1\leq j \leq k\\ i\neq j}} \gamma_{ij} e_{ij}+ \sum_{1\leq l \leq k-1} (-1)^{n-r-1} \gamma_{kl} e_{kl} =0,\] which produces $\alpha_{i}=\beta=\gamma_{jl}=0$ for all $1\leq i \leq k-1$, $1\leq j,l\leq k$. Therefore the elements in (<ref>) are linearly independent modulo $P^U_{r,n-r}\cap \I^U(\M)$. Since $P^U_{r,n-r}\cap \I^U(\M) \supseteq P^U_{r,n-r}\cap I$ for all $n\in\N$ and $0\leq r\leq n$, while the displayed generators of $P^U_{r,n-r}$ modulo $P^U_{r,n-r}\cap I$ remain linearly independent modulo $P^U_{r,n-r}\cap \I^U(\M)$, the two intersections coincide; this proves that $\I^U(\M)=I$, with $x_1^g\cdots x_n^g$ and the elements in (<ref>), (<ref>) forming a basis of $P^U_{r,n-r}$ modulo $P^U_{r,n-r}\cap \I^U(\M)$ for $r=n$, $r=n-1$ and $0\leq r \leq n-2$, respectively. Thus, by counting we get \[c^U_{r,n-r}(\M)=\begin{cases} 1, & \mbox{ if } r=n, \\ k^2 -1, & \mbox{ if } r=n-1, \\ k^2, & \mbox{ if } 0\leq r\leq n-2. \end{cases}\] Hence, by Formula (<ref>) it follows that \begin{align*} c_n^U(\M) &= \sum_{r=0}^{n} \binom{n}{r} (k^2 -1)^{n-r} c^U_{r,n-r}(\M)=k^2 \sum_{r=0}^{n} \binom{n}{r} (k^2 -1)^{n-r} -(k^2 -1)n-(k^2 -1) = \\ &= k^{2(n+1)}-(k^2 -1)(n+1).\qedhere \end{align*} Modifying the second index of an exponent. Through the $L$-action, the second basis element of the subindex of $\varphi_{ab}$ can also be changed in the search for identities, albeit the result is less straightforward.
In particular, if $d\in L$ is a derivation then $(x^{ca}y^{cb})^d = x^{(ca)d}y^{cb} + x^{ca}y^{(cb)d}$. We will resort to this method to reduce the number of generators in the basis of $\I^U(\M)$. We are working in $\FU$, which has no $U$-action (see Remarks <ref>). Nevertheless, to simplify the notation, instead of writing exponents belonging to $\Uop$ whose action would eventually project to $U^{\op}$ once they landed on isolated variables, we will write the exponents directly in $U^{\op}$ with the caution of evaluating them only on isolated variables. For example, to apply the action of $e_{12}^2\in \Uop$ to $x^uy^v\in\FU$, we write \[(x^uy^v)^{E_{12}^2} = x^{uE_{12}^2}y^v + 2x^{uE_{12}}y^{vE_{12}} + x^uy^{vE_{12}^2}\] with $E_{12}\in U^{\op}$, and only then simplify the exponents $uE_{12}^2,uE_{12},vE_{12},vE_{12}^2$ by computing in $U^{\op}$. The $T_U$-ideal of $U$-identities of $\M$ is generated by the following $U$-polynomials: * Either $x^{e_{12}e_{12}}y^{e_{12}e_{12}}$ if $k=2$ or $x^{e_{12}e_{12}}y^{e_{12}e_{31}}$ if $k\geq3$, * $x^{e_{12}e_{12}}y^{e_{12}e_{21}}-y^{e_{12}e_{12}}x^{e_{12}e_{21}}$, * $[x^{gg},y^{gg}]$, * $[x^{gg},y^{e_{12}e_{12}}]$. By Propositions <ref> and <ref>, the $T_U$-ideal of $U$-identities of $\M$ is generated by the following list (L) of identities, for fixed $a\in\mathcal S$: * $x^{ae_{ij}}y^{ae_{lm}}$ with $1\leq i\leq k-1$, $2\leq j\leq k$, $j\neq i$. * $x^{ah_i}y^{ae_{jl}}$ with $1\leq i \leq k-1$, $1\leq j, l \leq k$, $ j\neq i, i+1$, $l\neq j$; * $x^{ae_{jl}}y^{ah_i}$ with $1\leq i \leq k-1$, $1\leq j,l\leq k$, $l\neq i, i+1$, $j\neq l$; * $x^{ah_i}y^{ah_j}$ with $1\leq i,j\leq k-1$, $j\neq i-1,i,i+1$. 
* $ x^{ae_{ij}}y^{ah_{j-1}} + x^{ae_{il}} y^{ae_{lj}}$ with $1\leq i,l\leq k$, $2\leq j \leq k$, $l,j\neq i$, $j\neq l$; * $x^{ah_{i-1}}y^{ae_{ij}} + x^{ae_{il}} y^{ae_{lj}}$ with $2\leq i \leq k$, $1\leq j,l\leq k$, $ l,j\neq i$, $ j\neq l$; * $x^{ah_i}y^{ae_{ij}}+ x^{ae_{ij}}y^{ah_{j-1}}$ with $ 1\leq i \leq k-1$, $ 2\leq j \leq k$, $j\neq i$; * $ x^{ah_{i-1}}y^{ae_{ij}}+ x^{ae_{ij}}y^{ah_j}$ with $ 2\leq i \leq k$, $1\leq j \leq k-1$, $ j\neq i$; * $x^{ah_{i-1}} y^{ah_i} + x^{ae_{ij}} y^{ae_{ji}}$ with $2\leq i \leq k-1$, $1\leq j \leq k$, $j\neq i$; * $x^{ae_{ij}} y^{ae_{ji}} + x^{ae_{i+1,l}} y^{ae_{l,i+1}}-x^{ah_i} y^{ah_i}$ with $ 1\leq i \leq k-1$, $1\leq j,l\leq k$, $j\neq i$, $l\neq i+1$; * $x^{ab} y^{ac} - y^{ab} x^{ac}$ with $b,c\in \mathcal{S}$; * $[x^{gg},y^{gg}], \, [x^{gg} y^{ab}]$ with $b\in \mathcal{S}$; * $[x^{ah_{i}}, y^{ah_{i+1}}]$ with $1\leq i \leq k-2$; where some identities are not realized for $k=2$ and $k=3$. Let $J$ be the $T_U$-ideal of $\FU$ generated by either $x^{e_{12}e_{12}}y^{e_{12}e_{12}}$ if $k=2$ or $x^{e_{12}e_{12}}y^{e_{12}e_{31}}$ if $k\geq3$, $x^{e_{12}e_{12}}y^{e_{12}e_{21}}-y^{e_{12}e_{12}}x^{e_{12}e_{21}}$, $[x^{gg},y^{gg}]$ and $[x^{gg},y^{e_{12}e_{12}}]$. Clearly $J\subseteq\I^U(\M)$. In the following, we will prove that $J=\I^U(\M)$ by showing that all identities in the list (L) belong to $J$, mainly by the action of inner derivations. From now on we change, as we may, the first exponent index $a$ of any variable $x^{ab}$ with $b\in\mathcal S$ to $e_{12}$ and elide it by writing $x^b$; we also write $x^g$ for $x^{gg}$. 
By the bracket formula (<ref>), the action of the inner derivation $E_{ij}$ (generated by $e_{ij}$) on $x^{e_{ab}}y^{e_{cd}}$ gives \[(x^{e_{ab}}y^{e_{cd}})^{E_{ij}}=\delta_{ja}x^{e_{ib}}y^{e_{cd}} -\delta_{bi}x^{e_{aj}}y^{e_{cd}} +\delta_{jc}x^{e_{ab}}y^{e_{id}} -\delta_{di}x^{e_{ab}}y^{e_{cj}}\] for $i,j,a,b,c,d\in \{1,\ldots,k\}$, $i\neq j, a\neq b, c\neq d$, where $\delta_{rs}$ denotes Kronecker's delta (see <ref> to review the key computational facts about inner derivations that we will need in the sequel). From now on we assume, without further notice, that any element of the form $e_{ab}$ or $E_{ab}$ satisfies $a\neq b$, and we impose this restriction wherever such an element appears.
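The Kronecker-delta pattern in the displayed formula is exactly the elementary-matrix commutator $[e_{ij},e_{ab}]=\delta_{ja}e_{ib}-\delta_{bi}e_{aj}$ applied to each exponent via the Leibniz rule; a quick numerical check (0-based indices, $k=4$ illustrative):

```python
import numpy as np

k = 4  # illustrative size; the identity holds for any k

def e(a, b):
    """Elementary matrix e_{ab} (0-based indices)."""
    m = np.zeros((k, k))
    m[a, b] = 1.0
    return m

# [e_ij, e_ab] = delta_{ja} e_ib - delta_{bi} e_aj: the matrix identity
# behind the Kronecker-delta pattern in the displayed action of E_ij.
for i in range(k):
    for j in range(k):
        for a in range(k):
            for b in range(k):
                lhs = e(i, j) @ e(a, b) - e(a, b) @ e(i, j)
                rhs = (j == a) * e(i, b) - (b == i) * e(a, j)
                assert np.array_equal(lhs, rhs)
```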
We propose and study a novel graph clustering method for data with an intrinsic network structure. Similar to spectral clustering, we exploit an intrinsic network structure of data to construct Euclidean feature vectors. These feature vectors can then be fed into basic clustering methods such as k-means or Gaussian mixture model (GMM) based soft clustering. What sets our approach apart from spectral clustering is that we do not use the eigenvectors of a graph Laplacian to construct the feature vectors. Instead, we use the solutions of total variation minimization problems to construct feature vectors that reflect connectivity between data points. Our motivation is that the solutions of total variation minimization are piece-wise constant around a given set of seed nodes. These seed nodes can be obtained from domain knowledge or by simple heuristics that are based on the network structure of data. Our results indicate that our clustering methods can cope with certain graph structures that are challenging for spectral clustering methods.

Keywords: machine learning; clustering; non-smooth optimization; community detection; complex networks

§ INTRODUCTION

The analysis of networked data is often facilitated by grouping or clustering the data points into coherent subsets of data points. Clustering methods aim at finding subsets (clusters) of data points that are more similar to each other than to the remaining data points [29]. Many basic clustering algorithms aim at clusters that are enclosed by a hypersphere or hyperellipsoid in a Euclidean feature space. These methods are most successful if data points are characterized by Euclidean feature vectors whose distance is small (large) for data points in the same (different) cluster(s). Graph clustering methods can be applied to data with an intrinsic network structure that reflects a notion of similarity between different data points.
Many important application domains, ranging from the Internet of Things to the management of pandemics, generate distributed collections of local datasets (“big data over network”) [22, 23]. The network structure of these local datasets might be induced by spatio-temporal proximity (“contact networks”), statistical dependencies, or functional relations [1, 2]. § PROBLEM FORMULATION We represent networked data using an undirected “empirical” graph $\graph=(\nodes, \edges, \edgeweights)$ [24, 9]. Every node $\nodeidx \in \nodes = \{1,\ldots,\nrnodes\}$ of the empirical graph represents a datapoint. The dataset consists of $\nrnodes$ different datapoints. A datapoint might be a single sensor measurement, an entire time series, or even a whole collection of videos. Our approach does not take the individual nature of datapoints into account. Rather, we only use their similarities as encoded in the weighted edges of $\graph$. In particular, two nodes $\nodeidx, \nodeidx'$ are connected by an edge $\edge{\nodeidx}{\nodeidx'}$ if the corresponding datapoints are similar. The amount of similarity is quantified by a positive edge weight $\edgeweight_{\nodeidx,\nodeidx'}$. We find it convenient to use an oriented (or directed) version of the empirical graph $\graph$ by declaring, for each undirected edge $\edge{\nodeidx}{\nodeidx'}$, the node $\min\{\nodeidx,\nodeidx'\}$ as tail and the other node $\max\{\nodeidx,\nodeidx'\}$ as head in the corresponding directed edge. The resulting oriented empirical graph $\overrightarrow{\graph} \defeq \big( \nodes, \overrightarrow{\edges}, \edgeweights \big)$ has the same nodes as $\graph$ and contains the directed edge $\directededge{\nodeidx}{\nodeidx'}$ from node $\nodeidx$ to node $\nodeidx'$ if and only if $\nodeidx < \nodeidx'$ and $\edge{\nodeidx}{\nodeidx'} \in \edges$. 
This directed edge $\directededge{\nodeidx}{\nodeidx'} \in \overrightarrow{\edges}$ has the same weight $\edgeweight_{\nodeidx,\nodeidx'}$ as the corresponding undirected edge $\edge{\nodeidx}{\nodeidx'} \in \edges$. We will use several matrices that are naturally associated with an empirical graph $\graph$. The weight matrix $\edgeweights$ contains the edge weight $\edgeweight_{\nodeidx,\nodeidx'}$ in its $\nodeidx$th row and $\nodeidx'$th column. The diagonal degree matrix $\degreemtx \defeq {\rm diag}\{\nodedegree{1},\ldots,\nodedegree{\nrnodes}\}$ collects the (weighted) node degrees $\nodedegree{\nodeidx}=\sum_{\nodeidx'} \edgeweight_{\nodeidx,\nodeidx'}$ for each node $\nodeidx$. The graph Laplacian matrix $\mathbf{L} \defeq \degreemtx - \edgeweights$ is instrumental for spectral clustering methods (see Section <ref>). There are also vector spaces naturally associated with an empirical graph. The vector space consisting of maps $\vu$ from nodes to real numbers is denoted $\mathbb{R}^{\nrnodes}$. A vector $\vu \in \mathbb{R}^{\nrnodes}$ is a function or map that assigns each node $\nodeidx \in \nodes$ a real number $u_{\nodeidx}$. Similarly, we define the vector space $\mathbb{R}^{\overrightarrow{\edges}}$ that consists of vectors $\edgesigvec$ that assign each directed edge $\directededge{\nodeidx}{\nodeidx'} \in \overrightarrow{\edges}$ a real number $\edgesig_{\directededge{\nodeidx}{\nodeidx'}}$. Many graph clustering methods, such as spectral clustering, rely on a clustering assumption. This clustering assumption applies to a dataset for which we know a useful empirical graph [6, 7, 9]. A cluster is then constituted by data points whose nodes in $\graph$ are more densely connected with each other than with nodes representing data points outside the cluster [15, 18]. This informal clustering assumption can be made precise by constraining the cut-size of a cluster [9].
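The matrices just introduced are straightforward to materialize; a minimal sketch for a toy empirical graph (the node count and edge weights are illustrative):

```python
import numpy as np

# Toy empirical graph on 5 nodes: two well-connected groups joined by
# a single weak edge (all node indices and weights are illustrative).
W = np.zeros((5, 5))
for i, j, wt in [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0), (3, 4, 1.0), (2, 3, 0.1)]:
    W[i, j] = W[j, i] = wt              # symmetric weight matrix

D = np.diag(W.sum(axis=1))              # diagonal matrix of weighted degrees
L = D - W                               # graph Laplacian

assert np.allclose(L.sum(axis=1), 0.0)          # constant vectors lie in the nullspace
assert np.all(np.linalg.eigvalsh(L) >= -1e-12)  # L is positive semi-definite
```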
By the maxflow/mincut duality, requiring a small cut is equivalent to bounding the maximum amount of network flow that can be routed from the nodes inside a cluster through its boundary [25]. Using the above clustering assumption, we construct feature vectors $\featurevec^{(\nodeidx)}$ for each node (or datapoint) $\nodeidx \in \nodes$. The feature vectors will be constructed such that two nodes $\nodeidx, \nodeidx'$ that belong to a well-connected subset of nodes (a cluster) have feature vectors $\featurevec^{(\nodeidx)}, \featurevec^{(\nodeidx')}$ with a small Euclidean distance $\big\| \featurevec^{(\nodeidx)}- \featurevec^{(\nodeidx')} \big\|$. Loosely speaking, the feature construction maps similarity between datapoints in the empirical graph $\graph$ into proximity of their Euclidean feature vectors. This allows us then to apply standard clustering methods, such as k-means or soft clustering [29], to the network structured data.

§ SPECTRAL CLUSTERING

Before we detail our construction of the feature vectors $\featurevec^{(\nodeidx)}$, let us briefly review the construction used by spectral graph clustering methods [9]. The most basic variant of spectral clustering constructs the node feature vectors $\featurevec^{(\nodeidx)}$ using the eigenvectors of the graph Laplacian matrix $\mathbf{L} = \degreemtx - \edgeweights$ [9]. The matrix $\mathbf{L}$ is positive semi-definite (psd) and therefore we can find an orthonormal set of eigenvectors [11] \begin{equation} \label{equ_def_eigvals} \vu^{(1)},\ldots,\vu^{(\nrnodes)} \mbox{ with eigenvalues } 0 \leq \eigval{1} \leq \ldots \leq \eigval{\nrnodes}. \end{equation} For a given number $\nrcluster$ of clusters, spectral clustering methods use the feature vectors \begin{equation} \label{equ_def_feat_vec_spec_cluster} \featurevec^{(\nodeidx)} = \big( u^{(1)}_{\nodeidx}, \ldots,u^{(\nrcluster)}_{\nodeidx} \big)^{T} \mbox{ for every node } \nodeidx \in \nodes. \end{equation} The Ideal Case.
To develop some intuition for the usefulness of the construction (<ref>), consider an empirical graph that contains $\nrcluster$ components $\datacluster{1},\ldots,\datacluster{\nrcluster} \subseteq \nodes$ that are not connected by any edge. In this case, the first $\nrcluster$ (possibly repeated) eigenvalues in (<ref>) are equal to $0$. The corresponding eigenvectors $\vu^{(1)},\ldots,\vu^{(\nrcluster)}$ constitute an orthonormal basis for the subspace of $\mathbb{R}^{\nrnodes}$ that is spanned by the indicator vectors $\mathbf{e}^{(\clusteridx)}$ of the components $\datacluster{\clusteridx}$, with $e_{\nodeidx}^{(\clusteridx)} = 1$ for nodes $\nodeidx \in \datacluster{\clusteridx}$ and $e_{\nodeidx}^{(\clusteridx)} = 0$ for nodes $\nodeidx \notin \datacluster{\clusteridx}$. Thus, the feature vectors of nodes in the same cluster $\datacluster{\clusteridx}$ are identical and orthogonal to the feature vectors of nodes in different clusters $\datacluster{\clusteridx'}$ with $\clusteridx' \neq \clusteridx$. General Case. In general, the empirical graph will contain edges between different clusters. A widely used approach to analyzing spectral clustering methods is to consider the effect of these inter-cluster edges as perturbations of the Laplacian matrix $\mL$. For sufficiently small perturbations, the subspace spanned by the first $\nrcluster$ eigenvectors of $\mL$ will be close to the subspace spanned by the indicator vectors $\mathbf{e}^{(\clusteridx)}$. This deviation can be made precise using tools from linear algebra and results in conditions on the empirical graph for the success of k-means when applied to the feature vectors (<ref>) [9, 6]. There are certain types of empirical graphs that are challenging for spectral clustering methods that use the features (<ref>) [9, 28, 27]. For example, spectral methods tend to fail for datasets that consist of clusters with significantly varying sizes (see Section <ref>).
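The ideal case above can be verified numerically: for a graph with two disconnected components, the Laplacian has two zero eigenvalues, and the resulting feature vectors are identical within, and orthogonal across, components. A small sketch (toy graph; the component sizes are illustrative):

```python
import numpy as np

# Two disconnected components: a triangle {0,1,2} and an edge {3,4}.
W = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4)]:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

vals, vecs = np.linalg.eigh(L)
assert np.allclose(vals[:2], 0.0)      # one zero eigenvalue per component

# Feature vector of node i: the i-th row of the first k = 2 eigenvectors.
F = vecs[:, :2]
# Identical within a component ...
assert np.allclose(F[0], F[1]) and np.allclose(F[0], F[2])
assert np.allclose(F[3], F[4])
# ... and orthogonal across components.
assert abs(F[0] @ F[3]) < 1e-8
```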
Another example of a challenging network structure for spectral clustering is a chain graph [26]. Consider a chain graph that consists of two clusters connected via a single edge. The weights of all intra-cluster edges are equal to $1$ while the weight $\edgeweight_{o}$ of the single boundary edge is slightly smaller. As the chain graph is connected, $\eigval{1}=0$ with corresponding eigenvector $\vu^{(1)}$ having identical entries. The smallest non-zero eigenvalue $\eigval{2}$ has a corresponding eigenvector $\vu^{(2)}$ whose entries are depicted in Figure <ref>. The resulting feature vectors (see (<ref>) with $k=2$) do not reflect well the cluster structure of the chain graph.

[Figure: Solution of TV minimization (“$\circ$”) for the chain graph obtained from Algorithm <ref> using $\nriter=1000$ iterations. Entries (“$\star$”) of the eigenvector $\vu^{(2)}$ corresponding to the smallest non-zero eigenvalue $\eigval{2}$ (see (<ref>)).]
[Figure: Scatterplot of (scalar) node features $\feature_{\nodeidx}$ constructed by spectral and flow-based clustering for the chain graph in Figure <ref>. Spectral clustering uses the entries (“$\star$”) of the eigenvector $\vu^{(2)}$ (see (<ref>)). Flow-based clustering uses the solution of TV minimization (“$\circ$”) (see (<ref>)).]

We next describe our novel construction of feature vectors. The idea is to replace the eigenvectors of the Laplacian in (<ref>) with solutions of total variation (TV) minimization problems. It turns out that these feature vectors better reflect the cluster geometry of $\graph$ in terms of bottlenecks for network flows between different clusters. These flow bottlenecks are the boundaries between the clusters that are obtained by applying clustering methods to these feature vectors. [1] D. Koller and N. Friedman, “Probabilistic Graphical Models: Principles and Techniques,” Adaptive Computation and Machine Learning, MIT Press, 2009. [2] S. Banerjee, B.P. Carlin, and A.E. Gelfand, “Hierarchical Modeling and Analysis for Spatial Data,” Chapman and Hall/CRC, 2015. [3] H.-J. Li and J.J. Daniels, “Social significance of community structure: Statistical view,” Phys. Rev. E, 91(1):012801, Jan. 2015. [4] T. Pock and A. Chambolle, “Diagonal preconditioning for first order primal-dual algorithms in convex optimization,” in IEEE ICCV, Nov. 2011.
[5] A. Jung, “On the duality between network flows and network lasso,” IEEE Sig. Proc. Lett., 27:940–944, 2020. [6] A. Y. Ng, M. I. Jordan, and Y. Weiss, “On spectral clustering: Analysis and an algorithm,” in Adv. Neur. Inf. Proc. Syst., 2001. [7] D. Spielman, “Spectral graph theory,” in U. Naumann and O. Schenk, editors, Combinatorial Scientific Computing, Chapman and Hall/CRC, 2012. [8] R. T. Rockafellar, “Convex Analysis,” Princeton Univ. Press, 1970. [9] U. von Luxburg, “A tutorial on spectral clustering,” Statistics and Computing, vol. 17, no. 4, pp. 395–416, Dec. 2007. [10] A. Jung and Y. SarcheshmehPour, “Local Graph Clustering With Network Lasso,” IEEE Signal Processing Letters, vol. 28, pp. 106–110, 2020. [11] G. H. Golub and C. F. Van Loan, “Matrix Computations,” 3rd ed., Johns Hopkins University Press, 1996. [12] D. Hallac, J. Leskovec, and S. Boyd, “Network lasso: Clustering and optimization in large graphs,” in Proc. SIGKDD, 2015, pp. 387–396. [13] S. Boyd and L. Vandenberghe, “Convex Optimization,” Cambridge Univ. Press, Cambridge, UK, 2004. [14] A. Jung, A. O. Hero, A. Mara, S. Jahromi, A. Heimowitz, and Y.C. Eldar, “Semi-supervised learning in network-structured data via total variation minimization,” IEEE Trans. Signal Processing, 67(24), Dec. 2019. [15] A. Bertrand and M. Moonen, “Seeing the bigger picture: How nodes can learn their place within a complex ad hoc network topology,” IEEE Signal Processing Magazine, 30(3):71–82, May 2013. [16] M.C.V. Nascimento and A.C. De Carvalho, “Spectral methods for graph clustering: a survey,” European Journal of Operational Research, 211(2):221–231, 2011. [17] A. Jung and N. Tran, “Localized linear regression in networked data,” IEEE Signal Processing Letters, vol. 26, no. 7, pp. 1090–1094, 2019. [18] B. Saha, A. Mandal, S.B. Tripathy, and D. Mukherjee, “Complex Networks, Communities and Clustering: A survey,” CoRR, vol. abs/1503.06277, 2015. [19] B. He, Y. You, and X. Yuan,
“On the convergence of primal-dual hybrid gradient algorithm,” SIAM J. Imaging Sci., 7(4):2526–2537, 2014. [20] C. Lee and D.J. Wilkinson, “A review of stochastic block models and extensions for graph clustering,” Applied Network Science, vol. 4, art. no. 122, 2019. [21] A. Jung, “Clustering in partially labeled stochastic block models via total variation minimization,” in Proc. 54th Asilomar Conf. Signals, Systems, Computers, Pacific Grove, CA, Nov. 2020. [22] S. Cui, A. Hero, Z.-Q. Luo, and J.M.F. Moura, “Big Data over Networks,” Cambridge Univ. Press, Cambridge, UK, 2016. [23] A. Jung, “Federated Learning over Networks for Pandemics,” Manning, 2021. [24] O. Chapelle, B. Schölkopf, and A. Zien, “Semi-Supervised Learning,” The MIT Press, Cambridge, Massachusetts, 2006. [25] Y. Boykov and V. Kolmogorov, “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 9, 2004. [26] A. Jung and M. Hulsebos, “The Network Nullspace Property for Compressed Sensing of Big Data over Networks,” Front. Appl. Math. Stat., 2018. [27] B. Nadler, N. Srebro, and X. Zhou, “Statistical Analysis of Semi-Supervised Learning: The Limit of Infinite Unlabelled Data,” Advances in Neural Information Processing Systems 22, pp. 1330–1338, 2009. [28] B. Nadler and M. Galun, “Fundamental Limitations of Spectral Clustering,” Advances in Neural Information Processing Systems, vol. 19, MIT Press, 2007. [29] A. Jung, “Machine Learning: The Basics,” arXiv e-prints, https://arxiv.org/abs/1805.05052, 2020.

§ TOTAL VARIATION MINIMIZATION

The construction of feature vectors (<ref>) in spectral clustering methods is global. Indeed, the eigenvectors (<ref>) of the graph Laplacian will typically depend on every edge in the empirical graph. Instead, we construct features in a more local fashion by considering local clusters around seed nodes.
These seed nodes might be obtained by domain knowledge (telling us which nodes must belong to the same cluster), based on basic connectivity properties (nodes with high degree), or chosen randomly. Given a set of seed nodes $\seednodes$, we solve the TV minimization problem \begin{equation} \label{equ_def_nLasso} \hat{\vu}^{(\seednodes)} = \argmin_{\vu \in \mathbb{R}^{\nodes}} \sum_{\nodeidx \in \seednodes} (1\!-\!u_{\nodeidx})^2/2\!+\! \sum_{\nodeidx \notin \seednodes} (\alpha/2) u_{\nodeidx} ^2\!+\!\regparam \| \vu \|_{\rm TV}. \end{equation} Here, we used the TV of a vector $\vu \in \mathbb{R}^{\nrnodes}$, \begin{equation} \label{eq_def_TV} \| \vu \|_{\rm TV} = \sum_{\edge{\nodeidx}{\nodeidx'} \in \edges} \edgeweight_{\nodeidx,\nodeidx'} |u_{\nodeidx} - u_{\nodeidx'}|. \end{equation} Note that the TV minimization (<ref>) is parametrized by the set $\seednodes \subseteq \nodes$ of seed nodes. Our clustering methods solve multiple instances of (<ref>), each time using another choice for the seed nodes $\seednodes$. Section <ref> discusses approaches for choosing a useful set of seed nodes. We have recently explored the duality between TV minimization (<ref>) and network flow optimization [10]. This duality allows us to characterize the solution $\hat{\vu}$ of (<ref>) in terms of network flows. A network flow is a vector $\edgesigvec \in \mathbb{R}^{\overrightarrow{\edges}}$ whose entries $\edgesig_{\directededge{\nodeidx}{\nodeidx'}}$ represent a flow from node $\nodeidx$ to $\nodeidx'$.
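The TV minimization problem above can be solved with a generic primal-dual iteration (a Chambolle-Pock-type scheme, cited above, used here in place of the paper's Algorithm <ref>); a minimal sketch on a toy chain graph, where all graph sizes and parameter values are illustrative:

```python
import numpy as np

# Toy chain graph: 20 nodes, clusters C1 = {0,...,3} and C2 = {4,...,19},
# intra-cluster edge weights 1, boundary edge (3,4) with weight w_o = 0.5.
# Seed set S = {0}; lambda and alpha are illustrative values.
N, n1 = 20, 4
w = np.ones(N - 1)
w[n1 - 1] = 0.5                      # boundary edge weight w_o
lam, alpha = 0.5, 0.05
seed = np.zeros(N, dtype=bool)
seed[0] = True

# Weighted incidence operator: (D u)_e = w_e (u_i - u_{i+1}),
# so that lam * ||D u||_1 equals the TV regularizer.
D = np.zeros((N - 1, N))
for e in range(N - 1):
    D[e, e], D[e, e + 1] = w[e], -w[e]

tau = sigma = 0.9 / np.linalg.norm(D, 2)   # steps with tau*sigma*||D||^2 < 1

u = np.zeros(N)
p = np.zeros(N - 1)
u_bar = u.copy()
for _ in range(8000):
    # dual ascent followed by projection onto the box |p_e| <= lam
    p = np.clip(p + sigma * D @ u_bar, -lam, lam)
    # primal descent with the exact prox of the separable data term:
    # seed nodes: (1 - u)^2 / 2, non-seed nodes: (alpha/2) u^2
    z = u - tau * D.T @ p
    u_new = np.where(seed, (z + tau) / (1 + tau), z / (1 + tau * alpha))
    u_bar = 2 * u_new - u
    u = u_new

# The solution is (approximately) piece-wise constant over the clusters,
# with a strictly larger value on the cluster containing the seed node.
assert np.ptp(u[:n1]) < 0.02 and np.ptp(u[n1:]) < 0.02
assert u[:n1].mean() > u[n1:].mean() + 0.1
```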
A vector $\widehat{\vu}$ solves TV minimization (<ref>) if and only if there is a flow vector $\edgesigvec \in \mathbb{R}^{\overrightarrow{\edges}}$ such that \begin{align} -\sum_{\directededge{\nodeidx}{\nodeidx'} \in \overrightarrow{\edges}} \edgesig_{\directededge{\nodeidx}{\nodeidx'}} + \sum_{\directededge{\nodeidx'}{\nodeidx} \in \overrightarrow{\edges}}\edgesig_{\directededge{\nodeidx'}{\nodeidx}} & = \hat{u}_{\nodeidx}\!-\!1 \mbox{ for } \nodeidx \in \seednodes, \nonumber \\[2mm] -\sum_{\directededge{\nodeidx}{\nodeidx'} \in \overrightarrow{\edges}} \edgesig_{\directededge{\nodeidx}{\nodeidx'}} +\sum_{\directededge{\nodeidx'}{\nodeidx} \in \overrightarrow{\edges}}\edgesig_{\directededge{\nodeidx'}{\nodeidx}}& = \alpha \hat{u}_{\nodeidx} \mbox{ for } \nodeidx \notin \seednodes, \nonumber \\[2mm] |\edgesig_{\directededge{\nodeidx}{\nodeidx'}}| \leq \lambda \edgeweight_{\nodeidx,\nodeidx'} \mbox{ for all } &\directededge{\nodeidx}{\nodeidx'} \in \overrightarrow{\edges}, \nonumber \\[2mm] \hat{u}_{\nodeidx}\!-\!\hat{u}_{\nodeidx'}\!=\!0 \mbox{ for all } \directededge{\nodeidx}{\nodeidx'} \in \overrightarrow{\edges} \mbox{ with } & |\edgesig_{\directededge{\nodeidx}{\nodeidx'}}|\!<\!\lambda\edgeweight_{\nodeidx,\nodeidx'} . \label{equ_pd_optimal_non_staturated} \end{align} The result can be obtained by applying Fenchel duality <cit.> to TV minimization (<ref>) and a dual minimum cost flow problem (see <cit.>). Let us illustrate Theorem <ref> for the empirical graph in Figure <ref>. This empirical graph is a chain graph and partitioned into two clusters $\datacluster{1}$ and $\datacluster{2}$ which are connected by a boundary edge $b$ with weight $\edgeweight_{o}$. Ideal Case. Assume we solve TV minimization (<ref>) for the chain graph in Figure <ref> with a single seed node $\seednodes = \{ \nodeidx^{(1)} \}$ with $\nodeidx^{(1)} \in \datacluster{1}$. 
Moreover, we choose $\regparam$ and $\alpha$ in (<ref>) such that \begin{equation} \label{equ_cond_alpha_lambda} \regparam \edgeweight_{o} < 1, \quad |\datacluster{1}| (\alpha/\regparam) + \edgeweight_{o} < 1, \quad |\datacluster{2}| \alpha \geq \regparam \edgeweight_{o}. \end{equation} A direct inspection of the optimality condition (<ref>) then yields the TV minimization solution \begin{equation} \label{equ_def_TV_min_chain} \hat{u}_{\nodeidx} = \begin{cases} (1- \regparam\edgeweight_{o} )/ \big(1+ \alpha (|\datacluster{1}|-1) \big) & \mbox{ for } \nodeidx \in \datacluster{1} \\ \regparam\edgeweight_{o} / \big( \alpha |\datacluster{2}| \big) & \mbox{ for } \nodeidx \in \datacluster{2} \end{cases}. \end{equation} Note that the vector (<ref>) is piece-wise constant over the clusters $\datacluster{1}$ and $\datacluster{2}$. Thus, if we used (<ref>) as a (single) feature $\feature_{\nodeidx} = \hat{u}_{\nodeidx}$, basic clustering methods would successfully recover the clusters $\datacluster{1}$ and $\datacluster{2}$. General Case. Note that (<ref>) characterizes the solution of TV minimization (<ref>) only when $\regparam$ and $\alpha$ are chosen such that (<ref>) is valid. However, condition (<ref>) is not useful in practice as it involves the sizes of the clusters, which we would like to determine. A practical approach to choosing $\alpha$ and $\lambda$ can be based on probabilistic models for the empirical graph, such as stochastic block models [21]. Alternatively, the choice of $\alpha$ and $\lambda$ can be guided by the size of the piece $\{ \nodeidx \in \nodes: \hat{u}_{\nodeidx} = \hat{u}_{\nodeidx^{(1)}}\}$ around the seed node $\nodeidx^{(1)}$.
This piece grows for increasing $\regparam$ and becomes the entire node set $\nodes$ whenever $\regparam \edgeweight_{\nodeidx,\nodeidx'} > 1$ for all edges $\edge{\nodeidx}{\nodeidx'} \in \edges$. The duality between TV minimization (<ref>) and network flow optimization is also instrumental for developing iterative methods to solve (<ref>). Algorithm <ref> summarizes the application of a generic primal-dual method to solve (<ref>) [10].

Primal-Dual Method For TV Minimization (<ref>)
Input: $\graph=\big(\nodes,\edges,\edgeweights\big)$; seed nodes $\seednodes$; parameters $\lambda$, $\alpha$; number of iterations $\nriter$
Initialize: $\hat{\vu}^{(0)} \defeq {\bf 0}$; $\hat{\vu}^{(-1)} \defeq {\bf 0}$; $\hat{\edgesigvec}^{(0)} \defeq {\bf 0}$
Output: approximation $\widehat{\vu}$ of the solution $\widehat{\vu}^{(\seednodes)}$ to (<ref>)
1. for $\pditer = 0, \ldots, \nriter-1$:
2.   for each node $\nodeidx \in \nodes$: $\tilde{u}_{\nodeidx} \defeq 2 \hat{u}^{(\pditer)}_{\nodeidx} - \hat{u}^{(\pditer-1)}_{\nodeidx}$
3.   for each directed edge $e = \directededge{\nodeidx}{\nodeidx'} \in \overrightarrow{\edges}$:
     $\hat{\edgesig}^{(\pditer+1)}_{e} \defeq \hat{\edgesig}^{(\pditer)}_{e} + (1/2)(\tilde{u}_{\nodeidx} - \tilde{u}_{\nodeidx'})$
     $\hat{\edgesig}^{(\pditer+1)}_{e} \defeq \hat{\edgesig}^{(\pditer+1)}_{e} \big/ \max\{1, |\hat{\edgesig}^{(\pditer+1)}_{e}|/(\regparam \edgeweight_{e})\}$
4.   for each node $\nodeidx \in \nodes$: $\hat{u}^{(\pditer+1)}_{\nodeidx} \defeq \hat{u}^{(\pditer)}_{\nodeidx} - \gamma_{\nodeidx} \Big[\sum_{\directededge{\nodeidx}{\nodeidx'}} \hat{\edgesig}^{(\pditer+1)}_{\directededge{\nodeidx}{\nodeidx'}} - \sum_{\directededge{\nodeidx'}{\nodeidx}} \hat{\edgesig}^{(\pditer+1)}_{\directededge{\nodeidx'}{\nodeidx}}\Big]$
5.   for each seed node $\nodeidx \in \seednodes$: $\hat{u}_{\nodeidx}^{(\pditer+1)} \defeq \big(\gamma_{\nodeidx} + \hat{u}^{(\pditer+1)}_{\nodeidx}\big)\big/(\gamma_{\nodeidx}+1)$
6.   for each node $\nodeidx \notin \seednodes$: $\hat{u}_{\nodeidx}^{(\pditer+1)} \defeq \hat{u}^{(\pditer+1)}_{\nodeidx}\big/(\alpha\gamma_{\nodeidx}+1)$
7. $\widehat{\vu} \defeq \hat{\vu}^{(\nriter)}$

The output $\widehat{\vu} \in \mathbb{R}^{\nrnodes}$ of Algorithm <ref> is an approximation to the solution of (<ref>). We assume that the number of iterations $\nriter$ used for Algorithm <ref> is sufficiently large such that its output can be considered a (numeric) solution to (<ref>). The number $\nriter$ of iterations can be guided by probabilistic models for the underlying empirical graph, combined with the convergence rates guaranteed by primal-dual methods [4]. Alternatively, we can tune the number of iterations based on the final clustering result obtained by using Algorithm <ref> as a sub-routine within our clustering method (see Algorithm <ref>). It is instructive to interpret Algorithm <ref> as a message passing method that iteratively optimizes the network flow $\hat{\edgesig}^{(\pditer)}_{\directededge{\nodeidx}{\nodeidx'}}$. Step <ref> enforces the capacity constraint $| \hat{\edgesig}^{(\pditer)}_{\directededge{\nodeidx}{\nodeidx'}} | \leq \regparam \edgeweight_{\nodeidx,\nodeidx'}$. Step <ref> adjusts the value $\hat{u}^{(\pditer)}_{\nodeidx}$ based on the net flow into node $\nodeidx$. In step <ref>, flow is injected into the seed nodes $\nodeidx \in \seednodes$, while in step <ref> flow is leaked out of the remaining nodes $\nodeidx \notin \seednodes$.

§ FLOW-BASED GRAPH CLUSTERING
We are now in a position to formulate our flow-based graph clustering method as Algorithm <ref>. This method constructs feature vectors $\featurevec$ using the solutions of TV minimization (<ref>) for different choices of seed nodes $\seednodes$.
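Each TV solve invoked by the clustering method can be carried out with the primal-dual updates of Algorithm <ref>, which translate almost line-by-line into code. The sketch below is a minimal Python rendering; the primal step sizes $\gamma_{\nodeidx} = 1/\nodedegree{\nodeidx}$ (a diagonal-preconditioning choice) are an assumption on our part, since the listing leaves them unspecified.

```python
def primal_dual_tv(n, edges, weights, seeds, lam, alpha, n_iter):
    """Primal-dual iterations for seeded TV minimization.
    edges: directed pairs (i, j); weights: matching positive edge weights.
    The primal step sizes gamma_i = 1/deg(i) are an assumed choice."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    gamma = [1.0 / max(d, 1) for d in deg]
    u, u_prev = [0.0] * n, [0.0] * n
    sigma = [0.0] * len(edges)
    for _ in range(n_iter):
        # over-relaxation: u_tilde = 2 u^(k) - u^(k-1)
        ut = [2 * u[i] - u_prev[i] for i in range(n)]
        # dual ascent followed by projection onto |sigma_e| <= lam * w_e
        for e, (i, j) in enumerate(edges):
            s = sigma[e] + 0.5 * (ut[i] - ut[j])
            sigma[e] = s / max(1.0, abs(s) / (lam * weights[e]))
        u_prev = list(u)
        # primal descent driven by the net flow through each node
        for e, (i, j) in enumerate(edges):
            u[i] -= gamma[i] * sigma[e]
            u[j] += gamma[j] * sigma[e]
        # proximal steps for the seed / non-seed penalty terms
        for i in range(n):
            if i in seeds:
                u[i] = (gamma[i] + u[i]) / (gamma[i] + 1.0)
            else:
                u[i] = u[i] / (alpha * gamma[i] + 1.0)
    return u

# Chain graph with a weak boundary edge; seed node 0 lies in cluster {0, 1, 2}.
edges = [(0, 1), (1, 2), (2, 3)]
weights = [1.0, 1.0, 0.1]
u = primal_dual_tv(4, edges, weights, seeds={0}, lam=0.05, alpha=0.01, n_iter=5000)
```

With $\lambda$ and $\alpha$ chosen as in the ideal-case conditions, the returned vector is approximately piece-wise constant: nodes 0–2 share a common value near 1 while node 3 takes a smaller one, matching the closed-form chain-graph solution.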
Instead of using the values of eigenvectors of the graph Laplacian (as used by spectral clustering), we use entries of the vector solving TV minimization (<ref>) to construct feature vectors for each node $\nodeidx \in \nodes$.

Flow-Based Graph Clustering
Input: empirical graph $\graph$; TV min. parameters $\regparam$, $\alpha$; number $\nrcluster$ of clusters; number $\nrseeds$ of seed sets
Output: cluster assignments $\hat{c}_{1}, \ldots , \hat{c}_{\nrnodes} \in \{1,\ldots,\nrcluster\}$
1. for $\pditer = 1, \ldots, \nrseeds$:
2.   select a new set of seed nodes $\seednodes$ with Algorithm <ref>
3.   run Algorithm <ref> with $\graph, \seednodes, \regparam, \alpha$ and store the resulting vector in $\widehat{\vu}^{(\pditer)}$
4. construct the node features \begin{equation} \label{equ_def_flow_vec_features} \featurevec^{(\nodeidx)} = \big( \hat{u}^{(1)}_{\nodeidx}, \ldots,\hat{u}^{(\nrseeds)}_{\nodeidx} \big)^{T} \mbox{ for every node } \nodeidx \in \nodes. \end{equation}
5. compute the cluster assignments $\hat{c}_{\nodeidx}$ by applying k-means to the feature vectors (<ref>)

A key challenge for the successful application of Algorithm <ref> is a suitable selection of seed nodes in step <ref>. One simple approach is to choose the set $\seednodes$ by randomly selecting a single node $\nodeidx \in \nodes$. In general, it is preferable to use more than one seed node. However, we must ensure that a particular selection of seed nodes $\seednodes$ contains only nodes from the same cluster. Our numerical experiments (see Section <ref>) use a simple heuristic method for selecting the seed nodes $\seednodes$ in step <ref> of Algorithm <ref>. This heuristic method is summarized in Algorithm <ref>. Algorithm <ref> constructs a new set of seed nodes by first randomly choosing a node $\nodeidx'$ with large degree $\nodedegree{\nodeidx'}$ and then adding all neighbours of $\nodeidx'$ that have a sufficiently large number of common neighbours with $\nodeidx'$.
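A minimal sketch of this seed-selection heuristic, assuming the graph is stored as neighbour sets:

```python
import random

def select_seeds(adj, d_min, eta, rng=random):
    """Heuristic seed selection: draw a random node nu' of degree >= d_min,
    then add every neighbour of nu' that shares at least eta common
    neighbours with nu'.  adj maps each node to its set of neighbours."""
    candidates = [v for v in adj if len(adj[v]) >= d_min]
    v = rng.choice(candidates)
    seeds = {v}
    for w in adj[v]:
        if len(adj[v] & adj[w]) >= eta:
            seeds.add(w)
    return seeds

# A 4-clique {0, 1, 2, 3} with a pendant node 4: the clique is recovered as
# the seed set no matter which high-degree node is drawn first.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
seeds = select_seeds(adj, d_min=3, eta=2)
```

The common-neighbour threshold makes it unlikely that a boundary node from a different cluster slips into the seed set, which is exactly the failure mode the text warns about.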
Select Seed Nodes $\seednodes$
Input: empirical graph $\graph$; minimum number $\eta$ of common neighbours; minimum degree $d$
Output: seed nodes $\seednodes \subseteq \nodes$
1. initialize $\seednodes \defeq \emptyset$
2. randomly select a node $\nodeidx'$ from $\{\nodeidx : \nodedegree{\nodeidx} \geq d\}$
3. add it to the seed set, $\seednodes \defeq \{ \nodeidx' \}$
4. for each $\nodeidx''$ with $\edge{\nodeidx'}{\nodeidx''} \in \edges$:
5.   $\mathcal{D} \defeq \{ \nodeidx''': \edge{\nodeidx'''}{\nodeidx'}, \edge{\nodeidx''}{\nodeidx'''} \in \edges \}$
6.   if $| \mathcal{D} | \geq \eta$: $\seednodes \defeq \seednodes \cup \{ \nodeidx'' \}$

§ NUMERICAL EXPERIMENTS
For datasets without an intrinsic graph structure, we construct the empirical graph by defining the similarity between data points via a Gaussian kernel on the Euclidean distance: $A_{i, j}:=\exp \left(-\left\|\mathbf{x}^{(i)}-\mathbf{x}^{(j)}\right\|_{2}^{2} /\left(2 \sigma^{2}\right)\right)$. Here, $\sigma$ is a tuning parameter that is chosen via cross-validation or by using a probabilistic model for the data points to ensure numerical stability.
§.§ Experiments with Synthetic Data
We use a dataset [28] with two clusters, as depicted in Figure <ref>. The first cluster is a set of 2D data points drawn from a Gaussian density centered at $(2,0.2)$ with diagonal covariance matrix $0.01 \cdot I$; the second cluster is a set of 2D data points drawn from a uniform density on the rectangular region $\left\{ ( x_{1}, x_{2}) \mid 0 < x_{1}< 8, {-0.05} < x_{2} < 0 \right\}$. As shown in Figure <ref>, our method significantly outperforms spectral clustering. The parameters used for Algorithm <ref> are $\alpha = 0.005$, $\lambda=0.01$, $\eta=50$, and $d=12$.
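The Gaussian-kernel construction of the empirical graph used in these experiments can be sketched as follows; the sparsification threshold is an assumed illustrative value (the paper only states that small-weight edges are removed).

```python
import numpy as np

def similarity_graph(X, sigma, threshold=1e-3):
    """Weighted adjacency A_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) for
    data points X of shape (n, d).  Edges below `threshold` are dropped
    to keep the graph sparse (the threshold is an assumed choice)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)   # no self-loops
    A[A < threshold] = 0.0     # sparsify small similarities
    return A

# Two nearby points and one distant outlier.
X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
A = similarity_graph(X, sigma=1.0)
```

Nearby points receive a weight close to 1 while far-apart points are disconnected entirely, so the resulting graph reflects the cluster geometry of the data.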
Figure: The left plot is the clustering result of Algorithm <ref>, the right one is the clustering result of spectral clustering [6].
§.§ Image Segmentation / Pixel Clustering
The performance of Algorithm <ref> for image segmentation/pixel clustering is tested on several RGB images. We construct an empirical graph as described in Section <ref>, based on pixel values. Each pixel is connected to other pixels within up to three hops. The empirical graph is forced to be sparse by removing edges associated with small weights. The pixels in a rectangular region are set to be the seed nodes. After 1000 iterations, a local cluster that segments the object of interest out of the background is determined. As depicted in Figure <ref>, the algorithm can accurately detect which pixels belong to the object. For comparison, we also use spectral clustering for this task; the result in Figure <ref> shows that our algorithm outperforms spectral clustering. The source code for the above experiments can be found at <https://github.com/YuTian8328/>.
Image Segmentation: The left plot is the original image, the middle one is the segmentation result from Algorithm <ref>, the right one is the result from spectral clustering.
# Surveillance Testing for Rapid Detection of Outbreaks in Facilities

Yanyue Ding, Graduate Program in Operations Research & Industrial Engineering (ORIE), University of Texas at Austin, Austin, Texas 78712
Sudesh K. Agrawal, Graduate Program in ORIE, University of Texas at Austin, Austin, Texas 78712
Jincheng Cao, Department of Mathematics, University of Texas at Austin, Austin, Texas 78712
Lauren Meyers, Graduate Program in ORIE, and Department of Integrative Biology, University of Texas at Austin, Austin, Texas 78712
John J. Hasenbein, Graduate Program in ORIE, University of Texas at Austin, Austin, Texas 78712
<EMAIL_ADDRESS>

###### Abstract

This paper develops an agent-based disease spread model on a contact network to guide surveillance testing in small to moderate facilities such as nursing homes and meat-packing plants. The model employs Monte Carlo simulations of viral spread sample paths in the contact network. The original motivation was to detect COVID-19 outbreaks quickly in such facilities, but the model can be applied to any communicable disease. In particular, the model provides guidance on how many tests to administer each day and on the importance of the testing order among staff or workers.

Keywords: COVID-19, surveillance testing, agent-based model, contact networks, nursing homes

## 1 Introduction

In this paper, we develop an agent-based disease spread model on a contact network to optimize surveillance testing in facilities. Generally speaking, the model’s scope involves facilities of small to moderate size, say, with dozens to hundreds of employees. The original motivation derives from the need to quickly detect outbreaks of COVID-19 in facilities such as nursing homes, factories, and meat-packing plants, in which close contact is nearly unavoidable. Our focus is on _surveillance testing_, or proactive testing of staff.
In other words, the premise is that there is not currently an outbreak in the facility, and the manager wishes to proactively test asymptomatic staff in order to quickly detect, and prevent, an outbreak. Given a contact network, along with other modeling parameters detailed in the sequel, the primary goal is to answer the following:

1. How many employees should be tested each day in order to detect an outbreak with a given probability?
2. What is the optimal sequence in which to test employees?

We propose a relatively simple Monte Carlo simulation model that is tailored to answering these two questions. Our model is similar to other agent-based SEIR models in the literature. However, the novelty is in our focus on optimization and decision making, and how it relates to the structure of the contact network. In the first part of the paper, we examine generic contact networks that may be of use when there is little data on the interactions in a facility. In the second part of the paper, we present more nuanced contact models that are derived from the staff and interaction structure in nursing homes. Such models are preferred when there is detailed data on a facility’s operations.

## 2 Literature Review

COVID-19 is a critical health issue worldwide and is likely to continue to be of concern for many years. Even with vaccine development, it is imperative to detect outbreaks at an early stage to develop effective mitigation measures. According to some studies, 38%–80% of cases are pre-symptomatic or asymptomatic in Long-Term-Care Centers (LTCs) [5, 35]. Moreover, 56%–95.5% of disease transmissions are caused by asymptomatic cases [14, 17]. Early in the COVID-19 pandemic, in many facilities, only individuals with COVID-like symptoms were tested, due to limited testing resources. However, several studies show that screening only symptomatic cases is not enough to control outbreaks [33, 16, 17].
Starting in March 2020, the Centers for Disease Control and Prevention (CDC) and the Centers for Medicare & Medicaid Services (CMS) offered various COVID-19 surveillance testing strategies to hospitals, nursing homes, schools, and other facilities as the pandemic progressed. Now, surveillance testing is a widely applied strategy to contain the spread of the virus, especially for asymptomatic cases [4, 12]. Apart from suggestions provided by the government, managers and researchers in various facilities developed their own surveillance testing strategies, combined with contact interventions, based on the facility’s contact patterns and healthcare resources. In LTCs, for example, the strategies include daily individual health screening via surveys or front desk checks [3, 16], daily temperature and oxygen level checks [33], serological antibody checks [31], and reverse transcription-polymerase chain reaction (RT-PCR) tests. In terms of evaluation of testing strategies, some strategies are evaluated by outbreak case studies or analysis of publicly recorded data [1, 8, 21, 23, 32]. In addition, strategies may be evaluated using variations of the stochastic susceptible-exposed-infected-recovered/removed (SEIR) model. Some researchers have made modifications to the traditional SEIR model to account for nuances of COVID-19. These include adding, removing, or subdividing the standard stages in the network. For example, the exposed stage $E$ is sometimes removed, and the remaining SIR model is used [11, 15]. Saha [26] sub-divides the infected stage $I$ into detected and undetected infected stages. Chapman et al. [7] assume all infected individuals experience a pre-symptomatic stage, after which the corresponding node in the model transitions to an asymptomatic, mildly symptomatic, or severely symptomatic stage. Another innovation is to allow the parameters in the differential equations defining the SEIR model to be non-linear. Chen and Wei [9] and Shu et al.
[28] assume that the parameters in the differential equations can be non-linear functions of time. Some studies modify parameters and the contact network structure over time. These adaptations are primarily used in comparing and finding the best surveillance screening and intervention strategies. Hou et al. [20] and Shaikh et al. [27] estimate the parameter settings under various surveillance testing and intervention efficacy assumptions and perform a parametric analysis. Ciaperon et al. [11] study the dynamic SEIR model by temporarily removing the edges in the contact network due to quarantine interventions for individuals who have COVID-like symptoms or have tested positive. One key assumption in the differential-equation-based SEIR models is that the population is well-mixed and that features are homogeneous among individuals [7, 9, 10]. The standard SEIR model might be suitable for predicting outbreak progression in a large well-mixed population, while it might oversimplify the dynamics in special networks [10]. Broadly speaking, two ways to address these issues have been developed in the literature. One is to classify individuals who share similar characteristics into subgroups and use subgroup-specific parameters in the differential equations. For example, Besse and Faye [6] divide the population by spatial features and use a standard SIR model with heterogeneous parameters in each spatial group. The disease transmission between groups is estimated by heat equations. Shaikh et al. [27] divide the population in an LTC by occupation and use different transmission probabilities for each group. A second solution, requiring significant computational effort, is to use network-based stochastic models that model each individual. Ames et al. [2] believe human social networks tend to be clustered and heterogeneous. In particular, they study disease transmission features in networks with varying clustering and heterogeneity properties.
Their results show that the structure of the network has a profound influence on disease progression. Similarly, Meyers et al. [24] test intervention strategies for severe acute respiratory syndrome (SARS) in various networks. Their analysis indicates that contact patterns in the network significantly impact the size of the epidemic, even if the basic reproduction rates are the same among different networks. Herrera et al. [18] study surveillance strategies in different networks and find that the best surveillance strategies depend on the network structure. Rennert et al. [25] modify the SIR model by distinguishing symptomatic and pre-symptomatic stages and adding an isolation stage. Their results show that voluntary and random testing is not enough to protect students and faculty on a campus. They recommend performing rigorous testing during the semester. They also provide a novel solution by continuing to test the target group if a positive case occurs in random surveillance, which can be more effective in controlling disease spread. Chapman et al. [7] applied the SEIR model to simulate COVID-19 transmission in a congregate shelter population. The model was tested in shelters in San Francisco, Boston, and Seattle. The results showed that a combined strategy that involves daily screening, twice-weekly PCR testing, and contact intervention is the best strategy to detect and avert an outbreak at an early stage. Smith et al. [29, 30] used an individual-based SEIR model to simulate the transmission of COVID-19, based on detailed contact data in an LTC, in order to find the best surveillance strategy. They compared strategies when daily testing capacities of 1, 2, 4, 8, 16, and 32 tests are available. Apart from COVID-19-based models, other researchers have discussed the importance of considering the structure of contact networks when designing surveillance strategies. For example, Herrera-Diestra et al.
[19] study contact networks modeling the spread of foot-and-mouth disease among cattle farms in Turkey, and note that such networks have typical “small-world” properties. Although they suggest some general surveillance principles based on their observations, they do not provide detailed suggestions. Mastin et al. [22] examine the spread of plant pathogens. Their paper is close in spirit to our work, in that they focus on the optimization of surveillance nodes by building a stochastic optimization model using Monte Carlo samples. However, their model applies to a “static” surveillance strategy in which a fixed number of nodes are selected to aid detection. In this paper, we investigate dynamic testing, in which different nodes are tested each day. Overall, we note that most previous research focuses on the effectiveness of various surveillance strategies to detect an outbreak of COVID-19, but few papers provide guidance on specific testing strategies, especially when testing resources are in short supply. The model in this paper is predicated on the belief that decision makers have different risk tolerances, which in turn determine the amount of testing that is performed. We also assume the contacts of individuals in a facility can be heterogeneous. We simulate disease progression on a network and find the probability of detecting an outbreak for various testing capacities and time thresholds, as described in the next section.

## 3 Model Description

The first component of our model is a contact graph $G=(V,E)$ on a set $V$ of nodes and a set $E$ of undirected edges. We call two nodes _neighbors_ if they are connected by an edge. Each node in the graph corresponds to a single staff member, and there is an edge connecting them if they have close contact during a day, i.e., there is a non-zero probability of COVID-19 passing between them if one of them is infectious.
Apart from the graph structure just described, the model is characterized by the following parameters:

* $K:=|V|$, the number of staff in the facility
* $p$: daily probability of external infection
* $\ell$: latency, the number of days to move from the exposed state to the infected state
* $d$: degree of each node in the symmetric case, indicating the density of the contact graph
* $r$: daily probability of infection between neighbors, a representation of the force of infection
* $f$: false negative rate for a disease test, after a node is infectious
* $t$: outbreak threshold tolerance, in days

We now describe the progression of the disease through the network. The spread model is a discrete-time Markov chain, with time index $n=0,1,2,\ldots$ At each time point, a node can be in one of three states: susceptible ($S$), exposed ($E$), or infected ($I$). We do not model recovered nodes, since our focus is on detecting initial outbreaks, generally in less than 10 days from the first infected staff member entering the facility. Nodes may be exposed through either external or internal interactions. We also make simplifying assumptions regarding the exposed and infected states. In the exposed state, a node cannot spread the disease to other nodes and cannot be detected via a test. After exactly $\ell$ days, an exposed node becomes infected. When a node is infected, each neighboring susceptible node enters the exposed state with probability $r$ each day. The events corresponding to infected nodes infecting susceptible nodes are assumed to be mutually independent, both among nodes and across days. In addition, to model external infections, each susceptible node can move directly to the infected state, each day, with probability $p$. The reason we assume this direct move is that an exposed node cannot be detected by a test, and cannot infect other nodes, until it is in the “infected” state.
Hence, from a dynamic systems point of view, there is no need to track or model nodes that are just “exposed” externally. The events corresponding to external infections are also mutually independent. Finally, an external infection resulting in a change to the infected state obviously supersedes a pending move to “infected” if the same node has already been exposed internally. In terms of disease testing, a decision maker can administer $k$ tests per day to try to detect an outbreak. For simplicity, we assume that such a test is administered in the “morning” to each of the $k$ individuals chosen that day, and that it is a rapid test, whose results are available before the staff member interacts with the rest of the staff. From the point of view of our model, this means that the tested individuals are identified immediately if: (a) they are tested on a given day, (b) they are in the infected state on the same day, and (c) the test does not yield a false negative result on this day. At this point, we stop the process, since a detection has occurred. We do not assume that the test is perfect. Specifically, there is a false negative rate given by $f$. This rate applies to a test given to an individual that is already in the infected state. We assume the false negative rate is constant throughout the period a node is infected. This is obviously a simplification of the case for most diseases, such as COVID-19, in which the false negative rate fluctuates as an individual moves through the progression of the disease. However, recall that our model is generally concerned with rapid detection, within the first several days of infection (after the “incubation” period given by $\ell$). Up until Section 5, we assume that the contact network is homogeneous in several ways. First, all nodes have an equal probability of external infection (characterized by $p$). Second, all nodes have equal degree and are vertex transitive (see Section 3.1).
Third, each edge has an equal “weight,” in that the infection force $r$ is the same for every edge. In Section 5, we relax some of these assumptions to model heterogeneous contact networks, based on more realistic contact data. Part of the reason to first study homogeneous networks is to gain insights into different principles, such as the effect of the structure of the graph on testing thresholds and efficient testing protocols. To the best of our knowledge, these issues have not been studied before in the literature on surveillance testing.

### 3.1 Graph Structure

In our study of so-called homogeneous graphs, we confine our analysis to $d$-regular graphs, in order to preserve some sense of internal symmetry. In addition, this allows us to systematically explore the effect of internal interaction density, as embodied by the single parameter $d$. Of course, there are numerous other ways to quantify graph structure, even for $d$-regular graphs, but we do not explore those other avenues in this paper. First, we recall that in a $d$-regular graph each node has exactly $d$ neighbors. A complete graph is a $d$-regular graph in which each node is connected to every other node in the graph. Although there are many types of $d$-regular graphs for a given $d$, in this paper we only study the class of _circulant graphs_, which are always $d$-regular. A circulant graph is a graph whose adjacency matrix is a circulant matrix. More specifically, a circulant graph can be characterized by a vector $(a_{1},a_{2},\ldots,a_{m})$ where $a_{1}<a_{2}<\cdots<a_{m-1}<a_{m}$ and $a_{m}<(K+1)/2$. For simplicity, we assume that $a_{m}<K/2$. In this case, the corresponding circulant graph is $d$-regular with $d=2m$. This allows us to uniquely define $m$, for a specified even $d$ (which we vary to study the effect of graph density on disease spread). The vector $(a_{1},a_{2},\ldots,a_{m})$ is sometimes called the _jump sequence_, and it characterizes the connections in the graph as follows.
Suppose we label the nodes in the graph $0,1,2,\ldots,K-1$. Then each node $i$ has nodes $(i\pm a_{1})\mod{K}$, $(i\pm a_{2})\mod{K}$, …, $(i\pm a_{m})\mod{K}$, as neighbors. Circulant graphs are useful for our study as they have a number of desirable properties. As mentioned above, they are always $d$-regular, with a degree directly related to the parameter $m$. We also design our graphs so that they are always connected. Further, circulant graphs are _vertex transitive_ so that, roughly speaking, every vertex looks similar. Note that circulant graphs need not be edge transitive and hence are not symmetric in the graph-theoretic sense. In studying the effect of graph structure and density on surveillance protocols, we use three classes of circulant graphs: complete graphs, neighboring graphs, and crossing graphs. The first class is standard. The latter two terms were created for this study. A _neighboring graph_ is one in which the jump sequence is given by $(1,2,\ldots,m)$. In other words, each node is connected to its $m$ closest neighbors, if one envisions the nodes being arranged, as labeled, in a circle. A crossing graph is one in which the jump sequence is given by $(\lfloor K/2-1\rfloor-m+1,\ldots,\lfloor K/2-1\rfloor)$. In this case, each node is connected to its $m$ farthest neighbors. In Figure 1 we display representatives of each graph class for a small network.

Figure 1: Some examples of $d$-regular graphs: (a) a complete 11-regular graph, (b) a neighboring 4-regular graph, (c) a crossing 3-regular graph.

### 3.2 Monte Carlo Simulation

In order to answer the questions posed in the introduction, we perform Monte Carlo simulations of disease spread, and detection, on networks of varying structures and sizes. To keep the insights manageable, the parameters $\ell=3$, $r=0.05$, and $f=0.21$ are kept constant throughout the numerical studies. Otherwise, we vary network size, density, external infection rate, and the outbreak threshold tolerance.
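The circulant graph classes just described are easy to generate directly from their jump sequences; a small sketch:

```python
def circulant_adjacency(K, jumps):
    """Adjacency sets of the circulant graph on nodes 0..K-1 defined by
    the jump sequence (a_1, ..., a_m): node i is adjacent to (i +/- a) mod K."""
    adj = {v: set() for v in range(K)}
    for v in range(K):
        for a in jumps:
            adj[v].add((v + a) % K)
            adj[v].add((v - a) % K)
    return adj

def neighboring_jumps(m):
    """Jump sequence (1, 2, ..., m): connect each node to its m closest neighbors."""
    return list(range(1, m + 1))

def crossing_jumps(K, m):
    """Jump sequence (floor(K/2 - 1) - m + 1, ..., floor(K/2 - 1)):
    connect each node to its m farthest neighbors."""
    top = (K - 2) // 2   # equals floor(K/2 - 1)
    return list(range(top - m + 1, top + 1))

# A neighboring graph on 8 nodes with m = 2 is 4-regular.
adj = circulant_adjacency(8, neighboring_jumps(2))
```

Since each jump $a$ with $a < K/2$ contributes two distinct neighbors per node, the resulting graph is $d$-regular with $d = 2m$, as stated above.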
Given a particular network and testing protocol we typically simulate 50,000 outbreaks in a facility. An outbreak is initiated by at least one node becoming infectious at time 0; we then simulate the resulting exposures and infections until the outbreak threshold tolerance $t$ is reached, at which point the simulation is stopped. For each sample outbreak, we record whether or not there was a successful detection. An outbreak may not be detected due to an infected node not being tested, or due to a false negative result. These simulations then produce point estimates and confidence intervals on the true detection probability for a particular testing protocol. In general, we found 50,000 sample paths sufficient to distinguish among policies. In our numerical results, the primary performance criterion is the probability of successfully detecting an outbreak within $t$ days, given that an outbreak occurs. It is important to note that this is what one might call a conditional detection probability. We believe that it is more useful than computing the unconditional probability of detecting an outbreak, as that type of performance metric essentially penalizes a protocol for failing to detect an outbreak that does not exist. As we shall see below, this needs to be kept in mind, as some results under this metric might appear counterintuitive. For example, the probability of successful detection actually increases with the community infection rate $p$. To see why, imagine the extreme case in which everyone walks in the door on day 1 infected with a disease and the false negative rate is 0. In this case, it does not matter who is tested, as everyone will set off the detection alarm. In particular, you only need one test to detect an outbreak when $p=1$ and $f=0$. However, when the community infection rate is very low, this means that it is very likely that one, and only one, person is the genesis of an outbreak.
This makes the outbreak more difficult to detect with a small number of tests.

## 4 Computational Results for Testing Protocols

### 4.1 Parameter Comparisons

In our first set of experiments, we investigate the effect of graph density, embodied by the parameter $d$, on the number of tests per day needed to detect an outbreak. In our model, $d$ represents the number of other staff members a particular staff member is in contact with each day. For example, when $K=100$ and $d=99$, the resulting contact network is a complete graph, implying that all staff members come into contact with all other staff members each day. Obviously, this is an extreme situation. As such, we also examine lower-density networks. In Figure 2, we examine the case of 100 staff members and an external community infection probability of $p=0.0001$ (per day). We set $t=6$, indicating that our goal is to detect an outbreak within 6 days of the introduction of the disease into the facility. Keeping these parameters fixed, we plot the point estimates for the probability of detection versus the number of tests per day, and plot this curve for various values of $d$. The first thing to note is that there appear to be diminishing marginal benefits of administering more tests, as one might expect. From the graph we also see that the number of tests required to meet a certain benchmark probability of detection can vary significantly with $d$. For example, if our threshold probability is 0.7 and $d=99$, then roughly 4 tests per day are needed. However, if $d=20$ then about 13 tests per day are needed. This may seem counterintuitive, but it is related to our observation above relating to the community infection rate. The more quickly a disease spreads in a facility, the more quickly it can be detected. We also observe that if we wish to enforce a very high (say 0.95) detection probability, then a large number of staff need to be tested daily.
This is unlikely to be palatable for management and staff.

Figure 2: Number of tests versus detection probability for circulant neighboring graphs of varying degrees. Fixed graph parameters are $K=100$, $p=0.0001$, $r=0.05$, $f=0.21$, $\ell=3$, $t=6$. The results were obtained by sampling 50,000 virus sample paths.

In the next set of experiments, we vary the outbreak threshold tolerance $t$ while keeping other parameters fixed. We choose a neighboring graph with $d=60$ as the results for this graph show the contrast among different tolerance levels well. The results are shown in Figure 3. As must be the case, the detection probability curves are nested, with larger tolerance curves dominating lower tolerance curves. Again we notice that the number of tests required to detect an outbreak at various tolerance levels varies widely. If the probability threshold is again 0.7, and $t=8$, then just one test per day is needed. However, if $t=4$ then around 10 tests per day are needed, which implies that all staff are tested every 10 days.

Figure 3: Number of tests versus detection probability for circulant neighboring graphs of varying tolerance levels. Fixed graph parameters are $K=100$, $p=0.01$, $r=0.05$, $f=0.21$, $\ell=3$, $d=60$. The results were obtained by sampling 50,000 virus sample paths.

Finally, we examine the effect of the external infection probability on the detection probability curves. The networks tested in Figure 4 have the same parameters as the networks tested in Figure 3, with one exception: we change the value of $p$ from $0.01$ to $0.0001$.
For each tolerance level $t$ we note that the curve corresponding to $p=0.01$ dominates the curve for $p=0.0001$. For example, when the probability threshold is 0.7, $t=4$, and $p=0.01$, around 10 tests per day are needed, as observed above. However, when we examine the case with $p=0.0001$ the required number of tests per day is approximately 17. At first glance, this may seem counterintuitive: we require fewer tests when a disease is more prevalent in the community. However, the result is mathematically sound. For example, suppose we examine the extreme case when $p$ is very close to 1. In that case, a large number of staff members would arrive at the facility each day being infectious. As such, we only need to test a small fraction of the staff to detect an outbreak. In contrast, when the community infection rate is low, it is likely that only 1 staff member out of, say, 100 would initiate an outbreak. This one seed of an outbreak is harder to catch with a low number of tests.

Figure 4: Number of tests versus detection probability for circulant neighboring graphs of varying tolerance levels. Fixed graph parameters are $K=100$, $p=0.0001$, $r=0.05$, $f=0.21$, $\ell=3$, $d=60$. The results were obtained by sampling 50,000 virus sample paths.

### 4.2 The Effect of Testing Order

In the experiments of Section 4.1, the nodes of the graph were tested in “standard order.” What this means is that we envision the nodes with a fixed numbering, 0 to $K-1$, and arranged in a circle according to the numbering. This numbering is used to characterize various types of circulant graphs, as described already. In addition, the standard testing order is to begin by testing nodes 0 through $k-1$ on day 1, then testing nodes $k$ through $2k-1$ on day 2, etc.
We continue testing in this manner, returning to node 0 when all nodes have been tested. Of course, for some values of $k$, $t$, and $K$ not all nodes will be tested. With respect to our simulation results, it does not matter with which node we start since outbreaks are simulated in a uniform manner across all nodes, and the graphs are vertex transitive. However, it is not clear that testing in this standard order, for any given initial starting node, is optimal. Obtaining the optimal testing order is computationally prohibitive, as the underlying optimization problem is a stochastic integer program with a very large number of feasible solutions. In this paper, we make no attempt to prove optimality bounds on testing orders, or to develop complex heuristics. Rather, we examine a few different testing algorithms on circulant graphs. Our results indicate that the testing order has little effect on the detection probability, at least for circulant graphs. This is good news in that it indicates that optimizing the testing order likely is not a high priority in tuning test protocols in contact networks. In our next set of experiments, we examine the effect of changing the testing order, while keeping all the network parameters fixed. In Figure 5 we examine three possible testing protocols. The first protocol, labeled “circle,” is the standard order described in the previous paragraph. The second protocol, labeled “new,” works as follows. The first node to be tested is chosen uniformly at random. Call this node $A$. The second node that is tested is the node that has the maximum distance from $A$, with ties broken randomly. Recall that the distance between two nodes in a graph is the number of edges in the shortest path connecting the nodes. To continue the selection process after some set of nodes has already been selected, the next node chosen is one whose sum of distances from all previously selected nodes is largest.
To break ties among equal distance sums, we select a node for which these distances have the smallest sample standard deviation (if there is still a tie, it is broken randomly). We repeat the selection process until all nodes have been selected, to create an ordered list that defines the testing order. In our simulation analysis, we test the nodes in the given order and then return to the beginning of the list once all nodes have been tested once. The third protocol is based on a randomly selected permutation of the numbers 0 through $K-1$. For the neighboring graph tested in Figure 5, we see that the distance algorithm does slightly outperform the circle protocol for testing levels of about 5 to 10 nodes per day. This also implies a slight difference in the required number of tests at the mid-level probability thresholds. For example, for a probability threshold of 0.9 the difference in the required tests per day appears to be about 2. Outside this middle zone, the difference is essentially negligible. Note that we do expect the circle protocol to have a slightly lower performance in a neighboring graph. Since the virus is most likely to spread among close neighbors when the nodes are arranged in a circle, there is some redundancy in testing nodes in this same circular order. It is better to hop across the circle for subsequent tests, which is essentially what the “new” protocol does.

Figure 5: Number of tests versus detection probability for various testing protocols in a neighboring graph. Fixed graph parameters are $K=100$, $p=0.01$, $r=0.05$, $f=0.21$, $\ell=3$, $d=80$, $t=6$. The results were obtained by sampling 10,000 virus sample paths.

We next test the same three testing protocols in a crossing graph.
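For concreteness, the greedy distance-based “new” ordering described above can be sketched as follows (a minimal sketch; the function names are ours, and for simplicity the tie-break uses the population standard deviation, which is zero when only one node has been selected):

```python
import random
from collections import deque
from statistics import pstdev

def bfs_distances(adj, src):
    """Hop distances from src to every node of a connected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def new_order(adj, rng=random):
    """Greedy max-distance testing order: start at a random node, then
    repeatedly pick the node maximizing the sum of distances to all
    previously selected nodes; ties go to the smallest standard deviation
    of those distances, and any remaining ties are broken randomly."""
    dist = {v: bfs_distances(adj, v) for v in adj}  # all-pairs BFS
    order = [rng.choice(list(adj))]
    remaining = set(adj) - set(order)
    while remaining:
        best = max(remaining,
                   key=lambda v: (sum(dist[u][v] for u in order),
                                  -pstdev([dist[u][v] for u in order]),
                                  rng.random()))
        order.append(best)
        remaining.remove(best)
    return order
```

On a cycle graph, for example, the second node selected is always the one diametrically opposite the first, which is exactly the “hop across the circle” behavior noted above.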
In Figure 6, we see that there is almost no difference in the three protocols, across a range of testing levels. This is also expected in this case. In a crossing graph, the testing protocols circle and new are quite similar, because nodes that are “neighbors” when arranged on a circle are actually rather distant as measured by the number of hops required to travel between such nodes. Again, these results indicate that the testing order does not have a large effect on detection probability, at least for circulant graphs. Finally, in Figure 7, we test the same protocols on a randomly generated $d$-regular graph. As expected, there are relatively small differences among protocols. We also tested protocols for different graph densities, and the results are similar to those presented herein. We conclude that the difference among testing orders is usually insignificant, with some minor differences as shown in Figure 5. Since the “new” protocol performs well in all cases, we recommend this protocol, assuming that the decision maker has concrete knowledge of the graph structure. Otherwise, nodes can likely be tested in an arbitrary manner, with little loss of performance versus a good heuristic.

Figure 6: Number of tests versus detection probability for various testing protocols in a crossing graph. Fixed graph parameters are $K=100$, $p=0.01$, $r=0.05$, $f=0.21$, $\ell=3$, $d=80$, $t=6$. The results were obtained by sampling 50,000 virus sample paths.

Figure 7: Number of tests versus detection probability for various testing protocols in a random $d$-regular graph. Fixed graph parameters are $K=100$, $p=0.01$, $r=0.05$, $f=0.21$, $\ell=3$, $d=80$, $t=6$.
The results were obtained by sampling 10,000 virus sample paths.

## 5 Heterogeneous Graphs and Real-world Test Cases

### 5.1 Real-world Test Cases

#### 5.1.1 Model adjustment

In previous sections, we considered simplified contact networks in order to gain insight into the relationship between the parameters of the graph, and the number of tests needed to detect an outbreak. In the remaining sections, we consider more general graphs that are inspired by real-world situations and data. A primary difference is that we allow the parameters $r$ and $p$ to be heterogeneous, i.e., they can vary by node. The outside probability of infection $p$ may vary since each staff member has different social interactions and risk outside of work. Also, in real facilities staff in different categories can have very different contact patterns. Hence, the corresponding contact graph is no longer “symmetric,” and the internal infection probability $r$ can vary by node. Our assumptions regarding the latency $\ell$, the false negative rate $f$, and disease testing procedures are the same as in Section 3. In many facilities, the staff rosters differ each day, as do the contact patterns. In theory, it might be possible to develop even more nuanced models in a particular facility. For example, Duval et al. [13] studied the contact patterns in one nursing home by asking the staff and patients to use wearable sensors, recording the contact durations, distances, and frequencies. However, for legal and ethical reasons, there are obviously significant barriers to collecting such detailed data in most facilities. Hence, we do not attempt to model a representative facility at this level of detail. Instead, we collect general data on staff levels, categories, and interactions. We believe that this level of modeling still provides important insight into appropriate testing levels for outbreak detection. Let $c$ be the number of different staff categories in a facility.
The staff categories are given by the set $C=\{1,2,\ldots,c\}$. Since we have heterogeneous internal infection parameters, we now use $r_{c_{1},c_{2}}$ to denote the daily probability of infection between staff members in categories $c_{1}$ and $c_{2}$, assuming that they are connected by an edge in the contact graph. Of course, we allow $c_{1}=c_{2}$ to reflect interactions between staff in the same category. Staff in a facility can be categorized by their job titles, rosters, offices, working units, etc. In our real-world case study, we categorize the staff in a nursing home by job title, and we assume the staff with the same job titles share the same contact patterns.

#### 5.1.2 Parameter fitting

We estimate the parameter $r_{c_{1},c_{2}}$ for different choices of $c_{1}$ and $c_{2}$ based on the contact patterns among staff. Inspired by Smith et al. [29] and Temime et al. [34], we use the contact frequencies, average duration of each contact, and the rate of infection with close contact to estimate the probability of internal infection for staff in various categories. Let $p_{0}$ be the probability of infection per minute between two individuals with close contact, $d_{0}$ the average contact duration (in minutes), and $n_{0}$ the average contact frequency. Then we can estimate the internal probability of infection as $r=p_{0}\cdot d_{0}\cdot n_{0}$. We collected data from a nursing home in the northern US by gathering information from the facility manager. The details of this contact data are shown in Table 1. Next, for each staff category, we estimated the number of other staff they come into contact with daily, for all other categories. This data is shown in Table 2. For a job category in row $i$ of the table, the $(i,j)$th entry in the upper part of the table is the average number of contacts for a staff member of category $i$ with staff members in category $j$.
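As a small numerical illustration of the estimate $r=p_{0}\cdot d_{0}\cdot n_{0}$ (the input values below are hypothetical, chosen by us for illustration and not taken from the tables):

```python
def internal_infection_prob(p0, d0, n0):
    """Estimate r = p0 * d0 * n0: the per-day transmission probability
    along a contact-graph edge, from the per-minute infection probability
    p0, the average contact duration d0 (minutes), and the average daily
    contact frequency n0."""
    return p0 * d0 * n0

# hypothetical inputs, for illustration only:
# p0 = 0.0002 per minute, 15-minute contacts, 20 contacts per day
r = internal_infection_prob(0.0002, 15, 20)  # r = 0.06
```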
Note that the table would not be symmetric if the lower half was filled in, but the data in the upper portion is sufficient to estimate contacts between all pairs of staff categories.

Job title | Staff # | # of daily contacts with residents | # of daily contacts with staff | Avg. contact distance (ft) | Avg. contact duration (min)
---|---|---|---|---|---
Nurses and nurse aids | 69 | 30 | 20 | $<$ 6 | $>$ 15
House keeping | 16 | 10 | 10 | $>$ 6 | $<$ 15
Rehabilitation | 4 | 10 | 10 | $<$ 6 | $>$ 15
Activity lifestyle | 12 | 20 | 10 | $<$ 6 | 15
Salon | 2 | 10 | 10 | $<$ 6 | 15
Maintenance | 12 | 10 | 10 | $<$ 6 | $<$ 15
Reception | 6 | 10 | 40 | $<$ 6 | $>$ 15
Dining | 6 | 30 | 20 | $<$ 6 | $<$ 15
Social | 5 | 10 | 10 | $<$ 6 | $<$ 15
Finance | 14 | 0 | 10 | $<$ 6 | $>$ 15
Logistics | 3 | 0 | 20 | $<$ 6 | $<$ 15
Driver | 4 | 20 | 10 | $<$ 6 | $>$ 15

Table 1: Staff data for the nursing home case study

Job title | NR | HK | RH | MS | SL | MT | RC | DN | SC | AD | LG | DR
---|---|---|---|---|---|---|---|---|---|---|---|---
NR | 15 | 1.5 | 0.2 | 1 | 0.1 | 0.1 | 2 | 0.5 | 1 | 1 | 0.5 | 0.22
HK | $-$ | 4 | 0.5 | 0.5 | 0.1 | 1 | 1 | 0.2 | 0.2 | 1 | 0.1 | 0.25
RH | $-$ | $-$ | 1.5 | 2 | 0.1 | 0.1 | 0.5 | 0.2 | 0.2 | 1 | 0.5 | 0.2
MS | $-$ | $-$ | $-$ | 3 | 0.1 | 0.1 | 0.5 | 0.2 | 0.2 | 1 | 0.5 | 0.2
SL | $-$ | $-$ | $-$ | $-$ | 1 | 0.1 | 0.5 | 0.1 | 0.1 | 1.2 | 0.5 | 0.2
MT | $-$ | $-$ | $-$ | $-$ | $-$ | 4 | 0.5 | 0.1 | 0.1 | 1.2 | 0.5 | 0.3
RC | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 2 | 0.1 | 0.1 | 3 | 1 | 0.4
DN | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 4 | 1 | 1 | 0.1 | 0.1
SC | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 3 | 1 | 0.5 | 0.2
AD | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 5 | 1 | 0.5
LG | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 1.5 | 0.4
DR | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.5

Table 2: Estimated daily contacts between staff of different categories, rounded to two decimal places

Based on
this data, we are now able to generate a representative contact graph for the case-study nursing home. First, we label all the nodes in the graph with their corresponding staff category. Then, we calculate the probability of generating an edge between two nodes by using the data in Tables 1 and 2. For example, if a node is labeled as a nurse, the probability that a contact is triggered with another nurse node is $15/68\approx 0.22$, as each nurse has an average of 15 contacts with the other 68 nurses during a shift. Then, for every pair of nurses, we generate a Bernoulli(0.22) random variable and assign an edge to this pair if the outcome is a 1. Otherwise, there is no edge connecting the pair. When generating edges between nodes in different staffing categories, we use similar logic. For example, the probability that there is an edge connecting a nurse with a member of the housekeeping staff is 1.5/16, since, using Table 2, each nurse has an average of 1.5 contacts per day with a member of the housekeeping staff. Figure 8 shows an instance of the contact network induced by the procedure just described. Our estimates of these $r_{i,j}$ values appear in Table 3. For example, if in the contact network there exists an edge between a nurse and a housekeeper, then the probability of transmission (per day) between these two staff members is 0.01. If there is no edge between two staff members, then the transmission probability is 0. We only provide data in the upper part of the table, as this table is symmetric, by our assumptions. 
Job title | NR | HK | RH | MS | SL | MT | RC | DN | SC | AD | LG | DR
---|---|---|---|---|---|---|---|---|---|---|---|---
NR | 0.08 | 0.01 | 0.06 | 0.015 | 0.01 | 0.01 | 0.04 | 0.02 | 0.02 | 0.01 | 0.01 | 0.08
HK | $-$ | 0.08 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.02 | 0.02 | 0.01 | 0.01 | 0.04
RH | $-$ | $-$ | 0.08 | 0.015 | 0.01 | 0.01 | 0.04 | 0.02 | 0.02 | 0.01 | 0.01 | 0.08
MS | $-$ | $-$ | $-$ | 0.08 | 0.01 | 0.01 | 0.04 | 0.02 | 0.02 | 0.01 | 0.01 | 0.08
SL | $-$ | $-$ | $-$ | $-$ | 0.08 | 0.01 | 0.01 | 0.02 | 0.02 | 0.01 | 0.01 | 0.08
MT | $-$ | $-$ | $-$ | $-$ | $-$ | 0.08 | 0.01 | 0.02 | 0.02 | 0.01 | 0.01 | 0.04
RC | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.08 | 0.02 | 0.02 | 0.01 | 0.02 | 0.08
DN | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.02 | 0.04 | 0.02 | 0.01 | 0.08
SC | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.02 | 0.03 | 0.02 | 0.08
AD | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.08 | 0.03 | 0.08
LG | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.08 | 0.08
DR | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | 0.08

* • Abbreviations: NR, nurse; HK, house-keeper; RH, rehabilitation; MS, mission; SL, salon; MT, maintenance; RC, receptionist; DN, dining; SC, social; AD, administration; LG, logistics; DR, driver.

Table 3: Estimated probability of internal infection among staff in different categories

Figure 8: A real-world LTC network with $K=152$, $c=12$

The probability of external infection is a dynamic quantity that depends highly on the current community prevalence of COVID-19. In this study, we assume that all staff have the same value of $p$. For a given area, such as a state or a county, we can estimate the daily probability of infection for an individual as

$p=\frac{\text{Number of new infections over the past 7 days}}{7\cdot\text{Population size}}.$

Of course, the official count of new infections in most cases underestimates the true number of cases.
Therefore, in conducting sensitivity analysis, we multiply the estimate above by $3$, $5$, and $10$.

#### 5.1.3 Computational experiments

In this section, we present results using the network generated from the data in Tables 1, 2, and 3. Similar to Section 4.1, we vary some basic model parameters to perform comparisons. In the first set of experiments, we fix the probability of external infection to $p=0.003$, and vary the outbreak threshold tolerance $t$. As shown in Figure 9, the results align with the experiments we did for the circulant graph. Obviously, as before, a larger tolerance value requires a smaller number of tests per day, for a fixed conditional probability target. For example, if the facility manager targets a 0.8 probability of detecting an outbreak, with $t=3$ then approximately 44 tests per day are required. However, if the threshold of tolerance is set to 4, 5 or even 10 days, then the required number of tests per day is 29, 19 and 3, respectively. We also note that for $t=3$ the graph appears to be piecewise linear. This makes intuitive sense, as we do not benefit from the “network” effect of virus spread in this case.

Figure 9: Estimated probabilities of outbreak detection for $K=152$, $c=12$, and $p=0.003$ for various values of $t$

Next, we examine the effect of changing $p$, the probability of external infection. Figure 10 shows the results of our simulations. As argued in Section 4.1, although it is counterintuitive that higher values of $p$ require fewer tests per day to meet a given probability target, this is actually a logical outcome of the model.
Figure 10: Estimated probabilities of outbreak detection in a nursing home case study with $K=152$, $c=12$, $t=7$ and varying values of $p$

Keeping this observation in mind, we note that a facility manager may want to examine additional metrics when setting the tolerance level $t$ and the detection probability target. For one parameter setting in our case-study network, we provide in Figure 11 various statistics on the spread of infection, as reflected by 50,000 simulation runs. In Figure 11, the blue line represents the cumulative number of infected staff, the red line represents the cumulative number of infectious staff, and the yellow line represents the number of newly infected staff. The shaded areas around the lines represent 95% confidence intervals around the point estimates for these quantities, various days after the outbreak. As expected, the blue line point estimate is higher than the red line, because infection occurs three days before an individual becomes infectious. Examining the graph, if a facility manager wishes to detect an outbreak before 10% of the staff becomes infected then she would set $t=5$ for the network depicted in Figure 11.

Figure 11: Infection statistics in a case-study model with $K=152$, $c=12$, $p=0.003$, $\ell=3$, and $f=0.21$

Finally, in Figure 12 we show the proportion of infected staff in various categories over 20 days, again averaged over 50,000 simulated outbreaks. Note that receptionists and nurses are the categories with the highest infection proportions. In contrast, salon workers have the lowest infection rates in this example.
Our discussions with nursing home staff indicate that their perception is that nurses are quite vulnerable to outbreaks, but receptionists are not. However, if we scrutinize Table 1 we note that receptionists and nurses both have frequent, close, and prolonged contact with other staff, so the model does seem to be reflecting the data provided.

Figure 12: Proportion of infected staff in different job categories with $K=152$, $c=12$, $p=0.003$, $\ell=3$, and $f=0.21$

### 5.2 Effect of Testing Order

In Section 4.2, we examined the effects of testing order in what we called homogeneous graphs, i.e., graphs with a significant amount of symmetry and homogeneity among the nodes. There, we found small but mostly insignificant differences among testing protocols. In contact networks arising from the real world, the associated graphs are not expected to have such a regular structure. Due to the less regular structure, there is also a larger variety of heuristic testing protocols that can be tested. The most intuitive protocols are based on testing “central” or important nodes first. In this section, we test three such heuristics of varying levels of complexity. In each heuristic we need to produce an ordered list (permutation) of the nodes that determines the testing protocol. In the first heuristic, which we call _degree rank_, we simply order the nodes from highest degree to lowest degree, breaking ties in an arbitrary manner. Nodes are then tested in this order. The idea, of course, is that a higher-degree node is more likely to “detect” an outbreak since it has connections to many other nodes. The second heuristic, which is called _PageRank_, is based on the well-known node ranking system first developed by the founders of Google.
We compute the PageRank for all nodes, and test in the order of highest rank to lowest. Again, the idea of this ranking is that higher-ranked nodes would be visited more often by a virus surfing the contact network. A potential drawback of this notion, as applied to surveillance testing, is that PageRank is essentially derived from the steady-state distribution of the virus surfing process alluded to above. However, in our virus spread and detection model, we are really concerned with the _transient_ behavior of the virus. In particular, what is most important for early detection is the initial trajectory of the virus over the network. This observation inspires us to consider another algorithm, which we call _simulation rank_. The idea is as follows: if our outbreak tolerance is $t$ days, then we are interested in testing the nodes that are most likely to be involved in an outbreak during the first $t$ days. In particular, we should rank the nodes according to the probability that they are part of a $t$-day outbreak. In theory, one could compute this probability exactly, but it is clearly an intractable computation for networks of even moderate size. Instead, we simulate many virus sample paths to estimate the aforementioned probability for each node, and then use these estimates to create the ranking. It is clear that the degree rank ordering requires very little computational effort, as it just involves counting edges. PageRank is more difficult, but there are efficient algorithms to estimate the PageRank, even for large networks. Finally, simulation rank requires a considerable amount of effort, as potentially thousands of simulation samples are required to get a good estimate. In Figures 13 and 14, we show the results of testing the three ranking algorithms as testing protocols. In each case, we determine the test order and then evaluate the protocol using the network depicted in Figure 8.
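The degree-rank and simulation-rank orderings can be sketched as follows (a simplified sketch with names of our choosing; the spread model here ignores the latency period and external infections, so it only approximates the full model used in our experiments):

```python
import random

def degree_rank(adj):
    """Order nodes from highest to lowest degree (ties broken arbitrarily)."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)

def simulation_rank(adj, r, t, samples=1000, rng=None):
    """Rank nodes by the estimated probability of being infected within
    t days of an outbreak seeded at a uniformly chosen node."""
    rng = rng or random.Random(0)
    nodes = list(adj)
    hits = dict.fromkeys(nodes, 0)
    for _ in range(samples):
        infected = {rng.choice(nodes)}          # outbreak seed
        for _ in range(t):                      # t days of spread
            newly = {w for v in infected for w in adj[v]
                     if w not in infected and rng.random() < r}
            infected |= newly
        for v in infected:
            hits[v] += 1
    return sorted(nodes, key=lambda v: hits[v], reverse=True)
```

As noted above, degree rank only counts edges, while simulation rank pays for its transient-behavior focus with many simulated sample paths.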
As in our previous computations, we use 50,000 virus sample paths to perform the evaluation. Figure 13 displays the results for a tolerance of $t=4$ days. Clearly, the difference among the algorithms is imperceptible in this figure. We also ran statistical tests confirming that there is no statistically significant difference among the algorithms. Figure 14 displays the results for a tolerance of $t=7$ days. Again, the differences are almost negligible, although PageRank appears to slightly outperform the other algorithms when 7 to 10 tests per day are performed. Still, these differences do not appear to be statistically significant. We performed similar tests on a small set of other networks, with similar results. Although the tests are not comprehensive, they seem to indicate that a relatively simple algorithm such as degree rank works just as well as more sophisticated algorithms.

Figure 13: Performance of heuristic testing protocols for the LTC network with $K=152$, $c=12$, $p=0.006$, $\ell=3$, $t=4$ and $f=0.21$

Figure 14: Performance of heuristic testing protocols for the LTC network with $K=152$, $c=12$, $p=0.006$, $\ell=3$, $t=7$ and $f=0.21$

## References

* [1] Y. Albogami, H. Alkofide, and A. Alrwisan. COVID-19 Vaccine Surveillance in Saudi Arabia: Opportunities for Real-time Assessment. Saudi Pharmaceutical Journal, 29(8):914–916, 2021.
* [2] G. M. Ames, D. B. George, C. P. Hampson, A. R. Kanarek, C. D. Mcbee, D. R. Lockwood, J. D. Achter, and C. T. Webb. Using network properties to predict disease dynamics on human contact networks.
Proceedings of the Royal Society B: Biological Sciences, 278(1724):3544–3550, 2011. * [3] Y. J. Baek, T. Lee, Y. Cho, J. H. Hyun, M. H. Kim, Y. Sohn, J. H. Kim, J. Y. Ahn, S. J. Jeong, N. S. Ku, J. S. Yeom, J. Lee, and J. Y. Choi. A mathematical model of COVID-19 transmission in a tertiary hospital and assessment of the effects of different intervention strategies. PLoS ONE, 15(10):1–16, 2020. * [4] Y. Bai, B. Yang, L. Lin, J. L. Herrera, Z. Du, and P. Holme. Optimizing sentinel surveillance in temporal network epidemiology. Scientific Reports, 7(1):1–10, 2017. * [5] A. Bernadou, S. Bouges, M. Catroux, J. C. Rigaux, C. Laland, N. Levêque, U. Noury, S. Larrieu, S. Acef, D. Habold, F. Cazenave-Roblot, and L. Filleul. High impact of COVID-19 outbreak in a nursing home in the Nouvelle-Aquitaine region, France, March to April 2020. BMC Infectious Diseases, 21(1):1–6, 2021. * [6] C. Besse and G. Faye. Dynamics of epidemic spreading on connected graphs. Journal of Mathematical Biology, 82(6):1–52, 2021. * [7] L. A. Chapman, M. Kushel, S. N. Cox, A. Scarborough, C. Cawley, T. Q. Nguyen, I. Rodriguez-Barraquer, B. Greenhouse, E. Imbert, and N. C. Lo. Comparison of infection control strategies to reduce COVID-19 outbreaks in homeless shelters in the United States: a simulation study. BMC Medicine, 19(1):1–13, 2021. * [8] H. J. Chen, H. J. Lin, M. C. Wu, H. J. Tang, B. A. Su, and C. C. Lai. The implementation of an active surveillance integrating information technology and drive-through coronavirus testing station for suspected COVID-19 cases. Journal of Infection, 82(2):282–327, 2021. * [9] L. Chen and F. Wei. Study on a susceptible–exposed–infected–recovered model with nonlinear incidence rate. Advances in Difference Equations, 2020(206), 2020. * [10] G. Chowell, L. Sattenspiel, S. Bansal, and C. Viboud. Mathematical models to characterize early epidemic growth: A review. Physics of Life Reviews, 18:66–97, 2016. * [11] M. Ciaperoni, E. Galimberti, F. Bonchi, C. Cattuto, F. 
Gullo, and A. Barrat. Relevance of temporal cores for epidemic spread in temporal networks. Scientific Reports, 10(1):1–15, 2020. * [12] K. Drenkard, B. Sakallaris, P. Deyo, S. Abdillahi, and H. Hahn. University COVID-19 Surveillance Testing Center: Challenges and Opportunities for Schools of Nursing. Journal of Professional Nursing, 37(5):948–953, 2021. * [13] A. Duval, T. Obadia, L. Martinet, P.-Y. Boëlle, E. Fleury, D. Guillemot, L. Opatowski, and L. Temime. Measuring dynamic social contacts in a rehabilitation hospital: Effect of wards, patient and staff characteristics. Scientific reports, 8(1):1–11, 2018. * [14] L. Ferretti, C. Wymant, M. Kendall, L. Zhao, A. Nurtay, L. Abeler-Dörner, M. Parker, D. Bonsall, and C. Fraser. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science, 368(6491), 2020. * [15] G. Gaeta. A simple SIR model with a large set of asymptomatic infectives. Mathematics In Engineering, 3(2):1–39, 2021. * [16] P. M. Garibaldi, N. N. Ferreira, G. R. Moraes, J. C. Moura, D. L. Espósito, G. J. Volpe, R. T. Calado, B. A. Fonseca, and M. C. Borges. Efficacy of COVID-19 outbreak management in a skilled nursing facility based on serial testing for early detection and control. Brazilian Journal of Infectious Diseases, 25(2):1–6, 2021. * [17] S. Harada, S. Uno, T. Ando, M. Iida, Y. Takano, Y. Ishibashi, Y. Uwamino, T. Nishimura, A. Takeda, S. Uchida, A. Hirata, M. Sata, M. Matsumoto, A. Takeuchi, H. Obara, H. Yokoyama, K. Fukunaga, M. Amagai, Y. Kitagawa, T. Takebayashi, and N. Hasegawa. Control of a Nosocomial Outbreak of COVID-19 in a University Hospital. Open Forum Infectious Diseases, 7(12):1–9, 2020. * [18] J. L. Herrera, R. Srinivasan, J. S. Brownstein, A. P. Galvani, and L. A. Meyers. Disease Surveillance on Complex Social Networks. PLoS Computational Biology, 12(7):1–16, 2016. * [19] J. L. Herrera-Diestra, M. Tildesley, K. Shea, and M. Ferrari. 
Network structure and disease risk for an endemic infectious disease. arXiv preprint arXiv:2107.06186, 2021. * [20] C. Hou, J. Chen, Y. Zhou, L. Hua, J. Yuan, S. He, Y. Guo, S. Zhang, Q. Jia, C. Zhao, J. Zhang, G. Xu, and E. Jia. The effectiveness of quarantine of Wuhan city against the Corona Virus Disease 2019 (COVID-19): A well-mixed SEIR model analysis. Journal of Medical Virology, 92(7):841–848, 2020. * [21] O. B. Leal-Neto, F. A. Santos, J. Y. Lee, J. O. Albuquerque, and W. V. Souza. Prioritizing COVID-19 tests based on participatory surveillance and spatial scanning. International Journal of Medical Informatics, 143:104263, 2020. * [22] A. J. Mastin, T. R. Gottwald, F. van den Bosch, N. J. Cunniffe, and S. Parnell. Optimising risk-based surveillance for early detection of invasive plant pathogens. PLoS biology, 18(10):e3000863, 2020. * [23] T. M. McMichael, D. W. Currie, S. Clark, S. Pogosjans, M. Kay, N. G. Schwartz, J. Lewis, A. Baer, V. Kawakami, M. D. Lukoff, J. Ferro, C. Brostrom-Smith, T. D. Rea, M. R. Sayre, F. X. Riedo, D. Russell, B. Hiatt, P. Montgomery, A. K. Rao, E. J. Chow, F. Tobolowsky, M. J. Hughes, A. C. Bardossy, L. P. Oakley, J. R. Jacobs, N. D. Stone, S. C. Reddy, J. A. Jernigan, M. A. Honein, T. A. Clark, and J. S. Duchin. Epidemiology of Covid-19 in a Long-Term Care Facility in King County, Washington. New England Journal of Medicine, 382(21):2005–2011, 2020. * [24] L. A. Meyers, B. Pourbohloul, M. E. Newman, D. M. Skowronski, and R. C. Brunham. Network theory and SARS: Predicting outbreak diversity. Journal of Theoretical Biology, 232(1):71–81, 2005. * [25] L. Rennert, C. McMahan, C. A. Kalbaugh, Y. Yang, B. Lumsden, D. Dean, L. Pekarek, and C. C. Colenda. Surveillance-based informative testing for detection and containment of SARS-CoV-2 outbreaks on a public university campus: an observational and modelling study. The Lancet Child and Adolescent Health, 5(6):428–436, 2021. * [26] S. Saha and S. Saha. 
The impact of the undetected COVID-19 cases on its transmission dynamics. Indian Journal of Pure and Applied Mathematics, pages 1–6, June 2021. * [27] A. S. Shaikh, I. N. Shaikh, and K. S. Nisar. A mathematical model of COVID-19 using fractional derivative: outbreak in India with dynamics of transmission and control. Advances in Difference Equations, 2020(1), 2020. * [28] H. Shu, D. Fan, and J. Wei. Global stability of multi-group SEIR epidemic models with distributed delays and nonlinear transmission. Nonlinear Analysis: Real World Applications, 13(4):1581–1592, 2012\. * [29] D. R. Smith, A. Duval, K. B. Pouwels, D. Guillemot, J. Fernandes, B.-T. Huynh, L. Temime, and L. Opatowski. How best to use limited tests? Improving COVID-19 surveillance in long-term care. medRxiv, 2020. * [30] D. R. Smith, A. Duval, K. B. Pouwels, D. Guillemot, J. Fernandes, B. T. Huynh, L. Temime, and L. Opatowski. Optimizing COVID-19 surveillance in long-term care facilities: a modelling study. BMC Medicine, 18(1):1–16, 2020. * [31] R. Strand, N. Fernström, A. Holmberg, Y. De Marinis, C. J. Fraenkel, and M. Rasmussen. Post-outbreak serological screening for SARS-CoV-2 infection in healthcare workers at a Swedish University Hospital. Infectious Diseases, 53(9):707–712, 2021. * [32] C. H. Sudre, A. Keshet, M. S. Graham, A. D. Joshi, S. Shilo, H. Rossman, B. Murray, E. Molten, K. Klaser, L. D. Canas, M. Antonelli, L. H. Nguyen, D. A. Drew, M. Modat, J. C. Pujol, S. Ganesh, J. Wolf, T. Meir, A. T. Chan, C. J. Steves, T. D. Spector, J. S. Brownstein, E. Segal, S. Ourselin, and C. M. Astley. Anosmia, ageusia, and other COVID-19-like symptoms in association with a positive SARS-CoV-2 test, across six national digital surveillance platforms: an observational study. The Lancet. Digital health, 7500(21):1–10, 2021. * [33] E. P. H. E. Team, K. Danis, L. Fonteneau, S. Georges, C. Daniau, S. Bernard-Stoecklin, L. Domegan, J. O’Donnell, S. H. Hauge, S. Dequeker, E. Vandael, J. Van der Heyden, F. 
Renard, N. B. Sierra, E. Ricchizzi, B. Schweickert, N. Schmidt, M. Abu Sin, T. Eckmanns, J. Paiva, and E. Schneider. High impact of COVID-19 in long-term care facilities, suggestion for monitoring in the EU/EEA, May 2020. Eurosurveillance, 25(22):2000956, 2020. * [34] L. Temime, M.-P. Gustin, A. Duval, N. Buetti, P. Crépey, D. Guillemot, R. Thiébaut, P. Vanhems, J.-R. Zahar, D. R. M. Smith, and L. Opatowski. A conceptual discussion about the basic reproduction number of severe acute respiratory syndrome coronavirus 2 in healthcare settings. Clinical Infectious Diseases, 72(1):141–143, January 2021. * [35] M. Y. Yen, J. Schwartz, C. C. King, C. M. Lee, and P. R. Hsueh. Recommendations for protecting against and mitigating the COVID-19 pandemic in long-term care facilities. Journal of Microbiology, Immunology and Infection, 53(3):447–453, 2020.
# Jacob’s ladders and some new consequences from A. Selberg’s formula Jan Moser Department of Mathematical Analysis and Numerical Mathematics, Comenius University, Mlynska Dolina M105, 842 48 Bratislava, SLOVAKIA <EMAIL_ADDRESS> ###### Abstract. It is proved in this paper that Jacob’s ladders, together with A. Selberg’s classical formula (1942), lead to a new kind of formulae for some short trigonometric sums. These formulae cannot be obtained in the classical theory of A. Selberg, and still less in the theories of Balasubramanian, Heath-Brown and Ivic. ###### Key words and phrases: Riemann zeta-function ## 1\. The A. Selberg’s formula In 1942, A. Selberg proved the following formula (1.1) $\int_{T}^{T+U}X^{2}(t)\left(\frac{n_{2}}{n_{1}}\right)^{it}{\rm d}t=\sqrt{\frac{\pi}{2}}\frac{U}{\sqrt{n_{1}n_{2}}}\left(\ln\frac{P^{2}}{n_{1}n_{2}}+2c\right)+\mathcal{O}(T^{1/2}\xi^{5})$ (see [19], p. 55), where (1.2) $\begin{split}&X(t)=\frac{1}{2}t^{1/4}e^{\frac{1}{4}\pi t}\pi^{-\frac{s}{2}}\zeta(s),\ s=\frac{1}{2}+it,\\\ &U=T^{1/2+\epsilon},\ \xi=\left(\frac{T}{2\pi}\right)^{\epsilon/10},\ \epsilon\leq\frac{1}{10},\ P=\sqrt{\frac{T}{2\pi}}\\\ &n_{1},n_{2}\in\mathbb{N},(n_{1},n_{2})=1,\ n_{1},n_{2}\leq\xi,\end{split}$ (comp. [19], pp. 10, 18, $a=1/2+\epsilon,\ \epsilon>0$) and $c$ is Euler’s constant. Since (see [19], p. 10, [20], p. 79) $Z^{2}(t)=\left|\zeta\left(\frac{1}{2}+it\right)\right|^{2}=\sqrt{\frac{2}{\pi}}X^{2}(t)\left(1+\mathcal{O}(\frac{1}{t})\right),$ i.e. (1.3) $X^{2}(t)=\sqrt{\frac{\pi}{2}}Z^{2}(t)\left(1+\mathcal{O}(\frac{1}{t})\right)$ where (1.4) $\begin{split}&Z(t)=e^{i\vartheta(t)}\zeta\left(\frac{1}{2}+it\right),\\\ &\vartheta(t)=-\frac{1}{2}t\ln\pi+\text{Im}\ln\Gamma\left(\frac{1}{4}+\frac{1}{2}it\right)=\frac{t}{2}\ln\frac{t}{2\pi}-\frac{t}{2}-\frac{\pi}{8}+\mathcal{O}(\frac{1}{t})\end{split}$ is the signal defined by the Riemann zeta-function $\zeta(s)$. Following eqs. 
(1.1) and (1.3) we obtain (1.5) $\int_{T}^{T+U}Z^{2}(t)\left(\frac{n_{2}}{n_{1}}\right)^{it}{\rm d}t=\frac{U}{\sqrt{n_{1}n_{2}}}\left(\ln\frac{P^{2}}{n_{1}n_{2}}+2c\right)+\mathcal{O}(T^{1/2}\xi^{5})$ ###### Remark 1. If $n_{1}=n_{2}=1$ then the Hardy-Littlewood-Ingham formula $\int_{T}^{T+U}Z^{2}(t){\rm d}t=U\ln\frac{T}{2\pi}+2cU+\mathcal{O}(T^{1/2}\xi^{5})$ follows from A. Selberg’s formula (1.5) (comp. [20], p. 120). ###### Remark 2. Let us recall that A. Selberg’s formula (1.5) played the main role in the proof of Selberg’s fundamental result $N_{0}(T+U)-N_{0}(T)>A(\epsilon)U\ln T$ where $N_{0}$ stands for the number of zeroes of the function $\zeta(1/2+it),\ t\in(0,T]$. In this paper it is proved that Jacob’s ladders, together with A. Selberg’s classical formula, lead to a new kind of result for some short trigonometric sums. This paper is a continuation of the series of works [3]–[18]. ## 2\. The result ### 2.1. Let us recall some notions. First of all (2.1) $\tilde{Z}^{2}(t)=\frac{{\rm d}\varphi_{1}(t)}{{\rm d}t},\ \varphi_{1}(t)=\frac{1}{2}\varphi(t),$ where (2.2) $\tilde{Z}^{2}(t)=\frac{Z^{2}(t)}{2\Phi^{\prime}_{\varphi}[\varphi(t)]}=\frac{Z^{2}(t)}{\left\\{1+\mathcal{O}\left(\frac{\ln\ln t}{\ln t}\right)\right\\}\ln t}$ (see [3], (3.9); [5], (1.3); [9], (1.1), (3.1), (3.2)) and $\varphi(t)$ is the Jacob’s ladder, i.e. the solution of the following nonlinear integral equation $\int_{0}^{\mu[x(T)]}Z^{2}(t)e^{-\frac{2}{x(T)}t}{\rm d}t=\int_{0}^{T}Z^{2}(t){\rm d}t$ that was introduced in our paper [3]. Next, we have (see [1], comp. 
[18]) (2.3) $\begin{split}&G_{3}(x)=G_{3}(x;T,U)=\\\ &=\bigcup_{T\leq g_{2\nu}\leq T+U}\\{t:\ g_{2\nu}(-x)\leq t\leq g_{2\nu}(x)\\},\ 0<x\leq\frac{\pi}{2},\\\ &G_{4}(y)=G_{4}(y;T,U)=\\\ &=\bigcup_{T\leq g_{2\nu+1}\leq T+U}\\{t:\ g_{2\nu+1}(-y)\leq t\leq g_{2\nu+1}(y)\\},\ 0<y\leq\frac{\pi}{2},\end{split}$ and the collection of sequences $\\{g_{\nu}(\tau)\\},\ \tau\in[-\pi,\pi],\ \nu=1,2,\dots$ is defined by the equation (see [1], [18], (6)) $\vartheta_{1}[g_{\nu}(\tau)]=\frac{\pi}{2}\nu+\frac{\tau}{2};\ g_{\nu}(0)=g_{\nu}$ where (comp. (1.4)) $\vartheta_{1}(t)=\frac{t}{2}\ln\frac{t}{2\pi}-\frac{t}{2}-\frac{\pi}{8}.$ ### 2.2. In this paper we obtain some new integrals containing the following short trigonometric sums $\begin{split}&\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos(t\ln p),\ \sum_{2\leq n\leq\xi}\frac{1}{\sqrt{n}}\cos(t\ln n),\\\ &\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos(t\ln n)\end{split}$ where $p$ is a prime, $n\in\mathbb{N}$ and $d(n)$ is the number of divisors of $n$. In this direction, the following theorem holds true. ###### Theorem. 
Let (2.4) $G_{3}(x)=\varphi_{1}(\mathring{G}_{3}(x)),\ G_{4}(y)=\varphi_{1}(\mathring{G}_{4}(y)).$ Then we have (2.5) $\begin{split}&\int_{\mathring{G}_{3}(x)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos(\varphi_{1}(t)\ln p)\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &\frac{2x}{\pi}U\ln P\ln\ln P,\ x\in(0,\pi/2],\\\ &\int_{\mathring{G}_{4}(y)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos(\varphi_{1}(t)\ln p)\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &\frac{2y}{\pi}U\ln P\ln\ln P,\ y\in(0,\pi/2],\end{split}$ (2.6) $\begin{split}&\int_{\mathring{G}_{3}(x)}\left(\sum_{2\leq n\leq\xi}\frac{1}{\sqrt{n}}\cos(\varphi_{1}(t)\ln n)\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &\frac{1}{\pi}\left\\{\left(\frac{2\epsilon}{5}-\frac{\epsilon^{2}}{50}\right)x+\frac{\epsilon^{2}}{50}\sin x\right\\}U\ln^{2}P,\\\ &\int_{\mathring{G}_{4}(y)}\left(\sum_{2\leq n\leq\xi}\frac{1}{\sqrt{n}}\cos(\varphi_{1}(t)\ln n)\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &\frac{1}{\pi}\left\\{\left(\frac{2\epsilon}{5}-\frac{\epsilon^{2}}{50}\right)y-\frac{\epsilon^{2}}{50}\sin y\right\\}U\ln^{2}P,\end{split}$ (2.7) $\begin{split}&\int_{\mathring{G}_{3}(x)}\left(\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos(\varphi_{1}(t)\ln n)\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &\frac{\sin x}{2500\pi^{3}}U\ln^{4}P,\\\ &\int_{\mathring{G}_{4}(y)}\left(\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos(\varphi_{1}(t)\ln n)\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &-\frac{\sin y}{2500\pi^{3}}U\ln^{4}P,\end{split}$ where (2.8) $t-\varphi_{1}(t)\sim(1-c)\pi(t),\ t\to\infty,$ and $\pi(t)$ is the prime-counting function. ###### Remark 3. Let $T=\varphi_{1}(\mathring{T})$, $T+U=\varphi_{1}(\widering{T+U})$, (comp. (2.4)). 
Then from (2.8), similarly to [14], (1.8), we obtain $\rho\\{[T,T+U];[\mathring{T},\widering{T+U}]\\}\sim(1-c)\pi(T);\ T+U<\mathring{T},$ where $\rho$ stands for the distance of the corresponding segments. ###### Remark 4. The formulae (2.5) - (2.7) cannot be obtained in the classical theory of A. Selberg, and still less in the theories of Balasubramanian, Heath-Brown and Ivic. ## 3\. New asymptotic formulae for the short trigonometric sums: their dependence on $|\zeta\left(\frac{1}{2}+it\right)|^{2}$ Putting $x=y=\pi/2$ in (2.5), we obtain $\begin{split}&\int_{\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{\varphi_{1}(t)\ln p\\}\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &2U\ln P\ln\ln P.\end{split}$ Using the mean-value theorem successively (since $\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)$ is a segment), we have (3.1) $\begin{split}&\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{\varphi_{1}(\alpha_{1})\ln p\\}\int_{\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)}Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t=\\\ &=\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{\varphi_{1}(\alpha_{1})\ln p\\}Z^{2}\\{\varphi_{1}(\alpha_{2})\\}\int_{\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &2U\ln P\ln\ln P,\ \alpha_{1},\alpha_{2}\in\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2);\ \alpha_{1}=\alpha_{1}(T,U)=\alpha_{1}(T,\epsilon),\dots.\end{split}$ Since (3.2) $\int_{\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)}\tilde{Z}^{2}(t){\rm d}t=\left|\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)\right|$ (comp. Remark 8), and (3.3) $m\\{\mathring{G}_{3}(x)\\}\sim\frac{x}{\pi}U,\ m\\{\mathring{G}_{4}(y)\\}\sim\frac{y}{\pi}U\ \Rightarrow\ \left|\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)\right|\sim U$ (see [2], (13); $m$ stands for the measure), then we obtain from (2.5) (see (3.1) - (3.3)) the following ###### Corollary 1. 
For every $T\geq T_{0}[\varphi_{1}]$ there are values $\alpha_{1}(T),\alpha_{2}(T)\in\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)$ such that (3.4) $\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{\varphi_{1}(\alpha_{1}(T))\ln p\\}\sim\frac{2\ln P\ln\ln P}{\left|\zeta\left(\frac{1}{2}+i\varphi_{1}(\alpha_{2}(T))\right)\right|^{2}},\ T\to\infty$ where $\varphi_{1}(\alpha_{1}(T)),\varphi_{1}(\alpha_{2}(T))\in G_{3}(\pi/2)\cup G_{4}(\pi/2)$. Similarly, we obtain from (2.6) ###### Corollary 2. For every $T\geq T_{0}[\varphi_{1}]$ there are values $\alpha_{3}(T),\alpha_{4}(T)\in\mathring{G}_{3}(\pi/2)\cup\mathring{G}_{4}(\pi/2)$ such that (3.5) $\begin{split}&\sum_{2\leq n\leq\xi}\frac{1}{\sqrt{n}}\cos\\{\varphi_{1}(\alpha_{3}(T))\ln n\\}\sim\\\ &\sim\left(\frac{2\epsilon}{5}-\frac{\epsilon^{2}}{50}\right)\frac{\ln^{2}P}{\left|\zeta\left(\frac{1}{2}+i\varphi_{1}(\alpha_{4}(T))\right)\right|^{2}},\ T\to\infty\end{split}$ where $\varphi_{1}(\alpha_{3}(T)),\varphi_{1}(\alpha_{4}(T))\in G_{3}(\pi/2)\cup G_{4}(\pi/2)$. ###### Remark 5. From the asymptotic formulae (3.4), (3.5) it follows that the values of the mentioned short trigonometric sums are connected with the values of the Riemann zeta-function $\zeta\left(\frac{1}{2}+it\right)$ for some infinite subset of $t$. ## 4\. New asymptotic formulae on two collections of disconnected sets $G_{3}(x),G_{4}(y)$ From (2.7), similarly to Section 3, we obtain ###### Corollary 3. 
(4.1) $\begin{split}&\left.\left\langle\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{\varphi_{1}(t)\ln n\\}\right\rangle\right|_{\mathring{G}_{3}(x)}\sim\\\ &\sim\frac{1}{2500\pi^{2}}\frac{\sin x}{x}\frac{\ln^{4}P}{\langle Z^{2}\\{\varphi_{1}(t)\\}\rangle|_{\mathring{G}_{3}(x)}}\\\ &\left.\left\langle\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{\varphi_{1}(t)\ln n\\}\right\rangle\right|_{\mathring{G}_{4}(y)}\sim\\\ &\sim-\frac{1}{2500\pi^{2}}\frac{\sin y}{y}\frac{\ln^{4}P}{\langle Z^{2}\\{\varphi_{1}(t)\\}\rangle|_{\mathring{G}_{4}(y)}},\ T\to\infty\end{split}$ where $\langle(\dots)\rangle|_{\mathring{G}_{3}(x)},\dots$ denote the mean value of $(\dots)$ on $\mathring{G}_{3}(x),\dots$ . ###### Remark 6. It follows from (4.1) that the short trigonometric sum $\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{t\ln n\\},\ t\geq T_{0}[\varphi_{1}]$ has infinitely many zeroes of odd order. ## 5\. Law of the asymptotic equality of areas Let $\begin{split}&\mathring{G}_{3}^{+}(x)=\left\\{t:\ t\in\mathring{G}_{3}(x),\ \sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{\varphi_{1}(t)\ln n\\}>0\right\\},\\\ &\vdots\\\ &\mathring{G}_{4}^{-}(x)=\left\\{t:\ t\in\mathring{G}_{4}(x),\ \sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{\varphi_{1}(t)\ln n\\}<0\right\\}.\end{split}$ Then we obtain from (2.7) (comp. Corollary 3 in [14]) ###### Corollary 4. (5.1) $\begin{split}&\int_{\mathring{G}_{3}^{+}(x)\cup\mathring{G}_{4}^{+}(x)}\left(\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{\varphi_{1}(t)\ln n\\}\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t\sim\\\ &\sim-\int_{\mathring{G}_{3}^{-}(x)\cup\mathring{G}_{4}^{-}(x)}\left(\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{\varphi_{1}(t)\ln n\\}\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t.\end{split}$ ###### Remark 7. 
The formula (5.1) represents the law of the asymptotic equality of the areas (measures) of complicated figures corresponding to the positive part and the negative part, respectively, of the graph of the function (5.2) $\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{\varphi_{1}(t)\ln n\\}Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t),\ t\in\mathring{G}_{3}(x)\cup\mathring{G}_{4}(x),$ where $x\in(0,\pi/2]$. This is one of the laws governing the _chaotic_ behaviour of the positive and negative values of the signal (5.2). This signal is created by the complicated modulation of the fundamental signal $Z(t)=e^{i\vartheta(t)}\zeta\left(\frac{1}{2}+it\right)$, (comp. (1.4), (2.2)). ## 6\. Proof of the Theorem ### 6.1. Let us recall that the following lemma holds true (see [8], (2.5); [9], (3.3)): for every integrable function (in the Lebesgue sense) $f(x),\ x\in[\varphi_{1}(T),\varphi_{1}(T+U)]$ we have (6.1) $\int_{T}^{T+U}f[\varphi_{1}(t)]\tilde{Z}^{2}(t){\rm d}t=\int_{\varphi_{1}(T)}^{\varphi_{1}(T+U)}f(x){\rm d}x,\ U\in(0,T/\ln T],$ where $t-\varphi_{1}(t)\sim(1-c)\pi(t)$. In the case (comp. (2.4)) $T=\varphi_{1}(\mathring{T})$, $T+U=\varphi_{1}(\widering{T+U})$, we obtain from (6.1) the following equality (6.2) $\int_{\mathring{T}}^{\widering{T+U}}f[\varphi_{1}(t)]\tilde{Z}^{2}(t){\rm d}t=\int_{T}^{T+U}f(x){\rm d}x.$ ### 6.2. First of all, we have from (6.2), for example, $\int_{\mathring{g}_{2\nu}(-x)}^{\mathring{g}_{2\nu}(x)}f[\varphi_{1}(t)]\tilde{Z}^{2}(t){\rm d}t=\int_{g_{2\nu}(-x)}^{g_{2\nu}(x)}f(t){\rm d}t,$ (see (2.3)). 
Next, in the case $f(t)=\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{t\ln p\\}\right)Z^{2}(t)$ we have the following $\tilde{Z}^{2}$-transformation (6.3) $\begin{split}&\int_{\mathring{G}_{3}(x)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{\varphi_{1}(t)\ln p\\}\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t=\\\ &=\int_{G_{3}(x)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{t\ln p\\}\right)Z^{2}(t){\rm d}t,\\\ &\int_{\mathring{G}_{4}(y)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{\varphi_{1}(t)\ln p\\}\right)Z^{2}\\{\varphi_{1}(t)\\}\tilde{Z}^{2}(t){\rm d}t=\\\ &=\int_{G_{4}(y)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{t\ln p\\}\right)Z^{2}(t){\rm d}t.\end{split}$ Let us recall that we have proved (see [2], (13) and Corollary 7) the following formulae (6.4) $\begin{split}&\int_{G_{3}(x)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{t\ln p\\}\right)Z^{2}(t){\rm d}t\sim\frac{2x}{\pi}U\ln P\ln\ln P,\\\ &\int_{G_{4}(y)}\left(\sum_{2\leq p\leq\xi}\frac{1}{\sqrt{p}}\cos\\{t\ln p\\}\right)Z^{2}(t){\rm d}t\sim\frac{2y}{\pi}U\ln P\ln\ln P.\end{split}$ Now, our formulae (2.5) follow from (6.3), (6.4). ### 6.3. 
Similarly, from the formulae $\begin{split}&\int_{G_{3}(x)}\left(\sum_{2\leq n\leq\xi}\frac{1}{\sqrt{n}}\cos\\{t\ln n\\}\right)Z^{2}(t){\rm d}t\sim\\\ &\sim\frac{x}{\pi}\left(\frac{2\epsilon}{5}-\frac{\epsilon^{2}}{50}+\frac{\epsilon^{2}}{50}\frac{\sin x}{x}\right)U\ln^{2}P,\\\ &\int_{G_{4}(y)}\left(\sum_{2\leq n\leq\xi}\frac{1}{\sqrt{n}}\cos\\{t\ln n\\}\right)Z^{2}(t){\rm d}t\sim\\\ &\sim\frac{y}{\pi}\left(\frac{2\epsilon}{5}-\frac{\epsilon^{2}}{50}-\frac{\epsilon^{2}}{50}\frac{\sin y}{y}\right)U\ln^{2}P,\end{split}$ and $\begin{split}&\int_{G_{3}(x)}\left(\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{t\ln n\\}\right)Z^{2}(t){\rm d}t\sim\frac{\sin x}{2500\pi^{3}}U\ln^{4}P,\\\ &\int_{G_{4}(y)}\left(\sum_{2\leq n\leq\xi}\frac{d(n)}{\sqrt{n}}\cos\\{t\ln n\\}\right)Z^{2}(t){\rm d}t\sim-\frac{\sin y}{2500\pi^{3}}U\ln^{4}P\end{split}$ (see [2], (13) and Corollaries 8 and 9) we obtain (2.6) and (2.7), respectively. ###### Remark 8. The formulae of type (3.2) can be obtained from (6.2) by putting $f(t)\equiv 1$. I would like to thank Michal Demetrian for helping me with the electronic version of this work. ## References * [1] A. Moser, ‘New mean-value theorems for the function $|\zeta\left(\frac{1}{2}+it\right)|^{2}$‘, Acta Math. Univ. Comen., 46-47 (1985), 21-40, (in Russian). * [2] J. Moser, ‘The structure of the A. Selberg’s formula in the theory of the Riemann zeta-function‘, Acta Math. Univ. Comen., 48-49, (1986), 93-121, (in Russian). * [3] J. Moser, ‘Jacob’s ladders and the almost exact asymptotic representation of the Hardy-Littlewood integral’, (2008), arXiv:0901.3973. * [4] J. Moser, ‘Jacob’s ladders and the tangent law for short parts of the Hardy-Littlewood integral’, (2009), arXiv:0906.0659. * [5] J. Moser, ‘Jacob’s ladders and the multiplicative asymptotic formula for short and microscopic parts of the Hardy-Littlewood integral’, (2009), arXiv:0907.0301. * [6] J. 
Moser, ‘Jacob’s ladders and the quantization of the Hardy-Littlewood integral’, (2009), arXiv:0909.3928. * [7] J. Moser, ‘Jacob’s ladders and the first asymptotic formula for the expression of the sixth order $|\zeta(1/2+i\varphi(t)/2)|^{4}|\zeta(1/2+it)|^{2}$’, (2009), arXiv:0911.1246. * [8] J. Moser, ‘Jacob’s ladders and the first asymptotic formula for the expression of the fifth order $Z[\varphi(t)/2+\rho_{1}]Z[\varphi(t)/2+\rho_{2}]Z[\varphi(t)/2+\rho_{3}]\hat{Z}^{2}(t)$ for the collection of disconnected sets‘, (2009), arXiv:0912.0130. * [9] J. Moser, ‘Jacob’s ladders, the iterations of Jacob’s ladder $\varphi_{1}^{k}(t)$ and asymptotic formulae for the integrals of the products $Z^{2}[\varphi^{n}_{1}(t)]Z^{2}[\varphi^{n-1}(t)]\cdots Z^{2}[\varphi^{0}_{1}(t)]$ for arbitrary fixed $n\in\mathbb{N}$‘ (2010), arXiv:1001.1632. * [10] J. Moser, ‘Jacob’s ladders and the asymptotic formula for the integral of the eight order expression $|\zeta(1/2+i\varphi_{2}(t))|^{4}|\zeta(1/2+it)|^{4}$‘, (2010), arXiv:1001.2114. * [11] J. Moser, ‘Jacob’s ladders and the asymptotically approximate solutions of a nonlinear diophantine equation‘, (2010), arXiv:1001.3019. * [12] J. Moser, ‘Jacob’s ladders and the asymptotic formula for short and microscopic parts of the Hardy-Littlewood integral of the function $|\zeta(1/2+it)|^{4}$‘, (2010), arXiv:1001.4007. * [13] J. Moser, ‘Jacob’s ladders and the nonlocal interaction of the function $|\zeta(1/2+it)|$ with $\arg\zeta(1/2+it)$ on the distance $\sim(1-c)\pi(t)$‘, (2010), arXiv:1004.0169. * [14] J. Moser, ‘Jacob’s ladders and the $\tilde{Z}^{2}$-transformation of polynomials in $\ln\varphi_{1}(t)$‘, (2010), arXiv:1005.2052. * [15] J. Moser, ‘Jacob’s ladders and the oscillations of the function $|\zeta\left(\frac{1}{2}+it\right)|^{2}$ around the main part of its mean-value; law of the almost exact equality of the corresponding areas‘, (2010), arXiv:1006.4316. * [16] J. 
Moser, ‘Jacob’s ladders and the nonlocal interaction of the function $Z(t)$ with the function $\tilde{Z}^{2}(t)$ on the distance $\sim(1-c)\pi(t)$ for a collection of disconnected sets‘, (2010), arXiv:1006.5158. * [17] J. Moser, ‘Jacob’s ladders and the $\tilde{Z}^{2}$-transformation of the orthogonal system of trigonometric functions‘, (2010), arXiv:1007.0108. * [18] J. Moser, ‘Jacob’s ladders and the nonlocal interaction of the function $Z^{2}(t)$ with the function $\tilde{Z}^{2}(t)$ on the distance $\sim(1-c)\pi(t)$ for the collections of disconnected sets‘, (2010), arXiv:1007.5147. * [19] A. Selberg, ‘On the zeroes of Riemann’s zeta-function‘, Skr. Norske vid. Akad. Oslo, 10 (1942), 1-59. * [20] E.C. Titchmarsh, ‘The theory of the Riemann zeta-function‘, Clarendon Press, Oxford, 1951.
# Parameter Refinement of a Ballbot and Predictive Control for Reference Tracking with Linear Parameter-Varying Embedding 1st Dimitrios S. Karachalios This work was funded by the German Research Foundation (DFG), project number 419290163. Institute for Electrical Engineering in Medicine University of Luebeck Lübeck, Germany email<EMAIL_ADDRESS>2nd Hossam S. Abbas Institute for Electrical Engineering in Medicine University of Luebeck Lübeck, Germany email<EMAIL_ADDRESS> ###### Abstract In this study, we implement a control method that stabilizes a ballbot while it simultaneously follows a reference. A ballbot is a robot balancing on a spherical wheel; the single point of contact with the ground makes it omnidirectional and highly maneuverable but inherently unstable. After introducing the scheduling parameters, we start the analysis by embedding the nonlinear dynamic model derived from first principles into a linear parameter-varying (LPV) formulation. Subsequently, and as an extension of a past study, we refine the parameters of the nonlinear model, which significantly enhances its accuracy. The crucial advantage of the LPV formulation is that it provides a nonlinear predictor that can be used in model predictive control (MPC) while retaining the convexity of the quadratic optimization problem with linear constraints; it thereby avoids the computational burden of other nonlinear MPC methods at only a slight loss in performance. The LPVMPC control method can be solved efficiently as a quadratic program (QP), with execution times that support real-time implementation. Finally, to illustrate the method, we test the control designs on a two-set-point 1D non-smooth reference with sudden changes, on a 2D nonstationary smooth reference known as a Lissajous curve, and on a single-set-point 1D non-smooth reference for which theoretical guarantees such as stability and recursive feasibility are provided. 
###### Index Terms: Identification, Control, Ballbot, Model Predictive Control, Linear Parameter-Varying, Quadratic Program ## I Introduction A ballbot (Fig. 1) is an omnidirectional mobile robot that balances on a single spherical wheel [1, 2, 3]. The single point of contact with the ground makes this under-actuated system agile but challenging to control. Use cases of a ballbot as a service robot include healthcare, virtual assistance, etc., where maneuverability in busy environments while maintaining stability is important. Stability of a ballbot system entails both balancing and trajectory tracking along a predetermined path. Linear control of a ballbot [1] has proven insufficient, and the LPV model, being much closer to the nonlinear model, has shown great potential in several applications, such as autonomous vehicles [4]. The dynamic model structure is well known for robotic systems based on physical laws. The parameters can be identified through classical mechanics [5] and enhanced white-box modeling, but parameters that describe friction can only be determined experimentally. This study provides a method that refines the parameters of the nonlinear ballbot model from the measurements derived in [1]. For controlling the ballbot, PID controllers are often considered [6, 1], possibly in double-loop approaches [3]. One limitation of the aforementioned methods is that they do not consider input/state constraints. Model predictive control (MPC) has been widely applied for reference tracking with constraints [4]. It can generate optimal trajectories that steer the robot in a constrained space. Since autonomously moving robots are safety-critical systems, nonlinear MPC (NMPC) is gaining popularity due to its ability to utilize high-fidelity nonlinear models, enabling more accurate and precise control actions. Another advantage of MPC is that it can be leveraged with planning algorithms, which is crucial for such autonomously moving objects. 
The drawback of NMPC is the computational complexity that makes online minimization of the underlying objective function over a nonconvex manifold cumbersome. Therefore, a good alternative is to utilize the LPV formulation, which can embed the nonlinear model equivalently, and to introduce the LPVMPC framework, which can be solved efficiently as a QP with nearly the same computational burden as linear MPC, enabling real-time implementation. Figure 1: The Ballbot constructed at the Institute for Electrical Engineering in Medicine (IME) and side view on the $xz$-plane. Contents: Sec. II contains preliminaries and definitions. In Sec. II-A, the nonlinear ballbot model is presented. Subsequently, in Sec. II-B, the linear parameter-varying (LPV) model is presented with technical aspects. In Sec. II-C, we refer to other stabilization methods that will help tune our terminal cost, and in Sec. II-D, the complete LPVMPC framework is introduced with all the details, along with algorithms for ease of implementation. Sec. III contains results; in particular, in Sec. III-A, the nonlinear model is identified. In Sec. III-B, three representative scenarios of reference tracking illustrate the developed method, together with results that ensure theoretical guarantees. Finally, a discussion followed by an acknowledgment concludes the study. Notations: $I_{n}$ is the identity matrix of dimension $n$. The notation $Q\succeq 0$ represents the positive semi-definiteness of a matrix $Q$. The weighted norm $\|x\|_{Q}$ is defined as $\|x\|_{Q}^{2}=x^{\top}Qx$ and similarly $\lVert x\rVert^{2}_{P}=x^{\top}Px,~{}\lVert u\rVert^{2}_{R}=u^{\top}Ru$. The function $\texttt{diag}(\mathbf{x})$ constructs a diagonal square matrix from a vector $\mathbf{x}$. The set of non-negative integers is denoted by $\mathbb{Z}_{+}\cup\\{0\\}$; $i|k$ denotes at time $k$ the $i^{\textrm{th}}$ prediction. 
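Using the weighted-norm notation above, the batch (condensed) form of a linear MPC problem can be sketched as follows. This is only an illustrative, unconstrained sketch with made-up numbers: for a fixed discrete-time pair $(A,B)$ the predictions are stacked, and the convex QP is solved in closed form. The constrained, scheduling-dependent LPVMPC of Sec. II-D refines this idea.

```python
import numpy as np

def condensed_mpc_gain(A, B, Q, R, P, N):
    """Unconstrained batch MPC for x_{k+1} = A x_k + B u_k.

    Minimizes sum_{i=0}^{N-1}(||x_i||_Q^2 + ||u_i||_R^2) + ||x_N||_P^2
    by stacking X = F x0 + G U and solving the resulting convex QP in
    closed form. Returns K such that the first move is u_0 = -K x0.
    """
    n, m = B.shape
    # Prediction matrices: X = [x_1; ...; x_N] = F x0 + G U.
    F = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for i in range(N):          # block row (state x_{i+1})
        for j in range(i + 1):  # block column (input u_j)
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Qbar[-n:, -n:] = P                      # terminal weight on x_N
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar               # QP Hessian (positive definite)
    f = G.T @ Qbar @ F                      # linear-term factor
    return np.linalg.solve(H, f)[:m, :]     # first-move feedback gain
```

With input/state constraints the same $H$ and $f$ define a QP that is handed to a solver at every sampling instant instead of being solved in closed form.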
## II Preliminaries and Definitions ### II-A Nonlinear Model of a Ballbot The ballbot dynamics, formulated via the Euler–Lagrange method, are thoroughly discussed in [1, 2]. A first attempt towards model discovery was performed in [1], where a linearized model was used to balance the ballbot. The ballbot is a $5$-DoF system, where three degrees correspond to the body’s rotational motion and two to the translation in the $xy$-plane. We will see that the body’s angular position $\theta$ is used mainly for balancing, and the angular position of the ball $\phi$ is used mainly for tracking; see Fig. 1. The symmetry of the ballbot control problem allows us to focus independently on just the $xz$-plane [1] to simulate and understand its stability and controllability. The ballbot state-space model in (1) gives us the two observable and controllable states $q=[\phi_{y},\theta_{y}]^{T}$, together with $\tau_{y}$, the control input torque of the virtual wheel actuator of the ballbot in the $xz$-plane as in Fig. 1 (right). The ballbot model consists of the matrices $M,C,D,G$, denoting the mass matrix, Coriolis and centrifugal forces, frictional torque, and gravitational forces, respectively, together with the input vector $\widetilde{B}$. The nonlinear mechanical model is $\displaystyle M(q)\ddot{q}+C(q,\dot{q})+D(\dot{q})+G(q)=\widetilde{B}\tau_{y},$ (1) where the matrices are defined as: $\displaystyle\begin{aligned} M&=\begin{bmatrix}b_{1}&-b_{2}+\ell r_{b}\cos{\theta_{y}}\\\ -b_{2}+\ell r_{b}\cos{\theta_{y}}&b_{3}\end{bmatrix},\\\ C&=\begin{bmatrix}-\ell r_{b}\sin(\theta_{y})\dot{\theta}_{y}^{2}\\\ 0\end{bmatrix},~{}D=\begin{bmatrix}b_{4}\dot{\phi}_{y}\\\ 0\end{bmatrix},\\\ G&=\begin{bmatrix}0\\\ -\ell g\sin({\theta_{y}})\end{bmatrix},~{}\widetilde{B}=\begin{bmatrix}\frac{r_{b}}{r_{w}}\\\ -\frac{r_{b}}{r_{w}}\end{bmatrix}.\end{aligned}$ (2) The nonlinear state-space representation of the planar ballbot model is given next in (3). 
$\displaystyle\underbrace{\begin{bmatrix}\dot{q}\\\ \ddot{q}\end{bmatrix}}_{\dot{x}}=\underbrace{\begin{bmatrix}\dot{q}\\\ M^{-1}(-C-D-G)\end{bmatrix}+\begin{bmatrix}0\\\ M^{-1}\widetilde{B}\end{bmatrix}\tau_{y}}_{f(x,u)},$ (3) where $x=[\phi_{y},\theta_{y},\dot{\phi}_{y},\dot{\theta}_{y}]^{\top}$ is the state vector of the system, with $\phi_{y}$ the angular displacement, $\theta_{y}$ the tilt angle, $\dot{\phi}_{y}$ the angular speed of the ballbot, and $\dot{\theta}_{y}$ the angular speed of the tilt angle. The input to each plane, e.g., $\tau_{y}$, is the virtual wheel’s torque (Fig. 1). The virtual torques $(\tau_{x},\tau_{y},\tau_{z})$ can be converted to the real torques of the three motors $(\tau_{1},\tau_{2},\tau_{3})$ as in [1] by using (4), where for the design in Fig. 1 the zenith angle is $\alpha=\pi/4=45^{\circ}$. $\begin{bmatrix}\tau_{1}\\\ \tau_{2}\\\ \tau_{3}\end{bmatrix}=\frac{1}{3}\begin{bmatrix}\frac{2}{\cos{\alpha}}&0&\frac{1}{\sin{\alpha}}\\\ -\frac{1}{\cos{\alpha}}&\frac{\sqrt{3}}{\cos{\alpha}}&\frac{1}{\sin{\alpha}}\\\ -\frac{1}{\cos{\alpha}}&-\frac{\sqrt{3}}{\cos{\alpha}}&\frac{1}{\sin{\alpha}}\end{bmatrix}\begin{bmatrix}\tau_{x}\\\ \tau_{y}\\\ \tau_{z}\end{bmatrix}.$ (4) Now that we have a well-defined nonlinear model, we can introduce an equivalent representation with an LPV embedding. ### II-B The Linear Parameter-Varying (LPV) Embedding The linear parameter-varying (LPV) embedding [7, 8, 9, 10] can be seen as a structured linearization around an adaptive operating point that depends on the scheduling parameters. 
The nonlinear system in (3) can be embedded in a continuous-time LPV representation by denoting with the subscript ”c” the continuous operators that depend on the variable $\rho$, i.e., $A_{c}(\rho)$ and $B_{c}(\rho)$, and defining the continuous-time LPV model as $\left\\{\begin{aligned} \dot{x}(t)&=A_{c}(\rho(t))x(t)+B_{c}(\rho(t))u(t)\equiv f(x(t),u(t)),\\\ \rho(t)&=\sigma(x(t)),~{}x_{0}=x(0),~{}t\geq 0,\end{aligned}\right.$ (5) where $\rho$ is the so-called scheduling parameter, which belongs to the compact scheduling set $\mathcal{P}\subset\mathbb{R}^{n_{\rho}}$ defining its range. Further, $\rho$ depends on the states through the mapping $\sigma\colon\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{\rho}}$. For the nonlinear system under consideration (one plane of the ballbot), the state dimension is $n_{x}=4$. We consider two scheduling parameters of the LPV model, $\theta_{y}(t)$ and $\dot{\theta}_{y}(t)$; thus $n_{\rho}=2$ and $\rho(t)=(\theta_{y}(t),\dot{\theta}_{y}(t))$. Therefore, in this case, the mapping $\sigma$111Functionality: $\sigma(x)=x^{\top}\left[\begin{array}[]{cccc}0&1&0&0\\\ 0&0&0&1\end{array}\right]^{\top}=\left[\begin{array}[]{cc}\theta_{y}&\dot{\theta}_{y}\\\ \end{array}\right]=\rho$. can be seen as a linear function that selects certain states from the state vector. Finally, the input dimension is $n_{u}=1$, with $u(t)=\tau_{y}(t)$ representing the torque of the virtual wheel (Fig. 1). The considered LPV system’s matrices of the continuous-time state-space model are given by (6), and the elements of $A_{c}$ and $B_{c}$ are given in Table I. 
$\displaystyle\footnotesize A_{c}(\rho)=\begin{bmatrix}0&0&1&0\\\ 0&0&0&1\\\ 0&A_{32}&A_{33}&A_{34}\\\ 0&A_{42}&A_{43}&A_{44}\end{bmatrix},~{}B_{c}(\rho)=\begin{bmatrix}0\\\ 0\\\ B_{31}\\\ B_{41}\end{bmatrix},$ (6) TABLE I: Elements of the operators $A_{c}(\rho)$ and $B_{c}(\rho)$; the dependence of each entry $A_{ij}$, $B_{ij}$ on the scheduling signal $\rho=(\theta_{y},\dot{\theta}_{y})$ is indicated within the parentheses in the table. $d(\theta_{y})$ | $b_{1}b_{3}-(-b_{2}+\ell r_{b}\cos({\theta_{y}}))^{2}$ ---|--- $A_{32}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $\frac{\ell g\operatorname{sinc}(\theta_{y})(b_{2}-\ell r_{b}\cos({\theta_{y}}))}{d(\theta_{y})}$ $A_{33}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $-\frac{b_{3}b_{4}}{d(\theta_{y})}$ $A_{34}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $\frac{b_{3}\ell r_{b}\sin({\theta_{y}})\dot{\theta}_{y}}{d(\theta_{y})}$ $A_{42}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $\frac{b_{1}\ell g\operatorname{sinc}(\theta_{y})}{d(\theta_{y})}$ $A_{43}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $-\frac{b_{4}(b_{2}-\ell r_{b}\cos{(\theta_{y})})}{d(\theta_{y})}$ $A_{44}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $\frac{\ell r_{b}\sin{(\theta_{y})}\dot{\theta}_{y}(b_{2}-\ell r_{b}\cos{(\theta_{y})})}{d(\theta_{y})}$ $B_{31}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $\frac{r_{b}(b_{3}-(b_{2}-\ell r_{b}\cos{(\theta_{y})}))}{r_{w}d(\theta_{y})}$ $B_{41}\big{(}\theta_{y},\dot{\theta}_{y}\big{)}$ | $\frac{r_{b}((b_{2}-\ell r_{b}\cos{(\theta_{y})})-b_{1})}{r_{w}d(\theta_{y})}$ A discretization of the nonlinear continuous model is mandatory before setting up the framework that combines the LPV model with Model Predictive Control (MPC). To discretize (5), many methods exist, differing in accuracy and numerical properties (such as numerical stability). 
For instance, the Forward Euler method is commonly used due to its ease of implementation, but it offers low accuracy and, most importantly, can suffer from numerical instability. We evade such numerical pitfalls by utilizing the fourth-order Runge–Kutta method with a single time step (RK4). Consider the sampling time $t_{s}$ that spans the continuous time as $t=t_{s}k,~{}k\in\mathbb{Z}_{+}\cup\\{0\\}$, and denote $x(t_{k})=x(kt_{s})=x_{k}$. We denote the angular velocity of the tilt angle $\theta_{y}$ as $\dot{\theta}_{y}(t):=\omega_{y}(t)$, with corresponding discrete value $\omega_{k}$. Similarly, we denote the angular velocity $\dot{\phi}_{y}(t):=\varphi_{y}(t)$; thus, the complete discrete state vector is $x_{k}=\left[\begin{array}[]{cccc}\phi_{k}&\theta_{k}&\varphi_{k}&\omega_{k}\end{array}\right]^{\top}$. Denoting the continuous-time LPV operator of the ballbot system by $f_{\text{LPV}}$ (i.e., $f_{\text{LPV}}$ is equivalent to the nonlinear operator $f$), we can rewrite (5) as $\dot{x}(t)=f_{\text{LPV}}(x(t),u(t)):=A_{c}(\sigma(x(t)))x(t)+B_{c}(\sigma(x(t)))u(t)$ with the input $u(t)=\tau_{y}(t)$. The fourth-order Runge–Kutta discretization (RK4) with a single time step $t_{s}$ under zero-order hold (ZOH) reads: $\text{RK4}~{}\left\\{\begin{aligned} \kappa_{1}&=f_{\text{LPV}}\left(x_{k},u_{k}\right)\\\ \kappa_{2}&=f_{\text{LPV}}\left(x_{k}+(t_{s}/2)\kappa_{1},u_{k}\right)\\\ \kappa_{3}&=f_{\text{LPV}}\left(x_{k}+(t_{s}/2)\kappa_{2},u_{k}\right)\\\ \kappa_{4}&=f_{\text{LPV}}\left(x_{k}+t_{s}\kappa_{3},u_{k}\right)\\\ x_{k+1}&=x_{k}+(t_{s}/6)(\kappa_{1}+2\kappa_{2}+2\kappa_{3}+\kappa_{4}).\\\ \end{aligned}\right.$ (7) Thus, we introduce the following remark to proceed with linear operations in the LPV formulation, which will be essential for deriving a predictor that retains the characteristics of the underlying optimization problem. 
###### Remark 1 (The operator $f_{\text{LPV}}$ with a given scheduling variable $\rho$ is linear) When the scheduling parameter $\rho$ is given, the LPV operator remains linear w.r.t. the state $x$ and input $u$. In such cases, the LPV continuous operator will be denoted as: $f_{\text{LPV}}^{(\rho)}(\rho,x(t),u(t)):=A_{c}(\rho)x(t)+B_{c}(\rho)u(t).$ (8) Consequently, when the scheduling parameter can be considered known, we denote the LPV operator with a superscript $\rho$ as $f_{\textrm{LPV}}^{(\rho)}$, indicating that $\rho$ is independent of the state/input, which allows linear operations on both. In particular, Remark 1 allows the LPV predictor to be inserted in a linear MPC framework and retain all the characteristics of the optimization problem that stay invariant under linear operations. For instance, the convexity of the optimization problem and the linear constraints are retained, while standard QP algorithms can solve the underlying control problem efficiently. ### II-C Stabilization and closed-loop parameter identification To identify the parameters of the ballbot as a real physical system, it was stabilized in [1] through a proportional-integral-derivative (PID) controller. During stabilization in [1], a multi-harmonic excitation signal provided enough measurements to identify a linear model defined by the $p$-parameters provided in Table II and in the study [1]. The identified linear model could stabilize the ballbot with an optimal feedback gain $K$ obtained from the linear quadratic regulator (LQR) approach. TABLE II: Physical parameters of the ballbot model (2), with $p$ being the linearized model parameters from [1]. 
$p_{1}$ | $p_{2}$ | $p_{3}$ | $p_{4}$ | $p_{5}$ ---|---|---|---|--- -342.6038 | -52.8301 | -1425.9 | -36.0734 | -9.1477 $p_{6}$ | $\ell[m]$ | $r_{b}[m]$ | $r_{w}[m]$ | $g[m/s^{2}]$ 251.8476 | 0.2978 | 0.12 | 0.05 | 9.81 In this study, we use the identified linear model from [1] to discover the $b$-parameters that define the refined values in (1), after solving (offline) a nonlinear optimization problem, as explained in Sec. III-A. Consequently, the LPV embedding of the identified nonlinear model allows the LPV integration within the proposed MPC scheme, as shown in Sec. III-B, which handles the stabilization and reference tracking of the ballbot simultaneously, without cascading complex control designs based on the linearized model. ### II-D Model Predictive Control with LPV Embeddings Model predictive control with linear parameter-varying embeddings implies that the MPC optimally controls an underlying time-varying model based on the scheduling parameters of the LPV formulation (5). The standard MPC form begins with the cost function $J_{k}(u)$ defined in (9), where $x_{i|k}$ denotes the discretized state vector at time $k$ and prediction $i$. The index $i$ ranges over $i=0,\ldots,N-1$, with $N$ the prediction horizon length. The reference state vector is given as $x^{\mathrm{ref}}_{i|k}$. $\displaystyle J_{k}(u)$ $\displaystyle=\sum_{i=0}^{N-1}(\lVert x_{i|k}-x^{\mathrm{ref}}_{i|k}\rVert^{2}_{Q}+\lVert u_{i|k}\rVert^{2}_{R})+$ (9) $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\underbrace{\lVert x_{N|k}-x^{\mathrm{ref}}_{N|k}\rVert^{2}_{P}}_{\text{terminal cost}},$ where $Q\succeq 0,~{}R\succ 0,~{}P\succ 0$ are symmetric weighting matrices. Theoretically, to provide closed-loop stabilization of the LPV model, for some feedback gain $K(\rho)$ the matrix $A(\rho)-B(\rho)K(\rho)$ should be Hurwitz for all possible parameter variations of $\rho$. 
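This Hurwitz condition can be checked numerically by gridding the scheduling set. The sketch below solves the continuous-time algebraic Riccati equation at the origin via the standard Hamiltonian eigenvector method, forms an LQR gain $K$, and inspects the closed-loop eigenvalues of $A_{c}(\rho)-B_{c}(\rho)K$ over a grid of $\rho$ values. The $A_{c},B_{c}$ entries follow Table I, with the refined $b$-parameters reported later in Table III used as assumed numerical values; this is an illustration, not the authors' tooling:

```python
import numpy as np

# Refined b-parameters (Table III) and physical constants (Table II).
b1, b2, b3, b4 = 0.002483, 0.059325, 0.143093, -0.07436
l, rb, rw, g = 0.2978, 0.12, 0.05, 9.81

def lpv_matrices(theta, omega):
    """Continuous-time A_c(rho), B_c(rho) per Table I, rho = (theta, omega)."""
    d = b1 * b3 - (-b2 + l * rb * np.cos(theta)) ** 2
    sinc = np.sinc(theta / np.pi)          # sin(theta)/theta, safe at 0
    A = np.zeros((4, 4))
    A[0, 2] = A[1, 3] = 1.0
    A[2, 1] = l * g * sinc * (b2 - l * rb * np.cos(theta)) / d
    A[2, 2] = -b3 * b4 / d
    A[2, 3] = b3 * l * rb * np.sin(theta) * omega / d
    A[3, 1] = b1 * l * g * sinc / d
    A[3, 2] = -b4 * (b2 - l * rb * np.cos(theta)) / d
    A[3, 3] = l * rb * np.sin(theta) * omega * (b2 - l * rb * np.cos(theta)) / d
    B = np.zeros((4, 1))
    B[2, 0] = rb * (b3 - (b2 - l * rb * np.cos(theta))) / (rw * d)
    B[3, 0] = rb * ((b2 - l * rb * np.cos(theta)) - b1) / (rw * d)
    return A, B

def care(A, B, Q, R):
    """Solve A'P + PA - P B R^{-1} B' P + Q = 0 (Hamiltonian eigenvector method)."""
    n = A.shape[0]
    H = np.block([[A, -B @ np.linalg.inv(R) @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]              # basis of the stable subspace
    X1, X2 = stable[:n, :], stable[n:, :]
    return np.real(X2 @ np.linalg.inv(X1))

A0, B0 = lpv_matrices(0.0, 0.0)
Q, R = np.eye(4), np.eye(1)
K = np.linalg.inv(R) @ B0.T @ care(A0, B0, Q, R)   # LQR gain at the origin

# Check A_c(rho) - B_c(rho) K over a grid of the scheduling set P.
ok = all(
    np.max(np.linalg.eigvals(lpv_matrices(th, om)[0]
                             - lpv_matrices(th, om)[1] @ K).real) < 0
    for th in np.linspace(-0.2, 0.2, 9)
    for om in np.linspace(-1.0, 1.0, 9)
)
```

With these values, `A0[2, 1]` and `B0[2, 0]` reproduce the identified $p_{1}$ and $p_{3}$ of Table II up to the fitting residual, and the fixed gain renders the origin closed-loop Hurwitz; `ok` flags whether the same gain stabilizes every grid point.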
In this study, with suitable tuning of the LQR on the linearized model, we could provide a parameter-independent state feedback gain $K$ that satisfies the Hurwitz condition for the LPV model over a dense grid covering the scheduling space $\mathcal{P}$. In addition, the algebraic Riccati solution $P$ is used to penalize the terminal cost of the MPC. In (9), on the one hand, the explicit decision variables are the input entries $u_{i|k},~{}i=0,\ldots,N-1$; the optimizer should provide a solution in $u$ that minimizes the energy function (9). On the other hand, the $x_{i|k},~{}i=0,\ldots,N$ constitute the implicit decision variables, introduced through a predictor. Suppose the prediction $x_{i|k}$ depends nonlinearly on previous states. In that case, the energy function will not remain quadratic or convex, inevitably leading to a nonconvex optimization problem that is costly to solve. To evade such a computational burden, by utilizing the LPV formulation together with the property in Remark 1, when the scheduling variable is given, the implicit decision variables $x_{i|k}$ enter the energy function linearly and the cost retains its quadratic form, whose minimization is quite efficient through the quadratic program (QP) that we introduce next. Importantly, a good scheduling estimate makes the adaptive nature of the LPV model over a potentially nonlinear manifold quite accurate. Good predictions for the scheduling variable lie at the heart of the LPV method, and approaches such as scheduling tube control [10] can provide further guarantees, such as recursive feasibility and stability. Returning to our aim, we want to minimize the quadratic cost function $J$ in (9) under the linear equality constraints that encode the LPV (given-scheduling) predictor model, together with the linear inequalities given in (10). 
$\displaystyle\begin{aligned} &\mathcal{X}=\\{x_{k}\in\mathbb{R}^{n_{{\mathrm{x}}}}~{}|~{}G^{x}x_{k}\leq h^{x}\\},\\\ &\mathcal{U}=\\{u_{k}\in\mathbb{R}^{n_{u}}~{}|~{}G^{u}u_{k}\leq h^{u}\\},\end{aligned}$ (10) where the state/input inequality constraints emerge as $\displaystyle G^{x}$ $\displaystyle=\left[\begin{array}[]{c}+I_{n_{\mathrm{x}}}\\\ -I_{n_{\mathrm{x}}}\end{array}\right]\in\mathbb{R}^{(2n_{\mathrm{x}})\times n_{\mathrm{x}}},~{}h^{x}=\left[\begin{array}[]{c}+x_{\text{max}}\\\ -x_{\text{min}}\end{array}\right]\in\mathbb{R}^{2n_{\mathrm{x}}},$ $\displaystyle G^{u}$ $\displaystyle=\left[\begin{array}[]{c}+I_{n_{u}}\\\ -I_{n_{u}}\end{array}\right]\in\mathbb{R}^{(2n_{u})\times n_{u}},~{}h^{u}=\left[\begin{array}[]{c}+u_{\text{max}}\\\ -u_{\text{min}}\end{array}\right]\in\mathbb{R}^{2n_{u}}.$ A major strength of the MPC control strategy is precisely that constraints can be introduced: MPC casts the control problem into an optimization problem that, unlike many other methods, can handle constraints. MPC can easily be extended to include trajectory planning for the ballbot when it moves in a structured environment with obstacles; tangent planes can play a significant role in keeping the optimization problem a QP, as in [4]. The QP optimization problem is formulated together with the LPV predictor at the ”present” time $k$ as the function $\mathrm{QP}(\rho_{i|k},x_{k},x^{\mathrm{ref}}_{i+k})$ with input arguments: 1) the given (from the previous time) scheduling signal $\rho_{i|k}$; 2) the initial conditions $x_{k}$; 3) the reference $x_{i+k}^{\textrm{ref}}$. The quadratic program’s (QP) functionality is defined next in Algorithm 1. Algorithm 1 The quadratic program (QP) based-LPVRK4 0: The time step $k\in\mathbb{Z}_{+}$, the weighted costs $Q,~{}R,~{}P$, the triplet $(\rho_{i|k},x_{k},x^{\mathrm{ref}}_{k+i})$, for $i=0,\ldots,N-1$, the horizon length $N\in\mathbb{Z}_{+}$ and the sampling time $t_{s}\in\mathbb{R}_{+}$. 
0: The numerically optimal input design $u_{i|k},~{}i=0,\ldots,N-1$. Minimization of the quadratic cost function: $\underset{u_{i|k}}{\min}\sum_{i=0}^{N-1}\Big{(}\lVert\hat{x}_{i|k}-x^{\mathrm{ref}}_{k+i}\rVert^{2}_{Q}+\lVert u_{i|k}\rVert^{2}_{R}\Big{)}+\underbrace{\lVert\hat{x}_{N}-x^{\mathrm{ref}}_{k+N}\rVert_{P}^{2}}_{\text{terminal cost}}$ Subject to: 1: $\hat{x}_{0|k}=x_{0|k}=x_{k}$, 2: for i=0,…,N-1 do 3: $\kappa_{1}=f_{\textrm{LPV}}^{(\rho_{i|k})}\left(\rho_{i|k},\hat{x}_{i|k},u_{i|k}\right)$, 4: $\kappa_{2}=f_{\textrm{LPV}}^{(\rho_{i|k})}\left(\rho_{i|k},\hat{x}_{i|k}+(t_{s}/2)\kappa_{1},u_{i|k}\right)$, 5: $\kappa_{3}=f_{\textrm{LPV}}^{(\rho_{i|k})}\left(\rho_{i|k},\hat{x}_{i|k}+(t_{s}/2)\kappa_{2},u_{i|k}\right)$, 6: $\kappa_{4}=f_{\textrm{LPV}}^{(\rho_{i|k})}\left(\rho_{i|k},\hat{x}_{i|k}+t_{s}\kappa_{3},u_{i|k}\right)$, 7: Update: $\hat{x}_{i+1|k}=\hat{x}_{i|k}+(t_{s}/6)(\kappa_{1}+2\kappa_{2}+2\kappa_{3}+\kappa_{4})$, 8: end for 9: Satisfy: $\hat{x}_{i|k}\in\mathcal{X},\forall i=1,\dots,N$, 10: Satisfy: $u_{i|k}\in\mathcal{U},\forall i=0,1,\dots,N-1$. Next, we discuss Algorithm 1 step by step when it is called at simulation time instance $k\in\mathbb{Z}_{+}$: * • At time $k$, the input to the QP consists of the matrices $Q,~{}R,~{}P$, the initial condition $x_{k}$, the reference $x_{k+i}^{\textrm{ref}}$, and the scheduling variable, which is either available from the previous time $k-1$ or initialized from the initial conditions $x_{k}$. In particular, when the simulation starts at time $k=0$ and there is no scheduling prediction available from the past, the scheduling variable is initialized as $\rho_{i|0}=\sigma(x_{0}),~{}i=0,\ldots,N-1$. * • The optimization problem consists of the quadratic cost function (9), updated with the implicit decision variables $\hat{x}_{i|k}$ expressed as functions of the explicit decision variables $u_{i|k}$ via the LPV predictor. 
This can be done with linear operations, since the scheduling variable $\rho_{i|k},~{}i=0,\ldots,N-1$ is considered known, by the linearity property explained in Remark 1. The RK4 scheme in lines (2-7) is the discretization that predicts the evolution. * • The QP can be solved together with the constraints in lines (9-10) efficiently with [11], providing the numerically optimized input design at each time $k$ as $u_{i|k},~{}i=0,\ldots,N-1$. Algorithm 2 The Ballbot QP-based LPVMPC algorithm 0: Initial conditions $x_{0}$, with reference $x^{\textrm{ref}}_{k},k\in\mathbb{Z}_{+}\cup\\{0\\}$ 0: The control input $u_{k},~{}k=0,1,\ldots$, that drives the nonlinear system to the reference by satisfying constraints.Algorithm steps 1: Initialize the scheduling parameter for $k=0$ as next: $\rho_{i|0}=(\theta_{0},\omega_{0}),~{}i=0,\ldots,N$ 2: while $k=0,1,\ldots$ do 3: Solve the QP in Algorithm 1 $u_{i|k}\leftarrow\texttt{QP}(\rho_{i|k},x_{k},x_{i+k}^{\text{ref}}),~{}i=0,\ldots,N-1\vspace{-3mm}$ 4: Update the scheduling parameters from the designed input $u_{i|k},~{}i=0,\ldots,N-1$ with (7) as $x_{i+1|k}=\left[\begin{array}[]{c}\phi_{i+1|k}\\\ {\theta_{i+1|k}}\\\ \varphi_{i+1|k}\\\ {\omega_{i+1|k}}\end{array}\right]\underset{\text{RK4}}{\overset{(7)}{=}}f_{\textrm{LPV}}\left(\left[\begin{array}[]{c}\phi_{i|k}\\\ {\theta_{i|k}}\\\ \varphi_{i|k}\\\ {\omega_{i|k}}\end{array}\right],u_{i|k}\right)$ 5: Update $\rho_{i|k}=\left(\theta_{i|k},\omega_{i|k}\right),~{}i=0,\ldots,N$ 6: Apply $u_{k}=u_{0|k}$ to the continuous system (5) 7: $k\leftarrow k+1$ 8: Measure $x_{k}$ 9: Update $\rho_{i|k}\leftarrow\rho_{i+1|k-1},~{}i=0,\ldots,N-1$ 10: end while Having introduced in detail the QP minimization problem tailored to the LPV formulation in Algorithm 1, we now state in Algorithm 2 the complete method for stabilizing the ballbot and tracking a given reference. 
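Algorithms 1 and 2 can be condensed into a brief numerical sketch. The toy example below applies the receding-horizon loop to a scalar nonlinear system with an exact LPV embedding, exploiting Remark 1: with the scheduling sequence frozen, one RK4 step of the LPV model is linear in $(x,u)$, so the horizon cost reduces to a least-squares problem in the inputs. The system, weights, and horizon are illustrative assumptions of this sketch, and the box constraints of (10) are omitted for brevity (a full implementation would hand them to a QP solver):

```python
import numpy as np

# Toy stand-in for the ballbot: scalar plant x' = -x^3 + u with the exact
# LPV embedding A(rho) = -rho^2, B = 1 and scheduling rho = sigma(x) = x.
def f(x, u):
    return -x**3 + u          # true nonlinear plant

def A(rho):
    return -rho**2            # LPV "A" operator; B = 1

ts, N = 0.05, 20              # sampling time and horizon length
q, r = 10.0, 0.01             # stage weights (scalar Q, R)

def rk4_step(fun, x, u, h):
    """One RK4 step with the input held constant (ZOH), as in (7)."""
    k1 = fun(x, u)
    k2 = fun(x + h / 2 * k1, u)
    k3 = fun(x + h / 2 * k2, u)
    k4 = fun(x + h * k3, u)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def frozen_step_coeffs(rho):
    """Per Remark 1: with rho frozen, the RK4 map of the LPV model is
    linear, x+ = alpha*x + beta*u; recover alpha, beta by linearity."""
    lin = lambda x, u: A(rho) * x + u
    return rk4_step(lin, 1.0, 0.0, ts), rk4_step(lin, 0.0, 1.0, ts)

def lpvmpc_input(x0, rho_seq, ref):
    """One (unconstrained) QP of Algorithm 1, solved as least squares."""
    free = np.zeros(N)                 # free response of the predictor
    Gam = np.zeros((N, N))             # input-to-state map over the horizon
    acc, row = x0, np.zeros(N)
    for i in range(N):
        al, be = frozen_step_coeffs(rho_seq[i])
        acc = al * acc
        row = al * row
        row[i] = be
        free[i], Gam[i] = acc, row
    # min_u q*||free + Gam u - ref||^2 + r*||u||^2 as stacked least squares
    Als = np.vstack([np.sqrt(q) * Gam, np.sqrt(r) * np.eye(N)])
    bls = np.concatenate([np.sqrt(q) * (ref - free), np.zeros(N)])
    u = np.linalg.lstsq(Als, bls, rcond=None)[0]
    return u, free + Gam @ u           # inputs and predicted states x_{1..N}

x, ref = 0.0, 1.0
rho = np.zeros(N)                      # rho_{i|0} = sigma(x_0)
for k in range(100):                   # receding-horizon loop (Algorithm 2)
    u, xpred = lpvmpc_input(x, rho, ref)
    x = rk4_step(f, x, u[0], ts)       # apply u_{0|k} to the true plant
    rho = xpred                        # warm-start the scheduling sequence
```

The warm start `rho = xpred` mirrors Steps 4, 5, and 9 of Algorithm 2: the scheduling sequence for the next QP is read off the current prediction, keeping each QP linear while the scheduling adapts along the trajectory.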
Next, every step of Algorithm 2 is explained. * • The input to Algorithm 2 consists of the initial conditions $x_{0}$ and the full state reference $x^{\textrm{ref}}$. For the ballbot system, the state reference contains the angular displacement of the ball $\phi_{k}$. In particular, $\theta_{k}$ is the tilt angle, and its reference target is zero, thus enforcing stabilization at that (unstable equilibrium) operating point. Finally, the remaining states in $x^{\text{ref}}$, which concern the angular velocities, are also set to zero, as the aim is to reach the reference $\phi^{\textrm{ref}}$ and then have the ballbot remain there with $\theta=\dot{\theta}=\dot{\phi}=0$ (steady state) and $\phi$ at the target point. * • In Step 1, the initialization at any instant $k$ of the scheduling parameters can be done either from the previous time as $\rho_{i|k}=\sigma(x_{i+1|k-1}),~{}i=0,\ldots,N-1$ or from the initial conditions as explained in Algorithm 1. * • In Steps 2-3, we call the QP of Algorithm 1 to provide the optimally designed input $u_{i|k},~{}i=0,\ldots,N-1$. Note that from the $N$ inputs, we can compute $N+1$ state estimates through the LPV predictor. * • In Step 4, given $x_{k}$ and $u_{i|k}$, we compute the true evolution of the nonlinear system using the exact/embedded nonlinear discrete predictor (7). The scheduling parameters to be used in Step 5 are revealed in Step 4. The hat notation is omitted, as these computations result in the system’s true response, assuming that no disturbance or measurement noise affects the system. * • In Step 6, the one-step implementation of the input to the nonlinear system (5) is done by simulating with ode45 (an adaptive-step Runge–Kutta 4(5) method), where between two consecutive time instances that differ by one sampling time $t_{s}$, a zero-order hold (ZOH) (constant input over the sampling time) is considered. 
This high-fidelity simulation ensures good agreement with the true response of the physical plant. * • In Step 7, we slide the prediction horizon by one (shifting the prediction window of length $N$ by one sampling time $t_{s}$). * • In Step 8, we measure the new initial conditions $x_{k}$. * • Finally, in Step 9, we update the new scheduling variable at time $k$ from the previous simulation time $k-1$, and the while loop in Steps 2-10 runs to provide the next control inputs $u_{k}$ for arbitrary $k$. ## III Results and Discussion ### III-A Refinement of the ballbot physical parameters A linearized continuous model at $x_{e}=[0,0,0,0]^{T}$ can be derived through the LPV model evaluated at $x_{e}$, as in (11). $\displaystyle A_{c}(0,0)$ $\displaystyle=\begin{bmatrix}0&0&1&0\\\ 0&0&0&1\\\ 0&\underbrace{A_{32}(0,0)}_{p_{1}}&\underbrace{A_{33}(0,0)}_{p_{2}}&\underbrace{A_{34}(0,0)}_{0}\\\ 0&\underbrace{A_{42}(0,0)}_{p_{4}}&\underbrace{A_{43}(0,0)}_{p_{5}}&\underbrace{A_{44}(0,0)}_{0}\end{bmatrix},~{}$ (11) $\displaystyle B_{c}(0,0)$ $\displaystyle=\begin{bmatrix}0\\\ 0\\\ \underbrace{B_{3}(0,0)}_{p_{3}}\\\ \underbrace{B_{4}(0,0)}_{p_{6}}\end{bmatrix},~{}d(0)=b_{1}b_{3}-(-b_{2}+\ell r_{b})^{2}.$ By introducing the functions $h_{i}(b):\mathbb{R}^{4}\rightarrow\mathbb{R},~{}i=1,\ldots,6$, supported on the parameter vector $b=(b_{1},b_{2},b_{3},b_{4})$, we can define: $\displaystyle A_{32}(0,0)$ $\displaystyle:=h_{1}(b)=\frac{gl\left(b_{2}-l\mathrm{r_{b}}\right)}{d(0)},$ (12) $\displaystyle A_{33}(0,0)$ $\displaystyle:=h_{2}(b)=-\frac{b_{3}\,b_{4}}{d(0)},$ $\displaystyle A_{42}(0,0)$ $\displaystyle:=h_{4}(b)=\frac{b_{1}\,g\,l}{d(0)},$ $\displaystyle A_{43}(0,0)$ $\displaystyle:=h_{5}(b)=-\frac{b_{4}\,\left(b_{2}-l\,\mathrm{r_{b}}\right)}{d(0)},$ $\displaystyle A_{34}(0,0)$ $\displaystyle:=0,~{}A_{44}(0,0):=0,$ $\displaystyle B_{3}(0,0)$ $\displaystyle:=h_{3}(b)=\frac{b_{3}\,\mathrm{r_{b}}-\mathrm{r_{b}}\,\left(b_{2}-l\,\mathrm{r_{b}}\right)}{\mathrm{r_{w}}\,d(0)},$ 
$\displaystyle B_{4}(0,0)$ $\displaystyle:=h_{6}(b)=\frac{\mathrm{r_{b}}\,\left(b_{2}-l\,\mathrm{r_{b}}\right)-b_{1}\mathrm{r_{b}}}{\mathrm{r_{w}}\,d(0)}.$ To infer the nonlinear $b$-parameters, given the $p$-parameter values, we must solve the nonlinear system of equations (12) for the unknown vector $b=(b_{1},b_{2},b_{3},b_{4})$, with $h_{i}(b):\mathbb{R}^{4}\rightarrow\mathbb{R},~{}i=1,\ldots,6$. The problem is to find $b^{*}$ that satisfies $p_{i}-h_{i}(b^{*})\approx 0$ for all $i=1,\ldots,6$. Thus, we define the operator $F(b):\mathbb{R}^{4}\rightarrow\mathbb{R}^{6}$ along with its Jacobian: $\displaystyle F(b):=\left[\begin{array}[]{cc}p_{1}-h_{1}(b)\\\\[2.84526pt] p_{2}-h_{2}(b)\\\\[2.84526pt] \vdots\\\\[2.84526pt] p_{6}-h_{6}(b)\\\ \end{array}\right],~{}\nabla F(b)=\left[\begin{array}[]{ccc}\frac{\partial F_{1}}{\partial b_{1}}&\cdots&\frac{\partial F_{1}}{\partial b_{4}}\\\\[2.84526pt] \frac{\partial F_{2}}{\partial b_{1}}&\cdots&\frac{\partial F_{2}}{\partial b_{4}}\\\\[2.84526pt] \vdots\\\\[2.84526pt] \frac{\partial F_{6}}{\partial b_{1}}&\cdots&\frac{\partial F_{6}}{\partial b_{4}}\\\ \end{array}\right].$ Since $\nabla F(b)\in\mathbb{R}^{6\times 4}$ is non-square, the Newton scheme for updating the parameters $b$, with initialization $b_{0}$ and $k\in\mathbb{Z}_{+}\cup\\{0\\}$, uses the Moore–Penrose pseudoinverse (i.e., a least-squares Newton step): $b_{k+1}=b_{k}-\left[\nabla F(b_{k})\right]^{+}F(b_{k}),~{}k=0,\ldots$ (13) Upon convergence of (13), the optimal solution vector $b^{*}$ satisfies $F(b^{*})\approx 0$ (with the smallest possible residual). In such problems, where the nonlinear parameterized manifold is nonconvex, the iterative scheme (13) can diverge under random initialization, or can converge to a solution that is algebraically correct but does not reflect the physical law. To tackle this, the $b$-parameters should be initialized by respecting the engineering regularity of the problem. 
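The iteration (13) can be sketched numerically. Since $\nabla F$ maps $\mathbb{R}^{4}\rightarrow\mathbb{R}^{6}$, the update below uses the Moore–Penrose pseudoinverse, a finite-difference Jacobian, and a simple backtracking damping added for robustness (assumptions of this sketch, not the authors' exact code). As a self-consistency check, the targets are generated synthetically from known parameters rather than from the measured $p$-values:

```python
import numpy as np

l, rb, rw, g = 0.2978, 0.12, 0.05, 9.81   # physical constants (Table II)

def h(b):
    """The six functions h_i(b) of (12), evaluated at the origin (cos 0 = 1)."""
    b1, b2, b3, b4 = b
    d = b1 * b3 - (b2 - l * rb) ** 2            # d(0)
    return np.array([
        g * l * (b2 - l * rb) / d,              # h1 -> p1
        -b3 * b4 / d,                           # h2 -> p2
        rb * (b3 - (b2 - l * rb)) / (rw * d),   # h3 -> p3
        b1 * g * l / d,                         # h4 -> p4
        -b4 * (b2 - l * rb) / d,                # h5 -> p5
        (rb * (b2 - l * rb) - b1 * rb) / (rw * d),  # h6 -> p6
    ])

def newton_pinv(p, b0, iters=40, eps=1e-6):
    """Damped variant of (13): b <- b - t * pinv(grad F) F, F(b) = p - h(b)."""
    b = np.asarray(b0, dtype=float)
    for _ in range(iters):
        F = p - h(b)
        J = np.zeros((6, 4))                    # forward-difference Jacobian
        for j in range(4):
            db = np.zeros(4)
            db[j] = eps * max(1.0, abs(b[j]))
            J[:, j] = ((p - h(b + db)) - F) / db[j]
        step = np.linalg.pinv(J) @ F
        t = 1.0                                 # backtrack if residual grows
        while np.linalg.norm(p - h(b - t * step)) > np.linalg.norm(F) and t > 1e-6:
            t *= 0.5
        b = b - t * step
    return b

# Self-consistency demo: targets from known parameters (the refined values
# of Table III), start perturbed by 2 %, and recover a zero residual.
b_true = np.array([0.002483, 0.059325, 0.143093, -0.07436])
p_synth = h(b_true)
b_hat = newton_pinv(p_synth, b_true * 1.02)
```

In the paper's actual fit the targets are the identified $p$-values of Table II and the residual stagnates at a nonzero value (about 0.433); the synthetic targets here merely verify that the damped pseudoinverse iteration converges from a sensible initialization.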
Thus, some parameters $b_{i}$ can be initialized with physical meaning, e.g., $b_{1},~{}b_{2},~{}b_{3}$ being positive, reflecting high-fidelity knowledge of the physical plant [1], whereas $b_{4}$, which concerns friction, can remain arbitrary but with a reasonably small value. In Fig. 2, the iterative scheme (13) converged after five steps and gave the optimal solution with a residual error that stagnated at the value $0.4330$. This error cannot be improved further, as the analysis relies on the identified parameters $p$ from the study [1], where the dynamics are corrupted with noise and/or slight nonlinear behavior of the physical plant. Subsequently, in Fig. 3, the simulation indicates the expected performance after substituting the obtained parameters $b$ from Table III and applying a multi-harmonic input to the stabilized plant. In particular, in Fig. 3, the nonlinear and linearized models stay close in their dynamical evolution for all states when the dynamics are close to the linearization operating point $x_{e}=0$, as expected. The discrepancy increases for larger deviations of $\phi$ from the origin, where the linearized model cannot be trusted. Finally, in Fig. 3, the comparison between the LPV and the nonlinear model certifies that the LPV is an equivalent embedding of the nonlinear system, where differences are buried in machine-precision error. The discrete-time implementation with the RK4 in (7) and a ZOH also certifies good accuracy compared to ode45. Figure 2: Convergence of the Newton scheme with residual error $\|F(b^{*})\|=0.4330$. The optimal parameter vector $b^{*}$ is shown in Tab. III. TABLE III: The refined $b$ parameters of the nonlinear model. 
$b^{*}$ | $b_{1}$ | $b_{2}$ | $b_{3}$ | $b_{4}$ ---|---|---|---|--- Optimal | 0.002483 | 0.059325 | 0.143093 | -0.07436 Figure 3: Model simulations; the linearized model is given in dashed blue, the nonlinear model in black, and the LPV in dashed red, all simulated using ode45 in MATLAB. Finally, in green, the RK4 method utilizing (7). The input $\tau_{y}$ is a multi-harmonic signal. ### III-B LPVMPC for Reference Tracking Reference tracking for the ballbot refers to stabilized motion under the LPVMPC method, which can maneuver over any given state reference $x^{\mathrm{ref}}$. We consider three study scenarios. The 1st scenario is 1D tracking of a nonsmooth, discontinuous trajectory that ”shocks” the optimization problem through its sudden changes. The 2nd scenario is a smooth 2D reference given by Lissajous curves, assuming the ballbot can be driven independently in the $xz$ and $yz$ planes. Finally, the 3rd scenario illustrates the tracking of a single set point with a variant of the LPVMPC that can guarantee closed-loop stability and recursive feasibility of its optimization problem. These properties are explained below. Tuning of the prediction horizon depends on the physical system’s operational bandwidth and can vary significantly across applications. The ballbot is a robotic system that operates at around $1$ Hz; thus, a horizon of about $1$ s is reasonable. #### III-B1 1st Scenario: Nonsmooth 1D reference In Fig. 4, we consider the following two-set-point nonsmooth reference. The ballbot starts from the origin; at $t=1$ s it should roll one full circle, $2\pi$ rad, then stay there for $2$ s, and in the remaining $1$ s return to the origin; thus, it rolls a total distance of approximately $2\cdot 2\pi\cdot r_{b}\approx 1.5~{}\text{m}$ within $4$ s. 
At the beginning, near the origin, a positive virtual torque is applied to the ballbot to decrease the tilt angle $\theta$, which momentarily moves the ballbot toward negative $\phi$ before it switches direction (normal behavior of such a non-minimum-phase system, similar to a bike, which must first make a small maneuver in the opposite direction in order to turn). The ballbot then increases speed in the positive $\phi$ direction, approaching the target $2\pi$ with satisfying accuracy and faster than the linear one after $\approx 1.3$ s; it stays there, and at around $2^{+}$ s, when the prediction horizon of time length $N\cdot t_{s}=20\cdot 0.05=1$ s receives the information of the sudden reference change from $2\pi$ to $0$ rad, the ballbot slightly overshoots in $\phi$ so as to change the tilt angle again and starts to accelerate in the opposite direction. After less than a second, the ballbot reaches the origin without overshooting, which outlines good performance otherwise seen mainly in nonlinear MPC frameworks. In Fig. 4, we compare the MPC performance between the LPV and the linearized model. As expected, the LPVMPC is generally faster at reaching the reference, with almost the same computational complexity as the linear MPC and the potential to handle strong maneuvering better, since the adaptive scheduling variable $\rho_{i|k}$ provides linearizations over a trajectory of the nonlinear ballbot model instead of a single fixed point, as for the linearized model. The quadratic costs for the 1st scenario are defined as $Q=\texttt{diag}([200,1,0.1,0.1])$ and $R=1000$, and the terminal cost $P$ is the Riccati solution of the LQR on the linearized model from [1], with the same tuning of $Q$ and $R$, which penalizes the ”tail” of the horizon. The sampling time is $t_{s}=0.05$ s, with horizon length $N=20$, and the MPC solution is obtained under the input/state hard constraints introduced in Table IV and highlighted with the gray background in Fig. 4. 
TABLE IV: MPC Constraints Variable | Lower bound | Upper bound | Units ---|---|---|--- $\phi_{k}$ | $-\infty$ | $+\infty$ | [rad] $\theta_{k}$ | $-\pi/3$ | $+\pi/3$ | [rad] $\dot{\phi}_{k}$ | $-10\pi$ | $+10\pi$ | $[\frac{\rm rad}{\rm s}]$ $\dot{\theta}_{k}$ | $-2\pi$ | $+2\pi$ | $[\frac{\rm rad}{\rm s}]$ $u$ | $-1.5$ | $+1.5$ | [Nm] In the 1D reference tracking shown in Fig. 4, while traveling in the $y$ direction, the ballbot also balances itself simultaneously: the tilt angle $\theta$ remains within roughly $0$ to $0.5$ rad while the ball travels the referenced angular displacement of $2\pi$ rad, and the corresponding angular velocities are shown in $\dot{\theta}$ and $\dot{\phi}$, respectively. They show the agile and robust movements of the ballbot. The torque $\tau_{y}$ applied to the virtual wheel actuator, in correspondence with the reference, is shown as discrete steps, i.e., ZOH, together with the energy function $J$ minimized to obtain the optimal control input $\tau_{y}$. Figure 4: Ballbot reference tracking with the angular displacement $\phi$ traveling from the origin to $2\pi$ rad and back. A comparison between the linear and LPV MPC frameworks is illustrated. $\theta$ measures the balancing of the ballbot, $\dot{\phi}$ and $\dot{\theta}$ are the angular velocities, $\tau_{y}$ is the virtual torque, and $J$ is the energy cost function. #### III-B2 2nd Scenario: Lissajous 2D smooth curves Solving the LPVMPC reference-tracking problem in parallel for the $yz$-plane and the $xz$-plane, we can drive the ballbot over the $xy$-plane with Lissajous curves as reference, Fig. 5. The ballbot starts from $(x,y)=(\phi_{x}r_{b},\phi_{y}r_{b})=(0,0)$ and follows the path in global coordinates $X^{\textrm{ref}}=\phi_{x}r_{b}=0.12\cdot 2\pi\sin(0.3t)$ and $Y^{\textrm{ref}}=\phi_{y}r_{b}=0.12\cdot 2\pi\sin(0.4t)$, under the constraints in Table IV. In that case, the reference is smoother, without sudden changes. 
Therefore, we can increase the weight on matching $\phi$ without inheriting infeasibility problems; thus, the quadratic cost for the 2D reference tracking in each direction can be set as $Q=\texttt{diag}([1000,1,0.1,0.1])$. All other quadratic weights and parameter specifications are the same as in the 1st scenario. In Fig. 5, the LPVMPC control strategy accurately drives the ballbot along the reference within an area of around $2~{}m^{2}$ over $70$ s.

Figure 5: Stabilization and Lissajous curve trajectory tracking of the ballbot in 2D using the LPVMPC algorithm solved in both $xz$ and $yz$ planes.

#### III-B3 3rd Scenario: Single set point reference and LPVMPC with guarantees

In our last scenario, configured as the 1st (e.g., the same quadratic costs), we illustrate a crucial advantage of considering LPV models for MPC tasks: the ability to provide theoretical guarantees. On the one hand, when a linearized model is considered instead of the underlying nonlinear one, achieving theoretical guarantees within MPC can be a straightforward task, but the result remains quite conservative and thus not practically useful. On the other hand, in the case of NMPC, guarantees can also be provided, at the expense of computational complexity. Consequently, an excellent alternative for providing theoretical guarantees is the LPV formulation within MPC, which can be considered a good trade-off between the complexity and conservativeness of the two previous general approaches. To establish theoretical guarantees, in Algorithm 1 we additionally have to introduce the terminal equality constraint $\mathcal{X}_{t}=\\{\hat{x}_{N|k}\in\mathbb{R}^{n_{\textrm{x}}}~{}|~{}\hat{x}_{N|k}=x^{\text{ref}}_{N+k}\\}.$ (14) Thus, the state constraints $\mathcal{X}$ in Algorithm 1 are enhanced with $\hat{x}_{N|k}\in\mathcal{X}_{t}$.
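The effect of the terminal equality constraint (14) can be sketched on a toy problem: a double integrator stands in for the prediction model and a minimum-energy cost replaces the full $Q$, $R$ weighting (both are assumptions for illustration, not the ballbot matrices). Solving the equality-constrained QP through its KKT system forces the predicted terminal state to land exactly on the reference:

```python
import numpy as np

ts, N = 0.05, 20                           # sampling time and horizon from the paper
A = np.array([[1.0, ts], [0.0, 1.0]])      # toy double-integrator dynamics (assumed)
B = np.array([[0.5 * ts**2], [ts]])

x0 = np.array([0.0, 0.0])
x_ref = np.array([np.pi, 0.0])             # terminal target, cf. the single set point

# Stack the terminal state: x_N = A^N x0 + C u, with C built from A^j B.
C = np.hstack([np.linalg.matrix_power(A, N - 1 - i) @ B for i in range(N)])
b = x_ref - np.linalg.matrix_power(A, N) @ x0

# Minimize u'u subject to C u = b: KKT system [2I C'; C 0][u; lam] = [0; b].
K = np.block([[2 * np.eye(N), C.T], [C, np.zeros((2, 2))]])
u = np.linalg.solve(K, np.concatenate([np.zeros(N), b]))[:N]

# Rolling the dynamics forward, the terminal state hits the reference exactly.
x = x0.copy()
for k in range(N):
    x = A @ x + B[:, 0] * u[k]
print(x)
```

With the terminal constraint active, no terminal weight is needed, mirroring the observation in the text that $P$ plays no role in that case.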
Note that in the presence of the terminal constraint $\mathcal{X}_{t}$, the terminal cost remains zero; thus, in that case, no use of the weight $P$ is needed. In Fig. 6, the dynamics are illustrated when the terminal constraint (14) is active (cyan line), where we lose some performance compared with the case without guarantees (red line). Thus, with (14) active and making use of Theorem 1 in [12] (page 6), all the conditions are satisfied in our 3rd scenario, which leads to the following conclusions: i) the LPVMPC optimization problem is recursively feasible; ii) the closed-loop system satisfies the constraints $u_{i|k}\in\mathcal{U}$, $\hat{x}_{i|k}\in\mathcal{X}$, and $\hat{x}_{N|k}\in\mathcal{X}_{t}$; iii) $(u^{\text{ref}},x^{\text{ref}})=(0,\pi)$ is an exponentially stable (forced) equilibrium of the closed-loop system. The proof of these properties follows the same reasoning as the proof of Theorem 1 in [12].

Figure 6: Single set-point reference, i.e., $\pi$, for the LPVMPC framework with and without theoretical guarantees.

TABLE V: Comparison of computation times for a single QP using YALMIP with LPVMPC. The simulations are performed on a Dell Latitude 5590 laptop with an Intel(R) Core(TM) i7-8650U CPU and 16 GB of RAM. The scenarios are implemented in MATLAB, utilizing the YALMIP toolbox [11], with an optimality tolerance of $10^{-8}$.

LPVMPC with $t_{s}=0.05$ s | Mean | Standard deviation
---|---|---
1st Scenario | 0.0086 s | 0.0104 s
2nd Scenario | 0.0087 s | 0.0132 s
3rd Scenario | 0.0058 s | 0.0025 s

## IV Conclusion

In this study, we were concerned with the reference tracking of a ballbot by utilizing the LPV embedding within the MPC framework. An important step was the refinement of the parameters of the ballbot system from measurements of a past study. Having the nonlinear model then allowed the nonlinear embedding of the LPV form.
An improved discretization method based on the well-known $4^{\textrm{th}}$-order Runge-Kutta scheme, introduced into the LPVMPC framework, mitigates the accuracy issues that can arise with the forward Euler method. The results indicate that applying the LPVMPC-based control method for simultaneous balancing and reference tracking can achieve real-time implementation, as the average time for solving the QP problem is usually about ten times smaller than the sampling time under consideration. Utilizing such a control design that preserves the nonlinear behavior of the model is advantageous and practical, unlike methods that work with linearized models or try to solve very complex optimization problems tailored directly to the nonlinear model, which might be impossible online. The method is illustrated with practical reference tracking scenarios, and theoretical guarantees are provided in the single set-point case. Proving stability and recursive feasibility in the case of multiple reference points is left to future investigation. Finally, as the theoretical establishment of such a robotic system has matured, implementing this method on the physical ballbot in a real environment is left for our immediate future endeavors.

## Acknowledgement

We thank Mr. Ing.-Ievgen Zhavzharov, who constructed the ballbot in Fig. 1.

## References

* [1] M. Studt, I. Zhavzharov, and H. S. Abbas, “Parameter identification and LQR/MPC balancing control of a ballbot,” in _2022 European Control Conference (ECC)_ , 2022, pp. 1315–1321.
* [2] P. Fankhauser and C. Gwerder, “Modeling and control of a ballbot,” 2010. [Online]. Available: https://api.semanticscholar.org/CorpusID:58555644
* [3] D. B. Pham, H. Kim, J. Kim, and S.-G. Lee, “Balancing and transferring control of a ball segway using a double-loop approach [applications of control],” _IEEE Control Systems Magazine_ , vol. 38, no. 2, pp. 15–37, 2018.
* [4] M. Nezami, D. S. Karachalios, G. Schildbach, and H. S.
Abbas, “On the design of nonlinear MPC and LPVMPC for obstacle avoidance in autonomous driving,” in _2023 9th International Conference on Control, Decision and Information Technologies (CoDIT)_ , 2023, pp. 1–6.
* [5] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, _Robotics: Modelling, Planning and Control_ , 1st ed. Springer Publishing Company, Incorporated, 2008.
* [6] S. Puychaison and B. S., “Mouse type ballbot identification and control using a convex-concave optimization,” in _Proceedings of the International Automatic Control Conference (CACS 2019)_ , National Taiwan Ocean University, Keelung, Taiwan, Nov. 2019, pp. 1–6.
* [7] P. S. G. Cisneros, S. Voss, and H. Werner, “Efficient nonlinear model predictive control via quasi-LPV representation,” in _2016 IEEE 55th Conference on Decision and Control (CDC)_ , 2016, pp. 3216–3221.
* [8] M. M. Morato, J. E. Normey-Rico, and O. Sename, “Model predictive control design for linear parameter varying systems: A survey,” _Annual Reviews in Control_ , vol. 49, pp. 64–80, 2020.
* [9] H. S. Abbas, R. Tóth, N. Meskin, J. Mohammadpour, and J. Hanema, “A robust MPC for input-output LPV models,” _IEEE Transactions on Automatic Control_ , vol. 61, no. 12, pp. 4183–4188, 2016.
* [10] H. S. Abbas, “Linear parameter-varying model predictive control for nonlinear systems using general polytopic tubes,” _Automatica_ , vol. 160, p. 111432, 2024. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S000510982300599X
* [11] J. Löfberg, “YALMIP: A toolbox for modeling and optimization in MATLAB,” in _Proceedings of the CACSD Conference_ , Taipei, Taiwan, 2004.
* [12] C. Verhoek, J. Berberich, S. Haesaert, R. Tóth, and H. S. Abbas, “A linear parameter-varying approach to data predictive control,” 2023. [Online]. Available: https://arxiv.org/abs/2311.07140
# Realization of the all-optical phase modulator, filter, splitter, and self-consistent logic gates based on assembled magneto-optical heterostructures

Jie Xu1,2, Yun You3, Fengwen Kang4, Sanshui Xiao5, Lujun Hong6, Yun Shen6, Yamei Luo1,2, and Kosmas L. Tsakmakidis7

1School of Medical Information and Engineering, Southwest Medical University, Luzhou 646000, China
2Medical Engineering & Medical Informatics Integration and Transformational Medicine of Luzhou Key Laboratory, Luzhou 646000, China
3School of Science, East China Jiaotong University, Nanchang 330013, China
4College of Materials Science and Engineering, Sichuan University, Chengdu 610065, China
5DTU Fotonik, Department of Photonics Engineering, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
6Institute of Space Science and Technology, Nanchang University, Nanchang 330031, China
7Section of Condensed Matter Physics, Department of Physics, National and Kapodistrian University of Athens, Panepistimioupolis, GR-157 84 Athens, Greece

Corresponding authors: J. Xu ([email protected]), Y. Luo ([email protected]), and K. L. Tsakmakidis <EMAIL_ADDRESS>

###### Abstract

All-optical computing has recently emerged as a vibrant research field in response to the energy crisis and the growing demand for information processing. However, the efficiency of subwavelength-scale all-optical devices remains relatively low due to challenges such as back-scattering reflections and strict surface-roughness requirements. Furthermore, achieving multifunctionality through the reassembly of all-optical structures has thus far rarely been accomplished. One promising approach to address these issues is the utilization of one-way edge modes. In this study, we propose four types of deep-subwavelength ($\sim 10^{-2}\lambda_{0}$, where $\lambda_{0}$ is the wavelength in vacuum) all-optical functional devices: a phase modulator, a filter, a splitter, and logic gates.
These devices are based on robust one-way modes but do not require an external magnetic field, which allows for flexible assembly. In particular, we demonstrate a phase modulation range spanning from $-\pi$ to $\pi$, a perfect filter that divides the input port’s one-way region into two output one-way regions with equal bandwidth, a multi-frequency splitter with an equal splitting ratio (e.g., 50/50), and self-consistent logic gates. We validate these theoretical findings through comprehensive full-wave numerical simulations. Our results may find applications in minimal optical calculations and integrated optical circuits.

## I Introduction

Optical communication is known for its low loss [1], parallel calculation capability [2], and high speed. As a result, all-optical devices are attracting increasing attention. Typical all-optical devices include, but are not limited to, optical filters, sensors, splitters, switches, frequency/phase/amplitude modulators, couplers, and amplifiers. Most of these devices rely on the electro-optic (EO) effect, the thermo-optic (TO) effect, or the nonlinear optical (NLO) effect. Among these optical effects, the linear EO effect has been extensively studied in fields such as optical sensing and optical frequency combs [3] due to its ultrafast response [4]. The TO effect is commonly employed in the design of optical switches [5], while the NLO effect is primarily used for controlling light with light, enabling applications such as laser generation and optical switching [6]. However, all-optical devices based on the EO, TO, or NLO effects suffer from high power consumption due to reflection induced by reciprocity/symmetry and fabrication errors. In recent years, there has been a significant focus on one-way edge modes due to their potential for breaking the diffraction limit (similar to surface plasmon polaritons, SPPs) [7, 8, 9], robust processing/computing capabilities [10], and low loss. One common approach to achieving one-way edge modes is by utilizing photonic crystals (PhCs).
The key lies in engineering a special surface that is bounded by materials with zero and nonzero Chern numbers. According to the bulk-edge correspondence [11, 12], this ensures the existence of one-way edge mode(s) on this surface. The robustness of one-way modes in PhCs to imperfections or bends has been predicted theoretically and verified experimentally by several groups [13, 14, 15]. However, PhCs still pose challenges in terms of their relatively complex manufacturing process and the large device size compared to the wavelength of the guiding wave ($\lambda_{0}$), which makes it difficult to achieve subwavelength or even deep/ultra-subwavelength optical devices. Recently, our group has proposed several subwavelength magneto-optical (MO) one-way structures and discovered interesting phenomena/devices, such as unidirectional/bidirectional slow light [16, 17], a perfect optical buffer with zero phase shift [18], and all-optical logic gates (LGs) [19]. The most significant advantage of the MO one-way structure is its extremely simple manufacturing process, requiring only two joined layers and no strict requirements on surface roughness. Ultra-subwavelength all-optical devices can be achieved based on the MO structure provided the nonlocal effect is negligible, which holds in common scenarios, especially when the wavenumber is relatively small (e.g., $k<100k_{0}$) [20, 21]. Here, we propose the design of assembled MO structures and several interesting deep-subwavelength ($\sim 10^{-2}\lambda_{0}$) all-optical functional devices, including phase modulators, filters, splitters, and self-consistent logic gates (LGs). We study the propagation characteristics of surface magnetoplasmons (SMPs) [22] in theory, and demonstrate the low-loss properties of one-way SMPs through full-wave numerical simulations. Furthermore, we verify the functionality of these all-optical devices.
The key to realizing the all-optical phase modulator, splitter, and self-consistent LGs lies in the utilization of one-way index-near-zero (INZ) modes. Unlike other INZ modes found in epsilon-near-zero (ENZ) [23, 24, 25], mu-near-zero (MNZ) [26], or epsilon-and-mu-near-zero (EMNZ) [27] metamaterials, the INZ modes in our work exhibit a single propagation direction and can be easily adjusted within a continuous band.

## II Tunable INZ modes and all-optical phase modulator

Figure 1: Assembled all-optical communication systems and functional devices utilizing YIG with remanence, along with perfect electric conductor (PEC) and perfect magnetic conductor (PMC) walls.

Yttrium iron garnet (YIG) stands out as one of the most captivating MO materials, finding widespread applications in optical communication. These applications encompass optical isolators [28, 29], band-pass filters [30], high-efficiency optomagnonic micron-sized resonators [31], robust LGs [19], and more. Benefiting from the unique magnetization characteristics of YIG, residual magnetism persists even after removing the bias magnetic field. This property holds great potential for the realization of reliable all-optical communication, as depicted in Fig. 1. In this paper, we demonstrate that by utilizing YIG with remanence and PMC walls, a wide range of all-optical functional devices, including phase modulators, filters, splitters, and LGs, can be achieved. Importantly, these devices can be finely controlled and integrated into a system, as illustrated in Fig. 1. To investigate the impact of PMC walls on SMPs, we first construct four MO heterostructures surrounded by PEC and/or PMC walls [32]. These structures are named as follows: the PEC-air-YIG-PEC (EDYE) structure, the PEC-air-YIG-PMC (EDYM) structure, the PMC-air-YIG-PEC (MDYE) structure, and the PMC-air-YIG-PMC (MDYM) structure, as illustrated in the inset pictures in Fig. 2.
It should be noted that YIG material with remanence is used in this study, which means that none of the proposed functional devices in this work require precise control of an external magnetic field. Our theoretical analysis reveals the following form for the dispersion relation of these structures: $\alpha_{r}X_{1}+\frac{\mu_{r2}}{\mu_{r1}}k-\alpha_{d}\mu_{vr}X_{2}=0$ (1) where $X_{1}$ and $X_{2}$ are parameters that directly depend on the lower and upper boundary conditions of the structures, respectively. For the four structures, their values are as follows: $\mathrm{Lower\ boundary:}\quad\begin{cases}X_{1}=\frac{1}{\tanh(\alpha_{r}d_{2})}&(\mathrm{PEC})\\\ X_{1}=\frac{\alpha_{r}\tanh(\alpha_{r}d_{2})-\frac{\mu_{r2}}{\mu_{r1}}k}{\alpha_{r}-\frac{\mu_{r2}}{\mu_{r1}}k\tanh(\alpha_{r}d_{2})}&(\mathrm{PMC})\end{cases}$ (2) $\mathrm{Upper\ boundary:}\quad\begin{cases}X_{2}=-\frac{1}{\tanh(\alpha_{d}d_{1})}&(\mathrm{PEC})\\\ X_{2}=-\tanh(\alpha_{d}d_{1})&(\mathrm{PMC})\end{cases}$ (3) where $\mu_{r1}=1+i\frac{\nu\omega_{r}}{\omega(1+\nu^{2})}$ and $\mu_{r2}=-\frac{\omega_{r}}{\omega(1+\nu^{2})}$ are the diagonal and off-diagonal elements of the relative permeability tensor of the YIG layers [33, 7]. $\alpha_{r}$ and $\alpha_{d}$ represent the attenuation factors of the surface wave in the YIG layer and air, respectively [34]. $d_{1}$ and $d_{2}$ are the thicknesses of the air and YIG layers. $\omega_{r}$ is the characteristic circular frequency, which is denoted as $\omega_{m}$ when the YIG reaches saturation magnetization. In this paper, we assume $\omega_{m}=2\pi\times 5\times 10^{9}$ rad/s and $\omega_{r}=2\pi\times 3.587\times 10^{9}$ rad/s. It can be observed that as $k\to\pm\infty$, $X_{1}\to 1$ and $X_{2}\to-1$. Additionally, Eq.
(1) tends to $\displaystyle\begin{cases}1+\frac{\mu_{r2}}{\mu_{r1}}+\mu_{vr}=0\iff\omega_{sp}^{+}=\omega_{r}&(k\to+\infty)\\\ -1+\frac{\mu_{r2}}{\mu_{r1}}-\mu_{vr}=0\iff\omega_{sp}^{-}=0.5\omega_{r}&(k\to-\infty)\end{cases}$ (4) According to Equation (4), we can expect, at most, one asymptotic frequency for $k>0$, denoted as $\omega_{sp}^{+}$, and one asymptotic frequency for $k<0$, denoted as $\omega_{sp}^{-}$. Figure 2 illustrates the dispersion curves of SMPs in these four structures for $d_{1}=d_{2}=0.03\lambda_{m}$. Interestingly, when the lower PEC wall in the original EDYE structure is replaced with a PMC wall (see Fig. 2(b)), the one-way region abruptly changes from the higher region, $\omega_{sp}^{-}<\omega<\omega_{sp}^{+}$ (see Fig. 2(a)), to the lower region, $0<\omega<\omega_{sp}^{-}$, and simultaneously the propagation direction is reversed. Similarly, when the upper PEC wall in the EDYE structure is replaced with a PMC wall (Fig. 2(c)), the asymptotic frequencies and the one-way region remain the same, but the one-way SMPs consistently have positive values of $k$, i.e., $k>0$. Furthermore, when both the upper and lower PEC walls are replaced by PMC walls (Fig. 2(d)), the one-way region is completely disrupted due to the emergence of modes based on total internal reflection (TIR). In the inset of Fig. 2, we present simulation results with a setting of $\nu=10^{-3}\omega_{m}$, which show good agreement with our theoretical analysis for lossless materials ($\nu=0$). The corresponding transmission coefficients in these simulations are approximately $62.20\%$ (effective propagation length $L_{eff}\simeq 1.63\lambda_{0}$), $87.73\%$ ($L_{eff}\simeq 2.17\lambda_{0}$), and $86.82\%$ ($L_{eff}\simeq 5.06\lambda_{0}$). Thus, the proposed deep-subwavelength structures exhibit truly low loss.
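The asymptotic frequencies in Eq. (4) can be verified numerically. In the lossless limit ($\nu=0$) the elements reduce to $\mu_{r1}=1$ and $\mu_{r2}=-\omega_{r}/\omega$, and taking the standard Voigt permeability of a gyromagnetic medium, $\mu_{vr}=\mu_{r1}-\mu_{r2}^{2}/\mu_{r1}=1-\omega_{r}^{2}/\omega^{2}$ (an assumed but standard form), each condition becomes a quadratic in $\omega$:

```python
import numpy as np

wr = 2 * np.pi * 3.587e9  # characteristic circular frequency from the paper (rad/s)

# Lossless limit (nu = 0): mu_r1 = 1, mu_r2 = -wr/w, mu_vr = 1 - (wr/w)**2.
# Multiplying each condition in Eq. (4) by w**2 gives quadratics in w:
#   k -> +inf:  1 + mu_r2 + mu_vr = 0  ->   2 w**2 - wr*w - wr**2 = 0
#   k -> -inf: -1 + mu_r2 - mu_vr = 0  ->  -2 w**2 - wr*w + wr**2 = 0
w_plus = np.roots([2.0, -wr, -wr**2]).real
w_minus = np.roots([-2.0, -wr, wr**2]).real

w_sp_plus = w_plus[w_plus > 0][0]    # keep the physical (positive) roots
w_sp_minus = w_minus[w_minus > 0][0]
print(w_sp_plus / wr, w_sp_minus / wr)   # -> 1.0 and 0.5, matching Eq. (4)
```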
Figure 2: Dispersion diagrams of ’XDYX’ structures where the first ’X’ can be either (a, b) PEC or (c, d) PMC, while the second ’X’ can also be either (a, c) PEC or (b, d) PMC. The red line and black dashed lines represent the dispersion curves of the SMPs. The cyan dashed area is the bulk zone of the bulk YIG. The other parameters are $d_{1}=d_{2}=0.03\lambda_{m}$, $\omega_{r}=0.7174\omega_{m}$, and $\varepsilon_{d}=1$.

Furthermore, one of the most significant differences between Fig. 2(a) and Figs. 2(b,c) is the presence of an INZ mode, marked by a green circle in Fig. 2(a). In our previous work, we proposed a method of controlling the thickness parameters in order to adjust the INZ modes in magnetized-YIG systems. Similarly, in the current study, we can employ the same approach to achieve tunable INZ modes. Compared with previous systems, this system has the undeniable advantage of integrability. In Fig. 3, we investigate the impact of the thicknesses of YIG ($d_{2}$) and air ($d_{1}$) on the INZ modes in two cases. In the first case, we assume $d_{1}=d_{2}$ ($<0.1\lambda_{m}$, subwavelength). As shown in Figs. 3(a) and 3(b), when we increase $d_{1}$, the INZ frequency gradually decreases and remains close to $\omega=0.5\omega_{m}$. The red line in Fig. 3(a) represents a fitting curve with the following form: $\bar{\omega}=-5.4848\cdot\bar{d_{1}}^{2}-0.09412\cdot\bar{d_{1}}+0.5085$ (5) where $\bar{\omega}=\omega/\omega_{m}$ and $\bar{d_{1}}=d_{1}/\lambda_{m}$ ($\lambda_{m}=2\pi c/\omega_{m}$). Equation (5) allows us to estimate the INZ frequency ($f_{\mathrm{INZ}}$) in the $d_{1}=d_{2}$ case. For example, when $d_{1}=0.045\lambda_{m}$, according to Eq. (5), the INZ frequency should be approximately $0.4932\omega_{m}$. The inset of Fig. 3(a) further confirms that when $d_{1}=d_{2}=0.045\lambda_{m}$, a point source with $f=0.4932f_{m}$ can excite SMPs with a sufficiently large effective wavenumber ($k_{eff}$) and exhibit no phase shift during propagation.
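Equation (5) can be checked against the worked example. Note that reproducing the quoted $0.4932\omega_{m}$ (and the decreasing trend of $f_{\mathrm{INZ}}$ with increasing $d_{1}$) requires a negative quadratic coefficient, which is how the fit is evaluated below:

```python
def f_inz(d1_bar):
    """INZ frequency (in units of omega_m) for the d1 = d2 case, Eq. (5);
    d1_bar = d1 / lambda_m. The leading coefficient is taken negative so the
    frequency decreases with d1, consistent with the quoted 0.4932 value."""
    return -5.4848 * d1_bar**2 - 0.09412 * d1_bar + 0.5085

print(f_inz(0.045))  # -> ~0.4932, the paper's worked example
```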
In the second case, we assume $d_{2}=2d_{1}$. Similar to the first case, we plot $f_{\mathrm{INZ}}$ as a function of $d_{1}$ in Fig. 3(c). The corresponding fitting equation is as follows: $\bar{\omega}=102.45\cdot\bar{d_{1}}^{3}-25.783\cdot\bar{d_{1}}^{2}+0.3716\cdot\bar{d_{1}}+0.5828$ (6) In comparison to the red line in Fig. 3(a), the red line in Fig. 3(c) exhibits a sharper slope, resulting in greater changes in $f_{\mathrm{INZ}}$ as $d_{1}$ increases. It is important to emphasize that, in theory, it is possible to engineer the fitting curve by designing a suitable ratio between $d_{1}$ and $d_{2}$, such as establishing a nearly linear relationship between $f_{\mathrm{INZ}}$ and $d_{1}$ within the range of $0.05\lambda_{m}<d_{1}<0.1\lambda_{m}$. This linear relationship could have potential applications in phase modulators, optical filters, and frequency combs.

Figure 3: The operating frequencies of INZ modes as a function of $d_{1}$ in the cases of (a) $d_{1}=d_{2}$ and (c) $d_{2}=2d_{1}$. The dispersion curves of one-way SMPs as the thicknesses change: (b) in case (a), and (d) in case (c).

Figure 4: All-optical phase modulator based on INZ modes. (a) The phase angle of $E_{z}$ of the output INZ modes as a function of $L_{x}$. (b)-(e) Electric field distributions of the FEM simulations, with $L_{x}$ equal to (b) 40 mm, (c) 50 mm, (d) 60 mm, and (e) 70 mm, respectively. (f) The schematic picture of the INZ modes-based phase modulator. Parameters are $d_{1}=d_{2}=0.03\lambda_{m}$, $\nu=10^{-3}\omega_{m}$, and $f=0.5007f_{m}$.

It is well known that an INZ wave can travel with no phase shift, which provides a potential approach to building a phase modulator. By designing a joint structure that contains two one-way waveguides with the same one-way region, where the second waveguide supports the one-way INZ mode, it is theoretically possible to control the output phase by manipulating the joint position or the lengths of the waveguides. As analyzed in Fig.
2, the EDYE structure possesses the same one-way region as the MDYE structure, and an INZ mode is present in the EDYE structure. Therefore, as shown in Fig. 4(f), we propose a novel phase modulator called the ’MDYE-EDYE’ structure. We further investigate how the joint position $L_{x}$ affects the output phase of the electric field ($E_{z}$). In Fig. 4(a), a clear linear relationship between the phase (arg($E_{z}$)) and $L_{x}$ is observed, suggesting that such a simple structure can achieve precise phase modulation. From Fig. 4(a), it is evident that to achieve a phase shift range of $[-\pi,\pi]$, the position shift $\Delta L_{x}$ should be approximately 20 mm, whereas the theoretical value in the lossless condition is around 19.3125 mm ($k\simeq 3.1068k_{m}$). To clearly demonstrate the linear relationship between the phase angle of $E_{z}$ and $L_{x}$, we perform four full-wave simulations in a lossy condition with $\nu=10^{-3}\omega_{m}$, where $L_{x}$ is set to 40 mm (Fig. 4(b)), 50 mm (Fig. 4(c)), 60 mm (Fig. 4(d)), and 70 mm (Fig. 4(e)). Consequently, the output $E_{z}$ in Figs. 4(b) and 4(d) are both positive with nearly the same phase, as are those in Figs. 4(c) and 4(e). Therefore, it is reasonable to believe that even in a real lossy condition, our proposed ’MDYE-EDYE’ structure can function as a precise all-optical phase modulator, offering advantages such as robustness against backscattering reflection and a straightforward manufacturing process. Compared to previous all-optical phase modulators based on Mach-Zehnder interferometers [35, 36], metasurfaces [37, 38], and/or metamaterials [39], our proposed structure exhibits comparable performance and simplicity.
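The quoted theoretical shift follows from the accumulated phase $k\,\Delta L_{x}=2\pi$ in the non-INZ (MDYE) section, i.e. $\Delta L_{x}=\lambda_{m}/\bar{k}$ with $\bar{k}=3.1068$ (using the rounded $c=3\times 10^{8}$ m/s implied by the quoted number):

```python
c = 3e8                  # rounded speed of light implied by the quoted figure
f_m = 5e9                # omega_m / (2*pi), with omega_m = 2*pi * 5e9 rad/s
lam_m = c / f_m          # lambda_m = 60 mm

k_bar = 3.1068           # lossless-theory wavenumber in units of k_m
dL = lam_m / k_bar       # full 2*pi phase shift: k * dL = 2*pi
print(dL * 1e3)          # -> ~19.3125 mm, the quoted theoretical shift
```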
## III Perfect filter, all-optical splitter, and self-consistent all-optical logic gates based on INZ modes

Furthermore, through careful assembly, the remanence-based MO system can function as a perfect filter, effectively separating electromagnetic (EM) waves within a specific frequency band into two separate channels with identical bandwidth. Fig. 5(b) illustrates the Y-shaped filter, which consists of three arms. The horizontal arm serves as the input port, while the other two arms, namely the ’EDYM’ and ’EDYE’ structures (studied in the previous section; see Figs. 2(b) and 2(a), respectively), function as the output ports. It is important to note that the magnetization direction in the ’EDYM’ structure has been reversed to support the forward propagation of EM waves. The input port is referred to as the ’EYYE’ structure, with the interconnected YIG layers having opposite magnetization directions. As shown in Fig. 5(a), the ’EYYE’ structure exhibits a significantly wide one-way region, $0<\omega<\omega_{r}$, which corresponds to the combined one-way regions of the ’EDYE’ structure ($0.5\omega_{r}<\omega<\omega_{r}$) and the ’EDYM’ structure ($0<\omega<0.5\omega_{r}$).

Figure 5: (a) Dispersion diagram of the YIG-YIG structure. (b) The schematic picture and (c) simulations of the perfect optical filter at $f=0.5f_{m}$ (marked by the blue circle in (a)) and $f=0.2f_{m}$ (marked by the black circle in (a)), respectively.

Benefiting from the unique relationship between the one-way regions of the input port (’A’) and the output ports (’B’ and ’C’), along with the robustness of one-way EM waves, it can be theoretically guaranteed that the excited one-way EM wave from the input port will propagate into either of the output ports. To verify the performance of the filter, we conducted two finite element method (FEM) simulations in this structure with frequencies set at $f=0.5f_{m}$ (the second image in Fig. 5(c)) and $f=0.2f_{m}$ (the third image in Fig. 5(c)).
As a result, the excited wave from ’A’ propagated to the junction point and was decisively injected into either ’B’ or ’C’, resulting in an extremely high contrast ratio between the output ports. The simulation results therefore align with our theoretical analysis. The proposed perfect filter has the potential to serve crucial roles such as an optical switch in an integrated optical circuit: by directly changing the working frequency, the state of the communication system can be switched between ’work’ and ’stop’.

Figure 6: (a) Dispersion diagrams of the three arms of the novel tunable splitter shown in (c). (b) Relationship between the transmittances of the two output arms and the operating frequency within the one-way region (colored area in (a)). (c) The schematic diagram and simulations for splitting ratios $\eta=0/100$ (second picture) and $\eta=50/50$ (last picture). Other parameters are $d_{1}=d_{2}=0.01\lambda_{m}$ and $\nu=10^{-3}\omega_{m}$.

It is also worth noting that the ’EDYE’, ’MDYE’, and ’EYYE’ structures have overlapping one-way regions, but the wavenumbers of the one-way SMPs differ. As a result, one-way SMPs with different frequencies propagating in these structures will have different effective refractive indices ($n_{eff}$) and may induce different splitting ratios ($\eta$). Fig. 6(c) illustrates the designed Y-shaped splitter, consisting of the ’MDYE’ (’A’), ’EYYE’ (’B’), and ’EDYE’ (’C’) structures. Figure 6(a) presents the dispersion curves of SMPs in the vicinity of the overlapping one-way region ($0.5\omega_{r}\leq\omega<\omega_{r}$). It is important to note that we set $d_{1}=d_{2}=0.01\lambda_{m}$ to engineer an SMP with an infinite wavenumber at $\omega=0.5\omega_{r}$ (the lower limit of the shaded region in Fig. 6(a)) in the ’EDYE’ structure, which implies $n_{eff}\to\infty$; consequently, zero transmittance in the ’A-C’ channel should be observed under this condition.
We performed FEM simulations in the designed splitter, with the frequency changing within the one-way region, and calculated the transmittance. As depicted in Fig. 6(b), the red and black lines represent the transmittance of the ’A-B’ and ’A-C’ channels, respectively. As expected, when $\omega=0.5\omega_{r}=0.3587\omega_{m}$, nearly 100% of the energy traveled through the ’A-B’ channel (as shown in the second picture in Fig. 6(c)). More importantly, as the frequency gradually increases to $0.4447\omega_{m}$, the splitting ratio can approach 50/50 (as shown in the last picture in Fig. 6(c)), which is a highly desirable value in the field of optical communication. Another noteworthy case occurs when $\omega\simeq 0.45\omega_{m}$ (indicated by the right arrow in Fig. 6(b)). In this scenario, the EM wave cannot pass through the ’A-C’ channel either. This behavior can be explained by the theory of total internal reflection: 1) the effective indices of ’A’, ’B’, and ’C’ are approximately 19.4082 ($k\simeq 10.6745k_{m}$), 29.4398 ($k\simeq 16.1919k_{m}$), and 6.8467 ($k\simeq 3.7657k_{m}$), respectively; 2) the incident angle is $\pi/3$, which is clearly larger than the ’critical angle’ (equal to $arcsin(6.8467/19.4082)$) of the ’A-C’ channel. When the frequency is further increased to approximately $0.6456\omega_{m}$, a 50/50 splitting ratio will be achieved (indicated by the black left arrow in Fig. 6(b)). Therefore, our proposed remanence-based all-optical splitter has the capability to maintain a constant splitting ratio for different frequencies. For example, it can achieve $\eta=50/50$ at $\omega=0.4447\omega_{m}$ and/or $\omega=0.6456\omega_{m}$, which we believe is extremely important for parallel optical computing. Figure 7: (a) Schematic picture of all-optical LGs based on YIG with remanence. (b) Logic operations and FEM simulations for three different inputs, i.e. (’1’, ’0’), (’0’, ’1’), and (’1’, ’1’). 
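The TIR argument above can be checked directly with the quoted effective indices (the fixed $\pi/3$ incidence being set by the Y-shaped geometry):

```python
import numpy as np

n_A, n_B, n_C = 19.4082, 29.4398, 6.8467  # quoted effective indices at ~0.45 omega_m
incidence = np.pi / 3                      # 60-degree incidence at the junction

# 'A-C' goes from the dense arm to the rare arm, so a critical angle exists:
crit_AC = np.arcsin(n_C / n_A)
print(np.degrees(crit_AC))  # -> ~20.7 deg; 60 deg exceeds it, so TIR blocks 'A-C'

# 'A-B' goes to a denser arm (n_B > n_A): no critical angle exists, and the
# wave is free to enter 'B'.
```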
(c) The truth table for OR, AND, and NAND gates according to simulation results shown in (b). In our previous work[19], we proposed a possible method to achieve all-optical LGs with an extremely high contrast ratio. However, there was an unresolved issue in the previously proposed LGs, which involved inconsistency between the logic of the input and output ports. In this study, we propose another Y-shaped system based on INZ modes, which can function as all-optical LGs such as OR, AND, and NAND gates. Following a consistent positive logic convention, the presence of EM energy is treated as logic ’1’ and is utilized in both the input and output ports, while the phase angle of $E_{z}$ serves as the reference quantity in the output port. Figure 7(a) illustrates the schematic diagram of the designed system. One can observe that this structure is very similar to the one shown in Fig. 6(c), except for the exchange of YIG layers in arm ’B’. As discussed earlier, all three arms share the same one-way region, $0.5\omega_{r}<\omega<\omega_{r}$. In contrast to arm ’B’ shown in Fig. 6, the arm in Fig. 7 can sustain backward-propagating SMPs (as shown in the inset of Fig. 7(a)). Consequently, two one-way channels are successfully engineered based on the splitter model shown in Fig. 6. In contrast to the working frequency used in our previous work, we choose the INZ frequency in the output port as the working frequency in order to preserve the phase information in the output section. The proposed system can be theoretically treated as a combination of basic LGs such as OR, AND, and NAND gates. For the OR gate, we utilize positive logic, where any presence of EM energy is considered logic ’1’. Based on the robust one-way channels, any input EM signal (logic ’1’) from either arm ’A’ or arm ’B’ can unidirectionally propagate through arm ’C’ (also logic ’1’). Hence, the system naturally functions as an OR gate. 
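The positive-logic reading above, together with the phase-angle reference quantity, can be sketched as simple predicates (the phase values used below are illustrative rather than simulated, and the output window is assumed symmetric, $-\pi/2<\theta<\pi/2$, around zero phase):

```python
import numpy as np

def or_gate(in_a, in_b):
    # Positive logic: any injected energy reaches the output arm via the
    # robust one-way channels, so presence of output energy is logic '1'.
    return int(bool(in_a) or bool(in_b))

def and_gate(in_a, in_b, theta_out):
    # Logic '1' iff energy is present AND the output phase angle of E_z
    # falls in the assumed window around zero.
    return int((bool(in_a) or bool(in_b)) and -np.pi / 2 < theta_out < np.pi / 2)

def nand_gate(in_a, in_b, theta_out):
    # The same window read with inverted logic.
    return 1 - and_gate(in_a, in_b, theta_out)

# Illustrative sweep mirroring Fig. 7(b): single inputs give out-of-window
# output phases, the double input an in-window phase (phase values assumed).
for (a, b), th in [((1, 0), 2.5), ((0, 1), -2.5), ((1, 1), 0.3)]:
    print((a, b), or_gate(a, b), and_gate(a, b, th), nand_gate(a, b, th))
```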
To implement an AND gate, we introduce a reference quantity, the phase angle ($\theta$) of $E_{z}$, to determine the state of the output port. Specifically, we assume that an output EM signal with $-\pi/2<\theta<\pi/2$ represents logic ’1’. As illustrated in Fig. 7(b), three input combinations, [’1’, ’0’], [’0’, ’1’], and [’1’, ’1’], yield three corresponding outputs, logic ’0’, logic ’0’, and logic ’1’. It should be noted that we set $\theta\simeq 1.4$ rad for input-1 to engineer an appropriate output phase angle. Therefore, in practical applications, the input $\theta$ needs to be tailored for different inputs. In Fig. 7, the inputs $[\theta_{1},\theta_{2}]$ are [-1.9 rad, ], [, 1.4 rad], and [1.4 rad, 1.4 rad], respectively. Figure 7(c) presents the truth table for the OR, AND, and NAND gates. It is evident that the outputs of the AND and NAND gates are inverted with respect to each other. Therefore, based on the analysis of the AND gate, the system can easily achieve NAND operation by assuming that an output EM signal with $-\pi/2<\theta<\pi/2$ represents logic ’0’. Furthermore, the design can be straightforwardly extended to other basic LGs. With improved integration and consistent logic, the proposed Y-shaped LGs enhance the potential applications of one-way mode-based LGs in the all-optical calculation of integrated optical circuits. Figure 8: Realization of 3D all-optical phase modulator, perfect filter, and self-consistent LGs. The above-mentioned all-optical phase modulator, filter, and LGs, which are based on YIG with remanence, can be straightforwardly extended from 2D models to 3D structures[34]. As depicted in the inset of Fig. 8(a), two PEC walls are introduced in the z direction to confine the guided wave within the device. The simulation result shown in Fig. 8(a) exhibits similarities to the corresponding 2D case (Fig. 4(e)). 
Figures 8(b)-(e) illustrate the simulation outcomes for the 3D perfect filter, 3D splitter, and 3D LGs, respectively, and all the results are consistent with their 2D counterparts. Consequently, our proposed all-optical structures hold great potential for practical device implementation. Due to the one-way propagation characteristic, the manufacturing process should be significantly simpler than for other competing technologies. More importantly, as depicted in Fig. 1, our proposed all-optical phase modulator, filter, splitter, and LGs can be easily assembled on a chip. Therefore, the platform holds significant promise for enabling flexible all-optical computation and communication.

## IV Conclusion

In conclusion, we have proposed a series of all-optical devices based on YIG with remanence. The all-optical phase modulator utilizes INZ modes and allows for controllable phase modulation in a linear relationship with the length of the boundary. The INZ frequency can be adjusted by manipulating the thickness parameters, providing tunability. Moreover, we have designed a perfect filter, which separates the input port’s one-way region into two parts and creates two one-way channels with equal bandwidth. Additionally, we have introduced an all-optical splitter that divides the input wave into two output ports with a customizable and precise splitting ratio, such as 50/50. Furthermore, we have proposed all-optical logic gates based on one-way SMPs, where consistent positive logic and the phase angle of the output field are used to determine the logic states. Basic logic gates, including OR, AND, and NAND, have been achieved based on this principle. The feasibility and performance of the proposed devices have been validated in both 2D and 3D simulations. Our proposed all-optical devices, leveraging MO heterostructures, hold potential for optical calculations, including parallel computing in integrated optical circuits. 
## Acknowledgement

This work was supported by the Natural Science Foundation of Sichuan Province (No. 2023NSFSC1309), the open fund of Luzhou Key Laboratory of Intelligent Control and Application of Electronic Devices (No. ZK202210), the Sichuan Science and Technology Program (No. 2022YFS0616), and the Science and Technology Strategic Cooperation Programs of Luzhou Municipal People’s Government and Southwest Medical University (No. 2019LZXNYDJ18). J.X. and Y.L. thank the Innovation Laboratory of Advanced Medical Material & Physical Diagnosis and Treatment Technology for its support. K.L.T. was supported by the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI) under Grant No. 4509.

## References

* [1] Limin Tong, Rafael R. Gattass, Jonathan B. Ashcom, Sailing He, Jingyi Lou, Mengyan Shen, Iva Maxwell, and Eric Mazur. Subwavelength-diameter silica wires for low-loss optical wave guiding. Nature, 426(6968):816–819, 2003. * [2] J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice, and H. Bhaskaran. Parallel convolutional processing using an integrated photonic tensor core. Nature, 589(7840):52–58, 2021. * [3] Rongjin Zhuang, Kai Ni, Guanhao Wu, Ting Hao, Longzhao Lu, Yang Li, and Qian Zhou. Electro-optic frequency combs: Theory, characteristics, and applications. Laser & Photonics Reviews, page 2200353, 2023. * [4] Viola Valentina Vogler-Neuling, Artemios Karvounis, Andrea Morandi, Helena Weigand, Eric Dénervaud, and Rachel Grange. Photonic Assemblies of Randomly Oriented Nanocrystals for Engineered Nonlinear and Electro-Optic Effects. ACS Photonics, 9(7):2193–2203, 2022. * [5] Xiaojun Chen, Jiao Lin, and Ke Wang. A Review of Silicon-Based Integrated Optical Switches. Laser & Photonics Reviews, 17(4):2200571, 2023. * [6] Mengxin Ren, Wei Cai, and Jingjun Xu. 
Tailorable Dynamics in Nonlinear Optical Metasurfaces. Advanced Materials, 32(3):1806317, 2020. * [7] Jie Xu, Xiaohua Deng, Hang Zhang, Chiaho Wu, Martijn Wubs, Sanshui Xiao, and Linfang Shen. Ultra-subwavelength focusing and giant magnetic-field enhancement in a low-loss one-way waveguide based on remanence. Journal of Optics, 22(2):025003, 2020. * [8] K. L. Tsakmakidis, L. Shen, S. A. Schulz, X. Zheng, J. Upham, X. Deng, H. Altug, A. F. Vakakis, and R. W. Boyd. Breaking lorentz reciprocity to overcome the time-bandwidth limit in physics and engineering. Science, 356(6344):1260–1264, 2017. * [9] Shiqing Li, Kosmas L. Tsakmakidis, Qian Shen, Hang Zhang, Jinhua Yan, Shulin Sun, and Linfang Shen. Broadband unidirectional guided-wave-driven metasurfaces for arbitrary wavefront control, 2023. * [10] Farzad Zangeneh-Nejad, Dimitrios L Sounas, Andrea Alù, and Romain Fleury. Analogue computing with metamaterials. Nature Reviews Materials, 6(3):207–225, 2021. * [11] Guo-Jing Tang, Xin-Tao He, Fu-Long Shi, Jian-Wei Liu, Xiao-Dong Chen, and Jian-Wen Dong. Topological photonic crystals: physics, designs, and applications. Laser & Photonics Reviews, 16(4):2100300, 2022. * [12] Guancong Ma, Meng Xiao, and Che Ting Chan. Topological phases in acoustic and mechanical systems. Nature Reviews Physics, 1(4):281–294, 2019. * [13] Yin Poo, Rui-xin Wu, Zhifang Lin, Yan Yang, and C. T. Chan. Experimental Realization of Self-Guiding Unidirectional Electromagnetic Edge States. Physical Review Letters, 106(9):093903, 2011. * [14] Peiheng Zhou, Gui-Geng Liu, Xin Ren, Yihao Yang, Haoran Xue, Lei Bi, Longjiang Deng, Yidong Chong, and Baile Zhang. Photonic amorphous topological insulator. Light: Science & Applications, 9(1):133, 2020. * [15] Mudi Wang, Ruo-Yang Zhang, Lei Zhang, Dongyang Wang, Qinghua Guo, Zhao-Qing Zhang, and Che Ting Chan. Topological one-way large-area waveguide states in magnetic photonic crystals. Physical Review Letters, 126(6):067401, 2021. 
* [16] Jie Xu, Panpan He, Delong Feng, Kangle Yong, Lujun Hong, Yun Shen, and Yun Zhou. Slow wave and truly rainbow trapping in a one-way terahertz waveguide. Optics Express, 29(7):11328–11341, 2021. * [17] Jie Xu, Qian Shen, Kai Yuan, Xiaohua Deng, Yun Shen, Hang Zhang, Chiaho Wu, Sanshui Xiao, and Linfang Shen. Trapping and releasing bidirectional rainbow at terahertz frequencies. Optics Communications, 473:125999, 2020. * [18] Yun Zhou, Panpan He, Sanshui Xiao, Fengwen Kang, Lujun Hong, Yun Shen, Yamei Luo, and Jie Xu. Realization of tunable index-near-zero modes in nonreciprocal magneto-optical heterostructures. Optics Express, 30(15):27259–27272, 2022. * [19] Jie Xu, Fengwen Kang, Yamei Luo, Sanshui Xiao, and Kosmas L Tsakmakidis. All-optical digital logic based on unidirectional modes. Advanced Optical Materials, 11(1):2201836, 2023. * [20] S Ali Hassani Gangaraj and Francesco Monticone. Do truly unidirectional surface plasmon-polaritons exist? Optica, 6(9):1158–1165, 2019. * [21] Kosmas L. Tsakmakidis, Konstantinos Baskourelos, and Tomasz Stefański. Topological, nonreciprocal, and multiresonant slow light beyond the time-bandwidth limit. Applied Physics Letters, 119(19):190501, 2021. * [22] JJ Brion, RF Wallis, A Hartstein, and E Burstein. Theory of surface magnetoplasmons in semiconductors. Physical Review Letters, 28(22):1455, 1972. * [23] Yiyu Zhou, M Zahirul Alam, Mohammad Karimi, Jeremy Upham, Orad Reshef, Cong Liu, Alan E Willner, and Robert W Boyd. Broadband frequency translation through time refraction in an epsilon-near-zero material. Nature communications, 11(1):1–7, 2020. * [24] Xinxiang Niu, Xiaoyong Hu, Saisai Chu, and Qihuang Gong. Epsilon-near-zero photonics: a new platform for integrated devices. Advanced Optical Materials, 6(10):1701292, 2018. * [25] Mário Silveirinha and Nader Engheta. Tunneling of Electromagnetic Energy through Subwavelength Channels and Bends using $\varepsilon$ -Near-Zero Materials. 
Physical Review Letters, 97(15):157403, 2006. * [26] João S. Marcos, Mário G. Silveirinha, and Nader Engheta. $\mu$ -near-zero supercoupling. Physical Review B, 91(19):195112, 2015. * [27] Ahmed M. Mahmoud and Nader Engheta. Wave–matter interactions in epsilon-and-mu-near-zero structures. Nature Communications, 5(1):5638, 2014. * [28] Takian Fakhrul, Stana Tazlaru, Lukáš Beran, Yan Zhang, Martin Veis, and Caroline A. Ross. Magneto-Optical Bi:YIG Films with High Figure of Merit for Nonreciprocal Photonics. Advanced Optical Materials, 7(13):1900056, 2019. * [29] Jiachang Liang, Yan Li, Tingge Dai, Yuejun Zhang, Xiaowei Zhang, Hongjun Liu, and Pengjun Wang. On-chip Ce:YIG/Si Mach–Zehnder optical isolator with low power consumption. Optics Express, 31(5):8375–8383, 2023. * [30] Maksym Popov, Yuzan Xiong, Igor Zavislyak, Hryhorii Chumak, Oleksandr Fedorchuk, Sujoy Saha, Rao Bidthanapally, Hongwei Qu, Michael R Page, and Gopalan Srinivasan. Y-type hexagonal ferrite-based band-pass filter with dual magnetic and electric field tunability. Scientific Reports, 13(1):1179, 2023. * [31] E. Almpanis, G. P. Zouros, P. A. Pantazopoulos, K. L. Tsakmakidis, N. Papanikolaou, and N. Stefanou. Spherical optomagnonic microresonators: Triple-resonant photon transitions between Zeeman-split Mie modes. Physical Review B, 101(5):054412, 2020. * [32] Constantine A Balanis. Advanced Engineering Electromagnetics. John Wiley & Sons, 2012. * [33] Qian Shen, Xiaodong Zheng, Hang Zhang, Yun You, and Linfang Shen. Large-area unidirectional surface magnetoplasmons using uniaxial -near-zero material. Optics Letters, 46(23):5978, 2021. * [34] Lujun Hong, Yazhou Wang, Yun Shen, Xiaohua Deng, Kai Yuan, Sanshui Xiao, and Jie Xu. Broadband energy squeezing and tunneling based on unidirectional modes. Optical Materials Express, 11(9):2975–2984, 2021. * [35] C Sturm, D Tanese, H S Nguyen, H Flayac, E Galopin, A Lemaıtre, I Sagnes, D Solnyshkov, A Amo, G Malpuech, and J Bloch. 
All-optical phase modulation in a cavity-polariton Mach–Zehnder interferometer. Nature Communications, 2014. * [36] Chong Zhang, Paul A. Morton, Jacob B. Khurgin, Jon D. Peters, and John E. Bowers. Ultralinear heterogeneously integrated ring-assisted Mach–Zehnder interferometer modulator on silicon. Optica, 3(12):1483, 2016. * [37] Ziqi Miao, Qiong Wu, Xin Li, Qiong He, Kun Ding, Zhenghua An, Yuanbo Zhang, and Lei Zhou. Widely Tunable Terahertz Phase Modulation with Gate-Controlled Graphene Metasurfaces. Physical Review X, 5(4):041027, 2015. * [38] Weihao Yang, Jun Qin, Jiawei Long, Wei Yan, Yucong Yang, Chaoyang Li, En Li, Juejun Hu, Longjiang Deng, Qingyang Du, and Lei Bi. A self-biased non-reciprocal magnetic metasurface for bidirectional phase modulation. Nature Electronics, 6(3):225–234, 2023. * [39] Seojoo Lee, Soojeong Baek, Teun-Teun Kim, Hyukjoon Cho, Sangha Lee, Ji-Hun Kang, and Bumki Min. Metamaterials for Enhanced Optical Responses and their Application to Active Control of Terahertz Waves. Advanced Materials, 32(35):2000250, 2020.
Amirarsalan Rajabi · Seyyedmilad Talebzadehhosseini · Ivan Garibay
Complex Adaptive Systems Lab (CASL), University of Central Florida, Orlando, 32816

# Resistance of communities against disinformation

Amirarsalan Rajabi, Seyyedmilad Talebzadehhosseini, and Ivan Garibay

###### Abstract

The spread of disinformation is considered a big threat to societies and has recently received unprecedented attention. In this paper we propose an agent-based model to simulate the dissemination of a conspiracy in a population. The model is able to compare the resistance of different network structures against the activity of conspirators. Results show that the connectedness of the network structure and the centrality of conspirators are of crucial importance in preventing conspiracies from becoming widespread.

## 1 Introduction

We define conspiratorial thinking as a belief held by an individual or a group of individuals despite there being enough evidence and information to undermine or totally refute the belief. If conspiracy theories gain popular support, they may cause serious concerns. This is particularly true of conspiracies over scientific and medical issues, where conspiracy theories can result in rejection of the scientific method lewandowsky2013role . Several underlying reasons have been proposed to explain the existence of conspiracy theories. According to Barkun, conspiratorial thinking exhibits three characteristics: firstly, nothing happens by accident; secondly, things are not as they seem on the surface; and thirdly, things are highly connected barkun2013culture . 
All three characteristics mentioned by Barkun refer to a special cognitive function of the conspiracy theorist. Indeed, a great deal of the literature on conspiracy theory associates conspiratorial thinking with a special and different heuristic of the conspiracy theorist. On the other hand, Sunstein and Vermeule claim that many of those who hold conspiracy theories do so not as a result of a mental illness of any kind, or of simple irrationality, but as a result of a crippled epistemology (knowing very few things, which are indeed wrong) sunstein2009conspiracy . For most of what they believe and know, human beings lack direct information; they must rely on what others say and think. Hardin argues that many people suffer from a crippled epistemology, meaning they get their information from only a few incorrect sources breton2002political . A crippled epistemology usually takes place in echo chambers. Echo chambers are communities in which individuals merely communicate with each other and rarely seek information from entities outside the community. The advent of social media platforms has resulted in the rise of echo chambers bakshy2015exposure . Echo chambers play an important role in political and social polarization barbera2015tweeting . The negative effects of polarization in social networks are studied in garibay2019polarization . Bauman states that individuals who are embedded in isolated groups or small, self-enclosed networks, and who are thus exposed only to skewed information, will more often hold conspiracy theories that are justified relative to their limited informational environment bauman2013liquid . The study of the dynamics by which echo chambers form can therefore shed light on the mechanisms by which conspiratorial thinking forms and thrives in a community. 
Traditional studies of conspiratorial thinking assume that conspiratorial thinking dies out in large network structures grimes2016viability , and that conspiratorial ideation stems from flawed reasoning and biased heuristics barkun2013culture . This study challenges both of these claims. Contrary to a large body of literature on conspiracy theory that studies the cognitive function of isolated individuals [22], this paper takes into account systemic belief dissemination as a result of interactions between individuals.

## 2 Opinion Dynamics

Opinion formation is a complex process shaped by the interaction of multiple underlying elements. People tend to form their opinions on a wide variety of subjects through the process of learning. “Social learning” is a term referring to the process of learning through the communication of individuals with each other, their own experience, their observations of others’ experiences, media sources, and propaganda and indoctrination from political leaders and the state acemoglu2011opinion . In this paper, we refer to models of opinion dynamics as mathematical models that aim at capturing the dynamics of social learning, opinion spreading, collective decision making, and so on from a mathematical point of view. Models of opinion dynamics can be divided into two categories: Bayesian models and non-Bayesian models. Bayesian models rely on Bayes’ rule bayes1991essay . These kinds of models assume that an individual (agent) is Bayesian rational and updates their beliefs optimally with respect to Bayes’ rule, given an underlying model of the world. One problem with Bayesian models of opinion dynamics is that they rely on strong assumptions. One demanding assumption of these models is that an agent must have a reliable prior belief about the world, an assumption that might be unrealistic in many cases. 
Additionally, it is assumed that agents then update their prior beliefs based on the new information they get from others. Bayesian models also put too much structure on updating by ruling out “zero probability events” acemoglu2011opinion . The aforementioned problems make Bayesian models unsuitable for our study of the dissemination of conspiracy theories. Non-Bayesian approaches and models, on the other hand, try to avoid some of these problems. Non-Bayesian approaches are believed to be more effective in modeling belief manipulation and the spread of disinformation. The simplest of these models start by specifying rules of thumb acemoglu2011opinion . Several different non-Bayesian models exist; many are inspired by classical models of interacting particle systems from statistical mechanics (see, for example, clifford1973model ; galam2002minority ; latane1981psychology ; castellano2009nonlinear ; hegselmann2002opinion ). One noteworthy non-Bayesian model is the DeGroot (1974) model degroot1974reaching . In this model, a set of interacting agents start with initial beliefs about an underlying state, exchange information about their beliefs with their neighbors, and update their beliefs at discrete time instances with respect to a weight matrix that represents the social network structure of interactions. This model captures the “imitation” aspect of non-Bayesian models. While notably innovative, this model suffers from duplication of information acemoglu2010spread , meaning that agents might interact endlessly, in each timestep, with neighbors that hold unchanging opinions.

## 3 Model of Conspirators

Our agent-based model is inspired by the variation of Acemoglu et al. on the DeGroot model acemoglu2010spread . In our model, two types of agents exist: the majority of agents are _susceptible_ agents, and a minority of agents are _conspirators_. 
Conspirators deliberately disseminate false information to susceptibles. Let us consider a conceptual underlying state of the world and call it $\Theta$. We assume that the true value of $\Theta$ is 1, and the discussion between agents in the model is about the true value of $\Theta$. $x_{i}^{s}(k)$ and $x_{i}^{c}(k)$ represent the opinion of susceptible agent $i$ at time $k$ and the opinion of conspirator agent $i$ at time $k$, respectively. At first, each susceptible agent holds an initial belief about the underlying state. $N$ denotes the total number of agents in the model. The initial belief of each susceptible agent is a randomly generated real number between zero and two, the initial belief of each conspirator agent is 0, and the average of the initial beliefs of all susceptible agents is very close to one: $x_{i}^{s}(0)\in[0,2]\qquad x_{i}^{c}(0)=0\qquad\frac{1}{N}\sum_{i}x_{i}^{s}(0)\approx 1$ (1) Hence, although each susceptible agent initially holds their own belief about the underlying state $\Theta$, there is an approximate consensus on this state among the susceptible agents. On the other hand, the initial belief of conspirator agents is 0 and remains 0 during the simulation, irrespective of their interactions. These could be thought of as individuals, entities, media, or propaganda outlets that deliberately and constantly disseminate false information throughout the population. (a) Scale Free (Barabasi-Albert) (b) Small World (Watts-Strogatz) Figure 1: NetLogo environment. A) shows a network generated with the Barabasi-Albert algorithm. B) shows a network generated with the Watts-Strogatz algorithm. There are two main differences between our proposed model and the work of acemoglu2010spread : 1) Agents are not able to freely communicate directly with any other agents. Rather, each agent is only able to communicate with the agents to which it is connected by a link. This means that a network structure regulates the communication in the population. 
2) In the variation of acemoglu2010spread on the DeGroot model, each agent meets and communicates with other agents at instances defined by a rate-one Poisson process independent of other agents. In our model, agents are connected to each other by undirected links and together form a network. In each timestep of the simulation, an agent chooses another agent to which it is connected and may or may not communicate with it. In this model, the capacity of an agent for communication in each timestep is the number of its links; for example, an agent with 5 links will communicate with 5 or fewer other agents in each timestep. When two agents $i$ and $j$ meet, there is a potential exchange of information between them with probability $\textrm{p}_{interaction}$. Agents update their beliefs according to one of the following possibilities: If $i$ and $j$ are both susceptible: $\displaystyle\begin{cases}x_{i}(k+1)=x_{j}(k+1)=\frac{1}{2}[x_{i}(k)+x_{j}(k)]&\text{with probability}\;\textrm{p}\\\ x_{i}(k+1)=x_{i}(k)\;\&\;x_{j}(k+1)=x_{j}(k)&\text{with probability}\;1-\textrm{p}\end{cases}$ If $i$ is susceptible and $j$ is a conspirator: $\displaystyle\begin{cases}x_{i}(k+1)=\frac{x_{i}(k)+0}{2}&\text{with probability}\;\textrm{p}\\\ x_{i}(k+1)=x_{i}(k)&\text{with probability}\;1-\textrm{p}\\\ x_{j}(k+1)=x_{j}(k)=0&\text{with probability 1}\end{cases}$ If $i$ and $j$ are both conspirators: no opinion sharing. The underlying network structure of the model plays an important role in determining the behavior of the model, and it is itself determined by the algorithm by which the model forms the network. It determines which agents are connected to each other, how easily information propagates throughout the network, and so forth. Watts and Strogatz proposed a model to capture the small-world, high-clustering, and low-average-path-length properties of real complex networks watts1998collective . 
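The initialization of Eq. (1) and the pairwise update rules above can be sketched in a few lines of Python (a minimal illustrative sketch, not the paper's NetLogo implementation; the interaction probability value and helper names are our own assumptions):

```python
import random

P_INTERACTION = 0.5  # illustrative value; the paper does not fix p here

def init_beliefs(n_susceptible, n_conspirators, rng):
    # Eq. (1): susceptibles start uniformly in [0, 2] (mean ~ 1);
    # conspirators start, and stay, at 0.
    susceptible = [rng.uniform(0.0, 2.0) for _ in range(n_susceptible)]
    conspirator = [0.0] * n_conspirators
    return susceptible, conspirator

def interact(x_i, x_j, j_is_conspirator, rng):
    """One meeting between susceptible agent i and agent j."""
    if rng.random() >= P_INTERACTION:
        return x_i, x_j                # no exchange this timestep
    if j_is_conspirator:
        return (x_i + 0.0) / 2.0, 0.0  # conspirator never moves
    avg = 0.5 * (x_i + x_j)            # both susceptible: average beliefs
    return avg, avg

rng = random.Random(0)
sus, con = init_beliefs(96, 4, rng)    # 100 agents, 4 of them conspirators
x = sus[0]
for _ in range(50):                    # repeated meetings with a conspirator
    x, _ = interact(x, 0.0, True, rng)
print(f"belief after 50 meetings with a conspirator: {x:.3g}")
```

Each successful meeting with a conspirator halves the susceptible belief, which is why a susceptible agent repeatedly exposed to a conspirator is dragged toward 0.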
On the other hand, Barabasi and Albert showed that a power-law degree distribution is a property of many real-world networks and proposed an algorithm that captures this phenomenon barabasi2003scale . Indeed, it is believed that the majority of complex networks exhibit small-world and scale-free properties wang2003complex . In order to capture all of these essential properties, the model generates the network of agents using both the Watts-Strogatz and Barabasi-Albert algorithms.

## 4 Results

We developed our agent-based model in NetLogo sunstein2009conspiracy . Figure 1 shows two instances of the model with the Watts-Strogatz and Barabasi-Albert algorithms. For our purposes we defined a variable named collective-thought. This variable is simply the average of the beliefs of all susceptible agents and represents the collective belief of the population about the underlying state $\Theta$. Collective-thought starts with a value close to one and always converges to zero (Figure 2). A desirable population is one that resists the activity of conspirators. Numerous individuals and groups deliberately disseminate false information in a society. As noted, the collective-thought of the model always converges to zero after some number of timesteps; we call this state convergence. In order to quantify the resistance of a network against the activity of conspirators, we record the number of timesteps required for the collective-thought of the network to converge. This number can be thought of as symbolically representing how long a society resists the efforts of conspirators before it is nearly deceived into believing that the underlying state $\Theta$ is zero. We argue that, contrary to the conventional belief that conspiratorial beliefs are untenable in larger network structures (e.g. grimes2016viability ), a large network cannot ensure the eradication of conspiratorial beliefs. 
Indeed, the important aspect of a network structure that can ensure resistance against conspiracies is the _connectedness_ of the network; that is, how easily an agent can send information through the network to any other agent. A network with high connectedness means that information propagates more easily and echo chambers are less likely to form. We needed a benchmark that enables us to quantitatively compare the connectedness of different networks. For this purpose, the model records the mean path length of the network in each instance of the simulation as follows fronczak2004average : mean path length = average shortest path length over all distinct pairs of nodes in the network (2) The model was run 1000 times for each of the Watts-Strogatz and Barabasi-Albert networks. In each run, the timesteps required for the collective-thought to converge and the mean path length of the network were recorded. Figure 3 shows that there is a positive relationship between the mean path length of the network and the timesteps required for the collective-thought to reach convergence. With the Barabasi-Albert algorithm, the Pearson correlation coefficient between the mean path length and the required timesteps was 0.18, with a p-value less than 0.00001, which shows the correlation is statistically significant. The corresponding coefficient for Watts-Strogatz networks was 0.39, with a p-value less than 0.00001, which shows this correlation is also significant. These results support our hypothesis that the connectedness of a network improves its resistance against conspirators. (a) Scale Free (Barabasi-Albert) (b) Small World (Watts-Strogatz) Figure 2: The timesteps required for the network’s collective-thought to converge. A small-world network tends to be deceived more quickly than a scale-free network. 
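The mean path length benchmark of Eq. (2) can be reproduced with a small pure-Python sketch (our own simplified Watts-Strogatz-style generator and a BFS routine, not the NetLogo code; parameter values are illustrative):

```python
import random
from collections import deque

def watts_strogatz(n, k, beta, rng):
    """Ring lattice with k nearest neighbours; each edge rewired w.p. beta."""
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < beta:          # rewire this edge's endpoint
                j = rng.randrange(n)
                while j == i or (min(i, j), max(i, j)) in edges:
                    j = rng.randrange(n)
            edges.add((min(i, j), max(i, j)))
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def mean_path_length(adj):
    """Eq. (2): average shortest path over all distinct (reachable) pairs."""
    total = pairs = 0
    for src in adj:                          # BFS from every node
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node > src:                   # count each unordered pair once
                total += d
                pairs += 1
    return total / pairs

rng = random.Random(42)
lattice = watts_strogatz(100, 4, 0.0, rng)       # pure ring lattice
small_world = watts_strogatz(100, 4, 0.1, rng)   # a few random shortcuts
print(f"ring lattice:  {mean_path_length(lattice):.2f}")
print(f"10% rewiring:  {mean_path_length(small_world):.2f}")
```

Rewiring even a small fraction of the edges typically shortens the mean path length considerably, which is exactly the "connectedness" effect the correlation analysis above measures.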
(a) Scale Free (Barabasi-Albert) (b) Small World (Watts-Strogatz) Figure 3: The relationship between the mean path length and the timesteps required for a population’s collective-thought to converge in A) scale-free and B) small-world networks. (a) Scale Free (Barabasi-Albert) (b) Small World (Watts-Strogatz) Figure 4: The relationship between the sum of the eigenvector centralities of all 4 conspirators and the timesteps required for a population’s collective-thought to converge in A) scale-free and B) small-world networks. Besides the connectedness of a network, the structural position of conspirators in a network determines the reach of their disinformation and their ability to disseminate it through the whole population. To show that our model captures this phenomenon, we computed the eigenvector centrality of each agent in the network. Eigenvector centrality is a measure of centrality in a network newman2008mathematics ; a node’s eigenvector centrality is high when it is connected to other highly central nodes. The model was run 1000 times for each of the Watts-Strogatz and Barabasi-Albert networks. For each run, the timesteps required for the collective belief to reach convergence and the sum of the eigenvector centralities of the conspirator agents were recorded. Figure 4 shows the negative relationship between the timesteps required for the collective-thought to reach convergence and the sum of the eigenvector centralities of the conspirators. With the Barabasi-Albert algorithm, the Pearson correlation coefficient between the sum of the eigenvector centralities of the conspirators and the required timesteps was -0.19, with a p-value less than 0.00001, which shows the correlation is statistically significant. The corresponding coefficient for Watts-Strogatz networks was -0.42, with a p-value less than 0.00001, which shows this correlation is also statistically significant. 
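Eigenvector centrality itself can be computed by power iteration on the adjacency structure; a minimal pure-Python sketch on a toy graph (the graph and iteration count are our own illustrative choices, not the model's networks):

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration on the adjacency structure (pure-Python sketch)."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        # Multiply by the adjacency matrix: each node sums its neighbours.
        nxt = {v: sum(x[u] for u in adj[v]) for v in adj}
        # Normalize to unit Euclidean length to avoid blow-up.
        norm = sum(val * val for val in nxt.values()) ** 0.5
        x = {v: val / norm for v, val in nxt.items()}
    return x

# Toy graph: a triangle 0-1-2 with a pendant node 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
c = eigenvector_centrality(adj)
print(max(c, key=c.get))  # node 0 is the most central
```

Node 0 scores highest because it is connected both to the mutually connected pair 1-2 and to the pendant node, illustrating why conspirators placed at such positions spread disinformation faster.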
## 5 Discussion

The model shows promising results in that its vulnerability is correlated with the connectedness of the network and the importance (eigenvector centrality) of the conspirators. By measuring the time required for convergence, the resistance of a population against conspiracies was quantified. The results show that a network built by the Watts-Strogatz algorithm was slightly more vulnerable to conspiracies than a network built by the Barabasi-Albert algorithm (scale-free). The reason for this difference might be that a Watts-Strogatz network has high local clustering while holding a short average path length like random networks; therefore, the propagation of (dis)information in it is easier and faster. On the other hand, noting that we used only 4 conspirators in our model and the total number of agents was 100, the probability that conspirators become a hub was low, and therefore their ability to propagate their conspiracy was slightly smaller than in the Watts-Strogatz network. A probable misinterpretation here might be that a less connected network is better against conspiracy theories. This is not correct, because in our model only conspirators were acting to deceive the population and trying to propagate their disinformation throughout the network. In fact, the results of this study can be interpreted in a content-agnostic manner. In reality, both sides of a discussion try to influence the network. Nevertheless, both networks show that the connectedness of a network and the eigenvector centrality of the conspirators are highly correlated with the network’s vulnerability to conspiracies. This was a first step toward a framework for the computational study of conspiracy propagation. Further research must be conducted using other network-formation algorithms. Other ranges of the conspirator ratio and other population sizes should also be experimented with. Another extension of this research might be to test the effect of a multi-dimensional opinion space for agents. 
Finally, other non-Bayesian models of opinion dynamics could be tested.

## References

* (1) Acemoglu, D., and Ozdaglar, A. Opinion dynamics and learning in social networks. Dynamic Games and Applications 1, 1 (2011), 3–49. * (2) Acemoglu, D., Ozdaglar, A., and ParandehGheibi, A. Spread of (mis) information in social networks. Games and Economic Behavior 70, 2 (2010), 194–227. * (3) Bakshy, E., Messing, S., and Adamic, L. A. Exposure to ideologically diverse news and opinion on Facebook. Science 348, 6239 (2015), 1130–1132. * (4) Barabási, A.-L., and Bonabeau, E. Scale-free networks. Scientific American 288, 5 (2003), 60–69. * (5) Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A., and Bonneau, R. Tweeting from left to right: Is online political communication more than an echo chamber? Psychological Science 26, 10 (2015), 1531–1542. * (6) Barkun, M. A culture of conspiracy: Apocalyptic visions in contemporary America, vol. 15. Univ of California Press, 2013. * (7) Bauman, Z. Liquid modernity. John Wiley & Sons, 2013. * (8) Bavelas, A. Communication patterns in task-oriented groups. The Journal of the Acoustical Society of America 22, 6 (1950), 725–730. * (9) Bayes, T. An essay towards solving a problem in the doctrine of chances. 1763. MD computing: computers in medical practice 8, 3 (1991), 157. * (10) Breton, A., Galeotti, G., Salmon, P., and Wintrobe, R. Political extremism and rationality. Cambridge University Press, 2002. * (11) Castellano, C., Muñoz, M. A., and Pastor-Satorras, R. Nonlinear q-voter model. Physical Review E 80, 4 (2009), 041129. * (12) Clifford, P., and Sudbury, A. A model for spatial conflict. Biometrika 60, 3 (1973), 581–588. * (13) DeGroot, M. H. Reaching a consensus. Journal of the American Statistical Association 69, 345 (1974), 118–121. * (14) Fronczak, A., Fronczak, P., and Hołyst, J. A. Average path length in random networks. Physical Review E 70, 5 (2004), 056110. * (15) Galam, S. Minority opinion spreading in random geometry. 
The European Physical Journal B-Condensed Matter and Complex Systems 25, 4 (2002), 403–406. * (16) Garibay, I., Mantzaris, A. V., Rajabi, A., and Taylor, C. E. Polarization in social media assists influencers to become more influential: analysis and two inoculation strategies. Scientific Reports 9, 1 (2019), 1–9. * (17) Grimes, D. R. On the viability of conspiratorial beliefs. PloS one 11, 1 (2016), e0147905. * (18) Hegselmann, R., Krause, U., et al. Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of artificial societies and social simulation 5, 3 (2002). * (19) Latané, B. The psychology of social impact. American psychologist 36, 4 (1981), 343. * (20) Lewandowsky, S., Gignac, G. E., and Oberauer, K. The role of conspiracist ideation and worldviews in predicting rejection of science. PloS one 8, 10 (2013), e75637. * (21) Newman, M. E. The mathematics of networks. The new palgrave encyclopedia of economics 2, 2008 (2008), 1–12. * (22) Sunstein, C. R., and Vermeule, A. Conspiracy theories: Causes and cures. Journal of Political Philosophy 17, 2 (2009), 202–227. * (23) Sunstein, C. R., and Vermeule, A. Conspiracy theories: Causes and cures. Journal of Political Philosophy 17, 2 (2009), 202–227. * (24) Wang, X. F., and Chen, G. Complex networks: small-world, scale-free and beyond. IEEE circuits and systems magazine 3, 1 (2003), 6–20. * (25) Watts, D. J., and Strogatz, S. H. Collective dynamics of ‘small-world’networks. nature 393, 6684 (1998), 440.
# Sparse vs Contiguous Adversarial Pixel Perturbations in Multimodal Models: An Empirical Analysis Cristian-Alexandru Botocan (EPFL, Lausanne, Switzerland; armasuisse S+T, Thun, Switzerland), Raphael Meier (armasuisse S+T, Thun, Switzerland), and Ljiljana Dolamic (armasuisse S+T, Thun, Switzerland) (2024) ###### Abstract. Assessing the robustness of multimodal models against adversarial examples is an important aspect of the safety of their users. We craft $L_{0}$-norm perturbation attacks on the preprocessed input images and launch them in a black-box setup against four multimodal models and two unimodal DNNs, considering both targeted and untargeted misclassification. Our attacks perturb less than 0.04% of the image area and integrate different spatial positionings of the perturbed pixels: sparse positioning and pixels arranged in different contiguous shapes (row, column, diagonal, and patch). To the best of our knowledge, we are the first to assess the robustness of three state-of-the-art multimodal models (ALIGN, AltCLIP, GroupViT) against different sparse and contiguous pixel-distribution perturbations. The obtained results indicate that unimodal DNNs are more robust than multimodal models. Furthermore, models using a CNN-based image encoder are more vulnerable than models with a ViT; for untargeted attacks, we obtain a 99% success rate by perturbing less than 0.02% of the image area. Keywords: AI security, multimodal models, contiguous attacks, sparse attacks, pixel perturbations. Copyright: CC. ## 1\. Introduction In computer vision, researchers continually seek new methods to improve image classification and tasks beyond it (e.g., given a set of textual descriptions and an image, the model chooses the most likely description).
For that, the research community developed multimodal models (Radford et al., 2021; Haas et al., 2023; Jia et al., 2021; Chen et al., 2022; Li et al., 2022; Xu et al., 2022; Pham et al., 2023; Zhong et al., 2022), which are used as a basis for further research (CLIP (Radford et al., 2021), e.g., led to various modifications such as RegionCLIP (Zhong et al., 2022)) and are increasingly also considered for commercial usage (see, e.g., the Deloitte report "Novel Design Classification with CLIP"). Thus, it is essential to understand the robustness and security implications of such models before deploying them. While prior studies have demonstrated the vulnerability of multimodal models to perturbations of the whole image (Mao et al., 2022; Qiu et al., 2022), their vulnerability to pixel-level perturbations remains largely unexplored. In particular, the impact of the number of perturbed pixels and of their spatial distribution on attack performance has not been explored so far. Both of these basic aspects must be considered when constructing adversarial attacks using pixel perturbations. Empirical analysis of their impact on attack performance would enable more informed design choices when crafting novel adversarial pixel perturbations and ultimately lead to a better understanding of the overall robustness of multimodal models. In this work, we develop methods and experiments to address this gap. We focus on the black-box scenario because we want to confront the security problem of an attacker who has no prior information about the model. We rely on $L_{0}$-norm perturbations to have control over the number of pixels being perturbed. We extend the _Sparse_ pixel-distribution attack presented by Su et al. (Su et al., 2019) to incorporate different spatial encodings, resulting in five different _Contiguous Attacks_ (details are described in Section 4.2).
Depending on the number and distribution of perturbed pixels, our method exploits potential vulnerabilities hidden in the model architectures, such as the kernel sizes in the CNN-based image encoders of multimodal models. Pixel perturbations are usually performed on the original image, which assumes a threat model with minimal information available to the adversary (i.e., the attacker does not need to know the preprocessing routine). Perturbing the preprocessed image instead guarantees that the experimental results are not confounded by the different preprocessing pipelines, which is what we prioritized in this study. We evaluate the effectiveness of these attacks with images from ImageNet on four open-source models (ALIGN (Jia et al., 2021), AltCLIP (Chen et al., 2022), CLIP-B/32 (Radford et al., 2021), and GroupViT (Xu et al., 2022)) representative of the domain. Additionally, we evaluate our approach on two state-of-the-art Deep Neural Networks (DNNs) trained on the ImageNet dataset (Krizhevsky et al., 2012). Finally, we release the code for reproducing the study: https://github.com/ChristianB024/SparseVsContiguityRepo Our empirical analysis led to several key insights: * • Three out of four multimodal models and both unimodal DNN models are most vulnerable to the _Sparse Attack_ in all examined attack scenarios (targeted and untargeted misclassification). * • The _Patch Attack_ is most effective against the CNN-based multimodal model (ALIGN (Jia et al., 2021)). In an untargeted scenario, we reach a 99% success rate by perturbing only 0.01915% of the image (16 pixels). * • Overall, we observe that the multimodal models are more vulnerable to pixel-perturbation attacks than the state-of-the-art DNNs, which we suspect is linked to the way multimodal models and unimodal DNNs are trained. ## 2\. Related Works ### 2.1.
Adversarial Attacks on multimodal models In recent years, multimodal models (Radford et al., 2021; Haas et al., 2023; Jia et al., 2021; Chen et al., 2022; Li et al., 2022; Xu et al., 2022; Pham et al., 2023; Zhong et al., 2022), which combine information from different modalities such as text, image, and audio, have rapidly emerged as powerful tools across applications ranging from natural language processing and computer vision to speech recognition. Previous research studied attack techniques on multimodal models and tried to identify vulnerabilities in those models. An adversarial perturbation attack can be characterized by the approach used for perturbing the input. For instance, the works (Cao et al., 2023; Fort, 2021) explored the robustness of the CLIP (Radford et al., 2021) multimodal model against typographical attacks, in which the attacker places a textual sticker containing a word on the image so that it is misclassified as the class written on the sticker. The paper by Qiu et al. (Qiu et al., 2022) analyzed the robustness of multimodal models by adding noise/blur to the whole image to create the image perturbations, while in our work, we focus on perturbing only a limited number of pixels. Similarly, the study by Mao et al. (Mao et al., 2022) quantified the robustness of the CLIP model by creating adversarial images using the white-box PGD attack (Madry et al., 2017); they also presented two defense methods for mitigating this kind of attack. Moreover, some works, such as Yang et al. (Yang et al., 2021), focus on adversarial perturbations against multimodal models using only the white-box PGD scenario. Similarly, Schlarmann et al. (Schlarmann and Hein, 2023) launch imperceptible attacks using the same white-box PGD technique on a single open-source model (OpenFlamingo (Awadalla et al., 2023)). In contrast, our study limits the adversary's power by considering black-box scenarios only.
A rather different attack is presented in the work by Freiberger et al. (Freiberger et al., 2023), which exploited the modality gap arising in contrastive learning by generating adversarial examples with an evolution strategy that searches a generative model's latent space. ### 2.2. Adversarial attacks using Genetic Algorithms (GA) Some adversarial perturbation attacks on Deep Learning models focused only on white-box scenarios (Szegedy et al., 2013; Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2017; Papernot et al., 2016; Carlini and Wagner, 2017; Moosavi-Dezfooli et al., 2016; Athalye et al., 2018), while others evolved in the direction of black-box attacks (Su et al., 2019; Chen et al., 2017; Brendel et al., 2017; Tu et al., 2019). Moreover, several studies introduced genetic algorithms to improve their attacks (Su et al., 2019; Wu et al., 2021). For instance, the study by Su et al. (Su et al., 2019) generated adversarial examples against DNNs by perturbing a specific number of pixels, thus using an $L_{0}$ norm. Furthermore, the paper by Jere et al. (Jere et al., 2019) used evolutionary strategies (including Differential Evolution) to generate _scratch_ perturbations on images, but only against DNNs. Similarly, the research by Williams et al. (Williams and Li, 2023) applied evolutionary algorithms to sparse adversarial perturbations through simultaneous minimization of the $L_{0}$ and $L_{2}$ norms; it restricted the space of potential perturbations quite aggressively and studied only sparsely distributed pixel perturbations, so the impact of contiguous perturbation shapes on attack performance remained unexplored. In addition, both studies (Su et al., 2019; Williams and Li, 2023) performed the pixel perturbations on the original images.
In this work, we perturb the preprocessed image so that our attack is influenced neither by the dimension of the original image nor by the particular preprocessing routines. Therefore, we investigate the attack based solely on the different models. The work by Qiu et al. (Qiu et al., 2021) also analyzed creating image perturbations with respect to the $L_{\infty}$-norm in a black-box setup, using different evolutionary strategies and characterizing which evolutionary algorithm is most suitable for this kind of attack. More recently, the paper by Nam et al. (Nam and Kil, 2023) explored the one-pixel idea using an exhaustive search procedure; the perturbations were identified directly on the original image, while we attack the preprocessed images. In the study by Ghosh et al. (Ghosh et al., 2022), the differential evolution (DE) (Storn and Price, 1997) technique generated the pixel perturbations in a black-box setup. That attack was launched against relatively old CNN models (VGG16 (Simonyan and Zisserman, 2014), GoogleNet (Szegedy et al., 2015), InceptionV (Szegedy et al., 2016), ResNet-50 (He et al., 2016)), while we attack more recently proposed multimodal models. Since we want to compare their robustness with established unimodal deep neural networks, we also attack ResNet-50 (He et al., 2016) and the VAN (Visual Attention Network (Guo et al., 2023)), which uses dilated convolution layers (Yu and Koltun, 2015). ### 2.3. Adversarial Patch Attacks The idea of perturbing pixels in the form of a patch is not new; it was already tested in white-box (Brown et al., 2017; Karmon et al., 2018) and black-box setups (Brown et al., 2017; Zhou et al., 2021). For instance, the work by Alrasheedi et al. (Alrasheedi and Zhong, 2023) explored the problem of making image perturbations imperceptible and also identified certain limits on the patch width.
However, that entire attack was launched in a white-box setup, while we focus on black-box attacks, since they are much closer to real-world scenarios. The study by Wei et al. (Wei et al., 2022) performed attacks in a black-box manner by applying reinforcement learning as an optimization technique to simultaneously find the best position and perturbation values for the patch. This is similar to what we try to achieve in our experiments, but we use the DE approach to evolve our perturbations. Other methods were used in the context of patch attacks, such as hiding the patch in a QR code (Chindaudom et al., 2020, 2022) so that the pixel perturbations become less suspicious to the human eye. Moreover, patch adversarial attacks were also examined for object-detection models using aerial images (Tang et al., 2023) and for deep hashing models (Hu et al., 2021b). Lastly, the research community relied on GANs to generate adversarial patches for image classification (Bai et al., 2021; Demir and Unal, 2018; Liu et al., 2019) or object-detection models (Hu et al., 2021a; Pavlitskaya et al., 2022). With this technique, the recent paper by Zhou et al. (Zhou et al., 2023) showed a non-targeted attack against the multimodal model CLIP (Radford et al., 2021). However, we do not train GAN models, since we try to limit the attacker's power (including computational power). ## 3\. Attack Model Figure 1. Threat Model Visualization We analyze the black-box scenario of an adversary with access to Artificial Intelligence (AI) models during inference, so the attacker has access only to the model's input and output. Figure 1 presents a visualization of the described threat model.
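The threat model can be captured by a thin wrapper that exposes only output probabilities and enforces a query budget. The sketch below is our own illustration (the class name, the stand-in classifier, and the budget value are hypothetical; the real models are queried through their own APIs):

```python
# Sketch of the black-box interface assumed by the threat model: the
# attacker sees only class probabilities (no gradients, no internals)
# and is limited to Q queries. The inner "model" is a toy stand-in.

class BlackBox:
    def __init__(self, model_fn, budget):
        self.model_fn = model_fn
        self.budget = budget    # the query limit Q from the text
        self.queries = 0

    def query(self, image):
        if self.queries >= self.budget:
            raise RuntimeError("query budget Q exhausted")
        self.queries += 1
        return self.model_fn(image)  # probabilities only

def toy_model(img):
    return [0.7, 0.2, 0.1]  # stand-in classifier over C = 3 classes

box = BlackBox(toy_model, budget=2)
print(box.query(None))  # [0.7, 0.2, 0.1]
print(box.queries)      # 1
```

Every fitness evaluation in the later sections consumes one such query, which is why the population size and generation count are bounded by $\mathcal{P}\cdot\mathcal{G}\leq Q$.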
To formalize the attack, let the function $f:\\{0,...,255\\}^{w\times h\times 3}\rightarrow[0,1]^{C}$ represent the classification model, which takes as input $x\in\\{0,...,255\\}^{w\times h\times 3}$, an RGB image with width $w$ and height $h$. The output corresponds to a probability distribution over the $C$ classes. The maximum probability over all $C$ classes yields the label (or textual description) $y_{original}=\arg\max_{i\in\\{1,\ldots,C\\}}f_{i}(\mathbf{x})$. We consider two possible pixel-perturbation attacks. In the first one, the goal of the attacker is to find a perturbation $\Delta\mathbf{x}$ such that the new image is classified as a specific target label (targeted attack, noted _tar_). The attack is presented in Equation 1, where the adversary maximizes the target-label probability $p_{tar}$ while perturbing at most $d$ pixels. (1) $\displaystyle\text{find}\;\Delta\mathbf{x}\;\text{such that}\begin{cases}p_{\text{tar}}=\underset{\Delta\mathbf{x}}{\max}\;f_{i:=\text{tar}}(\mathbf{x}+\Delta\mathbf{x}),\\ \text{and}\;||\Delta\mathbf{x}||_{0}\leq d\end{cases}$ The second one, the untargeted attack (noted _untar_), aims at arbitrary misclassification (maximizing the probability of any class other than the original one), as described by Equation 2. (2) $\displaystyle\text{find}\;\Delta\mathbf{x}\;\text{such that}\begin{cases}p_{untar}=\underset{\Delta\mathbf{x}}{\max}\;f_{i\neq original}(\mathbf{x}+\Delta\mathbf{x}),\\ \text{and}\;||\Delta\mathbf{x}||_{0}\leq d\end{cases}$ The perturbation in our attack is created for the preprocessed image instead of the original image. In fact, if the adversary perturbs one pixel in a $32\times 32$ image (the dimension of the original image), then it perturbed about 0.1% ($1/1024$) of the image (the approach presented in the paper by Su et al.
(Su et al., 2019)), while one pixel in a preprocessed image of dimension $224\times 224$ represents less than 0.002% of the image area. Table 1 shows the exact percentages of the perturbed area for the target models. Furthermore, attacking such a small area of the image makes the attack less perceptible to the human eye and hard to detect by automated systems, hence having potentially catastrophic consequences in sensitive applications. Moreover, this setup is closer to the real-world scenario, where companies usually release new AI models via an API that takes input images of a specific dimension (the company also needs to release the image processor of the model) and outputs the probabilities for specific classes/textual descriptions. Lastly, the attack is limited to $Q$ queries to the model. Hence, the attacker creates the perturbations using an evolutionary algorithm (differential evolution, DE (Storn and Price, 1997)) and makes at most $Q$ queries.

Table 1. Percentage of the perturbed area in the attacked image

Preprocessed image dimension \ Pixels | 4 pixels | 9 pixels | 16 pixels
---|---|---|---
ALIGN - 289 x 289 | 0.00478% | 0.01077% | 0.01915%
AltCLIP, CLIP-ViT-B/32, GroupViT - 224 x 224 | 0.00797% | 0.01793% | 0.03188%
VAN-base, ResNet-50 - 224 x 224 | 0.00797% | 0.01793% | 0.03188%

## 4\. Evolutionary Attacks This section presents the encoding for each type of attack used (_Sparse_ and _Contiguous Attacks_), followed by how we initialize the genetic-algorithm setup. The evolutionary process is then explained in depth, covering mutation, crossover, and fitness functions. ### 4.1. Sparse Attack Figure 2. Agent encoding for the Sparse Attack As mentioned in Section 3, we use the approach of Su et al. (Su et al., 2019), the differential evolution (DE) (Storn and Price, 1997) genetic algorithm, as an optimization method for finding the best perturbations for misclassification tasks.
Encodings of the perturbations into specific vectors (called agents, or population members) make them accessible to the evolutionary approach, which tries to iteratively yield better perturbations over time. For the Sparse Attack, we use five values, $[x_{coord},y_{coord},r_{value},g_{value},b_{value}]$, to describe a pixel, namely its position given by the coordinates and its RGB values. As a result, an agent is represented by $N$ such vectors, flattened and concatenated, where $N$ is the number of pixels being perturbed. Figure 2 illustrates the encoded perturbation vector we use for this type of attack. An important property of the _Sparse Attack_ is that there are no constraints on the spatial distribution of the pixels used for the attack. ### 4.2. Contiguous Attacks In contrast to the _Sparse Attack_, we can impose constraints on the spatial distribution of the pixels used for an attack. In particular, we choose to force pixels to form spatially contiguous regions in the image (i.e., anti-diagonal, diagonal, column, row, patch). Since we also address the question of how different shapes of contiguous pixels affect model robustness, we create appropriate encodings for the _Contiguous Attacks_. We decrease the number of values that must be mutated by storing only the coordinates of the shape's first pixel. Thus, we have the generic encoding pattern $[x_{coord},y_{coord},r_{value_{1}},g_{value_{1}},b_{value_{1}},...,r_{value_{n}},g_{value_{n}},b_{value_{n}}]$, where the $r$, $g$, $b$ values represent the colors of each perturbed pixel in the specific shape. The first two values are the x and y coordinates of the starting pixel, which differs per shape: * • Anti-Diag Attack: the x and y coordinates of the lowest pixel of the shape. * • Diagonal Attack: the x and y coordinates of the up-most pixel of the diagonal. * • Column Attack: the x and y coordinates of the up-most pixel of the column.
* • Row Attack: the x and y coordinates of the left-most pixel of the row. * • Patch Attack: the x and y coordinates of the upper-left corner pixel of the patch. As an example of how the pixels are positioned and of their order in the encoding vector depending on the attack type, Figure 3 illustrates the case where we perturb four pixels. (a) Anti-Diagonal (b) Diagonal (c) Column (d) Row (e) Patch Figure 3. Spatial arrangement of pixels in Contiguous Attacks for the exemplary case of four perturbed pixels The idea of the _Contiguous Attacks_ is to perturb multiple visual patches of the ViT model (how many depends on the length of the shape). Suppose an attacker chooses the row shape; they can then perturb a number of consecutive visual patches (or tokens) to different degrees, depending on how those patches align with the row of perturbed pixels. Figure 4 shows an example of perturbing 6 pixels in different shapes: for the _Row Attack_, most of the perturbation falls on the middle visual patch and less on the first and last ones. Hence, we want to test whether it is better to perturb little information in many different visual patches (_Sparse Attack_), to perturb some visual patches lightly and others more heavily (_Row Attack_), or to perturb more information in a smaller number of visual patches (_Patch Attack_). Thus, we do not focus on evolving the positions of those pixel shapes. However, we emphasize that an evolutionary approach that finds the best perturbation for the provided shape can still exploit how the ViT creates its image tokens. (a) Sparse Attack (b) Contiguous - Row Attack (c) Patch Attack Figure 4. Pixel perturbations for different attacks on patch-based models (e.g., Vision Transformers, ViTs) ### 4.3. Initialization Similarly to the study by Su et al.
(Su et al., 2019), for the initialization of the population we do the following: * • for $x_{coord}$ and $y_{coord}$ we assign uniformly random integer values based on the dimension of the attacked image (i.e., $U(0,max_{dim})$); * • for the RGB values, since they lie between 0 and 255, we assign random values drawn from a normal distribution $N(\mu=128,\sigma=127)$. ### 4.4. Evolutionary process After initializing the first population (generating $\mathcal{P}$ perturbations, also called agents), we calculate the fitness value of each one by applying a fitness function. As mentioned in Section 3, we have two possible attack scenarios. Hence, we use two hinge loss functions inspired by the paper by Chen et al. (Chen et al., 2017) and given in Equations 3 and 4: $F_{tar}$ for targeted attacks and $F_{untar}$ for untargeted attacks. We maximize the fitness function in every new generation. Since we assume a black-box attack scenario, we can only access the probabilities for each class; here $p[x]$ denotes the softmax (probability) value the model outputs for class $x$. Sorting the agents in descending order of fitness, position zero in the population holds the best perturbation. (3) $\displaystyle F_{tar}$ $\displaystyle=p_{tar}-\max_{i\neq\text{tar}}\,p_{i}$ (4) $\displaystyle F_{untar}$ $\displaystyle=\max_{i\neq\text{original}}\,p_{i}-p_{\text{original}}$ (5) $\displaystyle agent_{cand}=population_{0}+mutation_{rate}\cdot(population_{r1}-population_{r2}),$ where $r1$ and $r2$ are random indices within the population size. For the next iteration, we create $\mathcal{P}$ new candidates ($agent_{cand}$). For each member of the population, we apply the mutation function from Equation 5 and the crossover operation, using the exponential strategy (see Appendix A).
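The agent encodings of Sections 4.1-4.2 and one generation of the process above can be sketched together as follows. This is a simplified illustration under our own naming with toy values, not the paper's implementation: the fitness is a stand-in for $F_{tar}$/$F_{untar}$ (which would query the model's softmax), mutation follows Equation 5 around the current best agent, crossover uses an exponential-style contiguous copy, and selection keeps the fittest $\mathcal{P}$ agents of the $2\mathcal{P}$ old and new ones.

```python
import math
import random

random.seed(0)

def encode_sparse(pixels):
    """Sparse agent: (x, y, r, g, b) per pixel, flattened (length 5*N)."""
    return [v for p in pixels for v in p]

def encode_contiguous(x, y, colors):
    """Contiguous agent: start coordinate + (r, g, b) per pixel (2 + 3*N)."""
    return [x, y] + [c for rgb in colors for c in rgb]

def shape_pixels(shape, x, y, n):
    """Coordinates covered by each shape (image convention: y grows downwards)."""
    if shape == "row":            # left-most pixel, grow rightwards
        return [(x + i, y) for i in range(n)]
    if shape == "column":         # up-most pixel, grow downwards
        return [(x, y + i) for i in range(n)]
    if shape == "diagonal":       # up-most pixel, grow down-right
        return [(x + i, y + i) for i in range(n)]
    if shape == "anti-diagonal":  # lowest pixel, grow up-right
        return [(x + i, y - i) for i in range(n)]
    side = int(math.isqrt(n))     # "patch": square block from the corner
    return [(x + i, y + j) for j in range(side) for i in range(side)]

P, DIMS, MUT, CROSS = 6, 5, 0.55, 0.8  # toy population; rates from Section 5.1

def fitness(agent):
    # Stand-in objective; the real fitness queries the model's probabilities.
    return -sum((v - 3.0) ** 2 for v in agent)

def evolve(pop):
    pop = sorted(pop, key=fitness, reverse=True)    # index 0 = best agent
    cands = []
    for agent in pop:
        r1, r2 = random.sample(range(P), 2)
        mutant = [pop[0][d] + MUT * (pop[r1][d] - pop[r2][d])
                  for d in range(DIMS)]             # Equation 5
        cand, d = list(agent), random.randrange(DIMS)
        start = d
        while True:                                 # exponential crossover
            cand[d] = mutant[d]
            d = (d + 1) % DIMS
            if random.random() >= CROSS or d == start:
                break
        cands.append(cand)
    return sorted(pop + cands, key=fitness, reverse=True)[:P]  # best P of 2P

print(len(encode_contiguous(10, 20, [(255, 0, 0)] * 4)))  # 14 = 2 + 3*4
print(shape_pixels("patch", 10, 20, 4))  # [(10, 20), (11, 20), (10, 21), (11, 21)]
pop = [[random.uniform(0, 10) for _ in range(DIMS)] for _ in range(P)]
best0 = max(fitness(a) for a in pop)
for _ in range(40):
    pop = evolve(pop)
print(fitness(pop[0]) >= best0)  # True: elitist selection never loses the best
```

Because the union of old and new agents is sorted before truncation, the best fitness is monotonically non-decreasing across generations, which is what makes each additional query budget-worthy.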
A list of the parameter values used in the DE is presented in Section 5.1. After obtaining the candidates, we perform $\mathcal{P}$ queries to the model and compute the fitness values based on the probabilities obtained. We then decide which $\mathcal{P}$ agents out of $2\cdot\mathcal{P}$ (old and new agents) survive to the next generation, based on their fitness values: the $\mathcal{P}$ agents with the highest fitness values are selected. We repeat this process $\mathcal{G}$ times, as we are limited in the maximum number of queries we can make. The idea of the evolutionary approach is to generate increasingly better perturbations over time and, as a result, obtain at least one perturbation that makes the attack successful. Further details about how we quantify the success of an attack are given in Section 5.2. ## 5\. Experiments This section presents the images used for the attack and the parameter values used. Moreover, we describe the evaluation metrics and baselines. Lastly, we present our numerical results. ### 5.1. Experiment setup We evaluate our attacks on samples from ImageNet (Krizhevsky et al., 2012). It contains images of variable resolution and, thus, with different numbers of pixels. As explained in Section 3, we attack the preprocessed image before it is fed into the model; we do not apply any additional preprocessing to the original dataset images to scale them all to a fixed $256\times 256$ resolution, as is done in the original paper (Krizhevsky et al., 2012). The number of labels in this dataset is 1000. Table 2.
Performance comparison between Multimodal Models and DNNs

Model | Accuracy | Model Type
---|---|---
ALIGN (Jia et al., 2021) | 0.5442 | Multimodal
CLIP_ViT-B32 (Radford et al., 2021) | 0.5601 | Multimodal
AltCLIP (Chen et al., 2022) | 0.6993 | Multimodal
GroupViT (Xu et al., 2022) | 0.3187 | Multimodal
ResNet-50 (He et al., 2016) | 0.8087 | DNN
VAN-base (Guo et al., 2023) | 0.8022 | DNN

The models used can be split into two categories based on their architecture: state-of-the-art multimodal models on the one hand, and state-of-the-art DNNs trained on the ImageNet dataset on the other. Table 2 lists all the models and their accuracy measured on 10,000 images from the ImageNet test set. The motivation behind this assessment is the connection between robustness, the architecture used, and the versatility of the models (multimodal models can be applied to different datasets without extensive retraining thanks to their zero-shot property, while DNNs usually require training on a specific dataset and are optimized for that one only). In order to use the multimodal models for image classification, we rely on appropriate input captions. In particular, we prepend the string _"a photo of"_ to the original label of the image to enable zero-shot image classification. As observed in Table 2, the models perform very differently, and we want an extensive analysis of their robustness. Thus, to avoid any possible bias, we extract 100 random correctly classified images for each model. These are part of the test set; consequently, the models did not see them during training. After extracting those images, we store the preprocessed version for each model. Hence, we test different attack setups on the same images to quantify each attack's effectiveness.
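The caption-based zero-shot classification step works in the CLIP style: embed the image and every "a photo of <label>" caption, score by cosine similarity, and softmax the scores. The sketch below is ours with toy embedding vectors (a real system would take both embeddings from the model's two encoders, and models typically also apply a learned temperature to the logits):

```python
# Sketch of CLIP-style zero-shot classification over caption embeddings.
# The vectors are toy values, not real model outputs.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot(image_emb, caption_embs):
    scores = [cosine(image_emb, c) for c in caption_embs]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]  # probability per caption/class

labels = ["cat", "dog"]
captions = {"cat": [1.0, 0.1, 0.0], "dog": [0.0, 0.9, 0.4]}  # toy embeddings
image = [0.9, 0.2, 0.1]  # toy image embedding, closest to "cat"
probs = zero_shot(image, [captions[l] for l in labels])
print(labels[max(range(len(probs)), key=probs.__getitem__)])  # cat
```

The resulting probability vector is exactly the black-box output the attacker observes, one entry per candidate caption.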
Based on empirical observations, the best parameters for the differential evolution are $mutation_{rate}=0.55$ and $crossover_{rate}=0.8$. Moreover, a population of $\mathcal{P}=300$ gives the best results when we limit the number of iterations to $\mathcal{G}=100$. As a remark from Section 3, the total number of queries an attacker needs to perturb an image is at most $Q$, which translates to $\mathcal{P}\cdot\mathcal{G}\leq Q$, i.e., $300\cdot 100=30,000\leq Q$. ### 5.2. Evaluation Metrics and Baselines Attack performance is quantified by computing the _SR_ (success rate) as an empirical measurement. Depending on the type of attack, we define success as follows: for targeted attacks, we count how many images are misclassified as the target label, while for untargeted scenarios, we count how many images are misclassified as any label different from the original one. As mentioned in Section 5.1, in our setup this means using 100 images and counting how many of them we manage to attack successfully. We then divide this count by 100 (the number of images), so the SR is a value between 0 and 1. Thus, an SR value of 1 means we create successful perturbations for all images. Lastly, to investigate the effectiveness of the evolutionary attacks, we create random versions of each type of attack and use them as baselines. These _Random Attacks_ use the same parameters as the evolutionary ones ($\mathcal{P}=300$ and $\mathcal{G}=100$), but generate the random agents without an evolutionary approach. At each iteration, we generate $\mathcal{P}$ agents according to their respective encodings presented in Sections 4.1 and 4.2, with values drawn randomly according to the initialization described in Section 4.3. However, corrections are applied if values fall outside the valid ranges (especially for the RGB values). ### 5.3. Results Figure 5. Targeted Attacks Figure 6.
Untargeted Attacks We evaluate the performance of the attacks based on the SR metric and against the baselines described in Section 5.2. Figure 5 presents the SR of the targeted attacks for each selected model and attack shape, depending on the number of pixels perturbed. At first glance, we observe that for the ALIGN model the most successful attack is the _Patch Attack_, while for the others it is the _Sparse Attack_. Furthermore, the best attacks against the multimodal models achieve an SR higher than 0.35, while for the DNNs the SR is generally lower, except for ResNet-50 with the _Sparse Attack_ and 16 perturbed pixels. Overall, the SR values of the DE attacks grow more steeply with the number of perturbed pixels than those of the random baseline attacks, demonstrating that the SR of the proposed attacks is a function of the number of perturbed pixels, with more pixels yielding a higher SR. For ALIGN, Figure 5 shows that the _Patch Attack_ performs best (0.74), though for 4 pixels the _Sparse Attack_ is relatively close (0.58 for _Patch_ and 0.57 for _Sparse_). However, the transition from 4 to 9 pixels is steeper for the _Patch Attack_, starting from 0.57 and reaching 0.71, while the _Sparse Attack_ starts from 0.56 and reaches 0.64 for 9 pixels. Nevertheless, from 9 to 16 pixels the slopes of both attacks are similar. Moreover, all the _Evolutionary Attacks_ manage to beat the _Random Attacks_, with one exception: the _Random Patch Attack_, whose SR is close to that of the _Contiguous Attacks_. Apart from it, the _Random_ attacks only achieve an SR below 40%, even with an increased number of pixels. Figure 5 highlights that for AltCLIP, the _Sparse Attack_ is the best attack for 4 and 9 pixels, with SRs of 0.35 and 0.46, respectively. For 16 pixels, the most successful attack is the _Patch Attack_, with an SR of almost 0.5.
All the _Contiguous Attacks_ outperform the _Random_ ones, with the gap between them becoming more prominent as the number of pixels grows. Initially, the difference is negligible, while for 16 pixels, the gap between the _Row Attack_ (the worst _Contiguous Attack_) and _Random Patch_ (the best _Random Attack_) is approximately 0.7. _Random Patch_ is still the best _Random Attack_, usually 3-5% above the second-best _Random Attack_. Additionally, there is a big difference (10-25%) between the _Patch_ and _Sparse Attacks_ on the one hand and the remaining _Contiguous Attacks_ on the other. Based on Figure 5, CLIP is more robust to our attacks than ALIGN but less robust than the other multimodal models (GroupViT, AltCLIP). The best attack remains the _Sparse Attack_, with values from 0.5 for 4 pixels to 0.68 for 16 pixels, followed by the _Patch Attack_, with values from 0.33 to 0.50. The third-best attack is the _Row_ one; for 16 pixels it is followed within 2% by the _Column_ one. Similar to CLIP, Figure 5 shows that for GroupViT the top two attacks are the _Sparse Attack_, reaching an SR of more than 55% for all pixel counts, and the _Patch Attack_, achieving a maximum SR of 47%. Nonetheless, there is a gap between those two attacks and the others: for 16 pixels, the _Row Attack_ reaches a value of 0.45 or below, 0.02 less than the _Random Sparse Attack_. The best _Random Attack_ is the _Sparse_ one, which outperforms the _Patch_ one by approximately 10% for 16 pixels. Similarly, all the other _Contiguous Attacks_ are better than the other _Random_ ones, except for _Random Sparse_. From Figure 5 we observe that both DNNs behave similarly: they are more robust than any multimodal model. For ResNet-50, the SR does not exceed 0.45 for the best attack (the _Sparse Attack_), while for VAN-base the best one stops at 0.25.
Moreover, the trend regarding the two best attacks remains the same as for CLIP and GroupViT, but with a notable difference in the gaps between these two attacks and the rest: the discrepancy is much smaller (approx. 5% for ResNet-50 and less than 2% for VAN-base). The _Random Attacks_ behave in the same manner as the rest of the _Contiguous Attacks_ , with values fluctuating within 5% for both models. Results for the untargeted attacks are presented in Figure 6. Overall, the trends remain the same as for the targeted attacks, but the SR is increased by approximately 25% for ALIGN, 15% for AltCLIP, and 10% for CLIP and GroupViT. Interestingly, a much smaller increase is observable for both DNNs (approximately 5%). ALIGN appears to be least secure against the _Patch Attack_ for larger numbers of pixels: using 16 pixels, we obtain an SR of 99%. Moreover, the _Random Patch Attack_ is in third position, with an SR close to that of the _Sparse Attack_ for 16 pixels, achieving a score of 0.85. For AltCLIP, the difference between the _Random Attacks_ and the _Evolutionary_ ones increases with the number of pixels. In addition, similar to the targeted attack, the _Patch Attack_ is slightly better than the _Sparse_ one for 16 pixels, while for 4 and 9 the winner is the _Sparse Attack_. For CLIP and GroupViT, the ranking of attack effectiveness remains the same as for the targeted attacks, as is the case for the DNNs.

## 6. Discussion

At first glance, based on the results presented in Section 5.3, we can see that the targeted attack is more challenging than the untargeted one, which is indeed expected. However, the average difference between these two attacks for the multimodal models differs from that for the DNNs. Also, the DNNs are more robust to our attacks than any of the multimodal models.
An important observation is that for all setups we have at least two or three attacks that are better than any of the baselines. Our results suggest that multimodal models with a ViT (Vision Transformer) (Dosovitskiy et al., 2020) as the Image Encoder are more prone to the _Sparse_ than to the _Patch Attack_. To provide a potential explanation, we describe what happens when an image is fed to a ViT. Initially, the ViT splits the image into visual patches and then treats those as tokens. After this stage, the tokens are processed as in a standard Transformer (Vaswani et al., 2017). If the adversary perturbs pixels lying on different visual patches (as in the _Sparse Attack_), it can alter more tokens. Hence, in the self-attention layer (Vaswani et al., 2017) (the core component of a Transformer, which also contributes to robustness), there is a high possibility of breaking the semantic information from different visual patches. However, using any _Contiguous Attack_ , the adversary cannot perturb as many visual patches as would be possible with the _Sparse Attack_. Additionally, the _Patch Attack_ outperforms any other _Contiguous Attack_ because, with this shape, we can alter more information in a single visual patch than with other contiguous pixel distributions. We also question why CNN-based Image Encoders (the ALIGN model) are more prone to _Patch Attacks_. Following the idea from the previous paragraph, in those architectures an altered piece of information inside a convolutional kernel's window corrupts the input of the subsequent max-pooling layers. The altered information then propagates to deeper layers, which try to extract correct information that can be used as features for classification. Based on our results, ALIGN (Jia et al., 2021) is the most vulnerable model to the implemented $L_{0}$-norm perturbations.
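The visual-patch argument can be made concrete with a small counting sketch. The 16x16 patch size and the pixel coordinates below are illustrative assumptions, not values from our experiments: with the same pixel budget, scattered pixels touch many ViT tokens, while a contiguous block touches few.

```python
def patches_touched(pixels, patch_size=16):
    """Count the distinct ViT visual patches hit by a set of (row, col) pixels."""
    return len({(r // patch_size, c // patch_size) for r, c in pixels})

# 4 scattered pixels (Sparse-style) vs. a 2x2 contiguous block (Patch-style)
sparse = [(10, 10), (10, 200), (200, 10), (200, 200)]
contiguous = [(10, 10), (10, 11), (11, 10), (11, 11)]
print(patches_touched(sparse))      # 4 tokens altered
print(patches_touched(contiguous))  # 1 token altered
```

Under this toy model, the sparse layout corrupts four times as many tokens entering the self-attention layers as the contiguous one.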
The best method to attack the ALIGN model is the _Patch Attack_ and not the _Sparse_ one, as it is in most cases for the other models. A potential explanation for this observation is that ALIGN uses EfficientNet (Tan and Le, 2021) (a CNN-based architecture that uses 3x3 convolutional layers) as its Image Encoder. This is why we see a steep increase in the SR for the _Patch Attack_ from 4 to 9 pixels, i.e., from a 2x2 patch to a 3x3 patch (the same size as the convolutional kernel). However, the transition from 9 to 16 pixels is smoother because at 9 pixels we already form a patch with the same dimension as the convolutional kernel, so the attack does not gain much effectiveness from the previous size (it increased by only 4%, whereas before the increase was approximately 10%). Based on this information, the adversary can deduce information about the architecture used in the model (the kernel size of the initial convolutional layers), so we could have a potential privacy issue here. Moreover, this vulnerability of the ALIGN model to the _Patch Attack_ can have serious consequences since, for the untargeted attack, a patch of 16 pixels obtains a 99% success rate. The shape of the attack (_Patch_) additionally confirms this vulnerability, since the _Random Patch Attacks_ achieve high scores in both cases, especially in the untargeted setting, where they are the third most powerful attack. We observe that AltCLIP is more robust than the original implementation of CLIP. This could be explained by the architecture used in the Image Encoder. More precisely, AltCLIP (Chen et al., 2022) uses ViT-L/14, which has an _embedding dimension_ of 768, 24 _layers_ , a _width_ of 1024, and 16 _heads_ , while CLIP (Radford et al., 2021) uses ViT-B/32, which has an _embedding dimension_ of 512, 12 _layers_ , a _width_ of 768, and 16 _heads_.
Hence, since the embeddings from AltCLIP are larger, they capture more information, and it is therefore harder to fool the system with $L_{0}$-norm perturbations. Moreover, since the AltCLIP Image Encoder is deeper (in number of layers), it can capture more complex patterns and hierarchical features and may exhibit better robustness and generalization; this is also supported by Table 2, where AltCLIP obtained the best performance among the multimodal models. However, there seems to be a connection between the depth and width configuration of the model and larger patches, since for both untargeted and targeted attacks with 16 pixels the _Patch Attack_ is slightly better and the _Sparse Attack_ stagnates. Furthermore, AltCLIP uses XLM-R (Conneau et al., 2019) as its Text Encoder, while CLIP uses a masked self-attention Transformer. Although this aspect might influence the robustness in favor of AltCLIP, the core component that improves the security aspect is the Image Encoder, since we target it with our adversarial examples and do not give adversarial textual input. For the second most robust multimodal model, GroupViT (Xu et al., 2022), we consider that the good robustness results are determined mainly by the grouping mechanism used in the Image Encoder. This architecture uses a hierarchical grouping ViT: in short, besides the classical ViT approach, where the image is initially split into visual patches and projected into tokens (using a linear projection), it also contains Group Learnable Tokens. Each group token describes a different semantic notion; thus, the model also groups broader semantic concepts. Since those tokens are learned during the training phase, during the semantic grouping phase the model tends to avoid perturbed visual tokens; this mechanism therefore increases the robustness of the model.
In addition, we observe significant differences in the SR between the DNNs and the multimodal models. A potential cause for this behavior is that we evaluate the DNNs on images with the specific class labels for which they were originally trained. Consequently, these DNNs contain class-specific features extracted during training on ImageNet (Krizhevsky et al., 2012), which requires an attacker to perturb more pixels. In contrast to DNNs, the multimodal models were trained on large corpora of image and text data without an explicit definition of target classes during training (contrastive learning). Hence, they can be used on datasets other than the ones they were trained on. However, we argue that this flexibility comes with an increased vulnerability to pixel perturbations, which is evident from our experiments. Lastly, a potential explanation for why VAN-base (Guo et al., 2023) is the most resilient model is its use of dilated convolutional layers (Yu and Koltun, 2015). Dilated convolutions widen the filter's receptive field by introducing gaps between successive kernel elements, which can thwart attacks based on contiguous pixels. As a result, the _Sparse Attack_ is the most effective attack on VAN-base, while the _Contiguous_ ones behave similarly to the _Random Attacks_. Moreover, the robustness of the VAN-base model surpasses that of ResNet-50 (He et al., 2016), which employs regular convolutional layers, providing further evidence for the robustness of dilated convolutions.

## 7. Conclusion and Future Work

Our work analyzes the robustness of four multimodal models and two state-of-the-art unimodal DNNs against Sparse and Contiguous adversarial examples defined by the $L_{0}$-norm in a black-box attack scenario.
For both types of attack (targeted and untargeted) and all models, we have at least two evolutionary attacks that are more effective than the rest; all multimodal models that use a ViT as an Image Encoder are most vulnerable to the _Sparse Attack_. In contrast, for the CNN-based multimodal model (ALIGN), the most effective attack is a contiguous pixel perturbation in the form of a _Patch_. This attack proved so powerful against ALIGN that with an increased patch dimension of 4x4 (less than 0.02% of the image area), in an untargeted scenario, we achieved a 99% SR. Furthermore, based on the overall SR, we rank the models from most to least secure as follows: VAN-base, ResNet-50, AltCLIP, GroupViT, CLIP-B/32, and ALIGN. Our results also point toward different characteristics of AI model architectures (e.g., tokenization, convolutions, dilated convolutions) that may be responsible for different levels of robustness against our attacks. When contrasting the results for multimodal and unimodal models, we observe that there may be an essential trade-off between the robustness of a model and its adaptability to diverse tasks (i.e., zero-shot capability). Both of these aspects should be investigated further in a follow-up study. For future work, we propose to study $L_{0}$-norm attacks on other types of multimodal models that focus on object detection and text generation, such as IDEFICS (Laurençon et al., 2023), Kosmos-2 (Peng et al., 2023), and Gemini (Team et al., 2023). Moreover, we want to investigate the contiguous attacks further and focus on finding the best hyperparameters of the evolutionary algorithm for those specific patterns, so that we can construct a systematic search setup in which we can infer information about the model architecture from the observed attack performance.

## References

* Alrasheedi and Zhong (2023) Fahad Alrasheedi and Xin Zhong. 2023.
Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary. _arXiv preprint arXiv:2308.15344_ (2023). * Athalye et al. (2018) Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In _International conference on machine learning_. PMLR, 274–283. * Awadalla et al. (2023) Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. 2023\. Openflamingo: An open-source framework for training large autoregressive vision-language models. _arXiv preprint arXiv:2308.01390_ (2023). * Bai et al. (2021) Tao Bai, Jinqi Luo, and Jun Zhao. 2021. Inconspicuous adversarial patches for fooling image-recognition systems on mobile devices. _IEEE Internet of Things Journal_ 9, 12 (2021), 9515–9524. * Brendel et al. (2017) Wieland Brendel, Jonas Rauber, and Matthias Bethge. 2017. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. _arXiv preprint arXiv:1712.04248_ (2017). * Brown et al. (2017) Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. 2017. Adversarial patch. _arXiv preprint arXiv:1712.09665_ (2017). * Cao et al. (2023) Liangliang Cao, Bowen Zhang, Chen Chen, Yinfei Yang, Xianzhi Du, Wencong Zhang, Zhiyun Lu, and Yantao Zheng. 2023. Less is More: Removing Text-regions Improves CLIP Training Efficiency and Robustness. _arXiv preprint arXiv:2305.05095_ (2023). * Carlini and Wagner (2017) Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In _2017 ieee symposium on security and privacy (sp)_. Ieee, 39–57. * Chen et al. (2017) Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. 
In _Proceedings of the 10th ACM workshop on artificial intelligence and security_. 15–26. * Chen et al. (2022) Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, and Ledell Wu. 2022. Altclip: Altering the language encoder in clip for extended language capabilities. _arXiv preprint arXiv:2211.06679_ (2022). * Chindaudom et al. (2020) Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. 2020. AdversarialQR: An adversarial patch in QR code format. In _2020 Joint 9th International Conference on Informatics, Electronics & Vision (ICIEV) and 2020 4th International Conference on Imaging, Vision & Pattern Recognition (icIVPR)_. IEEE, 1–6. * Chindaudom et al. (2022) Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, and Kazunori Kotani. 2022. Surreptitious Adversarial Examples through Functioning QR Code. _Journal of Imaging_ 8, 5 (2022), 122. * Conneau et al. (2019) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. _arXiv preprint arXiv:1911.02116_ (2019). * Demir and Unal (2018) Ugur Demir and Gozde Unal. 2018. Patch-based image inpainting with generative adversarial networks. _arXiv preprint arXiv:1803.07422_ (2018). * Dosovitskiy et al. (2020) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020\. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ (2020). * Fort (2021) Stanislav Fort. 2021. Pixels still beat text: attacking the OpenAI CLIP model with text patches and adversarial pixel perturbations. _Stanislav Fort [Internet]_ 5 (2021). * Freiberger et al. (2023) Matthias Freiberger, Peter Kun, Anders Sundnes Løvlie, and Sebastian Risi. 2023. 
CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution. _arXiv preprint arXiv:2307.03798_ (2023). * Ghosh et al. (2022) Arka Ghosh, Sankha Subhra Mullick, Shounak Datta, Swagatam Das, Asit Kr Das, and Rammohan Mallipeddi. 2022. A black-box adversarial attack strategy with adjustable sparsity and generalizability for deep image classifiers. _Pattern Recognition_ 122 (2022), 108279. * Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_ (2014). * Guo et al. (2023) Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, and Shi-Min Hu. 2023. Visual attention network. _Computational Visual Media_ 9, 4 (2023), 733–752. * Haas et al. (2023) Lukas Haas, Silas Alberti, and Michal Skreta. 2023. Learning Generalized Zero-Shot Learners for Open-Domain Image Geolocalization. arXiv:2302.00275 [cs.CV] * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 770–778. * Hu et al. (2021b) Shengshan Hu, Yechao Zhang, Xiaogeng Liu, Leo Yu Zhang, Minghui Li, and Hai Jin. 2021b. Advhash: Set-to-set targeted attack on deep hashing with one single adversarial patch. In _Proceedings of the 29th ACM International Conference on Multimedia_. 2335–2343. * Hu et al. (2021a) Yu-Chih-Tuan Hu, Bo-Han Kung, Daniel Stanley Tan, Jun-Cheng Chen, Kai-Lung Hua, and Wen-Huang Cheng. 2021a. Naturalistic physical adversarial patch for object detectors. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 7848–7857. * Jere et al. (2019) Malhar Jere, Loris Rossi, Briland Hitaj, Gabriela Ciocarlie, Giacomo Boracchi, and Farinaz Koushanfar. 2019. Scratch that! An evolution-based adversarial attack against neural networks. _arXiv preprint arXiv:1912.02316_ (2019). 
* Jia et al. (2021) Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In _International conference on machine learning_. PMLR, 4904–4916. * Karmon et al. (2018) Danny Karmon, Daniel Zoran, and Yoav Goldberg. 2018. Lavan: Localized and visible adversarial noise. In _International Conference on Machine Learning_. PMLR, 2507–2515. * Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. _Advances in neural information processing systems_ 25 (2012). * Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial machine learning at scale. _arXiv preprint arXiv:1611.01236_ (2016). * Laurençon et al. (2023) Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M Rush, Douwe Kiela, et al. 2023\. Obelisc: An open web-scale filtered dataset of interleaved image-text documents. _arXiv preprint arXiv:2306.16527_ (2023). * Li et al. (2022) Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _International Conference on Machine Learning_. PMLR, 12888–12900. * Liu et al. (2019) Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. 2019. Perceptual-sensitive gan for generating adversarial patches. In _Proceedings of the AAAI conference on artificial intelligence_ , Vol. 33. 1028–1035. * Madry et al. (2017) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. _arXiv preprint arXiv:1706.06083_ (2017). * Mao et al. 
(2022) Chengzhi Mao, Scott Geng, Junfeng Yang, Xin Wang, and Carl Vondrick. 2022. Understanding zero-shot adversarial robustness for large-scale models. _arXiv preprint arXiv:2212.07016_ (2022). * Moosavi-Dezfooli et al. (2016) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 2574–2582. * Nam and Kil (2023) Wonhong Nam and Hyunyoung Kil. 2023. AESOP: Adjustable Exhaustive Search for One-Pixel Attacks in Deep Neural Networks. _Applied Sciences_ 13, 8 (2023), 5092. * Papernot et al. (2016) Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In _2016 IEEE European symposium on security and privacy (EuroS &P)_. IEEE, 372–387. * Pavlitskaya et al. (2022) Svetlana Pavlitskaya, Bianca-Marina Codău, and J Marius Zöllner. 2022. Feasibility of inconspicuous GAN-generated adversarial patches against object detection. _arXiv preprint arXiv:2207.07347_ (2022). * Peng et al. (2023) Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. 2023. Kosmos-2: Grounding Multimodal Large Language Models to the World. _arXiv preprint arXiv:2306.14824_ (2023). * Pham et al. (2023) Hieu Pham, Zihang Dai, Golnaz Ghiasi, Kenji Kawaguchi, Hanxiao Liu, Adams Wei Yu, Jiahui Yu, Yi-Ting Chen, Minh-Thang Luong, Yonghui Wu, et al. 2023\. Combined scaling for zero-shot transfer learning. _Neurocomputing_ 555 (2023), 126658. * Qiu et al. (2021) Hao Qiu, Leonardo Lucio Custode, and Giovanni Iacca. 2021. Black-box adversarial attacks using evolution strategies. In _Proceedings of the Genetic and Evolutionary Computation Conference Companion_. 1827–1833. * Qiu et al. (2022) Jielin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, and Mu Li. 2022. 
Are Multimodal Models Robust to Image and Text Perturbations? _arXiv preprint arXiv:2212.08044_ (2022). * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021\. Learning transferable visual models from natural language supervision. In _International conference on machine learning_. PMLR, 8748–8763. * Schlarmann and Hein (2023) Christian Schlarmann and Matthias Hein. 2023. On the adversarial robustness of multi-modal foundation models. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 3677–3685. * SciPy Contributors (2024) SciPy Contributors. Accessed: 2024. _scipy.optimize.differential_evolution_. * Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_ (2014). * Storn and Price (1997) Rainer Storn and Kenneth Price. 1997. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. _Journal of global optimization_ 11 (1997), 341–359. * Su et al. (2019) Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. 2019. One pixel attack for fooling deep neural networks. _IEEE Transactions on Evolutionary Computation_ 23, 5 (2019), 828–841. * Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 1–9. * Szegedy et al. (2016) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 2818–2826. * Szegedy et al. 
(2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. _arXiv preprint arXiv:1312.6199_ (2013). * Tan and Le (2021) Mingxing Tan and Quoc Le. 2021. Efficientnetv2: Smaller models and faster training. In _International conference on machine learning_. PMLR, 10096–10106. * Tang et al. (2023) Guijian Tang, Tingsong Jiang, Weien Zhou, Chao Li, Wen Yao, and Yong Zhao. 2023. Adversarial patch attacks against aerial imagery object detectors. _Neurocomputing_ 537 (2023), 128–140. * Team et al. (2023) Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023\. Gemini: a family of highly capable multimodal models. _arXiv preprint arXiv:2312.11805_ (2023). * Tu et al. (2019) Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Shin-Ming Cheng. 2019. Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 742–749. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ 30 (2017). * Wei et al. (2022) Xingxing Wei, Ying Guo, Jie Yu, and Bo Zhang. 2022. Simultaneously optimizing perturbations and positions for black-box adversarial patch attacks. _IEEE transactions on pattern analysis and machine intelligence_ (2022). * Williams and Li (2023) Phoenix Neale Williams and Ke Li. 2023. Black-Box Sparse Adversarial Attack via Multi-Objective Optimisation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 12291–12301. * Wu et al. (2021) Chenwang Wu, Wenjian Luo, Nan Zhou, Peilan Xu, and Tao Zhu. 2021. 
Genetic algorithm with multiple fitness functions for generating adversarial examples. In _2021 IEEE Congress on Evolutionary Computation (CEC)_. IEEE, 1792–1799. * Xu et al. (2022) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, and Xiaolong Wang. 2022. Groupvit: Semantic segmentation emerges from text supervision. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 18134–18144. * Yang et al. (2021) Karren Yang, Wan-Yi Lin, Manash Barman, Filipe Condessa, and Zico Kolter. 2021. Defending multimodal fusion models against single-source adversaries. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 3340–3349. * Yu and Koltun (2015) Fisher Yu and Vladlen Koltun. 2015. Multi-scale context aggregation by dilated convolutions. _arXiv preprint arXiv:1511.07122_ (2015). * Zhong et al. (2022) Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, et al. 2022\. Regionclip: Region-based language-image pretraining. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 16793–16803. * Zhou et al. (2021) Xingyu Zhou, Zhisong Pan, Yexin Duan, Jin Zhang, and Shuaihui Wang. 2021. A data independent approach to generate adversarial patches. _Machine Vision and Applications_ 32 (2021), 1–9. * Zhou et al. (2023) Ziqi Zhou, Shengshan Hu, Minghui Li, Hangtao Zhang, Yechao Zhang, and Hai Jin. 2023. Advclip: Downstream-agnostic adversarial examples in multimodal contrastive learning. In _Proceedings of the 31st ACM International Conference on Multimedia_. 6311–6320. ## Appendix A Exponential Strategy in Differential Evolution Algorithm 1 was used for the Crossover operation - Exponential Strategy in the Differential Evolution. 
The idea of this algorithm is to recombine the agents based on a $crossover_{rate}$ parameter, which controls how much information from the old agents ($agent_{old}$) should also be preserved in the new ones ($agent_{cand}$). Thus, we directly modify the new agents in the $agent_{cand}$ vector and return it for further processing. This code was adapted from the Differential Evolution implemented in Scipy (SciPy Contributors, 2024).

Algorithm 1 Exponential Strategy (SciPy Contributors, 2024)

1: function Crossover Exp($agent_{old},agent_{cand},crossover_{rate}$)
2:   $big_{r}\leftarrow\text{random index value of the }agent_{cand}$
3:   $i\leftarrow 0$
4:   while $i<\text{size of }agent_{cand}$ do
5:     $r\leftarrow$ random number between 0 and 1
6:     if $crossover_{rate}<r$ then
7:       break
8:     $agent_{cand}[big_{r}]\leftarrow agent_{old}[big_{r}]$
9:     $big_{r}\leftarrow(big_{r}+1)\mod agent_{cand}\\_size$
10:    $i\leftarrow i+1$
11:  return $agent_{cand}$
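A direct Python transcription of Algorithm 1 might look as follows. This is a sketch: the function name and the `rng` parameter are our own, and the behavior follows the pseudocode above rather than SciPy's internal implementation.

```python
import random

def crossover_exp(agent_old, agent_cand, crossover_rate, rng=random):
    """Exponential-strategy crossover, following Algorithm 1.

    Starting at a random index and wrapping around, genes from agent_old
    are copied into agent_cand while uniform draws stay below the
    crossover rate; the first draw that exceeds it stops the copying.
    agent_cand is modified in place and returned.
    """
    n = len(agent_cand)
    big_r = rng.randrange(n)        # random starting index
    for _ in range(n):
        r = rng.random()            # uniform draw in [0, 1)
        if crossover_rate < r:
            break
        agent_cand[big_r] = agent_old[big_r]
        big_r = (big_r + 1) % n     # wrap around the agent
    return agent_cand

# a rate of 1.0 copies the whole old agent back into the candidate
print(crossover_exp([1, 2, 3, 4], [9, 9, 9, 9], 1.0))  # [1, 2, 3, 4]
```

Note that, as written in the pseudocode, a higher crossover rate preserves *more* of the old agent, matching the description of the parameter above.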
# Pickup and Delivery Problem with Transfers

Santiago Hincapie Potes, Mathematical Engineering Student,<EMAIL_ADDRESS>Universidad EAFIT, Medellín, Colombia.

Catalina Lesmes Ramirez, Mathematical Engineering Student,<EMAIL_ADDRESS>Universidad EAFIT, Medellín, Colombia.

###### Abstract

In this article we discuss the Pickup and Delivery Problem with Transfers (PDP-T). We first present the Pickup and Delivery Problem (PDP), from which the PDP-T derives, then give a description of the problem and its mathematical formulation to provide a clear view of what we are going to solve, and finally present different ways of solving this kind of problem.

Keywords: Delivery problem, local search, random search, heuristics, hybrid methods

## 1 Introduction

Many logistics problems require picking up products in one place and taking them to another, which is why we define the pickup and delivery problem (PDP). In this problem, a set of vehicles picks up and delivers a set of items. The goal is to deliver the items at the lowest cost while obeying a set of constraints, such as time windows and capacities. The PDP is a well-studied, NP-hard problem, so approximation algorithms and heuristics have been developed to address its variants. Many techniques can be used to solve the PDP, such as genetic algorithms, various metaheuristics, tabu search heuristics, and branch-and-cut algorithms. Certain transfers can be added to the PDP, meaning that the vehicle delivering a product hands it over to another vehicle before the delivery is completed. By adding those transfers we obtain the PDP with Transfers (PDP-T), in which we consider transferring items between vehicles. We can convert a PDP instance into a PDP-T one by adding some variables and constraints.
This article is organized as follows. Section 1 gives a general notion of the problem; Section 2 provides a brief literature review with some examples of solution methods for the PDP-T; Section 3 gives the problem description, where all the variables, parameters, and constraints of the mathematical model are explained in detail; Section 4 explains the solution algorithms we propose for the problem; Section 5 presents the results of the computational experiments; and Sections 6 and 7 give some conclusions and suggestions for future work.

## 2 Literature Review

As we saw before, there are different methods to solve the PDP-T. As an example, a branch-and-cut method using Benders decomposition can be used. In this method, the set of constraints is decomposed into pure integer and mixed constraints, and a branch-and-cut procedure is then applied to the resulting pure integer problem, using the related real variables and constraints as cut generators. The key to the success of this method is that the constraints defined by a logical sentence are not modeled using the big-M technique, as is usual in a branch-and-bound methodology for the original PDP formulation. This method may be applied only when the objective function is either purely real or purely integer (Cortés et al., 2010). Another method which solves the problem is a Very Large Neighborhood Search with Transfers (VLNS-T) based on the Adaptive Very Large Neighborhood Search (VLNS). The VLNS algorithm uses simulated annealing to randomly choose neighboring schedules and iteratively improve the schedule. Neighboring schedules are formed by removing random items and reinserting them with heuristics. VLNS-T is based on the VLNS algorithm for the PDP without transfers, a variant of simulated annealing in which the neighborhood of states is "very large".
In this case we remove random items from the schedule and then reinsert them with multiple heuristics to find neighbors (Cat, n.d.). There are also some special cases of the PDP-T, such as the Pickup and Delivery Problem with Shuttle Routes (PDP-S), which relies on a structured network with two categories of routes. Pickup routes visit a set of pickup points independently of their delivery points and end at one delivery point. Shuttle routes are direct trips between two delivery points. Requests can be transported in one leg (pickup route) or two legs (pickup route plus shuttle route) to their delivery point. The PDP-S applies to transportation systems with a multitude of pickup points and a few common delivery points (Masson et al., 2010).

## 3 Problem Description and Mathematical Formulation

In the PDP, a heterogeneous vehicle fleet based at multiple terminals must satisfy a set of transportation requests. Each request is defined by a pickup point, a corresponding delivery point, and a demand to be transported between these locations (Parragh et al., 2008a). The PDP-T additionally allows the option for passengers to transfer between vehicles, provided that the locations of the transfer points are fixed and known.

### 3.1 Problem Formulation

Let $N$ be the set of transportation requests. For each transportation request $i\in N$ a load of size $\bar{q_{i}}\in\mathbb{N}$ has to be transported from a set of origins $N^{+}_{i}$ to a set of destinations $N_{i}^{-}$. Each load is divided as follows: $\displaystyle\bar{q_{i}}=\sum_{j\in N_{i}^{+}}q_{j}=-\sum_{j\in N_{i}^{-}}q_{j}$, i.e., positive quantities for pickups and negative quantities for deliveries. Define $\displaystyle N^{+}:=\cup_{i\in N}N_{i}^{+}$ as the set of all origins and $\displaystyle N^{-}:=\cup_{i\in N}N_{i}^{-}$ as the set of all destinations. Let $V:=N^{+}\cup N^{-}$. Furthermore, let $M$ be the set of vehicles.
Each vehicle $k\in M$ has a capacity $Q_{k}\in\mathbb{N}$, a start location $k^{+}$ and an end location $k^{-}$. Define $M^{+}:=\{k^{+}|k\in M\}$ as the set of start locations and $M^{-}:=\{k^{-}|k\in M\}$ as the set of end locations. Let $W:=M^{+}\cup M^{-}$. For all $i,j\in V\cup W$ let $d_{ij}$ denote the travel distance, $t_{ij}$ the travel time and $c_{ij}$ the travel cost. Note that the dwell time at origins and destinations can easily be incorporated into the travel time and therefore will not be considered explicitly. To formulate the PDP as a mathematical program we introduce four types of variables. For $i\in N$ and $k\in M$: $z_{i}^{k}=\begin{cases}1&\text{if transportation request }i\text{ is assigned to vehicle }k\\ 0&\text{otherwise}\end{cases}$ For $(i,j)\in(V\times V)\cup\{(k^{+},j)|j\in V\}\cup\{(j,k^{-})|j\in V\}$ and $k\in M$: $x_{ij}^{k}=\begin{cases}1&\text{if vehicle }k\text{ travels from location }i\text{ to location }j\\ 0&\text{otherwise}\end{cases}$ Finally, $D_{i}$, $i\in V\cup W$, specifies the departure time at vertex $i$, and $y_{i}$, $i\in V\cup W$, specifies the load of the vehicle arriving at vertex $i$ (Parragh et al., 2008a). All this information is summarized in Table 1: Table 1: Parameters and variables found in (Parragh et al.
, 2008a) for the model

Object | Meaning
---|---
$M$ | Set of vehicles
$N$ | Set of transportation requests
$T$ | Set of transfer points
$M^{+}$ | Set of start locations (origin depots) of the vehicles
$M^{-}$ | Set of end locations (destination depots) of the vehicles
$N^{+}$ | Set of origin nodes of the requests
$N^{-}$ | Set of destination nodes of the requests
$V$ | Set of request nodes, $V=N^{+}\cup N^{-}$
$q_{i}$ | Size of request $i\in N$
$Q_{k}$ | Capacity of vehicle $k\in M$
$t_{ij}$ | Travel time from node $i$ to node $j$
$c_{ij}$ | Travel cost from node $i$ to node $j$
$d_{ij}$ | Travel distance from node $i$ to node $j$
$z_{i}^{k}$ | Binds transportation requests to vehicles
$x_{ij}^{k}$ | Binds routes to vehicles
$D_{i}$ | Departure time at vertex $i$
$y_{i}$ | Load of the vehicle arriving at vertex $i$

Now the model is:

minimize $\displaystyle\sum_{k\in M}\sum_{i,j\in V\cup W}d_{ij}x_{ij}^{k}$ (1)

subject to

$\displaystyle\sum_{k\in M}z_{i}^{k}=1$ for all $\displaystyle i\in N$ (2)

$\displaystyle\sum_{j\in V\cup W}x_{jl}^{k}=z_{i}^{k}$ for all $\displaystyle i\in N,\ l\in N^{+}_{i}\cup N_{i}^{-},\ k\in M$ (3)

$\displaystyle\sum_{j\in V\cup\{k^{-}\}}x^{k}_{k^{+}j}=1$ for all $\displaystyle k\in M$ (4)

$\displaystyle\sum_{j\in V\cup\{k^{+}\}}x^{k}_{jk^{-}}=1$ for all $\displaystyle k\in M$ (5)

$\displaystyle D_{k^{+}}=0$ for all $\displaystyle k\in M$ (6)

$\displaystyle D_{p}\leq D_{q}$ for all $\displaystyle i\in N,\ p\in N_{i}^{+},\ q\in N_{i}^{-}$ (7)

$\displaystyle x_{ij}^{k}=1\Rightarrow D_{i}+t_{ij}\leq D_{j}$ for all $\displaystyle i,j\in V\cup W,\ k\in M$ (8)

$\displaystyle y_{k^{+}}=0$ for all $\displaystyle k\in M$ (9)

$\displaystyle y_{l}\leq\sum_{k\in M}Q_{k}z_{i}^{k}$ for all $\displaystyle i\in N,\ l\in N_{i}^{+}\cup N_{i}^{-}$ (10)

$\displaystyle D_{i}\geq 0$ for all $\displaystyle i\in V\cup W$ (11)

$\displaystyle y_{i}\geq 0$ for all $\displaystyle i\in V\cup W$ (12)

Constraint (2) ensures that each transportation request is assigned to exactly one vehicle.
By constraint (3), a vehicle only enters or leaves a location $l$ if it is an origin or a destination of a transportation request assigned to that vehicle. Constraints (4) and (5) make sure that each vehicle starts and ends at the correct place. Constraints (6), (7), (8) and (11) together form the precedence constraints, while constraints (9), (10) and (12) together form the capacity constraints. This mathematical program models the PDP. For a PDP-T model, the literature introduces transfer points, and the extended model is obtained by iteratively adding constraints that involve these transfer points. The process of inserting the transfers is described in Section 4.

## 4 Solution Algorithms

In this section we present solution algorithms for the PDP-T. In general, we first solve the PDP using different methods (described in this section) and then add the transshipment option to the PDP solution; the transshipment option always increases the computational time but improves the solution. In subsection 4.1 we present two constructive methods: a greedy approach and the multistart heuristic of Takoudjou et al. (2012). In subsection 4.2 we present a randomized method, GRASP, which is applied on top of the constructive methods mentioned before. In subsection 4.3 we present three local search methods: variable neighborhood descent, adaptive large neighborhood search and simulated annealing. Finally, in subsection 4.4 we present two hybrid methods that combine local search methods and genetic algorithms.

### 4.1 Constructive Methods

#### 4.1.1 Greedy approach

We implemented the greedy approach described in Coltin et al.
(2014), which iterates through every item and vehicle and inserts the best pickup and delivery actions into the schedule, always choosing the option that increases the cost the least; we repeat the process until no unassigned items remain. We then add the transfer option with another greedy idea, inspired by the Clarke and Wright savings algorithm: this method computes the savings of each transfer and, if it improves the solution, modifies the solution by adding this new node.

#### Clarke and Wright Algorithm

The Clarke and Wright savings algorithm is one of the best-known heuristics for the VRP. It applies to problems for which the number of vehicles is not fixed (it is a decision variable), and it works equally well for both directed and undirected problems. When two routes $(0,...,i,0)$ and $(0,j,...,0)$ can feasibly be merged into a single route $(0,...,i,j,...,0)$, a distance saving $s_{ij}=c_{i0}+c_{0j}-c_{ij}$ is generated. The algorithm works as follows:

Step 1. Savings computation

* • Compute the savings $s_{ij}=c_{i0}+c_{0j}-c_{ij}$ for $i,j=1,...,n$ and $i\neq j$.
* • Create $n$ vehicle routes $(0,i,0)$ for $i=1,...,n$.
* • Order the savings in a non-increasing fashion.

Step 2. Route extension (sequential version)

* • Consider in turn each route $(0,i,...,j,0)$.
* • Determine the first saving $s_{ki}$ or $s_{jl}$ that can feasibly be used to merge the current route with another route ending with $(k,0)$ or starting with $(0,l)$.
* • Implement the merge and repeat this operation on the current route.
* • If no feasible merge exists, consider the next route and reapply the same operations.
* • Stop when no route merge is feasible.

#### 4.1.2 Hybrid Multistart Heuristic

The PDPT is a hard combinatorial optimization problem. The heuristics for this problem must avoid being trapped in local optima. To overcome local optimality, a diversification procedure is needed.
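The savings computation of Step 1 can be sketched in Python as follows; `dist` is a hypothetical symmetric distance matrix with the depot at index 0, and capacity/feasibility checks of the merge step are omitted:

```python
def clarke_wright_savings(dist, n):
    """Compute the savings s_ij = c_i0 + c_0j - c_ij for all customer
    pairs and return them ordered in non-increasing fashion (Step 1)."""
    savings = []
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i != j:
                s = dist[i][0] + dist[0][j] - dist[i][j]
                savings.append((s, i, j))
    savings.sort(reverse=True)  # largest saving first
    return savings

# Two customers; initially one route (0, i, 0) per customer.
dist = [[0, 4, 5],
        [4, 0, 2],
        [5, 2, 0]]
ordered = clarke_wright_savings(dist, 2)
```

The merge step then walks `ordered` and joins routes whose end/start customers realize the saving, as described in Step 2.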
So we use the transshipment heuristic from the hybrid heuristic for the PDP found in Takoudjou et al. (2012) to obtain a solution to the PDP-T. At each iteration, the transshipment heuristic is used to improve the PDP solution and obtain a solution to the PDP-T. From the current PDP solution, each request ($i$, $i+n$) $\in R$ is removed from the solution. Then, ($i$, $i+n$) is split into two different requests ($i$, $e_{t}$) and ($s_{t}$, $i+n$), where $e_{t}$ and $s_{t}$ are the inbound/outbound doors of a transshipment point $t\in T$. The best reinsertion cost of ($i$, $i+n$) in the solution is computed; the best cost of inserting ($i$, $e_{t}$) followed by ($s_{t}$, $i+n$) is computed; and the cost of inserting ($s_{t}$, $i+n$) followed by ($i$, $e_{t}$) at their best positions is computed. Among the three possibilities, the insertion or reinsertion offering the minimum cost is performed. As described in Takoudjou et al. (2012), this hybrid approach merges many ideas. The solving principle is based on the following steps:

* • An initial PDP solution (Sol) is calculated by gradually inserting the requests into routes associated with vehicles. The initial solution consists of a single route per vehicle containing only the starting point and the ending point of that route. Requests are then successively introduced into the tour of the vehicle offering the minimal increase in transport cost.
* • Transshipment is then used to "destroy" and "repair" the PDP solution to obtain a better (PDPT) solution. At each iteration, the transshipment heuristic is used to improve the PDP solution and obtain a solution to the PDPT. From the current PDP solution, each request is removed from the solution. Then, the request is split into two different requests: the first goes from the start point to the best transfer point, and the other goes from the transfer point to the end point.
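The choice among the three insertion possibilities can be sketched as follows; `insert_cost` is a hypothetical callback returning the best insertion cost of a (pickup, delivery) pair into the current solution, so only the comparison logic of the transshipment heuristic is illustrated here:

```python
def choose_insertion(insert_cost, request, transfer_points):
    """For a removed request (i, j), compare direct reinsertion against
    splitting the request through each transfer point t into (i, e_t)
    and (s_t, j), and return the cheapest option with its cost."""
    i, j = request
    best = ("direct", insert_cost(i, j))
    for t in transfer_points:
        e_t, s_t = t  # inbound / outbound doors of the transfer point
        split = insert_cost(i, e_t) + insert_cost(s_t, j)
        if split < best[1]:
            best = (("via", t), split)
    return best

# Toy cost table: routing request (1, 5) through the transfer point is cheaper.
costs = {(1, 5): 10.0, (1, "e"): 3.0, ("s", 5): 4.0}
pick = choose_insertion(lambda a, b: costs.get((a, b), 99.0), (1, 5), [("e", "s")])
```

In the full heuristic this comparison is repeated for every request, and the minimum-cost option is applied to the solution.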
### 4.2 Random Search Method

#### 4.2.1 GRASP

GRASP is a multi-start metaheuristic for combinatorial optimization problems, in which each iteration consists of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phase. The best overall solution is kept as the result. Here we will only use one phase, the construction of the solution (Resende & Ribeiro, 2010).

### 4.3 Local Search Methods

#### 4.3.1 Variable Neighborhood Descent

The VND is used to explore the neighborhood of the current solution and is based on three operators defined below: ADR, RNR and SWR. With these three operators we can explore the solution space more intensively. An operator is a move that transforms one solution into another with small modifications. In best-improvement operators the whole neighborhood is analyzed and the best solution is kept; in first-improvement local search the first better solution found in the neighborhood is kept (Takoudjou et al., 2012).

##### SWR (Swap requests between routes)

In the SWR neighborhood, two requests belonging to two different routes are exchanged, provided that all PDP constraints are satisfied.

##### RNR (Remove and insert a request)

In the RNR neighborhood, provided that all PDP constraints are satisfied, a request belonging to one route is removed and inserted into another route.

##### ADR (Advance or delay a request)

In the ADR neighborhood, a request belonging to a given route is advanced or delayed within the same route if all PDP constraints are satisfied.

#### 4.3.2 Adaptive Large Neighborhood Search

The general idea is to remove some requests from routes in the current solution and then reinsert them elsewhere to arrive at a new solution. At each iteration, one removal heuristic and one insertion heuristic are randomly selected based on their designated weights.
Letting the set of removal heuristics be $\{rh_{1},...,rh_{nr}\}$ and their corresponding weights be $\{rw_{1},...,rw_{nr}\}$, $rh_{i}$ is chosen with probability $\frac{rw_{i}}{\sum_{j=1}^{nr}rw_{j}}$, where $nr$ is the number of options. Similarly, for the insertion heuristics $\{ih_{1},...,ih_{ni}\}$ with corresponding weights $\{iw_{1},...,iw_{ni}\}$, $ih_{i}$ is selected with probability $\frac{iw_{i}}{\sum_{j=1}^{ni}iw_{j}}$, where $ni$ is the number of options. After each iteration, if the solution obtained is feasible and better than the incumbent, the latter is updated. In any case, the process is repeated until a predefined number of iterations $n_{II}^{max}$ is reached (Masson et al., 2012).

#### 4.3.3 Simulated Annealing

Simulated annealing is a metaheuristic that begins at some state and chooses a random "neighbor" of that state. With probability $accept(e,e_{0},t)$ the new state is accepted as the current state, where $e$ is the "energy" of the current state, $e_{0}$ is the energy of the new state, and $t$ is the temperature, i.e., the fraction of iterations of the algorithm currently completed. If the new state is rejected, we remain at the current state and repeat with a new neighbor. The algorithm continues either for a fixed number of iterations or until the energy crosses some threshold, at which point the best solution encountered thus far is returned (Coltin & Veloso, 2014).

### 4.4 Hybrid Methods

In this subsection we present two hybrid methods; both are based on combining a local search method with a genetic algorithm, but they use different hybridizations. The first algorithm uses simulated annealing for population selection and for varying the mutation probability, with the purpose of achieving better convergence together with a diverse population.
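The weighted heuristic selection of subsection 4.3.2 and the annealing acceptance test used in subsection 4.3.3 (and reused by the first hybrid method) can be sketched as follows; the text does not give the exact form of $accept(e,e_{0},t)$, so the Metropolis form below is an assumption:

```python
import math
import random

def select_heuristic(heuristics, weights, rng=random.random):
    """Roulette-wheel selection: h_i is chosen with probability
    w_i / sum(w), matching the ALNS selection rule."""
    r = rng() * sum(weights)
    acc = 0.0
    for h, w in zip(heuristics, weights):
        acc += w
        if r <= acc:
            return h
    return heuristics[-1]

def accept(e, e0, t, rng=random.random):
    """Metropolis-style test (assumed form): always accept improvements
    (e0 <= e), otherwise accept with probability exp(-(e0 - e) / t)."""
    if e0 <= e:
        return True
    return rng() < math.exp(-(e0 - e) / max(t, 1e-9))
```

Injecting `rng` makes both rules deterministic in tests while keeping `random.random` as the default source of randomness.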
The second hybrid method uses a tabu list to avoid mating chromosomes that have a low Minkowski distance, in order to keep diversity; it also uses this information to change the mutation probability. We show in precise detail how the genetic algorithm was built: in subsection 4.4.1 we show the encoding of the problem; in subsection 4.4.2 we describe the initial population, which we obtained in two different ways, a totally randomized initial solution and a constructive solution; in subsection 4.4.3 we describe the selection of the population to be mated; in subsection 4.4.4 we describe the crossover; in subsection 4.4.5 we describe the mutation for both the first and the second method; in subsection 4.4.6 we explain how the new population is chosen for each method; and finally in subsection 4.4.7 we explain the stop criterion.

#### 4.4.1 Encoding

Given the structure of the problem, solutions can be represented as a set of $N$ paths, one for each vehicle. Since it is important to account for transfers as well as possible node recurrences, we chose to represent the solution as a directed multigraph, in which the vertices represent the visited nodes and there is a connection between two vertices if one is the successor of the other; the direction follows the route, and each edge multiset is reserved for one route. To represent each multigraph, a multidimensional matrix was used, whose entry $(i,j,k)$ holds the distance from location $i$ to location $j$ if $j$ is the successor of $i$ in route $k$.
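The matrix encoding above can be sketched as follows; routes are assumed to be given as lists of location indices, and a zero entry means "$j$ is not the successor of $i$ in route $k$":

```python
def encode(routes, dist, n_locations):
    """Build the 3D encoding E where E[i][j][k] = dist[i][j] if
    location j follows location i in route k, and 0 otherwise."""
    E = [[[0.0] * len(routes) for _ in range(n_locations)]
         for _ in range(n_locations)]
    for k, route in enumerate(routes):
        for i, j in zip(route, route[1:]):
            E[i][j][k] = dist[i][j]
    return E

dist = [[0, 1, 2],
        [1, 0, 3],
        [2, 3, 0]]
E = encode([[0, 2, 1]], dist, 3)  # one vehicle visiting 0 -> 2 -> 1
```

The space complexity $O(N|path|^{2})$ stated below follows directly from the two location dimensions of this matrix.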
For the genetic algorithm, a derivative of the logistic function was applied to each element of the solution matrix (fixing the value at $0$ to $0$), meaning that the following function was applied element-wise: $f(x)=\begin{cases}\frac{e^{-x}}{(1+e^{-x})^{2}}&\text{if }x>0\\ 0&\text{otherwise}\end{cases}$ This function was applied with the purpose of giving more importance to short routes, in order to preserve this property. The space complexity of this encoding is $O(N|path|^{2})$.

#### 4.4.2 Initial Population

For the computational experiments we worked with two distinct initial population methods: a totally randomized approach and a set of distinct greedy randomized solutions.

#### Totally Randomized Initial Solution

A set of 256 random multigraphs was generated using a modification of the Erdős-Rényi model as described in (Godehardt, 1993). We chose to generate random graphs instead of random matrices because the solutions generated this way are feasible with higher probability, and they also tend to be better.

#### Constructive Solution

We construct a set of 256 distinct greedy randomized solutions using the algorithm described in subsection 4.2.

#### 4.4.3 Selection

For the selection step we use Baker's stochastic universal sampling (SUS) (Baker, 1987); this algorithm uses a single random value to sample all the solutions by choosing them at evenly spaced intervals. This gives weaker members of the population (according to their fitness) a chance to be chosen and thus reduces the unfair nature of fitness-proportional selection methods; the experimental work by Hancock (Hancock, 1994) clearly demonstrates the superiority of this approach.
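Baker's SUS can be sketched as follows; fitness values are assumed to be nonnegative with larger meaning better (for this minimization problem, route costs would first be transformed into such fitness values, e.g. via the logistic-derivative weighting above):

```python
import random

def sus_select(fitness, n, rng=random.random):
    """Stochastic universal sampling: one random offset, then n evenly
    spaced pointers over the cumulative fitness wheel. Returns the
    indices of the selected individuals."""
    total = sum(fitness)
    step = total / n
    start = rng() * step
    pointers = [start + k * step for k in range(n)]
    selected, acc, i = [], fitness[0], 0
    for p in pointers:
        while p > acc:
            i += 1
            acc += fitness[i]
        selected.append(i)
    return selected
```

Because all pointers come from a single random draw, the number of copies each individual receives can deviate from its expectation by at most one, which is the fairness property Baker's method is known for.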
#### Fitness Function

For a solution matrix $K$, the fitness is the sum of the total distances of each of its routes; in case the solution is not feasible (it does not satisfy all the requests), a value of $\infty$ is assigned.

#### 4.4.4 Crossover

To perform the crossover, the previously selected solutions were paired and a new matrix was generated whose entries are uniform random numbers. The resulting offspring matrix takes the gene of its first parent at position $(i,j,k)$ if the random matrix holds a number lower than $0.5$ in the corresponding position; otherwise it takes the gene of the other parent.

#### 4.4.5 Mutation

##### First method

As mutation operator we use a shift operator that swaps the values of two elements of the matrix with a probability that we varied according to the diversity of the population (measured in terms of the fitness variance) and the current solution temperature (described in the next step).

##### Second method

As mutation operator we also use a shift operator, but with a fixed probability $\alpha=0.05$.

#### 4.4.6 New population

##### First method

For the update of the population, we perform a probabilistic replacement test: solutions are accepted into the new generation using a Boltzmann-type calculation. At each generation the temperature is decreased, thereby decreasing the probability of accepting higher-energy individuals.

##### Second method

For the population update, we compute the Minkowski distance between population members and use a tabu list to avoid nearby solutions, in order to keep population diversity.

#### 4.4.7 Stop criterion

As stop criterion we compare the variance of the solutions over the last $4$ generations, with the purpose of ending the algorithm when it reaches a stationary state.
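The uniform crossover of subsection 4.4.4 and the shift mutation of subsection 4.4.5 can be sketched as follows, applied for simplicity to flattened genomes rather than 3D matrices:

```python
import random

def uniform_crossover(parent_a, parent_b, rng=random.random):
    """Take each gene from parent A when a uniform draw is below 0.5,
    otherwise from parent B (element-wise, as described in 4.4.4)."""
    return [a if rng() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def shift_mutation(genome, p, rng=random.random, pick=None):
    """With probability p, swap the values at two randomly chosen
    positions; p is either adaptive (first method) or fixed at 0.05
    (second method)."""
    pick = pick or (lambda n: random.randrange(n))
    genome = list(genome)
    if rng() < p:
        i, j = pick(len(genome)), pick(len(genome))
        genome[i], genome[j] = genome[j], genome[i]
    return genome
```

As in the earlier sketches, `rng` and `pick` are injected so the operators are deterministic under test while defaulting to `random` in normal use.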
## 5 Computational Experimentation

In this section we show the computational results given by the algorithms described in Section 4 for the PDP-T. In subsection 5.1 we present tables with the instance size, the number of vehicles, the best solutions found and the computational times, and then compare them to see which heuristic performs best. All algorithms were coded in Python 3.5.3 running on an Intel Xeon E5-1620V4 64-bit processor with 32 gigabytes of RAM and Ubuntu 17.10. We used PDP problem instances related to the well-known Solomon instances. The datasets are available at http://www.sintef.no/Projectweb/TOP/PDPTW/ . For each instance we introduce a random number of transfer points at random nodes to turn the original PDP instance into a PDP-T instance. Each data set contains $x$-$y$ coordinates; hence the lengths of the arcs in the network are $L_{2}$ (Euclidean) distances. For the locations of the vehicle depots, the origin and final depots of each vehicle were randomly generated, scattered over the region formed by the nodes. The number of vehicles is also randomly generated, and finally we ignore the time-window restrictions because they are not considered in the model.

### 5.1 Computational results

In this subsection we show the results of all the heuristics mentioned before and compare them.

#### 5.1.1 Constructive Methods

In this subsection we show the tables of computational results for the constructive methods: in Table 2 we present the results for the greedy approach and in Table 3 the results for the MULTISTART method.
Table 2: Greedy approach

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 2001.31 | 28.23
lc2_2_3 | 200 | 6 | 4920.43 | 50.48
lc2_4_5s | 400 | 12 | 7235.92 | 123.43
LC2_8_5 | 800 | 25 | 226748.20 | 240.12

Table 3: MULTISTART

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 1825.83 | 40.81
lc2_2_3 | 200 | 6 | 4378.92 | 76.21
lc2_4_5s | 400 | 12 | 6891.06 | 173.79
LC2_8_5 | 800 | 25 | 201234.73 | 301.83

As we can see, the MULTISTART method gives better solutions for the PDP-T, but at the same time it requires more computational time.

#### 5.1.2 Random Search Method

In this subsection the results for GRASP, the random search method, are shown; as mentioned in Section 4, we apply GRASP on top of the constructive methods. In Table 4 the greedy approach with GRASP is shown, and in Table 5 we have the results for MULTISTART with GRASP.

Table 4: Greedy approach (GRASP)

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 1713.01 | 54.12
lc2_2_3 | 200 | 6 | 4001.12 | 80.15
lc2_4_5s | 400 | 12 | 6481.10 | 196.92
LC2_8_5 | 800 | 25 | 198917.87 | 320.16

Table 5: MULTISTART (GRASP)

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 1521.31 | 58.63
lc2_2_3 | 200 | 6 | 3832.12 | 89.53
lc2_4_5s | 400 | 12 | 6312.18 | 200.12
LC2_8_5 | 800 | 25 | 190143.12 | 354.10

If we compare Tables 4 and 5, we can see that the better solutions are given by MULTISTART (GRASP), but it also takes more computational time than the greedy approach (GRASP). These results are consistent, since they follow the same behavior mentioned in subsection 5.1.1; we can also see that when we add GRASP to our constructive methods the solutions improve, even though they take more computational time.
#### 5.1.3 Local Search Methods

In this subsection we show tables comparing all the results, plus one more called "MIX", which refers to a combination of the VND and ALNS methods; we wanted to see what happened if we merged the two of them. Table 6 presents the ALNS, Table 7 the VND, Table 8 the SA, and Table 9 the mix of VND and ALNS. We also show a 3D plot with the results of all the heuristics, including the initial solution, in Figure 1.

Table 6: ALNS

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 1154.21 | 160.32
lc2_2_3 | 200 | 6 | 3213.31 | 233.21
lc2_4_5s | 400 | 12 | 5421.32 | 312.23
LC2_8_5 | 800 | 25 | 182313.32 | 600.12

Table 7: VND

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 1032.12 | 187.21
lc2_2_3 | 200 | 6 | 3102.23 | 254.12
lc2_4_5s | 400 | 12 | 5250.12 | 352.21
LC2_8_5 | 800 | 25 | 173213.12 | 721.48

Table 8: SA

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 1321.23 | 140.32
lc2_2_3 | 200 | 6 | 3543.34 | 221.34
lc2_4_5s | 400 | 12 | 5732.12 | 301.31
LC2_8_5 | 800 | 25 | 182313.32 | 572.12

Table 9: MIX

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 1023.62 | 350.14
lc2_2_3 | 200 | 6 | 3102.23 | 500.51
lc2_4_5s | 400 | 12 | 5291.17 | 698.92
LC2_8_5 | 800 | 25 | 177843.01 | 1425.92

Figure 1: Comparison of the heuristic methods (3D plot of instance size vs. time vs. objective for ALNS, VND, SA, MIX and the initial solution)

In Figure 1 we can see the time, the objective function and the size of the instance plotted against each other, as a comparison of all methods and the initial solution.
We can see from Tables 6 through 9 and Figure 1 that the best algorithm in terms of computational time is simulated annealing, and the best algorithm in terms of solution quality is the mix of ALNS and VND. Whereas until now our best results were given by the random search algorithms, comparing them with the local search algorithms we can say that the best results are given by the local search algorithms, specifically by the mix of ALNS and VND. For this mix we see a very interesting result: even though it was the best of all, we would have expected a much better result, since the difference from the initial solution is not that wide.

#### 5.1.4 Hybrid Methods

In this subsection we show the results given by the algorithms mentioned in subsection 4.4; it is important to mention that the creation of the initial population and the associated individual operations, such as encoding, were computed in parallel. In Tables 10 and 11 we show the results for the genetic algorithm with simulated annealing, with both the random and the constructive initial solutions, and in Tables 12 and 13 we show the results for the genetic algorithm with tabu, with both the random and the constructive initial solutions.
Table 10: Genetic algorithm & SA, with random initial solution

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 720.60 | 789.12
lc2_2_3 | 200 | 6 | 2454.12 | 1325.21
lc2_4_5s | 400 | 12 | 4843.93 | 2012.32
LC2_8_5 | 800 | 25 | 14794.27 | 3024.15

Table 11: Genetic algorithm & SA, with constructive solution

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 673.12 | 1254.65
lc2_2_3 | 200 | 6 | 2021.23 | 1743.54
lc2_4_5s | 400 | 12 | 4654.63 | 2433.33
LC2_8_5 | 800 | 25 | 13647.12 | 3502.12

Table 12: Genetic algorithm & Tabu, with random initial solution

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 690.43 | 823.43
lc2_2_3 | 200 | 6 | 2144.33 | 1513.43
lc2_4_5s | 400 | 12 | 4136.38 | 2243.43
LC2_8_5 | 800 | 25 | 13474.76 | 3324.69

Table 13: Genetic algorithm & Tabu, with constructive solution

Instance | Instance Size | Number of vehicles | Optimal Solution | Time
---|---|---|---|---
lc204 | 100 | 3 | 610.60 | 1242.43
lc2_2_3 | 200 | 6 | 1944.33 | 2091.35
lc2_4_5s | 400 | 12 | 4070.48 | 2695.49
LC2_8_5 | 800 | 25 | 12689.23 | 3820.94

When we compare the results in Tables 10 through 13, we can see that the constructive solution gives better results than the random initial solution for both SA and tabu, although it also takes more computational time. In general terms, the best method in terms of solution quality is the genetic algorithm with tabu and the constructive solution. The best method in terms of computational time is the genetic algorithm with simulated annealing and a random initial solution.

## 6 Conclusions

* • Among the hybrid methods, the best solution quality is given by the genetic algorithm with tabu and the constructive solution.
* • Among the constructive methods, the best computational time is given by the greedy approach.
* • An interesting aspect of VND is that even though its solutions are poor, it has a good computational time.
* • The mix of ALNS and VND is a kind of hybrid, but it is slow and gives poor solutions, whereas the hybrids with the genetic algorithm give better results because they use the hybridization to cover each other's weaknesses.
* • We can see from all the computational times that when the instances are small the times of the heuristics are very similar, but when the instances are bigger the times differ considerably.

## 7 Future Work

An interesting matter that could be addressed in the future is the lower bound of this problem: none of the articles we have reviewed raises the topic, and it is important because without it we cannot tell how close we are getting to the best possible solution; we can see the solutions improve every time, but we cannot see whether we are close to our goal. Also, it would be interesting to run the algorithms several times while raising the number of transfer points, with the purpose of seeing how this affects the computational time and how much it diminishes the objective function.

## References

* Cat, (n.d.)
* Baker, (1987) Baker, James E. 1987. Genetic Algorithms and their Applications (Hard). Lawrence Erlbaum.
* Bouros et al., (2011) Bouros, Panagiotis, Sacharidis, Dimitris, Dalamagas, Theodore, & Sellis, Timos. 2011. Dynamic Pickup and Delivery with Transfers.
* Coltin & Veloso, (2014) Coltin, Brian, & Veloso, Manuela. 2014. Online pickup and delivery planning with transfers for mobile robots. Proceedings - IEEE International Conference on Robotics and Automation, 5786–5791.
* Coltin et al., (2014) Coltin, Brian, Choset, Howie, & Smith, Stephen. 2014. Multi-agent Pickup and Delivery Planning with Transfers.
* Cortés et al., (2010) Cortés, Cristián E., Matamala, Martín, & Contardo, Claudio. 2010. The pickup and delivery problem with transfers: Formulation and a branch-and-cut solution method.
European Journal of Operational Research, 200(3), 711–724.
* Godehardt, (1993) Godehardt, Erhard A.J. 1993. Probability Models for Random Multigraphs with Applications in Cluster Analysis. Pages 93–108 of: Quo Vadis, Graph Theory? - A Source Book for Challenges and Directions. Elsevier.
* Hancock, (1994) Hancock, Peter J. B. 1994. An empirical comparison of selection methods in evolutionary algorithms. Pages 80–94 of: Evolutionary Computing. Springer Berlin Heidelberg.
* Li et al., (2016) Li, Yuan, Chen, Haoxun, & Prins, Christian. 2016. Adaptive large neighborhood search for the pickup and delivery problem with time windows, profits, and reserved requests. European Journal of Operational Research, 252(1), 27–38.
* Masson et al., (2010) Masson, Renaud, Lehuede, Fabien, Peton, Olivier, & Ropke, Stefan. 2010. The Pickup and Delivery Problem with Shuttle routes. 1–4.
* Masson et al., (2012) Masson, Renaud, Lehuédé, Fabien, & Péton, Olivier. 2012. An Adaptive Large Neighborhood Search Heuristic for the Pickup and Delivery Problem with Time Windows. Transportation Science, 47(3), 344–355.
* Mladenović & Hansen, (1997) Mladenović, N., & Hansen, P. 1997. Variable neighborhood search. Computers & Operations Research, 24(11), 1097–1100.
* Nanry & Wesley Barnes, (2000) Nanry, William P., & Wesley Barnes, J. 2000. Solving the pickup and delivery problem with time windows using reactive tabu search. Transportation Research Part B: Methodological, 34(2), 107–121.
* Parragh et al., (2008a) Parragh, Sophie N., Doerner, Karl F., & Hartl, Richard F. 2008a. A survey on pickup and delivery problems. Journal für Betriebswirtschaft, 58(1), 21–51.
* Parragh et al., (2008b) Parragh, Sophie N., Doerner, Karl F., & Hartl, Richard F. 2008b. A survey on pickup and delivery problems. Part I: Transportation between customers and depot. Journal für Betriebswirtschaft, 58(2), 81–117.
* Polat et al., (2015) Polat, Olcay, Kalayci, Can B., Kulak, Osman, & Günther, Hans-Otto. 2015.
A perturbation based variable neighborhood search heuristic for solving the Vehicle Routing Problem with Simultaneous Pickup and Delivery with Time Limit. European Journal of Operational Research, 242(2), 369–382.
* Qu & Bard, (2012) Qu, Yuan, & Bard, Jonathan F. 2012. A GRASP with adaptive large neighborhood search for pickup and delivery problems with transshipment. Computers and Operations Research, 39(10), 2439–2456.
* Resende & Ribeiro, (2010) Resende, Mauricio G.C., & Ribeiro, Celso C. 2010. Greedy Randomized Adaptive Search Procedures: Advances, Hybridizations, and Applications. Pages 283–319 of: Handbook of Metaheuristics. Springer US.
* Takoudjou et al., (2012) Takoudjou, Rodrigue Tchapnga, Deschamps, Jean-Christophe, & Dupas, Rémy. 2012. A hybrid multistart heuristic for the pickup and delivery problem with and without transshipment. 9th International Conference on Modeling, Optimization & SIMulation.
# MM-GNN: Mix-Moment Graph Neural Network towards Modeling Neighborhood Feature Distribution

Wendong Bi (Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China) <EMAIL_ADDRESS>, Lun Du (Microsoft Research Asia, Beijing, China) <EMAIL_ADDRESS>, Qiang Fu (Microsoft Research Asia, Beijing, China) <EMAIL_ADDRESS>, Yanlin Wang (Microsoft Research Asia, Beijing, China) <EMAIL_ADDRESS>, Shi Han (Microsoft Research Asia, Beijing, China) <EMAIL_ADDRESS> and Dongmei Zhang (Microsoft Research Asia, Beijing, China) <EMAIL_ADDRESS> (2023)

###### Abstract.

Graph Neural Networks (GNNs) have shown expressive performance on graph representation learning by aggregating information from neighbors. Recently, some studies have discussed the importance of modeling neighborhood distribution on the graph. However, most existing GNNs aggregate neighbors' features through a single statistic (e.g., mean, max, sum), which loses information about the neighbors' feature distribution and therefore degrades model performance. In this paper, inspired by the method of moments in statistical theory, we propose to model the neighbors' feature distribution with multi-order moments. We design a novel GNN model, namely Mix-Moment Graph Neural Network (MM-GNN), which includes a Multi-order Moment Embedding (MME) module and an Element-wise Attention-based Moment Adaptor module. MM-GNN first calculates the multi-order moments of the neighbors of each node as signatures, and then uses the Element-wise Attention-based Moment Adaptor to assign larger weights to important moments for each node and update node representations. We conduct extensive experiments on 15 real-world graphs (including social networks, citation networks and web-page networks etc.)
to evaluate our model, and the results demonstrate the superiority of MM-GNN over existing state-of-the-art models.

Keywords: Graph neural networks, Graph representation learning, Feature distribution, Moment, Social networks

WSDM ’23: Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, February 27-March 3, 2023, Singapore. doi: 10.1145/3539597.3570457. ISBN: 978-1-4503-9407-9/23/02. CCS: Computing methodologies, Neural networks; Information systems, Social networks.

## 1\. Introduction Graph data is ubiquitous in the real world, such as social networks, citation networks, etc. Graph Neural Networks (GNNs) have shown expressive power on graph representation learning in recent years. Most existing GNNs follow the neighborhood aggregation scheme, where each node aggregates features of its neighbors to update its own representation. Many GNNs (Kipf and Welling, 2016; Veličković et al., 2017; Derr et al., 2020; Yang et al., 2016) with different aggregators have been proposed to solve various downstream tasks, which can be divided into graph-level, edge-level, and node-level tasks, such as graph classification (Xu et al., 2018a; Bai et al., 2019), link prediction (Zhang and Chen, 2018; Zhou et al., 2018; Guo et al., 2022), and node classification (Dai et al., 2022; Lin et al., 2020, 2014; Fettal et al., 2022; Jin et al., 2021; Zhao et al., 2021). However, the aggregation schemes of existing GNNs usually use a single statistic (e.g., mean, max, sum). For example, GCN (Kipf and Welling, 2016) uses a mean aggregator, GraphSAGE (Hamilton et al., 2017) uses a max aggregator, and GIN uses a sum aggregator. Even some state-of-the-art GNNs (e.g.,
GCNII (Chen et al., 2020), DAGNN (Liu et al., 2020)) still use single statistics, without retaining the complete information of the neighbors’ feature distribution. Recently, researchers (Xu et al., 2018a; Du et al., 2021b; Ma et al., 2022; Luo et al., 2022) have begun to study the modeling of neighborhood distribution and have discussed its importance for graph data from various perspectives. However, these studies of the neighbors’ feature distribution are limited to lower-order characteristics and do not consider higher-order distribution characteristics. Considering that a single statistic cannot represent a complex distribution with more than one degree of freedom, it is intuitive that we should introduce more higher-order statistics. To demonstrate the prominent effect of higher-order distribution characteristics on distinguishing different nodes, we present an example from the Facebook Social Network (Traud et al., 2012) in Fig. 1. Considering the graph composed of student/teacher nodes with class year as attributes, Fig. 1 (b) shows the mean and variance of their neighbors’ features. We cannot distinguish the labels of node A and node B by the mean of their neighbors’ features. However, we can succeed in distinguishing the two nodes if we consider the variance of their neighbors’ features. This is intuitive because teachers are more likely to know students from different class years, so the variance over a teacher’s neighbors is larger. This motivates us to consider higher-order characteristics of the neighbors’ feature distribution for graph data, and we further demonstrate the critical role of higher-order characteristics through analysis of real data in Section 3. Figure 1. An example from the Facebook social network. (a) is an example of a Facebook college social network, where the student node A and the teacher node B have different neighbor sets.
(b) shows the feature distributions of the neighbors of A and B by their mean and variance, where the mean fails to distinguish the two nodes. Inspired by the method of moments (Bowman and Shenton, 2004), which is used to represent unknown probability distributions, we introduce the method of moments to design a novel aggregator for GNNs. Specifically, we propose to use adapted multi-order moments to represent the neighbors’ feature distribution and further embed the distribution signals into the message passing mechanism of GNNs. We then design a novel Mix-Moment Graph Neural Network (MM-GNN) model, which includes two major components: Multi-order Moments Embedding (MME) and an Attention-based Moment Adaptor (AMA). MM-GNN first computes multi-order moments for the neighbors of each node as their signatures. Based on the observation that different orders of moments may have varying impacts for each node (Sec. 3), an Element-wise Attention-based Moment Adaptor is designed to adaptively select important moment signatures for specific nodes. As it is impossible to compute infinite orders of moments to represent the distribution perfectly, we only compute finite-order moments in parallel at each graph convolution layer. Finally, we conduct extensive experiments to demonstrate the superiority of MM-GNN over other state-of-the-art methods on 15 real-world graphs, including social networks, citation networks, webpage networks and image networks. The improved performance on all types of graphs demonstrates the effectiveness of our method. The main contributions of this paper are summarized as follows: 1. (1) We demonstrate that higher-order statistics are needed to model the neighbors’ feature distribution on graphs, and we introduce the method of moments to GNNs from the perspective of modeling neighborhood feature distributions. 2.
(2) We design a novel MM-GNN model with an aggregator based on multi-order moments and further design an Element-wise Attention-based Moment Adaptor to adaptively adjust the weights of multi-order moments for each node. 3. (3) We conduct extensive experiments on 15 real-world graphs (i.e., social networks, citation networks, webpage networks) to evaluate our method. The results show that MM-GNN gains consistent improvements on each type of graph. ## 2\. Preliminaries In this section, we give the definitions of some important terminologies and concepts appearing in this paper. ### 2.1. Graph Neural Networks (GNNs) GNNs aim to learn representations for nodes on the graph. Let $G=(V,E)$ denote a graph, where $V=\\{v_{1},\cdots,v_{n}\\}$ is the node set and $n=|V|$ is the number of nodes. Let $X\in\mathbb{R}^{n\times d}$ denote the feature matrix, where the $i$-th row of $X$, denoted as $x_{i}$, is the $d$-dimensional feature vector of node $v_{i}$. $E$ is the edge set of $G$. Typically, GNN-based models follow a neighborhood aggregation framework, where a node representation is updated by aggregating information from its first- or multi-order neighboring nodes. Let $h_{i}^{(l)}$ denote the output vector of node $v_{i}$ at the $l$-th hidden layer, with $h_{i}^{(0)}=x_{i}$. The update function of the $l$-th hidden layer is: (1) $h_{i}^{(l)}=\text{COMBINE}\left(h_{i}^{(l-1)},\text{AGGREGATE}\big{(}\\{h_{j}^{(l-1)}|v_{j}\in\mathcal{N}(v_{i})\\}\big{)}\right),$ where $\mathcal{N}(v_{i})$ is the set of neighbors of $v_{i}$. The AGGREGATE function gathers information from neighbors, and the COMBINE function fuses the information from the neighbors and the central node. For graph-level tasks, an additional READOUT function is required. ### 2.2. Method of Moments #### 2.2.1. Definitions about moments: We give some basic definitions of statistical moments (Papoulis and Pillai, 2002; Bowman and Shenton, 2004) in this section.
First, the definition of moments is as follows: ###### Definition 2.1 (Moments in Mathematics). For a random variable $Z$, its $k$-th order origin moment is denoted as $\mathbb{E}(Z^{k})$, where $\mathbb{E}(Z^{k})=\int^{\infty}_{-\infty}z^{k}\cdot f(z;\theta_{1},\cdots,\theta_{k})dz$. The $k$-th order central moment of $Z$ is ${\small\mathbb{E}\left(\big{(}Z-\mu_{z}\big{)}^{k}\right)}=\int^{\infty}_{-\infty}(z-\mu_{z})^{k}\cdot f(z;\theta_{1},\cdots,\theta_{k})dz$. The $k$-th order standardized moment of $Z$ is ${\small\mathbb{E}\left(\frac{\big{(}Z-\mu_{z}\big{)}^{k}}{\sigma_{z}^{k}}\right)}=\int^{\infty}_{-\infty}\frac{(z-\mu_{z})^{k}}{\sigma_{z}^{k}}\cdot f(z;\theta_{1},\cdots,\theta_{k})dz$, where $\sigma_{z}$ is the standard deviation of the random variable $Z$. Moments can also be derived from the Moment-Generating Function (MGF), which is an alternative specification of a probability distribution. The MGF (Papoulis and Pillai, 2002) is defined as follows: ###### Definition 2.2 (Moment-Generating Function (MGF)). Let $X$ be a random variable with cumulative distribution function $F_{X}$. The Moment-Generating Function (MGF) of $X$ (or $F_{X}$), denoted by $M_{X}(t)$, is $M_{X}(t)=\mathbb{E}(e^{tX})=\int_{-\infty}^{\infty}e^{tx}dF_{X}(x)$. An essential property is that the MGF uniquely determines the distribution. The Taylor expansion of the MGF is as follows: (2) $M_{X}(t)=\mathbb{E}(e^{tX})=\sum_{k=0}^{\infty}\frac{t^{k}\mathbb{E}(X^{k})}{k!}=\sum_{k=0}^{\infty}\frac{t^{k}m_{k}}{k!}$ The $k$-th term of Eq. 2 contains exactly the $k$-th order origin moment. #### 2.2.2. Relationship between moments and common statistics: To intuitively explain the physical meaning of multi-order moments, we give the relationship between moments and some common distribution statistics. The common distribution statistics can be viewed as special cases of moments. For a random variable $Z$, the expectation of $Z$ is $\mathbb{E}(Z)$, which is the 1-st order origin moment.
The variance of $Z$ is $\mathbb{E}\left(\big{(}Z-\mu_{z}\big{)}^{2}\right)$, which is the 2-nd order central moment. The skewness of $Z$ is $\mathbb{E}\left(\frac{\big{(}Z-\mu_{z}\big{)}^{3}}{\sigma_{z}^{3}}\right)$, which is the 3-rd order standardized moment. ## 3\. Findings: neighbor distribution benefits node classification In this section, we verify the importance of the neighbors’ feature distribution through comprehensive data analysis. We first select three common statistics (i.e., Mean, Variance, Skewness; see Sec. 2.2.2) to represent the distribution of the neighbors’ features. Then we calculate the selected statistics on the graphs of the Facebook social network (Traud et al., 2012), where the node features have practical meanings (e.g., gender, major, year), making the physical meanings of their statistics easy to analyze. More details of the datasets are presented in Sec. 6.1. We further calculate the selected statistics on the neighbors’ features for all nodes and then take an average over all feature dimensions. Finally, we choose two metrics (i.e., Fisher Discrimination Index and Mutual Information) to measure the correlation between the statistics of the neighbor distribution and the node label; one statistic is more important for node classification if it has a stronger correlation with the node label. ### 3.1. Data Analysis on Correlation between Neighbor Distribution and Label (a) Fisher Discrimination Index. (b) Mutual Information. Figure 2. Correlation analysis between the node label and the neighbors’ feature distribution represented by statistics (i.e., Mean, Variance, Skewness) on the Facebook Social Networks. #### 3.1.1.
Analysis by Fisher Discrimination Index Inspired by Fisher’s Linear Discrimination (Fisher, 1936), we use the Fisher discrimination index (the ratio of inter-class variance to intra-class variance) to measure how well different statistics of the neighbors’ feature distribution (e.g., Mean, Variance) distinguish node classes: $\small S_{Fisher}=\frac{\sigma^{2}_{between}}{\sigma^{2}_{within}}=\sum_{\mathcal{C}_{i},\mathcal{C}_{j}\in\mathcal{C}}\frac{\left(\mu_{i}-\mu_{j}\right)^{2}}{\sigma^{2}_{i}+\sigma^{2}_{j}},$ where $S_{Fisher}$ denotes the Fisher Discrimination Index, $\sigma^{2}_{between}$ and $\sigma^{2}_{within}$ denote the inter-class and intra-class variation, $\mathcal{C}$ is the set of classes, $\mathcal{C}_{i}$ and $\mathcal{C}_{j}$ are two different classes, and $\mu_{i}$ and $\sigma^{2}_{i}$ are respectively the mean and variance of samples in class $\mathcal{C}_{i}$. As shown in Fig. 2 (a), we observe that, for different feature dimensions, different statistics of the neighbor distribution contribute differently to distinguishing node labels. For example, the skewness statistic has the best discriminative ability for the feature dimension ”year”, while the variance statistic has the best discriminative ability for the feature dimension ”major”. #### 3.1.2. Analysis by Mutual Information Apart from the Fisher Discrimination Index, we also analyze the Mutual Information between distribution statistics and the node label to measure their correlation. As shown in Fig. 2 (b), the results of mutual information also demonstrate that different statistics have different strengths of correlation with the class label. In summary, all these results show that different orders of moments contribute quite differently to node classification.
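For concreteness, the pairwise form of $S_{Fisher}$ above can be computed directly from per-node statistic values and class labels. The sketch below is our own minimal NumPy illustration (reading the sum as running over unordered class pairs); the toy data is hypothetical, not the paper’s Facebook features:

```python
import itertools
import numpy as np

def fisher_index(values, labels):
    """Fisher Discrimination Index: sum over class pairs of the squared
    mean gap divided by the sum of the two within-class variances."""
    values, labels = np.asarray(values, dtype=float), np.asarray(labels)
    score = 0.0
    for ci, cj in itertools.combinations(np.unique(labels), 2):
        vi, vj = values[labels == ci], values[labels == cj]
        score += (vi.mean() - vj.mean()) ** 2 / (vi.var() + vj.var())
    return score

# Toy check: a statistic that separates the two classes well
# scores higher than one whose class distributions overlap.
rng = np.random.default_rng(0)
labels = np.array([0] * 50 + [1] * 50)
separating = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
overlapping = np.concatenate([rng.normal(0, 1, 50), rng.normal(0.1, 1, 50)])
assert fisher_index(separating, labels) > fisher_index(overlapping, labels)
```

Computed per statistic (mean, variance, skewness of each node’s neighborhood), this is exactly the comparison plotted in Fig. 2 (a).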
However, no single statistic (e.g., mean, max, sum) can reflect the complete neighborhood feature distribution; we thus need to consider multi-order statistics such as variance, skewness and other higher-order distribution statistics. Figure 3. An overview of MM-GNN including the Multi-order Moment Embedding (MME) and Attention-based Moment Adaptor (AMA) modules. $H^{(l)}_{N(v_{i})}$ and $P^{(l)}_{N(v_{i})}$ denote the features and distributions (represented by moments) of neighbors at the $l$-th layer. $h_{i}^{(l)}$ is the representation of node $v_{i}$ at the $l$-th layer. $M_{K}^{(l+1)}$ denotes the $K$-th order moment of neighbors at the $(l+1)$-th layer. $Q_{i}$ and $K_{i}^{k}$ denote query and key vectors at the $l$-th layer. $W_{query}^{(l+1)}$, $W_{key}^{(l+1)}$ and $W_{a}^{(l+1)}$ are learnable transformation matrices at the $(l+1)$-th layer. $a_{k}^{(l+1)}$ is the learned attention matrix for the $k$-th order moment at the $(l+1)$-th layer. ## 4\. Mix-Moment Graph Neural Network In this section, we introduce our Mix-Moment Graph Neural Network (MM-GNN) model. An overview of MM-GNN is given in Fig. 3. Compared with existing GNNs, our MM-GNN introduces the Method of Moments to better represent the neighborhood feature distribution when updating node representations. ### 4.1. Modeling Neighbors’ Feature Distribution via Method of Moments Based on the foundations of the method of moments given in Sec. 2.2, we introduce multi-order moments to represent the feature distribution of neighboring nodes on the graph. Based on the analysis in Sec. 3, we observe that, for the node classification task, the feature distributions of neighbors of nodes belonging to the same class are similar, and vice versa. Unlike other parametric methods, the outstanding power of estimating a distribution with the method of moments is that we do not need to assume an explicit form of the distribution function.
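To make this non-parametric flavor concrete, here is a small NumPy illustration of our own (a hypothetical sample, not from the paper): sample moments recover the mean, variance, and skewness of an empirical distribution without assuming any functional form for it.

```python
import numpy as np

rng = np.random.default_rng(42)
# Samples from an "unknown" distribution (secretly Gaussian with
# mean 2 and standard deviation 3).
z = rng.normal(loc=2.0, scale=3.0, size=200_000)

# Sample moments, following Definition 2.1:
mean = z.mean()                                    # 1st-order origin moment
var = ((z - mean) ** 2).mean()                     # 2nd-order central moment
skew = (((z - mean) / np.sqrt(var)) ** 3).mean()   # 3rd-order standardized moment

# The moment estimates match the true parameters without any
# parametric assumption: mean ~ 2, variance ~ 9, skewness ~ 0.
assert abs(mean - 2.0) < 0.05
assert abs(var - 9.0) < 0.2
assert abs(skew) < 0.05
```

The same computation applied to a node’s neighbor features is what the MME module performs inside each layer.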
In this paper, we propose to use finite-order moments to represent the neighbors’ feature distribution on a graph, and we further present an analysis of the theoretical and empirical deviation of representing the neighbors’ feature distribution with finite-order moments in Sec. 5.2. ### 4.2. Multi-order Moments Embedding (MME) As aforementioned, we model the neighbors’ feature distribution by introducing the Method of Moments into the GNN. The Method of Moments illustrated in Section 2.2 is a statistical method that uses sample moments to estimate the distribution function. Therefore, we use multi-order moments to estimate the feature distribution of neighboring nodes. Moreover, moments of different nodes can be computed in parallel at the aggregation step, and we select both origin moments and central moments to represent the neighborhood feature distribution. Here we present the form of the $k$-th order origin moment for the $(l+1)$-th layer of MM-GNN: (3) $M_{k}^{(l+1)}(v_{i})=\mathbb{E}\big{(}(h_{j}^{(l)})^{k}\big{)}=\left(\frac{1}{|\mathcal{N}(v_{i})|}\sum_{j\in\mathcal{N}(v_{i})}(h_{j}^{(l)})^{k}\right)^{\frac{1}{k}}\cdot W_{k}^{M}$ where $W_{k}^{M}\in\mathbb{R}^{D_{in}\times D_{hidden}}$ is the transformation matrix for the $k$-th order moment signature, $h_{j}^{(l)}\in\mathbb{R}^{D_{in}}$ is the node representation of $v_{j}$ output by the $l$-th layer, and $h_{j}^{(0)}$ is the raw node feature $x_{j}$. Based on Eq. 3, we can observe that the $1$-st origin moment is exactly the expectation, which is equivalent to the MEAN aggregation strategy. Thus previous GNNs with MEAN aggregators are special cases of MM-GNN using the $1$-st order origin moment.
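The origin-moment aggregation of Eq. 3 for a single node can be sketched in a few lines of NumPy. This is our own illustrative sketch, not the paper’s implementation: the dimensions are arbitrary, and the signed $k$-th root is our assumption for handling odd-order moments of real-valued features.

```python
import numpy as np

def origin_moment_signature(H_neigh, k, W_k):
    """k-th order origin-moment signature of a node's neighbors (Eq. 3):
    element-wise mean of k-th powers, re-normalized by a k-th root,
    then linearly transformed by W_k."""
    m = (H_neigh ** k).mean(axis=0)            # E[(h_j)^k], element-wise
    m = np.sign(m) * np.abs(m) ** (1.0 / k)    # signed k-th root (our assumption)
    return m @ W_k

rng = np.random.default_rng(0)
H_neigh = rng.normal(size=(5, 8))   # 5 neighbors, D_in = 8 (illustrative)
W_1 = np.eye(8)                     # identity transform for the sanity check

# With k = 1 the signature reduces to plain MEAN aggregation, matching
# the observation that mean-aggregator GNNs are the K = 1 special case.
assert np.allclose(origin_moment_signature(H_neigh, 1, W_1), H_neigh.mean(axis=0))
```

In the full model one such signature is computed per order $k \le K$, and the AMA module then fuses them.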
Similarly, we can give the form of the $k$-th order central moment for the $(l+1)$-th layer of GNNs: (4) $M_{k}^{(l+1)}(v_{i})=\mathbb{E}\big{(}(h_{j}^{(l)}-\mu_{i}^{(l)})^{k}\big{)}=\left(\frac{1}{|\mathcal{N}(v_{i})|}\sum_{j\in\mathcal{N}(v_{i})}(h_{j}^{(l)}-\mu_{i}^{(l)})^{k}\right)^{\frac{1}{k}}\cdot W_{k}^{M},$ where (5) $\mu_{i}^{(l)}=\frac{1}{|\mathcal{N}(v_{i})|}\sum_{j\in\mathcal{N}(v_{i})}h_{j}^{(l)}$ To keep the same order of magnitude, we normalize moments of different orders by computing the $\frac{1}{k}$ power of the $k$-th order moment vectors in the model. Besides, we set a hyper-parameter for flexible model configuration of using the central moment or origin moment. ### 4.3. Attention-based Moment Adaptor (AMA) With the Multi-order Moments Embedding (MME) module, we get signatures representing moments of different orders for each node. Then, we need to fuse these signatures into one final representation for each node. As illustrated in Section 3, different statistics (corresponding to moments) have different effects on node classification, so we design an Element-wise Attention-based Moment Adaptor (AMA) that can adaptively select the important orders of moments for different nodes and feature dimensions. For each MM-GNN layer, after we get the multi-order moment signatures with the MME module, we use the transformed input node representation (the output of the previous layer) as the query vector for each node, and we use the different moment signatures (outputs of the MME module of the current layer) as key vectors. Then we compute attentions as weights of the different moments for each individual node.
The attention of the $k$-th order moment for $v_{i}$ at the $(l+1)$-th layer, denoted by $a_{k}^{(l+1)}(v_{i})\in\mathbb{R}^{D_{hidden}}$, is: (6) $a_{k}^{(l+1)}(v_{i})=\sigma\left([h_{i}^{(l)}\cdot W_{query}^{(l+1)}||M_{k}^{(l+1)}(v_{i})\cdot W_{key}^{(l+1)}]\cdot W_{a}^{(l+1)}\right)$ where $W_{a}^{(l)}\in\mathbb{R}^{2D_{hidden}\times D_{hidden}}$ is the learnable attention matrix in the $l$-th layer, and $W_{query}^{(l)},W_{key}^{(l)}\in\mathbb{R}^{D_{hidden}\times D_{hidden}}$ are the transformation matrices for the query and key vectors in the $l$-th layer. Different dimensions of the $k$-th order moment signature for node $v_{i}$ thus use different attention weights, which implements an element-wise attention mechanism (the attention for each order of moment is a $D_{hidden}$-dimensional vector rather than a scalar). Then we can get the output representation of node $v_{i}$ at the $(l+1)$-th layer: (7) $h_{i}^{(l+1)}=\sum_{k=1}^{K}a_{k}^{(l+1)}(v_{i})\odot M_{k}^{(l+1)}(v_{i}),\ a_{k}^{(l+1)}(v_{i})\in\mathbb{R}^{1\times D_{hidden}}$ where $\odot$ is the element-wise product and $K$ is the hyper-parameter for the largest ordinal of moments used in MM-GNN. Besides, we add residual connections across layers to further boost the performance of MM-GNN by aggregating the neighborhood feature distributions of multi-hop neighbors. We finally use the output of the last GNN layer as the node representation, compute the Cross Entropy Loss for the node classification task, and back-propagate to optimize the model parameters. Table 1. Information of Facebook social networks. We present the number of nodes and number of edges for each social graph in this table. Each node has 6-dimensional attributes and belongs to 6 possible classes.
Dataset | Northeastern | Caltech | UF | Hamilton | Howard | Simmons | GWU | UGA | Tulane
---|---|---|---|---|---|---|---|---|---
#Nodes | 13882 | 769 | 35123 | 2314 | 1909 | 1518 | 12193 | 24389 | 7752
#Edges | 763868 | 33312 | 2931320 | 192788 | 409700 | 65976 | 939056 | 2348114 | 567836

Table 2. Information of citation/webpage/image networks

Dataset | #Nodes | #Edges | #Features | #Classes
---|---|---|---|---
Cora | 2,708 | 10,556 | 1,433 | 7
CiteSeer | 3,327 | 9,104 | 3,703 | 6
PubMed | 19,717 | 88,648 | 500 | 3
Chameleon | 2277 | 36101 | 2325 | 5
Squirrel | 5201 | 217073 | 2089 | 5
Flickr | 89,250 | 899,756 | 500 | 7

## 5\. Deep Analysis of MM-GNN In this section, we first theoretically analyze the limitations of existing GNNs using a single statistic, which cannot retain the full information of the neighbor distribution. Then we analyze the deviation of estimating the neighbors’ feature distribution with finite-order moments. Finally, we analyze the time complexity of MM-GNN. ### 5.1. Limitations of Existing GNNs In this section, we prove that existing GNNs using a single statistic (e.g., GCN, GraphSAGE) have severe limitations when representing the neighbors’ feature distribution, which reduces the generalization ability of the model. Complexity measure (Neyshabur et al., 2017) is the current mainstream method to measure the generalization ability of a model, and we choose Consistency of Representations (Natekar and Sharma, 2020) (champion of the NIPS 2020 competition on generalization measures).
The complexity measure of the model, denoted as $\Gamma$, is: (8) $\Gamma=\frac{1}{|\mathcal{C}|}\sum_{i=1}^{|\mathcal{C}|}\max_{i\neq j}\frac{\mathcal{S}_{i}+\mathcal{S}_{j}}{\mathcal{M}_{i,j}},$ where $\mathcal{C}$ is the set of classes, $\mathcal{C}_{i},\mathcal{C}_{j}\in\mathcal{C}$ are two different classes, $\mathcal{S}_{i}=\left(\mathbb{E}_{v_{k}\sim\mathcal{C}_{i}}(|h_{k}-\mu_{\mathcal{C}_{i}}|^{p})\right)^{\frac{1}{p}}$ is the intra-class variance of class $\mathcal{C}_{i}$, $\mathcal{M}_{i,j}=||\mu_{\mathcal{C}_{i}}-\mu_{\mathcal{C}_{j}}||_{p}$ is the inter-class variance between $\mathcal{C}_{i}$ and $\mathcal{C}_{j}$, $h_{k}$ is the representation of node $v_{k}$ learned by the model, and $\mu_{\mathcal{C}_{i}}=\mathbb{E}_{v_{k}\sim\mathcal{C}_{i}}(h_{k})$. A higher complexity measure means lower generalization ability. We now demonstrate that GNNs using a single statistic have weak generalization ability. Taking GCN (Kipf and Welling, 2016) as an example, we give the following theorem: ###### Theorem 5.1. Given a graph $G(V,E)$, we denote the nodes belonging to class $\mathcal{C}_{i}$ as $\\{v_{i}|v_{i}\in\mathcal{C}_{i}\\}$, and assume that the neighbors’ feature distribution of nodes in $\mathcal{C}_{i}$ follows the same i.i.d. Gaussian distribution $\mathrm{N}(\bm{\mu}_{i},\bm{\Sigma}_{i})$. Given any two classes $\mathcal{C}_{i}\neq\mathcal{C}_{j}$, if $\bm{\mu}_{i}=\bm{\mu}_{j}$ and $\bm{\Sigma}_{i}\neq\bm{\Sigma}_{j}$, then the complexity measure $\Gamma$ (Eq. 8) of GCN with mean aggregation reaches $+\infty$, and the GCN thus loses its generalization ability. ###### Proof. The update function of GCN with a mean aggregator is: $h^{(l+1)}_{k}=\sum_{j\in\mathcal{N}(v_{k})}\frac{1}{d_{k}}\cdot h_{j}^{(l)}\cdot W^{(l)}=\left(\frac{1}{d_{k}}\cdot\sum_{j\in\mathcal{N}(v_{k})}h_{j}^{(l)}\right)\cdot W^{(l)}$ To simplify the analysis, we omit the activation function.
We can then get the expectation of $h^{(l+1)}_{k}$ for nodes in $\mathcal{C}_{i}$ and $\mathcal{C}_{j}$: (9) $\mathbb{E}_{v_{k}\sim\mathcal{C}_{i}}(h_{k}^{(l+1)})=\bm{\mu}_{i}\cdot W^{(l)}=\mathbb{E}_{v_{k}\sim\mathcal{C}_{j}}(h_{k}^{(l+1)})=\bm{\mu}_{j}\cdot W^{(l)},$ which means the expectation of the learned representations of nodes in $\mathcal{C}_{i}$ equals that of nodes in $\mathcal{C}_{j}$. Then, according to Eq. 9: $\mathcal{M}_{i,j}=||\mathbb{E}_{v_{k}\sim\mathcal{C}_{i}}(h_{k})-\mathbb{E}_{v_{k}\sim\mathcal{C}_{j}}(h_{k})||_{p}=\bm{0}$ Considering that $\mathcal{S}_{i}+\mathcal{S}_{j}>0$, the complexity measure $\Gamma$ in Eq. 8 equals $+\infty$. GCN thus loses generalization ability, and this argument also applies to any other GNN with a mean aggregator. ∎ We can intuitively generalize Theorem 5.1 to all GNNs using single statistics. For example, GraphSAGE (Hamilton et al., 2017) uses the max/mean aggregator and GIN (Xu et al., 2018a) uses the sum aggregator. Most state-of-the-art GNNs (e.g., GCNII (Chen et al., 2020), DAGNN (Liu et al., 2020)) still use a single statistic to represent the neighbors’ feature distribution, and therefore have the same limitations when the neighbors’ feature distribution has more than one degree of freedom. ### 5.2. Deviation Analysis on Finite-order Moments. We give a theoretical guarantee for estimating neighborhood feature distributions on the graph with finite-order moments in the following theorem: ###### Theorem 5.2. Given a graph $G$, we denote the deviation of using the top-$K$ order moments to represent the neighbors’ feature distribution as $e_{K}$; then $e_{K}$ will not be larger than an upper bound: $e_{K}\leq\frac{(\epsilon\cdot c)^{K+1}}{(K+1)!}$, where $c\in\mathbb{R}^{+}$ is a constant and $\epsilon\in\mathbb{R}$ is a small number around 0. A larger $K$ leads to a smaller estimation deviation (converging to 0). ###### Proof. According to Sec.
2.2, the $k$-th order moment can be derived from the $k$-th term of the MGF’s Taylor expansion (see Eq. 2). Therefore, the top-$K$ order moments can be viewed as an estimation of the $K$-th order Taylor expansion of the MGF of the neighbors’ feature distribution. In real cases, the features of samples are usually bounded. We assume that for $\forall x_{i,j}\in X$, we have $|x_{i,j}|\leq c$, where $c\in\mathbb{R}^{+}$ is a constant. Besides, since Eq. 2 is the Taylor expansion around $t=0$, $t$ is a small real number with $|t|\leq\epsilon$, where $\epsilon\in\mathbb{R}$ is a small number around 0. Then the deviation of estimating the neighbors’ feature distribution by the $k$-th order Taylor expansion of the MGF can be approximated by its Lagrange remainder: (10) $R_{k}(x)=\frac{t^{k+1}\mathbb{E}(X^{k+1})}{(k+1)!}\leq\frac{(t\cdot c)^{k+1}}{(k+1)!}\leq\frac{(\epsilon\cdot c)^{k+1}}{(k+1)!}$ Besides, in real scenarios we usually normalize the data, so the upper bound of the estimation error $\frac{(t\cdot c)^{k+1}}{(k+1)!}$ is further reduced. It is obvious that the upper bound of the estimation error converges to zero as the moment ordinal $k$ increases. ∎ ### 5.3. Time Complexity Analysis Here we analyze the time complexity of training MM-GNN. Let $D_{in}$ denote the dimension of the input node features, and $D_{hidden}$ denote the hidden dimension of node representations at each layer. $|V|$ and $|E|$ denote the number of nodes and edges of the graph. For each layer of MM-GNN, the aggregation and transformation of graph convolution cost $O\left((|V|+|E|)\cdot D_{in}D_{hidden}\right)$. Let $k$ denote the max ordinal of the moments used; we used $k\leq 3$ in our experiments, so $k$ can be seen as a small constant integer. The calculation of the query vectors costs $O(|V|D_{hidden}^{2})$, and the calculation of all key vectors costs $O\left(k\cdot(|V|+|E|)\cdot D_{hidden}^{2}\right)$. The Attention-based Moment Adaptor costs $O(k|V|D_{hidden}^{2})$.
Considering that $D_{hidden}\leq D_{in}$ and $k$ is a constant integer, the overall time complexity of an MM-GNN layer is: $\begin{aligned} T&=O\left((k+1)|V|D_{hidden}^{2}+(k+1)\cdot(|V|+|E|)\cdot D_{in}D_{hidden}\right)\\\ &=O\left((|V|+|E|)\cdot D_{in}D_{hidden}\right)\end{aligned},$ which equals the complexity of other mainstream GNNs. ## 6\. Experiments ### 6.1. Datasets The proposed MM-GNN is evaluated on 15 real-world graphs, including 9 social graphs from the Facebook social networks, 3 citation networks (Cora, CiteSeer, PubMed), 2 webpage networks (Chameleon, Squirrel) and one image network (Flickr). The detailed information of these datasets is presented in Table 1 and Table 2. Social network datasets: Facebook100 (Traud et al., 2012) provides 100 college graphs; each graph describes the social relationships in a university. Each graph has categorical node attributes with practical meaning (e.g., gender, major, class year). Moreover, nodes in each dataset belong to six different classes (a student/teacher status flag). Citation network datasets: We choose three public citation network datasets (Cora, PubMed, CiteSeer) to evaluate our model. The nodes on such graphs represent papers and edges represent citation relationships between papers. Webpage network datasets: Two webpage network datasets (Chameleon, Squirrel) are used for model evaluation. Nodes of these datasets represent articles and edges are mutual links between them. Image network datasets: We also choose one image network (Flickr) for evaluation. The nodes represent images and an edge means two images share the same metadata (e.g., location, gallery, tags). Table 3. Performance (accuracy) on node classification task ($60\%$ training set ratio) on Facebook social network datasets.
Model | Northeastern | Caltech | UF | Hamilton | Howard | Simmons | GWU | UGA | Tulane
---|---|---|---|---|---|---|---|---|---
GCN | $90.58\pm 0.28$ | $88.47\pm 1.91$ | $83.94\pm 0.61$ | $92.26\pm 0.35$ | $91.21\pm 0.69$ | $88.74\pm 0.61$ | $86.66\pm 0.48$ | $85.73\pm 0.51$ | $87.93\pm 0.97$
GAT | $88.69\pm 0.30$ | $81.17\pm 2.15$ | $81.68\pm 0.59$ | $91.43\pm 1.25$ | $89.35\pm 0.44$ | $89.14\pm 0.57$ | $84.27\pm 0.88$ | $82.95\pm 0.75$ | $84.45\pm 1.40$
GraphSAGE | $91.77\pm 0.86$ | $92.14\pm 1.38$ | $86.21\pm 0.56$ | $94.75\pm 0.35$ | $93.13\pm 0.93$ | $92.31\pm 0.92$ | $89.43\pm 0.78$ | $88.11\pm 0.39$ | $89.87\pm 0.47$
GIN | $89.62\pm 1.10$ | $78.35\pm 4.24$ | $85.40\pm 0.71$ | $85.47\pm 6.20$ | $89.37\pm 2.05$ | $88.37\pm 1.18$ | $84.73\pm 2.80$ | $87.68\pm 0.40$ | $80.71\pm 4.78$
APPNP | $91.11\pm 0.27$ | $90.76\pm 2.38$ | $83.07\pm 0.54$ | $93.99\pm 0.47$ | $91.88\pm 1.22$ | $90.31\pm 1.20$ | $90.43\pm 0.38$ | $86.31\pm 0.41$ | $88.52\pm 0.44$
JKNet | $92.32\pm 0.39$ | $92.34\pm 1.09$ | $85.43\pm 0.71$ | $94.72\pm 0.76$ | $93.11\pm 0.78$ | $91.81\pm 0.71$ | $89.25\pm 0.65$ | $87.68\pm 0.40$ | $89.04\pm 0.43$
DAGNN | $92.43\pm 0.30$ | $91.15\pm 2.67$ | $86.71\pm 0.51$ | $94.75\pm 0.75$ | $93.44\pm 0.68$ | $92.49\pm 0.96$ | $89.83\pm 0.42$ | $88.20\pm 0.29$ | $89.29\pm 0.50$
MM-GNN | $93.16\pm 0.32$ | $93.28\pm 1.94$ | $88.20\pm 0.45$ | $95.46\pm 0.53$ | $94.08\pm 0.95$ | $93.61\pm 0.45$ | $90.78\pm 0.45$ | $89.44\pm 0.30$ | $90.22\pm 0.33$

Table 4.
Performance (accuracy) on node classification task on citation/webpage/image network datasets

Dataset | Cora | CiteSeer | PubMed | Chameleon | Squirrel | Flickr
---|---|---|---|---|---|---
GCN | $81.13\pm 0.26$ | $70.08\pm 0.43$ | $78.62\pm 0.63$ | $37.68\pm 3.06$ | $26.39\pm 0.88$ | $50.31\pm 0.30$
GAT | $81.36\pm 0.96$ | $70.93\pm 0.70$ | $78.19\pm 0.78$ | $44.34\pm 1.42$ | $29.82\pm 0.98$ | $50.59\pm 0.26$
GraphSAGE | $80.43\pm 0.39$ | $69.56\pm 0.29$ | $77.47\pm 0.76$ | $47.06\pm 1.88$ | $35.62\pm 1.21$ | $50.21\pm 0.31$
GIN | $76.77\pm 0.88$ | $67.67\pm 0.34$ | $77.03\pm 0.53$ | $32.18\pm 1.98$ | $25.08\pm 1.36$ | $46.65\pm 0.33$
APPNP | $82.57\pm 0.10$ | $70.53\pm 0.25$ | $78.33\pm 0.39$ | $40.44\pm 2.02$ | $29.20\pm 1.45$ | $49.21\pm 0.37$
JKNet | $79.43\pm 0.53$ | $70.36\pm 0.33$ | $77.80\pm 0.82$ | $42.43\pm 1.76$ | $35.52\pm 1.06$ | $49.83\pm 0.58$
DAGNN | $83.93\pm 0.63$ | $72.56\pm 1.76$ | $80.50\pm 0.75$ | $47.29\pm 0.62$ | $36.23\pm 1.58$ | $50.78\pm 0.62$
GPR-GNN | $81.85\pm 0.38$ | $70.64\pm 0.65$ | $79.78\pm 0.43$ | $60.83\pm 1.37$ | $49.21\pm 1.15$ | $50.01\pm 0.53$
MM-GNN | $84.21\pm 0.56$ | $73.03\pm 0.58$ | $\underline{80.26\pm 0.69}$ | $63.32\pm 1.31$ | $51.38\pm 1.73$ | $\bm{51.73\pm 0.35}$

### 6.2. Baselines We compare our proposed MM-GNN model against eight widely-used GNN models, including four baseline GNNs (GCN (Kipf and Welling, 2016), GAT (Veličković et al., 2017), GraphSAGE (Hamilton et al., 2017), GIN (Xu et al., 2018a)) and four state-of-the-art GNNs (APPNP (Klicpera et al., 2018), JKNet (Xu et al., 2018b), DAGNN (Liu et al., 2020), GPR-GNN (Chien et al., 2020)). Considering that the two webpage networks (Chameleon, Squirrel) are heterophily graph datasets, unlike the other datasets used in this paper, we also choose GPR-GNN (Chien et al., 2020), which is designed for graphs with heterophily, as one of the baseline models. ### 6.3.
Experimental Setup

We evaluate MM-GNN on the semi-supervised node classification task against state-of-the-art methods. For each dataset, we run our model and all baseline models 10 times and report the mean and standard deviation of the results. For the Facebook social networks, we randomly generate 3 different data splits with an average train/val/test ratio of 60%/20%/20% (Table 3), and we further generate splits with lower training ratios (40%, 20%, 10%) to validate the robustness of our model (Table 6). For the citation networks (Cora, CiteSeer, and PubMed), we use the public split recommended by (Kipf and Welling, 2016), with a fixed 20 nodes per class for training, 500 nodes for validation, and 1000 nodes for testing. For the webpage networks (Chameleon, Squirrel) (Pei et al., 2020) and the image network (Flickr (Zeng et al., 2019)), we use the public splits recommended in the original papers. The maximum moment ordinal $K$ for MM-GNN is searched in {1, 2, 3}. For fairness, we search the shared hyper-parameters of all models over the same spaces. Specifically, we train each model for 200/400/600 epochs, search the number of hidden units in {16, 32, 64, 128, 256} and the number of GNN layers in {2, 6, 10, 16}, and search the learning rate from $1e-4$ to $1e-1$ and the weight decay from $1e-4$ to $1e-2$. For each experiment, we use the Adam optimizer (Kingma and Ba, 2014) and train each GNN model on an Nvidia Tesla V100 GPU.

### 6.4. Main Results

We compare the performance of MM-GNN with the state-of-the-art methods in Table 3 and Table 4. Compared with all baselines, MM-GNN generally achieves the best performance. For the social networks, we observe from Table 3 that MM-GNN gains significant improvements on all datasets. MM-GNN also gains significant improvements on the other public real-world datasets, including the citation networks, webpage networks, and image networks.
These results demonstrate that MM-GNN performs well on graphs of various types.

### 6.5. Ablation Study

To assess the effectiveness of each component of MM-GNN, we conduct comprehensive ablation studies by removing one component at a time. The results are shown in Table 5.

Table 5. Performance (accuracy) on the node classification task compared with single-moment GNN models. M-1, M-2, M-3 denote the single-moment GNN models, where M-$k$ is the model whose aggregator uses only the $k$-th order moment of neighbors. Ensemble denotes the mean ensemble of the three single-moment models.

| Model | M-1 | M-2 | M-3 | Ensemble | MM-GNN | MM-GNN |
|---|---|---|---|---|---|---|
| Fusion mode | — | — | — | Mean | MLP | Attention |
| Howard | 93.15 | 93.61 | 93.83 | 93.70 | 94.00 | 94.37 |
| Simmons | 90.90 | 92.36 | 92.45 | 91.85 | 92.99 | 93.66 |
| GWU | 88.57 | 89.79 | 90.44 | 89.69 | 90.48 | 91.11 |
| Cora | 82.31 | 81.70 | 80.07 | 81.98 | 82.58 | 84.21 |
| CiteSeer | 71.31 | 70.70 | 71.53 | 71.28 | 72.33 | 73.03 |
| PubMed | 79.23 | 78.01 | 78.27 | 78.93 | 79.35 | 80.26 |
| Chameleon | 48.17 | 48.09 | 48.24 | 49.76 | 50.05 | 51.93 |
| Squirrel | 35.58 | 36.29 | 35.97 | 36.09 | 36.35 | 38.92 |
| Flickr | 50.37 | 51.09 | 50.88 | 51.15 | 51.37 | 51.73 |

#### 6.5.1. Effects of MME

To validate the effect of the Multi-order Moments Embedding (MME) module, we design several model variants, as shown in Table 5. The variants in the first three columns use only a single moment (e.g., the 1st-order moment) for neighborhood aggregation. The Ensemble variant averages the logits of all single-moment models. We observe that each single-moment model performs differently across datasets, and their simple ensemble usually performs better than any of them alone. However, the full MM-GNN (rightmost column) achieves the best performance on all datasets against all variants.

#### 6.5.2.
Effects of AMA

To validate the effect of the Attention-based Moment Adaptor (AMA) module, we design a variant of MM-GNN that removes this module and instead fuses the output of the MME module with a simple MLP. MM-GNN with the Attention-based Moment Adaptor achieves the best results on all datasets compared with the simple MLP fusion, which demonstrates the superiority of the AMA module.

Table 6. Performance (accuracy) on the node classification task with different training set ratios on the Facebook social networks.

| Dataset | Northeastern | | | UF | | | GWU | | | UGA | | | Howard | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Training | $40\%$ | $20\%$ | $10\%$ | $40\%$ | $20\%$ | $10\%$ | $40\%$ | $20\%$ | $10\%$ | $40\%$ | $20\%$ | $10\%$ | $40\%$ | $20\%$ | $10\%$ |
| GCN | 90.34 | 90.38 | 90.35 | 83.97 | 83.76 | 83.62 | 85.76 | 85.65 | 85.67 | 85.5 | 85.46 | 85.42 | 91.17 | 90.95 | 90.99 |
| GAT | 88.57 | 88.63 | 88.69 | 79.79 | 79.89 | 79.85 | 83.95 | 84.07 | 83.98 | 82.87 | 82.92 | 82.79 | 89.09 | 88.92 | 88.93 |
| GraphSAGE | 91.91 | 92.22 | 92.18 | 86.26 | 86.34 | 86.51 | 89.31 | 89.45 | 89.15 | 88.21 | 88.09 | 87.77 | 93.21 | 93.29 | 93.17 |
| GIN | 89.45 | 89.59 | 89.57 | 85.64 | 85.44 | 85.57 | 84.27 | 83.98 | 84.07 | 87.57 | 87.44 | 87.34 | 89.40 | 89.24 | 89.31 |
| APPNP | 89.76 | 89.94 | 89.85 | 81.89 | 81.92 | 81.87 | 87.14 | 87.11 | 87.12 | 84.79 | 84.75 | 84.67 | 89.20 | 89.51 | 89.27 |
| JKNet | 92.23 | 92.35 | 92.28 | 85.51 | 85.67 | 85.49 | 89.12 | 89.06 | 89.02 | 87.91 | 87.82 | 87.81 | 93.15 | 93.14 | 93.17 |
| DAGNN | 92.49 | 92.36 | 92.30 | 86.86 | 86.46 | 85.70 | 89.73 | 89.31 | 89.11 | 88.35 | 87.79 | 87.62 | 93.31 | 93.33 | 93.27 |
| MM-GNN | 93.08 | 92.96 | 92.73 | 88.34 | 87.87 | 87.38 | 90.63 | 90.50 | 90.14 | 89.52 | 89.38 | 88.91 | 94.03 | 93.88 | 93.76 |

Figure 4. Average attention value of different moments, shown for (a) social networks, (b) citation networks, and (c) webpage/image networks. The value in each block is the attention averaged over all nodes and all feature dimensions.
Deeper colors indicate larger attention values learned by MM-GNN.

### 6.6. Robustness Analysis

In this section, we evaluate the robustness of MM-GNN under varying training set ratios and key hyper-parameters.

#### 6.6.1. Analysis on the effect of the training set ratio:

We conduct extensive experiments with different training set ratios (10%, 20%, 40%) on the Facebook social networks; due to space limitations, we only show results on five of these datasets. As shown in Table 6, MM-GNN achieves the best performance on every split of the five graphs, even when the proportion of training data is small.

Figure 5. Hyper-parameter analysis on the maximum moment ordinal $K$ used in MM-GNN, for (a) Simmons, GWU, Cora and (b) Chameleon, Squirrel, Flickr.

#### 6.6.2. Analysis on the hyper-parameter $K$:

As shown in Fig. 5, the performance of MM-GNN improves with larger $K$ (the maximum ordinal of moments) and converges once $K$ reaches 3, which matches the complexity of the data.

### 6.7. Visualizations

The Attention-based Moment Adaptor (AMA) improves the classification performance and also provides interpretability for MM-GNN, since it plays a critical role in selecting important orders of moments for each node on the graph. To further explore the interpretability of the learned attention, we analyze its effects through visualization. We take the attention values learned by the first layer of MM-GNN for each node and compute the average over all nodes and all hidden dimensions. As shown in Fig. 4, the average attention values for each moment exhibit different patterns on different graph datasets: MM-GNN outputs larger weights for higher-order moments on the social networks, and smaller weights for higher-order moments on the citation networks.
Moreover, the magnitudes of the learned attentions are aligned with the performance of the corresponding single-moment models in Table 5, which further demonstrates the interpretability of MM-GNN.

## 7\. Related work

In recent years, Graph Neural Networks (GNNs) have been widely studied for graph representation learning due to their expressive power (Du et al., 2021a; Yao et al., 2022; Wang et al., 2019; Zhu et al., 2020; Monti et al., 2018; Wang et al., 2022; Bi et al., 2022b). Spectral GNNs (Defferrard et al., 2016; Boscaini et al., 2015; Xu et al., 2019) were first proposed based on graph signal processing theory. Kipf et al. then proposed GCN (Kipf and Welling, 2016), which simplifies the convolution operation to low-pass filtering on graphs and bridges the gap between spectral and spatial GNNs. GAT (Veličković et al., 2017) further introduced an attention mechanism to learn the importance of different neighbors instead of weighting them equally. Afterwards, some researchers turned their attention to long-range dependencies (Dwivedi et al., 2021; Ying et al., 2021; Abu-El-Haija et al., 2019); DAGNN (Liu et al., 2020), for instance, is a deeper GCN model with a decoupled transformation and aggregation strategy. However, most existing GNNs update node representations by aggregating neighbor features with a single statistic (e.g., mean, max, sum). Although such low-order statistics reflect certain characteristics of the neighbor distribution, they lose the full information of the neighbors' feature distribution. Even the simple Gaussian distribution has two degrees of freedom (mean and variance), and the information in the 2nd-order statistic (variance) cannot be recovered from a mean/max/sum (1st-order) aggregator. Some graph-filter-based methods (Dehmamy et al., 2019; Corso et al., 2020) proposed to model graph signals by combining multiple types of aggregators, but without considering the relationship between the different signals and the neighborhood distribution.
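The information loss argued above is easy to make concrete. The sketch below (our own illustration, not the authors' implementation; the raw-moment definition and the signed-root normalization in `moment_aggregate` are assumptions) shows two neighborhoods that a mean aggregator cannot distinguish but a 2nd-order moment can, plus a generic element-wise $k$-th order moment aggregator over a dense adjacency matrix:

```python
import numpy as np

# Two neighborhoods with identical means but different spread:
# a 1st-order (mean) aggregator maps both to the same value.
nbrs_a = np.array([2.0, 2.0, 2.0, 2.0])
nbrs_b = np.array([0.0, 4.0, 0.0, 4.0])
assert nbrs_a.mean() == nbrs_b.mean() == 2.0
assert np.mean(nbrs_a ** 2) != np.mean(nbrs_b ** 2)  # 2nd moment separates them

def moment_aggregate(x, adj, k):
    """Element-wise k-th order raw moment of each node's neighbor features.

    x is [n, d], adj is a dense [n, n] 0/1 adjacency matrix. The signed
    k-th root keeps the statistic on the feature scale (one plausible
    normalization; the paper's exact form may differ).
    """
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    mk = (adj @ (x ** k)) / deg            # mean of k-th powers over neighbors
    return np.sign(mk) * np.abs(mk) ** (1.0 / k)

# toy graph: node 0 is connected to nodes 1 and 2
adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
x = np.array([[1.0], [2.0], [4.0]])
m1 = moment_aggregate(x, adj, 1)   # neighbor mean
m2 = moment_aggregate(x, adj, 2)   # neighbor root-mean-square
```

Stacking such statistics for $k = 1, \dots, K$ and fusing them per node is exactly the role the MME and AMA modules play in MM-GNN.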
Recently, some works (Du et al., 2021b; Ma et al., 2022; Luo et al., 2022; Bi et al., 2022a) have begun to study the modeling of the neighbor distribution on graphs, but they do not consider the information loss caused by single low-order statistics.

## 8\. Conclusion

In this paper, we embrace the critical role of the neighbors' feature distribution in modeling graph data. We introduce multi-order moments to model the neighbors' feature distribution on graphs, addressing a severe limitation of existing GNNs, which use only a single statistic for neighborhood aggregation. Based on the method of moments, we propose a novel Mix-Moment GNN model with an adaptive attention mechanism, consisting of two components: the Multi-order Moments Embedding (MME) and the Attention-based Moment Adaptor (AMA). MM-GNN uses multi-order moments to represent the neighbors' feature distribution and adaptively selects important moments for each node. The consistent improvement of MM-GNN over other state-of-the-art GNNs on 15 real-world graphs demonstrates the superiority of our method.

## References

* Abu-El-Haija et al. (2019) Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. 2019. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In _international conference on machine learning_. PMLR, 21–29. * Bai et al. (2019) Yunsheng Bai, Hao Ding, Song Bian, Ting Chen, Yizhou Sun, and Wei Wang. 2019\. Simgnn: A neural network approach to fast graph similarity computation. In _Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining_. 384–392. * Bi et al. (2022a) Wendong Bi, Lun Du, Qiang Fu, Yanlin Wang, Shi Han, and Dongmei Zhang. 2022a. Make Heterophily Graphs Better Fit GNN: A Graph Rewiring Approach. _arXiv preprint arXiv:2209.08264_ (2022). * Bi et al. (2022b) Wendong Bi, Bingbing Xu, Xiaoqian Sun, Zidong Wang, Huawei Shen, and Xueqi Cheng. 2022b.
Company-as-Tribe: Company Financial Risk Assessment on Tribe-Style Graph with Hierarchical Graph Neural Networks. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. 2712–2720. * Boscaini et al. (2015) Davide Boscaini, Jonathan Masci, Simone Melzi, Michael M Bronstein, Umberto Castellani, and Pierre Vandergheynst. 2015. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In _Computer Graphics Forum_ , Vol. 34. Wiley Online Library, 13–23. * Bowman and Shenton (2004) KO Bowman and LR Shenton. 2004. Estimation: Method of moments. _Encyclopedia of statistical sciences_ 3 (2004). * Chen et al. (2020) Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. 2020. Simple and deep graph convolutional networks. In _International Conference on Machine Learning_. PMLR, 1725–1735. * Chien et al. (2020) Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. 2020\. Adaptive Universal Generalized PageRank Graph Neural Network. In _International Conference on Learning Representations_. * Corso et al. (2020) Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. 2020\. Principal neighbourhood aggregation for graph nets. _Advances in Neural Information Processing Systems_ 33 (2020), 13260–13271. * Dai et al. (2022) Enyan Dai, Wei Jin, Hui Liu, and Suhang Wang. 2022\. Towards robust graph neural networks for noisy graphs with sparse labels. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_. 181–191. * Defferrard et al. (2016) Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. _Advances in neural information processing systems_ 29 (2016). * Dehmamy et al. (2019) Nima Dehmamy, Albert-László Barabási, and Rose Yu. 2019. Understanding the representation power of graph neural networks in learning graph topology. 
_Advances in Neural Information Processing Systems_ 32 (2019). * Derr et al. (2020) Tyler Derr, Yao Ma, Wenqi Fan, Xiaorui Liu, Charu Aggarwal, and Jiliang Tang. 2020\. Epidemic graph convolutional network. In _Proceedings of the 13th International Conference on Web Search and Data Mining_. 160–168. * Du et al. (2021a) Lun Du, Fei Gao, Xu Chen, Ran Jia, Junshan Wang, Jiang Zhang, Shi Han, and Dongmei Zhang. 2021a. TabularNet: A Neural Network Architecture for Understanding Semantic Structures of Tabular Data. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_. 322–331. * Du et al. (2021b) Lun Du, Xiaozhou Shi, Qiang Fu, Hengyu Liu, Shi Han, and Dongmei Zhang. 2021b. GBK-GNN: Gated Bi-Kernel Graph Neural Networks for Modeling Both Homophily and Heterophily. _arXiv preprint arXiv:2110.15777_ (2021). * Dwivedi et al. (2021) Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. 2021\. Graph neural networks with learnable structural and positional representations. _arXiv preprint arXiv:2110.07875_ (2021). * Fettal et al. (2022) Chakib Fettal, Lazhar Labiod, and Mohamed Nadif. 2022\. Efficient Graph Convolution for Joint Node Representation Learning and Clustering. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_. 289–297. * Fisher (1936) Ronald A Fisher. 1936\. The use of multiple measurements in taxonomic problems. _Annals of eugenics_ 7, 2 (1936), 179–188. * Guo et al. (2022) Zhihao Guo, Feng Wang, Kaixuan Yao, Jiye Liang, and Zhiqiang Wang. 2022. Multi-Scale Variational Graph AutoEncoder for Link Prediction. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_. 334–342. * Hamilton et al. (2017) William L Hamilton, Rex Ying, and Jure Leskovec. 2017\. Inductive representation learning on large graphs. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_. 1025–1035. 
* Jin et al. (2021) Wei Jin, Tyler Derr, Yiqi Wang, Yao Ma, Zitao Liu, and Jiliang Tang. 2021. Node similarity preserving graph convolutional networks. In _Proceedings of the 14th ACM international conference on web search and data mining_. 148–156. * Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ (2014). * Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_ (2016). * Klicpera et al. (2018) Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. 2018. Predict then propagate: Graph neural networks meet personalized pagerank. _arXiv preprint arXiv:1810.05997_ (2018). * Lin et al. (2020) Wenqing Lin, Feng He, Faqiang Zhang, Xu Cheng, and Hongyun Cai. 2020. Initialization for network embedding: A graph partition approach. In _Proceedings of the 13th International Conference on Web Search and Data Mining_. 367–374. * Lin et al. (2014) Wenqing Lin, Xiaokui Xiao, and Gabriel Ghinita. 2014\. Large-scale frequent subgraph mining in mapreduce. In _2014 IEEE 30th International Conference on Data Engineering_. IEEE, 844–855. * Liu et al. (2020) Meng Liu, Hongyang Gao, and Shuiwang Ji. 2020. Towards Deeper Graph Neural Networks. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_. ACM. * Luo et al. (2022) Zihan Luo, Jianxun Lian, Hong Huang, Hai Jin, and Xing Xie. 2022. Ada-GNN: Adapting to Local Patterns for Improving Graph Neural Networks. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_. 638–647. * Ma et al. (2022) Xiaojun Ma, Qin Chen, Yuanyi Ren, Guojie Song, and Liang Wang. 2022. Meta-Weight Graph Neural Network: Push the Limits Beyond Global Homophily. In _Proceedings of the ACM Web Conference 2022_. 1270–1280. * Monti et al. 
(2018) Federico Monti, Karl Otness, and Michael M Bronstein. 2018\. Motifnet: a motif-based graph convolutional network for directed graphs. In _2018 IEEE Data Science Workshop (DSW)_. IEEE, 225–228. * Natekar and Sharma (2020) Parth Natekar and Manik Sharma. 2020. Representation based complexity measures for predicting generalization in deep learning. _arXiv preprint arXiv:2012.02775_ (2020). * Neyshabur et al. (2017) Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. 2017. Exploring generalization in deep learning. _Advances in neural information processing systems_ 30 (2017). * Papoulis and Pillai (2002) Athanasios Papoulis and S Unnikrishna Pillai. 2002. _Probability, random variables, and stochastic processes_. Tata McGraw-Hill Education. * Pei et al. (2020) Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. 2020. Geom-gcn: Geometric graph convolutional networks. _arXiv preprint arXiv:2002.05287_ (2020). * Traud et al. (2012) Amanda L Traud, Peter J Mucha, and Mason A Porter. 2012\. Social structure of Facebook networks. _Physica A: Statistical Mechanics and its Applications_ 391, 16 (2012), 4165–4180. * Veličković et al. (2017) Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. _arXiv preprint arXiv:1710.10903_ (2017). * Wang et al. (2022) Tao Wang, Di Jin, Rui Wang, Dongxiao He, and Yuxiao Huang. 2022. Powerful graph convolutional networks with adaptive propagation mechanism for homophily and heterophily. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 36. 4210–4218. * Wang et al. (2019) Yun Wang, Lun Du, Guojie Song, Xiaojun Ma, Lichen Jin, Wei Lin, and Fei Sun. 2019. Tag2Gauss: Learning Tag Representations via Gaussian Distribution in Tagged Networks.. In _IJCAI_. 3799–3805. * Xu et al. (2019) Bingbing Xu, Huawei Shen, Qi Cao, Yunqi Qiu, and Xueqi Cheng. 2019. Graph wavelet neural network. 
_arXiv preprint arXiv:1904.07785_ (2019). * Xu et al. (2018a) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018a. How powerful are graph neural networks? _arXiv preprint arXiv:1810.00826_ (2018). * Xu et al. (2018b) Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. 2018b. Representation learning on graphs with jumping knowledge networks. In _International Conference on Machine Learning_. PMLR, 5453–5462. * Yang et al. (2016) Zhilin Yang, William Cohen, and Ruslan Salakhudinov. 2016\. Revisiting semi-supervised learning with graph embeddings. In _International conference on machine learning_. PMLR, 40–48. * Yao et al. (2022) Di Yao, Haonan Hu, Lun Du, Gao Cong, Shi Han, and Jingping Bi. 2022. TrajGAT: A Graph-based Long-term Dependency Modeling Approach for Trajectory Similarity Computation. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. 2275–2285. * Ying et al. (2021) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. 2021\. Do Transformers Really Perform Badly for Graph Representation? _Advances in Neural Information Processing Systems_ 34 (2021). * Zeng et al. (2019) Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. 2019\. Graphsaint: Graph sampling based inductive learning method. _arXiv preprint arXiv:1907.04931_ (2019). * Zhang and Chen (2018) Muhan Zhang and Yixin Chen. 2018. Link prediction based on graph neural networks. _Advances in Neural Information Processing Systems_ 31 (2018), 5165–5175. * Zhao et al. (2021) Tianxiang Zhao, Xiang Zhang, and Suhang Wang. 2021\. Graphsmote: Imbalanced node classification on graphs with graph neural networks. In _Proceedings of the 14th ACM international conference on web search and data mining_. 833–841. * Zhou et al. (2018) Lekui Zhou, Yang Yang, Xiang Ren, Fei Wu, and Yueting Zhuang. 2018. 
Dynamic network embedding by modeling triadic closure process. In _Proceedings of the AAAI conference on artificial intelligence_ , Vol. 32. * Zhu et al. (2020) Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. 2020\. Beyond homophily in graph neural networks: Current limitations and effective designs. _Advances in Neural Information Processing Systems_ 33 (2020), 7793–7804.
# On the Stability of Random Multiple Access with Feedback Exploitation and Queue Priority

Karim G. Seddik

Electronics Engineering Department, American University in Cairo, AUC Avenue, New Cairo 11835, Egypt. email<EMAIL_ADDRESS>

###### Abstract

In this paper, we study the stability of two interacting queues under random multiple access in which the queues leverage feedback information. We derive the stability region of a random access scheme where one of the two queues exploits the feedback information and backs off upon a negative acknowledgement (NACK), while the other, higher-priority, queue accesses the channel with probability one. We characterize the stability region of this feedback-based random access protocol and prove that it encloses the stability region of the conventional random access (RA) scheme, which does not exploit the feedback information.

## I Introduction

The stability of interacting queues has been extensively considered in the literature. Several works have characterized the stability region of interacting queues under random access protocols. The stability region has been characterized for the cases of $M=2$ and $M=3$ interacting queues, as well as for $M>3$ with symmetric arrivals. The stability region for the general case of $M>3$ with asymmetric arrivals is still an open problem, and only inner achievable bounds are known. Recently, many papers have considered the problem of interacting queues in different contexts. For example, [1] considers the problem of interacting queues in a TDMA system where a relay helps the source nodes forward their lost packets. In [2], the stability of interacting queues under a random access protocol was derived in the context of a cognitive radio network. In [3], the stability region of two interacting queues under a random access protocol, where the two queues harvest energy, was characterized.
Other works can be found in [4, 5], which derive stability regions in the context of different cognitive radio networks. In this paper, we derive the stability region of a two-queue random access (RA) protocol with priorities. The queues apply the conventional RA protocol, but in the case of packet loss due to collision, the two queues exploit the feedback information to provide some level of coordination. We set a priority to one of the two queues as follows: in the case of a negative acknowledgement, the higher-priority queue attempts transmission in the following time slot with probability one, and the other queue backs off to allow for collision-free transmission of the higher-priority queue. Clearly, this enhances the service rate of the higher-priority queue, but more interestingly it also improves the service rate of the other, lower-priority queue, as will be explained later. We derive an expression for the boundary of the stability region and prove that the stability region of the RA-with-priority scheme encloses that of the conventional RA scheme. To the best of our knowledge, the problem of characterizing the stability region of a random access protocol with feedback leveraging has not been considered before. We characterize the stable arrival-rate region and prove that it contains that of the conventional random multiple access scheme (with no feedback exploitation). The rest of the paper is organized as follows. The system model is presented in Section II. The performance of the proposed scheme is investigated in Section III. The paper is concluded in Section IV. We have moved most of the proofs to the appendices to preserve the flow of ideas in the paper.

## II System Model

The system model is shown in Fig. 1. We consider two interacting packet queues, namely $Q_{1}$ and $Q_{2}$. $Q_{1}$ and $Q_{2}$ have infinite buffers for storing fixed-length packets.
The channel is slotted in time, and each slot duration equals one packet transmission time. The arrival processes at the two queues, $Q_{1}$ and $Q_{2}$, are modeled as Bernoulli processes with means $\lambda_{1}$ and $\lambda_{2}$, respectively [3]. Under our system model assumptions, the average arrival rates are $\lambda_{1}$ and $\lambda_{2}$ packets per time slot, bounded as $0<\lambda_{i}<1$, $i=1,2$ (the maximum service rate in our model is 1 packet/slot, since the slot duration equals one packet transmission time; the arrival rates must therefore be less than 1, otherwise the system is unstable [3]). We assume that packets arrive at the start of the time slot.

Figure 1: The system model.

The channel is modeled as a collision channel, where packet loss results only from simultaneous transmissions by the two queues. If only one queue attempts transmission in a given time slot, the packet is considered to be correctly received [6, 3]. In the random access phase, the first queue accesses the channel with probability $p_{1}$ whenever it has packets to send, and the second queue accesses the channel with probability $p_{2}$ whenever it has packets to send. If a queue is empty in some time slot, it does not attempt any channel access. In this paper, we consider leveraging the feedback information at the queues in the case of collision. In the conventional random multiple access system, in the case of collision, the collided packets stay at the heads of the queues and retransmissions are attempted using the same random multiple access scheme.
In this paper, we consider a system where the feedback information is leveraged at the queues and a priority is set to the first queue: in the time slot following a collision, queue 2 ($Q_{2}$) backs off and queue 1 ($Q_{1}$) retransmits its collided packet, allowing collision-free transmission of $Q_{1}$; after that, the two queues return to the conventional random multiple access scheme. The priority given to queue 1 can be due to a quality of service (QoS) requirement that differs from that of queue 2. The interesting result is that although the feedback enhances the service of queue 1 by giving it higher priority, the service of queue 2 is enhanced as well, as will be explained later.

## III The Stability Region for the Feedback-Based Random Access Protocol with Priorities

In this section, we characterize the stability region of the feedback-based random access scheme. Stability can be loosely defined as keeping a certain quantity of interest bounded; in our case, we are interested in the queue size being bounded. For an irreducible and aperiodic Markov chain with a countable number of states, the chain is stable if and only if it is positive recurrent, which implies the existence of its stationary distribution. For a rigorous definition of stability under more general scenarios, see [6] and [7]. If the arrival and service processes of a queueing system are strictly stationary, one can apply Loynes's theorem to check the stability conditions [8]. This theorem states that if the arrival and service processes of a queueing system are strictly stationary and the average arrival rate is less than the average service rate, then the queue is stable; otherwise, it is unstable. Characterizing the stability region is difficult due to the interaction of the two queues: the service of each queue depends on the state of the other queue.
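The protocol mechanics described in Sec. II are simple enough to simulate directly. The sketch below is our own illustration (not the authors' code); the parameter values are arbitrary but chosen inside the stable region derived later in this section. It implements the feedback rule — after a collision, queue 1 retransmits alone while queue 2 backs off for one slot — and checks that both queues remain short over a long run:

```python
import random

def simulate(lam1, lam2, p1, p2, slots=100_000, seed=7):
    """Slotted collision channel with feedback-based priority for queue 1."""
    rng = random.Random(seed)
    q1 = q2 = 0
    collided = False              # NACK received in the previous slot
    for _ in range(slots):
        # Bernoulli arrivals at the start of the slot
        q1 += rng.random() < lam1
        q2 += rng.random() < lam2
        if collided:
            # priority slot: queue 1 alone retransmits and always succeeds,
            # queue 2 backs off (its collided packet stays at the head)
            q1 -= 1
            collided = False
            continue
        t1 = q1 > 0 and rng.random() < p1
        t2 = q2 > 0 and rng.random() < p2
        if t1 and t2:
            collided = True       # both packets stay at the heads of the queues
        elif t1:
            q1 -= 1
        elif t2:
            q2 -= 1
    return q1, q2

# rates well inside the stable region for p1 = p2 = 0.5
q1, q2 = simulate(lam1=0.2, lam2=0.1, p1=0.5, p2=0.5)
```

With $\lambda_1 = 0.2 < p_1/(1+p_1p_2) = 0.4$ and $\lambda_2 = 0.1 < p_2(1-\lambda_1-\lambda_1 p_2) = 0.35$, both queue lengths stay small throughout the run, consistent with the stability conditions derived below.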
We use the Dominant System concept proposed in [6], which was used there to characterize the stability region of the conventional RA scheme. We define two dominant systems tailored to our feedback-based random access scheme, and in each of the two systems we determine the boundaries of the stability region.

### III-A Dominant System 1

A dominant system is a system that "stochastically dominates" our system; that is, the queue lengths in the dominant system are always at least as large as those in our system when both systems start from the same initial state, have the same arrivals, and encounter the same packet collisions.

Figure 2: Queue 1, $Q_{1}$, Markov chain model for Dominant System 1.

For the first Dominant System, we assume that queue 2 always has packets to transmit; even when the queue is empty, dummy packets are transmitted from queue 2. This clearly yields a system that dominates ours, since the transmission of dummy packets can only result in more collisions and packet losses. If, for a given arrival rate pair ($\lambda_{1}$, $\lambda_{2}$), the first dominant system is stable, then our system is clearly stable. Therefore, the stability region of the first dominant system provides an inner bound on our system's stability region. For queue 1, the Markov chain describing the evolution of the queue is shown in Fig. 2. Note that the Markov chain has two classes of states, namely $k_{F}$ and $k_{R}$, with $k=0,1,2,\cdots$. The subscript $F$ denotes first-transmission states and the subscript $R$ denotes retransmission states.
Note that in the retransmission states, queue 1's packet is always delivered, since there are no collisions in these states (queue 2 is backing off); in these states, the length of queue 1 either decreases by 1 if no arrival occurs, or remains the same if an arrival occurs, because the packet at the head of the queue is successfully transmitted with probability 1. The stability condition for queue 1 in Dominant System 1 is given in the following lemma, which is proved in Appendix A.

###### Lemma III.1

The arrival rates for queue 1 and queue 2 in Dominant System 1 must satisfy the following two conditions, respectively, $\begin{split}\lambda_{1}&<\frac{p_{1}}{1+p_{1}p_{2}}\\\ \lambda_{2}&<p_{2}(1-\lambda_{1}-\lambda_{1}p_{2})\end{split}$ (1) for the system to be stable.

### III-B Dominant System 2

In the second Dominant System, we assume that queue 1 always has packets to send (dummy packets are sent if the queue decides to transmit while empty). Again, this decouples the interaction of the two queues, since the service rate of queue 2 becomes independent of the state of queue 1. The Markov chain for the evolution of queue 2 is shown in Fig. 3. Two classes of states are defined in Fig. 3, denoted by the subscripts ON and OFF. The ON states are the states in which queue 2 can access the channel. The OFF states are the back-off states in which queue 1 is retransmitting its collided packets. Note that the transitions from the $k_{\text{OFF}}$ state can be either to the $k_{\text{ON}}$ state, if no arrival occurs in the slot, or to the $(k+1)_{\text{ON}}$ state, if one arrival occurs in the slot. The OFF states can never transition to a state with a smaller number of packets, since in the OFF states queue 2 is in back-off mode and no access is attempted.
The stability condition for queue 2 in Dominant System 2 is given in the following lemma, which is proved in Appendix B (the analysis in Appendix B will be based on the theory of homogeneous quasi birth-and-death (QBD) Markov chains [9]). ###### Lemma III.2 The arrival rates for queue 1 and queue 2 in Dominant System 2 must satisfy the following two conditions, respectively, $\begin{split}\lambda_{1}&<\frac{p_{1}(1-p_{1}-\lambda_{2}p_{1})}{(1-p_{1})}\\\ \lambda_{2}&<\frac{p_{2}(1-p_{1})}{1+p_{1}p_{2}}\end{split}$ (2) for the system to be stable. Figure 3: Queue 2, $Q_{2}$, Markov chain model for Dominant System 2. Note that the intersection of the two stability regions described in Lemma III.1 and Lemma III.2 for a given access vector $\mathbf{p}=[p_{1}\;p_{2}]^{T}$ (grey area in Fig. 4) can be interpreted as follows. Define a new Dominant System (Dominant System 3) in which every queue always has a packet to transmit. In this case, the transmission state of queue 1 can be represented by the two-state Markov chain model shown in Fig. 5(a); note that in this case queue 1 will be either in the “Transmission” state denoted by $F$ or in the “Retransmission” state denoted by $R$ in Fig. 5(a). Fig. 5(b) shows the Markov chain model for queue 2. Queue 2 will have two states, denoted by ON when queue 1 is in the $F$ state and OFF when queue 1 is in the $R$ state (when queue 1 is in the $R$ state, queue 2 will be in the back-off (OFF) state). It is straightforward to show that the steady state distributions for the two Markov chains shown in Fig. 5 are given by $\begin{split}\pi_{F}&=\pi_{\text{ON}}=\frac{1}{1+p_{1}p_{2}}\\\ \pi_{R}&=\pi_{\text{OFF}}=\frac{p_{1}p_{2}}{1+p_{1}p_{2}}.\end{split}$ (3) Figure 4: The union of the stability regions for the two dominant systems for fixed access probabilities $p_{1}$ and $p_{2}$.
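The stationary split in Eq. (3) follows from a two-state chain with $P(F\to R)=p_{1}p_{2}$ (both queues transmit and collide) and $P(R\to F)=1$; a minimal numerical cross-check, solving the chain in closed form and by power iteration:

```python
def dominant3_stationary(p1, p2):
    """Closed-form stationary distribution of the two-state F/R chain, Eq. (3)."""
    pi_F = 1.0 / (1.0 + p1 * p2)
    return pi_F, p1 * p2 * pi_F

def dominant3_stationary_by_iteration(p1, p2, n_iter=200):
    """Cross-check by iterating P(F->R) = p1*p2 and P(R->F) = 1 to convergence."""
    pi_F, pi_R = 1.0, 0.0
    for _ in range(n_iter):
        pi_F, pi_R = (1 - p1 * p2) * pi_F + pi_R, p1 * p2 * pi_F
    return pi_F, pi_R

p1, p2 = 0.7, 0.4
pi_F, pi_R = dominant3_stationary(p1, p2)
pi_F_it, pi_R_it = dominant3_stationary_by_iteration(p1, p2)
```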
The service rate for queue 1 in Dominant System 3, $\mu_{1}^{\prime\prime}$, is given by $\begin{split}\mu_{1}^{\prime\prime}=p_{1}(1-p_{2})\pi_{F}+\pi_{R}=\frac{p_{1}}{1+p_{1}p_{2}},\end{split}$ (4) where queue 1 is served with probability $p_{1}(1-p_{2})$ in the $F$ state and with probability 1 in the $R$ state. The service rate for queue 2 in Dominant System 3, $\mu_{2}^{\prime\prime}$, is given by $\begin{split}\mu_{2}^{\prime\prime}=p_{2}(1-p_{1})\pi_{\text{ON}}+0\times\pi_{\text{OFF}}=\frac{p_{2}(1-p_{1})}{1+p_{1}p_{2}},\end{split}$ (5) where queue 2 is served with probability $p_{2}(1-p_{1})$ in the ON state and with probability 0 in the OFF state. (a) The two-state Markov chain model for queue 1 transmission state in Dominant System 3. (b) The two-state Markov chain model for queue 2 transmission state in Dominant System 3. Figure 5: Dominant System 3 Markov chain model. ### III-C The Stability Region of the Random Access Protocol with Priorities In this section, we derive the expression for the stability region of the random access scheme with feedback exploitation where a priority is set to one of the two queues. The following Lemma characterizes the stability region for fixed random access probabilities, $p_{1}$ and $p_{2}$, for queue 1 and queue 2, respectively. ###### Lemma III.3 For a fixed random access probability vector $\mathbf{p}=[p_{1}\;p_{2}]^{T}$, the stability region $\mathcal{R}(\mathbf{p})$ of the random access with priorities is the union of the two regions described by $\begin{split}\lambda_{2}<p_{2}(1-\lambda_{1}-\lambda_{1}p_{2})\end{split}$ (6) when $\begin{split}\lambda_{1}<\frac{p_{1}}{1+p_{1}p_{2}}\end{split}$ (7) and $\begin{split}\lambda_{1}<\frac{p_{1}(1-p_{1}-\lambda_{2}p_{1})}{(1-p_{1})}\end{split}$ (8) when $\begin{split}\lambda_{2}<\frac{p_{2}(1-p_{1})}{1+p_{1}p_{2}}.\end{split}$ (9) for the system to be stable. ###### Proof: The result in Lemma III.3 can be proved using the tool of stochastic dominance presented in [6]. 
The indistinguishability argument at the stability region boundary states that if the original system is unstable then its queues will saturate and they will always have packets to transmit; therefore, at the boundaries of the stability region of the original system, the original system is indistinguishable from the dominant system and thus has the same stability region boundaries [6]. ∎ The next theorem characterizes the entire stability region for the random access protocol with priorities. ###### Theorem III.4 The boundary of the stability region, $\mathcal{R}$, of the random access protocol with priorities, which is defined as the union of the $\mathcal{R}(\mathbf{p})$ regions for the different $\mathbf{p}=[p_{1}\;p_{2}]^{T}$ as $\begin{split}\mathcal{R}=\bigcup_{\mathbf{p}\in[0,1]^{2}}\mathcal{R}(\mathbf{p})\end{split}$ (10) can be characterized as $\begin{split}\lambda_{2}=\left\\{\begin{array}[]{ll}1-2\lambda_{1}&\lambda_{1}\leq\frac{1}{3}\\\ \frac{(1-\lambda_{1})^{2}}{4\lambda_{1}}&\lambda_{1}>\frac{1}{3}.\end{array}\right.\end{split}$ (11) ###### Proof: First, we will derive the boundary of the stability region defined in Lemma III.1, which can be found as $\begin{split}\lambda_{2}^{*}(\lambda_{1})=&\mathbf{max}_{p_{1},p_{2}}\;p_{2}(1-\lambda_{1}-\lambda_{1}p_{2})\\\ &\text{{subject to }}0\leq p_{1}\leq 1,\;0\leq p_{2}\leq 1,\;\lambda_{1}<\frac{p_{1}}{1+p_{1}p_{2}}.\end{split}$ (12) Ignoring the constraints in the last optimization problem, differentiating the cost function with respect to $p_{2}$, and equating the derivative to 0, we obtain the optimal value for $p_{2}$, denoted by $p_{2}^{*}$ (it is straightforward to prove that the cost function is concave in $p_{2}$), as $p_{2}^{*}=\frac{1-\lambda_{1}}{2\lambda_{1}}.$ (13) Note that for $\lambda_{1}\geq\frac{1}{3}$, we have $p_{2}^{*}\leq 1$.
Also, for $p_{1}=1$ and $p_{2}^{*}=\frac{1-\lambda_{1}}{2\lambda_{1}}$, the maximum value for the first queue arrival rate is $\frac{p_{1}}{1+p_{1}p_{2}}=\frac{2\lambda_{1}}{1+\lambda_{1}}>\lambda_{1}$ (i.e., the last constraint, $\lambda_{1}<\frac{p_{1}}{1+p_{1}p_{2}}$, is satisfied with $p_{1}=1$), which means that for $\lambda_{1}\geq\frac{1}{3}$, the value of $p_{2}$ that maximizes $\lambda_{2}$ for a given $\lambda_{1}$ is given by $p_{2}^{*}=\frac{1-\lambda_{1}}{2\lambda_{1}}$, with all the constraints in (12) not being violated. For $\lambda_{1}<\frac{1}{3}$, following similar steps to the $\lambda_{1}\geq\frac{1}{3}$ case, we can easily prove that the value of $p_{2}$ that maximizes $\lambda_{2}$ is given by $p_{2}^{*}=1$; clearly, the values $p_{1}=1$ and $p_{2}^{*}=1$ can be easily checked to satisfy the constraints in (12) for $\lambda_{1}<\frac{1}{3}$. Substituting the optimal values of $p_{2}$ for the different ranges of $\lambda_{1}$, we obtain the boundary of the stability region spanned by the expression in Lemma III.1 as $\begin{split}\lambda_{2}=\left\\{\begin{array}[]{ll}1-2\lambda_{1}&\lambda_{1}\leq\frac{1}{3}\\\ \frac{(1-\lambda_{1})^{2}}{4\lambda_{1}}&\lambda_{1}>\frac{1}{3}.\end{array}\right.\end{split}$ (14) Finally, following a similar approach, it is straightforward to show that the boundary derived in (14) is also the boundary of the stability region defined in Lemma III.2, which completes the proof. ∎ In Fig. 6, we have plotted the regions $\mathcal{R}(\mathbf{p})$, for $p_{1}$ and $p_{2}$ ranging from 0 to 1 with a step of 0.01, along with the derived stability region boundary given in the previous theorem. Fig. 6 also shows the stability region of the random access scheme, whose boundary is given by the following relation [6] $\sqrt{\lambda_{1}}+\sqrt{\lambda_{2}}=1.$ (15) In Fig.
6, we also show the boundary of the stability region for the time division (TD) based scheme (genie-aided), which serves as the stability region upper bound (TD corresponds to full coordination between the two queues and requires a priori knowledge of the arrival rates before dividing the resources, i.e., the time slots), given by $\lambda_{1}+\lambda_{2}=1.$ (16) Figure 6: The stability regions for the Random Access, Random Access with Priorities, and Time Division schemes. It is clear, and straightforward to prove analytically from the closed-form boundary expressions, that the stability region of the RA scheme with priorities encloses that of the RA scheme. This can be explained as follows. For a given arrival rate $\lambda_{1}$ at the first queue, the RA with priorities scheme provides a better service rate to that queue than the RA scheme does; queue 1 is therefore empty with a higher probability, which in turn gives queue 2 a higher service rate as well. So setting a priority for the first queue in the retransmission stage also improves the service rate of the second queue; this is because the RA with priorities scheme introduces some form of coordination between the two queues in the retransmission stage. Allowing collision-free retransmissions from the first queue decreases the expected number of collisions between the transmissions of the two queues, and this results in better service rates for both queues. ## IV Conclusions In this paper, we consider the problem of deriving the stability region for a random access protocol with feedback exploitation. We consider the case of two interacting queues with priority set to one of them.
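The closed-form boundaries compared in Fig. 6 can be cross-checked numerically: a grid search over $p_{2}$ reproduces the piecewise boundary of Theorem III.4, and the three boundaries are nested as claimed. A small sketch (the grid resolution and test points are arbitrary choices):

```python
def priority_boundary(lam1):
    """Closed-form boundary of Theorem III.4 for RA with priorities."""
    return 1 - 2 * lam1 if lam1 <= 1 / 3 else (1 - lam1) ** 2 / (4 * lam1)

def priority_boundary_numeric(lam1, steps=20_000):
    """Maximize p2*(1 - lam1 - lam1*p2) over p2 in [0, 1] by grid search
    (p1 = 1 satisfies the remaining constraint of Eq. (12))."""
    return max(p2 * (1 - lam1 - lam1 * p2)
               for p2 in (i / steps for i in range(steps + 1)))

def ra_boundary(lam1):
    """Boundary of the conventional RA scheme: sqrt(l1) + sqrt(l2) = 1."""
    return (1 - lam1 ** 0.5) ** 2

def td_boundary(lam1):
    """Genie-aided time-division upper bound: l1 + l2 = 1."""
    return 1 - lam1

points = (0.1, 0.25, 0.4, 0.6, 0.8)
numeric_matches = all(abs(priority_boundary(l) - priority_boundary_numeric(l)) < 1e-6
                      for l in points)
nested = all(ra_boundary(l) <= priority_boundary(l) <= td_boundary(l) for l in points)
```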
The two queues access the channel through a conventional random access protocol; in the case of a collision, the higher-priority queue accesses the channel in the next slot with probability 1 while the other queue backs off. We derive the stability region for the random access with priorities protocol and prove that it contains the stability region of the conventional random access protocol. We show that not only is the service rate of the higher-priority queue enhanced, but the service rate of the other queue is also improved compared to the conventional random access protocol. ## Appendix A Proof of Lemma III.1 In this Appendix, we provide a proof for Lemma III.1. We start by calculating the steady state distribution for the Markov chain shown in Fig. 2. First, it is clear that $\epsilon_{0}=0$ since the queue can never be in a retransmission state while being empty. Writing the balance equation around $1_{R}$, we have $\epsilon_{1}=\lambda_{1}p_{1}p_{2}\pi_{0}+\left(1-\lambda_{1}\right)p_{1}p_{2}\pi_{1}.$ (17) Then around $0_{F}$, we have $(\lambda_{1}p_{1}p_{2}+\lambda_{1}(1-p_{1}))\pi_{0}=\left(1-\lambda_{1}\right)\epsilon_{1}+\left(1-\lambda_{1}\right)p_{1}(1-p_{2})\pi_{1}.$ (18) Substituting for $\epsilon_{1}$ from (17) into (18), and after some manipulations, we can get $\pi_{1}=\frac{\lambda_{1}\left(1-p_{1}+\lambda_{1}p_{1}p_{2}\right)}{p_{1}\left(1-\lambda_{1}\right)\left(1-\lambda_{1}p_{2}\right)}\pi_{0}.$ (19) Substituting from (19) into (17), we get $\epsilon_{1}=\frac{\lambda_{1}p_{2}}{1-\lambda_{1}p_{2}}\pi_{0}.$ (20) Writing the balance equation around $1_{F}$, we have $\begin{split}&\left(1-\lambda_{1}p_{1}\left(1-p_{2}\right)-\left(1-\lambda_{1}\right)(1-p_{1})\right)\pi_{1}=\\\ &\quad\lambda_{1}\pi_{0}+\lambda_{1}\epsilon_{1}+\left(1-\lambda_{1}\right)\epsilon_{2}+\left(1-\lambda_{1}\right)p_{1}\left(1-p_{2}\right)\pi_{2}.\end{split}$ (21) Around $2_{R}$, we have
$\epsilon_{2}=\lambda_{1}p_{1}p_{2}\pi_{1}+\left(1-\lambda_{1}\right)p_{1}p_{2}\pi_{2}.$ (22) To get the relation between $\pi_{1}$ and $\pi_{2}$, we can substitute for the values of $\epsilon_{1}$, $\pi_{0}$ and $\epsilon_{2}$ from equations (17), (18) and (22), respectively in equation (21); after some tedious manipulation, we get $\pi_{2}=\frac{\lambda_{1}\left(1-p_{1}+\lambda_{1}p_{1}p_{2}\right)}{p_{1}\left(1-\lambda_{1}\right)\left(1-\lambda_{1}p_{2}\right)}\pi_{1}.$ (23) Substituting from (23) into (22), we get $\epsilon_{2}=\frac{\lambda_{1}p_{2}}{1-\lambda_{1}p_{2}}\pi_{1}.$ (24) Note that the Markov chain is repeating from stage 2 till the end. For $k\geq 2$, we have the following relations. $\pi_{k}=\frac{\lambda_{1}\left(1-p_{1}+\lambda_{1}p_{1}p_{2}\right)}{p_{1}\left(1-\lambda_{1}\right)\left(1-\lambda_{1}p_{2}\right)}\pi_{k-1}.$ (25) $\epsilon_{k}=\frac{\lambda_{1}p_{2}}{1-\lambda_{1}p_{2}}\pi_{k-1}.$ (26) The last relation can be used to prove the following relation between $\epsilon_{k}$ and $\epsilon_{k-1}$. $\epsilon_{k}=\frac{\lambda_{1}\left(1-p_{1}+\lambda_{1}p_{1}p_{2}\right)}{p_{1}\left(1-\lambda_{1}\right)\left(1-\lambda_{1}p_{2}\right)}\epsilon_{k-1}.$ (27) The steady state distribution can now be written as follows. * • $\epsilon_{0}=0$. * • $\pi_{k}=\rho^{k}\pi_{0}$, $k\geq 1$ and $\rho=\frac{\lambda_{1}\left(1-p_{1}+\lambda_{1}p_{1}p_{2}\right)}{p_{1}\left(1-\lambda_{1}\right)\left(1-\lambda_{1}p_{2}\right)}$. * • $\epsilon_{1}=\frac{\lambda_{1}p_{2}}{1-\lambda_{1}p_{2}}\pi_{0}$. * • $\epsilon_{k}=\rho^{k-1}\epsilon_{1}$, $k\geq 2$. This steady state distribution can be easily checked to satisfy the balance equation at any general state (details are omitted since it is a rather straightforward, yet very tedious, procedure). To get the value of the steady state probabilities, we apply the following normalization requirement. 
$\begin{split}&\sum_{k=0}^{\infty}(\pi_{k}+\epsilon_{k})=1\\\ &\quad\quad\rightarrow\pi_{0}+\sum_{k=1}^{\infty}(\pi_{k}+\epsilon_{k})=\pi_{0}\left(1+\frac{\lambda_{1}p_{2}}{1-\lambda_{1}p_{2}}\right)\sum_{k=0}^{\infty}\rho^{k}=1,\end{split}$ (28) where $\rho=\frac{\lambda_{1}\left(1-p_{1}+\lambda_{1}p_{1}p_{2}\right)}{p_{1}\left(1-\lambda_{1}\right)\left(1-\lambda_{1}p_{2}\right)}$ as defined above. Note that for the steady state distribution to exist, i.e., for $\pi_{0}$ to be nonzero, we must have $\rho<1$, which is the stability condition for queue 1 in this dominant system. Therefore, the stability condition can be stated as $\rho<1\;\rightarrow\lambda_{1}<\frac{p_{1}}{1+p_{1}p_{2}}.$ (29) From the normalization condition in (28), we can get the value of $\pi_{0}$ as $\pi_{0}=\frac{p_{1}-\lambda_{1}(1+p_{1}p_{2})}{p_{1}(1-\lambda_{1})}.$ (30) In Dominant System 1, queue 2 will be served only in the states denoted by the subscript $F$ in Fig. 2, since in the retransmission states, denoted by the subscript $R$ in Fig. 2, queue 2 will be in the back-off mode. Hence, the service rate, $\mu_{2}$, for queue 2 in Dominant System 1 is given by $\begin{split}\mu_{2}&=p_{2}(1-\lambda_{1})\pi_{0}+p_{2}(1-p_{1})\lambda_{1}\pi_{0}+\sum_{k=1}^{\infty}p_{2}(1-p_{1})\pi_{k}\\\ &=p_{2}(1-p_{1}\lambda_{1})\pi_{0}+\sum_{k=1}^{\infty}p_{2}(1-p_{1})\pi_{k},\end{split}$ (31) where, in the $0_{F}$ state and under the assumption that arrivals occur at the beginning of the slot, queue 2 is served at rate $p_{2}(1-\lambda_{1})\pi_{0}$ when no arrival occurs (queue 1 is empty and does not attempt any random access) and at rate $p_{2}(1-p_{1})\lambda_{1}\pi_{0}$ when an arrival occurs at the slot beginning; in the other first-transmission states, queue 2 is served if it decides to access the medium (probability $p_{2}$) while queue 1 does not (probability $1-p_{1}$).
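The closed forms above can be verified numerically: with $\rho$ and $\pi_{0}$ as given, the distribution sums to one, the balance equation (17) holds, and summing the series in (31) term by term reproduces the bound $p_{2}(1-\lambda_{1}-\lambda_{1}p_{2})$ of Lemma III.1. A sketch (the series is truncated at an arbitrary depth):

```python
lam1, p1, p2 = 0.2, 0.6, 0.5

rho = lam1 * (1 - p1 + lam1 * p1 * p2) / (p1 * (1 - lam1) * (1 - lam1 * p2))
pi0 = (p1 - lam1 * (1 + p1 * p2)) / (p1 * (1 - lam1))        # Eq. (30)

# Geometric closed forms: pi_k = rho**k * pi0, eps_k = rho**(k-1) * eps_1.
eps1 = lam1 * p2 / (1 - lam1 * p2) * pi0
pi = [rho**k * pi0 for k in range(2000)]
eps = [0.0] + [rho**(k - 1) * eps1 for k in range(1, 2000)]

total = sum(pi) + sum(eps)                                   # normalization (28)
balance_17 = lam1 * p1 * p2 * pi[0] + (1 - lam1) * p1 * p2 * pi[1]  # Eq. (17)

# Service rate of queue 2, Eq. (31), summed term by term.
mu2 = p2 * (1 - p1 * lam1) * pi[0] + p2 * (1 - p1) * sum(pi[1:])
mu2_closed = p2 * (1 - lam1 - lam1 * p2)
```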
After some manipulation, we can write the expression for $\mu_{2}$ as $\mu_{2}=p_{2}(1-\lambda_{1}-\lambda_{1}p_{2}).$ (32) For the stability of queue 2, we must have $\lambda_{2}<\mu_{2}=p_{2}(1-\lambda_{1}-\lambda_{1}p_{2}).$ (33) ## Appendix B Proof of Lemma III.2 In this Appendix, we provide a proof for Lemma III.2. We start by calculating the steady state distribution for the Markov chain shown in Fig. 3. The state transition matrix, $\mathbf{\Phi}$, of the Markov chain shown in Fig. 3 can be written as $\mathbf{\Phi}=\left(\begin{array}[]{ ccccc}\mathbf{B}&\mathbf{A}_{0}&\mathbf{0}&\mathbf{0}&\cdots\\\ \mathbf{A}_{2}&\mathbf{A}_{1}&\mathbf{A}_{0}&\mathbf{0}&\cdots\\\ \mathbf{0}&\mathbf{A}_{2}&\mathbf{A}_{1}&\mathbf{A}_{0}&\cdots\\\ \mathbf{0}&\mathbf{0}&\mathbf{A}_{2}&\mathbf{A}_{1}&\cdots\\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)$ (34) where $\begin{split}\mathbf{B}&=\left(\begin{array}[]{ cc}(1-\lambda_{2})+\lambda_{2}(1-p_{1})p_{2}&0\\\ 0&0\end{array}\right),\\\ \mathbf{A}_{0}&=\left(\begin{array}[]{ cc}(1-\lambda_{2})(1-p_{1})p_{2}&0\\\ 0&0\end{array}\right),\\\ \mathbf{A}_{1}&=\left(\begin{array}[]{ cc}\lambda_{2}p_{2}(1-p_{1})+(1-\lambda_{2})(1-p_{2})&1-\lambda_{2}\\\ (1-\lambda_{2})p_{1}p_{2}&0\end{array}\right),\\\ \mathbf{A}_{2}&=\left(\begin{array}[]{ cc}\lambda_{2}(1-p_{2})&\lambda_{2}\\\ \lambda_{2}p_{1}p_{2}&0\end{array}\right).\end{split}$ The steady state distribution vector is given by $\mathbf{v}=[\pi_{0}^{\prime}\;\epsilon_{0}^{\prime}\;\pi_{1}^{\prime}\;\epsilon_{1}^{\prime}\;\pi_{2}^{\prime}\;\epsilon_{2}^{\prime}\;\cdots]^{T}$ and $\mathbf{v}=\mathbf{\Phi}\mathbf{v}$. The state transition matrix $\mathbf{\Phi}$ is a block-tridiagonal matrix; therefore the Markov chain shown in Fig. 3 is a homogeneous quasi birth-and-death (QBD) Markov chain [9]. Note that to make the state transition matrix block-tridiagonal, we have added a transition from the $0_{\text{OFF}}$ state to the $1_{\text{ON}}$ state, as shown in Fig. 7; this preserves the structure of the state transitions between the different stages of the Markov chain. Note that adding this transition affects neither the stationary distribution of the Markov chain nor the balance equations, since $\epsilon_{0}^{\prime}=0$ even with the added transition: the Markov chain never enters the $0_{\text{OFF}}$ state (the analysis presented here could also have been used for the Markov chain shown in Fig. 2; however, the structure of that chain allowed for the simpler approach adopted in Appendix A). Figure 7: The queue 2 Markov chain with added transition between $0_{\text{OFF}}$ and $1_{\text{ON}}$ to make the state transition matrix a block-tridiagonal matrix. Define the vector $\mathbf{v}_{k}^{\prime}=[\pi_{k}^{\prime}\;\epsilon_{k}^{\prime}]^{T}$. Note that $\mathbf{v}_{0}^{\prime}=[\pi_{0}^{\prime}\;0]^{T}$. The steady state distribution of the Markov chain shown in Fig. 3 satisfies the following equation [9] $\mathbf{v}^{\prime}_{k}=\mathbf{R}^{k}\mathbf{v}_{0}^{\prime},\quad k\geq 1,$ (35) where the $2\times 2$ matrix $\mathbf{R}$ is given by the solution to the following equation. $\mathbf{A}_{2}+\left(\mathbf{A}_{1}-\mathbf{I}_{2}\right)\mathbf{R}+\mathbf{A}_{0}\mathbf{R}^{2}=\mathbf{0}_{2\times 2},$ (36) where $\mathbf{I}_{2}$ is the $2\times 2$ identity matrix and $\mathbf{0}_{2\times 2}$ is the all-zeros $2\times 2$ matrix. To get the stationary distribution, we have to find the matrix $\mathbf{R}=\left(\begin{array}[]{ cc}r_{11}&r_{12}\\\ r_{21}&r_{22}\end{array}\right).$ Note that $\mathbf{v}_{1}^{\prime}=\mathbf{R}\mathbf{v}_{0}^{\prime}$, where $\mathbf{v}_{0}^{\prime}=[\pi_{0}^{\prime}\;0]^{T}$ and $\mathbf{v}_{1}^{\prime}=[\pi_{1}^{\prime}\;\epsilon_{1}^{\prime}]^{T}$.
Therefore, we have $\begin{split}r_{11}=\frac{\pi_{1}^{\prime}}{\pi_{0}^{\prime}}\text{ and }r_{21}=\frac{\epsilon_{1}^{\prime}}{\pi_{0}^{\prime}}.\end{split}$ (37) Writing the balance equation around the $0_{\text{ON}}$ state in Fig. 3, we have $\begin{split}&(\lambda_{2}p_{1}p_{2}+\lambda_{2}(1-p_{2}))\pi_{0}^{\prime}=(1-\lambda_{2})(1-p_{1})p_{2}\pi_{1}^{\prime}\\\ &\quad\rightarrow\pi_{1}^{\prime}=\frac{\lambda_{2}(1-p_{2}+p_{1}p_{2})}{(1-\lambda_{2})(1-p_{1})p_{2}}\pi_{0}^{\prime}.\end{split}$ (38) Therefore, we have $\begin{split}r_{11}=\frac{\lambda_{2}(1-p_{2}+p_{1}p_{2})}{(1-\lambda_{2})(1-p_{1})p_{2}}.\end{split}$ (39) Writing the balance equation around $1_{\text{OFF}}$, we have $\epsilon_{1}^{\prime}=\lambda_{2}p_{1}p_{2}\pi_{0}^{\prime}+(1-\lambda_{2})p_{1}p_{2}\pi_{1}^{\prime}\rightarrow\epsilon_{1}^{\prime}=\frac{\lambda_{2}p_{1}}{1-p_{1}}\pi_{0}^{\prime}.$ (40) Therefore, we have $r_{21}=\frac{\lambda_{2}p_{1}}{1-p_{1}}.$ (41) To get the values of $r_{12}$ and $r_{22}$, we consider the transition across the border shown in Fig. 8. For the Markov chain to be positive recurrent, the probability of crossing the border in the two directions must be the same [10]; hence, we have $\begin{split}(1-\lambda_{2})(1-p_{1})p_{2}\pi_{2}^{\prime}=(\lambda_{2}p_{1}p_{2}+\lambda_{2}(1-p_{2}))\pi_{1}^{\prime}+\lambda_{2}\epsilon_{1}^{\prime}.\end{split}$ (42) But we have $\mathbf{v}_{2}^{\prime}=\mathbf{R}\mathbf{v}_{1}^{\prime}$, from which we have $\pi_{2}^{\prime}=r_{11}\pi_{1}^{\prime}+r_{12}\epsilon_{1}^{\prime}$; using (39) and (42), we can easily show that $\begin{split}r_{12}=\frac{\lambda_{2}}{(1-\lambda_{2})(1-p_{1})p_{2}}.\end{split}$ (43) Figure 8: The segment of the Markov chain used to calculate the values of $r_{12}$ and $r_{22}$.
Finally, the balance equation around $2_{\text{OFF}}$ can be written as $\epsilon_{2}^{\prime}=\lambda_{2}p_{1}p_{2}\pi_{1}^{\prime}+(1-\lambda_{2})p_{1}p_{2}\pi_{2}^{\prime}=r_{21}\pi_{1}^{\prime}+r_{22}\epsilon_{1}^{\prime}.$ (41) Substituting for $\pi_{2}^{\prime}$ from (42), we can easily show that $r_{22}=\frac{\lambda_{2}p_{1}}{1-p_{1}}.$ (42) Now the matrix $\mathbf{R}$ is given by $\mathbf{R}=\left(\begin{array}[]{cc}\frac{\lambda_{2}(1-p_{2}+p_{1}p_{2})}{(1-\lambda_{2})(1-p_{1})p_{2}}&\frac{\lambda_{2}}{(1-\lambda_{2})(1-p_{1})p_{2}}\\\ \frac{\lambda_{2}p_{1}}{1-p_{1}}&\frac{\lambda_{2}p_{1}}{1-p_{1}}\end{array}\right),$ (43) which can be easily checked to satisfy the balance equation given by (36). To get the stationary distribution of the Markov chain shown in Fig. 3, we apply the following normalization requirement. $\sum_{k=0}^{\infty}(\pi_{k}^{\prime}+\epsilon_{k}^{\prime})=1\rightarrow[1\;1]\left(\sum_{k=0}^{\infty}\mathbf{R}^{k}\right)\mathbf{v}_{0}^{\prime}=1.$ (44) For the summation $\left(\sum_{k=0}^{\infty}\mathbf{R}^{k}\right)$ to converge, the spectral radius of the matrix $\mathbf{R}$, $\textbf{sp}(\mathbf{R})$, must be less than one [9] (the spectral radius of a matrix is the maximum over the magnitudes of its eigenvalues).
From (43), we can easily get $\textbf{sp}(\mathbf{R})$ to be given by $\begin{split}\textbf{sp}(\mathbf{R})=\frac{\lambda_{2}\left(1-p_{2}-\lambda_{2}p_{1}p_{2}+2p_{1}p_{2}+\sqrt{1-2p_{2}+p_{2}^{2}+4p_{1}p_{2}-2\lambda_{2}p_{1}p_{2}-2\lambda_{2}p_{1}p_{2}^{2}+\lambda_{2}^{2}p_{1}^{2}p_{2}^{2}}\right)}{2p_{2}\left(1-\lambda_{2}-p_{1}+\lambda_{2}p_{1}\right)}.\end{split}$ (45) The requirement that $\textbf{sp}(\mathbf{R})<1$ can be used in the last expression to get the stability condition of the second queue arrival rate $\lambda_{2}$ as $\begin{split}\lambda_{2}<\frac{p_{2}(1-p_{1})}{1+p_{1}p_{2}}.\end{split}$ (46) Going back to the normalization requirement in (44), we have $[1\;1]\left(\sum_{k=0}^{\infty}\mathbf{R}^{k}\right)\mathbf{v}_{0}^{\prime}=[1\;1]\left(\mathbf{I}_{2}-\mathbf{R}\right)^{-1}\left[\begin{array}[]{c}\pi_{0}^{\prime}\\\ 0\end{array}\right]=1.$ (47) From the last expression, we can easily prove that $\pi_{0}^{\prime}$ is given by $\pi_{0}^{\prime}=\frac{p_{2}-\lambda_{2}-p_{1}p_{2}-\lambda_{2}p_{1}p_{2}}{(1-\lambda_{2})(1-p_{1})p_{2}}.$ (48) Note that the requirement that $\pi_{0}^{\prime}>0$, i.e. a non-zero empty queue probability, is satisfied if $\lambda_{2}<\frac{p_{2}(1-p_{1})}{1+p_{1}p_{2}}$, which is the queue stability condition. 
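The closed-form $\mathbf{R}$ of Eq. (43) and $\pi_{0}^{\prime}$ of Eq. (48) can be checked numerically: the recursion $\mathbf{v}_{2}^{\prime}=\mathbf{R}\mathbf{v}_{1}^{\prime}$ reproduces the balance across the border, the normalization recovers Eq. (48), and the spectral radius of $\mathbf{R}$ crosses one at the stability threshold of Eq. (46). A dependency-free sketch with the $2\times 2$ algebra done by hand:

```python
import cmath

def R_matrix(p1, p2, lam2):
    """Closed-form R of Eq. (43)."""
    d = (1 - lam2) * (1 - p1) * p2
    return ((lam2 * (1 - p2 + p1 * p2) / d, lam2 / d),
            (lam2 * p1 / (1 - p1),          lam2 * p1 / (1 - p1)))

def spectral_radius(M):
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

p1, p2, lam2 = 0.5, 0.5, 0.1
(a, b), (c, d) = R_matrix(p1, p2, lam2)

# pi0' from the normalization [1 1](I - R)^{-1} v0' = 1, Eqs. (47)-(48).
det_ImR = (1 - a) * (1 - d) - b * c
col_sum = ((1 - d) + c) / det_ImR          # first-column sum of (I - R)^{-1}
pi0_num = 1.0 / col_sum
pi0_closed = (p2 - lam2 - p1 * p2 - lam2 * p1 * p2) / ((1 - lam2) * (1 - p1) * p2)

# Border balance, Eq. (42), with v1' = R v0' and v2' = R v1' (pi0' = 1).
pi1, eps1 = a, c
pi2 = a * pi1 + b * eps1
lhs = (1 - lam2) * (1 - p1) * p2 * pi2
rhs = (lam2 * p1 * p2 + lam2 * (1 - p2)) * pi1 + lam2 * eps1

threshold = p2 * (1 - p1) / (1 + p1 * p2)  # stability bound, Eq. (46)
sp_below = spectral_radius(R_matrix(p1, p2, 0.9 * threshold))
sp_above = spectral_radius(R_matrix(p1, p2, 1.1 * threshold))
```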
The service rate, $\mu_{1}^{\prime}$, for queue 1 in Dominant System 2 can now be expressed as $\begin{split}\mu_{1}^{\prime}&=p_{1}(1-\lambda_{2})\pi_{0}^{\prime}+p_{1}(1-p_{2})\lambda_{2}\pi_{0}^{\prime}+p_{1}(1-p_{2})\sum_{k=1}^{\infty}\pi_{k}^{\prime}+\sum_{k=1}^{\infty}\epsilon_{k}^{\prime}\\\ &=p_{1}(1-\lambda_{2}p_{2})\pi_{0}^{\prime}+[p_{1}(1-p_{2})\;1]\left(\sum_{k=1}^{\infty}\mathbf{R}^{k}\right)\left[\begin{array}[]{c}\pi_{0}^{\prime}\\\ 0\end{array}\right]\\\ &=p_{1}(1-\lambda_{2}p_{2})\pi_{0}^{\prime}+[p_{1}(1-p_{2})\;1]\;\mathbf{R}\left(\mathbf{I}_{2}-\mathbf{R}\right)^{-1}\left[\begin{array}[]{c}\pi_{0}^{\prime}\\\ 0\end{array}\right]\\\ &=\frac{p_{1}(1-p_{1}-\lambda_{2}p_{1})}{(1-p_{1})},\end{split}$ (49) where in the OFF states, queue 1 is served with probability 1 since queue 2 will be in the back-off mode. For the stability of queue 1 in Dominant System 2 we must have $\lambda_{1}<\mu_{1}^{\prime}=\frac{p_{1}(1-p_{1}-\lambda_{2}p_{1})}{(1-p_{1})},$ (50) which completes the proof. ## References * [1] A.K. Sadek, K.J.R. Liu, and Anthony Ephremides, “Cognitive multiple access via cooperation: Protocol design and performance analysis,” Information Theory, IEEE Transactions on, vol. 53, no. 10, pp. 3677–3696, 2007. * [2] S. Kompella, Gam D. Nguyen, J.E. Wieselthier, and Anthony Ephremides, “Stable throughput tradeoffs in cognitive shared channels with cooperative relaying,” in INFOCOM, 2011 Proceedings IEEE, 2011, pp. 1961–1969. * [3] Jeongho Jeon and Anthony Ephremides, “The stability region of random multiple access under stochastic energy harvesting,” in Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on, 2011, pp. 1796–1800. * [4] A. Fanous and Anthony Ephremides, “Effect of secondary nodes on the primary’s stable throughput in a cognitive wireless network,” in Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on, 2012, pp. 1807–1811. * [5] A. Fanous and A.
Ephremides, “Stable throughput in a cognitive wireless network,” Selected Areas in Communications, IEEE Journal on, vol. 31, no. 3, pp. 523–533, 2013. * [6] R. Rao and A. Ephremides, “On the Stability of Interacting Queues in a Multi-Access System,” IEEE Trans. Info. Theory, vol. 34, pp. 918–930, Sept. 1988. * [7] W. Szpankowski, “Stability Conditions for Some Multiqueue Distributed System: Buffered Random Access Systems,” Adv. Appl. Probab., vol. 26, pp. 498–515, 1994. * [8] R. M. Loynes, “The Stability of a Queue with Non-Independent Interarrival and Service Times,” Proc. Cambridge Philos. Soc., pp. 497–520, 1962. * [9] G. Latouche and V. Ramaswami, Introduction to Matrix Analytic Methods in Stochastic Modeling, ASA-SIAM Series on Statistics and Applied Probability. Society for Industrial and Applied Mathematics, 1999. * [10] R.G. Gallager, Discrete stochastic processes, vol. 101, Kluwer Academic Publishers, 1996.
# Closest Wannier functions to a given set of localized orbitals Taisuke Ozaki Institute for Solid State Physics, The University of Tokyo, Kashiwa 277-8581, Japan ###### Abstract A non-iterative method is presented to calculate the closest Wannier functions (CWFs) to a given set of localized guiding functions, such as atomic orbitals, hybrid atomic orbitals, and molecular orbitals, based on minimization of a distance measure function. It is shown that the minimization is directly achieved by a polar decomposition of a projection matrix via singular value decomposition, making iterative calculations and complications arising from the choice of the gauge irrelevant. The disentanglement of bands is inherently addressed by introducing a smoothly varying window function and a greater number of Bloch functions, even for isolated bands. In addition to atomic and hybrid atomic orbitals, we introduce embedded molecular orbitals in molecules and bulks as the guiding functions, and demonstrate that the Wannier interpolated bands accurately reproduce the targeted conventional bands of a wide variety of systems including Si, Cu, the TTF-TCNQ molecular crystal, and a topological insulator of Bi2Se3. We further show the usefulness of the proposed method in calculating effective atomic charges. These numerical results not only establish our proposed method as an efficient alternative for calculating WFs, but also suggest that the concept of CWF can serve as a foundation for developing novel methods to analyze electronic structures and calculate physical properties. ## I INTRODUCTION The Wannier functions (WFs) Wannier37 ; Kohn59 ; Marzari12 play a central role in analyzing electronic structures of real materials and advancing electronic structure methods alongside density functional theory (DFT) Hohenberg1964 and the other electronic structure theories Hamann09 ; Lechermann06 . 
A widely adopted method for calculating WFs involves maximizing their localization, which can be reformulated as minimizing the spread function characterizing the variance of WFs in real space Marzari97 . The concept of maximal localization leads to an elegant formulation, resulting in maximally localized Wannier functions (MLWFs), and the compact representation of the Hamiltonian that follows allows us to analyze the electronic structures of real materials using a localized orbital picture. However, it is also recognized that the minimization of the spread function can often encounter local minima, particularly in large-scale systems having complicated electronic structures Marzari12 . To overcome this challenge, methods aimed at automated high-throughput Wannierisation have been developed Vitale20 ; Damle18 ; Qiao23 . In addition, other methods for generating (nonorthogonal) WFs have been proposed based on projection methods Ku02 ; Lu2004 ; Qian08 , which produce compact atomic-like orbitals. The orthogonalization of such nonorthogonal orbitals obtained by the projection can be achieved by the Löwdin orthogonalization procedure Lowdin1950 ; Cloizeaux1964 . These orthogonalized orbitals are utilized as an initial guess of WFs in the minimization of the spread function to obtain MLWFs Marzari97 ; Mostofi2008 . Apart from their role as an initial guess, it is worth noting that the Löwdin-orthogonalized orbitals possess an important variational property: they minimize the sum of squared distances between the orthogonal and the original nonorthogonal orbitals in the Hilbert space Carlson57 . As long as the original nonorthogonal orbitals are localized, the localization of the Löwdin-orthogonalized orbitals in real space is guaranteed in the sense of this distance minimization. In quantum chemistry, this property is leveraged to develop methods for generating localized orthogonal orbitals.
These methods, which include the natural atomic and bond orbital method Reed88 and the intrinsic atomic orbital method Knizia13 , involve obtaining orthogonalized orbitals through Löwdin-type orthogonalization, giving heavy weight to occupied states via the density matrix. However, in solid state physics the variational property inherent in the Löwdin orthogonalization has not been fully explored yet. A reason for this is that the Löwdin orthogonalization requires calculation of $S^{-1/2}$ for the overlap matrix $S$, which causes numerical instability for nearly ill-conditioned matrices. Due to this difficulty, advancements in methodologies along these lines appear to be slow. Our study aims to develop an efficient and robust method to generate the closest Wannier functions (CWFs) to a given set of localized guiding functions. We will demonstrate that exploiting the variational property leads to a versatile method that eliminates the need for iterative optimization and avoids complications related to gauge choice. The structure of this paper is as follows: In Section II, we present the theory of calculating CWFs. Section III provides a detailed discussion on the implementation of the method. A series of benchmark calculations are presented in Section IV. Finally, in Section V, we summarize the theory of CWFs and suggest a potential role for CWFs as a foundation for developing novel methods. ## II THEORY Let us start by introducing Bloch functions $\\{\phi\\}$, which can be obtained by solving the Kohn-Sham (KS) equation Kohn1965 within density functional theory (DFT) Hohenberg1964 , normalized as $\displaystyle\langle\phi_{{\bf k}_{1}\mu}|\phi_{{\bf k}_{2}\nu}\rangle=N_{\rm BC}\delta_{{\bf k}_{1}{\bf k}_{2}}\delta_{\mu\nu},$ (1) where ${\bf k}$ and $\mu$ are the k-vector and the band index, respectively, and $N_{\rm BC}$ is the number of primitive cells in the Born-von Karman (BvK) boundary condition.
We consider a projection of a localized guiding function $Q$ onto the Bloch functions $\\{\phi\\}$ weighted with a window function $w(\varepsilon)$ as $\displaystyle|L_{{\bf R}p}\rangle=\frac{1}{N_{\rm BC}}\sum_{{{\bf k},\mu}}{\rm e}^{-{\rm i}{\bf k}\cdot{\bf R}}|\phi_{{\bf k}\mu}\rangle a_{{\bf k}\mu,p}$ (2) with $\displaystyle a_{{\bf k}\mu,p}=w(\varepsilon_{{\bf k}\mu})\langle\phi_{{\bf k}\mu}|Q_{{\bf 0}p}\rangle,$ (3) where ${\bf R}$ and $p$ are a translational lattice vector and the index of the localized orbital, respectively, $\varepsilon_{{\bf k}\mu}$ is the eigenvalue of the KS equation, and $Q_{{\bf 0}p}\equiv Q_{{\bf R}p}~{}({\bf R}={\bf 0})$. In the summation of Eq. (2), the number of k-points included is $N_{\rm BC}$, and the number of Bloch functions included is $N_{\rm band}$ per k-point. Throughout the paper we do not consider spin dependence in the formulation, for the sake of simplicity, but the generalization is straightforward. The localized functions $\\{Q_{{\bf R}p}\\}$, such as atomic orbitals, hybrid orbitals, or molecular orbitals (MOs), are assumed to be localized in the unit cell specified by ${\bf R}$. When the window function $w(\varepsilon)$ is taken to be unity, Eq. (2) is nothing but an expansion of $Q_{{\bf R}p}$ with Bloch functions from the identity relation $\frac{1}{N_{\rm BC}}\sum_{{{\bf k}\mu}}|\phi_{{\bf k}\mu}\rangle\langle\phi_{{\bf k}\mu}|=\hat{I}$. To introduce an expansion of $Q_{{\bf R}p}$ by the Bloch functions $\\{\phi_{{\bf k}\mu}\\}$ in a subspace, we use the following window function $w(\varepsilon)$: $\displaystyle w(\varepsilon)=\frac{1-\exp(x_{0}+x_{1})}{\left(1+\exp(x_{0})\right)\left(1+\exp(x_{1})\right)}+\delta$ (4) with the definition of $x_{0}\equiv\beta(\varepsilon_{0}-\varepsilon)$ and $x_{1}\equiv\beta(\varepsilon-\varepsilon_{1})$, where $\beta=\frac{1}{k_{\rm B}T}$ with the Boltzmann constant $k_{\rm B}$ and a temperature $T$. The window function of Eq.
(4) is obtained by subtracting 1 from the sum of the two Fermi-type functions $1/(1+\exp(x_{0}))$ and $1/(1+\exp(x_{1}))$. The last term $\delta$ is a small constant, e.g., $10^{-12}$, which is introduced to avoid ill-conditioning of the matrix consisting of the elements $a_{{\bf k}\mu,p}$ of Eq. (3), as discussed later on. As shown in Fig. 1, the parameters $\varepsilon_{0}$ and $\varepsilon_{1}$ ($\varepsilon_{0}<\varepsilon_{1}$) determine the range of energy within which the Bloch states are included in the expansion of Eq. (2) with a large weight, and the temperature $T$ controls the degree of smearing around $\varepsilon_{0}$ and $\varepsilon_{1}$ in $w(\varepsilon)$; parameterizing the smearing by a temperature allows its degree to be understood intuitively. It is noted that the choice of the window function of Eq. (4) is not unique, and other choices should give almost equivalent results as long as they are made properly. Figure 1: Window functions of Eq. (4) with $k_{\rm B}T$=0.1, 1.0, and 3.0 eV. $\varepsilon_{0}$ and $\varepsilon_{1}$ were set to be -5.0 and 5.0 eV, respectively, and $\delta=10^{-12}$ was used. The function $L_{{\bf R}p}$ defined by Eq. (2) must be similar to the localized function $Q_{{\bf R}p}$, but the functions $\\{L_{{\bf R}p}\\}$ are in general not orthogonal to each other. We now consider generating a set of closest Wannier functions (CWFs) $\\{W_{{\bf R}p}\\}$ to the set of localized functions $\\{L_{{\bf R}p}\\}$ in the sense that the sum of the squared distances between $L$ and $W$ in the Hilbert space is minimized. In analogy with Eq.
(2), noting that the CWFs can be expressed Marzari12 as $\displaystyle|W_{{\bf R}p}\rangle=\frac{1}{N_{\rm BC}}\sum_{{{\bf k},\mu}}{\rm e}^{-{\rm i}{\bf k}\cdot{\bf R}}|\phi_{{\bf k}\mu}\rangle b_{{\bf k}\mu,p},$ (5) and defining the residual function $R$: $\displaystyle|R_{{\bf R}p}\rangle=|L_{{\bf R}p}\rangle-|W_{{\bf R}p}\rangle,$ (6) the distance measure (DM) function $F$ we minimize is given by $\displaystyle F[B]$ $\displaystyle=$ $\displaystyle\sum_{p}\langle R_{{\bf 0}p}|R_{{\bf 0}p}\rangle,$ (7) $\displaystyle=$ $\displaystyle\frac{1}{N_{\rm BC}}\sum_{{\bf k}}X[B,{\bf k}]$ with $\displaystyle X[B,{\bf k}]={\rm tr}\left[\left(A^{{\dagger}}({\bf k})-B^{{\dagger}}({\bf k})\right)\left(A({\bf k})-B({\bf k})\right)\right],$ (8) where the elements of the matrices $A({\bf k})$ and $B({\bf k})$ are given by $a_{{\bf k}\mu,p}$ and $b_{{\bf k}\mu,p}$, respectively. The second line of Eq. (7) is obtained by considering the orthonormality of Eq. (1), and $X[B,{\bf k}]$ of Eq. (8) is regarded as the squared Frobenius norm of $\left(A({\bf k})-B({\bf k})\right)$ Golub1996 . The same Bloch functions as in Eq. (2) are included in the summation of Eq. (5). The number of CWFs to be generated is $N_{\rm CWF}$ per unit cell, which equals the number of guiding functions per cell, and $N_{\rm CWF}$ should be smaller than or equal to $N_{\rm band}$ to guarantee the linear independence of the subspace spanned by the CWFs, so that the matrices $A({\bf k})$ and $B({\bf k})$ are of size $N_{\rm band}\times N_{\rm CWF}$. Assuming $B^{{\dagger}}({\bf k})B({\bf k})=I$, where $I$ is of size $N_{\rm CWF}\times N_{\rm CWF}$, the CWFs are guaranteed to form a set of orthonormal functions. Under this constraint, we consider the optimization of the DM function $F$. Furthermore, the matrix $B({\bf k})$ is regarded as a part of a unitary matrix of size $N_{\rm band}\times N_{\rm band}$, and will be referred to as a partial unitary matrix in subsequent discussions.
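Before proceeding to the minimization, note that the window function of Eq. (4) is simple to evaluate numerically. Below is a minimal sketch (illustrative only, not the OpenMX implementation) using the parameter values of Fig. 1; it uses the algebraically equivalent form $1/(1+e^{x_{0}})+1/(1+e^{x_{1}})-1$, rewritten with tanh to avoid overflow of the exponentials.

```python
import numpy as np

def window(eps, eps0, eps1, kT, delta=1.0e-12):
    """Window function of Eq. (4): a smoothed indicator of [eps0, eps1].

    Written as -[tanh(x0/2) + tanh(x1/2)]/2 + delta, which is
    algebraically identical to Eq. (4) but numerically stable.
    """
    x0 = (eps0 - eps) / kT
    x1 = (eps - eps1) / kT
    return -0.5 * (np.tanh(0.5 * x0) + np.tanh(0.5 * x1)) + delta

# Parameters of Fig. 1: eps0 = -5 eV, eps1 = 5 eV, kB*T = 1 eV.
eps = np.linspace(-10.0, 10.0, 201)
w = window(eps, -5.0, 5.0, 1.0)
# w is close to 1 inside the window, 0.5 at the edges eps0 and eps1,
# and decays to the floor value delta far outside the window.
```

The floor value `delta` is what keeps all singular values of $A({\bf k})$ strictly positive, as discussed below.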
The minimum of the DM function of Eq. (7) is obtained by choosing $B({\bf k})$ as $U({\bf k})$, which is calculated by the polar decomposition $A({\bf k})=U({\bf k})P({\bf k})$, where $U({\bf k})$ and $P({\bf k})$ are unitary and hermitian, respectively Fan1955 . Let us prove the statement below. The polar decomposition of $A({\bf k})$ is obtained via the singular value decomposition (SVD) of $A({\bf k})$ as $\displaystyle A=W\Sigma V^{{\dagger}}=WV^{{\dagger}}V\Sigma V^{{\dagger}}=UP,$ (9) where we dropped the dependency on ${\bf k}$ for simplicity of notation; hereafter we indicate the dependency only when necessary. $W$ and $V$ are the left and right singular matrices, of size $N_{\rm band}\times N_{\rm CWF}$ and $N_{\rm CWF}\times N_{\rm CWF}$, respectively, and $\Sigma$ is the diagonal matrix of singular values, of size $N_{\rm CWF}\times N_{\rm CWF}$. Note that $U\equiv WV^{{\dagger}}$ and $P\equiv V\Sigma V^{{\dagger}}$. It is worth mentioning that $W$ and $U$ are partial unitary matrices satisfying $W^{{\dagger}}W=I$ ($WW^{{\dagger}}\neq I$) and $U^{{\dagger}}U=I$ ($UU^{{\dagger}}\neq I$), and that $V$ is a full unitary matrix satisfying $V^{{\dagger}}V=I$ ($VV^{{\dagger}}=I$). We evaluate $X$ of Eq. (8) for each ${\bf k}$ with both the matrix $U$ obtained by the polar decomposition and an arbitrary partial unitary matrix $B$, and calculate the difference as $\displaystyle X[U]-X[B]$ $\displaystyle=$ $\displaystyle 2{\rm tr}\left[\frac{1}{2}\left(A^{{\dagger}}B+B^{{\dagger}}A\right)-P\right],$ (10) $\displaystyle=$ $\displaystyle 2{\rm tr}\left[\Sigma D-\Sigma\right],$ $\displaystyle=$ $\displaystyle 2\sum_{n}\sigma_{n}\left(d_{nn}-1\right)$ with $\displaystyle D=\frac{1}{2}\left(V^{{\dagger}}U^{{\dagger}}BV+V^{{\dagger}}B^{{\dagger}}UV\right).$ (11) The second line of Eq.
(10) is derived by using the polar decomposition of $A$ and $P=V\Sigma V^{{\dagger}}$, and $\sigma_{n}$ and $d_{nn}$ in the third line are the diagonal elements of $\Sigma$ and $D$, respectively. The upper bound of the diagonal elements of the hermitian matrix $D$ is found to be unity, since the matrix $D$ consists of the symmetrized product of the partial unitary matrices $V^{{\dagger}}U^{{\dagger}}$ ($V^{{\dagger}}B^{{\dagger}}$) and $BV$ ($UV$). Another proof of the upper bound, based on the Cauchy-Schwarz inequality, is given in the appendix. By considering the upper bound of the diagonal elements of $D$ and $0\leq\sigma_{n}$, the third line of Eq. (10) leads to the following inequality: $\displaystyle X[U]\leq X[B].$ (12) The equality of Eq. (12) holds if $B=U$. If some of the $\sigma_{n}$ are zero, the corresponding $d_{nn}$ in Eq. (10) can be chosen arbitrarily, and we see from Eq. (11) that $U$ is then not uniquely determined. If all the singular values $\sigma_{n}$ are positive, all the $d_{nn}$ must be unity when $X[U]=X[B]$. The uniqueness of the solution can be confirmed as follows: Since the matrix $D$ consists of the products of the partial unitary matrices as discussed above, the case in which all the diagonal elements $d_{nn}$ are unity occurs when $\displaystyle U^{{\dagger}}B+B^{{\dagger}}U=2I,$ (13) which is derived by equating $D=I$ and multiplying $V$ and $V^{{\dagger}}$ from the left and right of the equation, respectively. By noting again that $B$ and $U$ in Eq. (13) are partial unitary matrices, it is found that Eq. (13) holds if and only if $B=U$. Thus, we have proven the statement that the minimum of the DM function of Eq. (7) is uniquely attained at $B=U$ as long as all the singular values $\sigma_{n}$ are positive, since $F[B]$ of Eq. (7) consists of the sum of $X[B,{\bf k}]$ over k. The uniqueness of $U$ itself is related to that of the SVD of the matrix $A$ of Eq. (3).
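In matrix terms, the minimization just proven is an orthogonal Procrustes problem, and it is easy to verify numerically. The sketch below is illustrative only: a random complex matrix stands in for $A({\bf k})$ of Eq. (3). It computes $U=WV^{\dagger}$ with NumPy's SVD, checks that $X[U]=\sum_{n}(\sigma_{n}-1)^{2}$, and confirms that random partial unitary matrices $B$ never do better.

```python
import numpy as np

rng = np.random.default_rng(0)
n_band, n_cwf = 8, 4

# Random complex stand-in for the matrix A(k) built from Eq. (3).
A = rng.standard_normal((n_band, n_cwf)) + 1j * rng.standard_normal((n_band, n_cwf))

# Polar factor U = W V^dagger via the SVD of A, Eq. (9).
W, sigma, Vh = np.linalg.svd(A, full_matrices=False)
U = W @ Vh                                   # partial unitary: U^dagger U = I

def X(B):
    """Squared Frobenius distance of Eq. (8) for this single k-point."""
    return np.linalg.norm(A - B, 'fro') ** 2

# At the minimum, X[U] equals sum_n (sigma_n - 1)^2.
assert np.isclose(X(U), np.sum((sigma - 1.0) ** 2))

# Any other partial unitary B does no better, Eq. (12); build trial B's by
# QR-orthonormalizing random perturbations of U.
for _ in range(20):
    B, _ = np.linalg.qr(U + 0.3 * rng.standard_normal((n_band, n_cwf)))
    assert X(B) >= X(U) - 1e-10
```

The same $U$ is what the Löwdin route discussed below would give as $AS^{-1/2}$, but no inverse square root of the overlap matrix is ever formed here, which is what makes this route robust when $A$ is nearly ill-conditioned.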
If $A$ has $N_{\rm CWF}$ positive singular values which are non-degenerate, the SVD is uniquely determined except for the non-unique phases in $W$ and $V$. Even if there are degenerate singular values, the matrix $U$ is invariant, since the freedom of the unitary transformation $K$ for the degenerate left and right singular vectors is canceled out as $U=WKK^{{\dagger}}V^{{\dagger}}=WV^{{\dagger}}$ with $KK^{{\dagger}}=I$. Further, noting that the non-unique phases in $W$ and $V$ are canceled out when the matrix $U$ is computed as $WV^{{\dagger}}$, we conclude that $U$ is uniquely determined if $A$ has $N_{\rm CWF}$ positive singular values. Violation of the positivity of the singular values can be avoided by the small constant $\delta$ in Eq. (4). On the other hand, if $\delta$ in Eq. (4) is set to 0 and a narrow window, including fewer than $N_{\rm CWF}$ eigenstates for a given ${\bf k}$, is used with a small $k_{\rm B}T$, the matrix $A$ has $N_{\rm PSV}$ positive singular values, where $N_{\rm PSV}<N_{\rm CWF}$. In this case, a subspace of dimension $\left(N_{\rm CWF}-N_{\rm PSV}\right)$ is arbitrarily chosen to form $W$ and $V$ of dimension $N_{\rm CWF}$ in addition to the subspace of dimension $N_{\rm PSV}$, which breaks the symmetry of the CWFs. This situation can be avoided by introducing the small constant $\delta$ in the window function of Eq. (4). This simple treatment restores the symmetry of the CWFs, allowing us to calculate symmetry-preserving CWFs for a wide variety of choices of the energy range of the window function. It is also noted that the non-unique phase of the Bloch functions is canceled out via the polar decomposition of $A$ and Eq. (5). Thus, we see that the proposed method is free from complications arising from the choice of gauge. Using Eqs.
(8) and (9), the minimum of the DM function $F$ at $B=U$ is given by $\displaystyle F[U]=\frac{1}{N_{\rm BC}}\sum_{{\bf k},p}\left(\sigma_{{\bf k}p}-1\right)^{2},$ (14) where $\sigma_{{\bf k}p}$ is a singular value of the matrix $A({\bf k})$. From Eq. (14), we find that the mean squared distance between $L$ and $W$ is governed by the deviation of $\sigma$ from unity. The proposed method based on the polar decomposition of $A$ is closely related to the Löwdin orthogonalization Lowdin1950 as shown below. The Fourier transform of the overlap integrals for $\\{L\\}$ is given by $\displaystyle\sum_{{\bf R}}{\rm e}^{{\rm i}{\bf k}\cdot{\bf R}}\langle L_{{\bf 0}p}|L_{{\bf R}q}\rangle=\sum_{\mu}a_{{\bf k}\mu,p}^{*}a_{{\bf k}\mu,q},$ (15) which is obtained by noting that $\frac{1}{N_{\rm BC}}\sum_{{\bf R}}{\rm e}^{{\rm i}({\bf k}-{\bf k}^{\prime})\cdot{\bf R}}=\delta_{{\bf k}{\bf k}^{\prime}}$. By writing Eq. (15) as $S({\bf k})=A^{{\dagger}}({\bf k})A({\bf k})$ in a matrix form, and using Eq. (9), we have the following relation: $\displaystyle S({\bf k})=A^{{\dagger}}({\bf k})A({\bf k})=V({\bf k})\Sigma^{2}({\bf k})V^{{\dagger}}({\bf k}).$ (16) Comparing Eq. (16) with $P({\bf k})=V({\bf k})\Sigma({\bf k})V^{{\dagger}}({\bf k})$, one obtains the relation $P({\bf k})=S^{1/2}({\bf k})$. If $P({\bf k})$ is invertible, the matrix $U({\bf k})$ is given with Eq. (9) by $\displaystyle U({\bf k})=A({\bf k})S^{-1/2}({\bf k}).$ (17) The matrix $U$ obtained by Eq. (17) is exactly equivalent to that of the Löwdin orthogonalization Lowdin1950 , and the closest property of the Löwdin orthogonalized orbitals to a given set of orbitals was proven in Ref. Carlson57 . On the other hand, the proposed method does not require the calculation of $S^{-1/2}$, and can be applied in a numerically stable manner even when the matrix $A$ is nearly ill-conditioned. The definition of $A$ by Eq.
(3) together with the window function allows us to calculate CWFs in various respects, e.g., heavily weighting the occupied states or disentangling localized states embedded in wider bands, as demonstrated in Section IV. In this sense, the method we propose can be regarded as a generalization of the Löwdin orthogonalization. The disentanglement of bands can be easily performed by properly selecting the window function of Eq. (4). Since the $N_{\rm band}(\geq N_{\rm CWF})$ Bloch functions per k-point are included in Eqs. (2) and (5), it should be emphasized that the proposed method always disentangles $N_{\rm band}$ states to generate $N_{\rm CWF}$ Wannier functions. A couple of examples will be shown in Section IV, including valence and low-lying conduction bands for diamond Si and narrow $3d$-bands embedded in the wider $4s$-band for face-centered-cubic (FCC) Cu. Once the $k$-dependent $U({\bf k})$ are obtained by the polar decomposition, tight-binding (TB) parameters are calculated as expectation values of the KS Hamiltonian $\hat{H}_{\rm KS}$ by summing contributions over ${\bf k}$ and $\mu$ as $\displaystyle t_{{\bf 0}p,{\bf R}q}$ $\displaystyle=$ $\displaystyle\langle W_{{\bf 0}p}|\hat{H}_{\rm KS}|W_{{\bf R}q}\rangle,$ (18) $\displaystyle=$ $\displaystyle\frac{1}{N_{\rm BC}}\sum_{{\bf k},\mu}\varepsilon_{{\bf k}\mu}u_{{\bf k}\mu,p}^{*}u_{{\bf k}\mu,q}{\rm e}^{-{\rm i}{\bf k}\cdot{\bf R}},$ where $u_{{\bf k}\mu,q}$ is a matrix element of $U({\bf k})$; the prefactor $1/N_{\rm BC}$ follows from the normalization of Eq. (1). As with MLWFs, the TB parameters can be used for Wannier interpolation in calculations of band structures and physical quantities. The computational procedure to generate the CWFs is summarized as follows: 1. Determining $\varepsilon_{0}$, $\varepsilon_{1}$, and $k_{\rm B}T$ in Eq. (4) by checking the band dispersion of the system of interest. The parameters should be chosen properly so that the targeted eigenstates are included. 2. Choosing a set of localized orbitals {$Q$}.
Atomic orbitals, hybrid orbitals, and MOs might be possible choices. Depending on the symmetry of the targeted eigenstates, a proper set of localized orbitals needs to be chosen, e.g., $d$-orbitals need to be employed in the generation of CWFs for the $d$-bands in FCC Cu as shown later on. 3. Calculation of $A({\bf k})$ by Eq. (3). The calculation of the overlap $\langle\phi_{{\bf k}\mu}|Q_{{\bf 0}p}\rangle$ depends on the implementation of the KS method. 4. Performing the SVD of $A({\bf k})$. A proper numerical library might be used. 5. Calculation of $U({\bf k})$. Using the left and right singular matrices, $W({\bf k})$ and $V({\bf k})$, of $A({\bf k})$, $U({\bf k})$ is calculated as $W({\bf k})V^{{\dagger}}({\bf k})$. 6. Calculations of the CWFs and TB parameters. The CWFs can be calculated using Eq. (5) with $B=U$ obtained in step 5. The calculation is expressed by the sum of matrix-matrix products over ${\bf k}$. The TB parameters are obtained by Eq. (18). It should be stressed that the minimization of $F$ can be efficiently performed in a non-iterative manner by steps 1 to 6 using the polar decomposition via the SVD. The computational cost of each step is estimated to be $O(N_{\rm BC}N_{\rm band}N_{\rm CWF}N_{\rm basis})$, $O(N_{\rm BC}N_{\rm band}^{3})$, $O(N_{\rm BC}N_{\rm band}N_{\rm CWF}^{2})$, and $O(N_{\rm BC}N_{\rm band}N_{\rm CWF}N_{\rm basis})$ for steps 3, 4, 5, and 6, respectively, where $N_{\rm basis}$ is the number of basis functions used to expand the KS orbitals. If localized basis functions are used, the computational cost of step 3 becomes $O(N_{\rm BC}N_{\rm band}N_{\rm CWF})$. For isolated systems, the Brillouin zone sampling is limited to only the $\Gamma$ point; no modification of the proposed method is needed to calculate the CWFs. The theoretical framework we have discussed so far is general for any implementation of the DFT-KS method. However, the choice of the localized functions $\\{Q\\}$ in Eq.
(3) may depend on the implementation. We therefore discuss how the localized functions $\\{Q\\}$ can be properly generated in Section III. ## III IMPLEMENTATION ### III.1 General We have implemented the proposed method in the OpenMX DFT software package OpenMX ; Ozaki2005 ; Duy2014 , which is based on norm-conserving pseudopotentials (PPs) MBK1993 ; Theurich2001 and optimized pseudo-atomic orbitals (PAOs) Ozaki2003 ; Ozaki2004 as the basis set. All the benchmark calculations were performed with computational conditions of a production level. The basis functions used are listed in Table 1. In an abbreviation of basis functions such as H7.0-s2p2d1, H stands for the atomic symbol, 7.0 is the cutoff radius (in Bohr) used in the generation by the confinement scheme Ozaki2003 ; Ozaki2004 , and s2p2d1 indicates that two, two, and one optimized radial functions are employed for the $s$-, $p$-, and $d$-orbitals, respectively. The radial functions were optimized by a variational optimization method Ozaki2003 . The basis functions we used can be regarded as at least double-zeta-plus-double-polarization basis sets in the terminology of Gaussian- or Slater-type basis functions. Valence states included in the PPs are listed in Table 1. All the PPs and PAOs we used in this study were taken from the database (2019) on the OpenMX website OpenMX , which was benchmarked by the delta gauge method Lejaeghere2016 . Real-space grid techniques were used for the numerical integrations and for the solution of the Poisson equation using fast Fourier transform (FFT), with energy cutoffs of 250 to 500 Ryd Ozaki2005 . We used the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof for the exchange-correlation functional Perdew1996 . An electronic temperature of 300 K was used to count the number of electrons by the Fermi-Dirac function for all the systems we considered. For all the calculations, $\delta=10^{-12}$ was used in the window function of Eq.
(4), and $N_{\rm band}$ was set equal to the number of basis functions in the summations of both Eqs. (2) and (5). Table 1: Basis functions and valence states included in PPs. ∗For the calculations of atomic charges, the basis functions listed in Table 3 were used.

| Element | Basis functions | Valence states in PP |
|---|---|---|
| H | H7.0-s2p2d1 | $1s$ |
| C | C6.0-s3p2d2 | $2s$, $2p$ |
| N | N6.0-s3p2d2 | $2s$, $2p$ |
| Na | ∗Na9.0-s3p3d2f2 | $2s$, $2p$, $3s$ |
| Si | Si7.0-s2p2d2f1 | $3s$, $3p$ |
| Cl | ∗Cl9.0-s3p3d2f2 | $3s$, $3p$ |
| S | S7.0-s3p2d2f1 | $3s$, $3p$ |
| Cu | Cu6.0-s3p3d3f1 | $3s$, $3p$, $3d$, $4s$ |
| Se | Se7.0-s3p2d2 | $4s$, $4p$ |
| Bi | Bi8.0-s3p2d2f1 | $5d$, $6s$, $6p$ |

### III.2 Choice of Guiding Functions $\\{Q\\}$ Depending on the targeted eigenstates in the window function of Eq. (4), one can choose either atomic orbitals, hybrid atomic orbitals, or MOs as the guiding functions $\\{Q\\}$ in Eq. (3). In this subsection we discuss these three kinds of localized guiding functions, i.e., atomic orbitals, hybrid atomic orbitals, and embedded MOs in molecules and bulks, and how they can be generated in our implementation. In our implementation, the Bloch function $\phi_{{\bf k}\mu}$ is expanded by PAOs $\chi$ Ozaki2003 ; Ozaki2004 as $\displaystyle|\phi_{{\bf k}\mu}\rangle=\sum_{{\bf R}}{\rm e}^{{\rm i}{\bf k}\cdot{\bf R}}\sum_{i\alpha}c_{{\bf k}\mu,i\alpha}|\chi_{{\bf R}i\alpha}\rangle,$ (19) where $i$ and $\alpha$ are atomic and orbital indices, and $c$ is a linear combination of PAOs (LCPAO) coefficient. Also note that $\langle{\bf r}|\chi_{{\bf R}i\alpha}\rangle\equiv\chi_{i\alpha}({\bf r}-{\bf\tau}_{i}-{\bf R})$, where ${\bf\tau}_{i}$ is the position of atom $i$. As the guiding functions $\\{Q\\}$ of atomic-orbital type, we use PAOs corresponding to valence orbitals or to specific orbitals such as localized $d$-orbitals in a narrow energy window.
Atomic orbitals give good guiding functions; however, owing to the closest property of CWFs, the resultant CWFs may not recover the symmetry respecting the bond directions to the neighboring atoms, unlike those generated from hybrid atomic orbitals. In this case, it would be better to use hybrid atomic orbitals rather than atomic orbitals, as explained below. Let us introduce a projection operator for the occupied space by $\displaystyle\hat{P}=\frac{1}{N_{\rm BC}}\sum_{{\bf k}\mu}|\phi_{\bf k\mu}\rangle f(\varepsilon_{\bf k\mu})\langle\phi_{\bf k\mu}|,$ (20) where $f$ is the Fermi-Dirac function. With the projection operator, the density matrix is calculated by $\displaystyle\rho_{{\bf R}i\alpha,{\bf R}^{\prime}j\beta}$ $\displaystyle=$ $\displaystyle\langle\widetilde{\chi}_{{\bf R}i\alpha}|\hat{P}|\widetilde{\chi}_{{\bf R}^{\prime}j\beta}\rangle,$ (21) $\displaystyle=$ $\displaystyle\frac{1}{N_{\rm BC}}\sum_{{\bf k}\mu}{\rm e}^{{\rm i}{\bf k}\cdot\left({\bf R}-{\bf R}^{\prime}\right)}f(\varepsilon_{\bf k\mu})c_{{\bf k}\mu,i\alpha}c^{*}_{{\bf k}\mu,j\beta},$ where $\widetilde{\chi}$ is the dual orbital defined by $\displaystyle|\widetilde{\chi}_{{\bf R}i\alpha}\rangle$ $\displaystyle=$ $\displaystyle\sum_{{\bf R}^{\prime}j\beta}|\chi_{{\bf R}^{\prime}j\beta}\rangle S_{{\bf R}^{\prime}j\beta,{\bf R}i\alpha}^{-1}$ (22) holding the following orthonormality: $\displaystyle\langle\chi_{{\bf R}i\alpha}|\widetilde{\chi}_{{\bf R}^{\prime}j\beta}\rangle$ $\displaystyle=$ $\displaystyle\langle\widetilde{\chi}_{{\bf R}i\alpha}|\chi_{{\bf R}^{\prime}j\beta}\rangle=\delta_{{\bf RR}^{\prime}}\delta_{ij}\delta_{\alpha\beta}.$ (23) By setting ${\bf R}={\bf R}^{\prime}={\bf 0}$ and $i=j$ in Eq.
(21), and diagonalizing the diagonal block element consisting of $\rho_{{\bf 0}i\alpha,{\bf 0}i\beta}$, which is associated with selected atomic orbitals on atomic site $i$, we obtain hybrid atomic orbitals respecting the symmetry of the bond directions to the neighboring atoms. We employ these hybrid atomic orbitals as the localized functions $\\{Q\\}$ in our implementation. When the same atomic orbitals are chosen to form the diagonal block element as in the case of atomic-orbital guiding functions $\\{Q\\}$, the resultant CWFs span the same subspace of the Hilbert space. For some systems the electronic structure can be rather easily understood by employing MOs as building blocks. A molecular crystal is such a case, where the eigenstates near the chemical potential can be approximately constructed by a linear combination of the MOs of the constituent molecules. Another example is the bond in molecules and bulks: the bond between two atoms embedded in a system might be analyzed by MOs associated with the two atoms. Here we show how embedded MOs in molecules and bulks can be calculated in a simple procedure. Let us start by noting that, using Eq. (20) and assuming that the Bloch function is expressed by Eq. (19), the total number of electrons for a spin-degenerate case is given by Ozaki2018 $\displaystyle N_{\rm ele}$ $\displaystyle=$ $\displaystyle 2{\rm tr}\left[\hat{P}\right],$ (24) $\displaystyle=$ $\displaystyle 2\sum_{{\bf R}i\alpha}\langle\widetilde{\chi}_{{\bf R}i\alpha}|\hat{P}|\chi_{{\bf R}i\alpha}\rangle.$ In the second line of Eq.
(24) we used the following identity operator: $\displaystyle\hat{I}=\sum_{{\bf R}i\alpha}|\chi_{{\bf R}i\alpha}\rangle\langle\widetilde{\chi}_{{\bf R}i\alpha}|=\sum_{{\bf R}i\alpha}|\widetilde{\chi}_{{\bf R}i\alpha}\rangle\langle\chi_{{\bf R}i\alpha}|.$ (25) Since $N_{\rm ele}=N_{\rm BC}N_{\rm ele}^{(0)}$ with $N_{\rm ele}^{(0)}=2\sum_{i\alpha}\langle\widetilde{\chi}_{{\bf 0}i\alpha}|\hat{P}|\chi_{{\bf 0}i\alpha}\rangle$, it is enough to consider $N_{\rm ele}^{(0)}$. We utilize the formula for $N_{\rm ele}^{(0)}$ to calculate embedded MOs in molecules and bulks, and rewrite it as the sum of partial traces: $\displaystyle N_{\rm ele}^{(0)}=2\sum_{g}{\rm tr}_{g}\left[(\widetilde{\chi}_{{\bf 0}g}|\hat{P}|\chi_{{\bf 0}g})\right]=2\sum_{g}{\rm tr}_{g}\left[\Lambda_{g}\right],$ (26) where $g$ is the index of a group of PAOs on grouped atoms, e.g., the set of PAOs on a molecule. The notation $|\chi_{{\bf 0}g})$ stands for a subset of PAOs, $\displaystyle|\chi_{{\bf 0}g})=\left(\cdots,|\chi_{{\bf 0}i1}\rangle,|\chi_{{\bf 0}i2}\rangle,\cdots\right),$ (27) where the PAOs on all the atoms in the group $g$ are included in the subset. The notation $|\widetilde{\chi}_{{\bf 0}g})$ is the counterpart for the dual orbitals. Using the identity operator of Eq. (25), $\Lambda_{g}$ in Eq. (26) is given by $\displaystyle\Lambda_{g}=\sum_{{\bf R}g^{\prime}}\rho_{{\bf 0}g,{\bf R}g^{\prime}}S_{{\bf R}g^{\prime},{\bf 0}g}$ (28) with the definitions of the block elements: $\displaystyle\rho_{{\bf R}g,{\bf R}^{\prime}g^{\prime}}$ $\displaystyle=$ $\displaystyle(\widetilde{\chi}_{{\bf R}g}|\hat{P}|\widetilde{\chi}_{{\bf R}^{\prime}g^{\prime}}),$ (29) $\displaystyle S_{{\bf R}g,{\bf R}^{\prime}g^{\prime}}$ $\displaystyle=$ $\displaystyle(\chi_{{\bf R}g}|\chi_{{\bf R}^{\prime}g^{\prime}}).$ (30) We now introduce the embedded MOs in molecules and bulks via the right-singular vectors of $\Lambda_{g}$.
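The numerical trick used below to keep the right-singular vectors real can be checked with a few lines of NumPy (illustrative only; a random real matrix stands in for $\Lambda_{g}$): diagonalizing $\Lambda_{g}^{\dagger}\Lambda_{g}$ with a symmetric eigensolver yields a real orthogonal $Y_{g}$, and the square roots of its eigenvalues coincide with the singular values of $\Lambda_{g}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Lam = rng.standard_normal((n, n))      # real stand-in for Lambda_g of Eq. (28)

# Eigendecomposition of Lam^T Lam gives a real, orthogonal Y whose columns
# are right-singular vectors of Lam.
omega2, Y = np.linalg.eigh(Lam.T @ Lam)
omega = np.sqrt(np.clip(omega2, 0.0, None))   # singular values of Lam

assert np.isrealobj(Y)                        # no spurious complex phases
assert np.allclose(Y.T @ Y, np.eye(n))        # orthogonality of eigenvectors
assert np.allclose(np.sort(omega),
                   np.sort(np.linalg.svd(Lam, compute_uv=False)))
```

A general-purpose complex SVD routine applied to the same matrix is free to attach arbitrary phases to its singular vectors; the symmetric eigensolver on the real matrix $\Lambda_{g}^{T}\Lambda_{g}$ cannot.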
Since the elements of $\Lambda_{g}$ are real in the case of collinear DFT, the right-singular vectors can be obtained with real components. However, the SVD of $\Lambda_{g}$, if carried out in complex arithmetic, can produce right-singular vectors with complex components, which results in complex CWFs and complicates the analysis. To obtain real right-singular vectors, we perform the eigendecomposition of $\Lambda_{g}^{{\dagger}}\Lambda_{g}$ as $\displaystyle\Lambda_{g}^{{\dagger}}\Lambda_{g}=Y_{g}\Omega_{g}^{2}Y_{g}^{{\dagger}},$ (31) where $\Omega_{g}^{2}$ is the diagonal matrix consisting of the eigenvalues of $\Lambda_{g}^{{\dagger}}\Lambda_{g}$, and $Y_{g}$ is a unitary matrix consisting of the corresponding eigenvectors $\\{y_{g,\nu}\\}$. By noting that the SVD of $\Lambda_{g}$ is given by $Z_{g}\Omega_{g}Y_{g}^{{\dagger}}$, we see that the right-singular vectors can be obtained by diagonalizing $\Lambda_{g}^{{\dagger}}\Lambda_{g}$. The square roots of the eigenvalues of $\Lambda_{g}^{{\dagger}}\Lambda_{g}$ are the singular values of $\Lambda_{g}$, and can be approximately regarded as the occupations of the corresponding eigenvectors $y_{g,\nu}$. If $\Lambda_{g}$ is a real matrix, $Y_{g}$ obtained by the diagonalization of $\Lambda_{g}^{{\dagger}}\Lambda_{g}$ is guaranteed to be a real unitary matrix. We further normalize the eigenvector $y_{g,\nu}$ by considering the overlap integrals of Eq. (30) as $\displaystyle|\bar{y}_{g,\nu}\rangle=\frac{|y_{g,\nu}\rangle}{\sqrt{\langle y_{g,\nu}|S_{{\bf 0}g,{\bf 0}g}|y_{g,\nu}\rangle}},$ (32) and calculate an expectation value of the KS Hamiltonian $\hat{H}_{\rm KS}$ with $\bar{y}_{g,\nu}$ as $\displaystyle\epsilon_{g,\nu}=\langle\bar{y}_{g,\nu}|\hat{H}_{\rm KS}|\bar{y}_{g,\nu}\rangle.$ (33) After $\\{\bar{y}_{g,\nu}\\}$ are ordered based on the expectation values, we employ selected ones among $\\{\bar{y}_{g,\nu}\\}$, e.g., those near the chemical potential, as the guiding functions $\\{Q\\}$ in Eq.
(3), monitoring the expectation values and the corresponding singular values. These are what we call embedded MOs in molecules and bulks. As demonstrated in Section IV, $\\{\bar{y}\\}$ serve as good guiding functions for capturing the electronic structure of a molecular crystal. ## IV NUMERICAL RESULTS ### IV.1 Wannier Interpolated Bands Figure 2: Interpolated bands (red circles) of silicon in the diamond structure, calculated by the TB Hamiltonian derived from CWFs with (a) $k_{\rm B}T$=0.01 and (b) 3.0 eV in the window function of Eq. (4), respectively. $\varepsilon_{0}$ and $\varepsilon_{1}$ relative to the chemical potential were set to be -15.0 and 0.0 eV, respectively. The solid line is the original one directly calculated by the conventional scheme. The number of k-points for the Brillouin zone sampling was 13 $\times$ 13 $\times$ 13. The experimental lattice constant of 5.43 Å was used. The values of the DM function per CWF are 0.571 and 0.444 for (a) and (b), respectively. Figure 3: CWFs for (a) Si, (b) Cu, (c) TTF in TTF-TCNQ, and (d) TCNQ in TTF-TCNQ. In all the cases, isovalues of $\pm 0.04$ (orange: 0.04, blue: -0.04) are used for drawing the isosurfaces using OpenMX Viewer Lee2019 . The computational conditions for (a), (b), (c), and (d) are the same as those in Fig. 2 (b), Fig. 4 (a), Fig. 5 (b), and Fig. 5 (b), respectively. We present four numerical examples to demonstrate the broad applicability of the proposed method across various systems. In Figs. 2 (a) and (b), the interpolated bands of Si in the diamond structure, calculated by the TB Hamiltonian derived from CWFs, are compared with those obtained by the conventional calculation. The hybrid atomic orbitals consisting of the minimal valence orbitals, i.e., one $s$-orbital and a set of $p_{x}$-, $p_{y}$-, and $p_{z}$-orbitals per Si, were used as the guiding functions $\\{Q\\}$. For the window function of Eq.
(4) we used $\varepsilon_{0}=-15.0$ and $\varepsilon_{1}=0.0$ eV, relative to the chemical potential, which covers the energy range of the valence bands. On the other hand, two values of $k_{\rm B}T$, 0.01 and 3.0 eV, were used to check how the conduction bands are reproduced depending on the degree of smearing. As clearly seen, both cases reproduce the valence bands well, and the conduction bands are also reproduced well in the case of $k_{\rm B}T=3.0$ eV. Evaluating the DM function per CWF using Eq. (14), we obtain values of 0.571 and 0.444 for $k_{\rm B}T$ of 0.01 and 3.0 eV, respectively. For $k_{\rm B}T=0.01$ eV, the first four singular values at each $k$ point are close to 1, while the remaining four approach $\delta(=10^{-12})$. Conversely, for $k_{\rm B}T=3.0$ eV, the eight singular values at each $k$ point are distributed over the range from 0.1 to 1.2. These differences in the distribution of singular values explain why the value of the DM function for $k_{\rm B}T=0.01$ eV is larger than that for $k_{\rm B}T=3.0$ eV. One of the obtained CWFs is shown in Fig. 3 (a), which represents a $p$-like CWF. In the case of $k_{\rm B}T=0.01$ eV, the conduction bands carry little weight because the valence bands are heavily weighted, resulting in the poor reproduction of the conduction bands. At first glance, one may consider this an undesirable feature of such a treatment. However, the feature enables us to generate minimal atomic-like orthogonal orbitals which span well the subspace of the valence bands, and to calculate the effective charge allocated to each atom in an unbiased way using these orbitals. We will demonstrate the calculation of the effective atomic charges and stress the usefulness of the method later on.
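The whole non-iterative workflow behind such interpolated bands can be condensed into a short script for a toy model. The sketch below is purely illustrative (a hypothetical two-orbital 1D chain, not the Si calculation above): it builds $A({\bf k})$ from Eq. (3) with a flat window $w=1$ and guiding functions equal to the basis orbitals, obtains $U({\bf k})$ by SVD, and accumulates the TB parameters of Eq. (18) (with the $1/N_{\rm BC}$ normalization implied by Eq. (1)). In this special case the recovered hoppings reproduce the model exactly.

```python
import numpy as np

# Hypothetical SSH-like 1D chain, two orbitals per cell; all numbers
# here are illustrative, not taken from the paper.
Nk, t0 = 64, 0.5
onsite = np.array([0.0, 1.0])
ks = 2.0 * np.pi * np.arange(Nk) / Nk

def H(k):
    """Bloch Hamiltonian: onsite energies plus intra- and inter-cell hopping t0."""
    h = np.zeros((2, 2), complex)
    h[0, 0], h[1, 1] = onsite
    h[0, 1] = t0 * (1.0 + np.exp(-1j * k))
    h[1, 0] = np.conj(h[0, 1])
    return h

Rs = (-1, 0, 1)
t = {R: np.zeros((2, 2), complex) for R in Rs}   # t_{0p,Rq} of Eq. (18)
for k in ks:
    eps, C = np.linalg.eigh(H(k))                # columns of C: c_{k mu}
    # Step 3: A(k) of Eq. (3) with w = 1 and Q_p = basis orbital p,
    # so that a_{k mu, p} = conj(c_{k mu, p}).
    A = np.ones_like(eps)[:, None] * C.conj().T
    # Steps 4-5: polar factor U(k) = W V^dagger via the SVD.
    W, sigma, Vh = np.linalg.svd(A, full_matrices=False)
    U = W @ Vh
    # Step 6: accumulate the TB parameters of Eq. (18).
    for R in Rs:
        t[R] += np.exp(-1j * k * R) / Nk * (U.conj() * eps[:, None]).T @ U
# t[0][0, 1] and t[-1][0, 1] recover the hopping t0; onsite energies
# reappear on the diagonal of t[0].
```

Because the window is flat and the guiding functions span the full Bloch space, $U({\bf k})$ is unitary and the CWFs coincide with the original orbitals; with a narrower window, the same code would instead produce the closest orthonormal set within the selected subspace.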
Figure 4: Interpolated bands (red circles) of copper in the FCC structure, calculated by the TB Hamiltonian derived from CWFs with (a) $\varepsilon_{0}=-5.5$, $\varepsilon_{1}=-1.0$ eV, $k_{\rm B}T=1.0$ eV, and hybrid $3d$-orbitals as $\\{Q\\}$, and (b) $\varepsilon_{0}=-11.0$, $\varepsilon_{1}=24.0$ eV, $k_{\rm B}T=3.0$ eV, and hybrid $3d$-, $4s$-, $4p$-orbitals as $\\{Q\\}$. The solid line is the original one directly calculated by the conventional scheme. The number of k-points for the Brillouin zone sampling was 21 $\times$ 21 $\times$ 21. The experimental lattice constant of 3.61 Å was used. The values of the DM function per CWF are 0.081 and 0.211 for (a) and (b), respectively. The disentanglement of bands in metals is demonstrated for Cu in the FCC structure, as shown in Figs. 4 (a) and (b). By selecting the parameters $\varepsilon_{0}$ and $\varepsilon_{1}$ so that the $3d$-bands are included, and using only the five $d$-orbitals as the guiding functions $\\{Q\\}$, the five $d$-bands are reproduced as shown in Fig. 4 (a). We see that the $3d$-bands are properly disentangled from the dispersive $4s$-band, and one of the obtained CWFs preserves the shape of the $d_{z^{2}}$-orbital as shown in Fig. 3 (b). When $3d$-, $4s$-, and $4p$-orbitals are used as the guiding functions $\\{Q\\}$, a broad range of bands is reproduced as shown in Fig. 4 (b). Since our PP of Cu includes the $3s$, $3p$, $3d$, and $4s$ states, the original band structure has the deep $3s$- and $3p$-bands. We set the parameters in the window function so as to discard the $3s$- and $3p$-bands, and use the $3d$-, $4s$-, and $4p$-orbitals, not the $3s$- and $3p$-orbitals, as the guiding functions $\\{Q\\}$. Thus, the $3s$- and $3p$-bands are not included in the interpolated bands (not shown). These examples show that the CWFs can be flexibly and easily constructed in accordance with the purpose of the study without numerical difficulties.
Table 2: Expectation values $\epsilon$ (eV) of the KS Hamiltonian calculated by Eq. (33), relative to the chemical potential, and singular values $\omega$ of $\Lambda_{g}$ for embedded MOs in the TTF-TCNQ molecular crystal.

| TTF index | $\epsilon$ | $\omega$ | TCNQ index | $\epsilon$ | $\omega$ |
|---|---|---|---|---|---|
| 23 | -0.069 | 1.359 | 34 | 0.135 | 1.082 |
| 24 | 0.719 | 1.346 | 35 | 0.375 | 1.329 |
| 25 | 0.761 | 1.449 | 36 | 0.477 | 1.336 |
| 26 | 3.408 | 0.864 | 37 | 4.928 | 0.535 |
| 27 | 5.947 | 0.047 | 38 | 9.389 | 0.036 |
| 28 | 6.197 | 0.047 | 39 | 9.795 | 0.019 |

Figure 5: Interpolated bands, red lines in (a) and red circles in (b), of TTF-TCNQ, calculated by the TB Hamiltonian derived from CWFs with (a) $\varepsilon_{0}=-22.0$ and $\varepsilon_{1}=1.0$ eV relative to the chemical potential, $k_{\rm B}T=2.0$ eV, and hybrid atomic valence orbitals as $\\{Q\\}$, and (b) $\varepsilon_{0}=-1.0$ and $\varepsilon_{1}=1.0$ eV relative to the chemical potential, $k_{\rm B}T=0.01$ eV, and the 26th MO of TTF and the 37th MO of TCNQ as $\\{Q\\}$. The solid line is the original one directly calculated by the conventional scheme. The number of k-points for the Brillouin zone sampling was 5 $\times$ 19 $\times$ 3. An experimental crystal structure was used Kistenmacher1974 . The values of the DM function per CWF are 0.441 and 0.022 for (a) and (b), respectively. The interpolated bands for a molecular crystal, consisting of tetrathiafulvalene (TTF) and tetracyanoquinodimethane (TCNQ), are shown in Figs. 5 (a) and (b). By employing hybrid atomic orbitals as the guiding functions $\\{Q\\}$, a wide range of bands is reproduced as shown in Fig. 5 (a). On the other hand, as shown in Fig. 5 (b), only the four bands near the chemical potential are reproduced using the embedded MOs in the bulk as the guiding functions $\\{Q\\}$. The MOs were calculated by grouping the TTF and TCNQ molecules separately as explained in Sec. III B.
The expectation values calculated by Eq. (33) and the singular values $\omega$ of $\Lambda_{g}$ are listed in Table 2. It can be seen that the singular values change sharply from $\omega_{26}$ to $\omega_{27}$ and from $\omega_{37}$ to $\omega_{38}$ for TTF and TCNQ, respectively. Therefore, the 26th MO and the 37th MO for TTF and TCNQ, respectively, were used as the guiding functions $\{Q\}$. Since there are two TTF molecules and two TCNQ molecules in the unit cell, we have four embedded MOs as $\{Q\}$, resulting in the four bands. In Figs. 3 (c) and (d), two of the resultant CWFs are shown, which localize on the TTF and TCNQ molecules, respectively, as expected. The value of the DM function per CWF is found to be 0.022, which implies that the guiding MOs are very close to the resultant CWFs. The example demonstrates that the proposed method provides a direct way to calculate CWFs for targeted bands in molecular crystals.

We extend the proposed method to the non-collinear DFT vonBarth1972; Kubler1988 with spin-orbit interaction (SOI) Theurich2001. The KS orbitals are expressed by two-component spinors, and the SOI is introduced by fully relativistic $j$-dependent PPs OpenMX. The theoretical framework is readily extended to the non-collinear DFT with the SOI without any difficulty. However, it should be noted that the hybrid atomic orbitals and embedded MOs in molecules and bulks used as the guiding functions $\{Q\}$ can be complex functions in this case. In Figs. 6 (a) and (b), we show the Wannier interpolated bands of Bi2Se3, which is known to be a topological insulator when the SOI is included in the DFT calculation. For the cases without and with the SOI, corresponding to Figs. 6 (a) and (b), respectively, it is confirmed that the interpolated bands accurately reproduce the original ones regardless of the existence of the band inversion, demonstrating the wide applicability of the proposed method.
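The selection of the guiding MOs from the sharp drop in the singular values can be automated; below is a small sketch using the $\omega$ values of Table 2. The largest-relative-drop heuristic is our own illustration, not part of the published procedure.

```python
import numpy as np

def frontier_mo(first_index, omega):
    """Return the index of the last MO before the largest relative drop
    in the singular values omega of Lambda_g (cf. Table 2)."""
    omega = np.asarray(omega, dtype=float)
    cut = int(np.argmax(omega[:-1] / omega[1:]))  # position of the sharpest drop
    return first_index + cut

# Singular values from Table 2 (MO indices 23-28 for TTF, 34-39 for TCNQ)
print(frontier_mo(23, [1.359, 1.346, 1.449, 0.864, 0.047, 0.047]))  # -> 26
print(frontier_mo(34, [1.082, 1.329, 1.336, 0.535, 0.036, 0.019]))  # -> 37
```

For both molecules the heuristic recovers the frontier MOs (the 26th and 37th) that were chosen by inspection in the text.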
Figure 6: Interpolated bands (red circles) of Bi2Se3 (a) without and (b) with the spin-orbit interaction, calculated by the TB Hamiltonian derived from CWFs. The solid lines are the original ones directly calculated by the conventional scheme. $\varepsilon_{0}=-6.0$ and $\varepsilon_{1}=3.0$ eV relative to the chemical potential, $k_{\rm B}T=1.0$ eV, and hybrid Bi $6p$-orbitals and Se $4p$-orbitals as $\{Q\}$ were used in both calculations. The number of k-points for the Brillouin zone sampling was 7 $\times$ 7 $\times$ 7. An experimental crystal structure was used Nakajima1963. The values of the DM function per CWF are 0.163 and 0.184 for (a) and (b), respectively.

### IV.2 Effective Atomic Charges

By construction, one can calculate the CWFs closest to (hybrid) valence atomic orbitals while fully respecting the occupied states. The CWFs provide us with a unique way to evaluate effective atomic charges. Since the CWFs are a set of orthonormal functions, the effective atomic charge $\kappa_{i}$ of atom $i$ is easily calculated without arbitrariness as $\displaystyle\kappa_{i}=N_{i}^{\rm(val)}-\sum_{p\in{\rm atom}\,i}\langle W_{{\bf 0}p}|\hat{P}|W_{{\bf 0}p}\rangle=N_{i}^{\rm(val)}-\frac{1}{N_{\rm BC}}\sum_{p\in{\rm atom}\,i}\sum_{{\bf k},\mu}f(\varepsilon_{{\bf k}\mu})u_{{\bf k}\mu,p}^{*}u_{{\bf k}\mu,p},$ (34) where $\hat{P}$ is the projection operator defined by Eq. (20), and $N_{i}^{\rm(val)}$ is the number of valence electrons in the PP of atom $i$. Although the summation over the orbitals $p$ belonging to atom $i$ is taken in Eq. (34), one can analyze orbital-resolved charges as well. Table 3 shows effective atomic charges in a HCN molecule and the NaCl bulk, calculated by Eq. (34) and by the Mulliken population analysis Mulliken1955. Both systems are known to be notorious cases because of the difficulty in calculating the effective charges Knizia13.
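Eq. (34) amounts to a projected population of the occupied states onto the CWFs of one atom. A minimal $\Gamma$-only sketch in numpy ($N_{\rm BC}=1$; the array names and shapes are our own, not part of the OpenMX implementation):

```python
import numpy as np

def effective_charge(n_val, f, u, n_bc=1):
    """Effective atomic charge via Eq. (34), Gamma-only sketch.

    n_val : number of valence electrons in the PP of atom i
    f     : occupations f(eps_mu), shape (n_band,)
    u     : expansion coefficients u_{mu,p} of the CWFs p belonging to
            atom i in the KS states mu, shape (n_band, n_p)
    """
    occ = np.einsum('m,mp,mp->', f, u.conj(), u).real / n_bc
    return n_val - occ

# Sanity check: if the atom's CWFs exactly span the occupied states,
# the projected population equals the number of occupied states.
f = np.array([1.0, 1.0, 0.0])
u = np.eye(3)[:, :2]          # two CWFs coinciding with the occupied states
print(effective_charge(2, f, u))  # -> 0.0 (neutral atom)
```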
We see that the effective charges from the CWFs converge quickly with the size of the basis set, while those from the Mulliken population depend strongly on the choice of basis functions. The effective atomic charges obtained from the CWFs could serve as a valuable tool for analyzing electronic structures in a manner related to the approach discussed in Ref. Knizia13. Further investigation in this direction will be conducted in future studies. Table 3: Effective atomic charges in a HCN molecule and the NaCl bulk, calculated by the proposed method (CWF) and Mulliken population analysis (MP) with a variety of basis functions, where $A$ represents the constituting atoms. Hybrid atomic valence orbitals are used as the guiding functions $\{Q\}$. For the window function, $\varepsilon_{0}=-55.0$ and $\varepsilon_{1}=0.0$ eV for HCN and $\varepsilon_{0}=-35.0$ and $\varepsilon_{1}=0.0$ eV for NaCl, relative to the chemical potential, and $k_{\rm B}T=0.1$ eV are used. The number of k-points for the Brillouin zone sampling is 9 $\times$ 9 $\times$ 9 for the NaCl bulk with the lattice constant of 5.63 Å.
HCN:

| Basis | CWF H | CWF C | CWF N | MP H | MP C | MP N |
|---|---|---|---|---|---|---|
| $A$6.0-s1p1 | 0.077 | -0.052 | -0.025 | 0.384 | -0.164 | -0.221 |
| $A$6.0-s2p2 | 0.069 | 0.003 | -0.073 | -0.070 | 0.321 | -0.252 |
| $A$6.0-s2p2d1 | 0.066 | 0.009 | -0.076 | -0.008 | 0.393 | -0.385 |
| $A$6.0-s3p3d2 | 0.066 | 0.008 | -0.074 | 0.110 | 0.298 | -0.408 |
| $A$6.0-s3p3d2f2 | 0.066 | 0.008 | -0.074 | 0.167 | 0.297 | -0.464 |

NaCl:

| Basis | CWF Na | CWF Cl | MP Na | MP Cl |
|---|---|---|---|---|
| $A$9.0-s2p1 | 0.391 | -0.391 | 0.716 | -0.716 |
| $A$9.0-s3p2 | 0.422 | -0.422 | 0.595 | -0.595 |
| $A$9.0-s3p3d2 | 0.421 | -0.421 | 0.158 | -0.158 |
| $A$9.0-s3p3d2f2 | 0.421 | -0.421 | -0.062 | 0.062 |

## V CONCLUSIONS

We presented a non-iterative method to calculate the closest Wannier functions (CWFs) to a given set of localized guiding functions such as atomic orbitals, hybrid atomic orbitals, and molecular orbitals (MOs). We defined the distance measure (DM) function $F$ as the sum of the squared distances between the projection functions $L$ and the Wannier functions $W$ in the Hilbert space, and considered minimizing the DM function. It was shown that the minimization of the DM function is achieved by the polar decomposition of the projection matrix $A$ with a window function via the singular value decomposition (SVD) in a non-iterative manner. The CWFs can be uniquely constructed as long as the projection matrix $A$ is positive definite. It was also shown that the method is free from the subtle choice of gauge. The disentanglement of bands is naturally taken into account by introducing a smoothly varying window function and including $N_{\rm band}$ Bloch functions to generate $N_{\rm CWF}$ CWFs, where $N_{\rm CWF}\leq N_{\rm band}$. Even for isolated bands, the method always performs the disentanglement of bands without any difficulty.
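The central linear-algebra step, obtaining the closest (partial) unitary to the projection matrix $A$ by polar decomposition via SVD, can be sketched in a few lines of numpy. This illustrates only that step; the window function and the full workflow are omitted.

```python
import numpy as np

def closest_partial_unitary(A):
    """Given an N_band x N_CWF projection matrix A with full column rank,
    return the partial unitary U @ Vh minimizing ||A - X||_F over all
    partial unitaries X (polar decomposition via SVD)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U @ Vh

# The result has orthonormal columns even when A itself does not.
A = np.random.default_rng(1).standard_normal((6, 3))
X = closest_partial_unitary(A)
print(np.allclose(X.conj().T @ X, np.eye(3)))  # -> True
```

By the Fan-Hoffman theorem, the returned matrix is the global minimizer of the Frobenius distance to $A$ among all partial unitaries, which is what makes the construction non-iterative.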
We have implemented the proposed method into the OpenMX code, which is based on the density functional theory (DFT), numerical pseudo-atomic orbitals (PAOs), and norm-conserving pseudopotentials (PPs), and introduced three types of guiding functions, i.e., atomic orbitals, hybrid atomic orbitals, and embedded MOs in molecules and bulks. The first two are easily obtained from the PAOs. For the last one, we developed a method to calculate embedded MOs in molecules and bulks, which is based on a partial trace of the projection operator for the occupied space and applies SVD to the resulting partial matrix. The interpolated bands of the tight-binding (TB) models derived from the CWFs reproduce well the targeted conventional bands of a wide variety of systems, including Si, Cu, the TTF-TCNQ molecular crystal, and the topological insulator Bi2Se3. This successful reproduction of targeted bands clearly demonstrates the wide applicability of the proposed method. We further show the usefulness of the proposed method in calculating effective atomic charges, implying that the CWFs closest to atomic orbitals can be used as a measure to analyze electronic structures consistently across different systems. Thus, we conclude that the proposed method is an efficient alternative for calculating WFs, and the concept of CWFs will provide a basis for the development of novel methods for analyzing electronic structures and calculating physical properties.

###### Acknowledgements.

The author wishes to express gratitude to Prof. Yoshiaki Sugimoto for inspiring the development of CWFs through collaborative research. Thanks are also extended to Mr. Ryotaro Koshoji for his insightful comments on the theoretical aspect. Part of the computation in this study was carried out using the computational facility of the Institute for Solid State Physics at the University of Tokyo.
## Appendix A The upper bound of the diagonal elements of the matrix $D$

A proof of the upper bound of the diagonal elements of the matrix $D$ is given in this appendix. Note that $UV$ and $BV$ in Eq. (11) are partial unitary matrices of size $N_{\rm band}\times N_{\rm CWF}$. Since $D$ is hermitian, its eigenvalues are real. Let $\lambda$ be an eigenvalue of $D$ and $x$ the corresponding eigenvector. Then the following equation holds: $\displaystyle\frac{1}{2}\left(V^{{\dagger}}U^{{\dagger}}BV+V^{{\dagger}}B^{{\dagger}}UV\right)|x\rangle=\lambda|x\rangle.$ (35) Defining $|y\rangle=UV|x\rangle$ and $|z\rangle=BV|x\rangle$, and operating with $\langle x|$ from the left side of Eq. (35), we have $\displaystyle\langle y|z\rangle+\langle z|y\rangle=2\lambda\langle x|x\rangle.$ (36) Noting $\langle y|z\rangle+\langle z|y\rangle\leq 2|\langle y|z\rangle|$, and using the Cauchy-Schwarz inequality $|\langle y|z\rangle|\leq\sqrt{\langle y|y\rangle\langle z|z\rangle}$, we obtain $\displaystyle\langle y|z\rangle+\langle z|y\rangle\leq 2\sqrt{\langle y|y\rangle\langle z|z\rangle}.$ (37) Since $\langle y|y\rangle=\langle z|z\rangle=\langle x|x\rangle$, combining Eqs. (36) and (37) results in $\displaystyle\lambda\langle x|x\rangle\leq\langle x|x\rangle.$ (38) Considering $\langle x|x\rangle\neq 0$, the upper bound of the eigenvalues of $D$ is found to be $\displaystyle\lambda\leq 1.$ (39) Noting that the matrix $D$ can be written in terms of $x$ and $\lambda$ as $\displaystyle D=\sum_{\nu}|x_{\nu}\rangle\lambda_{\nu}\langle x_{\nu}|,$ (40) the diagonal elements $d_{nn}$ of the matrix $D$ are given by $\displaystyle d_{nn}=\sum_{\nu}|\langle n|x_{\nu}\rangle|^{2}\lambda_{\nu},$ (41) where $\langle n|x_{\nu}\rangle$ is the $n$-th element of the vector $x_{\nu}$. Since the upper bound of $\lambda$ is unity, and $\{x\}$ forms a unitary matrix, we obtain the upper bound of the diagonal elements of the matrix $D$ as $\displaystyle d_{nn}\leq 1.$ (42)

## References

* (1) G.H.
Wannier, Phys. Rev. 52, 191 (1937). * (2) W. Kohn, Phys. Rev. 115, 809 (1959). * (3) N. Marzari, A.A. Mostofi, J.R. Yates, I. Souza, and D. Vanderbilt, Rev. Mod. Phys. 84, 1419 (2012). * (4) P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964). * (5) D.R. Hamann and D. Vanderbilt, Phys. Rev. B 79, 045109 (2009). * (6) F. Lechermann, A. Georges, A. Poteryaev, S. Biermann, M. Posternak, A. Yamasaki, and O.K. Andersen, Phys. Rev. B 74, 125120 (2006). * (7) N. Marzari and D. Vanderbilt, Phys. Rev. B 56, 12847 (1997). * (8) V. Vitale, G. Pizzi, A. Marrazzo, J.R. Yates, N. Marzari, and A.A. Mostofi, npj Computational Materials 6, 66 (2020). * (9) A. Damle and L. Lin, Multiscale Model. Simul. 16, 1392 (2018). * (10) J. Qiao, G. Pizzi, and N. Marzari, arXiv:2306.00678. * (11) W. Ku, H. Rosner, W.E. Pickett, and R.T. Scalettar, Phys. Rev. Lett. 89, 167204 (2002). * (12) W.C. Lu, C.Z. Wang, T.L. Chan, K. Ruedenberg, and K.M. Ho, Phys. Rev. B 70, 041101(R) (2004). * (13) X. Qian, J. Li, L. Qi, C.-Z. Wang, T.-L. Chan, Y.-X. Yao, K.-M. Ho, and S. Yip, Phys. Rev. B 78, 245112 (2008). * (14) P.-O. Löwdin, J. Chem. Phys. 18, 365 (1950). * (15) J. Des Cloizeaux, Phys. Rev. 135, A685 (1964). * (16) A.A. Mostofi, J.R. Yates, Y.-S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Commun. 178, 685 (2008). * (17) B.C. Carlson and J.M. Keller, Phys. Rev. 105, 102 (1957). * (18) A.E. Reed, L.A. Curtiss, and F. Weinhold, Chem. Rev. 88, 899 (1988). * (19) G. Knizia, J. Chem. Theory Comput. 9, 4834 (2013). * (20) W. Kohn and L.J. Sham, Phys. Rev. 140, A1133 (1965). * (21) G. Golub and C.F. Van Loan, Matrix Computations (3rd ed.), The Johns Hopkins University Press, Baltimore (1996). * (22) K. Fan and A.J. Hoffman, Proc. Amer. Math. Soc. 6, 111 (1955). * (23) The code, OpenMX, pseudo-atomic basis functions, and pseudopotentials are available under the terms of the GNU-GPL at http://www.openmx-square.org/. * (24) T. Ozaki and H. Kino, Phys. Rev. B 72, 045121 (2005).
* (25) T.V.T. Duy and T. Ozaki, Comput. Phys. Commun. 185, 777 (2014). * (26) I. Morrison, D.M. Bylander, and L. Kleinman, Phys. Rev. B 47, 6728 (1993). * (27) G. Theurich and N.A. Hill, Phys. Rev. B 64, 073106 (2001). * (28) T. Ozaki, Phys. Rev. B 67, 155108 (2003). * (29) T. Ozaki and H. Kino, Phys. Rev. B 69, 195113 (2004). * (30) K. Lejaeghere, G. Bihlmayer, T. Björkman, P. Blaha, S. Blügel, V. Blum, D. Caliste, I.E. Castelli, S.J. Clark, A. Dal Corso, S. de Gironcoli, T. Deutsch, J.K. Dewhurst, I. Di Marco, C. Draxl, M. Dułak, O. Eriksson, J.A. Flores-Livas, K.F. Garrity, L. Genovese, P. Giannozzi, M. Giantomassi, S. Goedecker, X. Gonze, O. Grånäs, E.K. Gross, A. Gulans, F. Gygi, D.R. Hamann, P.J. Hasnip, N.A. Holzwarth, D. Iuşan, D.B. Jochym, F. Jollet, D. Jones, G. Kresse, K. Koepernik, E. Küçükbenli, Y.O. Kvashnin, I.L. Locht, S. Lubeck, M. Marsman, N. Marzari, U. Nitzsche, L. Nordström, T. Ozaki, L. Paulatto, C.J. Pickard, W. Poelmans, M.I. Probert, K. Refson, M. Richter, G.M. Rignanese, S. Saha, M. Scheffler, M. Schlipf, K. Schwarz, S. Sharma, F. Tavazza, P. Thunström, A. Tkatchenko, M. Torrent, D. Vanderbilt, M.J. van Setten, V. Van Speybroeck, J.M. Wills, J.R. Yates, G.X. Zhang, and S. Cottenier, Science 351, aad3000 (2016). * (31) J.P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). * (32) T. Ozaki, M. Fukuda, and G. Jiang, Phys. Rev. B 98, 245137 (2018). * (33) Y.-T. Lee and T. Ozaki, J. Mol. Graph. Model. 89, 192 (2019); https://www.openmx-square.org/viewer/ * (34) T.J. Kistenmacher, T.E. Phillips, and D.O. Cowan, Acta Cryst. B30, 763 (1974). * (35) S. Nakajima, J. Phys. Chem. Solids 24, 479 (1963). * (36) U. von Barth and L. Hedin, J. Phys. C: Solid State Phys. 5, 1629 (1972). * (37) J. Kubler, K.-H. Hock, J. Sticht, and A.R. Williams, J. Phys. F: Met. Phys. 18, 469 (1988). * (38) R.S. Mulliken, J. Chem. Phys. 23, 1833 (1955).
# Aerodynamic effects and performance improvements of running in drafting formations

Lukas Schickhofer, KTM E-TECHNOLOGIES, Salzburg, AT-5081, Austria <EMAIL_ADDRESS>Henry Hanson, ADIDAS, Herzogenaurach, DE-91074, Germany

###### Abstract

Drafting as a process to reduce drag and to benefit from the presence of other competitors is applied in various sports, with several recent examples of competitive running in formations. In this study, the aerodynamics of a realistic model of a female runner is calculated by computational fluid dynamics (CFD) simulations at four running speeds of $15\,\mathrm{km\,h^{-1}}$, $18\,\mathrm{km\,h^{-1}}$, $21\,\mathrm{km\,h^{-1}}$, and $36\,\mathrm{km\,h^{-1}}$. Aerodynamic power fractions of the total energy expenditure are found to be in the range of $2.6\%$-$8.5\%$. Additionally, four exemplary formations are analysed with respect to their drafting potential, and the resulting drag values are compared for the main runner and her pacers. The best of the formations achieves a total drag reduction on the main runner of $75.6\%$. Moreover, there are large variations in the drag reduction between the considered formations of up to $42\%$ with respect to the baseline single-runner case. We conclude that a major drag reduction of more than $70\%$ can already be achieved with fairly simple formations, while certain factors, such as runners on the sides, can have a detrimental effect on drag reduction due to local acceleration of the passing flow. Using an empirical model for mechanical power output during running, gains in metabolic power and performance predictions are evaluated for all considered formations. Improvements in running economy are up to $3.5\%$ for the best formation, leading to velocity gains of $2.3\%$. This translates to $154\,\mathrm{s}$ ($\approx 2.6\,\mathrm{min}$) saved over a marathon distance.
Consequently, direct conclusions are drawn from the obtained data for ideal drafting of long-distance running in highly packed formations.

###### keywords: Running aerodynamics, Drafting, Drag reduction, Metabolic power, Computational fluid dynamics

###### MSC: [2010] 00-01, 99-00

Journal: Journal of Biomechanics

## 1 Introduction

In most sports in which participants race against each other for time, drag plays a crucial role. It is defined as the aerodynamic force acting in the axial direction on the athlete as he or she moves through the surrounding fluid, such as water or air. As a result, it has become common practice both in sports and motorsports to benefit from the presence of other athletes or vehicles and to alleviate the resistance of the fluid by drafting in the associated wake region (i.e. _slipstreaming_). In the wake there is not just a drastically lower fluid velocity, but also a negative pressure coefficient leading to suction, which additionally benefits the trailing competitor. While drafting has long been investigated in motorsports with its high speeds and considerable research efforts (e.g. Katz [1995], Romberg et al. [1971]), it is less researched in other sports. However, Rundell [1996] investigated drafting during speed skating and showed large positive effects on metabolic activity, heart rate, and lactate response. Also in swimming, drafting was associated with considerable energy savings, and the optimal drafting distance was established at $0$-$0.5\,\mathrm{m}$ behind the leading swimmer, causing power savings of up to $31\%$ [Bassett Jr et al., 1991, Chatard and Wilson, 2003]. Furthermore, cycling is a sport with large benefits from aerodynamic drafting and riding in formations, such as tightly packed pelotons [Edwards and Byrnes, 2007, Blocken et al., 2018, Malizia and Blocken, 2020].
Apart from testing in wind tunnels, typically applying similarity principles through scaled models at similar Reynolds numbers, computational fluid dynamics (CFD) has become a widely used method to accurately assess drag distributions across athletes. Blocken et al. [2013] examined the effect of two drafting cyclists at various upper-body positions through CFD and found drag reductions of up to $27.1\%$ for the trailing cyclist, while the leading cyclist's drag was also decreased by up to $2.6\%$. Moreover, drag reductions of $42\%$ to $48\%$ have been measured for drafting during cycling in the velodrome [Fitton et al., 2018]. Fuss [2018] recently suggested an analytical model for slipstreaming in gravity-powered sports such as ski cross and found significant advantages in gained velocity and glide distance with respect to a leading skier, especially in a tucked body position. Although running typically happens at fairly low speeds, drafting also has a measurable impact on aerodynamic forces here. In a major study on the subject, Pugh [1970] demonstrated drag savings of up to $80\%$ when running at a middle-distance speed of $4.46\,\mathrm{m\,s^{-1}}$ against a slight head wind. More recently, particularly in light of attempts to break the $2$-hour barrier over the marathon distance (e.g. _INEOS 1:59 Challenge_, _Nike Breaking2_), increasing research efforts were directed at minimizing drag for a main runner in formations. Hoogkamer et al. [2018] used a reduced-order model to establish a sustainable velocity of $5.93\,\mathrm{m\,s^{-1}}$ using cooperative drafting in a four-runner team. Polidori et al. [2020] applied CFD to compute the drag and power savings of Kenenisa Bekele while running the Berlin marathon and using cooperative drafting.
It was shown that up to $57.3\%$ of aerodynamic power was saved in the ideal case of running directly behind a front pacer at a distance of $1.3\,\mathrm{m}$, compared to the case of running alone. The current study aims at shedding more light on the aerodynamic effects of running alone and in formations by using a realistic model of a female runner and a validated CFD methodology. Four different formations are chosen to reflect various running scenarios, and their implications for the drag on the main runner as well as her pacers are discussed. The effect of the resulting drag reduction on the metabolic rate and running economy is computed by using a mechanical power model of running. As a result, possible performance predictions are made in terms of improvements of running speed and time. Finally, by applying the conclusions from this study, direct implications for an optimized drafting strategy are derived.

## 2 Methods

### 2.1 Numerical method

For the computation of the flow field variables and the resulting aerodynamic forces on the runner, computational fluid dynamics (CFD) using the finite-volume method is applied. The Navier-Stokes equations describing the conservation of momentum are discretized in their Reynolds-averaged (RANS) form. Turbulence models have an important effect on aerodynamic force computations, mainly because of their modelling of momentum transfer near surfaces and in boundary layers. This in turn influences the locations of flow separation, which is crucial for correct drag predictions. Here, turbulence is treated with the $k$-$\omega$ model in its shear-stress-transport (SST) formulation according to Menter [1994]. The model has been repeatedly shown to be suitable for external aerodynamics and to give superior performance to the Spalart-Allmaras and the $k$-$\epsilon$ models in flow scenarios with adverse pressure gradients and free shear layers, which do occur in our setup [Menter, 1992, Hellsten, 1998]. Defraeye et al.
[2010] tested the performance of various turbulence treatments, such as the most common RANS models and large eddy simulation, with respect to drag predictions for cycling aerodynamics. They found the $k$-$\omega$ SST model to perform best in comparison with experimental data (i.e. force and moment areas) from wind tunnel measurements, with average discrepancies of $6\%$ and consistently below $11\%$.

Figure 1: Geometry of a formation of runners surrounded by the computational domain and the imposed boundary conditions (a). The axial spacing of $1.2\,\mathrm{m}$ and the lateral spacing of $0.7\,\mathrm{m}$ between the runners are indicated (b).

Moreover, Crouch et al. [2016] have shown for cycling aerodynamics that the effect of the dynamical motion of the limbs on instantaneous drag is only minor. Additionally, wind tunnel experiments by Inoue et al. [2016] revealed an approximately $10\%$ increase in drag values of solo running when using a moving-belt system compared with a stationary setup. In their measurements the formation of unrealistic ground boundary layers was thus inhibited, as is the case in this study. We use a stationary model of an athlete together with a moving ground boundary, which was found to be sufficient for the prediction of the global aerodynamic forces. Furthermore, discretization in this study is performed with a second-order scheme for the spatial derivatives. Pressure and velocity are calculated in a coupled approach. The fluid is defined as air at an ambient temperature of $T=293.15\,\mathrm{K}=20\,^{\circ}\mathrm{C}$ and a kinematic viscosity of $\nu=1.516\times 10^{-5}\,\mathrm{m^{2}\,s^{-1}}$. Additionally, the velocity inlet of the domain pictured in Fig.
1(a) imposes an axial flow velocity at the chosen running speeds, while the pressure outlet applies an ambient pressure of $p=101\,325\,\mathrm{Pa}=1\,\mathrm{atm}$. The ground is defined as a wall moving at the running speed, which ensures that no boundary layers form at the bottom that could invalidate the results. A Dirichlet boundary condition, $u_{i}=0$, is applied for all components of the flow velocity directly at the ground. The surrounding walls are symmetry boundaries with frictionless properties. This is achieved through a Neumann boundary condition $\partial u_{i}/\partial x_{i}=0$. Tab. 1 gives the various speeds of this study with the respective Reynolds numbers of the flow, using the runner's height for the definition of the characteristic length. While all four speeds are considered for the assessment of single-runner aerodynamics (cf. Sec. 3.1), the drafting formations are studied at a running speed of $21\,\mathrm{km\,h^{-1}}$ (cf. Sec. 3.2-3.4). This is a high speed occurring during elite marathon running and allows for a good comparison between the considered formations. In fact, the current official marathon world record for men is 2:01:39 by Eliud Kipchoge (September 16, 2018 during the Berlin Marathon; at an average speed of $\sim 21\,\mathrm{km\,h^{-1}}$), and for women it is 2:14:04 by Brigid Kosgei (October 13, 2019 during the Chicago Marathon; at an average speed of $\sim 19\,\mathrm{km\,h^{-1}}$). Here, we chose a running velocity at the upper range of these numbers to assess the ideal potential of drafting with respect to metabolic quantities, drafted velocity, and time savings. Table 1: Running speeds and associated Reynolds numbers $Re=\left(U_{\infty}L\right)/\nu$ at a characteristic length of $L=1.65\,\mathrm{m}$.
| Running speed | Reynolds number |
|---|---|
| $15\,\mathrm{km\,h^{-1}}$ ($4.16\,\mathrm{m\,s^{-1}}$) | $452\,770$ |
| $18\,\mathrm{km\,h^{-1}}$ ($5\,\mathrm{m\,s^{-1}}$) | $544\,195$ |
| $21\,\mathrm{km\,h^{-1}}$ ($5.83\,\mathrm{m\,s^{-1}}$) | $634\,531$ |
| $36\,\mathrm{km\,h^{-1}}$ ($10\,\mathrm{m\,s^{-1}}$) | $1\,088\,390$ |

The inherent challenge of simulations lies in ensuring that physical effects are computed as accurately as possible. Here, that means that flow separation lines and points, as well as boundary layer thickness and characteristics, are captured for the investigated Reynolds numbers, which lie in the turbulent regime (cf. Tab. 1). In order to avoid inaccurate predictions of aerodynamic forces on athletes at high Reynolds numbers, as described by Meile et al. [2006], it is crucial to optimize near-surface grid resolutions such that no numerically induced jumps of separation points occur. This is important, as an artificially delayed flow separation would lead to a sharp drop in the pressure drag, which is the dominant component of the total drag of bluff bodies (such as the human body). A thorough verification and validation of the applied numerical approach is presented in Appendix A. The solution of the discretized continuity and momentum equations is performed with the solver _ANSYS Fluent 19.2_, post-processing of flow variables is achieved using the post-processing tool _ParaView_, and the drag distribution and resulting power values are calculated with the free scientific programming language _GNU Octave_.

### 2.2 Geometry and mesh

For the numerical setup we apply the realistic, three-dimensional geometry of a female runner with a height of $L=1.65\,\mathrm{m}$, a projected frontal area of $A\approx 0.44\,\mathrm{m^{2}}$, and a body mass of $m\approx 55\,\mathrm{kg}$.
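The Reynolds numbers in Tab. 1 follow directly from the definition in its caption, $Re=U_{\infty}L/\nu$, with the viscosity and characteristic length given in Sec. 2.1; a quick consistency check:

```python
NU = 1.516e-5   # kinematic viscosity of air, m^2/s (Sec. 2.1)
L = 1.65        # characteristic length (runner height), m

def reynolds(u_ms):
    """Re = U_inf * L / nu, as in the caption of Tab. 1."""
    return u_ms * L / NU

for u in (4.16, 5.0, 5.83, 10.0):   # m/s, i.e. 15, 18, 21, 36 km/h
    print(f"{u * 3.6:4.1f} km/h -> Re = {reynolds(u):,.0f}")
```

The four values reproduce the tabulated Reynolds numbers to within rounding.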
The geometry represents an average of 59 individual track athletes, scanned in-house and positioned into a running pose. The obtained pose-invariant statistical model of the female human body was reconstructed from the high-quality scan data using the method outlined by Colaianni et al. [2014]. The axial and lateral spacing between the runners is chosen according to estimates of realistic running conditions. Polidori et al. [2020], for instance, use an axial distance of $1.3\,\mathrm{m}$ and a lateral shoulder-to-shoulder distance of $0.3\,\mathrm{m}$, which are taken from an actual drafting situation during a marathon race. The computational domain shown in Fig. 1(a) has a distance of $7\,\mathrm{m}$ from the main runner to the upstream boundary and $13\,\mathrm{m}$ to the downstream boundary. It contains far-field refinement as well as three refinement stages close to the runner's surfaces. Furthermore, 10 prism layers (i.e. inflation layers) are inserted directly at all solid boundaries with a steady growth factor of the prism layer thickness of $1.2$. These measures ensure that the dimensionless wall distance of $y^{+}\approx 1$ holds in the entire domain and all boundary layers are well resolved. Due to the local mesh refinement, the cell count varies sharply between formations depending on the number of runners included. It rises from $40$ million cells for the single-runner case up to $107$ million cells for the final formation including a total of four runners. The near-surface mean cell size of $3.125\times 10^{-3}\,\mathrm{m}$ and the outer-boundary mean cell size of $0.196\,\mathrm{m}$ for the final grid settings have been chosen following a grid convergence study (cf. Appendix A). Homogeneous, gradual cell growth is applied between the near and far field.
Locally, especially in the boundary layer regions, perpendicular mesh dimensions are significantly lower (by more than a factor of 10) than the near-surface mean cell size, which is the cell size of the finest grid level just outside the prism layers. A depiction of the considered formations of this study is shown in Figs. 4-5.

### 2.3 Calculation of power values

Various empirical models exist for the calculation of the mechanical power output while running. Fukunaga et al. [1980] found, through experiments on athletic runners on force platforms, a relation between running velocity $u$ and the exerted power for forward motion as $P_{R,0}=0.436u^{2.01},$ (1) with $u$ in $\mathrm{m\,s^{-1}}$ and $P_{R,0}$ in $\mathrm{W\,kg^{-1}}$. In a study using motion tracking, Cavagna and Kaneko [1977] also took into account the additional energy expenditure created by the movement of the limbs. They arrive at a function for the specific running power per unit mass as $P_{R,0}=9.42+4.73u+0.266u^{1.993}.$ (2) Here, $u$ is given in $\mathrm{km\,h^{-1}}$ and $P_{R,0}$ in $\mathrm{cal\,kg^{-1}\,min^{-1}}$. For our considered cases, both approaches give similar results with deviations in the range of only $2$-$5\%$; however, Eq. (2) is used in the following power calculations as it is the more complete model of running motion. By `more complete' we mean that it takes into account both internal and external work performed for the modelling of the total mechanical work $W_{tot}=W_{ext}+W_{int}.$ (3) Here, external work $W_{ext}$ is due to the acceleration and lift of the centre of mass, while internal work refers to both the translational and rotational acceleration of the limbs relative to the trunk. This approach has also been suggested more recently by the work of Saibene and Minetti [2003], Pavei et al. [2019], and Gray et al. [2020]. Using Eq.
(2), we arrive at the power $P_{R}=P_{R,0}\cdot\frac{4.1868}{60}\cdot m$ (4) in $\mathrm{W}$ for a runner of mass $m$ in $\mathrm{kg}$, in the following simply called _running power_. Furthermore, there is a certain aerodynamic power $P_{A}$, used to overcome the drag $F_{D}$, which is defined as $P_{A}=F_{D}\cdot u=\frac{1}{2}\rho u^{3}C_{D}A.$ (5) Here, the expression for $F_{D}$ assumes stagnant air and the absence of crosswinds. As a result, the total mechanical power generated while running is the sum of the running power $P_{R}$ and the aerodynamic power $P_{A}$: $P_{tot}=P_{R}+P_{A}.$ (6) Using the total mechanical power $P_{tot}$ and the associated efficiency $\epsilon$ at a certain running speed by Cavagna and Kaneko [1977], the metabolic power $P_{meta}=\frac{P_{tot}}{\epsilon},$ (7) which gives the metabolic energy expenditure during running, can be computed. It must be emphasized here that the science behind the efficiency of locomotion, and in particular of walking and running, is far from concluded. In both running and walking, the muscles utilize energy stored during a previous phase of stretching in the following phase of contraction. As stated by Cavagna and Kaneko [1977], the efficiency of walking reaches a maximum of $0.35$-$0.40$ at intermediate speeds due to the expected properties of the contractile component of muscle. This maximum of efficiency follows the force-velocity relation and the trends due to the initial efficiency of muscle [Hill, 1964]. Also in cycling, the efficiency of locomotion was found to reach maximum values at intermediate speeds ($\sim 0.22$ according to the early study of Dickinson [1929], $\sim 0.25$ according to Ettema and Lorås [2009]).
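The power chain of Eqs. (1)-(7) can be sketched as follows. The air density, drag area, and efficiency below are illustrative assumptions on our part: the drag area is back-computed from the single-runner drag reported in Sec. 3.1, and the efficiency is a representative value from the range discussed by Cavagna and Kaneko [1977].

```python
RHO = 1.204   # air density in kg/m^3 at 20 C (assumed; not stated in the text)

def p_fukunaga(u_ms):
    """Eq. (1): specific running power in W/kg, u in m/s."""
    return 0.436 * u_ms ** 2.01

def p_cavagna(u_kmh):
    """Eq. (2) converted to W/kg (1 cal = 4.1868 J, 1 min = 60 s); u in km/h."""
    return (9.42 + 4.73 * u_kmh + 0.266 * u_kmh ** 1.993) * 4.1868 / 60.0

def running_powers(u_kmh, mass_kg, cda_m2, efficiency):
    """Return (P_R, P_A, P_meta) in W, chaining Eqs. (2) and (4)-(7)."""
    u = u_kmh / 3.6                            # running speed in m/s
    p_r = p_cavagna(u_kmh) * mass_kg           # Eq. (4), running power
    p_a = 0.5 * RHO * u ** 3 * cda_m2          # Eq. (5), aerodynamic power
    return p_r, p_a, (p_r + p_a) / efficiency  # Eqs. (6)-(7)

# At the formation-study speed of 21 km/h, Eqs. (1) and (2) agree closely,
# consistent with the stated 2-5% deviation:
dev = abs(p_fukunaga(21 / 3.6) - p_cavagna(21)) / p_cavagna(21)

# Illustrative totals for the runner of this study (m = 55 kg), with a drag
# area C_D*A ~ 0.32 m^2 and an assumed efficiency of ~0.6 at this speed:
p_r, p_a, p_meta = running_powers(21.0, 55.0, 0.32, 0.6)
```

With these assumptions the aerodynamic share $P_{A}/(P_{R}+P_{A})$ comes out at roughly $4\%$, which falls inside the $2.6\%$-$8.5\%$ range quoted in the abstract.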
In running, however, the efficiency increases steadily with speed (from $0.45$ to $0.70$), suggesting that the positive work during that activity derives mainly from the passive recoil of elastic muscle tissue and not from the active shortening of contractile muscle components [Cavagna and Kaneko, 1977, Saunders et al., 2004]. In support of this hypothesis, Cavagna et al. [1968] found that the useful effect of pre-stretched muscle on the performed work increases with the speed of stretching and shortening. Furthermore, Komi and Bosco [1978] investigated this utilization of elastic energy during eccentric-concentric contraction of muscles in human motion and found that up to $90\%$ of the energy produced in the pre-stretching phase is recovered in the case of counter-movement jumps. More recently, Komi [2000] used data based on in-vivo force measurements, the buckle-transducer technique, and the optic-fiber technique to show that the stretch-shortening cycle in human skeletal muscle leads to considerable performance enhancement compared to simple concentric action. With respect to running economy, Hunter et al. [2015] showed a clear positive correlation between stretch-shortening cycle potentiation and improvements in running economy and recommended eccentric force development (e.g. by resistance training) for performance improvements. Nonetheless, we point out that efficiency and total mechanical power models are inherently difficult to define due to the sometimes conflicting understanding of internal and external work performed during human locomotion and the question of whether internal work should appear in total mechanical power and efficiency expressions, as has been suggested by Winter [1979] and more recently by Minetti et al. [2001] (see e.g. Ettema and Lorås [2009] for a discussion of these issues with regards to cycling, and Williams [2000] with regards to running). 
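The chain from Eqs. (1) to (7) can be sketched numerically. In the sketch below, the runner mass of $55\,$kg is our assumption (chosen so that the running power matches the $858.1\,$W at $21\,$km/h reported in Sec. 3.4); the single-runner drag of $6.52\,$N and the efficiency $\epsilon=0.64$ are taken from the results sections, and the function names are ours:

```python
def p_fukunaga(u_ms):
    """Specific running power in W/kg, Eq. (1) (Fukunaga et al. 1980); u in m/s."""
    return 0.436 * u_ms ** 2.01

def p_cavagna(u_ms):
    """Specific running power in W/kg, Eqs. (2) and (4) (Cavagna & Kaneko 1977):
    the fit expects u in km/h and yields cal kg^-1 min^-1, which the
    factor 4.1868/60 converts to W/kg."""
    u_kmh = 3.6 * u_ms
    p_cal = 9.42 + 4.73 * u_kmh + 0.266 * u_kmh ** 1.993
    return p_cal * 4.1868 / 60.0

MASS = 55.0  # kg -- assumed runner mass, consistent with P_R = 858.1 W at 21 km/h (Sec. 3.4)
u = 5.83     # m/s, i.e. 21 km/h

P_R = p_cavagna(u) * MASS  # running power, Eq. (4)
P_A = 6.52 * u             # aerodynamic power P_A = F_D * u, Eq. (5); single-runner drag from Fig. 2(a)
P_tot = P_R + P_A          # total mechanical power, Eq. (6)
P_meta = P_tot / 0.64      # metabolic power, Eq. (7), with efficiency 0.64 at 21 km/h

# relative deviation between the two empirical models, stated as 2-5 % in the text:
dev = abs(p_cavagna(u) - p_fukunaga(u)) / p_cavagna(u)
```

At $21\,$km/h this reproduces the values of Sec. 3.4 ($P_R\approx 858\,$W, $P_A\approx 38\,$W, $P_{meta}\approx 1400\,$W) and a model deviation of about $3\%$, inside the quoted $2$-$5\%$ range.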
## 3 Results ### 3.1 Single-runner aerodynamics Using the geometry of the single runner, the drag forces acting on various body parts are computed over the range of relevant velocities (cf. Fig. 2(b)). It can be seen that the drag acting on torso and legs is highest, with both parts affected by roughly the same aerodynamic loads. Each accounts for a fraction of $39$-$40\%$ of the total drag on the runner. Furthermore, the arms are affected considerably less, with a share of approximately $13\%$, and the head with $8\%$ of the total drag. These estimates give an overview of the sensitivity of the various parts of the body towards drag loads and are directly related to the frontal area of each part (cf. Eq. (5)). Fig. 2(a) gives the total drag on the runner for the four simulated running speeds. While the drag for lower speeds of $15$-$21\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$, which are relevant for long-distance disciplines like the marathon, lies in the range of $3.39$-$6.52\text{\,}\mathrm{N}$, it rises sharply to $18.45\text{\,}\mathrm{N}$ at the highest considered speed of $36\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$. This demonstrates the (quadratically) increasing relevance of drag reduction at higher speeds. Moreover, the associated power values are calculated using the definitions of Sec. 2.3. Fig. 2(c) shows the share of aerodynamic and running power in the total mechanical power over the considered range of velocities. However, the actual energy per unit time that needs to be made available during motion is the metabolic power (cf. Eq. (7)). \begin{overpic}[width=216.81pt]{Figure2.pdf} \end{overpic} Figure 2: Total drag over the range of considered running speeds (a), as well as drag acting on the body parts of the runner (b). Total mechanical power, consisting of running power computed according to the empirical model by Cavagna and Kaneko [1977] and aerodynamic power, as a function of speed (c). 
Polynomial functions of second order (aerodynamic forces and running power) and of third order (aerodynamic and total mechanical power) are used for fitting the data. Here, the aerodynamic component provides a fraction of the metabolic power of $2.6\%$, $3.4\%$, $4.3\%$, and $8.5\%$ for $4.16\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ ($15\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$), $5\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ ($18\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$), $5.83\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ ($21\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$), and $10\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ ($36\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$), respectively. These can be considered threshold values for the maximum savings in metabolic energy consumption during drafted running. The resulting wake regions are depicted in Fig. 3. They are visualized by isosurfaces of total pressure, marking the boundary between the negative-pressure region in the wake and the positive-pressure region due to recovery of the flow. There is a notable decrease of the wake length from $1.14\text{\,}\mathrm{m}$ at $15\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$ to $0.98\text{\,}\mathrm{m}$ at $36\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$ due to increased turbulent mixing. The wake length is a good indication of the most beneficial range for drafting, as the suction provided by the negative total pressure in the wake slightly pulls a following runner forward. However, even at separations beyond this length there is still a considerable drag-reducing effect due to the deceleration of the oncoming head wind. \begin{overpic}[width=433.62pt]{Figure3.pdf} \end{overpic} Figure 3: Wake region behind the runner visualized by isosurfaces of total pressure at $p_{tot}=0$. 
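The aerodynamic fractions quoted above can be reproduced from the drag values of Fig. 2(a) and the power model of Sec. 2.3. Note that the efficiency $\epsilon$ cancels in the ratio, so the aerodynamic share of the metabolic power equals its share of the total mechanical power; the runner mass of $55\,$kg is our assumption (consistent with $P_R=858.1\,$W at $21\,$km/h in Sec. 3.4):

```python
def run_power_w(u_ms, mass_kg=55.0):
    """Running power in W from Eqs. (2) and (4); the mass is an assumed value."""
    u_kmh = 3.6 * u_ms
    p_cal = 9.42 + 4.73 * u_kmh + 0.266 * u_kmh ** 1.993
    return p_cal * 4.1868 / 60.0 * mass_kg

def aero_fraction(u_ms, drag_n):
    """Aerodynamic share P_A / P_tot of the total (and hence also the
    metabolic) power, with P_A = F_D * u as in Eq. (5)."""
    p_a = drag_n * u_ms
    return p_a / (run_power_w(u_ms) + p_a)

# drag values from Fig. 2(a):
frac_15 = aero_fraction(4.16, 3.39)   # ~0.026 at 15 km/h
frac_21 = aero_fraction(5.83, 6.52)   # ~0.043 at 21 km/h
frac_36 = aero_fraction(10.0, 18.45)  # ~0.085 at 36 km/h
```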
### 3.2 Drag comparison of formations At first we consider the performance of the investigated formations with respect to their effect on the main runner's drag at a running speed of $21\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$. It can be concluded from Fig. 4 that formation $2$ leads to the largest reduction of drag of $75.6\%$, followed by formation $1$ with $70.1\%$. Formations $3$ and $4$ cause significantly less drag reduction of $41.3\%$ and $33.4\%$, respectively. \begin{overpic}[width=216.81pt]{Figure4.pdf} \end{overpic} Figure 4: Sketch of the investigated formations (a) together with their effect on the main runner's drag (b). The formations are compared to the case of the single runner. Axial and lateral spacing between the runners follows the values introduced in Fig. 1(b). Furthermore, considering the drag distribution across the whole body of the main runner, it can be recognized that the dominant part of the drag acts on legs and torso (cf. Sec. 3.1). Thus, it is crucial for a drafting formation to reduce the drag in these areas. Looking at Fig. 5, it can be seen that formations $1$-$2$ achieve this, while formations $3$-$4$ cannot adequately shield the body of the main runner from oncoming air. Additionally, it is shown that while the drag reduction on head and arms is fairly equal for formations $1$-$3$, formation $4$ shows almost no improvement for the drag on the arms. The reasons for the varying aerodynamic performances of formations $1$-$4$ are elaborated further in Sec. 3.3. \begin{overpic}[width=433.62pt]{Figure5.pdf} \end{overpic} Figure 5: Drag acting on the various body parts of the main runner alongside the total drag for all involved runners, including pacers, for the formations 1 (a), 2 (b), 3 (c), and 4 (d). Using the results for formation $1$, the distance between main runner and pacer was further varied to give the drag values shown in Fig. 6. 
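This drag-distance dependence is captured by the exponential fit reported for Fig. 6, $y=a+b\cdot\exp(-c/x)$ with $a=0$, $b=5.63$, and $c=1.28$; a minimal sketch of the fitted model follows, where the function name is ours:

```python
import math

def drag_behind_pacer(x_m, a=0.0, b=5.63, c=1.28):
    """Fitted drag in N on the main runner as a function of the axial
    distance x (m) to the leading pacer (Fig. 6): y = a + b*exp(-c/x).
    The curve approaches a = 0 N as x -> 0 and a + b = 5.63 N,
    i.e. roughly the undrafted drag, as x -> infinity."""
    return a + b * math.exp(-c / x_m)
```

At the formation-1 spacing of $1.2\,$m the curve returns about $1.94\,$N, close to the CFD value of $1.96\,$N listed in Tab. 2.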
An exponential function of the type $y=a+b\cdot\exp\left(-c/x\right)$ with $a=0$, $b=5.63$, and $c=1.28$ was used to fit the data; it has the desired property of asymptotically approaching zero drag at small distances and the single runner's maximum drag at large distances. The values of the parameters were estimated by least-squares optimization using the Levenberg-Marquardt algorithm [Levenberg, 1944, Marquardt, 1963]. The absolute axial limit of separation between the two runners, given the dimensions and stride length of the applied geometry, was found to be approximately $0.6\text{\,}\mathrm{m}$. In reality, however, this limit will hardly be reached, as the runners will be unable to close the gap well before it. Pugh [1970] estimates a natural limit of axial spacing of $1\text{\,}\mathrm{m}$ for middle-distance running and states a possible drag reduction at this distance of up to $80\%$. In this study we find a drag of $1.45\text{\,}\mathrm{N}$ acting on the main runner at a distance of $1\text{\,}\mathrm{m}$, which corresponds to $22.2\%$ of the drag on the single runner and thereby lies in good agreement with the results by Pugh [1970]. \begin{overpic}[width=216.81pt]{Figure6.pdf} \end{overpic} Figure 6: Drag on the main runner as a function of the distance to the leading pacer in formation 1. The numerical data is fitted using an exponential function with a determination coefficient of $R^{2}=0.99$. In reality, there is an axial limit which prohibits the further approach of the main runner to the pacer. This limit is estimated from geometric considerations as $0.6\text{\,}\mathrm{m}$. Pugh [1970] estimates this minimum distance during drafting at $1.0\text{\,}\mathrm{m}$. ### 3.3 Aerodynamic effects on runners within formations Air resistance while running manifests itself in a static pressure increase as the air is decelerated and diverted at the front of the runner. Fig. 
7 provides insight into the magnitude of the pressure acting on the main runner for the various formations. The pressure coefficient $c_{p}=\frac{p-p_{\infty}}{\frac{1}{2}\rho U_{\infty}^{2}}=\frac{p-p_{\infty}}{p_{0}-p_{\infty}},$ (8) with the free-stream pressure $p_{\infty}=101\,325\text{\,}\mathrm{Pa}=1\text{\,}\mathrm{atm}$ and a relative free-stream velocity of $U_{\infty}=5.83\text{\,}\mathrm{m}\text{\,}\mathrm{s}^{-1}=21\text{\,}\mathrm{km}\text{\,}\mathrm{h}^{-1}$, indicates regions of high or low relative pressure. \begin{overpic}[width=433.62pt]{Figure7.pdf} \end{overpic} Figure 7: Pressure coefficient on the main runner for the single-runner case (a) compared to the formations 1-4 (b)-(e). Fig. 7 confirms the beneficial pressure distribution for formations $1$ and $2$. This means that there are no large areas of high pressure coefficient at the front of the runner, nor any large regions of strongly negative pressure coefficient at the back, which would indicate a suction force slightly pulling the runner back. Formation $3$ on the other hand shows a high positive pressure coefficient at the front, which already hints at the leakage of oncoming flow between the front pacers that can be seen in Fig. 8(c) and Fig. 9(c). While formation $4$ does not display any significantly high pressure peaks at the front, it is the elevated negative pressure coefficient at the back that makes it less advantageous than formations $1$-$2$ (cf. Sec. 3.2). \begin{overpic}[width=433.62pt]{Figure8.pdf} \end{overpic} Figure 8: Streamlines seeded at the front of the leading pacers showing the diversion of air around the main runner for all four formations. Streamlines are coloured by velocity magnitude and the runners' surfaces by pressure coefficient. \begin{overpic}[width=433.62pt]{Figure9.pdf} \end{overpic} Figure 9: Velocity magnitude contours in the horizontal plane section through the runners' centre. Formations 1-4 are shown in images (a)-(d). Fig. 
8-9 both demonstrate the reasons for the large differences in aerodynamic performance between the formations. In case of formations $1$ and $2$ the main runner is well embedded inside the wake region of the front pacer and therefore only affected by low-speed, almost stagnant air relative to herself. Formation $2$ has the additional advantage of a rear pacer, whose high- pressure stagnation region at the front positively affects the low-pressure wake region of the main runner by raising the pressure coefficient to a value closer to zero, thus lowering suction. This can be seen when comparing Fig. 10(a) and Fig. 10(b). Furthermore, Fig. 8(c) and Fig. 9(c) clearly demonstrate a major weakness of formation $3$, which is the passing air between the front pacers. This leads to a large region of high stagnation pressure directly at the area of the impacting air on the main runner, which is also visible in Fig. 7(d) and Fig. 10(c). Also formation $4$ is suboptimal due to air being accelerated between the main runner and the pacers on the side. This leads to the aforementioned effect of higher drag at the arms compared to the other formations (cf. Sec. 3.2). Additionally, this effect of locally increased flow velocity between the runners leads to a shortening of the wake region of slow air for the main runner and thus negatively impacts her drafting conditions (cf. Fig. 9(d)). \begin{overpic}[width=325.215pt]{Figure10.pdf} \end{overpic} Figure 10: Pressure coefficient in the vertical plane section through the main runner's centre. Formations 1-4 are depicted in images (a)-(d). ### 3.4 Metabolic power savings and performance predictions Using the drag values from Sec. 3.2, we can compute the associated aerodynamic power and the total mechanical power using Eq. (2)-(6). Tab. 2 lists the results for the single runner and the four formations. Additionally, relative changes in aerodynamic power and total power are given by comparison with the results of the single-runner case. 
The resulting metabolic power needed for running in the considered formations is computed by using the efficiency of running at the given velocity as detailed in Cavagna and Kaneko [1977]. Furthermore, the running economy as the specific metabolic rate per unit mass is given together with its relative changes in Tab. 3. As described by Kipp et al. [2019], the relationship between improvements in running economy and improvements in velocity is not a linear, but rather a curvilinear one. This means that for a given improvement in running economy, velocity gains will be smaller at higher speeds and larger at lower speeds. At elite marathon speeds of around $5.5\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ there is roughly a $2/3\%$ increase in velocity for each percent increase in running economy. By applying the conversion factors of Kipp et al. [2019] we arrive at estimates for the velocity gains for each formation in Tab. 4. The relative changes in speed are $2.1\%$ (formation 1), $2.3\%$ (formation 2), $1.3\%$ (formation 3), and $1.1\%$ (formation 4). Table 2: Aerodynamic power $P_{A}$ and total mechanical power $P_{tot}$ computed with Eq. (5) and (6), with the running power $P_{R}=858.1\text{\,}\mathrm{W}$ at $21\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$ ($5.83\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$) computed using the model of Cavagna and Kaneko [1977]. Savings $\Delta P_{A}$ and $\Delta P_{tot}$ are given with respect to the case of the runner alone. Formation | $F_{D}$ ($\text{\,}\mathrm{N}$) | $P_{A}$ ($\text{\,}\mathrm{W}$) | $\Delta P_{A}$ | $P_{tot}$ ($\text{\,}\mathrm{W}$) | $\Delta P_{tot}$ ---|---|---|---|---|--- Single runner | $6.52$ | $38.0$ | - | $896.1$ | - $1$ | $1.96$ | $11.4$ | $70.0\%$ | $869.5$ | $3.0\%$ $2$ | $1.58$ | $9.2$ | $75.8\%$ | $867.3$ | $3.2\%$ $3$ | $3.84$ | $22.4$ | $41.1\%$ | $880.5$ | $1.7\%$ $4$ | $4.33$ | $25.3$ | $33.4\%$ | $883.4$ | $1.4\%$ Table 3: Metabolic power $P_{meta}$ according to Eq. 
(7) and running economy $RE$, as well as percent improvements of the running economy $\Delta RE$ for each of the formations with respect to the single-runner case. The efficiency of locomotion for a running velocity of $21\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$ ($5.83\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$) is taken as $\epsilon=0.64$ [Cavagna and Kaneko, 1977]. Formation | $P_{meta}$ ($\text{\,}\mathrm{W}$) | $RE$ ($\text{\,}\mathrm{W}\text{\,}{\mathrm{kg}}^{-1}$) | $\Delta RE$ ---|---|---|--- Single runner | $1400.2$ | $25.5$ | - 1 | $1358.6$ | $24.7$ | $3.1\%$ 2 | $1355.2$ | $24.6$ | $3.5\%$ 3 | $1375.8$ | $25.0$ | $2.0\%$ 4 | $1380.3$ | $25.1$ | $1.6\%$ Table 4: Drafted running speeds $u_{draft}$ for all formations, alongside the potential time $t_{draft}$ and time savings through drafting $\Delta t_{draft}$ over the marathon distance of $42\,195\text{\,}\mathrm{m}$, by comparison with the undrafted single-runner case. Formation | $u_{draft}$ ($\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$) | $t_{draft}$ ($\text{\,}\mathrm{s}$) | $\Delta t_{draft}$ ($\text{\,}\mathrm{s}$) ---|---|---|--- 1 | $5.95$ | $7092$ ($\approx 1.970\text{\,}\mathrm{h}$) | $142$ ($\approx 2.4\text{\,}\mathrm{min}$) 2 | $5.96$ | $7080$ ($\approx 1.967\text{\,}\mathrm{h}$) | $154$ ($\approx 2.6\text{\,}\mathrm{min}$) 3 | $5.91$ | $7140$ ($\approx 1.983\text{\,}\mathrm{h}$) | $98$ ($\approx 1.6\text{\,}\mathrm{min}$) 4 | $5.89$ | $7164$ ($\approx 1.990\text{\,}\mathrm{h}$) | $70$ ($\approx 1.2\text{\,}\mathrm{min}$) ## 4 Discussion and conclusions Drafting and the resulting aerodynamic drag reduction have a notable effect on running and the energy expenditure it requires. This study shows the overall aerodynamics and the associated energy expenditure during running for a range of relevant speeds. Considerable aerodynamic shares of the metabolic power of up to $8.5\%$, at a velocity of $36\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$, are computed. 
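The conversion from running-economy gains to drafted speeds and marathon-time savings used in Sec. 3.4 can be sketched as follows; the factor of $2/3$ is the elite-marathon-pace approximation quoted from Kipp et al. [2019], and small differences with respect to Tab. 4 stem from rounding of the speeds:

```python
MARATHON_M = 42195.0
U_BASE = 5.83  # m/s, undrafted speed (21 km/h)

def drafted_speed(delta_re, u=U_BASE, factor=2.0 / 3.0):
    """Convert a relative running-economy gain (e.g. 0.031 for 3.1 %)
    into a drafted speed via the ~2/3-%-per-% conversion of Kipp et al."""
    return u * (1.0 + factor * delta_re)

def marathon_time_s(u_ms):
    """Marathon time in s at constant speed u_ms."""
    return MARATHON_M / u_ms

delta_re = {1: 0.031, 2: 0.035, 3: 0.020, 4: 0.016}  # gains from Tab. 3
t_draft = {f: marathon_time_s(drafted_speed(g)) for f, g in delta_re.items()}
t_saved = {f: marathon_time_s(U_BASE) - t for f, t in t_draft.items()}
```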
Additionally, differences in drag, pressure distribution across the runners, and potential power savings are demonstrated for four formations that could arise during running competitions. As shown in Sec. 3.2, simple formations of axial spacing of the pacers (i.e. one pacer in front, as in formation 1, or one pacer in front and one in the back, as in formation 2) already lead to considerable reduction of over $70\%$ in drag at a distance of $1.2\text{\,}\mathrm{m}$. Furthermore, formations 1 and 2 also show a clear positive effect on leading and trailing pacers, since their overpressure and underpressure regions interact with the main runner. This effect has also been noted by Beaumont et al. [2019] together with a measurable advantage for the pacers with respect to oxygen consumption. We further deduce an empirical relationship of drag force acting on the main runner versus her distance to a leading pacer, which can be used for future prediction of a pacer's aerodynamic effects (cf. Fig. 6). The results are congruent with experimental data by Pugh [1970], who predicts drag decrease of up to $80\%$ when running behind a pacer at $1\text{\,}\mathrm{m}$. Davies [1980], who claimed possible drag savings in the range of $80$-$85\%$, also confirms these results. Moreover, the drag distribution at various body parts is examined in Sec. 3.2. From this and the explanations in Sec. 3.3, we can see a large sensitivity of the drag acting on the main runner if pacers are positioned laterally, such as in formations $3$ and $4$. In formation $3$, flow leaks between the front pacers, causing large pressure peaks at the runner's centre (cf. Fig. 7). Formation $4$ on the other hand has the negative effect of pacers at the runner's sides and the resulting acceleration of air between the runners leads to a relatively low drag reduction of only $33.4\%$ compared to formation $1$ with $70.1\%$. 
This suggests that runners at the side should be avoided if optimal drag savings are the goal. In essence, there are three aerodynamic effects of drafting that lead to a reduction of axial force (i.e. drag) while running: (i) _Shielding against fast, oncoming air._ There is thus no sharp deceleration of air and no resulting formation of areas of high stagnation pressure on the runner's surface. (ii) _Suction effects due to the region of negative total pressure in the leading runner's wake._ Any object moving through air forms a wake region of relatively low pressure. This underpressure can be harnessed to the benefit of a trailing runner. The wake region of negative total pressure has been found in this study to extend to about $1.14\text{\,}\mathrm{m}$ at $15\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$, down to $0.98\text{\,}\mathrm{m}$ at $36\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$, from the leading runner's back. (iii) _Delay of separation points due to turbulent free shear layers._ Turbulence levels are naturally increased in the high-shear-stress regions of the wake, where air is displaced against the stagnant surroundings. This raises momentum transfer in the boundary layers of the runner and delays separation through the formation of a turbulent boundary layer from a laminar one. As a result, pressure drag (i.e. form drag), which is the dominant source of drag for bluff bodies, drops. Eventually, the aerodynamic power savings are computed in this study and listed together with the total mechanical power and the metabolic power output in Sec. 3.4. The derived metabolic power and running economy of up to $1400.2\text{\,}\mathrm{W}$ and $25.5\text{\,}\mathrm{W}\text{\,}{\mathrm{kg}}^{-1}$ for the single runner at a speed of $21\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$ agree well with values from literature for trained athletes [Kipp, 2017, Batliner et al., 2018]. 
The largest improvements of running economy occur for formation $1$ with $3.1\%$ and formation $2$ with $3.5\%$. These metabolic gains are slightly higher than the $2.84\%$ gain computed by Polidori et al. [2020] for the best position of Kenenisa Bekele's drafting strategy during the 2019 Berlin marathon. The reason for this is most likely the larger axial spacing of $1.3\text{\,}\mathrm{m}$ to the front pacer as opposed to $1.2\text{\,}\mathrm{m}$ in the current study. Furthermore, a particular benefit of this study is the additional performance prediction in terms of improvements of velocity and time for the investigated formations. By applying the curvilinear relationship between gains in velocity and running economy suggested by Kipp et al. [2019], we deduce possible speed gains in the range of $1.1$-$2.3\%$. Time savings of $1.2\text{\,}\mathrm{min}$ (formation 4) to $2.6\text{\,}\mathrm{min}$ (formation 2) are achieved over the marathon distance. In this study we focus on the impact of aerodynamic drafting on running economy. However, there are a multitude of other factors that influence running economy and performance. With respect to biomechanics, some of these factors are (lower) limb mass distribution, Achilles tendon moment arm and lower body musculotendinous structures for reutilization of elastic energy, as well as running style and gait patterns (e.g. so-called _pose running_ to benefit from elastic recoil forces in the lower limbs), among others [Williams and Cavanagh, 1987, Saunders et al., 2004, Barnes and Kilding, 2015]. Concluding from this work, it is possible to give targeted suggestions for formation patterns that are best suited for cooperative drafting. It is shown that axially positioned pacers give the best performance and that laterally positioned pacers should be running at a distance further away from the main runner to avoid local flow acceleration, while still contributing positively to drag reduction. 
Such an approach has been applied in the recent successful world record attempt by Eliud Kipchoge during the _INEOS 1:59 Challenge_, where a staggered formation in the shape of an inverted V was chosen. Future studies could explore a larger number of formations by CFD and subsequently derive reduced-order models for parameters such as the number of pacers, or axial and lateral spacing.

Acknowledgements This work was realised through a research budget from ADIDAS and KTM E-TECHNOLOGIES. Computational resources were provided by the parallel computing cluster at KTM in Mattighofen, Austria.

Conflict of interest statement The authors have nothing to disclose that would have biased this work.

## Appendix A Verification and validation Since errors and uncertainties are an unavoidable part of any simulation method such as CFD, measures must be taken to quantify their levels and to minimize them. According to the guidelines by the American Institute of Aeronautics and Astronautics [1998], both verification and validation form a crucial first step of any CFD simulation. While verification can be seen as the process of ensuring that `the equations are solved right', validation means ensuring `that the right equations are solved' [Roache, 1998]. In this work, we first perform a grid convergence study by successively refining the mesh until no significant changes in the relevant quantities occur with the chosen high-order schemes (i.e. verification). Then we compare the obtained drag coefficients of typical bluff bodies to well-established values from literature. Finally, since no wind tunnel measurement results are available for the specific runner geometry we use, we compare the numerically computed drag coefficient of the runner to experimental values of previous studies (i.e. validation). 
#### Grid convergence study The average cell size of the mesh at the finest grid level outside the prism layers has been refined by a factor of two to obtain the target cell sizes at the runner's surface as illustrated in Fig. 11. All grids contain $5$ inflation layers (i.e. prism layers) at the object surfaces, except the final one, which has $10$ inflation layers. It can be seen in Fig. 11 that, both for the drag of the considered bluff bodies and for the applied runner geometry, the resulting changes in the final refinement stages are small. Following Roache [1997], we can calculate the discretization error of two meshes with refinement ratio $r=h_{2}/h_{1}=0.5$ and cell sizes $h_{1}$ and $h_{2}$ by using the difference $\phi_{2}-\phi_{1}$ between the two solutions of the target quantity, as $E_{\phi,1}=\frac{\phi_{2}-\phi_{1}}{1-r^{p}},$ (9) $E_{\phi,2}=r^{p}\left(\frac{\phi_{2}-\phi_{1}}{1-r^{p}}\right),$ (10) with the order of the numerical scheme $p=2$. Using this procedure, we obtain $E_{\phi,1}=[E_{\phi,1,cube},E_{\phi,1,cylinder},E_{\phi,1,sphere}]=[0.0027,0.0213,0.0240]$ and $E_{\phi,2}=[E_{\phi,2,cube},E_{\phi,2,cylinder},E_{\phi,2,sphere}]=[0.0007,0.0053,0.0060]$ for the cell sizes $0.006\,25\text{\,}\mathrm{m}$ and $0.003\,12\text{\,}\mathrm{m}$, both with five inflation layers. For the applied runner geometry the discretization errors are computed as $E_{\phi,1,runner}=0.0027$ and $E_{\phi,2,runner}=0.0007$. Already with the coarser mesh the error is consistently below $3\%$ for all considered bluff bodies, and with the fine mesh it is below $1\%$. Apart from the discretization error, there is also a linearization error inherent to all CFD simulations, which stems from the linearization of the governing equations through numerical schemes. 
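The error estimates of Eqs. (9)-(10) can be expressed as a small helper; the drag-coefficient pair in the example is hypothetical, chosen only to illustrate the procedure:

```python
def roache_errors(phi_coarse, phi_fine, r=0.5, p=2):
    """Discretization-error estimates after Roache [1997], Eqs. (9)-(10),
    for solutions on a coarse grid (cell size h1) and a fine grid
    (h2 = r * h1), with a numerical scheme of order p."""
    e_coarse = (phi_fine - phi_coarse) / (1.0 - r ** p)  # Eq. (9)
    e_fine = r ** p * e_coarse                           # Eq. (10)
    return e_coarse, e_fine

# hypothetical solution pair for illustration; the sign of the estimate
# indicates the direction of the change between the two grids:
e1, e2 = roache_errors(1.00, 1.03)
```

With $r=0.5$ and $p=2$ the fine-grid estimate is always one quarter of the coarse-grid estimate, which matches the ratios of the values reported above (e.g. $0.0213$ versus $0.0053$ for the cylinder).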
Minimization of the linearization errors of the simulations is ensured through monitoring of the residuals of all flow variables and the convergence of the aerodynamic forces. We apply a second- order scheme for increased accuracy as well as tolerance values for all residuals of $\mathcal{O}(10^{-6})$. \begin{overpic}[width=216.81pt]{Figure11.pdf} \end{overpic} Figure 11: Mesh convergence demonstrated by the final four refinement stages. The drag coefficient of typical bluff bodies (a) and of the runner geometry used for this study (b) are shown (i.l. stands for inflation layers). #### Drag coefficients of general bluff bodies Using the numerical results for the drag coefficients of the bluff bodies, a comparison is done with experimental data from literature. The results are summarized in Tab. 5. Table 5: Numerical values from this study and experimental values from literature for typical bluff-body drag coefficients $C_{D}$ at a considered free-stream flow velocity of $U_{\infty}=$18\text{\,}\mathrm{km}\text{\,}{\mathrm{h}}^{-1}$$ ($5\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$) and a Reynolds number of $Re\approx(1.5\text{-}1.6)\times 10^{5}$. Object | $C_{D}$ (num.) | $C_{D}$ (exp.) ---|---|--- Cube | $1.06$ | $1.05$ [Hoerner, 1951] Cylinder | $0.58$ | $0.60$ [Prosser and Smith, 2015] Sphere | $0.38$ | $0.40$ [Munson et al., 1990] It can be seen that there is an overall good agreement between the numerical results and literature, despite the fact that the considered Reynolds number is close to the drag crisis for the sphere and the cylinder in crossflow. In this range there is a transition from laminar to turbulent boundary layers, which leads to a sudden drop in the viscous drag due to delayed flow separation. It is therefore crucial to properly resolve the flow field across the boundary layer, which is achieved here through insertion of a sufficient number of prism layers. 
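As a sanity check on the stated Reynolds-number range, $Re=U_{\infty}L/\nu$ can be computed; both the kinematic viscosity of air ($\approx 1.5\times 10^{-5}\,\mathrm{m^2\,s^{-1}}$ at room temperature) and the characteristic body size of $\sim 0.45\,$m are our assumptions, not values taken from the study:

```python
NU_AIR = 1.5e-5  # m^2/s, assumed kinematic viscosity of air at ~20 C

def reynolds(u_ms, length_m, nu=NU_AIR):
    """Reynolds number Re = U * L / nu."""
    return u_ms * length_m / nu

# an assumed characteristic size of 0.45 m recovers the stated
# Re ~ 1.5e5 at U_inf = 5 m/s (cf. Tab. 5)
re = reynolds(5.0, 0.45)
```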
#### Drag coefficients of runners In the final part of the validation, the runner's drag is compared to the results of several experimental studies on runners. Table 6: Comparison of values from literature, in chronological order, for the drag coefficient $C_{D}$ of humans during various activities. Authors | Activity | $C_{D}$ ---|---|--- Walpert and Kyle [1989] | Running | $0.79$ Davies [1980] | Running | $0.82$-$0.91$ Pugh [1971] | Running | $0.8$ | Walking | $0.7$ Hoerner [1965] | Standing | $1.0$-$1.3$ (clothed) | | $0.9$-$1.2$ (nude) Hill [1928] | Running | $0.9$ | Standing | $0.98$ The data in Tab. 6 show a clear spread of drag coefficient values across the different activities, but also for running itself. This illustrates the difficulty of establishing well-defined drag coefficients for humans experimentally. However, there is a clear reduction of the drag coefficient from standing to walking and running. This has been argued to be due to the limbs being partially lifted in the air, leading to a temporary reduction of the frontal area, so that the subject is less affected by air resistance over time. Furthermore, clothing has a significant impact as well, leading to increased drag and possibly earlier flow separation, especially if loose [Nørstrud, 2009]. The quantitative effect of surface properties on running aerodynamics has been assessed by Kyle and Caiozzo [1986], who estimate the increase of drag by loose clothing to be up to $4.2\%$ and by hair to be $4$-$6\%$, depending on length. In the current study, we obtain a drag coefficient of the considered runner of $0.74$ at a comparable Reynolds number, which is close to the values established by Pugh [1971], Davies [1980], and Walpert and Kyle [1989]. One reason why the drag coefficient computed in our case is slightly lower than the values from literature given in Tab. 6 might be the smooth geometry, allowing for delayed flow separation and thus reduced form drag on the runner. 
This is an effect that would not occur in a wind tunnel setup involving real runners or their scaled models due to the aforementioned conditions of surface roughness, hair, and clothing. ## References * American Institute of Aeronautics and Astronautics [1998] American Institute of Aeronautics and Astronautics . AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, 1998. * Barnes and Kilding [2015] Barnes KR, Kilding AE. Running economy: Measurement, norms, and determining factors. Sports Medicine 2015;1(1):1–15. * Bassett Jr et al. [1991] Bassett Jr DR, Flohr J, Duey WJ, Howley ET, Pein RL. Metabolic responses to drafting during front crawl swimming. Medicine and Science in Sports and Exercise 1991;23(6):744–7. * Batliner et al. [2018] Batliner ME, Kipp S, Grabowski AM, Kram R, Byrnes WC. Does metabolic rate increase linearly with running speed in all distance runners? Sports Medicine International Open 2018;2(1):E1. * Beaumont et al. [2019] Beaumont F, Bogard F, Murer S, Polidori G, Madaci F, Taiar R. How does aerodynamics influence physiological responses in middle-distance running drafting? Mathematical Modelling of Engineering Problems 2019;6:129–35. * Blocken et al. [2013] Blocken B, Defraeye T, Koninckx E, Carmeliet J, Hespel P. CFD simulations of the aerodynamic drag of two drafting cyclists. Computers & Fluids 2013;71:435–45. * Blocken et al. [2018] Blocken B, Toparlar Y, van Druenen T, Andrianne T. Aerodynamic drag in cycling team time trials. Journal of Wind Engineering and Industrial Aerodynamics 2018;182:128–45. * Cavagna et al. [1968] Cavagna G, Dusman B, Margaria R. Positive work done by a previously stretched muscle. Journal of Applied Physiology 1968;24(1):21–32. * Cavagna and Kaneko [1977] Cavagna G, Kaneko M. Mechanical work and efficiency in level walking and running. Journal of Physiology 1977;268(2):467–81. * Chatard and Wilson [2003] Chatard JC, Wilson B. Drafting distance in swimming. 
Medicine and Science in Sports and Exercise 2003;35(7):1176–81. * Colaianni et al. [2014] Colaianni M, Zollhöfer M, Süßmuth J, Seider B, Greiner G. A pose invariant statistical shape model for human bodies. In: Proceedings of the 5th International Conference on 3D Body Scanning Technologies. 2014. p. 327–36. * Crouch et al. [2016] Crouch TN, Burton D, Thompson MC, Brown NA, Sheridan J. Dynamic leg-motion and its effect on the aerodynamic performance of cyclists. Journal of Fluids and Structures 2016;65:121–37. * Davies [1980] Davies C. Effects of wind assistance and resistance on the forward motion of a runner. Journal of Applied Physiology 1980;48(4):702–9. * Defraeye et al. [2010] Defraeye T, Blocken B, Koninckx E, Hespel P, Carmeliet J. Computational fluid dynamics analysis of cyclist aerodynamics: Performance of different turbulence-modelling and boundary-layer modelling approaches. Journal of Biomechanics 2010;43(12):2281–7. * Dickinson [1929] Dickinson S. The efficiency of bicycle-pedalling, as affected by speed and load. Journal of Physiology 1929;67(3):242–55. * Edwards and Byrnes [2007] Edwards AG, Byrnes WC. Aerodynamic characteristics as determinants of the drafting effect in cycling. Medicine and Science in Sports and Exercise 2007;39(1):170–6. * Ettema and Lorås [2009] Ettema G, Lorås HW. Efficiency in cycling: A review. European Journal of Applied Physiology 2009;106(1):1–14. * Fitton et al. [2018] Fitton B, Caddy O, Symons D. The impact of relative athlete characteristics on the drag reductions caused by drafting when cycling in a velodrome. Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology 2018;232(1):39–49. * Fukunaga et al. [1980] Fukunaga T, Matsuo A, Yuasa K, Fujimatsu H, Asahina K. Effect of running velocity on external mechanical power output. Ergonomics 1980;23(2):123–36. * Fuss [2018] Fuss FK. Slipstreaming in gravity powered sports: Application to racing strategy in ski cross. 
Frontiers in Physiology 2018;9:1032. * Gray et al. [2020] Gray A, Andrews M, Waldron M, Jenkins D. A model for calculating the mechanical demands of overground running. Sports Biomechanics 2020;:1–22. * Hellsten [1998] Hellsten A. Some improvements in Menter's k-$\omega$ SST turbulence model. In: 29th AIAA Fluid Dynamics Conference. 1998. p. 2554. * Hill [1928] Hill AV. The air-resistance to a runner. Proceedings of the Royal Society of London Series B, Containing Papers of a Biological Character 1928;102(718):380–5. * Hill [1964] Hill AV. The efficiency of mechanical power development during muscular shortening and its relation to load. Proceedings of the Royal Society of London Series B Biological Sciences 1964;159(975):319–24. * Hoerner [1951] Hoerner SF. Aerodynamic Drag: Practical Data on Aerodynamic Drag Evaluated and Presented. Otterbein Press, 1951. * Hoerner [1965] Hoerner SF. Fluid-Dynamic Drag: Theoretical, experimental and statistical information. SF Hoerner Fluid Dynamics, 1965. * Hoogkamer et al. [2018] Hoogkamer W, Snyder KL, Arellano CJ. Modeling the benefits of cooperative drafting: Is there an optimal strategy to facilitate a sub-2-hour marathon performance? Sports Medicine 2018;48(12):2859–67. * Hunter et al. [2015] Hunter GR, McCarthy JP, Carter SJ, Bamman MM, Gaddy ES, Fisher G, Katsoulis K, Plaisance EP, Newcomer BR. Muscle fiber type, achilles tendon length, potentiation, and running economy. Journal of Strength & Conditioning Research 2015;29(5):1302–9. * Inoue et al. [2016] Inoue T, Okayama T, Teraoka T, Maeno S, Hirata K. Wind-tunnel experiment on aerodynamic characteristics of a runner using a moving-belt system. Cogent Engineering 2016;3(1):1231389. * Katz [1995] Katz J. Race Car Aerodynamics: Designing for Speed. Bentley Publishers, 1995. * Kipp [2017] Kipp S. Why does metabolic rate increase curvilinearly with running velocity? Master's thesis; University of Colorado, Boulder, Colorado; 2017. * Kipp et al. 
[2019] Kipp S, Kram R, Hoogkamer W. Extrapolating metabolic savings in running: Implications for performance predictions. Frontiers in Physiology 2019;10:79. * Komi [2000] Komi PV. Stretch-shortening cycle: A powerful model to study normal and fatigued muscle. Journal of Biomechanics 2000;33(10):1197–206. * Komi and Bosco [1978] Komi PV, Bosco C. Utilization of stored elastic energy in leg extensor muscles by men and women. Medicine and Science in Sports 1978;10(4):261–5. * Kyle and Caiozzo [1986] Kyle CR, Caiozzo VJ. The effect of athletic clothing aerodynamics upon running speed. Medicine and Science in Sports and Exercise 1986;18(5):509–15. * Levenberg [1944] Levenberg K. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics 1944;2(2):164–8. * Malizia and Blocken [2020] Malizia F, Blocken B. Bicycle aerodynamics: History, state-of-the-art and future perspectives. Journal of Wind Engineering and Industrial Aerodynamics 2020;200:104134. * Marquardt [1963] Marquardt DW. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics 1963;11(2):431–41. * Meile et al. [2006] Meile W, Reisenberger E, Mayer M, Schmölzer B, Müller W, Brenn G. Aerodynamics of ski jumping: Experiments and CFD simulations. Experiments in Fluids 2006;41(6):949–64. * Menter [1992] Menter FR. Improved two-equation k-$\omega$-turbulence models for aerodynamic flows. National Aeronautics and Space Administration, Ames Research Center, Moffett Field, CA 1992;. * Menter [1994] Menter FR. Two-equation eddy-viscosity turbulence models for engineering applications. AIAA Journal 1994;32(8):1598–605. * Minetti et al. [2001] Minetti AE, Pinkerton J, Zamparo P. From bipedalism to bicyclism: Evolution in energetics and biomechanics of historic bicycles. Proceedings of the Royal Society of London Series B: Biological Sciences 2001;268(1474):1351–60. * Munson et al. 
[1990] Munson BR, Young DF, Okiishi TH. Fundamentals of Fluid Mechanics. John Wiley & Sons, 1990. * Nørstrud [2009] Nørstrud H. Sport Aerodynamics. Springer, 2009. * Pavei et al. [2019] Pavei G, Zamparo P, Fujii N, Otsu T, Numazu N, Minetti AE, Monte A. Comprehensive mechanical power analysis in sprint running acceleration. Scandinavian Journal of Medicine & Science in Sports 2019;29(12):1892–900. * Polidori et al. [2020] Polidori G, Legrand F, Bogard F, Madaci F, Beaumont F. Numerical investigation of the impact of Kenenisa Bekele's cooperative drafting strategy on its running power during the 2019 Berlin marathon. Journal of Biomechanics 2020;107:109854. * Prosser and Smith [2015] Prosser D, Smith M. Aerodynamics of finite cylinders in quasi-steady flow. In: 53rd AIAA Aerospace Sciences Meeting. 2015. p. 1931. * Pugh [1970] Pugh LGE. Oxygen intake in track and treadmill running with observations on the effect of air resistance. Journal of Physiology 1970;207(3):823–35. * Pugh [1971] Pugh LGE. The influence of wind resistance in running and walking and the mechanical efficiency of work against horizontal or vertical forces. Journal of Physiology 1971;213(2):255–76. * Roache [1997] Roache PJ. Quantification of uncertainty in computational fluid dynamics. Annual Review of Fluid Mechanics 1997;29(1):123–60. * Roache [1998] Roache PJ. Verification and Validation in Computational Science and Engineering. Hermosa Publishers, 1998. * Romberg et al. [1971] Romberg G, Chianese F, Lajoie R. Aerodynamics of race cars in drafting and passing situations. Technical Report; SAE; 1971. * Rundell [1996] Rundell KW. Effects of drafting during short-track speed skating. Medicine and Science in Sports and Exercise 1996;28(6):765–71. * Saibene and Minetti [2003] Saibene F, Minetti AE. Biomechanical and physiological aspects of legged locomotion in humans. European Journal of Applied Physiology 2003;88(4-5):297–316. * Saunders et al. [2004] Saunders PU, Pyne DB, Telford RD, Hawley JA. 
Factors affecting running economy in trained distance runners. Sports Medicine 2004;34(7):465–85. * Walpert and Kyle [1989] Walpert RA, Kyle CR. Aerodynamics of the human body in sports. Journal of Biomechanics 1989;22(10):1096. * Williams [2000] Williams KR. Biomechanics in Sport; Wiley Online Library. p. 161. * Williams and Cavanagh [1987] Williams KR, Cavanagh PR. Relationship between distance running mechanics, running economy, and performance. Journal of Applied Physiology 1987;63(3):1236–45. * Winter [1979] Winter DA. A new definition of mechanical work done in human movement. Journal of Applied Physiology 1979;46(1):79–83.
# Effect of medium on fundamental interactions in gravity and condensed matter Alexander Zhuk, Valerii Shulga ###### Abstract Recently, it was shown that the gravitational field undergoes an exponential cutoff at large cosmological scales due to the presence of background matter. In this article, we demonstrate that there is a close mathematical analogy between this effect and the behavior of the magnetic field induced by a solenoid placed in a superconductor. Keywords: cosmology; scalar perturbations; gravitational potential; magnetic field; superconductor. ## 1 Introduction It seems quite natural that the presence of a medium influences the propagation of fundamental interactions. The simplest example is the Debye screening of the electric field of an individual particle in a plasma by particles of opposite sign. Here, the potential produced by an external point charge has the form of the Yukawa potential (rather than the Coulomb one) with the Debye screening length (see, e.g., [1]). A similar screening mechanism of the electron charge due to vacuum polarization takes place in quantum electrodynamics (see, e.g., [2]). The Anderson-Higgs mechanism is another example of the influence of a medium on fundamental interactions, which are carried by gauge fields. In this case, after symmetry breaking, the Higgs vacuum field acts as a medium [3, 4, 5]. As a result of the interaction with this medium, the initially massless gauge fields gain mass [6]. It is also known that a medium in the form of a superconductor affects the electromagnetic interaction. For example, an external magnetic field undergoes an exponential cutoff inside a superconductor due to the Meissner effect (see, e.g., [7]). The examples above did not concern the gravitational interaction between massive bodies. It is known that in vacuum, in the weak-field limit, the gravitational potential satisfies the Poisson equation and has the form of Newton's potential [8]. 
From a naive point of view, since all masses have the same sign and attract each other, one should hardly expect a screening of the gravitational interaction as, for example, for electric charges in a plasma. However, it was demonstrated recently [9, 10, 11] that the medium also plays an important role in the case of gravity. It was shown that, due to the interaction of the gravitational potential with background matter, there is an exponential cutoff of the gravitational interaction at large cosmological scales. In section 2 we reproduce this result. For many, this result turned out to be rather unexpected. Therefore, in this paper, in section 3, we present a close mathematical analogue of this phenomenon using the example of the magnetic field induced by a solenoid placed in a superconductor. ## 2 Screening of the gravitational interaction in cosmology We consider the Universe containing the cosmological constant $\Lambda$ and filled with discrete point-like gravitating sources (galaxies and groups of galaxies) with comoving mass density $\rho=\sum_{n}\rho_{n}=\sum_{n}m_{n}\,\delta(\mathbf{r}-\mathbf{r}_{n})\,,$ (2.1) where $\mathbf{r}=(x^{1},x^{2},x^{3})$ is the comoving radius-vector. This is our medium. Such matter has a dust-like equation of state and the average energy density $\bar{\varepsilon}=\bar{\rho}c^{2}/a^{3}$, where the comoving averaged mass density $\bar{\rho}=\mathrm{const}$, $c$ is the speed of light and $a$ is the scale factor. The corresponding background metric is the Friedmann-Lemaître-Robertson-Walker (FLRW) one. The discrete inhomogeneities perturb the FLRW metric [12, 13]: $ds^{2}=a^{2}\big{[}(1+2\Phi)d\eta^{2}-(1-2\Phi)\delta_{\alpha\beta}\,dx^{\alpha}dx^{\beta}\,\big{]}\,,$ (2.2) where we restrict ourselves to scalar perturbations in the conformal Newtonian gauge. 
The scalar function $\Phi(\eta,\mathbf{r})$ is the gravitational potential created at the point with radius-vector $\mathbf{r}$ by all gravitating masses in the Universe [8]. The perturbed Einstein equations are [12, 13]: $\Delta\Phi-3{\mathcal{H}}\left(\Phi^{\prime}+{\mathcal{H}}\Phi\right)=\frac{1}{2}\kappa a^{2}\delta\varepsilon,$ (2.3) $\Phi^{\prime}+{\mathcal{H}}\Phi=-\frac{1}{2}\kappa a^{2}\bar{\varepsilon}v\,,$ (2.4) $\Phi^{\prime\prime}+3{\mathcal{H}}\Phi^{\prime}+\left(2{\mathcal{H}}^{\prime}-{\mathcal{H}}^{2}\right)\Phi=0\,,$ (2.5) where $\Delta\equiv\delta^{\alpha\beta}\partial_{\alpha}\partial_{\beta}$ is the Laplace operator, the prime denotes the derivative with respect to the conformal time $\eta$, ${\mathcal{H}}\equiv(da/d\eta)/a=(a/c)H$ and $H\equiv(da/dt)/a$ is the Hubble parameter, $v(\eta,\textbf{r})$ is the peculiar velocity potential, and $\kappa\equiv 8\pi G_{\\!N}/c^{4}$, where $G_{\\!N}$ is the gravitational constant. The energy density fluctuation reads [14, 15]: $\delta\varepsilon=\frac{c^{2}}{a^{3}}\delta\rho+\frac{3\bar{\rho}c^{2}}{a^{3}}\Phi\,,$ (2.6) where $\delta\rho(\eta,\mathbf{r})\equiv\rho-\bar{\rho}$ is the fluctuation of the mass density (2.1) around its constant average value $\bar{\rho}$. Eq. (2.4) demonstrates that the peculiar velocities affect the gravitational potential. If we neglect this influence (i.e. set $\Phi^{\prime}+{\mathcal{H}}\Phi=0$), then equation (2.3) takes the form $\Delta\Phi-\frac{a^{2}}{\lambda^{2}}\Phi=\frac{\kappa c^{2}}{2a}\delta\rho\,,$ (2.7) where the screening length $\lambda\equiv\sqrt{\frac{2a^{3}}{3\kappa\bar{\rho}c^{2}}}\,.$ (2.8) With the help of the transformation (introduced to remove the $\bar{\rho}$ contribution on the RHS of Eq. (2.7)) $\phi=c^{2}a\Phi-\frac{4\pi G_{N}\bar{\rho}}{a^{2}}\lambda^{2}=c^{2}a\Phi-\frac{c^{2}a}{3}$ (2.9) Eq. 
(2.7) is reduced to $\Delta\phi-\frac{a^{2}}{\lambda^{2}}\phi=4\pi G_{N}\rho\,.$ (2.10) For the mass density (2.1), we can easily solve this Helmholtz equation and, applying transformation (2.9), obtain: $\Phi=\frac{1}{3}-\frac{\kappa c^{2}}{8\pi a}\sum_{n}\frac{m_{n}}{|\mathbf{r}-\mathbf{r}_{n}|}\exp\left(-\frac{a|\mathbf{r}-\mathbf{r}_{n}|}{\lambda}\right)\,.$ (2.11) It is worth noting that the physical distance is $R=ar$. The term 1/3 (which is due to $\bar{\rho}$ in $\delta\rho$) plays an important role, since only with this term does the gravitational potential averaged over the whole volume, $\bar{\Phi}$, vanish, as it should for fluctuations [9]. In Eq. (2.11), we neglected the peculiar velocities of the inhomogeneities. However, they also play an important role [16, 17] and must be taken into account. For the considered model, as was shown in [16], it is sufficient to replace $\lambda$ with $\lambda_{\mathtt{eff}}$ in (2.7), (2.9)-(2.11) and, additionally, in (2.11): $1/3\to 1/3(\lambda_{\mathtt{eff}}/\lambda)^{2}$, where $\lambda_{\mathtt{eff}}=\sqrt{\frac{c^{2}a^{2}H}{3}\int\frac{da}{a^{3}H^{3}}}\,.$ (2.12) To get this result, we should take into consideration Eq. (2.5). This screening length (as well as $\lambda$) depends on time. For example, for the standard $\Lambda$CDM model at the present time $\lambda_{\mathtt{eff}}=2.57$ Gpc [16]. Therefore, the gravitational potential $\Phi$ satisfies the Helmholtz equation, not the Poisson one. This is due to the interaction of the gravitational potential with the medium. We can see it directly from Eq. (2.6), where the term $\sim\bar{\rho}\Phi$ describes this interaction. Due to the peculiar velocities, Eq. (2.3) also acquires an additional term proportional to $\Phi$ [16]. 
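To make the size of the effect concrete, the suppression of a single-source term in (2.11) relative to the Newtonian $1/r$ potential can be evaluated numerically. A minimal sketch, assuming $a=1$ (present time) and using the quoted $\lambda_{\mathtt{eff}}=2.57$ Gpc:

```python
import math

# Minimal sketch: exponential (Yukawa-type) suppression of the screened
# potential in Eq. (2.11) relative to the Newtonian 1/r potential. We set
# the scale factor a = 1 (present time); lambda_eff = 2.57 Gpc is the value
# quoted in the text for the standard LambdaCDM model.

LAMBDA_EFF_GPC = 2.57

def suppression(r_gpc, lam_gpc=LAMBDA_EFF_GPC):
    """Ratio of the screened (Yukawa) potential to the Newtonian one."""
    return math.exp(-r_gpc / lam_gpc)

for r in (0.1, 1.0, LAMBDA_EFF_GPC, 10.0):
    print(f"r = {r:5.2f} Gpc -> suppression factor {suppression(r):.3f}")
```

At $r=\lambda_{\mathtt{eff}}$ the potential is already suppressed by a factor $e^{-1}\approx 0.37$, while at sub-Gpc separations the correction is negligible, consistent with ordinary Newtonian gravity on galactic scales.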
If the medium is absent, which corresponds to the limit $\bar{\rho}\to 0,\,v\to 0$, then the screening lengths $\lambda$ and $\lambda_{\mathtt{eff}}$ tend to infinity, and the Yukawa potentials in (2.11) reduce to Newtonian ones, without screening of the gravitational interaction. ## 3 Solenoid in a superconductor. Screening of the induced magnetic field. In this section, in order to present the mathematical analog of the screening effect described above, we recast some of the equations of the paper [18] in a form suitable for our purpose. Following this paper, we consider a thin solenoid placed in a superconductor. Thin means that the diameter of the solenoid is much smaller than the magnetic field penetration length $\lambda_{\mathtt{m}}$. It is well known that the magnetic field of the solenoid $\mathbf{B}_{\mathtt{sol}}$ vanishes outside of it, while the vector potential $\mathbf{A}_{\mathtt{sol}}$ does not. The interaction of this potential with the superconducting medium induces a current $\mathbf{J}_{\mathtt{sc}}$, which, in turn, leads to the appearance of an induced magnetic field $\mathbf{B}_{\mathtt{sc}}$. Thus, the Maxwell equation has the form (in this section, we use the system of units adopted in the book [7]) 
$\mathtt{curl}\mathbf{B}_{\mathtt{tot}}=\mathtt{curl}\left(\mathbf{B}_{\mathtt{sc}}+\mathbf{B}_{\mathtt{sol}}\right)=\mathbf{J}_{\mathtt{sc}}+\mathbf{J}_{\mathtt{sol}}\,.$ (3.1) Since outside the solenoid $\mathbf{B}_{\mathtt{sol}},\mathbf{J}_{\mathtt{sol}}=0$, we get $\mathtt{curl}\mathbf{B}_{\mathtt{sc}}=\mathbf{J}_{\mathtt{sc}}\,,$ (3.2) where in the London limit the superconducting current density is [7, 18] $\mathbf{J}_{\mathtt{sc}}=-\frac{1}{\lambda^{2}_{\mathtt{m}}}\left(\frac{1}{q}\nabla\theta+\mathbf{A}_{\mathtt{tot}}\right)\,.$ (3.3) Here, $\mathbf{A}_{\mathtt{tot}}=\mathbf{A}_{\mathtt{sc}}+\mathbf{A}_{\mathtt{sol}}$, $\theta$ is the phase of the order parameter, and the magnetic field penetration length is $\lambda_{\mathtt{m}}=\frac{1}{q\sqrt{n_{\mathtt{s}}}}\,,$ (3.4) where $n_{\mathtt{s}}$ is the superfluid density, the parameter $q$ defines the superconducting flux quantum (see, e.g., Eq. (3.7) below), and in a real superconductor $q=2e/(\hbar c)$ [7]. The absence of a superconducting medium corresponds to the limit $n_{\mathtt{s}}\to 0\Rightarrow\lambda_{\mathtt{m}}\to\infty$. Expression (3.4) is an analogue of the cosmological formula (2.8) (and, accordingly, of formula (2.12)). In Eq. (3.3), the term $\lambda^{-2}_{\mathtt{m}}\mathbf{A}_{\mathtt{sol}}\sim n_{\mathtt{s}}\mathbf{A}_{\mathtt{sol}}$ describes the interaction of the solenoid magnetic field with the superconducting medium, just as the term $\sim\bar{\rho}\Phi$ on the RHS of Eq. (2.6) describes the interaction of the gravitational potential with the cosmological medium. Now, applying the curl operation to both sides of (3.3), we obtain $\mathbf{B}_{\mathtt{sc}}-\lambda^{2}_{\mathtt{m}}\Delta\mathbf{B}_{\mathtt{sc}}=0\,,$ (3.5) where we took into account that outside of the solenoid $\mathtt{curl}\nabla\theta=0$ and $\mathbf{B}_{\mathtt{sol}}=0$. Here $\Delta$ is the Laplace operator in flat space. To solve this equation, we need to define the boundary conditions. Let the solenoid be extended along the $z$-axis. 
Obviously, due to the cylindrical symmetry, the induced magnetic field inside the superconductor is also parallel to the $z$-axis: $\mathbf{B}_{\mathtt{sc}}(r)={B}_{\mathtt{sc}}(r)\hat{z}$, where $\hat{z}$ is the unit vector along the $z$-axis. In cylindrical coordinates, $\mathbf{r}$ is the radius-vector in the $xy$-plane (it is worth noting that in the previous section $\mathbf{r}$ denoted the comoving three-dimensional radius-vector). At distances $r\gg\lambda_{\mathtt{m}}$, the superconducting current goes to zero: $\mathbf{J}_{\mathtt{sc}}\to 0$. Therefore, at these distances Eq. (3.3) reads $\mathbf{A}_{\mathtt{tot}}=-\frac{1}{q}\nabla\theta\,.$ (3.6) Integrating both sides of this equation over an area inside the contour $r=\mbox{\rm const}$, and performing the Stokes area-to-contour transformation for the RHS, we find $\Phi_{\mathtt{tot}}=-\frac{2\pi}{q}N\equiv-\Phi_{0}N\,,\quad N=0,1,2,\ldots\,,$ (3.7) where $\Phi_{\mathtt{tot}}=\Phi_{\mathtt{sc}}+\Phi_{\mathtt{sol}}$ is the total magnetic flux, consisting of the sum of the magnetic fluxes of the induced magnetic field and of the magnetic field inside the solenoid, and $\Phi_{0}$ is the superconducting flux quantum. Therefore, $\Phi_{\mathtt{sc}}=\Phi_{\mathtt{tot}}-\Phi_{\mathtt{sol}}\,.$ (3.8) This is our boundary condition. We can include it directly into Eq. (3.5): $B_{\mathtt{sc}}-\lambda^{2}_{\mathtt{m}}\Delta B_{\mathtt{sc}}=\Phi_{\mathtt{sc}}\delta(\mathbf{r})\,,$ (3.9) where we took into account the 2D cylindrical symmetry of the problem and, consequently, $\Delta$ is the radial Laplace operator. Obviously, integrating this equation over an area inside the contour $r=\mbox{\rm const}$, we arrive at an identity. Eq. (3.9) is a Helmholtz one (similar to Eq. (2.10)) and has the decreasing solution $B_{\mathtt{sc}}=\frac{\Phi_{\mathtt{sc}}}{2\pi\lambda^{2}_{\mathtt{m}}}K_{0}\left(\frac{r}{\lambda_{\mathtt{m}}}\right)\,,$ (3.10) where $K_{0}$ is the modified Bessel function. 
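The profile (3.10) can be checked numerically against its limiting forms using the integral representation $K_{0}(x)=\int_{0}^{\infty}e^{-x\cosh t}\,dt$. A minimal sketch, with the prefactor $\Phi_{\mathtt{sc}}/(2\pi\lambda^{2}_{\mathtt{m}})$ and $\lambda_{\mathtt{m}}$ set to 1 in arbitrary units (illustrative assumptions, not values from the text):

```python
import math

# Minimal sketch: the induced-field profile of Eq. (3.10), B_sc ~ K0(r/lambda_m).
# K0 is evaluated from its integral representation
#     K0(x) = \int_0^\infty exp(-x cosh t) dt
# by trapezoidal quadrature; the prefactor and lambda_m are set to 1.

def bessel_k0(x, t_max=30.0, n=20000):
    dt = t_max / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * math.cosh(i * dt))
    return total * dt

def b_induced(r, lam=1.0):
    return bessel_k0(r / lam)

# Logarithmic divergence at small r: K0(x) ~ ln(2/x) - gamma for x << 1.
gamma = 0.5772156649015329
print(b_induced(1e-4), math.log(2 / 1e-4) - gamma)
# Exponential screening at large r: K0(x) ~ sqrt(pi/(2x)) e^{-x} for x >> 1.
print(b_induced(20.0), math.sqrt(math.pi / 40.0) * math.exp(-20.0))
```

Both limits recover the small-$r$ logarithm and the exponential large-$r$ screening of the induced field.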
The induced magnetic field behaves asymptotically as follows: $B_{\mathtt{sc}}(r\to 0)\sim-\ln(r)\,,\quad B_{\mathtt{sc}}(r\to\infty)\sim\frac{1}{\sqrt{r}}\exp(-r/\lambda_{\mathtt{m}})\,.$ (3.11) This behavior reflects the cylindrical symmetry of the model. For example, the Yukawa potential is transformed as $(1/r)\exp(-r/\lambda_{\mathtt{m}})\to(1/\sqrt{r})\exp(-r/\lambda_{\mathtt{m}})$. As expected, the screening length coincides with the magnetic field penetration length $\lambda_{\mathtt{m}}$. Formula (3.11) is the 2D analog of Eq. (2.11). ## 4 Conclusion In this paper, we have touched upon the problem of the influence of the medium on fundamental interactions. First, on the basis of articles [9, 10, 11], we showed that, as a result of the interaction of the gravitational field with the cosmological medium, the gravitational potential is subject to exponential screening on large cosmological scales. Then, following the model considered in paper [18], we traced a close analogy between the interaction of the gravitational field with the cosmological medium and the interaction of the magnetic field of a solenoid with a superconducting medium. As a result of this interaction, the induced magnetic field in the superconductor undergoes exponential screening at distances exceeding the magnetic field penetration length. ## Acknowledgements The authors are grateful to Boris Svistunov for fruitful discussions and valuable comments. ## References * [1] Brydges, D. C., and Martin, Ph. A. (1999). Coulomb systems at low density. J. Stat. Phys. 96, 1163–1330. arXiv:cond-mat/9904122 [cond-mat.stat-mech]. doi: 10.1023/A:1004600603161. * [2] Merches, I., Tatomir, D., and Lupu, R. E. (2019). Basics of Quantum Electrodynamics. NY, USA: Taylor & Francis. * [3] Anderson, P.W. (1963). Plasmons, gauge invariance, and mass. Phys. Rev. 130 (1), 439-442. doi: 10.1103/physrev.130.439. * [4] Englert, F., and Brout, R. (1964). Broken symmetry and the mass of gauge vector mesons. Phys. Rev. 
Lett. 13, 321-323. doi: 10.1103/PhysRevLett.13.321. * [5] Higgs, P. W. (1964). Broken symmetries and the masses of gauge bosons. Phys. Rev. Lett. 13, 508-509. doi: 10.1103/PhysRevLett.13.508. * [6] Linde, A. D. (1990). Particle Physics and Inflationary Cosmology (Contemporary Concepts in Physics Series). NY, USA: Taylor & Francis. * [7] Svistunov, B., Babaev, E., and Prokof’ev, N. (2015). Superfluid States of Matter. NY, USA: Taylor & Francis. * [8] Landau, L.D., and Lifshitz, E.M. (2000). The Classical Theory of Fields (Course of Theoretical Physics Series, V.2). Oxford, UK: Pergamon Press. * [9] Eingorn, M. (2016). First-order cosmological perturbations engendered by pointlike masses. Astrophys. J. 825, 84. arXiv:1509.03835 [gr-qc]. doi: 10.3847/0004-637X/825/2/84. * [10] Eingorn, M., Kiefer, C., and Zhuk, A. (2016). Scalar and vector perturbations in a universe with discrete and continuous matter sources. J. Cosmol. Astropart. Phys. 2016, 032. arXiv:1607.03394 [gr-qc]. doi: 10.1088/1475-7516/2016/09/032. * [11] Eingorn, M., Kiefer, C., and Zhuk, A. (2017). Cosmic screening of the gravitational interaction. Int. J. Mod. Phys. D 26, 1743012. arXiv:1711.01759 [gr-qc]. doi: 10.1142/S021827181743012X. * [12] Mukhanov, V.F., Feldman, H.A., and Brandenberger, R.H. (1992). Theory of cosmological perturbations. Phys. Rept. 215, 203-333. doi: 10.1016/0370-1573(92)90044-Z. * [13] Gorbunov, D.S., and Rubakov, V.A. (2011) Introduction to the theory of the early universe: Cosmological perturbations and inflationary theory. Singapore: World Scientific. * [14] Eingorn, M., and Zhuk, A. (2012). Hubble flows and gravitational potentials in observable Universe. J. Cosmol. Astropart. Phys. 2012, 026. arXiv:1205.2384 [astro-ph.CO]. doi: 10.1088/1475-7516/2012/09/026. * [15] Eingorn, M., and Zhuk, A. (2014). Remarks on mechanical approach to observable Universe. J. Cosmol. Astropart. Phys. 2014, 024. arXiv:1309.4924 [astro-ph.CO]. doi: 10.1088/1475-7516/2014/05/024. 
* [16] Canay, E., and Eingorn, M. (2020). Duel of cosmological screening lengths. Phys. Dark Univ. 29, 100565. arXiv:2002.00437 [gr-qc]. doi: 10.1016/j.dark.2020.100565. * [17] Canay, E., Eingorn, M., McLaughlin II, A., Arapoğlu, A.S., and Zhuk, A. (2022). Effect of peculiar velocities of inhomogeneities on the shape of gravitational potential in spatially curved universe. arXiv:2201.07561 [gr-qc]. * [18] Mangel, I., Kapon, I., Blau, N., Golubkov, K., Gavish, N., and Keren, A. (2020). Stiffnessometer: A magnetic-field-free superconducting stiffness meter and its application. Phys. Rev. B 102, 024502. arXiv:1705.00624 [cond-mat.supr-con]. doi: 10.1103/PhysRevB.102.024502.
# Many wrong models approach to localize an odor source in turbulence: introducing the weighted Bayesian update Lorenzo Piro Department of Physics & INFN, University of Rome “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Rome, Italy Robin A. Heinonen Department of Physics & INFN, University of Rome “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Rome, Italy Massimo Cencini Istituto dei Sistemi Complessi, CNR, Via dei Taurini 19, 00185 Rome, Italy INFN “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Rome, Italy Luca Biferale Department of Physics & INFN, University of Rome “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Rome, Italy ###### Abstract The problem of locating an odor source in turbulent environments is central to key applications such as environmental monitoring and disaster response. We address this challenge by designing an algorithm based on Bayesian inference, which uses odor measurements from an ensemble of static sensors to estimate the source position through a stochastic model of the environment. Given the practical impossibility of achieving a fully consistent turbulent model and guaranteeing convergence to the correct solution, we propose a method to rank “many wrong models” and to blend their predictions. We evaluate our _weighted Bayesian update_ algorithm by its ability to estimate the source location with predefined accuracy and/or within a specified time frame, and compare it to standard Monte Carlo sampling methods. To demonstrate the robustness and potential applications of both approaches under realistic environmental conditions, we use high-quality direct numerical simulations of the Navier-Stokes equations to mimic the transport of odors in the atmospheric boundary layer. 
Despite minimal prior information about the source and environmental conditions, our proposed approach consistently proves to be more accurate, reliable, and robust than Monte Carlo methods, thus showing promise as a new tool for addressing the odor source localization problem in real-world scenarios. ## I Introduction Identifying the origin of noxious odors such as gas leaks and chemical or radioactive emissions is critical for averting potential environmental disasters [1, 2, 3, 4]. Moreover, tracing the location of a source from the odor signal detected in the atmosphere is a vital task for animals looking for food or a mate [5, 6]. Such source localization problems are already difficult in laminar flows, where the structure of the odor plume can be highly complex and sensitive to the source location due to chaotic mixing [7, 8]. In fully turbulent three-dimensional (3-D) environments, the difficulty becomes severe, owing to numerical, experimental, and phenomenological complexities and challenges associated with the multi-scale nature of 3-D turbulence, which is characterized by a broad range of rough velocity fluctuations as well as highly intermittent and non-Gaussian properties for both the advecting flow and the odor transport [9, 10, 11, 12]. This makes the design of efficient and reliable algorithms for source localization a challenging theoretical task situated at the intersection of fluid dynamics, optimal navigation of mobile agents, and information theory [12, 13, 14]. Over the last decades, a growing community of scientists has developed multiple methodologies to address this problem, ranging from bio-inspired [15, 16, 17] to heuristic search algorithms based on information theory [18, 19, 20] and Bayesian inference [21, 22]. 
Although most studies have focused on the performance optimization of mobile agents, the use of static sensors has become increasingly popular [23, 24, 25] thanks to the simplicity of their implementation, also resulting in reduced setup and maintenance costs. Moreover, being well-suited for continuous and large-scale environmental monitoring, they are ideal for use in early warning systems [26, 27, 28, 29], which may enable targeted and effective response strategies, ranging from containment and mitigation to rescue and prevention. On a more theoretical level, the employment of static sensors reduces the space of possible choices for the search algorithm [30], thereby facilitating the study of more fundamental questions. Indeed, despite the recent progress, the problem of optimally locating a source within a given time with a prescribed accuracy in a turbulent flow is far from solved [25]. Figure 1: Left: a snapshot of DNS showing relative positions of sources (squares) and particles color-coded according to the source from which each was emitted. Here, we show particles in a thin slab parallel to the wind and containing the sources, and we have zoomed in close to the sources. We have also inserted in the same plane a network of static sensors (yellow circles) whose measurements will be used in the following to infer the source location. Right: probability map of making a detection. Hereafter referred to as the _empirical likelihood_, this distribution was obtained by coarse-graining the DNS data on a square lattice, setting a threshold $n_{\rm thr}=2$ on the number of detectable particles, and then averaging over time the signal of all the five sources (properly placed, by means of a suitable shift, in the same position as indicated by the grey square). A key challenge is the practical impossibility of specifying a detailed model for the statistics governing odor transport. 
This is a crucial point when using Bayesian approaches that heavily depend on a model of the environment to integrate sensor observations into a probability distribution describing our information about the source. However, in all real-world applications, we have only partial knowledge of the statistical properties of the environment, which may be arbitrarily complex. Moreover, the inherent complexity of turbulent transport precludes a complete analytical description even in an idealized physical scenario. Therefore, we are invariably compelled to work with a _wrong_ model of the environment. This fundamentally limits the utility of Bayesian approaches, which are only guaranteed to converge to the ground truth when the model is correctly specified [31, 32]. To partially overcome these intrinsic limitations, we explore a new Bayesian approach designed to mitigate the effects of errors in the environmental model. Building upon the _many wrongs principle_ [33, 34], we introduce a quantity that allows us to rank the models of the environment in the Bayesian framework, providing a principled way to blend information coming from several (inevitably wrong) models. We show that merging the information gathered from a number of different models – a procedure that we dubbed the Weighted Bayesian Update (WBU) – helps to reliably infer the source location. In order to test these ideas and the proposed methodology in realistic conditions, we devise a set of direct numerical simulations that mimic the emission of a localized odor source in a three-dimensional turbulent environment. Taking advantage of the Galilean invariance of the Navier-Stokes equation, we model the odor transport by Lagrangian tracers advected by the flow with a constant mean wind, obtaining odor plumes that closely approximate those observed in the atmospheric boundary layer [11, 12]. 
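The core of such a weighted update can be condensed into a few lines: run a standard Bayesian update under each candidate model, score each model by the log-evidence it assigns to the incoming measurements, and blend the per-model posteriors with evidence-based weights. The sketch below is a deliberately simplified 1-D toy with an exponential detection likelihood and mis-specified length scales; it illustrates the weighting idea only and is not the implementation used in the paper (whose likelihoods are effective models of turbulent detections).

```python
import math
import random

# Toy sketch of a weighted Bayesian update. Each "model" is a deliberately
# wrong detection likelihood (wrong decay scale); models are scored by the
# log-evidence of the observed data, and the per-model posteriors are blended
# with evidence-based weights. All numbers here (grid, scales, sensor
# positions) are illustrative assumptions.

random.seed(0)
GRID = list(range(50))                  # candidate source positions
TRUE_SOURCE, TRUE_SCALE = 12, 20.0
SENSORS = [5, 30]

def p_detect(src, sensor, scale):
    """Toy detection likelihood: decays with source-sensor distance."""
    return math.exp(-abs(sensor - src) / scale)

SCALES = [5.0, 15.0, 40.0]              # three "wrong" models
posteriors = [[1.0 / len(GRID)] * len(GRID) for _ in SCALES]
log_evidence = [0.0] * len(SCALES)

for step in range(400):                 # stream of binary sensor readings
    sensor = SENSORS[step % len(SENSORS)]
    hit = random.random() < p_detect(TRUE_SOURCE, sensor, TRUE_SCALE)
    for m, scale in enumerate(SCALES):
        like = [p_detect(x, sensor, scale) if hit
                else 1.0 - p_detect(x, sensor, scale) for x in GRID]
        z = sum(l * p for l, p in zip(like, posteriors[m]))
        log_evidence[m] += math.log(z)  # running model score
        posteriors[m] = [l * p / z for l, p in zip(like, posteriors[m])]

# Blend posteriors with weights proportional to each model's evidence.
w = [math.exp(le - max(log_evidence)) for le in log_evidence]
w = [x / sum(w) for x in w]
blended = [sum(w[m] * posteriors[m][i] for m in range(len(SCALES)))
           for i in range(len(GRID))]
estimate = max(GRID, key=lambda i: blended[i])
print("weights:", [round(x, 3) for x in w], "MAP estimate:", estimate)
```

The blending step is the point: instead of committing to one mis-specified model, the posterior mass is shared among models in proportion to how well each predicts the data stream.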
Here, for the sake of a first benchmark of the localization algorithms, we shall assume the source lies on the same plane as the sensors, which can thus detect odor particles contained in a thin slab parallel to the wind [see Fig.1(a)]. Then, we use a simplified (effective) model for the environment that depends on a few parameters only and neglects temporal and spatial correlations of the odor signal. In spite of the obvious shortcomings of this model, we show how, by suitably weighting different models (i.e. the same one with different parameters), WBU outperforms widely-used Monte Carlo methods [21, 35] when locating the odor source with a specified level of accuracy. By performing several tests with synthetic data and empirically based (_a posteriori_) models, we trace the origin of the problems for Monte Carlo methods to the effects of correlations in the realistic odor signal, to which the WBU shows superior resilience. The paper is organized as follows. In Sec. II, we illustrate the setup where the numerical simulations have been carried out and describe the simplified model of the environment as well as the algorithms employed, namely the weighted Bayesian update and another approach based on Monte Carlo techniques. We then present the results in Sec. III, which is divided into two parts. In the first one, Sec. III.1, we discuss and compare the performance of the algorithms in the setting closest to reality. In Sec. III.2, we then systematically study how model inaccuracy affects the quality of the source localization. Finally, we summarize our findings and provide an outlook for future studies in Sec. IV. ## II Methods ### II.1 Setup of the numerical simulations Let us assume we aim to locate the position $\bm{r}_{\rm s}$ of an odor source emitting at rate $Q$ within a two-dimensional square grid $\Omega$ of size $L\times L$ and lattice spacing $\Delta x$. 
The odor particles are swept by the underlying turbulent flow characterized by a mean wind in a given direction $\hat{\bm{U}}$. We then place a number $N_{\rm s}$ of static sensors in this environment. We arrange the sensors in a lattice that, assuming no prior knowledge of the wind direction, will typically be tilted by an angle $\theta$ with respect to the latter, as depicted in Fig. 1(a). Instead of working with a continuous odor concentration field, we model the odor in terms of particles advected by a turbulent flow. Each particle can be thought of as a patch of odor, or one can consider the number of particles in a given small region as an estimate of the odor concentration. We produced realistic trajectories of odor particles using state-of-the-art direct numerical simulation (DNS) of the incompressible, 3–D Navier-Stokes equations $\displaystyle\partial_{t}\bm{u}+(\bm{u}\cdot\nabla)\bm{u}$ $\displaystyle=-\nabla p+\nu\nabla^{2}\bm{u}+f,$ (1) $\displaystyle\nabla\cdot\bm{u}$ $\displaystyle=0,$ (2) under turbulent conditions with $\mathrm{Re}_{\lambda}\simeq 150.$ Here, $f$ is a random, Sawford-type [36] isotropic forcing at the smallest nonzero wavenumbers of the system, with a correlation time of 160 simulation timesteps. Using a pseudospectral code dealiased according to the two-thirds rule, the system was solved on a $1024\times 512\times 512$ grid, with a uniform spacing $\delta x=\delta y=\delta z\simeq\eta$ (with $\eta$ the Kolmogorov scale), and periodic boundary conditions in all three directions. The timestepping was performed using the second-order explicit Adams-Bashforth method. The system was advected by a uniform mean wind $\bm{U}\approx-2.5u_{\rm rms}\hat{x},$ where $u_{\rm rms}$ is the rms speed of the flow in the comoving frame of the wind, and $\hat{x}$ is the elongated axis of the grid. We produced the mean wind by means of a Galilean transformation. 
The odor particles were modeled as Lagrangian tracer particles, which were emitted by five stationary point sources [Fig. 1(a)]. Each source emitted 1000 particles every 10 simulation timesteps, which corresponds to every $\approx 1/15$ Kolmogorov times $\tau_{\eta}$. The fluid velocities $\bm{u}$ at the particle positions were obtained using a sixth-order B-spline interpolation scheme and then used to evolve the particle positions $\bm{X}$ in time according to $\dot{\bm{X}}=\bm{u}(\bm{X},t)$ over an infinite lattice of copies of the periodic flow. Their positions, velocities, and accelerations were tracked and dumped every $\tau_{\eta},$ for a total of 3015 timesteps. Each source of particles was treated as independent, and we averaged our results over them for the purpose of achieving better statistics. To simulate realistic environmental conditions and emulate turbulent dispersion in the atmospheric boundary layer, we then coarse-grain the particles’ concentration inside a thin layer containing the source and set a threshold $n_{\rm thr}=2$ on the number of particles above which sensors make a detection. A few snapshots of the resulting odor dispersion in the aforementioned arena $\Omega$ are shown in Fig. 2, where the pink patches indicate the odor plume. Figure 2: Three time snapshots showing a qualitative comparison between the performance of the two algorithms discussed in this work. Both approaches use the measurements made by an array of static sensors (circles) looking for a source of odors (red square) advected by a turbulent flow featuring a horizontal wind blowing from right to left. Pink patches indicate the odor plume; greyscale codes the probability of the odor source location obtained from an algorithm based on Bayesian inference; green open triangles depict the candidate source positions yielded by a Sequential Monte Carlo sampling method. 
Time here is in units of observations made by each sensor, which is hereafter assumed to be equal to the Kolmogorov timescale $\tau_{\eta}$ of the flow. ### II.2 Model of the environment To localize the source within this setting given the sensors’ measurements, we shall now introduce a model that captures the mean-flow and mean-diffusion properties of the environment. If, on the one hand, we assume we have statistical knowledge of the environment through, for example, the history of prior measurements in the field, we can compute empirically the probability of detecting an odor particle from the time average of the DNS data [Fig. 1(b)]. We will hereafter refer to this distribution as the _empirical likelihood_, in accordance with Bayesian nomenclature. The use of such a likelihood has the advantage of avoiding the complication of fitting some environmental parameters while looking for the source. However, it relies on a far more detailed prior knowledge of the environment, which is not generally available. On the other hand, we can still exploit it to understand the strengths and weaknesses of the localization algorithms, as discussed in Sec. III.2. In a realistic setup, we are therefore compelled to use a statistical description of odor encounters in a turbulent flow to infer the source location. Out of a wide range of possibilities [37], we model the turbulent transport of odor particles emitted at rate $Q$ by a point source as an effective advection-diffusion process [38] $\partial_{t}c+\bm{U}\cdot\bm{\nabla}c=D\nabla^{2}c+Q\delta(\bm{r}-\bm{r}_{\rm s})-c/\tau\,,$ (3) where $c$ is the odor concentration field, $\bm{U}$ the mean wind featured by the turbulent flow, $\tau$ the lifetime of the odors, and $\bm{r}_{\rm s}$ the source position. Note that the combination of molecular and turbulent diffusivity (due to the flow velocity fluctuations) is here described by a single effective (eddy) diffusion coefficient $D$ [39]. 
Despite being a strong oversimplification that ignores important multiscale, non-Gaussian properties of the underlying turbulence, as well as spatiotemporal fluctuations in the scalar advection, this model reasonably captures the mean field properties of the odor concentration [12]. In the stationary regime, Eq. (3) has an analytical solution, which in three dimensions reads (see, e.g., [18]) $c(\bm{r}-\bm{r}_{\rm s})=\frac{Q}{4\pi D\lVert\bm{r}-\bm{r}_{\rm s}\rVert}\exp\left[\frac{(\bm{r}-\bm{r}_{\rm s})\cdot\hat{\bm{U}}-\lVert\bm{r}-\bm{r}_{\rm s}\rVert}{\lambda}\right]\,,$ (4) where we have assumed $\tau\gg 1$. A key advantage of the adopted model is that it essentially depends only on three environmental parameters, i.e., the source emission rate $Q$, the mean wind direction $\hat{\bm{U}}$, and the characteristic length scale of the flow $\lambda\equiv 2D/U$. To mimic the sparseness and intermittency of the odor signals observed from the DNS, and typical of turbulent environments, we assume that detection is a random process modeled in terms of a Poisson process with mean $\mu$ [18, 40]. Therefore, the probability that a sensor makes a detection is given by $p(h_{i}|\bm{r}_{i}-\bm{r}_{\rm s})=\frac{[\mu(\bm{r}_{i}-\bm{r}_{\rm s})]^{h_{i}}\exp[-\mu(\bm{r}_{i}-\bm{r}_{\rm s})]}{h_{i}!}\,,$ (5) where $h_{i}$ is the number of odor particles detected by the $i$-th sensor. The mean number $\mu$ of particles hitting, within a time interval $\Delta t$, the $i$-th sensor is related to the mean concentration (4) via the classical Smoluchowski formula [41] $\mu(\bm{r}_{i}-\bm{r}_{\rm s})=4\pi aD\Delta t\,c(\bm{r}_{i}-\bm{r}_{\rm s})=\frac{Q}{d_{i}}a\Delta t\exp\left[\frac{(\bm{r}_{i}-\bm{r}_{\rm s})\cdot\hat{\bm{U}}-d_{i}}{\lambda}\right]\,,$ (6) where $\bm{r}_{i}$ stands for the $i$-th sensor position, $d_{i}\equiv\lVert\bm{r}_{i}-\bm{r}_{\rm s}\rVert$, and $a$ is the sensors’ radius. 
Hereafter, we will actually assume that each sensor can detect the presence of odors only if the number of particles within its radius $a=\Delta x/2$ exceeds a particular threshold. In other words, the sensors can only perform binary measurements (i.e., $h_{i}=\\{0,1\\}$), such that the probability of detection (5) simplifies into $\begin{cases}p(0|\bm{r}_{i}-\bm{r}_{\rm s})=\exp[-\mu(\bm{r}_{i}-\bm{r}_{\rm s})]\\\ p(1|\bm{r}_{i}-\bm{r}_{\rm s})=1-\exp[-\mu(\bm{r}_{i}-\bm{r}_{\rm s})]\,.\end{cases}$ (7) ### II.3 Weighted Bayesian Update: a new way to rank and exploit _many wrong_ models Each measurement made by each sensor thus provides information about the position of the source $\bm{r}_{\rm s}$, which can be processed employing Bayesian inference. The whole set of measurements from all sensors can be used to update a probability map – the “posterior” or “belief” in Bayesian jargon – of the source’s location defined over the whole arena $\Omega$, i.e. $B(\bm{r})\equiv\mathrm{Prob}(\bm{r}_{\rm s}=\bm{r})$, which we shall dub as _common belief_ to emphasize that it exploits all sensors’ detections. Since we assume no prior knowledge and that the source cannot be at the same location as one of the sensors, the belief is always initialized to a uniform distribution and set to zero only in the sensors’ positions. Assuming the simultaneous measurements made by all the $N_{\rm s}$ sensors at time $t$ are independent, the overall conditional probability of a set of observations $\bm{h}^{(t)}$ for a possible given source position $\bm{r}$ is simply a product: $\mathcal{L}(\bm{h}^{(t)}|\bm{r})=\prod\limits_{i=1}^{N_{\rm s}}p(h_{i}^{(t)}|\bm{r}_{i}-\bm{r})\,,$ (8) which depends on the model of the environment as specified in Eq. (7), and is also known as the _likelihood_ function in Bayesian terminology. 
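As a concrete illustration, the per-sensor detection model of Eqs. (6)-(7) can be sketched in a few lines of Python. The function names and all numerical values below are illustrative choices of ours, not the parameters used in the simulations:

```python
import numpy as np

def mean_hits(r_i, r_s, U_hat, Q, lam, a, dt):
    """Mean number of particles hitting a sensor at r_i within dt,
    for a source at r_s and unit wind direction U_hat (Eq. 6)."""
    d = np.linalg.norm(r_i - r_s)
    return (Q / d) * a * dt * np.exp((np.dot(r_i - r_s, U_hat) - d) / lam)

def p_detect(h, r_i, r_s, U_hat, Q, lam, a, dt):
    """Binary detection likelihood of Eq. (7): h is 0 (no hit) or 1."""
    mu = mean_hits(r_i, r_s, U_hat, Q, lam, a, dt)
    return np.exp(-mu) if h == 0 else 1.0 - np.exp(-mu)

# illustrative setup: source at the origin, wind blowing toward -x,
# one sensor downwind and one the same distance upwind
src = np.array([0.0, 0.0])
wind = np.array([-1.0, 0.0])
downwind = np.array([-5.0, 0.0])
upwind = np.array([5.0, 0.0])
p_dw = p_detect(1, downwind, src, wind, Q=10.0, lam=2.0, a=0.5, dt=1.0)
p_uw = p_detect(1, upwind, src, wind, Q=10.0, lam=2.0, a=0.5, dt=1.0)
```

As expected from Eq. (6), the sensor sitting downwind of the source is exponentially more likely to register a hit than the one placed upwind at the same distance.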
Given a sequential process like the one at hand, at every time step (i.e., once all sensors have performed a measurement), the belief is updated following Bayes’ rule [42] $B^{(t)}(\bm{r})=\frac{\mathcal{L}(\bm{h}^{(t)}|\bm{r})B^{(t-1)}(\bm{r})}{\int_{\Omega}{\rm d}\bm{r}\,\mathcal{L}(\bm{h}^{(t)}|\bm{r})B^{(t-1)}(\bm{r})}\,,$ (9) where time $t$ is hereafter measured in units of observations made by each sensor. Under this update rule, the belief is guaranteed to converge to the correct solution as long as the correct model of the environment is deployed [43]. This is, however, a rather uncommon case in any realistic scenario, and the model used to update the common belief will always be wrong. Furthermore, wrong models are indistinguishable from one another during the search since they always make the belief converge to a source position, regardless of whether that is the right one or not. Therefore, finding a way to rank the models and assess their reliability is of great relevance for any applications. In order to introduce the quantity we used for ranking the models, it is useful to rewrite Bayes’ update rule (9) in a different way: $B^{(t)}(\bm{r})=\frac{\prod\limits_{i=1}^{N_{\rm s}}b_{i}^{(t)}(\bm{r})}{\mathcal{Z}}\,,\;\;\;\;\;\;b_{i}^{(t)}(\bm{r})=\frac{p(h_{i}^{(t)}|\bm{r}_{i}-\bm{r})b_{i}^{(t-1)}(\bm{r})}{\int_{\Omega}{\rm d}\bm{r}\,p(h_{i}^{(t)}|\bm{r}_{i}-\bm{r})b_{i}^{(t-1)}(\bm{r})}\,,$ (10) where $\mathcal{Z}\equiv\int_{\Omega}{\rm d}\bm{r}\,\prod\limits_{i=1}^{N_{\rm s}}b_{i}^{(t)}(\bm{r})$. That is to say, owing to the assumption of independence between measurements, the common belief can be built from the superposition of the _private_ beliefs (i.e. the belief that can be constructed for each sensor only on the basis of its own measurements history) $b_{i}$ of the $i=1,...,N_{\rm s}$ sensors, each of which is updated independently at every time step via Bayes’ rule. 
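On a discretized arena, the update of Eqs. (9)-(10) and the normalization constant $\mathcal{Z}$ reduce to a few array operations. The sketch below uses toy one-dimensional beliefs and hypothetical likelihood values rather than simulation output:

```python
import numpy as np

def bayes_update(belief, per_sensor_likelihoods):
    """One sequential Bayesian step, Eq. (9): multiply the belief by the
    factorized likelihood of Eq. (8) and renormalize over the grid."""
    L = np.ones_like(belief)
    for lik in per_sensor_likelihoods:
        L = L * lik
    posterior = L * belief
    return posterior / posterior.sum()

def overlap_integral(private_beliefs):
    """Z = sum over the grid of the product of the sensors' private
    beliefs: large when the sensors agree on a source position."""
    return np.prod(np.stack(private_beliefs), axis=0).sum()

# toy grid of five candidate source cells, uniform prior,
# hypothetical per-sensor likelihood maps for the latest measurements
prior = np.full(5, 0.2)
sensor_liks = [np.array([0.1, 0.2, 0.4, 0.2, 0.1]),
               np.array([0.1, 0.3, 0.4, 0.1, 0.1])]
posterior = bayes_update(prior, sensor_liks)

# agreeing private beliefs overlap more than disjoint ones
z_agree = overlap_integral([np.array([0.5, 0.5]), np.array([0.5, 0.5])])
z_disjoint = overlap_integral([np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

The last two lines illustrate why $\mathcal{Z}$ measures consensus: private beliefs concentrated on the same cells yield a strictly larger overlap than mutually exclusive ones.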
The key advantage of this change of perspective lies in the interpretation of the update formula [Eqs. (9-10)]. Indeed, the hitherto overlooked normalization constant $\mathcal{Z}$ essentially quantifies how much the sensors’ _private_ beliefs overlap or, in other words, how much they agree on a source position given their measurements and the model at hand. In fact, in Appendix A, we show analytically that this quantity achieves an asymptotic global maximum when the model is exact (even in the presence of correlations between observations). We also show numerically that this quantity is typically smaller the farther the model is from the ground truth. This makes $\mathcal{Z}$, hereafter referred to as the _overlap integral_, an ideal candidate for our task, as we can use it to weight different models of the environment. To this end, let us scan the parameter space of the stochastic model introduced above [see Eqs. (6)-(7)], assuming knowledge of only the mean wind direction $\hat{\bm{U}}$, which can always be measured in practice using an anemometer, and run the Bayes update (9) independently for each set of parameters $\bm{P}\equiv\\{Q,\lambda\\}$. Then, we shall blend the information collected into a master belief as $\mathcal{B}^{(t)}_{\rm M}(\bm{r})=\frac{\prod\limits_{j=1}^{M}\left[B_{j}^{(t)}(\bm{r})\right]^{\beta_{j}^{(t)}}}{\int_{\Omega}{\rm d}\bm{r}\,\prod\limits_{j=1}^{M}\left[B_{j}^{(t)}(\bm{r})\right]^{\beta_{j}^{(t)}}}\,,\;\;\;\beta_{j}^{(t)}\equiv\frac{\mathcal{Z}_{j}^{(t)}}{\sum\limits_{j=1}^{M}\mathcal{Z}_{j}^{(t)}}\,,$ (11) where $M$ is the total number of distinct sets $\bm{P}_{j}=\\{Q_{j},\lambda_{j}\\}$ of model parameters considered, and the index $j$ refers to which of these sets was used to obtain the common belief $B_{j}^{(t)}$ at time $t$ and the corresponding overlap integral $\mathcal{Z}_{j}^{(t)}$. The algorithmic procedure described in Eqs. 
(9)$-$(11) thus provides a principled way to define a single master belief $\mathcal{B}_{\rm M}$, i.e., the probability distribution about the source location, which is obtained from a weighted average over the results yielded by different models. We will hereafter refer to this approach as the weighted Bayesian update (WBU), whose numerical implementation is detailed in Appendix B. ### II.4 Sequential Monte Carlo with Importance Sampling The above-described WBU method offers a new perspective on the implementation of algorithms for odor source localization in turbulent flows. There already exists, however, a vast literature on methods to address the same kind of problem [25]. One of the most commonly used approaches is based on Monte Carlo sampling methods to estimate the belief [21, 40, 35]. It is, therefore, natural to ask how the WBU’s performance compares with these more conventional approaches. Due to the wide range of potential applications [44], much research has been conducted over the last decades to make these algorithms more computationally efficient and accurate, and, to date, there exist many different variants depending on the specific setup at hand. We refer the reader to Ref. [25] for a comprehensive review of the topic. In the following, we shall use a state-of-the-art version of the so-called _Sequential Monte Carlo_ (SMC) algorithm that also involves a Markov chain Monte Carlo (MCMC) perturbation step [45, 46]. Using such an approach, we can simultaneously infer both the source position $\bm{r}_{\rm s}$ and the (unknown) parameters $\bm{P}$ of the stochastic model of the environment. Let us therefore first define a _sample_ $\bm{\theta}_{i}$ as a possible combination of such source parameters, i.e. $\bm{\theta}_{i}\equiv\\{\bm{r}_{{\rm s},i},\bm{P}_{i}\\}$. 
At every time step, after all sensors have measured, a collection of $N$ of such samples is drawn from the current belief defined in the $\bm{\theta}$ space, and we assign a weight $w$ to each of them equal to the likelihood of the latest measurement as defined in Eq. (8). Upon normalization of the samples’ weights, we shall then compute the effective sample size $N_{\rm eff}$ to avoid the so-called _degeneracy problem_ [46]. Indeed, if $N_{\rm eff}$ goes below a given threshold $N_{\rm thr}$ (typically set to $N/2$), then it is necessary to generate a new set of $N$ samples. This is known as the _resampling_ step. There, each sample $\tilde{\bm{\theta}}_{i}$ is replaced by one selected from the same pool with a probability equal to its weight. After resampling, all weights are then set equal to $1/N$. Next is the so-called Metropolis-Hastings MCMC perturbation step, which consists of moving each sample in its neighborhood and deciding whether to accept or reject the new proposal based on some acceptance criterion [47]. This is essentially done to diversify the $N$ samples and, therefore, improve the sampling efficiency of the Monte Carlo algorithm [46]. More specifically, starting from one of the samples at time $t$, say $\tilde{\bm{\theta}}_{i}^{(t)}$, a Markov chain of length $K$ is generated where new inferences $\hat{\bm{\theta}}^{(t)}$ are drawn from the previous link in the chain, $\tilde{\bm{\theta}}_{i,j-1}^{(t)}$, using a proposal distribution $q(\hat{\bm{\theta}}^{(t)}|\tilde{\bm{\theta}}_{i,j-1}^{(t)})$. Although there exist several valid choices for this distribution [48], here we shall use a Gaussian with mean $\mu=\tilde{\bm{\theta}}_{i,j-1}^{(t)}$ and variance $\sigma^{2}$ as a free hyperparameter, i.e. $q(\hat{\bm{\theta}}^{(t)}|\bm{\theta}^{(t)}_{i,j-1})=\mathcal{N}(\bm{\theta}^{(t)}_{i,j-1},\sigma^{2})$. 
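The degeneracy check and multinomial resampling described above can be sketched as follows, using the standard estimator $N_{\rm eff}=1/\sum_{i}w_{i}^{2}$ for normalized weights (a generic implementation with toy samples, not the paper's exact code):

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized importance weights; when it
    drops below N_thr (typically N/2), resampling is triggered."""
    w = np.asarray(weights)
    return 1.0 / np.sum(w ** 2)

def resample(samples, weights, rng):
    """Multinomial resampling: each slot of the new pool is filled by a
    sample drawn with probability equal to its weight; the weights are
    then reset to 1/N."""
    N = len(samples)
    idx = rng.choice(N, size=N, p=weights)
    return samples[idx], np.full(N, 1.0 / N)

rng = np.random.default_rng(0)
uniform_w = np.full(4, 0.25)                    # healthy weight set
degenerate_w = np.array([1.0, 0.0, 0.0, 0.0])   # fully degenerate set
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_samples, new_w = resample(samples, degenerate_w, rng)
```

With uniform weights $N_{\rm eff}=N$, while the fully degenerate set gives $N_{\rm eff}=1$ and, once resampled, a pool made of copies of the single surviving sample, which is exactly why the subsequent MCMC perturbation step is needed to rediversify it.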
Once the new sample is generated, we shall compute the so-called _acceptance ratio_ [45] $\alpha=\prod\limits_{k=1}^{t}\left[\frac{\mathcal{L}(\bm{h}^{(k)}|\hat{\bm{\theta}}^{(k)})}{\mathcal{L}(\bm{h}^{(k)}|\bm{\theta}^{(k)}_{i,j-1})}\right]\frac{q(\bm{\theta}^{(t)}_{i,j-1}|\hat{\bm{\theta}}^{(t)})}{q(\hat{\bm{\theta}}^{(t)}|\bm{\theta}^{(t)}_{i,j-1})}\,,$ (12) which amounts to the likelihood’s history ratio (also known as the _posterior ratio_) divided by the proposal ratio. Including the latter is necessary to correct the bias introduced by the proposal distribution when it is asymmetric [45]. The proposed sample $\hat{\bm{\theta}}^{(t)}$ will then be accepted as a new link in the chain as long as $\alpha>1$, and with probability $\alpha$ otherwise. Finally, once all samples have been perturbed, we are left with a new approximation of the belief, which reads: $\tilde{B}^{(t)}(\bm{\theta})=1/N\sum_{i=1}^{N}\delta(\bm{\theta}-\bm{\theta}^{(t)}_{i})$. A detailed step-by-step description of the procedure outlined above is given in Appendix B. Analogously to what we observed for the Bayesian update, once the SMC algorithm’s hyperparameters are properly adjusted, it will systematically converge to the correct source location as long as the model of the environment is functionally exact. However, it is not clear how this approach would compare with the WBU in a realistic scenario where it uses an inevitably misspecified model of the environment. ## III Results ### III.1 Stop criterion for source localization The performance of localization algorithms is typically judged based on their ability to estimate the odor source position with a specified accuracy or within a given time limit [25, 49, 50]. However, more generally, we shall test the reliability of such algorithms by envisioning their application in a real-world scenario, where we do not know _a priori_ where the source is and have to decide when to stop looking for it. 
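Since the Gaussian proposal of Eq. (12) is symmetric, the proposal ratio cancels and each move reduces to a Metropolis step on the accumulated log-likelihood. In the sketch below, `log_lik_history` is a hypothetical callable of ours, standing in for the summed log-likelihood of all past measurements for a candidate $\bm{\theta}$:

```python
import numpy as np

def mh_step(theta, log_lik_history, sigma, rng):
    """One Metropolis-Hastings perturbation of a sample. With a symmetric
    Gaussian proposal, the acceptance ratio of Eq. (12) reduces to the
    posterior (likelihood-history) ratio, handled here in log space."""
    proposal = theta + rng.normal(0.0, sigma, size=theta.shape)
    log_alpha = log_lik_history(proposal) - log_lik_history(theta)
    # accept if alpha > 1, otherwise accept with probability alpha
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return proposal
    return theta

# toy check: under a flat likelihood every proposed move is accepted,
# so the sample is displaced away from its starting point
rng = np.random.default_rng(42)
theta0 = np.zeros(2)
theta1 = mh_step(theta0, lambda th: 0.0, sigma=0.1, rng=rng)
```

In practice this step is repeated $K$ times per sample to build the short Markov chain described in the text.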
Therefore, for an algorithm to be effective in practice, it must show a correlation between an observable quantity and the quality of its estimate of the source location. Figure 3: Four panels showing the probability that the distance $d$ between the estimated source position $\bm{\bar{r}}$ and the ground truth $\bm{r}_{\rm s}$ is below a given threshold $\delta$ (indicated in the legend) as a function of the belief/sample standard deviation $\sigma_{\rm loc}$. Lengths are in units of the distance between sensors. The columns differ in the number of sensors, while the two rows correspond to the algorithm used (top: WBU, bottom: SMC). To this end, a good candidate quantity is the current uncertainty one has about the source location. This can be formally defined in the WBU approach as the variance of the master belief (11), which reads $\sigma^{2}_{\rm loc}\equiv\int_{\Omega}\,{\rm d}\bm{r}\,\mathcal{B}_{\rm M}(\bm{r})(\bm{r}-\bm{\bar{r}})^{2}\,,$ (13) where $\bm{\bar{r}}\equiv\int_{\Omega}\,{\rm d}\bm{r}\,\mathcal{B}_{\rm M}(\bm{r})\bm{r}$ is the estimated source location. (The counterparts of $\sigma^{2}_{\rm loc}$ and $\bm{\bar{r}}$ in the SMC algorithm are respectively defined as the variance and mean computed over the Monte Carlo samples.) In the ideal case where the model of the environment is exact, $\sigma^{2}_{\rm loc}$ would be inversely correlated with how close the estimated position is to the ground truth. In other words, the smaller the variance of the belief, the more accurate the source location estimate would be. However, the exact model of the environment is inaccessible to us in any practical scenario. Therefore, such correlation is not guaranteed to hold in general and may depend on the algorithmic procedure employed. 
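On a gridded belief, the estimate $\bm{\bar{r}}$ and the uncertainty $\sigma_{\rm loc}$ of Eq. (13) are straightforward moments; the grid and belief arrays below are toy values of ours, not simulation output:

```python
import numpy as np

def belief_stats(belief, cells):
    """Estimated source location (mean of the belief) and its uncertainty
    sigma_loc, Eq. (13). `cells` is an (Ncells, 2) array of cell centres
    and `belief` the (normalized) belief evaluated there."""
    r_bar = (belief[:, None] * cells).sum(axis=0)
    var = (belief * ((cells - r_bar) ** 2).sum(axis=1)).sum()
    return r_bar, np.sqrt(var)

# toy 2x2 grid: a belief concentrated in one cell has zero spread,
# while a uniform belief has maximal spread
cells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sharp = np.array([0.0, 1.0, 0.0, 0.0])
broad = np.full(4, 0.25)
r_sharp, s_sharp = belief_stats(sharp, cells)
r_broad, s_broad = belief_stats(broad, cells)
```

Monitoring $\sigma_{\rm loc}$ as it shrinks over time is precisely what the stop criterion discussed next relies on.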
We shall thus quantify the robustness of a given localization algorithm by looking at the probability $P(d<\delta|\sigma_{\rm loc})$ that the distance $d$ between $\bm{\bar{r}}$ and the actual source position $\bm{r}_{\rm s}$ is below a given threshold $\delta$, conditioned on the uncertainty on the source estimate as defined in (13). In particular, the reliability of a localization algorithm can then be measured by checking whether such probability $P(d<\delta|\sigma_{\rm loc})$ correlates with the standard deviation $\sigma_{\rm loc}$. We find that this is indeed the case for the WBU approach when implemented in the most realistic scenario introduced in Sec. II.1. As shown in Fig. 3, the probability of locating the source within a distance smaller than the one between sensors is well-correlated with the corresponding uncertainty when using WBU (top row). However, this does not hold for the more standard SMC algorithm, which hardly shows any correlation (bottom row). Moreover, this turns out to be independent of the number of sensors deployed ($N_{\rm s}=\\{16,64\\}$ are shown in the same figure) and of the threshold $\delta$ on the accuracy (different symbols in each plot). We refer the reader to Appendix B for further details on the numerical simulations. Hence, our results suggest that the WBU is a more reliable method than the standard SMC in the sense that, in WBU, the uncertainty about the source location $\sigma_{\rm loc}$ is a better indicator of the proximity of its estimate to the ground truth than it is in SMC. Furthermore, this has the additional benefit of allowing us to define a robust stop criterion. Indeed, when employing WBU, it is sufficient to monitor the time evolution of $\sigma_{\rm loc}$ to know when one has a good chance of finding the source at a sufficiently small distance from the current estimate and stop the search accordingly. Figure 4: Same as Fig. 
3 but computed using the models which ranked first with the overlap criterion discussed in Sec. II.3. Remarkably, this would no longer hold if we used only the model with the largest overlap integral $\mathcal{Z}$ to infer the source location. Indeed, in such a case, the correlation between the distance from the true source location and its uncertainty almost disappears, with an outcome comparable to that of the SMC algorithm (compare Fig. 4 with the top panels of Fig. 3). This emphasizes the importance of blending information from many wrong models to estimate the source location, as done in the WBU approach. ### III.2 Model misspecification: the effect of correlations Although WBU and SMC both use the same (wrong) model of the environment to infer the source location, they have different outcomes, with the former proving to be less sensitive to model misspecification than the latter. It is, therefore, worth investigating more systematically how model errors affect the performance of such source localization algorithms. As shown in the table in Fig. 5(a), we can consider four possible ways to infer the source location with a given algorithm when dealing with realistic data of turbulent odor dispersion. The first possibility is to use the Lagrangian time series obtained from the DNS to determine whether a sensor detects an odor or not and then deploy an (inevitably misspecified) model of the environment —like the one derived from Eq. (3)— to interpret such detections and compute the likelihood (_Model likelihood-Correlated signal_ —MlCs— top left cell in the table). This is precisely the scenario we have analyzed so far, which is the closest one to reality as it relies on minimal prior knowledge of the environment and directly uses the Lagrangian data. Alternatively, as a model of the environment, we can use the empirical likelihood [Fig. 1(b)] computed from the time history of all the odor trajectories obtained from the DNS. 
On the one hand, this approach [_Empirical likelihood-Correlated signal_ —ElCs— bottom left cell in the table in Fig. 5(a)] greatly simplifies the search since there are no environmental parameters to fit, and it also represents the best existing (empirical) model one can aim for in practice to infer the source location. On the other hand, however, it is still an imperfect model of the environment as it does not capture the time and spatial correlations featured by the odor plume [12]. To study how such correlations affect the source position estimation, we should then consider the case where the detections are not directly taken from the DNS time series but instead randomly drawn from the empirical likelihood. Indeed, the sensors’ measurements are in this way uncorrelated while still featuring the same detection statistics as the original signal. At this point, we can decide whether to deploy the empirical likelihood itself to infer the source location, in which case we would be using the exact model of the environment (_Empirical likelihood-Uncorrelated signal_ —ElUs— bottom right cell in the table), or the usual probabilistic model of detections (_Model likelihood-Uncorrelated signal_ —MlUs— top right cell in the table). The former can be helpful as a benchmark since, provided the number of measurements is large enough, any properly implemented localization algorithm based on Bayesian inference should converge to the correct solution. The latter is useful compared with the first two cases illustrated above because the sensors’ observations are uncorrelated, so it isolates the effect of dealing with a functionally wrong model of the environment. Figure 5: (a) Table summarizing all the possible ways to infer the source location when dealing with realistic data of turbulent odor dispersion. They differ in the model used to compute the likelihood (rows) or in the signal received by the sensors (columns). 
(b) Mean distance $d$ from the source at the end of each episode in units of the distance $d_{\rm{s}}$ between sensors as a function of the number of sensors $N_{\rm s}$. Different symbols correspond to the algorithm employed (SMC: inverted purple triangles; WBU: yellow diamonds). Error bars are the first and third quartiles of the distribution. Each plot refers to one of the four cases illustrated in table (a), as indicated by the box color in the upper left corner. This completes the picture in the table in Fig. 5(a). We are now ready to compare the performance of WBU and SMC in each of the four scenarios just described. To this end, we shall look at the results reported in Fig. 5(b). There, we show the average distance $d$ between the actual source location and its estimate given by either algorithm at the end of distinct episodes of set time length as a function of the number of sensors $N_{\rm s}$ deployed in the search (details in Appendix B). Some observations are in order. First of all, both WBU (yellow diamonds) and SMC (purple inverted triangles) show comparable performance as long as they use the empirical likelihood to model the detection statistics (plots in the bottom row). More specifically, when using the exact model as in the ElUs scenario, both approaches tend to converge to the correct source location as expected (bottom right plot). At the same time, by looking at the ElCs case (bottom left plot), we may observe how much the sole presence of time and spatial correlations in the signal affects the quality of the source estimate. Remarkably, even though the empirical model does not account for such correlations, both algorithms manage to locate the source within a distance $d$ smaller than half of the sensor separation. Now, let us see what changes when using the probabilistic model introduced in Sec. II.2 to infer the source location (MlCs and MlUs scenarios —plots in the top row). 
While using the wrong model has obvious shortcomings and further degrades the performance of both approaches, overall, they prove to be quite robust and still manage, on average, to locate the source within a distance smaller than the one between sensors. Furthermore, comparing the results obtained with uncorrelated signals (MlUs, top right plot) with the most realistic scenario featuring the correlated signal (MlCs, top left plot), we may notice that the SMC’s performance gets substantially worse in the latter case, while WBU is basically unaffected. The higher sensitivity of SMC to correlations can be rationalized by looking at the time evolution of the model parameters, namely the emission rate $Q$ and the characteristic length scale of turbulent odor dispersion $\lambda$, inferred by such an algorithm. As shown in Fig. 6, when the signal received by the sensors is correlated in space and time (blue curves), SMC has the tendency to overestimate both $\lambda$ and $Q$, even saturating to the maximum allowed value. The picture dramatically changes when the sensors’ observations are instead uncorrelated (orange curves). There, the inferred values of the parameters are much closer to the ones obtained from the best fit of the empirical likelihood (dashed red line in both plots), which means the algorithm is, in this case, pointing in the right direction. Figure 6: Comparison between the values of the model parameters (left: $\lambda$; right: $Q$) inferred by the SMC algorithm as a function of time when directly using the DNS time series (blue curves, time-correlated signal) and when extracting the detections from the corresponding empirical likelihood (orange curves, uncorrelated signal). In both cases, each curve corresponds to the episodes-averaged value of the parameter obtained in one of the $30$ configurations (different source positions) where the algorithm has been tested. 
The histograms on the right of each plot have been obtained by considering the inferred values of the parameters at times $t\geq 300$, while the horizontal dashed red lines stand for the values of $\lambda$ and $Q$ derived from the best fit of the empirical likelihood. Data shown here correspond to the setup with $N_{\rm s}=16$ sensors. Hence, the observed sensitivity of the SMC algorithm in the most realistic scenario lies in the fact that it must perform a real-time inference of the model parameters (while looking for the source), which is greatly impacted by the presence of time correlations in the signal. However, this is not the case for the WBU approach since it uses a predetermined and discrete set of parameters, then runs the different models independently, and only at the end merges their outcome based on the current ranking provided by the overlap integral of each model. The _static_ nature of the parameter space and the use of all available (misspecified) models are thus the defining strengths of the newly introduced approach.

## IV Discussion

In this work, we revisited the problem of locating an odor source in a realistic turbulent environment with a network of static sensors. To this end, we employed DNS of the incompressible 3–D Navier-Stokes equations to simulate odor dispersion in the atmospheric boundary layer. We then used the data of the Lagrangian particle trajectories to systematically investigate how the errors made in modeling such an environment, a problem unavoidable in field applications, affect the performance of source localization algorithms based on Bayesian inference. Within this framework, we identified a quantity that effectively ranks different models of the environment just based on the history of observations made by the sensors in the field. This is what we called the _overlap integral_ $\mathcal{Z}$ as it essentially measures the degree of consensus among the sensors about a single source location.
This quantity, which is maximized when the model is exact, also represents the posterior estimate of the probability of making the sequence of observations, integrated over all possible source locations. Thus, our weighted Bayesian update (WBU) approach to source localization may be viewed as a form of maximum likelihood estimation (MLE) applied to the model itself. The use of MLE to quantify the degree of model misspecification was previously studied in an abstract setting in Ref. [52]. Through our analysis, we may conclude that WBU is a more robust approach to locating an odor source in a turbulent environment than state-of-the-art methods relying on Monte Carlo sampling techniques. In fact, our results highlight some fundamental weaknesses of the Sequential Monte Carlo (SMC) algorithm, which features a strong sensitivity to the presence of time/space correlations in the sensors’ detections, especially in the most general case where it must also infer the model parameters in real-time together with the source location. This is in contrast with WBU, which is insensitive to correlations and proved to be only mildly affected by the use of wrong models. In particular, WBU, as opposed to SMC, is able to maintain the desired correlation between the uncertainty about the source location and the distance of its current estimate from the ground truth. Thus, WBU provides a robust stop criterion for the search. The main novelty of the WBU approach stems from the idea that merging the information gathered from many possible interpretations of the measurements recorded by the sensors may help in compensating for model errors consistently with the _many wrongs principle_ [33, 34]. In the realm of olfactory search, this represents a fundamental stepping stone toward a thorough investigation of the effect of misspecified models on the localization of odor sources in turbulent environments, an unavoidable difficulty in practical applications. 
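The weighted Bayesian update can be condensed into a few lines of code. The following is a minimal sketch on a 1-D grid with a simple exponential Bernoulli detection model; the grid, sensor positions, detection model, and candidate parameter values are illustrative assumptions, not the paper's DNS-based setup, and only the parameter lambda is varied across models.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D arena: a grid of candidate source locations and a few static sensors.
grid = np.linspace(0.0, 10.0, 101)   # candidate source positions r
sensors = np.array([2.0, 5.0, 8.0])  # sensor positions r_i (illustrative)
r_true = 6.3                         # ground-truth source location

def detect_prob(dist, lam):
    """Toy Bernoulli detection model: hit probability decays with distance."""
    return np.exp(-np.abs(dist) / lam)

lam_true = 2.0                       # true model parameter
models = [0.5, 1.0, 2.0, 4.0]        # candidate (possibly wrong) values of lambda

beliefs = [np.full(grid.size, 1.0 / grid.size) for _ in models]  # uniform priors
logZ = np.zeros(len(models))         # running overlap integrals (log scale)

for t in range(500):
    # Sensors make independent binary observations under the true model.
    h = rng.random(sensors.size) < detect_prob(sensors - r_true, lam_true)
    for j, lam in enumerate(models):
        p = detect_prob(sensors[:, None] - grid[None, :], lam)   # (sensors, grid)
        lik = np.prod(np.where(h[:, None], p, 1.0 - p), axis=0)  # joint likelihood
        post = beliefs[j] * lik
        logZ[j] += np.log(post.sum())    # overlap integral accumulates the evidence
        beliefs[j] = post / post.sum()   # Bayes' rule

# Master belief: mix the per-model posteriors, weighted by their overlap integrals.
w = np.exp(logZ - logZ.max())
w /= w.sum()
master = sum(wi * b for wi, b in zip(w, beliefs))
print("best model lambda:", models[int(np.argmax(logZ))])
print("estimated source:", grid[int(np.argmax(master))])
```

Since the correct lambda is in the candidate set, its overlap integral dominates the weights and the master belief concentrates near the true source, mirroring the behavior discussed above.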
The presented results and methodology have clear potential applications in environmental monitoring [28, 53] and early warnings [29], as they make it possible to reliably identify a potential area of intervention when hazardous substances are detected. It would also be interesting to explore the possibility of capitalizing on the use of many wrong models in the case of olfactory search by single [18, 20] or multiple [19] moving agents, where it could help mitigate model misspecification and thus allow for better decisions by the agent(s). Beyond olfactory search, the WBU approach is, in principle, also adaptable to any other Bayesian inference problem with model misspecification. Finally, we suggest that the overlap integral could instead be maximized by gradient ascent, which would afford a reduction in computational cost and memory load while losing the extra information from running many wrong models. It would be interesting, in particular, to explore the use of a powerful, many-parameter model such as a neural network in such a context.

## Acknowledgments

We thank M. Sbragaglia and M. Vergassola for useful discussions. We acknowledge financial support under the National Recovery and Resilience Plan (NRRP), Mission 4, Component 2, Investment 1.1, Call for tender No. 104 published on 2.2.2022 by the Italian Ministry of University and Research (MUR), funded by the European Union – NextGenerationEU– Project Title Equations informed and data-driven approaches for collective optimal search in complex flows (CO-SEARCH), Contract 202249Z89M. – CUP B53D23003920006 and E53D23001610006. This work was also supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 882340).
## Appendix A Analytical argument for the convergence of the overlap integral Let the true likelihood be $f(h|\bm{r}_{i}-\bm{r})$ and let Bayesian updates be performed with a model $g(h|\bm{r}_{i}-\bm{r}),$ not necessarily correctly specified. Let the prior be $\pi(\bm{r}),$ and let the true source location be $\bm{r}^{*}.$ After $T$ timesteps, we have $\mathcal{Z}=\int_{\Omega}d\bm{r}\,\pi(\bm{r})\prod_{i=1}^{N_{s}}\prod_{t=1}^{T}g(h_{i}^{(t)}|\bm{r}_{i}-\bm{r})=\int_{\Omega}d\bm{r}\,\pi(\bm{r})\exp\left(\sum_{i=1}^{N_{s}}\sum_{t=1}^{T}\log g(h_{i}^{(t)}|\bm{r}_{i}-\bm{r})\right).$ (A1) As long as each observation $h_{i}^{(t)}$ has bounded variance and correlations between observations decay to zero over time (i.e. $\mathrm{Cov}(h^{(t)},h^{(t+\tau)})\xrightarrow{\tau\to\infty}0$), we may apply the (weak) law of large numbers [54], which implies that the argument of the exponential converges in probability to its expectation over observations $h$: $\displaystyle\mathcal{Z}$ $\displaystyle\to\bar{\mathcal{Z}}_{T}\equiv\int_{\Omega}d\bm{r}\,\pi(\bm{r})\exp\left(T\sum_{i=1}^{N_{s}}\sum_{h\in\\{0,1\\}}f(h|\bm{r}_{i}-\bm{r}^{*})\log g(h|\bm{r}_{i}-\bm{r})\right)$ $\displaystyle=C_{T}\int_{\Omega}d\bm{r}\,\pi(\bm{r})\exp\left[-T\sum_{i=1}^{N_{s}}D_{\rm KL}(f(\cdot|\bm{r}_{i}-\bm{r}^{*})\parallel g(\cdot|\bm{r}_{i}-\bm{r}))\right],$ (A2) where $C_{T}=\exp\left(-T\sum_{i=1}^{N_{s}}H(f(\cdot|\bm{r}_{i}-\bm{r}^{*}))\right),$ (A3) $D_{\rm KL}(p\parallel q)$ indicates the Kullback-Leibler divergence between distributions $p$ and $q,$ and $H$ is the Shannon entropy. By Gibbs’ inequality, $F(\bm{r})\equiv\sum_{i=1}^{N_{s}}D_{\rm KL}(f(\cdot|\bm{r}_{i}-\bm{r}^{*})\parallel g(\cdot|\bm{r}_{i}-\bm{r}))\geq 0,$ (A4) with equality iff $f(h|\bm{r}_{i}-\bm{r}^{*})=g(h|\bm{r}_{i}-\bm{r})$ for each $1\leq i\leq N_{s}$ and each $0\leq h\leq N_{o}-1$ (where $N_{o}$ is the number of possible observations, which we have set to two). 
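The law-of-large-numbers step leading to Eq. (A2) can be checked numerically for Bernoulli observations. The sketch below (with illustrative hit probabilities for three sensors at one fixed candidate location) verifies that the per-step log-evidence under a misspecified model converges to minus the sum of the entropy and Kullback-Leibler terms:

```python
import numpy as np

rng = np.random.default_rng(1)

# True vs. modeled hit probabilities at N_s = 3 sensors for a single
# candidate source location (numbers are illustrative).
f = np.array([0.7, 0.4, 0.2])  # true probabilities f(1 | r_i - r*)
g = np.array([0.6, 0.5, 0.1])  # misspecified model g(1 | r_i - r)

def kl_bern(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), elementwise."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

T = 20000
h = rng.random((T, f.size)) < f                       # observations drawn from f
loglik = np.where(h, np.log(g), np.log(1 - g)).sum()  # sum_t sum_i log g(h)

# LLN prediction: (1/T) * loglik -> -sum_i [ H(f_i) + KL(f_i || g_i) ]
entropy = -(f * np.log(f) + (1 - f) * np.log(1 - f)).sum()
predicted = -(entropy + kl_bern(f, g).sum())
print("empirical rate:", loglik / T)
print("predicted rate:", predicted)
```

Because the KL term is strictly positive for the wrong model, the evidence, and hence the overlap integral, is exponentially smaller in $T$ than for the exact model, in line with Eq. (A5).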
The set of points $\bm{r}$ where this holds is the intersection of $N_{s}\times N_{o}$ manifolds of codimension 1; assuming $f$ and $g$ are well-behaved functions of position, for large enough $N_{s}$ this intersection should be empty unless $f(h|\bm{r}_{i}-\bm{r}^{*})=g(h|\bm{r}_{i}-\bm{r}^{*})$ for each $i$ and $h$, in which case the unique member will be the point $\bm{r}=\bm{r}^{*}.$ Figure A1: (a) Curves of the overlap integral $\mathcal{Z}$ as a function of the percentage error on either of the model parameters. Different curves refer to distinct times, as indicated in the legend. (b) Colormap of the overlap integral $\mathcal{Z}$ at time $t=10^{3}$ as a function of the percentage error on both model parameters. As expected, it features a maximum when the model is exact ($\varepsilon_{\lambda}=\varepsilon_{Q}=0$) and decays to zero as we move away from it. Data shown here have been obtained by averaging over $10^{2}$ episodes of time length $T=10^{3}$ in units of observations made by each sensor. Since $T$ is large, Laplace’s method gives $\bar{\mathcal{Z}}_{T}\sim T^{-d/2}C_{T}\sum_{j=1}^{k}a_{j}\exp(-TF(\hat{\bm{r}}_{j}))=T^{-d/2}C_{T}\exp(-T\hat{F})\sum_{j=1}^{k}a_{j},$ (A5) where $d$ is the number of spatial dimensions, $\hat{F}$ is the minimum of $F$ and is attained on the set $\\{\hat{\bm{r}}_{j}\\}_{j=1}^{k},$ and each $a_{j}$ is nonnegative and independent of $T.$ Therefore, $\bar{\mathcal{Z}}_{T}$ is maximized when $f$ and $g$ agree at each $\bm{r}_{i}-\bm{r}^{*}$ and is exponentially smaller when they do not. This is the key result. We can numerically show that the overlap integral $\mathcal{Z}$ relaxes to zero faster the farther from the ground truth the model is. To this end, let us take into account the same setup as the one shown in Fig. 2, with the source located in $\bm{r}_{\rm s}=(92,70)$ and $N_{\rm s}=16$ sensors placed on a square lattice tilted by an angle $\theta=1.04\,\rm{rad}$ with respect to the wind direction. 
We can then use the probabilistic model described in Sec. II.2 to generate the observations made by the static sensors and set the source emission rate and the characteristic length scale of the turbulent odor dispersion to $Q^{*}=5.5$ and $\lambda^{*}=3.5$, respectively. Since we know the exact model of the detection statistics, we can systematically study how the overlap integral $\mathcal{Z}$ varies depending on the error made on such model parameters. Figure A1(a) shows the values of $\mathcal{Z}$ obtained when either the value of the characteristic length scale $\lambda$ (top panel) or the one of the emission rate $Q$ (bottom panel) is wrong. In both cases, as time progresses, the curve of the overlap integral peaks more and more around the correct value of the parameter, i.e., the one corresponding to a percentage error $\varepsilon_{*}=0$, and it smoothly goes to zero as the error grows. More generally, this still holds when both model parameters are misspecified, as shown in the colormap in Fig. A1(b). Let us now comment on one feature that stands out from these plots, that is, the observed asymmetry in the values of $\mathcal{Z}$ when $\lambda$ is under/over-estimated. In fact, given the same history of measurements, the source will be estimated as farther from the sensors, the greater the estimated value of $\lambda$. This will eventually cause the overlap among the private beliefs to accumulate at the border of the finite-size arena $\Omega$ where simulations are performed. Therefore, the larger values of the overlap integral $\mathcal{Z}$ when $\lambda$ is overestimated are a mere finite-size effect. On the one hand, this is not a problem in the case of a functionally correct model, where the ground truth is attainable (as in this section) since, eventually, the correct model will still be the one with the largest value of $\mathcal{Z}$ anyway. 
On the other hand, however, this effect can persist in the general case of a functionally wrong model (scenario discussed in the main text), and we are compelled to address it for consistency. To this end, during our analysis, we systematically set to zero the values of the overlap for the models that would place the source at a distance $d\leq 5\Delta x$ (with $\Delta x$ being the lattice spacing) from the border of the arena $\Omega$. This avoids the selection of clearly wrong models that only artificially would feature a large value of the overlap $\mathcal{Z}$, preserving the structure and the basic concepts behind the WBU approach. ## Appendix B Details on the numerical simulations In all our numerical simulations, the size of the square arena $\Omega$ has been set to $128\Delta x\times 128\Delta x$, with $\Delta x=1$ being the lattice spacing. The sensors have been placed in a square lattice of side $\ell=88\Delta x$, tilted with an angle $\theta=1.04$ with respect to the mean wind direction. All the data presented in Sec. III have then been obtained by averaging over the results obtained by shifting the odor source in $30$ different locations drawn at random within an inner box $108\Delta x\times 108\Delta x$ inside $\Omega$, as illustrated in Fig. A2. This was done to avoid boundary effects that would have altered the performance statistics of the localization algorithms. Moreover, to gather more statistics for each source location, we have divided the time series of the odor dispersion obtained from the DNS described in Sec. II.1 into $25$ runs of set time length $T=600$ (in units of the Kolmogorov timescale $\tau_{\eta}$). Figure A2: Illustration of the sensors’ placement (magenta circles, here $N_{\rm s}=16$) and of the $30$ source locations (green squares) drawn at random inside the arena. 
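The sensor placement and source sampling described in this appendix can be reproduced in a few lines. The sketch below assumes a 4 × 4 lattice ($N_{\rm s}=16$) and uses the geometry quoted above; the random seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

L = 128.0    # arena side (in units of the lattice spacing dx = 1)
ell = 88.0   # side of the square sensor lattice
theta = 1.04 # tilt with respect to the mean wind direction (rad)
n_side = 4   # 4 x 4 = 16 sensors

# Square lattice centered in the arena, then rotated by theta.
u = np.linspace(-ell / 2, ell / 2, n_side)
gx, gy = np.meshgrid(u, u)
pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
sensors = pts @ rot.T + L / 2

# 30 source locations drawn uniformly in the inner 108 x 108 box,
# i.e. keeping a margin of 10 dx from the border to avoid boundary effects.
margin = (L - 108.0) / 2
sources = rng.uniform(margin, L - margin, size=(30, 2))
print(sensors.shape, sources.shape)
```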
Definition | Symbol | Value ---|---|--- Number of samples | $N$ | 50 Number of MCMC perturbation steps | $K$ | 5 Variance of proposal distribution of $\bm{r}_{\rm s}$ | $\sigma^{2}_{\rm pos}$ | 100 Variance of proposal distribution of $\lambda$ | $\sigma^{2}_{\lambda}$ | 1 Variance of proposal distribution of $Q$ | $\sigma^{2}_{Q}$ | 1 Range of values of $\lambda$ | $[\lambda_{\rm m},\lambda_{\rm M}]$ | [0,10] Range of values of $Q$ | $[Q_{\rm m},Q_{\rm M}]$ | [0,10] Table A1: Values of the hyperparameters used in the SMC algorithm. In order to get the master belief in the WBU approach, we have used in every configuration a total of $M=100$ models, each of them corresponding to a different combination of parameters $\\{Q,\lambda\\}$ in the stochastic model introduced in Sec. II.2, with the specific values of $Q$ and $\lambda$ being $Q=\\{0.7,1.7,\dots,9.7\\}$ and $\lambda=\\{0.7,1.7,\dots,9.7\\}$, respectively. The values of the SMC hyperparameters used in this work are summarized in Table A1. We have also numerically checked that all the results presented here are not qualitatively affected by this choice. Note that, in the SMC implementation, we start from a flat prior defined over the interval $[0;10]$ for both model parameters $Q$ and $\lambda$. Moreover, their values are bounded therein, which is, for consistency, the same range used in the WBU approach. Lastly, the detailed step-by-step explanation of the WBU and SMC implementation is reported in Algorithm 1 and Algorithm 2, respectively. Algorithm 1 Weighted Bayesian update 1:for $j=1$ to $M$ do $\triangleright$ Loop over different models 2: Set initial common belief $B^{(0)}_{j}$ to a uniform distribution. 3: Set model parameters $\bm{P}_{j}=\\{Q_{j},\lambda_{j}\\}$. 4: for $t=1$ to $T$ do $\triangleright$ Loop over time 5: Perform sensors’ measurements $\bm{h}^{(t)}$. 6: Update common belief $B^{(t)}_{j}$ using Bayes’ rule (10). 7: Compute overlap integral $\mathcal{Z}^{(t)}_{j}$. 
8: end for 9:end for 10:Compute master belief $\mathcal{B}_{M}$ at all times from Eq. (11). $\triangleright$ Output master belief from weighted models Algorithm 2 Sequential Monte Carlo with Importance Sampling and perturbation step 1:Set initial belief $\tilde{B}^{(0)}$ to a uniform distribution. 2:for $t=1$ to $T$ do 3: Perform sensors’ measurements $\bm{h}^{(t)}$. 4: for $i=1$ to $N$ do $\triangleright$ Sample’s initialization 5: Draw sample from current belief: $\tilde{\bm{\theta}}_{i}\sim\tilde{B}^{(t-1)}(\bm{\theta})$. 6: Compute weight $w_{i}=\mathcal{L}(\bm{h}^{(t)}|\tilde{\bm{\theta}}_{i})$. 7: end for 8: for $i=1$ to $N$ do 9: Normalize weight: $w_{i}/=\sum_{i=1}^{N}w_{i}$. 10: Compute Effective Sample Size: $N_{{\rm eff}}=1/\sum_{i=1}^{N}w_{i}^{2}$. 11: end for 12: if $N_{\text{eff}}<N_{\text{thr}}$ then $\triangleright$ Resampling step (if needed) 13: for $i=1$ to $N$ do 14: Select $\tilde{\bm{\theta}}_{k}$ with probability $w_{k}$. 15: Put $\tilde{\bm{\Theta}}_{i}=\tilde{\bm{\theta}}_{k}$. 16: end for 17: for $i=1$ to $N$ do 18: Replace $\tilde{\bm{\theta}}_{i}=\tilde{\bm{\Theta}}_{i}$. 19: Set uniform weights: $w_{i}=1/N$. 20: end for 21: end if 22: for $i=1$ to $N$ do $\triangleright$ MCMC perturbation step 23: Select $l\in\\{1,\dots,N\\}$ with probability $w_{l}$. 24: Set $\bm{\theta}^{(t)}_{i,0}=\tilde{\bm{\theta}}^{(t)}_{l}$. 25: for $j=1$ to $K$ do 26: Draw new sample from proposal distribution: $\hat{\bm{\theta}}^{(t)}\sim q(\hat{\bm{\theta}}^{(t)}|\bm{\theta}^{(t)}_{i,j-1})$. 27: Compute acceptance ratio $\alpha$ from Eq. (12). 28: Draw random number $u\sim U[0,1]$. 29: if $u<\min(1,\alpha)$ then 30: Set $\bm{\theta}^{(t)}_{i,j}=\hat{\bm{\theta}}^{(t)}$. 31: else 32: Set $\bm{\theta}^{(t)}_{i,j}=\bm{\theta}^{(t)}_{i,j-1}$. 33: end if 34: end for 35: Set $\bm{\theta}^{(t)}_{i}=\bm{\theta}^{(t)}_{i,\tilde{B}}$. 36: end for 37: Set $\tilde{B}^{(t)}(\bm{\theta})=1/N\sum_{i=1}^{N}\delta(\bm{\theta}-\bm{\theta}^{(t)}_{i})$. 
$\triangleright$ Output belief approximation 38:end for ## References * Bayat _et al._ [2017] B. Bayat, N. Crasta, A. Crespi, A. M. Pascoal, and A. Ijspeert, Environmental monitoring using autonomous vehicles: a survey of recent searching techniques, Curr. Opin. Biotechnol. 45, 76 (2017). * Burgués and Marco [2020] J. Burgués and S. Marco, Environmental chemical sensing using small drones: A review, Sci. Total Environ. 748, 141172 (2020). * Francis _et al._ [2022] A. Francis, S. Li, C. Griffiths, and J. Sienz, Gas source localization and mapping with mobile robots: A review, J. Field Robot. 39, 1341 (2022). * Karafasoulis and Kyriakis [2023] K. Karafasoulis and A. Kyriakis, Spatial localization of radioactive sources for homeland security, in _Gamma Ray Imaging: Technology and Applications_, edited by J. Du and K. K. Iniewski (Springer International Publishing, Cham, 2023) pp. 87–102. * Murlis _et al._ [1992] J. Murlis, J. S. Elkinton, and R. T. Cardé, Odor plumes and how insects use them, Annu. Rev. Entomol. 37, 505 (1992). * Hein _et al._ [2016] A. M. Hein, F. Carrara, D. R. Brumley, R. Stocker, and S. A. Levin, Natural search algorithms as a bridge between organisms, evolution, and ecology, Proc. Natl. Acad. Sci. 113, 9413 (2016). * Aref _et al._ [2017] H. Aref, J. R. Blake, M. Budišić, S. S. S. Cardoso, J. H. E. Cartwright, H. J. H. Clercx, K. El Omari, U. Feudel, R. Golestanian, E. Gouillart, G. F. van Heijst, T. S. Krasnopolskaya, Y. Le Guer, R. S. MacKay, V. V. Meleshko, G. Metcalfe, I. Mezić, A. P. S. de Moura, O. Piro, M. F. M. Speetjens, R. Sturman, J.-L. Thiffeault, and I. Tuval, Frontiers of chaotic advection, Rev. Mod. Phys. 89, 025007 (2017). * Speetjens _et al._ [2021] M. Speetjens, G. Metcalfe, and M. Rudman, Lagrangian Transport and Chaotic Advection in Three-Dimensional Laminar Flows, Appl. Mech. Rev. 73, 030801 (2021). * Frisch [1995] U. Frisch, _Turbulence: The Legacy of A. N. Kolmogorov_ (Cambridge University Press, 1995). 
* Alexakis and Biferale [2018] A. Alexakis and L. Biferale, Cascades and transitions in turbulent flows, Phys. Rep. 767-769, 1 (2018), cascades and transitions in turbulent flows. * Crimaldi and Koseff [2001] J. P. Crimaldi and J. R. Koseff, High-resolution measurements of the spatial and temporal scalar structure of a turbulent plume, Exp. Fluids 31, 90 (2001). * Celani _et al._ [2014] A. Celani, E. Villermaux, and M. Vergassola, Odor landscapes in turbulent environments, Phys. Rev. X 4, 041015 (2014). * Piro [2023] L. Piro, _Optimal navigation in active matter_ , Ph.D. thesis, Georg-August-Universität Göttingen Göttingen (2023). * Reddy _et al._ [2022] G. Reddy, V. N. Murthy, and M. Vergassola, Olfactory sensing and navigation in turbulent environments, Annu. Rev. Condens. Matter Phys. 13, 191 (2022). * Belanger and Willis [1998] J. Belanger and M. Willis, Biologically-inspired search algorithms for locating unseen odor sources, in _Proc. 1998 IEEE International Symposium on Intelligent Control (ISIC) held jointly with IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA) Intell_ (1998) pp. 265–270. * Balkovsky and Shraiman [2002] E. Balkovsky and B. I. Shraiman, Olfactory search at high reynolds number, Proc. Natl. Acad. Sci. 99, 12589 (2002), https://www.pnas.org/doi/pdf/10.1073/pnas.192393499 . * Durve _et al._ [2020] M. Durve, L. Piro, M. Cencini, L. Biferale, and A. Celani, Collective olfactory search in a turbulent environment, Phys. Rev. E 102, 012402 (2020). * Vergassola _et al._ [2007] M. Vergassola, E. Villermaux, and B. I. Shraiman, ‘Infotaxis’ as a strategy for searching without gradients, Nature 445, 406 (2007). * Masson _et al._ [2009] J.-B. Masson, M. B. Bechet, and M. Vergassola, Chasing information to search in random environments, J. Phys. A-Math. Theor. 42, 434009 (2009). * Loisy and Eloy [2022] A. Loisy and C. Eloy, Searching for a source without gradients: how good is infotaxis and how to beat it, Proc. 
R. Soc. A 478, 20220118 (2022), https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2022.0118 . * Keats _et al._ [2007] A. Keats, E. Yee, and F.-S. Lien, Bayesian inference for source determination with applications to a complex urban environment, Atmos. Environ. 41, 465 (2007). * Hutchinson _et al._ [2019] M. Hutchinson, C. Liu, and W.-H. Chen, Source term estimation of a hazardous airborne release using an unmanned aerial vehicle, J. Field Robot. 36, 797 (2019), https://onlinelibrary.wiley.com/doi/pdf/10.1002/rob.21844 . * Aslam _et al._ [2003] J. Aslam, Z. Butler, F. Constantin, V. Crespi, G. Cybenko, and D. Rus, Tracking a moving object with a binary sensor network, in _Proc. of the 1st International Conference on Embedded Networked Sensor Systems_, SenSys ’03 (Association for Computing Machinery, New York, NY, USA, 2003) p. 150–161. * Shankar Rao [2007] K. Shankar Rao, Source estimation methods for atmospheric dispersion, Atmos. Environ. 41, 6964 (2007). * Hutchinson _et al._ [2017] M. Hutchinson, H. Oh, and W.-H. Chen, A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors, Inform. Fusion 36, 130 (2017). * Sättele _et al._ [2015] M. Sättele, M. Bründl, and D. Straub, Reliability and effectiveness of early warning systems for natural hazards: Concept and application to debris flow warning, Reliab. Eng. Syst. Saf. 142, 192 (2015). * Stähli _et al._ [2015] M. Stähli, M. Sättele, C. Huggel, B. W. McArdell, P. Lehmann, A. Van Herwijnen, A. Berne, M. Schleiss, A. Ferrari, A. Kos, D. Or, and S. M. Springman, Monitoring and prediction in early warning systems for rapid mass movements, Nat. Hazards Earth Syst. Sci. 15, 905 (2015). * Tariq _et al._ [2021] S. Tariq, Z. Hu, and T. Zayed, Micro-electromechanical systems-based technologies for leak detection and localization in water supply networks: A bibliometric and systematic review, J. Clean. Prod. 289, 125751 (2021). * Esposito _et al._ [2022] M. Esposito, L. 
Palma, A. Belli, L. Sabbatini, and P. Pierleoni, Recent advances in internet of things solutions for early warning systems: A review, Sensors 22, 10.3390/s22062124 (2022). * Platt and DeRiggi [2012] N. Platt and D. DeRiggi, Comparative investigation of source term estimation algorithms using fusion field trial 2007 data: linear regression analysis, Int. J. Environ. Pollut. 48, 13 (2012), https://www.inderscienceonline.com/doi/pdf/10.1504/IJEP.2012.049647 . * Berk [1966] R. H. Berk, Limiting behavior of posterior distributions when the model is incorrect, Ann. Math. Stat. 37, 51 (1966). * Berk [1970] R. H. Berk, Consistency a posteriori, Ann. Math. Stat. 41, 894 (1970). * Simons [2004] A. Simons, Many wrongs: The advantage of group navigation, Trends Ecol. Evol. 19, 453–455 (2004). * Berdahl _et al._ [2018] A. M. Berdahl, A. B. Kao, A. Flack, P. A. H. Westley, E. A. Codling, I. D. Couzin, A. I. Dell, and D. Biro, Collective animal navigation and migratory culture: from theoretical models to empirical evidence, Philos. Trans. R. Soc. Lond. B Biol. Sci. 373, 20170009 (2018), https://royalsocietypublishing.org/doi/pdf/10.1098/rstb.2017.0009 . * Doucet _et al._ [2000] A. Doucet, S. Godsill, and C. Andrieu, On sequential Monte Carlo sampling methods for Bayesian filtering, Stat. Comput. 10, 197 (2000). * Sawford [1991] B. Sawford, Reynolds number effects in lagrangian stochastic models of turbulent dispersion, Phys. Fluids A: Fluid Dyn. 3, 1577 (1991). * Holmes and Morawska [2006] N. Holmes and L. Morawska, A review of dispersion modelling and its application to the dispersion of particles: An overview of different dispersion models available, Atmos. Environ. 40, 5902 (2006). * Falkovich _et al._ [2001] G. Falkovich, K. Gawȩdzki, and M. Vergassola, Particles and fields in fluid turbulence, Rev. Mod. Phys. 73, 913 (2001). * Biferale _et al._ [1995] L. Biferale, A. Crisanti, M. Vergassola, and A. Vulpiani, Eddy diffusivities in scalar transport, Phys. 
Fluids 7, 2725 (1995). * Ristic _et al._ [2016] B. Ristic, A. Gunatilaka, and R. Gailis, Localisation of a source of hazardous substance dispersion using binary measurements, Atmos. Environ. 142, 114 (2016). * Smoluchowski [1918] M. v. Smoluchowski, Versuch einer mathematischen theorie der koagulationskinetik kolloider lösungen, Z. Phys. Chem. 92, 129 (1918). * Box and Tiao [2011] G. E. Box and G. C. Tiao, _Bayesian inference in statistical analysis_ (John Wiley & Sons, 2011). * Schwartz [1965] L. Schwartz, On Bayes procedures, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 4, 10 (1965). * Fishman [1996] G. S. Fishman, _Monte Carlo: Concepts, Algorithms and Applications_ (Springer Verlag, New York, NY, USA, 1996). * Johannesson _et al._ [2004] G. Johannesson, B. Hanley, and J. Nitao, _Dynamic Bayesian Models via Monte Carlo - An Introduction with Examples -_, Tech. Rep. (U.S. Department of Energy Office of Scientific and Technical Information, 2004). * Elfring _et al._ [2021] J. Elfring, E. Torta, and R. van de Molengraft, Particle filters: A hands-on tutorial, Sensors 21, 10.3390/s21020438 (2021). * Metropolis _et al._ [1953] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of State Calculations by Fast Computing Machines, J. Chem. Phys. 21, 1087 (1953). * Musso _et al._ [2001] C. Musso, N. Oudjane, and F. Le Gland, Improving regularised particle filters, in _Sequential Monte Carlo Methods in Practice_, edited by A. Doucet, N. de Freitas, and N. Gordon (Springer New York, New York, NY, 2001) pp. 247–271. * Thomson _et al._ [2007] L. C. Thomson, B. Hirst, G. Gibson, S. Gillespie, P. Jonathan, K. D. Skeldon, and M. J. Padgett, An improved algorithm for locating a gas source using inverse methods, Atmos. Environ. 41, 1128 (2007). * Ristic _et al._ [2015] B. Ristic, A. Gunatilaka, and R. Gailis, Achievable accuracy in gaussian plume parameter estimation using a network of binary sensors, Inform. 
Fusion 25, 42 (2015). * Note [1] Clearly, the $\sigma^{2}_{\rm loc}$ and $\bm{\bar{r}}$ counterparts in the SMC algorithm are respectively defined as the variance and mean computed over the Monte Carlo samples. * White [1982] H. White, Maximum likelihood estimation of misspecified models, Econometrica 50, 1 (1982). * Ullo and Sinha [2020] S. L. Ullo and G. R. Sinha, Advances in smart environment monitoring systems using iot and sensors, Sensors 20, 10.3390/s20113113 (2020). * Bernshtein [1918] S. N. Bernshtein, Sur la loi des grands nombres, Communications de la Société mathématique de Kharkow 16, 82 (1918).
# Relevant Irrelevance: Generating Alterfactual Explanations for Image Classifiers

Silvan Mertes1 Tobias Huber1 Christina Karle1 Katharina Weitz1,2 Ruben Schlagowski1 Cristina Conati3 Elisabeth André1

1University of Augsburg, Germany 2Fraunhofer HHI, Germany 3University of British Columbia, Canada

###### Abstract

In this paper, we demonstrate the feasibility of alterfactual explanations for black box image classifiers. Traditional explanation mechanisms from the field of Counterfactual Thinking are a widely-used paradigm for Explainable Artificial Intelligence (XAI), as they follow a natural way of reasoning that humans are familiar with. However, most common approaches from this field are based on communicating information about features or characteristics that are especially important for an AI’s decision. Yet, to fully understand a decision, knowledge about relevant features alone is not enough: awareness of irrelevant information also contributes substantially to a user’s mental model of an AI system. To this end, a novel approach for explaining AI systems called alterfactual explanations was recently proposed on a conceptual level. It is based on showing an alternative reality where irrelevant features of an AI’s input are altered. By doing so, the user directly sees which input data characteristics can change arbitrarily without influencing the AI’s decision. In this paper, we show for the first time that it is possible to apply this idea to black box models based on neural networks. To this end, we present a GAN-based approach to generate these alterfactual explanations for binary image classifiers. Further, we present a user study that gives interesting insights into how alterfactual explanations can complement counterfactual explanations.
## 1 Introduction With the steady advance of Artificial Intelligence (AI), and the resulting introduction of AI-based applications into everyday life, more and more people are being directly confronted with decisions made by AI algorithms (Stone et al., 2016). As the field of AI advances, so does the need to make such decisions explainable and transparent. The development and evaluation of _Explainable AI_ (XAI) methods is important not only to provide end users with explanations that increase acceptance and trust in AI-based methods, but also to empower researchers and developers with insights to improve their algorithms. The need for XAI methods has prompted the research community to develop a large variety of different approaches to unravel the black boxes of AI models. A considerable part of these approaches is based on telling the user of the XAI system in various ways _which_ features of the input data are important for a decision (often called _Feature Attribution_) (Arrieta et al., 2020). Other methods, which are close to human habits of explanation, are based on the paradigm of _Counterfactual Thinking_ (Miller, 2019). Procedures that follow this guiding principle try answering the question of _What if…?_ by showing an alternative reality and the corresponding decision of the AI. Here, in contrast to feature attribution mechanisms, not only the importance of the various features is emphasized. Rather, it is conveyed, even if only indirectly, _why_ features are relevant. Figure 1: (A) Examples of a counterfactual and an alterfactual explanation. Input features to a fictional decision system to be explained are _Income_ and _Gender_ , whereas the former is relevant and the latter is irrelevant to the AI’s decision on whether a credit is given or not. (B) Conceptual comparison of factual, counterfactual, semifactual, and alterfactual explanations. 
Prominent examples of these explanatory mechanisms are _Counterfactual Explanations_ and _Semifactual Explanations_ (Kenny and Keane, 2020). Counterfactual explanations show a version of the input data that is altered just enough to change an AI’s decision. By doing so, the user is shown not only _which_ features are relevant to the decision, but more importantly, _how_ they would need to be changed to result in a different decision of the AI. Semifactual explanations follow a similar principle, but they modify the relevant features of the input data to an extent that the AI’s decision does not change. All of these methods have in common that they focus on the _important_ features. However, we argue that awareness of irrelevant features can also contribute substantially to the complete understanding of a decision domain, as knowledge of the important features for the AI does not necessarily imply knowledge of the unimportant ones. For example, consider an AI system that assesses a person’s creditworthiness based on various characteristics. If that system was completely fair, a counterfactual explanation might be of the form: _If your income was higher, you would be creditworthy._ However, this explanation does not exclude the possibility that your skin color also influenced the AI’s decision. It only shows that the income had a high impact on the AI. An explanation confined to the irrelevant features, on the other hand, might say _No matter what your skin color is, the decision would not change._ In this case, direct communication of irrelevant features ascertains fairness with regards to skin color. Conventional counterfactual thinking explanation paradigms do not provide this information directly. To address this issue, Mertes et al. (2022b) recently conceptually introduced the explanatory paradigm of _Alterfactual Explanations_ that is meant to complement counterfactual explanations. 
This principle is based on showing the user of the XAI system an alternative reality that leads to the exact same decision of the AI, but where irrelevant features are altered. All relevant features of the input data, on the other hand, remain the same. As such, alterfactual explanations form the complement to counterfactual explanations - providing both explanation types should enable the user to grasp both relevant _and_ irrelevant features. Mertes et al. already demonstrated the concept's potential for raising user awareness of irrelevant features (Mertes et al., 2022b). Nevertheless, in the absence of an implementable solution, the researchers could only explore the concept using a fictional AI. As such, in this work we introduce a GAN-based generation algorithm that is able to create both alterfactual and counterfactual explanations for image classifiers (our full implementation is open-source and available at https://github.com/hcmlab/Alterfactuals). As alterfactual explanations convey completely different information than common methods, we investigate whether the understanding that users have of the explained AI system is also formed in a different way, or can even be improved. Our results show that alterfactual explanations outperform counterfactual explanations with regard to local model understanding.

## 2 Related Work

As the approach presented in this paper belongs to the class of XAI methods that work by inducing counterfactual thinking processes, it is important to gain an understanding of how common methods from this field work. Therefore, this section gives an overview of related explanation concepts. Figure 1A illustrates the difference between those concepts using exemplary explanations for a fictional AI that decides if a person is creditworthy or not.
We will use that scenario as a running example of how the different explanation paradigms would answer the question _Why does the AI say that I am not creditworthy?_

Factual Explanations - _There was another female person that also had rather little money, and she also did not get the credit._ - Factual explanations are the traditional way of explaining by example, and often provide a similar instance from the underlying data set (adapted or not) for the input data point that is to be explained (Keane et al., 2021b). Other approaches do not choose an instance from the dataset but generate new ones (Guidotti et al., 2019). The idea behind factual explanations is that similar data instances lead to similar decisions, and awareness of those similarities leads to a better understanding of the model. Further explanation mechanisms that fall into this category are _Prototypical Explanations_ and _Near Hits_ (Kim et al., 2016; Herchenbach et al., 2022).

Counterfactual Explanations - _If you had that amount of money, you would get the credit._ - Counterfactual explanations are a common method humans naturally use when attempting to explain something and answer the question _Why not …?_ (Miller, 2019; Byrne, 2019). In XAI, they do this by showing a modified version of an input to an AI system that results in a different decision than the original input. Counterfactual explanations should be minimal, which means they should change as little as possible in the original input (Keane et al., 2021b; Miller, 2021). In certain scenarios, modern approaches for generating counterfactual explanations have shown significant advantages over feature attribution mechanisms (i.e., explanation approaches that highlight _which_ features are important for a decision) in terms of mental model creation and explanation satisfaction (Mertes et al., 2022a). Wachter et al.
(2017) name multiple advantages of counterfactual explanations, such as being able to detect biases in a model, providing insight without attempting to explain the complicated inner state of the model, and often being efficient to compute. For counterfactual explanations, many works exist that, similar to our approach for alterfactual explanations, use GANs to automatically generate explanations for image classifiers (Nemirovsky et al., 2022; Van Looveren et al., 2021; Khorram and Fuxin, 2022; Mertes et al., 2022a).

Semifactual Explanations - _Even if you had that amount of money, you would still not get the credit._ - Similar to counterfactual explanations, semifactual explanations are an explanation type humans commonly use. They follow the pattern of _Even if X, still P_, which means that even if the input were changed in a certain way, the prediction of the model would still not change to the foil (McCloy and Byrne, 2002). In an XAI context, this means that an example based on the original input is provided that modifies the input in a way that moves it toward the decision boundary of the model, but stops just before crossing it (Kenny and Keane, 2020). Similar to counterfactual explanations, semifactual explanations can be used to guide a user’s future actions, possibly in a way that deters them from moving toward the decision boundary (Keane et al., 2021b).

## 3 Alterfactual Explanations

_No matter what your gender is, the decision would not change._ - The basic idea of alterfactual explanations investigated in this paper is to strengthen the user’s understanding of an AI by showing irrelevant attributes of a predicted instance. Here, we define irrelevance as the property that the corresponding feature, regardless of its value, does not contribute in any way to the decision of the AI model.
When looking at models that make decisions by mapping some sort of input data $x\in X$ to output data $y\in Y$, the so-called _decision boundary_ describes the region in $X$ which contains data points where the corresponding $y$ calculated by the model is ambiguous, i.e., lies just between different instances of $Y$. Thus, irrelevant features can be thought of as features that do not contribute to a data point’s distance to the decision boundary. However, information carried by explanations should be communicated as clearly as possible. Alterfactual explanations inform about the _irrelevance_ of certain features - as such, it should be made clear that these features can take _any_ possible value. If the respective features were changed only by a small amount, their irrelevance would not be clearly demonstrated to the user. Therefore, an alterfactual explanation should change the affected features by the maximum amount possible. By doing so, it communicates that the feature, _even if it is changed as much as it can change_, still does not influence the decision. Thus, the definition of an alterfactual explanation is as follows:

> Let $d:X\times X\rightarrow\mathbb{R}$ be a distance metric on the input space $X$. An _alterfactual explanation_ for a model $M$ is an altered version $a\in X$ of an original input data point $x\in X$ that maximizes the distance $d(x,a)$ while the distance to the decision boundary $B\subset X$ and the prediction of the model do not change: $d(x,B)=d(a,B)$ and $M(x)=M(a)$.

Thus, the main difference between an alterfactual explanation and a counterfactual or semifactual explanation is that while the latter methods alter features in a way that decreases the distance to the decision boundary, our alterfactual method tries to keep that distance fixed.
Further, while counterfactual and semifactual methods try to keep the overall change to the original input minimal (Keane et al., 2021a; Kenny and Keane, 2020), alterfactual explanations do exactly the opposite, which is depicted in Figure 1B.

## 4 Generating Alterfactual Explanations

Figure 2: Architecture overview of the generator network (an encoder-decoder of Conv2D and Conv2DTranspose blocks with BatchNorm, Dropout, and LeakyReLU/ReLU activations and a Tanh output layer).

Figure 3: Architecture overview of the discriminator network (Conv2D blocks with BatchNorm and LeakyReLU activations and a sigmoid output; the target class label $\hat{y}$ is fed in via an embedding and upsampling).

As we argue that alterfactual and counterfactual explanations convey different information, we designed a generative approach that is capable of creating both types of explanations in order to explain an image classifier. For both, a set of requirements arises that needs to be reflected in the objectives of our explanation generation approach.

1. The generated explanations should have high quality and look realistic.
2. The resulting explanation should be either classified as the same class as the original input (for alterfactual explanations), or as the opposite class (for counterfactual explanations).
3. For alterfactual explanations, the output image should change as much as possible, while for counterfactual explanations, it should change as little as possible.
4. For alterfactual explanations, only irrelevant features should change, i.e., the distance to the decision boundary should be maintained.
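To make the requirements above concrete, a minimal sketch of checking the alterfactual conditions against a toy linear model may help; the weights and data points below are invented for illustration and are not the paper's classifier:

```python
# Toy illustration of the alterfactual conditions: for a linear model
# M(x) = sign(w.x + b), the decision boundary B is the hyperplane
# w.x + b = 0, so d(x, B) = |w.x + b| / ||w||.
# All weights and points below are made up for the example.
import math

w, b = (1.0, 0.0), -0.5   # feature 0 is relevant, feature 1 is irrelevant (zero weight)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

def boundary_dist(x):
    return abs(w[0] * x[0] + w[1] * x[1] + b) / math.hypot(w[0], w[1])

def is_alterfactual(x, a, tol=1e-9):
    """Same prediction and same boundary distance; d(x, a) may then grow freely."""
    return predict(x) == predict(a) and abs(boundary_dist(x) - boundary_dist(a)) < tol

x = (1.0, 0.2)                # original input
a = (1.0, 9.0)                # the irrelevant feature pushed far away
print(is_alterfactual(x, a))  # → True: prediction and margin are preserved
```

Because the irrelevant feature carries zero weight, changing it arbitrarily far keeps both the prediction and the distance to the boundary, which is exactly what the objectives above demand of an alterfactual explanation.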
To address these objectives, different loss components (see the following sections) were used to steer a GAN-based architecture to generate the desired explanations. A GAN-based approach was chosen as similar concepts have successfully been applied to the task of counterfactual explanation generation in various existing works (Olson et al., 2021; Huber et al., 2023; Nemirovsky et al., 2022; Zhao, 2020; Mertes et al., 2022a). In order to allow for a more focused and comprehensive user study design, we focus in this work on explaining a binary image classifier. However, although our specific generation architecture is designed for a binary classification problem, it would theoretically be possible to apply it to non-binary tasks by training separate models for each class vs. the union over all other classes. A schematic overview of our architecture can be seen in Figures 2 and 3. For a more detailed description, we refer to the appendix.

### 4.1 Adversarial Component

To address the first objective, an adversarial setting is used. Here, a generator network $G$ is trained to take an original image $x$ and a random noise vector $z$ and transform them into the respective explanation $\hat{x}$. As such, a mapping $\{x,z\}\rightarrow\hat{x}$ is learned by the generator. A discriminator network $D$ is trained to identify the generated images as _fake_ images in an adversarial manner. Additionally, to partly target the second objective, we feed a target class label $\hat{y}\in\{0,1\}$ to the discriminator. By doing so, the discriminator learns not only to assess whether the produced images are real or fake, but also gains the capability to decide whether an explanation fits the data distribution of the class it is supposed to belong to. A somewhat similar idea was put forth by Sharmanska et al. (2020) in the context of fairness and yielded promising results there. During training, the discriminator is alternately fed with real and fake data.
For real data, the target class label $\hat{y}$ reflects the class that the classifier to be explained assigns to the respective image $x$. For the generated explanations, the target class label $\hat{y}$ reflects either the class that was assigned to the original image $x$ (for alterfactual explanations), or the opposite class (for counterfactual explanations). By letting the generator and discriminator compete against each other during training, it is enforced that the resulting images look realistic and resemble the data distribution of the respective target classes. The objective function for the adversarial setting is formulated as follows:

$\mathcal{L}_{adversarial}=\mathbb{E}_{x\sim p_{\text{data}}(x)}\left[\log D(x,\hat{y})\right]+\mathbb{E}_{x\sim p_{\text{data}}(x),\,z\sim p_{\text{noise}}(z)}\left[\log(1-D(G(x,z),\hat{y}))\right]$ (1)

### 4.2 Including Classifier Information

The second objective is further addressed by incorporating the decisions of the classifier to be explained into the generator’s loss function. Let $C:X\rightarrow\left[0,1\right]$ be a binary classifier with threshold $0.5$. We define the classification target $\tilde{C}(x)$ as $\tilde{C}(x):=C(x)$ for alterfactual explanations and $\tilde{C}(x):=1-C(x)$ for counterfactual explanations. To measure the error between the actual classification of the generated explanation and the target classification, we used Binary Crossentropy (BCE) to define a classification loss $\mathcal{L}_{C}$:

$\mathcal{L}_{C}=-\,\mathbb{E}_{x,\hat{x}\sim p_{data}(x,\hat{x})}\left[\tilde{C}(x)\cdot\log C(\hat{x})+(1-\tilde{C}(x))\cdot\log(1-C(\hat{x}))\right]$ (2)

### 4.3 SSIM Component

The third objective was addressed by including a similarity component in the loss function. Explanations are meant for humans.
Therefore, using the Structural Similarity Index (SSIM) seemed an appropriate choice to measure image similarity for our approach, as it correlates with how humans perceive similarity in images (Wang et al., 2004). The parameters for SSIM were chosen as recommended by Wu et al. (2019). As alterfactual explanations should change irrelevant features _as much as possible_, while counterfactual explanations should be _as close as possible_ to the original image, the learning objective differs for both (low similarity for alterfactual explanations, high similarity for counterfactual explanations). With $\left[0,1\right]$ as the range of SSIM, we designed the loss function as follows:

$\mathcal{L}_{sim}=\begin{cases}\mathbb{E}_{x,\hat{x}\sim p_{data}(x,\hat{x})}\left[SSIM(x,\hat{x})\right]&\text{Alterfactual}\\\mathbb{E}_{x,\hat{x}\sim p_{data}(x,\hat{x})}\left[1-SSIM(x,\hat{x})\right]&\text{Counterfactual}\end{cases}$ (3)

### 4.4 Feature Relevance Component

The fourth objective, i.e., forcing the network to only modify irrelevant features when generating alterfactual explanations, was addressed by using an auxiliary Support Vector Machine (SVM) classifier. Note that this loss is only applied when generating alterfactual explanations, not when generating counterfactual explanations. Li et al. (2018) and Elsayed et al. (2018) have shown theoretically and empirically that the last weight layer of a Neural Network converges to an SVM trained on the data transformed up to this layer if certain restrictions are met (e.g., the last two layers of the network have to be fully connected). An SVM’s decision boundary can be calculated directly, unlike that of a Neural Network (Jiang et al., 2018).
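As an illustration of that closed-form computation, the distance of a point to a linear SVM's separating hyperplane $f(x)=w\cdot x+b$, and the resulting loss term, can be sketched in plain Python; the vectors below are invented for the example, not taken from a trained SVM:

```python
# Distance of a point to a linear SVM hyperplane and the absolute
# difference of that distance between original and explanation.
# w, b, x, x_hat are illustrative values only.
import math

def svm_distance(x, w, b):
    f = sum(wi * xi for wi, xi in zip(w, x)) + b   # f(x) = w.x + b
    return abs(f) / math.sqrt(sum(wi * wi for wi in w))

def svm_loss(x, x_hat, w, b):
    return abs(svm_distance(x, w, b) - svm_distance(x_hat, w, b))

w, b = [3.0, 4.0], -1.0
x, x_hat = [1.0, 1.0], [2.0, 0.25]   # x_hat moved parallel to the hyperplane
print(svm_loss(x, x_hat, w, b))      # → 0.0: the boundary distance is unchanged
```

A generated alterfactual that drifts toward the hyperplane would instead incur a positive loss, signalling that relevant features were probably modified.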
As such, we use an SVM which was trained to predict the classifier’s decision based on the activations of the classifier’s penultimate layer as a way to approximate the classifier’s decision boundary - if the generated alterfactual explanation has moved closer to the SVM’s separating hyperplane, relevant features were most likely modified. Although an unchanged decision boundary distance does not necessarily guarantee that no relevant features were modified, in our experiments it was a good indicator. The distance of $x$ to the SVM’s separating hyperplane $f$ was defined as follows, with $w$ as the SVM’s weight vector:

$\text{{SVM}}(x)=\left\lvert\frac{f(x)}{||w||}\right\rvert$ (4)

The SVM loss is defined by the absolute difference in distance to the separating hyperplane between the original image and the generated alterfactual explanation:

$\mathcal{L}_{\text{{SVM}}}=\mathbb{E}_{x\sim p_{data}(x),\,z\sim p_{\text{noise}}(z)}\left[|\text{{SVM}}(x)-\text{{SVM}}(\hat{x})|\right]$ (5)

The final loss function is the sum of the four loss components introduced above.

## 5 Evaluation Scenario

Figure 4: Example outputs of our system. It can be seen that alterfactual explanations change features that are irrelevant to the classifier, e.g., the color of the shoes or the width of the boot shaft, while counterfactual explanations change relevant features like the presence or absence of a boot shaft. From top to bottom, the original images are a correctly classified ankle boot and sneaker, followed by two inputs incorrectly classified as ankle boot and sneaker.

To assess the performance of our approach, we applied it to the Fashion-MNIST data set (Xiao et al., 2017). That data set contains 7,000 gray scale images for each of its ten categories of clothes, such as ‘ankle boots’ or ‘pullover’, split into _train_ (6,000 images per class) and _test_ (1,000 images per class) sets.
The two classes we chose, ‘ankle boots’ and ‘sneakers’, were selected for being somewhat similar, so as not to oversimplify the classification task, while still being distinct enough to allow visually assessing whether the generated explanations are clear. To create the classifier to be explained, we trained a relatively simple four-layer convolutional neural network, achieving an accuracy of $96.7\%$ after 40 training epochs. The exact architecture and training configuration can be found in the appendix. Our explanation generation architecture was trained for 14 epochs, until visually no further improvement could be observed. For alterfactual explanations, we reached a validity (i.e., the portion of explanations that the classifier assigns to the correct target class) of $96.20\%$ and an average SSIM of $0.32$ (here, lower is better), whereas the counterfactual explanations reached a validity of $87.70\%$ and an average SSIM of $0.90$ (here, higher is better). For more details, refer to the appendix. Exemplary generated explanations are shown in Figure 4. Note that, in order to verify whether our alterfactual generation approach is applicable to a wider range of datasets, we additionally trained our approach on three other datasets: MNIST (LeCun, 1998), MaskedFace-Net (Cabani et al., 2021), and a gray scale version of MaskedFace-Net. To further demonstrate that our approach can be adapted to be more model-agnostic and work without access to intermediate layers, we omitted the Feature Relevance component for those experiments. Training details, example outputs, and computational results for those experiments can be found in the appendix.

## 6 User Study

### 6.1 Research Question and Hypotheses

We conducted a user study to validate whether the counterfactual and alterfactual explanations generated by our approach help human users to form a correct model understanding of an AI system.
Therefore, in order not to overwhelm participants, we only used results from the model trained on the Fashion-MNIST classifier. To be able to compare our findings to existing work, we designed our study similarly to Mertes et al. (2022b). Our hypotheses are as follows:

1) Alterfactual and counterfactual explanations, as well as the combination of both, are more effective in enabling model understanding than no explanations. 2) There is a difference in model understanding and explanation satisfaction between alterfactual and counterfactual explanations, but we did not anticipate a specific direction since we see them as complementary concepts. 3) Compared to the individual explanations, a combination of alterfactual and counterfactual explanations is a more effective way to enable a good model understanding and is more satisfying for users. 4) There is a difference between conditions regarding the understanding of relevant and irrelevant features: alterfactual explanations should be more effective for identifying irrelevant features, while counterfactual explanations should help more with identifying relevant features.

### 6.2 Methodology

##### Conditions and Explanation Presentation

We used a between-groups design with four conditions. Participants in the _Control_ condition were presented only with the original input images to the AI. No explanation was shown. In the _Alterfactual_ and _Counterfactual_ conditions, participants were presented with the original input images and either alterfactual or counterfactual explanations. In the _Combination_ condition, participants were presented with the original input images as well as both the alterfactual and the counterfactual explanations.

##### Procedure

The whole study was built using the _oTree_ framework by Chen et al. (2016). After answering questions about their demographic background, participants were given some general information about the data and their task during the prediction task.
For the classifier, they were only told that an AI was trained to distinguish between ankle boots and sneakers. Two example images for each class (ankle boots and sneakers) were shown, and some shoe-specific terminology (e.g., "shaft") was introduced in order to make sure that participants had a common understanding of the terms they were asked about later on. Following this information, the participants were given an example input image for each class together with the classifier’s prediction for this input image. In the explanation conditions, the participants were introduced to their corresponding explanation types (counterfactuals, alterfactuals, or a combination) and could explore the explanations for those two images. After that, each participant answered a quiz about the information given up to that point, to make sure that they understood everything correctly. Subsequently, the study itself started. It was divided into three parts: for assessing the participants’ understanding of the classifier, we used _(i)_ a prediction task for assessing the local understanding, i.e., to assess whether the participants understand why the AI makes a _specific_ decision, and _(ii)_ a questionnaire about the relevance of certain features for assessing the global understanding, i.e., to assess whether the participants understand how the AI works _overall_. To assess the participants’ explanation satisfaction, we used _(iii)_ an explanation satisfaction questionnaire. The three phases of the experiment are described below.

##### Local Model Understanding: Prediction Task

To measure the local understanding of the classifier, we used a prediction task, which assesses the participants’ ability to anticipate the AI classifier’s decisions (Hoffman et al., 2018). Eight examples were shown, covering all possible classification outcomes (two correctly classified images for both sneakers and ankle boots, and two incorrectly classified images for both) to avoid bias.
Figure 4 shows four of the images from the study. The example images were chosen randomly but we made sure that the alterfactual and counterfactual explanations generated by our model for those images were valid (i.e., the classifier predicted the same class as for the original image when fed with the alterfactual explanation, and the opposite class when fed with the counterfactual explanation). Participants had to predict the classifier’s decision for each example image. Participants were additionally asked about their own opinion on which class the original shoe image belonged to. The answers to that particular question were not further analyzed - it was only added to help the participants distinguish between their own opinion and their understanding of the classifier. After predicting an example, they were told the correct label and the AI classifier’s decision before moving on to the next example. The order of the examples was randomized. ##### Global Model Understanding: Feature Relevance While the Prediction Task can be seen as _local_ measurement of the users’ understanding of the model in specific instances, we also wanted to investigate whether participants understood the _global_ relevance of different features. To this end, we looked at two features that were relevant for our classifier (”presence/absence of a boot shaft” and ”presence/absence of an elevated heel”) as well as two features that were irrelevant for our classifier (”boot shaft width” and ”the shoe’s color and pattern on the surface area”). These features were chosen based on the authors’ experience from training the classifier and a-priori explorations with the Feature Attribution explanation mechanisms LIME (Ribeiro et al., 2016) and SHAP (Lundberg and Lee, 2017). 
Note that, although the classifier is still a black box and there is no definitive proof that the chosen features reflect the classifier’s inner workings entirely accurately, we decided that using those mechanisms for the feature choice is the best proxy we have. As such, after the participants went through the eight examples that were used for the prediction task, they were asked for each feature how much they agreed that it was relevant to the AI’s decisions on a 5-point Likert scale (0 = strongly disagree, 4 = strongly agree).

##### Explanation Satisfaction

In order to measure the participants’ subjective satisfaction, we used the Explanation Satisfaction Scale proposed by Hoffman et al. (2018), which consists of eight items rated on a 5-point Likert scale (0 = strongly disagree, 5 = strongly agree) that we averaged over all items. Since it does not apply to our use case, we excluded the 5th question of the questionnaire. The seven remaining items address _confidence_, _predictability_, _reliability_, _safety_, _wariness_, _performance_, and _likeability_. Finally, the participants had the possibility to give free text feedback.

### 6.3 Participants

Figure 5: Left: Mean participant prediction accuracy of the AI’s prediction by condition. The conditions containing alterfactual explanations outperformed all other conditions. Right: Mean understanding of the irrelevant and relevant features in our study. Error bars represent the 95% CI. *p $<$ .05, **p $<$ .001.

Through a power analysis, we estimated a required sample size of at least 21 per condition for a MANOVA with 80% power and an alpha of 5%, based on the Pillai’s Trace of 0.13 reported for the study by Mertes et al. (2022b). 131 participants between 18 and 29 years (M = 22.2, SD = 2.44) were recruited at the University of _blinded for review_. 61 of them were male, 70 female. The participants were randomly separated into the four conditions (33 per condition and 32 in the Alterfactual condition).
The highest level of education that most participants held (76.3%) was a high-school diploma. Only 11.5% of the participants had no experience with AI. Most of the participants (74%) had heard of AI in the media. Excluding participants that had no opinion on the subject, the participants expected a positive impact of AI systems in the future (M = 3.73 on a 5-point Likert Scale from 1 = ”Extremely negative” to 5 = ”Extremely positive”). There were no substantial differences in the demographics between conditions (see appendix).

## 7 Results

### 7.1 Model Understanding

To investigate the impact of the four different experimental conditions on the (1) feature understanding and (2) prediction accuracy, we conducted a MANOVA. We found a significant difference, Wilks’ Lambda = 0.859, F(6,252) = 3.31, p = .004. The following ANOVA revealed that only the prediction accuracy of the participants showed significant differences between the conditions:

* _Feature Understanding_: F(3,127) = 0.877, p = .455.
* _Prediction Accuracy_: F(3,127) = 6.578, p $<$ .001.

As displayed in Figure 5, the post-hoc t-tests showed that the participants’ prediction accuracy was significantly better in the _Alterfactual_ and _Combination_ conditions compared to the other conditions. The effect size d is calculated according to Cohen (2013):

* _Alterfactual_ vs. _Control_: t(127) = 3.19, p = .002, d = 0.79 (medium effect).
* _Alterfactual_ vs. _Counterfactual_: t(127) = 2.06, p = .042, d = 0.51 (medium effect).
* _Combination_ vs. _Control_: t(127) = 3.93, p $<$ .001, d = 0.97 (large effect).
* _Combination_ vs. _Counterfactual_: t(127) = 2.79, p = .006, d = 0.69 (medium effect).

These results regarding the prediction task confirm our hypothesis that the conditions with alterfactual explanations outperform the condition without explanations in the prediction task. Further, the combination of both explanation types significantly outperformed counterfactual explanations.
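The effect sizes above follow Cohen's pooled-standard-deviation formulation, which can be sketched as follows (the sample data is invented; the study's raw accuracy scores are not reproduced here):

```python
# Cohen's d for two independent groups with pooled standard deviation,
# as conventionally computed for post-hoc comparisons (Cohen, 2013).
import math

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / na, sum(group_b) / nb
    var_a = sum((v - mean_a) ** 2 for v in group_a) / (na - 1)
    var_b = sum((v - mean_b) ** 2 for v in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Invented example: two groups whose means differ by one pooled SD.
print(cohens_d([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # → -1.0
```

By the conventional reading, |d| ≈ 0.2 counts as a small, 0.5 as a medium, and 0.8 as a large effect, which is the scale the labels above refer to.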
However, our hypothesis that the combination is more effective in terms of enabling a correct model understanding than alterfactual explanations alone has to be rejected.

### 7.2 Relevant and Irrelevant Information

As reported in the section above, we did not find a significant overall difference in the feature understanding task (see Figure 5). However, in order to investigate our hypotheses about irrelevant vs. relevant features, we conducted another MANOVA between the conditions and the combined understanding values for the two relevant features and the two irrelevant features. This MANOVA did not find any significant differences, Wilks’ Lambda = 0.951, F(6,252) = 1.07, p = .379. The mean understanding per condition can be found in the appendix.

### 7.3 Explanation Satisfaction

The ANOVA revealed that there were no significant differences in the subjective explanation satisfaction between the three explanation conditions, F(2,95) = 0.34, p = .713. The mean satisfaction values with standard deviation were: _Counterfactual_ condition: $3.54\pm 0.53$; _Alterfactual_ condition: $3.65\pm 0.6$; _Combination_ condition: $3.58\pm 0.5$.

## 8 Discussion

With our proposed GAN-based approach, we demonstrated that it is possible to generate both counterfactual and alterfactual explanations for a black box image classifier. Using computational metrics, we showed that both of those generated explanations fulfill their respective requirements: the counterfactual explanations are very similar to the original images (i.e., 0.90 average SSIM) but change the classifier’s prediction in 87.70% of the cases, while alterfactual explanations are very different from the original image (i.e., 0.32 average SSIM) but do not change the classifier’s prediction in 96.20% of the cases.
For the prediction task of our user study, alterfactual explanations and the combination of alterfactual and counterfactual explanations performed significantly better than the other two conditions, demonstrating the potential of alterfactual explanations to facilitate local model understanding. However, we did not observe a significant difference for the feature relevance understanding. This is highly interesting, as it contrasts with a previous study by Mertes et al. (2022b). There, a similar experimental design was employed to assess the effect that alterfactual explanations have on users’ mental models of a hard-coded classifier that assesses numerical feature descriptors for a fictional classification problem. In contrast to our work, they used neither a real classifier nor an alterfactual generation algorithm, but only mock-up decisions and explanations. In their scenario, alterfactual explanations led to a significantly better feature relevance understanding, while not having a substantial impact on the performance in a prediction task. A possible explanation for this is the fact that our study was conducted in the context of fashion classification, where the users might already have had a quite distinctive mental model of the problem domain itself. Further, images might be more accessible than numerical feature descriptors to end users. As such, the global understanding of the classifier might already be positively biased. This argument is supported by the feature relevance understanding results of the control group - although not seeing any explanations, they already performed very well in identifying relevant features. However, as can be seen by the significant performance improvement in the prediction task, the local understanding of the model does not necessarily benefit from the identification of globally relevant features.
As the classification model is imperfect, a global understanding of the use case itself does not necessarily imply an understanding of individual cases, e.g., cases in which the classifier’s decision does not correctly model reality. Furthermore, our results regarding the global model understanding should be taken with a grain of salt: the fashion classifier is a black-box model, and even though we used post-hoc explanation methods (SHAP and LIME), we cannot be certain that our choice of features is completely accurate. Interestingly, we did not observe any significant differences in explanation satisfaction. This indicates that participants felt similarly satisfied by all explanation methods, even though the alterfactual and combined explanations objectively helped more during the prediction task. The presentation of more information (i.e., in the combination condition) could have led to a higher cognitive load and influenced the subjective assessments of explanation satisfaction, resulting in the difference between the objective measurement (i.e., model understanding) and the subjective measurement (i.e., explanation satisfaction).

## 9 Conclusion

In this paper, we demonstrate the practical feasibility of a recently proposed concept for explaining AI models called _alterfactual explanations_, which _alter_ as much irrelevant information as possible while maintaining the distance to the decision boundary. We show for the first time that it is possible to generate such explanations for black box models and briefly evaluate them computationally. Furthermore, we showed in a user study that our generated alterfactual explanations can complement counterfactual explanations. In that study, we compared how users’ model understanding of a binary image classifier changes when being confronted with counterfactual explanations, alterfactual explanations, or a combination of both. Further, a control group was assessed that did not see any explanations.
We found that in a prediction task, where the classifier’s prediction had to be anticipated by looking at the explanations, users performed significantly better when they were provided with explanations that included alterfactual explanations compared to users that did not see alterfactual explanations, although we did not observe a significant difference in explanation satisfaction. Overall, we showed that alterfactual explanations are a promising explanation method that can complement counterfactual explanations in future XAI systems.

## 10 Acknowledgments

This research was partially funded by the DFG through the Leibniz award of Elisabeth André (AN 559/10-1).

## References

* Arrieta et al. [2020] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fusion, 58:82–115, 2020.
* Byrne [2019] Ruth MJ Byrne. Counterfactuals in explainable artificial intelligence (xai): Evidence from human reasoning. In IJCAI, pages 6276–6282, 2019.
* Cabani et al. [2021] Adnane Cabani, Karim Hammoudi, Halim Benhabiles, and Mahmoud Melkemi. Maskedface-net–a dataset of correctly/incorrectly masked face images in the context of covid-19. Smart Health, 19:100144, 2021.
* Chen et al. [2016] Daniel L Chen, Martin Schonger, and Chris Wickens. otree—an open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance, 9:88–97, 2016.
* Cohen [2013] Jacob Cohen. Statistical power analysis for the behavioral sciences. Academic press, 2013.
* Elsayed et al. [2018] Gamaleldin Elsayed, Dilip Krishnan, Hossein Mobahi, Kevin Regan, and Samy Bengio. Large margin deep networks for classification. Advances in neural information processing systems, 31, 2018.
* Guidotti et al.
[2019] Riccardo Guidotti, Anna Monreale, Fosca Giannotti, Dino Pedreschi, Salvatore Ruggieri, and Franco Turini. Factual and counterfactual explanations for black box decision making. IEEE Intelligent Systems, 34(6):14–23, 2019.
* Herchenbach et al. [2022] Marvin Herchenbach, Dennis Müller, Stephan Scheele, and Ute Schmid. Explaining image classifications with near misses, near hits and prototypes: Supporting domain experts in understanding decision boundaries. In International Conference on Pattern Recognition and Artificial Intelligence, pages 419–430. Springer, 2022.
* Hoffman et al. [2018] Robert R Hoffman, Shane T Mueller, Gary Klein, and Jordan Litman. Metrics for explainable ai: Challenges and prospects. arXiv preprint arXiv:1812.04608, 2018.
* Huber et al. [2023] Tobias Huber, Maximilian Demmler, Silvan Mertes, Matthew L. Olson, and Elisabeth André. Ganterfactual-rl: Understanding reinforcement learning agents’ strategies through visual counterfactual explanations. In Noa Agmon, Bo An, Alessandro Ricci, and William Yeoh, editors, Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023, London, United Kingdom, 29 May 2023 - 2 June 2023, pages 1097–1106. ACM, 2023.
* Jiang et al. [2018] Yiding Jiang, Dilip Krishnan, Hossein Mobahi, and Samy Bengio. Predicting the generalization gap in deep networks with margin distributions. arXiv preprint arXiv:1810.00113, 2018.
* Keane et al. [2021a] Mark T Keane, Eoin M Kenny, Eoin Delaney, and Barry Smyth. If only we had better counterfactual explanations: Five key deficits to rectify in the evaluation of counterfactual xai techniques. arXiv preprint arXiv:2103.01035, 2021.
* Keane et al. [2021b] Mark T Keane, Eoin M Kenny, Mohammed Temraz, Derek Greene, and Barry Smyth. Twin systems for deepcbr: A menagerie of deep learning and case-based reasoning pairings for explanation and data augmentation. arXiv preprint arXiv:2104.14461, 2021.
* Kenny and Keane [2020] Eoin M Kenny and Mark T Keane. On generating plausible counterfactual and semi-factual explanations for deep learning. arXiv preprint arXiv:2009.06399, 2020.
* Khorram and Fuxin [2022] Saeed Khorram and Li Fuxin. Cycle-consistent counterfactuals by latent transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10203–10212, 2022.
* Kim et al. [2016] Been Kim, Rajiv Khanna, and Oluwasanmi O Koyejo. Examples are not enough, learn to criticize! criticism for interpretability. Advances in neural information processing systems, 29, 2016.
* LeCun [1998] Yann LeCun. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
* Li et al. [2018] Yu Li, Lizhong Ding, and Xin Gao. On the decision boundary of deep neural networks. arXiv preprint arXiv:1808.05385, 2018.
* Lundberg and Lee [2017] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
* McCloy and Byrne [2002] Rachel McCloy and Ruth MJ Byrne. Semifactual “even if” thinking. Thinking & Reasoning, 8(1):41–67, 2002.
* Mertes et al. [2022a] Silvan Mertes, Tobias Huber, Katharina Weitz, Alexander Heimerl, and Elisabeth André. Ganterfactual—counterfactual explanations for medical non-experts using generative adversarial learning. Frontiers in artificial intelligence, 5, 2022.
* Mertes et al. [2022b] Silvan Mertes, Christina Karle, Tobias Huber, Katharina Weitz, Ruben Schlagowski, and Elisabeth André. Alterfactual explanations–the relevance of irrelevance for explaining ai systems. arXiv preprint arXiv:2207.09374, 2022.
* Miller [2019] Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267:1–38, 2019.
* Miller [2021] Tim Miller.
Contrastive explanation: A structural-model approach. The Knowledge Engineering Review, 36, 2021.
* Nemirovsky et al. [2022] Daniel Nemirovsky, Nicolas Thiebaut, Ye Xu, and Abhishek Gupta. Countergan: Generating counterfactuals for real-time recourse and interpretability using residual gans. In Uncertainty in Artificial Intelligence, pages 1488–1497. PMLR, 2022.
* Olson et al. [2021] Matthew L. Olson, Roli Khanna, Lawrence Neal, Fuxin Li, and Weng-Keen Wong. Counterfactual state explanations for reinforcement learning agents via generative deep learning. Artif. Intell., 295:103455, 2021.
* Ribeiro et al. [2016] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, page 1135–1144, New York, NY, USA, 2016. Association for Computing Machinery.
* Russell and Norvig [2016] Stuart Russell and Peter Norvig. Artificial intelligence: A modern approach, global edition. Pearson, 2016.
* Sharmanska et al. [2020] Viktoriia Sharmanska, Lisa Anne Hendricks, Trevor Darrell, and Novi Quadrianto. Contrastive examples for addressing the tyranny of the majority. arXiv preprint arXiv:2004.06524, 2020.
* Stone et al. [2016] Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, et al. One hundred year study on artificial intelligence: Report of the 2015-2016 study panel. Technical report, Stanford University, 2016.
* Van Looveren et al. [2021] Arnaud Van Looveren, Janis Klaise, Giovanni Vacanti, and Oliver Cobb. Conditional generative models for counterfactual explanations. arXiv preprint arXiv:2101.10123, 2021.
* Wachter et al. [2017] Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv.
JL & Tech., 31:841, 2017.
* Wang et al. [2004] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
* Wu et al. [2019] Yifan Wu, Fan Yang, Yong Xu, and Haibin Ling. Privacy-protective-gan for privacy preserving face de-identification. Journal of Computer Science and Technology, 34:47–60, 2019.
* Xiao et al. [2017] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
* Zhang and Dafoe [2019] Baobao Zhang and Allan Dafoe. Artificial intelligence: American attitudes and trends. Available at SSRN 3312874, 2019.
* Zhao [2020] Yunxia Zhao. Fast real-time counterfactual explanations. arXiv preprint arXiv:2007.05684, 2020.

## Appendix A GAN Architecture and Training

#### A.0.1 Generator Model

The GAN’s generator architecture is listed in Table 1.

#### A.0.2 Discriminator Model

The GAN’s discriminator architecture is listed in Table 2.

### A.1 Training Configuration and Hyperparameters

The training configuration and hyperparameters are shown in Table 3. The Adam optimizer was configured with $\beta_{1}=0.5$, $\beta_{2}=0.999$, $\epsilon=1e-8$. Further, the Support Vector Machine (SVM) that was included in the loss function (see main paper) was trained with the parameters listed in Table 4.

## Appendix B Classifier Architecture and Training

In Table 5, the model architecture for the classifier that we used in our evaluation scenario is described. The training configuration and hyperparameters are shown in Table 6. The Adam optimizer was configured with $\beta_{1}=0.9$, $\beta_{2}=0.999$, $\epsilon=1e-8$.

## Appendix C Additional Dataset Experiments

In order to demonstrate that our alterfactual generation approach is generalizable to different datasets, we additionally trained models for three other datasets.
Here, we omitted the Feature Relevance component. As an additional SVM has to be trained on the penultimate layer of the classifier for that component, it takes away the model-agnostic property from the alterfactual generation network. By performing these additional experiments, we show that the approach can easily be adapted to be model-agnostic, although this may negatively affect the results - it is not specifically enforced that _only_ irrelevant features change. For the classifiers, we used the same architecture as for the Fashion-MNIST dataset, although batch size and epochs were modified to fit the hardware that we used.

### C.1 MNIST

As the MNIST dataset has more than two classes (each class contains hand-drawn images of one specific digit), we picked the two digits that are most likely to be confused by deep learning classifiers: _Three_ and _Eight_. The MNIST classifier was trained for 9 epochs with batch size 32. Besides not using the Feature Relevance component and increasing the epoch number to 42, the GAN network was trained with the same parameter settings as for the Fashion-MNIST dataset. We reached a validity of 95.92% and an average SSIM of 0.425. Example outputs are shown in Figure 6.

Figure 6: Exemplary alterfactual outputs for the MNIST dataset.

### C.2 MaskedFace-Net

The MaskedFace-Net dataset contains images of people wearing face masks. Binary labels are provided, indicating whether the mask on the respective image is worn correctly or incorrectly. The classifier was trained for 2 epochs with batch size 128. Besides not using the Feature Relevance component and decreasing the epoch number to 11, the GAN network was trained with the same parameter settings as for the Fashion-MNIST dataset. We reached a validity of 84.27% and an average SSIM of 0.091. Example outputs are shown in Figure 7.

Figure 7: Exemplary alterfactual outputs for the MaskedFace-Net dataset.
### C.3 MaskedFace-Net (Gray Scale)

Here, we also used the MaskedFace-Net dataset, but converted it to gray scale, demonstrating that our approach also works with gray-scale data. The classifier was trained for 1 epoch with batch size 128. Besides not using the Feature Relevance component and decreasing the epoch number to 6, the GAN network was trained with the same parameter settings as for the Fashion-MNIST dataset. We reached a validity of 48.89% and an average SSIM of 0.002. Example outputs are shown in Figure 8.

Figure 8: Exemplary alterfactual outputs for the gray-scale version of the MaskedFace-Net dataset. It can be clearly seen that the only part of the image that remains unchanged is the mask itself - indicating that everything else is irrelevant.

## Appendix D User Study

### D.1 Demographic Details

Figure 9: Distribution of the chosen AI experience items for each condition (_Control_, _Counterfactual_, _Alterfactual_, _Combination_). The x-axis depicts the following items: 1 - I do not have any experience in AI related topics; 2 - I know AI from the media; 3 - I use AI technology in my private life; 4 - I use AI technology in my work; 5 - I have taken at least one AI related course; 6 - I do research on AI-related topics; 7 - Other.

The mean age and education level, as well as the percentage of female participants, per condition can be seen in Table 7. For the AI experience and attitude questions we adapted a description of AI from Zhang and Dafoe [2019] and Russell and Norvig [2016] to “The following questions ask about Artificial Intelligence (AI). Colloquially, the term ‘artificial intelligence’ is often used to describe machines (or computers) that mimic ‘cognitive’ functions that humans associate with the human mind, such as ‘learning’ and ‘problem solving’.” After this description, participants had to select one or more items describing their experience with AI. The distribution of the items for each condition is shown in Fig. 9.
Following this, we adapted a question from Zhang and Dafoe [2019] to measure the participants’ attitude towards AI. We asked them to rate their answer to the question “Suppose that AI agents would achieve high-level performance in more areas one day. How positive or negative do you expect the overall impact of such AI agents to be on humanity in the long run?” on a 5-point Likert scale from “Extremely negative” to “Extremely positive”. The participants also had the option to answer “I do not know” here, which would exclude them from the evaluation of this question. The mean results for each condition are shown in Table 7.

## Appendix E Additional Post-Hoc Results

For completeness, we also report the results of the post-hoc t-tests on the participants’ prediction accuracy that were not significant. The effect size d is calculated according to Cohen [2013]:

* _Counterfactual_ vs. _Control_: t(127) = 1.14, p = .258, d = 0.28
* _Combination_ vs. _Alterfactual_: t(127) = 0.71, p = .478, d = 0.18

For feature understanding and explanation satisfaction we did not calculate post-hoc tests since the ANOVA was not significant.

### E.1 Mean Understanding of Features

Figure 10 shows the mean understanding (as assessed by the feature understanding task) per condition.

Figure 10: Mean understanding of the irrelevant and relevant features in our study. Error bars represent the 95% CI.

### E.2 Explanation Satisfaction Scale

For evaluating explanation satisfaction, we used the Explanation Satisfaction Scale by Hoffman [Hoffman et al., 2018], except for one item that did not apply to our use case. The items that we used were as follows, where each item was rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree):

* From the explanations, I understand how the AI makes its decision.
* The explanations of how the AI makes its decision are satisfying.
* The explanations of how the AI makes its decision have sufficient detail.
* The explanations of how the AI makes its decision seem complete.
* The explanations of how the AI makes its decision are useful to predict the AI’s decision.
* The explanations of how the AI makes its decision show me how accurate the AI is.
* The explanations let me judge when I should trust and not trust the AI.

### E.3 Study Design

Screenshots of the user study are shown in Figures 11 to 40. The _Combination_ condition is shown.

Layer | Description | # Filter | Size | Stride | Dropout | BatchNorm | Activation
---|---|---|---|---|---|---|---
1 | Conv2D | 64 | 4x4 | 2 | - | no | LeakyReLU (0.2)
2 | Conv2D | 128 | 4x4 | 2 | - | yes | LeakyReLU (0.2)
3 | Conv2D | 256 | 4x4 | 2 | - | yes | LeakyReLU (0.2)
4 | Conv2D | 512 | 4x4 | 2 | - | yes | LeakyReLU (0.2)
5 | Conv2D | 512 | 4x4 | 2 | - | yes | LeakyReLU (0.2)
6 | Conv2D | 512 | 4x4 | 2 | - | yes | LeakyReLU (0.2)
7 | Conv2D | 512 | 4x4 | 2 | - | no | ReLU
8 | Conv2DTranspose | 512 | 4x4 | 2 | 0.5 | yes | ReLU
9 | Conv2DTranspose | 512 | 4x4 | 2 | 0.5 | yes | ReLU
10 | Conv2DTranspose | 512 | 4x4 | 2 | 0.5 | yes | ReLU
11 | Conv2DTranspose | 256 | 4x4 | 2 | - | yes | ReLU
12 | Conv2DTranspose | 128 | 4x4 | 2 | - | yes | ReLU
13 | Conv2DTranspose | 64 | 4x4 | 2 | - | yes | ReLU
14 | Conv2DTranspose | 1 | 4x4 | 2 | - | no | Tanh

Table 1: Generator Architecture used in our evaluation scenario. The architecture is adapted from Wu et al. [2019]. Where BatchNorm, Dropout, or Activation function occurred together, the order applied was BatchNorm - Dropout - Activation.
Layer | Description | # Filter | Size | Stride | BatchNorm | Activation
---|---|---|---|---|---|---
0a | Embedding | - | 8x8 | - | no | -
0b | Upsample | - | 128x128 | - | no | -
1 | Conv2D | 64 | 4x4 | 2 | no | LeakyReLU (0.2)
2 | Conv2D | 128 | 4x4 | 2 | yes | LeakyReLU (0.2)
3 | Conv2D | 256 | 4x4 | 2 | yes | LeakyReLU (0.2)
4 | Conv2D | 1 | 4x4 | 2 | no | Sigmoid

Table 2: Discriminator Architecture used in our evaluation scenario. Where BatchNorm and Activation function occurred together, BatchNorm preceded the activation function. The first two layers, marked as ‘0a’ and ‘0b’, were used to upsample the label information to the size of the input image. The label and image were passed together to layer 1. The architecture is adapted from Wu et al. [2019].

Batch Size | 1
---|---
Epochs | 14
Learning Rate Generator | 1e-4
Learning Rate Discriminator | 1e-4
Optimizer | Adam

Table 3: The settings used to train the GAN.

C (Regularisation) | 10
---|---
Kernel | linear
Iterations | 5000

Table 4: The settings used to train the SVM.

Layer | Description | # Filter | Size | Stride | BatchNorm | Activation
---|---|---|---|---|---|---
1 | Conv2D | 32 | 3x3 | 1 | yes | ReLU
2 | Conv2D | 32 | 3x3 | 1 | yes | ReLU
3 | MaxPool2D | - | 2x2 | 2 | no | -
4 | Conv2D | 64 | 3x3 | 1 | yes | ReLU
5 | Conv2D | 64 | 3x3 | 1 | yes | ReLU
6 | GAP | - | - | - | no | -
7 | Dense | - | 2 | - | no | Softmax

Table 5: Classifier architecture used to train the classifier for the Fashion-MNIST dataset (classes _Sneaker_ and _Ankle Boot_). Where BatchNorm and Activation function occurred together, BatchNorm preceded the activation function.

Batch Size | 32
---|---
Epochs | 40
Learning Rate | 1e-3
Optimizer | Adam
Loss Function | Binary Cross Entropy

Table 6: The settings used to train the Fashion-MNIST classifier.
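For illustration, the Adam settings reported in the appendix ($\beta_{1}$, $\beta_{2}$, $\epsilon$, learning rate) plug into the standard Adam update rule. The following is a minimal sketch of a single update step on a toy scalar parameter, not the training code used in the paper:

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update, with the classifier's hyperparameters as defaults."""
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction, step t >= 1
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# toy scalar parameter and gradient
theta, m, v = adam_step(theta=1.0, grad=2.0, m=0.0, v=0.0, t=1)
```

On the first step the bias-corrected moments cancel the gradient's magnitude, so the parameter moves by roughly the learning rate; the GAN's setting of $\beta_{1}=0.5$ only changes the `b1` default.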
 | Control | Counterfactual | Alterfactual | Combination
---|---|---|---|---
Mean age | 22.0 | 22.5 | 21.6 | 22.6
Percentage of female participants | 52 | 58 | 47 | 58
Highest level of Education | 2.27 | 2.42 | 2.19 | 2.21
Mean AI Attitude | 3.50 | 3.82 | 3.84 | 3.75

Table 7: Demographic data across conditions. The highest level of education was measured as follows: 1 - No education, 2 - High school graduation, 3 - Vocational training, 4 - Bachelor, 5 - Master, 6 - Doctor. The attitude towards AI is measured on a 5-point Likert scale from “Extremely negative” to “Extremely positive”.

Figure 11: Screenshot of the user study, part 1.
Figure 12: Screenshot of the user study, part 2.
Figure 13: Screenshot of the user study, part 3.
Figure 14: Screenshot of the user study, part 4.
Figure 15: Screenshot of the user study, part 5.
Figure 16: Screenshot of the user study, part 6.
Figure 17: Screenshot of the user study, part 7.
Figure 18: Screenshot of the user study, part 8.
Figure 19: Screenshot of the user study, part 9.
Figure 20: Screenshot of the user study, part 10.
Figure 21: Screenshot of the user study, part 11.
Figure 22: Screenshot of the user study, part 12.
Figure 23: Screenshot of the user study, part 13.
Figure 24: Screenshot of the user study, part 14.
Figure 25: Screenshot of the user study, part 15.
Figure 26: Screenshot of the user study, part 16.
Figure 27: Screenshot of the user study, part 17.
Figure 28: Screenshot of the user study, part 18.
Figure 29: Screenshot of the user study, part 19.
Figure 30: Screenshot of the user study, part 20.
Figure 31: Screenshot of the user study, part 21.
Figure 32: Screenshot of the user study, part 22.
Figure 33: Screenshot of the user study, part 23.
Figure 34: Screenshot of the user study, part 24.
Figure 35: Screenshot of the user study, part 25.
Figure 36: Screenshot of the user study, part 26.
Figure 37: Screenshot of the user study, part 27.
Figure 38: Screenshot of the user study, part 28.
Figure 39: Screenshot of the user study, part 29. Figure 40: Screenshot of the user study, part 30.
# Observation of anomalous amplitude modes in the kagome metal CsV3Sb5

Gan Liu, Xinran Ma, Kuanyu He, and Qing Li (National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China); Hengxin Tan and Yizhou Liu (Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel); Jie Xu and Wenna Tang (National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China); Kenji Watanabe (Research Center for Functional Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan); Takashi Taniguchi (International Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan); Libo Gao, Yaomin Dai, and Hai-Hu Wen (National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China); Binghai Yan <EMAIL_ADDRESS> (Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel); Xiaoxiang Xi <EMAIL_ADDRESS> (National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China; Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China)

## Abstract

The charge-density wave (CDW) phase is often accompanied by the condensation of a soft acoustic phonon mode, giving rise to lattice distortion and charge density modulation. This picture was challenged for the recently discovered kagome metal CsV3Sb5, based on evidence of the absence of soft phonons. Here we report the observation of Raman-active CDW amplitude modes in this material, which are collective excitations typically thought to emerge out of frozen soft phonons.
The amplitude modes strongly hybridize with other superlattice modes, imparting them with a clear temperature-dependent frequency shift and broadening, rarely seen in other known CDW materials. Both the mode mixing and the large amplitude-mode frequencies suggest that the CDW exhibits the character of strong electron-phonon coupling, a regime in which acoustic phonon softening can cease to exist. The observation of amplitude modes in the absence of soft phonons highlights the unconventional nature of the CDW in CsV3Sb5.

## Introduction

Materials with a kagome lattice can host rich phenomena encompassing quantum magnetism [1, 2], Dirac fermions [3, 4], nontrivial topology [5, 6, 7], density waves, and superconductivity [8, 9, 10]. The recently discovered kagome metals $A$V3Sb5 ($A$ = K, Rb, or Cs) [11, 12] offer a new platform to study the interplay of these phenomena. These compounds have Fermi levels close to Dirac points or van Hove singularities [12, 13, 14], leading to a plethora of possible intriguing ground states. Indeed, charge-density waves and superconductivity have been discovered [12, 15, 16], with ample evidence showing that both types of orders are exotic. For example, the CDW transition is possibly accompanied by a large anomalous Hall effect [17, 18], and the superconductivity features a pair-density wave state [19]. The nature of the CDW state and the mechanism for its formation have been under close scrutiny. In this work, we focus on CsV3Sb5, which has a CDW transition temperature $T_{\mathrm{CDW}}=94$ K [12]. Both hard-x-ray and neutron scattering showed the lack of soft acoustic phonons [20, 21], although density functional theory (DFT) calculations found two phonon instabilities at the $M$ and $L$ points of the Brillouin zone [22, 23, 24].
Considering that a $2\times 2$ modulation of the crystal lattice is well established [12, 19, 25, 26, 27], the absence of soft modes apparently breaks a pattern proven generic to many known CDW systems — a soft acoustic phonon freezes to zero frequency and triggers the formation of a distorted lattice [28]. Currently, there is still no consensus on the form of the in-plane structure and the $c$-axis periodicity [27, 29]. The roles of Fermi-surface nesting and electron-phonon coupling are also debated. Because the period of the $2\times 2$ superlattice matches perfectly with the Fermiology of the van Hove singularity, it is natural to ascribe the CDW transition to Fermi surface nesting [22], supported by the appreciable partial gapping of the Fermi surface observed experimentally [30, 31, 32, 33, 34]. However, the calculated electronic susceptibility lacks the expected divergence [35, 36], and the effect of electron-phonon coupling may not be dismissed [36, 21]. Because the CDW features lattice distortions, studies of the lattice degree of freedom can offer insight into the mechanism. Raman scattering is a valuable tool in this respect. In well-studied CDW systems, such as the transition metal dichalcogenides, as a soft acoustic phonon mode condenses to form a distorted lattice, new Raman-active collective excitations, known as amplitude modes, emerge, providing a direct probe of the CDW order parameter [37, 38, 39] (see Fig. 1a). Conversely, the observation of amplitude modes is typically considered as evidence for the soft mode. The temperature dependence of the amplitude modes as well as the zone-folded modes, which become Raman-active due to zone folding induced by the superlattice, can both reflect the CDW transition [40, 41, 42, 43, 44]. Combined with symmetry information from polarization-resolved measurements, constraints can be set on the possible CDW ground state. Here, we report Raman scattering measurements on CsV3Sb5. 
We observe a multitude of CDW-induced modes, whose symmetries and frequencies are in good agreement with DFT calculations for a single layer of CsV3Sb5 under inverse Star of David distortion. The observed temperature dependence of these modes and their calculated evolutions with varying lattice distortion allow us to identify two of them as amplitude modes, emerging from the predicted soft modes, although the soft modes are elusive experimentally. In contrast to the mostly independent amplitude modes and zone-folded modes in well-known CDW materials [41, 42, 43], we show that they hybridize strongly in CsV3Sb5, causing spectral weight redistribution to the latter and rendering them amplitude-mode-like. The anomalous hybridization and the large values of the amplitude mode frequencies provide evidence of a strong-coupling CDW, offering a possible explanation for the lack of soft acoustic modes. These results stress the importance of the lattice degree of freedom and electron-phonon coupling in the CDW formation in CsV3Sb5.

## Results

### Raman-active modes in CsV3Sb5

CsV3Sb5 crystallizes in a hexagonal lattice with the $P6/mmm$ space group [11]. Fig. 1b shows the unit cell of its crystal structure. The V atoms form a kagome net interspersed by Sb atoms (labeled Sb1), all within the ab-plane. The V atoms are further bonded by Sb atoms above and below the kagome plane (labeled Sb2). These V3Sb5 slabs are separated by Cs layers, with weak coupling between them to form a quasi-two-dimensional (quasi-2D) structure. Factor group analysis yields three Raman-active phonon modes, $\Gamma_{\mathrm{Raman}}=A_{1g}+E_{2g}+E_{1g}$. The former two can be detected when the photons are polarized in the $ab$-plane, satisfied by the back-scattering geometry used in our experiment. These intense modes are marked by dashed lines in Figs. 1c, d.
They involve only the Sb2 atoms, with their atomic vibrations along the $c$-axis and within the $ab$-plane for the $A_{1g}$ and $E_{2g}$ modes, respectively; see Fig. 1b. The $E_{2g}$ modes are a pair of degenerate vibrations with opposite circling directions, i.e., opposite chiralities (see Supplementary Note 1). These two types of symmetries can be distinguished by polarization-resolved measurements. Specifically, the $A_{1g}$ modes can be detected in the XX and LL polarization configurations, whereas the $E_{2g}$ modes appear in the XX, XY, and LR configurations. Here, XX and XY represent collinear and cross-linear polarization for the incident and scattered photons, and LL and LR involve circularly polarized light with left (L) and right (R) helicity. A comparison of data in all four configurations is included in Supplementary Fig. 1. Below $T_{\mathrm{CDW}}$, multiple peaks emerge, highlighted by the dotted lines in Figs. 1c, d. Their origin will be discussed in the next sections. These modes are rather weak compared to the main lattice phonons. Their disappearance at 100 K suggests a close correlation with CDW formation. In contrast, many weak peak-like structures below 100 cm$^{-1}$, whose origin is unclear, lack temperature dependence. Fig. 1e compares the observed Raman mode frequencies with those from DFT calculations for a single layer of CsV3Sb5 [22], considering two possible forms of lattice distortion, the Star of David (SD) and inverse Star of David (ISD, also referred to as tri-hexagonal) structures. Both of them show the same number of $A_{1g}$ and $E_{2g}$ modes, but with different ordering. Overall, the calculated ISD phonons agree much better with the experimental results, as shown in the figure and in Supplementary Tab. 1. All five predicted $A_{1g}$ modes and five out of the eight predicted $E_{2g}$ modes are observed. The observed $A_{1g}$ mode below 50 cm$^{-1}$ is unaccounted for by our calculations.
This mode was also observed by pump-probe time-resolved spectroscopy and, by comparison with calculations that take interlayer coupling into account, was assigned as a Cs mode arising from CDW modulation along the $c$-axis [23]. Except for this mode and the three $E_{2g}$ modes missing due to their weak scattering cross section, the symmetry ordering of all the other modes is in exact agreement between experiment and theory. These results suggest that the CDW ground state consists of weakly coupled layers with ISD-type distortion, but that CDW modulation along the $c$-axis is also indispensable. Since single-layer CsV3Sb5 holds the key to unraveling the CDW mechanism, we attempted to create atomically thin CsV3Sb5 by mechanical exfoliation. However, the loss of crystallinity impeded further investigation (Supplementary Figs. 2 and 3).

Temperature dependence of Raman modes

Figs. 2a, b show the temperature-dependent Raman intensity color plots for CsV3Sb5, obtained in the LL and LR configurations, respectively. The intense $A_{1g}$ and $E_{2g}$ main lattice modes are the most conspicuous features. Figs. 2e–g show the frequency (with the corresponding value at 200 K subtracted), linewidth (full width at half maximum), and normalized integrated area for both modes, extracted from Lorentzian fits of the peaks. The $A_{1g}$ frequency sharply increases below $T_{\mathrm{CDW}}$, whereas the $E_{2g}$ frequency exhibits only a subtle kink across the CDW transition. This is consistent with the planar ISD lattice distortion mainly involving V atoms, forcing the Sb2 atoms to displace along the $c$-axis and hence affecting the out-of-plane vibration of the $A_{1g}$ mode more effectively. The calculated phonon vibration patterns and frequencies confirm this picture (see Supplementary Fig. 4 and Supplementary Tab. 1). The CDW transition also causes a faster decrease in the linewidths below $T_{\mathrm{CDW}}$.
This can be understood as being due to the CDW-induced partial gapping of the Fermi surface [30, 31, 32, 33, 34], which reduces the electron-phonon interaction. The integrated peak intensity for both phonons increases upon warming, in line with increased thermal phonon populations. The rate of increase is faster when approaching $T_{\mathrm{CDW}}$ from below, and interestingly, the value saturates below approximately 50 K. The renormalization of the phonon parameters across the CDW transition evidences sizable electron-phonon coupling. CDW-induced modes are labeled in Figs. 2a, b. Except for the $A_{1}$ mode, there appear to be two types of modes, represented by $A_{2}$ and $E_{3}$. $A_{2}$ exhibits appreciable softening and broadening upon warming toward $T_{\mathrm{CDW}}$. It becomes overdamped before disappearing, visible in the color plot in Fig. 2a as the streak of signal below 100 cm$^{-1}$ between 60 and 90 K. These are signatures of a CDW amplitude mode [40, 41, 42, 43, 44], caused by the collapse of coherent CDW order near $T_{\mathrm{CDW}}$. $E_{3}$ shows a smaller frequency change and much less broadening, more consistent with the characteristics of a zone-folded mode [41]: such a mode arises from folding a zone-boundary phonon to the zone center, so the temperature dependence of its frequency is as weak as that of normal phonons. Figs. 2c, d compare the distinct temperature dependence of these two types of modes. While $A_{2}$ broadens significantly above 40 K, $E_{3}$ maintains its linewidth and suddenly vanishes above $\sim$80 K. The dramatic difference in linewidth broadening is quantified in Fig. 2i. Fig. 2h shows the frequencies of all the observed Raman modes on the same scale. Upon warming, the CDW-induced modes ($A_{1}$ excluded) soften more dramatically than the main lattice modes.
While it is tempting to assign most of them as amplitude modes because of this apparent softening, we show below that they are in fact zone-folded modes that mix with the amplitude modes and partially inherit their properties.

Nature of CDW-induced modes

Although soft acoustic phonons were not detected experimentally, our DFT results show that the formation of the CDW in CsV3Sb5 is similar to that in other well-known systems [40, 41, 42, 43, 44], in the sense that a soft acoustic phonon mode at the CDW wavevector condenses and gives rise to a distorted lattice [38]. The imaginary phonon modes of pristine CsV3Sb5 at the three $M$ points (see Supplementary Fig. 5) transform as the irreducible representation $M_{1}^{+}$ ($A_{g}$) of the space group $P6/mmm$ (little co-group $D_{2h}$). Fig. 3a shows that after the $2\times 2\times 1$ CDW transition, they are folded to $\Gamma$ and form triply degenerate modes. With lattice distortion, they decompose into a singlet $A_{1g}$ mode and a doublet $E_{2g}$ mode under the point group $D_{6h}$:

$3M^{+}_{1}\rightarrow A_{1g}\oplus E_{2g}.$ (1)

As the lattice distorts from the pristine phase to the stable ISD pattern, the imaginary $A_{1g}$ and $E_{2g}$ modes turn real with positive frequencies, and are expected to be observable as two Raman-active amplitude modes. The phonon wavefunctions of the soft $A_{1g}$ and $E_{2g}$ modes are shown in Figs. 4a, b, dominated by vibrations of the V atoms. The $A_{1g}$ mode is fully symmetric, involving breathing-type motion of the V triangles, V hexagons, and Sb2 atoms. The $E_{2g}$ mode involves circling motion of the atoms forming the V hexagons, while the amplitude of the Sb2 vibration is almost ten times smaller. DFT calculations further reveal that the amplitude modes strongly hybridize with the other CDW-induced Raman modes (i.e., the zone-folded modes, which remain at positive frequencies at the $\Gamma$ point in Fig.
3a), rendering them amplitude-mode-like and hence giving them apparent temperature-dependent frequencies. Figs. 4c, d show the real-space wavefunctions of all the CDW-induced modes in the $2\times 2\times 1$ ISD phase. $A_{2-5}$ and $E_{1-4}$ correspond to those in Fig. 2, and $E^{\prime}_{1-3}$ are undetected experimentally. Comparison with Figs. 4a, b shows that $A_{2-5}$ ($E_{1-4}$) all resemble the $A_{1g}$ ($E_{2g}$) soft mode, although the difference is also apparent. The similarity results from hybridization of the phonon wavefunctions. To quantify the mode mixing, we calculated the overlap between the soft modes and all the real modes of the stable ISD phase by projecting the phonon wavefunctions, $P_{f}=|\langle\bm{u}_{f}|\bm{u}_{\mathrm{SM}}\rangle|^{2}$, where $|\bm{u}_{\mathrm{SM}}\rangle$ refers to the wavefunction of the soft modes shown in Figs. 4a, b and $|\bm{u}_{f}\rangle$ to the wavefunction of the mode at frequency $f$ in the ISD phase. The results in Fig. 3b show that as the soft $A_{1g}$ and $E_{2g}$ modes shift from negative to positive frequencies and turn into amplitude modes, they hybridize with most of the zone-folded modes belonging to the same irreducible representation. The calculated projections in the stable ISD phase correlate reasonably well with the observed mode intensities in Figs. 1c, d. $A_{2}$ and $E_{1}$ are the residual amplitude modes after mode mixing. $E^{\prime}_{1-3}$ show minor projection from the $E_{2g}$ soft mode because of negligible wavefunction overlap, and accordingly their scattering cross section is weak. $E_{2-4}$ all involve the V triangles (Fig. 4d), indicating that they have contributions unrelated to the $E_{2g}$ soft mode. Indeed, as discussed earlier, $E_{3}$ shows clear experimental signatures of a zone-folded mode. The $A_{2}$ mode was also observed by Wulferding et al. in their Raman study [45] and by time-resolved pump-probe spectroscopy [23, 46].
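The mode-mixing projection $P_{f}=|\langle\bm{u}_{f}|\bm{u}_{\mathrm{SM}}\rangle|^{2}$ defined above can be sketched numerically. The following minimal example uses randomly generated orthonormal vectors in place of the actual DFT phonon wavefunctions, and the function name `mode_projections` is illustrative, not part of any published code:

```python
import numpy as np

def mode_projections(u_sm, modes):
    """Overlap P_f = |<u_f|u_SM>|^2 of a soft-mode eigenvector with
    each eigenvector of the distorted phase.

    u_sm:  (3N,) soft-mode wavefunction (normalized internally)
    modes: (M, 3N) array whose rows are normalized eigenvectors u_f
    """
    u_sm = u_sm / np.linalg.norm(u_sm)
    overlaps = modes.conj() @ u_sm  # <u_f|u_SM> for each mode f
    return np.abs(overlaps) ** 2

# Toy example: a complete orthonormal set of 12 "phonon" eigenvectors
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(12, 12)))
basis = q.T                      # rows are orthonormal eigenvectors
u_sm = rng.normal(size=12)       # stand-in for the soft-mode wavefunction

P = mode_projections(u_sm, basis)
# For a complete orthonormal basis the projections exhaust the soft
# mode's weight, so sum_f P_f = 1.
print(np.isclose(P.sum(), 1.0))  # prints True
```

In the paper's setting, the projections onto the zone-folded modes of the same irreducible representation would be sizable, quantifying the hybridization.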
However, those works suggested that the $A_{2}$ mode emerges only below $\sim$60 K [23, 46, 45], hence ascribing it to another phase transition associated with a unidirectional order [19, 25, 26]. According to our data (Fig. 2a), the $A_{2}$ mode survives above 60 K, and there is no clear evidence for two distinct phase transitions. Its vibration pattern shown in Fig. 4c confirms no relation to the unidirectional order. Another Raman study, by Wu et al. [47], reported a similar set of modes to ours, but with different relative intensities. They also observed extra modes that are possibly due to stronger $c$-axis modulation in their sample.

Discussion

The anomalously large hybridization between the amplitude modes and zone-folded modes is rare, because the two are mostly decoupled in canonical CDW materials, with the amplitude modes dominating the spectral intensity [41, 42, 43]. The hybridization is highly unusual, because the $A_{2}$ and $E_{1}$ amplitude modes and the zone-folded Raman modes span a wide frequency range, and they do not overlap in energy (except for $E_{1}$ and $E^{\prime}_{1}$) to exhibit the typical anti-crossing [48, 49]. Their strong coupling suggests that the hybridization occurs indirectly, through interaction with the common electronic system. As the Fermi surface instability associated with the van Hove singularity comes from the V bands [22], modes mainly involving V (including $A_{2-5}$ and $E_{1-4}$) naturally mix with the amplitude modes, whereas those mainly involving Sb (including $E^{\prime}_{1-2}$ and the $A_{1g}$ and $E_{2g}$ main lattice modes) do not. Similar mode mixing was also observed in the quasi-one-dimensional (quasi-1D) K0.3MoO3 using time-resolved pump-probe spectroscopy [50], and a simple model based on Ginzburg-Landau theory describes well the entanglement of the electronic and lattice parts of the CDW order parameter.
The similar phenomena observed in two systems with different dimensionality suggest the importance of electron-phonon coupling in both materials. However, a soft acoustic phonon is well established in K0.3MoO3 [51] but shown to be absent in CsV3Sb5 [20, 21]. In the mean-field weak-coupling theory [38, 28], the acoustic phonon softening, known as the Kohn anomaly [52], is a direct consequence of the divergent electronic susceptibility, which screens the phonon vibration at the CDW wavevector. In reality, the singular electronic susceptibility is smeared out, especially in dimensions higher than one, and momentum-dependent electron-phonon coupling dictates the phonon renormalization in certain systems such as 2$H$-NbSe2 [53, 54]. The lack of soft acoustic phonons in CsV3Sb5 clearly rules out both mechanisms. Instead, CsV3Sb5 may fall into the strong electron-phonon coupling regime [55], in which the non-detection of the Kohn anomaly has also been reported for the quasi-1D (TaSe4)2I, NbSe3, and BaVS3 [56, 57, 58]. In all these materials, strong electron-phonon coupling tends to localize electrons, violating the adiabatic Born-Oppenheimer approximation [55] used in DFT. The failure of the conduction electrons to screen the phonon vibration can naturally explain the absence of phonon softening [59]. The strong-coupling nature of the CDW in CsV3Sb5 is indeed supported by multiple facts, according to the qualitative criteria discussed in Ref. [28]. The CDW-induced gap $\Delta_{\mathrm{CDW}}$ is large, with $2\Delta_{\mathrm{CDW}}/k_{B}T_{\mathrm{CDW}}\approx 22$ according to infrared spectroscopy [30], where $k_{B}$ is the Boltzmann constant. The lattice distortion is substantial (amounting to about 5% of the lattice constant [22]), the distorted lattice exhibits clustering of V atoms into trimers and hexamers, and the CDW locks with the pristine lattice to form a commensurate structure, all indicating local chemical bonding [22, 35].
Moreover, DFT shows that the elastic potential for the ions in the pristine structure features double minima deeper than the thermal energy $k_{B}T_{\mathrm{CDW}}$ at the transition [22], a defining feature of the strong-coupling theory proposed by Gor’kov [55]. Such a potential well traps the ions in one of its minima, precluding soft phonon condensation. From the perspective of Raman scattering, the electron-phonon coupling constant $\lambda$ can be estimated from the amplitude mode frequency $\omega_{\mathrm{AM}}$ and the unscreened soft mode frequency $\omega_{\mathrm{SM}}^{0}$ as $\lambda=(\omega_{\mathrm{AM}}/\omega_{\mathrm{SM}}^{0})^{2}$, valid on the mean-field level [38]. The results for CsV3Sb5 and a variety of other CDW materials are compiled in Fig. 5. Notably, the four quasi-2D compounds 2$H$-NbSe2, 2$H$-TaSe2, CsV3Sb5, and 1$T$-TaS2 are roughly located in the expected order according to their $T_{\mathrm{CDW}}$, $\Delta_{\mathrm{CDW}}$, and commensurability. The frequencies of the amplitude modes in CsV3Sb5 are large, lower only than that of the higher one in 1$T$-TiSe2. Although the exact value of $\lambda$ may not be meaningful beyond the weak-coupling limit, these results clearly indicate the strong-coupling nature of the CDW in CsV3Sb5. Our Raman results offer informative insights into the CDW phase in CsV3Sb5, suggesting the dominance of the single layer in driving the transition and evidencing strong electron-phonon coupling. CsV3Sb5 represents a unique case in which the amplitude modes emerge in the absence of soft acoustic phonons. As important collective excitations of the CDW ground state, how they form without being driven by the folding of a soft acoustic phonon warrants further investigation.
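The mean-field estimate $\lambda=(\omega_{\mathrm{AM}}/\omega_{\mathrm{SM}}^{0})^{2}$ quoted above amounts to a one-line computation; a minimal sketch follows, using purely illustrative frequencies (not the measured values for CsV3Sb5 or any material in Fig. 5):

```python
def coupling_constant(omega_am, omega_sm0):
    """Mean-field estimate of the electron-phonon coupling constant,
    lambda = (omega_AM / omega_SM^0)^2.

    omega_am:  amplitude-mode frequency in the T -> 0 limit
    omega_sm0: unscreened soft-mode frequency far above T_CDW
    Both must be in the same units (e.g. cm^-1); lambda is dimensionless.
    """
    return (omega_am / omega_sm0) ** 2

# Illustrative numbers only: an amplitude mode at 100 cm^-1 and an
# unscreened soft mode at 60 cm^-1 give lambda = (100/60)^2 ~ 2.78,
# i.e. lambda > 1, in the strong-coupling regime marked in Fig. 5.
print(coupling_constant(100.0, 60.0))
```

A large amplitude-mode frequency relative to the unscreened soft mode thus directly translates into a large $\lambda$, which is the sense in which Fig. 5 orders the materials.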
Although DFT accurately predicts the CDW-induced Raman modes in the zero-temperature limit, in good agreement with our experiment, understanding how the amplitude modes emerge below $T_{\mathrm{CDW}}$ apparently requires models and computational methods that work at finite temperature.

Methods

Sample preparation

CsV3Sb5 single crystals were synthesized using the flux method [11]. The freshly cleaved surface of the samples was used in the study of bulk crystals. Raman scattering spectroscopy was performed using home-built confocal microscopy setups in the back-scattering geometry with 532 nm laser excitation. The normally incident light was focused on the sample to a micron-sized spot, and the scattered light was directed through Bragg notch filters to access the low-wavenumber region. The Raman signal was collected using a grating spectrograph and a liquid-nitrogen-cooled charge-coupled device. The samples were mounted in a vacuum chamber during data acquisition. Temperature control was achieved using a Montana Instrument Cryostation.

Calculations

The DFT calculation results by Tan et al. [22] are used for comparison with the experiment. Details of the calculations can be found therein. In addition, we calculated the force constants with the Vienna ab-initio Simulation Package (VASP) [60] and computed the phonon dispersion relation with Phonopy [61]. For the DFT calculation, a $5\times 5\times 5$ $\bm{k}$ mesh and an energy cutoff of 400 eV were used.

Data availability

The data in Figure 1e are available in Supplementary Table 1. Other data are available from the corresponding authors upon request.

References

* [1] Sachdev, S. Kagomé- and triangular-lattice Heisenberg antiferromagnets: Ordering from quantum fluctuations and quantum-disordered ground states with unconfined bosonic spinons. _Phys. Rev. B_ 45, 12377–12396 (1992). URL https://link.aps.org/doi/10.1103/PhysRevB.45.12377. * [2] Han, T.-H.
_et al._ Fractionalized excitations in the spin-liquid state of a kagome-lattice antiferromagnet. _Nature_ 492, 406–410 (2012). URL https://doi.org/10.1038/nature11659. * [3] Mazin, I. I. _et al._ Theoretical prediction of a strongly correlated Dirac metal. _Nat. Commun._ 5, 4261 (2014). URL https://doi.org/10.1038/ncomms5261. * [4] Ye, L. _et al._ Massive Dirac fermions in a ferromagnetic kagome metal. _Nature_ 555, 638–642 (2018). URL https://doi.org/10.1038/nature25987. * [5] Tang, E., Mei, J.-W. & Wen, X.-G. High-temperature fractional quantum Hall states. _Phys. Rev. Lett._ 106, 236802 (2011). URL https://link.aps.org/doi/10.1103/PhysRevLett.106.236802. * [6] Xu, G., Lian, B. & Zhang, S.-C. Intrinsic quantum anomalous Hall effect in the kagome lattice ${\mathrm{Cs}}_{2}{\mathrm{LiMn}}_{3}{\mathrm{F}}_{12}$. _Phys. Rev. Lett._ 115, 186802 (2015). URL https://link.aps.org/doi/10.1103/PhysRevLett.115.186802. * [7] Kang, M. _et al._ Topological flat bands in frustrated kagome lattice CoSn. _Nat. Commun._ 11, 4004 (2020). URL https://doi.org/10.1038/s41467-020-17465-1. * [8] Yu, S.-L. & Li, J.-X. Chiral superconducting phase and chiral spin-density-wave phase in a Hubbard model on the kagome lattice. _Phys. Rev. B_ 85, 144402 (2012). URL https://link.aps.org/doi/10.1103/PhysRevB.85.144402. * [9] Kiesel, M. L., Platt, C. & Thomale, R. Unconventional Fermi surface instabilities in the kagome Hubbard model. _Phys. Rev. Lett._ 110, 126405 (2013). URL https://link.aps.org/doi/10.1103/PhysRevLett.110.126405. * [10] Wang, W.-S., Li, Z.-Z., Xiang, Y.-Y. & Wang, Q.-H. Competing electronic orders on kagome lattices at van Hove filling. _Phys. Rev. B_ 87, 115135 (2013). URL https://link.aps.org/doi/10.1103/PhysRevB.87.115135. * [11] Ortiz, B. R. _et al._ New kagome prototype materials: discovery of ${\mathrm{KV}_{3}\mathrm{Sb}_{5}}$, ${\mathrm{RbV}}_{3}{\mathrm{Sb}}_{5}$, and ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. Mater._ 3, 094407 (2019). 
URL https://link.aps.org/doi/10.1103/PhysRevMaterials.3.094407. * [12] Ortiz, B. R. _et al._ $\mathrm{Cs}{\mathrm{V}}_{3}{\mathrm{Sb}}_{5}$: A ${\mathbb{Z}}_{2}$ Topological Kagome Metal with a Superconducting Ground State. _Phys. Rev. Lett._ 125, 247002 (2020). URL https://link.aps.org/doi/10.1103/PhysRevLett.125.247002. * [13] Liu, Z. _et al._ Charge-density-wave-induced bands renormalization and energy gaps in a kagome superconductor ${\mathrm{RbV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. X_ 11, 041010 (2021). URL https://link.aps.org/doi/10.1103/PhysRevX.11.041010. * [14] Kang, M. _et al._ Twofold van Hove singularity and origin of charge order in topological kagome superconductor CsV3Sb5. _Nature Physics_ (2022). URL https://doi.org/10.1038/s41567-021-01451-5. * [15] Ortiz, B. R. _et al._ Superconductivity in the ${\mathbb{Z}}_{2}$ kagome metal ${\mathrm{KV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. Mater._ 5, 034801 (2021). URL https://link.aps.org/doi/10.1103/PhysRevMaterials.5.034801. * [16] Yin, Q. _et al._ Superconductivity and normal-state properties of kagome metal RbV3Sb5 single crystals. _Chin. Phys. Lett._ 38, 037403 (2021). URL https://doi.org/10.1088/0256-307x/38/3/037403. * [17] Yang, S.-Y. _et al._ Giant, unconventional anomalous Hall effect in the metallic frustrated magnet candidate, KV3Sb5. _Sci. Adv._ 6, eabb6003 (2020). URL https://advances.sciencemag.org/content/advances/6/31/eabb6003.full.pdf. * [18] Yu, F. H. _et al._ Concurrence of anomalous Hall effect and charge density wave in a superconducting topological kagome metal. _Phys. Rev. B_ 104, L041103 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.L041103. * [19] Chen, H. _et al._ Roton pair density wave in a strong-coupling kagome superconductor. _Nature_ 599, 222–228 (2021). URL https://doi.org/10.1038/s41586-021-03983-5. * [20] Li, H. 
_et al._ Observation of unconventional charge density wave without acoustic phonon anomaly in kagome superconductors ${A\mathrm{V}}_{3}{\mathrm{Sb}}_{5}$ ($A=\mathrm{Rb}$, Cs). _Phys. Rev. X_ 11, 031050 (2021). URL https://link.aps.org/doi/10.1103/PhysRevX.11.031050. * [21] Xie, Y. _et al._ Electron-phonon coupling in the charge density wave state of CsV3Sb5 (2021). URL https://arxiv.org/abs/2111.00654. eprint 2111.00654. * [22] Tan, H., Liu, Y., Wang, Z. & Yan, B. Charge density waves and electronic properties of superconducting kagome metals. _Phys. Rev. Lett._ 127, 046401 (2021). URL https://link.aps.org/doi/10.1103/PhysRevLett.127.046401. * [23] Ratcliff, N., Hallett, L., Ortiz, B. R., Wilson, S. D. & Harter, J. W. Coherent phonon spectroscopy and interlayer modulation of charge density wave order in the kagome metal ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. Mater._ 5, L111801 (2021). URL https://link.aps.org/doi/10.1103/PhysRevMaterials.5.L111801. * [24] Christensen, M. H., Birol, T., Andersen, B. M. & Fernandes, R. M. Theory of the charge density wave in $A$V3Sb5 kagome metals. _Phys. Rev. B_ 104, 214513 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.214513. * [25] Zhao, H. _et al._ Cascade of correlated electron states in the kagome superconductor $\mathrm{Cs}{\mathrm{V}}_{3}{\mathrm{Sb}}_{5}$. _Nature_ 599, 216–221 (2021). URL https://doi.org/10.1038/s41586-021-03946-w. * [26] Wang, Z. _et al._ Electronic nature of chiral charge order in the kagome superconductor $\mathrm{Cs}{\mathrm{V}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. B_ 104, 075148 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.075148. * [27] Liang, Z. _et al._ Three-dimensional charge density wave and surface-dependent vortex-core states in a kagome superconductor ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. X_ 11, 031026 (2021). URL https://link.aps.org/doi/10.1103/PhysRevX.11.031026. * [28] Rossnagel, K. 
On the origin of charge-density waves in select layered transition-metal dichalcogenides. _Journal of Physics: Condensed Matter_ 23, 213001 (2011). URL https://doi.org/10.1088/0953-8984/23/21/213001. * [29] Ortiz, B. R. _et al._ Fermi surface mapping and the nature of charge-density-wave order in the kagome superconductor ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. X_ 11, 041030 (2021). URL https://link.aps.org/doi/10.1103/PhysRevX.11.041030. * [30] Zhou, X. _et al._ Origin of charge density wave in the kagome metal ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$ as revealed by optical spectroscopy. _Phys. Rev. B_ 104, L041101 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.L041101. * [31] Uykur, E. _et al._ Low-energy optical properties of the nonmagnetic kagome metal ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. B_ 104, 045130 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.045130. * [32] Nakayama, K. _et al._ Multiple energy scales and anisotropic energy gap in the charge-density-wave phase of the kagome superconductor ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. B_ 104, L161112 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.L161112. * [33] Lou, R. _et al._ Charge-density-wave-induced peak-dip-hump structure and the multiband superconductivity in a kagome superconductor CsV3Sb5. _Phys. Rev. Lett._ 128, 036402 (2022). URL https://link.aps.org/doi/10.1103/PhysRevLett.128.036402. * [34] Luo, Y. _et al._ Distinct band reconstructions in kagome superconductor CsV3Sb5 (2021). URL https://arxiv.org/abs/2106.01248. eprint 2106.01248. * [35] Wang, C., Liu, S., Jeon, H. & Cho, J.-H. Origin of charge density wave in the layered kagome metal ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$. _Phys. Rev. B_ 105, 045135 (2022). URL https://link.aps.org/doi/10.1103/PhysRevB.105.045135. * [36] Ye, Z., Luo, A., Yin, J.-X., Zahid Hasan, M. & Xu, G. Structural instability and charge modulations in the Kagome superconductor $A$V3Sb5 (2021). 
URL https://arxiv.org/abs/2111.07314. eprint 2111.07314. * [37] Rice, M. & Strässler, S. Theory of the soft phonon mode and dielectric constant below the Peierls transition temperature. _Solid State Communications_ 13, 1931–1933 (1973). URL https://www.sciencedirect.com/science/article/pii/0038109873900033. * [38] Grüner, G. _Density Waves in Solids_. Advanced book program: Addison-Wesley (Perseus Books Group, 2000). * [39] Sugai, S. Lattice vibrations in the charge-density-wave states of layered transition metal dichalcogenides. _phys. stat. sol. (b)_ 129, 13–39 (1985). URL http://dx.doi.org/10.1002/pssb.2221290103. * [40] Travaglini, G., Mörke, I. & Wachter, P. CDW evidence in one-dimensional K0.3MoO3 by means of Raman scattering. _Solid State Commun_ 45, 289–292 (1983). URL https://www.sciencedirect.com/science/article/pii/0038109883904830. * [41] Joshi, J. _et al._ Short-range charge density wave order in $2H$-TaS2. _Phys. Rev. B_ 99, 245144 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.99.245144. * [42] Hill, H. M. _et al._ Phonon origin and lattice evolution in charge density wave states. _Phys. Rev. B_ 99, 174110 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.99.174110. * [43] Lin, D. _et al._ Patterns and driving forces of dimensionality-dependent charge density waves in 2$H$-type transition metal dichalcogenides. _Nature Commun._ 11, 2406 (2020). URL https://doi.org/10.1038/s41467-020-15715-w. * [44] Barath, H. _et al._ Quantum and classical mode softening near the charge-density-wave–superconductor transition of ${\mathrm{Cu}}_{x}{\mathrm{TiSe}}_{2}$. _Phys. Rev. Lett._ 100, 106402 (2008). URL https://link.aps.org/doi/10.1103/PhysRevLett.100.106402. * [45] Wulferding, D. _et al._ Fermi surface instabilities in electronic Raman scattering of the metallic kagome lattice CsV3Sb5 (2021). URL https://arxiv.org/abs/2108.11690. eprint 2108.11690. * [46] Wang, Z. X. 
_et al._ Unconventional charge density wave and photoinduced lattice symmetry change in the kagome metal ${\mathrm{CsV}}_{3}{\mathrm{Sb}}_{5}$ probed by time-resolved spectroscopy. _Phys. Rev. B_ 104, 165110 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.104.165110. * [47] Wu, S. _et al._ Charge density wave order in kagome metal $A$V3Sb5 ($A$=Cs, Rb, K) (2022). URL https://arxiv.org/abs/2201.05188. eprint 2201.05188. * [48] Yusupov, R. V., Mertelj, T., Chu, J.-H., Fisher, I. R. & Mihailovic, D. Single-particle and collective mode couplings associated with 1- and 2-directional electronic ordering in metallic $R{\mathrm{Te}}_{3}$ ($R=\mathrm{Ho},\mathrm{Dy},\mathrm{Tb}$). _Phys. Rev. Lett._ 101, 246402 (2008). URL https://link.aps.org/doi/10.1103/PhysRevLett.101.246402. * [49] Lavagnini, M. _et al._ Raman scattering evidence for a cascade evolution of the charge-density-wave collective amplitude mode. _Phys. Rev. B_ 81, 081101 (2010). URL https://link.aps.org/doi/10.1103/PhysRevB.81.081101. * [50] Schäfer, H., Kabanov, V. V., Beyer, M., Biljakovic, K. & Demsar, J. Disentanglement of the electronic and lattice parts of the order parameter in a 1D charge density wave system probed by femtosecond spectroscopy. _Phys. Rev. Lett._ 105, 066402 (2010). URL https://link.aps.org/doi/10.1103/PhysRevLett.105.066402. * [51] Pouget, J. P., Hennion, B., Escribe-Filippini, C. & Sato, M. Neutron-scattering investigations of the Kohn anomaly and of the phase and amplitude charge-density-wave excitations of the blue bronze K0.3MoO3. _Phys. Rev. B_ 43, 8421–8430 (1991). URL https://link.aps.org/doi/10.1103/PhysRevB.43.8421. * [52] Kohn, W. Image of the Fermi surface in the vibration spectrum of a metal. _Phys. Rev. Lett._ 2, 393–394 (1959). URL https://link.aps.org/doi/10.1103/PhysRevLett.2.393. * [53] Johannes, M. D. & Mazin, I. I. Fermi surface nesting and the origin of charge density waves in metals. _Phys. Rev. B_ 77, 165135 (2008). 
URL https://link.aps.org/doi/10.1103/PhysRevB.77.165135. * [54] Zhu, X., Cao, Y., Zhang, J., Plummer, E. W. & Guo, J. Classification of charge density waves based on their nature. _Proceedings of the National Academy of Sciences_ 112, 2367–2371 (2015). URL https://www.pnas.org/content/112/8/2367. * [55] Gor’kov, L. P. Strong electron-lattice coupling as the mechanism behind charge density wave transformations in transition-metal dichalcogenides. _Phys. Rev. B_ 85, 165142 (2012). URL https://link.aps.org/doi/10.1103/PhysRevB.85.165142. * [56] Lorenzo, J. E. _et al._ A neutron scattering study of the quasi-one-dimensional conductor (TaSe4)2I. _Journal of Physics: Condensed Matter_ 10, 5039–5068 (1998). URL https://doi.org/10.1088/0953-8984/10/23/010. * [57] Requardt, H., Lorenzo, J. E., Monceau, P., Currat, R. & Krisch, M. Dynamics in the charge-density-wave system ${\mathrm{NbSe}}_{3}$ using inelastic x-ray scattering with mev energy resolution. _Phys. Rev. B_ 66, 214303 (2002). URL https://link.aps.org/doi/10.1103/PhysRevB.66.214303. * [58] Ilakovac, V. _et al._ Order-disorder type of peierls instability in ${\mathrm{BaVS}}_{3}$. _Phys. Rev. B_ 103, 014306 (2021). URL https://link.aps.org/doi/10.1103/PhysRevB.103.014306. * [59] Pouget, J.-P. The Peierls instability and charge density wave in one-dimensional electronic conductors. _Comptes Rendus Physique_ 17, 332–356 (2016). URL https://www.sciencedirect.com/science/article/pii/S163107051500225X. * [60] Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. _Phys. Rev. B_ 54, 11169–11186 (1996). URL https://link.aps.org/doi/10.1103/PhysRevB.54.11169. * [61] Togo, A. & Tanaka, I. First principles phonon calculations in materials science. _Scr. Mater._ 108, 1–5 (2015). URL https://doi.org/10.1016/j.scriptamat.2015.07.021. * [62] Hu, Y., Zheng, F., Ren, X., Feng, J. & Li, Y. 
Charge density waves and phonon-electron coupling in ${\mathrm{ZrTe}}_{3}$. _Phys. Rev. B_ 91, 144502 (2015). URL https://link.aps.org/doi/10.1103/PhysRevB.91.144502. * [63] Hoesch, M., Bosak, A., Chernyshov, D., Berger, H. & Krisch, M. Giant Kohn anomaly and the phase transition in charge density wave ${\mathrm{ZrTe}}_{3}$. _Phys. Rev. Lett._ 102, 086402 (2009). URL https://link.aps.org/doi/10.1103/PhysRevLett.102.086402. * [64] Maschek, M. _et al._ Wave-vector-dependent electron-phonon coupling and the charge-density-wave transition in $\mathrm{TbT}{\mathrm{e}}_{3}$. _Phys. Rev. B_ 91, 235146 (2015). URL https://link.aps.org/doi/10.1103/PhysRevB.91.235146. * [65] Weber, F. _et al._ Electron-phonon coupling and the soft phonon mode in ${\mathrm{TiSe}}_{2}$. _Phys. Rev. Lett._ 107, 266401 (2011). URL https://link.aps.org/doi/10.1103/PhysRevLett.107.266401. * [66] Moncton, D. E., Axe, J. D. & DiSalvo, F. J. Study of superlattice formation in 2$H$-NbSe2 and 2$H$-TaSe2 by neutron scattering. _Phys. Rev. Lett._ 34, 734–737 (1975). URL https://link.aps.org/doi/10.1103/PhysRevLett.34.734. * [67] Weber, F. _et al._ Extended phonon collapse and the origin of the charge-density wave in 2$H$-NbSe2. _Phys. Rev. Lett._ 107, 107403 (2011). URL https://link.aps.org/doi/10.1103/PhysRevLett.107.107403. * [68] Tsang, J. C., Smith, J. E. & Shafer, M. W. Raman spectroscopy of soft modes at the charge-density-wave phase transition in 2$H$-NbSe2. _Phys. Rev. Lett._ 37, 1407–1410 (1976). URL https://link.aps.org/doi/10.1103/PhysRevLett.37.1407.

Acknowledgements

This work was supported by the National Key Research and Development Program of China (Grant Nos. 2018YFA0307000 and 2017YFA0303201) and the National Natural Science Foundation of China (Grant No. 11774151). Growth of hexagonal boron nitride crystals was supported by the Elemental Strategy Initiative conducted by the MEXT, Japan (Grant No. JPMXP0112101001), JSPS KAKENHI (Grant No. JP20H00354), and A3 Foresight by JSPS. B.Y.
acknowledges financial support from the European Research Council (ERC Consolidator Grant “NonlinearTopo”, No. 815869) and the ISF - Quantum Science and Technology (No. 1251/19).

Author information

Contributions

X.X. conceived the project. G.L., X.M., and K.H. performed the Raman experiments. Q.L., Y.D., and H.-H.W. grew the CsV3Sb5 crystals. K.W. and T.T. grew the h-BN crystals. G.L. and X.X. analysed the experimental data. H.T., Y.L., and B.Y. performed the DFT calculations. J.X., W.T., and L.G. performed atomic force microscopy measurements. X.X. and B.Y. interpreted the results and co-wrote the paper, with comments from all authors.

Corresponding authors

Correspondence should be sent to Binghai Yan<EMAIL_ADDRESS>or Xiaoxiang Xi ([email protected]).

Competing interests

The authors declare no competing interests.

Figure 1: Raman-active phonon modes in CsV3Sb5. a Schematic illustration of the relation between the soft mode and amplitude mode in typical CDW materials, showing that the latter emerges after the former freezes below $T_{\mathrm{CDW}}$. b Crystal structure of CsV3Sb5. Sb sites with different Wyckoff positions are labeled Sb1 and Sb2. The arrows illustrate the vibration patterns of the main lattice $A_{1g}$ and $E_{2g}$ modes. The $E_{2g}$ mode is doubly degenerate, and only one form is shown. c, d Raman spectra measured on the $ab$-plane at 100 K and 4 K in the LL and LR polarization configurations. The dashed lines denote the main lattice phonons, and the dotted lines indicate the CDW-induced modes. e Comparison of the measured (Expt.) and calculated Raman mode frequencies for the inverse Star of David (ISD) and Star of David (SD) lattice distortions. The thick lines denote the main lattice phonons. The dots indicate modes undetected in our experiment.

Figure 2: Evolution of the Raman modes in CsV3Sb5 across the CDW transition. a, b Temperature-dependent Raman intensity color plots for CsV3Sb5, measured in the LL and LR configurations.
The normal phonon modes are labeled in black and the CDW-induced modes in white. The dashed lines mark $T_{\mathrm{CDW}}$. c, d Temperature-dependent spectra for the $A_{2}$ and $E_{3}$ modes. e–g Frequency, linewidth, and amplitude for the $E_{2g}$ and $A_{1g}$ main lattice phonons. The frequency and amplitude are compared to the corresponding values at 200 K. h Temperature dependence of the Raman mode frequencies. i Temperature dependence of the linewidth of the $A_{2}$ and $E_{3}$ modes. Figure 3: Phonon band structures and mode mixing in the process of CDW distortion. a Phonon band structures. Here, 100% (0%) refers to the fully stable ISD ($2\times 2$ pristine) structure. 10% refers to the intermediate structure with 10% distortion from the pristine to ISD phases. After $2\times 2\times 1$ band folding with no distortion, the imaginary triply-degenerate modes form at $\Gamma$. A weak ISD-type distortion lifts the degeneracy and leads to $A_{1g}$ and $E_{2g}$ modes. The ISD distortion gradually transforms the imaginary modes into real ones. b Projections of the imaginary $A_{1g}$ ($E_{2g}$) mode with 10% distortion onto all the other phonon modes at $\Gamma$, as evolving into the stable ISD phase (100%). We highlight all $A_{1g}$ and $E_{2g}$ modes by orange and blue dots, respectively, at the $\Gamma$ point. The dashed orange (blue) curve in b guides the eye, showing the evolution of the imaginary $A_{2}$ ($E_{1}$) modes in the CDW distortion. Figure 4: Real space wavefunctions of the imaginary soft modes and the stable CDW-induced Raman modes in the $2\times 2\times 1$ ISD phase. a, b Soft modes with $A_{1g}$ and $E_{2g}$ symmetries, respectively. c, d CDW-induced $A_{1g}$ and $E_{2g}$ stable modes. The $E_{2g}$ modes are pairs of chiral phonons and only one chiral mode is shown. The radius of the circles represents the amplitude of the vibration, and the arrow on the circles stands for the initial phase of the wavefunction.
Cs atoms are omitted in the crystal structure for clarity, because they do not contribute to lattice vibrations. Figure 5: Evidence of strong-coupling CDW in CsV3Sb5. Frequency of the amplitude mode $\omega_{\mathrm{AM}}$ in the zero-temperature limit and the unscreened frequency of the soft mode $\omega_{\mathrm{SM}}^{0}$ far above $T_{\mathrm{CDW}}$ for a collection of CDW materials. Some of the materials feature two amplitude modes, hence two data points connected by a vertical line. Since no soft mode is observed in CsV3Sb5, $\omega_{\mathrm{SM}}^{0}$ is taken to be its acoustic phonon frequency at 300 K [21]. Open (filled) symbols indicate the material is quasi-1D (quasi-2D). The dashed lines mark electron-phonon coupling constants $\lambda=1$ and 3 according to mean-field theory. Source of data: ZrTe3 [62, 63], TbTe3 [48, 64], K0.3MoO3 [51, 40], 1$T$-TiSe2 [65, 44], 2$H$-TaSe2 [66, 42, 43], 2$H$-NbSe2 [66, 67, 68].
# First detection of $c$-C3H2 in a circumstellar disk Chunhua Qi Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA Karin I. Öberg Departments of Chemistry and Astronomy, University of Virginia, Charlottesville, VA 22904, USA David J. Wilner, Katherine A. Rosenfeld Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA ###### Abstract We report the first detection of $c$-C3H2 in a circumstellar disk. The $c$-C3H2 J=$6-5$ line (217.822 GHz) is detected and imaged through Atacama Large Millimeter Array (ALMA) Science Verification observations toward the disk around the Herbig Ae star HD 163296 at 0.8′′ resolution. The emission is consistent with that arising from a Keplerian rotating disk. Two additional $c$-C3H2 transitions are also tentatively detected, bolstering the identification of this species, but with insufficient signal-to-noise ratio to constrain the spatial distribution. Using a previously developed model for the physical structure of this disk, we fit a radial power-law distribution model to the $c$-C3H2 $6-5$ emission and find that $c$-C3H2 is present in a ring structure from an inner radius of about 30 AU to an outer radius of about 165 AU. The column density is estimated to be 1012–1013 cm-2. The clear detection and intriguing ring structure suggest that $c$-C3H2 has the potential to become a useful probe of radiation penetration in disks. protoplanetary disks; astrochemistry; stars: formation; ISM: molecules; techniques: high angular resolution; radio lines: ISM ## 1 Introduction Pre-main sequence stars are commonly surrounded by disks, which serve the dual purpose of funneling accretion onto the central star and transporting away angular momentum. These disks are also the formation sites of planets and connect protostars and planets physically and chemically.
The main component of gas-rich circumstellar disks, cold H2, cannot be observed directly because of a lack of transitions at suitable energy levels. Instead, constraints on disk mass, density and temperature structures depend on observations of trace species, most commonly dust and CO gas (e.g. Qi et al., 2006; Piétu et al., 2007; Andrews et al., 2009). Accessing other disk properties, such as ionization levels, X-ray and UV radiation fields, turbulent mixing, chemical age and bulk composition, requires the development of additional molecular probes whose formation and destruction depend critically on one or several of these properties. The past decade has witnessed significant progress on disk chemistry theory, resulting in large numbers of predictions on how molecular abundance structures should depend on different disk properties (e.g. Aikawa & Nomura, 2006; Willacy, 2007; Semenov & Wiebe, 2011; Fogel et al., 2011; Walsh et al., 2012). A lack of observational sensitivity has however limited detections to the simplest molecules in the millimeter/submillimeter range, which characterizes the bulk of the gas disk, including CO, HCO+, DCO+, CN, HCN, DCN, C2H, H2CO, CS, SO, CH2, N2H+ and HC3N (Dutrey et al., 1997; Aikawa et al., 2003; Thi et al., 2004; Semenov et al., 2005; Dutrey et al., 2007; Qi et al., 2008; Fuente et al., 2010; Henning et al., 2010; Öberg et al., 2010, 2011). This list is expected to expand with the arrival of ALMA and its unprecedented sensitivity and spatial resolution, which will enable the development of a more diverse set of molecular probes. In this Letter, we use ALMA Science Verification observations of HD 163296 to report the first detection of cyclopropenylidene, $c$-C3H2, in a circumstellar disk.
At a distance of $\sim$122 pc (Perryman et al., 1997), this Herbig Ae star (stellar mass 2.3 M⊙; spectral type A1; age $\sim$4 Myr) harbors a large disk (a scattered light pattern extending out to a radius of $\sim$500 AU, Grady et al., 2000) with strong millimeter continuum and molecular line emission (Mannings & Sargent, 1997; Thi et al., 2004; Natta et al., 2004; Isella et al., 2007; Qi et al., 2011). The large disk size and strong molecular emission make this system an ideal target to search for new molecules. The $c$-C3H2 molecule is a small cyclic hydrocarbon, which was first detected in space by Thaddeus et al. (1985) towards Sgr B2. Since then, $c$-C3H2 has been detected in a wide range of astrophysical environments, including diffuse and dense clouds, protostars, planetary nebulae and extragalactic sources (e.g., Matthews & Irvine, 1985; Cox et al., 1988, 1989; Madden et al., 1989; Benson et al., 1998; Menten et al., 1999; Fossé et al., 2001; Sakai et al., 2008; Tenenbaum et al., 2009; Liszt et al., 2012), and is one of the most abundant molecules with 3 carbon atoms in the interstellar medium (Teyssier et al., 2004). In the context of disks, $c$-C3H2 has some favorable characteristics that may make it a powerful molecular probe, especially when observed in combination with isotopologues and chemically related molecules: 1. $c$-C3H2 has two equivalent H nuclei, which couple to generate ortho (nuclear spin of 1) and para (nuclear spin of 0) states. Deviations from the statistical weights of 3 to 1 may be used to constrain the formation conditions of the molecule (Park et al., 2006). 2. $c$-C3H2 is also one of two isomers; the other is the linear molecule $l$-C3H2. The isomer ratio can be used to assess the fractional ionization in clouds (Fossé et al., 2001) and may provide similar constraints on disks. 3. The formation and destruction chemistry of $c$-C3H2 (and carbon chains) are sensitive to high energy radiation.
The vertical and radial abundance structure should provide strong constraints on the penetration depth of X-rays and UV photons (Semenov & Wiebe, 2011). 4. Within the family of hydrocarbons, the sensitivity to turbulent mixing increases dramatically from C2 to C3H2, and from C3H2 to larger hydrocarbons. Observed ratios and limits have the potential to constrain transport in disks (Semenov & Wiebe, 2011). Finally, the chemistry that produces $c$-C3H2 is important to constrain in its own right, since $c$-C3H2 shares precursors with other, more complex carbon chains, which may be an important source of carbon during planet formation. In light of these possibilities, we aim to provide first constraints on the abundance, distribution and $o/p$ ratio of $c$-C3H2 in the HD 163296 disk. ## 2 Observations HD 163296 (RA: 17h56m21$\fs$287, DEC: $-$21°57′22$\farcs$39; J2000.0) was observed on 2012 June 9, June 23 and July 7 as part of the ALMA Science Verification (SV) program in band 6. At the time of observations, 25 12-m antennas were used in the array (2 7-m ACA antennas present in the dataset were flagged), with a resulting synthesized beam size of 0.9$\times$0.7$\arcsec$ (PA=84°). The source was observed with a total integration time of 84 minutes. The ALMA correlator was configured to simultaneously observe four spectral windows (two in each sideband). One of those windows (SpwID #1) covered both the 13CO and C18O $2-1$ lines (220.399 and 219.560 GHz, respectively). The data were calibrated in the CASA software package (v3.4), following the detailed calibration and imaging scripts provided by the ALMA science verification team. Since those scripts, along with fully calibrated measurement sets, are publicly available online (https://almascience.nrao.edu/almadata/sciver/HD163296Band6), we do not repeat the details here.
The 218 GHz (1.37 mm) continuum was generated from the line-free channels in the two spectral windows (SpwID #0 and #1) of the lower sideband, and the flux is determined to be 608.5$\pm$2.5 mJy, which agrees with the SMA observations (Qi et al., 2011). Excellent agreement is also seen between the SMA and ALMA spectra for the 13CO and C18O $2-1$ lines with fluxes within 10% (Qi et al., 2011). All of the $c$-C3H2 lines are continuum-subtracted and retrieved from SpwID #0 with a channel width of 0.488 MHz (0.67 km s-1) covering the total 1.875 GHz bandwidth from 216.167 to 218.042 GHz. We have binned the lines in channels of 1 km s-1 to achieve higher signal-to-noise ratios and still keep the key kinematic features of the disk. The resulting line rms noise level is estimated to be 2.2 mJy beam-1 per 1 km s-1 channel in line-free channels.
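The quoted conversion of the 0.488 MHz channel width to 0.67 km s-1 is the radio Doppler relation $\Delta v=c\,\Delta\nu/\nu$ evaluated near the 217.822 GHz $6-5$ line; a quick numerical check (the helper function is ours, not part of any ALMA tool):

```python
# Velocity width of a spectral channel via the radio Doppler relation
# dv = c * dnu / nu (non-relativistic approximation).
C_KM_S = 2.99792458e5  # speed of light in km/s

def channel_width_kms(dnu_mhz, nu_ghz):
    """Velocity width (km/s) of a channel dnu_mhz wide at frequency nu_ghz."""
    return C_KM_S * (dnu_mhz * 1e6) / (nu_ghz * 1e9)

print(round(channel_width_kms(0.488, 217.822), 2))  # 0.67, as quoted
```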
Figure 2 shows the resulting spectra of all three $c$-C3H2 lines together with the 13CO $2-1$ line (with its own masks covering the 13CO $2-1$ emission at each channel). The $6-5$ line is clearly detected with the expected line shape and Vlsr consistent with 13CO. The $5-4$ line presents a consistent spectral profile, at lower signal-to-noise ratio. In contrast, the $3-2$ line is both narrower and clearly blue-shifted, suggestive of a different physical origin, perhaps in a disk wind. However, both the $3-2$ and $5-4$ lines are observed at the edge of the spectral window, so this apparent asymmetry should be confirmed by independent observations before any strong conclusions are drawn. The integrated fluxes are reported in Table 1. These are just below the upper limits reported in two previous unsuccessful searches for $c$-C3H2 in disks (Fuente et al., 2010; Öberg et al., 2010). ### 3.2 Ring structure model We explore the distributions of $c$-C3H2 in HD 163296 based on a previously developed accretion disk model with well-defined temperature and density structures (Qi et al., 2011). To simulate and compare with the data, we assume the disk material orbits the central star in Keplerian motion, and fix the disk geometric and kinematic parameters that affect the observed spatio-kinematic behavior of the disk. The details of the stellar and accretion properties and the disk geometric and kinematic parameters are summarized in Table 3 of Qi et al. (2011). We have developed a physically self-consistent accretion disk model with an exponentially tapered edge that matches the spectral energy distribution and spatially resolved millimeter dust continuum emission. The disk temperature and density structures are further constrained by multiple CO and CO isotopologue line observations. Such analysis provides the essential framework for constraining the distribution of molecular abundances in protoplanetary disks.
Within this model framework, we assume that $c$-C3H2 is present with a constant abundance in a layer with boundaries toward the midplane and toward the surface of the disk (Qi et al., 2008; Öberg et al., 2012). This assumption is motivated by chemical models (e.g. Aikawa & Nomura, 2006) that predict a three-layered structure where most molecules are photodissociated in the surface layer, frozen out in the midplane, and have an abundance that peaks at intermediate disk heights. The surface ($\sigma_{s}$) and midplane ($\sigma_{m}$) boundaries are presented in terms of $\Sigma_{21}=\Sigma_{H}/(1.59\times 10^{21}cm^{-2})$, where $\Sigma_{H}$ is the hydrogen column density measured from the disk surface in the adopted physical model. This simple model approach approximates the vertical location where $c$-C3H2 is most abundant. Due to the limited signal-to-noise of the multiple $c$-C3H2 line detections, we cannot constrain the location of this vertical layer based on the $c$-C3H2 data. We therefore fix the vertical surface boundary $\sigma_{s}$ as 0.32 ($\log_{10}(\sigma_{s})=-0.5$) and the midplane boundary $\sigma_{m}$ as 3.2 ($\log_{10}(\sigma_{m})=0.5$), between which lies the expected location of most molecules with peak abundances in the warm molecular layer (Aikawa & Nomura, 2006). We model the radial column density distribution of $c$-C3H2 as a power law N${}_{100}\times(r/100)^{p}$, where N100 is the column density at 100 AU in cm-2, $r$ is the distance from the star in AU, and $p$ is the power-law index. Since the emission shows a clear indication of a central hole, we fit for both an inner radius Rin and outer radius Rout together with the power-law parameters (N100 and $p$). Using the structure model and the fixed vertical distributions, we compute a grid of synthetic $c$-C3H2 J=$6-5$ visibility datasets over a range of Rout, Rin, $p$ and N100 values and compare with the observations. 
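The radial model just described is simple to evaluate directly. A short sketch, with the best-fit values reported later in this section (N100 = 2.2$\times$1012 cm-2, $p=-2$, Rin = 30 AU, Rout = 165 AU) plugged in as illustrative defaults:

```python
def column_density(r_au, n100=2.2e12, p=-2.0, r_in=30.0, r_out=165.0):
    """c-C3H2 column density (cm^-2) under the ring power-law model
    N(r) = N100 * (r / 100 AU)^p, taken to be zero outside [r_in, r_out]."""
    if r_au < r_in or r_au > r_out:
        return 0.0
    return n100 * (r_au / 100.0) ** p

print(column_density(100.0))  # 2.2e12 at the normalization radius
print(column_density(50.0))   # 8.8e12: rises inward for p = -2
print(column_density(20.0))   # 0.0: inside the 30 AU central hole
```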
The best-fit model is obtained by minimizing $\chi^{2}$, the weighted difference between the real and imaginary part of the complex visibility measured in the ($u,v$)-plane sampled by the ALMA observations. We use the two-dimensional Monte Carlo model RATRAN (Hogerheijde & van der Tak, 2000) to calculate the radiative transfer and molecular excitation. The collisional rates are taken from Chandra & Kegel (2000) and the molecular data file is retrieved from the Leiden Atomic and Molecular Database (Schöier et al., 2005). Figure 3 shows the $\chi^{2}$ surfaces for Rin and $R_{out}$ versus the power-law index $p$. We find that $p$ is constrained between -2.5 and -1.5, Rin between 15 and 40 AU, and Rout between 150 and 200 AU (within 1$\sigma$), with best-fit values of -2, 30 and 165 AU, respectively (reduced $\chi^{2}=1.07$). By fixing the above parameters at their best-fit values, N($c$-C3H2) is determined to be 2.2$\pm$0.2 $\times$1012 cm-2 at 100 AU. Figure 4 presents comparisons between the observed channel maps and the best-fit model. The residual image does not show any significant emission, which indicates that the $c$-C3H2 emission is consistent with that arising from a Keplerian rotating disk. The $6-5$ line is actually blended with the $6_{1,6}-5_{0,5}$ (ortho) and $6_{0,6}-5_{1,5}$ (para) lines at the same frequency. Since the Einstein $A$-coefficients for both ortho and para lines are the same, 5.393$\times$10-4 s-1 (Schöier et al., 2005), we fit the data with the ortho line only and the resulting column density can be taken as the sum of both ortho and para forms of the molecule. For the weaker $5-4$ ortho line, the signal-to-noise ratio is too low to provide any constraints on the $c$-C3H2 distribution. By assuming the $c$-C3H2 distribution derived from the $6-5$ line, however, we can use the unresolved line flux to constrain the $o/p$ ratio.
The resulting best-fit values are N(C3H2-($o$))=1.6$\pm$0.3 $\times$1012 and N(C3H2-($o+p$))=2.2$\pm$0.2 $\times$1012 cm-2 at 100 AU (as shown above). Hence $o/p$=2.7$\pm$1.7, which is close to the statistical value of 3, but the large uncertainty implies that better data are required to use this ratio as a probe of formation conditions. ## 4 Discussion We have clearly detected $c$-C3H2 in the disk around HD 163296. This is one of the two largest molecules detected in disks so far, the other being HC3N, also a carbon-chain molecule, detected recently using deep single-dish observations (Chapillon et al., 2012). Using the spatially and spectrally resolved line emission, we have constrained the radial distribution of $c$-C3H2 in the HD 163296 disk as a ring structure ranging from $\sim$30 to $\sim$165 AU. The outer edge of $<$250 AU (2$\sigma$ limit) is much smaller than the gas disk size, which extends to 500 AU (Qi et al., 2011), which suggests that $c$-C3H2 formation is slow or that $c$-C3H2 destruction is fast in the outer disk. In terms of formation, the best-fit outer radius coincides with the previously determined location of the CO midplane snow line, and it is possible that CO freeze-out limits the carbon available in the gas phase to form hydrocarbons in general and $c$-C3H2 in particular. A second characteristic of the outer disk is that it is rather tenuous because of a rapidly decreasing column density outside of 200 AU (Qi et al., 2011). This may result in efficient penetration of radiation and thus destruction of the easily dissociated $c$-C3H2. Explicit model predictions are required to resolve which effect drives the disappearance of $c$-C3H2 in the outer disk. No existing disk chemistry model in the literature contains predictions for the radial distribution of $c$-C3H2.
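The $o/p$ ratio quoted in Section 3.2 follows from the two fitted column densities by simple error propagation; the sketch below treats the ortho and total uncertainties as uncorrelated, a simplification since the two fits share the same data:

```python
import math

def ratio_with_error(n_o, s_o, n_tot, s_tot):
    """o/p = N_o / (N_tot - N_o), with uncorrelated error propagation."""
    n_p = n_tot - n_o                      # para column density
    s_p = math.hypot(s_o, s_tot)           # sigma of the difference
    ratio = n_o / n_p
    sigma = ratio * math.hypot(s_o / n_o, s_p / n_p)
    return ratio, sigma

r, s = ratio_with_error(1.6e12, 0.3e12, 2.2e12, 0.2e12)
print(f"o/p = {r:.1f} +/- {s:.1f}")  # o/p = 2.7 +/- 1.7, as in the text
```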
Based on strong correlations found between $c$-C3H2 and CCH in several astrophysical environments (Lucas & Liszt, 2000; Pety et al., 2005; Gerin et al., 2011), predictions on CCH in disks can be used to shed further light on the origins of $c$-C3H2. Aikawa & Herbst (2001) predict a CCH central cavity in the absence of X-ray radiation; when X-rays are present (and HD 163296 is a known X-ray emitter; Swartz et al., 2005; Günther & Schmitt, 2009), the CCH column is instead centrally peaked. Despite the presence of X-rays, the central cavity could be due to a UV-dominated radiation field. This may be tested by observing $c$-C3H2 towards T Tauri stars with weak UV fields and strong X-rays, and towards additional Herbig Ae stars with weak X-rays. The inner cavity could also be a product of the main formation pathway of $c$-C3H2. Carbon chains can form through at least three different pathways: (1) ion-neutral reactions at low temperatures, (2) photo-erosion of larger carbonaceous compounds, and (3) neutral-neutral reactions at lukewarm temperatures following CH4 ice evaporation at $\sim$30 K (Herbst et al., 1984; Teyssier et al., 2004; Sakai et al., 2008; Gerin et al., 2011). Detailed modeling of $c$-C3H2 radial distributions in each of these formation scenarios is clearly needed to use $c$-C3H2 rings as tracers of the radiation field and other disk characteristics. In addition to more detailed modeling, additional observations are needed to constrain whether different hydrocarbons, especially CCH and $c$-C3H2, are correlated in disk environments, and to constrain the vertical distribution of these molecules. The latter requires multiple lines with a range of excitation energies. With the combined advance in theory and observations, $c$-C3H2 may become one of the more useful probes of penetration of the radiation fields in disks. Facility: ALMA This paper makes use of the following ALMA data: ADS/JAO.ALMA#2011.0.00010.SV.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We would like to thank an anonymous referee for thoughtful suggestions for the paper. We also acknowledge NASA Origins of Solar Systems grant No. NNX11AK63. ## References * Aikawa & Herbst (2001) Aikawa, Y. & Herbst, E. 2001, A&A, 371, 1107 * Aikawa et al. (2003) Aikawa, Y., Momose, M., Thi, W., et al. 2003, PASJ, 55, 11 * Aikawa & Nomura (2006) Aikawa, Y. & Nomura, H. 2006, ApJ, 642, 1152 * Andrews et al. (2009) Andrews, S. M., Wilner, D. J., Hughes, A. M., Qi, C., & Dullemond, C. P. 2009, ApJ, 700, 1502 * Benson et al. (1998) Benson, P. J., Caselli, P., & Myers, P. C. 1998, ApJ, 506, 743 * Chandra & Kegel (2000) Chandra, S., & Kegel, W. H. 2000, A&AS, 142, 113 * Chapillon et al. (2012) Chapillon, E., Dutrey, A., Guilloteau, S., et al. 2012, ApJ, 756, 58 * Cox et al. (1988) Cox, P., Guesten, R., & Henkel, C. 1988, A&A, 206, 108 * Cox et al. (1989) Cox, P., Walmsley, C. M., & Guesten, R. 1989, A&A, 209, 382 * Dutrey et al. (1997) Dutrey, A., Guilloteau, S., & Guelin, M. 1997, A&A, 317, L55 * Dutrey et al. (2007) Dutrey, A., Henning, T., Guilloteau, S., et al. 2007, A&A, 464, 615 * Fogel et al. (2011) Fogel, J. K. J., Bethell, T. J., Bergin, E. A., Calvet, N., & Semenov, D. 2011, ApJ, 726, 29 * Fossé et al. (2001) Fossé, D., Cernicharo, J., Gerin, M., & Cox, P. 2001, ApJ, 552, 168 * Fuente et al. (2010) Fuente, A., Cernicharo, J., Agúndez, M., et al. 2010, A&A, 524, A19 * Gerin et al. (2011) Gerin, M., Kaźmierczak, M., Jastrzebska, M., et al. 2011, A&A, 525, A116 * Grady et al. (2000) Grady, C. A., et al. 2000, ApJ, 544, 895 * Günther & Schmitt (2009) Günther, H. M., & Schmitt, J. H. M. M. 2009, A&A, 494, 1041 * Henning et al. (2010) Henning, T., Semenov, D., Guilloteau, S., et al. 
2010, ApJ, 714, 1511 * Herbst et al. (1984) Herbst, E., Adams, N. G., & Smith, D. 1984, ApJ, 285, 618 * Hogerheijde & van der Tak (2000) Hogerheijde, M. R. & van der Tak, F. F. S. 2000, A&A, 362, 697 * Isella et al. (2007) Isella, A., Testi, L., Natta, A., Neri, R., Wilner, D., & Qi, C. 2007, A&A, 469, 213 * Liszt et al. (2012) Liszt, H., Sonnentrucker, P., Cordiner, M., & Gerin, M. 2012, ApJ, 753, L28 * Lucas & Liszt (2000) Lucas, R. & Liszt, H. S. 2000, A&A, 358, 1069 * Mannings & Sargent (1997) Mannings, V., & Sargent, A. I. 1997, ApJ, 490, 792 * Matthews & Irvine (1985) Matthews, H. E. & Irvine, W. M. 1985, ApJ, 298, L61 * Madden et al. (1989) Madden, S. C., Irvine, W. M., Swade, D. A., Matthews, H. E., & Friberg, P. 1989, AJ, 97, 1403 * Menten et al. (1999) Menten, K. M., Carilli, C. L., & Reid, M. J. 1999, in Astronomical Society of the Pacific Conference Series, Vol. 156, Highly Redshifted Radio Lines, ed. C. L. Carilli, S. J. E. Radford, K. M. Menten, & G. I. Langston, 218 * Natta et al. (2004) Natta, A., Testi, L., Neri, R., Shepherd, D. S., & Wilner, D. J. 2004, A&A, 416, 179 * Öberg et al. (2010) Öberg, K. I., Qi, C., Fogel, J. K. J., et al. 2010, ApJ, 720, 480 * Öberg et al. (2011) Öberg, K. I., Qi, C., Fogel, J. K. J., et al. 2011, ApJ, 734, 98 * Öberg et al. (2012) Öberg, K. I., Qi, C., Wilner, D. J., & Hogerheijde, M. R. 2012, ApJ, 749, 162 * Perryman et al. (1997) Perryman, M. A. C., et al. 1997, A&A, 323, L49 * Park et al. (2006) Park, I. H., Wakelam, V., & Herbst, E. 2006, A&A, 449, 631 * Pety et al. (2005) Pety, J., Teyssier, D., Fossé, D., et al. 2005, A&A, 435, 885 * Piétu et al. (2007) Piétu, V., Dutrey, A., & Guilloteau, S. 2007, A&A, 467, 163 * Qi et al. (2011) Qi, C., D’Alessio, P., Öberg, K. I., et al. 2011, ApJ, 740, 84 * Qi et al. (2008) Qi, C., Wilner, D. J., Aikawa, Y., Blake, G. A., & Hogerheijde, M. R. 2008, ApJ, 681, 1396 * Qi et al. (2006) Qi, C., Wilner, D. J., Calvet, N., et al. 2006, ApJ, 636, L157 * Sakai et al. 
(2008) Sakai, N., Sakai, T., & Yamamoto, S. 2008, Ap&SS, 313, 153 * Schöier et al. (2005) Schöier, F. L., van der Tak, F. F. S., van Dishoeck, E. F., & Black, J. H. 2005, A&A, 432, 369 * Semenov et al. (2005) Semenov, D., Pavlyuchenkov, Y., Schreyer, K., et al. 2005, ApJ, 621, 853 * Semenov & Wiebe (2011) Semenov, D. & Wiebe, D. 2011, ApJS, 196, 25 * Swartz et al. (2005) Swartz, D. A., Drake, J. J., Elsner, R. F., et al. 2005, ApJ, 628, 811 * Tenenbaum et al. (2009) Tenenbaum, E. D., Milam, S. N., Woolf, N. J., & Ziurys, L. M. 2009, ApJ, 704, L108 * Teyssier et al. (2004) Teyssier, D., Fossé, D., Gerin, M., et al. 2004, A&A, 417, 135 * Thaddeus et al. (1985) Thaddeus, P., Vrtilek, J. M., & Gottlieb, C. A. 1985, ApJ, 299, L63 * Thi et al. (2004) Thi, W., van Zadelhoff, G., & van Dishoeck, E. F. 2004, A&A, 425, 955 * Walsh et al. (2012) Walsh, C., Nomura, H., Millar, T. J., & Aikawa, Y. 2012, ApJ, 747, 114 * Willacy (2007) Willacy, K. 2007, ApJ, 660, 441

Table 1: $c$-C3H2 line results.

| Transition | Frequency (GHz) | Eu (K) | Beam | $\int Fdv$ (mJy km s-1) |
|---|---|---|---|---|
| $3_{3,0}-2_{2,1}$ | 216.279 | 19 | $1\farcs 3\times 1\farcs 2$ (76°) | 53[9] |
| $6_{1,6}-5_{0,5}/6_{0,6}-5_{1,5}$ | 217.822 | 39 | $0\farcs 9\times 0\farcs 7$ (83°) | 185[10] |
| $5_{1,4}-4_{2,3}$ | 217.940 | 35 | $1\farcs 3\times 1\farcs 2$ (78°) | 74[9] |

Figure 1: The integrated intensity maps (summed between 0 and 11 km s-1) and intensity-weighted mean velocity fields of the 13CO $2-1$ and $c$-C3H2 $6-5$ lines (left panel), and the $c$-C3H2 $5-4$ and $3-2$ lines (right panel) toward HD 163296. The resolved velocity field of the $c$-C3H2 $6-5$ line agrees with the CO kinematics. In the $c$-C3H2 maps, the first contour marks 3$\sigma$ followed by 1$\sigma$ contour increases. The rms varies between 6 and 9 mJy km s-1 per beam. Synthesized beams are presented in the lower left corners. The star symbol indicates the continuum (stellar) position. The axes are offsets from the pointing center in arcseconds.
Figure 2: Extracted spectra of the three $c$-C3H2 transitions toward HD 163296, plotted together with 13CO $2-1$. The dashed line marks VLSR toward HD 163296. Figure 3: Iso-$\chi^{2}$ surfaces of $R_{\rm out}$ and $R_{\rm in}$ versus $p$. Contours correspond to the 1–5 $\sigma$ errors. Figure 4: The top panel shows the velocity channel map of the $c$-C3H2 $6-5$ emission toward HD 163296 (velocities binned in 1 km s-1). Contours are 0.0022 Jy Beam-1 (1$\sigma$) $\times[3,4,5,6,7,8,9]$. The middle and bottom panels show the best-fit model and the difference between the best-fit model and data on the same contour scale. The star symbol indicates the continuum (stellar) position. The axes are offsets from the pointing center in arcseconds.
# Sphere Packing Densities of Certain Families of Elliptic Curves Arjun Nigam ###### Abstract. In this paper, we examine the Mordell-Weil lattices of two families of elliptic curves over fields of characteristic $p>0$. We compute explicit lower bounds on the densities of the densest sphere packings of Mordell-Weil lattices using geometric information about the curve such as its Néron–Tate height and rational points on the curve. We see how these methods may be generalized to other “nice” families of elliptic curves and how they encode interesting properties about the elliptic curves themselves. ## 1\. Introduction A sphere packing in $\mathbb{R}^{n}$ is an arrangement of non-overlapping spheres of the same radius. The densest sphere packing in $n$ dimensions is a sphere packing in $\mathbb{R}^{n}$ such that the fraction of the volume occupied by the spheres is maximal. Lagrange proved in $1773$ that the densest sphere packing in two dimensions is the hexagonal packing. More than 200 years later, Thomas Hales proved that the cubic close packing and hexagonal close packing arrangements achieve the densest sphere packing in three dimensions. Later, Maryna Viazovska resolved the problem of finding the densest sphere packing for dimension $8$ in [Via17]. Together with Henry Cohn, Abhinav Kumar, Stephen Miller, and Danylo Radchenko, she later applied her method in [Via+17] to resolve the problem in dimension $24$ as well. In this paper, we find lower bounds on the sphere packing densities associated to Mordell-Weil lattices of two families of elliptic curves. After introducing the relevant definitions and general results of sphere packings, we relate the density of the Mordell-Weil lattice of an elliptic curve to its Néron–Tate height function and regulator in section 3. In section 4, we focus on a specific family of elliptic curves and obtain lower bounds on their densities in two different ways. These bounds are obtained using subtle properties of the given family of elliptic curves.
We compare our results to the best known sphere packing densities in [Coh17]. We also see how the information about the two lower bounds can be combined to obtain results about the original elliptic curve, namely a lower bound on the order of its Tate–Shafarevich group. Next, we focus on a different family of elliptic curves, one where we have more limited information. While we are unable to obtain lower bounds on the densities of the Mordell-Weil lattices, we are able to find lower bounds on the densities of certain sublattices. We see that while this method requires less information about the elliptic curves than the method employed in the previous section, the resulting lower bounds on densities turn out to be much smaller than the best known sphere packing densities in the corresponding dimensions. In section 6, we explore how the ideas of the previous two sections may be applied to other families of elliptic curves. It is often the case that subtle properties of specific families of elliptic curves allow us to deduce results that do not hold for elliptic curves in general. However, there are many examples in the literature where the same result can be proven for different families of elliptic curves using different arguments. We state a few examples relevant to this paper and describe how the methods of this paper may be modified to study the sphere packing densities associated to other families of elliptic curves. ## 2\. Preliminaries We now make some basic definitions and introduce notation that we will use throughout the paper. ###### Definition 2.1. Let $\Lambda$ be a free $\mathbb{Z}$-module in $\mathbb{R}^{n}$ of maximal rank. A sphere packing on $\Lambda$ is a non-overlapping arrangement of spheres of the same radius in $\mathbb{R}^{n}$ such that each sphere is centered at a point of $\Lambda$. The densest sphere packing of $\Lambda$ is a sphere packing on $\Lambda$ of maximal density.
The packing radius $\rho$ of $\Lambda$ is the radius of the largest open ball in $\mathbb{R}^{n}$ whose translates by elements of $\Lambda$ do not overlap. Let $E$ be an elliptic curve over a field $k$ and let $E(k)$ denote the set of $k$-rational points of $E$. It is a well-known fact that there is a binary operation on $E(k)$ which makes it into an abelian group. Moreover, if $k$ is a number field or a function field over a finite field, then $E(k)$ is a finitely generated abelian group. Thus, we get that $E(k)_{\text{free}}:=E(k)/E(k)_{\text{torsion}}$ is a free abelian group of finite rank. There is also a positive-definite quadratic form $\hat{h}\colon E(k)_{\text{free}}\longrightarrow\mathbb{R}$ called the Néron–Tate height. The Néron–Tate height has an associated positive-definite symmetric bilinear form $\langle-,-\rangle\colon E(k)_{\text{free}}\times E(k)_{\text{free}}\longrightarrow\mathbb{R}$ given by $\langle P,Q\rangle=\frac{1}{2}(\hat{h}(P+Q)-\hat{h}(P)-\hat{h}(Q)).$ ###### Definition 2.2. A lattice is a finitely generated free abelian group equipped with a positive-definite symmetric bilinear form. The Mordell-Weil lattice of $k$-rational points of an elliptic curve $E$ is the group $E(k)_{\text{free}}$ equipped with the bilinear form described above. The sphere packing density associated to an elliptic curve $E$ is the density of the densest sphere packing on its Mordell-Weil lattice. If $(\Lambda,\langle-,-\rangle)$ is a lattice where $\Lambda$ is of rank $n$, then we can find a group homomorphism $\Phi\colon\Lambda\longrightarrow\mathbb{R}^{n}$ such that $(\Phi(P))\cdot(\Phi(Q))=\langle P,Q\rangle$ for all $P$ and $Q$ in $\Lambda$. $\Phi$ can be constructed using the Gram-Schmidt process. 
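Concretely, such a $\Phi$ can be realized by a Cholesky factorization of the Gram matrix of the pairing: writing $G=LL^{t}$, the rows of $L$ are vectors in $\mathbb{R}^{n}$ whose dot products reproduce $\langle-,-\rangle$. A small sketch in Python (the hexagonal Gram matrix here is our toy example, not a Mordell-Weil lattice):

```python
import math

def cholesky(G):
    """Return lower-triangular L with G = L L^T; the rows of L
    realize the embedding Phi of a lattice with Gram matrix G."""
    n = len(G)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(G[i][i] - s) if i == j else (G[i][j] - s) / L[j][j]
    return L

# Gram matrix of the hexagonal lattice: <P,P> = <Q,Q> = 1, <P,Q> = 1/2.
G = [[1.0, 0.5], [0.5, 1.0]]
L = cholesky(G)
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
# The embedded vectors reproduce the pairing...
assert abs(dot(L[0], L[1]) - 0.5) < 1e-12
# ...and the covolume is |det L| = sqrt(det G) = sqrt(3)/2 here, so with
# minimal norm 1 the normalized center density (Definition 3.2) is
# (1/2)^2 / (sqrt(3)/2) = 1/(2 sqrt(3)) ~ 0.2887.
vol = L[0][0] * L[1][1]
delta = 0.5 ** 2 / vol
assert abs(delta - 1 / (2 * math.sqrt(3))) < 1e-9
```

The value $1/(2\sqrt{3})$ is the well-known normalized center density of the hexagonal packing, which serves as a sanity check on the embedding.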
Thus, instead of studying the lattice $(\Lambda,\langle-,-\rangle)$, we can study its image in $\mathbb{R}^{n}$ equipped with the usual dot product. Note that $\Phi$ is not unique. In fact, the images of $\Lambda$ under different $\Phi$ may be different. However, they will be isomorphic as lattices. Thus, we can define the $k$-density of the densest sphere packing associated to an elliptic curve $E$ as the density of the densest sphere packing of an embedding of the lattice $E(k)_{\text{free}}$ into $\mathbb{R}^{n}$. ## 3\. General Results ###### Proposition 3.1. Let $\rho$ be the packing radius of a maximal rank abelian subgroup $\Lambda\subseteq\mathbb{R}^{n}$ and let $V(\Lambda)$ be the volume of the fundamental domain of $\Lambda$. Then the density of the densest sphere packing of $\Lambda$ is given by $\frac{\pi^{\frac{n}{2}}}{V(\Lambda)\cdot\Gamma(\frac{n}{2}+1)}\rho^{n}.$ ###### Proof. This follows from the formula for the volume of the $n$-ball of radius $\rho$ and the observation that the volume within the fundamental domain occupied by the spheres in the densest packing is exactly the volume of an $n$-ball of radius $\rho$. ∎ ###### Definition 3.2. Let $\Lambda$ be a maximal rank free subgroup of $\mathbb{R}^{n}$. Then the normalized center density $\delta_{\Lambda}$ is given by $\frac{\rho^{n}}{V(\Lambda)}$ where $\rho$ is the packing radius of $\Lambda$ and $V(\Lambda)$ is the volume of the fundamental domain of $\Lambda$. It is easy to see that for a fixed $n$, the ratio of the normalized center density and the sphere packing density is constant for all lattices in $\mathbb{R}^{n}$. Thus, having a large lower bound on the normalized center density gives us a large lower bound on the sphere packing density. ###### Proposition 3.3. [Elk94] Let $E$ be an elliptic curve over $K$ where $K$ is a number field or function field over a finite field. 
For any subfield $k$ of $K$, the normalized center density of $E(k)$ is given by $\Delta^{-1/2}\left(\frac{N_{\text{min}}}{4}\right)^{n/2}$ where $n$ is the rank of $E(k)$, $N_{\text{min}}$ is the minimal non-zero value of the Néron–Tate height function, and $\Delta$ is the absolute value of the determinant of the pairing matrix of the generators of $E(k)_{\text{free}}$. ###### Proof. Note that the volume of the fundamental domain of a lattice is the square root of the absolute value of the determinant of the Gram matrix of a set of generators of the lattice. To avoid overlaps in the Mordell-Weil lattice, the packing radius $\rho$ must be half the minimal distance between any two distinct points of the lattice. This minimal distance is given by $\sqrt{N_{\text{min}}}$. ∎ ## 4\. The Curve $E\colon y^{2}=x^{3}+t^{q}-t$ Let $p>3$ be a prime such that $p\equiv-1$ mod $6$, $q$ an odd power of $p$, and $r$ a sufficiently large power of $p$ (the notion of “sufficiently large” is made precise below). In this section, we consider the curve given by $y^{2}=x^{3}+t^{q}-t$ over the field $k:=\mathbb{F}_{r}(t)$. ###### Definition 4.1. Fix an odd $p$-power $q=p^{c}$. We say that $r:=p^{s}$ is sufficiently large if $s$ is a multiple of $c$, $8$ divides $(p+1)s$, and $3(p^{c}-1)$ divides $p^{s}-1$. In particular, $\mathbb{F}_{q}\subseteq\mathbb{F}_{r}$. If $r$ is sufficiently large, then so are all powers of $r$. ###### Definition 4.2. The naive height $h$ of a $\mathbb{F}_{r}(t)$-rational point $P=(x,y)$ on an elliptic curve is $\text{deg}(x):=\text{max}(\text{deg}(f),\text{deg}(g))$ where $x=\frac{f(t)}{g(t)}$ for $f$ and $g$ coprime polynomials. If $P=[0:1:0]$, then we define $h(P)=0$. We wish to find a lower bound on $N_{\text{min}}$ for our elliptic curve. 
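Definition 4.1 is straightforward to check computationally; a small sketch in Python (the function name is ours):

```python
def sufficiently_large(p, c, s):
    """Definition 4.1: r = p^s is 'sufficiently large' for q = p^c if
    s is a multiple of c, 8 divides (p+1)s, and 3(p^c - 1) divides p^s - 1."""
    return (s % c == 0
            and ((p + 1) * s) % 8 == 0
            and (p ** s - 1) % (3 * (p ** c - 1)) == 0)

# r = 5^4 is sufficiently large for q = 5, but r = 5^2 is not,
# since 8 does not divide (5 + 1) * 2 = 12.
assert sufficiently_large(5, 1, 4)
assert not sufficiently_large(5, 1, 2)
assert sufficiently_large(11, 1, 2)
```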
This is the same as finding the minimum value of $\hat{h}$ on non-torsion points. Since $\hat{h}$ has a bounded difference with the naive height, it is perhaps useful to find a lower bound on the naive height. ###### Proposition 4.3. The naive height on $E$ is bounded below by $\frac{q+1}{3}$. ###### Proof. Let $\text{ord}_{\infty}$ denote the valuation at $\infty$. Let $P=(x,y)$ be a point on $E$ that is not the identity. Then we can apply $\text{ord}_{\infty}$ to both sides of $y^{2}=x^{3}+t^{q}-t$ to get $2\text{ord}_{\infty}(y)=\text{ord}_{\infty}(x^{3}+t^{q}-t).$ Note that $\text{ord}_{\infty}(t)=-1\neq-q=\text{ord}_{\infty}(t^{q})$. Thus, we may use the strict triangle inequality to get that $\text{ord}_{\infty}(t^{q}-t)=-q$. However, $q$ is not a multiple of $3$, whereas $\text{ord}_{\infty}(x^{3})=3\text{ord}_{\infty}(x)$ is, so we must have, using the strict triangle inequality again, that $\text{ord}_{\infty}(x^{3}+t^{q}-t)=\text{min}(3\text{ord}_{\infty}(x),-q).$ We also know that $2\text{ord}_{\infty}(y)=\text{ord}_{\infty}(y^{2})=\text{min}(3\text{ord}_{\infty}(x),-q)$. Since $2$ does not divide $q$, we must have that $3\text{ord}_{\infty}(x)<-q$. Thus, we get $\text{ord}_{\infty}(x)\leq-\left(\frac{q+1}{3}\right).$ This shows that $x$ has a pole of order at least $\frac{q+1}{3}$ at $\infty$, so we must have $h(P)=\text{deg}(x)\geq\frac{q+1}{3}$. ∎ Using this proposition, we can now prove that the naive height and the Néron–Tate height are the same for our curve. ###### Proposition 4.4. The Néron–Tate height and the naive height agree on $E$. ###### Proof. Since the Néron–Tate height $\hat{h}$ is uniquely characterized by the fact that its difference with the naive height $h$ is bounded and $\hat{h}(2P)=4\hat{h}(P)$ for all $\mathbb{F}_{r}(t)$-rational points $P$ on the curve, it suffices to show that the naive height on $E$ satisfies $h(2P)=4h(P)$ for all rational points $P$ on the curve $E$. Clearly, this holds when $P$ is the identity. 
Let $P=(x,y)$ be a non-identity $\mathbb{F}_{r}(t)$-rational point on the curve where $x=\frac{f(t)}{g(t)}$ where $f$ and $g$ are coprime polynomials. By the proof of Proposition 4.3, we have $\text{deg}(g)-\text{deg}(f)=\text{ord}_{\infty}(x)\leq-\left(\frac{q+1}{3}\right).$ This implies that $\text{deg}(f)\geq\text{deg}(g)+\frac{q+1}{3}$, so we conclude that $\text{deg}(x)=\text{deg}(f)$ for all points $P=(x,y)$ with $x=\frac{f}{g}$ where $f$ and $g$ are coprime polynomials. Now, we compute $2P$ explicitly. Using basic arithmetic of elliptic curves, we see that the first coordinate of the point $2P$ is $\frac{f^{4}-8fg^{3}\cdot(t^{q}-t)}{4(gf^{3}+g^{4}\cdot(t^{q}-t))}.$ By our earlier observation on the height of $\mathbb{F}_{r}(t)$-rational points of $E$, if there is no cancellation between the numerator and the denominator, the naive height of $2P$ is $\text{deg}(f^{4}-8fg^{3}\cdot(t^{q}-t))$. However, using Proposition 4.3, we get $\text{deg}(f^{4})=4\text{deg}(f)\geq\text{deg}(f)+3\text{deg}(g)+q+1>\text{deg}(8fg^{3}\cdot(t^{q}-t)).$ Thus, we get $\text{deg}(f^{4}-8fg^{3}\cdot(t^{q}-t))=\text{deg}(f^{4})=4\text{deg}(f)=4h(P)$. All we need to show now is that $\frac{f^{4}-8fg^{3}\cdot(t^{q}-t)}{4(gf^{3}+g^{4}\cdot(t^{q}-t))}$ has no cancellation. Assume that $\tau$ is an irreducible polynomial (in $\mathbb{F}_{r}[t]$) that divides both the numerator and denominator. Then $\tau$ must also divide $g(f^{4}-8fg^{3}\cdot(t^{q}-t))-\frac{f}{4}(4(gf^{3}+g^{4}\cdot(t^{q}-t)))=-9g^{4}f\cdot(t^{q}-t).$ Since $\tau$ is irreducible and $9\neq 0$, we must then have that $\tau$ either divides $g,f,$ or $t^{q}-t$. Case 1: $\tau$ divides $g$. Since $\tau$ divides both $f^{4}-8fg^{3}\cdot(t^{q}-t)$ and $g$, we must have that it divides $f$ as well. This is a contradiction as $f$ and $g$ were assumed to be coprime polynomials. Case 2: $\tau$ divides $f$. Since $\tau$ divides both $gf^{3}+g^{4}\cdot(t^{q}-t)$ and $f$, we must have that it divides either $g$ or $t^{q}-t$. 
Since $g$ and $f$ are coprime polynomials, this can only happen if $\tau$ divides $t^{q}-t$. However, $\mathbb{F}_{q}\subseteq\mathbb{F}_{r}$ and the roots of $t^{q}-t$ are precisely the elements of $\mathbb{F}_{q}$. Thus, the irreducible $\tau$ must be of the form $t-\alpha$ for some $\alpha\in\mathbb{F}_{q}$. This implies that $\alpha$ is a root of $f$. Since $x=\frac{f(t)}{g(t)}$ where $f$ and $g$ are coprime polynomials, we also get that $x$ has a root at $\alpha$. Note that $t^{q}-t$ has a root at $\alpha$ of multiplicity one, so we can use the strict triangle inequality to get $2\text{ord}_{\alpha}(y)=\text{ord}_{\alpha}(y^{2})=\text{ord}_{\alpha}(x^{3}+t^{q}-t)=\text{min}(\text{ord}_{\alpha}(x^{3}),\text{ord}_{\alpha}(t^{q}-t))=1,$ which is a contradiction as $2$ does not divide $1$. Case 3: $\tau$ divides $t^{q}-t$. Since $\tau$ divides both $f^{4}-8fg^{3}\cdot(t^{q}-t)$ and $t^{q}-t$, we must have that it divides $f$ as well. This then leads to the same contradiction as in Case 2. This shows that there is no cancellation in $\frac{f^{4}-8fg^{3}\cdot(t^{q}-t)}{4(gf^{3}+g^{4}\cdot(t^{q}-t))}$, so the naive height satisfies $h(2P)=4h(P)$. ∎ ###### Remark. Using Proposition 4.4, we can give an elementary argument to show that $E(\mathbb{F}_{r}(t))$ is torsion-free for $r$ sufficiently large. Let $P=(x,y)$ be a non-trivial $\mathbb{F}_{r}(t)$-rational point on $E$ that is torsion. By Proposition 4.4, we know that $0=\hat{h}(P)=\text{deg}(x)$. This is only possible if $x=c$ for some $c\in\mathbb{F}_{r}$. This would imply that $y^{2}=t^{q}-t+c^{3}$. This means that $y$ is a polynomial in $t$. Since $q$ is an odd integer, $t^{q}-t+c^{3}$ cannot be the square of a polynomial. This is a contradiction. ###### Proposition 4.5 (Proposition 8.4.1(3) in [UG20]). For $r$ sufficiently large, rank$(E(\mathbb{F}_{r}(t)))=2(q-1)$. ###### Proposition 4.6 (Corollary 9.2(3) in [UG20]). $\Delta$ is bounded above by $r^{\lfloor\frac{q}{6}\rfloor}$ for $r$ sufficiently large. 
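Both algebraic identities used in the proof of Proposition 4.4 — the duplication formula for the $x$-coordinate on $y^{2}=x^{3}+b$ and the combination that isolates $-9g^{4}f\cdot(t^{q}-t)$ — are polynomial identities, so they can be sanity-checked at random rational values. A sketch in Python with exact rational arithmetic ($b$ stands for $t^{q}-t$; the helper is ours):

```python
import random
from fractions import Fraction as F

def check_identities(trials=100, seed=0):
    """Check the two polynomial identities from the proof of
    Proposition 4.4 at random rational values of f, g, b."""
    rng = random.Random(seed)
    rand = lambda: F(rng.randint(-9, 9), rng.randint(1, 9))
    ok = 0
    for _ in range(trials):
        f, g, b = rand(), rand(), rand()
        if g == 0 or g * f**3 + g**4 * b == 0:
            continue  # 2P undefined or division by zero; skip
        x = f / g
        # Duplication on y^2 = x^3 + b: x(2P) = 9x^4/(4(x^3+b)) - 2x.
        x2 = 9 * x**4 / (4 * (x**3 + b)) - 2 * x
        # The paper's expression for the first coordinate of 2P:
        paper = (f**4 - 8 * f * g**3 * b) / (4 * (g * f**3 + g**4 * b))
        assert x2 == paper
        # Combination ruling out common irreducible factors:
        assert (g * (f**4 - 8 * f * g**3 * b)
                - f * (g * f**3 + g**4 * b)) == -9 * g**4 * f * b
        ok += 1
    return ok

assert check_identities() > 50
```

Since both sides are polynomial in $f$, $g$, and $b$, agreement at many random rational points is strong evidence of the identity (and exact equality of `Fraction`s rules out rounding issues).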
###### Proposition 4.7. For $r$ sufficiently large, the normalized center density $\delta_{E(\mathbb{F}_{r}(t))}$ is bounded below by $\frac{\sqrt{|\Sha(E/\mathbb{F}_{r}(t))|}}{(r^{\lfloor\frac{q}{6}\rfloor})^{1/2}}\cdot\left(\frac{q+1}{12}\right)^{q-1}.$ ###### Proof. This follows by combining the results of Proposition 3.3, Proposition 4.3, Proposition 4.4, Proposition 4.5, and Proposition 4.6. ∎ Using the trivial bound on the Tate–Shafarevich group, we can now use Proposition 4.7 to get explicit lower bounds on the densities of our family of curves. However, as we will see, this lower bound can often be improved by considering sublattices of the Mordell-Weil lattice. Indeed, let $\Lambda$ be a maximal rank sublattice of the Mordell-Weil lattice of $E$. We may then replace $\Delta^{-1/2}$ by $V(\Lambda)^{-1}$ in Proposition 3.3 to find a lower bound on the normalized packing density of $E$. With this as the motivation, we list some explicit rational points on the curve and consider the sublattice that they generate. ###### Proposition 4.8. Let $\sigma$ be a solution to $\sigma^{6(q-1)}=-1$ and $\beta$ a solution to $\beta^{q}+\beta=1$ in $\mathbb{F}_{r}$ for $r$ sufficiently large. Then $P=(\sigma^{2}(t-(\beta/\sigma^{6}))^{\frac{q+1}{3}},\sigma^{3}(t-(\beta/\sigma^{6})^{q})^{\frac{q+1}{2}})$ is a point in $E(\mathbb{F}_{r}(t))$. Thus, we get $6q(q-1)$ $\mathbb{F}_{r}(t)$-rational points on $E$. ###### Proof. This can be verified by plugging in the point $P$ into the defining equation of our elliptic curve and using the relations on $\sigma$ and $\beta$. Alternatively, these points can be deduced by considering the map $C_{6,q}\longrightarrow E_{0}$ which we get by presenting $C_{6,q}$ and $E_{0}\colon y^{2}=x^{3}+1$ as quotients of the Fermat curve of degree $q+1$. 
∎ Using this MAGMA script, we find the volume of (a Euclidean embedding of) the sublattice generated by the points in Proposition 4.8 which we then use to find lower bounds on the sphere packing density of the Mordell-Weil lattice for different values of $p,q,$ and $r$. We compare it with the lower bounds obtained using Proposition 4.7 and the trivial bound $|\Sha(E/\mathbb{F}_{r}(t))|\geq 1$ in the table below.

$q$ | $r$ | Lower Bound on Normalized Density (Using MAGMA) | Lower Bound on Normalized Density (Using Proposition 4.7 and the trivial bound $|\Sha(E/\mathbb{F}_{r}(t))|\geq 1$) | Dimension | Best Known (Normalized) Sphere Packing Density (If Known) [Coh17]
---|---|---|---|---|---
$5$ | $5^{4}$ | $0.0625$ | $0.0625$ | $8$ | $0.0625$
$5$ | $5^{8}$ | $0.0625$ | $0.0625$ | $8$ | $0.0625$
$5$ | $5^{12}$ | $0.0625$ | $0.0625$ | $8$ | $0.0625$
$5^{3}$ | $5^{16}$ | $\sim 2.653\times 10^{20}$ | $\sim 6.198\times 10^{12}$ | $248$ | 
$11$ | $11^{2}$ | $\sim 0.0909$ | $\sim 0.0909$ | $20$ | $\sim 0.1315$
$11$ | $11^{6}$ | $\sim 0.0909$ | $\sim 0.00075$ | $20$ | $\sim 0.1315$
$17$ | $17^{4}$ | $\sim 2.272$ | $\sim 0.0078$ | $32$ | $\sim 2.565$

We see that there is repetition in the lower bound of the normalized density (computed using MAGMA) when $q$ is fixed and $r$ varies. This is to be expected as the explicit points in Proposition 4.8 and the explicit formulation of the Néron–Tate height do not depend on $r$ (assuming that $r$ is sufficiently large). Thus, they generate the same lattice. The same repetition is not observed in the lower bound obtained using Proposition 4.7. This is likely due to the fact that we used the trivial bound on $|\Sha(E/\mathbb{F}_{r}(t))|$ when computing it and $\Sha(E/\mathbb{F}_{r}(t))$ may depend on $r$. Up to this point, we have been using properties of the given elliptic curves to deduce information about the densities of their Mordell-Weil lattices. 
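The Proposition 4.7 column of the table can be reproduced with a few lines of Python (exact only up to the rounding shown in the table):

```python
def prop_4_7_lower_bound(q, r):
    """Lower bound on the normalized center density of E(F_r(t))
    from Proposition 4.7, taking the trivial bound |Sha| >= 1."""
    return ((q + 1) / 12.0) ** (q - 1) / float(r ** (q // 6)) ** 0.5

# Reproduces the Proposition 4.7 column of the table:
assert abs(prop_4_7_lower_bound(5, 5 ** 4) - 0.0625) < 1e-12
assert abs(prop_4_7_lower_bound(11, 11 ** 2) - 0.0909) < 1e-4
assert abs(prop_4_7_lower_bound(11, 11 ** 6) - 0.00075) < 1e-5
assert abs(prop_4_7_lower_bound(17, 17 ** 4) - 0.0078) < 1e-3
```

Note that for $q=5$ the exponent $\lfloor q/6\rfloor$ vanishes, so the bound is $(1/2)^{4}=0.0625$ independently of $r$, exactly as the table shows.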
Now, we can use the two lower bounds to say something interesting about the elliptic curve $E$. The lower bound obtained using MAGMA divided by the lower bound obtained using Proposition 4.7 is a lower bound on $\sqrt{|\Sha(E/\mathbb{F}_{r}(t))|}$. This follows from how we computed these lower bounds. In particular, we see that for the cases considered above, $\Sha(E/\mathbb{F}_{r}(t))$ is non-trivial when $q=5^{3}$ and $r=5^{16}$ or when $q=17$ and $r=17^{4}$. ###### Remark. $E_{8}$ is the unique lattice (up to isometry and rescaling) to have the highest density sphere packing in dimension $8$. For $q=5$ and $r=5^{4}$, we observe that the lower bound obtained equals the sphere packing density of $E_{8}$. This means that the Mordell-Weil lattice must have sphere packing density equal to that of $E_{8}$ which, by uniqueness, implies that the Mordell-Weil lattice is $E_{8}$. This also implies that $\Sha(E/\mathbb{F}_{r}(t))$ is trivial in this case. ## 5\. The Legendre Curve Let $p$ be an odd prime and $d=p^{f}+1$ for some positive integer $f$. Consider the function field $K_{d}:=\mathbb{F}_{p}(\mu_{d},u)$ where $\mu_{d}$ is the set of $d^{\text{th}}$ roots of unity and $u^{d}-t=0$ for an indeterminate $t$. We study the elliptic curve $E\colon y^{2}=x(x+1)(x+t).$ As with the previous curve, we would like to have explicit points on this curve. ###### Proposition 5.1. Let $E$ and $K_{d}$ be as defined. Then for a fixed primitive $d^{\text{th}}$ root of unity $\zeta_{d}$ and $i\in\\{0,1,\ldots,d-1\\}$, the point given by $P_{i}^{(d)}:=(\zeta^{i}_{d}u,\zeta^{i}_{d}u(\zeta^{i}_{d}u+1)^{d/2})$ is a $K_{d}$-rational point of $E$. ###### Proof. This can be checked by plugging in each $P^{(d)}_{i}$ into the equation defining $E$ or by observing the action of $\text{Gal}(K_{d}/K)$ on the $K_{d}$-rational point $P:=(u,u(u+1)^{d/2})$. ∎ ###### Proposition 5.2. 
The explicit rational points in Proposition 5.1 generate a rank $d-2$ subgroup of the Mordell-Weil group. $E(K_{d})$ also has rank $d-2$. ###### Proof. The first part of this proposition follows from Corollary 4.3 in [Ulm14] and the second part follows from Corollary 5.3 of the same paper. ∎ Thus, we get that the subgroup generated by the $P_{i}^{(d)}$ is of finite index in $E(K_{d})$. Just like in the previous section, it is difficult to find a set of generators for the free part of the Mordell-Weil group. However, instead of finding a lower bound on the normalized sphere packing density of $E$, we can find a lower bound on the normalized sphere packing density of the finite index sublattice generated by the $P_{i}^{(d)}$. ###### Proposition 5.3. The height pairing of the points $P^{(d)}_{i}$ is given by $\langle P^{(d)}_{i},P^{(d)}_{j}\rangle=\begin{dcases}\frac{(d-1)(d-2)}{2d}&\text{if }i=j\\\ \frac{1-d}{d}&\text{if }i\neq j\text{ and }i-j\text{ is even}\\\ 0&\text{if }i-j\text{ is odd}\\\ \end{dcases}$ ###### Proof. This is Theorem 8.2 in [Ulm14]. ∎ ###### Proposition 5.4. For any non-torsion point $Q$ in the subgroup generated by the $P_{i}^{(d)}$, we have the inequality $\frac{d-1}{2d}\leq\hat{h}(Q).$ ###### Proof. Let $Q=\sum_{i=0}^{d-1}a_{i}P_{i}^{(d)}$ be a point in the subgroup generated by the $P_{i}^{(d)}$. Then we can use bilinearity of the height pairing and the previous proposition to get $\hat{h}(Q)=\left\langle\sum_{i=0}^{d-1}a_{i}P_{i}^{(d)},\sum_{i=0}^{d-1}a_{i}P_{i}^{(d)}\right\rangle=\frac{(d-1)(d-2)(a_{0}^{2}+\cdots+a_{d-1}^{2})+(1-d)\left(\sum_{i<j,\ i-j\text{ even}}2a_{i}a_{j}\right)}{2d}.$ Thus, we may write $\hat{h}(Q)$ as $\frac{d-1}{2d}\cdot m$ where $m$ is some integer. Since the height function is positive definite and $Q$ is non-torsion, we must have that $m\geq 1$. This gives us the required lower bound. ∎ Using these results, we can now compute explicit lower bounds on the normalized sphere packing density of a sublattice of $E$ using this MAGMA script. 
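The entries in the table below can also be reproduced from Propositions 5.3 and 5.4 alone: build the Gram matrix of a rank $d-2$ subset of the $P_{i}^{(d)}$, take $V=\sqrt{\det}$, and use $\frac{d-1}{2d}$ as the minimal norm. A sketch in Python with exact rational arithmetic (taking $P_{0},\ldots,P_{d-3}$ as the generating subset is our choice, not the paper's script):

```python
from fractions import Fraction as F

def pairing(i, j, d):
    """Height pairing <P_i, P_j> from Proposition 5.3."""
    if i == j:
        return F((d - 1) * (d - 2), 2 * d)
    return F(1 - d, d) if (i - j) % 2 == 0 else F(0)

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n, res = len(M), F(1)
    for i in range(n):
        if M[i][i] == 0:  # pivot swap flips the sign
            k = next(r for r in range(i + 1, n) if M[r][i] != 0)
            M[i], M[k], res = M[k], M[i], -res
        res *= M[i][i]
        for r in range(i + 1, n):
            fac = M[r][i] / M[i][i]
            M[r] = [a - fac * b for a, b in zip(M[r], M[i])]
    return res

def density_lower_bound(p, f):
    """Normalized-center-density lower bound for the sublattice
    generated by P_0, ..., P_{d-3} (a rank d-2 subset)."""
    d = p ** f + 1
    G = [[pairing(i, j, d) for j in range(d - 2)] for i in range(d - 2)]
    n_min = F(d - 1, 2 * d)  # Proposition 5.4
    return float(n_min / 4) ** ((d - 2) / 2) / float(det(G)) ** 0.5

# Reproduces the first rows of the table below:
assert abs(density_lower_bound(3, 1) - 0.125) < 1e-12
assert abs(density_lower_bound(5, 1) - 1 / 192) < 1e-9
assert abs(density_lower_bound(3, 2) - 1.953e-6) < 1e-8
```

Dropping $P_{d-2}$ and $P_{d-1}$ removes one point from each parity class, which leaves an independent set since the pairing restricted to each class has a one-dimensional kernel.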
$p$ | $f$ | Dimension | Normalized Density (Lower Bound) | Best Known (Normalized) Sphere Packing Density [Coh17]
---|---|---|---|---
$3$ | $1$ | $2$ | $\sim 0.125$ | $\sim 0.288$
$3$ | $2$ | $8$ | $\sim 1.953\times 10^{-6}$ | $\sim 0.0625$
$3$ | $3$ | $26$ | $\sim 3.208\times 10^{-26}$ | $\sim 0.577$
$5$ | $1$ | $4$ | $\sim 0.005$ | $\sim 0.125$
$5$ | $2$ | $24$ | $\sim 8.119\times 10^{-24}$ | $\sim 1.003$
$7$ | $1$ | $6$ | $\sim 1.22\times 10^{-4}$ | $\sim 0.0721$

We observe that as $p$ and $f$ get larger, the normalized density gets smaller. This is because the Gram matrix obtained from the height pairing has a large determinant for large values of $p$ and $f$. Note that, unlike the previous section, the computational lower bounds that we obtained here are not necessarily lower bounds on the density of $E$. While we were able to obtain a lower bound on the minimum non-zero value of the Néron–Tate height function on the sublattice generated by the $P_{i}^{(d)}$, we do not know if this value is also the minimum on the entire Mordell-Weil lattice. ## 6\. Generalizations We now discuss how the methods used in this paper may be generalized to other “nice” families of elliptic curves. Our strategy in Section 4 is motivated by [Elk94]. We first proved that the Néron–Tate height and the naive height agree for our family of elliptic curves and then found a lower bound for the naive height using elementary techniques. While the proofs of these results use properties specific to the family of curves being studied, there are many examples in the literature of elliptic curves where the naive and Néron–Tate height agree (for example, the family of hyperelliptic curves considered in [Elk94]). Even if the naive and Néron–Tate heights do not agree for all points, we might still be able to perform explicit computations on the lower bounds of lattice densities. Let $E$ be an elliptic curve over the field $\mathbb{F}_{q}(t)$ where $q$ is a $p$-power for a prime $p>3$. 
We choose a Weierstrass model $y^{2}=x^{3}+a_{4}x+a_{6}$ for $E$ where $a_{i}\in\mathbb{F}_{q}[t]$ are polynomials and max$(\lceil\frac{\text{deg}(a_{i})}{i}\rceil)$ is minimal (doing so is possible by Lemma 5.43 of [SS19]). We define $d$ to be the smallest integer such that $\text{deg}(a_{i})\leq di$. We say that $E$ is a “nice” elliptic curve if (1) for all places $\nu$, the discriminant $\Delta$ vanishes at $\nu$ to order at most $2$ (no root, a simple root, or a double root), and (2) if $\Delta$ has a double root at $\nu$, then the reduction of $E$ at $\nu$ is additive. Assume that $E$ is a “nice” curve and let $P$ be a $\mathbb{F}_{q}(t)$-rational point. We have $\langle P,P\rangle=2\chi+2(P\cdot O)-\sum_{v}\text{contr}_{v}(P)$ where the sum is taken over all singular reducible fibers and $P\cdot O$ is the intersection number (Theorem 6.24 in [SS19]). Since we assumed that $E$ is “nice”, table 15.1 of [Sil09] tells us that all singular fibers are irreducible. Thus, we get $\langle P,P\rangle=2\chi+2(P\cdot O)$. However, since $\chi=d$, all we need to do to compute the height is to find the intersection number $P\cdot O$. Let $(x_{0}(t),y_{0}(t))$ be the coordinate representation of the point $P$ where $x_{0}=\frac{f(t)}{g(t)}$ for coprime polynomials $f$ and $g$. Since the $a_{i}$ are polynomials, they do not blow up when $t$ is finite. Thus, given a finite place $t_{0}$, we have that $x_{0}(t_{0})$ blows up if and only if $y_{0}(t_{0})$ blows up. This means that the section determined by $P$ meets the section determined by $O$ $\text{deg}(g)$ many times at finite places (counted with multiplicity). For $t=\infty$, we change coordinates of our curve by $x=t^{2d}x^{\prime}$ and $y=t^{3d}y^{\prime}$. By our choice of $a_{i}$, we get that the coefficients in our new model are regular functions at $t=\infty$. Thus, we get that the section determined by $P$ intersects the section determined by $O$ at infinity max$(0,\text{deg}(f)-\text{deg}(g)-2d)$-many times. 
If deg$(f)$ is large enough, we get that the section determined by $P$ meets the section determined by $O$ a total of $\text{deg}(g)+(\text{deg}(f)-\text{deg}(g)-2d)=\text{deg}(f)-2d$ many times. Accounting for the degree $2$ morphism $E\longrightarrow\mathbb{P}^{1}$, we get $P\cdot O=\frac{1}{2}\left(\text{deg}(f)-2d\right)$. This gives us $\langle P,P\rangle=2d+2(P\cdot O)=\text{deg}(f)$. We have now shown that for “nice” curves $E$ and points $P=(\frac{f}{g},y)$ with $\text{deg}(f)$ large enough, the naive and Néron–Tate heights on $P$ agree. Thus, if we have a lower bound on the naive height for $E$ and a set of points $P_{i}=(\frac{f_{i}}{g_{i}},y_{i})$ where deg$(f_{i})$ is large enough for each $i$ and the $P_{i}$’s generate a maximal rank sublattice $L$, we can compute lower bounds on the lattice density of $L$. ## 7\. Acknowledgements I would like to thank Dr. Douglas Ulmer for suggesting the project and mentoring me with it. Dr. Ulmer’s constant encouragement, expertise, and patient guidance were invaluable in the writing of this paper. I would also like to thank Dr. Bryden Cais, Dr. Steven J. Miller, Daniel Lewis, and Tristan Phillips for their valuable feedback on the first draft of this paper. ## References * [Coh17] H. Cohn “A conceptual breakthrough in sphere packing” In _Notices of the American Mathematical Society_ 64, 2, 2017 * [Elk94] N. Elkies “Mordell-Weil Lattices in Characteristic 2: I. Construction and First Properties” In _International Mathematics Research Notices_ 1994, Issue 8, 1994 * [Sil09] J. Silverman “The Arithmetic of Elliptic Curves” 106, Graduate Texts in Mathematics Springer New York, NY, 2009 * [SS19] M. Schütt and T. Shioda “Mordell–Weil Lattices” 70, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics Springer Singapore, 2019 * [UG20] D. Ulmer and R. Griffon “On the arithmetic of a family of twisted constant elliptic curves” In _Pacific Journal of Mathematics_ 305, 2020 * [Ulm14] D. 
Ulmer “Explicit points on the Legendre curve” In _Journal of Number Theory_ 136, 2014 * [Via+17] M. Viazovska et al. “The sphere packing problem in dimension 24” In _Annals of Mathematics_ 185, 2017 * [Via17] M. Viazovska “The sphere packing problem in dimension 8” In _Annals of Mathematics_ 185, 2017
Another advantage of using two pairs of exposures is that one can calculate the null polarization (NULL1 and NULL2) as in equations 20 and 26 of Bagnulo et al. (2009), which provides a way to quantify the amount of spurious polarization. The null polarization for the Difference method is given by $NULL_{X}=\frac{1}{4}\sum_{k=1}^{2}{\left[(-1)^{k-1}\left(\frac{r_{2k-1}-1}{r_{2k-1}+1}-\frac{r_{2k}-1}{r_{2k}+1}\right)\right]},$ (18) and for the Ratio method the null polarization is given by $NULL_{X}=\frac{\left[\prod_{k=1}^{2}{\left(r_{2k-1}/r_{2k}\right)^{(-1)^{k-1}}}\right]^{1/4}-1}{\left[\prod_{k=1}^{2}{\left(r_{2k-1}/r_{2k}\right)^{(-1)^{k-1}}}\right]^{1/4}+1}.$ (19) Finally, the uncertainties of polarimetric measurements can be calculated from the extracted fluxes and their uncertainties (denoted here by $\sigma$) by equations A3 and A10 of Bagnulo et al. (2009). In the Difference method, the variance for each spectral element is given by $\sigma_{X}^{2}=\frac{1}{16}\sum_{i=1}^{4}{\left\\{\left[\frac{2f_{i\parallel}f_{i\perp}}{(f_{i\parallel}+f_{i\perp})^{2}}\right]^{2}\left[\frac{\sigma_{i\parallel}^{2}}{f_{i\parallel}^{2}}+\frac{\sigma_{i\perp}^{2}}{f_{i\perp}^{2}}\right]\right\\}},$ (20) and in the Ratio method the variance is given in terms of the flux ratio $r_{i}$ as defined in Equation 14, i.e., $\sigma_{X}^{2}=\frac{\left(\frac{r_{1}}{r_{2}}\frac{r_{4}}{r_{3}}\right)^{1/2}}{4\left[\left(\frac{r_{1}}{r_{2}}\frac{r_{4}}{r_{3}}\right)^{1/4}+1\right]^{4}}\sum_{i=1}^{4}{\left[\frac{\sigma_{i\parallel}^{2}}{f_{i\parallel}^{2}}+\frac{\sigma_{i\perp}^{2}}{f_{i\perp}^{2}}\right]}.$ (21) Applying this formalism to SPIRou spectra, we obtain values that vary continuously throughout the spectrum and are systematically above or below zero for each spectrum, which we refer to here as the ‘continuum polarization’. 
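As a concrete sanity check on equations 18 and 19, both null expressions vanish when the two exposures of each pair have identical flux ratios. A sketch in Python, taking the Ratio-method null with $R^{4}=(r_{1}/r_{2})(r_{3}/r_{4})^{-1}$ following Bagnulo et al. (2009) (the function names and toy ratios are ours, not part of apero):

```python
def null_difference(r):
    """Equation 18: Difference-method null from the four flux ratios
    r[i] = f_parallel / f_perp of exposure i+1."""
    q = lambda x: (x - 1.0) / (x + 1.0)
    return 0.25 * sum((-1) ** k * (q(r[2 * k]) - q(r[2 * k + 1]))
                      for k in range(2))

def null_ratio(r):
    """Equation 19: Ratio-method null, with R^4 = (r1/r2)/(r3/r4)."""
    R = ((r[0] / r[1]) * (r[3] / r[2])) ** 0.25
    return (R - 1.0) / (R + 1.0)

# Identical ratios within each exposure pair -> no spurious signal:
balanced = [1.02, 1.02, 0.98, 0.98]
assert abs(null_difference(balanced)) < 1e-12
assert abs(null_ratio(balanced)) < 1e-12
```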
For general scientific applications with SPIRou, this continuum polarization is actually spurious as it reflects small differences in the injection between beams, and must therefore be fitted and removed. This step is mandatory before performing measurements in spectral lines. apero applies an iterative sigma-clip algorithm to fit either a polynomial or a spline to model the continuum polarization. ### 10.2 Least-Squares Deconvolution The least-squares deconvolution method (LSD) is an efficient technique that combines the signal from thousands of spectral lines, assuming they share the same line profile, to obtain a mean velocity profile for the intensity, polarization, and null spectra. A common application of this technique concerns the measurement of Zeeman splitting in Stokes V (circularly polarized) profiles. The Zeeman effect is a physical process whereby, in the presence of a magnetic field, the main energy level of an electronic transition is split into additional levels, forming a double line in the intensity spectrum. An interesting feature of these lines is that they are circularly polarized and their polarizations have opposite signs. Therefore, by observing the circularly polarized spectrum one can obtain a characteristic Stokes V profile that provides a way to detect and characterize the magnetism in stellar photospheres with great sensitivity. apero implements the LSD calculations using the formalism introduced by Donati et al. (1997), summarized as follows. Let us first consider the weight of a given spectral line $i$, $w_{i}=g_{i}\lambda_{i}d_{i}$, where $g$ is the Landé factor (magnetic sensitivity), $\lambda$ is the central wavelength, and $d$ is the line depth. Then one can construct the line pattern function, $M(v)=\sum_{i=1}^{N_{l}}{w_{i}\delta(v-v_{i})},$ (22) where $N_{l}$ is the number of spectral lines considered in the analysis, $\delta$ is the Dirac function, and $v$ is the velocity. 
The transformation from wavelength ($\lambda$) to velocity space is performed by the relation $dv/d\lambda=c/\lambda$, where $c$ is the speed of light. The LSD profile is calculated by the following matrix equation: $\rm{\bf Z}=\left(\rm{\bf M}^{t}.\rm{\bf S}^{2}.\rm{\bf M}\right)^{-1}\rm{\bf M}^{t}.\rm{\bf S}^{2}.\rm{\bf P},$ (23) where $\rm{\bf P}$ is the polarimetric spectrum calculated from Equation 16 or 17, and $\rm{\bf S}$ is a diagonal weight matrix whose diagonal elements are given by $S_{jj}=1/\sigma_{j}$, with $\sigma_{j}$ being the uncertainty in the polarimetric spectrum calculated from Equation 20 or 21. Note that one can also calculate the null polarization LSD profile by substituting the polarimetric spectrum $\rm{\bf P}$ with the null spectrum $\rm{\bf N}$ in Equation 23. The intensity LSD is also possible, by using the flux spectrum $\rm{\bf F}$, but in this case the line weight in Equation 22 is simply given by the line depth, i.e., $w_{i}=d_{i}$. In practice, LSD requires a few important steps to be executed by apero. First, each individual spectrum is cleaned using a sigma-clip rejection algorithm to minimize the impact of outliers in the LSD profile. Then we set a grid of velocities to calculate the LSD profile, where the grid is defined by the following parameters: an initial velocity, $v_{0}$, a final velocity, $v_{f}$, and the total number of points in the grid, $N_{v}$. Next, a fast and accurate method is necessary to project the spectral values onto the velocity grid. Finally, an appropriate catalog of spectral lines (line mask) needs to be adopted for the LSD calculations. apero selects the line mask from a repository of masks, where the selection is based on the proximity to the effective temperature of the star observed. 
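The weighted least-squares solve of Equation 23 can be illustrated with a tiny synthetic example in pure Python: here each spectral pixel sees exactly one velocity bin of one line, so the line-pattern matrix has a single weighted entry per row, and solving the normal equations recovers the common profile exactly in the noise-free case (a sketch with hypothetical toy data, not apero's implementation):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for A z = b."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    z = [0.0] * n
    for i in range(n - 1, -1, -1):
        z[i] = (M[i][n] - sum(M[i][j] * z[j] for j in range(i + 1, n))) / M[i][i]
    return z

def lsd(Mpat, P, sig):
    """Z = (M^T S^2 M)^-1 M^T S^2 P with S_jj = 1/sigma_j (Equation 23)."""
    n_v, n_pix = len(Mpat[0]), len(P)
    w = [1.0 / s ** 2 for s in sig]
    A = [[sum(w[j] * Mpat[j][a] * Mpat[j][b] for j in range(n_pix))
          for b in range(n_v)] for a in range(n_v)]
    rhs = [sum(w[j] * Mpat[j][a] * P[j] for j in range(n_pix))
           for a in range(n_v)]
    return solve(A, rhs)

# Two lines with weights 1.0 and 0.5, each sampled on 5 velocity bins.
Z_true = [0.0, 0.1, 0.3, 0.1, 0.0]
Mpat, P, sig = [], [], []
for w_line in (1.0, 0.5):
    for m in range(5):
        row = [0.0] * 5
        row[m] = w_line               # delta function scaled by line weight
        Mpat.append(row)
        P.append(w_line * Z_true[m])  # noise-free polarization signal
        sig.append(0.01)
Z = lsd(Mpat, P, sig)
assert max(abs(a - b) for a, b in zip(Z, Z_true)) < 1e-9
```

With real spectra the rows of the pattern matrix overlap between lines and the data are noisy, which is exactly why the weighted least-squares combination gains signal-to-noise over any single line.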
The apero masks are computed using the VALD catalog (Piskunov et al., 1995) and a MARCS model atmosphere (Gustafsson et al., 2008) with an effective temperature ranging from 2500 to 5000 K in steps of 500 K, and the same surface gravity of $\log g=5.0$ dex. The lines that are effectively used in the LSD analysis are selected with line depths above a given threshold, which is set to 3% by default and with a Landé factor of $g_{\rm eff}>0$, resulting in a total of approximately 2500 atomic lines that cover the full spectral range of SPIRou. Figure 27 shows an example of an LSD analysis performed on a 4-exposure Stokes-V sequence of the bright Ap star Gamma Equulei, which has a strong magnetic field (e.g., Bychkov et al., 2006) and therefore shows an obvious Zeeman feature in the SPIRou data. Figure 27: LSD analysis performed on the polarimetric data reduced with APERO and obtained from a 4-exposure Stokes-V sequence of the bright Ap star Gamma Equulei. Panels from top to bottom show Stokes I, Stokes V, and null profiles. In practice, the LSD analysis is not computed in a standard automated run of apero but the module is supplied and can be activated with the use of a single keyword in the apero profiles or run after processing. Figure 28: Polarimetric sequence ## 11 Post processing The final data products that go to PIs are composite files of many of the outputs of apero. For SPIRou, these are sent to the Canadian Astronomy Data Centre (CADC, accessible from https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/) but are only produced for science targets and hot stars (i.e., obj_fp, obj_dark, polar_fp, and polar_dark) and not for calibrations by default. There are currently five post-processing files each linked to a single odometer code. 
These are the 2D extracted output (e.fits, Section 11.1), the 2D telluric corrected output (t.fits, Section 11.2), the 1D output (s.fits, Section 11.3), the velocity output (v.fits, Section 11.4), and the polarimetric output (p.fits, Section 11.5). A summary of the CADC output files is available in table 4 and the post-process sequence is shown in Figure 29.

File | Description
---|---
(odometer)e.fits | 2D extracted spectrum for fibers AB, A, B, C, wavelength solution, and blaze
(odometer)s.fits | 1D extracted spectrum for fibers AB, A, B, C, and telluric corrected spectrum if available
(odometer)t.fits | 2D telluric corrected spectrum for fibers AB, A, B, wavelength solution, blaze, and reconstructed atmospheric transmission
(odometer)v.fits | Combined and per-order CCFs for fitting the radial velocity of the star
(odometer)p.fits | Polarimetric products (polarimetric flux, Stokes I, null vectors, wavelength solution, and blaze)

Table 4: Science-ready outputs sent to the Canadian Astronomy Data Centre (CADC).

### 11.1 2D extraction product (e.fits)

These are the combined extracted products. All extensions are two-dimensional spectra of size $4088\times 49$. The ‘e.fits’ file contains the extracted spectrum for each order and each fiber and the matching wavelength and blaze solution for each order and each fiber. The files are identified with a single odometer code generated at the time of observation followed by an ‘e.fits’ suffix.

### 11.2 2D telluric corrected product (t.fits)

These are the combined telluric-corrected products. All extensions are two-dimensional spectra of size $4088\times 49$. The ‘t.fits’ file contains the telluric corrected spectrum for each order and each fiber and the matching wavelength and blaze solution for each order and each fiber. The files are identified with a single odometer code at the time of observation followed by a ‘t.fits’ suffix.
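As a sketch of how such a 2D product can be consumed downstream, the helper below reads flux, wavelength, and blaze arrays with astropy. The extension names (`FluxAB`, `WaveAB`, `BlazeAB`) are illustrative assumptions, not the documented apero layout; inspect a real file with `fits.info(path)` to see the actual extensions.

```python
from astropy.io import fits


def read_2d_product(path, fiber="AB"):
    """Return (flux, wave, blaze) arrays of shape (49, 4088) for one fiber.

    NOTE: the extension names used here are assumptions for illustration;
    check the actual file layout with fits.info(path) before relying on them.
    """
    flux = fits.getdata(path, extname="Flux" + fiber)
    wave = fits.getdata(path, extname="Wave" + fiber)
    blaze = fits.getdata(path, extname="Blaze" + fiber)
    return flux, wave, blaze
```

Dividing `flux` by `blaze` order by order then gives a continuum-flattened spectrum on the `wave` grid.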
### 11.3 1D extraction and 1D telluric corrected product (s.fits)

These are the combined 1D spectrum products and consist of two tables, giving the 1D spectrum in 1. velocity units and 2. wavelength units. Each table consists of the following columns: the wavelength solution, the extracted flux in fibers AB, A, B, and C, the telluric corrected flux in fibers AB, A, and B (if available), and the associated uncertainties for each flux column. The files are identified with a single odometer code at the time of observation followed by an ‘s.fits’ suffix.

### 11.4 Velocity product (v.fits)

The velocity products are packaged into the ‘v.fits’ file. Currently, only the CCF values (Section 9.1) are added as an extension, as the LBL products are computed separately. The CCF file consists of the CCF generated for each radial velocity element (by default between $\pm$300 km s$^{-1}$ in steps of 0.5 km s$^{-1}$) for each order and a combined CCF for the same radial velocity elements. The files are identified with a single odometer code at the time of observation followed by a ‘v.fits’ suffix. Once the LBL module can be used within apero, it will add an extension to the ‘v.fits’ file (the ‘rdb’ extension described in the LBL documentation${}^{\ref{footnote:lbl_docs}}$).

Figure 29: Post-process sequence

### 11.5 Polarimetric product (p.fits)

These are the combined polarimetric products. The ‘p.fits’ file consists of eight image extensions and three table extensions. The first two tables are the 1D representations of the 2D polarimetric products (listed in the extensions above) in 1. velocity units and 2. wavelength units. Each consists of the following columns: the wavelength solution, the polarimetric flux, the Stokes I flux, the Null 1 and 2 fluxes, and the associated uncertainties on each flux column. The third table lists the configuration parameters used to run apero.
Although polarimetric products are the combination of at least 4 odometer codes, files are associated with a single odometer code (the first in the sequence at the time of observation) followed by a ‘p.fits’ suffix.

## 12 Discussion

apero has been an ongoing effort since its conception in October 2017 (see Appendix D and table D1). Following the first light of SPIRou in April 2018, it took nearly 2 years for apero to start producing precise science results, and it has been used in publications since 2020 (see Appendix E and table E1). Here we discuss current performances and limitations and planned future work.

### 12.1 Performance and limitations

Throughout development, we have tried to optimize the speed of all recipes (e.g., through the use of SQL databases, numba, bottleneck, etc.). With the most current version (v0.7.256), using 35 CPU cores (on a single node), we can reduce all available data (all SPIRou legacy survey data and all PI data to which we have access, covering $\sim$430 nights and $\sim$45000 science observations) in 7 days; table 5 shows a breakdown of the various steps using a 35-core machine. These timings are for all data we have access to, equivalent to $\sim$90% of all data taken with SPIRou between April 2018 and June 2022.
Sequence | Recipes | Number | Time taken | Efficiency
---|---|---|---|---
Pre-processing | apero_preprocess | 75761 files | 34.5 hours | 0.89
Reference Calibrations | apero_dark_ref, apero_badpix, apero_loc_spirou, apero_shape_ref, apero_shape, apero_flat, apero_thermal, apero_leak_ref, apero_wave_ref | 1 night | 1.5 hours | -
Nightly Calibrations | apero_badpix, apero_loc_spirou, apero_shape, apero_flat, apero_thermal, apero_wave_night | 432 nights | 7.4 hours | 0.87
Extraction | apero_extract | 46836 files | 63.3 hours | 0.63
Telluric (hot star) | apero_mk_tellu, apero_mk_model, apero_fit_tellu, apero_mk_template | 1043 files | 1.1 hours | 0.70
Telluric (science) | apero_fit_tellu, apero_mk_template | 45524 files | 34.6 hours | 0.75
CCF | apero_ccf | 45524 files | 7.0 hours | 0.89
Polarimetry | apero_polar | 9880 groups | 2.1 hours | 0.90
Post-process | apero_processing | 48472 files | 18.4 hours | 0.93
Total | | | 170.1 hours (7.1 days) | 0.76

Table 5: Example full run reducing all SPIRou legacy survey (and some PI) data from April 2018 to April 2022. This was processed on one machine using 35 cores. Note preprocessing is done on both science and calibration observations, reference calibrations are only run on a single night, and each nightly calibration depends on the availability of specific calibrations, thus leading to a range of nights (from 366 to 401). Polarimetric groups consist of 4 individual exposures in different rhomb positions and are only processed for polar_fp and polar_dark files. The number of specific steps may depend on previous steps (i.e., quality control failures, engineering nights that were excluded, and odometer codes that were present in a list to not be processed). The efficiency is defined in Equation 24.
We find that with 35 CPU cores we reach a point where we start to see input-output bottlenecks, most probably caused by writing to disk and/or writing to the various databases. This manifests as an individual slowdown of each recipe run, which limits the efficiency of using more CPU cores (i.e., more recipes running at the same time mean more files being written to disk and more writing to the various databases at the same time, causing queuing to occur). We define efficiency in table 5 as the total CPU time divided by the total time taken to run multiplied by the number of cores (Equation 24). A perfectly efficient code would give a value of 1. $\text{Efficiency}=\frac{\text{total CPU time}}{\text{total time taken}\times N_{cores}}$ (24) We see that on average we have an efficiency between 0.7 and 0.93. We find that recipes that run quickly but save several files have lower efficiency, i.e., a bottleneck of writing to disk may be occurring; however, these recipes also run quickly and write frequently to the various databases, so it is hard to distinguish between the disk and database bottlenecks. Also, recipes that run slowly are rated as more efficient due to the amount of time spent using the CPUs (i.e., science algorithms) relative to reading/writing to and from disk/databases, so our metric is far from perfect. Another factor is other processes using the machine at that specific time, which cannot easily be taken into account when measuring how efficient we are. We will continue to review the performances, speed up the science algorithms, and find ways to make the individual recipes faster. Currently, apero is optimized to run on a single node (i.e., a single machine) with access to many CPU cores on this single node.
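Equation 24 reduces to a one-line helper. The sanity check below uses the Extraction row of Table 5 (63.3 wall-clock hours on 35 cores at efficiency 0.63), working backwards to the implied CPU-hours.

```python
def efficiency(total_cpu_time, total_time_taken, n_cores):
    """Equation 24: fraction of the available core-time actually used."""
    return total_cpu_time / (total_time_taken * n_cores)


# Extraction row of Table 5: 63.3 h wall clock on 35 cores at efficiency 0.63
# implies roughly 0.63 * 63.3 * 35 ~ 1400 CPU-hours of actual compute.
cpu_hours = 0.63 * 63.3 * 35
eff = efficiency(cpu_hours, 63.3, 35)
```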
It is possible to run batches of apero in a manual way using multiple runs of apero_processing and controlling (manually) when to run the next step (i.e., making sure all pre-processing is done before reference calibrations, that all reference calibrations are done before any night calibrations, etc.). One can also run recipes one-by-one, bypassing the use of the apero_processing recipe completely. apero is optimized to reduce a full data set; this implies many terabytes of raw and reduced data. Currently, to reduce a single night of data one requires at the very least a full reference calibration night (i.e., the calibration data from 2020-08-31), the full set of calibrations for the night to be reduced (and preferably a few nights surrounding it in case any calibrations fail quality control), and the full telluric database of hot stars (from every night processed). This obviously means a large amount of data is required for even a single night or single observation. We plan to release a full calibration database and full telluric database of hot stars with a way to get and install this data, in order to allow any user to reduce a small set of data. We do, however, always recommend using data reduced with a full data set done in a uniform way. This is currently available at data centers at CFHT, Université de Montréal, and the Laboratoire d’Astrophysique de Marseille.

### 12.2 Future work

As with most pipelines, improvements are always ongoing. With the LBL RV analysis code, we have seen RV accuracy down to 2 $\text{ms}^{-1}$ with SPIRou, which is an indication that apero is at least this precise. However, there are several features we plan to add:

* The apero recipes do not currently propagate uncertainties throughout the data reduction process, which can be problematic when trying to understand the limiting factors in the data analysis.
Full error propagation is important as feedback to the engineering team; for example, quantifying the impact of the thermal background from the optical train on the measurement of $K$-band spectroscopic features such as the 2.29 $\mu$m CO band-head could justify efforts to cool parts of the optical train or not.

* In parallel to the propagation of uncertainties, we plan to propagate quality flags on pixel values, for example, whether a given invalid pixel is due to a cosmic ray hit or a hot pixel. In the current framework, pixels are either deemed valid or invalid and flagged as a nan, which does not allow one to back-trace the origin of missing data. This is done for JWST data products (see https://jwst-reffiles.stsci.edu/source/data_quality.html) with pixel-level data quality encoded with 32-bit integers.

* As the SPIRou fibers are multi-mode, one expects a certain level of noise to arise from the time-varying weight of modes injected in each of the science fibers. This is minimized through the use of a pupil slicer and fiber scrambler at the injection. As the pupil slicer image provides more information on the flux distribution at the fiber exit than a simple fiber (e.g., the bottom-center panel in Figure 19), one could decorrelate this spatially-resolved information against the modal noise.

* Persistence in infrared arrays is a non-trivial problem for faint-target observations (Artigau et al., 2018). For any given image, one sees a decaying remnant image of all previous observations with decreasing amplitude. The remnant amplitude of any given observation is proportional to the inverse of the delay since the last illumination. One notable feature of persistence in infrared arrays is that it is not only the previous frame that matters but the entire history of illumination over the last few hours.
A bright target observed for a long time at the beginning of a night (a common example being a multi-hour sequence to monitor a transiting planet) will affect all fainter targets observed later during the night. To add further complexity to the matter, the persistence response varies at the pixel-to-pixel level. Work has begun to construct a persistence model for SPIRou. Furthermore, the algorithms need to be run at the observatory level, as data obtained earlier in the night may be proprietary and not accessible to all apero users.

* The main limitation for faint-object observations with SPIRou or any near-infrared pRV spectrograph, particularly blueward of the $K$ band, is detector readout noise. As has been demonstrated by Payeur et al. (2022), machine-learning algorithms can reduce readout noise in long sequences (100 readouts of $\sim$10 minutes) from $\sim$6 e- to $<2$ e- (see Table 2 therein). This needs to be performed before the current apero steps, as it requires handling the data cubes rather than the 2-D images used here. As long as the output format is maintained, the machine-learning images should be usable as inputs to apero.

* Energetic particles regularly hit infrared arrays and deposit electrons in pixels, leading to spurious signals. These hits happen essentially instantaneously (on the time scales relevant to the readout) and manifest themselves as discontinuities in the time series of non-destructive readouts. Efficient algorithms have been proposed to handle cosmic ray hits in ramp-fitting frameworks (Anderson & Gordon, 2011) but have yet to be implemented for SPIRou.

* The LBL recipes have been designed to use apero byproducts, but they have not yet been implemented within the automated apero framework. Steps that are currently done manually, such as the association of an appropriate stellar template, will be included within apero in the near future.
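To make the inverse-delay persistence behaviour described above concrete, here is a schematic toy model of the illumination-history effect. It is purely illustrative: the constant `k` and the exact functional form are assumptions for the sketch, not the SPIRou persistence model under construction.

```python
def persistence_signal(history, t_now, k=1e-4):
    """Toy remnant estimate: each past exposure (t_end, fluence) contributes
    an amplitude proportional to the inverse of the delay since it ended,
    and the whole illumination history is summed."""
    return sum(k * fluence / (t_now - t_end)
               for t_end, fluence in history if t_now > t_end)


# A bright multi-hour transit sequence early in the night (ending at hours
# 2 and 3) still leaks into faint targets observed hours later, with the
# remnant decaying through the night.
history = [(2.0, 1.0e6), (3.0, 1.0e6)]   # (end time [h], fluence [arb. units])
early = persistence_signal(history, t_now=4.0)
late = persistence_signal(history, t_now=8.0)
```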
There are also various planned improvements: to the optimal extraction (better characterization of extraction weights), to the database architecture (currently throttling the maximum number of connections with a large number of cores), fixing some minor memory leaks when parallel processing, handling the thermal contribution at bluer wavelength domains, and completing all documentation${}^{\ref{footnote:apero_docs}}$.

## 13 Conclusion

We present apero (A PipelinE to Reduce Observations) and highlight its use as the official pipeline for SPIRou. We walk through the steps going from raw data to science-ready products. We detail the pre-processing of raw data to correct detector issues, the production of reference calibrations and nightly calibrations, and the use of these calibrations to correct and extract hot stars and science observations in a consistent, controlled manner. We summarize telluric correction (which will be detailed in a future publication, Artigau et al. in prep), RV analysis, polarimetric analysis, and our post-processing recipes delivering telluric corrected 2D and 1D spectra as well as polarimetry products and enabling precise stable radial velocity calculations (via the LBL algorithm, Artigau et al. 2022), good to at least $\sim 2$ $\text{ms}^{-1}$ over the timescale of the current lifetime of SPIRou (5 years).

We would like to thank the anonymous referee for the valuable comments that improved the quality of the paper. The authors wish to thank everyone involved with writing, maintaining, and updating all Python packages. Specifically, apero has made extensive use of: astropy (Astropy Collaboration et al., 2013, 2018), astroquery (Ginsburg et al., 2019), barycorrpy (Kanodia & Wright, 2018; Wright & Kanodia, 2020), matplotlib (Hunter, J. D., 2007), numpy (Harris et al., 2020), pandas (McKinney, 2010, 2011), and scipy (Virtanen et al., 2020), as well as the python packages bottleneck (Goodman & et al., 2019), gitchangelog (Lab, 2018), ipdb (Chapelle, 2021), IPython (Pérez & Granger, 2007), mysql-connector-python (Mariz, 2021), numba (Lam et al., 2015), pandastable (Farrell, 2016), Pillow (Murray, 2021), pyyaml (Simonov, 2021), sphinx (Komiya & Brandl, 2021), sqlalchemy (Bayer, 2021), Scikit-learn (Pedregosa et al., 2011), tqdm (da Costa-Luis et al., 2021), yagmail (van Kooten, 2021), and xlrd (Withers, 2021). This research made use of ds9, a tool for data visualization supported by the Chandra X-ray Science Center (CXC) and the High Energy Astrophysics Science Archive Center (HEASARC) with support from the JWST Mission office at the Space Telescope Science Institute for 3D visualization. This research made use of TOPCAT, an interactive graphical viewer and editor for tabular data (Taylor, 2005). apero would have been impossible without the use of PyCharm, Git, and GitHub. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. apero has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of NASA’s Astrophysics Data System. apero has made use of the VizieR catalog access tool, CDS, Strasbourg, France. The acknowledgments were compiled using the Astronomy Acknowledgment Generator. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of MaunaKea has always had within the indigenous Hawaiian community.
We are most fortunate to have the opportunity to conduct observations from this mountain. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada and the Fonds Québécois de Recherche - Nature et Technologies. Observatoire du Mont-Mégantic and the Institute for Research on Exoplanets acknowledge funding from Développement Économique Canada, Quebec’s Ministère de l’Éducation et de l’Innovation, the Trottier Family Foundation and the Canadian Space Agency. M.H. and I.B. acknowledge support from ANID – Millennium Science Initiative – ICN12_009. C.M., A.C., P.F., X.D., I.B., and J.F.D. acknowledge funding from the French ANR under contract number ANR18CE310019 (SPlaSH) and the Programme National de Planétologie (PNP). This work is supported by the French National Research Agency in the framework of the Investissements d’Avenir program (ANR-15-IDEX-02), through the funding of the “Origin of Life” project of the Grenoble-Alpes University. J.F.D. acknowledges funding from the European Research Council (ERC) under the H2020 research & innovation programme (grant agreement #740651 NewWorlds). T.V. would like to acknowledge funding from the Fonds de Recherche du Québec - Nature et Technologies (FRQNT, scholarship number 320056), and the Institute for Research on Exoplanets (iREx).

## References

* Aceituno et al. (2013) Aceituno, J., Sánchez, S. F., Grupp, F., et al. 2013, A&A, 552, A31, doi: 10.1051/0004-6361/201220361 * Allart et al. (2022) Allart, R., Lovis, C., Faria, J., et al. 2022, arXiv e-prints, arXiv:2209.01296. https://arxiv.org/abs/2209.01296 * Anderson & Gordon (2011) Anderson, R. E., & Gordon, K. D. 2011, Publications of the Astronomical Society of the Pacific, 123, 1237, doi: 10.1086/662593 * Artigau et al. (2018) Artigau, É., Saint-Antoine, J., Lévesque, P.-L., et al. 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol.
10709, High Energy, Optical, and Infrared Detectors for Astronomy VIII, ed. A. D. Holland & J. Beletic, 107091P, doi: 10.1117/12.2314475 * Artigau et al. (2014) Artigau, É., Astudillo-Defru, N., Delfosse, X., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9149, Observatory Operations: Strategies, Processes, and Systems V, ed. A. B. Peck, C. R. Benn, & R. L. Seaman, 914905, doi: 10.1117/12.2056385 * Artigau et al. (2021) Artigau, É., Hébrard, G., Cadieux, C., et al. 2021, AJ, 162, 144, doi: 10.3847/1538-3881/ac096d * Artigau et al. (2022) Artigau, É., Cadieux, C., Cook, N. J., et al. 2022, The Astronomical Journal, 164, 84, doi: 10.3847/1538-3881/ac7ce6 * Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068 * Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f * Bagnulo et al. (2009) Bagnulo, S., Landolfi, M., Landstreet, J. D., et al. 2009, PASP, 121, 993, doi: 10.1086/605654 * Bayer (2021) Bayer, M. e. a. 2021, sqlalchemy, https://github.com/sqlalchemy/sqlalchemy, GitHub * Bedell et al. (2019) Bedell, M., Hogg, D. W., Foreman-Mackey, D., Montet, B. T., & Luger, R. 2019, AJ, 158, 164, doi: 10.3847/1538-3881/ab40a7 * Bertaux et al. (2014) Bertaux, J. L., Lallement, R., Ferron, S., Boonne, C., & Bodichon, R. 2014, A&A, 564, A46, doi: 10.1051/0004-6361/201322383 * Boisse et al. (2016) Boisse, I., Perruchot, S., Bouchy, F., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9908, Ground-based and Airborne Instrumentation for Astronomy VI, ed. C. J. Evans, L. Simard, & H. Takami, 990868, doi: 10.1117/12.2231678 * Borgman & Wofford (2021) Borgman, C. L., & Wofford, M. F. 2021, arXiv e-prints, arXiv:2109.01707. https://arxiv.org/abs/2109.01707 * Boucher et al. 
(2021) Boucher, A., Darveau-Bernier, A., Pelletier, S., et al. 2021, AJ, 162, 233, doi: 10.3847/1538-3881/ac1f8e * Bouchy et al. (2001) Bouchy, F., Pepe, F., & Queloz, D. 2001, A&A, 374, 733, doi: 10.1051/0004-6361:20010730 * Bouchy et al. (2009) Bouchy, F., Hébrard, G., Udry, S., et al. 2009, A&A, 505, 853, doi: 10.1051/0004-6361/200912427 * Bouchy et al. (2011) Bouchy, F., Hébrard, G., Delfosse, X., et al. 2011, in EPSC-DPS Joint Meeting 2011, Vol. 2011, 240 * Bychkov et al. (2006) Bychkov, V. D., Bychkova, L. V., & Madej, J. 2006, MNRAS, 365, 585, doi: 10.1111/j.1365-2966.2005.09738.x * Caballero et al. (2016) Caballero, J. A., Guàrdia, J., López del Fresno, M., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9910, Observatory Operations: Strategies, Processes, and Systems VI, ed. A. B. Peck, R. L. Seaman, & C. R. Benn, 99100E, doi: 10.1117/12.2233574 * Cadieux et al. (2022) Cadieux, C., Doyon, R., Plotnykov, M., et al. 2022, The Astronomical Journal, 164, 96, doi: 10.3847/1538-3881/ac7cea * Chapelle (2021) Chapelle, G. e. a. 2021, ipdb, https://github.com/gotcha/ipdb, GitHub * Chené et al. (2021) Chené, A.-N., Mao, S., Lundquist, M., et al. 2021, AJ, 161, 109, doi: 10.3847/1538-3881/abd411 * Collins et al. (2017) Collins, K. A., Kielkopf, J. F., Stassun, K. G., & Hessman, F. V. 2017, The Astronomical Journal, 153, 77, doi: 10.3847/1538-3881/153/2/77 * Cosentino et al. (2012) Cosentino, R., Lovis, C., Pepe, F., et al. 2012, in Ground-based and Airborne Instrumentation for Astronomy IV, ed. I. S. McLean, S. K. Ramsay, & H. Takami, Vol. 8446, International Society for Optics and Photonics (SPIE), 657 – 676, doi: 10.1117/12.925738 * Cristofari et al. (2022a) Cristofari, P. I., Donati, J. F., Masseron, T., et al. 2022a, MNRAS, 511, 1893, doi: 10.1093/mnras/stab3679 * Cristofari et al. (2022b) —. 2022b, arXiv e-prints, arXiv:2208.10340. https://arxiv.org/abs/2208.10340 * Cushing et al. (2004) Cushing, M. C., Vacca, W. 
D., & Rayner, J. T. 2004, PASP, 116, 362, doi: 10.1086/382907 * da Costa-Luis et al. (2021) da Costa-Luis, C., Larroque, S. K., Altendorf, K., et al. 2021, tqdm: A fast, Extensible Progress Bar for Python and CLI, v4.62.3, Zenodo, doi: 10.5281/zenodo.5517697 * Donati (2003) Donati, J. F. 2003, in Astronomical Society of the Pacific Conference Series, Vol. 307, Solar Polarization, ed. J. Trujillo-Bueno & J. Sanchez Almeida, 41 * Donati et al. (1997) Donati, J. F., Semel, M., Carter, B. D., Rees, D. E., & Collier Cameron, A. 1997, MNRAS, 291, 658, doi: 10.1093/mnras/291.4.658 * Donati et al. (2018) Donati, J.-F., Kouach, D., Lacombe, M., et al. 2018, SPIRou: A NIR Spectropolarimeter/High-Precision Velocimeter for the CFHT (Springer International Publishing), 107, doi: 10.1007/978-3-319-55333-7_107 * Donati et al. (2020) Donati, J. F., Kouach, D., Moutou, C., et al. 2020, MNRAS, 498, 5684, doi: 10.1093/mnras/staa2569 * Dumusque, X. (2018) Dumusque, X. 2018, A&A, 620, A47, doi: 10.1051/0004-6361/201833795 * Earl et al. (2022) Earl, N., Tollerud, E., Jones, C., et al. 2022, Zenodo, doi: 10.5281/zenodo.6207491 * Errmann et al. (2020) Errmann, R., Cook, N. J., Anglada-Escudé, G., et al. 2020, Publications of the Astronomical Society of the Pacific, 132, 064504, doi: 10.1088/1538-3873/ab8783 * Farrell (2016) Farrell, D. 2016, Journal of Open Research Software, 4, doi: 10.5334/jors.94 * Fiorio & Gustedt (1996) Fiorio, C., & Gustedt, J. 1996, Theoretical Computer Science, 154, 165, doi: https://doi.org/10.1016/0304-3975(94)00262-2 * Follert et al. (2014) Follert, R., Dorn, R. J., Oliva, E., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V, ed. S. K. Ramsay, I. S. McLean, & H. Takami, 914719, doi: 10.1117/12.2054197 * Gaia Collaboration et al. (2016) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 
2016, A&A, 595, A2, doi: 10.1051/0004-6361/201629512 * Gaia Collaboration et al. (2018) —. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051 * Gaia Collaboration et al. (2021) —. 2021, A&A, 649, A1, doi: 10.1051/0004-6361/202039657 * Gan et al. (2022a) Gan, T., Soubkiou, A., Wang, S. X., et al. 2022a, MNRAS, 514, 4120, doi: 10.1093/mnras/stac1448 * Gan et al. (2022b) Gan, T., Lin, Z., Wang, S. X., et al. 2022b, MNRAS, 511, 83, doi: 10.1093/mnras/stab3708 * Giardino et al. (2019) Giardino, G., Birkmann, S., Robberto, M., et al. 2019, PASP, 131, 094503, doi: 10.1088/1538-3873/ab2fd6 * Ginsburg et al. (2019) Ginsburg, A., Sipőcz, B. M., Brasseur, C. E., et al. 2019, AJ, 157, 98, doi: 10.3847/1538-3881/aafc33 * Goldoni et al. (2006) Goldoni, P., Royer, F., François, P., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6269, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. I. S. McLean & M. Iye, 62692K, doi: 10.1117/12.669986 * Gonzalez & Woods (2008) Gonzalez, R. C., & Woods, R. E. 2008, Digital image processing (Upper Saddle River, N.J.: Prentice Hall). http://www.amazon.com/Digital-Image-Processing-3rd-Edition/dp/013168728X * Goodman & et al. (2019) Goodman, K. W., & et al., W. C. 2019, Bottleneck, https://github.com/pydata/bottleneck, GitHub * Gustafsson et al. (2008) Gustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, A&A, 486, 951, doi: 10.1051/0004-6361:200809724 * Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2 * Hobson et al. (2021) Hobson, M. J., Bouchy, F., Cook, N. J., et al. 2021, A&A, 648, A48, doi: 10.1051/0004-6361/202038413 * Hodapp et al. (2019) Hodapp, K. W., Hall, D., Jacobson, S., et al. 2019, in American Astronomical Society Meeting Abstracts, Vol. 234, American Astronomical Society Meeting Abstracts #234, 103.02 * Hodapp et al. (1996) Hodapp, K. W., Hora, J. L., Hall, D. 
N. B., et al. 1996, New A, 1, 177, doi: 10.1016/S1384-1076(96)00013-9 * Horne (1986) Horne, K. 1986, PASP, 98, 609, doi: 10.1086/131801 * Hunter, J. D. (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90 * Kanodia & Wright (2018) Kanodia, S., & Wright, J. 2018, Research Notes of the AAS, 2, 4, doi: 10.3847/2515-5172/aaa4b7 * Komiya & Brandl (2021) Komiya, T., & Brandl, G. e. a. 2021, sphinx, https://github.com/sphinx-doc/sphinx, GitHub * Kotani et al. (2018) Kotani, T., Tamura, M., Nishikawa, J., et al. 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10702, Ground-based and Airborne Instrumentation for Astronomy VII, ed. C. J. Evans, L. Simard, & H. Takami, 1070211, doi: 10.1117/12.2311836 * Lab (2018) Lab, V. e. a. 2018, gitchangelog, https://github.com/vaab/gitchangelog, GitHub * Labrie et al. (2019) Labrie, K., Anderson, K., Cárdenes, R., Simpson, C., & Turner, J. E. H. 2019, in Astronomical Society of the Pacific Conference Series, Vol. 523, Astronomical Data Analysis Software and Systems XXVII, ed. P. J. Teuben, M. W. Pound, B. A. Thomas, & E. M. Warner, 321 * Lam et al. (2015) Lam, S. K., Pitrou, A., & Seibert, S. 2015, in Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, 1–6 * Lillo-Box et al. (2020) Lillo-Box, J., Aceituno, J., Pedraz, S., et al. 2020, MNRAS, 491, 4496, doi: 10.1093/mnras/stz3283 * Mahadevan et al. (2010) Mahadevan, S., Ramsey, L., Wright, J., et al. 2010, Proceedings of SPIE - The International Society for Optical Engineering, 7735, doi: 10.1117/12.857551 * Mahadevan et al. (2012) Mahadevan, S., Ramsey, L., Bender, C., et al. 2012, in Ground-based and Airborne Instrumentation for Astronomy IV, ed. I. S. McLean, S. K. Ramsay, & H. Takami, Vol. 8446, International Society for Optics and Photonics (SPIE), 624 – 637, doi: 10.1117/12.926102 * Marconi et al. (2021) Marconi, A., Abreu, M., Adibekyan, V., et al. 
2021, The Messenger, 182, 27, doi: 10.18727/0722-6691/5219 * Mariz (2021) Mariz, N. e. a. 2021, mysql-connector-python, https://github.com/mysql/mysql-connector-python, GitHub * Martioli et al. (2012) Martioli, E., Teeple, D., Manset, N., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8451, Software and Cyberinfrastructure for Astronomy II, ed. N. M. Radziwill & G. Chiozzi, 84512B, doi: 10.1117/12.926627 * Martioli et al. (2020) Martioli, E., Hébrard, G., Moutou, C., et al. 2020, A&A, 641, L1, doi: 10.1051/0004-6361/202038695 * Martioli et al. (2022) Martioli, E., Hébrard, G., Fouqué, P., et al. 2022, A&A, 660, A86, doi: 10.1051/0004-6361/202142540 * Mayor et al. (2003) Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20 * McCray (2014) McCray, W. 2014, Technology and Culture, 55, 908, doi: 10.1353/tech.2014.0102 * McCray (2004) McCray, W. P. 2004, Giant telescopes : astronomical ambition and the promise of technology (Harvard University Press) * McKay et al. (2004) McKay, D. J., Ballester, P., Banse, K., et al. 2004, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 5493, Optimizing Scientific Return for Astronomy through Information Technologies, ed. P. J. Quinn & A. Bridger, 444–452, doi: 10.1117/12.551214 * McKinney (2010) McKinney, W. 2010, in Proceedings of the 9th Python in Science Conference, Vol. 445, Austin, TX, 51–56 * McKinney (2011) McKinney, W. 2011, Python for High Performance and Scientific Computing, 14 * Micheau et al. (2018) Micheau, Y., Kouach, D., Donati, J.-F., et al. 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10702, Ground-based and Airborne Instrumentation for Astronomy VII, ed. C. J. Evans, L. Simard, & H. Takami, 107025R, doi: 10.1117/12.2305937 * Modigliani et al. (2010) Modigliani, A., Goldoni, P., Royer, F., et al. 
2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7737, Observatory Operations: Strategies, Processes, and Systems III, ed. D. R. Silva, A. B. Peck, & B. T. Soifer, 773728, doi: 10.1117/12.857211 * Moutou et al. (2020) Moutou, C., Dalal, S., Donati, J. F., et al. 2020, A&A, 642, A72, doi: 10.1051/0004-6361/202038108 * Murray (2021) Murray, A. e. a. 2021, Pillow, https://github.com/python-pillow/Pillow, GitHub * Oliva et al. (2006) Oliva, E., Origlia, L., Baffa, C., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6269, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. I. S. McLean & M. Iye, 626919, doi: 10.1117/12.670006 * Payeur et al. (2022) Payeur, G., Artigau, É., Levasseur, L. P., & Doyon, R. 2022, AJ, 163, 292, doi: 10.3847/1538-3881/ac69d2 * Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825 * Pelletier et al. (2021) Pelletier, S., Benneke, B., Darveau-Bernier, A., et al. 2021, AJ, 162, 73, doi: 10.3847/1538-3881/ac0428 * Pepe et al. (2002) Pepe, F., Mayor, M., Galland, F., et al. 2002, A&A, 388, 632, doi: 10.1051/0004-6361:20020433 * Pepe et al. (2021) Pepe, F., Cristiani, S., Rebolo, R., et al. 2021, A&A, 645, A96, doi: 10.1051/0004-6361/202038306 * Pérez & Granger (2007) Pérez, F., & Granger, B. E. 2007, Computing in Science and Engineering, 9, 21, doi: 10.1109/MCSE.2007.53 * Perryman et al. (1997) Perryman, M. A. C., Lindegren, L., Kovalevsky, J., et al. 1997, A&A, 323, L49 * Piskunov et al. (2021) Piskunov, N., Wehrhahn, A., & Marquart, T. 2021, A&A, 646, A32, doi: 10.1051/0004-6361/202038293 * Piskunov et al. (1995) Piskunov, N. E., Kupka, F., Ryabchikova, T. A., Weiss, W. W., & Jeffery, C. S. 1995, A&AS, 112, 525 * Piskunov & Valenti (2002) Piskunov, N. E., & Valenti, J. A. 2002, A&A, 385, 1095, doi: 10.1051/0004-6361:20020175 * Quirrenbach et al. 
(2014) Quirrenbach, A., Amado, P. J., Caballero, J. A., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V, ed. S. K. Ramsay, I. S. McLean, & H. Takami, 91471F, doi: 10.1117/12.2056453 * Rasband (2011) Rasband, W. S. 2011, http://imagej.nih.gov/ij/ * Rupprecht et al. (2004) Rupprecht, G., Pepe, F., Mayor, M., et al. 2004, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 5492, Ground-based Instrumentation for Astronomy, ed. A. F. M. Moorwood & M. Iye, 148–159, doi: 10.1117/12.551267 * Seemann et al. (2014) Seemann, U., Anglada-Escude, G., Baade, D., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9147, Ground-based and Airborne Instrumentation for Astronomy V, ed. S. K. Ramsay, I. S. McLean, & H. Takami, 91475G, doi: 10.1117/12.2056668 * Seifahrt et al. (2020) Seifahrt, A., Bean, J. L., Stürmer, J., et al. 2020, in Ground-based and Airborne Instrumentation for Astronomy VIII, ed. C. J. Evans, J. J. Bryant, & K. Motohara (SPIE), doi: 10.1117/12.2561564 * Simonov (2021) Simonov, K. e. a. 2021, pyyaml, https://github.com/yaml/pyyaml, GitHub * Smette et al. (2015) Smette, A., Sana, H., Noll, S., et al. 2015, A&A, 576, A77, doi: 10.1051/0004-6361/201423932 * Sousa et al. (2021) Sousa, A. P., Bouvier, J., Alencar, S. H. P., et al. 2021, A&A, 649, A68, doi: 10.1051/0004-6361/202140346 * Taylor (2005) Taylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. Shopbell, M. Britton, & R. Ebert, 29 * Tody (1986) Tody, D. 1986, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 627, Instrumentation in astronomy VI, ed. D. L. Crawford, 733, doi: 10.1117/12.968154 * Tran et al. (2016) Tran, H. D., Cohen, R., Colson, A., et al.
2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9910, Observatory Operations: Strategies, Processes, and Systems VI, ed. A. B. Peck, R. L. Seaman, & C. R. Benn, 99102E, doi: 10.1117/12.2230963 * van Kooten (2021) van Kooten, P. e. a. 2021, yagmail, https://github.com/kootenpv/yagmail, GitHub * Vernet et al. (2011) Vernet, J., Dekker, H., D’Odorico, S., et al. 2011, A&A, 536, A105, doi: 10.1051/0004-6361/201117752 * Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: https://doi.org/10.1038/s41592-019-0686-2 * Vogt et al. (1994) Vogt, S. S., Allen, S. L., Bigelow, B. C., et al. 1994, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 2198, Instrumentation in Astronomy VIII, ed. D. L. Crawford & E. R. Craine, 362, doi: 10.1117/12.176725 * Wenger et al. (2000) Wenger, M., Ochsenbein, F., Egret, D., et al. 2000, Astronomy and Astrophysics Supplement Series, 143, 9–22, doi: 10.1051/aas:2000332 * Wildi et al. (2017) Wildi, F., Blind, N., Reshetov, V., et al. 2017, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10400, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 1040018, doi: 10.1117/12.2275660 * Withers (2021) Withers, C. e. a. 2021, xlrd, https://github.com/python-excel/xlrd, GitHub * Wright & Eastman (2014) Wright, J. T., & Eastman, J. D. 2014, PASP, 126, 838, doi: 10.1086/678541 * Wright & Kanodia (2020) Wright, J. T., & Kanodia, S. 2020, The Planetary Science Journal, 1, 38, doi: 10.3847/PSJ/ababa4 * Wu et al. (2005) Wu, K., Otoo, E., & Shoshani, A. 2005, in Medical Imaging 2005: Image Processing, ed. J. M. Fitzpatrick & J. M. Reinhardt, Vol. 5747, International Society for Optics and Photonics (SPIE), 1965 – 1976, doi: 10.1117/12.596105 * Zacharias et al. (2013) Zacharias, N., Finch, C. T., Girard, T. M., et al. 
2013, AJ, 145, 44, doi: 10.1088/0004-6256/145/2/44 * Zandian et al. (2016) Zandian, M., Farris, M., McLevige, W., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9915, High Energy, Optical, and Infrared Detectors for Astronomy VII, ed. A. D. Holland & J. Beletic, 99150F, doi: 10.1117/12.2233664 ## Appendix A Creating a raw SPIRou ramp image The SPIRou detector control software reads the detector continuously every 5.57 s and produces a 2D image ($4096\times 4096$) constructed from the linear fit of the pixel value versus time (the fitted slope, along with the intercept, slope error, and number of frames used for quality checks). The construction of the 2D image from individual readouts is handled at the acquisition step; it is not part of apero but of software maintained and used at CFHT. The construction of the 2D frame is performed through the following steps. Individual detector frames are obtained from the detector control software every 5.57 s (at time $j$, $t[j]$). A flagging of pixel saturation is performed, and pixels with a non-linearity that would be larger than $\sim 10$ % are considered unreliable and rejected for all future readouts (binary mask $m[i,j]$ for pixel $i$ at time $j$). A non-linearity correction is applied to pixel fluxes (flux at pixel $i$ and at time $j$ is $f[i,j]$). As individual readouts arrive in computer memory, intermediate quantities necessary for the computation of the total pixel-level slope are computed. The advantage of preserving these quantities in memory is that one can perform a pixel-level ramp-fitting over an arbitrarily large number of frames without being required to have all pixel values in memory at the same time and without having to access files multiple times.
* • $\sigma_{x}[i]=\sum_{j}m[i,j]*f[i,j]$ * • $\sigma_{y}[i]=\sum_{j}m[i,j]*t[j]$ * • $\sigma_{xy}[i]=\sum_{j}m[i,j]*f[i,j]*t[j]$ * • $\sigma_{x^{2}}[i]=\sum_{j}m[i,j]*f[i,j]^{2}$ * • $n[i]=\sum_{j}m[i,j]$ Among these intermediate quantities, the only one with a clear physical interpretation is $n[i]$: it corresponds to the number of valid (i.e., below the predefined flux level for saturation) readouts obtained over the entire sequence. For normal scientific exposures, $n[i]$ is equal to the total number of readouts for the vast majority of pixels; pixels with a very large dark current are the exception. From simple linear algebra, one can show that the per-pixel intercept is $b[i]=\frac{\sigma_{x}[i]\sigma_{xy}[i]-\sigma_{x^{2}}[i]\sigma_{y}[i]}{\sigma_{x}[i]^{2}-n[i]\sigma_{x^{2}}[i]}$ (A1) and correspondingly, the per-pixel slope is: $a[i]=\frac{\sigma_{y}[i]-n[i]b[i]}{\sigma_{x}[i]}$ (A2) Once the slope image $a[i]$ has been computed, it is corrected for correlated amplifier noise using the side reference pixels (along the fast readout axis), and the amplifier offset is corrected using the top and bottom reference pixels (both extremities of the slow readout axis). As a quality check at the pixel level, we further compute the error on the slope to identify pixels having a suspiciously large dispersion of their values around the fit. This is used to flag pixels whose large slope is inconsistent with their frame-to-frame accumulation rate.
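The running-sum approach described above can be sketched as follows. This is a minimal illustration, not apero's or CFHT's actual code: the function name and the simplified saturation handling are ours, and for clarity the accumulators here fit flux versus time directly (playing the same role as the $\sigma$ sums above).

```python
import numpy as np

def ramp_fit(frames, times, saturation):
    """Per-pixel least-squares slope of flux vs. time from running sums.

    frames may be any iterable of 2D readouts, so only one frame needs
    to be held in memory at a time -- the point of the accumulators.
    A pixel is rejected from the first saturated readout onward.
    """
    st = sf = stf = st2 = n = valid = None
    for t, f in zip(times, frames):
        f = np.asarray(f, dtype=float)
        if st is None:
            st, sf, stf, st2, n = (np.zeros(f.shape) for _ in range(5))
            valid = np.ones(f.shape, dtype=bool)
        valid &= f < saturation   # once rejected, rejected for all later reads
        m = valid.astype(float)   # binary mask m[i, j]
        st += m * t               # sum of times
        sf += m * f               # sum of fluxes
        stf += m * t * f          # sum of time * flux
        st2 += m * t * t          # sum of squared times
        n += m                    # number of valid readouts
    slope = (n * stf - st * sf) / (n * st2 - st**2)
    intercept = (sf - slope * st) / n
    return slope, intercept, n
```

For a pixel accumulating 2 electrons per time unit on top of a 3-electron pedestal, the fit recovers slope 2 and intercept 3 regardless of how many frames are streamed through.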
To compute the slope error, one has to re-read all frames once (though it is not required to keep them all in memory at once) and compute the following values: * • $x_{p}[i]=\sigma_{x}[i]/n[i]$ * • $y_{p}[i,j]=b[i]+a[i]*t[j]$ * • $\varrho_{x^{2}}[i]=\sum_{j}(t[j]-x_{p}[i])^{2}m[i,j]$ * • $\varrho_{y^{2}}[i]=\sum_{j}(f[i,j]-y_{p}[i,j])^{2}m[i,j]$ From these, the slope error is $\varrho(i)=\sqrt{\frac{\varrho_{y^{2}}[i]/(n[i]-2)}{\varrho_{x^{2}}[i]}}$ (A3) ## Appendix B Standard image calibration After pre-processing (Section 4), the reference dark calculation (Section 5.1), and the bad pixel correction (Section 6.1), all images that are used in apero need to be calibrated in a standard way (using both the dark reference and bad pixel recipe outputs). This is not a separate recipe but a set of functions used in all recipes that take pre-processed files as inputs. The standard calibration is ordered as follows: 1. dark reference correction (Section B.1). 2. flip, resize, and re-scale the image (Section B.2). 3. flag bad pixels (Section B.3). 4. correct background flux (Section B.4). 5. clean hot pixels (Section B.5). 6. flag pixels that are out of bounds (Section B.6). ### B.1 Dark reference correction The first step of the standard calibration of pre-processed files is to correct the input image for the dark signal: $IM_{\text{corr}i,j}=IM_{\text{uncorr}i,j}-N(DARK_{i,j})$ (B1) where $IM_{\text{corr}i,j}$ and $IM_{\text{uncorr}i,j}$ are the flux in the $i^{th}$ row, $j^{th}$ column of the corrected image and uncorrected image respectively, $N$ is the number of raw images that went into $IM$, and $DARK_{i,j}$ is the flux in the $i^{th}$ row, $j^{th}$ column of the reference dark (see Section 5.1). The dark reference is taken from the calibration database. If more than one dark reference exists, the one closest in time to $IM$ is used (via the header key mjdmid).
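The dark reference selection and the correction of Equation B1 can be sketched as follows; the function names and the (mjdmid, image) pairing are illustrative, not apero's actual API.

```python
import numpy as np

def closest_dark(mjd_obs, dark_refs):
    """Pick the dark reference closest in time to the observation,
    mirroring the mjdmid-based selection described above.
    dark_refs: list of (mjdmid, dark_image) pairs from a calibration DB."""
    _, dark = min(dark_refs, key=lambda ref: abs(ref[0] - mjd_obs))
    return dark

def correct_dark(im, dark, n_raw):
    """Eq. B1: IM_corr = IM - N * DARK, with N the number of raw
    images combined into IM."""
    return im - n_raw * dark
```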
### B.2 Flipping, resizing, and re-scaling the image For legacy reasons, the image is flipped in the vertical and horizontal directions (see Figure 4). After this, the image is converted from $ADU/s$ to electrons using Equation B2. $IM_{\text{electrons}i,j}=IM_{\text{ADU}/\text{s}i,j}\times\text{Gain}\times t_{\rm exp}$ (B2) where $IM_{\text{electrons}i,j}$ is the flux in electrons for the $i^{th}$ row, $j^{th}$ column, $IM_{\text{ADU}/\text{s}i,j}$ is the flux in $ADU/s$ for the $i^{th}$ row, $j^{th}$ column, the gain is taken from the header key gain (although it has remained constant over the lifetime of SPIRou), and $t_{\rm exp}$ is the exposure time in seconds (taken from the header key exptime). Once the image is in electrons, it is resized. The image is cut in the cross-order direction to start from pixel 250 and end at pixel 3350 (removing a partial blue order and the whole unilluminated dark amplifier region) and in the along-order direction to start from pixel 4 and end at pixel 4092 (removing just the H4RG reference pixels). Thus, after this resizing, the image is of size 3100$\times$4088 (see Figure 4). ### B.3 Flagging the bad pixels The bad pixel map (badpix, see Section 6.1) closest in time to the image (using the header key mjdmid) is loaded from the calibration database, and all pixels flagged as bad are set to nan, as shown in Equation B3. $IM_{\text{corr}i,j}=\left\{\begin{array}{cl}NaN:&BADPIX_{i,j}\equiv 1\\ IM_{i,j}:&\text{otherwise}\end{array}\right.$ (B3) where $IM_{\text{corr}i,j}$ and $IM_{i,j}$ are the flux in the $i^{th}$ row, $j^{th}$ column of the corrected image and the input image respectively. $BADPIX_{i,j}$ is a bad pixel flag (1 or 0) in the $i^{th}$ row, $j^{th}$ column of the bad pixel map.
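Sections B.2 and B.3 reduce to a few numpy operations. In this sketch the flip orientation, the slice axes, and the function name are our assumptions, and the bad pixel map is assumed to be already trimmed to the resized frame:

```python
import numpy as np

def calibrate_image(im_adu_s, gain, t_exp, badpix):
    """Flip, convert ADU/s to electrons (Eq. B2), resize to 3100x4088,
    and NaN-mask bad pixels (Eq. B3)."""
    im = im_adu_s[::-1, ::-1]                  # flip vertically and horizontally
    im = im * gain * t_exp                     # Eq. B2: ADU/s -> electrons
    im = im[250:3350, 4:4092]                  # cross-order 250:3350, along-order 4:4092
    return np.where(badpix == 1, np.nan, im)  # Eq. B3: bad pixels -> NaN
```

The slice bounds reproduce the stated 3100$\times$4088 output shape (3350 - 250 = 3100 rows, 4092 - 4 = 4088 columns).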
### B.4 Correcting the background flux Within each science image, we take the median of ‘background’ pixels (identified using the backmap, see Section 6.1) within a region and create a map of large-scale background features (middle panel, Figure 14). This map is then splined into a $4088\times 4088$ image and subtracted from the science frame. ### B.5 Additional cleaning of hot pixels Hot pixels are flagged by finding pixels that are 10$\sigma$ (positive or negative) outliers compared to their immediate neighbors. This is in addition to the cosmic ray rejection applied in Section 4.5 and the bad pixel flagging (Section 6.1), which together remove most of the hot pixels. In this additional cleaning of hot pixels, we first construct a flattened image and perform a low-pass filter in the along-order direction, filtering the image so that only pixel-to-pixel structures remain. We then apply median filtering, which removes these big outliers, and then we smooth the image to avoid big regions filled with zeros. We apply a 5-pixel median boxcar and a 5-pixel smoothing in the along-order direction, which blurs along the dispersion over a scale of $\sim$7 pixels. Bad pixels are interpolated with a 2D surface fit using valid pixels within a $3\times 3$ pixel box centered on the bad pixel. ### B.6 Flagging out-of-bounds pixels Pixel values need to be within reasonable bounds considering the physics of the H4RG detector. If they are not in bounds, we set them to nan. The upper bound is the saturation level divided by the frame time: as the flux is expressed as a slope (computed in fits2ramp.py), a pixel value above this can be recorded by the detector but is nonphysical. The lower bound is set to the negative of ten times the readout noise; these bounds are shown in Equation B4.
$IM_{\text{corr}i,j}=\left\{\begin{array}{cl}NaN:&IM_{i,j}>\text{saturation}/t_{\text{frame}}\\ NaN:&IM_{i,j}<-10\times\text{readout noise}\\ IM_{i,j}:&\text{otherwise}\end{array}\right.$ (B4) where $IM_{\text{corr}i,j}$ and $IM_{i,j}$ are the flux in the $i^{th}$ row, $j^{th}$ column of the corrected image and the input image respectively, the saturation is taken from the header keyword saturate and is converted to electrons via Equation B2, $t_{\rm frame}$ is the individual frame time (from header keyword frmtime), and the readout noise is taken from the header keyword rdnoise. ## Appendix C Shape transformation The shape transform algorithm allows three different transformations, which may or may not all be used. Here we define $x$ as the direction along the order and $y$ as the direction across the order. 1. a linear transform: defined by $dx$, $dy$, $A$, $B$, $C$, $D$, where $dx$ and $dy$ are shifts in $x$ and $y$, respectively, and $A$, $B$, $C$, $D$ form the transform matrix: $\left[\begin{array}{cc}A&B\\ C&D\end{array}\right]$ (C1) This combines with $dx$ and $dy$ to form a 3$\times$3 matrix: $\left[\begin{array}{ccc}A&B&dx\\ C&D&dy\\ 0&0&1\end{array}\right]$ (C2) This $3\times 3$ linear transformation matrix allows for scaling, rotation, reflection (not used in our case), and shearing (Gonzalez & Woods, 2008). 2. a shift in $x$ position, where a shift is defined for each pixel. 3. a shift in $y$ position, where a shift is defined for each pixel. ## Appendix D Version history of APERO apero has been in development since October 2017. Here we list a few of the major versions to give the reader an idea of how long the development of a full pipeline can take.
First version | Last version | Date of first version | Main improvements ---|---|---|--- 0.0.000 | 0.0.048 | 2017-10-12 | First python version of apero 0.1.000 | 0.1.037 | 2018-01-10 | First version to run on SPIRou engineering data (H2RG detector) 0.2.000 | 0.2.128 | 2018-04-17 | First version for SPIRou commissioning (H4RG upgrade) 0.3.000 | 0.3.077 | 2018-09-06 | First implementation of telluric correction 0.4.000 | 0.4.123 | 2018-12-08 | Re-work wave solution and BERV calculation 0.5.000 | 0.5.124 | 2019-05-10 | Implementation of reference calibrations/recipes 0.6.000 | 0.6.132 | 2019-12-06 | Complete re-ordering of apero file structure, first use on NIRPS 0.7.000 | 0.7.255 | 2020-10-16 | Implementation of SQL databases, Telluric pre-cleaning, upgrade calibration recipes, integration of spirou-polar 0.8.000 | active | 2022-09-11 | Currently in development: uncertainty propagation, nan pixel quality flags, optimal extraction weights, database architecture 1.0.000 | - | - | After 0.8, full documentation, including adding new instruments Table D1: History of the major versions of apero ## Appendix E Current science publications using APERO In table E1 we list some science publications using apero for science. This list is not complete but gives an idea of the range of science enabled by apero with SPIRou. Title | Citation ---|--- Spin-orbit alignment and magnetic activity in the young planetary system AU Mic | Martioli et al. 2020 Early science with SPIRou: near-infrared radial velocity and spectropolarimetry of the planet-hosting star HD 189733 | Moutou et al. 2020 SPIRou: NIR velocimetry and spectropolarimetry at the CFHT | Donati et al. 2020 Star-disk interaction in the T Tauri star V2129 Ophiuchi: An evolving accretion-ejection structure | Sousa et al. 2021 Where Is the Water? Jupiter-like C/H Ratio but Strong H2O Depletion Found on Tau Bootis b Using SPIRou | Pelletier et al. 
2021 TOI-1278 B: SPIRou Unveils a Rare Brown Dwarf Companion in Close-in Orbit around an M Dwarf | Artigau et al. 2021 Characterizing Exoplanetary Atmospheres at High Resolution with SPIRou: Detection of Water on HD 189733 b | Boucher et al. 2021 TOI-530b: a giant planet transiting an M-dwarf detected by TESS | Gan et al. 2022b TOI-1759 b: A transiting sub-Neptune around a low mass star characterized with SPIRou and TESS | Martioli et al. 2022 TESS discovery of a sub-Neptune orbiting a mid-M dwarf TOI-2136 | Gan et al. 2022a Estimating fundamental parameters of nearby M dwarfs from SPIRou spectra | Cristofari et al. 2022a Line-by-line velocity measurements, an outlier-resistant method for precision velocimetry | Artigau et al. 2022 TOI-1452 b: SPIRou and TESS reveal a super-Earth in a temperate orbit transiting an M4 dwarf | Cadieux et al. 2022 Estimating the atmospheric properties of 44 M dwarfs from SPIRou spectra | Cristofari et al. 2022b CO or no CO? Narrowing the CO abundance constraint and recovering the H2O detection in the atmosphere of WASP-127b using SPIRou | Boucher et al. (accepted 2022) Near-IR and optical radial velocities of the active M-dwarf star Gl 388 (AD Leo) with SPIRou at CFHT and SOPHIE at OHP | Carmona et al. (submitted 2022) New insights on the near-infrared veiling of young stars using CFHT/SPIRou data | Sousa et al. (submitted 2022) A sub-Neptune planet around TOI-1695 discovered and characterized with TESS and SPIRou | Kiefer et al. (submitted 2022) The rotation period of 43 quiet M dwarfs from spectropolarimetry in the near-infrared: I. The SPIRou APERO analysis | Fouqué et al. (in prep) Optical and near-infrared stellar activity characterization of the early M dwarf Gl 205 with SOPHIE and SPIRou | Cortés-Zuleta et al. (in prep) High-resolution Chemical Spectroscopy of Barnard’s Star with SPIRou | Jahanadar et al. 
(in prep) New methods to correct for systematics in near-infrared radial velocity measurements: Application to GL725B with SPIRou data | Ould-Elhkim et al. (in prep) Characterizing planetary systems with SPIRou: the M-dwarf Planet Search survey and the system of GJ 251 | Moutou et al. (in prep) Table E1: List of some publications using apero for science. ## Appendix F Inputs The currently allowed raw file inputs are listed in table F1. The name becomes the apero header key dprtype. All other columns are header keys found in the raw input files or are added/modified when first processed (in apero_preprocess and apero_processing). Although all SPIRou raw files have one of the suffixes a.fits (obstype = align), c.fits (obstype = comparison), d.fits (obstype = dark), f.fits (obstype = flat), or o.fits (obstype = object), apero does not rely on the filenames to assign a dprtype to a raw input file. Instead, we use header keys to identify file types (Table F1). name | OBSTYPE | SBCCAS_P | SBCREF_P | SBCALI_P | INSTRUME | TRG_TYPE$\ast$ | DRSMODE$\ast$ ---|---|---|---|---|---|---|--- dark_dark_int | DARK | pos_pk | pos_pk | P4 | SPIRou | - | - dark_dark_tel | DARK | pos_pk | pos_pk | P5 | SPIRou | - | - dark_dark_sky | OBJECT | pos_pk | pos_pk | - | SPIRou | SKY | - dark_fp_sky | OBJECT | pos_pk | pos_fp | - | SPIRou | SKY | - dark_flat | FLAT | pos_pk | pos_wl | - | SPIRou | - | - flat_dark | FLAT | pos_wl | pos_pk | - | SPIRou | - | - flat_flat | FLAT | pos_wl | pos_wl | - | SPIRou | - | - flat_fp | FLAT | pos_wl | pos_fp | - | SPIRou | - | - dark_fp | ALIGN | pos_pk | pos_fp | - | SPIRou | - | - fp_dark | ALIGN | pos_fp | pos_pk | - | SPIRou | - | - fp_flat | ALIGN | pos_fp | pos_wl | - | SPIRou | - | - fp_fp | ALIGN | pos_fp | pos_fp | - | SPIRou | - | - lfc_lfc | ALIGN | pos_rs | pos_rs | - | SPIRou | - | - lfc_fp | ALIGN | pos_rs | pos_fp | - | SPIRou | - | - fp_lfc | ALIGN | pos_fp | pos_rs | - | SPIRou | - | - obj_dark | OBJECT | pos_pk | pos_pk | - | SPIRou |
TARGET | SPECTROSCOPY obj_fp | OBJECT | pos_pk | pos_fp | - | SPIRou | TARGET | SPECTROSCOPY obj_hcone | OBJECT | pos_pk | pos_hc1 | - | SPIRou | TARGET | - obj_hctwo | OBJECT | pos_pk | pos_hc2 | - | SPIRou | TARGET | - polar_dark | OBJECT | pos_pk | pos_pk | - | SPIRou | TARGET | POLAR polar_fp | OBJECT | pos_pk | pos_fp | - | SPIRou | TARGET | POLAR dark_hcone | COMPARISON | pos_pk | pos_hc1 | - | SPIRou | - | - dark_hctwo | COMPARISON | pos_pk | pos_hc2 | - | SPIRou | - | - fp_hcone | COMPARISON | pos_fp | pos_hc1 | - | SPIRou | - | - fp_hctwo | COMPARISON | pos_fp | pos_hc2 | - | SPIRou | - | - hcone_fp | COMPARISON | pos_hc1 | pos_fp | - | SPIRou | - | - hctwo_fp | COMPARISON | pos_hc2 | pos_fp | - | SPIRou | - | - hcone_hcone | COMPARISON | pos_hc1 | pos_hc1 | - | SPIRou | - | - hctwo_hctwo | COMPARISON | pos_hc2 | pos_hc2 | - | SPIRou | - | - hcone_dark | COMPARISON | pos_hc1 | pos_pk | - | SPIRou | - | - hctwo_dark | COMPARISON | pos_hc2 | pos_pk | - | SPIRou | - | - Table F1: All possible inputs currently accepted by apero. HDR denotes that a keyword is required from an input file header. $\ast$ denotes header key is added or modified by apero before internal use. ## Appendix G APERO Products apero produces outputs after every recipe. These are saved to the reduced directory (except for the apero_preprocess recipe that saves outputs into the working directory). These are intermediary products and are used to create the CADC outputs in post-processing (see Section 11). 
File | Recipe | Frequency | Description ---|---|---|--- (id)_pp.fits | apero_preprocess | every file | preprocessed file (HASH)_pp_dark_ref.fits | apero_dark_ref | ref night | reference dark file (HASH)_pp_badpixel.fits | apero_badpix | every night | bad pixel map file (HASH)_pp_order_profile_{AB,C}.fits | apero_loc_spirou | every night | order profile file (HASH)_pp_loco_{AB,C}.fits | apero_loc_spirou | every night | localization center map file (HASH)_pp_shapex.fits | apero_shape_ref | ref night | dx shape map file (HASH)_pp_shapey.fits | apero_shape_ref | ref night | dy shape map file (HASH)_pp_fpref.fits | apero_shape_ref | ref night | FP reference file (HASH)_pp_shapel.fits | apero_shape | every night | local shape map file (HASH)_pp_blaze_{AB,A,B,C}.fits | apero_flat | every night | blaze correction file (HASH)_pp_flat_{AB,A,B,C}.fits | apero_flat | every night | flat correction file (HASH)_pp_e2ds_{AB,A,B,C}.fits | apero_thermal | every night | 2D extracted dark_dark_int and/or dark_dark_tel file [49x4088] (HASH)_pp_e2dsff_{AB,A,B,C}.fits | apero_thermal | every night | 2D extracted + flat fielded dark_dark_int and/or dark_dark_tel file [49x4088] (HASH)_pp_s1d_v_{AB,A,B,C}.fits | apero_thermal | every night | 1D extracted + flat fielded dark_dark_int and/or dark_dark_tel file with constant velocity bins (HASH)_pp_s1d_w_{AB,A,B,C}.fits | apero_thermal | every night | 1D extracted + flat fielded dark_dark_int and/or dark_dark_tel with constant wavelength bins (HASH)_pp_thermal_e2ds_int_{AB,A,B,C}.fits | apero_thermal | every night | extracted thermal internal dark calibration file (HASH)_pp_thermal_e2ds_tel_{AB,A,B,C}.fits | apero_thermal | every night | extracted thermal telescope dark calibration file (id)_pp_e2ds_{AB,A,B,C}.fits | apero_wave_ref | ref night | 2D extracted dark_fp file [49x4088] (id)_pp_e2dsff_{AB,A,B,C}.fits | apero_wave_ref | ref night | 2D extracted + flat fielded dark_dark_int and/or dark_dark_tel file [49x4088]
(id)_pp_s1d_v_{AB,A,B,C}.fits | apero_wave_ref | ref night | 1D extracted + flat fielded dark_dark_int and/or dark_dark_tel file with constant velocity bins (id)_pp_s1d_w_{AB,A,B,C}.fits | apero_wave_ref | ref night | 1D extracted + flat fielded dark_dark_int and/or dark_dark_tel with constant wavelength bins (HASH)_pp_leak_ref_{AB,A,B,C}.fits | apero_leak_ref | ref night | leak correction reference file (HASH)_pp_e2ds_{AB,A,B,C}.fits | apero_wave_ref | ref night | 2D extracted fp_fp and hcone_hcone file [49x4088] (HASH)_pp_e2dsff_{AB,A,B,C}.fits | apero_wave_ref | ref night | 2D extracted + flat fielded fp_fp and hcone_hcone file [49x4088] (HASH)_pp_s1d_v_{AB,A,B,C}.fits | apero_wave_ref | ref night | 1D extracted + flat fielded fp_fp and hcone_hcone file with constant velocity bins (HASH)_pp_s1d_w_{AB,A,B,C}.fits | apero_wave_ref | ref night | 1D extracted + flat fielded fp_fp and hcone_hcone file with constant wavelength bins (HASH)_pp_wavesol_ref_{AB,A,B,C}.fits | apero_wave_ref | ref night | reference 2D wavelength solution [49x4088] (HASH)_pp_wavecav_AB.fits | apero_wave_ref | ref night | Cavity width measurement file (HASH)_pp_e2ds_{AB,A,B,C}.fits | apero_wave_night | every night | 2D extracted fp_fp and hcone_hcone file [49x4088] (HASH)_pp_e2dsff_{AB,A,B,C}.fits | apero_wave_night | every night | 2D extracted + flat fielded fp_fp and hcone_hcone file [49x4088] (HASH)_pp_s1d_v_{AB,A,B,C}.fits | apero_wave_night | every night | 1D extracted + flat fielded fp_fp and hcone_hcone file with constant velocity bins (HASH)_pp_s1d_w_{AB,A,B,C}.fits | apero_wave_night | every night | 1D extracted + flat fielded fp_fp and hcone_hcone file with constant wavelength bins (HASH)_pp_wave_night_ref_{AB,A,B,C}.fits | apero_wave_night | every night | reference 2D wavelength solution [49x4088] Table G1: Main apero products, where id for SPIRou is the odometer code and the HASH is a checksum created by multiple inputs (of the same data type) to the given recipe.
A full list of data products for each recipe can be found in the documentation. Continued in table G2. File | Recipe | Frequency | Description ---|---|---|--- (id)_pp_e2ds_{AB,A,B,C}.fits | apero_extract | every file | 2D extracted science/hot star file [49x4088] (id)_pp_e2dsff_{AB,A,B,C}.fits | apero_extract | every file | 2D extracted + flat fielded science/hot star file [49x4088] (id)_pp_s1d_v_{AB,A,B,C}.fits | apero_extract | every file | 1D extracted + flat fielded science/hot star file with constant velocity bins (id)_pp_s1d_w_{AB,A,B,C}.fits | apero_extract | every file | 1D extracted + flat fielded science/hot star file with constant wavelength bins (id)_pp_tellu_trans{AB,A,B}.fits | apero_mk_tellu | every hot star file | Measured telluric transmission file [49x4088] (id)_pp_tellu_pclean{AB,A,B}.fits | apero_mk_tellu | every hot star file | Telluric pre-cleaning (corrected, transmission mask, measured absorption, sky model) [49x4088] trans_model_AB.fits | apero_mk_model | one | Model of all telluric transmission files (residuals in water, dry, and a DC level).
(id)_pp_e2dsff_tcorr{AB,A,B}.fits | apero_fit_tellu | every hot star/science file | 2D telluric corrected extracted flat fielded file [49x4088] (id)_pp_s1d_w_tcorr_{AB,A,B}.fits | apero_fit_tellu | every hot star/science file | 1D telluric corrected extracted flat fielded file with constant wavelength bins (id)_pp_s1d_v_tcorr_{AB,A,B}.fits | apero_fit_tellu | every hot star/science file | 1D telluric corrected extracted flat fielded file with constant velocity bins (id)_pp_e2dsff_recon{AB,A,B}.fits | apero_fit_tellu | every hot star/science file | 2D telluric reconstructed absorption file [49x4088] (id)_pp_s1d_w_recon_{AB,A,B}.fits | apero_fit_tellu | every hot star/science file | 1D telluric reconstructed absorption file with constant wavelength bins (id)_pp_s1d_v_recon_{AB,A,B}.fits | apero_fit_tellu | every hot star/science file | 1D telluric reconstructed absorption file with constant velocity bins (id)_pp_tellu_pclean{AB,A,B}.fits | apero_mk_tellu | every hot star/science file | Telluric pre-cleaning (corrected, transmission mask, measured absorption, sky model) [49x4088] Template_{object}_tellu_obj_AB.fits | apero_mk_template | once per object | 2D telluric corrected template of a hot star or science object [49x4088] Template_s1d_{object}_sc1d_w_file_AB.fits | apero_mk_template | once per object | 1D telluric corrected template of a hot star or science object with constant wavelength bins Template_s1d_{object}_sc1d_v_file_AB.fits | apero_mk_template | once per object | 1D telluric corrected template of a hot star or science object with constant velocity bins (id)_pp_e2dsff_tcorr{AB,A,B}_ccf_{mask}_{AB,A,B,C}.fits | apero_ccf | every hot star/science file | The CCF output file (CCFs per order and fitted parameters) (HASH)_pp_e2dsff_tcorr_pol_deg.fits | apero_polar | every polarimetric group | 2D polar file [49x4088] (HASH)_pp_e2dsff_tcorr_null1_pol.fits | apero_polar | every polarimetric group | 2D Null 1 file [49x4088]
(HASH)_pp_e2dsff_tcorr_null2_pol.fits | apero_polar | every polarimetric group | 2D Null 2 file [49x4088] (HASH)_pp_e2dsff_tcorr_StokesI.fits | apero_polar | every polarimetric group | 2D Stokes I file [49x4088] (HASH)_pp_e2dsff_tcorr_s1d_w_pol.fits | apero_polar | every polarimetric group | 1D polarimetry, null 1, null 2, and Stokes I file with constant wavelength bins (HASH)_pp_e2dsff_tcorr_s1d_v_pol.fits | apero_polar | every polarimetric group | 1D polarimetry, null 1, null 2, and Stokes I file with constant velocity bins Table G2: Main apero products, where id for SPIRou is the odometer code and the HASH is a checksum created by multiple inputs (of the same data type) to the given recipe. A full list of data products for each recipe can be found in the documentation. Continued from table G1. ## Appendix H Preliminary usage with NIRPS One of our main goals with apero was to keep the code generic enough that adding new instruments is possible. To document this we detail here the changes required to add the NIRPS_HE and NIRPS_HA modes to apero. This work is preliminary as commissioning of NIRPS is currently underway, and we expect additional changes will be required when larger data sets over longer periods of time exist (including long science sequences); however, apero with NIRPS has already demonstrated precision equivalent to that of SPIRou. The specific details of all changes are beyond the scope of this paper and will be part of a future publication. Currently, after extraction, there are no code differences between NIRPS and SPIRou reductions. ### H.1 NIRPS: A comparison with SPIRou NIRPS is very similar to SPIRou but differs in several ways. We list key differences that apero must handle: * • there are two modes, high efficiency (NIRPS_HE) and high accuracy (NIRPS_HA). * • there is only one science fiber and one calibration fiber.
* • wavelength domain of 980 nm to 1800 nm (negligible thermal emission) * • missing order(s) around 1400 nm * • the resolution is higher: $\sim$100 000 and $\sim$80 000 for the NIRPS_HE and NIRPS_HA modes, respectively. * • there is no slicer, and NIRPS_HE and NIRPS_HA have differing fiber geometries. * • there are 73 echelle orders extracted by apero. * • there are no dark unilluminated amplifiers. ### H.2 APERO changes for NIRPS We use Figure 1 as our reference to changes within apero sub-packages. It is worth noting that adapting apero for use with NIRPS did change some code used for SPIRou, as having a second instrument with some unique characteristics informed us of code that could be improved for both instruments. These changes are not mentioned in this section. apero is designed to have each of these instruments in the same code base; thus no separate installation or download of additional code is required to use apero with NIRPS. No code was changed in the following apero sub-packages: apero.core (with the exception of apero.core.instruments), apero.documentation, apero.io, apero.lang, apero.plotting, apero.setup, and apero.tools. Minimal code changes (3 or fewer) were made in the following apero sub-package: apero.base (adding NIRPS_HE and NIRPS_HA to the supported instruments list). Some code changes were added to the following apero sub-packages: * • apero.data: data files were copied from SPIRou and updated for NIRPS_HE and NIRPS_HA. A few FITS files had to be updated using external scripts. We plan for these scripts to be ingested into apero as tools that, with sufficient documentation, can be used directly on other instruments. * • apero.recipes: recipe scripts were copied from SPIRou (mostly unchanged apart from filenames). One new recipe was added (a reference flat, run before the preprocessing recipe) to the preprocessing sequence for NIRPS_HE and NIRPS_HA.
* • apero.science: some science algorithms were added (and called from the recipes), either in addition to or as replacements for the SPIRou science algorithms. For example, as NIRPS does not have an unilluminated region similar to SPIRou's, we have to handle the detector corrections in preprocessing slightly differently, building a background image from the between-order regions.

Substantial code was added to the following apero sub-package: apero.core.instruments. The configuration, constants, keywords, file definitions, pseudo constants, and recipe definitions were copied from SPIRou and updated for NIRPS_HE and NIRPS_HA. We also removed the polarimetry recipes, as there is no polarimetry mode for NIRPS. Note that pseudo constants are constants and variables that cannot be described by a single number or string of characters, such as a decision between science and reference fibers for a specific step of apero, or a specific fix to a FITS header key for a specific instrument. These pseudo constants are python functions designed to keep these instrument-specific options separate from the rest of the code.

## Appendix I Glossary

In Table I1 we present a list of terms used throughout this paper.

Term | Description
---|---
apero | A PipelinE to Reduce Observations.
apero profile | A specific setup of apero (i.e., with a certain set of constants, reduction directories, database setups, etc.).
amplifier | Independent electronic readout circuits operating in parallel, used to minimize the total readout time.
CCF | Cross-correlation function.
CADC | Canadian Astronomy Data Centre.
CFHT | The Canada-France-Hawaii Telescope, situated on Maunakea, Hawaii, US.
dprtype | Data product type - this describes what is in the science and reference fibers and distinguishes different calibrations and observations from each other.
drsmode | Data product mode - for SPIRou this is spectroscopy data or polarimetry data.
e2ds | An extracted order-by-order spectrum.
e2dsff | An extracted order-by-order spectrum that has been flat-fielded.
fast-axis (long-axis) | Axis parallel to the amplifier direction on the detector. For SPIRou this is 4096 pixels per amplifier.
FITS file | A Flexible Image Transport System file to hold images, tables, and metadata in the form of a FITS header.
FITS header | A Flexible Image Transport System metadata holder. Consists of 8-character ‘keys’, a value, and a comment.
FP | A Fabry-Perot etalon used for calibration.
hash | A short unique hexadecimal string of characters generated from a long string of characters.
HC | A hollow cathode lamp used for calibration.
hot star | Bright, fast rotators of B or A spectral type that are spectrally featureless under $\sim$100 $\text{km}\,\text{s}^{-1}$.
LBL | The line-by-line method for measuring radial velocity, presented in Artigau et al. (2022).
NIRPS | Near Infra Red Planet Searcher, a spectrograph on the 3.6 m telescope in La Silla, Chile.
odometer | A unique sequential number used by CFHT to identify individual observations.
order | A domain on the detector at specific wavelengths generated by light passing through a diffraction grating.
PI | Principal investigator.
pipeline | Software that takes data from an origin to a destination.
post-processed data | Data that is given to PIs after apero has been completed.
pre-processed data | Data that has been corrected for detector issues - the first step in handling raw data.
pRV | Precision radial velocity, measurements at the order of $\text{m}\,\text{s}^{-1}$ accuracy.
raw data | For the purposes of apero this is the RAMPS from CFHT.
recipe | A top-level script similar to a cookbook recipe where simple steps follow one another; most calculations and algorithms are hidden from these recipes.
reduced data | Data that has been created from the raw data using a pipeline.
rhomb | An ensemble of prisms used to rotate polarization states.
run.ini file | A configuration file used for a specific reduction sequence, i.e.
a science sequence or a calibration sequence.
reduction sequence | A set of recipes run in a certain order, with specific filters on which files to reduce, for a specific purpose.
reference calibration | A calibration done once (not on a nightly basis).
science observation | Any observation that is taken by the telescope specifically for the purposes of science, i.e., SLS data and PI data.
slicer | A device that is used to split an image into narrower images to increase spectral resolution.
slow-axis (short-axis) | Axis perpendicular to the amplifier direction on the detector. For SPIRou this is 128 pixels per amplifier.
SPIRou | Spectro Polarimètre Infra ROUge, a spectrograph for CFHT.
SLS | The SPIRou Legacy Survey.
Wollaston prism | A device that allows the incoming beam (either from the telescope or the calibration unit) to be split into two orthogonally polarized beams.
1/f noise | A noise component arising from the detector readout electronics that has a low-frequency component that is common to all amplifiers and sampled by the reference pixels.
Table I1: Glossary of terms.
# Action-Angle Variables for Axisymmetric Potentials via Birkhoff Normalization

Sam Hadden

Canadian Institute for Theoretical Astrophysics, 60 St George St Toronto, ON M5S 3H8, Canada

(April 2024)

###### Abstract

We describe a method for calculating action-angle variables in axisymmetric galactic potentials using Birkhoff normalization, a technique from Hamiltonian perturbation theory. An advantageous feature of this method is that it yields explicit series expressions for both the forward and inverse transformations between the action-angle variables and position-velocity data. It also provides explicit expressions for the Hamiltonian and dynamical frequencies as functions of the action variables. We test this method by examining orbits in a Miyamoto-Nagai model potential and compare it to the popular Stäckel approximation method. When vertical actions are not too large, the Birkhoff normalization method achieves fractional errors smaller than a part in $10^{3}$ and outperforms the Stäckel approximation. We also show that the range over which Birkhoff normalization provides accurate results can be extended by constructing Padé approximants from the perturbative series expressions developed with the method. Numerical routines in Python for carrying out the Birkhoff normalization procedure are made available.

## 1 Introduction

While a comprehensive dynamical description of a galaxy must take into account mutual gravitational interactions among stars, dark matter, and gas, dramatically simplified models that consider the orbits of stars in static, smooth potentials provide surprisingly powerful tools for understanding the dynamics of these complex systems. Numerical experiments show that typical stellar orbits in model galactic potentials execute quasi-periodic motions (e.g., Binney & Spergel, 1982, 1984). Such motions are restricted to 3-dimensional tori embedded in the full 6-dimensional phase space spanned by stars’ spatial positions and velocities.
For completely integrable Hamiltonian systems, action-angle (AA) variables provide a particularly convenient set of canonical coordinates for describing motion on such tori (e.g., Goldstein et al., 2002). Fixing the values of three action variables, $\boldsymbol{J}$, specifies an individual torus as a 3-dimensional submanifold of the 6-dimensional phase space, while the canonically conjugate angle variables, $\boldsymbol{\theta}$, serve as coordinates on the torus. Furthermore, the time evolution of trajectories on a given torus is described simply by the linear advance of the angle variables with time at rates $\frac{d}{dt}\boldsymbol{\theta}=\nabla_{\boldsymbol{J}}H$, where $H$ is the Hamiltonian function. While AA variables are a powerful theoretical tool, it is seldom possible to compute them explicitly from stellar position and velocity data for potentials commonly adopted when modeling our Milky Way and other galaxies. This is because these potentials generally do not admit a set of three globally defined isolating integrals, which are required in order to have a well-defined transformation to AA variables everywhere in phase space. Nonetheless, the quasi-periodic nature of stellar orbits over much of the phase space of interest in galactic potentials makes it possible to construct local transformations to AA variables via numerical means. Numerous numerical methods have been developed for the purpose of constructing AA variables in galactic potentials (see the review by Sanders & Binney, 2016). One of the most widely used methods is the Stäckel approximation or “Stäckel fudge”, originally proposed by Binney (2012). In axisymmetric potentials, this method allows radial and vertical actions to be estimated via numerical quadratures by implicitly treating the underlying potential as though it were in Stäckel form, meaning separable in prolate spheroidal coordinates.
While the Stäckel approximation usually provides quite accurate determinations of AA variables, it does have some shortcomings. First, the transformation cannot be readily inverted to provide positions and velocities as functions of actions and angles, nor to express the original Hamiltonian in terms of AA variables. Furthermore, because the method requires evaluation of integrals by numerical quadratures, it can become computationally expensive when evaluating transformations for large amounts of data. In this paper, we present a perturbative method to determine action-angle variables in axisymmetric galactic potentials. It is based on a technique originally proposed by Birkhoff (1927) for developing successively refined approximations to the motion of Hamiltonian systems in the vicinity of elliptic equilibria, variously referred to as “Birkhoff normalization” (e.g., Deprit, 1969) or the “Birkhoff-Gustavson method” (e.g., Lowenstein, 2012). One of the principal advantages of this method is that it provides explicit expressions for the transformation from position and velocity data to AA variables. Furthermore, this method simultaneously yields explicit expressions for the inverse transformation, meaning stellar positions and velocities as well as the Hamiltonian can be represented explicitly in terms of the AA variables. Expressions are constructed as multi-variate polynomials in complex canonical coordinates, introduced in Section 2, and therefore can be evaluated at minimal computational cost. Together, these features can make Birkhoff normalization an attractive alternative to the Stäckel approximation, depending on the application. The use of perturbation theory in galactic dynamics to derive approximate integrals of motion is not new (e.g., Contopoulos, 1963; Saaf, 1968; de Zeeuw & Merritt, 1983).
In fact, Gustavson (1966) implements and applies an algorithm very similar to the one presented here to derive a formal second integral of the Hénon–Heiles potential (Henon & Heiles, 1964). The current paper builds on this past work in at least a couple of respects. First, we follow Deprit (1969) and develop perturbative series using the Lie transform method. This method allows any function defined on phase space in terms of the original canonical variables to be readily expressed in the transformed variables and vice versa. This fact is exploited to construct an explicit expression for the canonical angle variable, $\theta_{\phi}$, associated with orbits’ azimuthal degree of freedom, thereby providing a complete transformation from 6D position and velocity data to AA variables. Second, we show how Padé approximants can be deployed to extend the range of validity of the series expressions furnished by the method. Finally, a modern computational implementation of the method in Python is made publicly available.111github.com/shadden/AA_variables_via_Birkhoff_Normalization The availability of computer algebra and automatic differentiation packages in modern languages like Python makes it feasible to apply Birkhoff normalization to more realistic potential models than the simple Hénon–Heiles model originally considered by Gustavson (1966). The plan of this paper is as follows: we introduce the algorithm in Section 2. In Section 3, we demonstrate the method with an example application to stellar orbits in a Miyamoto-Nagai potential (Miyamoto & Nagai, 1975), a classic model for the potential of a galactic disk. Finally, we summarize and discuss our results in Section 4.

## 2 The Algorithm

### 2.1 Setting

We consider orbits in an axisymmetric potential, $\Phi$.
The Hamiltonian, expressed in terms of polar coordinates $(R,z,\phi)$ along with their canonically conjugate momenta, $(p_{R},p_{z},L)$, is $\mathcal{H}(R,z,\phi,p_{R},p_{z},L)=\frac{1}{2}\left(p_{R}^{2}+p_{z}^{2}+\frac{L^{2}}{R^{2}}\right)+\Phi(R,z)~{}.$ (1) Since the Hamiltonian in (1) has no explicit $\phi$ dependence, the angular momentum, $L$, is conserved and we can consider the dynamics in the reduced phase space comprised of $(z,R,p_{z},p_{R})$, treating $L$ as a parameter. The Hamiltonian governing the dynamics in this reduced phase space is then given by ${H}(R,z,p_{R},p_{z};L)=\frac{1}{2}\left(p_{R}^{2}+p_{z}^{2}\right)+{\Phi}_{\mathrm{eff}}(R,z;L)$ (2) where we have defined ${\Phi}_{\mathrm{eff}}={\Phi}({R,z})+\frac{L^{2}}{2R^{2}}$ as the effective potential. For a given value of the angular momentum, $L$, there is a (typically unique) circular orbit of radius $R_{C}(L)$ which satisfies $\frac{\partial}{\partial R}{\Phi}_{\mathrm{eff}}({R,z})\bigg{|}_{(R,z)=(R_{C},0)}=0~{}.$ (3) For nearly circular and planar orbits, the Hamiltonian (2) is given approximately by $H\approx H_{2}(z,R,p_{R},p_{z};L):=\frac{1}{2}\left(p_{R}^{2}+\kappa^{2}\delta R^{2}\right)+\frac{1}{2}\left(p_{z}^{2}+\nu^{2}z^{2}\right)~{}.$ (4) where $\kappa^{2}=\frac{\partial^{2}}{\partial R^{2}}{\Phi}_{\mathrm{eff}}$ and $\nu^{2}=\frac{\partial^{2}}{\partial z^{2}}{\Phi}_{\mathrm{eff}}$, with derivatives evaluated at $(R,z)=(R_{C},0)$, are the epicyclic and vertical frequencies, respectively, and $\delta R=R-R_{C}$. Equation (4) is, of course, the Hamiltonian of two un-coupled harmonic oscillators of frequency $\kappa$ and $\nu$, and the equations of motion derived from it provide the so-called “epicyclic approximation” for stars’ orbits (e.g., Binney & Tremaine, 2008).
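In practice, $R_{C}$, $\kappa$, and $\nu$ are easy to obtain numerically for any smooth axisymmetric potential. The sketch below is an illustration only (not the released code accompanying this paper): it locates the circular-orbit radius of Equation (3) by bisection and evaluates the epicyclic and vertical frequencies by central finite differences, using a Miyamoto-Nagai potential whose parameters, step sizes, and the value of $L$ are arbitrary choices for the example.

```python
import math

# Miyamoto-Nagai potential in units with G*M = 1; a, b are illustrative values.
def phi(R, z, a=3.0, b=0.3):
    return -1.0 / math.sqrt(R**2 + (a + math.sqrt(z**2 + b**2))**2)

def phi_eff(R, z, L):
    # Effective potential: Phi + L^2 / (2 R^2)
    return phi(R, z) + L**2 / (2.0 * R**2)

def d1(f, x, h=1e-5):
    # Central first difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-4):
    # Central second difference
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def circular_radius(L, lo=1e-2, hi=100.0):
    # Solve d(Phi_eff)/dR = 0 (Eq. 3) by bisection: the derivative is negative
    # at small R (the centrifugal term dominates) and positive at large R.
    g = lambda R: d1(lambda r: phi_eff(r, 0.0, L), R)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

L = 0.5
RC = circular_radius(L)
kappa = math.sqrt(d2(lambda r: phi_eff(r, 0.0, L), RC))   # epicyclic frequency
nu = math.sqrt(d2(lambda zz: phi_eff(RC, zz, L), 0.0))    # vertical frequency
```

A computer algebra system or automatic differentiation would give the higher-order derivatives needed later at machine precision; finite differences suffice for this low-order illustration.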
### 2.2 Complex canonical variables

The equations of motion derived from Equation (4) are completely integrable and we can perform a canonical transformation, $T:(R,z,p_{R},p_{z})\mapsto(\phi_{R},\phi_{z},I_{R},I_{z})$, to new variables such that $H_{2}\circ T^{-1}=\kappa I_{R}+\nu I_{z}$. Throughout this paper, we will frequently work instead with the associated complex canonical variables $\displaystyle x_{R}$ $\displaystyle=\sqrt{\frac{\kappa}{2}}\left(\delta R+\mathrm{i}\frac{p_{R}}{\kappa}\right)=\sqrt{I_{R}}e^{-\mathrm{i}\phi_{R}}$ (5) $\displaystyle x_{z}$ $\displaystyle=\sqrt{\frac{\nu}{2}}\left(z+\mathrm{i}\frac{p_{z}}{\nu}\right)=\sqrt{I_{z}}e^{-\mathrm{i}\phi_{z}}~{}$ (6) and their complex conjugates, $\bar{x}_{R}$ and $\bar{x}_{z}$. The Poisson bracket of two functions, $f$ and $g$, of phase space coordinates is expressed in terms of these variables by $[f,g]=-\mathrm{i}\sum_{j}\left(\frac{\partial f}{\partial{x}_{j}}\frac{\partial g}{\partial\bar{x}_{j}}-\frac{\partial f}{\partial\bar{x}_{j}}\frac{\partial g}{\partial{x}_{j}}\right)$ (7) where the subscript $j$ denotes either $R$ or $z$. In terms of the complex canonical variables,222 Here and elsewhere we will make a slight abuse of notation and use the same symbol to denote the Hamiltonian function expressed in terms of complex variables as we do for the Hamiltonian expressed in terms of polar coordinates. Equation (4) becomes $H_{2}=\kappa x_{R}\bar{x}_{R}+\nu x_{z}\bar{x}_{z}$ and Hamilton’s equations give $\frac{d}{dt}x_{R}=[x_{R},H_{2}]=-\mathrm{i}\kappa x_{R}$ and $\frac{d}{dt}x_{z}=[x_{z},H_{2}]=-\mathrm{i}\nu x_{z}$. Thus, under the epicyclic approximation, the complex canonical variables evolve at constant magnitude with a uniformly rotating phase. This will generally no longer be true when higher order terms are included in the Hamiltonian.
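For concreteness, the change of variables in Equations (5)-(6) and its inverse amount to only a few lines of code. The following is an illustrative sketch (the numerical values are arbitrary, and $R_{C}$, $\kappa$, and $\nu$ are assumed to have been computed already for the chosen $L$):

```python
import cmath
import math

def to_complex(R, z, pR, pz, RC, kappa, nu):
    # Eqs. (5)-(6): x_R = sqrt(kappa/2) (dR + i pR/kappa), similarly for x_z
    xR = math.sqrt(kappa / 2.0) * ((R - RC) + 1j * pR / kappa)
    xz = math.sqrt(nu / 2.0) * (z + 1j * pz / nu)
    return xR, xz

def from_complex(xR, xz, RC, kappa, nu):
    # Invert Eqs. (5)-(6): real parts carry dR and z, imaginary parts pR and pz
    R = RC + math.sqrt(2.0 / kappa) * xR.real
    pR = math.sqrt(2.0 * kappa) * xR.imag
    z = math.sqrt(2.0 / nu) * xz.real
    pz = math.sqrt(2.0 * nu) * xz.imag
    return R, z, pR, pz

# Epicyclic actions and angles follow as I = |x|^2 and phi = -arg(x);
# the numbers below are arbitrary example values.
xR, xz = to_complex(2.1, 0.05, 0.01, -0.02, RC=2.0, kappa=0.8, nu=1.1)
I_R, phi_R = abs(xR) ** 2, -cmath.phase(xR)
```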
Our goal will be to develop an algorithm that constructs a canonical transformation to new complex canonical variables, $x_{R}^{\prime}$ and $x^{\prime}_{z}$, for which this is again true up to some specified order in the new variables and their complex conjugates.

### 2.3 An illustrative example

Before giving the general algorithm, we first demonstrate its basic principles by way of an example application to the pendulum. This example provides a pedagogical introduction to Birkhoff normalization and the construction of canonical transformations via the Lie series method with complex canonical variables. The example illustrates the basic ideas behind the method while avoiding some of the cumbersome mathematical notation and manipulations required to formulate the procedure in a more general context. Readers uninterested in the technical details involved in an algorithmic implementation of the Birkhoff normalization procedure can therefore read this section while skipping or skimming Section 2.4. We start with the Hamiltonian governing the dynamics of a simple pendulum, $h(\theta,p)=\frac{p^{2}}{2}-\omega_{0}^{2}\cos{\theta}.$ (8) For small $\theta$, the truncated Hamiltonian, $h_{2}(\theta,p)=(p^{2}+\omega_{0}^{2}\theta^{2})/2$, gives a harmonic oscillator approximation of the dynamics. In terms of the complex canonical variable $x=\sqrt{\frac{\omega_{0}}{2}}(\theta+\mathrm{i}p/\omega_{0})$, the Hamiltonian (8) becomes $h(x,\bar{x})=\omega_{0}x\bar{x}+\sum_{n=2}^{\infty}h_{2n}(x,\bar{x})$ (9) where $h_{2n}=\frac{(-1)^{n+1}\omega_{0}^{2}}{(2n)!}\left(\frac{1}{2\omega_{0}}\right)^{n}\left(x+\bar{x}\right)^{2n}~{}.$ (10) We will seek a canonical transformation to a new complex canonical variable, $x^{\prime}$, so that the transformed Hamiltonian, $h^{\prime}$, depends only on $|x^{\prime}|^{2}$ up to a specified power, $N$, in the transformed variable, $x^{\prime}$, and its complex conjugate, $\bar{x}^{\prime}$.
The Lie series method (e.g., Ferraz-Mello, 2007; Lichtenberg & Lieberman, 1992) provides a particularly convenient means of constructing such a transformation. This method involves the construction of a generating function, $\chi$, such that the transformation from the new variables to the old variables is given by the time-1 flow of the Hamiltonian vector field generated by $\chi$. The time derivative of a function, $f$, under this flow is given by the Poisson bracket $[f,\chi]$ and we define the differential operator $\mathcal{L}_{\chi}:=[\cdot,\chi]$, referred to as the Lie derivative. This allows us to express a function, $f$, given in terms of the original phase space coordinates $(x,\bar{x})$, in terms of the new phase space variables, $(x^{\prime},\bar{x}^{\prime})$, formally as $f(x,\bar{x})=(\exp[\mathcal{L}_{\chi}]f)(x^{\prime},\bar{x}^{\prime})=\left(\sum_{n=0}^{\infty}\frac{1}{n!}\mathcal{L}_{\chi}^{n}f\right)(x^{\prime},\bar{x}^{\prime})~{}.$ (11) Returning to our pendulum example, we seek a generating function $\chi$ such that $h^{\prime}(x^{\prime},\bar{x}^{\prime})=(\exp[\mathcal{L}_{\chi}]h)(x^{\prime},\bar{x}^{\prime})=\sum_{n=1}^{N/2}h^{\prime}_{2n}(|x^{\prime}|^{2})+R_{N}(x^{\prime},\bar{x}^{\prime})$ where the remainder terms $R_{N}(x^{\prime},\bar{x^{\prime}})\sim\mathcal{O}(|x^{\prime}|^{N+1})$. Our approach will be to write our generating function as $\chi=\sum_{n=2}\chi_{2n}$ where $\chi_{2n}\sim{\mathcal{O}(|x^{\prime}|^{2n})}$. This will allow us to develop an iterative procedure for determining $\chi_{2n}$ and $h^{\prime}_{2n}$ at each stage in terms of the functions $h_{2},h_{4}...,h_{2n}$ defined in Equation (10) and the functions $\chi_{4},\chi_{6},...,\chi_{2n-2}$ determined in previous stages. 
Writing out the transformation Equation (11) explicitly in terms of Poisson brackets, we obtain $\displaystyle\exp[\mathcal{L}_{\chi}]h$ $\displaystyle=h+[h,\chi]+\frac{1}{2}[[h,\chi],\chi]+...$ (12) $\displaystyle=\underbrace{[h_{2},\chi_{4}]+h_{4}}_{h^{\prime}_{4}}+\underbrace{[h_{2},\chi_{6}]+[h_{4},\chi_{4}]+\frac{1}{2}[[h_{2},\chi_{4}],\chi_{4}]+h_{6}}_{h^{\prime}_{6}}+\mathcal{O}(|x^{\prime}|^{8})$ (13) where, in the second line, we have grouped terms by common powers of $|x^{\prime}|$. Consider the equation $h^{\prime}_{4}=[h_{2},\chi_{4}]-\left(\frac{x^{\prime 4}}{96}+\frac{x^{\prime 3}\bar{x}^{\prime}}{24}+\frac{x^{\prime 2}\bar{x}^{\prime 2}}{16}+\frac{x^{\prime}\bar{x}^{\prime 3}}{24}+\frac{\bar{x}^{\prime 4}}{96}\right)$ (14) defining the fourth order term in the transformed Hamiltonian, where we have used Equation (10) to write out $h_{4}$ explicitly. We wish to define $\chi_{4}$ so that $h^{\prime}_{4}$ depends only on $|x^{\prime}|^{2}$. Notice that the Poisson bracket between $h_{2}$ and any monomial, $x^{\prime k}\bar{x}^{\prime\bar{k}}$, gives $[h_{2},x^{\prime k}\bar{x^{\prime}}^{\bar{k}}]=\mathrm{i}\omega_{0}(k-\bar{k})x^{\prime k}\bar{x}^{\prime\bar{k}}$. Therefore, if we choose $\chi_{4}$ as the sum of monomial terms $\displaystyle\chi_{4}=\frac{1}{\mathrm{i}\omega_{0}}\left(\frac{x^{\prime 3}\bar{x}^{\prime}}{48}-\frac{x^{\prime}\bar{x}^{\prime 3}}{48}-\frac{\bar{x}^{\prime 4}}{384}+\frac{x^{\prime 4}}{384}\right)$ (15) then each term in the polynomial $[h_{2},\chi_{4}]$ cancels a corresponding term appearing in Equation (14) except the term $\propto x^{\prime 2}\bar{x}^{\prime 2}$. We therefore have $h^{\prime}_{4}=-\frac{1}{16}x^{\prime 2}\bar{x}^{\prime 2}~{},$ (16) which depends only on $|x^{\prime}|^{2}$ as desired. 
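Calculations of this sort are easy to check mechanically. The sketch below (an independent illustration, not the paper's released routines) represents homogeneous polynomials in $x^{\prime}$ and $\bar{x}^{\prime}$ as dictionaries mapping exponent pairs to coefficients, implements the one degree-of-freedom Poisson bracket of Equation (7), and confirms that $h_{4}+[h_{2},\chi_{4}]$ collapses to the single term $-\frac{1}{16}x^{\prime 2}\bar{x}^{\prime 2}$, taking $\omega_{0}=1$:

```python
# Homogeneous polynomials in (x, xbar) as dicts {(k, kbar): coefficient}.
def pbracket(f, g):
    """Poisson bracket [f, g] = -i(df/dx dg/dxb - df/dxb dg/dx), Eq. (7)."""
    out = {}
    for (k1, b1), c1 in f.items():
        for (k2, b2), c2 in g.items():
            val = (-1j * k1 * b2 + 1j * b1 * k2) * c1 * c2
            if val != 0:
                key = (k1 + k2 - 1, b1 + b2 - 1)
                out[key] = out.get(key, 0) + val
    return out

def padd(p, q):
    # Sum two polynomials, dropping numerically cancelled terms
    out = dict(p)
    for key, c in q.items():
        out[key] = out.get(key, 0) + c
    return {key: c for key, c in out.items() if abs(c) > 1e-12}

w0 = 1.0
h2 = {(1, 1): w0}
h4 = {(4, 0): -1/96, (3, 1): -1/24, (2, 2): -1/16, (1, 3): -1/24, (0, 4): -1/96}
# chi_4 of Eq. (15), with the overall 1/(i w0) factor applied to each term
chi4 = {key: c / (1j * w0) for key, c in
        {(4, 0): 1/384, (3, 1): 1/48, (1, 3): -1/48, (0, 4): -1/384}.items()}

# h'_4 = [h2, chi4] + h4 collapses to the single action term -(1/16) x^2 xbar^2
hp4 = padd(pbracket(h2, chi4), h4)
print(hp4)   # -> {(2, 2): -0.0625}
```

The same machinery, extended to two pairs of variables, underlies the general algorithm of Section 2.4.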
With $\chi_{4}$ determined, we can write $h^{\prime}_{6}$ according to Equation (13) as $h^{\prime}_{6}=[h_{2},\chi_{6}]+\frac{x^{\prime 6}}{2560\omega_{0}}-\frac{x^{\prime 5}\bar{x}^{\prime}}{3840\omega_{0}}-\frac{7x^{\prime 4}\bar{x}^{\prime 2}}{1536\omega_{0}}-\frac{x^{\prime 3}\bar{x}^{\prime 3}}{256\omega_{0}}-\frac{7x^{\prime 2}\bar{x}^{\prime 4}}{1536\omega_{0}}-\frac{x^{\prime}\bar{x}^{\prime 5}}{3840\omega_{0}}+\frac{\bar{x}^{\prime 6}}{2560\omega_{0}}~{}.$ (17) We will again choose $\chi_{6}$ so that the Poisson bracket $[h_{2},\chi_{6}]$ cancels all of the monomial terms appearing on the right-hand side of Equation (17) except the term $\propto x^{\prime 3}\bar{x}^{\prime 3}$. Proceeding, we find $\displaystyle\chi_{6}$ $\displaystyle=\frac{1}{\mathrm{i}\omega_{0}^{2}}\left(-\frac{x^{\prime 6}}{15360}+\frac{x^{\prime 5}\bar{x}^{\prime}}{15360}+\frac{7x^{\prime 4}\bar{x}^{\prime 2}}{3072}-\frac{7x^{\prime 2}\bar{x}^{\prime 4}}{3072}-\frac{x^{\prime}\bar{x}^{\prime 5}}{15360}+\frac{\bar{x}^{\prime 6}}{15360}\right)~{}.$ (18) $\displaystyle h^{\prime}_{6}$ $\displaystyle=-\frac{x^{\prime 3}\bar{x}^{\prime 3}}{256\omega_{0}}~{}.$ (19) If we carried on with the expansion of Equation (13) to higher order, we would find that at each step, we need to solve an equation of the form $h^{\prime}_{2n}=[h_{2},\chi_{2n}]+\Psi_{2n}$ (20) where $\Psi_{2n}$ is a collection of terms of order $2n$ involving Poisson brackets of the known functions $h_{4},h_{6},...,h_{2n}$ and $\chi_{4},...,\chi_{2n-2}$. In the next section we will detail an algorithm for iteratively constructing and solving _homological equations_ like Equation (20). Let us conclude our present example by approximating the pendulum’s oscillation frequency as a function of maximum angular displacement, $\theta_{\mathrm{max}}$, in order to illustrate how functions of the new, transformed variables, $x^{\prime}$ and $\bar{x}^{\prime}$, may be calculated explicitly from the original coordinates and vice versa.
Our transformed Hamiltonian is given by $h^{\prime}=\omega_{0}|x^{\prime}|^{2}-\frac{1}{16}|x^{\prime}|^{4}-\frac{1}{256\omega_{0}}|x^{\prime}|^{6}+\mathcal{O}(|x^{\prime}|^{8})$ (21) so the oscillation frequency will be given, in terms of the action $J=|x^{\prime}|^{2}$, by $\omega(J)=\omega_{0}-\frac{1}{8}J-\frac{3}{256\omega_{0}}J^{2}$. We therefore need the value of $J$ in terms of $\theta_{\mathrm{max}}$. Given any function, $f$, on the phase space expressed in terms of the transformed variables, $x^{\prime}$ and $\bar{x}^{\prime}$, its expression in terms of the original, un-primed variables $x$ and $\bar{x}$ is given by $f\left(x^{\prime}(x,\bar{x}),\bar{x}^{\prime}(x,\bar{x})\right)=(\exp[-\mathcal{L}_{\chi}]f)(x,\bar{x})$ (22) since $\exp[-\mathcal{L}_{\chi}]f$ gives the function $f$ evaluated at the time-($-1$) flow of the Hamiltonian vector field generated by $\chi$. Applying Equation (22) to the function $\omega:(x^{\prime},\bar{x}^{\prime})\mapsto\omega_{0}-\frac{1}{8}|x^{\prime}|^{2}-\frac{3}{256\omega_{0}}|x^{\prime}|^{4}$, we obtain the oscillation frequency in terms of the original, un-transformed complex variable $x=\sqrt{\frac{\omega_{0}}{2}}(\theta+\mathrm{i}p/\omega_{0})$ as $\displaystyle(\exp[-\mathcal{L}_{\chi}]\omega)(x,\bar{x})$ $\displaystyle=\omega_{0}-\frac{|x|^{2}}{8}-\frac{3}{256\omega_{0}}|x|^{4}+\frac{1}{8}\left[|x|^{2},\chi_{4}\right]+\mathcal{O}(|x|^{6})$ (23) $\displaystyle=\omega_{0}-\frac{x\bar{x}}{8}+\frac{1}{\omega_{0}}\left(\frac{\bar{x}^{4}}{768}+\frac{\bar{x}^{3}x}{192}-\frac{3\bar{x}^{2}x^{2}}{256}+\frac{\bar{x}x^{3}}{192}+\frac{x^{4}}{768}\right)+\mathcal{O}(|x|^{6})~{}.$ (24) Substituting $x=\sqrt{\frac{\omega_{0}}{2}}\theta_{\mathrm{max}}$ into Equation (24) gives $\omega\approx\omega_{0}\left(1-\frac{\theta_{\max}^{2}}{16}+\frac{\theta_{\max}^{4}}{3072}+...\right),$ (25) which matches the first three terms of the Taylor expansion of the exact result,
$\omega=\frac{\pi\omega_{0}}{2\mathbb{K}\left(\sin^{2}\left(\frac{\theta_{\max}}{2}\right)\right)}$, where $\mathbb{K}$ denotes the complete elliptic integral of the first kind (e.g., Lichtenberg & Lieberman, 1992).

### 2.4 The general algorithm

The Birkhoff normalization procedure, illustrated above in the case of the pendulum, readily generalizes to systems with $d>1$ degrees of freedom. Specifically, the procedure can be applied in the vicinity of an elliptic equilibrium to construct a formal series for a generating function, $\chi$, that transforms the Hamiltonian so that it only depends on $d$ action-like variables. While these formal series are, in general, divergent, their finite-order truncations are nonetheless often very useful for approximating a given system’s dynamics (e.g., Contopoulos, 2003; Efthymiopoulos et al., 2004). The numerical results presented later in Section 3 show that the approximate integrals computed for our chosen order of truncation are very nearly constant over a large region of the phase space. We apply the Birkhoff normalization procedure to orbits in the vicinity of planar, circular orbits of axisymmetric galactic potentials, which are elliptic equilibria of the two degree-of-freedom Hamiltonian given in Equation (2). In terms of the complex canonical variables $x_{R}$ and $x_{z}$ introduced in Section 2.2, the Hamiltonian can be written as $\displaystyle H=\kappa x_{R}\bar{x}_{R}+\nu x_{z}\bar{x}_{z}+\sum_{n=3}^{\infty}H_{n}(x_{R},x_{z},\bar{x}_{R},\bar{x}_{z})$ (26) where $\displaystyle H_{n}(x_{R},x_{z},\bar{x}_{R},\bar{x}_{z})=\sum_{m=0}^{n}\frac{\Phi_{\mathrm{eff}}^{(m,n-m)}}{(n-m)!m!}\left(\sqrt{\frac{1}{2\kappa}}\right)^{m}\left(\sqrt{\frac{1}{2\nu}}\right)^{n-m}\sum_{p=0}^{m}\sum_{q=0}^{n-m}\binom{m}{p}\binom{n-m}{q}x_{R}^{p}x_{z}^{q}\bar{x}_{R}^{m-p}\bar{x}_{z}^{n-m-q}$ (27) and $\Phi_{\mathrm{eff}}^{(m,n-m)}=\frac{\partial^{n}}{\partial R^{m}\partial z^{n-m}}\Phi_{\mathrm{eff}}\big{|}_{(R,z)=(R_{C},0)}$.
Values of the partial derivatives of the effective potential, $\Phi_{\mathrm{eff}}^{(m,n-m)}$, up to the desired maximum order of the perturbative expansion are required to carry out the Birkhoff normalization procedure. Calculation of these derivatives is most conveniently done with the aid of a computer algebra system or automatic differentiation packages. Equation (26) represents the Hamiltonian as a series of terms grouped by their order in the complex canonical variables. We will construct similar series for our generating function, $\chi=\sum_{k=k_{\mathrm{min}}}^{\infty}\chi_{k}$, and the transformed Hamiltonian, $H^{\prime}=\sum_{m=1}^{\infty}H^{\prime}_{2m}$, through an iterative process. Note that because $H^{\prime}$ depends only on the action variables, it only contains even-order terms.

#### 2.4.1 Grouping Lie Series Terms by Order

To construct our series solution, it will be necessary to group terms of the same order in the expansion of the Lie series $H^{\prime}=\exp[\mathcal{L}_{\chi}]H$. This is most readily accomplished using recursion formulae originally derived by Deprit (1969). Below, we give a derivation of these formulae that closely follows the presentation of Ferraz-Mello (2007, Chapter 5). If $f_{k}$ and $g_{l}$ are homogeneous polynomials of degree $k$ and $l$, respectively, in complex canonical variables then $[f_{k},g_{l}]$ is a homogeneous polynomial of degree $k+l-2$ in those variables.
Define $D_{k}:=[\cdot,\chi_{k}]$ so that, given series expansions $f=\sum_{k^{\prime}=k^{\prime}_{\mathrm{min}}}^{\infty}f_{k^{\prime}}$, and $\chi=\sum_{k=k_{\mathrm{min}}}^{\infty}\chi_{k}$, we can write $\displaystyle\mathcal{L}_{\chi}f$ $\displaystyle=\sum_{k=k_{\mathrm{min}}}^{\infty}\sum_{k^{\prime}=k^{\prime}_{\mathrm{min}}}^{\infty}D_{k}f_{k^{\prime}}=\sum_{l=k_{\mathrm{min}}+k^{\prime}_{\mathrm{min}}-2}^{\infty}\sum_{k=k_{\mathrm{min}}}^{l+2-k^{\prime}_{\mathrm{min}}}D_{k}f_{l+2-k}:=\sum_{l=l_{\mathrm{min}}^{(1)}}^{\infty}\Upsilon^{1}_{l}(f,\chi)$ where $l_{\mathrm{min}}^{(1)}=k_{\mathrm{min}}+k^{\prime}_{\mathrm{min}}-2$ and $\Upsilon^{1}_{l}(f,\chi)$ represents all terms of order $l$ appearing in an expansion of $\mathcal{L}_{\chi}f$. If we now substitute $\mathcal{L}_{\chi}f$ for $f$ in the expression above, we obtain $\mathcal{L}^{2}_{\chi}f=\sum_{l=k_{\mathrm{min}}+l_{\mathrm{min}}^{(1)}-2}^{\infty}\sum_{k=k_{\mathrm{min}}}^{l+2-l_{\mathrm{min}}^{(1)}}D_{k}\Upsilon^{1}_{l+2-k}(f,\chi):=\sum_{l=l_{\mathrm{min}}^{(2)}}^{\infty}\Upsilon^{2}_{l}(f,\chi)$ (29) where $l_{\mathrm{min}}^{(2)}=k_{\mathrm{min}}+l^{(1)}_{\mathrm{min}}-2$. More generally, we have $\displaystyle l_{\mathrm{min}}^{(n)}$ $\displaystyle=k_{\mathrm{min}}+l^{(n-1)}_{\mathrm{min}}-2$ (30) $\displaystyle\mathcal{L}^{n}_{\chi}f$ $\displaystyle=\sum_{l=l_{\mathrm{min}}^{(n)}}^{\infty}\sum_{k=k_{\mathrm{min}}}^{l+2-l_{\mathrm{min}}^{\mathrm{(n-1)}}}D_{k}\Upsilon^{(n-1)}_{l+2-k}(f,\chi):=\sum_{l=l_{\mathrm{min}}^{(n)}}^{\infty}\Upsilon^{n}_{l}(f,\chi)$ (31) In other words, $\Upsilon^{n}_{l}$ is the collection of order $l$ terms arising from the $n$th Lie derivative in our expansion.
Values of $\Upsilon^{n}_{l}$ for a given $n$ and $l$ can be computed using the recursion formulae $\displaystyle\Upsilon^{0}_{l}(f,\chi)$ $\displaystyle=f_{l}$ (32) $\displaystyle l_{\mathrm{min}}^{(0)}$ $\displaystyle=k^{\prime}_{\mathrm{min}}$ (33) $\displaystyle\Upsilon^{n+1}_{l}(f,\chi)$ $\displaystyle=\sum_{k=k_{\mathrm{min}}}^{l+2-l^{(n)}_{\mathrm{min}}}D_{k}\Upsilon^{n}_{l+2-k}(f,\chi)$ (34) $\displaystyle l_{\mathrm{min}}^{(n+1)}$ $\displaystyle=k_{\mathrm{min}}+l^{(n)}_{\mathrm{min}}-2~{}.$ (35)

#### 2.4.2 Expansion of the Transformed Hamiltonian

We can now write the transformed Hamiltonian, $\exp[\mathcal{L}_{\chi}]H=\sum_{n=0}^{\infty}\frac{1}{n!}{\mathcal{L}^{n}_{\chi}}H$, in terms of $\Upsilon_{l}^{n}(H,\chi)$. Since the lowest order terms appearing in $H$ are order $k^{\prime}_{\mathrm{min}}=2$, Equation (35) becomes $l_{\mathrm{min}}^{(n)}=2+(k_{\mathrm{min}}-2)n$. We have $\exp[\mathcal{L}_{\chi}]H=\sum_{n=0}^{\infty}\sum_{l=l_{\mathrm{min}}^{(n)}}^{\infty}\frac{1}{n!}\Upsilon^{n}_{l}(H,\chi)=\sum_{l=2}^{\infty}\sum_{n=0}^{N^{(l)}}\frac{1}{n!}\Upsilon^{n}_{l}(H,\chi):=\sum_{l=2}^{\infty}(\Psi_{l}+[H_{2},\chi_{l}])$ (36) where we define $N^{(l)}=\lfloor\frac{l-2}{k_{\mathrm{min}}-2}\rfloor$ and $\Psi_{l}=H_{l}+\sum_{k=k_{\mathrm{min}}}^{l-1}D_{k}H_{l+2-k}+\sum_{n=2}^{N^{(l)}}\frac{1}{n!}\Upsilon^{n}_{l}(H,\chi)~{}.$ (37) The expression in Equation (37) defining $\Psi_{l}$ only contains terms, $\chi_{k}$, in the expansion of the generating function $\chi$ with $k<l$. The terms of order $l$ gathered together in the sum in Equation (36) are, by definition, the terms of order $l$ in the transformed Hamiltonian, $H^{\prime}$. In other words, we have $\Psi_{l}+[H_{2},\chi_{l}]=H^{\prime}_{l}~{}.$ (38) Since $\Psi_{l}$ contains only terms $\chi_{k}$ with $k<l$, we can iterate over $l$, solving for $H^{\prime}_{l}$ and $\chi_{l}$ at each stage. Equation (38) is referred to as the homological equation.
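The recursion in Equations (32)-(35) and the regrouped sum in Equation (36) translate directly into code. The sketch below (an illustration, not the paper's released implementation) represents homogeneous polynomials in $x$, $\bar{x}$ as dictionaries from exponent pairs to coefficients, collects $\exp[\mathcal{L}_{\chi}]h$ order by order for one degree of freedom, and reproduces the pendulum results $h^{\prime}_{4}=-\frac{1}{16}x^{\prime 2}\bar{x}^{\prime 2}$ and $h^{\prime}_{6}=-\frac{1}{256}x^{\prime 3}\bar{x}^{\prime 3}$ of Equations (16) and (19), taking $\omega_{0}=1$:

```python
import math

# Homogeneous polynomials in (x, xbar) as dicts {(k, kbar): coefficient};
# graded series as dicts {order: polynomial}.
def pbracket(f, g):
    # [f, g] = -i(df/dx dg/dxb - df/dxb dg/dx), Eq. (7), one degree of freedom
    out = {}
    for (k1, b1), c1 in f.items():
        for (k2, b2), c2 in g.items():
            val = (-1j * k1 * b2 + 1j * b1 * k2) * c1 * c2
            if val != 0:
                key = (k1 + k2 - 1, b1 + b2 - 1)
                out[key] = out.get(key, 0) + val
    return out

def padd(p, q, scale=1.0):
    out = dict(p)
    for key, c in q.items():
        out[key] = out.get(key, 0) + scale * c
    return {key: c for key, c in out.items() if abs(c) > 1e-14}

def lie_expand(f, chi, lmax, kmin=4):
    """Collect exp(L_chi) f through order lmax via Deprit's recursion (Eqs. 32-36)."""
    result = {l: dict(f.get(l, {})) for l in range(2, lmax + 1)}  # n = 0 terms
    upsilon = {l: f.get(l, {}) for l in range(2, lmax + 1)}       # Upsilon^0
    lmin, n = 2, 0                                                # l_min^(0) = 2
    while True:
        n, lmin = n + 1, kmin + lmin - 2                          # Eq. (35)
        if lmin > lmax:
            break
        prev_lmin = lmin - (kmin - 2)                             # l_min^(n-1)
        new = {}
        for l in range(lmin, lmax + 1):
            acc = {}
            for k in range(kmin, l + 2 - prev_lmin + 1):          # Eq. (34)
                acc = padd(acc, pbracket(upsilon.get(l + 2 - k, {}),
                                         chi.get(k, {})))
            new[l] = acc
        upsilon = new
        for l, poly in upsilon.items():
            result[l] = padd(result[l], poly, scale=1.0 / math.factorial(n))
    return result

# Pendulum check with omega_0 = 1: h_{2n} from Eq. (10); chi_4, chi_6 from
# Eqs. (15) and (18); expect h'_4 = -(1/16) x^2 xb^2, h'_6 = -(1/256) x^3 xb^3.
h = {2: {(1, 1): 1.0},
     4: {(k, 4 - k): -math.comb(4, k) / 96 for k in range(5)},
     6: {(k, 6 - k): math.comb(6, k) / 5760 for k in range(7)}}
chi = {4: {key: c / 1j for key, c in
           {(4, 0): 1/384, (3, 1): 1/48, (1, 3): -1/48, (0, 4): -1/384}.items()},
       6: {key: c / 1j for key, c in
           {(6, 0): -1/15360, (5, 1): 1/15360, (4, 2): 7/3072,
            (2, 4): -7/3072, (1, 5): -1/15360, (0, 6): 1/15360}.items()}}
hp = lie_expand(h, chi, lmax=6)
```

In the two degree-of-freedom setting of Equation (26), only the exponent tuples change; the recursion itself is identical.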
#### 2.4.3 Solution of Homological Equation Recall that $\Psi_{l}$ is a homogeneous polynomial of degree $l$ in the complex variables $(x^{\prime}_{R},x^{\prime}_{z},\bar{x}^{\prime}_{R},\bar{x}^{\prime}_{z})$, i.e., a sum of terms $\propto x_{R}^{\prime k_{R}}x_{z}^{\prime k_{z}}\bar{x}_{R}^{\prime\bar{k}_{R}}\bar{x}_{z}^{\prime\bar{k}_{z}}$ with $k_{R}+k_{z}+\bar{k}_{R}+\bar{k}_{z}=l$. Let $\langle\Psi_{l}\rangle$ denote the collection of terms in $\Psi_{l}$ with $k_{R}=\bar{k}_{R}$ and $k_{z}=\bar{k}_{z}$ and let $\\{\Psi_{l}\\}:=\Psi_{l}-\langle\Psi_{l}\rangle=\sum_{i}C_{i}x_{R}^{\prime k_{R,i}}x_{z}^{\prime k_{z,i}}\bar{x}_{R}^{\prime\bar{k}_{R,i}}\bar{x}_{z}^{\prime\bar{k}_{z,i}}~{}.$ (39) Also note that $[H_{2},\chi_{k}]=-\mathrm{i}\kappa\left(\bar{x}^{\prime}_{R}\frac{\partial\chi_{k}}{\partial\bar{x}^{\prime}_{R}}-{x}^{\prime}_{R}\frac{\partial\chi_{k}}{\partial{x}^{\prime}_{R}}\right)-\mathrm{i}\nu\left(\bar{x}^{\prime}_{z}\frac{\partial\chi_{k}}{\partial\bar{x}^{\prime}_{z}}-{x}^{\prime}_{z}\frac{\partial\chi_{k}}{\partial{x}^{\prime}_{z}}\right)~{}.$ (40) At each iteration, we solve Equation (38) by taking $\displaystyle H^{\prime}_{l}$ $\displaystyle=\langle\Psi_{l}\rangle$ (41) $\displaystyle\chi_{l}$ $\displaystyle=\sum_{i}\frac{\mathrm{i}C_{i}}{(k_{R,i}-\bar{k}_{R,i})\kappa+(k_{z,i}-\bar{k}_{z,i})\nu}x_{R}^{\prime k_{R,i}}x_{z}^{\prime k_{z,i}}\bar{x}_{R}^{\prime\bar{k}_{R,i}}\bar{x}_{z}^{\prime\bar{k}_{z,i}}~{}.$ (42) Equation (42) assumes a non-resonance condition on $\kappa$ and $\nu$: we cannot have terms with $(k_{R,i}-\bar{k}_{R,i})\kappa+(k_{z,i}-\bar{k}_{z,i})\nu=0$. (Such resonant terms can instead be incorporated into $H^{\prime}_{l}$ by simply omitting the corresponding terms from the sum in Equation (42) defining $\chi_{l}$; the transformed Hamiltonian would, however, no longer be a function of actions alone.)
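As a minimal sketch of how Equations (39)–(42) translate into code (hypothetical function and variable names; this is not celmech's implementation), a homogeneous polynomial $\Psi_{l}$ can be stored as a dictionary mapping exponent tuples $(k_{R},k_{z},\bar{k}_{R},\bar{k}_{z})$ to coefficients, split into its averaged and oscillating parts, and used to build $\chi_{l}$:

```python
# Sketch of the homological-equation solve, Equations (39)-(42). A homogeneous
# polynomial Psi_l is stored as {(kR, kz, kbR, kbz): C_i}; kappa and nu are the
# radial and vertical epicyclic frequencies. Names are illustrative only.

def solve_homological(Psi_l, kappa, nu, tol=1e-12):
    H_l = {}    # <Psi_l>: terms with kR == kbR and kz == kbz  -> H'_l, Eq. (41)
    chi_l = {}  # generating-function terms, Equation (42)
    for (kR, kz, kbR, kbz), C in Psi_l.items():
        omega = (kR - kbR) * kappa + (kz - kbz) * nu
        if kR == kbR and kz == kbz:
            H_l[(kR, kz, kbR, kbz)] = C
        elif abs(omega) > tol:
            chi_l[(kR, kz, kbR, kbz)] = 1j * C / omega
        else:
            raise ValueError("resonant term: (kR-kbR)*kappa + (kz-kbz)*nu = 0")
    return H_l, chi_l

# Example at order l = 4 with made-up coefficients:
# Psi_4 = 2 xR xbarR xz xbarz + 0.5 xR^2 xbarz^2
kappa, nu = 0.9, 0.6
Psi4 = {(1, 1, 1, 1): 2.0, (2, 0, 0, 2): 0.5}
H4, chi4 = solve_homological(Psi4, kappa, nu)
# H'_4 keeps the action-like term; chi_4 removes the oscillating one,
# with denominator 2*kappa - 2*nu = 0.6.
```

Because $[H_{2},\cdot]$ multiplies each monomial by $\mathrm{i}\left[(k_{R}-\bar{k}_{R})\kappa+(k_{z}-\bar{k}_{z})\nu\right]$ (Equation 40), the $\chi_{l}$ coefficients above cancel $\\{\Psi_{l}\\}$ exactly, leaving $H^{\prime}_{l}=\langle\Psi_{l}\rangle$.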
### 2.5 Azimuthal degree of freedom Thus far we have only considered the orbits of stars in the reduced phase space, ignoring their azimuthal degree of freedom. Quasi-periodic orbits in the full 6-dimensional phase space will be characterized by a third dynamical frequency, $\Omega_{\phi}$, in addition to the two frequencies, $\Omega_{R}=\frac{1}{x_{R}}\frac{\partial}{\partial\bar{x}_{R}}H^{\prime}$ and $\Omega_{z}=\frac{1}{x_{z}}\frac{\partial}{\partial\bar{x}_{z}}H^{\prime}$, readily obtained from the Birkhoff normalization procedure. Here we detail how this third frequency can be determined and used to define an additional angle variable, $\theta_{\phi}$, once the Birkhoff normalization procedure has been applied to solve for orbits’ dynamical evolution in the reduced phase space. The time derivative of the azimuthal angle, $\phi$, is given in terms of complex canonical variables by $\displaystyle\frac{d}{dt}{\phi}=\frac{L}{R_{C}^{2}}\left(1+\frac{x_{R}+\bar{x}_{R}}{R_{C}\sqrt{2\kappa}}\right)^{-2}~{}.$ (43) For non-circular orbits, the right-hand side of Equation (43) is not constant in time but will oscillate about a mean value that defines the azimuthal frequency, $\Omega_{\phi}$. Introducing the canonical angle variable $\theta_{\phi}=\Omega_{\phi}t+\phi_{0}$ we can write the $\phi$ coordinate explicitly as $\displaystyle\phi=\theta_{\phi}+\rho_{\phi}(x^{\prime}_{R},x^{\prime}_{z},\bar{x}^{\prime}_{R},\bar{x}^{\prime}_{z},L)$ (44) where $\rho_{\phi}$ is a zero-mean $2\pi$-periodic function of $\theta_{R}=-\arg{x^{\prime}_{R}}$ and $\theta_{z}=-\arg{x^{\prime}_{z}}$. We can combine Equations (43) and (44) to express $\Omega_{\phi}$ and $\rho_{\phi}$ in terms of the variables, $x^{\prime}_{R}$ and $x^{\prime}_{z}$, and their complex conjugates.
Recalling the notation $\langle\cdot\rangle$ and $\\{\cdot\\}$ introduced above in Section 2.4.3 to denote mean and oscillating terms of a Poisson series, the equations $\displaystyle\Omega_{\phi}$ $\displaystyle=$ $\displaystyle\frac{L}{R_{C}^{2}}\left\langle\left(\exp[\mathcal{L}_{\chi}]\sum_{m=0}^{\infty}\binom{-2}{m}\left(\frac{x_{R}+\bar{x}_{R}}{R_{C}\sqrt{2\kappa}}\right)^{m}\right)\right\rangle$ (45) $\displaystyle\frac{d}{dt}\rho_{\phi}$ $\displaystyle=$ $\displaystyle\frac{L}{R_{C}^{2}}\left\\{\left(\exp[\mathcal{L}_{\chi}]\sum_{m=0}^{\infty}\binom{-2}{m}\left(\frac{x_{R}+\bar{x}_{R}}{R_{C}\sqrt{2\kappa}}\right)^{m}\right)\right\\}~{}$ (46) define $\Omega_{\phi}$ and the time derivative of $\rho_{\phi}$ as series in the variables $x^{\prime}_{R}$ and $x^{\prime}_{z}$ and their complex conjugates that can be expanded up to some maximum desired order. In order to derive an expression for $\rho_{\phi}$, we express the series on the right hand side of Equation (46) as a sum of monomial terms $\displaystyle\left\\{\left(\exp[\mathcal{L}_{\chi}]\sum_{m=0}^{\infty}\binom{-2}{m}\left(\frac{x_{R}+\bar{x}_{R}}{R_{C}\sqrt{2\kappa}}\right)^{m}\right)\right\\}=\sum_{i}A_{i}x_{R}^{\prime k_{R,i}}x_{z}^{\prime k_{z,i}}\bar{x}_{R}^{\prime\bar{k}_{R,i}}\bar{x}_{z}^{\prime\bar{k}_{z,i}}$ (47) and then integrate with respect to time to obtain $\displaystyle\rho_{\phi}(x^{\prime}_{R},x^{\prime}_{z},\bar{x}^{\prime}_{R},\bar{x}^{\prime}_{z},L)=\mathrm{i}\frac{L}{R_{C}^{2}}\sum_{i}\frac{A_{i}x_{R}^{\prime k_{R,i}}x_{z}^{\prime k_{z,i}}\bar{x}_{R}^{\prime\bar{k}_{R,i}}\bar{x}_{z}^{\prime\bar{k}_{z,i}}}{(k_{R,i}-\bar{k}_{R,i})\kappa+(k_{z,i}-\bar{k}_{z,i})\nu}~{}.$ (48) It follows immediately from Hamilton’s equations that $J_{\phi}=L$ serves as canonical action coordinate conjugate to the angle $\theta_{\phi}$.
We therefore have established a complete canonical transformation from the original variables $(R,z,\phi,p_{R},p_{z},L)$ of the 6-dimensional phase space to a complete set of AA variables, $(\theta_{R},\theta_{z},\theta_{\phi},J_{R},J_{z},J_{\phi})$ where $J_{i}=|x^{\prime}_{i}|^{2}$ and $\theta_{i}=-\arg x^{\prime}_{i}$ for the subscript $i\in\\{z,R\\}$. ### 2.6 Numerical Implementation The Birkhoff normalization algorithm described above is implemented in the poisson_series module of the open-source Python package celmech (Hadden & Tamayo, 2022). This module provides routines for representing and manipulating so-called “Poisson series” (Brumberg, 1995). These are series in $N$ complex canonical variables $x_{i}$ and their complex conjugates, $\bar{x}_{i}$ along with $M$ real action-angle pairs, $(P_{i},Q_{i})$, that are given as sums of individual monomial terms of the form ${\cal M}(k,\bar{k},p,q)=\prod_{i=1}^{N}x_{i}^{k_{i}}\bar{x}_{i}^{\bar{k}_{i}}\prod_{j=1}^{M}P_{j}^{p_{j}}\exp[iq_{j}Q_{j}]~{}.$ (49) The Poisson bracket of any pair of Poisson series is another Poisson series, as can be readily seen by considering the Poisson bracket of two monomial terms. Defining the length-$N$ vectors $o^{N}_{i}$ such that $[o^{N}_{i}]_{j}=\delta_{ij}$ where $\delta_{ij}$ is the Kronecker delta, the Poisson bracket of two monomial terms is then a Poisson series consisting of (at most) $N+M$ new monomial terms and given by $[{\cal M}(k,\bar{k},p,q),{\cal M}(l,\bar{l},r,s)]=\mathrm{i}\sum_{i=1}^{N}(\bar{k}_{i}{l}_{i}-k_{i}\bar{l}_{i}){\cal M}(k+l-o^{N}_{i},\bar{k}+\bar{l}-o^{N}_{i},p+r,q+s)+\\\ \mathrm{i}\sum_{j=1}^{M}(q_{j}r_{j}-s_{j}p_{j}){\cal M}(k+l,\bar{k}+\bar{l},p+r-o^{M}_{j},q+s)~{}.$ (50) The poisson_series module defines the PoissonSeries class to store sums of monomial terms of the form given in Equation (49) multiplied by numerical coefficients.
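Equation (50) can be transcribed almost directly into code. The sketch below is illustrative only and is independent of how celmech's poisson_series module implements this internally; it computes the bracket of two monomial terms with $N$ complex-variable pairs and $M$ action-angle pairs:

```python
# Direct transcription of Equation (50): Poisson bracket of two monomials
# M(k, kbar, p, q). A term is stored as (coeff, k, kbar, p, q) with the
# exponents k, kbar, p, q given as tuples. Illustrative sketch only.

def monomial_bracket(term1, term2):
    """Return the bracket as a dict {(k, kbar, p, q): coeff}."""
    c1, k, kb, p, q = term1
    c2, l, lb, r, s = term2
    N, M = len(k), len(p)
    out = {}
    def add(key, val):
        out[key] = out.get(key, 0) + val
    for i in range(N):  # first sum in Equation (50)
        pref = 1j * (kb[i] * l[i] - k[i] * lb[i])
        if pref:
            knew = tuple(k[j] + l[j] - (j == i) for j in range(N))
            kbnew = tuple(kb[j] + lb[j] - (j == i) for j in range(N))
            add((knew, kbnew,
                 tuple(pj + rj for pj, rj in zip(p, r)),
                 tuple(qj + sj for qj, sj in zip(q, s))), pref * c1 * c2)
    for j in range(M):  # second sum in Equation (50)
        pref = 1j * (q[j] * r[j] - s[j] * p[j])
        if pref:
            pnew = tuple(p[i] + r[i] - (i == j) for i in range(M))
            add((tuple(ki + li for ki, li in zip(k, l)),
                 tuple(kbi + lbi for kbi, lbi in zip(kb, lb)),
                 pnew,
                 tuple(qi + si for qi, si in zip(q, s))), pref * c1 * c2)
    return out

# Sanity check with N = 1, M = 0: [x, xbar] = -i.
x    = (1.0, (1,), (0,), (), ())
xbar = (1.0, (0,), (1,), (), ())
print(monomial_bracket(x, xbar))  # single term with coefficient -1j
```

The two loops make explicit why the bracket of two monomials contains at most $N+M$ new monomial terms: each degree of freedom contributes one term, and vanishing prefactors are dropped.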
It also defines a number of routines for manipulating these series, including performing scalar multiplication, multiplying or adding pairs of series, and calculating Poisson brackets. Note that the Poisson series used in this paper involve only complex canonical variables (i.e., $M=0$). The poisson_series module additionally defines routines for carrying out the Birkhoff normalization procedure and evaluating finite-order approximations of exponential operators, $\exp[\mathcal{L}_{\chi}]$, applied to series. These routines require grouping terms in the expansions of various functions, such as the Hamiltonian, $H$, and the generating function, $\chi$, by their order in powers of the complex canonical variables. In the framework of the poisson_series module, this grouping is done by using Python dictionary objects to associate PoissonSeries objects representing terms in an expansion of a given order to key values that indicate their order. Example Jupyter notebooks illustrating the use of celmech’s poisson_series module to carry out the Birkhoff normalization procedure are available at github.com/shadden/AA_variables_via_Birkhoff_Normalization. ### 2.7 Padé Approximants Since the Birkhoff normalization procedure is based upon a Taylor expansion of the effective potential, the method can only be expected to provide accurate results for orbits that remain within regions of phase space where this expansion is convergent. In practice, this can make the procedure inapplicable to orbits that experience significant excursions in the vertical direction. For example, the Miyamoto-Nagai potential, $\Phi(R,z)=-\frac{1}{\sqrt{R^{2}+(a+\sqrt{z^{2}+b^{2}})^{2}}}~{}$ (51) with parameters $a$ and $b$, is commonly adopted as a model for the galactic disk’s contribution to the gravitational potential (Miyamoto & Nagai, 1975).
The parameter $b$ effectively sets the thickness of the galactic disk and the branch points of Equation (51) occurring in the complex plane at $z=\pm ib$ restrict the convergence of Taylor series expansions to $|z|<b$. Accurate local approximations of functions can sometimes be extended beyond the domain of convergence of their Taylor series with the use of Padé approximants. The $(n,m)$ Padé approximant of a function is the rational function with numerator of degree $m$ and denominator of degree $n$ that agrees with the Taylor series expansion of the function up to order $x^{m+n}$ (e.g., Teukolsky et al., 1992). If a function’s Taylor series begins $\sum_{k=0}^{n+m}c_{k}x^{k}$ then the coefficients of an $(n,m)$ Padé approximant are readily obtained by multiplying both left and right hand sides of $\displaystyle\sum_{k=0}^{n+m}c_{k}x^{k}=\frac{a_{0}+a_{1}x+...a_{m}x^{m}}{1+b_{1}x+...+b_{n}x^{n}}\mod{x^{n+m+1}}$ (52) by $1+b_{1}x+...+b_{n}x^{n}$. The coefficients, $b_{i}$, satisfy the linear equation $\displaystyle\begin{pmatrix}c_{n+m-1}&\ldots&c_{m}\\\ \vdots&\ddots&\vdots\\\ c_{m}&\ldots&c_{m-n+1}\end{pmatrix}\cdot\begin{pmatrix}b_{1}\\\ \vdots\\\ b_{n}\end{pmatrix}=-\begin{pmatrix}c_{n+m}\\\ \vdots\\\ c_{m+1}\end{pmatrix}.$ (53) and the coefficients, $a_{i}$, are then given by $a_{i}=\sum_{k=0}^{n}c_{i-k}b_{k}$ where $b_{0}:=1$ and $c_{k}$ is taken to be $0$ for $k<0$. Below in Section 3, we show that constructing Padé approximants in the variable $I_{z}=|x_{z}|^{2}$ often improves the accuracy of the Birkhoff normalization procedure. The Padé approximants involve coefficients, $b_{i}$ and $a_{i}$, that are functions of $x_{R},~{}\bar{x}_{R}$ and $\phi_{z}=-\arg(x_{z})$. These coefficients are determined by evaluating finite-order truncations of the series $x^{\prime}_{R}=\exp[-\mathcal{L}_{\chi}]x_{R}$ and $x^{\prime}_{z}=\exp[-\mathcal{L}_{\chi}]x_{z}$ and then grouping terms by powers of $I_{z}$.
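The linear solve of Equation (53) followed by the back-substitution for the $a_{i}$ is straightforward to implement. The sketch below (a hypothetical helper, using exact rational arithmetic) reproduces, as a check, the well-known $(2,2)$ Padé approximant of $e^{x}$:

```python
# Sketch of the Pade construction in Equations (52) and (53): given Taylor
# coefficients c[0..n+m], solve the linear system for b_1..b_n, then form
# a_i = sum_k c_{i-k} b_k. Exact fractions; no external dependencies.
from fractions import Fraction
from math import factorial

def pade(c, n, m):
    """Return (a, b) coefficient lists of the (n, m) Pade approximant."""
    # Build the n x n system of Equation (53): rows j = n, n-1, ..., 1,
    # with c_k := 0 for k < 0.
    A = [[c[m + j - k] if m + j - k >= 0 else Fraction(0)
          for k in range(1, n + 1)] for j in range(n, 0, -1)]
    rhs = [-c[m + j] for j in range(n, 0, -1)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [u - f * v for u, v in zip(A[r], A[col])]
            rhs[r] -= f * rhs[col]
    b = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        b[r] = (rhs[r] - sum(A[r][k] * b[k] for k in range(r + 1, n))) / A[r][r]
    b = [Fraction(1)] + b  # prepend b_0 := 1
    a = [sum(c[i - k] * b[k] for k in range(min(i, n) + 1))
         for i in range(m + 1)]
    return a, b

# Example: (2, 2) Pade approximant of exp(x), with c_k = 1/k!.
c = [Fraction(1, factorial(k)) for k in range(5)]
a, b = pade(c, 2, 2)
# Known result: exp(x) ~ (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
```

At $x=1$ this approximant evaluates to $19/7\approx 2.714$, already within $0.2\%$ of $e$, illustrating how a rational form can outperform the truncated Taylor series it is built from.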
Note that the generating function, $\chi$, is developed as a series in powers of $x_{z}$ and $\bar{x}_{z}$, which are both proportional to $\sqrt{I_{z}}$. However, it is straightforward to show that for any $\Phi(R,z)$ like Equation (51) that is an even function of $z$, only integer powers of $I_{z}$ will appear in $\chi$ as well as finite-order truncations of $\exp[-\mathcal{L}_{\chi}]x_{R}$. Thus, these finite-order truncations are polynomials in $I_{z}$ (rather than $\sqrt{I_{z}}$) and can be used to obtain Padé approximants in $I_{z}$. Additionally, finite-order truncations of $\exp[-\mathcal{L}_{\chi}]x_{z}$ yield expressions that can be written as $x_{z}P(I_{z})$ where $P$ is a polynomial in $I_{z}$ with coefficients that depend on $x_{R},\bar{x}_{R},$ and $\phi_{z}$. Padé approximants are derived from the polynomial $P$ and used to calculate $\exp[-\mathcal{L}_{\chi}]x_{z}$ in Section 3. ## 3 Tests In this section we apply the Birkhoff normalization procedure to orbits in the Miyamoto-Nagai potential in Equation (51). The parameters $a=3$, $b=0.3$, and $L=3$ are chosen, with a corresponding circular orbit radius of $R_{C}(L)\approx 10.4$. With distance units taken to be kpc, these parameters approximately match the parameters adopted for the Miyamoto-Nagai component of the Milky Way potential model, MWPotential2014, implemented in the galpy Python package (Bovy, 2015). The Birkhoff normalization procedure is carried out by developing the generating function, $\chi$, up to 10th order in complex canonical variables. The sympy symbolic mathematics package (Meurer et al., 2017) is used to calculate the requisite partial derivatives of the effective potential. We also compute actions and frequencies using the Stäckel approximation method introduced by Binney (2012). These calculations are done using the actionAngle.actionAngleStäckel function implemented in galpy (Bovy & Rix, 2013; Bovy, 2015).
This function requires as input an estimated focal length parameter, $\Delta$, in addition to a potential and orbital initial conditions. For all orbits considered, the galpy function actionAngle.estimateDeltaStäckel is used to estimate the focal length based on partial derivatives of the potential evaluated at $(R,z)=(R_{C},0)$ following Sanders (2012). Figure 1: Color maps showing r.m.s. fractional variations in estimated values of actions on a grid of orbits in a Miyamoto-Nagai potential (Equation 51) with parameters $a=3$ and $b=0.3$. Actions are computed along orbits starting with $(L,R,z)=(3,R_{C},0)$ and initial velocity values indicated by the axes, which are plotted in units of the circular velocity, $v_{C}=L/R_{C}$. The top row shows variations in the radial action, $J_{R}$, while the bottom row shows variations in the vertical action, $J_{z}$. Columns are labeled according to the method used to compute action values. See text for additional details. Dash-dot, dashed, and solid contours mark fractional error levels of $0.1\%,1\%$ and $10\%$, respectively. Figure 1 illustrates the accuracy of actions determined by Birkhoff normalization and Stäckel approximation across a range of initial conditions. Orbits are initialized at $(R,z)=(R_{C},0)$ over the plotted range of $(p_{R},p_{z})$ values and integrated for 10 times the radial epicyclic period (i.e., $T=10\times(2\pi/\kappa)$) with 512 outputs generated at equally-spaced intervals along the orbits. Transformed complex canonical variables, $x^{\prime}_{i}$, are computed as functions of the un-transformed variables along a given orbit using various approximations to the series $\displaystyle x^{\prime}_{i}(x_{R},x_{z},\bar{x}_{R},\bar{x}_{z})=(\exp[-\mathcal{L}_{\chi}]x_{i})(x_{R},x_{z},\bar{x}_{R},\bar{x}_{z})$ (54) where the subscript $i$ denotes $R$ or $z$. Action values are then computed from the transformed complex canonical variables as $J_{i}=|x^{\prime}_{i}|^{2}$. The root-mean-square (r.m.s.)
fractional variations in these action values computed along each orbit are recorded in the color scale. The leftmost column of Figure 1 shows the r.m.s. variations in actions computed using the epicyclic approximation, where Equation (54) is approximated simply as $x_{i}^{\prime}\approx x_{i}$. The variations in actions computed via this approximation provide a measure of how significantly orbits are affected by the anharmonicity of the potential. The second column of Figure 1 shows the r.m.s. variations for actions computed by expanding the right hand side of Equation (54) as a Taylor series. With $\chi$ determined up to 10th order in the canonical variables, the terms of these series are determined up to 9th order in the complex canonical variables. The third and fourth columns show r.m.s. variations for actions computed by constructing $(3,1)$ and $(2,2)$ Padé approximants in $I_{z}$ as described in Section 2.7. The final column shows the r.m.s. variations for actions computed using the Stäckel approximation. The Taylor series approximations of actions in Figure 1 provide good agreement with the numerical results for mid-plane vertical velocity values $p_{z}\lesssim 0.1v_{C}$, where $v_{C}=L/R_{C}$ is the circular orbital velocity. The poor agreement for greater vertical velocities is attributable to the fact that orbits with greater vertical velocities reach heights $z>b$, beyond the radius of convergence of the Taylor series expansion of the potential in Equation (51). An estimate of the maximum vertical height, $z_{\mathrm{max}}$, achieved by an orbit with a given vertical velocity, $p_{z,0}$, in the midplane can be obtained by neglecting any motion in the radial direction and using conservation of energy to equate $\frac{1}{2}p_{z,0}^{2}\approx\Phi_{\mathrm{eff}}(R_{C},z_{\mathrm{max}})-\Phi_{\mathrm{eff}}(R_{C},0)$.
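This energy-balance estimate is easy to verify numerically for the Miyamoto-Nagai parameters used in Section 3 ($a=3$, $b=0.3$, $L=3$). The short script below, an illustrative check rather than part of the paper's pipeline, solves for $R_{C}$ by bisection and evaluates the resulting midplane velocity ratio $p_{z,0}/v_{C}$; at fixed $R$ the centrifugal term $L^{2}/(2R^{2})$ in $\Phi_{\mathrm{eff}}$ cancels from the difference, so $\Phi$ alone suffices:

```python
# Numerical check of the z_max estimate above for the Miyamoto-Nagai potential
# of Equation (51) with a = 3, b = 0.3, L = 3 (the Section 3 parameters).
from math import sqrt

a, b, L = 3.0, 0.3, 3.0

def phi(R, z):  # Equation (51)
    return -1.0 / sqrt(R**2 + (a + sqrt(z**2 + b**2))**2)

# Circular-orbit radius R_C from L^2 / R^3 = dPhi/dR at z = 0; the residual
# below is monotonically increasing in R, so bisection applies.
def circ_residual(R):
    return R**4 / (R**2 + (a + b)**2)**1.5 - L**2

lo, hi = 1.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if circ_residual(mid) < 0:
        lo = mid
    else:
        hi = mid
RC = 0.5 * (lo + hi)  # ~10.4
vC = L / RC           # circular velocity

# Energy balance with radial motion neglected and z_max = b:
# (1/2) p_z0^2 ~ Phi(R_C, b) - Phi(R_C, 0)
pz0 = sqrt(2.0 * (phi(RC, b) - phi(RC, 0.0)))
print(RC, pz0 / vC)  # roughly 10.4 and 0.09
```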
Taking $z_{\mathrm{max}}=b$, the corresponding midplane velocity is $p_{z,0}\approx 0.09v_{C}$, closely matching the value of mid-plane velocity above which the Taylor series approximations fail in Figure 1. The Padé approximants clearly extend the range over which the Birkhoff normalization procedure accurately predicts actions, achieving r.m.s. variations in radial actions of $\lesssim 10\%$ for initial vertical velocities $p_{z}\lesssim 0.2v_{C}$. The (2,2) Padé approximant is slightly more accurate than the (3,1) approximant. Performance of the algorithm for vertical action determination is slightly worse, with $\lesssim 10\%$ accuracy restricted to initial vertical velocities $p_{z}\lesssim 0.15v_{C}$. Figure 2 compares the accuracy of action determinations made using Birkhoff normalization with (2,2) Padé approximants against the Stäckel approximation. Birkhoff normalization provides a more accurate determination of radial actions over the majority of the plotted range. Vertical actions are also more accurately determined for $p_{z}\lesssim 0.1v_{C}$. Figure 2: Comparison of the accuracy of actions determined using Birkhoff normalization and (2,2) Padé approximants versus the Stäckel approximation. The color scale indicates the r.m.s. variation in actions computed using the Stäckel approximation divided by r.m.s. variations computed via Birkhoff normalization so that red colors indicate orbits where Birkhoff normalization yields better accuracy while blue colors indicate orbits for which the Stäckel approximation is more accurate. The grid of orbits is the same as shown in Figure 1. Figure 3 illustrates the accuracy of the determination of the dynamical frequencies $\Omega_{R}$, $\Omega_{z}$, and $\Omega_{\phi}$ using various methods. 
Exact values of frequencies are determined numerically using the frequency modified Fourier transform algorithm of Šidlichovský & Nesvorný (1996), implemented in the celmech package as the function miscellaneous.frequency_modified_fourier_transform. The method was applied to the time series of complex canonical variables, $x_{R}$ and $x_{z}$, for each orbit integration and the frequencies of the largest Fourier components determined for each variable were identified as $-\Omega_{R}$ and $-\Omega_{z}$, respectively. Values of $\Omega_{\phi}$ were determined by applying the same method to time series of $e^{\mathrm{i}\phi}$. The first column of Figure 3 records the fractional errors $\kappa/\Omega_{R}-1$, $\nu/\Omega_{z}-1$, and $R_{C}^{-2}L/\Omega_{\phi}-1$ incurred by adopting the epicyclic approximation for the dynamical frequencies. These panels illustrate that the frequencies $\Omega_{R}$ and $\Omega_{\phi}$ vary gradually with both initial radial and vertical velocities while the frequency $\Omega_{z}$ is a strong function of the initial vertical velocity. The second and third columns of Figure 3 record fractional errors incurred by using Birkhoff normalization and Padé approximants to determine frequencies. Frequencies were determined as follows. First, values of complex canonical variables were assigned based on orbits’ initial conditions as $x_{R}=\mathrm{i}\sqrt{\frac{1}{2\kappa}}p_{R}$ and $x_{z}=\mathrm{i}\sqrt{\frac{1}{2\nu}}p_{z}$ (see Equations 5 and 6). Next, values of transformed complex canonical variables, $x^{\prime}_{R}$ and $x^{\prime}_{z}$, were computed using the Padé approximation method described in Section 2.7. Then, series expressions for frequencies, $\Omega_{R}(J_{R},J_{z})=\partial_{J_{R}}H^{\prime}$ and $\Omega_{z}(J_{R},J_{z})=\partial_{J_{z}}H^{\prime}$, derived from the transformed Hamiltonian, $H^{\prime}$, and the series expression for $\Omega_{\phi}$ given by Equation (45) were computed.
Finally, these series expressions were used to construct a second set of (3,1) and (2,2) Padé approximants, this time in $J_{z}=|x^{\prime}_{z}|^{2}$, and evaluated. Comparing Figures 1 and 3, we see that dynamical frequencies are accurately recovered in essentially the same regions of phase space that Birkhoff normalization and Padé approximants provide accurate action variables. The fourth column of Figure 3 shows frequency errors incurred using the Stäckel approximation for comparison. Figure 3: Color maps showing fractional errors in the estimated values of radial, vertical, and azimuthal frequencies on a grid of orbits in the Miyamoto-Nagai potential (Equation (51)) with parameters $a=3$ and $b=0.3$. Frequencies are computed for orbits starting with $(L,R,z)=(3,R_{C},0)$ and initial velocity values indicated by the axes, which are plotted in units of the circular velocity, $v_{C}=L/R_{C}$. The top, middle, and bottom rows show fractional errors in the radial ($\Omega_{R}$), vertical ($\Omega_{z}$), and azimuthal ($\Omega_{\phi}$) frequencies, respectively. Columns are labeled according to the method used to compute the frequency values. See text for details. Dash-dot, dashed, and solid contours mark fractional error levels of $0.1\%,1\%$ and $10\%$, respectively. ## 4 Summary & Discussion We have shown how Birkhoff normalization, a technique from Hamiltonian perturbation theory, can be algorithmically implemented and applied to derive AA variables for stellar orbits in axisymmetric galactic potentials. One significant advantage of this method is that it provides explicit expressions for actions and angle variables as functions of positions and velocities and vice versa. It similarly yields an explicit expression for the Hamiltonian as a function of the action variables. The only input required by the method is the partial derivatives of the effective potential up to the maximum desired order of the series expansions.
Since Birkhoff normalization relies on expansion of the effective potential about its value for circular, planar orbits, its range of applicability is nominally limited by the radius of convergence of this expansion. However, as illustrated by the tests in Section 3, the series expressions generated by the procedure can be used to construct Padé approximants and extend the range over which it provides accurate results. The tests presented in Section 3 showed that Birkhoff normalization provides comparable or better accuracy than the popular Stäckel approximation, at least for quasi-planar orbits. The expressions provided by Birkhoff normalization take the form of polynomials involving the complex canonical variables introduced in Section 2.2. This makes evaluating both forward and inverse transformations between position-velocity data and AA variables exceptionally computationally efficient. This computational efficiency represents another potential advantage of the method over the Stäckel approximation, which requires evaluation of integrals via numerical quadrature. The most computationally expensive parts of the Birkhoff normalization algorithm will generally be the initial calculation of potential derivatives and construction of series. However, these calculations represent a fixed cost and need only be performed once, after which the Birkhoff normalization procedure can be applied to as many orbits as desired. For the tests presented in Section 3, computing partial derivatives of the Miyamoto-Nagai potential up to 10th order via symbolic means required $\sim 20~{}\mathrm{s}$ on a 2.5 GHz Intel i7 processor. The initial construction of the series expression for $\chi$ up to 10th order in complex canonical variables took $\sim 3~{}\mathrm{s}$ and calculations of additional series expressions approximating the exponential operators $\exp[\pm\mathcal{L}_{\chi}]$ applied to various functions, such as Equation (54), took $\sim 1~{}\mathrm{s}$ each.
A potential downside to the Birkhoff normalization method, in terms of computational cost, is that the effective potential partial derivatives and series must nominally be recalculated for each value of angular momentum considered. A simple strategy for applying the method to orbits with a range of angular momenta would be to carry out series constructions on a 1 dimensional grid of angular momenta and simply interpolate results. Alternatively, the construction of various series could be carried out symbolically with the aid of a computer algebra system. We leave the development of such methods to future work as the choice of optimal strategy is likely to be application-specific. We conclude by mentioning a few potential future applications of the Birkhoff normalization procedure presented in this paper. First, analytic expressions furnished by the Birkhoff normalization procedure could make it well-suited for problems requiring the representation of equilibrium distribution functions, which by Jeans' theorem must depend only on integrals of motion, in terms of spatial positions and velocities. Additionally, the explicit expression for the Hamiltonian in terms of action variables provided by the Birkhoff normalization procedure can be used as a starting point for using perturbation theory to understand the non-equilibrium signatures of dynamical perturbations like those induced by a stellar bar (e.g., Binney, 2018) or satellite galaxy (e.g., Banik et al., 2022). Finally, even though the Birkhoff normalization procedure may not supply accurate AA variables for orbits with large vertical actions, it could provide useful initial ‘toy’ AA variables that serve as the starting point of the torus mapping method of McGill & Binney (1990). ## 5 Acknowledgments I thank Rimpei Chiba and Neige Frankel for helpful discussions. I thank Scott Tremaine for suggesting the use of Padé approximants.
Simulations were performed on the Sunnyvale cluster at the Canadian Institute for Theoretical Astrophysics. I acknowledge support by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding references CITA 490888-16 and RGPIN-2020-03885. This project was developed in part at the Gaia Hike, a workshop hosted by the University of British Columbia and the Canadian Institute for Theoretical Astrophysics in 2022 June. ## References * Banik et al. (2022) Banik, U., Weinberg, M. D., & van den Bosch, F. C. 2022, ApJ, 935, 135, doi: 10.3847/1538-4357/ac7ff9 * Binney (2012) Binney, J. 2012, MNRAS, 426, 1324, doi: 10.1111/j.1365-2966.2012.21757.x * Binney (2018) —. 2018, MNRAS, 474, 2706, doi: 10.1093/mnras/stx2835 * Binney & Spergel (1982) Binney, J., & Spergel, D. 1982, ApJ, 252, 308, doi: 10.1086/159559 * Binney & Spergel (1984) —. 1984, MNRAS, 206, 159, doi: 10.1093/mnras/206.1.159 * Binney & Tremaine (2008) Binney, J., & Tremaine, S. 2008, Galactic Dynamics: Second Edition (Princeton, NJ: Princeton University Press) * Birkhoff (1927) Birkhoff, G. D. 1927, Dynamical systems, Vol. 9 (American Mathematical Soc.) * Bovy (2015) Bovy, J. 2015, ApJS, 216, 29, doi: 10.1088/0067-0049/216/2/29 * Bovy & Rix (2013) Bovy, J., & Rix, H.-W. 2013, ApJ, 779, 115, doi: 10.1088/0004-637X/779/2/115 * Brumberg (1995) Brumberg, V. A. 1995, Analytical Techniques of Celestial Mechanics (Berlin: Springer-Verlag) * Contopoulos (1963) Contopoulos, G. 1963, AJ, 68, 1, doi: 10.1086/108903 * Contopoulos (2003) —. 2003, Journal of Physics A: Mathematical and General, 36, 8639. http://resolver.scholarsportal.info/resolve/03054470/v36i0032/8639_nofiom.xml * de Zeeuw & Merritt (1983) de Zeeuw, T., & Merritt, D. 1983, ApJ, 267, 571, doi: 10.1086/160894 * Deprit (1969) Deprit, A. 1969, Celestial Mechanics, 1, 12, doi: 10.1007/BF01230629 * Efthymiopoulos et al. (2004) Efthymiopoulos, C., Giorgilli, A., & Contopoulos, G. 
2004, Journal of Physics A Mathematical General, 37, 10831, doi: 10.1088/0305-4470/37/45/008 * Ferraz-Mello (2007) Ferraz-Mello, S. 2007, Canonical perturbation theories: degenerate systems and resonance, Vol. 345 (Springer Science & Business Media) * Goldstein et al. (2002) Goldstein, H., Poole, C., & Safko, J. 2002, Classical mechanics (Addison-Wesley) * Gustavson (1966) Gustavson, F. G. 1966, AJ, 71, 670, doi: 10.1086/110172 * Hadden & Tamayo (2022) Hadden, S., & Tamayo, D. 2022, AJ, 164, 179, doi: 10.3847/1538-3881/ac8d01 * Henon & Heiles (1964) Henon, M., & Heiles, C. 1964, AJ, 69, 73, doi: 10.1086/109234 * Lichtenberg & Lieberman (1992) Lichtenberg, A., & Lieberman, M. 1992, Regular and Chaotic Dynamics (New York: Springer-Verlag) * Lowenstein (2012) Lowenstein, J. H. 2012, Essentials of Hamiltonian dynamics (Cambridge, UK: Cambridge University Press) * McGill & Binney (1990) McGill, C., & Binney, J. 1990, MNRAS, 244, 634. https://ui.adsabs.harvard.edu/abs/1990MNRAS.244..634M * Meurer et al. (2017) Meurer, A., Smith, C. P., Paprocki, M., et al. 2017, PeerJ Computer Science, 3, e103, doi: 10.7717/peerj-cs.103 * Miyamoto & Nagai (1975) Miyamoto, M., & Nagai, R. 1975, PASJ, 27, 533 * Saaf (1968) Saaf, A. F. 1968, ApJ, 154, 483, doi: 10.1086/149776 * Sanders (2012) Sanders, J. 2012, MNRAS, 426, 128, doi: 10.1111/j.1365-2966.2012.21698.x * Sanders & Binney (2016) Sanders, J. L., & Binney, J. 2016, MNRAS, 457, 2107, doi: 10.1093/mnras/stw106 * Teukolsky et al. (1992) Teukolsky, S. A., Flannery, B. P., Press, W., & Vetterling, W. 1992, SMR, 693, 59 * Šidlichovský & Nesvorný (1996) Šidlichovský, M., & Nesvorný, D. 1996, Celestial Mechanics and Dynamical Astronomy, 65, 137, doi: 10.1007/BF00048443
# Self-Guided Learning to Denoise for Robust Recommendation Yunjun Gao, Yuntao Du, Yujia Hu, Lu Chen, Xinjun Zhu, Ziquan Fang College of Computer Science, Zhejiang University, China gaoyj, ytdu, yjhu, luchen, xjzhu<EMAIL_ADDRESS>and Baihua Zheng School of Computing and Information Systems, Singapore Management University, Singapore<EMAIL_ADDRESS> (2022) ###### Abstract. The ubiquity of implicit feedback makes it the default choice to build modern recommender systems. Generally speaking, observed interactions are considered as positive samples, while unobserved interactions are considered as negative ones. However, implicit feedback is inherently noisy because of the ubiquitous presence of noisy-positive and noisy-negative interactions. Recently, some studies have noticed the importance of denoising implicit feedback for recommendations, and enhanced the robustness of recommendation models to some extent. Nonetheless, they typically fail to (1) capture the hard yet clean interactions for learning comprehensive user preference, and (2) provide a universal denoising solution that can be applied to various kinds of recommendation models. In this paper, we thoroughly investigate the memorization effect of recommendation models, and propose a new denoising paradigm, _i.e.,_ Self-Guided Denoising Learning (SGDL), which is able to collect memorized interactions at the early stage of the training (_i.e.,_ “noise-resistant” period), and leverage those data as denoising signals to guide the following training (_i.e.,_ “noise-sensitive” period) of the model in a meta-learning manner. Besides, our method can automatically switch its learning phase at the memorization point from memorization to self-guided learning, and select clean and informative memorized data via a novel adaptive denoising scheduler to improve the robustness.
We incorporate SGDL with four representative recommendation models (_i.e.,_ NeuMF, CDAE, NGCF and LightGCN) and different loss functions (_i.e.,_ binary cross-entropy and BPR loss). The experimental results on three benchmark datasets demonstrate the effectiveness of SGDL over state-of-the-art denoising methods like T-CE, IR, DeCA, and even state-of-the-art robust graph-based methods like SGCN and SGL.

Keywords: Recommender System; Denoising Recommendation; Implicit Feedback; Robust Learning

Published in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’22), July 11–15, 2022, Madrid, Spain. doi: 10.1145/3477495.3532059; isbn: 978-1-4503-8732-3/22/07. CCS concepts: Information systems, Recommender systems; Computing methodologies, Learning from implicit feedback.

## 1\. Introduction

Figure 1. Key idea of SGDL: (a) and (c) show the memory rate when training NeuMF and LightGCN on the MovieLens dataset, respectively; (b) and (d) show the memory rate when training the corresponding model with SGDL. The memory rate is the proportion of memorized data (see Section 3.1.1 for details) in the clean and noisy interactions. NR denotes the “noise-resistant” period and NS the “noise-sensitive” period, respectively.

Recommender systems have been widely deployed to perform personalized information filtering, especially for various online services such as E-commerce (Lin et al., 2019; Gharibshah et al., 2020), social media (Ren et al., 2017), and news portals (Wang et al., 2018). Most existing recommender systems use implicit feedback (_e.g.,_ views and clicks) to develop machine learning models, due to its large volume and availability (Rendle et al., 2009; He et al., 2016; Du et al., 2022a, b).
Specifically, the observed interactions between users and items are viewed as positive instances, while unobserved interactions are viewed as negative instances. However, implicit feedback is inherently noisy because of the ubiquitous presence of noisy-positive and noisy-negative interactions (Hu et al., 2008; Bian et al., 2021; Wang et al., 2021b; Lee et al., 2021). Take the E-commerce scenario as an example. A large portion of click behaviors are triggered by users’ curiosity, and thus do not directly indicate positive views of the products. On the other hand, unobserved interactions may simply be attributed to users’ unawareness, because the items were never exposed to them. Hence, blindly fitting the implicit feedback to recommender systems without considering the inherent noise would fail to capture users’ true preferences, eventually harming user experience and degrading recommendation performance (Xie et al., 2020; Wang et al., 2021a).

Considering the widespread use of implicit feedback and its large impact on the recommendation model, some recent studies have noticed the importance of denoising implicit feedback for recommendations. Existing efforts on tackling this problem can be roughly divided into two categories: _sample selection methods_ (Gantner et al., 2012; Ding et al., 2019b; Yu and Qin, 2020; Wang et al., 2021b) and _sample re-weighting methods_ (Wang et al., 2021a; Hu et al., 2021; Wang et al., 2022). Sample selection methods focus on designing more effective samplers to collect clean samples for learning users’ preferences, but their performance suffers from high variance since they heavily depend on the sampling distribution (Yuan et al., 2018). On the other hand, sample re-weighting methods aim to distinguish noisy interactions from clean data in terms of their loss values, and assign lower weights to noisy interactions with high loss values during training.
Their key idea is consistent with the memorization effect (Arpit et al., 2017): models tend to initially learn easy and clean patterns (_i.e.,_ user preferences) in the early stage of their learning, and eventually memorize all training interactions. Benefiting from this principle, sample re-weighting methods are able to successfully identify noisy interactions. Although sample re-weighting methods can achieve promising performance for denoising implicit feedback, we want to highlight that they commonly suffer from the following two problems:

* • Abandonment of Hard Clean Interactions. These methods heavily rely on loss values. They simply assume that interactions with large loss values are noisy and penalize them with small weights. However, it has been reported that some clean interactions (_aka._ hard yet clean interactions) may also have high loss values at the beginning of training, and those interactions play an important role in understanding users’ behaviors and preferences (Ding et al., 2019a; Hu et al., 2021; Wang et al., 2022). Nonetheless, these interactions are simply discarded by existing methods due to their high loss values, resulting in an insufficient understanding of users’ true preferences.

* • Lack of Adaptivity and Universality. Existing sample re-weighting methods achieve good performance _only_ when the re-weighting configurations (_e.g.,_ weights and thresholds) are properly specified. However, obtaining proper configurations normally requires time-consuming procedures (_e.g.,_ grid search), and the optimal configurations for one dataset are typically not applicable to other datasets because of different data distributions. Meanwhile, these methods are only applicable to recommender models with a predefined pointwise loss function (_e.g.,_ cross-entropy loss in (Wang et al., 2021a, 2022)), which makes it hard to incorporate other popular ranking loss functions (_e.g.,_ BPR loss (Rendle et al., 2009)) and limits their applications.
In this regard, we have thoroughly explored the memorization behavior of representative recommendation models (_e.g.,_ NeuMF (He et al., 2017) and LightGCN (He et al., 2020)) on implicit feedback, including both clean and noisy interactions. By analyzing the learning processes of different models, we have observed the existence of two learning periods. As shown in Figure 1, a typical learning process consists of the “noise-resistant” (NR) period and the “noise-sensitive” (NS) period. The former is the duration when the memorization of noisy interactions is insignificant, because the models focus on memorizing easy and clean patterns at the early stage of training; the latter is the duration when the memorization of noisy interactions rapidly increases, since models eventually begin memorizing all the implicit feedback at the late stage of training. The timestamp in the training process that best differentiates the above two periods is defined as the memorization point of model training, marked by the dotted lines in Figure 1. These observations motivate us to design a new approach that better leverages the memorization nature of recommendation models. Specifically, we propose a new denoising paradigm, namely, Self-Guided Denoising Learning (SGDL), which collects memorized interactions at the early stage of training (_i.e.,_ the “noise-resistant” period), and leverages those data as denoising signals to guide the subsequent training (_i.e.,_ the “noise-sensitive” period) of the model in a meta-learning manner. Besides, SGDL can automatically transition from the noise-resistant period to the noise-sensitive period, stopping the accumulation of memorized interactions at the memorization point, since noisy interactions are gradually memorized as training proceeds.
In a nutshell, corresponding to the two periods commonly observed in the training process of different models, SGDL contains two key phases, _i.e.,_ memorization and self-guided learning:

* • Memorization. Owing to the negligible memorization of noisy interactions during the “noise-resistant” period, the model is initially trained with all the implicit feedback. To better reveal the underlying memorization nature of the model during training, we design new memorization-based metrics to define the memorization states of data. As the memorized data are mostly easy and clean interactions at the early stage of training, we collect them as denoising signals to guide the subsequent denoising training process. Moreover, SGDL is able to automatically estimate the best memorization point, at which the learning moves from the “noise-resistant” period to the “noise-sensitive” period, and to stop accumulating memorized data without any supervision.

* • Self-Guided Learning. To avoid the memorization of noisy interactions in the “noise-sensitive” period, we leverage the memorized data collected from the “noise-resistant” period to represent user preferences. Specifically, a denoising module is proposed to learn a parameterized weighting function for the implicit feedback, which is guided by the memorized data and updated along with the learning process of the model. Moreover, since some of the memorized data can also be noisy, we further develop a novel adaptive denoising scheduler to prevent the denoising module from being corrupted by noisy yet memorized samples. Technically, the adaptive denoising scheduler characterizes the contribution of each memorized sample to the denoising performance, and decides whether to use the sample by predicting its probability of being sampled. The scheduler is also optimized jointly with the learning process to enhance the robustness of the model.
Through the above two phases, SGDL can denoise implicit feedback with the help of memorized data, which are naturally collected by exploiting the noise-resistant period of training. Compared with standard training, SGDL can dramatically reduce the memory rate of noisy samples in the noise-sensitive period, and enhance the robustness of recommendation models, as shown in Figure 1. Moreover, since the model is constantly trained with all implicit feedback data, our method helps the model learn users’ true preferences from hard yet clean samples, leading to better recommendation performance. Last but not least, our method does not need any thresholds or predefined weighting functions, and can easily be applied to any learning-based recommendation model. We conduct extensive experiments on three real-world datasets with four representative recommendation models (_i.e.,_ NeuMF (He et al., 2017), CDAE (Wu et al., 2016), NGCF (Wang et al., 2019), and LightGCN (He et al., 2020)) and two loss functions (_i.e.,_ binary cross-entropy loss and BPR loss (Rendle et al., 2009)). Experimental results show that SGDL significantly outperforms all state-of-the-art denoising methods, and achieves comparable (in many cases even better) performance to state-of-the-art robust graph-based methods like SGCN (Chen et al., 2021b) and SGL (Wu et al., 2021). In summary, we make three key contributions in this paper, as listed below.

* • We develop a new denoising paradigm, _i.e.,_ _self-guided denoising learning (SGDL)_, which leverages self-labeled memorized data as guidance to offer denoising signals for robust recommendation, without defining any weighting functions or requiring any auxiliary information.

* • We carefully exploit the memorization effect of recommendation models, and design two training phases that collect memorized data and utilize them as guidance to denoise implicit feedback, respectively.
Besides, a novel adaptive denoising scheduler is introduced to further improve the robustness.

* • We incorporate SGDL with four representative recommendation models, and conduct extensive experiments on three public benchmark datasets against various state-of-the-art methods to demonstrate the superiority and universality of SGDL.

## 2\. Problem Formulation

We first introduce the common paradigm of learning user preferences from implicit feedback, and then formulate our task.

Preference learning from implicit feedback. In this paper, we focus on learning user preferences from implicit feedback (Rendle et al., 2009). Specifically, the behavior data (_e.g.,_ clicks and reviews) $\mathcal{D}=\{(u,i,y_{ui})\mid u\in\mathcal{U},i\in\mathcal{I}\}$ involves a set of users $\mathcal{U}=\{u\}$ and a set of items $\mathcal{I}=\{i\}$, as well as the interaction label $y_{ui}\in\{0,1\}$ that indicates whether user $u$ has interacted with item $i$. Most state-of-the-art recommendation methods (_e.g.,_ NeuMF (He et al., 2017) and LightGCN (He et al., 2020)) assume that the interaction $y_{ui}$ represents the user’s true preference, and directly learn the model $f$ with parameters $\theta$ by minimizing a ranking loss function over $\mathcal{D}$.

Denoising implicit feedback for recommendations. However, due to the inherent noise in implicit feedback, recommendation models might fail to learn users’ true preferences with the standard training process, resulting in suboptimal performance. Thus, the task of this paper is, given the noisy implicit feedback $\mathcal{D}$ that contains both noisy-positive and noisy-negative feedback, to infer users’ true preferences with optimal model parameters $\theta^{*}$.

Figure 2. The overall framework of SGDL.
During Phase I (memorization), memorized interactions are collected as denoising signals; during Phase II (self-guided learning), the weighting function is learned simultaneously with the recommender model, guided by the memorized data in a meta-learning manner (see Section 3.2.1 for details).

## 3\. Methodology

In this section, we detail SGDL, which comprises the following two phases: (i) _memorization_, which exploits the memorization effect of models to collect memorized data during the noise-resistant period and estimates the best memorization point to automatically transition to Phase II; and (ii) _self-guided learning_, which leverages the memorized samples as denoising signals to guide model learning during the noise-sensitive period, and discards potentially noisy interactions in the memorized data with a novel adaptive denoising scheduler for robustness. We detail the two phases as follows.

### 3.1. Phase I: Memorization

Initially, Phase I trains the recommendation model in a conventional way during the noise-resistant period, where the memorization of noisy interactions is assumed to be suppressed, as observed in Figure 1. Since most of the interactions memorized before the memorization point are clean, we collect them to form the memorized data, which can be used as denoising signals to guide the training in Phase II. Therefore, the major challenges in Phase I include 1) how to define the memorization of interactions and 2) how to estimate the memorization point.

#### 3.1.1. Memorized Interactions.

Previous studies (Wang et al., 2021a, 2022) mainly use the loss values of training data to demonstrate the memorization effect of recommendation models. However, we argue that loss values are insufficient to reflect the learning process, since they are inconsistent with the optimization target of recommendation models (_i.e.,_ personalized ranking), and are unable to distinguish hard interactions from noisy ones.
Thus, it is necessary to design a new memorization-based metric that takes the learning process of recommendation models into consideration. Inspired by the widely used hit ratio metric (Rendle et al., 2009; Cremonesi et al., 2010), we define an interaction $(u,i)$ as hit at epoch $t$ by the model $\theta$ if item $i$ is in the ranking list of user $u$, denoted as $m_{t}(u,i)$. To ensure the reliability of ranking results, for each user $u$, we include the top-$N$ ranked items in its ranking list, where $N$ is the number of the user’s observed interactions. However, simply calculating the memorization of interactions at a single epoch could lead to unstable results, since the model is not yet well trained during the noise-resistant period. Hence, we trace the memorization states of interactions over the most recent $h$ epochs, and define the final memorization of interaction $(u,i)$ as follows:

(1) $\displaystyle m_{t}^{h}(u,i)=\frac{1}{|\mathcal{P}_{t}^{h}(u,i)|}\sum\nolimits_{m_{j}(u,i)\in\mathcal{P}_{t}^{h}(u,i)}m_{j}(u,i)$

where $\mathcal{P}_{t}^{h}(u,i)=\{m_{t-h+1}(u,i),\cdots,m_{t}(u,i)\}$ captures the most recent $h$ memorization histories of interaction $(u,i)$. We say an interaction $(u,i)$ is memorized by a model $\theta$ if the majority of the recent histories $\mathcal{P}_{t}^{h}(u,i)$ coincide with the memorization state (_i.e.,_ $m_{t}^{h}(u,i)$ is larger than $0.5$). It is worth mentioning that this definition of memorized interactions does not need any labels for supervision, and is able to effectively indicate the underlying memorization effect of recommendation models.

Figure 3. The monotonicity of MP and MR when training NeuMF and LightGCN on the MovieLens dataset.

#### 3.1.2. Memorization Point Estimation.

As shown in Figure 1, the model predominantly learns clean interactions until the noise-sensitive period begins.
In other words, during the noise-resistant period, the recommendation model (1) not only learns sufficient information from the clean interactions, (2) but also accumulates some noise from the noisy implicit feedback. Therefore, we aim to design two metrics reflecting these two memorization characteristics of the model. Formally, we use $\mathcal{M}_{t}$ to denote the set of memorized interactions at epoch $t$, and $y_{ui}^{*}$ to represent the true label of interaction $(u,i)$, which is not available because of the noise in implicit feedback. Inspired by recent advances in robust learning (Shu et al., 2019; Song et al., 2021) and by recommendation evaluation metrics, we propose two memorization-based metrics, namely, _memorization precision_ ($MP$) and _memorization recall_ ($MR$), to characterize the memorization effect of recommendation models:

(2) $\displaystyle MP_{t}=\frac{|\mathcal{R}_{t}|}{|\mathcal{M}_{t}|},\quad MR_{t}=\frac{|\mathcal{R}_{t}|}{|\mathcal{G}|}$

where $\mathcal{R}_{t}=\{(u,i)\in\mathcal{M}_{t}:y_{ui}=y_{ui}^{*}\}$ denotes the set of memorized data whose true labels are consistent with predictions, and $\mathcal{G}=\{(u,i)\in\mathcal{D}:y_{ui}=y_{ui}^{*}\}$ is the set of truly labeled data in the implicit feedback. From the definitions of $MP$ and $MR$, we can conclude that $MP$ monotonically decreases, since the model tends to memorize clean data first and then gradually memorizes all the noisy interactions as training progresses; and $MR$ monotonically increases, because the model eventually memorizes all clean interactions as training progresses, as depicted in Figure 3 (please refer to Section 3.3 for the theoretical analysis of the monotonicity of the two metrics). Thus, the best memorization point $t_{m}$ is naturally the trade-off epoch at which $MP$ and $MR$ share the same value, _i.e.,_ $MP_{t}=MR_{t}$.
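The windowed memorization indicator of Eq. (1) and the $MP$/$MR$ metrics of Eq. (2) can be sketched in a few lines of Python; the per-epoch hit indicators and interaction sets below are toy stand-ins for a real model's ranking lists:

```python
from collections import deque

def windowed_memorization(hits, h):
    """Eq. (1): average the last h per-epoch hit indicators m_j(u,i);
    an interaction counts as memorized when the average exceeds 0.5."""
    window = deque(maxlen=h)
    states = []
    for hit in hits:  # hits[j] = 1 if item i is in u's top-N list at epoch j
        window.append(hit)
        states.append(sum(window) / len(window) > 0.5)
    return states

def mp_mr(memorized, true_labeled):
    """Eq. (2): MP = |R_t|/|M_t|, MR = |R_t|/|G|, where
    R_t = memorized ∩ true-labeled and G = all true-labeled interactions."""
    r = memorized & true_labeled
    mp = len(r) / len(memorized) if memorized else 1.0
    mr = len(r) / len(true_labeled)
    return mp, mr

# Toy run over (u, i) pairs; (0, 2) plays the role of a noisy interaction.
clean = {(0, 0), (0, 1), (1, 0)}           # stands in for G
per_epoch_memorized = [                     # M_t for epochs t = 0..3
    {(0, 0)},
    {(0, 0), (0, 1)},
    {(0, 0), (0, 1), (1, 0)},
    {(0, 0), (0, 1), (1, 0), (0, 2)},       # the noisy pair creeps in late
]
curves = [mp_mr(m, clean) for m in per_epoch_memorized]
t_m = min(range(len(curves)), key=lambda t: abs(curves[t][0] - curves[t][1]))
```

With these toy sets, $MP$ stays at 1.0 while only clean pairs are memorized and drops once the noisy pair is memorized, while $MR$ climbs to 1.0; the epoch where the two curves meet plays the role of the memorization point $t_{m}$.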
By substituting this into Equation (2), the best memorization point $t_{m}$ satisfies:

(3) $\displaystyle|\mathcal{M}_{t_{m}}|=|\{(u,i)\in\mathcal{D}:y_{ui}=y_{ui}^{*}\}|=(1-\sigma)|\mathcal{D}|$

where $\sigma$ is the noise rate of the implicit feedback. Since $\sigma$ is typically unknown, we leverage the difference between the loss distributions of clean and noisy data to estimate $\sigma$, as shown in Figure 4.

Figure 4. Loss distribution of the MovieLens dataset.

Specifically, we first normalize the loss values of all training interactions and then fit them with a two-component Gaussian Mixture Model (GMM) to model the bi-modal distribution (Arazo et al., 2019; Li et al., 2020) of true- and false-labeled samples. The GMM can be easily trained using the Expectation-Maximization (EM) algorithm. Hence, we obtain the probability of an interaction $(u,i)$ being noisy through the posterior probability of the loss distributions. Accordingly, the noise rate $\sigma$ is estimated as:

(4) $\displaystyle\hat{\sigma}=\mathbb{E}_{(u,i)\in\mathcal{D}}[p(\mu|L_{(u,i)}(\theta))]$

where $L_{(u,i)}(\theta)$ is the loss of interaction $(u,i)$ under recommendation model $\theta$, and $\mu$ is the Gaussian component with the larger mean, since noisy data typically have larger loss values. Note that although many approaches (Liu and Tao, 2015; Chen et al., 2019) are available to estimate the noise ratio, we choose the GMM because it is easy to apply and performs stably on different noisy datasets. Therefore, SGDL transitions to Phase II when the number of memorized interactions reaches the estimated clean data size (_i.e.,_ $|\mathcal{M}_{t}|\geq(1-\hat{\sigma})|\mathcal{D}|$). Note that SGDL is able to estimate the memorization point $t_{m}$ and collect the memorized data $\mathcal{M}_{t_{m}}$ with nearly zero computational overhead and no additional supervision.

### 3.2.
Phase II: Self-Guided Learning

Phase II aims to robustly train the recommendation model with the denoising signals of the memorized data collected during Phase I. Previous studies (Wang et al., 2021a, 2022) need to pre-specify weighting functions (_e.g.,_ an exponential function), and require additional hyperparameters (_e.g.,_ thresholds) to denoise implicit feedback during training. However, we argue that these methods are hard to apply generally in real-world recommendation systems due to two issues: (1) a proper weighting scheme heavily depends on the training data, which limits their adaptivity; (2) these methods simply abandon the hard yet clean interactions, because the fixed weighting functions fail to distinguish such interactions from noisy data, incurring suboptimal recommendation performance. Thus, we propose a self-guided method that automatically learns an adaptive weighting function from the collected memorized data to tackle the above two issues. This denoising scheme assumes that the memorized data is clean and useful for providing denoising signals for learning user preferences. Nevertheless, as the memorized data is collected by leveraging the memorization effect of recommendation models, it inevitably contains some noise. To prevent the model from being corrupted by such detrimental interactions, we further devise a novel adaptive denoising scheduler to select and use proper memorized data for self-guided learning.

#### 3.2.1. Denoising Learning with Memorized Data.

During the noise-sensitive period, we aim to enhance the robustness of training by imposing a weight on each sample loss. Here, we use the memorized data to provide denoising signals for learning the weight of each sample, since it is mostly clean thanks to the memorization effect of the model.
Specifically, let $L_{k}(\theta)$ be the $k$-th sample loss of model $\theta$ (the loss can be pointwise or pairwise, according to the optimization target of the model; we demonstrate the effectiveness of our method on both kinds of loss functions in Section 4.2), and let $\mathcal{D}_{T}$ be the training data. The optimal recommendation model parameters $\theta$ are obtained by minimizing the following weighted loss:

(5) $\displaystyle\theta^{*}(\psi)=\mathop{\mathrm{argmin}}_{\theta}\frac{1}{|\mathcal{D}_{T}|}\sum\nolimits_{k}^{|\mathcal{D}_{T}|}g(L_{k}(\theta);\psi)L_{k}(\theta)$

where $g(L_{k}(\theta);\psi)$ is the weight on the $k$-th sample loss, and $\psi$ represents the current parameters of the weighting function $g(\cdot)$. Here, we formulate $g(\cdot)$ as a simple Multi-Layer Perceptron (MLP) with only one hidden layer, since it is known to be a universal approximator for almost any continuous function (Csáji, 2001; Shu et al., 2019). To adaptively learn the weight of each sample from the memorized data, we optimize the weighting parameters $\psi$ given the current optimal $\theta^{*}(\psi)$ (note that $\psi$ here is a variable rather than a fixed quantity, which makes $\theta^{*}(\psi)$ a function of $\psi$, so the gradient in Equation (6) can be computed):

(6) $\displaystyle\psi^{*}=\mathop{\mathrm{argmin}}_{\psi}\frac{1}{|\mathcal{M}_{t_{m}}|}\sum\nolimits_{m}^{|\mathcal{M}_{t_{m}}|}L_{m}(\theta^{*}(\psi))$

where $L_{m}(\theta^{*}(\psi))$ is the $m$-th memorized sample loss given the optimal model parameters $\theta^{*}(\psi)$. Note that searching for the optimal $\theta^{*}$ and the optimal $\psi^{*}$ requires two nested loops of optimization (_i.e.,_ bi-level optimization) (Shu et al., 2019; Chen et al., 2021a), which is costly. Hence, we adopt the idea of meta learning (Finn et al., 2017; Wu et al., 2018), and update $\theta$ and $\psi$ alternately in a single loop to guarantee the efficiency of the algorithm.
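This alternating single-loop scheme can be illustrated on a toy one-dimensional problem. Here the "model" is a scalar $\theta$ fitted to noisy targets with squared losses, the weighting net $g$ is a one-parameter logistic of the loss, and the $\psi$-gradient is approximated by a forward finite difference; everything below is an illustrative sketch under simplified assumptions, not the paper's implementation:

```python
import math

def g(loss, psi):
    """Toy weighting net g(L; ψ): a logistic of the loss with one parameter.
    Positive ψ downweights large-loss (likely noisy) samples."""
    z = max(min(psi * loss, 50.0), -50.0)   # clip for numerical safety
    return 1.0 / (1.0 + math.exp(z))

def assumed_update(theta, psi, train, lr):
    """Eq. (7): one weighted gradient step on the losses L_k = (θ - t_k)²."""
    grad = sum(g((theta - t) ** 2, psi) * 2 * (theta - t) for t in train) / len(train)
    return theta - lr * grad

def memorized_loss(theta, memorized):
    return sum((theta - t) ** 2 for t in memorized) / len(memorized)

def sgdl_step(theta, psi, train, memorized, lr_theta=0.1, lr_psi=0.5, eps=1e-4):
    theta_hat = assumed_update(theta, psi, train, lr_theta)           # Eq. (7)
    # Eq. (8): move ψ to lower the memorized-data loss at θ̂(ψ);
    # the ψ-gradient is taken by a forward finite difference here.
    grad_psi = (memorized_loss(assumed_update(theta, psi + eps, train, lr_theta),
                               memorized)
                - memorized_loss(theta_hat, memorized)) / eps
    psi -= lr_psi * grad_psi
    theta = assumed_update(theta, psi, train, lr_theta)               # Eq. (9)
    return theta, psi

train = [1.0, 1.1, 0.9, 10.0]      # clean cluster near 1.0 plus one noisy target
memorized = [1.0, 1.1, 0.9]        # collected during the noise-resistant period
theta, psi = 0.8, 0.0
for _ in range(200):
    theta, psi = sgdl_step(theta, psi, train, memorized)

# Plain unweighted training for comparison: converges to mean(train) = 3.25,
# dragged away from the clean cluster by the noisy target.
theta_plain = 0.8
for _ in range(200):
    theta_plain -= 0.1 * sum(2 * (theta_plain - t) for t in train) / len(train)
```

In this toy run, plain training is pulled toward the overall mean by the noisy target, while the self-guided loop learns a positive $\psi$ that suppresses the large-loss outlier and leaves $\theta$ near the clean cluster.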
Specifically, as illustrated in Figure 2, we perform the following training procedure in each iteration:

* • Assumed update of $\theta$. As shown by the blue arrows in Figure 2, we first make an assumed update of $\theta$ with the current weights $\psi$: (7) $\displaystyle\hat{\theta}(\psi)=\theta-\eta_{1}\frac{1}{|\mathcal{D}_{T}|}\sum\nolimits_{k}^{|\mathcal{D}_{T}|}g(L_{k}(\theta);\psi)\nabla_{\theta}L_{k}(\theta)$ where we update $\theta$ using gradient descent with learning rate $\eta_{1}$.

* • Update of $\psi$. As indicated by the brown arrow in Figure 2, the update of the weighting parameters $\psi$ is guided by the gradient of the memorized data on the updated model: (8) $\displaystyle\psi\leftarrow\psi-\eta_{2}\frac{1}{|\mathcal{M}_{t_{m}}|}\sum\nolimits_{m}^{|\mathcal{M}_{t_{m}}|}\nabla_{\psi}L_{m}(\hat{\theta}(\psi))$ where $\eta_{2}$ is the learning rate of the weighting parameters $\psi$.

* • Actual update of $\theta$. After receiving the denoising signals (_i.e.,_ the updated parameters $\psi$) from the memorized data, we use them to update the model: (9) $\displaystyle\theta\leftarrow\theta-\eta_{1}\frac{1}{|\mathcal{D}_{T}|}\sum\nolimits_{k}^{|\mathcal{D}_{T}|}g(L_{k}(\theta);\psi)\nabla_{\theta}L_{k}(\theta)$

We use this alternating strategy to optimize the recommendation model and the weighting function during the noise-sensitive period, as illustrated in Figure 2. Although this strategy is not guaranteed to find the global optimum, it empirically works well in many bi-level optimization problems (Finn et al., 2017; Shu et al., 2019; Chen et al., 2021a).

#### 3.2.2. Adaptive Denoising Scheduler.

The above denoising scheme relies on the memorized data to provide denoising signals for training. However, as the memorized data inevitably contains some noise, blindly integrating it would degrade the denoising performance. Thus, we propose an adaptive denoising scheduler to select _only_ the clean and informative memorized data for denoising learning.
Specifically, we define the scheduler as $s$ with parameters $\phi$, and choose two representative factors to quantify the contribution of each memorized sample to denoising: (1) the loss $L_{m}(\theta)$ of the $m$-th memorized sample, where $\theta$ denotes the actually updated parameters of the model; and (2) the gradient similarity of the $m$-th memorized sample between the assumed updated model parameters $\hat{\theta}$ and the actually updated model parameters $\theta$, _i.e.,_ $\cos\big{(}\nabla_{\hat{\theta}}L_{m}(\hat{\theta}),\nabla_{\theta}L_{m}(\theta)\big{)}$. Here, we use the cosine function as the similarity measure; other metrics such as the inner product can also be applied in practice. The two factors are associated with the learning outcome and the learning process of the $m$-th memorized sample, respectively. Specifically, the gradient similarity characterizes the contribution of the memorized sample to model training. A large loss value may indicate a crucial memorized sample if the gradient similarity is also large (_i.e.,_ the gradient direction of the memorized sample is consistent with the optimization of the model), while a large loss value with small gradient similarity may indicate a noisy memorized sample. Considering the two factors simultaneously, we formulate the sampling probability of the $m$-th memorized sample as:

(10) $\displaystyle o_{m}=s\big{(}L_{m}(\theta),\cos(\nabla_{\hat{\theta}}L_{m}(\hat{\theta}),\nabla_{\theta}L_{m}(\theta));\phi\big{)}$

(11) $\displaystyle\pi_{m}=\frac{\exp(o_{m})}{\sum_{i\in\mathcal{M}_{t_{m}}}\exp(o_{i})}$

where $o_{m}$ is the output of the scheduler, and $\pi_{m}$ is the predicted sampling probability of the $m$-th sample. We choose an LSTM network (Hochreiter and Schmidhuber, 1997) as the scheduler, and feed it with the training factors in each iteration.
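To make the two factors concrete, the following sketch scores a few memorized samples with a toy stand-in for the LSTM scheduler and turns the scores into differentiable selection weights via the standard Gumbel-Softmax relaxation; the scoring rule, sample values, and temperature are all illustrative assumptions, not the paper's learned scheduler:

```python
import math
import random

def cosine(a, b):
    """Gradient-direction agreement between the assumed and actual updates."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def scheduler_score(loss, grad_cos):
    """Toy stand-in for s(·; φ): favour samples whose gradient agrees with the
    model's optimization direction; penalise large-loss, disagreeing samples."""
    return grad_cos - loss * (1.0 - grad_cos)

def gumbel_softmax(scores, tau=0.05):
    """Differentiable relaxation of sampling proportionally to softmax(scores)."""
    m = max(scores)
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    log_pi = [s - log_z for s in scores]                  # log π_m
    # Gumbel(0, 1) noise: g = -log(-log u), u ~ Uniform(0, 1)
    gumbel = [-math.log(-math.log(random.random())) for _ in scores]
    z = [(lp + gi) / tau for lp, gi in zip(log_pi, gumbel)]
    zm = max(z)
    denom = sum(math.exp(v - zm) for v in z)
    return [math.exp(v - zm) / denom for v in z]

random.seed(0)
# Three memorized samples: (loss, assumed-update grad, actual-update grad).
samples = [
    (0.1, [1.0, 0.2], [0.9, 0.3]),    # clean and easy
    (1.5, [0.8, 0.6], [0.7, 0.5]),    # hard yet clean: high loss, aligned grads
    (1.6, [1.0, 0.0], [-1.0, 0.1]),   # likely noisy: high loss, opposed grads
]
scores = [scheduler_score(l, cosine(ga, gb)) for l, ga, gb in samples]
weights = gumbel_softmax(scores)
```

A high-loss sample whose gradients agree across the assumed and actual updates (hard yet clean) keeps a competitive score, whereas the high-loss sample with opposed gradients receives the lowest score and is effectively filtered out.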
The intuition is that the LSTM can leverage historical information to capture the prediction variance (Chang et al., 2017; Yao et al., 2021), which shows stable performance in our experiments (different strategies are compared in Section 4.3.3). However, it is intractable to directly optimize the scheduler since the sampling process is not differentiable. To make this procedure differentiable and to jointly optimize the scheduler and the model, we apply the Gumbel-Softmax reparameterization trick (Jang et al., 2017) to generate differentiable samples:

(12) $\displaystyle y_{m}=\frac{\exp\big{(}(\log(\pi_{m})+\epsilon_{m})/\tau\big{)}}{\sum_{i\in\mathcal{M}_{t_{m}}}\exp\big{(}(\log(\pi_{i})+\epsilon_{i})/\tau\big{)}}$

where $\epsilon_{m}=-\log(-\log(u_{m}))$ is Gumbel noise with $u_{m}$ drawn from the uniform distribution between 0 and 1, and $\tau$ is the temperature that controls the interpolation between the discrete distribution and continuous categorical densities (we set $\tau=0.05$ in all experiments). Thus, the scheduler is able to decide which memorized data to use according to its contribution to denoising implicit feedback, and adaptively adjusts the sampling probabilities for more informative guided learning.

### 3.3. Model Analysis

#### 3.3.1. Analysis of the monotonicity of $MP$ and $MR$.

We now prove that $MP$ and $MR$ change monotonically over the training time $t$. Let $\mathcal{N}_{t}$ ($=\mathcal{M}_{t}\setminus\mathcal{R}_{t}$) be the set of memorized data that are falsely predicted by the model. Then, $|\mathcal{N}_{t+1}|/|\mathcal{N}_{t}|\geq|\mathcal{R}_{t+1}|/|\mathcal{R}_{t}|$ typically holds, because noisy samples are memorized faster than clean samples after the model stabilizes, as depicted in Figure 3.
Hence, it is easy to conclude:

(13) $\displaystyle|\mathcal{N}_{t+1}|/|\mathcal{N}_{t}|$ $\displaystyle\geq|\mathcal{R}_{t+1}|/|\mathcal{R}_{t}|$ (14) $\displaystyle\implies|\mathcal{N}_{t+1}||\mathcal{R}_{t}|+|\mathcal{R}_{t+1}||\mathcal{R}_{t}|$ $\displaystyle\geq|\mathcal{R}_{t+1}||\mathcal{N}_{t}|+|\mathcal{R}_{t+1}||\mathcal{R}_{t}|$ (15) $\displaystyle\implies(|\mathcal{N}_{t+1}|+|\mathcal{R}_{t+1}|)|\mathcal{R}_{t}|$ $\displaystyle\geq(|\mathcal{N}_{t}|+|\mathcal{R}_{t}|)|\mathcal{R}_{t+1}|$

Since $|\mathcal{M}_{t}|=|\mathcal{N}_{t}|+|\mathcal{R}_{t}|$, Inequality (15) is equivalent to $|\mathcal{R}_{t+1}|/|\mathcal{M}_{t+1}|\leq|\mathcal{R}_{t}|/|\mathcal{M}_{t}|$, _i.e.,_ $MP_{t+1}\leq MP_{t}$, so $MP$ decreases monotonically. Besides, considering that the model eventually memorizes all training data (Song et al., 2021), we assume that $\mathcal{M}$ gradually includes more observed interactions, including correctly predicted ones, _i.e.,_ $|\mathcal{R}_{t+1}|\geq|\mathcal{R}_{t}|$. Consequently, $MR$ increases monotonically, since $|\mathcal{R}_{t+1}|/|\mathcal{G}|\geq|\mathcal{R}_{t}|/|\mathcal{G}|$.

#### 3.3.2. Analysis of the Self-Guided Learning Scheme.

We now focus on the guidance strategy to explain how the memorized data benefit the denoising training. Formally, we follow (Shu et al., 2019) and apply the chain rule to derive the update function of $\psi$:

(16) $\displaystyle\psi\leftarrow\psi+\frac{\eta_{1}\eta_{2}}{|\mathcal{D}_{T}|}\sum_{k}^{|\mathcal{D}_{T}|}\big{(}\frac{1}{|\mathcal{M}_{t_{m}}|}\sum_{m}^{|\mathcal{M}_{t_{m}}|}G_{mk}\big{)}\nabla_{\psi}g(L_{k}(\hat{\theta});\psi)$

where $G_{mk}$ ($=\nabla_{\theta}L_{m}(\hat{\theta})^{T}\nabla_{\theta}L_{k}(\theta)$) measures the gradient similarity between the $m$-th memorized sample and the $k$-th training sample. Thus, for each training sample $k$, if its gradient is similar to the average gradient of the memorized data, it is considered beneficial for learning and its weight tends to be increased; conversely, its weight tends to be suppressed.
Therefore, the memorized data is able to offer a proper weight for each training sample in terms of gradient similarities under the self-guided learning scheme.

#### 3.3.3. Model Size.

The additional parameters of SGDL come from two parts: (1) the $2d_{w}$ parameters of the weighting function, where $d_{w}$ is the number of hidden neurons in the one-layer MLP; and (2) the $4d_{l}^{2}+12d_{l}$ parameters of the LSTM unit in the adaptive denoising scheduler, where $d_{l}$ is the dimension of the hidden size. Overall, the additional cost of SGDL is negligible compared with the tremendous parameters of modern recommendation models.

#### 3.3.4. Time Complexity.

Assuming that the time complexity of the base model is $O(T)$, the additional complexity of SGDL mainly comes from phase II, which consists of two denoising components: (1) the cost of self-guided learning is also $O(T)$, as the alternative optimization scheme takes no more than three times the cost of normal training; and (2) the computational complexity of the adaptive denoising scheduler is $O(|\mathcal{D}|d_{l})$. Therefore, the additional time complexity of SGDL is $O(T+|\mathcal{D}|d_{l})$. Under the same experimental settings (_i.e.,_ the same base model and embedding size), SGDL achieves a better trade-off between efficiency and effectiveness compared with various state-of-the-art models: i) For sample re-weighting and sample selection methods, although most of them are more time-saving than SGDL, they suffer from difficult and expensive hyperparameter tuning and unstable performance, as discussed in Section 4.2. ii) For robust graph-based methods, SGDL has a comparable complexity, since graph-based methods typically leverage extra graph structure to enhance the robustness of the model.

## 4\. Experiments

We provide empirical results to demonstrate the effectiveness of our proposed SGDL.
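As a reference for the tables that follow, the two evaluation protocols used throughout (introduced formally in Section 4.1.2) can be sketched for a single user as below. The helper names are ours, and the binary-relevance NDCG shown here is one common variant, not necessarily the exact implementation used in the paper:

```python
import math

def recall_at_k(ranked_items, relevant, k):
    """Fraction of a user's relevant items appearing in the top-k list."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked_items, relevant, k):
    """Binary-relevance NDCG@k: DCG of the ranking divided by the DCG of
    an ideal ranking that places all relevant items first."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```

Per-user scores are then averaged over all users in the test set, as stated in Section 4.1.2.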
The experiments are designed to answer the following research questions:

* • RQ1: How does SGDL perform, compared with the state-of-the-art denoising methods as well as the state-of-the-art robust recommender methods?
* • RQ2: How does each component of SGDL (_i.e.,_ memorization point estimation, denoising learning strategy, and adaptive denoising scheduler) affect SGDL?
* • RQ3: Is SGDL able to distinguish hard yet clean interactions from noisy interactions?

### 4.1. Experimental Settings

#### 4.1.1. Dataset Description.

We select three real-world benchmark datasets to evaluate and compare the performance of SGDL and its competitors. Table 1 lists their statistics.

* • Adressa (https://www.adressa.no/) is a news reading dataset from Adressavisen, including user click behaviors and the dwell time for each click. Following previous work (Wang et al., 2021a), clicks with dwell time less than 10 seconds are viewed as noisy interactions.
* • Yelp (https://www.yelp.com/dataset/challenge) is an open recommendation dataset released by the Yelp challenge. We use the 2018 version in our experiments. We follow (Wang et al., 2021a) to mark ratings below 3 as noisy interactions.
* • MovieLens is a widely used dataset for recommendation, which contains 100,000 movie ratings ranging from 1 to 5. Ratings below 3 are regarded as noisy interactions.

Table 1. Statistics of the datasets used in our experiments.

Dataset | #Users | #Items | #Interactions | Sparsity
---|---|---|---|---
Adressa | 212,231 | 6,596 | 419,491 | 99.97%
MovieLens | 943 | 1,683 | 100,000 | 93.70%
Yelp | 45,548 | 57,396 | 1,672,520 | 99.94%

Table 2. Performance comparison of different denoising methods on robust recommendation. “${\dagger}$” indicates that the improvement of SGDL over the baseline is significant at the level of 0.05. The highest scores are in bold. R and N refer to Recall and NDCG, respectively.
Database | Adressa | MovieLens | Yelp
---|---|---|---
Base Model | Method | R@5 | R@20 | N@5 | N@20 | R@5 | R@20 | N@5 | N@20 | R@5 | R@20 | N@5 | N@20
NeuMF | Normal | 0.1533† | 0.3208† | 0.1224† | 0.1808† | 0.1023† | 0.2687† | 0.2890† | 0.2765† | 0.0129† | 0.0393† | 0.0129† | 0.0215†
 | WBPR | 0.1538† | 0.3207† | 0.1225† | 0.1809† | 0.1025† | 0.2689† | 0.2891† | 0.2769† | 0.0128† | 0.0392† | 0.0127† | 0.0214†
 | IR | 0.1541† | 0.3212† | 0.1229† | 0.1830† | 0.1054† | 0.2704† | 0.2928† | 0.2758† | 0.0132† | 0.0407† | 0.0131† | 0.0229†
 | T-CE | 0.1537† | 0.3220† | 0.1267† | 0.1839† | 0.1025† | 0.2821† | 0.2923† | 0.2845† | 0.0119† | 0.0396† | 0.0119† | 0.0211†
 | DeCA | 0.1597 | 0.3205† | 0.1226† | 0.1799† | 0.1024† | 0.2723† | 0.2904† | 0.2801† | 0.0129† | 0.0394† | 0.0129† | 0.0216†
 | SGDL | 0.1598 | 0.3291 | 0.1272 | 0.1853 | 0.1135 | 0.2844 | 0.3279 | 0.3032 | 0.0155 | 0.0469 | 0.0158 | 0.0260
CDAE | Normal | 0.1445† | 0.3159† | 0.0987† | 0.1886† | 0.0904† | 0.2185† | 0.2617† | 0.2356† | 0.0145† | 0.0436† | 0.0149† | 0.0277†
 | WBPR | 0.1443† | 0.3158† | 0.0987† | 0.1890† | 0.0908† | 0.2184† | 0.2619† | 0.2346† | 0.0148† | 0.0437† | 0.0151† | 0.0278†
 | IR | 0.1444 | 0.3152† | 0.0981† | 0.1893† | 0.0909† | 0.2186† | 0.2612† | 0.2358† | 0.0153† | 0.0438 | 0.0152† | 0.0278†
 | T-CE | 0.1415† | 0.3106† | 0.0991 | 0.1840† | 0.0912† | 0.2158† | 0.2642 | 0.2386† | 0.0147† | 0.0439 | 0.0151† | 0.0279†
 | DeCA | 0.1447† | 0.3159† | 0.0991 | 0.1888† | 0.0917† | 0.2189† | 0.2641 | 0.2378† | 0.0158† | 0.0438 | 0.0154† | 0.0292†
 | SGDL | 0.1450 | 0.3181 | 0.0993 | 0.1956 | 0.0921 | 0.2220 | 0.2643 | 0.2404 | 0.0162 | 0.0439 | 0.0172 | 0.0296
NGCF | Normal | 0.0769† | 0.1322† | 0.0571† | 0.0769† | 0.1285† | 0.3103† | 0.3694† | 0.3392† | 0.0267† | 0.0736† | 0.0262† | 0.0417†
 | WBPR | 0.0770† | 0.1324† | 0.0572† | 0.0769† | 0.1287† | 0.3105† | 0.3692† | 0.3395† | 0.0265† | 0.0739† | 0.0265† | 0.0417†
 | IR | 0.0772† | 0.1337† | 0.0570† | 0.0768† | 0.1280† | 0.3104† | 0.3701† | 0.3395† | 0.0269† | 0.0737† | 0.0261† | 0.0412†
 | DeCA | 0.0760† | 0.1326† | 0.0571† | 0.0766† | 0.1304† | 0.3113† | 0.3729† | 0.3401 | 0.0277 | 0.0739† | 0.0262 | 0.0418
 | SGCN | 0.0773† | 0.1336† | 0.0543† | 0.0770 | 0.1288† | 0.3112† | 0.3768 | 0.3401 | 0.0267† | 0.0734† | 0.0265 | 0.0443
 | SGL | 0.0775† | 0.1345 | 0.0576 | 0.0768† | 0.1303† | 0.3141† | 0.3763† | 0.3360† | 0.0279 | 0.0750 | 0.0264† | 0.0409†
 | SGDL | 0.0788 | 0.1347 | 0.0579 | 0.0771 | 0.1309 | 0.3186 | 0.3745 | 0.3404 | 0.0273 | 0.0746 | 0.0267 | 0.0420
LightGCN | Normal | 0.0951† | 0.1817† | 0.0713† | 0.0994† | 0.1258† | 0.3173† | 0.3678† | 0.3358† | 0.0334† | 0.0912† | 0.0332† | 0.0515†
 | WBPR | 0.0958† | 0.1845† | 0.0733† | 0.1006† | 0.1262† | 0.3189† | 0.3701† | 0.3510 | 0.0333† | 0.0911† | 0.0331† | 0.0512†
 | IR | 0.0953† | 0.1822† | 0.0726† | 0.1003† | 0.1285† | 0.3194† | 0.3681† | 0.3361† | 0.0305† | 0.0909† | 0.0326† | 0.0510†
 | DeCA | 0.0974† | 0.1855† | 0.0758† | 0.1162† | 0.1293† | 0.3076† | 0.3575† | 0.3270† | 0.0337 | 0.0911† | 0.0332† | 0.0524
 | SGCN | 0.0941† | 0.1899† | 0.0765† | 0.1160† | 0.1282† | 0.3210† | 0.3602† | 0.3318† | 0.0335† | 0.0916 | 0.0346 | 0.0528
 | SGL | 0.0980† | 0.1770† | 0.0741† | 0.0999† | 0.1299† | 0.3156† | 0.3638† | 0.3343† | 0.0341 | 0.0915 | 0.0344 | 0.0526
 | SGDL | 0.1134 | 0.2105 | 0.0844 | 0.1178 | 0.1378 | 0.3335 | 0.3844 | 0.3513 | 0.0339 | 0.0918 | 0.0341 | 0.0525

#### 4.1.2. Evaluation Metrics.

We adopt cross-validation to verify the performance. Specifically, we follow (Wang et al., 2021a, 2022) to split the interactions into the training set, validation set, and clean test set with the ratio of 8:1:1. The performance is measured by two widely used evaluation protocols (Wang et al., 2019; He et al., 2020): Recall@$K$ and NDCG@$K$, where $K$ is set as 5 and 20 by default. We report the average metrics for all users in the test set.

#### 4.1.3. Baselines.
We select four state-of-the-art recommendation methods as the base model $f$ of SGDL:

* • NeuMF (He et al., 2017) is a state-of-the-art model, which generalizes matrix factorization with a Multi-Layer Perceptron (MLP).
* • CDAE (Wu et al., 2016) is a denoising auto-encoder model, which corrupts the interactions with random noises, and then employs an MLP model to reconstruct the original input.
* • NGCF (Wang et al., 2019) is a graph model, which applies a graph convolution network (GCN) to encode high-order collaborative signals in the user-item bipartite graph.
* • LightGCN (He et al., 2020) is a state-of-the-art graph model, which simplifies the design of GCN by discarding the nonlinear feature transformations for recommendation.

We train the base models with different ranking loss functions to demonstrate the universality of SGDL. Specifically, we train NeuMF and CDAE with the binary cross-entropy (BCE) loss, and train NGCF and LightGCN with the BPR loss (Rendle et al., 2009). Each model is trained with the following denoising approaches:

* • Normal is trained with the original architecture design, without any denoising consideration.
* • WBPR (Gantner et al., 2012) is a sample selection method, which considers that popular but uninteracted items are likely to be real negative ones.
* • IR (Wang et al., 2021b) is the state-of-the-art sample selection method, which iteratively relabels uncertain samples to mitigate the noise in both observed and unobserved interactions.
* • T-CE (Wang et al., 2021a) is the state-of-the-art sample re-weighting method, which uses the Truncated BCE loss to assign zero weights to large-loss examples with a dynamic threshold. Note that this denoising approach can only be used with the BCE loss, and thus we implement it with NeuMF and CDAE for comparison.
* • DeCA (Wang et al., 2022) is a newly proposed sample re-weighting method, which considers the disagreement of predictions on noisy samples across different models, and minimizes the KL-divergence between the two models’ predictions to enhance the robustness of models.

In addition, we also compare SGDL with state-of-the-art robust graph-based methods to further confirm the effectiveness of our model. Note that these methods can only be applied to graph-based recommenders (_i.e.,_ NGCF and LightGCN in our experiments), since they regard noisy interactions as noisy edges, and devise enhanced graph learning methods for robust recommendation.

* • SGCN (Chen et al., 2021b) is the state-of-the-art graph structure enhanced method, which attaches the GCN layers with a trainable stochastic binary mask to prune noisy edges in the user-item bipartite graph.
* • SGL (Wu et al., 2021) is the state-of-the-art self-supervised graph method, which designs different graph views to mine hard negatives and denoise implicit feedback. We choose the Edge Dropout (ED) view as the auxiliary supervision signal as it performs the best on most datasets.

#### 4.1.4. Parameter Settings.

We implement SGDL in PyTorch, and have released our implementation (https://github.com/ZJU-DAILY/SGDL), including codes, parameter settings, and training logs, to facilitate reproducibility. We use the recommended parameter settings for all models, and optimize them with the Adam (Kingma and Ba, 2014) optimizer. The Xavier initializer (Glorot and Bengio, 2010) is used to initialize the model parameters. We set the batch size as 128 for MovieLens, 1024 for Adressa, and 2048 for Yelp due to the different sizes of the datasets. The learning rate of CDAE, NeuMF, and LightGCN is tuned as 0.001; for NGCF, the learning rate is set as 0.0001. For NeuMF, the embedding size is 32, and the number of layers is 3. For CDAE, the hidden size is 100, and the dropout ratio is set to 0.5.
For LightGCN, we set the embedding size to 64 and the number of layers to 3, and train it without dropout. For NGCF, the embedding size and layers are the same as LightGCN, and the node dropout rate is set to 0.1. For SGDL, we set $\eta_{1}=\eta_{2}$ for denoising learning, and tune the learning rate amongst $\\{10^{-4},10^{-3},10^{-2}\\}$ for the two phases, respectively. The hidden size of the MLP and LSTM unit is 64; the length of the memorization history $h$ is tuned among $\\{2,5,10,20\\}$. Note that the hyperparameters of the base models are kept exactly the same across all training methods for a fair comparison.

### 4.2. Performance Comparison (RQ1)

We begin with the performance comparison _w.r.t._ Recall@$K$ and NDCG@$K$, where we test two values (5 and 20) of $K$. The experimental results are reported in Table 2, and we find that:

* • The proposed SGDL can effectively improve the performance of all base models, and outperforms all denoising methods over the three datasets. Besides, even if the base model is designed to be robust against noisy implicit feedback (_i.e.,_ CDAE), our method can still boost its performance by a large margin. We attribute these improvements to the memorization-based denoising schemes of SGDL: (1) By tracing the memorization states of data in the noise-resistant period, SGDL is able to collect memorized interactions during the training process to provide valuable denoising signals without any supervision. In contrast, none of the baselines considers explicitly characterizing data from the memorization perspective. (2) Benefiting from our learning-to-weight strategy and adaptive denoising scheduler, SGDL can adaptively select clean and informative samples from memorized interactions, and use them to guide the learning process of the model. However, other re-weighting baselines (_e.g.,_ T-CE and DeCA) are insufficient to provide a proper weight for each interaction since they do not have memorized data as guidance.
* • Jointly analyzing the performance of SGDL across the three datasets, we find that the improvements on the MovieLens dataset are less significant than those on the other datasets. One possible reason is that MovieLens is denser than Yelp and Adressa. Accordingly, there are sufficient interactions to identify user behavior patterns, which offsets the impact of noisy implicit feedback.
* • Jointly analyzing the performance of SGDL across the recommenders, we observe that the relative improvements (_i.e.,_ the performance of SGDL over the strongest baselines) on NeuMF and CDAE are more substantial than those on graph-based methods (_i.e.,_ NGCF and LightGCN). This is because our method is model-agnostic and does not take the graph structure into consideration. In comparison, both SGCN and SGL are designed for graph-based recommendation, and thus they can leverage the graph structure information to yield better performance. Nonetheless, our method still achieves the best results in most cases, which demonstrates the superior robustness of SGDL.
* • All the denoising approaches show better results compared with normal training in most cases, which indicates the necessity of denoising implicit feedback for recommendation. The results are consistent with prior studies (Wang et al., 2021a, 2022).
* • The performance of sample selection methods (_i.e.,_ WBPR and IR) is rather unstable across different recommenders, compared with sample re-weighting methods. This is reasonable since sample selection methods highly depend on the sampling distribution. For instance, although IR achieves some improvement on the Yelp dataset, it performs worse than the base model (_i.e.,_ CDAE) on the Adressa dataset.
* • Robust graph-based methods (_i.e.,_ SGCN and SGL) achieve competitive or even the best performance against other methods.
We attribute such improvement to their carefully designed graph structures, which are able to prune noisy and insignificant edges for clean information propagation. However, these methods can only be applied to graph-based models, while SGDL can be easily integrated with any learning-based recommender system, and achieves comparable or even better performance.

Table 3. Impact of denoising learning and the adaptive denoising scheduler.

Database | Adressa | MovieLens | Yelp
---|---|---|---
Base Model | Method | R@5 | R@20 | N@5 | N@20 | R@5 | R@20 | N@5 | N@20 | R@5 | R@20 | N@5 | N@20
NeuMF | w/o DLS | 0.1528 | 0.3107 | 0.1211 | 0.1794 | 0.1055 | 0.2690 | 0.2911 | 0.2774 | 0.0136 | 0.0397 | 0.0131 | 0.0218
 | w/o ADS | 0.1576 | 0.3285 | 0.1255 | 0.1801 | 0.1097 | 0.2801 | 0.3210 | 0.3008 | 0.0146 | 0.0438 | 0.0146 | 0.0259
LightGCN | w/o DLS | 0.0964 | 0.1810 | 0.0702 | 0.0985 | 0.1244 | 0.3159 | 0.3688 | 0.3349 | 0.0330 | 0.0909 | 0.0331 | 0.0513
 | w/o ADS | 0.1013 | 0.1995 | 0.0811 | 0.1007 | 0.1316 | 0.3328 | 0.3824 | 0.3502 | 0.0338 | 0.0914 | 0.0340 | 0.0521

### 4.3. Study of SGDL (RQ2)

As memorization and self-guided denoising learning are the core of SGDL, we conduct ablation studies to investigate their effectiveness. Specifically, we study how the denoising learning scheme and adaptive denoising scheduler, the estimation of the memorization point, and the design of the scheduler affect our model.

#### 4.3.1. Impact of Denoising Learning & Scheduler.

We first evaluate the effectiveness of the denoising learning scheme and the adaptive denoising scheduler. To this end, two variants of SGDL are constructed by (1) discarding the denoising learning strategy in the noise-sensitive period, called SGDL${}_{\text{w/o DLS}}$; and (2) removing the scheduler and directly using all memorized data for denoising learning, named SGDL${}_{\text{w/o ADS}}$.
We summarize the results of NeuMF and LightGCN in Table 3, and omit the results of the other models (_i.e.,_ CDAE and NGCF), which show similar trends, due to space limitations. Obviously, compared with SGDL in Table 2, removing the denoising learning scheme (_i.e.,_ SGDL${}_{\text{w/o DLS}}$) dramatically reduces the predictive accuracy, indicating the necessity of self-guided learning. To be more specific, SGDL${}_{\text{w/o DLS}}$ only trains with memorized data in the noise-sensitive period, and thus it is insufficient to learn users' true preferences from hard yet clean interactions. Besides, directly leveraging all memorized data as denoising signals would inevitably introduce some noise, and hence SGDL${}_{\text{w/o ADS}}$ also underperforms the complete model.

#### 4.3.2. Estimation of Memorization Point.

We then verify the estimation of the memorization point, since it plays an important role in our model to transition from the noise-resistant period to the noise-sensitive period. Specifically, we explore the performance change of SGDL by increasing or decreasing the estimated noisy ratio $\hat{\sigma}$ to force an early or late transition. The results of SGDL on NeuMF and LightGCN are presented in Table 4. We observe that:

* • Generally speaking, the best performance is achieved at the estimated memorization point. When the memorization point deviates more from the estimated one, the performance tends to be worse. This confirms the existence of the best memorization point and the effectiveness of our estimation.
* • It is worth mentioning that, when the memorization point is slightly earlier than the estimated value, it also shows good performance. We attribute this to the clean memorized data: when the model transitions to the noise-sensitive period earlier, it is more likely that most of the memorized interactions are clean, which benefits the following self-guided learning.
On the contrary, when we delay the memorization point, the memorized data tends to contain more noisy samples, incurring suboptimal performance.

Table 4. Impact of the estimation of the memorization point. R@20 is used to evaluate the performance, and Est. denotes the estimated memorization point.

Memorization Point | Early | Est. | Late
---|---|---|---
Base Model | Database | +10% | +5% | +0% | -5% | -10%
NeuMF | Adressa | 0.3221 | 0.3275 | 0.3291 | 0.3256 | 0.3203
 | MovieLens | 0.2810 | 0.2851 | 0.2844 | 0.2757 | 0.2704
 | Yelp | 0.0458 | 0.0442 | 0.0469 | 0.0443 | 0.0437
LightGCN | Adressa | 0.2006 | 0.2114 | 0.2105 | 0.2017 | 0.1990
 | MovieLens | 0.3321 | 0.3325 | 0.3335 | 0.3262 | 0.3198
 | Yelp | 0.0895 | 0.0912 | 0.0918 | 0.0904 | 0.0887

#### 4.3.3. The Design of the Adaptive Denoising Scheduler.

We also investigate the design of the adaptive denoising scheduler. Specifically, we compare three different approaches to evaluate the denoising contribution of each memorized sample: i) Rank the memorized data according to the sum of the two factors (_i.e.,_ normalized gradient similarity and loss value), and simply pick the top-$F$ memorized data as informative denoising signals. To keep the picked data clean, we set $F$ to half the size of the memorized data. ii) Choose a one-layer MLP as the scheduler, and train the scheduler with the strategy presented in Section 3.2.2. iii) Select the LSTM as the scheduler, and leverage the historical factors to predict the sampling probabilities. We report the performance of the three sampling approaches with Recall@20 and NDCG@20 in Table 5, while Recall@5 and NDCG@5 are omitted due to limited space. We have the following observations:

* • By simply choosing the top-$F$ memorized data with the highest sums as denoising signals, the performance drops and becomes even worse than normal training.
We attribute such degradation to the non-linear correlation between the two factors: a large loss value does not necessarily mean the memorized sample is beneficial to denoising, as it can also be noisy if the gradient similarity is small. Directly summing up the two factors is unable to properly capture the characteristics of memorized data, and thus fails to provide reliable denoising signals for self-guided learning.
* • The scheduler with LSTM consistently achieves the best performance across the datasets. This is because the LSTM can leverage the historical factor information to predict more accurate and stable sampling probabilities for memorized data. The results are consistent with previous robust learning studies (Chang et al., 2017; Yao et al., 2021).

Table 5. Impact of the design of the adaptive denoising scheduler.

Database | Adressa | MovieLens | Yelp
---|---|---|---
Base Model | ADS | R@20 | N@20 | R@20 | N@20 | R@20 | N@20
NeuMF | top-$F$ | 0.3106 | 0.1766 | 0.2590 | 0.2705 | 0.0386 | 0.0212
 | MLP | 0.3257 | 0.1842 | 0.2749 | 0.2841 | 0.0392 | 0.0225
 | LSTM | 0.3291 | 0.1853 | 0.2844 | 0.3032 | 0.0469 | 0.0260
LightGCN | top-$F$ | 0.1810 | 0.0985 | 0.3159 | 0.3349 | 0.0909 | 0.0513
 | MLP | 0.2066 | 0.0103 | 0.3284 | 0.3427 | 0.0912 | 0.0512
 | LSTM | 0.2105 | 0.1178 | 0.3335 | 0.3513 | 0.0918 | 0.0525

Figure 5. (a) Sample weight distribution on the MovieLens dataset. (b) Learned weighting function on the MovieLens dataset.

### 4.4. Learned Weights of SGDL (RQ3)

In this section, we visualize the learned weights of SGDL to offer an intuitive impression of our denoising performance. Specifically, we train NeuMF with SGDL on the MovieLens dataset, and plot the learned weight distribution _w.r.t._ clean and noisy interactions as well as their loss values in Figure 5.
We find that:

* • The left panel of Figure 5 indicates that almost all large weights belong to clean interactions, and the weights of noisy interactions are much smaller than those of clean ones, meaning that SGDL can differentiate clean implicit feedback from noisy feedback.
* • The learned weighting function of SGDL in the right panel of Figure 5 shows that when the loss is relatively small, the weighting function inclines to increase the weight together with the loss, indicating that it tends to emphasize clean data for learning user preference. As the loss grows larger, the weighting function first remains unchanged and then begins to dramatically decrease the weight, implying that it first highlights hard yet clean interactions with large weights and then suppresses noisy interactions. Therefore, SGDL is able to automatically locate hard interactions for better learning of user preferences.

## 5\. Related Work

Existing recommender systems are typically trained with implicit feedback. Recently, some studies (Jagerman et al., 2019; Wang et al., 2021a, b; Lee et al., 2021) have noticed that implicit feedback can be easily corrupted by different factors (_e.g.,_ popularity bias (Chen et al., 2020) and unawareness of users’ behaviors (Hu et al., 2008)), and the inevitable noise can dramatically degrade recommendation performance (Wang et al., 2021b, a, 2022; Tan et al., 2022). As a result, some efforts have been dedicated to solving the noisy implicit feedback problem, which can be categorized into sample selection methods (Gantner et al., 2012; Park and Chang, 2019; Ding et al., 2019b; Yu and Qin, 2020; Wang et al., 2021b) and sample re-weighting methods (Wang et al., 2021a; Hu et al., 2021; Wang et al., 2022).

Sample Selection. A simple idea to denoise implicit feedback is to select clean and informative samples only, and train the recommendation model with them.
For example, WBPR (Gantner et al., 2012) considers that the missing interactions of popular items are highly likely to be real negative examples, and hence assigns higher sampling probabilities to them. IR (Wang et al., 2021b) iteratively generates pseudo-labels for user preferences based on the difference between labels and predictions, to discover the noisy-positive and noisy-negative examples. Nonetheless, their performance has high variance since they heavily depend on the sampling distribution (Yuan et al., 2018).

Sample Re-weighting. On the other hand, loss-based methods focus on the learning process of models (_e.g.,_ loss values and predictions) to distinguish noisy interactions from clean data. For instance, T-CE (Wang et al., 2021a) dynamically assigns lower weights to high-loss samples since it has been shown that noisy examples tend to have larger loss values. DeCA (Wang et al., 2022) develops an ensemble method to minimize the KL-divergence between two models’ predictions, under the assumption that different models make relatively similar predictions on clean examples. Although these methods achieve promising results without additional data, they heavily rely on predefined loss functions and hyperparameters, incurring poor generalization across different recommendation models.

Other Directions. Some recent studies consider using additional information (Yi et al., 2014; Xie et al., 2020; Bian et al., 2021) or designing model-specific structures (Yang et al., 2021; Chen et al., 2021b; Wu et al., 2021) to improve the robustness of recommender systems. For instance, DFN (Xie et al., 2020) proposes a feedback interaction component to extract clean and useful information from noisy feedback with additional explicit feedback (_e.g.,_ like and dislike). SGCN (Chen et al., 2021b) treats user-item interactions as a bipartite graph, and devises a learnable regularization module to preserve the sparsity and low rank of the graph structure.
SGL (Wu et al., 2021) advances graph-based recommender systems with self-supervised learning by employing graph structure augmentations. However, these methods suffer from poor generalization, since they either need additional information (_e.g.,_ explicit feedback) as guidance to denoise implicit feedback (Xie et al., 2020; Bian et al., 2021), or are only applicable to specific data structures (_e.g.,_ the user-item bipartite graph) (Chen et al., 2021b; Yang et al., 2021; Wu et al., 2021).

Difference from Existing Work. Our work can be seen as a variant of self-training (Zoph et al., 2020), which leverages self-labeled memorized data to enhance the robustness of the training process. Compared with existing work, SGDL is in an orthogonal direction, opening up a new research line of denoising implicit feedback for recommendation. The most relevant work to ours is MORPH (Song et al., 2021). However, it tackles the classification problem in computer vision, and only uses the memorized data for training, which is not feasible in ranking-based recommendation since hard yet clean data may not be memorized without clear guidance. Compared with previous methods, SGDL is a truly model-agnostic framework, which can be easily applied to any learning-based recommendation model with any ranking loss function, and does not need to predefine any weighting functions.

## 6\. Conclusion and Future Work

In this paper, we present a new denoising paradigm, called SGDL, which leverages the memorization effect of recommendation models and designs two training phases to exploit the self-labeled memorized data as guidance for denoising learning. Extensive experiments conducted on three real-world datasets with four representative recommendation models demonstrate the superiority and universality of SGDL. In the future, we plan to jointly explore the noise and bias existing in implicit feedback to develop a universal denoising and debiasing solution.

###### Acknowledgements.
This work was supported by the NSFC under Grants No. (62025206, 61972338, and 62102351). Lu Chen is the corresponding author of the work.

## References

* Arazo et al. (2019) Eric Arazo, Diego Ortego, Paul Albert, Noel O’Connor, and Kevin McGuinness. 2019. Unsupervised label noise modeling and loss correction. In _ICML_. 312–321.
* Arpit et al. (2017) Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. 2017. A Closer Look at Memorization in Deep Networks. In _ICML_. 233–242.
* Bian et al. (2021) Zhi Bian, Shaojun Zhou, Hao Fu, Qihong Yang, Zhenqi Sun, Junjie Tang, Guiquan Liu, Kaikui Liu, and Xiaolong Li. 2021. Denoising User-aware Memory Network for Recommendation. In _RecSys_. 400–410.
* Chang et al. (2017) Haw-Shiuan Chang, Erik Learned-Miller, and Andrew McCallum. 2017. Active bias: Training more accurate neural networks by emphasizing high variance samples. In _NeurIPS_. 1002–1012.
* Chen et al. (2021b) Huiyuan Chen, Lan Wang, Yusan Lin, Chin-Chia Michael Yeh, Fei Wang, and Hao Yang. 2021b. Structured Graph Convolutional Networks with Stochastic Masks for Recommender Systems. In _SIGIR_. 614–623.
* Chen et al. (2021a) Jiawei Chen, Hande Dong, Yang Qiu, Xiangnan He, Xin Xin, Liang Chen, Guli Lin, and Keping Yang. 2021a. AutoDebias: Learning to Debias for Recommendation. In _SIGIR_. 21–30.
* Chen et al. (2020) Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2020. Bias and debias in recommender system: A survey and future directions. In _arXiv preprint arXiv:2010.03240_.
* Chen et al. (2019) Pengfei Chen, Ben Ben Liao, Guangyong Chen, and Shengyu Zhang. 2019. Understanding and utilizing deep neural networks trained with noisy labels. In _ICML_. 1062–1070.
* Cremonesi et al. (2010) Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. 2010. Performance of recommender algorithms on top-n recommendation tasks. In _RecSys_. 39–46.
* Csáji (2001) Balázs Csanád Csáji. 2001. Approximation with artificial neural networks. _Faculty of Sciences, Eötvös Loránd University, Hungary_ 24, 48 (2001).
* Ding et al. (2019a) Jingtao Ding, Yuhan Quan, Xiangnan He, Yong Li, and Depeng Jin. 2019a. Reinforced Negative Sampling for Recommendation with Exposure Data. In _IJCAI_. 2230–2236.
* Ding et al. (2019b) Jingtao Ding, Guanghui Yu, Xiangnan He, Fuli Feng, Yong Li, and Depeng Jin. 2019b. Sampler design for bayesian personalized ranking by leveraging view data. _TKDE_ 33 (2019), 667–681.
* Du et al. (2022a) Yuntao Du, Xinjun Zhu, Lu Chen, Ziquan Fang, and Yunjun Gao. 2022a. MetaKG: Meta-learning on Knowledge Graph for Cold-start Recommendation. _TKDE_ (2022).
* Du et al. (2022b) Yuntao Du, Xinjun Zhu, Lu Chen, Baihua Zheng, and Yunjun Gao. 2022b. HAKG: Hierarchy-Aware Knowledge Gated Network for Recommendation. In _SIGIR_.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In _ICML_. 1126–1135.
* Gantner et al. (2012) Zeno Gantner, Lucas Drumond, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2012. Personalized ranking for non-uniformly sampled items. In _KDD Cup_. 231–247.
* Gharibshah et al. (2020) Zhabiz Gharibshah, Xingquan Zhu, Arthur Hainline, and Michael Conway. 2020. Deep learning for user interest and response prediction in online display advertising. _Data Science and Engineering_ 5, 1 (2020), 12–26.
* Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In _AISTATS_. 249–256.
* He et al. (2020) Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, YongDong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In _SIGIR_. 639–648.
* He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In _WWW_. 173–182.
* He et al. (2016) Xiangnan He, Hanwang Zhang, Min-Yen Kan, and Tat-Seng Chua. 2016. Fast matrix factorization for online recommendation with implicit feedback. In _SIGIR_. 549–558.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural computation_ 9 (1997), 1735–1780.
* Hu et al. (2021) Kaixi Hu, Lin Li, Qing Xie, Jianquan Liu, and Xiaohui Tao. 2021. What is Next When Sequential Prediction Meets Implicitly Hard Interaction?. In _CIKM_. 710–719.
* Hu et al. (2008) Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In _ICDM_. 263–272.
* Jagerman et al. (2019) Rolf Jagerman, Harrie Oosterhuis, and Maarten de Rijke. 2019. To model or to intervene: A comparison of counterfactual and online learning to rank from user interactions. In _SIGIR_. 15–24.
* Jang et al. (2017) Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In _ICLR_.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In _ICLR_.
* Lee et al. (2021) Dongha Lee, SeongKu Kang, Hyunjun Ju, Chanyoung Park, and Hwanjo Yu. 2021. Bootstrapping User and Item Representations for One-Class Collaborative Filtering. In _SIGIR_. 317–326.
* Li et al. (2020) Junnan Li, Richard Socher, and Steven CH Hoi. 2020. DivideMix: Learning with noisy labels as semi-supervised learning. In _ICLR_.
* Lin et al. (2019) Tzu-Heng Lin, Chen Gao, and Yong Li. 2019. Cross: Cross-platform recommendation for social e-commerce. In _SIGIR_. 515–524.
* Liu and Tao (2015) Tongliang Liu and Dacheng Tao. 2015. Classification with noisy labels by importance reweighting. _PAMI_ 38, 3 (2015), 447–461.
* Park and Chang (2019) Dae Hoon Park and Yi Chang. 2019. Adversarial sampling and training for semi-supervised information retrieval. In _WWW_. 1443–1453.
* Ren et al. (2017) Zhaochun Ren, Shangsong Liang, Piji Li, Shuaiqiang Wang, and Maarten de Rijke. 2017. Social Collaborative Viewpoint Regression with Explainable Recommendations. In _WSDM_. 485–494.
* Rendle et al. (2009) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian Personalized Ranking from Implicit Feedback. In _UAI_. 452–461.
* Shu et al. (2019) Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. 2019. Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting. In _NeurIPS_. 1919–1930.
* Song et al. (2021) Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. 2021. Robust Learning by Self-Transition for Handling Noisy Labels. In _KDD_. 1490–1500.
* Tan et al. (2022) Yanchao Tan, Carl Yang, Xiangyu Wei, Ziyue Wu, and Xiaolin Zheng. 2022. Partial Relaxed Optimal Transport for Denoised Recommendation. In _arXiv preprint arXiv:2204.08619_.
* Wang et al. (2018) Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. DKN: Deep knowledge-aware network for news recommendation. In _WWW_. 1835–1844.
* Wang et al. (2021a) Wenjie Wang, Fuli Feng, Xiangnan He, Liqiang Nie, and Tat-Seng Chua. 2021a. Denoising implicit feedback for recommendation. In _WSDM_. 373–381.
* Wang et al. (2019) Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural graph collaborative filtering. In _SIGIR_. 165–174.
* Wang et al. (2022) Yu Wang, Xin Xin, Zaiqiao Meng, Xiangnan He, Joemon Jose, and Fuli Feng. 2022. Learning Robust Recommenders through Cross-Model Agreement. In _WWW_.
* Wang et al. (2021b) Zitai Wang, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, and Qingming Huang. 2021b. Implicit Feedbacks are Not Always Favorable: Iterative Relabeled One-Class Collaborative Filtering against Noisy Interactions. In _MM_. 3070–3078.
* Wu et al.
(2021) Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. 2021. Self-supervised graph learning for recommendation. In _SIGIR_. 726–735. * Wu et al. (2018) Lijun Wu, Fei Tian, Yingce Xia, Yang Fan, Tao Qin, Lai Jian-Huang, and Tie-Yan Liu. 2018. Learning to teach with dynamic loss functions. In _NeurIPS_. 6466–6477. * Wu et al. (2016) Yao Wu, Christopher DuBois, Alice X Zheng, and Martin Ester. 2016. Collaborative denoising auto-encoders for top-n recommender systems. In _WSDM_. 153–162. * Xie et al. (2020) Ruobing Xie, Cheng Ling, Yalong Wang, Rui Wang, Feng Xia, and Leyu Lin. 2020\. Deep Feedback Network for Recommendation.. In _IJCAI_. 2519–2525. * Yang et al. (2021) Yonghui Yang, Le Wu, Richang Hong, Kun Zhang, and Meng Wang. 2021. Enhanced Graph Learning for Collaborative Filtering via Mutual Information Maximization. In _SIGIR_. 71–80. * Yao et al. (2021) Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, and Chelsea Finn. 2021. Meta-learning with an Adaptive Task Scheduler. In _NeurIPS_. * Yi et al. (2014) Xing Yi, Liangjie Hong, Erheng Zhong, Nanthan Nan Liu, and Suju Rajan. 2014. Beyond clicks: dwell time for personalization. In _RecSys_. 113–120. * Yu and Qin (2020) Wenhui Yu and Zheng Qin. 2020. Sampler design for implicit feedback data by noisy-label robust learning. In _SIGIR_. 861–870. * Yuan et al. (2018) Fajie Yuan, Xin Xin, Xiangnan He, Guibing Guo, Weinan Zhang, Tat-Seng Chua, and Joemon M Jose. 2018. fBGD: Learning Embeddings From Positive Unlabeled Data with BGD. In _UAI_. * Zoph et al. (2020) Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. 2020. Rethinking Pre-training and Self-training. In _NeurIPS_. 3833–3845.
# SMCL: SALIENCY MASKED CONTRASTIVE LEARNING FOR LONG-TAILED VISUAL RECOGNITION ###### Abstract Real-world data often follow a long-tailed distribution with a high imbalance in the number of samples between classes. The problem with training from imbalanced data is that some background features, common to all classes, can go unobserved in classes with scarce samples. As a result, these background features become correlated with the “major” classes and bias predictions towards them. In this paper, we propose saliency masked contrastive learning, a new method that uses saliency masking and contrastive learning to mitigate the problem and improve the generalizability of a model. Our key idea is to mask the important part of an image using saliency detection and use contrastive learning to move the masked image towards minor classes in the feature space, so that background features present in the masked image are no longer correlated with the original class. Experimental results show that our method achieves state-of-the-art level performance on benchmark long-tailed datasets. Index Terms— Long-tailed learning, contrastive learning, saliency, masking, vision recognition, data augmentation ## 1 Introduction (a) major (bird) (b) minor (horse) (c) saliency masked contrastive learning. Fig. 1: (a): The image and its CAM of a major class sample (bird). (b): The image and its CAMs of a minor class sample (horse), misclassified as “bird” by the classifier. The middle image is the CAM for the predicted label (bird), and the right image is the CAM for the true label (horse). (c): Illustration of saliency masked contrastive learning. Deep neural networks have shown remarkable performance across many computer vision tasks such as image recognition and object detection. It is well known that outstanding performance results from training with a large amount of labeled data. However, many real-world datasets follow a long-tailed distribution, where the number of samples in each class differs greatly.
In these datasets, the “major” classes have a large number of samples while the “minor” classes have only a few samples in each class. When a model is trained with a long-tailed dataset, the imbalance between classes hinders the model from correctly learning discriminative features. We focus on the problem where a background feature that should be common to all classes is mistakenly correlated with a particular class. While this problem also occurs in balanced datasets, its effect is amplified in long-tailed datasets due to the lack of diversity in minor classes. For example, background features such as seas and trees should not affect class prediction, yet they may be observed only in some major classes and not in minor classes. This skewed observation mistakenly ties such backgrounds to major-class predictions. Examples in Fig.1a and 1b show the “biased background” problem of a model trained from imbalanced data. The image of a horse (a minor class) in Fig.1b is misclassified as a bird (a major class) by the model. Fig.1b shows two class activation maps (CAMs) obtained from the model. The left CAM is associated with the predicted label “bird”, in which the grass area is highlighted. On the other hand, the right CAM is associated with the true label “horse”, where the body of the animal is given focus. In other words, the model thinks the body looks like a horse, but the grass part is a clue that this image is a bird. Since the background feature (grass) is strongly attached to the bird class, the model outputs “bird” as the final decision. The bias is also observed in the CAM image of a bird in Fig.1a, in which the green area around the bird is highlighted, although the area is not related to the bird itself. Our goal is to move the representation of background features presented in major classes towards minor classes in feature space so that the features are shared with minor classes and no longer biased towards major classes.
To this end, we propose a new method called saliency masked contrastive learning (SMCL). As illustrated in Fig.1c, the idea is to mask the salient part of an image so that only the background part remains, and use contrastive learning to pull the masked sample towards minor classes in feature space. While the source image is selected from the original long-tailed distribution, the target class where the masked image moves to is selected from a distribution that gives higher weights to minor classes. Previous methods exist where data augmentation is used to transfer the rich context features from major classes to minor classes [1]. Our key difference is that we use contrastive learning with masked data in order to move background features towards minor classes in feature space, compared to previous practices where augmented data are assigned labels and trained with cross-entropy-only loss. As discussed in experiments and ablation studies, applying contrastive learning is more effective than training with a cross-entropy-only loss. Experimental results show that the proposed method achieves comparable performance with the current state-of-the-art on the benchmark long-tailed datasets. ## 2 Related Work #### Long-tailed Recognition. Long-tailed recognition is an important task since many practical datasets are imbalanced. Initial approaches for long-tailed learning include re-sampling [1] where more samples are selected from minor classes to achieve a balance between classes, and re-weighting [2] where the loss function is adjusted to give more weight to minor class losses. Many variations of re-sampling and re-weighting techniques have been proposed [3, 4, 5, 6], but they do not address the problem of biased background features. Mixup methods [7, 8, 1, 9, 10] can help improve long-tailed learning by mixing two images of different classes and assigning mixed labels.
However, they do not apply contrastive learning with mixed samples in order to obtain a better representation. #### Supervised Contrastive Learning. Supervised contrastive learning (SCL) [11, 12] pulls together samples belonging to the same class and pushes apart samples from different classes in the feature space. Recently, SCL has been applied to long-tailed learning to maintain a uniform distribution in the feature space and improve class boundaries [13, 14]. For example, TSC [14] generates uniformly distributed targets on a hypersphere and uses contrastive learning to make the features of different classes converge to the targets. Different from previous schemes, our proposed method uses supervised contrastive learning so that the masked data containing background features are pulled towards minor class samples in the feature space. ## 3 Proposed Method An overview of the proposed method SMCL is shown in Fig.2. SMCL involves saliency masking to produce background images, weighted sampling to select target labels giving priority to minor classes, and saliency masked contrastive learning to pull the background image towards minor classes in the feature space. ### 3.1 Saliency Masking In order to mask discriminative features that belong to a certain class, we use saliency detection methods. Similarly to [8], we use the saliency function $S(\cdot)$ and find the pixel $i$, $j$ that has the maximum value: $\begin{array}[]{cc}i,j=\mathrm{argmax}(S(x)),\end{array}$ (1) where $x$ is a sample in the training set $\mathcal{D}$. Then, following the convention [7], we mask a region whose size is drawn from a beta distribution $\mathrm{Beta}(\alpha,\alpha)$, centered at ($i$, $j$). ### 3.2 Weighted Sampling For each sample in the training set, we draw a target sample to be used in masked contrastive learning.
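As a rough sketch (not the authors' released code), the saliency-masking step of Eq. (1) combined with the CutMix-style Beta-sized region can be written as follows; `saliency_map` stands in for any saliency detector $S(\cdot)$, and filling the masked region with zeros is our assumption:

```python
import numpy as np

def saliency_mask(x, saliency_map, alpha=1.0, rng=None):
    """Mask a rectangle centred on the saliency peak of image x (C, H, W)."""
    rng = rng or np.random.default_rng()
    _, H, W = x.shape
    # Eq. (1): pixel (i, j) with the maximum saliency value.
    i, j = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # CutMix convention: masked-area fraction lam ~ Beta(alpha, alpha).
    lam = rng.beta(alpha, alpha)
    h, w = int(H * np.sqrt(lam)), int(W * np.sqrt(lam))
    top, left = max(i - h // 2, 0), max(j - w // 2, 0)
    bottom, right = min(top + h, H), min(left + w, W)
    x_m = x.copy()
    x_m[:, top:bottom, left:right] = 0.0  # remove the salient (class-specific) region
    A = (bottom - top) * (right - left) / (H * W)  # mask proportion A of Eqs. (5), (7)
    return x_m, A
```

Returning the masked-area fraction alongside the image lets the same value of $A$ weight both loss terms later on.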
In order to increase the chance of selecting a minor class label, we follow the sampling strategy [1] based on the effective number calculated from class size [2]: $E_{k}=\frac{(1-\beta^{n_{k}})}{(1-\beta)},$ (2) where $n_{k}$ is the number of samples in $k$th class, $N=\sum_{k}{n_{k}}$, and $\beta=(N-1)/N$. Based on $E_{k}$, the sampling probability of class $k$ is: $p_{k}=\frac{1/E_{k}}{\sum_{k}{1/E_{k}}},$ (3) which leads to a higher sampling probability for the minor classes. The “minor weighted” distribution $\tilde{\mathcal{D}}$ is a distribution resulting from sampling based on $p_{k}$. ### 3.3 Saliency Masked Contrastive Learning The key idea of SMCL is to transfer a background feature from a major class to a minor class. For an image $x$ in a batch, we sample a target image $\tilde{x}$ from the minor weighted distribution $\tilde{\mathcal{D}}$. Then we use basic augmentation schemes such as random crop and random flip to create two versions of each sample, $x_{1}$, $x_{2}$, $\tilde{x_{1}}$, $\tilde{x_{2}}$. Finally, we use saliency masking to generate the masked image $x_{m}$. Inserting the five samples into the model, we obtain their logits $O$ and feature vectors $F$ $\begin{array}[]{cc}O,F=J([x,\tilde{x},x_{m}]),\end{array}$ (4) where $O=[o_{1},o_{2},\tilde{o}_{1},\tilde{o}_{2},o_{m}]$ and $F=[f_{1},f_{2},\tilde{f}_{1},\tilde{f}_{2},f_{m}]$. For the loss function, we use a combination of the cross-entropy loss and the contrastive loss. The cross-entropy loss $\mathcal{L}_{MCE}$ is calculated as: $\mathcal{L}_{MCE}=(1-A)\cdot\mathcal{L}_{CE}(o,y)+A\cdot\mathcal{L}_{CE}(\tilde{o},\tilde{y}),$ (5) where $o=[o_{1},o_{2},o_{m}]$, $\tilde{o}=[\tilde{o}_{1},\tilde{o}_{2},o_{m}]$, $\mathcal{L}_{CE}(o,y)$ is the cross-entropy between the logits $o$ and their true labels $y$, and $A$ indicates the proportion of the masked region. 
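The minor-weighted sampling of Eqs. (2)-(3) can be sketched as below; the class-count vector is a hypothetical example, and NumPy stands in for the actual training framework:

```python
import numpy as np

def minor_weighted_probs(class_counts):
    """Per-class target-sampling probabilities of Eqs. (2)-(3)."""
    n = np.asarray(class_counts, dtype=float)
    N = n.sum()
    beta = (N - 1.0) / N
    E = (1.0 - beta ** n) / (1.0 - beta)   # Eq. (2): effective number per class
    p = (1.0 / E) / np.sum(1.0 / E)        # Eq. (3): inverse-effective-number weights
    return p

# Hypothetical long-tailed counts: the rarest class receives the largest
# probability of being drawn as the contrastive target.
p = minor_weighted_probs([5000, 500, 50])
```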
Next, we define the supervised contrastive loss $\mathcal{L}_{SC}$ as: $\mathcal{L}_{SC}=-\frac{1}{|B_{y}|-1}\sum_{p\in B_{y}\setminus\\{z\\}}\log{\frac{\exp(f_{z}\cdot f_{p}/\tau)}{\sum_{k\in B\setminus\\{z\\}}\exp(f_{z}\cdot f_{k}/\tau)}},$ (6) where $B$ is the batch, $B_{y}$ is the set of samples in the batch that has the label $y$, $z$ is the source image, $f$ is the feature vector, and $\tau$ is a temperature hyperparameter. We use a mixed contrastive loss $\mathcal{L}_{MSC}$: $\mathcal{L}_{MSC}=(1-A)\cdot\mathcal{L}_{SC}(f,y)+A\cdot\mathcal{L}_{SC}(\tilde{f},\tilde{y}),$ (7) where $f=[f_{1},f_{2},f_{m}]$, $\tilde{f}=[\tilde{f}_{1},\tilde{f}_{2},f_{m}]$, and $\mathcal{L}_{SC}(f,y)$ is the supervised contrastive loss when $f$ is the set of feature vectors and $y$ is the set of true labels associated with the feature vectors. Finally, our loss function combines the cross-entropy loss and the contrastive loss, where $\lambda$ and $\mu$ are hyperparameters. $\mathcal{L}=\lambda\cdot\mathcal{L}_{MCE}+\mu\cdot\mathcal{L}_{MSC}.$ (8) Fig. 2: Overview of the proposed framework. ## 4 Experiments ### 4.1 Datasets #### CIFAR-10-LT and CIFAR-100-LT. CIFAR-LT datasets are constructed from CIFAR [15] by adjusting the dataset into a long-tailed shape. In our evaluations, we consider three different imbalance ratios $\rho\in\\{10,50,100\\}$ for both CIFAR-10 and CIFAR-100. #### ImageNet-LT. ImageNet-LT is generated by selecting the long-tailed subset from ImageNet-2012 [16]. The imbalance ratio $\rho$ is 256, and the dataset contains 115.8K images from 1,000 categories. ### 4.2 Implementation Settings #### CIFAR-10-LT and CIFAR-100-LT. We train ResNet-32 [17] for 200 epochs using stochastic gradient descent (SGD) with a batch size of 256, a momentum of 0.9, and a weight decay of 2e-4. The initial learning rate is 0.1 and we decay the learning rate at epochs 160 and 180 by 0.1 [3]. We use CutOut [18] and SimAugment [11] as augmentation strategies.
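The training objective of Eqs. (5)-(8) can be sketched on pre-computed feature vectors as follows; this is our NumPy illustration (the loop-based form and the input shapes are assumptions, not the authors' implementation), with the CIFAR-LT hyperparameters $\lambda=1.0$, $\mu=0.3$ as defaults:

```python
import numpy as np

def sup_con_loss(feats, labels, tau=0.1):
    """Supervised contrastive loss of Eq. (6), averaged over anchors z."""
    feats = np.asarray(feats, dtype=float)
    sim = feats @ feats.T / tau
    np.fill_diagonal(sim, -np.inf)  # enforce k != z in the denominator sum
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    per_anchor = []
    for z, y in enumerate(labels):
        pos = [p for p in range(len(labels)) if labels[p] == y and p != z]
        if pos:  # -1/(|B_y|-1) times the sum over positives p in B_y \ {z}
            per_anchor.append(-np.mean(log_prob[z, pos]))
    return float(np.mean(per_anchor))

def mixed_loss(loss_src, loss_tgt, A):
    """Eqs. (5) and (7): interpolate source/target losses by the masked area A."""
    return (1.0 - A) * loss_src + A * loss_tgt

def total_loss(l_mce, l_msc, lam=1.0, mu=0.3):
    """Eq. (8), with the CIFAR-LT hyperparameters of Sec. 4.2 as defaults."""
    return lam * l_mce + mu * l_msc
```

The same `mixed_loss` interpolation serves both the cross-entropy term of Eq. (5) and the contrastive term of Eq. (7), since both weight the source and target losses by $1-A$ and $A$.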
Saliency masking is applied with a probability of 0.2 starting from epoch 160. The hyperparameters $\lambda$ and $\mu$ are empirically tuned to $\lambda=1.0$ and $\mu=0.3$. #### ImageNet-LT. We train ResNet-50 for 100 epochs using SGD with a batch size of 256, a momentum of 0.9, and a weight decay of 5e-4. The initial learning rate is set to 0.1 and follows cosine scheduling. We use RandAug and SimAugment as the augmentation strategies. Saliency masking is applied with a probability of 0.4. Both augmentation and re-weighting are applied from epoch 80 [3]. The hyperparameters $\lambda$ and $\mu$ are empirically tuned to $\lambda=1.0$ and $\mu=0.35$. The results from Mixup and CutMix are obtained following the same setup from CMO [1]. ### 4.3 Experimental Results We compare the performance of SMCL with previous methods [3, 2, 19, 6, 20], including mixup augmentation based methods [9, 7, 10, 1], and supervised contrastive learning based methods [12, 14, 13]. Results on CIFAR-10/100-LT using different imbalance ratios are shown in Table 1. SMCL outperforms previous methods on CIFAR-10/100-LT when applied with deferred re-weighting (DRW) [3]. Compared with baseline ERM and Supervised Contrastive Learning (SCL), SMCL achieves 6.2% and 5.1% improvements in CIFAR-100-LT (100), respectively. Furthermore, SMCL significantly outperforms similar augmentation methods, such as Remix [10] and CMO [1]. SMCL also shows consistent improvements even when compared with SCL based methods.
Dataset | CIFAR-100-LT | CIFAR-10-LT
---|---|---
Imbalance Ratio | 100 | 50 | 10 | 100 | 50 | 10
ERM† | 38.3 | 43.9 | 55.7 | 70.4 | 74.8 | 86.4
DRW† | 41.5 | 45.3 | 58.1 | 76.3 | 80.0 | 87.6
LDAM-DRW† [3] | 42.0 | 46.6 | 58.7 | 77.0 | 81.0 | 88.2
CB Focal [2] | 39.6 | 45.3 | 58.0 | 74.6 | 79.3 | 87.5
BBN† [19] | 42.6 | 47.0 | 59.1 | 79.8 | 82.2 | 88.3
MiSLAS$\ast$ [6] | 47.0 | 52.3 | 63.2 | 82.1 | 85.7 | 90.0
ResLT$\ast$ [20] | 48.2 | 52.7 | 62.0 | 82.4 | 85.2 | 89.7
DRW+Mixup [9] | 45.3 | 49.7 | 60.3 | 81.1 | 83.8 | 89.1
DRW+CutMix [7] | 46.4 | 51.2 | 62.1 | 81.8 | 85.1 | 89.7
ReMix$\ast$ [10] | 46.8 | - | 61.2 | 79.8 | - | 89.0
DRW+CMO [1] | 46.4 | 51.5 | 61.7 | 81.7 | 84.9 | 89.6
SCL | 39.4 | 44.4 | 56.7 | 72.2 | 76.8 | 87.1
SCL-DRW | 41.7 | 46.8 | 57.9 | 75.4 | 79.3 | 87.9
TSC$\ast$ [14] | 43.8 | 47.4 | 59.0 | 79.7 | 82.9 | 88.7
Hybrid-SC$\ast$ [13] | 46.7 | 51.9 | 63.1 | 81.4 | 85.4 | 91.1
SMCL (Ours) | 44.5 | 49.8 | 61.8 | 80.2 | 84.3 | 90.3
DRW+SMCL (Ours) | 50.1 | 54.8 | 64.1 | 84.2 | 86.8 | 91.0

Table 1: Comparison of classification accuracy (%) on the CIFAR-10/100-LT datasets using ResNet-32. $\ast$ and † are results from the original papers and [19], respectively.

Methods | All | Many | Med | Few
---|---|---|---|---
ERM† | 41.6 | 64.0 | 33.8 | 5.8
Decouple-cRT† [4] | 47.3 | 58.8 | 44.0 | 26.1
Decouple-LWS† [4] | 47.7 | 57.1 | 45.2 | 29.3
DRW | 49.9 | 61.2 | 47.1 | 29.2
LDAM-DRW | 50.3 | 61.7 | 47.0 | 30.6
Balanced Softmax [5] | 50.6 | 60.1 | 48.4 | 32.6
DRW+Mixup | 49.3 | 60.7 | 46.4 | 28.7
DRW+CutMix | 50.3 | 60.7 | 47.6 | 31.7
DRW+CMO | 50.8 | 61.2 | 48.7 | 30.1
SCL | 45.9 | 69.0 | 38.7 | 8.2
SCL-DRW | 49.4 | 61.6 | 47.2 | 24.1
TSC$\ast$ | 52.4 | 63.5 | 49.7 | 30.4
SMCL (Ours) | 46.0 | 67.9 | 39.1 | 10.3
DRW+SMCL (Ours) | 51.4 | 60.4 | 49.7 | 32.8

Table 2: Comparison of classification accuracy (%) on the ImageNet-LT dataset using ResNet-50. $\ast$ and † are results from the original papers and [4], respectively.
Results for ImageNet-LT are reported in Table 2. The table shows the accuracy of three subsets: many shots ($>$100 samples), medium shots (20-100 samples), and few shots ($<$20 samples). SMCL shows comparable performance with the state-of-the-art [14], showing 2.4% better performance on the few-shot subset. Since ImageNet-LT is composed of 38.5% many, 46.9% medium, and 14.6% few classes, improved performance on few-shot classes has relatively little impact on overall performance. However, it is noteworthy that applying SMCL with DRW outperforms most of the existing methods, especially achieving considerable gains in few-shot accuracy. ### 4.4 Ablation Study #### Impact of Saliency Masking. To show the impact of saliency masking, we tried masking images with two other methods: masking a random area and masking the center. As shown in Table 3, covering the center of an image achieved 0.3% higher accuracy than random masking. Since many images have their core feature at the center, masking the center area is more useful than masking a random area. Saliency masking showed the highest result, approximately 0.3% higher than center masking. This result shows that saliency masking is indeed effective in selecting background features from an image.

 | Random | Center | Saliency
---|---|---|---
Acc. (%) | 49.56 | 49.82 | 50.14

Table 3: Performance of saliency masking compared with other masking methods on CIFAR-100-LT ($\rho=100$). #### Effect of Contrastive Learning. To evaluate the effect of saliency masked contrastive learning, we compared the performance of models with and without contrastive learning. In Table 4, “Cross Entropy” indicates the accuracy when the model is trained with only the cross-entropy loss. The results show that applying contrastive learning is effective in improving model performance.
 | SMCL | DRW+SMCL
---|---|---
Cross Entropy | 43.57 | 49.12
Supervised Contrastive Learning | 44.49 | 50.14

Table 4: Performance of applying saliency masked contrastive learning compared with cross-entropy-only learning on CIFAR-100-LT ($\rho=100$). ## 5 Conclusion We proposed SMCL, a saliency masked contrastive learning method that achieves high accuracy on long-tailed datasets. SMCL uses saliency masking to obtain background features from images and applies contrastive learning to move their embeddings towards minor classes so that they are detached from major classes. The proposed method is simple to implement, yet achieves state-of-the-art level performance on benchmark long-tailed datasets such as CIFAR-10/100-LT and ImageNet-LT. ## References * [1] Seulki Park, Youngkyu Hong, Byeongho Heo, Sangdoo Yun, and Jin Young Choi, “The majority can help the minority: Context-rich minority oversampling for long-tailed classification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2022. * [2] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie, “Class-balanced loss based on effective number of samples,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 9268–9277. * [3] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma, “Learning imbalanced datasets with label-distribution-aware margin loss,” Advances in Neural Information Processing Systems, vol. 32, pp. 1567–1578, 2019. * [4] Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis, “Decoupling representation and classifier for long-tailed recognition,” in International Conference on Learning Representations, 2020. * [5] Jiawei Ren, Cunjun Yu, Xiao Ma, Haiyu Zhao, Shuai Yi, et al., “Balanced meta-softmax for long-tailed visual recognition,” Advances in neural information processing systems, vol. 33, pp. 4175–4186, 2020.
* [6] Zhisheng Zhong, Jiequan Cui, Shu Liu, and Jiaya Jia, “Improving calibration for long-tailed recognition,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 16489–16498. * [7] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo, “Cutmix: Regularization strategy to train strong classifiers with localizable features,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. * [8] A F M Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, and Sung-Ho Bae, “Saliencymix: A saliency guided data augmentation strategy for better regularization,” in International Conference on Learning Representations, 2021. * [9] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz, “mixup: Beyond empirical risk minimization,” in International Conference on Learning Representations, 2018. * [10] Hsin-Ping Chou, Shih-Chieh Chang, Jia-Yu Pan, Wei Wei, and Da-Cheng Juan, “Remix: rebalanced mixup,” in European Conference on Computer Vision. Springer, 2020, pp. 95–110. * [11] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton, “A simple framework for contrastive learning of visual representations,” in International conference on machine learning. PMLR, 2020, pp. 1597–1607. * [12] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan, “Supervised contrastive learning,” Advances in Neural Information Processing Systems, vol. 33, pp. 18661–18673, 2020. * [13] Peng Wang, Kai Han, Xiu-Shen Wei, Lei Zhang, and Lei Wang, “Contrastive learning based hybrid networks for long-tailed image classification,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 943–952. 
* [14] Tianhong Li, Peng Cao, Yuan Yuan, Lijie Fan, Yuzhe Yang, Rogerio S Feris, Piotr Indyk, and Dina Katabi, “Targeted supervised contrastive learning for long-tailed recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 6918–6928. * [15] Alex Krizhevsky, Geoffrey Hinton, et al., “Learning multiple layers of features from tiny images,” University of Toronto, 2009. * [16] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015. * [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778. * [18] Terrance Devries and Graham W. Taylor, “Improved regularization of convolutional neural networks with cutout,” CoRR, vol. abs/1708.04552, 2017. * [19] Boyan Zhou, Quan Cui, Xiu-Shen Wei, and Zhao-Min Chen, “Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 9719–9728. * [20] Jiequan Cui, Shu Liu, Zhuotao Tian, Zhisheng Zhong, and Jiaya Jia, “Reslt: Residual learning for long-tailed recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
# EOS-ESTM: A flexible climate model for habitable exoplanets L. Biasiotti,1,2 P. Simonetti1,2, G. Vladilo1, L. Silva1,3, G. Murante1, S. Ivanovski1, M. Maris1, S. Monai1, E. Bisesi1, J. von Hardenberg4, A. Provenzale5 1INAF - Trieste Astronomical Observatory, Via G. B. Tiepolo 11, 34143 Trieste, Italy 2University of Trieste - Dep. of Physics, Via G. B. Tiepolo 11, 34143 Trieste, Italy 3IFPU - Institute for Fundamental Physics of the Universe, Via Beirut 2, 34014 Trieste, Italy 4Politecnico di Torino - DIATI, Corso Duca degli Abruzzi 24, 10129 Torino, Italy 5Institute of Geosciences and Earth Resources - IGG, National Research Council of Italy, 56127 Pisa, Italy E-mail<EMAIL_ADDRESS> ###### Abstract Rocky planets with temperate conditions provide the best chance for discovering habitable worlds and life outside the Solar System. In the last decades, new instrumental facilities and large observational campaigns have been driven by the quest for habitable worlds. Climate models aimed at studying the habitability of rocky planets are essential tools to capitalize on these technological and observational endeavours. In this context, we present EOS-ESTM, a fast and flexible model aimed at exploring the impact on habitability of multiple climate factors, including those unconstrained by observations. EOS-ESTM is built on ESTM, a seasonal-latitudinal energy balance model featuring an advanced treatment of the meridional and vertical transport. The novel features of EOS-ESTM include: (1) parameterizations for simulating the climate impact of oceans, land, ice, and clouds as a function of temperature and stellar zenith distance; (2) a procedure (EOS) for calculating the radiative transfer in atmospheres with terrestrial and non-terrestrial compositions illuminated by solar- and non-solar-type stars.
By feeding EOS-ESTM with Earth’s stellar, orbital and planetary parameters we derive a reference model that satisfies a large number of observational constraints of the Earth’s climate system. Validation tests of non-terrestrial conditions yield predictions that are in line with comparable results obtained with a hierarchy of climate models. The application of EOS-ESTM to planetary atmospheres in maximum greenhouse conditions demonstrates the possibility of tracking the snowball transition at the outer edge of the HZ for a variety of planetary parameters, paving the road for multi-parametric studies of the HZ. ###### keywords: astrobiology – planets and satellites: terrestrial planets – planets and satellites: atmospheres ## 1 Introduction Over the past two decades, ground- and space-based observations have unveiled thousands of exoplanets and planetary systems around other stars in our Galaxy. About $4900$ exoplanets are currently confirmed (e.g. https://exoplanets.nasa.gov/; https://exoplanetarchive.ipac.caltech.edu/), in large part detected as transits by the Kepler mission (https://www.nasa.gov/mission_pages/kepler/main/index.html; Borucki et al., 2010). Its successor, TESS (https://tess.mit.edu/; Transit Exoplanet Survey Satellite, Ricker et al., 2015), is expected to boost the detection number, while CHEOPS (https://www.esa.int/Science_Exploration/Space_Science/Cheops; CHaracterizing ExOPlanet Satellite, Broeg et al., 2018) will help to characterize the structural properties of already selected planets. In the short term, PLATO (https://sci.esa.int/web/plato/; PLAnetary Transits and Oscillations of stars, Rauer et al., 2014) will search for transiting Earth-analogues around bright stars.
The statistically relevant numbers of detected planets are allowing investigation of all aspects of planetary structure and formation in different size ranges, as a function of stellar spectral type, composition, and even stellar multiplicity. A diversity of planetary system architectures and a large range of planetary masses and/or radii have been observed, showing that the Solar System is just one possible outcome of the planetary formation process (e.g. Udry & Santos, 2007; Howard et al., 2012; Winn & Fabrycky, 2015; Kaltenegger, 2017). Observations are necessarily biased toward giant gaseous planets around late-type stars, but the ever-increasing statistics have allowed us to infer that virtually any star in our Galaxy hosts at least one planet, with the planetary size distribution suggesting a steep increase towards small rocky Earth-like planets with thin atmospheres. In fact, while planetary masses offer only a loose constraint on composition, all planets found so far at small radii, R${}_{p}\leq 1.5-2$ R⊕, are rocky (e.g. Rogers, 2015), with a gap, i.e. an almost sudden transition, between Earth-like volatile-poor and Neptune-like volatile-rich planets (e.g. Fulton et al., 2017). These studies are shifting the current research from detection and statistics to full characterization of planetary properties, with one of the main goals of exoplanetary science being the quest for life outside the Solar System. This endeavour can only be tackled through remote atmospheric spectroscopy (transit, reflection, emission and their time variations, e.g. Kreidberg, 2018) of potentially habitable rocky planets, in order to identify spectral features of biological origin. This possibility rests on the notion that the metabolic by-products of a well-developed surface life may alter the atmospheric chemistry by a measurable amount (e.g. Lovelock, 1965; Kasting et al., 2014). This observational challenge (e.g.
Fujii et al., 2018, for a review) should be partly within reach of the recently launched JWST (James Webb Space Telescope, Gardner et al., 2006; Kalirai, 2018), probably limited to nearby M-type stars (e.g. Koll et al., 2019), and, within the next decade, of the approved space mission ARIEL (Atmospheric Remote-sensing Infrared Exoplanet Large-survey, https://arielmission.space/; https://sci.esa.int/web/ariel/; Tinetti et al., 2018), although mainly for objects with warm H-dominated atmospheres. Nearby terrestrial analogues are expected to be detected with the ground-based E-ELT (https://elt.eso.org/; Snellen et al., 2015; Morley et al., 2017) equipped with the spectrograph HIRES (Maiolino et al., 2013). In the longer term, further space-based projects currently under assessment, which specifically aim to directly detect and characterize nearby temperate terrestrial analogues, will be selected, e.g. HabEX (Habitable Exoplanet Observatory, https://www.jpl.nasa.gov/habex/; Gaudi et al., 2020), LUVOIR (Large UV/Optical/IR Surveyor, https://asd.gsfc.nasa.gov/luvoir/; The LUVOIR Team, 2019), OST (Origins Space Telescope, https://origins.ipac.caltech.edu/; Wiedner et al., 2021), and LIFE (Large Interferometer For Exoplanets, https://www.life-space-mission.com/; Quanz et al., 2021). The recent decadal survey for astronomy and astrophysics (Astro2020) report (https://nap.nationalacademies.org/catalog/26141/pathways-to-discovery-in-astronomy-and-astrophysics-for-the-2020s) endorsed recommendations for a single UV/Optical/IR flagship mission that adopts a compromise concept between LUVOIR and HabEX. To accomplish the demanding task of searching for and deciphering spectral signatures, a thorough and holistic observational and theoretical characterization of carefully selected rocky exoplanets is required.
The selection, among the observationally reachable targets for high-resolution spectroscopy of thin atmospheres, requires habitability studies with climate models. These simulations will enable the identification of the exoplanets with the largest chance of potentially hosting diffuse surface life, i.e. with the largest habitability, which must be evaluated over a wide range of mostly unknown conditions. Moreover, the interpretation of any detected atmospheric features in terms of the physical status of the atmosphere, and of their biotic or abiotic origin, will unavoidably be subject to huge uncertainties and degeneracies, including false positives even for oxygen (e.g. Schwieterman et al., 2018; Meadows & Barnes, 2018). A considerable modelling effort that exploits all available observations will be needed in order to assess the global physical characterization of the selected exoplanets, and in particular of their potential surface climate and habitability. Habitability studies for exoplanets rely on the concept of the habitable zone (HZ), classically defined as the range of stellar insolation, the main driver of climate, that allows surface temperatures compatible with the long-term presence of surface liquid water for a planet with an N2–CO2–H2O atmosphere and a climate system stabilized by the carbonate-silicate feedback (Walker et al., 1981; Kasting et al., 1993; Kopparapu et al., 2013a; Kopparapu et al., 2014). In these reference works, the inner and outer edges of the HZ are defined for a H2O- and a CO2-dominated atmosphere, respectively, for an otherwise Earth-like planet orbiting stars of different spectral types. The HZ is considered a prerequisite for potentially inhabited planets with exchange of gases between the biosphere and atmosphere (Kasting et al., 2014; Schwieterman et al., 2018) and therefore for spectroscopic biosignature searches.
Actually, in addition to insolation and spectral type, a large range of (mostly unknown) climate forcing factors affects planetary surface temperature and habitability, e.g. atmospheric mass and composition, surface gravity, radius, rotation period, obliquity, and geography (e.g. Ramirez et al., 2019), in addition to the observable orbital parameters. Also, different definitions of habitability could be envisaged and calculated for an optimal selection of exoplanets. The large variety of planetary situations, both expected and already uncovered by observations, hints that a large range of non-Earth conditions should be accounted for. The majority of these parameters can currently only be explored with climate simulations. Currently $\sim 60$ observed rocky exoplanets are considered potentially habitable (see https://phl.upr.edu/projects/habitable-exoplanets-catalog; the reported number refers to the empirical liquid-water HZ, defined by the insolation range received by Venus and Mars respectively $\sim 1$ and 4 Gyr ago, when they could have hosted surface liquid water; Kasting et al., 1993), but their number may change should a multi-parametric analysis of the huge possible parameter space of surface temperature be performed. The climate and the surface habitability of exoplanets can be explored, as for the Earth, using a hierarchy of models, depending on the aim and problem to be addressed (see e.g. Shields, 2019, for a review). Climate models should be able to account for, even at different levels of simplification, the complexity of the climate system due to the interplay of different components and processes, which gives rise to feedbacks leading to multiple equilibria or even runaway conditions (Provenzale, 2013). For instance, the water-vapour and the ice/albedo feedbacks set the spatial and temporal limits of the liquid-water HZ. Accounting for these complexities is particularly important to simulate conditions not treated in Earth-tailored climate models.
Fully-coupled ocean-atmosphere General Circulation Models (GCM) are the most detailed and computationally demanding models, in principle requiring a large amount of information to obtain meaningful results (e.g. detailed geography and orography). In fact, GCM are often applied to exoplanetary studies by adopting Earth-like or simplified configurations, such as an aquaplanet (e.g. Leconte et al., 2013; Wolf & Toon, 2013, 2015; Shields et al., 2014; Kaspi & Showman, 2015; Wolf et al., 2022). They are fundamental tools to compute the coupled atmosphere-ocean dynamics over the long term and to study atmospheric dynamics in particularly complex configurations. These include rocky planets in the HZ of M-type stars, the most numerous and easiest targets for spectroscopy follow-ups. Due to their proximity to the host stars, these planets are expected to be tidally locked into synchronous rotation (Leconte et al., 2015; Barnes, 2017). GCM are also fundamental benchmarks for faster, lower-complexity models that allow multi-parametric simulations. Among such simpler models, 1D single-column radiative-convective models including detailed line-by-line radiative transfer (RT) have been used, for instance, to define the reference classical HZ mentioned above. Another class of 1D models are the so-called zonal Energy Balance Models (EBM), which solve a latitudinally-averaged energy balance with a simplified meridional heat diffusion equation (North et al., 1981; Spiegel et al., 2008). This class of models is still applied to the Earth climate to explore and isolate the effects of specific processes on the global climate (see e.g. Pierrehumbert, 2010). Their flexibility and short computing time can also be exploited for the large parameter space required to simulate exoplanetary conditions, by properly modelling all the terms entering the energy balance equation. By coupling single-column RT atmospheric modelling with an EBM (e.g.
Williams & Kasting, 1997; Vladilo et al., 2013; Haqq-Misra & Hayworth, 2022, hereafter WK97 and V13, respectively), and by further elaborating a physically-based description of the meridional transport, Vladilo et al. (2015, hereafter V15) developed a 2D EBM, the Earth-like planet Surface Temperature Model (ESTM), specifically aiming to compute the seasonal and zonal surface temperature of non-tidally locked exoplanets with a large range of non-terrestrial (atmospheric and planetary) physical conditions. The range of applicability of this model was thoroughly explored in V15 by comparison with the 3D aquaplanet model of Kaspi & Showman (2015). The flexibility and fast computing time of the ESTM were exploited in Murante et al. (2020) (the library of climate models used for that work was extracted from the ESTM-generated ARchive of TErrestrial-type Climate Simulations, ARTECS, available at https://wwwuser.oats.inaf.it/exobio/climates/; the database is in continuous expansion) for a statistical study of the multiple equilibrium states affecting climate systems due to non-linear feedbacks (e.g. the warm and snowball Earth states; during the latter the Earth would have been tagged as non-habitable). In Silva et al. (2017b) we performed with the ESTM a multi-parametric exploration of the habitability of Kepler-452b (Jenkins et al., 2015), currently the only known Earth-twin candidate. The ESTM, by computing the latitude- and seasonal-dependent surface temperature, allows different operative definitions of habitability to be computed. Given the importance of liquid water for terrestrial life, the liquid-water temperature interval is the commonly adopted definition, and a pressure-dependent, liquid-water habitability index (V13) can be defined. But also biological temperature-based considerations can provide further HZ definitions, which can all be computed for each set of parameter choices (e.g. Silva et al., 2017a; Vladilo & Hassanali, 2018).
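As an illustration of the liquid-water habitability index mentioned above, the sketch below computes the fraction of planetary surface and orbital period with zonal temperatures in the liquid-water interval, assuming a precomputed zonal temperature map $T(t,\phi)$ and per-zone area weights. The function name and the fixed 273.15–373.15 K bounds are our illustrative stand-ins; the actual V13 index uses a pressure-dependent upper bound.

```python
import numpy as np

def liquid_water_habitability(T, area_weights, T_min=273.15, T_max=373.15):
    """Fraction of surface area and orbital period with zonal temperatures
    in the liquid-water interval.

    T            : (n_times, n_zones) zonal surface temperatures [K]
    area_weights : (n_zones,) fractional zone areas (summing to 1)

    The fixed T_max is a simplification: in the pressure-dependent index
    of V13 the upper bound decreases with surface pressure.
    """
    liquid = ((T >= T_min) & (T <= T_max)).astype(float)
    return float(np.mean(liquid @ area_weights))  # area average, then time average

# Toy map: 4 seasonal snapshots x 3 zones (equator, mid-latitudes, pole)
T = np.array([[300.0, 285.0, 250.0],
              [305.0, 290.0, 260.0],
              [300.0, 280.0, 245.0],
              [295.0, 275.0, 255.0]])
w = np.array([0.5, 0.35, 0.15])
h = liquid_water_habitability(T, w)  # polar zone never liquid -> h = 0.85
```

In this toy case the equatorial and mid-latitude zones (85 per cent of the area) stay liquid year-round while the polar zone never does, so the index equals 0.85.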
These more restrictive definitions, as compared to the liquid-water index, may help to increase the probability of selecting surface ambient conditions that maximize the production and detectability of atmospheric biosignatures (a discussion on the necessity, and possibly non-limiting nature, of the assumption of terrestrial-like life requirements can be found in e.g. McKay, 2014; Kasting et al., 2014). In this paper, we present a new release of the ESTM model, which we call EOS-ESTM. EOS is our new procedure for calculating RT in rocky planetary atmospheres with any pressure, chemical composition, and stellar spectral type (Simonetti et al., 2022). In our previous version we were limited to Earth-like systems. We have introduced several new parameterizations, and improved on existing ones, with respect to the V15 model, in particular for the treatment of the temperature dependence of the ice coverage over land and ocean, and for the zenith-distance dependence of the surface albedo for any type of surface. We have carefully calibrated EOS-ESTM to reproduce the Earth climate by making use of large recent satellite (CERES-EBAF Ed4.1; Loeb et al., 2018) and reanalysis (ERA5; Hersbach et al., 2020) datasets, and validated the predictive power of the model through detailed comparison with 1D and 3D models under a large range of physical conditions. We also provide a first exploration of the dependence of the maximum greenhouse distance of the HZ on planetary parameters, as compared to 1D-based values. The paper is structured as follows. In Section 2, after a schematic summary of the ESTM model and parameters by V15, we provide a detailed description of each physical input of the model which has been either newly introduced or improved in the new EOS-ESTM release. In Section 3 we exploit the large amount and good quality of experimental data of the Earth climate system to calibrate and validate our Earth model, the reference for habitable rocky exoplanets.
In Section 4 we present the validation of EOS-ESTM for a large range of non-terrestrial conditions with a comprehensive comparison of the predictive power of our model with several other 1D and 3D models. Our summary and conclusions are finally presented in Section 5.

Table 1: Treatment of the terms in Eq. (1) in classic EBMs, ESTM, and EOS-ESTM

| Term | Description | Classic EBMs | ESTM | EOS-ESTM | Reference to the most updated prescription |
|---|---|---|---|---|---|
| $C$ | Thermal capacity | $C$ = constant | Ocean, Land, Ice | ESTM + Transient ice | This paper |
| $D$ | Meridional transport | $D$ = constant | $D=D(p,g,R_{p},RH,\Omega_{\text{rot}})$ | $D=D(p,g,R_{p},RH,\Omega_{\text{rot}})$ | Vladilo et al. (2015) |
| $I$ | Outgoing Longwave Radiation | $I=I(T)$ | CCM3 atmospheric RT | EOS atmospheric RT | Simonetti et al. (2022) |
| $S$ | Insolation | $S=S(t,\phi)$ | $S=S(t,\phi)$ | $S=S(t,\phi)$ | Vladilo et al. (2013) |
| $A$ | Top-of-Atmosphere Albedo | $A=A(T)$ | Surface & clouds + CCM3 atm. RT | Surface & clouds + EOS atm. RT | This paper; Simonetti et al. (2022) |

## 2 The climate model

In accordance with classic EBMs, the planetary surface is divided into a number $N$ of latitude zones and the zonal surface quantities of interest are averaged over one rotation period. In this way, the surface quantities depend on a single spatial coordinate, the latitude $\phi$. The thermal state of the surface is described by the temperature $T=T(t,\phi)$. Since the zonal quantities are averaged over one rotation period, the time $t$ represents the seasonal evolution induced by the orbital eccentricity and the tilt of the rotation axis. By assuming that the heating and cooling rates normalized per unit area are balanced in each zone, one obtains a set of $N$ zonal energy balance equations

$C\frac{\partial T}{\partial t}-\frac{\partial}{\partial x}\left[D\,(1-x^{2})\,\frac{\partial T}{\partial x}\right]+I=S\,(1-A)$ (1)

where we omit the index that runs from 1 to $N$ for simplicity.
The meaning of the terms in this equation can be summarized as follows.

* The term $C$ represents the zonal heat storage and is expressed as heat capacity per unit area (J m$^{-2}$ K$^{-1}$). It is calculated by summing the contributions of lands, $C_{l}$, oceans, $C_{o}$, ice over lands, $C_{il}$, and ice over oceans, $C_{io}$. These contributions are weighted according to the zonal coverage of each surface component.
* The second term of Eq. (1) describes the meridional energy transport along the coordinate $x=\sin\phi$. The transport is modelled using the formalism of heat diffusion modulated by the parameter $D$ (the diffusion term). As a major improvement with respect to classic EBMs, $D$ is expressed as a function of the physical quantities that most affect the meridional transport, such as the planetary radius, rotational angular velocity, surface gravity, and surface atmospheric pressure. A detailed description of the physics behind this formalism can be found in V15.
* The term $I$ is the Outgoing Longwave Radiation (OLR), which peaks in the thermal IR band for typical conditions of habitable planets. At variance with classic EBMs, $I$ is estimated using single-column, radiative-convective calculations. By including the physics of the vertical transport, the ESTM becomes a 2D climate model, one dimension sampling the surface as a function of latitude, as in classic EBMs, the other sampling the atmosphere as a function of height from the surface. In practice, we calculate $I$ as a function of $T$ for a given chemical composition and vertical stratification of the atmosphere. Compared to the original ESTM, the calculations of atmospheric radiative transfer that we present here have been greatly improved (see Section 2.6.1).
* On the right-hand side of Eq. (1), the term $S$ represents the insolation, i.e. the incoming stellar radiation, with maximum emission in the visible/near-IR spectral range.
More specifically, the zonal, instantaneous stellar radiation that heats the planet, $S=S(t,\phi)$, is calculated taking into account the stellar luminosity, the orbital parameters, and the inclination of the planet rotation axis. Details on these calculations can be found in V13.

* The term $A$ is the albedo at the top of the atmosphere, i.e. the fraction of incoming photons that are reflected back to space without heating the planet. The calculation of $A$ is far more detailed than in classic EBMs and is performed in several steps. First, we calculate the surface albedo, $a_{s}$, by weighting the albedo contributions of lands, $a_{l}$, oceans, $a_{o}$, ice on lands, $a_{il}$, and ice on oceans, $a_{io}$, according to the respective fractional coverage. Then, the total albedo at the bottom of the atmosphere is calculated by summing the albedo of the clear-sky surface with the albedo of the clouds, weighted according to the fractional coverage of clouds. As an upgrade over the original ESTM, we now calculate the cloud albedo taking into account the reflection of the underlying surface (Section 2.5.2). Finally, the top-of-atmosphere albedo is calculated as a function of $T$, $a_{s}$ and stellar zenith distance, $Z$, for a given chemical composition and vertical stratification of the atmosphere. These calculations are performed with the upgraded recipes of radiative transfer that we present here (Section 2.6.1). All the albedo prescriptions are calculated as a function of the zonal, instantaneous stellar zenith distance, $Z=Z(t,\phi)$. In the original ESTM the albedo dependence on $Z$ was considered for oceans, clouds, and the atmosphere. Here we improve the formulas and introduce this dependence also for lands and ice.

In Table 1 we summarize how the terms in Eq. (1) have been upgraded from classic EBMs to the ESTM. The main differences between the ESTM and the EOS-ESTM are summarized in Table 2.
In the rest of this section we review the prescriptions that we adopt to model the different components of the climate system, introducing the recipes that have been upgraded in the current EOS-ESTM version. Technical details on the solution of Eq. (1) in the course of the climate simulation can be found in V15 (Appendix A). In the present version of the code we adopt 60 latitude zones, a starting temperature $T_{0}=300$ K, and a tighter criterion of convergence for the global mean temperature, $\langle T\rangle$: in practice, after running 20 orbits, convergence is considered to be achieved when $|\delta\langle T\rangle/\langle T\rangle|<10^{-5}$ in two consecutive orbits. All these parameters can be changed according to specific needs. For instance, $T_{0}$ can be varied in studies of climate bistability, where two stable solutions (a Snowball state and a warm state) can be found in an appropriate parameter range depending on the initial temperature. As in most EBMs, the original ESTM (Murante et al., 2020) and EOS-ESTM produce climate bistability.
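To make the iteration concrete, the following is a minimal explicit finite-difference sketch of Eq. (1) on an equal-area grid in $x=\sin\phi$, using the 60 zones, $T_{0}=300$ K, and the $|\delta\langle T\rangle/\langle T\rangle|<10^{-5}$ criterion quoted above. The grey-body OLR, constant $D$, and fixed albedo are crude classic-EBM stand-ins, not the ESTM/EOS prescriptions, and all numerical values are illustrative.

```python
import numpy as np

# Schematic solver for Eq. (1): C dT/dt = d/dx[D(1-x^2) dT/dx] - I(T) + S(1-A),
# with x = sin(latitude). Grey-body OLR, constant D, and fixed albedo A are
# classic-EBM stand-ins, NOT the ESTM/EOS prescriptions.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m-2 K-4]

def olr(T, eps=0.61):
    """Grey-body stand-in for the Outgoing Longwave Radiation I(T)."""
    return eps * SIGMA * T**4

def step(T, x, dt, C, D, S, A):
    """One explicit time step; zero meridional flux at the poles (D(1-x^2) -> 0)."""
    dx = x[1] - x[0]
    xe = 0.5 * (x[:-1] + x[1:])              # cell edges
    F = D * (1.0 - xe**2) * np.diff(T) / dx  # diffusive flux at interior edges
    div = np.empty_like(T)
    div[0] = F[0] / dx                       # zero flux at the south pole
    div[1:-1] = np.diff(F) / dx
    div[-1] = -F[-1] / dx                    # zero flux at the north pole
    return T + dt / C * (div - olr(T) + S * (1.0 - A))

n = 60                                            # latitude zones, as in the text
x = np.linspace(-1 + 1.0 / n, 1 - 1.0 / n, n)     # zone centres (equal area in x)
S = 1361.0 / 4 * (1.0 + 0.25 * (1.0 - 3.0 * x**2))  # crude annual-mean insolation
T = np.full(n, 300.0)                             # starting temperature T0 = 300 K
C, D, A, dt = 2.1e8, 0.5, 0.3, 86400.0            # illustrative values

prev = T.mean()                                   # equal-area grid: plain mean
for _ in range(200000):
    T = step(T, x, dt, C, D, S, A)
    m = T.mean()
    if abs((m - prev) / m) < 1e-5:                # |d<T>/<T>| < 1e-5; checked per
        break                                     # step here, per orbit in the text
    prev = m
```

With these stand-ins the scheme relaxes to a warm equator and cold poles; the flux-form diffusion conserves the global mean, so only the radiative terms drive $\langle T\rangle$ toward equilibrium.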
Table 2: Main differences between ESTM and EOS-ESTM

| Model prescription | ESTM | EOS-ESTM | Reference in this paper |
|---|---|---|---|
| Stellar spectrum | Solar | Any spectral type | Section 2.6.1 |
| Atmospheric composition | Earth-like | Variable bulk composition | Section 2.6.1 |
| Greenhouse gases | Trace amounts of CO2 and CH4 | Significant amounts of any greenhouse gas | Section 2.6.1 |
| Surface albedo vs $Z$ | Oceans | Oceans, lands, ice | Sections 2.1, 2.2, 2.3 |
| Calibration of ice coverage | Based on Williams & Kasting (1997) | Based on Earth’s satellite data | Section 2.3 |
| Albedo & thermal inertia of transient ice | Not treated | Function of zonal ice cover | Section 2.4 |
| Calibration of cloud albedo vs $Z$ | Based on Cess (1976) | Based on CERES-EBAF satellite data | Section 2.5.2 |
| Cloud short-wavelength transmission | Not treated | Two-valued function of $T$ | Section 2.5.2 |
| Cloud OLR forcing | Constant | Two-valued function of $T$ | Section 2.5.3 |
| Cloud coverage over ice | Constant | Decreasing with global ice coverage | Section 2.5.1 |

### 2.1 Oceans

#### 2.1.1 Ocean fraction

The coverage of oceans on the planetary surface is parameterized by assigning a fractional area coverage of oceans, $f_{o}$, to each latitude zone. This parameterization is sufficient to test the climate impact of different latitudinal distributions of oceans, including the extreme case of ocean worlds ($f_{o}=1$ in each zone). Oceans are characterized by their specific properties of albedo and thermal inertia.

#### 2.1.2 Ocean albedo

The surface reflectivity of the oceans is modelled using empirical laws that take into account its dependence on $Z$ and the fact that the water surface is not smooth. We compared previous algorithms published in the literature (Briegleb et al., 1986; Enomoto, 2007) with a recent set of measurements obtained at different values of $Z$ (Huang et al., 2019). The observational data (red dots in Fig.
1) show a large spread at any value of $Z$ due to the variations of atmospheric transmittance created by scattering and absorption of sunlight in the atmosphere (Payne, 1972). To model the ocean albedo we are interested in the data in clear-sky conditions, since the transmittance of the atmosphere is accounted for in our radiative transfer calculations (Section 2.6.1). The lower envelope of the data in Fig. 1 represents the clear-sky case. The formulas proposed by Enomoto (2007) (black line) and Briegleb et al. (1986) (blue line) are also shown in the figure. One can see that the expression proposed by Enomoto (2007), namely

$a_{o}=\frac{0.026}{1.1\mu^{1.7}+0.065}+0.15(\mu-0.1)(\mu-0.5)(\mu-1.0)$ (2)

(where $\mu=\cos Z$), yields a slightly better match to the lower envelope of the data. We therefore adopt this expression, as in V15. We do not propose an expression to match the lowest points of the observational data set because we do not have information about the measurement errors and we cannot exclude the presence of outliers. Despite having been calibrated with the Earth’s oceans, the empirical law (2) can reasonably be applied to any exoplanetary ocean, since the zenith dependence basically follows the universal Fresnel formula (WK97), corrected for the roughness of the surface.

Figure 1: The albedo of oceans as a function of the stellar zenith angle, $Z$. The blue and black lines represent the formulations adopted in Briegleb et al. (1986) and Enomoto (2007), respectively. Red dots represent the data obtained from Huang et al. (2019, Fig. 5a).

#### 2.1.3 Thermal inertia of oceans

Due to the high thermal capacity of water, the thermal inertia of the oceans, $C_{o}$, gives a major contribution to the term $C$. The full oceanic contribution, which is effective over long time scales (typically decades on Earth), is not treated in the ESTM.
However, the short-term thermal impact of the oceans is accounted for by considering the contribution of the mixed layer, i.e. the surface layer of water that exchanges heat with the overlying atmosphere (Pierrehumbert, 2010, hereafter P10). In the original version of the ESTM, we adopted a mixed-layer contribution $C_{\text{ml}}=C_{\text{ml50}}$ (WK97, P10), corresponding to the thermal inertia of a 50-m, wind-mixed ocean layer (see Table 3). In Section 3 we present a new tuning of this parameter based on the short-term monthly variations of Earth’s surface temperatures. For exoplanets with shallow oceans, $C_{\text{ml}}$ can be changed to simulate the impact of water layers of different depths.

### 2.2 Lands

#### 2.2.1 Land fraction

The surface fraction of continents is described by assigning a fractional area of land, $f_{l}=1-f_{o}$, to each latitude zone. This parametrization is sufficient to test the climate impact of extreme distributions of continents, such as polar or equatorial continents, or desert worlds ($f_{o}=0$ in each zone). Continents are characterized by their specific properties of albedo and thermal inertia.

#### 2.2.2 Land albedo

To model the albedo of land we adopt a formulation proposed by Briegleb (1992), namely

$a_{x}(\mu)=a_{x}(0.5)\frac{1+d}{1+2\,d\,\mu}$ (3)

where $a_{x}(0.5)$ is the albedo of a surface $x$ when $\mu=0.5$ ($Z=60^{\circ}$), and the parameter $d$ regulates the dependence on stellar zenith distance ($d=0.1$ for a “weak” dependence; $d=0.4$ for a “strong” dependence). These parameters can be varied according to the type of surface (desert, basalt, vegetation, etc.) in order to model planets with specific characteristics (Coakley, 2003, Table 3 therein). The adopted values should be representative of clear-sky conditions, since the ESTM takes into account the effects of the atmospheric albedo separately (Section 2.6.1).
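The two zenith-dependent surface albedo laws above, Eq. (2) for oceans and Eq. (3) for lands and other surfaces, can be written compactly as follows (a sketch; the function names and the example value $a_{x}(0.5)=0.2$ are our illustrative choices):

```python
import numpy as np

def ocean_albedo(mu):
    """Clear-sky ocean albedo vs mu = cos(Z), Eq. (2) (Enomoto 2007)."""
    return (0.026 / (1.1 * mu**1.7 + 0.065)
            + 0.15 * (mu - 0.1) * (mu - 0.5) * (mu - 1.0))

def land_ice_albedo(mu, a_half, d=0.1):
    """Zenith-dependent albedo of lands and ice, Eq. (3) (Briegleb 1992).
    a_half is the albedo at mu = 0.5; d = 0.1 ('weak') or 0.4 ('strong')."""
    return a_half * (1.0 + d) / (1.0 + 2.0 * d * mu)

# Both laws brighten toward grazing incidence (small mu):
a_land = land_ice_albedo(0.5, 0.2)  # equals a_half by construction at mu = 0.5
a_o_noon = ocean_albedo(1.0)        # overhead star: dark ocean
a_o_graze = ocean_albedo(0.1)       # grazing incidence: much brighter ocean
```

Note that Eq. (3) reduces to $a_{x}(0.5)$ at $\mu=0.5$ regardless of $d$, so the two parameters separate the overall brightness from the zenith-angle behaviour.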
Table 3: Adopted terms for thermal inertia

| Parameter | Description | Adopted value | Comments |
|---|---|---|---|
| $C_{\text{ml50}}$ | Thermal inertia of the water mixed layer | $210\times 10^{6}$ J m$^{-2}$ K$^{-1}$ | Equivalent to a 50-m water layer (Pierrehumbert, 2010) |
| $C_{\text{atm},\circ}$ | Thermal inertia of the atmosphere | $10.1\times 10^{6}$ J m$^{-2}$ K$^{-1}$ | Equivalent to a 2.4-m water layer (Pierrehumbert, 2010) |
| $C_{\text{solid}}$ | Thermal inertia of the solid surface | $1\times 10^{6}$ J m$^{-2}$ K$^{-1}$ | Equivalent to a 0.5-m rock layer (Vladilo et al., 2013) |

#### 2.2.3 Thermal inertia

The solid surface has a negligible thermal capacity compared to that of the oceans, and even compared to that of a relatively thin, Earth-like atmosphere. The value of land thermal inertia that we adopt (Table 3) is representative of a layer of rock with a thickness of 0.5 m (Vladilo et al., 2013). Even if small, this value becomes important for planets without oceans and with extremely thin atmospheres.

Figure 2: Mean-annual values of Earth’s ice cover over lands and oceans plotted as a function of the mean annual zonal surface temperature. Left panel: observational data for lands (green symbols) and oceans (blue symbols) obtained with ERA5 reanalysis and NASA’s Terra and Aqua satellites, averaged over 2010; filled and empty circles represent data collected from the Northern and Southern hemisphere, respectively; red dotted line: prescription for the ice cover adopted in previous versions of the ESTM. Right panel: same as in the left panel, but excluding data with altitude above the freezing level and the ocean data at the edge of Antarctica; solid lines: prescriptions adopted in this work for lands (green line) and oceans (blue line); land data influenced by local orographic conditions specific to the Earth have been ignored; see Section 2.3.1.
### 2.3 Ice

#### 2.3.1 Ice fraction

The ESTM calculates the fractional coverage of ice over lands, $f_{il}$, and oceans, $f_{io}$, making use of temperature-dependent algorithms. These prescriptions are critical because the ice coverage plays a key role in the albedo-temperature feedback and affects the lower temperature limit of liquid-water habitability. We paid special attention to re-calibrating the algorithms by searching for (i) a new set of experimental data and (ii) an appropriate functional dependence on $T$.

1. The distribution of ice on the Earth surface was derived from measurements obtained from NASA’s Terra and Aqua satellites (see https://modis.gsfc.nasa.gov/) in the period 2005-2015. To find a trend with temperature, we associated the mean annual temperature of each latitude zone with the corresponding fraction of ice. The temperature data were obtained from the ERA5 dataset (see https://climate.copernicus.eu/climate-reanalysis; Hersbach et al., 2020) in the same period. This exercise was done separately for ice on lands and on oceans. To minimize the impact of orographic/oceanographic conditions specific to the Earth, we considered only land data in areas unaffected by local mountains and with altitude below the freezing level, and we excluded ocean data at the edge of Antarctica (the combined effect of katabatic winds and ocean currents forms mesoscale areas of open water near the Antarctic coastline, known as polynyas; Stringer & Groves, 1991; these areas fringe the edge of the continent owing to the opening waterways, known as flaw leads, produced by their interconnection; Meredith & Brandon, 2017). The results are shown in Fig. 2, where the data (filled and empty circles) show that the dependence on $T$ is quite different for lands and oceans (green and blue colors, respectively).
The empirical trends show two features: (1) a very sharp rise of the ice coverage below the water freezing point; (2) the existence of a small fraction of ice coverage at $T$ slightly above the freezing point. The exponential model adopted in WK97 and in previous versions of the ESTM (dashed line in the left panel) is not able to reproduce these two features, indicating the need for a new type of functional dependence.

2. After trying different types of functions, we found that the empirical trends of Fig. 2 can be well approximated using a generalized logistic function (Richards, 1959). Based on the physical boundary conditions of our problem, we chose a function that vanishes at very high $T$ and tends to 1 at very low $T$:

$f_{ix}(\overline{T})={\frac{1}{\left[1+\xi_{x}\,e^{\theta_{x}\,(\overline{T}-T_{\circ,x})}\right]^{1/\xi_{x}}}}$ (4)

where $\overline{T}$ is the zonal temperature averaged over a time $\tau_{\text{ice}}$ representative of the time scale of ice growth/melting; the index $x$ refers to the type of surface underlying the ice cover ($x=o$ for oceans and $x=l$ for lands); $T_{\circ}$ is the temperature turning point for the liquid-solid transition of water; $\theta$ is the growth rate; and $\xi$ is the shape parameter. The zonal temperature $\overline{T}$ is averaged over the interval $\tau_{\text{ice}}$ that precedes the current time of the climate simulation. For consistency with the data shown in Fig. 2, which have been averaged over one year, we adopt $\tau_{\text{ice}}=12$ months. The parameters $T_{\circ}$, $\theta$ and $\xi$ were tuned using: (i) the data versus temperature shown in the right panel of Fig. 2, and (ii) the data versus latitude of the Earth’s reference model (bottom right panel in Fig. 8). The adopted parameter values are listed in Table 4. The resulting logistic functions (solid curves in the right panel of Fig.
2) are able to reproduce the observed sharp rise below the turning point $T_{\circ}$ and the existence of a small ice fraction slightly above $T_{\circ}$.

#### 2.3.2 Albedo of frozen surfaces

The albedo of frozen surfaces (ice and snow) shows a remarkable scatter in Earth measurements, with temporal variations that take place on different time scales. In the ESTM we adopt representative values based on average conditions. As an upgrade with respect to the previous version, we model the albedo of ice with Eq. (3). For stable ice on lands and oceans we adopt $a_{il}(0.5)=0.70$ and $a_{io}(0.5)=0.55$, respectively. In both cases we set $d=0.1$, i.e. a “weak” dependence on $\mu$ (Briegleb, 1992). These values provide a good match to Earth’s zonal albedo (Section 3), but can be changed to model exoplanets with specific properties of frozen surfaces.

#### 2.3.3 Thermal inertia

The contribution of ice to the thermal inertia is important only if the planet lacks oceans and has an extremely thin atmosphere. For icy surfaces we adopt the same representative value adopted for any solid surface (Table 3). For icy surfaces over oceans, following WK97, we add a small contribution ($10.5\times 10^{6}$ J m$^{-2}$ K$^{-1}$) representative of the thermal inertia of the underlying water.

Table 4: Parameters adopted for the ice coverage function (4).

| Parameter | Description | Value |
|---|---|---|
| Land | | |
| $T_{\circ\,l}$ | Temperature turning point | 265.15 K |
| $\theta_{l}$ | Growth rate | 1.2 |
| $\xi_{l}$ | Shape parameter | 8.0 |
| Ocean | | |
| $T_{\circ\,o}$ | Temperature turning point | 263.15 K |
| $\theta_{o}$ | Growth rate | 3.0 |
| $\xi_{o}$ | Shape parameter | 12.0 |

### 2.4 Transient ice

The albedo and thermal capacity of transient ice (ice that is forming or melting) differ from those of stable ice. A possible way to take this effect into account is to introduce specific ice parameters in a temperature range around the water freezing point (WK97).
However, this approach requires the introduction of several parameters that are not easy to quantify (the albedo and thermal capacity of unstable ice, and the temperature range where the transition takes place). To avoid this additional parametrization, we adopt a prescription that provides a gradual change of the albedo and thermal capacity from the case in which the ice is totally absent to the case in which the ice is stable.

#### 2.4.1 Albedo of transient ice

The albedo of stable ice over lands, $a_{il,s}$, and of stable ice over oceans, $a_{io,s}$, is higher than the albedo of the underlying surface. When the temperature increases and the ice becomes more and more patchy, the albedo of unstable ice gets closer and closer to the albedo of the underlying surface. To simulate this transition we assume that the fractional coverage of ice, Eq. (4), is a reasonable estimator of the patchiness of the ice and we adopt the expressions

$a_{il}(\overline{T})=a_{l}+(a_{il,s}-a_{l})\,f_{il}(\overline{T})$ (5)

for ice over lands and

$a_{io}(\overline{T})=a_{o}+(a_{io,s}-a_{o})\,f_{io}(\overline{T})$ (6)

for ice over oceans. In this way the albedo attains the high values typical of stable ice only when the ice coverage is complete. When the ice coverage is absent, the albedo equals that of lands ($a_{l}$) or oceans ($a_{o}$) without ice. The parameters in the above equations are calculated for $\mu=0.5$ (see Table 7) and the albedo dependence on $\mu$ is modelled as explained in Section 2.3.2.

#### 2.4.2 Thermal inertia of transient ice

For the thermal inertia of transient ice we follow the same approach used for the albedo of transient ice. For the oceans, which provide the main contribution to the thermal inertia, we adopt the relation

$C_{io}(\overline{T})=C_{o}+(C_{io,s}-C_{o})\,f_{io}(\overline{T})$ (7)

where $C_{io,s}$ is the thermal inertia of stable ice over ocean.
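The ice-cover logistic of Eq. (4) and the transient-ice blending of Eqs. (5)–(7) can be sketched as follows; the function names, the dictionary layout, and the example values (an approximate $a_{o}\simeq 0.064$ at $\mu=0.5$ from Eq. (2), and $a_{io,s}=0.55$) are our illustrative choices:

```python
import numpy as np

# Table 4 parameters for Eq. (4): (turning point T0 [K], growth rate theta, shape xi)
ICE_PARAMS = {"land": (265.15, 1.2, 8.0), "ocean": (263.15, 3.0, 12.0)}

def ice_fraction(T_bar, surface):
    """Generalized logistic ice cover of Eq. (4); T_bar is the zonal
    temperature averaged over tau_ice (12 months in the text)."""
    T0, theta, xi = ICE_PARAMS[surface]
    return (1.0 + xi * np.exp(theta * (T_bar - T0))) ** (-1.0 / xi)

def transient_blend(prop_bare, prop_stable, f_ice):
    """Eqs. (5)-(7): albedo or thermal inertia interpolated between the
    bare-surface and stable-ice values by the fractional ice cover."""
    return prop_bare + (prop_stable - prop_bare) * f_ice

# e.g. effective ocean-ice albedo at mu = 0.5 for a zone at T_bar = 265 K
f_io = ice_fraction(265.0, "ocean")
a_io = transient_blend(0.064, 0.55, f_io)
```

The blending recovers the two limits by construction: the bare-surface value when the ice cover vanishes and the stable-ice value when the cover is complete.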
For land, the contribution is very small, and there is no need to adopt a similar relation since in our parametrization $C_{il,s}=C_{l}$.

### 2.5 Clouds

The complexity of the physics of cloud formation and the lack of fluid-dynamical and 3D capabilities in ESTM prevent us from modelling the spatial distribution and physical properties of clouds in a self-consistent way. Clouds are treated as bottom-of-the-atmosphere features with parametrized values of coverage, albedo, and OLR forcing. Given their critical role in the climate energy budget, we introduced upgraded algorithms for all these features.

#### 2.5.1 Cloud fraction

The cloud fraction is estimated with the expression

$f_{c}=f_{o}\,\left[(1-f_{io})\,f_{co}+f_{io}\,f_{ci}\right]+f_{l}\,\left[(1-f_{il})\,f_{cl}+f_{il}\,f_{ci}\right]$ (8)

where $f_{co}$, $f_{cl}$, and $f_{ci}$ are representative values of the cloud coverage over oceans, lands, and ice, respectively (see Table 7). These values are based on Earth data but can, in principle, be tuned for other planets. In the original version of ESTM a constant cloud coverage over ice was adopted. This approach is reasonable when ice is not dominant, as in the case of the present-day Earth. However, when the planet undergoes a transition to a snowball state, the global fraction of clouds is expected to decrease. To capture this effect, we introduce a parameter $f_{c,sb}$, representative of the cloud coverage on a snowball planet, and we adjust $f_{ci}$ in the course of the climate simulation by introducing a dependence on the globally-averaged ice fraction, $\langle f_{\text{ice}}\rangle$. In practice we adopt the expression

$f_{ci}=\left(f_{ci,\earth}-f_{c,sb}\right)\left(\frac{1-\langle f_{\text{ice}}\rangle}{1-\langle f_{\text{ice}}\rangle_{\earth}}\right)+f_{c,sb}~{}~{},$ (9)

where $f_{ci,\earth}$ is the cloud coverage over ice calibrated with Earth’s data.
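Eqs. (8)-(9) can be sketched as follows. The snowball cloud cover $f_{c,sb}$ and the Earth global ice fraction used in the example are illustrative placeholders, not values from the text:

```python
def cloud_fraction(f_o, f_l, f_io, f_il, f_co, f_cl, f_ci):
    """Eq. (8): zonal cloud fraction from the ocean/land fractions
    (f_o, f_l), the zonal ice coverages (f_io, f_il) and the
    representative cloud coverages over ocean, land and ice."""
    return (f_o * ((1 - f_io) * f_co + f_io * f_ci)
            + f_l * ((1 - f_il) * f_cl + f_il * f_ci))

def f_ci_snowball(f_ci_earth, f_c_sb, ice_global, ice_global_earth):
    """Eq. (9): cloud coverage over ice, interpolated towards the
    snowball value f_c_sb as the global ice fraction approaches 1."""
    return ((f_ci_earth - f_c_sb)
            * (1 - ice_global) / (1 - ice_global_earth) + f_c_sb)

# Hard snowball limit (<f_ice> = 1) recovers f_c_sb exactly;
# f_c_sb = 0.35 and <f_ice>_Earth = 0.12 are placeholders.
print(f_ci_snowball(0.56, 0.35, 1.0, 0.12))   # -> 0.35
```

When the global ice fraction equals the Earth value, Eq. (9) returns the Earth-calibrated coverage $f_{ci,\earth}$, so the Earth reference model is unaffected by the choice of $f_{c,sb}$.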
With this prescription, $f_{ci}=f_{c,sb}$ when the planet enters a hard snowball state ($\langle f_{\text{ice}}\rangle=1$). The parameter $f_{c,sb}$ can in principle be tuned from the results of GCM simulations of snowball planets (Abbot, 2014). The Earth model is not affected by the choice of $f_{c,sb}$ because $f_{ci}=f_{ci,\earth}$ when $\langle f_{\text{ice}}\rangle=\langle f_{\text{ice}}\rangle_{\earth}$.

#### 2.5.2 Cloud albedo

To upgrade the prescriptions for the albedo of the clouds we: (i) used an updated set of Earth satellite data; (ii) adopted a new functional form for the dependence of the cloud albedo on $\mu$; and (iii) introduced a dependence of the effective cloud albedo, $a^{\prime}_{c}$, on the albedo of the underlying surface, $a_{s}$.

1. To upgrade the experimental data, we use the recent set of top-of-atmosphere (TOA) albedo data obtained from the CERES-EBAF satellite (Loeb et al., 2018). Following Eq. (8) of Cess (1976), we estimate the zonal TOA albedo of the clouds with the expression

$A_{\textrm{c,obs}}=\frac{A_{\textrm{obs}}-A_{\textrm{obs,clear}}\,(1-f_{c})}{f_{c}}~{},$ (10)

where $A_{\text{obs}}$ is the TOA albedo measured in a given zone, $A_{\text{obs,clear}}$ is the TOA albedo measured in the same zone in clear-sky conditions, and $f_{\textrm{c}}$ is the fractional cloud coverage in the latitude zone of interest. To extract the dependence on zenith distance, the derived values of $A_{\textrm{c,obs}}$ are associated with the mean annual value of $\mu=\cos Z$ of the zone of interest, $\overline{\mu}$. In Fig. 3 we compare the cloud albedo versus $\overline{\mu}$ obtained by Cess (1976) from the dataset of Ellis & Haar (1976) with the results that we obtain by inserting the CERES-EBAF data for the period 2005-2015 into Eq. (10). One can see that the use of the updated and extensive dataset provided by CERES leads to significant differences. In particular, the cloud albedo becomes lower in the equatorial regions.
We use this updated data set to improve the description of the cloud albedo with respect to previous parametrizations (WK97, V15), which were based on Cess (1976).

Figure 3: TOA cloud albedo profile obtained using the data collected from CERES-EBAF (green dots) in the period 2005-2015, compared with the Cess (1976) data for the NH (empty circles) and SH (black circles). The albedo is plotted versus the mean-annual solar zenith angle of each latitude zone.

2. In previous work, the dependence of the cloud albedo on the zenith distance was modeled using the linear form $a_{\text{c}}=\alpha+\beta\,Z$ (WK97, V13). To prevent the existence of negative values of albedo at low $Z$, V15 introduced a third parameter (the minimum value of cloud albedo at low zenith distances, $a_{\text{c,min}}$) and used the expression $a_{c}=\max\{a_{\text{c,min}},(\alpha+\beta Z)\}$. Here, for consistency with the description of the albedo of land, ice and ocean, we adopt a dependence on $\mu$, rather than on $Z$, also for the cloud albedo. In practice, we adopt

$a_{c}(\mu)=a_{c}(0.5)+m_{c}\,(\mu-{\frac{1}{2}})$ (11)

where $m_{c}$ is the slope and $a_{c}(0.5)$ is the cloud albedo at $\mu$=0.5. As can be seen in Fig. 4, this new function (solid line) yields positive values at low $Z$ without the need for an extra parameter, and has a smoother dependence on $Z$ than the previous prescription (dashed line).

Figure 4: Comparison of the cloud albedo models adopted in V15 (dashed line) and in the present work (solid line). See Section 2.5.2.

3. Polar clouds have a relatively high transmittance of short-wavelength photons, and part of the photons are reflected by the underlying surface, escaping to outer space through the clouds and the atmosphere. This effect becomes particularly important when the underlying icy surface is very reflective. The impact of this effect can be appreciated in Fig.
3, where one can see that at low $\mu$, in correspondence with the polar ice caps, the slope of the Earth’s cloud albedo versus $\mu$ becomes steeper. To incorporate this effect in our model we follow Thompson & Barron (1981) and, based on their Eq. (A14), we calculate the effective cloud albedo over reflective surfaces,

$\small a^{\prime}_{c}=a_{c}+\frac{(1-a_{c})\,(1-a^{*}_{c})}{a^{*}_{c}}\times[(1-t^{2}\,a^{*}_{\text{s}}\,a^{*}_{c})^{-1}-1]$ (12)

where $a_{c}$ is the cloud albedo over a non-reflective surface, $a^{*}_{c}$ is the cloud albedo for diffuse radiation, and $a^{*}_{\text{s}}$ is the surface albedo for diffuse radiation; the values of diffuse albedo are estimated from the corresponding direct albedo calculated at $\mu$=0.5. The parameter $t$ is an estimator of the transmittance, i.e. the fraction of radiation not absorbed between the cloud top and the surface. We model $t$ with the function

$t=0.90-0.05\,\tanh{\left(\frac{T-263.15\,\text{K}}{10\,\text{K}}\right)}$ (13)

which provides a smooth transition between the typical transmittance of thin polar clouds ($t\simeq 0.95$; Thompson & Barron, 1981) and the lower transmittance in regions without surface ice. In these regions the surface albedo is low, the term $t^{2}\,a^{*}_{\text{s}}\,a^{*}_{c}$ is small, and the exact choice of $t$ has a modest impact on the calculation of $a^{\prime}_{c}$ with Eq. (12).

#### 2.5.3 Cloud OLR forcing

In the original version of the ESTM the cloud OLR forcing had a constant value. This approach is unsatisfactory, since the OLR measurements of terrestrial clouds show an extremely large seasonal and latitudinal scatter (Hartmann et al., 1992, Fig. 10 therein). In an attempt to introduce a variable value capturing a dependence on $T$, we tried to scale the cloud OLR according to the water vapour content of the atmospheric column, which is a function of $T$ at constant relative humidity. However, this attempt did not provide a good match to the experimental data.
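The cloud-albedo prescription of Eqs. (11)-(13) can be sketched as follows. The values $a_{c}(0.5)=0.44$ and $m_{c}=-0.67$ are those adopted in Table 7 for the Earth reference model, while the diffuse albedos passed to Eq. (12) are illustrative placeholders:

```python
import math

def cloud_albedo(mu, a_c_half=0.44, m_c=-0.67):
    """Eq. (11): linear dependence of the cloud albedo on mu = cos Z.
    Stays positive over the whole range mu in [0, 1], unlike the older
    linear-in-Z form that needed an explicit floor a_c,min."""
    return a_c_half + m_c * (mu - 0.5)

def transmittance(T):
    """Eq. (13): smooth transition from t ~ 0.95 (thin polar clouds)
    down to t ~ 0.85 over warm, ice-free regions; T in kelvin."""
    return 0.90 - 0.05 * math.tanh((T - 263.15) / 10.0)

def effective_cloud_albedo(a_c, a_c_diff, a_s_diff, t):
    """Eq. (12): cloud albedo raised by multiple reflections between
    the cloud and a bright (e.g. icy) underlying surface."""
    return a_c + (1 - a_c) * (1 - a_c_diff) / a_c_diff * (
        1.0 / (1.0 - t ** 2 * a_s_diff * a_c_diff) - 1.0)

# Over a dark surface (a_s_diff ~ 0) Eq. (12) returns a_c unchanged;
# over a bright icy surface the effective albedo is markedly higher.
a_c = cloud_albedo(0.3)
t = transmittance(230.0)
print(effective_cloud_albedo(a_c, 0.5, 0.0, t))  # equals a_c
print(effective_cloud_albedo(a_c, 0.5, 0.7, t))  # larger than a_c
```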
This is not too surprising, given the complexity of the physics of cloud formation and cloud radiative transfer, which depends on a variety of thermodynamical and microscopic factors not treated in our model. Despite the negative result of this attempt, we decided to take into account the properties of clouds over icy regions as an upgrade of our model. Terrestrial clouds over icy regions show a very small OLR forcing, typically one order of magnitude smaller than the average value (Hartmann et al., 1992, Fig. 10 therein). To capture this effect, we calculate the cloud OLR forcing (also called cloud radiative effect, CRE) with the following expression $CRE=CRE_{\circ}\left[0.60+0.40\,\tanh{\left(\frac{T-263.15\,\mathrm{K}}{10\,\text{K}}\right)}\right]$ (14) where $CRE_{\circ}$ is a representative value that can be calibrated with terrestrial clouds (see Section 3.1.2) and the term in square brackets varies smoothly from 1, at high $T$, to 0.2, at very low $T$. The above expression provides a smooth transition with decreasing $T$ that mirrors the increase of cloud transmittance adopted in Eq. (13). This simplified formalism is justified by the fact that it improves the match between predicted and observed zonal OLR in the Northern polar regions (see Section 3). ### 2.6 Atmosphere and top of atmosphere #### 2.6.1 Clear-sky radiative transfer The vertical radiative transport of energy throughout the atmosphere has a strong impact on the $I$ and $A$ terms in Eq. (1) and therefore plays a central role in the climate simulations. In the previous version of ESTM, the terms $I$ and $A$ were estimated using the radiative transfer model developed as part of the Community Climate Model 3 (CCM3, Kiehl et al., 1998), which is based on the HITRAN 1992 spectroscopic repository data (Rothman et al., 1992). CCM3 is a band model tailored for an Earth-like atmosphere illuminated by solar-type radiation. 
As such, the concentrations of greenhouse gases can only be varied in trace abundances, the list of greenhouse gases cannot be expanded, and it is not possible to model stellar spectra different from the solar one. To overcome these limitations and to use an updated repository of spectroscopic data, the new version of ESTM uses the EOS radiative transfer procedure (Simonetti et al., 2022) to calculate the terms $I$ and $A$. EOS is a line-by-line procedure based on the publicly available opacity calculator HELIOS-K (Grimm & Heng, 2015; Grimm et al., 2021) and the radiative transfer code HELIOS (Malik et al., 2017, 2019). Line absorption from N2, O2, H2O, CO2 and CH4 is calculated using data from the HITRAN 2016 (Gordon et al., 2017) repository. The continuum of H2O is included via the standalone version of the CNTNM routine of the LBLRTM code (Clough et al., 2005), which runs the MT_CKD v3.4 opacity model (Mlawer et al., 2012). For CO2-dominated atmospheres, the Collision-Induced Absorption (CIA) and the sub-Lorentzian absorption line shapes of CO2 are also taken into account, calculated from HITRAN data and the recipes in Perrin & Hartmann (1989), respectively. The EOS model has the advantage of not being tied to a specific type of atmosphere via e.g. gas opacity parameterizations, thus allowing far greater flexibility in choosing the composition of the atmosphere. Radiative transfer calculations are performed on a 60-layer atmospheric column logarithmically spaced in pressure, from the $\sim$ 1 bar surface to the 1 $\mu$bar TOA level (10 layers per order of magnitude). The OLR is evaluated as a function of $T$ every 20 K below 280 K, every 10 K between 280 K and 310 K, and every 5 K up to 360 K. The TOA albedo is evaluated every 20 K in the entire $T$ range up to 360 K, for $Z\in(0^{\circ},30^{\circ},45^{\circ},60^{\circ},70^{\circ},75^{\circ},80^{\circ},83^{\circ},85^{\circ},87^{\circ},88^{\circ},89^{\circ})$ and for $a_{x}\in(0.0,0.15,0.30,0.60,0.90)$.
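As an illustration, the non-uniform OLR temperature grid just described can be generated programmatically; the lower bound of 180 K used here is an assumption, since the text does not state one:

```python
def olr_temperature_grid(t_min=180.0):
    """Build the non-uniform T grid used for the OLR tables:
    every 20 K below 280 K, every 10 K in 280-310 K, every 5 K up
    to 360 K. The starting point t_min is an assumed lower bound."""
    grid = []
    t = t_min
    while t < 280.0:          # coarse sampling at low T
        grid.append(t)
        t += 20.0
    while t < 310.0:          # intermediate sampling
        grid.append(t)
        t += 10.0
    while t <= 360.0:         # fine sampling where slopes change
        grid.append(t)
        t += 5.0
    return grid

print(olr_temperature_grid())
```

The finer spacing at high $T$ reflects the choice, stated above, of sampling more precisely the regions where the OLR changes slope.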
We adopted a non-equally spaced grid in order to sample more precisely the regions in which the OLR and the TOA albedo change their slopes. Multilinear interpolation on both the OLR and TOA albedo tables is then carried out by ESTM. Thanks to the fact that HELIOS and HELIOS-K run on GPU processors (also known as graphics cards or accelerators), it is possible to calculate the line-by-line radiative transfer in a reasonable amount of time even on desktop machines. As reported in Grimm et al. (2021), GPU-based codes can be more than an order of magnitude faster than CPU-based ones. Starting from the HITRAN line parameter files, EOS requires $\sim$70 hours to calculate the OLR and TOA albedo tables for a specific atmosphere (estimate obtained using a workstation equipped with an nVidia RTX 2080 graphics card, a 7200 rpm HP/Seagate hard drive and an Intel Xeon Silver 4108 CPU; the read-write times from the storage memory and the CPU efficiency play important roles in determining the total time required by the procedure, which can differ considerably on other machines). Thanks to the modularity of the EOS procedure, part of the results obtained in the early steps can be reused, reducing the total time for the calculation of new cases.

#### 2.6.2 Top-of-atmosphere albedo

The top-of-atmosphere (TOA) albedo is tabulated as a function of $T$ for a given set of atmospheric parameters, using radiative transfer calculations in the short wavelength (visible, near IR) spectral range. Since the TOA albedo also depends on the surface albedo, $a_{s}$, and the stellar zenith distance, $Z$, the calculations of the TOA albedo must sample the parameter space ($T$, $a_{s}$, $Z$). The wavelength dependence of the short wavelength scattering implies that the albedo is sensitive to the spectral distribution of the host star.
The new radiative transfer procedure that we adopt allows us to tabulate $A$ for planetary atmospheres illuminated by stars of different spectral types.

#### 2.6.3 OLR

For a given set of atmospheric parameters (surface gravity, surface pressure, chemical composition, relative humidity, vertical structure), the OLR in the thermal IR band is computed as a function of the surface temperature $T$ by using a reverse radiative transfer calculation in which the temperature of the lowest atmospheric layer is forced to be equal to $T$. The resulting OLR tables, calculated in clear-sky conditions, are given in input to the climate simulation. The OLR forcing of clouds calculated with Eq. (14) is then subtracted from the clear-sky OLR.

#### 2.6.4 Thermal inertia of the atmosphere

The atmospheric thermal inertia is much smaller than the oceanic one and can be neglected in planets with oceans and thin, Earth-like atmospheres. In general, however, the atmospheric thermal inertia must be taken into account, since habitable planets may lack oceans and/or may have thick atmospheres. For this reason, in our formulation we scale the thermal inertia of the Earth’s atmosphere (Table 3) according to the thermal capacity and columnar mass of the planetary atmosphere (V15). The thermal capacity is calculated for the specific atmospheric composition; assuming hydrostatic equilibrium, the atmospheric columnar mass is calculated as $p/g$, where $p$ is the surface atmospheric pressure and $g$ is the surface gravitational acceleration. The atmospheric contribution to thermal inertia is added to the ocean, land, and ice contributions described above.

Table 5: Astronomical and planetary Earth data

Parameter | Description | Adopted value | Reference/Comments
---|---|---|---
$S_{\circ}$ | Mean annual insolation | 1361.0 W m-2 | CERES-EBAF (2005-2015)
$e$ | Orbital eccentricity | 0.01671022 |
$\epsilon$ | Axis obliquity | 23.43929$^{\circ}$ |
$g$ | Surface gravity acceleration | 9.81 m s-2 |

* • a.
Adopted Earth’s values can be changed to model exoplanets with different types of stellar, orbital and planetary properties.

Table 6: Earth satellite data and results of the Earth reference model

Quantity | Description | Earth value | Model | Units
---|---|---|---|---
$\langle T\rangle$ | Global surface temperature | 287.5a | 288.7 | K
$\langle T\rangle_{\text{NH}}$ | Mean surface temperature of Northern hemisphere | 288.4a | 288.5 | K
$\Delta T_{\text{PE}}$ | North Pole-Equator temperature difference | 38.9a | 41.1 | K
$\langle h\rangle_{\text{NH}}$ | Fraction of habitable surface (Northern hemisphere) | 0.866b | 0.855 | …
$\langle A\rangle$ | Global top-of-atmosphere albedo | 0.314c | 0.315 | …
$\langle A\rangle_{\text{NH}}$ | Mean top-of-atmosphere albedo of Northern hemisphere | 0.310c | 0.314 | …
$\langle OLR\rangle$ | Global outgoing longwave radiation | 240.2c | 241.4 | W m-2
$\langle OLR\rangle_{\text{NH}}$ | Mean outgoing longwave radiation of Northern hemisphere | 240.8c | 241.6 | W m-2
$\langle f\rangle_{c}$ | Global cloud fraction | 0.674c | 0.666 | …
$\langle f\rangle_{c,\text{NH}}$ | Mean cloud fraction of Northern hemisphere | 0.644c | 0.646 | …
$\Phi_{\text{max}}$ | Peak of atmospheric transport at mid latitudes | 5.0d | 5.0 | PW

* • a. Average ERA5 temperatures in the period 2005-2015.
* • b. Average fraction of planet surface with temperature satisfying the liquid water criterion.
* • c. Average CERES-EBAF data in the period 2005-2015.
* • d. Trenberth & Caron (2001)

Table 7: Parameters of surface and cloud albedo

Parameter | Description | Adopted value | Comments
---|---|---|---
$a_{l}$ | Albedo of lands (at $\mu$=0.5) | 0.20 | Tuned to match zonal albedo profile (Fig. 8)
$a_{il,s}$ | Albedo of stable frozen surfaces (at $\mu$=0.5) | 0.70 | Tuned to match zonal albedo profile (Fig. 8)
$a_{io,s}$ | Albedo of stable ice on ocean (at $\mu$=0.5) | 0.55 | Tuned to match zonal albedo profile (Fig. 8)
$a_{c}$ | Albedo of clouds (at $\mu$=0.5) | 0.44 | Tuned to match CERES-EBAF data (Fig. 6)
$m_{c}$ | Slope of cloud albedo equation | $-0.67$ | Tuned to match CERES-EBAF data (Fig. 6)
$f_{co}$ | Cloud coverage on ocean | 0.72 | King et al. (2013)
$f_{cl}$ | Cloud coverage on land | 0.55 | King et al. (2013)
$f_{ci}$ | Cloud coverage on ice | 0.56 | Tuned to match the cloud coverage of Earth’s North Hemisphere

## 3 The reference Earth model

In this Section we present the calibration and validation tests of the model applied to Earth, which represents the reference for modelling habitable exoplanets of terrestrial type. The large amount and good quality of experimental data on the Earth climate system provide the best way to adjust many important model parameters and to test the new recipes introduced in the previous section.

### 3.1 Astronomical and planetary quantities

The values of astronomical and planetary quantities that we adopt for the reference Earth model are listed in Table 5. Global values of planetary temperature, OLR and albedo taken from satellite observations (CERES-EBAF and ERA5) are summarized in Table 6. Unless differently specified, all data were averaged over the period 2005-2015. For consistency, the insolation $S$ and the volumetric mixing ratios of CO2 and CH4 were estimated for the same time period. For the solar constant we adopt $S_{\circ}$ = 1361 W m-2, in accordance with the mean annual insolation measured by CERES-EBAF (1361.16 W m-2).

#### 3.1.1 Surface albedo

The adopted values of albedo are listed in Table 7. For the albedo of lands we adopt a value representative of the Earth continents, namely $a_{l}(0.5)=0.20$, which is intermediate between bare and vegetation-covered soil. As far as the dependence on $\mu$ is concerned, we adopt a “weak” dependence ($d=0.1$), which is representative of most types of Earth’s continental surfaces (Briegleb, 1992; Coakley, 2003).
#### 3.1.2 Clouds

The parameters adopted for the cloud fraction over land, ocean and ice are shown in Table 7. With respect to V15, the coverages over land and ocean, $f_{cl}$ and $f_{co}$, were updated following the experimental data of King et al. (2013), while the adopted value of the coverage over ice, $f_{ci}$, was estimated from the CERES-EBAF satellite data. The value of the cloud radiative forcing for longwave radiation, which acts as a parameter in the model, was tuned in order to obtain a better match to the OLR profiles in Figs. 7 and 8. By adopting $CRE_{\circ}$ = 26.1 W m-2 in Eq. (14), we obtain an average value $\langle CRE_{\text{olr}}\rangle=25.5$ W m-2 for the Earth model. This is in excellent agreement with the mean value for the Earth, 25.8 W m-2, obtained from CERES-EBAF Ed4.1, also considering the still large uncertainty on this quantity found in the literature.

Table 8: Atmospheric radiative transfer parameters for the Earth’s reference model

Quantity | Description | Adopted value | Reference/Comments
---|---|---|---
$p_{\text{dry}}$ | Surface pressure of dry air | $10^{5}$ Pa |
$r$ | Relative humidity | 60% | Vladilo et al. (2015)
$c_{CO2}$ | Atmospheric concentration of CO2 | 350 ppm | See Section 3.1.3
$c_{CH4}$ | Atmospheric concentration of CH4 | 1.7 ppm | See Section 3.1.3
$T_{\text{tp}}$ | Temperature of tropopause | 200 K | Seidel et al. (2001), Kuell et al. (2005)
$CRE_{\circ}$ | TOA longwave forcing of clouds | 26.1 W m-2 | Tuned to match the OLR profiles in Figs. 7 and 8

#### 3.1.3 Atmospheric quantities

Table 8 shows the atmospheric quantities adopted in the EOS radiative transfer calculations of the Earth model. Following Seidel et al. (2001) and Kuell et al. (2005) we adopt a tropopause temperature $T_{\text{tp}}=$ 200 K.
We adopt a relative humidity of $60$%, in agreement with the global relative humidity measured on Earth, a surface pressure of dry air of $p_{\text{dry}}$=1.00 $\times$ $10^{5}$ Pa and a volumetric concentration of CH4 of 1.7 ppmv. The CO2 concentration of the reference period 2005-2015 ($c=390$ ppm, derived from the NOAA database, https://gml.noaa.gov/ccgg/trends/) has been corrected to compensate for the net TOA radiative imbalance of $\Delta F\simeq$ 0.6 W m-2 (Wild et al., 2013) observed in the current transient climate, since we perform constant-forcing simulations. We used the simplified analytical expression linking CO2 concentration changes to the resulting radiative forcing change (Myhre et al., 1998):

$\Delta F=\alpha\,\ln\left(\frac{c}{c_{0}}\right)$ (15)

with $\alpha$ = 5.35 W m-2, leading to a corrected volumetric mixing ratio for CO2 of $c_{0}=350$ ppm.

Table 9: Parameters for the meridional transport

Quantity | Description | Adopted value | Reference/Comments
---|---|---|---
$D_{\circ}$ | Coefficient of latitudinal transport | 0.66 W m-2 K-1 | Tuned$^{a}$ to match the zonal temperature profile (Fig. 8)
${R}$ | Modulation of latitudinal transport | 1.4 | Tuned to match the zonal temperature profile (Fig. 8)
$\Lambda_{\circ}$ | Ratio of moist over dry eddy transport | 0.7 | V15; Fig. 2 in Kaspi & Showman (2015)

* • a. $D_{\circ}$ is also tuned to match the Earth’s peak of atmospheric transport at mid latitudes, $\Phi_{\mathrm{max}}$ (Table 6)

#### 3.1.4 Meridional transport

The parameters adopted for the meridional transport are shown in Table 9. The parameter $D_{\circ}$ was tuned to match the Earth’s peak of atmospheric transport at mid latitudes, $\Phi_{\mathrm{max}}$ (5.0 PW, Trenberth & Caron, 2001), and the temperature-latitude profile in Fig. 8. We refer to V15 for a full description of the parameters listed in the table.
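Inverting Eq. (15) of Section 3.1.3 gives the corrected concentration $c_{0}=c\,e^{-\Delta F/\alpha}$; a quick numerical check using the values quoted there:

```python
import math

ALPHA = 5.35     # W m-2, Myhre et al. (1998)
c_obs = 390.0    # ppm, mean CO2 mixing ratio in 2005-2015
delta_F = 0.6    # W m-2, TOA radiative imbalance to compensate

# Invert Eq. (15), Delta_F = ALPHA * ln(c / c0), for c0:
c0 = c_obs / math.exp(delta_F / ALPHA)
print(round(c0))  # 349 ppm, consistent with the adopted 350 ppm
```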
#### 3.1.5 Depth of the mixed ocean layer

To tune the mixed ocean layer parameter, $C_{\text{ml}}$, we investigated the monthly excursions of the Earth’s global surface temperature. In Fig. 5 we compare the annual evolution of this quantity observed in the Northern hemisphere (crosses) with the predictions of our model obtained for different choices of $C_{\text{ml}}$ (solid lines). One can see that, by increasing $C_{\text{ml}}$, the maximum annual excursion of monthly surface temperatures, $\Delta T_{\text{max}}$, becomes smaller. A small time lag between the predicted and observed peaks is also present and increases with decreasing $C_{\text{ml}}$. The value of thermal capacity that best reproduces the observed trend is found at $C_{\text{ml}}\simeq C_{ml50}/2$. We therefore adopt this value, which provides a small time lag and a difference of only 0.8 K between the observed and predicted values of $\Delta T_{\text{max}}$.

Figure 5: Seasonal evolution of the mean surface temperature as a function of the effective thermal capacity. The black, blue, green and red lines represent ocean thermal capacities $C_{o}$ with $C_{ml50}$ = 210, 105, 70 and 84 $\times$ $10^{6}$ J m-2 K-1, respectively. Magenta crosses: averaged ERA5 temperatures in the period 2005-2015 for the Northern Hemisphere (NH).

#### 3.1.6 Cloud albedo parameters

To tune the cloud albedo parameters $a_{c}(0.5)$ and $m_{c}$ in Eq. (11) we used the TOA cloud albedo estimated from satellite data with Eq. (10). The corresponding model predictions were then calculated by applying the EOS radiative transfer calculations to the effective cloud albedo at the bottom of the atmosphere estimated with Eq. (12). In this process we fine-tuned the cloud transmittance defined in Eq. (13). In Fig. 6 we compare the observational data set (green circles) with the final result of this modelization (orange curve).
One can see that the new calibration of cloud albedo provides a good match to the measured trend of TOA cloud albedo versus $\mu$, including the sharp rise observed at low $\mu$, i.e. over the Earth polar caps. From this figure it is clear that the reflectivity of the underlying surface becomes fundamental at the poles. The previous version of the ESTM is unable to reproduce these features. Figure 6: Cloud albedo versus $\mu=\cos Z$ for the present-day Earth (Northern and Southern hemispheres). Dark green circles and crosses: observational data obtained from Eq. (10) for the Northern and Southern hemisphere, respectively. Red-solid and orange-dashed lines: ESTM predictions obtained by converting to TOA values the cloud albedo calculated from Eq. (12) for the Northern and Southern hemisphere, respectively. Figure 7: Mean annual values of OLR (left panel) and TOA albedo (right panel) plotted versus surface temperature for the present-day Earth. Data at temperatures below 260 K are related to the South Polar cap. Green dots and crosses: observational data obtained with the ERA5 reanalysis and CERES-EBAF satellite data averaged over the period 2005-2015, for the Northern and Southern hemisphere, respectively. Red-solid and orange-dashed lines: predictions of the Earth’s reference model obtained with the radiative transfer calculations specified in Section 3.1.3, for the Northern and Southern hemisphere, respectively. The OLR peak at $\simeq$ 296 K in the left panel is due to emission at the edges of tropical regions, which are connected with the presence of large deserts and of low clouds with warm tops; the decrease around the equator is associated with the presence of deep convective clouds with cold tops. Figure 8: Mean annual latitude profile of surface temperature, albedo, outgoing longwave radiation (OLR), and fractional ice coverage predicted by the reference Earth model (solid lines). 
Top left panel: the temperature profile is compared with ERA5 temperatures averaged over the period 2005-2015 (blue dots). Top right panel: the albedo profile is compared with CERES-EBAF data averaged over the period 2005-2015 (pink crosses). Bottom left panel: the OLR profile is compared with CERES-EBAF data averaged over the period 2005-2015 (pink crosses). Bottom right panel: the model profile is compared with the mean ice coverage, obtained by weighting the land and ocean data (averaged over the period 2005-2015) in each zone according to the zonal coverage of lands and oceans.

### 3.2 Diagnostic tests of the Earth model

To test the predictions of the Earth model we first investigated the temperature dependence of two key energy balance quantities, namely the OLR and the TOA albedo. We then compared global and zonal planetary data with the model predictions.

#### 3.2.1 OLR and TOA albedo

In Fig. 7 we plot the OLR (left panel) and TOA albedo (right panel) versus surface $T$ obtained from the Earth’s reference model (solid and dashed lines) and from Earth’s satellite data (green symbols). To obtain these plots, we plotted the mean annual OLR and TOA albedo of each latitude zone versus the corresponding zonal value of the mean annual surface temperature. With this procedure we obtain two independent sets of data versus $T$, one for the Northern and the other for the Southern hemisphere. One can see that the EOS/ESTM calculations match the Earth data well, despite the simplified nature of the model. The agreement is better in the Northern hemisphere, which is less affected by the peculiar orography of Antarctica. A better match would require a 3D climate model with orography and a physical description of atmospheric and oceanic fluid dynamics and clouds.

#### 3.2.2 Global and zonal data

In Table 6 we display the globally averaged values of planetary quantities predicted by the Earth’s reference model (second to last column).
The comparison with the corresponding experimental data (previous column) shows an excellent agreement, with relative differences below 2% for the albedo and much smaller for the temperature and OLR. The match between model and observed data is particularly good in the Northern hemisphere. The experimental data of the Southern hemisphere are significantly influenced by the high altitude of Antarctica, which cannot be accounted for by our model, since it lacks orography. In Fig. 8 we compare the mean annual zonal values of surface temperature, TOA albedo, OLR, and ice cover obtained from the reference Earth model (solid lines) with Earth’s satellite and reanalysis data (symbols). The agreement of the surface temperature curve (top left panel) is excellent, with an area-weighted rms deviation of 1.0 K for the Northern hemisphere. The main difference arises above the South Polar cap, where the observed temperature is $\sim 20$ K lower than predicted, due to the lack of orography in the model. This difference is consistent with the $\sim 2$ km thickness of the ice sheet and a dry lapse rate of $\simeq 10$ K/km. The albedo curve (top right panel) shows an excellent agreement in the Northern Hemisphere at mid-high latitudes, where the gradual change of the albedo of transient ice provided by Eqs. (5) and (6) yields a better match to the data than in the original ESTM. In the equatorial regions the agreement is reasonable, considering the existence of albedo factors that can only be treated in 3D models, such as the atmospheric circulation, which affects the cloud distribution. In the Antarctic region, the model underestimates the albedo due to the lack of orography. The OLR profile (bottom left panel) shows a general agreement, with strong deviations in Antarctica and in the tropical regions. The OLR excess predicted by the model over the Antarctic regions is due to the temperature excess that we have already discussed.
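The area-weighted rms deviation quoted above weights each latitude zone by its area; for zones of equal latitude width the weight is proportional to the cosine of the zone latitude. A minimal sketch with made-up zonal profiles (the latitudes and temperatures below are illustrative, not data from the text):

```python
import math

def area_weighted_rms(lat_deg, model, obs):
    """Area-weighted rms deviation between two zonal profiles.
    Each zone of equal latitude width is weighted by cos(latitude),
    proportional to its surface area on the sphere."""
    w = [math.cos(math.radians(lat)) for lat in lat_deg]
    num = sum(wi * (m - o) ** 2 for wi, m, o in zip(w, model, obs))
    return math.sqrt(num / sum(w))

# Made-up example: three zones with a uniform 1 K offset -> rms = 1 K
lats = [15.0, 45.0, 75.0]
print(area_weighted_rms(lats, [289.0, 279.0, 259.0],
                              [288.0, 278.0, 258.0]))  # 1.0
```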
The bumps of OLR emission measured at the edges of the tropical regions are connected with the presence of large deserts and of low clouds with warm tops, while the reduced equatorial OLR is connected with the presence of deep convective clouds with cold tops (e.g. Hartmann, 2016). Neither of these two features can be captured by our model, which performs a sort of average that provides the correct global value of the OLR (Table 6). The good match between model and observations in the North polar region is an improvement with respect to the original ESTM, and is due to the fact that the scaling factor (14) raises the planetary OLR emitted from frozen regions. In the bottom-right panel we show a diagnostic test on ice coverage that was not performed by V15. The predicted mean annual zonal coverage of ice is in general agreement with the area-weighted land and ocean data (red dots). This implies that the new algorithm that we have introduced, based on Eq. (4), is able to capture the main characteristics of the ice coverage through its dependence on a single parameter, namely the surface temperature. A treatment of the physics of ice formation and melting is, at this stage, beyond the scope of our model.

## 4 Testing non-terrestrial conditions

By changing the input parameters that describe the stellar, orbital and planetary properties, the ESTM can in principle be applied to simulate a broad spectrum of exoplanetary climates. In this context, validation tests are required to assess the limits of validity of the model in non-terrestrial conditions. At the present time, however, the climate systems of exoplanets are poorly constrained by observations and are of no use for validating the model. Given this situation, the best way to test EOS-ESTM is to perform a comparison with the predictions obtained by other models that have been developed to investigate non-terrestrial planetary climates.
Of particular interest is the comparison with 3D and 1D climate models, given the fact that ESTM is a 2D model, in the sense that we have clarified in Section 2. Below we provide the results of some preliminary comparison tests, starting from a simple simulation that we have performed using a 3D model of intermediate complexity. We then describe several tests that we performed using predictions published in the literature. So far, the space of stellar/planetary parameters that affect exoplanetary climates has been covered only partially in previous work. Therefore a comparison of different model predictions is only possible for a limited number of cases. Here we focus our attention on the models and published results summarized in Table 10.

### 4.1 Earth-like aquaplanet

As a preliminary comparison test with a 3D climate model, we used the global climate model of intermediate complexity PlaSim (Fraedrich et al., 2005; Angeloni et al., 2020). Specifically, we tested the case of an aquaplanet with rotational spin aligned with the orbital spin ($\epsilon=0$), the remaining parameters being equal to those of the Earth. For the sake of comparison with EOS-ESTM, the PlaSim simulation was run without oceanic transport. The resulting mean annual latitude profiles of surface temperature and top-of-atmosphere albedo are shown in Fig. 9. Despite the 3D nature of PlaSim and the different prescriptions of surface features, clouds, and ice between the two models, one can see that the results are in general agreement. This is true, in particular, for the temperature profile (left panel). The differences found in the albedo profiles (right panel) are due to the cloud distribution, which follows the atmospheric circulation pattern that can be modelled in PlaSim but not in EOS-ESTM. Setting PlaSim parameters to non-terrestrial conditions is not trivial. This is true, in general, for all 3D models, particularly for the most complex ones.
For this reason, extending the comparison tests with PlaSim to cover a broader space of parameters will be the subject of a separate work.

Figure 9: Comparison of model predictions obtained for an Earth-like aquaplanet using the EOS-ESTM (black curves) and the 3D climate model of intermediate complexity PlaSim (green curves). Left panel: mean annual surface temperature versus latitude. Right panel: mean annual top-of-atmosphere albedo versus latitude. See Section 4.1.

Table 10: Summary of the test cases represented in Figs. 10, 11 and 12. The second and third columns give the radiative transfer (RT) model and the atmospheric properties adopted. The last column reports the references. Kunze et al. (2014) adopted the RAD4ALL model (Nissen et al., 2007) for the shortwave transport and the RRTM model (Mlawer et al., 1997) for the longwave transport.

Model Name | RT model | Atmosphere | Reference
---|---|---|---
ESTM | EOS | 1.013 bar; N2, CO2 (360 ppm), CH4 (1.8 ppm) and H2O | This work
1DGodolt2016 | K84 (Kasting et al., 1984) | 1 bar; N2, O2, CO2 (355 ppm), CH4 (1.64 ppm), O3 and H2O | Godolt et al. (2016)
3DLeconte2013 | LMDG | 1 bar; N2, CO2 (376 ppm), and H2O | Leconte et al. (2013)
3DWolf&Toon2015 | CAM 4 | 0.983 bar; N2, CO2 (367 ppm), and H2O | Wolf & Toon (2015)
3DWolf&Toon2014 | CAM 3 | 0.983 bar; N2, CO2 (367 ppm), and H2O | Wolf & Toon (2014)
EBMShields+2013 | SMART | present-day Earth; CO2, O2 and H2O | Shields et al. (2013)

### 4.2 Variations of stellar insolation

To test the model response to variations of insolation, $S$, we ran a set of simulations aimed at reproducing similar climate experiments performed with 1D (Godolt et al., 2016) and 3D models (Leconte et al., 2013; Wolf & Toon, 2014; Shields et al., 2014; Wolf & Toon, 2015). In all cases, an Earth-like atmosphere was considered, with properties described in Table 10. The comparison with the 1D model is shown in Fig.
10, where we plot the mean surface temperature as a function of $S$ for two cases considered by Godolt et al. (2016). The first case (left panel) is a cloud-free Earth-like planet with a fixed albedo $A_{\texttt{surf}}$ = 0.22, a value that, according to Godolt et al. (2016), reproduces the mean surface temperature of Earth in a cloud-free model. The second case (right panel) is an Earth-like aquaplanet with $A_{\texttt{surf}}$ = 0.07, as adopted by Godolt et al. (2016), which is representative of the ocean albedo. In both cases ice was not considered. One can see that, despite the existence of some differences in the parametrizations (Table 10), the EOS-ESTM results (red solid lines) are in good agreement with those provided by Godolt et al. (2016) (black solid lines). Departures between the models arise for high values of surface temperature. This effect is emphasised in the aquaplanet scenario, where the deviations start to be present above $\sim$ 280 K (right panel). Since the differences become important at high temperature, when the atmospheres contain more water vapor, this effect may be induced by a different treatment of the relative humidity (RH). Indeed, in our model we adopt a constant value, RH=60%, representative of the mean global value of the Earth, whereas Godolt et al. (2016) used a parametrization proposed by Manabe & Wetherald (1967), representative of the RH vertical profile of the Earth.

Figure 10: Surface temperatures of an Earth-like planet (left panel) and of an aquaplanet (right panel) orbiting a Sun-like star at various stellar insolations, $S$. In both cases, a cloud-free configuration is considered with no ice formation and a fixed value of surface albedo: 0.22 (left panel) and 0.07 (right panel). Black solid line: results obtained by Godolt et al. (2016) with a 1D model with relative humidity specified by Manabe & Wetherald (1967). Red solid line: results obtained in this work, where we adopt RH=60%.
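The two RH treatments can be contrasted with a short sketch. The Manabe & Wetherald profile is written in its commonly quoted form, RH = RH$_s$ (Q − 0.02)/(1 − 0.02) with Q = p/p$_s$; the surface value 0.77 and the pressure grid below are illustrative, not the values used in either paper's calculations:

```python
import numpy as np

def rh_manabe_wetherald(p, p_surf, rh_surf=0.77):
    """Manabe & Wetherald (1967)-style relative-humidity profile.

    RH decreases with height following the normalized pressure
    coordinate Q = p / p_surf; clipped to [0, rh_surf].
    """
    q = p / p_surf
    return np.clip(rh_surf * (q - 0.02) / (1.0 - 0.02), 0.0, rh_surf)

p_surf = 1.013e5                      # Pa, illustrative surface pressure
p = np.linspace(p_surf, 1e4, 10)      # from the surface up to 100 hPa
rh_mw = rh_manabe_wetherald(p, p_surf)
rh_const = np.full_like(p, 0.60)      # EOS-ESTM choice: constant RH = 60%
for pi, a, b in zip(p, rh_mw, rh_const):
    print(f"{pi/100:7.1f} hPa  MW: {a:.2f}  const: {b:.2f}")
```

At high surface temperature the column holds much more water vapor, so whether RH is held constant or tapered with height changes the greenhouse forcing appreciably, which is consistent with the divergence above ~280 K noted in the text.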
Figure 11: Comparison of the global and annual mean surface temperature (left panel) and TOA albedo (right panel) obtained from different Earth climate models by increasing the solar constant. Red solid line: EOS-ESTM (this work). Black solid line: 3D model CAM4 (Wolf & Toon, 2015). Green solid line: 3D model by Leconte et al. (2013). Dashed lines in the right panel represent the surface albedo of these three models, indicated with the same color coding.

In Fig. 11 we compare the mean annual global surface temperature (left panel) and TOA albedo (right panel) obtained with the 3D climate models. When the increase of insolation is modest, the results obtained with our model (red line) are in general agreement with those provided by the 3D models. However, discrepancies with Leconte et al. (2013) (green solid lines) and Wolf & Toon (2015) (black solid lines) appear at higher insolation, when the surface temperature rises above $\sim$ 290 K. Important differences are likely to arise from the different RT models adopted (Table 10). As shown in Simonetti et al. (2022, fig. 10a therein) and in Yang et al. (2016, fig. 3a therein), differences in the impact of the water vapor absorption predicted by different RT models start to become important for temperatures higher than 290 K. In particular, due to the onset of the runaway greenhouse instability, the slope of the OLR vs $T_{\texttt{surf}}$ relation in the CAM4 and LMDG models flattens more than in EOS. Therefore, under equal insolation conditions, EOS-ESTM features lower surface temperatures. On the other hand, the CAM3 model (solid cyan line) exhibits a lower surface temperature in response to higher insolations. We believe that this behaviour can be associated with the temperature dependence of the OLR and TOA albedo. In fact, for temperatures above $\sim$300 K, the CAM3 model features the largest value of OLR (Simonetti et al., 2022, Fig.
10a therein) and, at the same time, a slightly higher TOA albedo compared to other models (Simonetti et al., 2022, Fig. 10b therein). Besides the different RT recipes, deviations in the predictions are expected because our model does not incorporate a 3D physical treatment of the cloud and water vapor feedbacks, even though it does reproduce the essential features of the ice-albedo feedback and, to some extent, the rise of water vapor with temperature. We suggest that the sharp transitions of surface temperature (left panel) and TOA albedo (right panel) found by Leconte et al. (2013) and Wolf & Toon (2015) can be associated with variations in the cloud fraction in response to the increase of insolation. This interpretation is consistent with the fact that such transitions are not found for the surface albedo (dashed lines, right panel), for which the cloud/atmospheric effects are not relevant.

### 4.3 Variation of stellar spectra

The ice-albedo feedback is a well known mechanism that affects the planetary climate with a de-stabilizing effect that, in the most extreme cases, may lead to an ice-covered planetary state, called "snowball" (Kirschvink, 1992). Owing to the wavelength dependence of the albedo, the impact of this effect depends on the spectral energy distribution (SED) of the central star. G-type stars, like our Sun, emit a far greater fraction of their radiation in the visible part of the spectrum, whereas smaller and cooler M dwarfs exhibit their peak output in the $\sim$0.8 to 1.2 $\mu$m range (Shields et al., 2013, Fig. 1a therein). The fact that these stars emit a significant fraction of their radiation above 1 $\mu$m, combined with the reduction of the albedos of snow and ice at the same wavelengths (Shields et al., 2013, Fig. 1b therein), implies that the albedos of frozen surfaces are lower on planets orbiting M-type stars than on Earth.
Calculations of the broadband albedo (the ratio of the surface upward radiation flux to the downward radiation flux within a certain wavelength range; Kokhanovsky, 2021), performed taking into account the stellar SEDs and the wavelength-dependent albedo of snow and ice, indicate that the ice-albedo feedback is weaker around M-type stars (Joshi & Haberle, 2012). The atmospheric contribution to the albedo around these stars was studied by von Paris et al. (2013): the presence of trace amounts of H2O and CH4 in the atmosphere, as well as high CO2 pressures, damps the ice-albedo feedback in planets around M-type stars. To test the EOS-ESTM predictions at different stellar SEDs we performed a comparison with the work by Shields et al. (2013). These authors used a 1D radiative transfer model (SMART) to calculate the broadband planetary albedo, given the spectrum of the central star and that of the surface albedo. Then, they included the resulting broadband albedo in a 1D EBM to calculate the mean global surface temperature as a function of insolation for an aquaplanet orbiting a G-type star (the Sun) and an M-type star (AD Leo). Following their prescriptions, we considered an aquaplanet with an axis obliquity of 23∘, zero orbital eccentricity and a present-day Earth atmospheric composition. For consistency with their work, we considered a spectral distribution representative of AD Leo, which is an M3.5-type star with M⋆ = 0.42 M⊙ (Reiners et al., 2009). Since at decreasing insolation the aquaplanet undergoes a transition towards a "snowball" state, we dedicated special attention to selecting the value of the albedo of ice over ocean, $a_{io}$. Among the different values of "blue ice" (according to Shields et al. (2013), blue marine ice results from freezing of liquid marine water and not from glacier ice) calculated by Shields et al. (2013, Table 2), we adopt the value for the case with no gases and clouds and no Rayleigh scattering.
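The SED-weighted broadband albedo can be sketched numerically: weight a wavelength-dependent surface albedo by the stellar spectrum and take the mean. The sketch below approximates the stellar SEDs with blackbodies and uses a toy step-function ice albedo; it is illustrative only, not the SMART calculation of Shields et al. (2013):

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(lam_m, T):
    """Blackbody spectral radiance B_lambda (arbitrary normalization)."""
    return 1.0 / (lam_m**5 * (np.exp(H * C / (lam_m * KB * T)) - 1.0))

def broadband_albedo(lam_um, albedo_lam, T_star):
    """SED-weighted mean of a wavelength-dependent surface albedo,
    approximating the stellar SED with a blackbody at T_star."""
    w = planck(lam_um * 1e-6, T_star)
    return float(np.sum(albedo_lam * w) / np.sum(w))

# Toy ice/snow albedo: high in the visible, low beyond ~1.1 micron
lam = np.linspace(0.3, 3.0, 500)          # wavelength grid, micron
a_ice = np.where(lam < 1.1, 0.8, 0.15)    # illustrative step function

a_sun = broadband_albedo(lam, a_ice, 5772)      # G star (Sun-like)
a_mdwarf = broadband_albedo(lam, a_ice, 3400)   # M dwarf (AD Leo-like)
print(a_sun, a_mdwarf)  # the M-dwarf value is the lower of the two
```

Because the cooler star puts more of its flux beyond 1 μm, where ice is dark, the resulting broadband ice albedo is lower, which is exactly the mechanism that weakens the ice-albedo feedback around M dwarfs.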
This is the case most appropriate for the surface albedo in our model, since EOS-ESTM calculates the contribution of the atmosphere and clouds in its own way. The results are shown in Figure 12, where the planet orbiting the M-type dwarf appears less susceptible to "snowball" states, since both the ice and the atmosphere are particularly absorptive in the NIR. In spite of small differences associated with the transition to a complete "snowball" state, expressed as a sudden decrease of global surface temperature, the results obtained with EOS-ESTM (red lines) and by Shields et al. (2013) (black lines) show an overall agreement. The main difference between the trends found in the two models may arise from the different parameterization of ice: the smooth transition in our model is probably due to the gradual temperature dependence of the ice coverage described in Section 2.3.

Figure 12: Mean global surface temperature versus stellar flux for an aquaplanet orbiting an M-type (dashed lines) and a G-type (solid lines) star. Due to the different stellar SEDs, the albedo of ice over the ocean, $a_{io}$, is varied as indicated in the legend. Black lines: predictions obtained by Shields et al. (2013) with SMART in combination with an EBM adapted from North & Coakley (1979). Red lines: results obtained in this work, adopting a warm start for consistency with Shields et al. (2013).

### 4.4 Variations of planet radius and rotation rate

A critical difference between ESTM and GCMs is the treatment of the meridional transport. As explained in V15 (see Sec. 2.1), we model the $D$ term in Eq. (1) as a scaling relation between planetary quantities that are involved in the physics of the meridional transport. To test the reliability of this parameterization, we ran a set of simulations varying one parameter at a time and compared our results with similar tests performed with GCMs by other authors.
Specifically, we varied the planetary radius and rotation rate and performed a comparison with results published by Kaspi & Showman (2015) and Komacek & Abbot (2019). The mean annual equator-to-pole temperature difference, $\Delta T_{\text{EP}}$, is a good indicator of the efficiency of the meridional transport and is expected to be higher in planets with fast rotational velocities or large radii. The Coriolis forces resulting from planetary rotation tend to inhibit the transport from the tropics to the poles, leading to a higher gradient $\Delta T_{\text{EP}}$. Quantifying these effects with 3D models is important because changes of rotational angular velocity may affect the location of the inner edge of the habitable zone (Yang et al., 2014; Yang et al., 2019). Also variations of planetary radius affect the meridional gradient: as the radius increases, so does the physical distance between equator and poles, leading to a less efficient meridional heat distribution, i.e. a larger $\Delta T_{\text{EP}}$. In Fig. 13 we show how $\Delta T_{\text{EP}}$ is predicted to change as a function of planetary radius (left panels) and rotation period (right panels) for different models that we describe below. Following Komacek & Abbot (2019), we normalize $\Delta T_{\text{EP}}$ to the values predicted by each model for Earth’s values of rotation rate and radius. The results obtained by Kaspi & Showman (2015) and Komacek & Abbot (2019) are rather different, despite both being based on 3D models. We refer to the latter paper for a discussion on these differences, which may be due to different physico-chemical assumptions and to the fact that the model of Komacek & Abbot (2019) had not been tuned to match the Earth. Here we compare our results with those obtained in these two papers. 
#### 4.4.1 Comparison with Kaspi & Showman (2015)

To investigate the atmospheric dynamics over a wide range of the planetary parameter space, Kaspi & Showman (2015) adopted a 3D GCM with a scheme similar to that of Frierson et al. (2006) both for the radiative transfer and for the surface boundary layer: a standard two-stream gray radiation scheme and a uniform 1-meter water-covered slab with an albedo of $A=0.35$, respectively. They modelled an idealized aquaplanet at perpetual equinox with an Earth-like reference atmosphere. The effects of clouds, sea ice and continents were not accounted for. For the sake of comparison we adopted the same set of conditions in EOS-ESTM. In the top panels of Fig. 13 one can see that, in spite of differences at the low- and high-radius and rotation rate regimes, the EOS-ESTM predictions (red symbols and lines) reproduce the trends obtained by Kaspi & Showman (2015) with the 3D aquaplanet (blue symbols and lines). The two sets of results are consistent as long as the planets have radii and rotation rates sufficiently close to those of the Earth. For habitability studies we are interested in the range of radii expected for rocky planets, shown as shaded red areas in the left panels of Fig. 13. In this range, the predictions of the two models are comparable.

Figure 13: Normalized equator-to-pole temperature difference as a function of planet radius (left panels) and planet rotation rate (right panels) for an Earth-like aquaplanet. Top panels: comparison between the results obtained in this work (red lines) and those obtained by Kaspi & Showman (2015) (blue lines) for a cloud-free aquaplanet with a 1-meter deep slab ocean, an axis obliquity $i=0^{\circ}$, a fixed albedo ($A=0.35$) and no sea ice.
Bottom panels: comparison between the results obtained in this work (red lines) and by Komacek & Abbot (2019) (blue lines) for an aquaplanet with a 50-meter deep slab ocean where the effects of clouds, nongray radiative transfer, and sea ice are included; axis obliquity $i=0$ and eccentricity $e=0$. Shaded red area: range of radius (0.5<R/R⊕<1.6) at which an exoplanet is more likely to be composed of rock and metal (Meadows & Barnes, 2018).

#### 4.4.2 Comparison with Komacek & Abbot (2019)

More recently, Komacek & Abbot (2019) investigated how the atmospheric circulation and climate of planets orbiting Sun-like stars vary when planetary parameters are changed. They used the state-of-the-art GCM ExoCAM (a modified version of the Community Atmosphere Model version 4) to simulate an idealized aquaplanet with a 50-m water slab without oceanic transport and an atmosphere of N2 and H2O. At variance with Kaspi & Showman (2015), they included the effects of clouds, non-gray radiative transfer, and sea ice. To test the EOS-ESTM predictions making use of their results, we adopted, for consistency, a uniform 50-meter thick water-covered slab, the same atmosphere of N2 and H2O, and the same ice albedo; we set to zero both the axis obliquity and the orbital eccentricity. In the bottom panels of Figure 13 we compare the normalized $\Delta T_{\text{EP}}$ obtained with our model (red symbols and lines) with those obtained with the GCM (blue symbols and lines). In the bottom-left panel one can see that the trend of increasing $\Delta T_{\text{EP}}$ that we find (red line) is consistent with, but steeper than, that found with the GCM (blue line), the departures becoming significant above the radius limit of rocky planets. The smoother trend found with the GCM suggests that the large-scale 3D circulation, not present in our model, may enhance the heat distribution.
Our trend is somewhat steeper than the one that we found in the previous test (top-left panel), indicating how the inclusion of clouds and ice impacts our results. The bottom-right panel of Figure 13 shows a consistent trend for angular velocities lower than $\Omega_{\oplus}$, with a discrepancy at high angular velocity ($\Omega$ = 2 $\Omega_{\oplus}$). This discrepancy is surprising, because our algorithm for meridional transport is expected to be more realistic for fast-rotating planets (see V15), an indication supported by the comparison with Kaspi & Showman (2015) shown in the top-right panel. Clearly, the two 3D models that we are using for comparison show remarkable differences between them and should be taken with some caution. At low rotation speed, where we know that our assumptions are more critical (see V15), our model seems to underestimate $\Delta T_{\text{EP}}$, as in the comparison with Kaspi & Showman (2015) shown in the top-right panel. These results suggest that the 3D circulation may be able to redistribute the heat efficiently, with a weak dependence on the planet rotation rate.

#### 4.4.3 Future improvements of the model

In our parametrization of the meridional transport, the term $D$ scales as $\Omega^{-4/5}$ and $R^{-6/5}$ (see V15, Section 2.1). Taking advantage of the flexibility of our model, we varied the exponents of these power laws, searching for a better agreement with the trends obtained by the 3D models shown in Fig. 13. The dashed green lines plotted in all panels of that figure show that a better match with the 3D results is achieved when adopting a more moderate dependence on both the angular velocity, $\propto\Omega^{-0.5}$, and the radius, $\propto R^{-0.5}$. This exercise shows that, in principle, one could recalibrate the exponents of the scaling relations that we adopt for $D$, making use of specifically designed tests performed with GCMs. To this end it would be important to use realistic 3D models for cross validation.
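The effect of the two sets of exponents on the transport coefficient can be contrasted numerically. In the sketch below, only the exponents (−4/5, −6/5 from V15 versus the recalibrated −0.5, −0.5) come from the text; the Earth-calibrated prefactor is an illustrative placeholder:

```python
def transport_coefficient(omega_ratio, r_ratio,
                          p_omega=-0.8, p_radius=-1.2, d_earth=0.66):
    """Meridional transport coefficient from a power-law scaling relation:

        D = D_earth * (Omega/Omega_earth)**p_omega * (R/R_earth)**p_radius

    Default exponents follow V15 (-4/5, -6/5); d_earth (W m^-2 K^-1)
    is an illustrative Earth-calibration value, not the EOS-ESTM one.
    """
    return d_earth * omega_ratio**p_omega * r_ratio**p_radius

for omega in (0.5, 1.0, 2.0):
    d_v15 = transport_coefficient(omega, 1.0)
    d_new = transport_coefficient(omega, 1.0, p_omega=-0.5, p_radius=-0.5)
    print(f"Omega/Omega_e = {omega:3.1f}:  D(V15) = {d_v15:.3f}  "
          f"D(recalibrated) = {d_new:.3f}")
```

With the shallower exponents, doubling the rotation rate reduces $D$ by a factor 2^0.5 ≈ 1.41 instead of 2^0.8 ≈ 1.74, i.e. the equator-to-pole gradient responds more weakly to rotation, as the comparison with the GCMs suggests.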
Realistic models should include the main components of the climate system and should be calibrated to match the Earth data. Setting state-of-the-art GCMs to simulate non-terrestrial conditions is not a straightforward task. However, this is the way to proceed for expanding the range of application of flexible models such as EOS-ESTM and exploring the parameter space that allows habitable climates to exist.

### 4.5 The outer edge of the habitable zone

In classic studies of the habitable zone (HZ) the locations of the inner and outer edge are calculated making use of single-column, cloud-free atmospheric climate models (Kasting et al., 1993; Kopparapu et al., 2013a). In recent years several studies have proven the critical role of planetary properties on the position and extension of the circumstellar HZ (Yang et al., 2014; Rushby et al., 2019; Yang et al., 2019; Zhao et al., 2021). Here we take advantage of the flexibility of EOS-ESTM to investigate how the location of the outer edge is affected by variations of planetary parameters. Establishing the exact location of the outer edge would require a study of cloud effects (e.g. Forget & Pierrehumbert, 1997; Selsis et al., 2007; Kitzmann, 2017) and of the possible presence of other greenhouse gases, such as CH4 (see Ramirez & Kaltenegger, 2018). However, for the climate experiments that we present here, we simply adopt the "maximum greenhouse" limit, defined as the maximum distance at which a cloud-free planet with an atmosphere dominated by CO2 can maintain a surface temperature of 273 K (Kasting et al., 1993). Beyond this limit, the greenhouse effect due to a further rise of CO2 is offset by the rise of atmospheric albedo due to the Rayleigh scattering of CO2 molecules.
To explore the impact of planetary parameters on the location of the outer edge, we considered a cloud-free, CO2-dominated atmosphere with a dry surface pressure of 7.3 bar (Kopparapu and collaborators also had an additional bar of N2 in all their models, and thus evaluated the CO2 partial pressure; taken alone, it would produce a surface pressure of 7.3 bar), which is the value identified by Kopparapu et al. (2013a) as the maximum greenhouse limit. To build the pressure-temperature profile of the atmosphere we followed the recipes in the appendix of Kasting (1991), with a H2O-saturated lower troposphere, a CO2-saturated upper troposphere and a 154 K isothermal stratosphere. We varied the insolation of a planet with Earth-like parameters and a solar-type central star, searching for the limit at which the planet undergoes a transition to a snowball state. For simplicity, we considered only the solutions obtained with warm initial conditions, i.e. starting with $T_{0}$=300 K, which provide a conservative outer limit to the habitable zone. The results of these experiments are shown in Fig. 14, where we plot the mean-annual global ice coverage as a function of insolation obtained for different values of planetary rotation, radius, axis tilt, and ocean/land distribution. In all cases we find that the transition to a snowball state is rather sharp, taking place around $\simeq 1.6$ AU. This result is in general agreement with the maximum greenhouse limit for a solar-type star found at 1.67 AU by Kasting et al. (1993) and Kopparapu et al. (2013b) from single-column calculations. The different treatment of the radiative transfer and of the climate recipes in our model can explain why we find the snowball transition at a location somewhat closer to the star than the classic outer edge. In particular, we used more recent spectral data (HITRAN2016) and a different H2O continuum model with respect to Kasting et al. (1993) and Kopparapu et al. (2013a).
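The warm-start procedure (scanning the insolation downwards from $T_0$ = 300 K and looking for the jump to a frozen state) can be illustrated with a toy zero-dimensional energy balance. All constants below (albedos, emissivity, heat capacity, transition temperature) are illustrative, not the calibrated EOS-ESTM values:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def albedo(T, a_warm=0.25, a_ice=0.60, T_mid=268.0, dT=6.0):
    """Planetary albedo with a smooth (logistic) ice-albedo feedback,
    loosely mimicking a temperature-dependent ice coverage."""
    f_ice = 1.0 / (1.0 + np.exp((T - T_mid) / dT))
    return a_warm + (a_ice - a_warm) * f_ice

def equilibrium_T(S, T0=300.0, emissivity=0.61, steps=20000, dt=1e7, C=4e8):
    """Relax a zero-dimensional energy balance to equilibrium,
    starting from the warm initial condition T0 (in K)."""
    T = T0
    for _ in range(steps):
        T += dt / C * (S * (1.0 - albedo(T)) / 4.0 - emissivity * SIGMA * T**4)
    return T

# Scan decreasing insolation and watch for the jump to a frozen state
for S in (1361.0, 1200.0, 1000.0, 800.0):
    print(f"S = {S:6.1f} W/m^2  ->  T_eq = {equilibrium_T(S):5.1f} K")
```

Even in this crude sketch the transition is sharp: once the warm branch of the energy balance disappears, the equilibrium temperature drops by tens of kelvin over a modest decrease in insolation, which is the qualitative behaviour seen in Fig. 14.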
Our RT calculations employed a coarser vertical pressure grid with respect to Kopparapu et al. (2013a), which is known to slightly increase the OLR (thus increasing the lower insolation limit). Finally, we adopted less opaque CIA prescriptions for the CO2 with respect to Kasting et al. (1993), which are considered more in line with experimental results (see Wordsworth et al., 2010, for a discussion on the subject). The extra dimension (latitude) and the ice-albedo feedback that are present in our model provide a detailed description of the climate changes that take place in the proximity of the outer edge. A detailed analysis of Fig. 14 highlights the role played by different planetary properties in determining the onset of the snowball transition. In the top left panel one can see that the transition occurs at increasing distance from the star when the rotation period increases. This effect is expected because the heat transport from the equator to the poles becomes more efficient with increasing $P_{\text{rot}}$, leading to a slower growth of the polar ice caps. In the top right panel one can see that the increase of planetary radius shifts the snowball boundary inwards. This is due to the fact that the heat transport to the poles is less effective in planets with larger $R_{\text{p}}$, leading to a faster growth of the polar caps. The bottom left panel shows that an increase of the planetary axis tilt, $\epsilon$, shifts the snowball limit outwards. The effect is negligible up to $\epsilon\simeq 20^{\circ}$, becoming evident above $\simeq 30^{\circ}$. In this moderate range of obliquities the effect can be interpreted as follows. The configuration at $\epsilon=0^{\circ}$ favours the formation of permanent ice caps in the polar regions, where the zenith distance $Z$ is always large. As the obliquity starts to increase, the polar regions undergo a period of higher insolation (lower $Z$) in some seasons, which tends to reduce the ice caps.
At very high obliquities the behaviour is more complex (see Section 4.4.2 in V13) and can be properly investigated only using 3D models. In the bottom-right panel of Fig. 14, we show the impact of variations of the ocean/land distribution. As one can see, the outer limit shifts outwards when the fraction of oceans, $f_{o}$, increases. The land planet, with $f_{o}=0.05$, provides an extreme example of an early snowball transition. These results can be understood in terms of the lower albedo and higher thermal capacity of the oceans compared to the continents. The snowball transition that we find is slightly sharper in ocean planets than in desert planets owing to the slightly different temperature dependence of ice over oceans and lands (Section 2.3). Our results are in line with recent findings that planets covered largely by oceans have warmer average surface temperatures than land-covered planets (Rushby et al., 2019).

Figure 14: Dependence on planetary parameters of the fractional ice coverage calculated at the outer edge of the HZ. The results were obtained for a cloud-free Earth-like planet with a CO2-dominated, maximum greenhouse atmosphere, the remaining parameters being fixed to Earth values; only the solutions obtained with warm initial conditions ($T_{0}=300$ K) are shown (see Section 4.5). Top left panel: rotation period, $P_{\text{rot}}$; top right panel: planet radius, $R_{\text{p}}$; bottom left panel: axis obliquity, $\epsilon$; bottom right panel: geography.

## 5 Conclusions

We have presented EOS-ESTM, a flexible climate model aimed at simulating the surface and atmospheric conditions that characterize habitable planets. The model allows one to perform a fast exploration of the parameter space representative of planetary quantities, including those currently not measurable in rocky exoplanets.
EOS-ESTM has been built starting from ESTM, a seasonal-latitudinal EBM featuring an advanced treatment of surface and cloud components and a 2D (vertical and latitudinal) treatment of the energy transport. The main upgrades of EOS-ESTM can be summarized as follows:

* The atmospheric radiative transfer is calculated using EOS (Simonetti et al., 2022), a procedure tailored for atmospheres of terrestrial-type planets, based on the opacity calculator HELIOS-K (Grimm & Heng, 2015; Grimm et al., 2021) and the radiative transfer code HELIOS (Malik et al., 2017, 2019). Thanks to EOS, the ESTM radiative transfer can now be calculated for a variety of atmospheres with different bulk and greenhouse compositions, illuminated by stars with different SEDs.

* The parameterizations that describe the cloud properties have been largely upgraded. New equations have been introduced for the albedo of the clouds and its dependence on the albedo of the underlying surface. The cloud coverage over ice is now a function of the global planetary ice coverage. A specific treatment for the transmittance and OLR forcing of clouds at very low temperature has been introduced.

* A generalized logistic function has been introduced to estimate the ice coverage as a function of the mean zonal surface temperature. Based on a detailed study of the ice distribution on Earth, the adopted algorithm discriminates between ice over lands and oceans. The albedo and thermal capacity of transitional ice is now estimated using the fractional ice coverage.

With the aim of providing a reference model for studies of habitable planets, we calibrated EOS-ESTM using a large set of Earth satellite and reanalysis data. The reference Earth model satisfies a variety of diagnostic tests, including mean global measurements (Table 6) and mean latitudinal profiles of surface temperature, TOA albedo, OLR and ice coverage (Fig. 8).
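The generalized logistic form of the ice-coverage parameterization can be sketched as follows; the midpoint temperatures, widths and asymmetry parameter below are illustrative placeholders, not the calibrated values of Eq. (4):

```python
import math

def ice_cover(T, T_mid, width, nu=1.0):
    """Generalized logistic fraction of ice cover versus zonal surface
    temperature: decreases from 1 (frozen) to 0 (ice-free) around T_mid.

    `width` sets the sharpness of the transition and `nu` its asymmetry.
    """
    return (1.0 + nu * math.exp((T - T_mid) / width)) ** (-1.0 / nu)

# Separate illustrative transitions for ice over land and over ocean
print(" T [K]   land   ocean")
for T in (250.0, 265.0, 273.0, 280.0, 295.0):
    f_land = ice_cover(T, T_mid=268.0, width=4.0)
    f_ocean = ice_cover(T, T_mid=271.0, width=2.0)
    print(f"{T:6.1f}  {f_land:5.3f}  {f_ocean:5.3f}")
```

The single driving variable is the zonal surface temperature, in line with the discussion in Section 3: land and ocean simply get different midpoint and width parameters.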
The positive results of the diagnostic tests were obtained by tuning the parameters within narrow ranges, perfectly consistent with measurements of each climate component. All the Earth's data used in our analysis were selected for the same period (2005-2015) and the atmospheric trace content of greenhouse gases was tuned accordingly (Section 3.1.3). Due to the lack of a 3D treatment of clouds and atmospheric circulation, the model is not able to reproduce the detailed shape of the OLR latitudinal profile, even though it does correctly reproduce the mean global value. To test the consistency of EOS-ESTM with previous studies of non-terrestrial climate conditions we performed a series of comparisons with a hierarchy of climate models (Section 4). The results of these tests can be summarized as follows:

* The latitudinal profiles of temperature and albedo of an Earth-like aquaplanet are in agreement with predictions obtained using the 3D, intermediate-complexity model PlaSim. The differences that we find are due to the lack of 3D atmospheric circulation and of a 3D representation of clouds in our model.

* Comparisons performed at varying levels of insolation yield results which are in general agreement with other models. However, critical differences appear at high insolation and temperature, when the resulting abundance of water vapour makes the radiative transfer calculations extremely model-dependent. Changing the stellar spectrum at moderate and low levels of insolation yields consistent results.

* Comparisons performed at varying planetary radius and rotation rate yield consistent results, but suggest that the dependence of the meridional transport on these planetary quantities may be more moderate than estimated in V15. This test indicates that some parameters of our model can be recalibrated using a proper set of climate experiments carried out with state-of-the-art GCMs.
* The application of EOS-ESTM to the case of a CO2-dominated atmosphere in maximum greenhouse conditions (Kasting et al., 1993) yields a detailed description of the transition to a snowball state that takes place when the insolation decreases in the proximity of the outer edge of the HZ. Thanks to the flexibility of our model, we can explore how this transition develops under different planetary conditions (e.g. rotation rate, radius, axis tilt, ocean coverage), also taking into account the presence of climate bistability.

The possibility of easily adapting the input parameters to simulate a broad spectrum of planetary and atmospheric quantities allows one to apply EOS-ESTM to a large variety of terrestrial-type exoplanets. As in the case of the original ESTM, this flexibility can be used to explore in detail the habitability conditions of individual exoplanets (Silva et al., 2017b) or to perform statistical studies of exoplanetary habitability (Murante et al., 2020). With EOS-ESTM it will be possible to extend these types of studies with a more accurate treatment of the climate effects of land, oceans, ice and clouds, expanding the palette of atmospheres to non-terrestrial compositions and the host stars to non-solar types. The flexibility of EOS-ESTM paves the way for building up multiparameter habitable zones, each parameter being representative of a planetary property that affects the climate. To achieve this ambitious goal, it is important to assess the consistency with respect to a hierarchy of climate models, devising a dedicated series of experiments with the same set of initial conditions.
Given the vastness of possibilities to be tested, a collaborative effort is required in order to establish proper protocols for a meaningful comparison of models developed by independent research groups, such as the TRAPPIST-1 Habitable Atmosphere Intercomparison (THAI, Fauchez et al., 2020, 2021a), and the future larger project Climates Using Interactive Suites of Intercomparisons Nested for Exoplanet Studies (CUISINES), a NExSS (https://nexss.info/) Working Group (Fauchez et al., 2021b).

## Acknowledgements

The authors wish to thank the Italian Space Agency for co-funding the Life in Space project (ASI N. 2019-3-U.0). The research reported in this work was supported by OGS and CINECA under HPC-TRES program award number 2022-02. We thank the referee for his/her careful reading of the manuscript and helpful comments.

## DATA AVAILABILITY

The data used for this article will be shared on reasonable request to the corresponding author.

## References

* Abbot (2014) Abbot D., 2014, Journal of Climate, 27, 4391 * Angeloni et al. (2020) Angeloni M., Palazzi E., von Hardenberg J., 2020, Geoscientific Model Development Discussions, 2020, 1 * Barnes (2017) Barnes R., 2017, Celestial Mechanics and Dynamical Astronomy, 129, 509 * Borucki et al. (2010) Borucki W. J., et al., 2010, Science, 327, 977 * Briegleb (1992) Briegleb B. P., 1992, Journal of geophysical research, 97, 7603 * Briegleb et al. (1986) Briegleb B. P., Minnis P., Ramanathan V., Harrison E., 1986, Journal of Applied Meteorology, 25, 214 * Broeg et al. (2018) Broeg C., Benz W., Fortier A., 2018, in 42nd COSPAR Scientific Assembly. pp E4.1–5–18 * Cess (1976) Cess R. D., 1976, Journal of Atmospheric Sciences, 33, 1831 * Clough et al. (2005) Clough S. A., Shephard M. W., Mlawer E. J., Delamere J. S., Iacono M. J., Cady-Pereira K., Boukabara S., Brown P. D., 2005, J. Quant. Spectrosc. Radiative Transfer, 91, 233 * Coakley (2003) Coakley J., 2003, Encyclopedia of Atmospheric Sciences * Ellis & Haar (1976) Ellis J.
S., Haar T. H. V., 1976, Zonal average earth radiation budget measurements from satellites for climate studies, https://mountainscholar.org/bitstream/handle/10217/87/0240_Bluebook.pdf?sequence=1 * Enomoto (2007) Enomoto T., 2007, JAMSTEC Report of Research and Development, 6 * Fauchez et al. (2020) Fauchez T. J., et al., 2020, Geoscientific Model Development, 13, 707 * Fauchez et al. (2021a) Fauchez T. J., et al., 2021a, Planetary Science Journal, 2, 106 * Fauchez et al. (2021b) Fauchez T., et al., 2021b, in Bulletin of the American Astronomical Society. p. 1018 * Forget & Pierrehumbert (1997) Forget F., Pierrehumbert R. T., 1997, Science, 278, 1273 * Fraedrich et al. (2005) Fraedrich K., Jansen H., Kirk E., Luksch U., Lunkeit F., 2005, Meteorologische Zeitschrift, 14, 299 * Frierson et al. (2006) Frierson D. M. W., Held I. M., Zurita-Gotor P., 2006, Journal of Atmospheric Sciences, 63, 2548 * Fujii et al. (2018) Fujii Y., et al., 2018, Astrobiology, 18, 739 * Fulton et al. (2017) Fulton B. J., et al., 2017, AJ, 154, 109 * Gardner et al. (2006) Gardner J. P., et al., 2006, Space Sci. Rev., 123, 485 * Gaudi et al. (2020) Gaudi B. S., et al., 2020, arXiv e-prints, p. arXiv:2001.06683 * Godolt et al. (2016) Godolt M., Grenfell J. L., Kitzmann D., Kunze M., Langematz U., Patzer A. B. C., Rauer H., Stracke B., 2016, A&A, 592, A36 * Gordon et al. (2017) Gordon I. E., et al., 2017, J. Quant. Spectrosc. Radiative Transfer, 203, 3 * Grimm & Heng (2015) Grimm S. L., Heng K., 2015, HELIOS-K: Opacity Calculator for Radiative Transfer (ascl:1503.004) * Grimm et al. (2021) Grimm S. L., et al., 2021, ApJS, 253, 30 * Haqq-Misra & Hayworth (2022) Haqq-Misra J., Hayworth B. P. C., 2022, The Planetary Science Journal, 3, 32 * Hartmann (2016) Hartmann D. L., 2016, Global Physical Climatology (Second Edition). Elsevier, Boston, doi:https://doi.org/10.1016/B978-0-12-328531-7.00021-9 * Hartmann et al. (1992) Hartmann D. L., Ockert-Bell M. E., Michelsen M. 
L., 1992, Journal of Climate, 5, 1281 * Hersbach et al. (2020) Hersbach H., et al., 2020, Quarterly Journal of the Royal Meteorological Society, 146, 1999 * Howard et al. (2012) Howard A. W., et al., 2012, ApJS, 201, 15 * Huang et al. (2019) Huang C. J., Qiao F., Chen S., Xue Y., Guo J., 2019, Journal of Geophysical Research: Oceans, 124, 4480–4491 * Jenkins et al. (2015) Jenkins J. M., et al., 2015, AJ, 150, 56 * Joshi & Haberle (2012) Joshi M. M., Haberle R. M., 2012, Astrobiology, 12, 3 * Kalirai (2018) Kalirai J., 2018, Contemporary Physics, 59, 251 * Kaltenegger (2017) Kaltenegger L., 2017, ARA&A, 55, 433 * Kaspi & Showman (2015) Kaspi Y., Showman A. P., 2015, ApJ, 804, 60 * Kasting (1991) Kasting J. F., 1991, Icarus, 94, 1 * Kasting et al. (1984) Kasting J. F., Pollack J. B., Crisp D., 1984, Journal of Atmospheric Chemistry, 1, 403 * Kasting et al. (1993) Kasting J. F., Whitmire D. P., Reynolds R. T., 1993, Icarus, 101, 108 * Kasting et al. (2014) Kasting J. F., Kopparapu R., Ramirez R. M., Harman C. E., 2014, Proceedings of the National Academy of Science, 111, 12641 * Kiehl et al. (1998) Kiehl J. T., Hack J. J., Bonan G. B., Boville B. A., Williamson D. L., Rasch P. J., 1998, Journal of Climate, 11, 1131 * King et al. (2013) King M. D., Platnick S., Menzel W. P., Ackerman S. A., Hubanks P. A., 2013, IEEE Transactions on Geoscience and Remote Sensing, 51, 3826 * Kirschvink (1992) Kirschvink J., 1992, The Proterozoic Biosphere: A Multidisciplinary Study. Cambridge University Press, pp 51–52 * Kitzmann (2017) Kitzmann D., 2017, A&A, 600, A111 * Kokhanovsky (2021) Kokhanovsky A., 2021, Frontiers in Environmental Science, 9, 757575 * Koll et al. (2019) Koll D. D. B., Malik M., Mansfield M., Kempton E. M. R., Kite E., Abbot D., Bean J. L., 2019, ApJ, 886, 140 * Komacek & Abbot (2019) Komacek T. D., Abbot D. S., 2019, ApJ, 871, 245 * Kopparapu et al. (2013a) Kopparapu R. K., et al., 2013a, ApJ, 765, 131 * Kopparapu et al. (2013b) Kopparapu R. 
K., Ramirez R., Kasting J. F., 2013b, The Astrophysical Journal, 770, 82 * Kopparapu et al. (2014) Kopparapu R. K., Ramirez R. M., SchottelKotte J., Kasting J. F., 2014, Astrophysical Journal, Letters, 787, L29 * Kreidberg (2018) Kreidberg L., 2018, Exoplanet Atmosphere Measurements from Transmission Spectroscopy and Other Planet Star Combined Light Observations. Springer International Publishing, Cham, pp 2083–2105, doi:10.1007/978-3-319-55333-7_100, https://doi.org/10.1007/978-3-319-55333-7_100 * Kuell et al. (2005) Kuell V., et al., 2005, Journal of Geophysical Research (Atmospheres), 110, D16104 * Kunze et al. (2014) Kunze M., Godolt M., Langematz U., Grenfell J. L., Hamann-Reinus A., Rauer H., 2014, Planet. Space Sci., 98, 77 * Leconte et al. (2013) Leconte J., Forget F., Charnay B., Wordsworth R., Pottier A., 2013, Nature, 504, 268 * Leconte et al. (2015) Leconte J., Wu H., Menou K., Murray N., 2015, Science, 347, 632 * Loeb et al. (2018) Loeb N. G., et al., 2018, Journal of Climate, 31, 895 * Lovelock (1965) Lovelock J. E., 1965, Nature, 207, 568 * Maiolino et al. (2013) Maiolino R., et al., 2013, arXiv e-prints, p. arXiv:1310.3163 * Malik et al. (2017) Malik M., et al., 2017, AJ, 153, 56 * Malik et al. (2019) Malik M., Kitzmann D., Mendonça J. M., Grimm S. L., Marleau G.-D., Linder E. F., Tsai S.-M., Heng K., 2019, AJ, 157, 170 * Manabe & Wetherald (1967) Manabe S., Wetherald R. T., 1967, Journal of Atmospheric Sciences, 24, 241 * McKay (2014) McKay C. P., 2014, Proceedings of the National Academy of Science, 111, 12628 * Meadows & Barnes (2018) Meadows V. S., Barnes R. K., 2018, Factors Affecting Exoplanet Habitability. Springer International Publishing, Cham, pp 2771–2794, doi:10.1007/978-3-319-55333-7_57, https://doi.org/10.1007/978-3-319-55333-7_57 * Meredith & Brandon (2017) Meredith M. P., Brandon M., 2017, in , Sea Ice (3rd ed). John Wiley & Sons, Chichester, pp 216–238, http://oro.open.ac.uk/48468/ * Mlawer et al. (1997) Mlawer E. J., Taubman S. 
J., Brown P. D., Iacono M. J., Clough S. A., 1997, J. Geophys. Res., 102, 16,663 * Mlawer et al. (2012) Mlawer E. J., Payne V. H., Moncet J. L., Delamere J. S., Alvarado M. J., Tobin D. C., 2012, Philosophical Transactions of the Royal Society of London Series A, 370, 2520 * Morley et al. (2017) Morley C. V., Kreidberg L., Rustamkulov Z., Robinson T., Fortney J. J., 2017, ApJ, 850, 121 * Murante et al. (2020) Murante G., et al., 2020, MNRAS, 492, 2638 * Myhre et al. (1998) Myhre G., Highwood E. J., Shine K. P., Stordal F., 1998, Geophysical Research Letters, 25, 2715 * Nissen et al. (2007) Nissen K. M., Matthes K., Langematz U., Mayer B., 2007, Atmospheric Chemistry & Physics, 7, 5391 * North & Coakley (1979) North G. R., Coakley James A. J., 1979, Journal of Atmospheric Sciences, 36, 1189 * North et al. (1981) North G. R., Cahalan R. F., Coakley James A. J., 1981, Reviews of Geophysics and Space Physics, 19, 91 * Payne (1972) Payne R. E., 1972, Journal of Atmospheric Sciences, 29, 959 * Perrin & Hartmann (1989) Perrin M. Y., Hartmann J. M., 1989, J. Quant. Spectrosc. Radiative Transfer, 42, 311 * Pierrehumbert (2010) Pierrehumbert R. T., 2010, Principles of Planetary Climate. Cambridge University Press * Provenzale (2013) Provenzale A., 2013, Rendiconti Lincei, 25 * Quanz et al. (2021) Quanz S. P., et al., 2021, arXiv e-prints, p. arXiv:2101.07500 * Ramirez & Kaltenegger (2018) Ramirez R. M., Kaltenegger L., 2018, ApJ, 858, 72 * Ramirez et al. (2019) Ramirez R., et al., 2019, BAAS, 51, 31 * Rauer et al. (2014) Rauer H., et al., 2014, Experimental Astronomy, 38, 249 * Reiners et al. (2009) Reiners A., Basri G., Browning M., 2009, ApJ, 692, 538 * Richard (1959) Richard F. J., 1959, Journ. of Experimental Botany, 10, 290 * Ricker et al. (2015) Ricker G. R., Winn J. N., Vanderspek R., 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003 * Rogers (2015) Rogers L. A., 2015, ApJ, 801, 41 * Rothman et al. (1992) Rothman L. S., et al., 1992, J. 
Quant. Spectrosc. Radiative Transfer, 48, 469 * Rushby et al. (2019) Rushby A. J., Shields A. L., Joshi M., 2019, ApJ, 887, 29 * Schwieterman et al. (2018) Schwieterman E. W., et al., 2018, Astrobiology, 18, 663 * Seidel et al. (2001) Seidel D. J., Ross R. J., Angell J. K., Reid G. C., 2001, J. Geophys. Res., 106, 7857 * Selsis et al. (2007) Selsis F., Kasting J. F., Levrard B., Paillet J., Ribas I., Delfosse X., 2007, A&A, 476, 1373 * Shields (2019) Shields A. L., 2019, ApJS, 243, 30 * Shields et al. (2013) Shields A. L., Meadows V. S., Bitz C. M., Pierrehumbert R. T., Joshi M. M., Robinson T. D., 2013, Astrobiology, 13, 715 * Shields et al. (2014) Shields A. L., Bitz C. M., Meadows V. S., Joshi M. M., Robinson T. D., 2014, ApJ, 785, L9 * Silva et al. (2017a) Silva L., Vladilo G., Schulte P. M., Murante G., Provenzale A., 2017a, International Journal of Astrobiology, 16, 244 * Silva et al. (2017b) Silva L., Vladilo G., Murante G., Provenzale A., 2017b, MNRAS, 470, 2270 * Simonetti et al. (2022) Simonetti P., Vladilo G., Silva L., Maris M., Ivanovski S. L., Biasiotti L., Malik M., von Hardenberg J., 2022, ApJ, 925, 105 * Snellen et al. (2015) Snellen I., et al., 2015, A&A, 576, A59 * Spiegel et al. (2008) Spiegel D. S., Menou K., Scharf C. A., 2008, ApJ, 681, 1609 * Stringer & Groves (1991) Stringer W., Groves J., 1991, ARCTIC, 44 * The LUVOIR Team (2019) The LUVOIR Team 2019, arXiv e-prints, p. arXiv:1912.06219 * Thompson & Barron (1981) Thompson S. L., Barron E. J., 1981, Journal of Geology, 89, 143 * Tinetti et al. (2018) Tinetti G., et al., 2018, Experimental Astronomy, 46, 135 * Trenberth & Caron (2001) Trenberth K. E., Caron J. M., 2001, Journal of Climate, 14, 3433 * Udry & Santos (2007) Udry S., Santos N. C., 2007, ARA&A, 45, 397 * Vladilo & Hassanali (2018) Vladilo G., Hassanali A., 2018, Life, 8, 1 * Vladilo et al. (2013) Vladilo G., Murante G., Silva L., Provenzale A., Ferri G., Ragazzini G., 2013, ApJ, 767, 65 * Vladilo et al. 
(2015) Vladilo G., Silva L., Murante G., Filippi L., Provenzale A., 2015, ApJ, 804, 50 * Von Paris et al. (2013) Von Paris P., Selsis F., Kitzmann D., Rauer H., 2013, Astrobiology, 13, 899 * Walker et al. (1981) Walker J. C. G., Hays P. B., Kasting J. F., 1981, J. Geophys. Res., 86, 9776 * Wiedner et al. (2021) Wiedner M. C., et al., 2021, Experimental Astronomy, 51, 595 * Wild et al. (2013) Wild M., Folini D., Schär C., Loeb N., Dutton E. G., König-Langlo G., 2013, AIP Conference Proceedings, 1531, 628 * Williams & Kasting (1997) Williams D. M., Kasting J. F., 1997, Icarus, 129, 254 * Winn & Fabrycky (2015) Winn J. N., Fabrycky D. C., 2015, ARA&A, 53, 409 * Wolf & Toon (2013) Wolf E. T., Toon O. B., 2013, Astrobiology, 13, 656 * Wolf & Toon (2014) Wolf E. T., Toon O. B., 2014, Geophys. Res. Lett., 41, 167 * Wolf & Toon (2015) Wolf E. T., Toon O. B., 2015, Journal of Geophysical Research (Atmospheres), 120, 5775 * Wolf et al. (2022) Wolf E. T., Kopparapu R., Haqq-Misra J., Fauchez T. J., 2022, Planetary Science Journal, 3, 7 * Wordsworth et al. (2010) Wordsworth R., Forget F., Eymet V., 2010, Icarus, 210, 992 * Yang et al. (2014) Yang J., Boué G., Fabrycky D. C., Abbot D. S., 2014, The Astrophysical Journal, 787, L2 * Yang et al. (2016) Yang J., et al., 2016, ApJ, 826, 222 * Yang et al. (2019) Yang H., Komacek T. D., Abbot D. S., 2019, Astrophysical Journal, Letters, 876, L27 * Zhao et al. (2021) Zhao Z., Liu Y., Li W., Liu H., Man K., 2021, ApJ, 910, L8
# Opinion Dynamics in Financial Markets via Random Networks Mateus F. B. Granha <EMAIL_ADDRESS> Física de Materiais, Universidade de Pernambuco, Recife, PE 50720-001, Brazil André L. M. Vilela <EMAIL_ADDRESS> Universidade de Pernambuco, Recife, PE 50720-001, Brazil Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215, USA Chao Wang <EMAIL_ADDRESS> College of Economics and Management, Beijing University of Technology, Beijing, 100124, China Kenric P. Nelson <EMAIL_ADDRESS> Photrek LLC, Watertown, MA 02472, USA H. Eugene Stanley <EMAIL_ADDRESS> Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215, USA ###### Abstract We investigate the financial market dynamics by introducing a heterogeneous agent-based opinion formation model. In this work, we organize the individuals in a financial market by their trading strategy, namely noise traders and fundamentalists. The opinion of a local majority compels the market exchanging behavior of noise traders, whereas the global behavior of the market influences the fundamentalist agents’ decisions. We introduce a noise parameter $q$ to represent a level of anxiety and perceived uncertainty regarding the market behavior, enabling the possibility for an adrift financial action. We place the individuals as nodes in an Erdös-Rényi random graph, where the links represent their social interaction. At a given time, they assume one of two possible opinion states $\pm 1$ regarding buying or selling an asset. The model exhibits such fundamental qualitative and quantitative real-world market features as the distribution of logarithmic returns with fat-tails, clustered volatility, and long-term correlation of returns. We use Student’s t distributions to fit the histograms of logarithmic returns, showing the gradual shift from a leptokurtic to a mesokurtic regime, depending on the fraction of fundamentalist agents.
We also compare our results with the distribution of logarithmic returns of several real-world financial indices. Econophysics, Sociophysics, Monte Carlo simulation, Phase transitions, Complex Networks ###### pacs: 89.65.Gh Economics; econophysics, financial markets, business and management, 87.23.Ge Dynamics of social systems, 05.10.Ln Monte Carlo methods, 64.60.Cn Order-disorder transformations ††preprint: APS/123-QED ## I INTRODUCTION For decades now, especially over the last years, financial markets and economic systems have fascinated researchers and investors worldwide. The effectiveness of methods and techniques from Statistical Mechanics has been a compelling element for the comprehension of financial dynamics as a complex system stanley2000introduction ; Stanley2002 ; Farmer2005 ; Baldovin2007 ; Li2020 . Simultaneously, it has been responsible for developing the interdisciplinary field of Econophysics, where agent-based models have a significant contribution. In this framework, several models use a set of elementary rules to represent the agent behavior and interactions, thus yielding rich emergent collective phenomena bonabeau2002agent ; macal2005tutorial ; macal2009agent ; vilela2019majority ; zubillaga2019three ; Feng2012 ; Zhao2011 . Oliveira de1992isotropic ; Oliveira1993 pioneered the study of opinion dynamics modeled by the majority-vote model (MVM) with noise, a sociological adaptation of a magnetic spin model that presents similar critical behavior. Lima et al., Campos et al. and Pereira et al. lima2008majority ; campos2003small ; pereira2005majority investigated the effects of complex networks on the MVM social dynamics, thus revealing an ordered phase over the increasing complexity of these networks. Vilela et al. vilela2019majority incorporated the rational and emotional behavior of agents in financial markets by introducing the global-vote model, inspired by the majority-vote dynamics.
The MVM provides a comprehensive approach to the study of social dynamics with interactions embedded in regular and complex networks de1992isotropic ; Oliveira1993 ; santos1995anisotropic ; vieira2016phase ; pereira2005majority ; lima2008majority ; campos2003small ; vilela2009 ; vilela2018effect ; vilela2021 . In this agent-based model with two states, the individual’s opinion at a given time instant may assume one of two values, $\pm 1$, regarding some social discussion. Furthermore, an agent in the social network assumes the opinion of the majority of its neighboring spins with probability $(1-q)$ and the opposite opinion with probability $q$. The variable $q$ stands for the noise parameter of the model and measures the social unrest or social temperature of the system. The MVM exhibits a second-order phase transition at a critical noise value $q=q_{c}\approx 0.075$ in square lattice networks of social interactions de1992isotropic . Complex networks are a natural framework for the study of real-world complex systems, such as climate analysis, biological neural connections, the World Wide Web, public transportation, airline networks, financial markets, among others feldhoff2015complex ; reijneveld2007application ; barabasi2000scale ; an2014synchronization . In this context, random graph networks describe the topology of a set of $N$ nodes connected with probability $p$, thus introducing a fundamental probability distribution over graphs. The Erdös-Rényi algorithm is one well-known method for the assembly of random graph networks. The method connects an initial set of $N$ isolated nodes by adding a total of $pN(N-1)/2$ random links between them, while double connections are forbidden erdos1960evolution ; bollobas1985random . In Figure 1, we illustrate the Erdös-Rényi process of building a random graph network with $N=10$ and $\left<k\right>=3$.
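The Erdös-Rényi construction just described (connect $N$ isolated nodes with $pN(N-1)/2$ random links, forbidding doubles) can be sketched in a few lines of Python; `erdos_renyi` and its arguments are our illustrative names, not code from the paper:

```python
import random

def erdos_renyi(n, k_avg, seed=0):
    """Build an adjacency list by adding k_avg*n/2 distinct random links
    between n initially isolated nodes (self-loops and doubles forbidden)."""
    rng = random.Random(seed)
    n_links = round(k_avg * n / 2)    # = p*n*(n-1)/2 with p = k_avg/(n-1)
    adj = [set() for _ in range(n)]
    edges = set()
    while len(edges) < n_links:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and (min(i, j), max(i, j)) not in edges:
            edges.add((min(i, j), max(i, j)))
            adj[i].add(j)
            adj[j].add(i)
    return adj

graph = erdos_renyi(2000, 6)
mean_degree = sum(len(nb) for nb in graph) / len(graph)   # exactly 6.0 here
```

For large $N$, the resulting degree distribution approaches a Poisson distribution with mean $\left<k\right>=pN$, consistent with the fits shown in Figure 1(e).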
We also present the averaged degree distribution $P(k)$ over ten networks with $N=2\times 10^{4}$ nodes and several values of $\left<k\right>$. The lines correspond to Poisson fits for the data with average degree $\left<k\right>=pN$, in agreement with the degree distribution of random graphs for large networks. Figure 1: Illustration of the Erdös-Rényi process of building random graph networks. From (a) to (d), $N=10$ isolated nodes are connected by adding a total of $pN(N-1)/2$ random links between them. While double connections are forbidden, the final network has $\left<k\right>=3$. In (e) we show the degree distribution $P(k)$ averaged over ten networks for $\left<k\right>=6,8,10$ and $20$ with $N=2\times 10^{4}$. The lines represent Poisson fits for the data. Modern financial markets operate as a complex system. Their dynamics depend not only on the rational strategy of the individual but also on the emotional circumstances that define the investor’s psyche. At first, individual decisions might seem challenging to model; still, social agents tend to follow a herding behavior, as they feel sheltered when the crowd is endorsing their choices. In this way, trader social dynamics is reasonably tractable to model cont2000herd ; sznajd2002simple . This bias to follow a significant group is often embraced by less experienced agents, denominated noise traders. They are highly susceptible to the dominant opinion and tend to overreact to fresh news regarding buying or selling. In contrast, other traders, denominated noise contrarian traders or fundamentalists, follow the global minority in a given market as an investment strategy. Thus, they buy when stock prices decline and sell when the prices increase sznajd2002simple ; lux1999scaling ; bornholdt2001expectation ; kaizoji2002dynamics ; takaishi2005simulations . Therefore, their decision-making process drives the asset price to its fundamental value.
Inspired by the global-vote model for financial markets vilela2019majority , we propose to investigate the influence of random graph networks on the time evolution of financial market quantities. This work employs an opinion formation model to study the economic mechanics of noise traders and contrarian individuals interacting through a random graph of social influence. The remainder of this paper is organized as follows. In Section II, we describe the global-vote opinion formation model for financial markets and present the relevant quantities analyzed in our simulations. In Section III, we present the numerical results obtained along with the corresponding discussions. Section IV concludes with our final remarks. ## II THE MODEL We represent the financial agents on the market and their interactions using a complex network structure. We place $N$ agents on the nodes of a random graph network, where the links represent the interaction between the agents in the market. We map the agent’s financial decision (opinion or option) at a given time $t$ onto a spin variable, which may assume one of two values: $+1$ or $-1$, regarding buying or selling an asset. Furthermore, to model the essential dynamics of real-world financial markets, we randomly distribute two sets of individuals: a fraction $f$ of fundamentalist traders, also referred to as noise contrarians, and the remaining fraction $1-f$ of noise traders. We use the spin variables $\lambda$ and $\alpha$ to stand for the financial option of noise traders and contrarian agents, respectively. We introduce a noise parameter $q$ to model the level of anxiety and perceived uncertainty present in the market. In this study, $q$ influences contrarian and noise traders’ decisions and represents the probability of performing an inaccurate financial action. In other words, $q$ is the chance for an agent to choose the opposite of its standard strategy when negotiating in the financial market.
### II.1 Noise Traders A noise trader individual $i$, with opinion $\lambda_{i}$, assumes the same opinion as the majority of its neighbors with probability $1-q$, and the opposite option with probability $q$. We write the opinion flipping probability for the noise trader agent $i$ as follows $\omega(\lambda_{i})=\frac{1}{2}\left[1-(1-2q)c_{\lambda}\lambda_{i}\textrm{sgn}(m)\right],$ (1) where $\textrm{sgn}(x)=-1,0,+1$ for $x<0$, $x=0$ and $x>0$, respectively. Here, $c_{\lambda}$ stands for the agent strategy, where we set $c_{\lambda}=+1$ to model the noise traders’ tendency to agree with the local majority of their neighbors. The variable $m$ quantifies the local predominant opinion, or local magnetization, defined as $m=\sum_{\delta=1}^{k_{i}}\lambda_{i+\delta}.$ (2) The summation is over all the $k_{i}$ neighboring agents connected to the trader at node $i$, with opinion $\lambda_{i}$. From Eq. (1), when $q=0$ the noise trader adopts its local predominant opinion. When we increase the market anxiety parameter $q$, the noise trader tends to follow the opposite opinion of its local majority. ### II.2 Noise Contrarians A fundamentalist agent tends to follow the market’s minority opinion with probability $1-q$, while following the majority opinion with probability $q$. We define the prevailing option of the system as a global magnetization, which influences noise contrarian traders’ financial option. The option of a noise contrarian agent $j$, $\alpha_{j}$, flips with probability $\omega(\alpha_{j})=\frac{1}{2}\left[1-(1-2q)c_{\alpha}\alpha_{j}\textrm{sgn}(M)\right],$ (3) where $c_{\alpha}$ is defined as the fundamentalist’s strategy, and since these individuals tend to agree with the global minority of the system, $c_{\alpha}=-1$ for all noise contrarian agents. The variable $M$ measures the average market opinion of the system, thus revealing the economic order.
The magnetization $M$ accounts for the financial opinion of every agent on the market, and is evaluated as $M=\frac{1}{(N_{\lambda}+N_{\alpha})}\left(\sum_{i=1}^{N_{\lambda}}\lambda_{i}+\sum_{j=1}^{N_{\alpha}}\alpha_{j}\right),$ (4) where $N_{\lambda}=N(1-f)$ stands for the number of noise trader agents in the network and $N_{\alpha}=Nf$ represents the number of noise contrarian agents. For $M=1$, every agent on the market has an option equal to $+1$. Similarly, $M=-1$ denotes a market configuration where every agent’s opinion is equal to $-1$. $M=0$ represents the case where half of the agents have opinion $+1$ and the other half $-1$. Additionally, intermediate values of $M$ indicate the dominant opinion of the market. We remark that when $q=0$, a contrarian agent always follows the opposite opinion of the global magnetization: it buys (sells) when the majority sells (buys). As $q$ increases, there is a greater probability for the contrarian agent to follow the majority, adopting the opposite of its inherent financial market strategy. In the global-vote dynamics for financial markets, we recover the standard majority-vote model on random graphs when the fraction of contrarians $f$ is zero. In this case, we observe an order-disorder phase transition at a critical point $q_{c}$ for each value of the average connectivity $\left<k\right>$ in the thermodynamic limit $N\rightarrow\infty$ pereira2005majority . This $f=0$ system presents an ordered phase, with large clusters of agents that share the same opinion for values of noise $q$ below the critical point $q_{c}$. By increasing $q$, these same-opinion clusters fade, and the magnetization (order parameter) approaches zero. In this work, we analyze the global-vote dynamics for financial markets on random graphs for several values of the average connectivity $\left<k\right>$ and a noise contrarian trader fraction $f\neq 0$.
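Equations (1) and (3) share the same functional form and differ only in the strategy sign $c$ and in which field (the local sum $m$ or the global magnetization $M$) enters the sign function. A minimal sketch, with function names of our own choosing:

```python
def sgn(x):
    """Sign function: -1, 0 or +1."""
    return (x > 0) - (x < 0)

def flip_probability(spin, field, c, q):
    """Eqs. (1)/(3): w = (1/2) * [1 - (1 - 2q) * c * spin * sgn(field)].

    spin : current opinion, +1 or -1
    field: local sum m (noise trader) or global magnetization M (contrarian)
    c    : +1 for noise traders, -1 for noise contrarians
    q    : anxiety/uncertainty noise parameter
    """
    return 0.5 * (1.0 - (1.0 - 2.0 * q) * c * spin * sgn(field))
```

At $q=0$ a noise trader aligned with its local majority never flips ($w=0$), while a contrarian aligned with the global majority always flips ($w=1$); a tied field gives $w=1/2$ regardless of $q$.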
We set $q$ near $q_{c}(f=0,\left<k\right>)$ to model real-world market dynamics adequately, since previous investigations suggest that the strong market phase emerges when the system is close to its critical melting point bornholdt2001expectation ; vilela2019majority . ## III RESULTS AND DISCUSSION We perform Monte Carlo simulations on Erdös-Rényi random graphs of size $N=10201$ and several values of the average connectivity $\left<k\right>$. For the network of financial interactions, we randomly place a fraction $1-f$ of noise traders on the network nodes, and we occupy the remaining fraction $f$ with noise contrarian agents. In this way, $N=N_{\lambda}+N_{\alpha}$. This work considers that a real-world market presents a small fraction of noise contrarian agents. To this extent, we investigate the influence of the noise contrarian agents for $q$ near its critical value for the $f=0$ case, obtained in previous studies for several values of $\left<k\right>$ pereira2005majority , where our model exhibits key real-world market features vilela2019majority . Additionally, we consider small values for the average number of connections of an agent in a market, i.e., small values of $\left<k\right>$, supported by the agreement between the data of real-world markets and the results of our simulations. The market dynamics evolve as follows. We select a randomly chosen agent and update its opinion according to the probabilities given by Eqs. (1) or (3) for a noise trader or a noise contrarian agent, respectively. We repeat this process $N$ times, thus defining a unit of time of one Monte Carlo step (MCS). This way, each agent’s opinion is updated once on average in one MCS. To discard the transient regime, we allow the dynamics to run for $10^{3}$ MCS, and we perform our analysis over the subsequent $10^{5}$ MCS.
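The update scheme just described ($N$ random sequential single-spin updates per MCS) can be sketched as below; the helper names and the incremental bookkeeping of the magnetization are our own simplifications, not the authors' code:

```python
import random

def monte_carlo_step(spins, is_contrarian, adj, q, rng):
    """One MCS: N random sequential updates following Eqs. (1) and (3).
    Returns the magnetization M (Eq. (4)) after the sweep."""
    n = len(spins)
    total = sum(spins)                # n * M, updated incrementally
    for _ in range(n):
        i = rng.randrange(n)
        if is_contrarian[i]:
            field = -total           # c = -1 folded into the field's sign
        else:
            field = sum(spins[j] for j in adj[i])   # local majority, c = +1
        s = (field > 0) - (field < 0)
        w = 0.5 * (1.0 - (1.0 - 2.0 * q) * spins[i] * s)
        if rng.random() < w:
            total -= 2 * spins[i]
            spins[i] = -spins[i]
    return total / n

# Tiny illustration on a ring of N = 100 agents with f = 0.2 contrarians.
rng = random.Random(7)
N = 100
spins = [rng.choice((-1, 1)) for _ in range(N)]
is_contrarian = [rng.random() < 0.2 for _ in range(N)]
adj = [((i - 1) % N, (i + 1) % N) for i in range(N)]
M = monte_carlo_step(spins, is_contrarian, adj, 0.24, rng)
```

A production run on the ER graphs of the paper would simply swap the ring adjacency for the random-graph adjacency and repeat the sweep $10^{3}+10^{5}$ times.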
In the financial context, we relate the global magnetization of the system to the aggregate excess demand for a particular asset, given its impact on stock prices bornholdt2001expectation ; kaizoji2002dynamics ; takaishi2005simulations . Thus, a positive demand ($M>0$) causes prices to rise, while a negative demand ($M<0$) causes prices to fall. Moreover, markets that fluctuate around equilibrium cont2000herd exhibit an average excess demand that oscillates around zero. The financial return time series represents the price variation of a given asset over time. Positive (negative) returns relate to profit (loss) during the period of analysis. We quantify the logarithmic return at a given time $t$ as follows $r(t)=\textrm{log}\left[\left|M(t)\right|\right]-\textrm{log}\left[\left|M(t-1)\right|\right].$ (5) In Figure 2, we present the logarithmic return for the closing values of the S&P 500 index in US dollars, from Nov 01, 2012 to Dec 08, 2020. Similar to any other traditional asset, the S&P 500 index depends mainly on market supply and demand, a fundamental financial mechanism that yields several periods of strong return variations. Periods of significant fluctuations of returns are compressed in time, denoting the clustered volatility effect for the analyzed index price. This financial phenomenon is known as volatility clustering, and it can be elucidated by Mandelbrot’s observation that “large changes tend to be followed by large changes - of either sign - and small changes tend to be followed by small changes” mandelbrot1963variation . Figure 2 also shows that the period around t = 1850 presents the most considerable time-series return volatility. It denotes the Coronavirus pandemic phase, when government officials halted economic activity. The panic and uncertainty triggered by the financial impacts of such measures led to an expressive stock market crash vega2021 ; wei2021 .
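Eq. (5) translates directly into code. The small `eps` guard against $M(t)=0$ (where the logarithm diverges) is our practical addition, not part of the original definition:

```python
import math

def log_returns(magnetization, eps=1e-12):
    """r(t) = log|M(t)| - log|M(t-1)|, Eq. (5)."""
    return [
        math.log(abs(magnetization[t]) + eps)
        - math.log(abs(magnetization[t - 1]) + eps)
        for t in range(1, len(magnetization))
    ]
```

Note that the absolute value makes the return sensitive only to the magnitude of the excess demand, not to its sign.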
Figure 2: Logarithmic returns for the daily closing price of the S&P 500 index in US dollars from Nov 01, 2012 to Dec 08, 2020. The high volatility observed for the period around $t=1850$ represents the Coronavirus Stock Market Crash of 2020. In Figure 3, we analyze the influence of the average connectivity $\left<k\right>$ and the noise contrarian fraction $f$ on the market order of the model on random graphs. Figure 3(a) exhibits two distinct market phases: a “strong market phase” for $f=0.20$ (dark blue), where the system is highly volatile and the magnetization exhibits an irregular wave pattern; and a “weak market phase” for $f=0.70$ (yellow), where magnetization values are roughly randomly distributed [9]. This result demonstrates that the increase of contrarians tends to stabilize the market dynamics: highly volatile periods are still present, but to a substantially lower extent. Additionally, increasing the agents’ average number of connections $\left<k\right>$ also yields stochastic patterns for the market demand, while driving a contraction of the amplitude of the fluctuations, as observed in Figure 3(b). Figure 3: Time series of the order parameter $M$ for two sets of parameters: (a) $\left<k\right>=6$ for $q=0.240$, and (b) $\left<k\right>=50$ for $q=0.411$. Figure 4 displays the logarithmic returns of the absolute value of the order parameter. In financial markets, we define volatility as a measure of the variation around the average returns observed in an asset’s time series. In this figure, we note intensive market fluctuations for $f=0.20$, embodied by the large spikes in the plot. The result also shows clustered volatility, a real-world market feature. We observe that the presence of contrarians tends to stabilize the market, indicated by attenuated fluctuations of the returns.
Furthermore, when comparing Figures 4(a) and 4(b), it becomes clear that increasing the average connectivity of the network increases the market’s volatility while simultaneously driving the behavior away from that expected of real-world financial markets. Figure 4 also shows that periods of high volatility tend to be clustered for lower values of $f$. Figure 4: Logarithmic returns of the absolute magnetization for (a) $\left<k\right>=6$ and $q=0.240$, (b) $\left<k\right>=50$ and $q=0.411$. In order to quantify the effects of volatility clustering, we compute the autocorrelation of absolute returns as follows $A(\tau)=\frac{\sum_{t=\tau+1}^{T}\left[\left|r(t)\right|-|\bar{r}|\right]\left[\left|r(t-\tau)\right|-|\bar{r}|\right]}{\sum_{t=1}^{T}\left[\left|r(t)\right|-|\bar{r}|\right]^{2}},$ (6) where $1\leq\tau\leq 10^{5}\ \textrm{MCS}$ is the time-step difference between observations, $T=10^{5}\ \textrm{MCS}$ is the simulation time, $r(t)$ is the return at time $t$, and $\bar{r}$ is the average value of the return. The function defined by Eq. (6) measures nonlinear correlations in observations of the absolute value of log-returns as a function of the time delay between them. Figure 5 displays the autocorrelation for several values of $f$, and it shows that the returns present a strong correlation in time with exponential decay [33, 34, 32, 9]. Figure 5: Linear-log plot of the autocorrelation of absolute logarithmic returns for (a) $\left<k\right>=6$ and $q=0.240$, and (b) $\left<k\right>=8$ and $q=0.275$. The dashed red lines represent exponential fits for the data. Figure 6: Linear-log plot of the autocorrelation of absolute logarithmic returns of the closing values for several financial indices. Also shown is the autocorrelation of absolute log-returns for $\left<k\right>=6$, $q=0.240$ and $f=0.20$ (dark blue).
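Eq. (6) can be evaluated directly from the return series. The sketch below follows the equation literally, with $|\bar{r}|$ the absolute value of the mean return; the function name is illustrative.

```python
def abs_return_autocorr(returns, tau):
    """Autocorrelation of absolute returns at lag tau, following Eq. (6)."""
    t_len = len(returns)
    mean_abs = abs(sum(returns) / t_len)          # |r-bar| in Eq. (6)
    dev = [abs(r) - mean_abs for r in returns]    # |r(t)| - |r-bar|
    denom = sum(d * d for d in dev)
    num = sum(dev[t] * dev[t - tau] for t in range(tau, t_len))
    return num / denom
```

By the Cauchy-Schwarz inequality the result is bounded by one in magnitude, and $A(0)=1$ by construction.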
To illustrate this exponential behavior, we perform an exponential fit of the autocorrelation function $A(q,f,t)\sim\textrm{exp}(-t/t_{0})$ for $f=0.20$, and we obtain $1/t_{0}\approx 2.5\times 10^{-7}$ and $1/t_{0}\approx 3.0\times 10^{-7}$ for Figures 5(a) and 5(b), respectively. Other values for the fraction of noise contrarians yield similar results. In Figure 6, we compare the results of our model with real-world financial indices. We calculate the autocorrelation function for the daily log-returns of the closing values of the indices: Dow Jones (DJI), observed from Jan 29, 1985 to Dec 02, 2020; Ibovespa (BVSP), observed from July 24, 1993 to Dec 02, 2020; Nikkei (N225), observed from Jan 05, 1980 to Dec 02, 2020; S&P 500 (GSPC), observed from Jan 02, 1985 to Dec 02, 2020; and Nasdaq (IXIC), observed from Oct 01, 1985 to Dec 02, 2020. We analyze each index for roughly $7\times 10^{3}$ days. We note that the returns for each investigated index display a pronounced correlation which decays exponentially in time. Figure 7 displays the histogram of the log-returns for two sets of parameters and several values of the noise contrarian fraction $f$ in $10^{5}$ MCS. Real-world market systems display fat-tailed distributions, reflecting non-negligible probabilities of obtaining returns far above or below the average. We observe such behavior only for lower values of the fraction of contrarians $f$, substantiating a real-world market region for our agent-based dynamics. To quantify the return distributions, we perform a statistical analysis of the data. We obtain that the kurtosis values $\textrm{K}(\left<k\right>,f)$ for $f=0.20$ (strong market phase) are $\textrm{K}(6,0.20)=5.85$ and $\textrm{K}(8,0.20)=4.98$. For $f=0.70$ (weak market phase), $\textrm{K}(6,0.70)=2.41$ and $\textrm{K}(8,0.70)=2.37$.
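The kurtosis values quoted above follow the Pearson convention, for which a Gaussian gives $\textrm{K}=3$; a minimal sketch (the function name is illustrative):

```python
def kurtosis(x):
    """Pearson kurtosis K = m4 / m2^2, where m_n is the n-th central moment.
    K = 3 for a Gaussian; K > 3 signals fat tails (leptokurtic)."""
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / m2 ** 2
```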
We remark that $\textrm{K}=3$ for a normal distribution; thus, increasing the fraction of contrarians $f$ gradually shifts the system’s behavior from a leptokurtic (fat-tailed) regime to a mesokurtic (Gaussian-like) regime. To qualify the distributions of Figure 7, we perform a comparative normal quantile-quantile (Q-Q) plot. Fig. 8 displays Q-Q plots for the distribution of log-returns using several values of the fraction of contrarians $f$. The red line represents the expected results of a Gaussian distribution; if a particular distribution exhibits similar behavior, its data points should lie on that reference line. For lower values of $f$, for instance $f\leq 0.40$, both Figures 8(a) and 8(b) display a non-linear behavior in the Q-Q plot, indicating that the distributions of log-returns feature fat tails and are therefore non-Gaussian. In such distributions, one observes a greater chance of obtaining high return values than expected for normal distributions, consistent with real-world market systems [1]. Figure 7: Distribution of logarithmic returns for $10^{5}$ MCS and several values of the fraction of contrarians $f$: (a) $\left<k\right>=6$ and $q=0.240$, and (b) $\left<k\right>=8$ and $q=0.275$. Figure 8: Normal quantile-quantile plots of the logarithmic return distributions for (a) $\left<k\right>=6$ and $q=0.240$, and (b) $\left<k\right>=8$ and $q=0.275$. Here we use $10^{5}$ MCS. Figure 9: Plot of the absolute log-return distributions $F(q,\ f,\ r)$ for several values of the fraction $f$ and $10^{5}$ MCS. The lines for $f\leq 0.40$ correspond to Student’s t fits, whereas the lines for $f\geq 0.5$ correspond to Gaussian fits. We use (a) $\left<k\right>=6$ and $q=0.240$, and (b) $\left<k\right>=8$ with $q=0.275$. Figure 9 displays the probability distribution of the absolute log-returns for several values of $f$ in $10^{5}$ MCS with $\left<k\right>=6$ and $\left<k\right>=8$.
We find that Student’s t-distributions can depict the strong market phase data ($f\leq 0.40$), while the weak market phase is well fitted by Gaussian distributions [9]. Assuming that the mean values $\mu$ of the distributions are zero, as illustrated by Figure 10, we perform generalized Student’s t fits for the data $S(r;\nu,\mu,\sigma)=S(r;\nu,\sigma)$, where $\nu$ is the number of degrees of freedom and $\sigma$ represents the scale. The function $S$ is defined as follows $S(r;\nu,\sigma)=\frac{1}{\sqrt{\nu\sigma^{2}}\textrm{B}\left(\frac{\nu}{2},\frac{1}{2}\right)}\left(1+\frac{r^{2}}{\nu\sigma^{2}}\right)^{-\frac{\nu+1}{2}},$ (7) where $\textrm{B}\left(\frac{\nu}{2},\frac{1}{2}\right)=\int_{0}^{1}t^{\frac{\nu}{2}-1}(1-t)^{-\frac{1}{2}}\textrm{dt},$ (8) is the Beta function. The values obtained for the degree $\nu$ and scale $\sigma$ are displayed in Table 1. As the fraction of contrarians increases, we find that the absolute log-return distributions are well fitted by a Gaussian distribution $g(r)$ $g(r)=\frac{1}{\sigma\sqrt{2\pi}}\ \textrm{e}^{-\frac{1}{2}\left(\frac{r-\mu}{\sigma}\right)^{2}},$ (9) where $\sigma$ is the standard deviation and $\mu$ is the mean, which we shall again take as zero. In Table 2, we display the Gaussian fit information.

| $f$ | $\nu$ ($\left<k\right>=6$, $q=0.240$) | $\sigma$ ($\left<k\right>=6$, $q=0.240$) | $\nu$ ($\left<k\right>=8$, $q=0.275$) | $\sigma$ ($\left<k\right>=8$, $q=0.275$) |
|---|---|---|---|---|
| 0.20 | 5.0 $\pm$ 0.3 | 0.80 $\pm$ 0.01 | 3.0 $\pm$ 0.1 | 0.785 $\pm$ 0.008 |
| 0.25 | 7.4 $\pm$ 0.5 | 0.558 $\pm$ 0.006 | 6.3 $\pm$ 0.4 | 0.621 $\pm$ 0.006 |
| 0.30 | 12 $\pm$ 2 | 0.444 $\pm$ 0.006 | 10 $\pm$ 1 | 0.559 $\pm$ 0.008 |
| 0.40 | 13 $\pm$ 2 | 0.252 $\pm$ 0.005 | 38 $\pm$ 2 | 0.319 $\pm$ 0.006 |

Table 1: Dependence of the degree $\nu$ and scale $\sigma$ of the Student’s t fits on the fraction of contrarians $f$, the average connectivity $\left<k\right>$ and the noise $q$.
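Eq. (7) can be evaluated directly by writing the Beta function in terms of gamma functions, $\textrm{B}(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$; a sketch, with illustrative names:

```python
import math

def student_t_pdf(r, nu, sigma):
    """Generalized zero-mean Student's t density of Eq. (7):
    S(r; nu, sigma) = [sqrt(nu sigma^2) B(nu/2, 1/2)]^-1
                      * (1 + r^2 / (nu sigma^2))^(-(nu+1)/2)."""
    beta = math.gamma(nu / 2) * math.gamma(0.5) / math.gamma((nu + 1) / 2)
    norm = 1.0 / (math.sqrt(nu * sigma ** 2) * beta)
    return norm * (1.0 + r ** 2 / (nu * sigma ** 2)) ** (-(nu + 1) / 2)
```

A numerical check with the Table 1 values $\nu=5.0$, $\sigma=0.80$ confirms the density integrates to one, as required before comparing to the histograms of Figure 9.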
| $f$ | $\sigma$ ($\left<k\right>=6$, $q=0.240$) | $\sigma$ ($\left<k\right>=8$, $q=0.275$) |
|---|---|---|
| 0.50 | 0.1916 $\pm$ 0.0008 | 0.2254 $\pm$ 0.0009 |
| 0.70 | 0.1108 $\pm$ 0.0004 | 0.1284 $\pm$ 0.0005 |

Table 2: Dependence of the standard deviation $\sigma$ of the Gaussian fits on the fraction $f$, the average connectivity $\left<k\right>$ and the noise parameter $q$. Figure 10 displays the cumulative distribution of logarithmic returns $\Phi$ for the data in $10^{5}$ MCS, from which we infer that the mean of the distributions remains zero for $\left<k\right>=6$ with $q=0.240$, and for $\left<k\right>=8$ with $q=0.275$, for several values of $f$. Figure 10: Plot of the cumulative distribution $\Phi$ of log-returns in $10^{5}$ MCS for (a) $\left<k\right>=6$ and $q=0.240$, and (b) $\left<k\right>=8$ and $q=0.275$. In the inset we analyze the details of the plot for values of $r(t)$ near zero. Figure 11: Plot of the absolute log-return distributions $F(r)$ of the closing values of several financial indices. Also shown is the absolute log-return distribution for $\left<k\right>=6$, $q=0.240$, $f=0.20$. The lines correspond to Student’s t fits. In Figure 11, we compare the distributions of the absolute log-returns of the closing values of several financial indices with the results found in our investigation. We analyze each index for roughly $7\times 10^{3}$ days and use Student’s t-distributions to fit the data, represented by the lines in the plots. In Table 3, we show the values for the degree $\nu$ and scale $\sigma$. We note that the degree of freedom primarily determines the shape of each distribution, and the comparison shows that our model behaves as expected for real-world financial markets. On the other hand, the scale parameter measures fluctuations around the mean of the distribution, and the comparison displays a disparity in the order of magnitude obtained in the fits.
Analyzing Figures 2 and 4, we observe that the daily log-returns of financial markets are one order of magnitude smaller than the log-returns obtained in our simulations, which accounts for this divergence in the fits. Hence, we observe that our model satisfactorily represents real-world financial market behavior.

| Index | $\nu$ | $\sigma$ |
|---|---|---|
| Dow Jones | 9.5 $\pm$ 0.2 | 0.01248 $\pm$ 8 $\times 10^{-5}$ |
| Ibovespa | 4.2 $\pm$ 0.3 | 0.0197 $\pm$ 6 $\times 10^{-4}$ |
| Nikkei | 15.3 $\pm$ 0.6 | 0.0202 $\pm$ 2 $\times 10^{-4}$ |
| S&P 500 | 15.6 $\pm$ 0.5 | 0.0151 $\pm$ 1 $\times 10^{-4}$ |
| Nasdaq | 6.5 $\pm$ 0.3 | 0.0180 $\pm$ 3 $\times 10^{-4}$ |

Table 3: Values for the scale $\sigma$ and degree $\nu$ obtained from Student’s t fits for several financial indices. We also perform an extensive investigation of the model for several values of the average connectivity $\left<k\right>$ and different values of the noise parameter $q$. Figure 12 displays a multi-plot table of the histograms of logarithmic returns for several ($q,f,\left<k\right>$) triplets. Columns correspond to values of $q$ below criticality, $q<q_{c}(\left<k\right>)$ (left), at criticality, $q=q_{c}(\left<k\right>)$ (center), and above criticality, $q>q_{c}(\left<k\right>)$ (right). In this work, we use the values of $q_{c}(\left<k\right>)$ obtained in previous investigations of the MVM on random graphs, where the fraction of contrarian agents $f$ is set to zero [17]. The values of noise above and below criticality are taken to be $q_{c}(1\pm 10\%)$. We observe that the distributions exhibit fat tails for lower values of $f$, especially for small values of $\left<k\right>$, i.e., $\left<k\right>=6$ and $\left<k\right>=8$, at the corresponding critical value of $q$. In contrast, such behavior is lost for higher values of the average connectivity and for values of $q$ that deviate from criticality.
This result illustrates and supports the choices of $\left<k\right>$ and $q_{c}$ used in this investigation. Figure 12: Distributions of logarithmic returns for several values of the average connectivity $\left<k\right>$ in the vicinity of $q_{c}(\left<k\right>)$ for several values of $f$: 0.20; 0.25; 0.30; 0.50; 0.70 (dark blue, purple, violet, pink, orange, yellow, respectively). Here, we use $10^{5}$ MCS. We conclude that the adoption of random graph networks in the global-vote model for financial markets has proven effective. Our model reproduces real-world market features both qualitatively and quantitatively for lower values of the average connectivity $\left<k\right>$ and near criticality, $q(\left<k\right>)=q_{c}(\left<k\right>)$. We remark that other combinations of values for $\left<k\right>$, $f$ and $q$ might yield similar results. Nevertheless, our particular choice adopts simple key ideas: a limited and small number of interacting agents, near-critical values of the noise parameter $q$, and a small number of contrarian agents in the market. Despite the model’s simplicity, it has shown its capability of characterizing the mechanisms that drive social behavior and decision making in economic systems.

## IV CONCLUSION AND FINAL REMARKS

This work proposes a generalization of the two-state global-vote model for financial markets on random networks. The global-vote model suggests that any stock market dynamics consist primarily of different agent strategies driven by economic and social interactions. In its standard version, the stock market consists of a heterogeneous population with two distinct kinds of investors: noise traders, who follow the local majority of their neighbors, and noise contrarian traders, influenced by the global minority of the system [9]. We aim to investigate the dependence of the return distribution on the average connectivity $\left<k\right>$ of the random graph network.
We relate variations of the global magnetization of the system to the daily return of a given asset. Our simulations reproduce typical qualitative and quantitative features of real-world financial time series, yielding fat-tailed distributions of returns, volatility clustering, and long-term memory in the volatility. We demonstrate that higher values of the average connectivity $\left<k\right>$ or of the noise contrarian fraction $f$, as well as values of $q$ far from criticality, may suppress this real-world behavior in our model. This investigation combines two distinct fields of research, Econophysics and Sociophysics, to better understand the mechanisms at play in the complexity of the human decision-making process, which drives the dynamics of economic systems. Consequently, it demonstrates advancements towards comprehending real-world heterogeneous complex systems, such as financial markets. Acknowledgements The authors acknowledge financial support from Brazilian and Chinese institutions and funding agents UPE, FACEPE (APQ-0565-1.05/14, APQ-0707-1.05/14), CAPES, CNPq, National Natural Science Foundation of China (72071006), the International Postdoctoral Exchange Fellowship Program (20170016) and Beijing Social Science Foundation of China (16JDGLC005). The Boston University Center for Polymer Studies is supported by NSF Grants PHY-1505000, CMMI-1125290, and CHE-1213217, by DTRA Grant HDTRA1-14-1-0017, and by DOE Contract DE-AC07-05Id14517. ## References * (1) H. E. Stanley and R. N. Mantegna, An introduction to econophysics. Cambridge University Press, Cambridge, 2000. * (2) H. E. Stanley, L. A. N. Amaral, S. V. Buldyrev, P. Gopikrishnan, V. Plerou, and M. A. Salinger, “Self-organized complexity in economics and finance,” Proceedings of the National Academy of Sciences, vol. 99, no. suppl 1, pp. 2561–2565, 2002. * (3) J. D. Farmer, P. Patelli, and I. I.
Zovko, “The predictive power of zero intelligence in financial markets,” Proceedings of the National Academy of Sciences, vol. 102, no. 6, pp. 2254–2259, 2005. * (4) F. Baldovin and A. L. Stella, “Scaling and efficiency determine the irreversible evolution of a market,” Proceedings of the National Academy of Sciences, vol. 104, no. 50, pp. 19741–19744, 2007. * (5) Y. Li, A. L. M. Vilela, and H. E. Stanley, “The institutional characteristics of multifractal spectrum of China’s stock market,” Physica A: Statistical Mechanics and its Applications, vol. 550, p. 124129, 2020. * (6) E. Bonabeau, “Agent-based modeling: Methods and techniques for simulating human systems,” Proceedings of the National Academy of Sciences, vol. 99, no. suppl 3, pp. 7280–7287, 2002. * (7) C. M. Macal and M. J. North, “Tutorial on agent-based modeling and simulation,” in Proceedings of the 2005 Winter Simulation Conference, p. 14, IEEE, 2005. * (8) C. M. Macal and M. J. North, “Agent-based modeling and simulation,” in Proceedings of the 2009 Winter Simulation Conference, pp. 86–98, IEEE, 2009. * (9) A. L. M. Vilela, C. Wang, K. P. Nelson, and H. E. Stanley, “Majority-vote model for financial markets,” Physica A: Statistical Mechanics and its Applications, vol. 515, pp. 762–770, 2019. * (10) B. J. Zubillaga, A. L. M. Vilela, C. Wang, K. P. Nelson, and H. E. Stanley, “A three-state opinion formation model for financial markets,” Physica A: Statistical Mechanics and its Applications, vol. 588, p. 126527, 2022. * (11) L. Feng, B. Li, B. Podobnik, T. Preis, and H. E. Stanley, “Linking agent-based models and stochastic models of financial markets,” Proceedings of the National Academy of Sciences, vol. 109, no. 22, pp. 8388–8393, 2012. * (12) L. Zhao, G. Yang, W. Wang, Y. Chen, J. P. Huang, H. Ohashi, and H. E. Stanley, “Herd behavior in a complex adaptive system,” Proceedings of the National Academy of Sciences, vol. 108, no. 37, pp. 15058–15063, 2011. * (13) M. J.
de Oliveira, “Isotropic majority-vote model on a square lattice,” Journal of Statistical Physics, vol. 66, no. 1-2, pp. 273–281, 1992. * (14) M. J. de Oliveira, J. F. F. Mendes, and M. A. Santos, “Nonequilibrium spin models with ising universal behaviour,” Journal of Physics A: Mathematical and General, vol. 26, no. 10, p. 2317, 1993. * (15) F. W. S. Lima, A. O. Sousa, and M. A. Sumuor, “Majority-vote on directed erdős–rényi random graphs,” Physica A: Statistical Mechanics and its Applications, vol. 387, no. 14, pp. 3503–3510, 2008. * (16) P. R. A. Campos, V. M. de Oliveira, and F. G. B. Moreira, “Small-world effects in the majority-vote model,” Physical Review E, vol. 67, no. 2, p. 026104, 2003. * (17) L. F. C. Pereira and F. G. B. Moreira, “Majority-vote model on random graphs,” Physical Review E, vol. 71, no. 1, p. 016123, 2005. * (18) M. A. Santos and S. Teixeira, “Anisotropic voter model,” Journal of statistical physics, vol. 78, no. 3-4, pp. 963–970, 1995. * (19) A. R. Vieira and N. Crokidakis, “Phase transitions in the majority-vote model with two types of noises,” Physica A: Statistical Mechanics and its Applications, vol. 450, pp. 30–36, 2016. * (20) A. L. M. Vilela and F. G. B. Moreira, “Majority-vote model with different agents,” Physica A: Statistical Mechanics and its Applications, vol. 388, p. 4171, 2009. * (21) A. L. M. Vilela and H. E. Stanley, “Effect of strong opinions on the dynamics of the majority-vote model,” Scientific reports, vol. 8, no. 1, pp. 1–8, 2018. * (22) A. L. M. Vilela, L. F. C. Pereira, L. Dias, H. E. Stanley, and L. R. da Silva, “Majority-vote model with limited visibility: An investigation into filter bubbles,” Physica A: Statistical Mechanics and its Applications, vol. 563, p. 125450, 2021. * (23) J. H. Feldhoff, S. Lange, J. Volkholz, J. F. Donges, J. K. Jürgen, and F.-W. 
Gerstengarbe, “Complex networks for climate model evaluation with application to statistical versus dynamical modeling of south american climate,” Climate Dynamics, vol. 44, no. 5-6, pp. 1567–1581, 2015. * (24) J. C. Reijneveld, S. C. Ponten, H. W. Berendse, and C. J. Stam, “The application of graph theoretical analysis to complex networks in the brain,” Clinical neurophysiology, vol. 118, no. 11, pp. 2317–2331, 2007. * (25) A.-L. Barabási, R. Albert, and H. Jeong, “Scale-free characteristics of random networks: the topology of the world-wide web,” Physica A: Statistical Mechanics and its Applications, vol. 281, no. 1-4, pp. 69–77, 2000. * (26) X. lei An, L. Zhang, Y. zhen Li, and J. gang Zhang, “Synchronization analysis of complex networks with multi-weights and its application in public traffic network,” Physica A: Statistical Mechanics and its Applications, vol. 412, pp. 149–156, 2014. * (27) P. Erdös, A. Rényi, et al., “On the evolution of random graphs,” Publ. Math. Inst. Hung. Acad. Sci, vol. 5, no. 1, pp. 17–60, 1960. * (28) B. Bollobás and A. Thomason, “Random graphs of small order,” in North-Holland Mathematics Studies, vol. 118, pp. 47–97, Elsevier, 1985. * (29) R. Cont, J.-P. Bouchaud, et al., “Herd behavior and aggregate fluctuations in financial markets,” Macroeconomic Dynamics, vol. 4, no. 2, pp. 170–196, 2000. * (30) K. Sznajd-Weron and R. Weron, “A simple model of price formation,” International Journal of Modern Physics C, vol. 13, no. 01, pp. 115–123, 2002. * (31) T. Lux and M. Marchesi, “Scaling and criticality in a stochastic multi-agent model of a financial market,” Nature, vol. 397, no. 6719, pp. 498–500, 1999. * (32) S. Bornholdt, “Expectation bubbles in a spin model of markets: Intermittency from frustration across scales,” International Journal of Modern Physics C, vol. 12, no. 05, pp. 667–674, 2001. * (33) T. Kaizoji, S. Bornholdt, and Y.
Fujiwara, “Dynamics of price and trading volume in a spin model of stock markets with heterogeneous agents,” Physica A: Statistical Mechanics and its Applications, vol. 316, no. 1-4, pp. 441–452, 2002. * (34) T. Takaishi, “Simulations of financial markets in a potts-like model,” International Journal of Modern Physics C, vol. 16, no. 08, pp. 1311–1317, 2005. * (35) B. Mandelbrot, “The Variation of Certain Speculative Prices,” The Journal of Business, vol. 36, pp. 394–394, 1963. * (36) M. Mazura, M. Dang, and M. Vega, “Covid-19 and the March 2020 stock market crash. Evidence from S&P1500,” Finance Research Letters, vol. 38, p. 101690, 2021. * (37) S. Fu, C. Liu, and X. Wei, “Contagion in global stock markets during the covid-19 crisis,” Global Challenges, vol. 5, no. 10, p. 2000130, 2021.
# The effect of disorder on phases across two-dimensional thermal melting Prashanti Jami Department of Physics, Indian Institute of Science Education and Research (IISER) Kolkata, Mohanpur - 741246, West Bengal, India Pinaki Chaudhuri The Institute of Mathematical Sciences, Taramani, Chennai 600113, India Chandan Dasgupta Department of Physics, Indian Institute of Science, Bangalore 560012, India International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bangalore 560089, India Amit Ghosal Department of Physics, Indian Institute of Science Education and Research (IISER) Kolkata, Mohanpur - 741246, West Bengal, India ###### Abstract We study melting in a two-dimensional system of classical particles with Gaussian-core interactions in disordered environments. The pure system validates the conventional two-step melting, with a hexatic phase intervening between the solid and the liquid. This picture is modified in the presence of pinning impurities. A random distribution of pinning centers forces a hexatic-like low-temperature phase that transits into a liquid at a single melting temperature $T^{\rm RP}_{\rm m}$. In contrast, pinning centers located at randomly chosen sites of a perfect crystal anchor a solid at low temperatures, which undergoes a direct transition to the liquid at $T^{\rm CP}_{\rm m}$. Thus, the two-step melting is lost in either case of disorder. We discuss the characteristics of melting depending on the nature of the impurities. Introduction — Enhanced fluctuations make two-dimensional melting a topic of immense research interest. Unlike their three-dimensional counterparts, which undergo “Lindemann melting” [1, 2], 2D systems melt via the unbinding of topological defects. The positional order (PO) and bond-orientational order (BOO) decouple in 2D, producing a “hexatic phase” sandwiched between the solid and the liquid.
Hexaticity, a rich concept, is realized in colloids [3, 4], in the vortex lattice in superconductors [5], in active Brownian disks [6], and recently in a van der Waals magnet [7]. The celebrated KTHNY theory [8, 9, 10, 11, 12] pictures 2D melting as a two-step process involving the successive unbinding of dislocations and disclinations, presented schematically in Fig. 1(a). However, the relevance of the two-step 2D melting has also been debated [13, 14, 15, 16]. Figure 1: A schematic representation of 2D melting in: (a) the KTHNY description in a pure system, (b) a system with randomly placed impurities, and (c) a system with a fraction of particles frozen at randomly chosen zero-temperature positions of the pure system, suggesting that the clean-system melting may be obscured by impurities (see text). Quenched disorder, inherent to real materials, can not only move the phase boundaries around but is also capable of modifying the mechanism of melting. For example, impurities can generate unbounded defects even at $T=0$ and thereby mask the unbinding of thermal defect pairs. This could eliminate solidity even at the lowest $T$, as suggested by Nelson [17] and portrayed schematically in Fig. 1(b). In contrast, impurities which pin a given fraction of particles on sites of the underlying perfect lattice could stabilize the solid by anchoring it via these commensurate locations, and thereby consume the phase space of hexaticity (see Fig. 1(c)). The role of disorder in destabilizing the hexatic phase [18] and in enhancing long-range correlations [19] has also been pointed out. Thus, a careful analysis of 2D melting in disordered media can potentially uncover new paradigms. Experiments on colloids [20], vortex lattices [21, 22] and multicomponent mixtures [23] indicate a broadened stability of the hexatic phase in the presence of disorder, consistent with recent calculations [24, 25, 26, 27, 28, 32]. The study of 2D melting in confined geometries [29, 30, 31], which mimic a disordered background, is also popular.
Zeng et al. [33] have argued that a solid (“Bragg glass”) phase with power-law decay of translational correlations cannot occur in a 2D system with impurities. Pronounced hexatic correlations are expected to be present [34, 35] if the disorder is not strong, though there are controversies [36, 37, 38] about the existence of a hexatic glass phase with long- or quasi-long-range hexatic order in 2D. In contrast, the phase boundaries of the two-step melting are found to be insensitive to quenched defects on a spherical surface [39]. In this letter, we investigate the phases across melting of a bulk 2D system of soft-core particles, modeled via Gaussian-core interactions [40], which is known to validate the KTHNY melting scenario in a pure system [41]. Addressing the role of quenched disorder in the phase behaviour of this model, our key results are summarized as follows: (i) Random pinning (RP) destabilizes solidity, causing a single transition from a low-$T$ hexatic-like phase to a high-$T$ liquid. Here, the low-$T$ phase undergoes a likely crossover from hexatic glass to hexatic liquid. (ii) On the other hand, commensurate pinning (CP) anchors solidity and engulfs hexaticity; even the high-$T$ liquid phase supports inhomogeneous pockets of crystallinity. The defect locations correlate oppositely with pinning centers in the two models of disorder: defects tend to bind with the pinning centers in RP-systems, whereas they stay away from the impurities in CP-systems. Thus, in either realization of the quenched disorder, the two-step melting is lost. Model & method: — We introduce disorder in two different ways: (a) Random pinning (RP), in which we freeze a given fraction ($n_{\rm imp}$) of particles, chosen randomly in space, within a high-$T$ liquid configuration. Here, these immobile particles act as disorder.
(b) Commensurate pinning (CP), where a fraction $n_{\rm imp}$ of particles are frozen at randomly chosen positions of an ideal triangular lattice – the ground-state configuration of the pure system. Note that CP represents correlated disorder with the long-range positional correlation of a perfect lattice. In contrast, RP constitutes nearly uncorrelated disorder, though the weak short-range correlation of a high-$T$ liquid may persist. We investigated a system of $N=4356$ particles, with $n_{\rm imp}=3.5\%$. These results were compared with those from a pure system with $N=4096$ particles. For these systems, we sample configurations via molecular dynamics [42] within the canonical ensemble, using LAMMPS [43]. We consider a simulation box with dimensions $L_{x}=\frac{2}{\sqrt{3}}L_{y}$ and periodic boundary conditions. $L_{x}$ is adjusted to keep the particle density $\rho$ fixed for all our studies ($\rho=0.628$). We carried out $2\times 10^{7}$ MD steps with a time step $\delta t=0.005$. We use dimensionless parameters: $t^{\prime}=t\sqrt{\epsilon/m\sigma^{2}}$ and $E^{\prime}=E/\epsilon$, where $m$ is the mass of each particle. $T$ is expressed in terms of $\Gamma^{-1}$ [44], where $\Gamma=\epsilon\exp(-\sqrt{3}/2\rho)/K_{B}T$. The physical observables are averaged over $8$-$10$ independent pinning configurations for a given $n_{\rm imp}$. Positional and bond orientational order: — A pure 2D solid is characterized by two kinds of ordering: (i) PO, measured by $\psi_{\rm T}=\frac{1}{N}\left\langle|\Psi_{\rm T}|\right\rangle$, where $\Psi_{\rm T}=\sum_{i=1}^{N}\exp(i{\bf G}.{\bf r}_{i})$. Here $\bf{G}$ is a first-shell reciprocal-lattice vector of the underlying triangular crystal and ${\bf r}_{i}$ is the position of particle $i$; and (ii) BOO, quantified by $\psi_{\rm 6}=\frac{1}{N}\left\langle|\sum_{k=1}^{N}\Psi_{\rm 6}(r_{k})|\right\rangle$, where $\Psi_{\rm 6}=\frac{1}{N_{b}(k)}\sum_{l=1}^{N_{b}(k)}\exp{(i6\theta_{kl})}$.
The sum is over the $N_{b}(k)$ nearest neighbors of particle $k$, identified by a Voronoi construction [45], and $\theta_{kl}$ is the angle that a line joining particle $k$ and particle $l$ makes with a reference axis. KTHNY theory predicts two critical temperatures, $\Gamma^{-1}_{\rm SH}$ and $\Gamma^{-1}_{\rm HL}$, for the thermal depletion of quasi-long-range PO (solid to hexatic) and BOO (hexatic to liquid), respectively, leaving a hexatic phase with quasi-long-range BOO between the solid and isotropic liquid phases. In Fig. 2(a,b), we plot the thermal evolution of $\psi_{\rm T}$ and $\psi_{\rm 6}$ for pure, RP- and CP-systems. While the pure system follows KTHNY melting (the three phases are clearly identified by the snapshots at three representative $T$'s in the SM, Fig. S1(a-c)) with $\Gamma^{-1}_{\rm SH}=0.0140$ and $\Gamma^{-1}_{\rm HL}=0.0162$, $\psi_{\rm T}$ in the CP-system is found to survive to larger $T$. The RP-system shows a much weaker $\psi_{\rm T}$ than the other two, even at the lowest $T$, which depletes very gradually with $T$ without any threshold behavior. A threshold behavior near $\Gamma^{-1}_{\rm HL}$ is also seen in the pinned systems in Fig. 2(b), albeit with broader transitions. Unlike $\psi_{\rm T}$, $\psi_{\rm 6}$ is comparable at low $T$ in the pure, CP- and RP-systems. We also note that $\psi_{\rm T}$ and $\psi_{\rm 6}$ show a significant drop at the same critical temperature in a CP-system, implying a direct transition from solid to liquid, which we discuss further below. In addition, the fluctuations of $\psi_{\rm T}$ and $\psi_{\rm 6}$ define generalised susceptibilities $\chi_{\alpha}=\frac{1}{N}\left[\langle|\Psi_{\alpha}|^{2}\rangle-\langle|\Psi_{\alpha}|\rangle^{2}\right]$ (with $\alpha=T$ or $6$), and help to identify $\Gamma^{-1}_{\rm SH}$ and $\Gamma^{-1}_{\rm HL}$, as shown in Fig. 2(c,d). Their behavior confirms that the pure system shows sharp transitions.
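The order parameter $\Psi_6$ and the susceptibility $\chi_\alpha$ defined above can be sketched as follows. The Voronoi neighbor search is assumed to be done elsewhere; the functions take the resulting bond angles as input, and all names are illustrative.

```python
import cmath
import math

def psi6_local(bond_angles):
    """Psi_6(k) = (1/N_b) sum_l exp(i 6 theta_kl) for one particle."""
    return sum(cmath.exp(6j * th) for th in bond_angles) / len(bond_angles)

def psi6_global(angles_per_particle):
    """psi_6 = |sum_k Psi_6(k)| / N for a single configuration."""
    total = sum(psi6_local(a) for a in angles_per_particle)
    return abs(total) / len(angles_per_particle)

def susceptibility(abs_psi_samples, n):
    """chi_alpha = (<|Psi|^2> - <|Psi|>^2) / N over configuration samples
    of the extensive |Psi_alpha|."""
    m = len(abs_psi_samples)
    m1 = sum(abs_psi_samples) / m
    m2 = sum(x * x for x in abs_psi_samples) / m
    return (m2 - m1 * m1) / n
```

A particle with a perfect sixfold coordination (bond angles at multiples of $\pi/3$) gives $|\Psi_6(k)|=1$, while random bond angles average it towards zero, which is why the peak of $\chi_6$ locates the orientational transition.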
Consistent with our finding in panel (a), $\chi_{\rm T}$ in the RP-system features only a broad and low hump, hinting that the low-$T$ phase in such a system represents a broad crossover between a hexatic glass [34] and a hexatic liquid [47]. This is also consistent with the trajectory picture of the RP-system at $\Gamma^{-1}=0.0028$ in the supplementary material (SM) [48], Fig. S1(d). Consistent with our findings in panels (a,b), the locations of the peaks of $\chi_{\rm T}$ and $\chi_{\rm 6}$ verify that PO and BOO in the CP-system vanish at a single $\Gamma^{-1}_{\rm CP}$. While our results from Fig. 2 seem to support the schematic phase diagram of Fig. 1, we emphasize that the ‘impure’ phases at low and high $T$ defy conventional wisdom. These include the presence of unbound defects even at $T=0$ in RP-systems, and pockets of crystallinity deep into the liquid phase in CP-systems, as seen from Fig. S1(g-i) in the SM [48] and discussed below. Figure 2: Positional and orientational ordering tendencies are shown as a function of $\Gamma^{-1}$ for pure (green circles), RP (red triangles) and CP (blue squares) systems. (a) Decay of PO ($\psi_{\rm T}$) with $\Gamma^{-1}$ (the inset shows $\psi_{\rm T}$ for $\Gamma^{-1}\rightarrow 0$). (b) The softening of the BOO ($\psi_{\rm 6}$). Here $\psi_{\rm 6}(\Gamma^{-1}\rightarrow 0)$ is shown in the inset. Panels (c) and (d) show the corresponding susceptibilities $\chi_{\rm T}$ and $\chi_{\rm 6}$.
The locations of the peaks in $\chi_{\rm T}$ and $\chi_{\rm 6}$ identify the transitions. We find $\Gamma^{-1}_{\rm RP}=0.0145$ from $\chi_{\rm 6}$. Interestingly, the peaks in $\chi_{\rm T}$ and $\chi_{\rm 6}$ for CP-systems appear at the same $\Gamma^{-1}_{\rm CP}=0.0187$. Figure 3: The distribution of distances between dislocation pairs, $P(r_{\rm dp})$, in different phases. Panels (a)-(c) show results for pure, RP- and CP-systems, respectively, at low $T$. $a_{0}$ is the average inter-particle distance. $P(r_{\rm dp})$ is sharply peaked at the typical distance between dislocation pairs, though there are differences in details. The distribution has a short tail for the pure system, a very long tail for RP-systems, and essentially no tail for CP-systems. Panel (d) displays $\Delta P(r_{\rm dp})$ taken in the pure system between $T$'s just above and below $\Gamma^{-1}_{\rm SH}$. The inset shows the corresponding $P(r_{\rm dp})$. The largest zero-crossing distance of $\Delta P(r_{\rm dp})$ is taken as $r^{c}_{\rm dp}$ and is marked by an arrow in panel (a). Panels (e) and (f) show the spatial density of unbound defects just prior to melting into a liquid for a given realization of the RP- and CP-system. The pinning centers are marked by magenta dots. Figure 4: Evolution of defects: Panels (a)-(c) represent the variation of defects with $\Gamma^{-1}$ (green: pure, red: RP, and blue: CP-system). (a) The evolution of unbound dislocations (squares) and disclinations (circles) validates KTHNY melting in pure systems (traces of disclinations are multiplied by $10$ for visual clarity). (b) The presence of significant unbound dislocations in RP-systems at low $\Gamma^{-1}$ prohibits solidity. In CP-systems, shown in panel (c), the proliferation of unbound defects (both dislocations and disclinations) commences at a single threshold ($\Gamma^{-1}=0.0198$), implying a direct solid-liquid transition.
Panel (d) shows the comparative number of defects in RP-systems with reference to a pure system, illustrating how the numbers of bound (B) and unbound (UB) defects grow with increasing temperature. Defects analysis: — In KTHNY theory, a 2D solid transits to the hexatic phase by the unbinding of paired dislocations [10, 11]. We proceed to examine the consistency of this picture with our findings in Fig. 2. This requires an estimation of the critical distance between two dislocations (with equal and opposite Burgers vectors) below which they are bound. We first employ the Hungarian algorithm [49], which chooses the ‘correct’ partners of dislocations by minimizing the sum of the distances between all partners [50]. The distribution of the resulting pair distances, $P(r_{\rm dp})$, at low $T$ is presented in Fig. 3(a-c). $P(r_{\rm dp})$ is sharply peaked for pure systems in Fig. 3(a), with insignificant weight at larger $r_{\rm dp}$. In contrast, for the RP-system its long tail at low $T$ (shown for $\Gamma^{-1}=0.0056$ in Fig. 3(b)) arises from unbound dislocations, which destabilize a true solid even for $T\rightarrow 0$. $P(r_{\rm dp})$ in CP-systems (Fig. 3(c)) consists of the sharp initial peak with nearly no weight at larger $r_{\rm dp}$. A discernible tail in $P(r_{\rm dp})$ for pure systems develops when dislocation pairs start unbinding. An integrated distribution of $P(r_{\rm dp})$ features a threshold behavior at this transition (see SM [48]). To obtain the critical $r_{\rm dp}$ for the pure system, we plot in Fig. 3(d) the difference of these distributions, $\Delta P(r_{\rm dp})$, at temperatures just above and below $\Gamma^{-1}_{\rm SH}$, while the corresponding $P(r_{\rm dp})$-s are shown as the inset. The total positive and negative weights of $\Delta P(r_{\rm dp})$ cancel out, and $r^{c}_{\rm dp}\approx 2.15a_{0}$ is identified as the last zero-crossing point. This identification is found to be robust for $T$'s near $\Gamma^{-1}_{\rm SH}$.
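The pairing step above can be sketched with `scipy.optimize.linear_sum_assignment`, an implementation of the Hungarian algorithm (an illustrative sketch; the function name and the pre-separated `plus`/`minus` arrays of dislocation positions are our assumptions):

```python
# Sketch: match dislocations of opposite Burgers vector so that the sum
# of pair distances is minimal, then read off the separations r_dp.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def pair_dislocations(plus, minus):
    """plus, minus: (n, 2) positions of dislocations with opposite Burgers vectors."""
    cost = cdist(plus, minus)                 # all pairwise distances
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return np.sort(cost[rows, cols])          # matched pair distances r_dp
```

Histogramming the returned distances yields $P(r_{\rm dp})$; pairs with $r_{\rm dp}>r^{c}_{\rm dp}\approx 2.15a_{0}$ are then counted as unbound.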
A similar analysis of disclination-pair distances yielded a comparable critical distance between disclination pairs. Once extracted for the pure system, these critical distances were used for analyzing the pinned systems. Subsequently, we explored the thermal evolution of the defects and their unbinding in Fig. 4(a-c). At the lowest $T$, defects are essentially absent in the pure system. Unbound disclinations proliferate at $\Gamma^{-1}\approx 0.0162$, whereas dislocations unbind at $\Gamma^{-1}\approx 0.0145$, with a hexatic phase at intervening temperatures [41], consistent with Fig. 2. The CP-system (Fig. 4(c)) behaves like a ‘better’ solid at low $T$ due to the absence of any free defects up to $\Gamma^{-1}=0.0182$, beyond which unbound dislocations and disclinations start proliferating at the same $\Gamma^{-1}_{\rm CP}$. There is a significant number of impurity-induced unbound dislocations in the RP-system for $T\rightarrow 0$, as also concluded from Fig. 2. Here, unpaired dislocations are not only present at all $T$, they even outnumber bound dislocations at low $T$. Fig. 4(d) addresses the effect of the impurity-induced free defects (at $T\rightarrow 0$) in RP-systems on the thermal defects, whose unbinding drives the two transitions in a pure system. The numbers of bound (B) and unbound (UB) defects, with the corresponding numbers for an equivalent pure system subtracted, are examined separately in Fig. 4(d). These numbers increase sharply with $T$ until the system transits to the liquid. Thus, for $\Gamma^{-1}<\Gamma^{-1}_{\rm HL}$, the impurity-induced defects promote additional thermal defects beyond those in pure systems. Such a rise disappears in the liquid. In fact, the difference in bound defects in the liquid drops to an even lower value than the corresponding number at $T\rightarrow 0$.
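The defect census underlying Fig. 4 starts from coordination numbers: in a triangular solid every particle has six neighbors, a 5- or 7-coordinated particle is a disclination, and a tightly bound 5-7 pair forms a dislocation. A minimal sketch of the identification step (illustrative only; periodic-image handling of boundary particles is omitted):

```python
# Sketch: flag 5- and 7-coordinated particles from a Delaunay triangulation,
# the planar dual of the Voronoi construction.
import numpy as np
from scipy.spatial import Delaunay

def coordination_defects(pos):
    """Return indices of 5-fold and 7-fold coordinated particles."""
    tri = Delaunay(pos)
    indptr, _ = tri.vertex_neighbor_vertices
    z = np.diff(indptr)          # coordination number of each particle
    return np.flatnonzero(z == 5), np.flatnonzero(z == 7)
```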
Figure 5: Correlations: Panels (a), (b) show the orientational correlation function, $g_{6}(r)$, of RP- and CP-systems, respectively, for the values of $\Gamma^{-1}$ labelled in each case. In panel (a), power-law decay of the correlations is visible (for $\Gamma^{-1}<\Gamma^{-1}_{\rm RP}$) down to a very low $T$ in the RP-system; e.g. we find $g_{6}(r,\Gamma^{-1}=0.0112)\sim(r/a_{0})^{-0.14}$, $g_{6}(r,\Gamma^{-1}=0.0140)\sim(r/a_{0})^{-0.2}$, $g_{6}(r,\Gamma^{-1}=0.0154)\sim(r/a_{0})^{-0.64}$, while $g_{6}(r,\Gamma^{-1}=0.0162)\sim\exp[-0.16(r/a_{0})]$, as shown via dashed lines. In contrast, in the CP-system, as shown in (b), the correlations get modified. Even in the liquid phase, the conventional exponential decay flattens out at large $r$, implying a “remnant crystallinity” (see text). Correlations: — Finally, we discuss the orientational correlations as measured by the correlation function $g_{6}(r)=\langle\Psi_{6}({\bf r}_{i})\Psi_{\rm 6}^{*}({\bf r}_{j})\rangle$, where $r=\lvert{\bf r}_{i}-{\bf r}_{j}\rvert$. The $T$-dependence of orientational correlations in the pure system follows the KTHNY scenario [10, 11, 12], as claimed earlier [41]. The evolution of $g_{6}(r)$ for various $T$ is shown in Fig. 5 for RP- and CP-systems. Our $\chi^{2}$-minimization analysis [51] of the large-$r$ decay of $g_{6}(r)$ in the RP-system (Fig. 5(a)) identified a power-law behavior for nearly the entire low-$T$ phase. This power-law behavior continues until an exponential decay sets in for $\Gamma^{-1}\geq 0.0162$, signaling the onset of liquidity. Intriguingly, $g_{6}(r)$ in CP-systems shows the enhanced solidity for $\Gamma^{-1}\leq 0.0187$, where its traces remain largely flat. Beyond the direct melting from solid to liquid for $\Gamma^{-1}>0.0190$, $g_{6}(r)$ in CP-systems displays a tendency of plateauing at large $r$, though it decays at intermediate $r$.
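An estimator for $g_{6}(r)$ of the kind used in such analyses can be sketched by binning the products $\Psi_{6}({\bf r}_{i})\Psi_{6}^{*}({\bf r}_{j})$ over pair separations (illustrative, not the authors' code; the minimum-image convention for periodic boxes is omitted for brevity):

```python
# Sketch: bond-orientational correlation g6(r) from per-particle Psi_6 values.
import numpy as np

def g6_of_r(pos, psi6, r_max, nbins=100):
    """pos: (N, 2) positions; psi6: (N,) complex per-particle Psi_6 values."""
    N = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    corr = (psi6[:, None] * np.conj(psi6)[None, :]).real
    iu = np.triu_indices(N, k=1)              # distinct pairs only
    r, c = dist[iu], corr[iu]
    bins = np.linspace(0.0, r_max, nbins + 1)
    idx = np.digitize(r, bins) - 1
    ok = (idx >= 0) & (idx < nbins)
    num = np.bincount(idx[ok], weights=c[ok], minlength=nbins)
    cnt = np.bincount(idx[ok], minlength=nbins)
    return 0.5 * (bins[1:] + bins[:-1]), num / np.maximum(cnt, 1)
```

A flat large-$r$ trace of the binned average signals solid-like BOO, a power law the hexatic, and an exponential decay the liquid.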
This is a signature of ‘remnant solidity’ arising from local crystalline pockets surrounding the impurities, whose locations are commensurate with the perfect crystal and hence anchor crystallinity in their vicinity (see SM [48]). This is a direct consequence of the correlated nature of the CP-impurities. Conclusion: — To summarize, we demonstrate that the conventional picture of 2D melting undergoes significant changes in the presence of impurities. While RP-disorder destabilizes solidity and CP-disorder removes the hexatic phase, the low-$T$ phase in RP-systems is not the conventional hexatic. Similarly, the high-$T$ phase in CP-systems mixes remnant solidity with the liquid phase. The inhomogeneous melting (Fig. S1 in SM [48]) generates defects which correlate differently with the pinning centers: for RP-systems the defects tend to bind with the pinning centers, whereas the defects stay away from the impurities in CP-systems. While defects are found essential for driving the melting, our MD configurations indicate that they often bunch up in various shapes of macroscopic size (see videos in SM [48]). An extension of our study to larger systems, exploring the possible role of grain boundaries in melting, is a promising future direction. It will also be interesting to inspect the role of quantum fluctuations in these thermal phases. We hope that our findings will motivate future experiments shedding new light on melting in disordered 2D systems. ## References * Lindemann [1910] F. Lindemann, The calculation of molecular vibration frequencies, Phys. Z. 11, 609 (1910). * Lozovik [1987] Y. E. Lozovik, Ion and electron clusters, Soviet Physics Uspekhi 30, 912 (1987). * Zahn _et al._ [1999] K. Zahn, R. Lenke, and G. Maret, Two-stage melting of paramagnetic colloidal crystals in two dimensions, Phys. Rev. Lett. 82, 2721 (1999). * Gasser _et al._ [2010] U. Gasser, C. Eisenmann, G. Maret, and P. Keim, Melting of crystals in two dimensions, ChemPhysChem 11, 963 (2010). * Guillamón _et al._ [2009] I. Guillamón, H.
Suderow, A. Fernández-Pacheco, J. Sesé, R. Córdoba, J. M. De Teresa, M. R. Ibarra, and S. Vieira, Direct observation of melting in a two-dimensional superconducting vortex lattice, Nature Physics 5, 651 (2009). * Digregorio _et al._ [2018] P. Digregorio, D. Levis, A. Suma, L. F. Cugliandolo, G. Gonnella, and I. Pagonabarraga, Full phase diagram of active brownian disks: From melting to motility-induced phase separation, Phys. Rev. Lett. 121, 098003 (2018). * Meisenheimer _et al._ [2023] P. Meisenheimer, H. Zhang, D. Raftrey, X. Chen, Y.-T. Shao, Y.-T. Chan, R. Yalisove, R. Chen, J. Yao, M. C. Scott, W. Wu, D. A. Muller, P. Fischer, R. J. Birgeneau, and R. Ramesh, Ordering of room-temperature magnetic skyrmions in a polar van der waals magnet, Nature Communications 14, 3744 (2023). * Kosterlitz and Thouless [1972] J. M. Kosterlitz and D. J. Thouless, Long range order and metastability in two dimensional solids and superfluids. (application of dislocation theory), Journal of Physics C: Solid State Physics 5, L124 (1972). * Kosterlitz and Thouless [1973] J. M. Kosterlitz and D. J. Thouless, Ordering, metastability and phase transitions in two-dimensional systems, J. Phys. C 6, 1181 (1973). * Halperin and Nelson [1978] B. I. Halperin and D. R. Nelson, Theory of two-dimensional melting, Phys. Rev. Lett. 41, 121 (1978). * Nelson and Halperin [1979] D. R. Nelson and B. I. Halperin, Dislocation-mediated melting in two dimensions, Phys. Rev. B 19, 2457 (1979). * Young [1979] A. P. Young, Melting and the vector coulomb gas in two dimensions, Phys. Rev. B 19, 1855 (1979). * Kapfer and Krauth [2015] S. C. Kapfer and W. Krauth, Two-dimensional melting: From liquid-hexatic coexistence to continuous transitions, Phys. Rev. Lett. 114, 035702 (2015). * Chui [1983] S. T. Chui, Grain-boundary theory of melting in two dimensions, Phys. Rev. B 28, 178 (1983). * Qi _et al._ [2014] W. Qi, A. P. Gantapara, and M. 
Dijkstra, Two-stage melting induced by dislocations and grain boundaries in monolayers of hard spheres, Soft Matter 10, 5449 (2014). * Mazars [2015] M. Mazars, The melting of the classical two-dimensional wigner crystal, Europhysics Letters 110, 26003 (2015). * Nelson [1983] D. R. Nelson, Reentrant melting in solid films with quenched random impurities, Phys. Rev. B 27, 2902 (1983). * Qi and Dijkstra [2015] W. Qi and M. Dijkstra, Destabilisation of the hexatic phase in systems of hard disks by quenched disorder due to pinning on a lattice, Soft Matter 11, 2852 (2015). * Guillamón _et al._ [2014] I. Guillamón, R. Córdoba, J. Sesé, J. M. De Teresa, M. R. Ibarra, S. Vieira, and H. Suderow, Enhancement of long-range correlations in a 2d vortex lattice by an incommensurate 1d disorder potential, Nature Physics 10, 851 (2014). * Deutschländer _et al._ [2013] S. Deutschländer, T. Horn, H. Löwen, G. Maret, and P. Keim, Two-dimensional melting under quenched disorder, Phys. Rev. Lett. 111, 098301 (2013). * Ganguli _et al._ [2016] S. C. Ganguli, H. Singh, I. Roy, V. Bagwe, D. Bala, A. Thamizhavel, and P. Raychaudhuri, Disorder-induced two-step melting of vortex matter in co-intercalated $\mathrm{NbS}{\mathrm{e}}_{2}$ single crystals, Phys. Rev. B 93, 144503 (2016). * Duhan _et al._ [2023] R. Duhan, S. Sengupta, R. Tomar, S. Basistha, V. Bagwe, C. Dasgupta, and P. Raychaudhuri, Structure and dynamics of a pinned vortex liquid in superconducting $a$-rexzr (x $\approx$ 6) thin film (2023), arXiv:2304.10926 [cond-mat.supr-con] . * Li _et al._ [2023] Y.-W. Li, Y. Yao, and M. P. Ciamarra, Two-dimensional melting of two- and three-component mixtures, Phys. Rev. Lett. 130, 258202 (2023). * Tsiok _et al._ [2021] E. N. Tsiok, Y. D. Fomin, E. A. Gaiduk, and V. N. Ryzhov, Structural transition in two-dimensional hertzian spheres in the presence of random pinning, Phys. Rev. E 103, 062612 (2021). * Gaiduk _et al._ [2019] E. A. Gaiduk, Y. Fomin, E. N. Tsiok, and V. N. 
Ryzhov, The influence of random pinning on the melting scenario of two-dimensional soft-disk systems, Molecular Physics 117, 2910 (2019), https://doi.org/10.1080/00268976.2019.1607917 . * Shankaraiah _et al._ [2020] N. Shankaraiah, S. Sengupta, and G. I. Menon, Disorder-induced enhancement of local hexatic correlations in two-dimensional fluids, Journal of Physics: Condensed Matter 32, 184003 (2020). * Arjun H and Chaudhuri [2020] Arjun H and P. Chaudhuri, Dense hard disk ordering: influence of bidispersity and quenched disorder, Journal of Physics: Condensed Matter 32, 414001 (2020). * Tsiok _et al._ [2015] E. N. Tsiok, D. E. Dudalov, Y. D. Fomin, and V. N. Ryzhov, Random pinning changes the melting scenario of a two-dimensional core-softened potential system, Phys. Rev. E 92, 032110 (2015). * Melzer _et al._ [2012] A. Melzer, A. Schella, T. Miksch, J. Schablinkski, D. Block, A. Piel, H. Thomsen, H. Kählert, and M. Bonitz, Phase transitions of finite dust clusters in dusty plasmas, Contributions to Plasma Physics 52, 795 (2012), https://onlinelibrary.wiley.com/doi/pdf/10.1002/ctpp.201200028 . * Ash _et al._ [2017] B. Ash, J. Chakrabarti, and A. Ghosal, Static and dynamic properties of two-dimensional coulomb clusters, Phys. Rev. E 96, 042105 (2017). * Ash _et al._ [2018] B. Ash, C. Dasgupta, and A. Ghosal, Analysis of vibrational normal modes for coulomb clusters, Phys. Rev. E 98, 042134 (2018). * Cha and Fertig [1995] M.-C. Cha and H. A. Fertig, Disorder-induced phase transitions in two-dimensional crystals, Phys. Rev. Lett. 74, 4867 (1995). * Zeng _et al._ [1999] C. Zeng, P. L. Leath, and D. S. Fisher, Absence of two-dimensional bragg glasses, Phys. Rev. Lett. 82, 1935 (1999). * Chudnovsky [1989] E. M. Chudnovsky, Hexatic vortex glass in disordered superconductors, Phys. Rev. B 40, 11355 (1989). * Chudnovsky [1991a] E. M. Chudnovsky, Orientational and positional order in flux lattices of type-ii superconductors, Phys. Rev. B 43, 7831 (1991a). * Toner [1991a] J. 
Toner, Orientational order in disordered superconductors, Phys. Rev. Lett. 66, 2523 (1991a). * Chudnovsky [1991b] E. M. Chudnovsky, Comment on “orientational order in disordered superconductors.”, Phys. Rev. Lett. 67, 1809 (1991b). * Toner [1991b] J. Toner, Toner replies, Phys. Rev. Lett. 67, 1810 (1991b). * Singh _et al._ [2022] N. Singh, A. Sood, and R. Ganapathy, Observation of two-step melting on a sphere, Proceedings of the National Academy of Sciences 119, e2206470119 (2022). * Stillinger [2008] F. H. Stillinger, Phase transitions in the Gaussian core system, The Journal of Chemical Physics 65, 3968 (2008), https://pubs.aip.org/aip/jcp/article-pdf/65/10/3968/11360037/3968_1_online.pdf . * Prestipino _et al._ [2011] S. Prestipino, F. Saija, and P. V. Giaquinta, Hexatic phase in the two-dimensional gaussian-core model, Phys. Rev. Lett. 106, 235701 (2011). * Frenkel and Smit [2001] D. Frenkel and B. Smit, _Understanding molecular simulation: from algorithms to applications_ , Vol. 1 (Elsevier, 2001). * Plimpton [1995] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, Journal of Computational Physics 117, 1 (1995). * Gann _et al._ [1979] R. C. Gann, S. Chakravarty, and G. V. Chester, Monte carlo simulation of the classical two-dimensional one-component plasma, Phys. Rev. B 20, 326 (1979). * Tipper [1991] J. C. Tipper, Fortran programs to construct the planar voronoi diagram, Computers & Geosciences 17, 597 (1991). * Note [1] The three phases are clearly identified by the snapshots at three representatives $T$s in the SM, Fig. S1(a-c). * Note [2] The low-$T$ phase support significant hexatic order $\psi_{\rm 6}$ for our model parameters, while $\psi_{\rm T}$ and the snapshots establish the amorphous and glassy nature. The $\Gamma^{-1}$-dependence of $\psi_{\rm 6}$ and $\psi_{\rm T}$ is indicative of a crossover from a hexatic-glass to a hexatic-liquid, before the RP-system transits to a liquid at ${}^{\rm RP}\Gamma^{-1}_{\rm m}$. 
However, the resolution of our simulation is inadequate for drawing a firm conclusion. * [48] The URL for SM will appear here. * Kuhn [1955] H. W. Kuhn, The hungarian method for the assignment problem, Naval research logistics quarterly 2, 83 (1955). * Lau and Dasgupta [1989] M.-h. Lau and C. Dasgupta, Numerical investigation of the role of topological defects in the three-dimensional heisenberg transition, Phys. Rev. B 39, 7212 (1989). * Levenberg [1944] K. Levenberg, A method for the solution of certain non-linear problems in least squares, Quarterly of applied mathematics 2, 164 (1944). Supplementary material for ‘The effect of disorder on phases across two-dimensional thermal melting’ In order to support the key conclusions reported in the main manuscript, we include additional results below in this supplementary material (SM). ## I Model and Methods For our study, we consider the Gaussian-core model in a 2D system with periodic boundary conditions, in which particles interact via the potential $V(r)=\epsilon\exp(-r^{2}/\sigma^{2})$, with $\epsilon>0$; $\epsilon$ and $\sigma$ set the energy and length scales, respectively. We carry out molecular dynamics (MD) simulations of this model system, using LAMMPS, starting at a high temperature and cooling the system down to the desired temperature, as detailed in the main text. We use $10^{5}$ MD steps ($t=5000$) with a sampling time window of $t=0.1$, which are recorded after $10^{7}$ sweeps of equilibration runs. The desired temperatures are maintained via the Berendsen thermostat. To ensure correct equilibration before taking statistics, we looked into the distribution of velocities of the particles, which assumes the form of a Maxwell-Boltzmann distribution in thermal equilibrium, as well as the independence of the temporal correlations of the observables of the time origin, i.e.
$\zeta(t_{1},t_{2})=\zeta(t_{1}-t_{2})$, where $\zeta$ is any temporal correlation defined between two time points $t_{1}$ and $t_{2}$. Figure S1: Trajectories: The trajectories of the particles studied at different temperatures ($\Gamma^{-1}$) for all three systems (the pinning centers marked in red) for $\Delta t=5000$. Panels (a-c) represent the trajectories of particles in the clean system. For the lowest $\Gamma^{-1}$ (panel a), the particles remain firmly tied to the equilibrium positions of a triangular lattice, reflecting the solid phase. In the hexatic phase at $\Gamma^{-1}=0.0145$ (panel b), as per Fig. 2 of the main text, particles tend to move preferentially along the principal directions. In the liquid phase (panel c), particles execute isotropic motion, covering the entire space. The RP-system at low $T$ (panel d) features distorted lattice lines and a resulting irregular array of particles. Together with the results from Fig. 2 of the main text, the trajectories of particles indicate that the lowest-$T$ phase of the RP-system constitutes a hexatic glass. Panel (e) depicts the trajectory plot near ${}^{\rm RP}\Gamma^{-1}_{\rm m}$ and features incipient melting, resulting in motional signatures that are rather inhomogeneous in space. Finally, the liquid phase in the RP-system in panel (f) displays particles' motion everywhere except for a small region surrounding the impurities. The motion in the CP-system looks similar to that in the clean system in the solid phase (panel g). Close to ${}^{\rm CP}\Gamma^{-1}_{\rm m}$ in panel (h), the dynamics shows inhomogeneity, but unlike in the RP-system the mobile particles tend to stay away from the pinning centers, so the regions rich in pinning centers resist melting. While most particles delocalize significantly in the high-$T$ phase of the CP-system (panel i), local patches of solid-like regions still survive, as seen from the trajectory plot.
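The equilibration check of Sec. I can be sketched as follows: in 2D the equilibrium speed distribution is a Rayleigh (Maxwell-Boltzmann) distribution with scale $\sqrt{k_{B}T/m}$, so a Kolmogorov-Smirnov comparison flags insufficiently equilibrated runs (an illustrative sketch, not the authors' procedure; the function name is ours):

```python
# Sketch: test sampled velocities against the 2D Maxwell-Boltzmann
# (Rayleigh) speed distribution at temperature kT.
import numpy as np
from scipy import stats

def mb_pvalue(velocities, kT, m=1.0):
    """velocities: (N, 2) array; returns the Kolmogorov-Smirnov p-value."""
    speeds = np.linalg.norm(velocities, axis=1)
    return stats.kstest(speeds, stats.rayleigh(scale=np.sqrt(kT / m)).cdf).pvalue
```

A p-value well above a chosen threshold (say $0.01$) is consistent with equilibration, while a mismatched temperature drives the p-value to zero.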
## II Snapshots of particle trajectories Here we plot the trajectories of particles during their equilibrium dynamics for the clean, RP- and CP-systems. Such motional signatures are displayed in Fig. S1 for three representative temperatures: one at low $T$ in the solid phase (likely a hexatic glass in the case of the RP-system), one at an intermediate temperature just before the system melts into an isotropic liquid (in the hexatic phase in the case of the clean system), and finally one at a high temperature (deep into the liquid state). The following points are worth mentioning: The results for the clean system in Fig. S1(a-c) are consistent with the KTHNY picture of 2D-melting in a pure system. Note that the particles in the hexatic phase (Fig. S1(b)) preferentially move along the three principal directions of the underlying triangular lattice of the solid. The lowest-temperature phase for the RP-system in Fig. S1(d) represents a hexatic glass (or an amorphous solid), in which the pinned impurities are marked in red, with no apparent positional order. The snapshot at $\Gamma^{-1}=0.0140$ for the RP-system in Fig. S1(e) appears inhomogeneous, where the mobile particles close to pinned particles delocalize more. Other particles tend to carry the motional signature of the hexatic phase in a clean system. The snapshot in Fig. S1(f) for the RP-system at $\Gamma^{-1}=0.0210$ indicates that the delocalization of mobile particles has nearly engulfed the whole space, except for small regions surrounding the repulsive pinning centers. The snapshots of the CP-system in Fig. S1(g-i) support the notion of enhanced solidity in the following manner: the solidity is nearly perfect at low $T$ (stronger here than in the clean system) – the impurities, pinned at commensurate locations of the underlying triangular lattice, hold onto perfect solidity. Unlike in the RP-system, here the regions rich in pinning centers anchor solidity around them, whereas the regions relatively free of impurities feature incipient melting.
This is seen from Fig. S1(h) for $\Gamma^{-1}=0.0182$, i.e. at a temperature just below the onset of melting. As a result, the CP-system displays local pockets of ‘remnant solidity’ around impurities. This remnant solidity weakens with $\Gamma^{-1}$ but persists up to a high temperature (Fig. S1(i) at $\Gamma^{-1}=0.0210$). Note that the pinning centers maintain solid-like correlations at all $T$. The remnant solidity has discernible effects on the bond-orientational correlations $g_{\rm 6}(r)$, discussed in connection with Fig. 5(b) in the main manuscript, and is elaborated further in the later part of this SM. ## III Identification of critical distances of defects unbinding Figure S2: The distribution of distances between dislocation pairs: $P(r_{\rm dp})$ shown for different phases. Panels (a-c), (d-f), and (g-i) show results for clean, RP-, and CP-systems, respectively. $P(r_{\rm dp})$ at low $T$ in panels (a), (d), and (g) is sharply peaked at the typical distance of dislocation pairs, though there are differences in details. The distribution has a short tail for the clean system, a very long tail for RP-systems, and essentially no tail for CP-systems, indicating the enhanced solidity due to commensurate pinning. The distributions for the three systems develop smooth tails at a $T$ where defect unbinding sets in, as shown in panels (b, e, h). Panels (c, f, i) show that they become indistinguishable at large $T$, beyond the melting to a liquid. In Fig. S2, we report the distribution of the separation of two dislocations with equal and opposite Burgers vectors (the two constituting a bound pair of dislocations). We denote this distribution by $P(r_{\rm dp})$, where $r_{\rm dp}$ is the separation in question. At lower $T$ ($\Gamma^{-1}=0.0112$), Fig.
S2(a) shows that $P(r_{\rm dp})$ in a clean system is sharply peaked at $r_{\rm dp}\approx a_{0}$, the average inter-particle distance, with some additional weight at larger $r_{\rm dp}$. In contrast, a weak yet very long tail in $P(r_{\rm dp})$ (Fig. S2(d)) is found for the RP-system at low $T$. This demonstrates the presence of unbound dislocations even at the lowest $T$ ($\Gamma^{-1}=0.0056$), prohibiting true solidity. On the other hand, the first peak in $P(r_{\rm dp})$ for CP-systems at low $T$ in Fig. S2(g) is even sharper than the one in the clean system (Fig. S2(a)). This reflects an enhanced solidity in the CP-system. The nature of $P(r_{\rm dp})$ changes with $T$, and when the dislocation pairs start unbinding, $P(r_{\rm dp})$ begins to develop smooth tails, see Fig. S2(b, e, h). At these $T$'s and beyond, $P(r_{\rm dp})$ becomes qualitatively similar for clean, RP- and CP-systems. The distributions become nearly identical deep into the liquid phase (high $T$), as shown in Fig. S2(c, f, i). Figure S3: Difference in $P(r_{\rm dp})$: The two panels display $\Delta P(r_{\rm dp})$ in the pure system, taken between two pairs of temperatures – one in the solid and the other in the hexatic phase, both being close to ${}^{C}\Gamma^{-1}_{\rm SH}$. The corresponding traces of $P(r_{\rm dp})$ are shown as the insets. The largest $r_{\rm dp}$ corresponding to a zero-crossing is taken as $r_{dp}^{c}$. Fig. S3 supplements Fig. 3(d) in the main text, and illustrates the insensitivity of $r_{dp}^{c}$ to the chosen $T$-values for its extraction, as long as those temperatures are close to ${}^{C}\Gamma^{-1}_{\rm SH}$. We plot in Fig. S3 the difference $\Delta P(r_{\rm dp})$ of $P(r_{\rm dp})$ for two different pairs of temperatures around ${}^{C}\Gamma^{-1}_{\rm SH}$, while the corresponding distributions are shown as the insets. Both plots yield $r_{dp}^{c}=2.15a_{0}$ – the same value obtained in the main text for yet another pair of $T$'s.
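The extraction of $r_{dp}^{c}$ from the last zero-crossing of $\Delta P(r_{\rm dp})$ can be sketched as follows (illustrative; the bin edges and the density normalization of the histograms are our assumptions):

```python
# Sketch: r_dp^c as the largest zero-crossing of
# Delta P(r_dp) = P_above(r_dp) - P_below(r_dp).
import numpy as np

def critical_pair_distance(r_dp_below, r_dp_above, bins):
    """r_dp_below / r_dp_above: pair distances just below / above the transition."""
    P_below, _ = np.histogram(r_dp_below, bins=bins, density=True)
    P_above, _ = np.histogram(r_dp_above, bins=bins, density=True)
    dP = P_above - P_below
    centers = 0.5 * (bins[1:] + bins[:-1])
    crossings = np.flatnonzero(np.diff(np.sign(dP)) != 0)
    return centers[crossings[-1]] if len(crossings) else None
```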
Figure S4: Integrated distribution (ID) as a function of $r_{\rm dp}$ for different $T$s. The attainment of unity by the ID occurs sharply in the solid phase and more gently beyond ${}^{C}\Gamma^{-1}_{\rm SH}$. The curvature of the ID is studied further to estimate the critical distance ($r_{dp}^{c}$), which is presented as an inset. An alternative extraction of $r_{\rm dp}^{c}$ for the pure system, employing the same concept as above differently, is the following: In Fig. S4, we plot the integrated distribution (ID) of $P(r_{\rm dp})$ for various $T$'s. The traces of the ID show a threshold behavior marking the solid-to-hexatic transition. The ID rises very steeply for the lower $T$'s corresponding to the solid phase (as inferred from Fig. 2 of the main text), whereas the rise of the ID becomes distinctly gradual for the two higher $T$'s representing a hexatic phase. In the inset, we plot the distribution of the change of the slope of the ID traces in the solid phase, which attains its peak value at $r_{\rm dp}^{c}\approx 2.15a_{0}$. Figure S5: Distribution of distances between disclination pairs: Panels (a-c) represent the distribution of the distances of disclination pairs, $P({\tilde{r}}_{\rm dp})$, at different $\Gamma^{-1}$. Panel (a) shows $P({\tilde{r}}_{\rm dp})$ in the hexatic phase, which attains a sharp peak around the lattice spacing, reflecting the presence of only tightly bound disclinations in this phase. When the system hits the transition $T$, the defects unbind. The unbinding of these defects modifies the distribution, as shown in panel (b) in the liquid phase, and the distribution remains essentially the same at the higher $\Gamma^{-1}$ value (panel (c)). We also studied the thermal evolution of the distribution of disclination-pair distances (${\tilde{r}}_{\rm dp}$), and we present our results in Fig. S5 for three representative $T$'s.
This distribution $P({\tilde{r}}_{\rm dp})$ features a single peak at ${\tilde{r}}_{\rm dp}\sim a_{0}$ up to ${}^{C}\Gamma^{-1}_{\rm HL}$, and develops a second peak for larger $T$'s at ${\tilde{r}}_{\rm dp}\approx 2a_{0}$. Thus, we conclude that ${\tilde{r}}^{c}_{\rm dp}\sim r^{c}_{\rm dp}$. ## IV Correlations: The pair correlation is given by $g(r)=\frac{1}{2\pi rN}\sum_{i=1}^{N}\sum_{j\neq i=1}^{N}\langle\delta(r-\lvert{\bf r}_{i}-{\bf r}_{j}\rvert)\rangle,$ (1) with $r=\lvert{\bf r}_{i}-{\bf r}_{j}\rvert$. We present the low-$T$ behavior of $g(r)$ in Fig. S6(a). Its evolution in the clean (blue) and CP (green) systems is nearly identical, consistent with the findings of Fig. 2(a) and Fig. 2(b) in the main text. The $g(r)$ for the RP-system exhibits damped modulations, which prohibit solidity even at low temperatures. However, these damped modulations are longer-ranged than what is expected in a liquid. Fig. S6(b) shows $g(r)$ at large $T$, where all three systems turn into ‘liquids’. A role reversal occurs – $g(r)$ for the clean and RP-systems overlap this time, yielding liquid-like behavior, while the long-range modulations for CP-systems survive, implying ‘remnant solidity’ as discussed already in another context. Figure S6: Pair correlation function: Panel (a) represents the pair correlation (red: clean-, blue: random- and green: CP-system) for two different $\Gamma^{-1}$ values. At the low $\Gamma^{-1}$ value, the clean system's behavior is comparable to the CP-system, while at the higher $\Gamma^{-1}$, as shown in panel (b) (for visual clarity, the trace of the RP-system is shown as a dotted line), it is comparable to the RP-system. Figure S7: Orientational correlation function: of a clean system at different $T$s. The pure system displays the thermal evolution of $g_{6}(r)$ consistent with KTHNY theory. The $T$-dependence of orientational correlations in the clean system is shown in Fig.
S7 implies that $g_{6}(r)$ remains nearly independent of $\Gamma^{-1}$ for up to $\Gamma^{-1}=0.0145$. It shows a power-law decay $\sim r^{-\eta(T)}$, in the hexatic phase for $0.0151\geq\Gamma^{-1}\geq 0.0162$ (exponents $\eta(T)$ are listed in Fig. S7). Finally, $g_{6}(r)$ features an exponential decay in the liquid phase for $\Gamma^{-1}\geq 0.0165$. We find, e.g., $g_{6}(r,\Gamma^{-1}=0.0140)\sim(r/a_{0})^{-0.003}$, $g_{6}(r,\Gamma^{-1}=0.0.0157)\sim(r/a_{0})^{-0.01}$, $g_{6}(r,\Gamma^{-1}=0.0159)\sim(r/a_{0})^{-0.23}$, while $g_{6}(r,\Gamma^{-1}=0.0182)\sim{\rm exp}^{-0.24(r/a_{0})}$ and $g_{6}(r,\Gamma^{-1}=0.0210)\sim{\rm exp}^{-0.49(r/a_{0})}$. Figure S8: Orientational correlation function, The $g_{\rm 6}(r)$ is calculated in two separate components. (i) The dotted line shows the contribution of $g_{\rm 6}(r)$ only for the particles which lie within a distance of $2a_{0}$ from pinned particles at the higher $Ts$ beyond the melting temperature and (ii) the solid traces shows the $g_{\rm 6}(r)$ contributed by all other particles which are more than $2a_{0}$ distance. In order to address ‘remnant solidity’ in the CP-system at high $T$, we plot the two separate components of $g_{\rm 6}(r)$ in Fig. S8. Once component estimates the contribution of $g_{\rm 6}(r)$ (dotted trace) only for the particles which lie within a distance of $2a_{0}$ from pinned particles. The other component measures $g_{\rm 6}(r)$ (solid trace) contributed by all other particles. We present data of such two contributions for two values of $T$ (both in the high-$T$ liquid phase). The message from Fig. S8 is clear – the particles close to the pinning centers at commensurate locations of a perfect crystal anchor the crystallinity around them, which is reflected in the corresponding $g_{\rm 6}(r)$ which remains nearly constant at large $r$ like in a solid. The particles further away from the pinned particles, on the other hand, expectedly behaves like a liquid. 
Thus, it is the particles near the pinning centers that contribute to the ‘remnant solidity.’ Indeed, the pinned particles must show perfect solid-like correlations by construction.
# Entanglement Dynamics in Quantum Continuous-Variable States

DOCTORAL THESIS

Ankit Kumar

Indian Institute of Technology Roorkee, India

_jointly supervised by_

Prof. P. Arumugam, Department of Physics, Indian Institute of Technology Roorkee, Roorkee 247667, India

Prof. Tomasz Paterek, Institute of Theoretical Physics and Astrophysics, Faculty of Mathematics, Physics and Informatics, University of Gdańsk, Gdańsk 80-308, Poland

July, 2023

©INDIAN INSTITUTE OF TECHNOLOGY ROORKEE, 2023. ALL RIGHTS RESERVED.

Abstract

Due to the weakness of gravitational coupling, all quantum experiments to date in which gravity plays a role have utilized the field of the Earth. Since this field undergoes practically undetectable back-action from quantum particles, it effectively admits a classical description as a fixed background Newtonian field or spacetime. This argument strongly motivates theoretical and experimental research towards a demonstration of gravitation between two quantum masses, as this is one of the most straightforward scenarios where quantum features of gravity could be observed. Several proposals have studied the possibility of generating entanglement between two massive objects. Along the same lines, with a particular focus on gravity, this thesis introduces general tools to tackle interaction-mediated entanglement and applies them to two particles prepared in continuous-variable states. In order to pursue this aim systematically, this dissertation begins by introducing methods to precisely simulate the dynamics of quantum systems coupled by weak interactions. We improve the accuracy of the numerical implementation of Cayley’s operator and develop a methodology to avoid reflections from numerical infinities. We derive a condition under which a product state from the laboratory (LAB) perspective remains a product state in the center-of-mass (COM) frame, which reveals that only certain states are transformed into disentangled states. 
Even though the primary focus is on gravity, all the developed methods apply to arbitrary central interactions, and considerable parts of this thesis are devoted to explicit demonstrations of this versatility. Accordingly, the first application is to investigate the head-on collision in the Rutherford experiment, with the projectile treated as (realistic) localized wave packets. We observe various nonclassical effects in the average trajectories and trace them back to the convexity properties of the Coulomb potential with the help of Jensen’s inequality. The concluding chapter also comments on the projectile-target entanglement. Our next goal is to simplify the possible observation of weak gravitational entanglement in an inevitably noisy laboratory. The basic idea is to amplify correlations by pushing the particles toward each other, hoping that an ever-increasing gravitational interaction will automatically lead to a higher accumulated entanglement. A toolbox is developed that quantifies the entanglement gain between the two particles directly in the COM frame of reference, thereby eliminating the need for inverse transformations back to the LAB frame. We start with the standard practice of the second-order truncation of the quantum Newtonian potential, which has long kept the mathematical complexities at a minimum by forcefully constraining the system into the regime of (very well-understood) Gaussian Quantum Information. While it is known that an analytical solution exists, we utilize Ehrenfest’s theorem to derive the covariance matrix in an exact closed form. The resultant entanglement is insensitive to relative motion between the two particles. The less-understood non-Gaussian regime triggered by the cubic and higher-order potentials is considered next. We develop a hybrid analytical-numerical scheme to faithfully estimate the entanglement gain with the help of algorithms in Google TensorNetwork. 
The entanglement is found to be sensitive to relative motion only when the system evolves into the non-Gaussian regime. We prove that the position-momentum correlations originate from the force gradient in relative motion. A derivation of closed forms for the non-Gaussian entanglement gain follows with informed guesswork. In experiments, it will be challenging to screen the system from all interactions but gravity. With this in mind, we develop tools to quantify the entanglement with multiple central forces acting simultaneously. As the final application, the thesis discusses an entanglement-based test of Modified Newtonian Dynamics (MOND), a candidate explanation of dark matter effects which proposes to modify Newton’s second law and/or the gravitational force law for accelerations smaller than $\sim 10^{-10}$ m/s$^{2}$. One verifies that the masses recently cooled by the Aspelmeyer group in Vienna, when separated by a distance of a few times their radius, are in the regime of accelerations where MOND is relevant. Accordingly, the tools developed in this thesis offer an opportunity to test the assumptions behind MOND through entanglement between two nearby quantum masses. We develop an experiment in which departures from Newtonian gravity are certified by simply witnessing the entanglement generation starting from thermal states.

Publications

## Contributions in Peer-Reviewed Journals

1. Nonclassical Trajectories in Head-On Collisions. Ankit Kumar, Tanjung Krisnanda, P. Arumugam, and Tomasz Paterek. Quantum 5, 506 (2021). arXiv:2011.06470
2. Continuous-Variable Entanglement through Central Forces: Application to Gravity between Quantum Masses. Ankit Kumar, Tanjung Krisnanda, P. Arumugam, and Tomasz Paterek. Quantum 7, 1008 (2023). arXiv:2206.12897
3. Probing Modified Gravity with Entanglement of Microspheres. Ankit Kumar, Yen-Kheng Lim, P. Arumugam, Tom Złośnik, and Tomasz Paterek. Physical Review D 109, L101501 (2024). arXiv:2306.14938

## Contributions in Peer-Reviewed Conference Proceedings

1. Closest Approach of a Quantum Projectile. Ankit Kumar, Tanjung Krisnanda, P. Arumugam, and Tomasz Paterek. Journal of Physics: Conference Series 1850, 012074 (2021). arXiv:2112.13296

## Contributions in Public Scientific Repositories

1. An Accurate Pentadiagonal Matrix Solution for the Time-Dependent Schrödinger Equation. Ankit Kumar and P. Arumugam. Zenodo.7275667. GitHub/vyason/Cayley-TDSE. arXiv:2205.13467
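The Cayley form of the time-evolution operator referenced in the repository entry above admits a compact sketch. The snippet below is our illustration, not the thesis code: it is a plain tridiagonal Crank-Nicolson step in units $\hbar=m=1$, whereas the Cayley-TDSE repository implements a higher-order pentadiagonal stencil. Because the Cayley transform of a Hermitian $H$ is exactly unitary, the step conserves the norm of the wave function to solver precision.

```python
import numpy as np
from scipy.linalg import solve_banded

def cayley_step(psi, V, dx, dt):
    """One Cayley (Crank-Nicolson) step for the 1D TDSE,
        (1 + i*H*dt/2) psi_new = (1 - i*H*dt/2) psi_old,
    with H = -(1/2) d^2/dx^2 + V discretized on a uniform grid
    (Dirichlet boundaries, hbar = m = 1)."""
    n = len(psi)
    off = -0.5 / dx**2                 # kinetic off-diagonal of H
    diag = 1.0 / dx**2 + V             # kinetic diagonal + potential
    a = 0.5j * dt
    # (1 + a*H) in the banded storage expected by solve_banded
    ab = np.zeros((3, n), dtype=complex)
    ab[0, 1:] = a * off                # superdiagonal
    ab[1, :] = 1.0 + a * diag          # main diagonal
    ab[2, :-1] = a * off               # subdiagonal
    # right-hand side (1 - a*H) psi
    rhs = (1.0 - a * diag) * psi
    rhs[:-1] -= a * off * psi[1:]
    rhs[1:] -= a * off * psi[:-1]
    return solve_banded((1, 1), ab, rhs)
```

Norm conservation holds for any $dt$; accuracy, and the suppression of spurious reflections from the grid edges addressed in the thesis, are separate matters.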
${\mathcal{F}}_{2}(q,\bar{\rho})=\frac{1}{2}\sum_{L\in\Gamma_{8}}q^{(L+\frac{\rho}{2})^{2}}\,.$ (A.44) The functions ${\mathcal{F}}_{1}$, ${\mathcal{F}}_{2}$, and ${\mathcal{F}}_{3}$ below match Mikhailov’s generating functions $F_{1}$, $F_{2}$ and $F_{3}$ given in equations (3.3)-(3.5) in [11] up to an overall factor $q/\eta^{24}$. The meaning of these functions will be explained shortly. Twisted sector The twisted sector is much simpler. The two terms in ${\mathcal{Z}}_{\mathrm{II}}(g)$ are given in equations (A.24) and (A.26). Combining them gives ${\mathcal{Z}}_{\mathrm{II}}(g)=\frac{1}{\bar{\eta}\eta^{17}}\sum_{n\in{\mathbb{Z}},\,\ell\in 2{\mathbb{Z}}+1}\bar{q}^{\frac{1}{2}p_{R}^{2}}\,q^{\frac{1}{2}p_{L}^{2}}\,\sum_{\rho\hskip 1.0pt\in\Gamma_{8}}q^{\frac{1}{4}\rho^{2}}\,{\mathcal{F}}_{3}\\!\left(q,(\tfrac{\rho^{2}}{2}+n)\\!\\!\\!\mod\\!2\right)\,.$ (A.45) The momenta $p_{R}$ and $p_{L}$ are again given in (A.31). The function ${\mathcal{F}}_{3}$ reads ${\mathcal{F}}_{3}\\!\left(q,(\tfrac{\rho^{2}}{2}+n)\\!\\!\\!\\!\mod\\!2\right)=\frac{1}{2}\left[\left(\frac{\eta^{3}}{\vartheta_{4}}\right)^{\\!\\!4}-e^{i\pi(n+\frac{\rho^{2}}{2})}\left(\frac{\eta^{3}}{\vartheta_{3}}\right)^{\\!\\!4}\right]\,.$ (A.46) The twisted states constitute class C characterized by $\rho\in\Gamma_{8}$ and $\ell\in 2{\mathbb{Z}}+1$. 
Full orbifold Adding ${\mathcal{Z}}_{\mathrm{II}}({\mathbb{1}})$ and ${\mathcal{Z}}_{\mathrm{II}}(g)$ yields ${\mathcal{Z}}_{{\mathrm{II}}_{\text{o}}}=\frac{1}{\bar{\eta}\eta^{17}}\left\\{\sum_{\begin{subarray}{c}n\in{\mathbb{Z}}\\\ \ell\in 2{\mathbb{Z}}\end{subarray}}\bar{q}^{\frac{1}{2}p_{R}^{2}}\,q^{\frac{1}{2}p_{L}^{2}}\left[\sum_{\rho\hskip 1.0pt\in 2\Gamma_{8}}q^{\frac{1}{4}\rho^{2}}\,{\mathcal{F}}_{1}\ +\sum_{\rho\hskip 1.0pt\in\Gamma_{8}/2\Gamma_{8}}q^{\frac{1}{4}\rho^{2}}{\mathcal{F}}_{2}\right]+\sum_{\begin{subarray}{c}n\in{\mathbb{Z}}\\\ \ell\in 2{\mathbb{Z}}+1\end{subarray}}\bar{q}^{\frac{1}{2}p_{R}^{2}}\,q^{\frac{1}{2}p_{L}^{2}}\,\sum_{\rho\hskip 1.0pt\in\Gamma_{8}}q^{\frac{1}{4}\rho^{2}}\,{\mathcal{F}}_{3}\right\\}\,.$ (A.47) The arguments of ${\mathcal{F}}_{1}$, ${\mathcal{F}}_{2}$ and ${\mathcal{F}}_{3}$ are omitted to simplify the expression. From this result we can read off the content of orbifold states classified according to the possible domains of $\ell$, $n$ and $\rho$. This information is summarised in Table 8. class | $\ell$ | $n$ | $\rho$ | sector | generating function ---|---|---|---|---|--- A | $2{\mathbb{Z}}$ | ${\mathbb{Z}}$ | $2\Gamma_{8}$ | untwisted | ${\mathcal{F}}_{1}(q,n\\!\\!\\!\mod\\!2)$ B | $2{\mathbb{Z}}$ | ${\mathbb{Z}}$ | $\Gamma_{8}/2\Gamma_{8}$ | untwisted | ${\mathcal{F}}_{2}(q,\bar{\rho})$ C | $2{\mathbb{Z}}+1$ | ${\mathbb{Z}}$ | $\Gamma_{8}$ | twisted | ${\mathcal{F}}_{3}\\!\left(q,(\tfrac{\rho^{2}}{2}+n)\\!\\!\\!\mod\\!2\right)$ Table 8: Classes of orbifold states #### A.4.3 Reading ${\mathrm{II}}_{(1)}$ from the partition function It turns out that some identities relating the functions ${\mathcal{F}}_{1}$, ${\mathcal{F}}_{2}$ and ${\mathcal{F}}_{3}$ are needed to show that the orbifold states lie in the lattice ${\mathrm{II}}_{(1)}$. 
In [11] the relations are proven analytically for the generating functions $F_{c}$, $c=1,2,3$, connected to the ${\mathcal{F}}_{c}$ by $F_{c}=\frac{q}{\eta^{24}}{\mathcal{F}}_{c}\,.$ (A.48) The identities can be verified by comparing the $q$-expansions of the ${\mathcal{F}}_{c}$. For ${\mathcal{F}}_{1}(q,n\\!\\!\\!\mod\\!2)$ and ${\mathcal{F}}_{3}\\!\left(q,(\tfrac{\rho^{2}}{2}+n)\\!\\!\\!\mod\\!2\right)$ these expansions are easily found from their definitions. The expansion of ${\mathcal{F}}_{2}(q,\bar{\rho})$ is less direct because it depends on $\bar{\rho}$, which is the conjugacy class of $\rho$ in $\Gamma_{8}/2\Gamma_{8}$. As explained in [11], if $\frac{1}{2}\rho^{2}$ is odd, then $\rho$ equals a root modulo $2\Gamma_{8}$. We denote the conjugacy class by $\Delta_{2}$. If instead $\frac{1}{2}\rho^{2}$ is even, then $\rho$ modulo $2\Gamma_{8}$ is equal to a vector $\varrho\in\Gamma_{8}$ with $\varrho^{2}=4$. We denote the conjugacy class by $\Delta_{4}$. From the expansions we can check the identities $\displaystyle{\mathcal{F}}_{1}(q,1)$ $\displaystyle={\mathcal{F}}_{2}(q,\Delta_{4})={\mathcal{F}}_{3}(q,0)=8q+64q^{2}+224q^{3}+512q^{4}+\cdots,$ (A.49a) $\displaystyle{\mathcal{F}}_{2}(q,\Delta_{2})$ $\displaystyle={\mathcal{F}}_{3}(q,1)=q^{\frac{1}{2}}(1+28q+126q^{2}+344q^{3}+\cdots)\,.$ (A.49b) Mikhailov’s generating functions $F_{c}$ satisfy the same identities because they are equal to the ${\mathcal{F}}_{c}$ up to an overall factor. The meaning of the generating functions can be understood by looking at the simple lattice partition function of the 10-dimensional heterotic string in (A.2). In this case there is a generating function $1/\eta^{16}$. Now, we know that for each vector in the lattice there is a tower of excited states created by acting with the oscillators of the $Y^{\hat{I}}$, $\hat{I}=1,\cdots,16$. Moreover, the coefficients in the $q$-expansion of $1/\eta^{16}$ precisely count the number of states at each excited level. 
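The expansions in (A.49) are easy to check by machine. The sketch below is our verification script: it builds $(\eta^{3}/\vartheta_{4})^{4}$ and $(\eta^{3}/\vartheta_{3})^{4}$ as truncated power series in $x=q^{1/2}$, using the product formulas $(\eta^{3}/\vartheta_{4})^{4}=x\prod_{n\geq 1}(1-x^{2n})^{8}/(1-x^{2n-1})^{8}$ and $(\eta^{3}/\vartheta_{3})^{4}=x\prod_{n\geq 1}(1-x^{2n})^{8}/(1+x^{2n-1})^{8}$ that follow from the Jacobi triple product, and then reads off ${\mathcal{F}}_{3}(q,0)$ and ${\mathcal{F}}_{3}(q,1)$ from (A.46).

```python
import numpy as np

NMAX = 16  # truncation order in x = q^(1/2)

def mul(a, b):
    """Product of two truncated power series (coefficient arrays)."""
    return np.convolve(a, b)[:NMAX + 1]

def inv(d):
    """Series inverse of d, assuming d[0] == 1."""
    r = np.zeros(NMAX + 1)
    r[0] = 1.0
    for k in range(1, NMAX + 1):
        r[k] = -sum(d[j] * r[k - j] for j in range(1, k + 1))
    return r

def factor8(m, s):
    """(1 + s*x^m)^8 as a truncated series."""
    f = np.zeros(NMAX + 1)
    f[0], f[m] = 1.0, s
    out = np.zeros(NMAX + 1)
    out[0] = 1.0
    for _ in range(8):
        out = mul(out, f)
    return out

num = np.zeros(NMAX + 1); num[0] = 1.0     # prod (1 - x^{2n})^8
for m in range(2, NMAX + 1, 2):
    num = mul(num, factor8(m, -1.0))

def eta3_over_theta_4th(s):
    """x * prod (1-x^{2n})^8 / (1 + s*x^{2n-1})^8;
    s = -1 gives theta_4, s = +1 gives theta_3 in the denominator."""
    den = np.zeros(NMAX + 1); den[0] = 1.0
    for m in range(1, NMAX + 1, 2):
        den = mul(den, factor8(m, s))
    series = mul(num, inv(den))
    return np.concatenate(([0.0], series[:-1]))   # overall factor x

A = eta3_over_theta_4th(-1.0)          # (eta^3 / theta_4)^4
B = eta3_over_theta_4th(+1.0)          # (eta^3 / theta_3)^4
F3_0 = 0.5 * (A - B)                   # F_3(q, 0): even powers of x
F3_1 = 0.5 * (A + B)                   # F_3(q, 1): odd powers of x

print([round(F3_0[2 * k]) for k in range(1, 5)])    # coefficients of q..q^4
print([round(F3_1[2 * k + 1]) for k in range(4)])   # q^(1/2)*(…) coefficients
```

The output, [8, 64, 224, 512] and [1, 28, 126, 344], matches (A.49a) and (A.49b); extending the script with the $\Gamma_{8}$ theta series would check ${\mathcal{F}}_{1}$ and ${\mathcal{F}}_{2}$ directly.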
The meaning of Mikhailov’s generating functions $F_{c}$ is completely analogous. The prefactor $q/\eta^{24}$ in the relation with the ${\mathcal{F}}_{c}$ is well justified. The power of $\eta$ corresponds to the 24 left-moving coordinates, and the power of $q$ just offsets the normal ordering constant. In this way their $q$-expansion will be of the form $F(q)=\sum_{N^{\prime}}d(N^{\prime},(\ell,n,\rho))q^{N^{\prime}}\,,$ (A.50) where now $N^{\prime}$ corresponds to the full left-moving oscillator number. For each state $(\ell,n,\rho)$, the coefficients $d(N^{\prime},(\ell,n,\rho))$ count the number of states obtained by acting with oscillators on it, at the given oscillator number $N^{\prime}$. The dependence on $(\ell,n,\rho)$ is necessary because, as seen in Table 8, for each type of state there is an associated generating function. We are finally ready to state Mikhailov’s proof that the spectrum of orbifold states can be put into correspondence with the points in the lattice ${\mathrm{II}}_{(1)}$ displayed in (A.35). This correspondence is summarised in Table 9. For example, the points of type 1 where $(\ell,n,\rho)\in 2{\mathrm{II}}_{(1)}$ can only correspond to points of orbifold class A, which have $\rho\in 2\Gamma_{8}$ and $\ell\in 2{\mathbb{Z}}$, provided that also $n\in 2{\mathbb{Z}}$. 
${\mathrm{II}}_{(1)}$ type | $(\ell,n,\rho)$ | $\frac{1}{2}\rho^{2}+\ell n$ | orbifold class
---|---|---|---
1 | $2{\mathrm{II}}_{(1)}$ | $2{\mathbb{Z}}$ | $\left[{\bf A},n\in 2{\mathbb{Z}}\right]$
2 | ${\mathrm{II}}_{(1)}/2{\mathrm{II}}_{(1)}$ | $2{\mathbb{Z}}$ | $\left[{\bf A},n\in 2{\mathbb{Z}}+1\right]$, $\left[{\bf B},\frac{1}{2}\rho^{2}\in 2{\mathbb{Z}}\right]$, $\left[{\bf C},\left(\frac{1}{2}\rho^{2}+n\right)\in 2{\mathbb{Z}}\right]$
3 | ${\mathrm{II}}_{(1)}$ | $2{\mathbb{Z}}+1$ | $\left[{\bf B},\frac{1}{2}\rho^{2}\in 2{\mathbb{Z}}+1\right]$, $\left[{\bf C},\left(\frac{1}{2}\rho^{2}+n\right)\in 2{\mathbb{Z}}+1\right]$

Table 9: Points in ${\mathrm{II}}_{(1)}$ vs orbifold classes For points of type 2 and 3 there can be more than one orbifold class, as can be understood by looking at Table 8. For these points consistency requires precise identities among the generating functions. For example, the points of type 3 must appear with the same generating function whether they arise in class B with $\frac{1}{2}\rho^{2}\in 2{\mathbb{Z}}+1$, or in class C with $(\frac{1}{2}\rho^{2}+n)\in 2{\mathbb{Z}}+1$. This means that ${\mathcal{F}}_{2}(q,\Delta_{2})$ must be equal to ${\mathcal{F}}_{3}(q,1)$, which is precisely the identity in (A.49b). Similarly, for the points of type 2 the functions ${\mathcal{F}}_{1}(q,1)$, ${\mathcal{F}}_{2}(q,\Delta_{4})$ and ${\mathcal{F}}_{3}(q,0)$ must be the same, which is true by virtue of the identity (A.49a). Points of type 1 simply fall in class A and occur with generating function ${\mathcal{F}}_{1}(q,0)$. #### A.4.4 T-duality The previous results can be used to show that the partition function of the $S^{1}/{\mathbb{Z}}_{2}$ orbifold is invariant under T-duality. We consider the simpler situation with Wilson line $a=0$ in which T-duality is the action $n\leftrightarrow\ell$, $R_{M}\to 1/R_{M}$. 
The relevant piece of the partition function is the lattice contribution ${\mathcal{Z}}_{{\mathrm{II}}_{\text{o}}}$ displayed in (A.47). From the corresponding spectrum of states summarised in Table 8 it is evident that T-duality mixes twisted and untwisted states as remarked in [11]. To establish T-duality it is enough to show that the quantity between brackets in (A.47) is invariant. The parts with both $n$ and $\ell$ even (odd), arising in the untwisted (twisted) sector, are clearly invariant by themselves. The remaining question is whether the untwisted sector terms with $n$ odd and $\ell$ even do match the twisted sector terms with $n$ even and $\ell$ odd. The answer is yes as follows from the equality $\sum_{\begin{subarray}{c}n\in 2{\mathbb{Z}}+1\\\ \ell\in 2{\mathbb{Z}}\end{subarray}}\bar{q}^{\frac{1}{2}p_{R}^{2}}\,q^{\frac{1}{2}p_{L}^{2}}\left[\sum_{\rho\hskip 1.0pt\in 2\Gamma_{8}}q^{\frac{1}{4}\rho^{2}}\,{\mathcal{F}}_{1}(q,1)\ +\sum_{\rho\hskip 1.0pt\in\Gamma_{8}/2\Gamma_{8}}q^{\frac{1}{4}\rho^{2}}{\mathcal{F}}_{2}(q,\bar{\rho})\right]\\!=\\!\\!\sum_{\begin{subarray}{c}n\in 2{\mathbb{Z}}\\\ \ell\in 2{\mathbb{Z}}+1\end{subarray}}\bar{q}^{\frac{1}{2}p_{R}^{2}}\,q^{\frac{1}{2}p_{L}^{2}}\,\sum_{\rho\hskip 1.0pt\in\Gamma_{8}}q^{\frac{1}{4}\rho^{2}}\,{\mathcal{F}}_{3}\\!\left(q,\tfrac{\rho^{2}}{2}\\!\\!\\!\\!\mod\\!2\right)\,.$ (A.51) In turn this identity can be shown using the properties ${\mathcal{F}}_{1}(q,1)={\mathcal{F}}_{3}(q,0)$, ${\mathcal{F}}_{2}(q,\Delta_{4})={\mathcal{F}}_{3}(q,0)$ and ${\mathcal{F}}_{2}(q,\Delta_{2})={\mathcal{F}}_{3}(q,1)$, given in (A.49). ## Appendix B World-sheet realisation of gauge symmetries In this appendix we briefly discuss the Kac-Moody algebras that realize the space-time gauge symmetries of the CHL theory in 9 dimensions and its toroidal compactifications. 
The space-time ${\mathrm{E}}_{8}\times{{\mathrm{E}}}^{\prime}_{8}$ gauge symmetry is realized on the world-sheet by dimension (1,0) currents $J_{1}^{a}\otimes 1$ and $1\otimes J^{b}_{2}$, $a,b=1,...,248$, that obey the OPE ${J}_{i}^{a}(z){J}_{i}^{b}(0)\sim\frac{\tilde{k}_{i}\delta^{ab}}{z^{2}}+\frac{i}{z}f^{ab}{}_{c}{J}_{i}^{c}(0)\,,\quad i=1,2$ (B.1) at level $k_{i}=\frac{2\tilde{k}_{i}}{\psi_{i}^{2}}=1$, where $\tilde{k}_{i}=1$, $\psi_{i}^{2}=2$ is the norm of the highest root and $f^{ab}{}_{c}$ are the structure constants of the simply-laced Lie algebra of ${\mathrm{E}}_{8}$. The Sugawara construction induces a representation of the Virasoro algebra with central charge $c=\sum_{i=1}^{2}c_{i}=\sum_{i=1}^{2}\frac{k_{i}\,{\rm dim}\ G_{i}}{k_{i}+{\rm g}_{i}}\,,$ (B.2) where ${\rm g}_{i}$ is the dual Coxeter number of the group $G_{i}$ (see Table 10). These formulae hold in general for arbitrary products of groups [35, 36]. For simply laced algebras at level one it follows that $c_{i}={\rm rank}\,G_{i}$. In the ten dimensional heterotic string with $G_{i}={\mathrm{E}}_{8}$, clearly $c_{i}=8$ and $c=16$. $G$ | ${\mathrm{A}}_{n}$ | ${\mathrm{D}}_{n}$ | ${\mathrm{E}}_{6}$ | ${\mathrm{E}}_{7}$ | ${\mathrm{E}}_{8}$ | ${\mathrm{B}}_{n}$ | ${\mathrm{C}}_{n}$ | ${\mathrm{F}}_{4}$ | ${\mathrm{G}}_{2}$ ---|---|---|---|---|---|---|---|---|--- ${\rm g}$ | $n+1$ | $2n-2$ | 12 | 18 | 30 | $2n-1$ | $n+1$ | 9 | 4 dim $G$ | $n(n+2)$ | $n(2n-1)$ | 78 | 133 | 248 | $n(2n+1)$ | $n(2n+1)$ | 52 | 14 Table 10: Dual Coxeter number ${\rm g}$ and dimension of the gauge group $G$ The currents of the level $k=1$ untwisted affine Kac Moody algebras associated with simple Lie algebras which are simply-laced were constructed using the vertex operators of the massless gauge bosons of the string spectrum in [37, 38]. 
In the ten dimensional theory, the 248 gauge bosons of each ${\mathrm{E}}_{8}$ comprise the 8 Cartan $\alpha^{I}_{-1}|0,0\rangle$ or $\alpha^{I+8}_{-1}|0,0\rangle,I=1,...,8$ and the 240 roots $|p^{I},p^{I+8}\rangle=|r_{1}^{I},0\rangle$ or $|0,r_{2}^{I}\rangle$, with $r_{1}^{I},r_{2}^{I}\in\Gamma_{8}$. Their vertex operators can be written in terms of the free bosons $Y^{I}(z)$ and ${Y^{\prime}}^{I}(z)$, and the corresponding currents $J_{1,2}^{a}$ have the following realisation in the Cartan basis $\displaystyle H^{I}_{1}(z)=i\partial Y^{I}(z)\,,\qquad\qquad E_{1}^{\pm r_{1}}(z)=c_{r_{1}}:e^{\pm ir_{1}\cdot Y(z)}:\,,$ (B.3) $\displaystyle H^{I}_{2}(z)=i\partial{Y^{\prime}}^{I}(z)\,,\qquad\qquad E_{2}^{\pm r_{2}}(z)=c_{r_{2}}:e^{\pm ir_{2}\cdot Y^{\prime}(z)}:\,,$ (B.4) where $c_{r_{1}},c_{r_{2}}$ are cocycle factors. Using the OPEs $\partial Y^{I}(z)\partial Y^{J}(0)=-\frac{\delta^{IJ}}{z^{2}}\,,\qquad\partial{Y^{\prime}}^{I}(z)\partial{Y^{\prime}}^{J}(0)=-\frac{\delta^{IJ}}{z^{2}}\,,$ (B.5) the current algebra of $\widehat{{\mathrm{E}}_{8}\times{\mathrm{E}}^{\prime}_{8}}$ is realized at level $k_{1}=k_{2}=\frac{2\tilde{k}_{i}}{|\psi_{i}|^{2}}=1$, as can be read from $\displaystyle H^{I}_{i}(z)H^{J}_{j}(0)$ $\displaystyle\sim\frac{\delta_{ij}\delta^{IJ}}{z^{2}}\,,$ (B.6a) $\displaystyle H_{i}^{I}(z)E_{j}^{\pm r_{j}}(0)$ $\displaystyle\sim\frac{\pm r_{j}^{I}E_{j}^{\pm r_{j}}(0)\delta_{ij}}{z}\,,$ (B.6b) $\displaystyle E_{i}^{r_{i}}(z)E_{i}^{-r_{i}}(0)$ $\displaystyle\sim\frac{1}{z^{2}}+\frac{r_{i}\cdot H_{i}(0)}{z}\,,$ (B.6c) As we have seen, the CHL string in 9 and lower dimensions can be constructed as a ${\mathbb{Z}}_{2}$ orbifold involving the outer automorphism that exchanges ${\mathrm{E}}_{8}$ and ${{\mathrm{E}}}^{\prime}_{8}$. In 10 dimensions the orbifold by this exchange simply reproduces the original theory. 
This can be verified by computing the partition function as discussed in Appendix A, with $g$ corresponding to the action ${\mathrm{E}}_{8}\leftrightarrow{\mathrm{E}}^{\prime}_{8}$, and using the identities (A.49). However, if the exchange of ${\mathrm{E}}_{8}$ and ${\mathrm{E}}^{\prime}_{8}$ is accompanied by an additional $2\pi$ rotation of the ten-dimensional space-time, one gets the non-supersymmetric ${\mathrm{E}}_{8}$ string [39, 40] in which some sectors of the Hilbert space are projected out. As explained in [40], only the products ${\cal H}_{\rm spF}\otimes{\cal H}_{\rm s}$ and ${\cal H}_{\rm spB}\otimes{\cal H}_{\rm as}$ survive, where ${\cal H}_{\rm spB}({\cal H}_{\rm spF})$ denotes the Hilbert subspace of space-time bosons (fermions) which is symmetric (antisymmetric) under the $2\pi$ rotation. Since the internal Hilbert space ${\cal H}_{\rm int}$ is an irreducible representation of $\widehat{{\mathrm{E}}_{8}\times{\mathrm{E}}^{\prime}_{8}}$, its symmetric and antisymmetric subspaces ${\cal H}_{\rm s}$ and ${\cal H}_{\rm as}$ are not invariant under the full ${\mathrm{E}}_{8}\times{\mathrm{E}}^{\prime}_{8}$ current algebra, but are invariant under the algebra of the diagonal currents $\displaystyle T^{a}(z)=J_{1}^{a}(z)\otimes 1+1\otimes J^{a}_{2}(z)\,,$ (B.7) since $T^{a}$ is invariant under the exchange of the two ${\mathrm{E}}_{8}$’s. The diagonal $\hat{\mathrm{E}}_{8}$ is a subalgebra of $\widehat{{\mathrm{E}}_{8}\times{\mathrm{E}}^{\prime}_{8}}$ and clearly the current algebra is realized at level $k_{1}+k_{2}=2$. In this case, the central charge obtained from (B.2) is $\frac{31}{2}$, and the missing $\frac{1}{2}$ is provided by the coset theory $\frac{({\mathrm{E}}_{8}\times{\mathrm{E}}_{8})_{k=1}}{({\mathrm{E}}_{8})_{k=2}}$, which is equivalent to the Ising model [40]. Let us now turn to CHL strings. 
As we have reviewed in the main text, the 9-dimensional theory can be described by a $S^{1}/{\mathbb{Z}}_{2}$ orbifold, with ${\mathbb{Z}}_{2}$ action given by the exchange ${\mathrm{E}}_{8}\leftrightarrow{\mathrm{E}}^{\prime}_{8}$ together with a translation in the compactified direction $x^{9}\rightarrow x^{9}+\pi R$. For arbitrary values of the compactification radius $R$ and Wilson lines $a$, only the 8 diagonal Cartan gauge bosons in the untwisted massless sector survive the orbifold projection, and together with the KK gauge boson of the compactified $x^{9}$, they account for the ${\mathrm{U}(1)}^{9}$ abelian symmetry of the theory, with generators $T^{I}_{+}=H^{I}_{1}+H^{I}_{2}=i\left(\partial Y^{I}(z)+\partial{Y}^{{}^{\prime}I}(z)\right)\qquad{\rm and}\qquad H^{9}=i\partial X^{9}(z)\,,$ (B.8) where $X^{9}$ is a free boson. For arbitrary $R$ and $a=0$ the untwisted states $\frac{1}{\sqrt{2}}\left(|r^{I},0\rangle+|0,r^{I}\rangle\right)$ are also massless when $r^{I}$ is a root of ${\mathrm{E}}_{8}$ and $m=n=0$. Together with the nine Cartan above, they give rise to the rank nine gauge symmetry ${\mathrm{E}}_{8}\times{\mathrm{U}(1)}$, with ${\mathrm{E}}_{8}$ the diagonal subgroup of the original ${\mathrm{E}}_{8}\times{\mathrm{E}}_{8}$. The corresponding raising and lowering currents are $T_{+}^{\pm r^{I}}=E_{1}^{\pm r^{I}}+E_{2}^{\pm r^{I}}\,.$ (B.9) The ${\mathrm{E}}_{8}$ current algebra is realized at level 2, just as in the 10-dimensional ${\mathrm{E}}_{8}$ string theory. Lower rank groups and higher level algebras are a hallmark of CHL strings. The total central charge $c$ of the Kac-Moody algebra associated to the gauge symmetry also gives useful information. In general, in $(10-d)$ dimensions there is a bound $c\leq c_{L}^{int}$, where the internal piece is $c_{L}^{int}=16+d$. 
This follows because, keeping only the transverse degrees of freedom, the total left-moving central charge is $c_{L}=24$ and the world-sheet bosons corresponding to the space-time coordinates contribute $(8-d)$. It is convenient to write the bound on $c$ as $\Delta=16+d-c\geq 0\,.$ (B.10) We will refer to $\Delta$ as the missing central charge. A consistency condition is that when $\Delta<1$ it must be equal to the central charge of a unitary minimal model given by $c_{j}=1-\frac{6}{j(j+1)},\quad j=3,4,\ldots.$ (B.11) For instance, in the above ${\mathrm{E}}_{8}\times{\mathrm{U}(1)}$ example the Kac-Moody central charge is $c=\frac{31}{2}+1$ and the missing $\Delta=\frac{1}{2}$ is provided by the $j=3$ minimal model, i.e. the Ising model, which is furthermore equivalent to the coset theory $\frac{({\mathrm{E}}_{8}\times{\mathrm{E}}_{8})_{k=1}}{({\mathrm{E}}_{8})_{k=2}}$. Continuing with the 9-dimensional CHL string, at the particular radius $R=\sqrt{2}$ and $a=0$, the states with $\ell=\pm 1,n=\pm 1,\rho=0$ in the twisted sector become massless and enhance the ${\mathrm{U}(1)}$ of the KK vector to ${\mathrm{SU}}(2)$. The vertex operators that create these states involve the left-moving currents $\qquad H^{1}(z)=i\partial X^{1}(z)\,,\qquad E^{\pm}(z)=c_{\pm}\Lambda e^{\pm ip_{L}\cdot X(z)}\,,$ (B.12) where the fields $X^{\rm a}(z)=e^{\rm a}_{i}X^{i}(z)$ with ${\rm a},i=1,...,d$ have tangent space indices ${\rm a}$ and standard propagator $\langle X^{\rm a}(z)X^{\rm b}(w)\rangle=-\delta^{\rm{ab}}\ln(z-w)\,.$ (B.13) The momentum in the tangent space is $p_{L{\rm a}}=\hat{e}_{\rm{a}}{}^{i}p_{Li}=1$ and $\Lambda$ is a twist field with conformal dimension $h=\frac{1}{2}$ and OPE $\Lambda(z)\Lambda(0)=\frac{1}{z}+{\rm reg}\,,$ (B.14) which is necessary to build spin 1 currents [41, 42]. 
From the OPEs $\displaystyle H(z)H(0)$ $\displaystyle\sim$ $\displaystyle\frac{1}{z^{2}}\,,$ (B.15) $\displaystyle H(z)E^{\pm}(0)$ $\displaystyle\sim$ $\displaystyle\pm\frac{p_{L{\rm a}}E^{\pm}(0)}{z}\,,$ (B.16) $\displaystyle E^{+}(z)E^{-}(0)$ $\displaystyle\sim$ $\displaystyle\frac{1}{z^{2}}+\frac{p_{L{\rm a}}H^{\rm a}(0)}{z}\,,$ (B.17) we see that the affine ${\mathrm{SU}}(2)$ algebra is realized at level $k=\frac{2\tilde{k}}{p_{L}^{2}}=2$. The central charge of the ${\mathrm{E}}_{8}\times{\mathrm{A}}_{1}$ model at level $k=2$ saturates $c_{L}^{int}=17$, as may be verified using the data in Table 10. The central charges of all the maximal enhancements listed in Table 3 can be readily computed. Except for the ${\mathrm{D}}_{9}$ and ${\mathrm{E}}_{8}\times{\mathrm{A}}_{1}$ models, the internal Kac-Moody algebras do not saturate $c_{L}^{int}=17$. In some cases, the missing central charge $\Delta$ is provided by unitary minimal models, cf. (B.11). For instance the ${\mathrm{E}}_{6}\times{\mathrm{A}}_{3}$ at level 2 requires $\Delta=6/7=c_{6}$. On the other hand, the ${\mathrm{A}}_{1}\times{\mathrm{A}}_{2}\times{\mathrm{A}}_{6}$ current algebra at level 2 leads to $\Delta=\frac{49}{30}$ which could arise combining two minimal models with $j=4$ and $j=9$. However in the case ${\mathrm{D}}_{5}\times{\mathrm{A}}_{4}$, with $\Delta=8/7$, a candidate world-sheet CFT is not obvious. It would be interesting to understand if there is a realisation of the missing CFTs in terms of coset models involving the original and the enhanced gauge groups. In compactifications of the CHL string to 8 dimensions, the gauge group is ${\mathrm{U}(1)}^{10}$ for generic values of the background fields. To analyze maximal enhancement at a special point in moduli space let us choose $E_{11}=2,E_{22}=1,E_{12}=E_{21}=0$ and $a_{1}=a_{2}=0$. 
These moduli actually correspond to starting with the 9-dimensional CHL model with group ${\mathrm{E}}_{8}\times{\mathrm{A}}_{1}$ at level $k=2$ discussed above, and further compactifying on a circle of radius $R_{2}=1$. Following the analysis in section 3, and using equations (3.7) and (3.8), we see that there are additional untwisted states with $Z^{2}=4$, having $\rho=0$ and $(\ell^{1},\ell^{2},n_{1},n_{2})=\pm(0,2,0,1)$. These states enhance the $({\mathrm{E}}_{8}\times{\mathrm{A}}_{1})_{2}\times{\mathrm{U}(1)}$ gauge symmetry to $({\mathrm{E}}_{8}\times{\mathrm{A}}_{1})_{2}\times({\mathrm{C}}_{1})_{1}$. Using that $p_{L}^{\rm a}=\frac{1}{\sqrt{2}}l^{i}e_{i}{}^{\rm a}=(0,\sqrt{2})$, the vertex operators contain the currents $H^{2}=i\partial X^{2}\,,\qquad E^{\pm}=c^{\pm}e^{\pm i\sqrt{2}X^{2}}\,,$ (B.18) which realize the current algebra of ${\mathrm{C}}_{1}$ at level $k=1$. We next consider an example with short and long roots. Taking $E_{11}=2$, $E_{12}=-2$, $E_{21}=0$, $E_{22}=1$ and $a_{1}=a_{2}=0$, gives gauge symmetry ${\mathrm{E}}_{8}\times{\mathrm{C}}_{2}$. The quantum numbers $(\ell^{1},\ell^{2},n_{1},n_{2})$ of the massless states that enhance the ${\mathrm{U}(1)}^{2}$ to ${\mathrm{C}}_{2}$ are $\pm(0,2,-2,1)$, $\pm(2,2,0,1)$, $\pm(1,0,1,0)$ and $\pm(1,2,-1,1)$, and they all have $\rho=0$. 
The vertex operators contain the currents $\displaystyle E_{1}^{\pm}(z)$ $\displaystyle=c_{1}^{\pm}\Lambda(z)e^{\pm iX^{1}(z)},\hskip 56.9055ptp^{\rm a}_{1L}=\alpha_{1}=(1,0)\,,$ (B.19a) $\displaystyle E_{2}^{\pm}(z)$ $\displaystyle=c_{2}^{\pm}e^{\mp i(X^{1}(z)-X^{2}(z))},\qquad\qquad p^{\rm a}_{2L}=\alpha_{2}=(-1,1)\,,$ (B.19b) $\displaystyle E_{3}^{\pm}(z)$ $\displaystyle=c_{3}^{\pm}\Lambda(z)e^{\pm iX^{2}(z)},\hskip 48.36958pt\quad p^{\rm a}_{3L}=\alpha_{3}=(0,1)\,,$ (B.19c) $\displaystyle E_{4}^{\pm}(z)$ $\displaystyle=c_{4}^{\pm}e^{\pm i(X^{1}(z)+X^{2}(z))},\qquad\qquad p^{\rm a}_{4L}=\alpha_{4}=(1,1)\,.$ (B.19d) Together with the Cartan operators $H^{1}=i\partial X^{1}$ and $H^{2}=i\partial X^{2}$, the current algebra of $\hat{\mathrm{C}}_{2}$ is realized at level $k=1$, since $\tilde{k}=1$ and the square of the highest root $\alpha_{4}$ is 2. It is straightforward to calculate the central charge of the Kac-Moody algebras of the eight-dimensional models listed in Table 7. As in the nine-dimensional case, they do not saturate $\Delta=0$ in general, but again in most cases one can find combinations of minimal models that account for the missing contribution. A consistency check is that when $\Delta<1$ it is always equal to the central charge of a unitary minimal model. Finally, let us remark that vertex operators for the twisted states in the examples (B.18) and (B.19) discussed above do not involve the fields $Y^{I},Y^{\prime I}$ explicitly. However, when $\rho$ is non-vanishing, they are expected to be part of the exponentials in the currents. For instance, with Wilson line $a=\frac{1}{2}w_{6}$ and $R^{2}=\frac{3}{2}$ in nine dimensions, the states with quantum numbers $(\ell,n,\rho)$ given by $\pm(1,0,-w_{6})$ and $\pm(1,1,0)$ become massless and enhance the gauge group to ${\mathrm{E}}_{7}\times{\mathrm{A}}_{2}$, with current algebra realized at level 2. 
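The central-charge bookkeeping used throughout this appendix, i.e. the Sugawara formula (B.2) with the data of Table 10 and the minimal-model values (B.11), reduces to a few lines of exact rational arithmetic. The sketch below is our illustrative script (function names are ours); it reproduces the $\Delta$ values quoted above for the nine-dimensional enhancements.

```python
from fractions import Fraction as Fr

# (dim G, dual Coxeter number g) from Table 10
def A(n): return (n * (n + 2), n + 1)
def D(n): return (n * (2 * n - 1), 2 * n - 2)
E6, E7, E8 = (78, 12), (133, 18), (248, 30)

def c_sugawara(factors, k):
    """Central charge (B.2) of a product of simple factors at level k."""
    return sum(Fr(k * dim, k + g) for dim, g in factors)

def c_minimal(j):
    """Unitary minimal model central charge (B.11)."""
    return 1 - Fr(6, j * (j + 1))

assert c_sugawara([E8], 1) == 8                    # each E8 factor in 10d
assert c_sugawara([E8], 2) == Fr(31, 2)            # diagonal E8 at level 2
assert c_minimal(3) == Fr(1, 2)                    # Ising model

# missing central charge Delta = 17 - c for the 9d enhancements at level 2
assert 17 - c_sugawara([E8, A(1)], 2) == 0                     # saturated
assert 17 - c_sugawara([E6, A(3)], 2) == c_minimal(6)          # 6/7
assert 17 - c_sugawara([A(1), A(2), A(6)], 2) == Fr(49, 30)
assert 17 - c_sugawara([A(1), A(2), A(6)], 2) == c_minimal(4) + c_minimal(9)
assert 17 - c_sugawara([D(5), A(4)], 2) == Fr(8, 7)
```

Exact fractions avoid any floating-point ambiguity when matching $\Delta$ against sums of minimal-model central charges.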
Free field representations of the affine ${\mathrm{SU}}(3)$ current algebra at level 2 are known [35] as well as of the level 1 non-simply laced algebras [43, 44, 45] involved in the enhanced gauge groups of the eight dimensional theory (see also [46] for constructions of Kac Moody algebras in terms of free fields). But not all of them can be directly related to the vertex operators of the twisted states of the CHL theory. We postpone a detailed analysis of the twisted vertex operators that realize these current algebras to a future publication. ## References * [1] A. Adams, O. DeWolfe, and W. Taylor, String universality in ten dimensions, Phys. Rev. Lett. 105 (2010) 071601, [arXiv:1006.1352]. * [2] H.-C. Kim, G. Shiu, and C. Vafa, Branes and the Swampland, Phys. Rev. D 100 (2019), no. 6 066006, [arXiv:1905.08261]. * [3] B. Fraiman, M. Graña, and C. A. Núñez, A new twist on heterotic string compactifications, JHEP 09 (2018) 078, [arXiv:1805.11128]. * [4] A. Font, B. Fraiman, M. Graña, C. A. Núñez, and H. P. De Freitas, Exploring the landscape of heterotic strings on $T^{d}$, JHEP 10 (2020) 194, [arXiv:2007.10358]. * [5] K. S. Narain, New Heterotic String Theories in Uncompactified Dimensions $<$ 10, Phys. Lett. B169 (1986) 41–46. * [6] P. Goddard and D. Olive, Algebras, lattices and strings, in Vertex Operators in Mathematics and Physics (J. Lepowsky, S. Mandelstam, and I. M. Singer, eds.), (New York, NY), pp. 51–96, Springer US, 1985. * [7] F. A. Cachazo and C. Vafa, Type I’ and real algebraic geometry, hep-th/0001029. * [8] I. Shimada and D. Q. Zhang, Classification of extremal elliptic K3 surfaces and fundamental groups of open K3 surfaces, Nagoya Math. J. 161 (2001) 23, [math/0007171]. * [9] S. Chaudhuri, G. Hockney, and J. D. Lykken, Maximally supersymmetric string theories in D $<$ 10, Phys. Rev. Lett. 75 (1995) 2264–2267, [hep-th/9505054]. * [10] S. Chaudhuri and J. Polchinski, Moduli space of CHL strings, Phys. Rev. D 52 (1995) 7168–7173, [hep-th/9506048]. 
* [11] A. Mikhailov, Momentum lattice for CHL string, Nucl. Phys. B 534 (1998) 612–652, [hep-th/9806030]. * [12] H.-C. Kim, H.-C. Tarazi, and C. Vafa, Four-dimensional $\mathbf{\mathcal{N}=4}$ SYM theory and the swampland, Phys. Rev. D 102 (2020), no. 2 026003, [arXiv:1912.06144]. * [13] M. Cvetič, M. Dierigl, L. Lin, and H. Y. Zhang, String Universality and Non-Simply-Connected Gauge Groups in 8d, Phys. Rev. Lett. 125 (2020), no. 21 211602, [arXiv:2008.10605]. * [14] M. Cvetic, M. Dierigl, L. Lin, and H. Y. Zhang, On the Gauge Group Topology of 8d CHL Vacua, arXiv e-prints (July, 2021) arXiv:2107.04031, [arXiv:2107.04031]. * [15] E. Witten, Toroidal compactification without vector structure, JHEP 02 (1998) 006, [hep-th/9712028]. * [16] W. Lerche, C. Schweigert, R. Minasian, and S. Theisen, A Note on the geometry of CHL heterotic strings, Phys. Lett. B 424 (1998) 53–59, [hep-th/9711104]. * [17] J. de Boer, R. Dijkgraaf, K. Hori, A. Keurentjes, J. Morgan, D. R. Morrison, and S. Sethi, Triples, fluxes, and strings, Adv. Theor. Math. Phys. 4 (2002) 995–1186, [hep-th/0103170]. * [18] L. Bhardwaj, D. R. Morrison, Y. Tachikawa, and A. Tomasiello, The frozen phase of F-theory, JHEP 08 (2018) 138, [arXiv:1805.09070]. * [19] A. Dabholkar and J. Park, Strings on orientifolds, Nucl. Phys. B 477 (1996) 701–714, [hep-th/9604178]. * [20] O. Aharony, Z. Komargodski, and A. Patir, The Moduli space and M(atrix) theory of 9d N=1 backgrounds of M/string theory, JHEP 05 (2007) 073, [hep-th/0702195]. * [21] S. Elitzur and A. Giveon, Connection Between Spectra of Nonsupersymmetric Heterotic String Models, Phys. Lett. B 189 (1987) 52–56. * [22] V. G. Kac, Automorphisms of finite order of semisimple Lie algebras, Funkcional. Anal. i Priložen. 3 (1969), no. 3 94–96. * [23] C. Córdova, D. S. Freed, H. T. Lam, and N. Seiberg, Anomalies in the Space of Coupling Constants and Their Dynamical Applications II, SciPost Phys. 8 (2020), no. 1 002, [arXiv:1905.13361]. * [24] I. 
Shimada, On elliptic k3 surfaces., Michigan Mathematical Journal 47 (2000) 423–446, [math/0505140]. * [25] L. Chabrol, F-theory and Heterotic Duality, Weierstrass Models from Wilson lines, Eur. Phys. J. C 80 (2020), no. 10 944, [arXiv:1910.12844]. * [26] Y. Hamada and C. Vafa, 8d Supergravity, Reconstruction of Internal Geometry and the Swampland, arXiv:2104.05724. * [27] M. Bianchi, G. Pradisi, and A. Sagnotti, Toroidal compactification and symmetry breaking in open string theories, Nucl. Phys. B 376 (1992) 365–386. * [28] M. Montero and C. Vafa, Cobordism Conjecture, Anomalies, and the String Lamppost Principle, JHEP 01 (2021) 063, [arXiv:2008.11729]. * [29] M. Bianchi, A Note on toroidal compactifications of the type I superstring and other superstring vacuum configurations with sixteen supercharges, Nucl. Phys. B 528 (1998) 73–94, [hep-th/9711201]. * [30] R. Blumenhagen, D. Lüst, and S. Theisen, Basic concepts of string theory, Springer (2013). * [31] K. Narain, M. Sarmadi, and C. Vafa, Asymmetric Orbifolds, Nucl. Phys. B 288 (1987) 551. * [32] L. J. Dixon, J. A. Harvey, C. Vafa, and E. Witten, Strings on Orbifolds. 2., Nucl. Phys. B 274 (1986) 285–314. * [33] L. J. Dixon, J. A. Harvey, C. Vafa, and E. Witten, Strings on Orbifolds, Nucl. Phys. B 261 (1985) 678–686. * [34] C. Vafa, Modular Invariance and Discrete Torsion on Orbifolds, Nucl. Phys. B 273 (1986) 592–606. * [35] P. Di Francesco, P. Mathieu, and D. Senechal, Conformal Field Theory. Graduate Texts in Contemporary Physics. Springer-Verlag, New York, 1997\. * [36] P. H. Ginsparg, Applied Conformal Field Theory, in Les Houches Summer School in Theoretical Physics: Fields, Strings, Critical Phenomena, pp. 1–168, 9, 1988. hep-th/9108028. * [37] I. B. Frenkel and V. G. Kac, Basic Representations of Affine Lie Algebras and Dual Resonance Models, Invent. Math. 62 (1980) 23–66. * [38] G. Segal, Unitarity Representations of Some Infinite Dimensional Groups, Commun. Math. Phys. 80 (1981) 301–342. * [39] H. 
Kawai, D. C. Lewellen, and S. H. H. Tye, Classification of Closed Fermionic String Models, Phys. Rev. D 34 (1986) 3794. * [40] P. Forgacs, Z. Horvath, L. Palla, and P. Vecsernyes, Higher Level Kac-Moody Representations and Rank Reduction in String Models, Nucl. Phys. B 308 (1988) 477–508. * [41] S. Hamidi and C. Vafa, Interactions on Orbifolds, Nucl. Phys. B 279 (1987) 465–513. * [42] L. J. Dixon, D. Friedan, E. J. Martinec, and S. H. Shenker, The Conformal Field Theory of Orbifolds, Nucl. Phys. B 282 (1987) 13–73. * [43] P. Goddard and D. I. Olive, Kac-Moody and Virasoro Algebras in Relation to Quantum Physics, Int. J. Mod. Phys. A 1 (1986) 303. * [44] P. Goddard, W. Nahm, D. I. Olive, and A. Schwimmer, Vertex Operators for Nonsimply Laced Algebras, Commun. Math. Phys. 107 (1986) 179. * [45] D. Bernard and J. Thierry-Mieg, Level One Representations of the Simple Affine Kac-Moody Algebras in Their Homogeneous Gradations, Commun. Math. Phys. 111 (1987) 181. * [46] M. Kuwahara, N. Ohta, and H. Suzuki, Conformal field theories realized by free fields, Nucl. Phys. B 340 (1990) 448–474.
# Nbias: A Natural Language Processing Framework for Bias Identification in Text

Shaina Raza <EMAIL_ADDRESS>, Muskan Garg <EMAIL_ADDRESS>, Deepak John Reji <EMAIL_ADDRESS>, Syed Raza Bashir <EMAIL_ADDRESS>, Chen Ding <EMAIL_ADDRESS>

Vector Institute for Artificial Intelligence, Toronto, ON, Canada; Artificial Intelligence & Informatics, Mayo Clinic, Rochester, MN, USA; Environmental Resources Management, Bengaluru, Karnataka, India; Toronto Metropolitan University, Toronto, ON, Canada

###### Abstract

Bias in textual data can lead to skewed interpretations and outcomes when the data is used. These biases could perpetuate stereotypes, discrimination, or other forms of unfair treatment. An algorithm trained on biased data may end up making decisions that disproportionately impact a certain group of people. Therefore, it is crucial to detect and remove these biases to ensure the fair and ethical use of data. To this end, we develop a comprehensive and robust framework, Nbias, that consists of four main layers: data, corpus construction, model development, and evaluation. The dataset is constructed by collecting diverse data from various domains, including social media, healthcare, and job hiring portals. We apply a transformer-based token classification model that identifies biased words/phrases through a unique named entity, BIAS. In the evaluation procedure, we incorporate a blend of quantitative and qualitative measures to gauge the effectiveness of our models. We achieve accuracy improvements ranging from 1% to 8% compared to baselines. We are also able to generate a robust understanding of the model's functioning. The proposed approach is applicable to a variety of biases and contributes to the fair and ethical use of textual data.
###### keywords: Bias detection, Dataset, Token classification, Nbias

## 1 Introduction

The recent surge in Natural Language Processing (NLP) applications, encompassing fields from recommendation systems to social justice and employment screening, has sparked a critical concern - the emergence of bias within these systems [1]. Instances of racial and gender bias have been increasingly reported [2], indicating an urgent need for scrutiny. These biases often originate from the training data used in NLP models, and a majority of these large datasets harbor inherent biases. Regrettably, many NLP practitioners lack the necessary awareness or knowledge to effectively identify and address these biases, highlighting a significant gap in the field. Furthermore, there is a notable lack of discussion on data specifics - its origin, generation, and pre-processing - in many NLP publications. Given these circumstances, the importance of addressing biases in NLP applications cannot be overstated. These biases, if unchecked, not only compromise the validity of the models, but can also have detrimental consequences. The objective of this research is to provide insights into the detection of bias in NLP datasets, contributing to the development of more equitable and unbiased Artificial Intelligence (AI) systems. Bias in text data is a pervasive and deeply-rooted issue. The bias in data often stems from cognitive predispositions that influence our dialogues, views, and understanding of information [3]. This bias can be explicit, as in discriminatory language targeting certain racial or ethnic groups [4] on social media. Implicit bias [5], on the other hand, subtly perpetuates prejudice through unintentional language use but is equally harmful. The necessity for unbiased, trustworthy text data has grown across sectors like healthcare [6], social media [4, 7], and recruitment [8].
This data is essential for training NLP models for various downstream tasks, like formulating healthcare diagnoses and treatment plans, handling discriminatory language on social media, and promoting fair recruitment practices. Figure 1 illustrates the complexities of biases in text data in various domains, including job hiring, social media, and healthcare. These biases are primarily conveyed through lexical choices [9] and demand sophisticated detection methods, motivating this research. The primary aim of this study is to advance foundational research on the fairness and reliability of textual data.

Figure 1: Visual Representation of Implicit and Explicit Biases in Textual Data: Examples from Job Hiring, Social Media, and Healthcare.

Although NLP has advanced considerably, state-of-the-art techniques [2, 10, 11] often concentrate on bias detection in specific domains and lack generalizability. To address this, our research offers a generalizable bias detection method shown to be effective across multiple domains. We present Nbias, a comprehensive framework for detecting bias in text data. This involves data preparation where bias-indicative terms are marked using a transformer-based token classification method like Named Entity Recognition (NER). Current NER solutions can manage general [12], biomedical [13], and social media [14] entities, but often neglect BIAS as a separate entity. To address this, we introduce a new entity type, BIAS, to identify biased terms in text data. In this context, bias refers to unfair and often harmful favoritism or prejudice towards a particular group, person, or idea, which can manifest through profanity, unjustified criticism, or discriminatory language. A key contribution of this study is the development of the first comprehensive framework for bias detection in text data.
This framework is based on the latest language model technology and incorporates four crucial layers: data gathering, corpus construction, model development, and rigorous evaluation. The specific contributions of the work are as follows:

1. Development of Annotated Datasets: Acknowledging the scarcity of bias annotations in text-based data, we designed a solution by generating multiple annotated datasets. Our work fills a critical gap in the available resources, thereby providing a solid foundation for future research in the realm of bias detection.
2. Semi-Autonomous Labeling: To alleviate the time-intensive manual annotation process, we pioneered a novel methodology termed "semi-autonomous labeling". This strategy provides a faster and more efficient way of annotating bias-related terms within textual content. This innovative approach has significant implications for improving the speed and accuracy of bias detection.
3. Unique Entity Type - BIAS: In an effort to enhance the precision of bias identification within text, we introduced a unique entity type, BIAS. This new entity type is specifically designed for detecting biased words and phrases within the text data. This has the potential to dramatically improve the process of bias identification and quantification in text-based analysis.
4. Comprehensive Evaluation Process: We subjected our proposed framework to a thorough evaluation process, utilizing both quantitative and qualitative analysis methods. The results confirm the reliability and efficiency of our approach, supporting its application in real-world scenarios. This rigorous evaluation sets a benchmark for assessing the efficacy of bias detection methodologies.

## 2 Related Work

### 2.1 Identifying Bias in NLP

One of the key challenges associated with NLP systems lies in the presence of bias, a manifestation of unfair and systematic discrimination observed in their outcomes [15].
Moreover, past studies [16, 10, 11, 2, 17] have shown that societal and cultural prejudices are deeply embedded within training data. These biases, whether explicit or implicit, can significantly impact the functionality of NLP systems, leading to skewed results and perpetuating existing societal biases. Thus, the detection and mitigation of these biases are crucial to promoting fairness and inclusiveness within NLP systems [7, 11]. Researchers have proposed and implemented various strategies to identify bias, including employing statistical methods to discover patterns of bias within the training data [2, 18]. Under this approach, specific words or phrases that appear to be disproportionately associated with certain demographic groups, such as genders or races, are identified. For example, certain adjectives might be used more frequently in descriptions of women than men [2], or vice versa. The identification and debiasing of such patterns can highlight areas of potential bias, providing a starting point for efforts to eliminate these biases [19]. The field of bias detection in NLP has seen a surge of innovative methods in recent years, primarily leveraging advanced machine learning techniques. One such study combined a hate speech detection system with an explanatory method to identify potential bias [20]. In this method, not only is the system trained to detect instances of hate speech, but it also provides a rationale or explanation for its classification decisions. Another area of research that has attracted considerable attention is the investigation of bias in event detection datasets and models [21]. Event detection tasks involve identifying and classifying real-world events within text data. These tasks can be susceptible to a range of bias-related issues, including data sparsity, labeling, and annotation artifacts.
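The statistical pattern-finding strategy described above, surfacing words disproportionately associated with one group of documents, can be illustrated with a smoothed log-odds comparison. This is only a toy sketch, not the estimator used in the cited works; the corpora and smoothing constant are made up for illustration:

```python
from collections import Counter
import math

def log_odds(corpus_a, corpus_b, alpha=0.5):
    """Smoothed log-odds ratio of each word's rate in corpus_a vs corpus_b.

    Positive scores mark words disproportionately associated with corpus_a,
    negative scores with corpus_b. Illustrative only.
    """
    counts_a = Counter(w for doc in corpus_a for w in doc.lower().split())
    counts_b = Counter(w for doc in corpus_b for w in doc.lower().split())
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)
    scores = {}
    for w in vocab:
        fa = counts_a[w] + alpha  # additive smoothing avoids log(0)
        fb = counts_b[w] + alpha
        scores[w] = (math.log(fa / (total_a + alpha * len(vocab)))
                     - math.log(fb / (total_b + alpha * len(vocab))))
    return scores

# Toy example: descriptions of two groups of candidates
group_a = ["the emotional and fragile candidate", "an emotional speaker"]
group_b = ["the confident and strong candidate", "a confident speaker"]
scores = log_odds(group_a, group_b)
# 'emotional' occurs only in group_a, so its score is positive
```

Words with large positive or negative scores would then be candidates for manual review, mirroring the word-association analyses of [2, 18].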
Additionally, NLP techniques have been employed to address various aspects of bias. For instance, a related study [22] quantified gender bias and sentiment towards political leaders in the news using word embeddings and sentiment analysis. Another work focused on investigating ableist bias in NLP systems, particularly at the intersection of gender, race, and disability [23]. Similarly, a methodology was proposed to eliminate gender bias from word embeddings [24]. Furthermore, marked attribute bias in natural language inference was identified and analyzed, with an evaluation of existing methods for bias mitigation [9]. These studies provide a deep understanding of the social and cultural factors that contribute to bias identification. Another work [25] presents bias analysis in NLP beyond demographic bias, focusing on predicting interpersonal group relationships using fine-grained emotions. A related study [26] evaluates gender bias in NLP research, highlighting the lack of explicit gender theorization. In another work, authors [27] introduce an effective bias-conflicting scoring method and gradient alignment strategy to identify and mitigate dataset biases. Overall, these studies underscore the importance of continuous efforts in identifying and mitigating biases in models to ensure fairness and equity.

### 2.2 Named Entity Recognition (NER)

Named Entity Recognition (NER) is a token classification task in NLP aimed at identifying and classifying named entities, such as individuals, organizations, and locations, within textual data. In the past, many traditional methods have been employed for NER, each with its unique characteristics and benefits.

* 1. Rule-based methods rely on predefined sets of rules to identify named entities [28]. These methods usually employ regular expressions or dictionary-based techniques to extract entities.
Although rule-based methods can be effective for well-defined and specific contexts, their performance can decrease in the face of variability and ambiguity in language usage.
* 2. Supervised learning methods leverage annotated data to train a model for NER [14, 29]. These methods use statistical models such as Support Vector Machines (SVM), Conditional Random Fields (CRF), and others to classify the named entities. The performance of supervised learning methods can be impressive, given sufficient high-quality annotated data.
* 3. Deep learning methods, which are more contemporary approaches, utilize complex architectures like recurrent neural networks (RNNs) and transformer-based language models to extract named entities [30, 13]. These methods have shown promising results in NER tasks, owing to their capacity to capture intricate language patterns and contextual information.

A recent study introduced a contrastive learning-based approach for multimodal NER [31]. This approach leverages both textual and non-textual data to identify and classify named entities, harnessing the complementary information offered by different modalities to improve the model’s performance. Another research work investigated event detection from social media posts, evaluating the effectiveness of a pre-trained NER model followed by graph-based spectral clustering [32]. The study also explored transformer-based methods to weight the edges of the graph for event detection, further refining the detection process. A span-based NER model eliminates the need for label dependency [32]. This approach addresses the issue of cascade label misclassifications, a common challenge in traditional NER models that depend on label sequences. While our work on token classification is inspired by these studies, we identify a notable gap in the literature: the existing seminal work does not recognize BIAS as an entity.
In this work, we detect biased expressions within unstructured texts, designating them under the ‘BIAS’ entity label.

### 2.3 Data Annotation

Data annotation is a crucial task in NLP as it involves labeling and categorizing information to extract valuable insights from text data [33]. By enriching text data with relevant metadata, such as part-of-speech tags, named entity tags, and sentiment tags, data annotation provides contextual information that is essential for subsequent analysis [34]. Quality annotated data enhances model learning, boosting prediction accuracy. In contrast, inadequate annotations impede learning, resulting in subpar performance. Various methods of data annotation cater to different requirements of speed, quality, and computational resources:

* 1. Manual annotation is carried out by human annotators who carefully review and label the data. This method typically yields high-quality results, given the nuanced understanding that humans have of language. However, manual annotation is often time-consuming and labor-intensive, and its feasibility may be limited by the availability of qualified annotators and financial resources [28].
* 2. Semi-automatic annotation combines manual efforts with automated tools to accelerate the annotation process and minimize human error. These tools can range from rule-based systems to pre-trained machine learning models [35]. While semi-automatic annotation can improve efficiency, its accuracy may still depend on the quality of the automated tools and the manual review process.
* 3. Automatic annotation leverages machine learning models and algorithms to annotate text data without human intervention [36]. Although automatic annotation can process vast amounts of data in a relatively short time, its accuracy may be compromised, particularly for complex or ambiguous texts. Therefore, a common practice is to combine automatic annotation with manual review to ensure data quality.
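The semi-automatic idea above can be sketched as a dictionary-based pre-annotation pass whose proposals are flagged for human review. This is a minimal illustration; the `bias_lexicon` contents and the proposal fields are hypothetical, not the paper's implementation:

```python
def pre_annotate(text, bias_lexicon):
    """Dictionary-based pre-annotation: propose BIAS labels for lexicon
    matches and flag every proposal for human review (semi-automatic)."""
    tokens = text.split()
    proposals = []
    for i, tok in enumerate(tokens):
        # normalize: lowercase and strip trailing punctuation before lookup
        if tok.lower().strip(".,") in bias_lexicon:
            proposals.append({"index": i, "token": tok, "label": "BIAS",
                              "needs_review": True})  # humans confirm/reject
    return proposals

# Hypothetical lexicon of bias-indicative terms
lexicon = {"hysterical", "thug", "senile"}
props = pre_annotate("The senile manager gave a hysterical speech.", lexicon)
# two proposals: 'senile' (index 1) and 'hysterical' (index 5)
```

A reviewer would then accept or reject each proposal, combining automated speed with the reliability of manual review discussed above.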
Various strategies have been developed to address these challenges and optimize the annotation process. One study presents a comprehensive comparison of different annotation tools, highlighting their strengths and limitations [37]. Another research work proposes a method for automatically generating high-quality labeled data for NER tasks by leveraging existing knowledge bases [38]. A similar study has developed an annotation framework that combines statistical machine translation and human annotation to create a parallel corpus [39]. Other researchers have investigated methods for improving the reliability and consistency of manual annotations, such as developing guidelines and protocols for annotation tasks [40] or implementing quality control mechanisms to ensure data quality. Ultimately, the choice of annotation method and tools will depend on the specific requirements of a project, such as the desired level of accuracy, the available resources, and the nature of the data being annotated. To this end, we employ a semi-automatic annotation strategy, integrating human proficiency with semi-supervised learning methodologies.

## 3 Proposed Framework for Bias Identification in Texts

In this section, we present Nbias, an innovative framework designed to detect biases within textual data, as illustrated in Figure 2. The Nbias framework is structured into four distinct layers: (i) the data collection layer, (ii) the corpus construction layer, (iii) the model development layer, and (iv) the evaluation layer. Each layer is designed to collaborate seamlessly with the others, providing an effective and comprehensive approach for detecting biases in textual content.

Figure 2: Nbias: A Natural Language Processing Framework for Bias Identification.

### 3.1 Data Layer

The Data Layer serves as the framework’s primary interface with the data for analysis.
It handles data collection, pre-processing, and data consolidation from a variety of sources, such as social media, online articles, and databases. This layer ensures adaptability and high performance for the entire framework.

#### Data Gathering

Our study adopts a methodological data collection approach, incorporating diverse sources from various domains. To analyze biases in medical narratives and healthcare, we include data from two important clinical text databases: MIMIC-III [41] and MACCROBAT [37]. The MIMIC-III dataset is a publicly available database with de-identified health data from over 40,000 ICU patients. It offers rich clinical narratives, including nursing notes, radiology reports, and discharge summaries, enabling a deep understanding of biases in healthcare communication. The textual data were primarily obtained from the NOTEEVENTS table. The MACCROBAT dataset provides valuable pediatric critical care data, including admission notes and medical summaries. It contains 200 original documents along with corresponding annotated versions centered around clinical case reports. To detect bias in news articles and social media streams, we use the BABE (Bias Annotations By Experts) dataset [10]. This dataset includes 3700 articles and tweets, offering a comprehensive perspective on linguistic bias in media and public opinion. It features marked statements, enabling recognition of bias at both granular (word-level) and broader (sentence-level) scopes, covering diverse topics. To examine biases in employment practices, we incorporate the Job Hiring/Recruitment dataset [42], comprising 20,000 job posts with titles, descriptions, and associated tags from various businesses. Each advertisement includes job details and manually assigned tags by recruiters, suggesting jobs to potential candidates with analogous skills.
#### Data Consolidation

After gathering and pre-processing data from various sources, all datasets are harmonized into a single consolidated dataframe. This dataframe includes the following columns:

* 1. Dataset: Specifies the source dataset, such as MIMIC-III, MACCROBAT, Job Hiring, or BABE.
* 2. Text: Contains the actual textual data extracted from the respective datasets, including clinical notes, case reports, job descriptions, or annotated statements.
* 3. Biased Words: Includes the words or phrases identified as biased in the text, crucial for granular bias detection.
* 4. Aspect of Bias: Denotes the specific type or aspect of bias present in the text, categorized by gender, racial, or age biases, to understand the nature of the biases detected.
* 5. Label: Indicates whether the text is biased or non-biased, serving as the target variable for the token classifier and for evaluation purposes.

A sample record in JSON format is shown below:

```json
{
  "Record": {
    "Dataset": "MIMIC-III",
    "Text": "Clinical notes of patient XYZ indicate a history of superficial hypertension due to overly emotional personality.",
    "BiasedWords": "superficial, overly emotional personality",
    "AspectOfBias": "age",
    "Label": "biased"
  }
}
```

In the consolidated dataframe, each row represents a unique sample from the original dataset, supplying information for bias detection and assessment. Further pre-processing is conducted to prepare the data for subsequent layers of the Nbias framework, particularly the NLP model performing token classification.

#### Data Pre-processing

The pre-processing of textual data involves a series of sequential operations to refine and structure the data for machine learning algorithms.
This includes tokenization, which breaks raw text into meaningful tokens (words or subwords) for semantic understanding and subsequent NLP tasks; text cleaning, which removes punctuation, numbers, and special characters and converts text to lowercase to ensure uniformity and clarity; and handling missing values, which identifies and appropriately manages missing data to avoid bias and improve model performance. These pre-processing steps convert raw text into a clean, structured format, enabling the NLP token classification model in the subsequent layer.

### 3.2 Corpus Construction

Our group, consisting of three seasoned professionals from the disciplines of healthcare, computer science, and journalism, was joined by two diligent students to detect and label bias in our dataset. This endeavor is important to ensure the integrity and fairness of any subsequent analysis or research. The foundation for this task was a set of comprehensive guidelines that clearly delineated the concept of bias in this context. Bias, as per these guidelines, was defined as any terminology or phraseology that could potentially provoke prejudiced comprehension or induce stereotyping, in line with most of the literature [11, 7, 10]. The factors from which biases could stem were identified as gender, race, socioeconomic status, age, or disability for this NLP work. Such biases could inadvertently skew the dataset and, consequently, the results derived from it. Thus, the identification and annotation of such biases are of high importance to uphold the accuracy and reliability of our dataset. Highlighting both explicit and implicit biases was emphasized as a critical part of our work.
#### Annotation Scheme

In light of these guidelines, our team proceeded by using a carefully compiled list of terms and phrases, known as “bias-indicative” lexicons. These lexicons provided a comprehensive guide to potential areas where bias could lurk within our dataset. A portion of this list is exhibited in Table 1 for reference. This bias-indicative lexicon served as a navigational tool for our team to identify and mark “BIAS” entities scattered within our textual data. These entities can be individual words or phrases that express or imply bias. This systematic approach ensured that we could account for most biases that exist in the data.

Table 1: Bias dimensions with sample biased words/phrases (abridged for brevity)

Bias Dimension | Biased Words/Phrases
---|---
Gender | ‘hysterical’, ‘emotional’, ‘weak’, ‘bossy’, ‘fragile’, ‘nagging’, ‘man up’, ‘tomboy’
Race | ‘inner city’, ‘illegal alien’, ‘thug’, ‘exotic’, ‘uncivilized’, ‘model minority’, ‘white trash’
Social Status | ‘trailer park’, ‘lazy’, ‘freeloader’, ‘welfare queen’, ‘ghetto’, ‘lazy bum’, ‘filthy rich’
Age | ‘senile’, ‘slow’, ‘old-fashioned’, ‘whippersnapper’, ‘elderly’, ‘young and naive’, ‘generation gap’
Disability | ‘handicapped’, ‘crippled’, ‘invalid’, ‘sufferer’, ‘differently-abled’, ‘victim’
Religion | ‘radical’, ‘terrorist’, ‘infidel’, ‘heathen’, ‘fanatic’, ‘holy roller’
Profession | ‘greedy’, ‘dishonest’, ‘corrupt politician’, ‘crooked lawyer’, ‘greedy CEO’, ‘lazy government worker’
National | ‘unpatriotic’, ‘alien’, ‘foreigner’, ‘outsider’, ‘immigrant’, ‘nationalist’
Education | ‘uneducated’, ‘illiterate’, ‘dropout’, ‘underachiever’, ‘overachiever’, ‘smarty-pants’
Body Size | ‘fat’, ‘slob’, ‘skinny’, ‘lardass’, ‘beanpole’, ‘plus-sized’

We adopted the Inside-Outside-Beginning (IOB) annotation scheme [43] to classify and annotate ‘BIAS’ entities. This technique categorizes tokens in the text as the beginning (B), inside (I), or outside (O) of a bias entity.
‘B’ marks the first token of a bias entity, ‘I’ tokens inside the entity, and ‘O’ tokens not part of any bias entity. This approach ensured consistent and precise annotations, enhancing the reliability and accuracy of our study.

#### Annotation Approach

We leveraged semi-supervised learning methodologies [35, 13, 33] to enhance both efficiency and precision of the annotation process. The integration of BERT (Bidirectional Encoder Representations from Transformers), known for its superior text comprehension abilities, substantially improved our approach. Our annotation process began with the manual tagging of 20% of the complete dataset. This critical yet time-consuming task was strategically limited to a subset of data, ensuring a balance between accuracy and efficiency. The “BIAS” entities were carefully annotated in compliance with our predefined guidelines. This annotated subset was then fed into our BERT model, serving as training data for the token classification task. Once sufficiently trained, the model was assigned the task of predicting “BIAS” entities within the remaining 80% of the data. The extensive dataset was effectively managed by breaking it down into 20% increments, a process we refer to as “semi-autonomous labeling”. Expert reviews cross-verified the “BIAS” entities labeled by the model. This combination of semi-supervised learning with expert validation enabled us to create an annotation process that is both optimized and trustworthy.

#### Working Instance

To demonstrate our annotation scheme, we consider the example sentence: “The overpriced product from the highly successful company was surprisingly popular”. Table 2 presents the corresponding BIO format annotations for this sentence. Assuming the term “overpriced” holds potential bias, it would be tagged as “B” in the BIO scheme, indicating the start of a bias entity. All other tokens not part of this bias entity would be labeled “O”.
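The BIO assignment for this working instance can be reproduced with a small tagging routine that matches known bias phrases and emits IOB labels. This is a minimal sketch, not the paper's code; the bias phrases are assumed to have already been identified (here hard-coded from the example):

```python
def bio_tags(tokens, bias_phrases):
    """Assign IOB tags: B-BIAS at the start of each matched bias phrase,
    I-BIAS inside it, and O everywhere else."""
    tags = ["O"] * len(tokens)
    lowered = [t.lower() for t in tokens]
    for phrase in bias_phrases:
        words = phrase.lower().split()
        # slide a window over the sentence looking for the phrase
        for i in range(len(tokens) - len(words) + 1):
            if lowered[i:i + len(words)] == words:
                tags[i] = "B-BIAS"
                for j in range(i + 1, i + len(words)):
                    tags[j] = "I-BIAS"
    return tags

sentence = ("The overpriced product from the highly successful "
            "company was surprisingly popular .").split()
tags = bio_tags(sentence,
                ["overpriced", "highly successful", "surprisingly popular"])
# -> ['O', 'B-BIAS', 'O', 'O', 'O', 'B-BIAS', 'I-BIAS',
#     'O', 'O', 'B-BIAS', 'I-BIAS', 'O']
```

The resulting tag sequence matches the annotations shown in Table 2.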
This example illustrates the annotation process applied across our dataset, allowing us to quantify and comprehend biases in a consistent manner.

Table 2: Bias Annotation using the BIO scheme

Word | Bias Annotation
---|---
The | O
overpriced | B-BIAS
product | O
from | O
the | O
highly | B-BIAS
successful | I-BIAS
company | O
was | O
surprisingly | B-BIAS
popular | I-BIAS
. | O

#### Resolving Discrepancies

An integral part of our process was addressing discrepancies between annotators, a common challenge in multi-person annotation tasks. We implemented a consensus-driven approach to uphold consistency and reliability in our annotations. Any disagreement was discussed collectively, considering each annotator’s viewpoint and reaching a unified decision based on the predefined annotation guidelines. This process ensured collective agreement on all annotations, minimizing potential bias or error and boosting reliability. This consensus strategy was uniformly applied across all data sources, including the BABE, MIMIC, MACCROBAT, and Job Hiring datasets.

#### FAIR Principles

After reaching consensus on all annotations, we saved the final annotated data in the widely accepted CoNLL-2003 format [44]. This format represents data in a tab-separated manner, associating each word with its part-of-speech tag, chunk tag, and named entity tag. Sentences are separated by an empty line, and each row corresponds to a token with its annotation. The CoNLL-2003 format offers multiple benefits. It ensures compatibility with existing NLP tools and models, facilitating future analysis and model training. Additionally, it promotes collaboration and peer review by allowing easy data sharing and comprehension among researchers. Lastly, it enhances the reproducibility of our study, enabling others to use our data for model validation and findings replication.
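A CoNLL-2003-style file as described above is tab-separated, one token per line, with sentences separated by blank lines. A minimal reader might look as follows; the four-column layout (token, POS, chunk, named entity) follows the standard format, and the sample data is illustrative:

```python
def read_conll(text):
    """Parse CoNLL-2003-style text: token<TAB>POS<TAB>chunk<TAB>NE per line,
    with sentences separated by blank lines."""
    sentences, current = [], []
    for line in text.splitlines():
        if not line.strip():                 # blank line ends a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, pos, chunk, ne = line.split("\t")
        current.append({"token": token, "pos": pos, "chunk": chunk, "ne": ne})
    if current:                              # flush a trailing sentence
        sentences.append(current)
    return sentences

sample = "The\tDT\tB-NP\tO\noverpriced\tJJ\tI-NP\tB-BIAS\nproduct\tNN\tI-NP\tO\n\n"
parsed = read_conll(sample)
print(len(parsed), parsed[0][1]["ne"])
```

Any tool that understands CoNLL-2003 can consume the annotated data in the same way.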
By adhering to the FAIR principles, our dataset is made Findable, Accessible, Interoperable, and Reusable, enhancing the transparency, accessibility, and reliability of our research.

#### Inter-Annotator Agreement

In our research, we placed considerable emphasis on establishing rigorous protocols to guarantee the reliability and consistency of the data annotations. Two independent reviewers were assigned to carefully assess the annotated data, promoting objective evaluation devoid of influence from the initial annotators. Rather than relying on subjective judgment, we quantified their agreement through Cohen’s Kappa coefficient, a statistical measure common in categorical data studies that accounts for potential chance agreement. Scores over 0.6 denote “substantial” agreement and those above 0.8 represent “almost perfect” agreement. Our reviewers attained a Cohen’s Kappa of 0.78, demonstrating high concordance on the annotations. This high score substantiates the uniformity, consistency, and quality of our annotations. Moreover, it demonstrates the objectivity of the assessment process, highlighting the robustness of our annotated data. This, in turn, enhances the trustworthiness of prospective findings drawn from this dataset.

### 3.3 Model Development Layer

In this layer, we leverage the BERT language model for token classification and adapt it for the task of NER. The choice of BERT is motivated by its powerful capability of understanding both the left and right context of a word, and its effectiveness in recognizing and classifying multi-word phrases. These features make it particularly well suited for the complex task of bias detection and annotation in our text data. The advantage of using BERT in Nbias model development lies in its more effective token-level bias identification.
Nbias incorporates enhancements to the standard BERT architecture, such as modifications to the attention mechanism, loss function, and fine-tuning approaches, specifically tailored to better capture biases in complex text data. The subsequent section provides a detailed explanation of the model development. The token classifier architecture (shown as the middle component of Figure 2) consists of a multi-layer bidirectional transformer encoder that captures contextual information from both directions. Given an input sequence $X=\\{x_{1},x_{2},...,x_{n}\\}$, the words are tokenized and embedded as shown in Equation (1):

$E(X)=\\{e(x_{1}),e(x_{2}),\ldots,e(x_{n})\\}$ (1)

where $E(X)$ represents the set of embedded representations for an input sequence $X$, $X$ consists of $n$ words $\\{x_{1},x_{2},...,x_{n}\\}$, and $e(x_{i})$ is the embedding function that maps each word $x_{i}$ from the input sequence to a continuous vector representation. The embedded input sequence is then passed through the transformer layers. BERT employs self-attention mechanisms to weigh the importance of different words in the input sequence, enabling it to better identify and understand complex relationships between words. The self-attention score $Att$ between word $i$ and word $j$ is computed as shown in Equation (2):

$Att(i,j)=\text{Softmax}\left(\frac{Q(e(x_{i}))\cdot K(e(x_{j}))^{T}}{\sqrt{d_{k}}}\right)$ (2)

where $Q$ and $K$ are the query and key matrices, and $d_{k}$ is the key dimension. Following the transformer encoder, the output after applying self-attention and passing through the bidirectional transformer encoder is represented as shown in Equation (3):

$R(X)=\\{r(x_{1}),r(x_{2}),...,r(x_{n})\\}$ (3)

where $R(X)$ represents the set of contextualized representations for an input sequence $X$.
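The scaled dot-product attention of Equation (2) can be sketched with NumPy. The dimensions and random projection matrices below are illustrative assumptions, not BERT's actual weights:

```python
import numpy as np

def self_attention(E, Wq, Wk, Wv):
    """Scaled dot-product self-attention over embedded tokens E (n x d)."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # Att(i, j) before softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # contextualized outputs

rng = np.random.default_rng(0)
n, d = 4, 8                                              # 4 tokens, embedding dim 8
E = rng.normal(size=(n, d))                              # stand-in for E(X)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(E, Wq, Wk, Wv)
print(out.shape)                                         # one contextualized vector per token
```

Each output row mixes information from all tokens in proportion to the softmax-normalized attention scores, which is what lets the encoder contextualize every token in both directions.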
The function $r(x_{i})$ is the representation function that maps each word $x_{i}$ from the input sequence to a continuous vector representation after passing through the transformer encoder. A linear layer with a softmax activation function is added for entity classification. This layer transforms the representations generated by the transformer encoder into a probability distribution over the possible output classes. To simplify our annotation and prediction task, we have merged the ‘B’ (Beginning) and ‘I’ (Inside) tags from the standard BIO tagging scheme into a single ‘BIAS’ tag. The ‘BIAS’ tag represents any part of a bias entity, while ‘O’ represents a non-entity. The probability distribution is calculated as shown in Equation (4):

$P(y|x)=\text{Softmax}(W\cdot c(x)+b)$ (4)

where $W$ is the weight matrix, $b$ is the bias vector, $c(x)$ is the contextualized representation of token $x$ produced by the encoder, and $P(y|x)$ is the probability distribution over the output classes ‘BIAS’ and ‘O’. The final output of the model indicates the presence of biased words or phrases within the input sequence by labeling them as ‘BIAS’. This simplification enables our model to recognize biased phrases more effectively, without differentiating between their start or continuation. We show an example of the model output on a sample from the test set in Figure 3.

Figure 3: BIAS Annotation on a Piece of Text

The pseudocode for the Nbias model development is given in Algorithm 1. As seen in Algorithm 1, the Nbias model, built on BERT, tokenizes and contextualizes input text using transformer encoders. Through self-attention mechanisms, it weighs relationships between words and classifies each token as biased or unbiased using a softmax-activated linear layer.
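The classification head of Equation (4) is a linear map followed by a softmax over the two merged labels. A minimal sketch follows; the weights here are random placeholders standing in for the trained layer:

```python
import numpy as np

LABELS = ["O", "BIAS"]                          # merged two-label scheme

def classify_tokens(R, W, b):
    """Apply P(y|x) = softmax(W · c(x) + b) to each contextualized vector in R."""
    logits = R @ W.T + b                        # (n_tokens, n_labels)
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    labels = [LABELS[i] for i in probs.argmax(axis=-1)]
    return labels, probs

rng = np.random.default_rng(1)
R = rng.normal(size=(5, 8))                     # 5 tokens, hidden size 8
W = rng.normal(size=(2, 8))                     # one weight row per label
b = np.zeros(2)
labels, probs = classify_tokens(R, W, b)
print(labels)                                   # predicted label per token
```

Because ‘B’ and ‘I’ are merged, the argmax over just two classes directly yields the biased/unbiased decision for each token.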
Algorithm 1 Nbias Model Development

1: Text sequence $X=\\{x_{1},x_{2},\dots,x_{n}\\}$
2: Initialize BERT with token-classification architecture
3: Tokenize input sequence $X$
4: Embed input sequence: $E(X)=\\{e(x_{1}),e(x_{2}),\dots,e(x_{n})\\}$
5: for each token in $E(X)$ do
6:  Compute self-attention:
7:  $Att(i,j)=\text{Softmax}\left(\frac{Q(e(x_{i}))\times K(e(x_{j}))^{T}}{\sqrt{d_{k}}}\right)$
8: end for
9: Pass $E(X)$ through bidirectional transformer encoder: $R(X)=\\{r(x_{1}),r(x_{2}),\dots,r(x_{n})\\}$
10: for each token representation in $R(X)$ do
11:  Compute probability distribution: $P(y|x_{i})=\text{Softmax}(W\times c(x_{i})+b)$
12: end for
13: for each token in $X$ do
14:  if probability corresponds to BIAS then
15:   label as ‘BIAS’
16:  else
17:   label as ‘O’
18:  end if
19: end for
20: return the labeled sequence

### 3.4 Evaluation Layer

The evaluation layer plays a critical role in assessing the performance of our model. This layer encompasses both quantitative and qualitative evaluation methods, providing a comprehensive perspective on the model’s performance.

#### Quantitative Evaluation

The quantitative evaluation is statistical in nature and uses various metrics to numerically measure the model’s performance. Metrics such as F1-score, AUC-ROC, and accuracy are commonly used in this context. The F1-score balances precision (the ability of the model to correctly identify positive instances) and recall (the ability of the model to identify all relevant instances), providing a single measure of the model’s overall performance.

#### Qualitative Evaluations

In addition to these numerical measures, we also conduct a qualitative evaluation. This type of evaluation concerns the quality, relevance, and usefulness of the model’s output. It involves an expert review of a subset of the model’s predictions to measure how well the model performs in practical terms.
Factors such as the model’s ability to correctly identify complex or subtle bias entities, and the interpretability of its output, are examined in the qualitative evaluation. In our study, we focus on qualitative evaluations, specifically assessing model robustness and conducting perpetuation tests. Our robustness analysis [45] explores the model’s stability under various conditions, including adversarial inputs and data variations. Perpetuation tests [46] help us understand whether the model inadvertently reinforces or introduces societal biases. We also conduct a human evaluation to assess the model’s performance in real-world conditions.

## 4 Experimental Setup

In this section, we detail the settings, evaluation metrics, baselines, and hyperparameters of our experimental design for replication and validation.

### 4.1 Dataset

Our study uses diverse datasets: MIMIC-III [41], MACCROBAT [37], BABE [10], and Job Hiring [42]. After annotation (detailed in Sections 3.1 and 3.2), each dataset is split into training, validation, and test sets using an 80-10-10 ratio. This division allows for efficient model training, validation, and testing. Modifications are made for the MACCROBAT dataset to maintain balance despite its limited number of entries. Table 3 presents the detailed dataset information.

Table 3: Dataset Details with Training (train), Development (dev), and Test (test) Sets and Total Samples

Data Source | Domain | train | dev | test | Total
---|---|---|---|---|---
BABE | News/Social Media | 15,300 | 1,700 | 1,700 | 18,700
MIMIC (Clinical) | Healthcare | 1,800 | 200 | 200 | 2,200
MACCROBAT | Healthcare | 160 | – | 40 | 200
Job Hiring | Occupational | 16,000 | 2,000 | 2,000 | 20,000
Total | | 33,260 | 3,900 | 3,940 | 41,100

### 4.2 Hardware Settings

The experiments conducted in this study were performed on a dedicated research server with the following hardware configuration.
The server was equipped with an Intel Xeon CPU E5-2690 v4 running at 2.60 GHz, 128 GB of RAM, and an NVIDIA GeForce RTX 3090 GPU. The operating system installed on the server was Ubuntu 18.04 LTS. These hardware settings provided substantial computational power, enabling us to efficiently execute resource-intensive tasks such as training complex machine learning algorithms and deep learning models.

### 4.3 Time Measurements

Time measurements during the training, validation, and testing phases were recorded for our models across the diverse datasets. Utilizing our hardware setup, we ensured peak performance with minimal hardware-induced delays. Specifically, the BABE dataset took 4.5 hours for training, with 30 minutes each for validation and testing. The MIMIC dataset required 2 hours of training and 10 minutes for both validation and testing. For the smaller MACCROBAT dataset, training was completed in 0.5 hours, with validation and testing taking 5 minutes each. Lastly, the Job Hiring dataset took the longest, at 5 hours for training and 40 minutes each for validation and testing.

### 4.4 Baselines

To compare token-classification performance, we consider a range of diverse baseline approaches. These include BiLSTM-CRF, which combines BiLSTM and CRF [29]; BERT-CRF, a blend of BERT and CRF [47]; RoBERTa, an optimized variant of BERT [48]; BART-NER, an application of the BART model for NER [49]; CNN-NER, a CNN-based method for capturing named entities [50]; and TENER, an NER model that utilizes an adapted Transformer Encoder for character- and word-level features [51]. We also consider few-shot NER models [52], model-agnostic meta-learning (MAML) [53], and a zero-shot named entity typing (NET) [54] model. The selected baselines represent a collection of different architectures, such as BiLSTM, BERT, RoBERTa, BART, CNN, and the Transformer Encoder, each combined with either a CRF or an NER task head.
These models were chosen because they represent the state of the art and constitute a robust set of baselines for comparing token-classification performance.

### 4.5 Hyperparameter Settings

The chosen hyperparameters for our token classifier are provided in Table 4.

Table 4: Hyperparameter Settings and Training Details

Parameter/Method | Details/Value
---|---
Model | bert-base-uncased
Optimizer | Adam
Learning Rate | $1\times 10^{-2}$
Momentum | 0.5
Weight Decay | 0.01
Epochs | 5
Batch Sizes | 4, 8, 16, 32, 64
Batch Size (training) | 16
Input Sequence Length | 128 subword tokens
Dropout | Applied on input and hidden layers
Convergence Criteria | Negligible decrease in validation loss
Validation Strategy | Hold-out
Early Stopping | Implemented
Training Environment | Google Colab Pro
Hardware | NVIDIA Tesla T4 GPU
$\beta_{1}$ | 0.9
$\beta_{2}$ | 0.999
Epsilon | $1\times 10^{-8}$
Hidden Units | (Leaky) Rectified Linear Units (ReLUs)

In the comparative experiments with the baselines, the models were optimized using a learning rate between 1e-5 and 5e-5 over several training epochs, typically 3 to 10. The batch size varied between 16 and 64, based on memory constraints, and the input sequence length was limited to 512 tokens. To prevent overfitting, we used regularization techniques such as dropout and weight decay. We generally employed the Adam or AdamW optimizer. All hyperparameters were fine-tuned according to the specific task requirements and dataset characteristics.

## 5 Results

### 5.1 Overall Performance

Table 5 presents a comprehensive comparison of our proposed method, Nbias, with various baseline models on the token-classification task across three distinct categories: Social Media Bias, Health-related, and Job Hiring. Due to space constraints, we report only the F1-scores in this overall comparison; the F1-score, the harmonic mean of precision and recall, is a commonly used single metric that combines both.
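The F1-score combines precision and recall as their harmonic mean; computed from raw prediction counts, it reads as follows (a generic sketch, not the exact evaluation script):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and their harmonic mean (F1) from counts of
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts: 90 true positives, 10 false positives, 20 false negatives.
p, r, f1 = precision_recall_f1(90, 10, 20)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.9 0.818 0.857
```

Because the harmonic mean penalizes imbalance, a model cannot achieve a high F1-score by maximizing precision or recall alone.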
The F1-scores are expressed as percentages, accompanied by the standard deviation (±) to indicate the variability in scores across five separate runs. The highest F1-score in each category is highlighted in bold to easily identify the best-performing model.

Table 5: Comparison of Token Classification Models on Three Different Categories: Social Media Bias, Health-related, and Occupational. The performance metric used is the F1-score (harmonic mean of precision and recall), expressed as a percentage, accompanied by the standard deviation (±) indicating the variability in scores across 5 runs. The best score is highlighted in bold.

Model | Social Media | Health-related | Job Hiring
---|---|---|---
Rule-based [55] | 65.4 $\pm$ 1.4 | 70.2 $\pm$ 0.7 | 72.3 $\pm$ 0.9
BiLSTM-CRF [29] | 72.6 $\pm$ 1.0 | 75.8 $\pm$ 0.9 | 78.1 $\pm$ 0.8
BERT-CRF [47] | 80.7 $\pm$ 1.3 | 82.3 $\pm$ 0.7 | 83.5 $\pm$ 0.6
RoBERTa [48] | 82.8 $\pm$ 0.7 | 83.6 $\pm$ 0.9 | 80.5 $\pm$ 0.5
CNN-NER [50] | 76.2 $\pm$ 1.1 | 78.1 $\pm$ 0.0 | 73.4 $\pm$ 0.9
BART-NER [49] | 84.7 $\pm$ 0.9 | 84.2 $\pm$ 0.7 | 82.0 $\pm$ 0.8
TENER [51] | 85.7 $\pm$ 0.5 | 86.4 $\pm$ 0.6 | 85.1 $\pm$ 0.5
Few-shot NER [52] | 70.2 $\pm$ 3.4 | 73.1 $\pm$ 2.9 | 69.2 $\pm$ 1.7
NET [54] | 70.1 $\pm$ 1.4 | 72.2 $\pm$ 1.2 | 67.1 $\pm$ 1.2
MAML [53] | 62.1 $\pm$ 1.8 | 65.3 $\pm$ 1.2 | 60.5 $\pm$ 2.5
Nbias | 86.9 $\pm$ 0.2 | 89.1 $\pm$ 0.8 | 90.3 $\pm$ 0.4

The results in Table 5 demonstrate the strong performance of the Nbias model in all tested scenarios. In the Social Media category, the Nbias model achieved an F1-score of 86.9% with a small deviation of ± 0.2. In the Health category, it performs even better, with an F1-score of 89.1% and a deviation of ± 0.8, meaning the scores ranged between 88.3% and 89.9%. In the Job Hiring category, the model achieved an F1-score of 90.3%, with scores ranging between 89.9% and 90.7%. These small deviations show that the model’s performance is consistent across runs.
Among the baseline models, the TENER model performs best. The BERT-CRF and RoBERTa models also exhibit good performance. Both the CNN-NER and BART-NER models display satisfactory performance, although they come behind the Nbias and TENER models. By contrast, the Rule-based model underperforms compared with the transformer- and BiLSTM-based baselines. The Few-shot NER, NET, and MAML models showed only average performance. Even though few-shot models can work well with just a few examples, the results show there is room for improvement, which could be achieved by creating custom training methods or tasks specific to a given domain. Overall, the Nbias model emerges as the most effective across all categories. While other BERT-based baselines may also attempt bias identification, Nbias outperforms them due to its custom-designed model features optimized for this specific purpose. The performance gain may manifest as better debiasing results, increased fairness in predictions, or improved overall model accuracy in scenarios where bias reduction is critical. These findings provide valuable insights for the future development and selection of token classification models across different domains.

#### Accuracy Analysis of Token Classification Models

Figure 4 shows how the different models perform in classifying tokens over the different test sets.

Figure 4: Comparative Accuracy Scores of Token Classification Models across Three Different Categories: Social Media, Health-related, and Job Hiring for Bias Text Identification

As depicted in Figure 4, the Nbias model exhibits superior performance, achieving accuracy scores of 88.4% on Social Media Bias, 90.6% on Health-related texts, and 91.8% on Job Hiring texts. Following closely are the TENER and BART-NER models in terms of accuracy.
While other models such as RoBERTa, BERT-CRF, BiLSTM-CRF, and CNN-NER also demonstrate commendable performance, they fall short of the scores attained by Nbias, TENER, and BART-NER in this experiment. Models like Few-shot NER, NET, and MAML, although not scoring the best, exhibit promising potential. Lastly, the Rule-based model, which relies on predefined rules rather than learning from the data, still manages to perform above 60%. Overall, these results underscore the enhanced capability of the latest transformer-based models, like BART and TENER, to extract contextual information from text data. Moreover, they affirm that a model carefully designed for bias detection, such as ours, can indeed yield highly effective results.

### 5.2 Performance Analysis using ROC Curves and AUC Scores

(a) Models applied to Social Media Data. (b) Models applied to Health-related Data.

Figure 5: ROC Curves and AUC Scores for Various Datasets (continued on next page)

(a) Models applied to Job Hiring Data.

Figure 6: ROC Curves and AUC Scores for Various Datasets.

In this study, we compare the performance of the different models on token-classification tasks using Receiver Operating Characteristic (ROC) curves and the corresponding Area Under the Curve (AUC) scores on the Social Media, Health-related, and Job Hiring data. Figures 5(a), 5(b), and 6(a) display the AUC-ROC curves for all the baseline models and our Nbias token classification model. The results in Figure 6 show the superior capability of the Nbias model, as evidenced by its better True Positive Rates at minimal False Positive Rates. While models such as the Rule-based, RoBERTa, and zero- and few-shot NER models demonstrated low-to-moderate performance, others such as TENER, BiLSTM-CRF, CNN-NER, and BART-NER yielded commendable results, particularly in the early segments of their respective curves. These models also performed better on the health and job hiring datasets specifically.
Overall, these findings suggest that some models excel in specific domains. This could be attributed to several factors, including but not limited to:

1. Training on analogous data points that makes the model more aware of the specific features of a domain.
2. The architecture of the model being inherently better suited to certain types of data.
3. Hyperparameter choices that resonate better with specific data characteristics.
4. Preprocessing and feature engineering steps that align closely with the requirements of a domain.

Thus, choosing the optimal model for a specific domain is important for achieving the best performance.

### 5.3 Confusion Matrix and Error Analysis

We present the results of the BIAS entity identification task for “Health-related Bias”, “Political Bias”, and “Occupational Bias” using Nbias. The model’s performance is evaluated based on confusion matrices and error analysis (Table 6), providing insights into the model’s strengths and limitations.

Table 6: Confusion Matrix and Error Analysis for BIAS Entity Identification using Nbias: The table presents the True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN) for various bias types identified in the dataset, along with the Precision in percentage. The categorization of biases is based on a predefined analysis of the content and context in which they appear.
Dataset | Bias types | TP | FP | TN | FN | Precision
---|---|---|---|---|---|---
Health | healthy lifestyle | 98 | 12 | 145 | 5 | 89.1%
Health | medical advancements | 85 | 56 | 142 | 15 | 60.2%
Health | research findings | 98 | 19 | 138 | 2 | 83.7%
Social Media | biased news source | 112 | 10 | 157 | 8 | 91.8%
Social Media | political affiliation | 95 | 7 | 162 | 16 | 93.1%
Social Media | political agenda | 86 | 14 | 154 | 16 | 86.0%
Occupational | gender bias in hiring | 63 | 5 | 172 | 8 | 92.7%
Occupational | ethnicity bias in hiring | 49 | 4 | 173 | 11 | 92.5%
Occupational | age bias in hiring | 45 | 8 | 170 | 13 | 84.2%

Health-related Bias: Nbias exhibits strong performance in identifying “healthy lifestyle” entities, achieving a precision of 89.1%. However, it missed 5 instances of this entity, leading to false negatives. For “medical advancements”, the precision is lower, at 60.2%, and the model produced 56 false positives, misclassifying non-biased terms as biased. On the other hand, the model achieved a relatively high precision of 83.7% for “research findings”, yet it missed 2 instances, resulting in false negatives. These findings suggest that the model performs well on more explicit health-related biases, but subtle biases and rare terms may pose challenges.

Social Media: Nbias demonstrates high precision in identifying “biased news source” entities (91.8%), correctly capturing biased sources. However, it produced a few false positives, misclassifying some non-biased sources as biased. For “political affiliation” entities, the precision is 93.1%, indicating reliable performance; still, some false positives occurred, with neutral statements classified as biased based on political association. For “political agenda” entities, the model achieved a precision of 86.0%, although it misclassified a few non-biased mentions as biased. These results highlight the model’s ability to detect explicit political biases but also suggest room for improvement in handling ambiguous language.
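The precision column in Table 6 follows precision = TP / (TP + FP); a quick check of the “healthy lifestyle” row:

```python
def precision(tp, fp):
    """Precision = TP / (TP + FP), as reported in Table 6."""
    return tp / (tp + fp)

# 'healthy lifestyle' row of Table 6: TP = 98, FP = 12.
print(f"{100 * precision(98, 12):.1f}%")  # 89.1%
```

The same computation reproduces the other rows from their TP and FP counts; recall would additionally use the FN column as TP / (TP + FN).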
Occupational Bias: In the “Occupational Bias” category, Nbias exhibits strong precision in identifying “gender bias in hiring” entities (92.7%), effectively capturing biased terms. However, it produced a few false positives, misclassifying neutral statements as biased based on gender. For “ethnicity bias in hiring” entities, the precision is 92.5%, indicating accurate identification; still, a few false positives occurred, misclassifying non-biased mentions as biased. The model achieved a precision of 84.2% for “age bias in hiring” entities. However, some neutral statements were misclassified as biased, revealing areas for enhancement. These findings suggest that the model can effectively identify biased occupational entities, but improvements are needed to reduce false positives.

Actionable Insights:

* 1. The proposed NER model demonstrates robust precision in identifying biased entities across all three categories with clear biases.
* 2. Addressing false positives can enhance the model’s discrimination between biased and non-biased entities. Fine-tuning the model to better understand nuanced language can be beneficial.
* 3. Augmenting the training data with diverse instances of subtly biased entities can improve recall and help detect rare biased terms.
* 4. Context-aware models, such as transformer-based models, might help tackle challenges arising from sarcasm and subtle biases more effectively.

Overall, these results provide valuable insights into the strengths of Nbias and the areas for improvement in identifying biased entities across different categories.

### 5.4 Ablation Study on the Nbias Model

To understand the importance of the different components of the Nbias model, we conducted an ablation study on the combined dataframe from all the data sources. We systematically removed or replaced components of the model to observe their influence on bias-detection performance. The study assessed the following model variants:

* 1.
Nbias-Full: Original model with all features intact.
* 2. Nbias-NoAttn: Exclusion of the self-attention mechanism.
* 3. Nbias-GloVe: GloVe embeddings replace the BERT defaults.
* 4. Nbias-HalfBERT: A version with half the transformer layers.
* 5. Nbias-RandInit: Trained without leveraging the pre-trained BERT weights.

Table 7 illustrates the outcomes of the ablation study for the F1-score, precision, and recall metrics on the combined dataframe.

Table 7: Ablation Study Results for Nbias. Bold means best score

Model Variant | Precision (%) | Recall (%) | F1-Score (%)
---|---|---|---
Nbias-Full | 94.8 | 95.6 | 95.2
Nbias-NoAttn | 89.5 | 91.0 | 90.1
Nbias-GloVe | 93.0 | 92.6 | 92.8
Nbias-HalfBERT | 93.7 | 93.3 | 93.5
Nbias-RandInit | 87.8 | 89.2 | 88.4

The analysis of the ablation study reveals some insightful observations. From Table 7, it is evident that the fully featured Nbias-Full model outperforms all other variants, with the highest F1-score of 95.2%, highlighting the combined effect of all its components working together. The significant performance drop observed in the Nbias-NoAttn model, which omits the self-attention mechanism, shows the role that self-attention plays in capturing the contextual relationships in the text needed for effective bias detection. Additionally, the slight performance reduction in the Nbias-GloVe model, which uses GloVe embeddings instead of the default BERT embeddings, suggests that BERT embeddings are better suited to this specific task, possibly because they are trained on a more diverse and comprehensive corpus. Similarly, the small performance variation in the Nbias-HalfBERT model indicates that the model can achieve almost equivalent performance with half the transformer layers, which may be a crucial consideration in resource-constrained environments.
However, it is also worth noting that this minimal reduction might come at the cost of missing some complexities in the data that can only be captured with a deeper network. Lastly, the reduced performance of the Nbias-RandInit model, which does not leverage pre-trained BERT weights, highlights the significant benefits of transfer learning and the importance of initializing the model with pre-trained weights to achieve optimal performance. This is particularly important because it reduces the amount of labeled data required and leverages the knowledge gained from pre-training on a large corpus. In conclusion, the Nbias model, with its full set of features, proves to be the most effective model for bias detection.

### 5.5 Robustness Testing

Robustness testing is a type of evaluation used to assess the performance and resilience of a system or model against various inputs or scenarios [56]. In our testing, we programmatically measure the robustness of Nbias along four key factors: case sensitivity, semantics, context, and spelling. In Table 8, we showcase the robustness testing on a sample of 5 examples (for brevity) from the test set. The results of the robustness testing in Table 8 are summarized as follows:

* 1. Spelling: The model partially passed the test. It was able to map “superrior” to “superior” but failed with “un like”.
* 2. Semantics: The model successfully mapped “females” to “women” and “exaggerate” to “amplify”, thus passing the test.
* 3. Case Sensitivity and Semantics: The model was case insensitive and identified synonyms, mapping “ASIANS” to “Asians” and “more prone” to “higher susceptibility”, passing the test.
* 4. Semantics: The model passed another semantics test by mapping “females” to “women” and “not fit” to “not suitable”.
* 5.
Context: The model partially passed the test by mapping “senior” to “older” and “resist” to “resistant”, but it failed to capture the change in severity from “severely lack” to “lack”.

Overall, the model showed strengths in identifying synonyms, being case insensitive, and recognizing some contextual changes, but had limitations in identifying misspellings and capturing changes in severity. We also conducted robustness testing on a test set of 300 samples manually prepared by our team of 5 members to evaluate the robustness of our system; the results are reported in Figure 8.

Table 8: Robustness Test on Nbias Predictions: evaluating the model’s performance in identifying misspellings, synonyms, case sensitivity, semantics, and context. Assessment based on Pass: ✓, Partial: ⚫, Fail: ✗

Original: Conservative views are superior, unlike liberal ideas.
Test Case: Conservative views are superrior, un like liberal ideas.
Expected: Identifies misspellings. Actual: Maps “superior” to “superrior”, fails with “un like”. Test Type: Spelling. Pass: ⚫

Original: Women tend to exaggerate pain due to their emotional nature.
Test Case: Females tend to amplify pain due to their emotional nature.
Expected: Identifies synonyms. Actual: Correctly maps “females” to “women” and “exaggerate” to “amplify”. Test Type: Semantics. Pass: ✓

Original: Asians are more prone to diabetes due to lifestyles.
Test Case: ASIANS have a higher susceptibility to diabetes, due to lifestyle.
Expected: Case insensitive and identifies synonyms. Actual: Maps “ASIANS” to “Asians” and “more prone” to “higher susceptibility”. Test Type: Case Sensitivity, Semantics. Pass: ✓

Original: Women are not fit for leadership roles.
Test Case: Females are not suitable for leadership roles.
Expected: Identifies synonyms. Actual: Maps “females” to “women” and “not fit” to “not suitable”. Test Type: Semantics. Pass: ✓

Original: Senior employees resist change and severely lack technological skills.
Test Case: Older employees are resistant to change and lack technological skills.
Expected: Recognizes context and severity change. Actual: Maps “senior” to “older” and “resist” to “resistant”, fails to capture the change in severity. Test Type: Context. Pass: ⚫

(a) Case Sensitivity Robustness Test. (b) Context Robustness Test.

Figure 7: Robustness Test (continued on next page)

(a) Semantics Robustness Test. (b) Spellings Robustness Test.

Figure 8: Robustness Test: each plot illustrates the performance of the Nbias model across 5 development runs of the robustness tests: spelling, semantics, case sensitivity, and context. The x-axis represents the different test instances used in each run, while the y-axis displays the corresponding scores, referred to as the ‘Pass Count’ achieved by the model on these tests.

As observed in Figure 8, the Nbias model improves over time in all four robustness test categories: spelling, semantics, case sensitivity, and context. This is evident as the scores increase with each successive run, demonstrating the model’s adaptability and the improvement in its learning approach. In spelling, the model begins with a score of 70 and ends at 90 in the fifth run. A similar upward trend is seen in semantics, starting from a score of 72 and concluding at 93 in the final run. The model also consistently improves in the case-sensitivity test, beginning at 80 and finishing at 95. The context scores also progress positively, from an initial score of 70 to a final score of 90. The Nbias model shows the highest performance in case sensitivity, reaching a score of 95 in the final run. It also performs well in the semantics category, achieving a score of 93. However, the model’s performance in the context and spelling categories is slightly lower. While these are still strong results, there may be room for further optimization in these areas to achieve results comparable to the case-sensitivity and semantics tests.
### 5.6 Perpetuation Bias Tests for Bias Detection

To assess whether our model unintentionally perpetuates biases present in its training data, we conducted perpetuation bias tests. These tests evaluated the performance of our model in identifying and labeling potentially biased words or phrases as BIAS entities. In our testing approach, we curated a diverse list of terms and phrases representing various social groups and contexts prone to bias. This list included phrases like “elderly person”, “young woman”, “African immigrant”, “gay man” and “blue-collar worker”. We inserted these phrases into neutral sentences to evaluate the model’s perception of potential bias. Upon processing the sentences through our model, we observed the following pattern:

The person was described as a [Phrase]

— Ethnicity:
  (a) African immigrant (Flagged: 25 out of 30 times, 83%)
  (b) Asian immigrant (Flagged: 20 out of 30 times, 67%)
  (c) European immigrant (Flagged: 10 out of 30 times, 33%)
— Gender:
  (a) young woman (Flagged: 10 out of 30 times, 33%)
  (b) young man (Flagged: 5 out of 30 times, 17%)
  (c) elderly man (Flagged: 5 out of 30 times, 17%)
— Occupation:
  (a) blue-collar worker (Flagged: 15 out of 30 times, 50%)
  (b) white-collar worker (Flagged: 8 out of 30 times, 27%)
— Age:
  (a) elderly person (Flagged: 5 out of 30 times, 17%)
  (b) young adult (Flagged: 3 out of 30 times, 10%)

The provided data showcases the results of a bias detection test on a language model. Various phrases associated with different demographics (ethnicity, gender, occupation, and age) were inserted into a neutral sentence, and the model flagged certain phrases as “BIAS ENTITY” with varying frequencies. Specifically, the phrases “African immigrant” and “Asian immigrant” were flagged 83% and 67% of the time, respectively, whereas “European immigrant” was only flagged 33% of the time.
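The bookkeeping behind this test is straightforward: insert each phrase into the neutral template for a fixed number of runs and record the fraction of runs in which the model emits a BIAS entity. The sketch below replaces the trained model with a stub (`model_flags`) driven by the counts reported above; in practice that function would wrap the Nbias classifier:

```python
# Perpetuation-bias bookkeeping: each phrase is inserted into the neutral
# template 30 times; we record how often the model flags it as BIAS.
# The model is a stub here (driven by the paper's reported counts);
# `model_flags` would wrap the real token classifier in practice.

TEMPLATE = "The person was described as a {phrase}"

# (group, phrase) -> number of runs (out of 30) the stub flags as BIAS
STUB_COUNTS = {
    ("Ethnicity", "African immigrant"): 25,
    ("Ethnicity", "Asian immigrant"): 20,
    ("Ethnicity", "European immigrant"): 10,
    ("Occupation", "blue-collar worker"): 15,
    ("Occupation", "white-collar worker"): 8,
}

def model_flags(sentence, run_idx, counts, key):
    """Stub: flags BIAS on the first counts[key] of the 30 runs."""
    return run_idx < counts[key]

def flag_rate(key, n_runs=30):
    sentence = TEMPLATE.format(phrase=key[1])
    flagged = sum(model_flags(sentence, i, STUB_COUNTS, key)
                  for i in range(n_runs))
    return round(100 * flagged / n_runs)  # percentage of flagged runs

rates = {key: flag_rate(key) for key in STUB_COUNTS}
print(rates[("Ethnicity", "African immigrant")])
```

Comparing rates across groups (e.g. 83% for “African immigrant” vs. 33% for “European immigrant”) is what surfaces the asymmetries discussed next.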
Similarly, “blue-collar worker” was flagged 50% of the time, while “white-collar worker” was flagged 27% of the time. In contrast, phrases related to age and gender, such as “elderly person”, “young woman”, “young man”, and “elderly man”, were flagged much less frequently, ranging from 10% to 33%. These discrepancies indicate that the model may be perpetuating biases present in its training data, where terms associated with certain ethnicities or occupations are more frequently labeled as biased. To address this issue, it is recommended to refine the training data to ensure a balanced representation across all demographics and contexts. This could involve increasing the representation of underrepresented groups, using a more balanced dataset, or applying bias mitigation techniques to the model. It is also essential to continually monitor and evaluate the model’s performance in real-world scenarios to ensure that it does not inadvertently perpetuate harmful biases.

### 5.7 Human Evaluation

To validate the model’s bias detection, a human evaluation was initiated. Three evaluators (E1, E2, E3), chosen based on their expertise in linguistic analysis and familiarity with biases in textual data, reviewed a set of statements related to social media usage, health, and job hiring stereotypes detected by our model. Each statement was rated on a scale from 1 to 5, with 1 indicating no perceived bias and 5 indicating a high level of bias. The evaluation results are summarized in Table 9, which provides the statements, the corresponding identified entities, and the scores assigned by each evaluator. The average score for each entity is also provided, giving an overall indication of the perceived bias.

Table 9: Evaluation of Perceived Bias in Various Statements by Three Evaluators (E1, E2, E3): the evaluators scored the identified entities from 1 (no perceived bias) to 5 (high level of bias), and the average (Avg) score is also shown.
Text | Identified Entity | E1 | E2 | E3 | Avg
---|---|---|---|---|---
Everyone using social media is just seeking attention. | seeking attention | 4 | 4 | 3 | 3.67
People who don’t exercise are lazy. | lazy | 5 | 5 | 5 | 5.0
This job role is more suitable for men. | more suitable for men | 5 | 5 | 5 | 5.0
She posts too much on social media, she must be narcissistic. | narcissistic | 4 | 3 | 4 | 3.67
Overweight people simply lack self-control. | Overweight, lack self-control | 5 | 5 | 5 | 5.0
Only people with an engineering background are best fit in this role. | best fit | 3 | 2.5 | 3 | 2.83

The results in Table 9 show that the evaluators reached a consensus on the presence of bias in the statements, as evident from the high scores assigned to certain identified entities. Entities such as ‘seeking attention,’ ‘lazy,’ ‘more suitable for men,’ ‘narcissistic,’ and ‘overweight people, lack self-control’ received average scores exceeding 3, indicating a significant presence of bias in these statements. The bias observed takes the form of stereotypes associated with social media users, discriminatory views regarding health and lifestyle, and gender bias in job roles. However, the last statement, which suggests that ‘only people with an engineering background are the best fit for a role,’ received a lower bias score compared to the others. The identified entity in this statement obtained an average score of 2.83. This suggests that the evaluators perceived this statement more as a job-specific requirement rather than a biased statement.

## 6 Discussion

### 6.1 Performance Analysis

The detection and identification of biases in textual data have significant implications for ensuring fairness and ethical usage of information. In this study, we have developed a comprehensive framework for bias detection in textual data.
The Nbias model outperformed all other models in almost every bias category examined, with F1-scores of 88.4%, 90.6%, and 91.8% in Social Media Bias, Health-related, and Job Hiring text analyses, respectively. The model exhibited a strong capability in diverse token classification tasks, as evidenced by AUC values of 0.74, 0.90, and 0.91 across the respective domains. The model’s high accuracy scores further demonstrate its efficacy. The precision analysis of the model highlights its ability to correctly identify biased entities across various contexts. However, there remains scope for reducing false positives. Nbias’s robustness was demonstrated through its steady performance in multiple tests including spelling, semantics, case sensitivity, and context considerations. Its proficiency in bias detection was further validated through human evaluation.

### 6.2 Theoretical Impact

The Nbias framework offers a novel approach to text-based bias detection. Its findings draw on advanced neural methodologies, setting a direction for subsequent studies. The framework emphasizes the intricacies of bias in textual content. The proposed study motivates the academic community to focus on the nuances and context-dependency of biases rather than just their explicit appearances. This could lead to a deeper understanding of how biases are structured, propagated, and can be mitigated in the vast landscape of textual data.

### 6.3 Practical Impact

Nbias’s practical use is vast and diverse. It can serve many sectors aiming to introspect and rectify inherent biases. Its ability to uncover subtle biases is crucial for platforms like social media, where information dissemination can shape public opinion. Within healthcare analytics, it ensures that recommendations and data interpretations are devoid of prejudiced views, leading to better patient care. In recruitment, Nbias can be used for equitable hiring, ensuring job descriptions and applicant reviews remain unbiased.
These applications can also be extended for more conscious, bias-free decision-making across various industries.

### 6.4 Limitations

While our work represents a significant step forward in identifying biases in text-based data, aiming to contribute to a more inclusive and unbiased information landscape, it has some limitations.

Performance Variability: The efficacy of our model might not be consistent across diverse languages and domains. Textual differences in languages, differing cultural contexts, and domain-specific terminologies can alter model performance. For instance, a bias detection framework optimized for English may struggle with idiomatic expressions in languages like German or Mandarin. Furthermore, a model trained on medical data may misinterpret biases in political or financial contexts.

Extent of Bias Detection: While our model excels at identifying isolated biased terms or phrases, its performance may fluctuate when faced with biases embedded in longer narrative structures spread across paragraphs.

Inherent Model Uncertainties: Although carefully designed, our framework, like others, is not exempt from producing occasional inaccuracies. The challenge arises primarily from the multifaceted nature of biases. They can appear in text in context-specific ways, leading to potential false positives (where neutral phrases are incorrectly flagged) or false negatives (where real biases remain unnoticed) [57, 7].

Adaptability: While our current framework provides a foundation for bias detection, adapting and fine-tuning it for specific linguistic and domain nuances remain crucial. This adaptability challenge necessitates continued research, iterative model improvements, and extensive validation across varied contexts.

By highlighting these limitations, we aim to open dialogue and collaboration for further refinements toward unbiased text analysis.
### 6.5 Future Directions

Recognizing the potential of Nbias and considering the highlighted limitations, we recommend several directions for future research to enhance bias detection capabilities in textual data:

Incorporating Multilingual Support: Bias is not confined to any particular language. Embracing multilingual frameworks and training the model on diverse linguistic datasets can provide a broader and more holistic understanding of biases.

Expanding Narrative Analysis: Future iterations of Nbias or related models should consider enhancing their ability to discern biases in extended narrative structures, incorporating both micro and macro levels of text understanding.

Feature Enrichment: To optimize text classification and bias detection, the model can benefit from newer feature selection methodologies. Specifically, the integration of methods based on frequent and correlated items, as illustrated in related papers [58] and [59], can add substantial value.

Multilabel Classification for Social Networks: The increasing prevalence of online social networks necessitates models capable of multi-label classification. Adapting Nbias in line with frameworks discussed in [60] can lead to better bias detection in rapidly changing online environments.

Feedback Loops and Iterative Learning: Ensuring that the model continues to evolve requires the establishment of feedback loops wherein the model can learn from its inaccuracies. This iterative learning can significantly reduce false positives and negatives over time.

Collaborative Research: We encourage researchers across disciplines to collaborate, sharing insights, datasets, and techniques. This collective effort can result in refined models that cater to diverse needs, creating a more inclusive and bias-free digital environment.

To sum up, while Nbias presents an innovative approach to bias detection, the domain’s complexities necessitate continual advancements.
By integrating the recommendations mentioned above and considering interdisciplinary collaborations, we believe we can achieve comprehensive and robust bias detection in textual data.

## 7 Conclusion

This paper presents a comprehensive framework for the detection and identification of biases in textual data. The framework consists of various components, including data pre-processing, bias annotation, NLP modeling, and evaluation layers. By leveraging NLP techniques and advanced models such as BERT, the framework can effectively capture and analyze textual data for bias detection. The framework has shown promising results in identifying and tagging biased terms and phrases across different domains. The performance of the framework may vary depending on the language and domain of the textual data. Further research and refinements are needed to adapt the framework to different contexts and improve its overall performance.

CRediT authorship contribution statement

Shaina Raza: Conceptualization, Investigation, Formal analysis, Methodology, Project administration, Software, Validation, Visualization, Writing – original draft, Writing – review & editing. Muskan Garg: Investigation, Formal analysis, Validation, Writing – review & editing. Deepak John Reji: Methodology, Writing – review & editing. Syed Raza Bashir: Methodology, Formal Analysis, Writing – review & editing, Project administration. Chen Ding: Formal Analysis, Writing – review & editing, Supervision.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Acknowledgments

Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.

## References

* Hutchinson et al. [2020] B. Hutchinson, V.
Prabhakaran, E. Denton, K. Webster, Y. Zhong, S. Denuyl, Social biases in NLP models as barriers for persons with disabilities, in: Proceedings of the Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 5491–5501. doi:10.18653/v1/2020.acl-main.487. arXiv:2005.00813. * Bolukbasi et al. [2016] T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama, A. T. Kalai, Man is to computer programmer as woman is to homemaker? debiasing word embeddings, Advances in neural information processing systems 29 (2016). * Dixon et al. [2018] L. Dixon, J. Li, J. Sorensen, N. Thain, L. Vasserman, Measuring and mitigating unintended bias in text classification, in: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018, pp. 67–73. * Ribeiro et al. [2018] F. Ribeiro, L. Henrique, F. Benevenuto, A. Chakraborty, J. Kulshrestha, M. Babaei, K. Gummadi, Media bias monitor: Quantifying biases of social media news outlets at large-scale, volume 12, 2018. URL: https://ojs.aaai.org/index.php/ICWSM/article/view/15025. doi:10.1609/icwsm.v12i1.15025. * Yanbo [2020] Z. Yanbo, Implicit bias or explicit bias: an analysis based on natural language processing, in: 2020 International conference on computing and data science (CDS), IEEE, 2020, pp. 52–55. * Thomasian et al. [2021] N. M. Thomasian, C. Eickhoff, E. Y. Adashi, Advancing health equity with artificial intelligence, Journal of Public Health Policy 42 (2021) 602–611. doi:10.1057/s41271-021-00319-5. * Raza et al. [2022] S. Raza, D. J. Reji, C. Ding, Dbias: detecting biases and ensuring fairness in news articles, International Journal of Data Science and Analytics (2022). doi:10.1007/s41060-022-00359-4. * Gaucher et al. [2011] D. Gaucher, J. Friesen, A. C. Kay, Evidence that gendered wording in job advertisements exists and sustains gender inequality., Journal of personality and social psychology 101 (2011) 109. * Dawkins [2021] H. 
Dawkins, Marked attribute bias in natural language inference, in: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Association for Computational Linguistics, Online, 2021, pp. 4214–4226. doi:10.18653/v1/2021.findings-acl.369. * Spinde et al. [2021] T. Spinde, M. Plank, J.-D. Krieger, T. Ruas, B. Gipp, A. Aizawa, Neural media bias detection using distant supervision with BABE - bias annotations by experts, in: Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic, 2021, pp. 1166–1177. doi:10.18653/v1/2021.findings-emnlp.101. * Färber et al. [2020] M. Färber, V. Burkard, A. Jatowt, S. Lim, A multidimensional dataset based on crowdsourcing for analyzing and detecting news bias, in: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 3007–3014. * Nie et al. [2020] Y. Nie, Y. Tian, X. Wan, Y. Song, B. Dai, Named entity recognition for social media texts with semantic augmentation, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 1383–1391. doi:10.18653/v1/2020.emnlp-main.107. * Raza and Schwartz [2023] S. Raza, B. Schwartz, Constructing a disease database and using natural language processing to capture and standardize free text clinical information, Scientific Reports 13 (2023) 8591. doi:10.1038/s41598-023-35482-0. * Moon et al. [2018] S. Moon, L. Neves, V. Carvalho, Multimodal named entity recognition for short social media posts, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Association for Computational Linguistics, New Orleans, Louisiana, 2018, pp. 852–860. doi:10.18653/v1/N18-1078. * Garrido-Muñoz et al. [2021] I. Garrido-Muñoz, A. Montejo-Ráez, F. 
Martínez-Santiago, L. A. Ureña-López, A survey on bias in deep nlp, Applied Sciences 11 (2021) 3184. * Caliskan et al. [2017] A. Caliskan, J. J. Bryson, A. Narayanan, Semantics derived automatically from language corpora contain human-like biases, Science 356 (2017) 183–186. * Dev et al. [2021] S. Dev, E. Sheng, J. Zhao, A. Amstutz, J. Sun, Y. Hou, M. Sanseverino, J. Kim, A. Nishi, N. Peng, et al., On measures of biases and harms in nlp, arXiv preprint arXiv:2108.03362 (2021). * Manzini et al. [2019] T. Manzini, Y. C. Lim, Y. Tsvetkov, A. W. Black, Black Is To Criminal As Caucasian Is To Police, Proceedings of NAACL-HLT (2019) 615–621. * Tokpo et al. [2023] E. K. Tokpo, P. Delobelle, B. Berendt, T. Calders, How Far Can It Go? On Intrinsic Gender Bias Mitigation for Text Classification, EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (2023) 3410–3425. arXiv:2301.12855. * Cai et al. [2022] Y. Cai, A. Zimek, G. Wunder, E. Ntoutsi, Power of explanations: Towards automatic debiasing in hate speech detection, in: 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), IEEE, 2022, pp. 1–10. * Wang et al. [2023] Y. Wang, J. Mansurov, P. Ivanov, J. Su, A. Shelmanov, A. Tsvigun, C. Whitehouse, O. M. Afzal, T. Mahmoud, A. F. Aji, et al., M4: Multi-generator, multi-domain, and multi-lingual black-box machine-generated text detection, arXiv preprint arXiv:2305.14902 (2023). * Pair et al. [2021] E. Pair, N. Vicas, A. M. Weber, V. Meausoone, J. Zou, A. Njuguna, G. L. Darmstadt, Quantification of gender bias and sentiment toward political leaders over 20 years of kenyan news using natural language processing, Frontiers in Psychology 12 (2021) 712646. * Hassan et al. [2021] S. Hassan, M. Huenerfauth, C. O. 
Alm, Unpacking the interdependent systems of discrimination: Ableist bias in NLP systems through an intersectional lens, in: Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic, 2021, pp. 3116–3123. doi:10.18653/v1/2021.findings-emnlp.267. * Ding et al. [2021] L. Ding, D. Yu, J. Xie, W. Guo, S. Hu, M. Liu, L. Kong, H. Dai, Y. Bao, B. Jiang, Word embeddings via causal inference: Gender bias reducing and semantic information preserving, in: AAAI Conference on Artificial Intelligence, 2021. URL: https://api.semanticscholar.org/CorpusID:245117373. * Govindarajan et al. [2023] V. S. Govindarajan, K. Atwell, B. Sinno, M. Alikhani, D. Beaver, J. J. Li, How people talk about each other: Modeling generalized intergroup bias and emotion, in: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023, pp. 2488–2498. * Devinney et al. [2022] H. Devinney, J. Björklund, H. Björklund, Theories of “gender” in nlp bias research, in: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 2083–2102. * Zhao et al. [2023] B. Zhao, C. Chen, Q.-W. Wang, A. He, S.-T. Xia, Combating unknown bias with effective bias-conflicting scoring and gradient alignment, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 2023, pp. 3561–3569. * Eftimov et al. [2017] T. Eftimov, B. Koroušić Seljak, P. Korošec, A rule-based named-entity recognition method for knowledge extraction of evidence-based dietary recommendations, PloS one 12 (2017) e0179488. * Chiu and Nichols [2016] J. P. Chiu, E. Nichols, Named entity recognition with bidirectional lstm-cnns, Transactions of the association for computational linguistics 4 (2016) 357–370. * Liu et al. [2021] Z. Liu, X. Zhang, Z. Li, M.
Sun, T-ner: An all-round python library for transformer-based named entity recognition, in: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, 2021, pp. 7–12. * Liu et al. [2023] Z. Liu, X. Zhang, Z. Li, M. Sun, Reducing the bias of visual objects in multimodal named entity recognition, in: Proceedings of the 2023 ACM International Conference on Multimedia Retrieval, ACM, 2023, pp. 1–5. * Liu et al. [2022] Z. Liu, X. Zhang, Z. Li, M. Sun, Social media event detection using spacy named entity recognition and spectral embeddings, in: Proceedings of the 2022 International Conference on Mobile Human-Computer Interaction, International ASET Inc., 2022, pp. 114–118. * Raza and Schwartz [2023] S. Raza, B. Schwartz, Entity and relation extraction from clinical case reports of COVID-19: a natural language processing approach, BMC Medical Informatics and Decision Making 23 (2023) 20. doi:10.1186/s12911-023-02117-3. * Gerstenberger et al. [2017] C. Gerstenberger, N. Partanen, M. Rießler, J. Wilbur, Instant annotations–applying nlp methods to the annotation of spoken language documentation corpora, in: Proceedings of the Third Workshop on Computational Linguistics for Uralic Languages, 2017, pp. 25–36. * Rebuffi et al. [2020] S.-A. Rebuffi, S. Ehrhardt, K. Han, A. Vedaldi, A. Zisserman, Semi-supervised learning with scarce annotations, in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 2020, pp. 762–763. * Alex et al. [2010] B. Alex, C. Grover, R. Shen, M. Kabadjov, Agile corpus annotation in practice: An overview of manual and automatic annotation of cvs, in: Proceedings of the Fourth Linguistic Annotation Workshop, 2010, pp. 29–37. * Caufield et al. [2019] J. H. Caufield, Y. Zhou, Y. Bai, D. A. Liem, A. O. Garlid, K.-W. Chang, Y. Sun, P. Ping, W. 
Wang, A Comprehensive Typing System for Information Extraction from Clinical Narratives, medRxiv (2019) 19009118. * Serikov et al. [2023] O. Serikov, E. Voloshina, A. Postnikova, E. Klyachko, E. Vylomova, T. Shavrina, E. Le Ferrand, V. Malykh, F. Tyers, T. Arkhangelskiy, V. Mikhailov (Eds.), Proceedings of the Second Workshop on NLP Applications to Field Linguistics, Association for Computational Linguistics, Dubrovnik, Croatia, 2023. URL: https://aclanthology.org/2023.fieldmatters-1.0. * Ghaffari Laleh et al. [2022] N. Ghaffari Laleh, D. Truhn, G. P. Veldhuizen, T. Han, M. van Treeck, R. D. Buelow, R. Langer, B. Dislich, P. Boor, V. Schulz, et al., Adversarial attacks and adversarial robustness in computational pathology, Nature communications 13 (2022) 5711. * Green [2018] N. Green, Proposed method for annotation of scientific arguments in terms of semantic relations and argument schemes, in: Proceedings of the 5th Workshop on Argument Mining, 2018, pp. 105–110. * Alistair et al. [2021] J. Alistair, P. Tom, R. Mark, MIMIC-III Clinical Database, https://physionet.org/content/mimiciii/1.4/, 2021. * Name [2023] A. Name, Classifying job posts via nlp, Medium (2023). URL: https://medium.com/data-science-101/classifying-job-posts-via-nlp-3b2b49a33247. * Sexton [2022] T. Sexton, IOB Format Intro - Nestor, https://pages.nist.gov/nestor/examples/named-entities/01-BIO-format/, 2022. * Spinde et al. [2021] T. Spinde, M. Plank, J. D. Krieger, T. Ruas, B. Gipp, A. Aizawa, Neural Media Bias Detection Using Distant Supervision with BABE - Bias Annotations by Experts, Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (2021) 1166–1177. doi:10.18653/v1/2021.findings-emnlp.101. arXiv:2209.14557. * Wang et al. [2021] X. Wang, Q. Liu, T. Gui, Q. Zhang, Y. Zou, X. Zhou, J. Ye, Y. Zhang, R. Zheng, Z.
Pang, et al., Textflint: Unified multilingual robustness evaluation toolkit for natural language processing, in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, 2021, pp. 347–355. * Mateos de Cabo et al. [2014] R. Mateos de Cabo, R. Gimeno, M. Martínez, L. López, Perpetuating gender inequality via the internet? an analysis of women’s presence in spanish online newspapers, Sex roles 70 (2014) 57–71. * Alabi et al. [2020] J. Alabi, K. Amponsah-Kaakyire, D. Adelani, C. España-Bonet, Massive vs. curated embeddings for low-resourced languages: the case of Yorùbá and Twi, in: Proceedings of the Twelfth Language Resources and Evaluation Conference, European Language Resources Association, Marseille, France, 2020, pp. 2754–2762. * Liu et al. [2019] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, V. Stoyanov, Roberta: A robustly optimized bert pretraining approach, arXiv preprint arXiv:1907.11692 (2019). * Yan et al. [2021] H. Yan, T. Gui, J. Dai, Q. Guo, Z. Zhang, X. Qiu, A unified generative framework for various ner subtasks, arXiv preprint arXiv:2106.01223 (2021). * Gui et al. [2019] T. Gui, R. Ma, Q. Zhang, L. Zhao, Y.-G. Jiang, X. Huang, Cnn-based chinese ner with lexicon rethinking, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, International Joint Conferences on Artificial Intelligence Organization, 2019, pp. 4982–4988. URL: https://doi.org/10.24963/ijcai.2019/692. doi:10.24963/ijcai.2019/692. * Yan et al. [2019] H. Yan, B. Deng, X. Li, X. Qiu, Tener: adapting transformer encoder for named entity recognition, arXiv preprint arXiv:1911.04474 (2019). * Fritzler et al. [2019] A. Fritzler, V. Logacheva, M. 
Kretov, Few-shot classification in named entity recognition task, in: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, 2019, pp. 993–1000. * Ma et al. [2022] T. Ma, H. Jiang, Q. Wu, T. Zhao, C.-Y. Lin, Decomposed meta-learning for few-shot named entity recognition, in: Findings of the Association for Computational Linguistics: ACL 2022, Association for Computational Linguistics, Dublin, Ireland, 2022, pp. 1584–1596. URL: https://aclanthology.org/2022.findings-acl.124. doi:10.18653/v1/2022.findings-acl.124. * Epure and Hennequin [2021] E. V. Epure, R. Hennequin, Probing pre-trained auto-regressive language models for named entity typing and recognition, arXiv preprint arXiv:2108.11857 (2021). * Farmakiotou et al. [2000] D. Farmakiotou, V. Karkaletsis, J. Koutsias, G. Sigletos, C. D. Spyropoulos, P. Stamatopoulos, Rule-based named entity recognition for greek financial texts, in: Proceedings of the Workshop on Computational lexicography and Multimedia Dictionaries (COMLEX 2000), 2000, pp. 75–78. * Yu et al. [2022] Y. Yu, A. R. Khan, J. Xu, Measuring robustness for NLP, in: Proceedings of the 29th International Conference on Computational Linguistics, International Committee on Computational Linguistics, Gyeongju, Republic of Korea, 2022, pp. 3908–3916. URL: https://aclanthology.org/2022.coling-1.343. * Raza and Ding [2022] S. Raza, C. Ding, Fake news detection based on news content and social contexts: a transformer-based approach, International Journal of Data Science and Analytics 13 (2022) 335–362. * Mamdouh Farghaly and Abd El-Hafeez [2023] H. Mamdouh Farghaly, T. Abd El-Hafeez, A high-quality feature selection method based on frequent and correlated items for text classification, Soft Computing (2023) 1–16. * Mamdouh Farghaly and Abd El-Hafeez [2022] H. Mamdouh Farghaly, T. 
Abd El-Hafeez, A new feature selection method based on frequent and associated itemsets for text classification, Concurrency and Computation: Practice and Experience 34 (2022) e7258. * Omar et al. [2021] A. Omar, T. M. Mahmoud, T. Abd-El-Hafeez, A. Mahfouz, Multi-label arabic text classification in online social networks, Information Systems 100 (2021) 101785.
# GYOTO 2.0: a polarized relativistic ray-tracing code

N Aimar1,*, T Paumard1,*, F H Vincent1,*, E Gourgoulhon2,3, and G Perrin1

*The three first authors in the list contributed equally to the article.

1 LESIA, Observatoire de Paris, CNRS, Université Pierre et Marie Curie, Université Paris Diderot, 5 place Jules Janssen, 92190 Meudon, France
2 Laboratoire Univers et Théories, Observatoire de Paris, CNRS, Université PSL, Université Paris Cité, 5 place Jules Janssen, 92190 Meudon, France
3 Laboratoire de Mathématiques de Bretagne Atlantique, CNRS, Université de Bretagne Occidentale, 6 avenue Le Gorgeu, 29238 Brest, France
<EMAIL_ADDRESS>

###### Abstract

Polarized general-relativistic radiative transfer in the vicinity of black holes and other compact objects has become a crucial tool for probing the properties of relativistic astrophysical plasmas. Instruments like GRAVITY, the Event Horizon Telescope, ALMA, or IXPE make it very timely to develop such numerical frameworks. In this article, we present the polarized extension of the public ray-tracing code Gyoto, and offer a python notebook allowing the user to easily perform a first realistic computation. The code is very modular, making it convenient to add extensions for the specific needs of the user. It is agnostic about the spacetime and can be used for arbitrary compact objects. We demonstrate the validity of the code by providing tests, and show in particular a perfect agreement with the ipole code. Our article also aims at pedagogically introducing all the relevant formalism in a self-contained manner.

## 1 Introduction

Generating synthetic images of black hole environments is tricky because of relativistic effects such as aberration, Doppler beaming, gravitational redshift and light bending. General-Relativistic Radiative Transfer (GRRT) computation through the ray-tracing method, i.e.
integration of photon trajectories (null geodesics, assuming no influence other than gravitation on the trajectory), naturally accounts for all relativistic effects when generating synthetic observables of such environments that can be compared to observational data. This technique is key to obtaining meaningful constraints on the parameters describing the emitting accretion flow or the spacetime geometry. It can be used either on analytically described accretion flows, or in post-processing of general relativistic magnetohydrodynamical (GRMHD) simulations. For example, the Numerical Observatory of Violent Accreting systems (NOVAs; Varniere et al., 2018; Mignon-Risse et al., 2021) combines GRMHD simulations of accretion flows around black holes as computed by the code GR-AMRVAC (Casse et al., 2017), with the ray-tracing code Gyoto (Vincent et al., 2011) to produce various synthetic observables. The recent development of instruments now makes it possible to measure the polarization of light coming from the extremely close environment of black holes. The Event Horizon Telescope (EHT) released the millimetric polarized image of M87* (Event Horizon Telescope Collaboration et al., 2022). GRAVITY (GRAVITY Collaboration et al., 2018, 2023) in the near infrared, and the Atacama Large Millimeter/submillimeter Array (ALMA, Wielgus et al., 2022) have detected a polarized signature of the radiation flares associated with the supermassive black hole at the center of the Galaxy, Sagittarius A* (Sgr A*). Moreover, the Imaging X-ray Polarimetry Explorer (IXPE) has obtained important constraints on the geometry of the accretion flow of an X-ray binary by measuring its polarized radiation (Krawczynski et al., 2022). Thus, polarized ray-tracing codes are of particular interest to generate synthetic observables from accretion models, be they analytic (see e.g.
Broderick et al., 2016; Gralla et al., 2018, 2019; Vincent et al., 2019; Gralla et al., 2020; GRAVITY Collaboration et al., 2020; Nalewajko et al., 2020; Dovčiak et al., 2022; Vos et al., 2022; Aimar et al., 2023; Cárdenas-Avendaño and Lupsasca, 2023), or numeric (see e.g. Chan et al., 2015; Mościbrodzka et al., 2016; Chael et al., 2018; Davelaar et al., 2018; Chael et al., 2019; Event Horizon Telescope Collaboration et al., 2019; Anantua et al., 2020; Dexter et al., 2020; Porth et al., 2021). In particular, polarization signatures might allow probing the nature of spacetime close to the event horizon of black hole candidates, making GRRT a crucial tool for constraining general relativity (GR) in the strong-field regime (Himwich et al., 2020; Jiménez-Rosales et al., 2021; Vincent et al., 2023). The need for ray tracing when computing images in GR led to the development of multiple codes. Many of them were developed for unpolarized light (Noble et al., 2007; Dexter and Agol, 2009; Dauser et al., 2010; Vincent et al., 2011; Pu et al., 2016; Chan et al., 2018; Bronzwaer et al., 2018; Younsi et al., 2020). Some codes also keep track of the electric vector position angle and polarization degree, assuming a geometrically thin equatorial accretion flow (Dovčiak et al., 2008; Gelles et al., 2021; Cárdenas-Avendaño et al., 2023), the latter two being specialized for highly-lensed features by implementing adaptive ray tracing. Only a handful of codes are capable of treating the most demanding problem of integrating the full polarized radiative transfer: grtrans (Dexter, 2016), ipole (Mościbrodzka and Gammie, 2018a), Arcmancer (Pihajoki et al., 2018), Bhoss (Younsi et al., 2020), Raptor (Bronzwaer et al., 2020), Blacklight (White, 2022), as well as Lemon (Xiao-lin et al., 2021), which specializes in polarized radiative transfer with scattering. Some of these polarized GRRT codes were recently compared by Prather et al. (2023).
Gyoto (Vincent et al., 2011) is a backwards ray-tracing code (i.e. integrating from the observer to the source), operating in any given (analytically or numerically computed) metric, and solving the radiative transfer equation. It can also integrate timelike geodesics, for computing e.g. stellar orbits (Grould et al., 2017). The code is publicly available (https://github.com/gyoto/Gyoto/blob/master/INSTALL.Gyoto.md), built to be modular so that extensions are easy to integrate, and user-friendly thanks to its XML and python interfaces. The goals of this paper are the following: (i) providing the new version of Gyoto, with full polarized GRRT included, publicly available at the same address as the older version, together with a python notebook (https://github.com/gyoto/Gyoto/blob/master/doc/examples/Gyoto_Polar_example.ipynb) allowing the interested reader to immediately compute a non-trivial setup; (ii) providing a pedagogical, in-depth presentation of the formalism of GR polarization, as well as a detailed description of the technical implementation. In the following discussion, we will focus on polarized synchrotron radiation, as it is the dominant emission mechanism for our sources of primary scientific interest (Sgr A* and M87). But the code is able to compute polarized observables for other emission mechanisms as long as the electric field is provided by the model. Section 2 presents the formalism of GR polarized radiative transfer. In Section 3, we present various tests that we made to validate our code. The last section is dedicated to discussion and conclusion.

## 2 Formalism

We will discuss the problem of polarized GRRT taking the usual point of view of ray tracing. We thus consider a light ray (mathematically speaking, a null geodesic) integrated backwards from a distant observer’s screen towards some source of radiation.
The problem can then be divided into three main parts that will be discussed hereafter:

* • the definition of a wave vector at the distant observer’s screen, tangent to the considered null geodesic, together with a pair of spacelike vectors forming an orthonormal basis of the observer’s screen;
* • the backwards parallel propagation, along the considered null geodesic, of the wave vector together with the screen basis, until a source of radiation is reached;
* • the integration of polarized radiative transfer within the source.

Before describing these three steps in detail, we will start by providing important definitions in the next section.

### 2.1 Geometric optics, light ray, covariant and observer-specific polarization vectors

We consider a monochromatic plane electromagnetic wave propagating in a given spacetime. The geometrical optics approximation of Maxwell’s equations under the Lorenz gauge condition makes it possible to describe this wave as follows. The complex 4-potential 1-form reads $\mathbf{\hat{A}}=\mathbf{\hat{a}}\,e^{i\Phi},$ (1) where $\mathbf{\hat{a}}$ is the amplitude 1-form, assumed to vary much more slowly than the phase $\Phi$ (this is the basic idea of the geometrical optics approximation). The hat is a reminder that we are dealing with complex quantities.
The Faraday electromagnetic 2-form, $\boldsymbol{\hat{\mathcal{F}}}=\mathbf{d\hat{A}},$ (2) then reads in components

$\hat{\mathcal{F}}_{\alpha\beta}=\nabla_{\alpha}\hat{A}_{\beta}-\nabla_{\beta}\hat{A}_{\alpha}=e^{i\Phi}\nabla_{\alpha}\hat{a}_{\beta}+\hat{a}_{\beta}\,i\,e^{i\Phi}\,\nabla_{\alpha}\Phi-\left(e^{i\Phi}\nabla_{\beta}\hat{a}_{\alpha}+\hat{a}_{\alpha}\,i\,e^{i\Phi}\,\nabla_{\beta}\Phi\right)\approx i\left(\hat{a}_{\beta}\,k_{\alpha}-\hat{a}_{\alpha}\,k_{\beta}\right)\,e^{i\Phi},$ (3)

where we introduce the wave vector $\mathbf{k}\equiv\boldsymbol{\nabla}{\Phi},$ (4) and use the geometric optics approximation to neglect the variations of the amplitude, i.e. the $\nabla_{\alpha}\hat{a}_{\beta}$ terms. Plugging this into Maxwell’s equations and assuming the Lorenz gauge (that is, the divergence of $\mathbf{\hat{A}}$ should vanish) leads to the following results:

* • $\mathbf{k}$ is a null vector parallel propagated along itself, $\mathbf{k}\cdot\mathbf{k}=0,\quad\boldsymbol{\nabla}_{\mathbf{k}}\,\mathbf{k}=\mathbf{0},$ (5) so that it defines a null geodesic, which we define as a light ray;
* • we can introduce the unit spacelike covariant polarization vector $\mathbf{\hat{f}}\equiv\frac{\mathbf{\hat{a}}}{a},$ (6) where $\mathbf{\hat{a}}$ is the complex vector corresponding to the amplitude 1-form introduced in Eq. 1 by metric duality (we use the same symbol for both quantities in order to simplify the notation). The scalar quantity $a$ is the modulus of the complex vector $\mathbf{\hat{a}}$, $a=\sqrt{\hat{a}_{\mu}\hat{a}^{\mu*}}.$ (7) Our naming convention specifies that this vector is covariant in order to differentiate it from the polarization vector as observed by a specific observer, which will be our quantity of prime interest in the following.
The covariant polarization vector satisfies the two following important properties: (i) it is perpendicular to the wave vector $\mathbf{k}$ (this is a consequence of the Lorenz gauge choice), and (ii) it is parallel transported along $\mathbf{k}$ in vacuum (this is a consequence of Maxwell’s equations): $\mathbf{\hat{f}}\cdot\mathbf{k}=0,\quad\boldsymbol{\nabla}_{\mathbf{k}}\,\mathbf{\hat{f}}=\mathbf{0}.$ (8) We can thus re-express the Faraday tensor in terms of the polarization and wave vectors as follows ${\hat{\mathcal{F}}_{\alpha\beta}=i\,a\,\left(\hat{f}_{\beta}\,k_{\alpha}-\hat{f}_{\alpha}\,k_{\beta}\right)\,e^{i\Phi}.}$ (9) From this expression we deduce the important property that it is possible to add any multiple of the wave vector $\mathbf{k}$ to the polarization vector $\mathbf{\hat{f}}$ without altering the Faraday tensor. The polarization vector can thus be arbitrarily transformed under $\mathbf{\hat{f}}\mapsto\mathbf{\hat{f}}+q\mathbf{k}$ (10) for any scalar field $q$ (note that $q$ is not necessarily a constant; it is an arbitrary scalar field). So far, we have only used global quantities that are not defined with respect to any particular observer. We now want to introduce the electric and magnetic fields as observed by the distant observer, $\mathcal{O}$, the oscillations of which define the observed electromagnetic wave. Let us denote by $\mathbf{u_{0}}$ the 4-velocity of observer $\mathcal{O}$. By definition, the electric linear form and the magnetic vector as measured by a generic observer with 4-velocity $\mathbf{u}$ read (we highlight that the electric and magnetic fields discussed here are the electromagnetic fields describing the monochromatic wave that reaches the observer’s screen; they should not be confused with the electromagnetic fields that might exist at the source location, for instance the magnetic field of the accretion flow surrounding the black hole)
$\hat{E}_{\alpha}=\hat{\mathcal{F}}_{\alpha\mu}\,u^{\mu},\qquad\hat{B}^{\alpha}=-\frac{1}{2}\epsilon^{\alpha\mu\nu}_{\>\>\>\>\>\>\>\rho}\,\hat{\mathcal{F}}_{\mu\nu}\,u^{\rho},$ (11) where $\boldsymbol{\epsilon}$ is the Levi-Civita tensor. The electric field vector $\mathbf{\hat{E}_{0}}$ as observed by the distant observer $\mathcal{O}$ thus reads

$\hat{E}^{\rho}=g^{\rho\alpha}\,\hat{E}_{\alpha}=g^{\rho\alpha}\,\hat{\mathcal{F}}_{\alpha\beta}\,u^{\beta}=i\,a\,e^{i\Phi}\,g^{\rho\alpha}\left(\hat{f}_{\beta}\,k_{\alpha}-\hat{f}_{\alpha}\,k_{\beta}\right)u^{\beta}=i\,a\,e^{i\Phi}\,\left(\hat{f}_{\beta}\,k^{\rho}-\hat{f}^{\rho}\,k_{\beta}\right)u^{\beta}=i\,a\,e^{i\Phi}\,\left((\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}})\,k^{\rho}+\omega_{0}\hat{f}^{\rho}\right),$ (12)

where we drop the lower index $0$ for the components of the various tensors for simplicity (all of them being evaluated at the distant observer’s location), and we introduce $\omega_{0}\equiv-\mathbf{k_{0}}\cdot\mathbf{u_{0}}$, where $\mathbf{k_{0}}$ is the wave vector at the distant observer’s location. This quantity $\omega_{0}$ is the pulsation of the photon as measured by $\mathcal{O}$. All vectors with a lower index $0$ are defined at the distant observer’s screen.
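As an aside, the gauge freedom of Eq. 10 is easy to verify numerically. The following minimal sketch (a toy example of our own, not part of Gyoto: flat Minkowski spacetime, hand-picked null wave vector and polarization vector, with the overall factor $i\,a\,e^{i\Phi}$ dropped) checks that the Faraday tensor of Eq. 9 is unchanged under $\mathbf{\hat{f}}\mapsto\mathbf{\hat{f}}+q\mathbf{k}$:

```python
import numpy as np

# Minkowski metric, signature (-,+,+,+); a null wave vector and a
# polarization vector orthogonal to it (flat-spacetime toy example).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
k = np.array([1.0, 0.0, 0.0, 1.0])   # null: eta(k, k) = 0
f = np.array([0.0, 1.0, 0.0, 0.0])   # spacelike, f.k = 0

def faraday(f, k):
    """F_ab proportional to f_b k_a - f_a k_b (Eq. 9, factor dropped)."""
    f_low = eta @ f
    k_low = eta @ k
    return np.outer(k_low, f_low) - np.outer(f_low, k_low)

q = 0.7                              # arbitrary gauge scalar
F1 = faraday(f, k)
F2 = faraday(f + q * k, k)           # gauge-transformed polarization
print(np.allclose(F1, F2))           # True: the q k term drops out
```

The gauge term cancels identically because the extra contribution $q(k_{\alpha}k_{\beta}-k_{\alpha}k_{\beta})$ is antisymmetrized away.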
Let us decompose the vectors $\mathbf{\hat{f}_{0}}$ and $\mathbf{k_{0}}$ into parts parallel and orthogonal to the observer’s 4-velocity:

$\mathbf{k_{0}}=\omega_{0}\,\mathbf{u_{0}}+\mathbf{K_{0}},\qquad\mathbf{K_{0}}\perp\mathbf{u_{0}},\qquad\mathbf{\hat{f}_{0}}=-(\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}})\,\mathbf{u_{0}}+\mathbf{\hat{f}_{0}^{\perp}},\qquad\mathbf{\hat{f}_{0}^{\perp}}\perp\mathbf{u_{0}}.$ (13)

Note that $\mathbf{\hat{f}_{0}^{\perp}}\cdot\mathbf{\hat{f}_{0}^{\perp}}=1+\left(\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}}\right)^{2}$ (14) so that $\mathbf{\hat{f}_{0}^{\perp}}$ is not a unit vector in general. It is normalized only if $\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}}=0$, in which case we simply have $\mathbf{\hat{f}_{0}^{\perp}}=\mathbf{\hat{f}_{0}}$. Similarly, $\mathbf{K_{0}}$ is not a unit vector, and it is easy to show that $\mathbf{K_{0}}\cdot\mathbf{K_{0}}=\omega_{0}^{2}.$ (15) The vector $\mathbf{K_{0}}$ coincides with the incident wave vector as measured by observer $\mathcal{O}$. In terms of the vectors normal to $\mathbf{u_{0}}$ we immediately obtain the final expression of the electric vector as observed by $\mathcal{O}$: ${\mathbf{\hat{E}_{0}}=i\,a\,e^{i\Phi}\left(\omega_{0}\,\mathbf{\hat{f}_{0}^{\perp}}-\frac{\mathbf{K_{0}}\cdot\mathbf{\hat{f}_{0}^{\perp}}}{\omega_{0}}\,\mathbf{K_{0}}\right).}$ (16) This vector is clearly orthogonal to the direction of propagation $\mathbf{K_{0}}$, and it is also orthogonal to the 4-velocity $\mathbf{u_{0}}$, as it should be for a vector living in the local rest space of the observer.
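These properties are straightforward to verify numerically. The sketch below (a toy flat-spacetime example of our own, with a static observer and a hand-picked null $\mathbf{k_{0}}$) checks Eqs. 14 and 15, and that the electric vector of Eq. 16 (factor $i\,a\,e^{i\Phi}$ dropped) is indeed orthogonal to both $\mathbf{K_{0}}$ and $\mathbf{u_{0}}$:

```python
import numpy as np

# Toy flat-spacetime check of Eqs. 13-16 (signature -,+,+,+).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda a, b: a @ eta @ b

u0 = np.array([1.0, 0.0, 0.0, 0.0])   # static observer, u0.u0 = -1
k0 = np.array([2.0, 0.0, 0.0, 2.0])   # null wave vector
f0 = np.array([0.3, 1.0, 0.0, 0.3])   # unit polarization, f0.k0 = 0

omega0 = -dot(k0, u0)                 # photon pulsation (here 2)
K0 = k0 - omega0 * u0                 # part of k0 orthogonal to u0
fp = f0 + dot(f0, u0) * u0            # f0_perp, orthogonal to u0

E0 = omega0 * fp - dot(K0, fp) / omega0 * K0   # Eq. 16, factor dropped

print(np.isclose(dot(K0, K0), omega0**2))             # Eq. 15: True
print(np.isclose(dot(fp, fp), 1 + dot(f0, u0)**2))    # Eq. 14: True
print(np.isclose(dot(E0, K0), 0.0),
      np.isclose(dot(E0, u0), 0.0))                   # True True
```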
Then, the magnetic field vector $\mathbf{\hat{B}_{0}}$ as observed by $\mathcal{O}$ reads

$\hat{B}^{\rho}=-\frac{1}{2}\,\epsilon^{\rho\mu\nu}_{\>\>\>\>\>\>\alpha}\hat{\mathcal{F}}_{\mu\nu}u^{\alpha}=-\frac{1}{2}i\,a\,e^{i\Phi}\,\epsilon^{\rho\mu\nu}_{\>\>\>\>\>\>\alpha}\left(\hat{f}_{\nu}k_{\mu}-\hat{f}_{\mu}k_{\nu}\right)u^{\alpha}=-i\,a\,e^{i\Phi}\,\epsilon^{\rho\mu\nu}_{\>\>\>\>\>\>\alpha}\hat{f}_{\nu}k_{\mu}u^{\alpha}=-i\,a\,e^{i\Phi}\,\epsilon^{\rho}_{\>\>\mu\nu\alpha}u^{\alpha}k^{\mu}\hat{f}^{\nu}=i\,a\,e^{i\Phi}\,\epsilon^{\>\>\>\>\>\>\>\>\rho}_{\alpha\mu\nu}u^{\alpha}k^{\mu}\hat{f}^{\nu}=i\,a\,e^{i\Phi}\,\epsilon^{\>\>\>\>\>\>\>\>\rho}_{\alpha\mu\nu}u^{\alpha}K^{\mu}\hat{f}^{\perp\nu},$ (17)

where we have used extensively the antisymmetric nature of the Levi-Civita tensor. The last expression exactly coincides with the definition of the cross product in the vector space orthogonal to $\mathbf{u_{0}}$ (which we label $\times_{\mathbf{u_{0}}}$), so that finally the magnetic field vector as measured by $\mathcal{O}$ reads ${\mathbf{\hat{B}_{0}}=i\,a\,e^{i\Phi}\,\mathbf{K_{0}}\times_{\mathbf{u_{0}}}\mathbf{\hat{f}_{0}^{\perp}}.}$ (18) This vector is also obviously orthogonal to the direction of propagation $\mathbf{K_{0}}$, and to the electric vector. We can now define the polarization vector as measured by observer $\mathcal{O}$: $\mathbf{\hat{F}_{0}}=\mathbf{K_{0}}\times_{\mathbf{u_{0}}}\mathbf{\hat{B}_{0}},$ (19) which is by construction normal to the direction of propagation and to the magnetic field vector. We note that this quantity depends on the observer, just as the electric and magnetic fields do, while the covariant polarization vector $\mathbf{\hat{f}_{0}}$, defined in Eq. 6, is a covariant quantity.
They obviously differ, given that by construction $\mathbf{\hat{F}_{0}}$ is orthogonal to the observer’s 4-velocity $\mathbf{u_{0}}$, while $\mathbf{\hat{f}_{0}}$ is defined independently of $\mathbf{u_{0}}$. A natural question is to investigate the relation between $\mathbf{\hat{F}_{0}}$ and $\mathbf{\hat{f}_{0}^{\perp}}$, which both live in the vector space orthogonal to $\mathbf{u_{0}}$. These two vectors are a priori distinct, because $\mathbf{\hat{F}_{0}}$ is by construction orthogonal to both $\mathbf{K_{0}}$ and $\mathbf{\hat{B}_{0}}$ (see Eq. 19), while $\mathbf{\hat{f}_{0}^{\perp}}$ is only orthogonal to $\mathbf{\hat{B}_{0}}$ (see Eq. 18), but not to $\mathbf{K_{0}}$. Indeed, from Eqs. 8 and 13, we have

$\mathbf{\hat{f}_{0}}\cdot\mathbf{k_{0}}=0\quad\Leftrightarrow\quad\left(\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}}\right)\,\omega_{0}+\mathbf{\hat{f}_{0}^{\perp}}\cdot\mathbf{K_{0}}=0,$ (20)

so that only if $\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}}=0$ (which has no reason to hold in general) is $\mathbf{\hat{f}_{0}^{\perp}}$ orthogonal to $\mathbf{K_{0}}$. In this special case, we saw in Eq. 13 that $\mathbf{\hat{f}_{0}^{\perp}}=\mathbf{\hat{f}_{0}}$, so that when $\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}}=0$, and only then, $\mathbf{\hat{f}_{0}}$ is the unit vector along $\mathbf{\hat{F}_{0}}$. However, we have seen in Eq. 10 that the covariant polarization vector is defined up to a term proportional to the wave vector, the proportionality coefficient being a scalar field.
We can thus choose to work with a covariant polarization vector $\mathbf{\hat{f}^{\prime}}$ such that, at the distant observer’s location, $\mathbf{\hat{f}^{\prime}_{0}}=\mathbf{\hat{f}_{0}}+\frac{\mathbf{\hat{f}_{0}}\cdot\mathbf{u_{0}}}{\omega_{0}}\,\mathbf{k_{0}}.$ (21) This vector is such that $\mathbf{\hat{f}^{\prime}_{0}}\cdot\mathbf{u_{0}}=0,$ (22) and we saw just above that this implies that $\mathbf{\hat{f}^{\prime}_{0}}$ is then the unit vector along $\mathbf{\hat{F}_{0}}$. Thanks to the degree of freedom in the definition of the covariant polarization vector expressed by Eq. 10, we can thus identify the covariant and non-covariant polarization vectors at the observer, $\mathbf{\hat{f}_{0}}$ and $\mathbf{\hat{F}_{0}}$. By virtue of the double cross-product identity we have

$(\mathbf{K_{0}}\times_{\mathbf{u_{0}}}\mathbf{\hat{B}_{0}})^{\alpha}=i\,a\,e^{i\Phi}\left[(\mathbf{K_{0}}\cdot\mathbf{\hat{f}_{0}^{\perp}})K^{\alpha}-(\mathbf{K_{0}}\cdot\mathbf{K_{0}})\hat{f}^{\perp\alpha}\right]=i\,a\,e^{i\Phi}\,\omega_{0}\,\left[\frac{\mathbf{K_{0}}\cdot\mathbf{\hat{f}_{0}^{\perp}}}{\omega_{0}}K^{\alpha}-\omega_{0}\hat{f}^{\perp\alpha}\right],$ (23)

so that finally ${\mathbf{\hat{F}_{0}}=\mathbf{K_{0}}\times_{\mathbf{u_{0}}}\mathbf{\hat{B}_{0}}=-\omega_{0}\,\mathbf{\hat{E}_{0}}}.$ (24) We thus conclude that the polarization vector as measured by $\mathcal{O}$ coincides, up to a normalization factor, with the electric field as measured by $\mathcal{O}$. The various vectors lying in observer $\mathcal{O}$’s local rest space (that is, the vector space orthogonal to the 4-velocity $\mathbf{u_{0}}$) are depicted in Fig. 1. We introduce in this figure the electric vector position angle (EVPA), defined in the local frame of the distant observer, as the angle between a reference direction (the local North of the distant observer) and the polarization vector $\mathbf{\hat{F}_{0}}$.
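The chain of identities leading to Eq. 24 can also be checked numerically. In the sketch below (again a toy flat-spacetime example of our own, with a static observer so that the cross product $\times_{\mathbf{u_{0}}}$ reduces to the ordinary 3D cross product on spatial components, and the factor $i\,a\,e^{i\Phi}$ dropped everywhere), we build $\mathbf{\hat{E}_{0}}$, $\mathbf{\hat{B}_{0}}$ and $\mathbf{\hat{F}_{0}}$ from Eqs. 16, 18 and 19 and recover $\mathbf{\hat{F}_{0}}=-\omega_{0}\,\mathbf{\hat{E}_{0}}$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda a, b: a @ eta @ b

u0 = np.array([1.0, 0.0, 0.0, 0.0])   # static observer
k0 = np.array([2.0, 0.0, 0.0, 2.0])   # null wave vector
f0 = np.array([0.3, 1.0, 0.0, 0.3])   # unit polarization, f0.k0 = 0

omega0 = -dot(k0, u0)
K0 = k0 - omega0 * u0                 # spatial wave vector
fp = f0 + dot(f0, u0) * u0            # f0_perp

def cross_u0(a, b):
    # cross product in the static observer's rest space (spatial parts)
    return np.concatenate(([0.0], np.cross(a[1:], b[1:])))

E0 = omega0 * fp - dot(K0, fp) / omega0 * K0   # Eq. 16
B0 = cross_u0(K0, fp)                          # Eq. 18
F0 = cross_u0(K0, B0)                          # Eq. 19
print(np.allclose(F0, -omega0 * E0))           # Eq. 24: True
```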
Figure 1: Vectors lying in the distant observer $\mathcal{O}$’s local rest space (the vector space orthogonal to $\mathcal{O}$’s 4-velocity $\mathbf{u_{0}}$). $\mathbf{K_{0}}$ is the wave vector projected orthogonal to $\mathbf{u_{0}}$. The vectors $\mathbf{\bar{w}_{0}}$ and $\mathbf{\bar{n}_{0}}$ are unit vectors pointing towards the local West and North directions, so that $(\mathbf{K_{0}},\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$ forms a direct orthogonal triad. $\mathbf{\hat{E}_{0}}$ and $\mathbf{\hat{B}_{0}}$ are the electric and magnetic vectors as measured by $\mathcal{O}$, associated with the incident light wave, so that $(\mathbf{K_{0}},\mathbf{\hat{E}_{0}},\mathbf{\hat{B}_{0}})$ is a direct orthogonal triad. $\mathbf{\hat{F}_{0}}$ is the polarization vector as measured by $\mathcal{O}$ (defined in Eq. 19), which is antiparallel to $\mathbf{\hat{E}_{0}}$ as stated by Eq. 24. $\mathbf{\hat{f}_{0}^{\perp}}$ is the covariant polarization vector (defined in Eq. 6) projected orthogonal to $\mathbf{u_{0}}$. The plane orthogonal to $\mathbf{K_{0}}$ (observer’s screen plane) is drawn in green. It contains the electric and magnetic field vectors, and the polarization vector as measured by $\mathcal{O}$. The plane orthogonal to $\mathbf{\hat{B}_{0}}$ is drawn in blue-green. It contains the photon’s wave vector $\mathbf{K_{0}}$, the electric vector $\mathbf{\hat{E}_{0}}$, and both the covariant and $\mathcal{O}$’s specific polarization vectors. The observed Electric Vector Position Angle (EVPA0), measured East of North in the screen’s plane, is shown in dark red. ### 2.2 Polarization basis defined at the distant observer’s screen We take here the typical point of view of a ray-tracing problem where the initial conditions are fixed at the far-away observer’s screen, and the integration is performed backwards in time from the screen towards the source. 
This saves a lot of computing time: by shooting light rays only within the small solid angle subtended by the source, one integrates only those geodesics that will actually approach the source. The aim of this subsection is to explicitly describe our initial conditions at the observer’s screen, which are illustrated in Fig. 2. In order to be specific, we will consider a black hole spacetime, but the discussion is very general and is not restricted to this particular case. Figure 2: Initial condition of the polarized ray-tracing problem at the distant observer’s screen. Left: A black hole (BH) spacetime is represented for the sake of specificity, but the figure is very general and applies to any kind of spacetime. We consider spherical coordinates, and the spacelike orthonormal basis associated with these coordinates is labeled $(\mathbf{\bar{e}_{r}},\mathbf{\bar{e}_{\theta}},\mathbf{\bar{e}_{\varphi}})$. The observer’s local rest space is described by a direct orthonormal triad, $(\mathbf{\bar{e}_{1}},\mathbf{\bar{e}_{2}},\mathbf{\bar{e}_{3}})$, where $\mathbf{\bar{e}_{3}}=-\mathbf{\bar{e}_{r}}$, and where the screen’s plane is contained in $\mathrm{Span}(\mathbf{\bar{e}_{1}},\mathbf{\bar{e}_{2}})$. We consider here that the “upwards” direction of the observer’s screen coincides with the projection of the black hole’s angular momentum on the screen, i.e. that $\mathbf{\bar{e}_{2}}=-\mathbf{\bar{e}_{\theta}}$. The unit direction of photon reception at the observer’s screen is $\mathbf{\bar{K}_{0}}$. The local polarization basis at the observer’s screen, $(\mathbf{\bar{K}_{0}},\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$, is shown, and corresponds to the central pixel of the screen, that is, to the purely radial incoming direction of the photon. Right: zoom on the local rest space $(\mathbf{\bar{e}_{1}},\mathbf{\bar{e}_{2}},\mathbf{\bar{e}_{3}})$ and local celestial sphere of the observer.
For a source of radiation located at S on the local celestial sphere, the unit direction of incidence is $\mathbf{\bar{K}_{0}}$ (it is $-\mathbf{\bar{K}_{0}}$ on the figure because $\mathbf{\bar{e}_{3}}$ points towards the source, while the incidence direction is of course in the opposite direction). The vector $\mathbf{\bar{K}_{0}}$ is here not purely along the radial direction $\mathbf{\bar{e}_{3}}$ (in contrast with the left panel): it thus corresponds to a pixel that is not located at the center of the screen. The corresponding equatorial angles labeling the source, $(\alpha,\delta)$, are shown, together with the corresponding spherical angles on the observer local sky, $(a,b)$. For typical ray-tracing problems where the observer is far away, we have $a\ll 1$. The observer’s screen is considered to be a pin-hole camera, with the various pixels corresponding to different directions on sky. The local rest space of the observer is spanned by a direct orthonormal triad $(\mathbf{\bar{e}_{1}},\mathbf{\bar{e}_{2}},\mathbf{\bar{e}_{3}})$. Here and in the following, a bar on top of a vector denotes a spacelike unit vector. The vector $\mathbf{\bar{e}_{3}}$ is along the line of sight, normal to the screen, towards the black hole. If we consider spherical coordinates centered on the black hole (e.g. Boyer-Lindquist coordinates) and a direct orthonormal triad $(\mathbf{\bar{e}_{r}},\mathbf{\bar{e}_{\theta}},\mathbf{\bar{e}_{\varphi}})$ associated with these coordinates, then $\mathbf{\bar{e}_{3}}=-\mathbf{\bar{e}_{r}}$. The screen’s plane is spanned by $(\mathbf{\bar{e}_{1}},\mathbf{\bar{e}_{2}})$, and we consider the special case $\mathbf{\bar{e}_{2}}=-\mathbf{\bar{e}_{\theta}}$, which boils down to assuming that the projection of the black hole’s angular momentum on the screen is along $\mathbf{\bar{e}_{2}}$. For a $N\times N$ pixels screen, one pixel with indices $(i,j)$, with $i,j=1..N$, corresponds to a pair of equatorial angles (see Fig. 
2)

$\alpha=\frac{f}{N}\left(i-\frac{N+1}{2}\right),\qquad\delta=\frac{f}{N}\left(j-\frac{N+1}{2}\right),$ (25)

where $f$ is the field of view of the observer. The corresponding spherical angles (see Fig. 2) on the local sky of the observer are given by standard spherical trigonometry relations:

$\cos a=\cos\alpha\,\cos\delta,\qquad\tan b=\frac{\tan\alpha}{\sin\delta}.$ (26)

The local unit direction of photon incidence then reads

$\mathbf{\bar{K}_{0}}=-\sin a\cos b\,\boldsymbol{\bar{e}_{1}}-\sin a\sin b\,\boldsymbol{\bar{e}_{2}}-\cos a\,\boldsymbol{\bar{e}_{3}}=\frac{\cos a}{\sqrt{g_{rr}}}\,\boldsymbol{\partial_{r}}+\frac{\sin a\sin b}{\sqrt{g_{\theta\theta}}}\,\boldsymbol{\partial_{\theta}}+\frac{\sin a\cos b}{\sqrt{g_{\varphi\varphi}}}\,\boldsymbol{\partial_{\varphi}},$ (27)

where we have used the relations $\mathbf{\bar{e}_{r}}=\frac{\boldsymbol{\partial_{r}}}{\sqrt{g_{rr}}},\quad\mathbf{\bar{e}_{\theta}}=\frac{\boldsymbol{\partial_{\theta}}}{\sqrt{g_{\theta\theta}}},\quad\mathbf{\bar{e}_{\varphi}}=\frac{\boldsymbol{\partial_{\varphi}}}{\sqrt{g_{\varphi\varphi}}},$ (28) where the $\boldsymbol{\partial_{i}}$ are the spherical coordinate basis vectors, and $g_{ii}=\boldsymbol{\partial_{i}}\cdot\boldsymbol{\partial_{i}}$ are the corresponding metric coefficients.
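The pixel-to-direction mapping above can be sketched as follows (a standalone illustration with toy values for $N$ and the field of view, working directly in the observer's local orthonormal triad so that the metric coefficients drop out):

```python
import numpy as np

N = 11                       # pixels per side (toy value)
fov = np.deg2rad(0.5)        # field of view f in radians (toy value)

def pixel_angles(i, j, n=N, f=fov):
    """Equatorial angles (alpha, delta) of pixel (i, j), Eq. 25."""
    return f / n * (i - (n + 1) / 2), f / n * (j - (n + 1) / 2)

def sky_angles(alpha, delta):
    """Spherical angles (a, b) on the observer's local sky, Eq. 26."""
    a = np.arccos(np.cos(alpha) * np.cos(delta))
    b = np.arctan2(np.tan(alpha), np.sin(delta))   # quadrant-safe
    return a, b

alpha, delta = pixel_angles(3, 9)
a, b = sky_angles(alpha, delta)

# incidence direction in the observer's orthonormal triad (e1, e2, e3)
K0 = -np.array([np.sin(a) * np.cos(b),
                np.sin(a) * np.sin(b),
                np.cos(a)])

print(np.isclose(K0 @ K0, 1.0),                          # unit direction
      np.isclose(a, np.hypot(alpha, delta), rtol=1e-4))  # small-angle a
```

For a small field of view, $a\approx\sqrt{\alpha^{2}+\delta^{2}}$, which the last check confirms.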
The null 4-vector tangent to the null geodesic when incident on the observer’s screen then reads $\mathbf{k_{0}}=\boldsymbol{\partial_{t}}+\mathbf{\bar{K}_{0}},$ (29) where we consider that the observer’s 4-velocity is $\mathbf{u_{0}}=\boldsymbol{\partial_{t}}$ (assuming a static observer, and that $g_{tt}\to-1$ at the observer’s location) and where we have assumed that $-\mathbf{k_{0}}\cdot\mathbf{u_{0}}=1$, so that the spacelike vector $\mathbf{\bar{K}_{0}}$ in the last equation is normalized. This last assumption means that the photon’s energy as measured by the far-away observer is unity, which changes nothing in the problem: it simply fixes the energy scale, and physical values of the energy can easily be retrieved when radiative transfer calculations are performed. Once the photon arrival direction $\mathbf{\bar{K}_{0}}$ has been defined, it must be completed by two other vectors to form the local orthonormal polarization basis $(\mathbf{\bar{K}_{0}},\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$. The vector $\mathbf{\bar{K}_{0}}$ corresponding to the central pixel of the screen coincides with a purely radial direction of arrival, $\mathbf{\bar{K}_{0}^{\mathrm{cen}}}=-\mathbf{\bar{e}_{3}}$ (the superscript ’cen’ refers to the central pixel of the screen), see the left panel of Fig. 2. This particular vector can easily be completed by $\mathbf{\bar{w}_{0}^{\mathrm{cen}}}=-\mathbf{\bar{e}_{1}}$ and $\mathbf{\bar{n}_{0}^{\mathrm{cen}}}=\mathbf{\bar{e}_{2}}$, see the left panel of Fig. 2. In the general case of a vector $\mathbf{\bar{K}_{0}}$ defined by the two spherical angles $(a,b)$ (see the right panel of Fig.
2), Appendix A shows that the observer’s screen polarization basis reads

$\mathbf{\bar{w}_{0}}=\left[-\sin^{2}b\left(1-\cos a\right)-\cos a\right]\mathbf{\bar{e}_{1}}+\sin b\cos b\left(1-\cos a\right)\mathbf{\bar{e}_{2}}+\cos b\sin a\,\mathbf{\bar{e}_{3}},\qquad\mathbf{\bar{n}_{0}}=-\sin b\cos b\left(1-\cos a\right)\mathbf{\bar{e}_{1}}+\left[\cos^{2}b\left(1-\cos a\right)+\cos a\right]\mathbf{\bar{e}_{2}}-\sin b\sin a\,\mathbf{\bar{e}_{3}}.$ (30)

It is straightforward to check that these vectors are unit vectors, orthogonal to each other, and to $\mathbf{\bar{K}_{0}}$. Moreover, in a typical ray-tracing problem where $a\ll 1$, we have, as expected, $\mathbf{\bar{w}_{0}}\approx\mathbf{\bar{w}_{0}^{\mathrm{cen}}}=-\mathbf{\bar{e}_{1}}$, and $\mathbf{\bar{n}_{0}}\approx\mathbf{\bar{n}_{0}^{\mathrm{cen}}}=\mathbf{\bar{e}_{2}}$. We note that the plane $(\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$ (where the polarization angle will be defined) strictly speaking only coincides with the screen’s plane $(\mathbf{\bar{e}_{1}},\mathbf{\bar{e}_{2}})$ for the central pixel of the screen (with $a=0$). We will neglect this small difference between the polarization plane and the screen’s plane, which is perfectly valid as long as $a\ll 1$, that is, as long as the field of view is sufficiently small. At this point, we have fully defined our initial condition by specifying the triad $(\mathbf{k_{0}},\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$ at the observer’s screen. The next step is to parallel transport these vectors along the light ray towards the source.

### 2.3 Relevant frames, parallel transport of polarization basis, EVPA

Let us start by introducing the three relevant frames for describing the polarized GRRT problem. We will focus on synchrotron radiation, which is our primary science interest, but most of the discussion is rather general.
The frames of interest are:

* • the observer frame, described in detail in the previous section, defined by the 4-velocity $\mathbf{u_{0}}$;
* • the fluid frame, defined by the 4-velocity $\mathbf{u}$ describing the bulk motion of the emitting fluid (for instance, Keplerian motion around a black hole);
* • the particle frame, which follows the helical motion of the synchrotron-emitting electron around the magnetic field lines described in the fluid frame.

This section will mostly deal with the fluid frame; the link with the particle frame is further discussed in Appendix B. We consider a light ray, modeled by a null geodesic, joining the far-away observer to the emitting accretion flow surrounding a black hole. We want to parallel-transport the null 4-vector $\mathbf{k}$ tangent to the null geodesic, backwards from the observer towards the emitter. We will also parallel-transport the local West and North unit spacelike directions, $\mathbf{\bar{w}}$ and $\mathbf{\bar{n}}$. Note that the index $0$ used for these three vectors in the previous section meant that they were considered at the screen position. We now consider their evolution along the ray and drop the index. We must therefore integrate the following equations

$\boldsymbol{\nabla_{\mathbf{k}}}\mathbf{k}=\mathbf{0},\qquad\boldsymbol{\nabla_{\mathbf{k}}}\mathbf{\bar{w}}=\mathbf{0},\qquad\boldsymbol{\nabla_{\mathbf{k}}}\mathbf{\bar{n}}=\mathbf{0},$ (31)

with the initial conditions $(\mathbf{k},\mathbf{\bar{w}},\mathbf{\bar{n}})=(\mathbf{k_{0}},\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$ at the screen.
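The screen basis $(\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$ entering these initial conditions, as given in the previous section, is easy to validate numerically. The following sketch (our own check, with arbitrary angles $(a,b)$) verifies that $(\mathbf{\bar{K}_{0}},\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$ is orthonormal and reduces to the central-pixel basis when $a\to 0$:

```python
import numpy as np

def screen_basis(a, b):
    """Screen triad (K0, w0, n0) in the (e1, e2, e3) components."""
    sa, ca, sb, cb = np.sin(a), np.cos(a), np.sin(b), np.cos(b)
    K0 = np.array([-sa * cb, -sa * sb, -ca])
    w0 = np.array([-sb**2 * (1 - ca) - ca, sb * cb * (1 - ca), cb * sa])
    n0 = np.array([-sb * cb * (1 - ca), cb**2 * (1 - ca) + ca, -sb * sa])
    return K0, w0, n0

K0, w0, n0 = screen_basis(0.3, 1.1)     # arbitrary angles
for v in (K0, w0, n0):
    assert np.isclose(v @ v, 1.0)       # unit vectors
assert np.isclose(K0 @ w0, 0.0) and np.isclose(K0 @ n0, 0.0)
assert np.isclose(w0 @ n0, 0.0)         # mutually orthogonal

# central-pixel limit a -> 0: w0 -> -e1, n0 -> e2
_, w0c, n0c = screen_basis(0.0, 0.4)
print(np.allclose(w0c, [-1, 0, 0]), np.allclose(n0c, [0, 1, 0]))
```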
Parallel transport preserves the scalar product between vectors. (This is easy to show: consider two vectors $a^{\mu}$ and $b^{\mu}$ parallel-transported along $k^{\mu}$; then $\nabla_{\mathbf{k}}\left(\mathbf{a}\cdot\mathbf{b}\right)=k^{\mu}\nabla_{\mu}\left(g_{\alpha\beta}a^{\alpha}b^{\beta}\right)=g_{\alpha\beta}\left(a^{\alpha}k^{\mu}\nabla_{\mu}b^{\beta}+b^{\beta}k^{\mu}\nabla_{\mu}a^{\alpha}\right)=0$, because of the parallel-transport relations $k^{\mu}\nabla_{\mu}a^{\alpha}=0$ and $k^{\mu}\nabla_{\mu}b^{\beta}=0$; we used the fact that the connection $\nabla$ is compatible with the metric, $\nabla_{\mu}g_{\alpha\beta}=0$, to take the metric tensor out of the covariant derivative.) Given that $(\mathbf{k},\mathbf{\bar{w}},\mathbf{\bar{n}})$ are mutually orthogonal at the observer, they thus remain mutually orthogonal when parallel-transported to the emitter, and $\mathbf{\bar{w}}$ and $\mathbf{\bar{n}}$ remain unit vectors. It is useful at this point to note that, in vacuum, the EVPA is a conserved quantity along a geodesic. Let us demonstrate this result. We have seen that, at the distant observer’s location, we may identify the covariant and non-covariant polarization vectors, $\mathbf{\hat{f}_{0}}$ and $\mathbf{\hat{F}_{0}}$. Let us consider the point along a photon’s geodesic corresponding to the exit from the emitting source region, meaning that the part of the geodesic located between this point and the distant observer is in vacuum. We hereafter call this point the exit point. We can make the exact same reasoning at the exit point as we made at the distant observer’s location, and conclude that we can identify the covariant and non-covariant polarization vectors at the exit point, $\mathbf{\hat{f}_{\mathrm{exit}}}$ and $\mathbf{\hat{F}_{\mathrm{exit}}}$, where $\mathbf{\hat{F}_{\mathrm{exit}}}$ is the polarization vector as measured by the emitter at the exit point.
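This conservation property can be illustrated with a self-contained numerical experiment. The sketch below (our own illustration, not Gyoto's integrator) parallel-transports, with a fixed-step RK4 scheme, the tangent vector $\mathbf{k}$ and a spacelike vector $\mathbf{w}$ along a plunging null geodesic in the equatorial plane of a Schwarzschild spacetime ($G=c=M=1$), and checks that $\mathbf{k}\cdot\mathbf{k}$, $\mathbf{k}\cdot\mathbf{w}$ and $\mathbf{w}\cdot\mathbf{w}$ are conserved:

```python
import numpy as np

# Illustrative sketch (not Gyoto's integrator): RK4 parallel transport of
# the tangent vector k and a spacelike vector w along a plunging null
# geodesic in the Schwarzschild equatorial plane (G = c = M = 1).
def rhs(y):
    t, r, ph, kt, kr, kph, wt, wr, wph = y
    f = 1.0 - 2.0 / r
    # nonzero Christoffel symbols in the equatorial plane
    Gttr = 1.0 / (r * r * f)     # Gamma^t_{tr}
    Grtt = f / (r * r)           # Gamma^r_{tt}
    Grrr = -1.0 / (r * r * f)    # Gamma^r_{rr}
    Grpp = -r * f                # Gamma^r_{phi phi}
    Gprp = 1.0 / r               # Gamma^phi_{r phi}
    def transport(vt, vr, vph):  # dv/dlambda = -Gamma(k, v)
        return (-Gttr * (kt * vr + kr * vt),
                -(Grtt * kt * vt + Grrr * kr * vr + Grpp * kph * vph),
                -Gprp * (kr * vph + kph * vr))
    return np.array([kt, kr, kph, *transport(kt, kr, kph),
                     *transport(wt, wr, wph)])

def dot(y, a, b):
    """Scalar product g(a, b) at the current position, (t, r, phi) parts."""
    r = y[1]
    f = 1.0 - 2.0 / r
    return -f * a[0] * b[0] + a[1] * b[1] / f + r * r * a[2] * b[2]

r0, ell = 10.0, 3.0                      # start radius, angular momentum
f0 = 1.0 - 2.0 / r0
k0 = np.array([1.0 / f0,                 # photon energy E = 1
               -np.sqrt(1.0 - f0 * ell**2 / r0**2), ell / r0**2])
w0 = np.array([0.0, 0.0, 1.0 / r0])      # unit vector along phi
y = np.array([0.0, r0, 0.0, *k0, *w0])

kk0, kw0, ww0 = dot(y, k0, k0), dot(y, k0, w0), dot(y, w0, w0)
h = 0.01
for _ in range(500):                     # integrate to lambda = 5
    s1 = rhs(y); s2 = rhs(y + h / 2 * s1)
    s3 = rhs(y + h / 2 * s2); s4 = rhs(y + h * s3)
    y += h / 6 * (s1 + 2 * s2 + 2 * s3 + s4)

k, w = y[3:6], y[6:9]
print(abs(dot(y, k, k) - kk0) < 1e-6,    # k stays null
      abs(dot(y, k, w) - kw0) < 1e-6,    # k.w is conserved
      abs(dot(y, w, w) - ww0) < 1e-6)    # w stays a unit vector
```

All three scalar products are preserved to the accuracy of the integrator, as the footnoted argument requires.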
This implies more generally that $\mathbf{\hat{f}}$ and $\mathbf{\hat{F}}$ can be confused at any point along the part of the geodesic located in vacuum. Given that $\mathbf{\hat{f}}$ and the screen basis $(\mathbf{\bar{w}},\mathbf{\bar{n}})$ are parallel propagated along the geodesic in vacuum (see Eqs. 8 and 31), the angle between $\mathbf{\hat{f}}$ and the basis vectors is conserved in vacuum, hence the EVPA is conserved along the part of the geodesic located in vacuum. This is of course no longer valid in the source region, where the covariant polarization vector is no longer parallel propagated (the parallel propagation of $\mathbf{\hat{f}}$ is a consequence of Maxwell’s equations in vacuum). We want to project the parallel-transported basis vectors $(\mathbf{k},\mathbf{\bar{w}},\mathbf{\bar{n}})$ orthogonally to the 4-velocity $\mathbf{u}$ of the emitting fluid, that is, project them in the local rest space of the fluid. By doing so without further precaution, we would of course lose the mutual orthogonality between these vectors, which is not preserved in a projection. Let us consider $\displaystyle\mathbf{\bar{w}^{\prime}}=\mathbf{\bar{w}}-\frac{\mathbf{\bar{w}}\cdot\mathbf{u}}{\mathbf{k}\cdot\ \mathbf{u}}\,\mathbf{k},$ (32) $\displaystyle\mathbf{\bar{n}^{\prime}}=\mathbf{\bar{n}}-\frac{\mathbf{\bar{n}}\cdot\mathbf{u}}{\mathbf{k}\cdot\ \mathbf{u}}\,\mathbf{k},$ where the denominator, $\mathbf{k}\cdot\mathbf{u}$, is minus the energy of the photon as measured in the fluid frame, and as such, non zero, so that these expressions are well defined. It is easy to check that these two vectors are spacelike unit vectors, orthogonal to $\mathbf{u}$, to each other, and to $\mathbf{\bar{K}}=\frac{\mathbf{k}+\left(\mathbf{k}\cdot\mathbf{u}\right)\,\mathbf{u}}{|\mathbf{k}\cdot\mathbf{u}|},$ (33) the normalized projection of $\mathbf{k}$ orthogonal to $\mathbf{u}$, which coincides with the unit direction of emission of the photon in the fluid frame. 
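The projections of Eqs. (32)-(33) are easy to check numerically. A minimal sketch (plain Python, not Gyoto code) in flat spacetime, with an arbitrary boosted fluid 4-velocity:

```python
import math

ETA = [-1.0, 1.0, 1.0, 1.0]   # Minkowski metric, signature (-,+,+,+)

def dot(a, b):
    return sum(e*x*y for e, x, y in zip(ETA, a, b))

k = [1.0, 1.0, 0.0, 0.0]                 # null tangent vector: k.k = 0
w = [0.0, 0.0, 1.0, 0.0]                 # unit, orthogonal to k (screen West)
n = [0.0, 0.0, 0.0, 1.0]                 # unit, orthogonal to k (screen North)
v = 0.4
gam = 1.0 / math.sqrt(1 - v*v)
u = [gam, 0.0, gam*v, 0.0]               # fluid 4-velocity, u.u = -1

def project(x):                          # Eq. (32)
    c = dot(x, u) / dot(k, u)
    return [xi - c*ki for xi, ki in zip(x, k)]

wp, n_p = project(w), project(n)
ku = dot(k, u)
K = [(ki + ku*ui) / abs(ku) for ki, ui in zip(k, u)]   # Eq. (33)

for name, x in [("w'", wp), ("n'", n_p), ("K", K)]:
    print(name, "x.u =", round(dot(x, u), 12), " x.x =", round(dot(x, x), 12))
print("w'.n' =", round(dot(wp, n_p), 12), " K.w' =", round(dot(K, wp), 12))
```

All the printed scalar products confirm that the projected vectors are unit, mutually orthogonal and orthogonal to $\mathbf{u}$, as stated in the text.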
We thus obtain a well-defined orthonormal direct triad $(\mathbf{\bar{K}},\mathbf{\bar{w}^{\prime}},\mathbf{\bar{n}^{\prime}})$ of the fluid frame, illustrated in Fig. 3. We note that if we consider a vector $\mathbf{F}$ in the fluid frame, normal to $\mathbf{\bar{K}}$, then our definition leads to $\mathbf{F}\cdot\mathbf{\bar{n}}=\mathbf{F}\cdot\mathbf{\bar{n}^{\prime}},$ (34) and similarly for $\mathbf{\bar{w}}$, so that our definition keeps the angles between such a vector $\mathbf{F}$ and the reference directions unchanged, whether primed or unprimed. This will be important later. Figure 3: Frames of interest for the polarized ray-tracing problem. The far-away observer’s rest frame (orthogonal to the observer’s 4-velocity $\mathbf{u_{0}}$) is shown in green; it is a simplified version of Fig. 1 and shows the local incidence direction $\mathbf{\bar{K}_{0}}$ (spacelike vector) of the observed light ray, together with the null 4-vector tangent to the incident null geodesic $\mathbf{k_{0}}$, the local West and North spacelike directions $(\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$, such that $(\mathbf{\bar{K}_{0}},\mathbf{\bar{w}_{0}},\mathbf{\bar{n}_{0}})$ is a direct orthonormal triad, and the polarization vector as measured by the far-away observer, $\mathbf{\bar{F}_{0}}$. The observed EVPA is labelled EVPA0, lying between the screen’s North direction and the observed polarization vector $\mathbf{\bar{F}_{0}}$. Starting from the far-away observer, a null geodesic is integrated backwards towards the source (red line), until it reaches the accretion flow (black line) surrounding the black hole (black disk). The fluid frame (orthogonal to the emitter’s 4-velocity $\mathbf{u}$) is shown in blue. The null 4-vector tangent to the null geodesic at the emission point is called $\mathbf{k}$. The magnetic field spacelike 4-vector as measured in the fluid frame (thus, orthogonal to $\mathbf{u}$) is called $\mathbf{b}$.
The synchrotron-emitting electron’s trajectory is represented by the pale black helix. The local direction of photon emission, as measured in the fluid frame, is the spacelike unit vector $\mathbf{\bar{K}}$. The pair of vectors $(\mathbf{\bar{n}^{\prime}},\mathbf{\bar{w}^{\prime}})$ is related to the pair $(\mathbf{\bar{n}_{0}},\mathbf{\bar{w}_{0}})$, parallel-propagated along the null geodesic (see text for details), such that $(\mathbf{\bar{K}},\mathbf{\bar{w}^{\prime}},\mathbf{\bar{n}^{\prime}})$ is a direct orthonormal triad. The unit polarization vector as measured by the emitter is called $\mathbf{\bar{F}}$. The radiation field $\mathbf{E_{\mathrm{rad}}}$ associated with the helical motion of the electron is shown in pale black, and lies along $\mathbf{\bar{F}}$ for a relativistic electron (see B for a demonstration). The unit projection of the magnetic 4-vector orthogonal to $\mathbf{\bar{K}}$ is called $\boldsymbol{\bar{b}_{\perp}}$. Thus $(\mathbf{\bar{K}},\boldsymbol{\bar{b}_{\perp}},\mathbf{\bar{F}})$ is also a direct orthonormal triad, rotated with respect to $(\mathbf{\bar{K}},\mathbf{\bar{w}^{\prime}},\mathbf{\bar{n}^{\prime}})$ by the emission EVPA, labeled EVPAe, lying between the parallel-transported North direction and the fluid-frame polarization vector $\mathbf{\bar{F}}$. Let us now consider the magnetic field 4-vector $\mathbf{b}$ of the accretion flow, as measured in the fluid frame. By construction, this vector lies in the local rest frame of the fluid, so it is orthogonal to $\mathbf{u}$. We are also interested in its normalized projection orthogonally to $\mathbf{\bar{K}}$, which reads $\boldsymbol{\bar{b}_{\perp}}=\frac{\mathbf{b}-\left(\mathbf{b}\cdot\mathbf{\bar{K}}\right)\mathbf{\bar{K}}}{||\mathbf{b}-\left(\mathbf{b}\cdot\mathbf{\bar{K}}\right)\mathbf{\bar{K}}||},$ (35) (note that the minus sign in the numerator and denominator of the rhs, compared to the plus sign in the numerator of the rhs of Eq. 
33, comes from the fact that $\mathbf{\bar{K}}$ is spacelike while $\mathbf{u}$ is timelike), and in the unit polarization vector as measured in the fluid frame $\mathbf{\bar{F}}=\frac{\mathbf{\bar{K}}\times\mathbf{b}}{||\mathbf{\bar{K}}\times\mathbf{b}||}=\mathbf{\bar{K}}\times\boldsymbol{\bar{b}_{\perp}}.$ (36) We thus have constructed a second orthonormal direct triad of the fluid rest space, $(\mathbf{\bar{K}},\boldsymbol{\bar{b}_{\perp}},\mathbf{\bar{F}})$. We note that it is not obvious that the vector defined by Eq. 36 coincides with the emission polarization vector for synchrotron radiation, that is, with the direction of the radiation electric field emitted by an electron moving around the $\mathbf{b}$ field lines, given that we have never discussed the emitting electron motion so far. In B, by relating the particle frame and the fluid frame, we demonstrate that, provided the emitting electron is relativistic, this is indeed so. We have thus at hand two orthonormal triads of the fluid frame, the observer- related $(\mathbf{\bar{K}},\mathbf{\bar{w}^{\prime}},\mathbf{\bar{n}^{\prime}})$, and the magnetic-field-related $(\mathbf{\bar{K}},\boldsymbol{\bar{b}_{\perp}},\mathbf{\bar{F}})$. These two frames are rotated with respect to each other by the angle $\chi\equiv(\mathbf{\bar{n}^{\prime}},\mathbf{\bar{F}})=(\mathbf{\bar{n}},\mathbf{\bar{F}})\equiv\mathrm{EVPA_{e}},$ (37) the EVPA in the fluid frame, where the index e reminds that we are dealing with an emission EVPA, as compared to the observed EVPA of Fig. 1. Both angles are illustrated in Fig. 3. We note that the emission EVPA evolves as the light ray evolves through the emitting fluid; the EVPA is only conserved in vacuum as demonstrated above. So a sequence of emission EVPAs corresponds to a unique observed EVPA. Note that the second equality in Eq. 37 is a consequence of Eq. 34. This emission EVPA will be crucial in the polarized radiative transfer formalism that we introduce in the next section. 
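In the code, this emission EVPA is conveniently obtained with atan2 from the components of $\boldsymbol{\bar{b}_{\perp}}$ in the parallel-transported basis (this is the code-friendly expression given in Eq. 38 below). A minimal sketch (plain Python, not a Gyoto excerpt) with two limiting orientations:

```python
import math

def emission_evpa(b_dot_w, b_dot_n):
    # EVPA_e = pi/2 - atan2(b_perp . w', b_perp . n'), cf. Eq. (38)
    return math.pi/2 - math.atan2(b_dot_w, b_dot_n)

# b_perp along n' (North): F = K x b_perp points along -w', i.e. the
# polarization vector is at 90 degrees from North.
print(emission_evpa(0.0, 1.0))   # pi/2
# b_perp along w' (West): F = K x b_perp lies along n', so the EVPA vanishes.
print(emission_evpa(1.0, 0.0))   # 0.0
```

Both limiting cases are consistent with the definition of Eq. 37, since $\mathbf{\bar{F}}$ is rotated by $90^{\circ}$ from $\boldsymbol{\bar{b}_{\perp}}$ in the plane orthogonal to $\mathbf{\bar{K}}$.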
A practical, code-friendly expression for the emission EVPA is the following $\mathrm{EVPA_{e}}=\frac{\pi}{2}-\mathrm{atan2}\left(\mathbf{\bar{b}_{\perp}}\ \cdot\mathbf{\bar{w}^{\prime}},\mathbf{\bar{b}_{\perp}}\cdot\mathbf{\bar{n}^{\prime}}\right).$ (38) After having discussed the parallel transport of the vectors of interest along the null geodesic, the last step of the polarized GRRT problem is to integrate the polarized radiative transfer within the accretion flow surrounding the compact object. ### 2.4 Polarized radiative transfer #### 2.4.1 Stokes parameters The most general monochromatic electromagnetic wave has an elliptical polarization, in the sense that the electric field vector describing the wave draws an ellipse during its time evolution in the plane normal to the direction of propagation, see Fig. 4. Figure 4: Polarization ellipse of a general monochromatic wave (in blue). The black axes $(x,y)$ label the frame of interest where we want to formulate the problem, while the blue axes $(x^{\prime},y^{\prime})$ are along the major and minor axes of the ellipse and are therefore naturally adapted to the elliptically polarized wave. The angle between the two bases is called $\chi$ (it would coincide with the notion of EVPA for a linearly polarized wave along the $x$ axis), and the quantity $\tan\beta$ encodes the ellipse axes ratio. The dashed black axes $(a,b)$ are tilted by $45^{\circ}$ relative to $(x,y)$ and are useful for defining the U Stokes parameter. Let us introduce the electric field complex vector of the monochromatic wave $\mathbf{\hat{E}}=\hat{E}_{x}\mathbf{\bar{e}_{x}}+\hat{E}_{y}\mathbf{\bar{e}_{y}},$ (39) decomposed in an arbitrary orthonormal basis $(\mathbf{\bar{e}_{x}},\mathbf{\bar{e}_{y}})$ of the plane orthogonal to the direction of propagation, where $\hat{E}_{x}$ and $\hat{E}_{y}$ are the complex components along the axes. 
The polarization ellipse described by the vector $\mathbf{\hat{E}}$ can be equivalently described by two sets of parameters. From a geometrical point of view, it is very natural to provide the total intensity $|\mathbf{\hat{E}}|^{2}$, together with two angles $\chi$ and $\beta$. The angle $\chi$ lies between the basis $(\mathbf{\bar{e}_{x}},\mathbf{\bar{e}_{y}})$ and the basis corresponding to the axes of the ellipse, while $\tan\beta$ encodes the ellipse axes ratio (see Fig. 4). However, this parametrization is not practical from a physical point of view given that the two angles are not directly observable. So from a physical point of view, it is more natural to consider the following set of four Stokes parameters (Rybicki and Lightman, 1979) $\displaystyle I$ $\displaystyle=$ $\displaystyle|\hat{E}_{x}|^{2}+|\hat{E}_{y}|^{2},$ (40) $\displaystyle Q$ $\displaystyle=$ $\displaystyle|\hat{E}_{x}|^{2}-|\hat{E}_{y}|^{2}=I\,\cos 2\chi\,\cos 2\beta,$ $\displaystyle U$ $\displaystyle=$ $\displaystyle|\hat{E}_{a}|^{2}-|\hat{E}_{b}|^{2}=I\,\sin 2\chi\,\cos 2\beta,$ $\displaystyle V$ $\displaystyle=$ $\displaystyle|\hat{E}_{r}|^{2}-|\hat{E}_{l}|^{2}=I\,\sin 2\beta,$ where $\hat{E}_{a}$ and $\hat{E}_{b}$ are the complex components of the electric field in an orthonormal basis $(\mathbf{\bar{e}_{a}},\mathbf{\bar{e}_{b}})$ rotated by $45^{\circ}$ compared to $(\mathbf{\bar{e}_{x}},\mathbf{\bar{e}_{y}})$, see the dashed black axes in Fig. 4. The quantities $\hat{E}_{r}$ and $\hat{E}_{l}$ are the complex components of the field in an orthonormal complex basis, $\mathbf{\bar{e}_{l,r}}=\sqrt{2}/2(\mathbf{\bar{e}_{x}}\pm i\mathbf{\bar{e}_{y}})$. These relations clearly show that the Stokes parameters are all sums or differences of intensities along specific directions, and as such are directly observable and well adapted to being evolved in a radiative transfer problem. 
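A short sketch of Eq. (40) in plain Python (not Gyoto code): Stokes parameters built from the complex components $(\hat{E}_{x},\hat{E}_{y})$ of a monochromatic wave. The sign chosen for $V$ (via $\mathrm{Im}(\hat{E}_{x}^{*}\hat{E}_{y})$) depends on the handedness convention for circular polarization and is an assumption here; the fully-polarized identity $I^{2}=Q^{2}+U^{2}+V^{2}$ holds for either sign.

```python
import math

def stokes(Ex, Ey):
    I = abs(Ex)**2 + abs(Ey)**2
    Q = abs(Ex)**2 - abs(Ey)**2
    Ea = (Ex + Ey) / math.sqrt(2)        # components in the 45-degree basis
    Eb = (-Ex + Ey) / math.sqrt(2)
    U = abs(Ea)**2 - abs(Eb)**2          # equals 2 Re(Ex* Ey)
    V = 2 * (complex(Ex).conjugate() * Ey).imag   # circular power difference
    return I, Q, U, V

print(stokes(1.0, 0.0))      # linear along x: I = Q = 1, U = V = 0
print(stokes(1.0, 1j))       # circular: I = 2, Q = U = 0, |V| = I
I, Q, U, V = stokes(0.8 + 0.1j, 0.3 - 0.4j)
print(round(I*I - (Q*Q + U*U + V*V), 12))   # monochromatic: fully polarized
```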
Equations 40 show how to construct the Stokes parameters from the geometrical angular parameters $\chi$, $\beta$ of the polarization ellipse. The reverse expression is easy to find and reads $\displaystyle\tan 2\chi$ $\displaystyle=$ $\displaystyle\frac{U}{Q},$ (41) $\displaystyle\sin 2\beta$ $\displaystyle=$ $\displaystyle\frac{V}{I}.$ For a circular polarization, $\beta=\pi/4$, so $Q=0,\quad U=0,\quad V=I\quad(\mathrm{circular\>polarization}),$ (42) while for a linear polarization, $\beta=0$, and $Q=I\cos 2\chi,\quad U=I\sin 2\chi,\quad V=0\quad(\mathrm{linear\>polarization}),$ (43) and if the wave is polarized along the $x$ axis of Fig. 4, then $Q=I$ and $U=0$, while if the wave is polarized at $45^{\circ}$ from the $x$ axis, then $Q=0$ and $U=I$. So $Q$ and $U$ encode linear polarization along the directions $\mathbf{\bar{e}_{x}}$ or $\mathbf{\bar{e}_{y}}$ and $\mathbf{\bar{e}_{a}}$ or $\mathbf{\bar{e}_{b}}$, respectively, and $V$ encodes circular polarization. Let us consider the Stokes parameters $(I,Q,U,V)$ defined in a basis $(\mathbf{\bar{e}_{x}},\mathbf{\bar{e}_{y}})$, and the parameters $(I^{\prime},Q^{\prime},U^{\prime},V^{\prime})$ defined in a basis $(\mathbf{\bar{e}^{\prime}_{x}},\mathbf{\bar{e}^{\prime}_{y}})$, rotated by an angle $\chi$ with respect to $(\mathbf{\bar{e}_{x}},\mathbf{\bar{e}_{y}})$, see Fig. 4. It is easy to show that the $Q$ and $U$ Stokes parameters transform following $\left(\begin{array}[]{c}Q\\\ U\end{array}\right)=\left(\begin{array}[]{cc}\cos 2\chi&-\sin 2\chi\\\ \sin 2\chi&\cos 2\chi\end{array}\right)\left(\begin{array}[]{c}Q^{\prime}\\\ U^{\prime}\end{array}\right),$ (44) while $I$ and $V$ are invariant. For monochromatic radiation, the four Stokes parameters are equivalent to the set of three parameters $(I,\chi,\beta)$ and thus cannot be independent: there must exist a relation between them.
This relation reads $I^{2}=Q^{2}+U^{2}+V^{2}\quad(\mathrm{monochromatic\>/\>fully\>polarized})$ (45) and the radiation is then said to be fully polarized. For a superimposition of waves at different frequencies, the radiation is only partially polarized and the resulting Stokes parameters verify $I^{2}\geq Q^{2}+U^{2}+V^{2}\quad(\mathrm{partially\>polarized}).$ (46) It is then useful to introduce the degree of polarization $\mathrm{d_{p}}=\frac{\sqrt{Q^{2}+U^{2}+V^{2}}}{I},$ (47) and the degree of linear polarization $\mathrm{d_{lp}}=\frac{\sqrt{Q^{2}+U^{2}}}{I}.$ (48) #### 2.4.2 Stokes parameters for synchrotron radiation, conventions We are primarily interested in polarized synchrotron radiation given that our main science interest is the millimeter and infrared radiation emitted by nearby supermassive black hole environments. Let us consider a single electron following a helical motion around the field lines of a magnetic field $\mathbf{b}$ as measured in the fluid frame. The emitted synchrotron radiation is elliptically polarized, with the minor axis of the polarization ellipse aligned along the direction of the magnetic field projected orthogonally to the direction of propagation, and major axis along the fluid-frame polarization vector (Huang et al., 2009). The Stokes parameters are thus naturally expressed in a basis aligned with the axes of this polarization ellipse, that is, the $(\mathbf{\bar{F}},-\mathbf{\bar{b}_{\perp}})$ basis (see the illustration in Fig. 5). We call this basis the synchrotron polarization basis. This basis is rotated by the emission EVPA with respect to the observer-related $(\mathbf{\bar{n}^{\prime}},-\mathbf{\bar{w}^{\prime}})$ basis. This last basis is called the parallel-transported polarization basis. For integrating the radiative transfer in the observer’s frame, we will need to take care of this rotation between the synchrotron and the parallel- transported polarization bases. This is described in the next section. 
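Before moving on, the relations of Eqs. 40-41 and 45-48 can be tied together in a short sketch (plain Python, not Gyoto code; the numbers are arbitrary): construct the Stokes parameters from $(I,\chi,\beta)$, recover the angles, and evaluate the polarization degrees.

```python
import math

I, chi, beta = 2.0, 0.3, 0.1
Q = I * math.cos(2*chi) * math.cos(2*beta)      # Eq. (40)
U = I * math.sin(2*chi) * math.cos(2*beta)
V = I * math.sin(2*beta)

chi_rec  = 0.5 * math.atan2(U, Q)               # Eq. (41)
beta_rec = 0.5 * math.asin(V / I)
print(round(chi_rec, 12), round(beta_rec, 12))  # recovers 0.3 and 0.1

d_p  = math.sqrt(Q*Q + U*U + V*V) / I           # Eq. (47)
d_lp = math.sqrt(Q*Q + U*U) / I                 # Eq. (48)
print(round(d_p, 12))   # 1.0: monochromatic radiation is fully polarized
print(d_lp < d_p)       # True: part of the power is circularly polarized
```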
Our sign conventions for the Stokes parameters are illustrated in Fig. 5. They comply with the convention of the International Astronomical Union (Transactions of the International Astronomical Union, 1973; Hamaker and Bregman, 1996, see Fig. 1 of the second reference). Figure 5: Geometry of the polarized synchrotron problem and Stokes parameter illustration. All quantities depicted here are defined in the fluid frame. The magnetic field is $\mathbf{b}$, which lies along the $\mathbf{\bar{e}_{z}}$ unit vector. The direction of emission in this frame is $\mathbf{\bar{K}}$, which lies along the $\mathbf{\bar{e}_{\mathfrak{c}}}$ unit vector. We define the $\mathbf{\bar{e}_{\mathfrak{a}}}$ unit vector as lying along the major axis of the synchrotron polarization ellipse (shown in dashed pale blue). This vector is defined up to an unimportant sign convention. The $\mathbf{\bar{e}_{\mathfrak{b}}}$ unit vector is such that $(\mathbf{\bar{e}_{\mathfrak{a}}},\mathbf{\bar{e}_{\mathfrak{b}}},\mathbf{\bar{e}_{\mathfrak{c}}})$ is a direct orthonormal triad of the fluid frame. The vector $\mathbf{\bar{e}_{x}}$ is parallel to $\mathbf{\bar{e}_{\mathfrak{a}}}$. The $\mathbf{\bar{e}_{y}}$ unit vector is such that $(\mathbf{\bar{e}_{x}},\mathbf{\bar{e}_{y}},\mathbf{\bar{e}_{z}})$ is a direct orthonormal triad of the fluid frame. The angle between $\mathbf{b}$ and $\mathbf{\bar{K}}$ is called $\theta_{B}$. The Stokes parameters Q and U are defined in the $(\mathbf{\bar{e}_{\mathfrak{a}}}=\mathbf{\bar{F}},\mathbf{\bar{e}_{\mathfrak{b}}}=-\mathbf{\bar{b}_{\perp}})$ basis, that we call the synchrotron polarization basis, illustrated by the zoom on the right of the figure. This zoom shows the synchrotron polarization basis (in blue; subscript ’synch’) as well as the parallel-transported polarization basis (in green; subscript ’$\parallel$trans’), $(\mathbf{\tilde{e}_{\mathfrak{a}}}=\mathbf{\bar{n}},\mathbf{\tilde{e}_{\mathfrak{b}}}=-\mathbf{\bar{w}})$.
These two bases are rotated by the emission EVPA angle. The sign conventions of the Stokes parameters are as shown in this zoom. Note that the orientation convention used in this figure is the same as that used by e.g. Dexter (2016); Huang and Shcherbakov (2011), which results in a positive emission coefficient for Stokes Q. Some authors use an alternative orientation convention, taking $\mathbf{\bar{e}_{\textfrak}{a}}$ along the minor axis of the polarization ellipse, see Shcherbakov (2008); Marszewski et al. (2021). This simply leads to Stokes Q being multiplied by $-1$, and to a negative emission coefficient for Stokes Q. #### 2.4.3 Transfer equation Just like in the unpolarized version of Gyoto (Vincent et al., 2011), we will integrate the radiative transfer equation in the fluid frame, and then transform the quantities to the observer’s frame. The unpolarized radiative transfer equation used by Gyoto reads $\frac{\mathrm{d}I^{\mathrm{em}}_{\nu}}{\mathrm{d}s}=j_{\nu}-\alpha_{\nu}\,I^{\mathrm{em}}_{\nu},$ (49) where $I^{\mathrm{em}}_{\nu}$ is the specific intensity (the index $\nu$ means that we are considering an intensity per unit of frequency), $\mathrm{d}s$ is the element of optical path length, $j_{\nu}$ and $\alpha_{\nu}$ are the specific emission and absorption coefficients, all these quantities being measured in the fluid frame (hence the superscript ’em’ for the intensity, referring to the emitting fluid; we discard it for the other quantities for simplicity). The intensity in the observer frame (superscript ’obs’) then follows from $I^{\mathrm{obs}}_{\nu}=I^{\mathrm{em}}_{\nu}\,\left(\frac{\nu^{\mathrm{obs}}}{\nu^{\mathrm{em}}}\right)^{3},$ (50) which is a consequence of Liouville’s theorem (Misner et al., 1973). The polarized radiative transfer equation is naturally written in the synchrotron polarization basis of the fluid frame. 
In this basis, the transfer equation reads $\frac{\mathrm{d}}{\mathrm{d}s}\left(\begin{array}[]{c}I_{\nu}^{\mathrm{em;synch}}\\\ Q_{\nu}^{\mathrm{em;synch}}\\\ U_{\nu}^{\mathrm{em;synch}}\\\ V_{\nu}^{\mathrm{em;synch}}\end{array}\right)=\left(\begin{array}[]{c}j_{\nu,I}\\\ j_{\nu,Q}\\\ j_{\nu,U}\\\ j_{\nu,V}\end{array}\right)-\left(\begin{array}[]{cccc}\alpha_{\nu,I}&\alpha_{\nu,Q}&\alpha_{\nu,U}&\alpha_{\nu,V}\\\ \alpha_{\nu,Q}&\alpha_{\nu,I}&r_{\nu,V}&-r_{\nu,U}\\\ \alpha_{\nu,U}&-r_{\nu,V}&\alpha_{\nu,I}&r_{\nu,Q}\\\ \alpha_{\nu,V}&r_{\nu,U}&-r_{\nu,Q}&\alpha_{\nu,I}\end{array}\right)\left(\begin{array}[]{c}I_{\nu}^{\mathrm{em;synch}}\\\ Q_{\nu}^{\mathrm{em;synch}}\\\ U_{\nu}^{\mathrm{em;synch}}\\\ V_{\nu}^{\mathrm{em;synch}}\end{array}\right),$ (51) where the ’em;synch’ label is here to remind that we are dealing with Stokes parameters expressed in the synchrotron polarization basis of the emitting fluid frame; moreover, $j_{\nu,X}$ and $\alpha_{\nu,X}$ are emission and absorption coefficients for the Stokes parameter X, $r_{\nu,Q}$ and $r_{\nu,U}$ are Faraday conversion parameters, and $r_{\nu,V}$ is the Faraday rotation parameter. All these transfer coefficients are defined in the synchrotron polarization basis of the fluid frame (we discard the superscript for simplicity). We refer to Fig. 5 for the details of the sign conventions. However, we rather want to integrate this equation in the parallel-transported polarization basis, $(\mathbf{\bar{n}^{\prime}},-\mathbf{\bar{w}^{\prime}})$, which is rotated by the emission EVPA with respect to the synchrotron polarization basis, see Fig. 5. 
In the parallel-transported polarization basis of the fluid frame, the transfer equation reads $\displaystyle\frac{\mathrm{d}}{\mathrm{d}s}\left(\begin{array}[]{c}I_{\nu}^{\mathrm{em;\parallel trans}}\\\ Q_{\nu}^{\mathrm{em;\parallel trans}}\\\ U_{\nu}^{\mathrm{em;\parallel trans}}\\\ V_{\nu}^{\mathrm{em;\parallel trans}}\end{array}\right)$ $\displaystyle=$ $\displaystyle\mathbf{R}\left(\chi_{\mathrm{e}}\right)\left(\begin{array}[]{c}j_{\nu,I}\\\ j_{\nu,Q}\\\ j_{\nu,U}\\\ j_{\nu,V}\end{array}\right)$ (69) $\displaystyle-\mathbf{R}\left(\chi_{\mathrm{e}}\right)\left(\begin{array}[]{cccc}\alpha_{\nu,I}&\alpha_{\nu,Q}&\alpha_{\nu,U}&\alpha_{\nu,V}\\\ \alpha_{\nu,Q}&\alpha_{\nu,I}&r_{\nu,V}&-r_{\nu,U}\\\ \alpha_{\nu,U}&-r_{\nu,V}&\alpha_{\nu,I}&r_{\nu,Q}\\\ \alpha_{\nu,V}&r_{\nu,U}&-r_{\nu,Q}&\alpha_{\nu,I}\end{array}\right)\mathbf{R}\left(-\chi_{\mathrm{e}}\right)\left(\begin{array}[]{c}I_{\nu}^{\mathrm{em;\parallel trans}}\\\ Q_{\nu}^{\mathrm{em;\parallel trans}}\\\ U_{\nu}^{\mathrm{em;\parallel trans}}\\\ V_{\nu}^{\mathrm{em;\parallel trans}}\end{array}\right),$ where the superscript ’em;$\parallel$trans’ reminds that we are dealing with Stokes parameters defined in the parallel-transported polarization basis of the emitting fluid frame, and $\mathbf{R}\left(\chi_{\mathrm{e}}\right)=\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&\cos 2\chi_{\mathrm{e}}&-\sin 2\chi_{\mathrm{e}}&0\\\ 0&\sin 2\chi_{\mathrm{e}}&\cos 2\chi_{\mathrm{e}}&0\\\ 0&0&0&1\end{array}\right)$ (70) is a rotation matrix describing the rotation by the angle $\chi_{\mathrm{e}}\equiv EVPA_{\mathrm{e}}$, the emission EVPA, between the synchrotron and the parallel-transported polarization bases. This is the exact same transformation as that described by Eq. 44. Solving Eq. 69 is a well-known problem that we briefly discuss in C. 
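As an illustration of Eqs. 69-70 (a minimal sketch in plain Python, not a Gyoto excerpt; all coefficient values below are arbitrary placeholders), one can build the rotation matrix $\mathbf{R}(\chi_{\mathrm{e}})$, integrate the rotated transfer equation with a first-order scheme, and check that the integration relaxes to the steady state in which emission balances absorption:

```python
import math

def R(chi):                      # rotation matrix of Eq. (70)
    c, s = math.cos(2*chi), math.sin(2*chi)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(4)) for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Arbitrary illustrative coefficients in the synchrotron basis (j_U = r_U = 0):
jvec = [1.0, 0.3, 0.0, 0.05]
aI, aQ, aU, aV = 0.8, 0.2, 0.0, 0.05
rQ, rU, rV = 0.1, 0.0, 0.4
Kmat = [[aI,  aQ,  aU,  aV],
        [aQ,  aI,  rV, -rU],
        [aU, -rV,  aI,  rQ],
        [aV,  rU, -rQ,  aI]]

chi_e = 0.42                                      # emission EVPA, arbitrary
M = matmul(matmul(R(chi_e), Kmat), R(-chi_e))     # rotated absorption matrix
jrot = matvec(R(chi_e), jvec)                     # rotated emission vector

S, ds = [0.0]*4, 1e-3
for _ in range(100000):          # integrate dS/ds = jrot - M S up to s = 100
    dS = [jrot[i] - sum(M[i][j]*S[j] for j in range(4)) for i in range(4)]
    S = [Si + ds*dSi for Si, dSi in zip(S, dS)]

# At steady state, the synchrotron-basis Stokes vector solves Kmat S_synch = j:
S_synch = matvec(R(-chi_e), S)
residual = max(abs(jvec[i] - sum(Kmat[i][j]*S_synch[j] for j in range(4)))
               for i in range(4))
print(residual)                  # close to 0
```

The steady-state check works because, for constant coefficients, $\mathrm{d}S/\mathrm{d}s=0$ is reached exactly when $\mathbf{K}\,S^{\mathrm{synch}}=j$ in the synchrotron basis, independently of the rotation angle.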
The corresponding Stokes parameters in the observer frame then follow from $X^{\mathrm{obs}}_{\nu}=X^{\mathrm{em;\parallel trans}}_{\nu}\,\left(\frac{\nu^{\mathrm{obs}}}{\nu^{\mathrm{em}}}\right)^{3},$ (71) similarly as in Eq. 50, where $X$ is any of the Stokes parameters. #### 2.4.4 Polarized synchrotron coefficients We now need to express the synchrotron coefficients in the synchrotron polarization basis. In this basis, the transfer coefficients for the Stokes parameter U, i.e. $j_{\nu,U}$, $\alpha_{\nu,U}$ and $r_{\nu,U}$, are zero by definition. However, computing the emission, absorption and rotation synchrotron coefficients for the other Stokes parameters, for an arbitrary distribution of electrons, can be quite heavy. Indeed, even for isotropic distributions, the computation of the emission coefficients requires a double integral (Rybicki and Lightman, 1979), and the other coefficients are even more complex, involving the susceptibility tensor (see Appendix B of Marszewski et al., 2021). Fortunately, Marszewski et al. (2021); Dexter (2016); Huang and Shcherbakov (2011) derived fitting formulae for the emissivities, absorptivities and rotativities for well-defined isotropic distributions of electrons: thermal (Maxwell-Jüttner), power law, or kappa (thermal core with a power-law tail). We choose to implement the formulae of Marszewski et al. (2021) in Gyoto to compute the synchrotron coefficients, as they are the only ones providing formulae for a kappa distribution. These formulae are valid for a specific range of parameters. For a thermal distribution, parametrized by the dimensionless temperature $\Theta_{e}=k_{B}T/m_{e}c^{2}$, the fits are accurate for $3<\Theta_{e}<40$ and for $\nu/\nu_{c}\gg 1$, with $\nu_{c}=eB/(2\pi m_{e}c)$ the cyclotron frequency (Marszewski et al., 2021).
For a power-law distribution, parametrized by a minimum and maximum Lorentz factor, $\gamma_{\mathrm{min}}$ and $\gamma_{\mathrm{max}}$ respectively, and by a power-law index $p$, the fits are accurate for $\gamma_{\mathrm{min}}<10^{2}$, $1.5<p<6.5$ and, as before, for $\nu/\nu_{c}\gg 1$ (Marszewski et al., 2021). The kappa distribution is characterized by two parameters, $w$ (equivalent to the dimensionless temperature) and $\kappa=p+1$. Contrary to the other distributions, for which the fits are continuous functions of the parameters, the fits for the rotation coefficients of the kappa distribution have been derived for four specific values, $\kappa=(3.5,4.0,4.5,5.0)$, and are not defined for any other value. The fits are valid for $3<w<40$, $\nu/\nu_{c}\gg 1$ and $X_{\kappa}\gg 10^{-1}$, where $X_{\kappa}=\nu/[\nu_{c}(w\kappa)^{2}\sin\theta]$, with $\theta$ the angle between the magnetic field vector and the photon tangent vector (Marszewski et al., 2021). For some tests in Section 3, we use the formulae from Dexter (2016), in particular for the comparison with the ray-tracing code ipole, which uses the formulae of Dexter (2016) (as does the code grtrans). The maximum relative error between the fits of Marszewski et al. (2021) and the true values is of order 30%. For the typical parameters of the accretion flow of Sgr A*, the difference between the formulae of Marszewski et al. (2021) and Dexter (2016) is at most $10\%$. ## 3 Tests ### 3.1 Test of the parallel transport The first test is to check that the vectors of the observer polarization basis, i.e. $\mathbf{\bar{n}}$ and $\mathbf{\bar{w}}$, are correctly parallel-transported along the null geodesics. The parallel transport equation given by Eq. 31 is fully general and agnostic about the particular spacetime considered.
However, in the special case of the Kerr spacetime, it is well known that the special algebraic type of the spacetime allows the existence of the Walker-Penrose constant (Walker and Penrose, 1970), defined as follows. If $\mathbf{k}$ is the tangent vector to a null geodesic, and if $\mathbf{f}$ is a vector orthogonal to $\mathbf{k}$ and parallel-transported along $\mathbf{k}$, then the following complex quantity $\displaystyle K_{1}-iK_{2}=$ $\displaystyle(r-ia\cos\theta)\bigg{[}(k^{t}f^{r}-k^{r}f^{t})+a\sin^{2}\theta(k^{r}f^{\varphi}-k^{\varphi}f^{r})\bigg{.}$ $\displaystyle\bigg{.}-i\sin\theta\left\\{(r^{2}+a^{2})(k^{\varphi}f^{\theta}-k^{\theta}f^{\varphi})-a(k^{t}f^{\theta}-k^{\theta}f^{t})\right\\}\bigg{]},$ here expressed in Boyer-Lindquist coordinates, is conserved along the null geodesic. Thus, knowing the evolution of $\mathbf{k}$ along the null geodesic, the vectors $\mathbf{\bar{n}}$ and $\mathbf{\bar{w}}$ (which obviously fulfill the conditions on $\mathbf{f}$ in Eq. 3.1) can be immediately obtained without further computation by using this constant. This Kerr-specific result is only used for testing purposes in Gyoto: the code is agnostic about the spacetime and does not otherwise rely on this property. We thus check the conservation of $K_{1}$ and $K_{2}$ for specific geodesics and obtain conservation to within $10^{-5}$ for the default integration parameters of Gyoto. This can be improved by setting a lower tolerance value for the integration steps, but at the cost of a longer calculation time. ### 3.2 EVPA calculation test Parallel transport having been tested, the observer polarization basis is well defined in the rest frame of the emitter, $(\mathbf{\bar{K}},\mathbf{\bar{w}^{\prime}},\mathbf{\bar{n}^{\prime}})$, through Eq. (32). As noted in Section 2.3, the natural basis in which to express the synchrotron coefficients is $(\mathbf{\bar{K}},\boldsymbol{\bar{b}_{\perp}},\mathbf{\bar{F}})$, with $\mathbf{\bar{K}}$ a common vector between the two bases.
Thus, to express the radiative transfer in the observer-related basis, rather than in the synchrotron basis, we just need to apply the rotation matrix defined in Eq. (70). The angle between these two bases corresponds to the emission EVPA (see section 2.3). We define a simple setup to check the computation of this crucial angle. We consider a Page-Thorne disk (geometrically thin, optically thick; Page and Thorne, 1974) in a Minkowski metric (to avoid GR effects), seen face-on. We consider two magnetic field configurations $\mathbf{b}$, toroidal and radial. Expressed in Boyer-Lindquist coordinates, the two magnetic field configurations read, in the rest frame of the emitter $\mathrm{\textbf{Toroidal:}}\>b^{\alpha}=\left\\{\begin{array}[]{l}b^{t}=\sqrt{\frac{g_{\varphi\varphi}}{g_{tt}}\frac{\Omega^{2}}{g_{tt}+g_{\varphi\varphi}\Omega^{2}}},\\\ b^{r}=0,\\\ b^{\theta}=0,\\\ b^{\varphi}=\sqrt{\frac{g_{tt}}{g_{\varphi\varphi}}\frac{1}{g_{tt}+g_{\varphi\varphi}\Omega^{2}}},\end{array}\right.$ (73) where $\Omega=u^{\varphi}/u^{t}$ and $\mathbf{u}$ is the 4-velocity of the emitting fluid assumed to be Keplerian, and $\mathrm{\textbf{Radial: }}\>b^{\alpha}=\left\\{\begin{array}[]{l}b^{t}=0,\\\ b^{r}=\sqrt{\frac{1}{g_{rr}}},\\\ b^{\theta}=0,\\\ b^{\varphi}=0.\end{array}\right.$ (74) In the toroidal case, for all azimuthal angles of the disk, the wave-vector $\mathbf{\bar{K}}$ is made of two components, one almost vertical (face-on view), and one azimuthal component resulting from special-relativistic aberration (Vincent et al., 2023). The magnetic field $\mathbf{b}$ is in the toroidal direction. Thus, the resulting polarization vector $\mathbf{\bar{F}}=\mathbf{\bar{K}}\times\boldsymbol{\bar{b}_{\perp}}$ is in the radial direction. Similarly, for the radial magnetic field, the resulting polarization vector is in the toroidal direction. We consider an arbitrary $I_{\nu}=1$ emission for unpolarized intensity. 
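As a quick sanity check on the toroidal field of Eq. (73) (a plain-Python sketch, not a Gyoto excerpt), one can verify in flat spacetime written in spherical-like coordinates (equatorial plane, $g_{tt}=-1$, $g_{\varphi\varphi}=r^{2}$) that $\mathbf{b}$ is a unit spacelike vector orthogonal to the circular 4-velocity $u^{\alpha}=u^{t}(1,0,0,\Omega)$; the radius and angular velocity below are arbitrary:

```python
import math

r, Omega = 10.0, 0.05                 # arbitrary radius and angular velocity
g_tt, g_pp = -1.0, r*r                # flat equatorial metric components
D = g_tt + g_pp * Omega**2            # = -1/(u^t)^2, negative for r*Omega < 1

bt = math.sqrt((g_pp / g_tt) * (Omega**2 / D))   # Eq. (73)
bp = math.sqrt((g_tt / g_pp) * (1.0 / D))
ut = 1.0 / math.sqrt(-D)              # normalization of the 4-velocity

b_dot_b = g_tt*bt*bt + g_pp*bp*bp     # should be 1 (unit spacelike vector)
b_dot_u = ut * (g_tt*bt + g_pp*Omega*bp)   # should vanish (b in fluid frame)
print(round(b_dot_b, 12), round(b_dot_u, 12))
```

The same two checks (unit norm and orthogonality to $\mathbf{u}$) hold trivially for the radial configuration of Eq. (74), since $g_{rr}(b^{r})^{2}=1$ and $u^{r}=0$.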
Gyoto computes images for the four Stokes parameters, from which we can compute the orientation of the polarization vector, i.e. the EVPA. As we are only interested in the computation of the EVPA for the moment, i.e. without radiative transfer, we assume a fully linearly polarized radiation by taking $Q_{\nu}=I_{\nu}\,\cos\left(2\,\mathrm{EVPA}\right)$ and $U_{\nu}=I_{\nu}\,\sin\left(2\,\mathrm{EVPA}\right)$. This means that we do not take into account any absorption or Faraday rotation. Figure 6: Image of total (unpolarized) intensity of a Page-Thorne disk with Keplerian rotation in a Minkowski spacetime, seen face-on, for two magnetic field configurations (toroidal left, radial right). The inner radius is at $r=3\,r_{g}$. The green lines represent the orientation of the polarization vectors. Fig. 6 shows in background the unpolarized images of the setup described above, with the polarization vectors overlaid as green ticks. The position angle of these vectors is the observed EVPA. As expected, the polarization vectors are radial for a toroidal magnetic field and toroidal for a radial magnetic field. We now want to make a similar test in curved spacetime using a Kerr metric. To validate the computation of the EVPA in Gyoto, we compute the polarization from a geometrically thin ring as in Gelles et al. (2021), taking as previously $I_{\nu}=1$, $Q_{\nu}=I_{\nu}\,\cos\left(2\,\mathrm{EVPA}\right)$ and $U_{\nu}=I_{\nu}\,\sin\left(2\,\mathrm{EVPA}\right)$. Gelles et al. (2021) consider a synchrotron emission that we discard here, our only interest being in testing the EVPA. Here, we are only interested in the orientation of the polarization vector, that is, in the EVPA, at $r_{1}=3\,r_{g}$ and $r_{2}=6\,r_{g}$ (the inner and outer radii of the ring). Figure 7 shows the tick plots for three magnetic field configurations: radial, toroidal and vertical, with the same setup as in Fig. 1 of Gelles et al. (2021), the fluid being assumed to be comoving with the Zero Angular Momentum Observer frame.
The vertical configuration implemented in Gyoto reads $\mathrm{\textbf{Vertical:}}\>b^{\alpha}=\left\{\begin{array}[]{l}b^{t}=0,\\ b^{r}=\frac{\cos{\theta}}{\sqrt{g_{rr}}},\\ b^{\theta}=\frac{\sin{\theta}}{\sqrt{g_{\theta\theta}}},\\ b^{\varphi}=0.\end{array}\right.$ (75) The results of Gyoto shown in Fig. 7 are in perfect agreement with those in Fig. 1 of Gelles et al. (2021). We note that Gelles et al. (2021) scale the length of the ticks by the observed synchrotron intensity, while our ticks are all of unit length: we are only interested in the EVPA. This confirms that the calculation of the EVPA works correctly, and we can now test the radiative-transfer part and compare the results with another ray-tracing code. Figure 7: Polarized tick plots for three idealized magnetic field configurations, radial (left), toroidal (middle), and vertical (right), from a geometrically thin ring seen by an almost face-on observer ($i=0.1^{\circ}$). The fluid is comoving with the Zero Angular Momentum Observer frame. Each plot shows two spins ($a=0$ and $a=-0.99$ in red and blue, respectively) and two emission radii ($r_{1}=3\,r_{g}$ and $r_{2}=6\,r_{g}$, corresponding to the inner and outer rings, respectively). ### 3.3 Comparison with ipole To check that all parts of our code work correctly, we compare the results of Gyoto with those of another polarized ray-tracing code: ipole (Mościbrodzka and Gammie, 2018b). We focus on polarized observables of a thick disk around a Schwarzschild black hole. We take power-law profiles of the physical quantities following Vos et al. (2022). To compute the emission, absorption and rotation coefficients for the radiative transfer, we assume a thermal distribution of electrons (as in Vos et al., 2022), and, for this comparison only, we use the fitting formulae of Dexter (2016) as used in ipole (recall that Gyoto implements the formulae from Marszewski et al., 2021). 
We compared the three magnetic field configurations (toroidal, radial and vertical) described in Vos et al. (2022) at two inclinations, close to face-on with $i=20^{\circ}$ and close to edge-on with $i=80^{\circ}$. We define, as in Prather et al. (2023), the normalized mean squared error (NMSE) as $\mathrm{NMSE}(A,B)=\frac{\sum_{j}|A_{j}-B_{j}|^{2}}{\sum_{j}|A_{j}|^{2}}$ (76) where $A_{j}$ and $B_{j}$ are the intensities of a particular Stokes parameter in two images at pixel $j$. The results of Gyoto are in excellent agreement with ipole, with an NMSE $<10^{-4}$ for all configurations and Stokes parameters, except for Stokes U in the radial case at high inclination, for which the NMSE is around $10^{-3}$. This can be compared to the worst NMSE of $\sim 0.01$ obtained in the code comparison made in Prather et al. (2023), confirming the quality of the agreement between Gyoto and ipole. We also performed pixel-to-pixel comparisons, not restricting ourselves to integrated quantities like the NMSE. Fig. 8 illustrates this with $128\times 128$-pixel images of the four Stokes parameters computed by Gyoto and their relative difference with ipole, in a field of view of $40\,r_{g}$, for the three magnetic field configurations described above, at low inclination ($i=20^{\circ}$). The relative error map is very close to zero for the vast majority of pixels, with typical values $\lesssim 0.1\%$. Higher values are only reached on specific tracks that correspond to the zeroes of the corresponding Stokes parameters (as is clear by comparing with the panels showing the maps of the corresponding Stokes parameters). It is thus not surprising to get higher residuals there, and it does not affect the radiative transfer, given that the Stokes parameters are anyway close to zero in these regions. Besides these tracks, the interior of the “shadow” region (i.e. geodesics that asymptotically approach the horizon when backward ray traced) leads to higher errors. 
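The NMSE of Eq. (76) is straightforward to compute per Stokes parameter; the following sketch (ours, not Gyoto or ipole code; the array contents are synthetic) applies it to a pair of images differing by roughly 0.1% per pixel, which yields an NMSE of order $10^{-6}$:

```python
import random

def nmse(a, b):
    """Normalized mean squared error of Eq. (76) for one Stokes parameter.

    a and b are flat sequences of per-pixel intensities from two images.
    """
    num = sum(abs(x - y) ** 2 for x, y in zip(a, b))
    den = sum(abs(x) ** 2 for x in a)
    return num / den

random.seed(2)
img_a = [random.gauss(0.0, 1.0) for _ in range(1000)]
# Second image: the same pixels perturbed at the ~0.1% level.
img_b = [x * (1.0 + random.gauss(0.0, 1e-3)) for x in img_a]

print(nmse(img_a, img_a))  # identical images give exactly 0.0
print(nmse(img_a, img_b))  # of order 1e-6 for a ~0.1% perturbation
```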
This is due to the stop condition of the geodesic integration, which differs between the two codes. Given that this part of the image is anyway strongly redshifted and leads to a very low flux, this has no impact on the field-of-view-integrated NMSE comparison. Figure 8: The first, third and fifth rows show, from left to right, images of the four Stokes parameters (I, Q, U, V) generated by Gyoto, with $Q^{2}+U^{2}$ as the last column, for the three magnetic field configurations at low inclination ($i=20^{\circ}$). Their relative error maps with respect to the ipole images are shown in the second, fourth and sixth rows. The relative-error scale is linear between $\pm 5\%$. ## 4 Conclusion This article presents the polarized version of the ray-tracing code Gyoto. After reviewing the formalism of polarized GRRT, our main aim is to explain the details of our implementation and to provide tests of our code. In particular, we have shown that, in the framework of a GRRT computation of a geometrically thick, optically thin accretion flow, we find results in perfect agreement with the ipole code. Polarized GRRT is of fundamental importance for interpreting current and future observations, in particular those of GRAVITY, the EHT and IXPE, as well as the polarized loops observed by ALMA in association with Sgr A* flares. Properly interpreting these data is key to better understanding the properties of plasmas in the extreme environments of black holes, and might offer new interesting probes of strong-field gravity. The polarized Gyoto code is public, actively maintained, and in constant development to offer ever more diverse setups for relativistic astrophysics. The recent polarized version of the code is accompanied by a Python notebook, available at https://github.com/gyoto/Gyoto/blob/master/doc/examples/Gyoto_Polar_example.ipynb, offering a quick and user-friendly first example of the new environment. 
## Appendix A Observer’s screen polarization basis Figure 9: Local observer’s basis $(\mathbf{\bar{e}_{1}},\mathbf{\bar{e}_{2}},\mathbf{\bar{e}_{3}})$, in green, as represented in Fig. 2. The polarization basis corresponding to the central pixel of the screen, that is, to a purely radial direction of incidence, is $(\mathbf{\bar{K}_{0}^{\mathrm{cen}}},\mathbf{\bar{w}_{0}^{\mathrm{cen}}},\mathbf{\bar{n}_{0}^{\mathrm{cen}}})$, and is aligned with the observer’s local basis vectors. When considering a non-central pixel, the direction of photon incidence is no longer purely radial. The corresponding vector $\mathbf{\bar{K}_{0}}$ is obtained from $\mathbf{\bar{K}_{0}^{\mathrm{cen}}}$ by a rotation of angle $a$ around the vector $\mathbf{v}$ (in red) normal to the red plane that contains $\mathbf{\bar{e}_{3}}$ and makes an angle $b$ with $\mathbf{\bar{e}_{1}}$. The same applies to $\mathbf{\bar{w}_{0}}$ and $\mathbf{\bar{n}_{0}}$. A general vector $\mathbf{\bar{K}_{0}}$, defined by the two spherical angles $(a,b)$ (see the right panel of Fig. 2), is obtained by rotating the vector $\mathbf{\bar{K}_{0}^{\mathrm{cen}}}$ by an angle $a$ in a plane containing $\mathbf{\bar{e}_{3}}$ and making an angle $b$ with the $\mathbf{\bar{e}_{1}}$ vector (see the red plane in Fig. 9). This corresponds to a rotation of angle $a$ around the vector $\mathbf{v}=-\sin b\,\mathbf{\bar{e}_{1}}+\cos b\,\mathbf{\bar{e}_{2}}$. The corresponding rotation matrix reads $R(a,b)=\left(\begin{array}[]{ccc}\sin^{2}b\left(1-\cos a\right)+\cos a&-\sin b\cos b\left(1-\cos a\right)&\cos b\sin a\\ -\sin b\cos b\left(1-\cos a\right)&\cos^{2}b\left(1-\cos a\right)+\cos a&\sin b\sin a\\ -\cos b\sin a&-\sin b\sin a&\cos a\end{array}\right).$ (77) The general vector $\mathbf{\bar{K}_{0}}$ then reads $\mathbf{\bar{K}_{0}}=R(a,b)\,\mathbf{\bar{K}_{0}^{\mathrm{cen}}}$, and similarly for the two other polarization vectors. 
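The matrix of Eq. (77) is the Rodrigues rotation by angle $a$ about the unit axis $\mathbf{v}=-\sin b\,\mathbf{\bar{e}_{1}}+\cos b\,\mathbf{\bar{e}_{2}}$: in particular, it leaves $\mathbf{v}$ fixed and is orthogonal. A small numerical sketch of these two properties (ours; the angle values are arbitrary):

```python
import math

def rotation(a, b):
    """Rodrigues rotation by angle a about v = (-sin b, cos b, 0):
    R = cos(a) I + sin(a) [v]_x + (1 - cos(a)) v v^T."""
    c, s = math.cos(a), math.sin(a)
    vx, vy, vz = -math.sin(b), math.cos(b), 0.0
    v = (vx, vy, vz)
    skew = [[0.0, -vz, vy], [vz, 0.0, -vx], [-vy, vx, 0.0]]
    return [[c * (i == j) + (1.0 - c) * v[i] * v[j] + s * skew[i][j]
             for j in range(3)] for i in range(3)]

def matvec(m, x):
    return [sum(m[i][j] * x[j] for j in range(3)) for i in range(3)]

a, b = 0.7, 1.2
rot = rotation(a, b)
axis = [-math.sin(b), math.cos(b), 0.0]

# The rotation axis is fixed: R v = v.
assert all(abs(y - x) < 1e-12 for x, y in zip(axis, matvec(rot, axis)))
# R is orthogonal: R^T R = identity.
assert all(abs(sum(rot[k][i] * rot[k][j] for k in range(3)) - (i == j)) < 1e-12
           for i in range(3) for j in range(3))
print("R(a, b) is an orthogonal rotation fixing v")
```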
It is then easy to express the general local unit West and North directions that complete the local incoming photon direction given by Eq. 2.2. They read $\displaystyle\mathbf{\bar{w}_{0}}=\left[-\sin^{2}b\left(1-\cos a\right)-\cos a\right]\mathbf{\bar{e}_{1}}+\sin b\cos b\left(1-\cos a\right)\mathbf{\bar{e}_{2}}+\cos b\sin a\,\mathbf{\bar{e}_{3}},$ $\displaystyle\mathbf{\bar{n}_{0}}=-\sin b\cos b\left(1-\cos a\right)\mathbf{\bar{e}_{1}}+\left[\cos^{2}b\left(1-\cos a\right)+\cos a\right]\mathbf{\bar{e}_{2}}-\sin b\sin a\,\mathbf{\bar{e}_{3}}.$ (78) ## Appendix B Electron gyration and polarization vector direction Our discussion of the polarization vector in section 2.3 took place in the fluid frame. Here, we need to discuss a third frame naturally associated with the GRRT problem (after the observer frame and the fluid frame), namely the particle frame: the frame comoving with an individual electron swirling around the magnetic field lines and emitting synchrotron radiation as a consequence of this accelerated motion (see Fig. 3). The description of this infinitesimal motion (infinitesimal as compared to the natural scale of our problem, the gravitational radius) is the topic of this appendix. Our goal is to express the radiation electric field of an individual electron, as measured in the fluid frame. The direction of this vector should coincide with that of the polarization vector, which was introduced in Eq. 36 without any reference to the electron’s motion or to its radiation field. Let us consider the standard picture of synchrotron emission by a relativistic electron illustrated in Fig. 10, using the same notation and following the derivation of Westfold (1959). 
In the fluid frame, we consider a direct orthonormal triad, $(\boldsymbol{\mathcal{I}},\boldsymbol{\mathcal{J}},\boldsymbol{\mathcal{K}})$, such that $\boldsymbol{\mathcal{I}}$ is antiparallel to the acceleration vector at some initial time $t=0$ (coinciding with the proper time of the fluid frame), $\boldsymbol{\mathcal{K}}$ is along the ambient magnetic field $\mathbf{b}$ (measured in the fluid frame), and $\boldsymbol{\mathcal{J}}$ completes the triad. We call $\boldsymbol{\beta}$ the velocity 3-vector of the electron in the fluid frame and $\boldsymbol{\dot{\beta}}$ the corresponding acceleration. A synchrotron wave is emitted by the accelerated electron in the unit direction $\mathbf{\bar{K}}$ (measured in the fluid frame). The emission angle $\theta_{B}$ and the pitch angle $\alpha$ are illustrated in Fig. 10. The velocity and acceleration read $\displaystyle\boldsymbol{\beta}=\beta\sin\alpha\left(-\sin\omega_{B}t\,\boldsymbol{\mathcal{I}}+\cos\omega_{B}t\,\boldsymbol{\mathcal{J}}\right)+\beta\cos\alpha\,\boldsymbol{\mathcal{K}},$ (79) $\displaystyle\boldsymbol{\dot{\beta}}=-\omega_{B}\beta\sin\alpha\left(\cos\omega_{B}t\,\boldsymbol{\mathcal{I}}+\sin\omega_{B}t\,\boldsymbol{\mathcal{J}}\right),$ where $\beta$ is the velocity of the electron in units of the speed of light, and $\omega_{B}$ is the cyclotron gyrofrequency. It is easy to check that at $t=0$, the projection of the velocity vector orthogonal to $\mathbf{b}$ is along $\boldsymbol{\mathcal{J}}$, and the acceleration vector is along $-\boldsymbol{\mathcal{I}}$, which is the setup illustrated in Fig. 10. 
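The $t=0$ statements can be verified directly from Eq. (79); a small sketch (ours; the parameter values are arbitrary):

```python
import math

beta_mag, alpha, omega_b = 0.9, 0.6, 1.3  # illustrative values

def velocity(t):
    """Eq. (79): electron velocity in the (I, J, K) triad."""
    s = beta_mag * math.sin(alpha)
    return (-s * math.sin(omega_b * t),
            s * math.cos(omega_b * t),
            beta_mag * math.cos(alpha))

def acceleration(t):
    """Eq. (79): corresponding acceleration (no K component at any time)."""
    s = omega_b * beta_mag * math.sin(alpha)
    return (-s * math.cos(omega_b * t), -s * math.sin(omega_b * t), 0.0)

v0, a0 = velocity(0.0), acceleration(0.0)
# At t = 0, the velocity component orthogonal to b (the K axis) is along +J...
assert v0[0] == 0.0 and v0[1] > 0.0
# ...and the acceleration is along -I, as illustrated in Fig. 10.
assert a0[0] < 0.0 and a0[1] == 0.0 and a0[2] == 0.0
print("t = 0 geometry matches Fig. 10")
```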
The radiation field of the moving charge in the fluid frame satisfies $\mathbf{E_{\mathrm{rad}}}\propto\mathbf{\bar{K}}\times\left[\left(\mathbf{\bar{K}}-\boldsymbol{\beta}\right)\times\boldsymbol{\dot{\beta}}\right].$ (80) Let us consider the unit direction of emission of a synchrotron photon, written in full generality as $\mathbf{\bar{K}}=a\,\boldsymbol{\mathcal{I}}+b\,\boldsymbol{\mathcal{J}}+c\,\boldsymbol{\mathcal{K}},$ (81) where $(a,b,c)$ are arbitrary real numbers such that $\mathbf{\bar{K}}$ is a unit vector. It is easy to show that $\mathbf{E_{\mathrm{rad}}}\cdot\boldsymbol{\mathcal{K}}\propto\beta\cos\alpha-c.$ (82) The left-hand side represents the projection of the radiating electric field along the ambient magnetic field direction. It is well known that the beaming effect confines the radiation within a narrow cone around the pitch angle of the relativistically moving electron (see Fig. 6.5 of Rybicki and Lightman, 1979). This exactly means that $\beta\cos\alpha-c\approx 0,$ (83) such that the radiating electric field is orthogonal to the ambient magnetic field. It follows that $\mathbf{E_{\mathrm{rad}}}\propto\mathbf{\bar{K}}\times\mathbf{b},$ (84) so that the right-hand side can be used to define the direction of the synchrotron polarization vector, as done in Eq. 36. Figure 10: Geometry of the synchrotron-emitting electron motion in the fluid frame. $\boldsymbol{\beta}$ is the velocity 3-vector, $\mathbf{\bar{K}}$ is the direction of emission, and $\mathbf{b}$ is the magnetic field vector, all defined in the fluid frame. The emission angle between the magnetic field direction and the direction of emission is called $\theta_{B}$, while $\alpha$ is the pitch angle between the magnetic field direction and the velocity of the electron. 
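Eq. (82) follows from the BAC-CAB expansion of Eq. (80), $\mathbf{\bar{K}}\times\left[\left(\mathbf{\bar{K}}-\boldsymbol{\beta}\right)\times\boldsymbol{\dot{\beta}}\right]=\left(\mathbf{\bar{K}}\cdot\boldsymbol{\dot{\beta}}\right)\left(\mathbf{\bar{K}}-\boldsymbol{\beta}\right)-\left[\mathbf{\bar{K}}\cdot\left(\mathbf{\bar{K}}-\boldsymbol{\beta}\right)\right]\boldsymbol{\dot{\beta}}$: since $\boldsymbol{\dot{\beta}}$ has no $\boldsymbol{\mathcal{K}}$ component, the $\boldsymbol{\mathcal{K}}$ projection reduces to $\left(\mathbf{\bar{K}}\cdot\boldsymbol{\dot{\beta}}\right)\left(c-\beta\cos\alpha\right)$. A numerical sketch of this identity at $t=0$ (ours; the parameter values and directions are arbitrary):

```python
import math
import random

random.seed(3)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

beta_mag, alpha = 0.9, 0.6                  # arbitrary illustrative values
# Eq. (79) at t = 0 in the (I, J, K) triad (taking omega_B = 1):
beta = (0.0, beta_mag * math.sin(alpha), beta_mag * math.cos(alpha))
beta_dot = (-beta_mag * math.sin(alpha), 0.0, 0.0)

for _ in range(5):
    k = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in k))
    k = [x / norm for x in k]                         # unit K = (a, b, c)
    kmb = [ki - bi for ki, bi in zip(k, beta)]
    e_rad = cross(k, cross(kmb, beta_dot))            # Eq. (80), up to a factor
    kdotbd = sum(ki * bi for ki, bi in zip(k, beta_dot))
    # K projection of E_rad equals (K . beta_dot) (c - beta cos(alpha)):
    assert abs(e_rad[2] - kdotbd * (k[2] - beta_mag * math.cos(alpha))) < 1e-12
print("E_rad . K is proportional to beta cos(alpha) - c, as in Eq. (82)")
```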
## Appendix C Solving the polarized radiative transfer equation Let us start with the unpolarized radiative transfer equation in the emitter frame, $\frac{\mathrm{d}I}{\mathrm{d}s}=-\alpha I+j,$ (85) with obvious notations. The formal solution reads $I(\tau)=\int_{\tau_{0}}^{\tau}\mathrm{exp}\left(-(\tau-\tau^{\prime})\right)S(\tau^{\prime})\mathrm{d}\tau^{\prime},$ (86) where $S=j/\alpha$ is the source function, $\mathrm{d}\tau=\alpha\,\mathrm{d}s$ is the optical depth element, $s$ is the proper length in the emitter frame, and $\tau_{0}$ is the optical depth at some initial location $s_{0}$ where the intensity is assumed to be zero. This equation can equivalently be written $I(s)=\int_{s_{0}}^{s}\mathrm{exp}\left(-\alpha(s-s^{\prime})\right)j(s^{\prime})\mathrm{d}s^{\prime},$ (87) where $\alpha$ is assumed constant over the integration interval. Considering a small range $\delta s=s-s^{\prime}$ between some previous location $s^{\prime}$ and some current location $s$, over which interval $j$ and $\alpha$ can be considered constant in a realistic problem, the increment of intensity reads $\delta I(s)=j(s)\,\delta s\,\mathrm{exp}\left(-\alpha(s)\,\delta s\right).$ (88) Let us now come back to the polarized version of the radiative transfer equation (Eq. 69), which reads $\frac{\mathrm{d}\boldsymbol{\mathcal{I}}}{\mathrm{d}s}=-\boldsymbol{\mathcal{K}}\boldsymbol{\mathcal{I}}+\boldsymbol{\mathcal{J}},$ (89) where $\boldsymbol{\mathcal{I}}$ is the vector of Stokes parameters and $\boldsymbol{\mathcal{K}}=\mathbf{R}\left(\chi\right)\left(\begin{array}[]{cccc}\alpha_{I}&\alpha_{Q}&\alpha_{U}&\alpha_{V}\\ \alpha_{Q}&\alpha_{I}&r_{V}&-r_{U}\\ \alpha_{U}&-r_{V}&\alpha_{I}&r_{Q}\\ \alpha_{V}&r_{U}&-r_{Q}&\alpha_{I}\end{array}\right)\mathbf{R}\left(-\chi\right);\qquad\boldsymbol{\mathcal{J}}=\mathbf{R}\left(\chi\right)\left(\begin{array}[]{c}j_{I}\\ j_{Q}\\ j_{U}\\ j_{V}\end{array}\right)$ (90) with $\mathbf{R}\left(\chi\right)$ being the rotation matrix given in Eq. 70. 
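As a sanity check of the unpolarized scheme above, for constant $\alpha$ and $j$ the discretized formal solution of Eq. (87) must converge to the closed form $I=(j/\alpha)\left(1-e^{-\alpha(s-s_{0})}\right)$. A short numerical sketch (ours; the coefficient values are arbitrary):

```python
import math

alpha, j, path = 0.7, 2.0, 5.0   # constant absorption, emission, path length
n_steps = 100000
ds = path / n_steps

# Each slab emits j*ds, attenuated over the remaining path, as in Eq. (87)
# (midpoint sampling of the slab position).
i_num = sum(j * ds * math.exp(-alpha * (path - (k + 0.5) * ds))
            for k in range(n_steps))

# Closed form for constant coefficients: (j/alpha) (1 - exp(-alpha L)).
i_ana = (j / alpha) * (1.0 - math.exp(-alpha * path))

assert abs(i_num - i_ana) / i_ana < 1e-6
print(i_num, i_ana)  # the two agree to better than one part in a million
```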
Its formal solution is the direct generalization of Eq. 86, provided that $\boldsymbol{\mathcal{K}}$ is constant with $s$: $\boldsymbol{\mathcal{I}}(s)=\int_{s_{0}}^{s}\mathrm{exp}\left(-\boldsymbol{\mathcal{K}}(s-s^{\prime})\right)\boldsymbol{\mathcal{J}}(s^{\prime})\mathrm{d}s^{\prime}.$ (91) Let us introduce the matrix $\mathbf{O}(s,s^{\prime})=\mathrm{exp}\left(-\boldsymbol{\mathcal{K}}(s-s^{\prime})\right).$ (92) Over a small interval of proper length $\delta s=s-s^{\prime}$, over which the absorption matrix and emission vector can be considered constant, the elementary increase of Stokes parameters is $\delta\boldsymbol{\mathcal{I}}(s)=\mathbf{O}(\delta s)\boldsymbol{\mathcal{J}}(s)\,\delta s,$ (93) which is the direct generalization of Eq. 88. This is the equation used in Gyoto to update the Stokes parameters along the light ray. We still need to compute the matrix exponential appearing in Eq. 92. Landi Degl’Innocenti and Landi Degl’Innocenti (1985) have given an expression for this matrix. 
It reads $\displaystyle\mathbf{O}(\delta s)=\mathrm{exp}\left(-\alpha_{I}\delta s\right)\left\{\left[\mathrm{cosh}\left(\Lambda_{1}\delta s\right)+\cos\left(\Lambda_{2}\delta s\right)\right]\mathbf{M_{1}}/2-\sin\left(\Lambda_{2}\delta s\right)\mathbf{M_{2}}-\mathrm{sinh}\left(\Lambda_{1}\delta s\right)\mathbf{M_{3}}+\left[\mathrm{cosh}\left(\Lambda_{1}\delta s\right)-\cos\left(\Lambda_{2}\delta s\right)\right]\mathbf{M_{4}}/2\right\}$ (94) with $\mathbf{M_{1}}=\mathbf{1},$ (95) $\mathbf{M_{2}}=\frac{1}{\Theta}\left(\begin{array}[]{cccc}0&\Lambda_{2}\tilde{\alpha}_{Q}-\sigma\Lambda_{1}\tilde{r}_{Q}&\Lambda_{2}\tilde{\alpha}_{U}-\sigma\Lambda_{1}\tilde{r}_{U}&\Lambda_{2}\tilde{\alpha}_{V}-\sigma\Lambda_{1}\tilde{r}_{V}\\ \Lambda_{2}\tilde{\alpha}_{Q}-\sigma\Lambda_{1}\tilde{r}_{Q}&0&\sigma\Lambda_{1}\tilde{\alpha}_{V}+\Lambda_{2}\tilde{r}_{V}&-\sigma\Lambda_{1}\tilde{\alpha}_{U}-\Lambda_{2}\tilde{r}_{U}\\ \Lambda_{2}\tilde{\alpha}_{U}-\sigma\Lambda_{1}\tilde{r}_{U}&-\sigma\Lambda_{1}\tilde{\alpha}_{V}-\Lambda_{2}\tilde{r}_{V}&0&\sigma\Lambda_{1}\tilde{\alpha}_{Q}+\Lambda_{2}\tilde{r}_{Q}\\ \Lambda_{2}\tilde{\alpha}_{V}-\sigma\Lambda_{1}\tilde{r}_{V}&\sigma\Lambda_{1}\tilde{\alpha}_{U}+\Lambda_{2}\tilde{r}_{U}&-\sigma\Lambda_{1}\tilde{\alpha}_{Q}-\Lambda_{2}\tilde{r}_{Q}&0\end{array}\right),$ (100) $\mathbf{M_{3}}=\frac{1}{\Theta}\left(\begin{array}[]{cccc}0&\Lambda_{1}\tilde{\alpha}_{Q}+\sigma\Lambda_{2}\tilde{r}_{Q}&\Lambda_{1}\tilde{\alpha}_{U}+\sigma\Lambda_{2}\tilde{r}_{U}&\Lambda_{1}\tilde{\alpha}_{V}+\sigma\Lambda_{2}\tilde{r}_{V}\\ \Lambda_{1}\tilde{\alpha}_{Q}+\sigma\Lambda_{2}\tilde{r}_{Q}&0&-\sigma\Lambda_{2}\tilde{\alpha}_{V}+\Lambda_{1}\tilde{r}_{V}&\sigma\Lambda_{2}\tilde{\alpha}_{U}-\Lambda_{1}\tilde{r}_{U}\\ \Lambda_{1}\tilde{\alpha}_{U}+\sigma\Lambda_{2}\tilde{r}_{U}&\sigma\Lambda_{2}\tilde{\alpha}_{V}-\Lambda_{1}\tilde{r}_{V}&0&-\sigma\Lambda_{2}\tilde{\alpha}_{Q}+\Lambda_{1}\tilde{r}_{Q}\\ \Lambda_{1}\tilde{\alpha}_{V}+\sigma\Lambda_{2}\tilde{r}_{V}&-\sigma\Lambda_{2}\tilde{\alpha}_{U}+\Lambda_{1}\tilde{r}_{U}&\sigma\Lambda_{2}\tilde{\alpha}_{Q}-\Lambda_{1}\tilde{r}_{Q}&0\end{array}\right),$ (105) $\mathbf{M_{4}}=\frac{2}{\Theta}\left(\begin{array}[]{cccc}(\tilde{\alpha}^{2}+\tilde{r}^{2})/2&\tilde{\alpha}_{V}\tilde{r}_{U}-\tilde{\alpha}_{U}\tilde{r}_{V}&\tilde{\alpha}_{Q}\tilde{r}_{V}-\tilde{\alpha}_{V}\tilde{r}_{Q}&\tilde{\alpha}_{U}\tilde{r}_{Q}-\tilde{\alpha}_{Q}\tilde{r}_{U}\\ \tilde{\alpha}_{U}\tilde{r}_{V}-\tilde{\alpha}_{V}\tilde{r}_{U}&\tilde{\alpha}_{Q}^{2}+\tilde{r}_{Q}^{2}-(\tilde{\alpha}^{2}+\tilde{r}^{2})/2&\tilde{\alpha}_{Q}\tilde{\alpha}_{U}+\tilde{r}_{Q}\tilde{r}_{U}&\tilde{\alpha}_{V}\tilde{\alpha}_{Q}+\tilde{r}_{V}\tilde{r}_{Q}\\ \tilde{\alpha}_{V}\tilde{r}_{Q}-\tilde{\alpha}_{Q}\tilde{r}_{V}&\tilde{\alpha}_{Q}\tilde{\alpha}_{U}+\tilde{r}_{Q}\tilde{r}_{U}&\tilde{\alpha}_{U}^{2}+\tilde{r}_{U}^{2}-(\tilde{\alpha}^{2}+\tilde{r}^{2})/2&\tilde{\alpha}_{U}\tilde{\alpha}_{V}+\tilde{r}_{U}\tilde{r}_{V}\\ \tilde{\alpha}_{Q}\tilde{r}_{U}-\tilde{\alpha}_{U}\tilde{r}_{Q}&\tilde{\alpha}_{V}\tilde{\alpha}_{Q}+\tilde{r}_{V}\tilde{r}_{Q}&\tilde{\alpha}_{U}\tilde{\alpha}_{V}+\tilde{r}_{U}\tilde{r}_{V}&\tilde{\alpha}_{V}^{2}+\tilde{r}_{V}^{2}-(\tilde{\alpha}^{2}+\tilde{r}^{2})/2\end{array}\right),$ (110) where $\displaystyle\tilde{\alpha}^{2}=\tilde{\alpha}_{Q}^{2}+\tilde{\alpha}_{U}^{2}+\tilde{\alpha}_{V}^{2},\qquad\tilde{r}^{2}=\tilde{r}_{Q}^{2}+\tilde{r}_{U}^{2}+\tilde{r}_{V}^{2},$ (111) $\displaystyle\Lambda_{1}=\sqrt{\sqrt{\frac{1}{4}(\tilde{\alpha}^{2}-\tilde{r}^{2})^{2}+(\tilde{\alpha}_{Q}\tilde{r}_{Q}+\tilde{\alpha}_{U}\tilde{r}_{U}+\tilde{\alpha}_{V}\tilde{r}_{V})^{2}}+\frac{1}{2}(\tilde{\alpha}^{2}-\tilde{r}^{2})},$ $\displaystyle\Lambda_{2}=\sqrt{\sqrt{\frac{1}{4}(\tilde{\alpha}^{2}-\tilde{r}^{2})^{2}+(\tilde{\alpha}_{Q}\tilde{r}_{Q}+\tilde{\alpha}_{U}\tilde{r}_{U}+\tilde{\alpha}_{V}\tilde{r}_{V})^{2}}-\frac{1}{2}(\tilde{\alpha}^{2}-\tilde{r}^{2})},$ $\displaystyle\Theta=\Lambda_{1}^{2}+\Lambda_{2}^{2},$ $\displaystyle\sigma=\mathrm{sign}\left(\tilde{\alpha}_{Q}\tilde{r}_{Q}+\tilde{\alpha}_{U}\tilde{r}_{U}+\tilde{\alpha}_{V}\tilde{r}_{V}\right),$ and where the tilde quantities $\tilde{\alpha}_{X}$, $\tilde{r}_{X}$ take into account the basis rotation by an angle $\chi$ and read $\left(\begin{array}[]{c}\tilde{\alpha}_{Q}\\ \tilde{\alpha}_{U}\end{array}\right)=\left(\begin{array}[]{cc}\cos 2\chi&-\sin 2\chi\\ \sin 2\chi&\cos 2\chi\end{array}\right)\left(\begin{array}[]{c}\alpha_{Q}\\ \alpha_{U}\end{array}\right),$ (112) and similarly for $\tilde{r}_{Q}$ and $\tilde{r}_{U}$, while $\tilde{\alpha}_{I,V}$ and $\tilde{r}_{V}$ are the same as their counterparts without a tilde, given that $I$ and $V$ are not affected by the rotation. ## Acknowledgements It is a pleasure for the authors to acknowledge continuous discussions with Maciek Wielgus on the topic of the paper. The authors gratefully acknowledge fruitful discussions with B. Cerutti and B. Crinquand, as well as very useful exchanges with J. Vos for the code comparison of section 3.3. E. G. acknowledges funding by l’Agence Nationale de la Recherche, Project StronG ANR-22-CE31-0015. ## References * Aimar et al. (2023) Aimar N, Dmytriiev A, Vincent F H, El Mellah I, Paumard T, Perrin G and Zech A 2023 A&A 672, A62. * Anantua et al. 
(2020) Anantua R, Ressler S and Quataert E 2020 MNRAS 493(1), 1404–1418. * Broderick et al. (2016) Broderick A E, Fish V L, Johnson M D, Rosenfeld K, Wang C, Doeleman S S, Akiyama K, Johannsen T and Roy A L 2016 ApJ 820(2), 137. * Bronzwaer et al. (2018) Bronzwaer T, Davelaar J, Younsi Z, Mościbrodzka M, Falcke H, Kramer M and Rezzolla L 2018 A&A 613, A2. * Bronzwaer et al. (2020) Bronzwaer T, Younsi Z, Davelaar J and Falcke H 2020 A&A 641, A126. * Cárdenas-Avendaño and Lupsasca (2023) Cárdenas-Avendaño A and Lupsasca A 2023 Phys. Rev. D 108(6), 064043. * Cárdenas-Avendaño et al. (2023) Cárdenas-Avendaño A, Lupsasca A and Zhu H 2023 Phys. Rev. D 107(4), 043030. * Casse et al. (2017) Casse F, Varniere P and Meliani Z 2017 MNRAS 464(3), 3704–3712. * Chael et al. (2019) Chael A, Narayan R and Johnson M D 2019 MNRAS 486(2), 2873–2895. * Chael et al. (2018) Chael A, Rowan M, Narayan R, Johnson M and Sironi L 2018 MNRAS 478(4), 5209–5229. * Chan et al. (2018) Chan C k, Medeiros L, Özel F and Psaltis D 2018 ApJ 867(1), 59. * Chan et al. (2015) Chan C k, Psaltis D, Özel F, Medeiros L, Marrone D, Sadowski A and Narayan R 2015 ApJ 812(2), 103. * Dauser et al. (2010) Dauser T, Wilms J, Reynolds C S and Brenneman L W 2010 MNRAS 409(4), 1534–1540. * Davelaar et al. (2018) Davelaar J, Mościbrodzka M, Bronzwaer T and Falcke H 2018 A&A 612, A34. * Dexter (2016) Dexter J 2016 MNRAS 462(1), 115–136. * Dexter and Agol (2009) Dexter J and Agol E 2009 ApJ 696(2), 1616–1629. * Dexter et al. (2020) Dexter J, Tchekhovskoy A, Jiménez-Rosales A, Ressler S M, Bauböck M, Dallilar Y, de Zeeuw P T, Eisenhauer F, von Fellenberg S, Gao F, Genzel R, Gillessen S, Habibi M, Ott T, Stadler J, Straub O and Widmann F 2020 MNRAS 497(4), 4999–5007. * Dovčiak et al. (2008) Dovčiak M, Muleri F, Goosmann R W, Karas V and Matt G 2008 MNRAS 391(1), 32–38. * Dovčiak et al. (2022) Dovčiak M, Papadakis I E, Kammoun E S and Zhang W 2022 A&A 661, A135. * Event Horizon Telescope Collaboration et al. 
(2022) Event Horizon Telescope Collaboration, Akiyama K, Alberdi A, Alef W, Algaba J C, Anantua R, Asada K, Azulay R, Bach U, Baczko A K, Ball D, Baloković M, Barrett J, Bauböck M, Benson B A, Bintley D, Blackburn L, Blundell R, Bouman K L, Bower G C, Boyce H, Bremer M, Brinkerink C D, Brissenden R, Britzen S, Broderick A E, Broguiere D, Bronzwaer T, Bustamante S, Byun D Y, Carlstrom J E, Ceccobello C, Chael A, Chan C k, Chatterjee K, Chatterjee S, Chen M T, Chen Y, Cheng X, Cho I, Christian P, Conroy N S, Conway J E, Cordes J M, Crawford T M, Crew G B, Cruz-Osorio A, Cui Y, Davelaar J, De Laurentis M, Deane R, Dempsey J, Desvignes G, Dexter J, Dhruv V, Doeleman S S, Dougal S, Dzib S A, Eatough R P, Emami R, Falcke H, Farah J, Fish V L, Fomalont E, Ford H A, Fraga-Encinas R, Freeman W T, Friberg P, Fromm C M, Fuentes A, Galison P, Gammie C F, García R, Gentaz O, Georgiev B, Goddi C, Gold R, Gómez-Ruiz A I, Gómez J L, Gu M, Gurwell M, Hada K, Haggard D, Haworth K, Hecht M H, Hesper R, Heumann D, Ho L C, Ho P, Honma M, Huang C W L, Huang L, Hughes D H, Ikeda S, Impellizzeri C M V, Inoue M, Issaoun S, James D J, Jannuzi B T, Janssen M, Jeter B, Jiang W, Jiménez-Rosales A, Johnson M D, Jorstad S, Joshi A V, Jung T, Karami M, Karuppusamy R, Kawashima T, Keating G K, Kettenis M, Kim D J, Kim J Y, Kim J, Kim J, Kino M, Koay J Y, Kocherlakota P, Kofuji Y, Koch P M, Koyama S, Kramer C, Kramer M, Krichbaum T P, Kuo C Y, La Bella N, Lauer T R, Lee D, Lee S S, Leung P K, Levis A, Li Z, Lico R, Lindahl G, Lindqvist M, Lisakov M, Liu J, Liu K, Liuzzo E, Lo W P, Lobanov A P, Loinard L, Lonsdale C J, Lu R S, Mao J, Marchili N, Markoff S, Marrone D P, Marscher A P, Martí-Vidal I, Matsushita S, Matthews L D, Medeiros L, Menten K M, Michalik D, Mizuno I, Mizuno Y, Moran J M, Moriyama K, Moscibrodzka M, Müller C, Mus A, Musoke G, Myserlis I, Nadolski A, Nagai H, Nagar N M, Nakamura M, Narayan R, Narayanan G, Natarajan I, Nathanail A, Fuentes S N, Neilsen J, Neri R, Ni C, Noutsos A, 
Nowak M A, Oh J, Okino H, Olivares H, Ortiz-León G N, Oyama T, Özel F, Palumbo D C M, Paraschos G F, Park J, Parsons H, Patel N, Pen U L, Pesce D W, Piétu V, Plambeck R, PopStefanija A, Porth O, Pötzl F M, Prather B, Preciado-López J A, Psaltis D, Pu H Y, Ramakrishnan V, Rao R, Rawlings M G, Raymond A W, Rezzolla L, Ricarte A, Ripperda B, Roelofs F, Rogers A, Ros E, Romero-Cañizales C, Roshanineshat A, Rottmann H, Roy A L, Ruiz I, Ruszczyk C, Rygl K L J, Sánchez S, Sánchez-Argüelles D, Sánchez-Portal M, Sasada M, Satapathy K, Savolainen T, Schloerb F P, Schonfeld J, Schuster K F, Shao L, Shen Z, Small D, Sohn B W, SooHoo J, Souccar K, Sun H, Tazaki F, Tetarenko A J, Tiede P, Tilanus R P J, Titus M, Torne P, Traianou E, Trent T, Trippe S, Turk M, van Bemmel I, van Langevelde H J, van Rossum D R, Vos J, Wagner J, Ward-Thompson D, Wardle J, Weintroub J, Wex N, Wharton R, Wielgus M, Wiik K, Witzel G, Wondrak M F, Wong G N, Wu Q, Yamaguchi P, Yoon D, Young A, Young K, Younsi Z, Yuan F, Yuan Y F, Zensus J A, Zhang S, Zhao G Y, Zhao S S, Agurto C, Allardi A, Amestica R, Araneda J P, Arriagada O, Berghuis J L, Bertarini A, Berthold R, Blanchard J, Brown K, Cárdenas M, Cantzler M, Caro P, Castillo-Domínguez E, Chan T L, Chang C C, Chang D O, Chang S H, Chang S C, Chen C C, Chilson R, Chuter T C, Ciechanowicz M, Colin-Beltran E, Coulson I M, Crowley J, Degenaar N, Dornbusch S, Durán C A, Everett W B, Faber A, Forster K, Fuchs M M, Gale D M, Geertsema G, González E, Graham D, Gueth F, Halverson N W, Han C C, Han K C, Hasegawa Y, Hernández-Rebollar J L, Herrera C, Herrero-Illana R, Heyminck S, Hirota A, Hoge J, Hostler Schimpf S R, Howie R E, Huang Y D, Jiang H, Jinchi H, John D, Kimura K, Klein T, Kubo D, Kuroda J, Kwon C, Lacasse R, Laing R, Leitch E M, Li C T, Liu C T, Liu K Y, Lin L C C, Lu L M, Mac-Auliffe F, Martin-Cocher P, Matulonis C, Maute J K, Messias H, Meyer-Zhao Z, Montaña A, Montenegro-Montes F, Montgomerie W, Moreno Nolasco M E, Muders D, Nishioka H, Norton T 
J, Nystrom G, Ogawa H, Olivares R, Oshiro P, Pérez-Beaupuits J P, Parra R, Phillips N M, Poirier M, Pradel N, Qiu R, Raffin P A, Rahlin A S, Ramírez J, Ressler S, Reynolds M, Rodríguez-Montoya I, Saez-Madain A F, Santana J, Shaw P, Shirkey L E, Silva K M, Snow W, Sousa D, Sridharan T K, Stahm W, Stark A A, Test J, Torstensson K, Venegas P, Walther C, Wei T S, White C, Wieching G, Wijnands R, Wouterloot J G A, Yu C Y, Yu W and Zeballos M 2022 ApJ 930(2), L12. * Event Horizon Telescope Collaboration et al. (2019) Event Horizon Telescope Collaboration, Akiyama K, Alberdi A, Alef W, Asada K, Azulay R, Baczko A K, Ball D, Baloković M, Barrett J, Bintley D, Blackburn L, Boland W, Bouman K L, Bower G C, Bremer M, Brinkerink C D, Brissenden R, Britzen S, Broderick A E, Broguiere D, Bronzwaer T, Byun D Y, Carlstrom J E, Chael A, Chan C k, Chatterjee S, Chatterjee K, Chen M T, Chen Y, Cho I, Christian P, Conway J E, Cordes J M, Crew G B, Cui Y, Davelaar J, De Laurentis M, Deane R, Dempsey J, Desvignes G, Dexter J, Doeleman S S, Eatough R P, Falcke H, Fish V L, Fomalont E, Fraga-Encinas R, Friberg P, Fromm C M, Gómez J L, Galison P, Gammie C F, García R, Gentaz O, Georgiev B, Goddi C, Gold R, Gu M, Gurwell M, Hada K, Hecht M H, Hesper R, Ho L C, Ho P, Honma M, Huang C W L, Huang L, Hughes D H, Ikeda S, Inoue M, Issaoun S, James D J, Jannuzi B T, Janssen M, Jeter B, Jiang W, Johnson M D, Jorstad S, Jung T, Karami M, Karuppusamy R, Kawashima T, Keating G K, Kettenis M, Kim J Y, Kim J, Kim J, Kino M, Koay J Y, Koch P M, Koyama S, Kramer M, Kramer C, Krichbaum T P, Kuo C Y, Lauer T R, Lee S S, Li Y R, Li Z, Lindqvist M, Liu K, Liuzzo E, Lo W P, Lobanov A P, Loinard L, Lonsdale C, Lu R S, MacDonald N R, Mao J, Markoff S, Marrone D P, Marscher A P, Martí-Vidal I, Matsushita S, Matthews L D, Medeiros L, Menten K M, Mizuno Y, Mizuno I, Moran J M, Moriyama K, Moscibrodzka M, Mul$\ddot{}$ler C, Nagai H, Nagar N M, Nakamura M, Narayan R, Narayanan G, Natarajan I, Neri R, Ni C, Noutsos 
# Dynamical Consequence of Shadows Cast to the Outer Protoplanetary Disks: I. Two-dimensional Simulations

Zehao Su, School of Physics and Astronomy, Beijing Normal University, Beijing 100875, China<EMAIL_ADDRESS>Institute for Frontier in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China; Xue-Ning Bai, Institute for Advanced Study, Tsinghua University, Beijing 100084, China; <EMAIL_ADDRESS>Department of Astronomy, Tsinghua University, Beijing 100084, China

###### Abstract

There has been increasing evidence of shadows in scattered light observations of outer protoplanetary disks (PPDs), cast from the (unresolved) disk inner region, while in the meantime these disks present substructures of various kinds in the submillimeter. As stellar irradiation is the primary heating source for the outer PPDs, the presence of such shadows thus suggests inhomogeneous heating of the outer disk in azimuth, leading to a “thermal forcing” with dynamical consequences. We conduct a suite of idealized 2D simulations of the outer disk with an azimuthally-varying cooling prescription to mimic the effect of shadows, generally assuming the shadow is static or slowly rotating. The linear response to such shadows is two-armed spirals with the same pattern speed as the shadow. Towards the nonlinear regime, we find that shadows can potentially lead to the formation of a variety of types of substructures including rings, spirals and crescents, depending on viscosity, cooling time, etc. We have conducted a systematic and statistical characterization of the simulation suite: as thermal forcing from the shadow strengthens, the dominant form of shadow-induced disk substructures changes from spirals to rings, and eventually to crescents/vortices. Our results highlight the importance of properly modeling the dynamical impact of inhomogeneous stellar irradiation, while calling for more detailed modeling incorporating more realistic disk physics.
Keywords: protoplanetary disks, shadows, hydrodynamics, substructures

## 1 Introduction

Thanks to the advent of new observational facilities and instruments to conduct spatially resolved observations of protoplanetary disks (PPDs), it has now been well established that disk substructures are ubiquitous (e.g. Bae et al., 2023). At millimeter/sub-millimeter wavelengths, the Atacama Large Millimeter Array (ALMA) has revealed the richness of disk substructures, primarily in the form of rings and gaps in addition to other features such as spirals and crescents at different radii (e.g. ALMA Partnership et al., 2015; Monnier et al., 2017; Avenhaus et al., 2018; Isella et al., 2018; Huang et al., 2020; Gratton et al., 2019; Andrews, 2020). These observations reflect the thermal emission from mm-sized dust grains around the disk midplane, which are biased tracers of the gas density profiles due to the finite aerodynamic coupling between gas and dust. In the optical and near infrared (NIR), high-contrast imaging with extreme adaptive optics (e.g., VLT/SPHERE, VLT/CRIRES, GPI) reveals even richer and more complex features (e.g. Benisty et al., 2015; Pinilla et al., 2015; Pohl et al., 2017; van Boekel et al., 2017; Garufi et al., 2018; Benisty et al., 2023). Emission at optical/NIR wavelengths mainly results from starlight scattered by micron-sized dust grains (better coupled to the gas) suspended in the disk, which are better tracers of the disk surface layers. As a result, features seen in scattered light do not necessarily have direct correspondence to substructures recognized by ALMA (e.g. van der Marel et al., 2016; Uyama et al., 2018; Pérez et al., 2018; Muro-Arena et al., 2018). At least partly contributing to the complexity of features seen in scattered light is the presence of shadows, typically defined as low-intensity regions that are confined to specific azimuthal angles (Benisty et al., 2023).
They must be cast from the (unresolved) disk inner region, and can be mainly classified into two types: shadows broadly extended in azimuth (e.g. Muro-Arena et al., 2020) and narrow shadow lanes spanning only a few degrees (e.g. Ginski et al., 2021). Considerable effort has been devoted to understanding the origin of shadows because the morphology and temporal variation of shadows can provide indirect information about the disk’s inner regions. The most common cause of shadow casting is the presence of a misaligned/warped inner disk. For instance, TW Hya shows a moving shadow pattern that could suggest a precessing inner disk (Debes et al., 2017); shadows in HD 143006 can be reproduced using a $30^{\circ}$ misaligned inner disk (Benisty et al., 2018); fast time variations of shadows in RX J1604.3-2130A may come from dust very close to the central star in an irregular misaligned inner disk (Pinilla et al., 2015); narrow shadow lanes in SU Aur possibly indicate misalignment caused by late-time interactions with infalling material (Kuffmeier et al., 2021); shadows in HD 139614 can be explained by the combination of a misaligned inner ring and disk (Muro-Arena et al., 2020); and the side-switching of the flux ratio in the brightest nebula of IRAS 04302 can be reproduced with a tilted inner disk (Villenave et al., 2023). Even for disks with nearly aligned inner regions, subtle shadowing effects can still be recognized (Monnier et al., 2017). In addition to the misalignment of the inner disk regions, variations in the scale height of the inner disk atmosphere could also be responsible for generating shadows, such as in HD 163296 (Rich et al., 2019, 2020; Varga et al., 2021; GRAVITY Collaboration et al., 2021). Most effort aiming to understand disk shadows so far has focused on modeling the (inner) disk morphology to explain the shadow features using radiative transfer calculations (e.g.
Casassus et al., 2018; Benisty et al., 2018; Nealon et al., 2019; Muro-Arena et al., 2020). On the other hand, we note that as stellar irradiation is the primary source of heating in the bulk of the (outer) PPDs, the presence of shadows must also give rise to dynamical consequences by “thermal forcing”: the disk gas experiences (quasi-)periodic cooling and heating as it enters and exits the shadow, so that it hardly settles to thermal equilibrium and constantly exerts modest or even strong pressure perturbations on the neighboring fluid. This effect was first explored in Montesinos et al. (2016), who conducted 2D hydrodynamic simulations that took into account both stellar irradiation and the periodic forcing of shadows with an opening angle of $28^{\circ}$ in the context of the transition disk HD 142527. They found that azimuthal pressure gradients generated by shadows can trigger $m=2$ spirals, which are enhanced by self-gravity and give rise to observable quasi-steady spiral signals. In this work, motivated by the diversity of shadowing features seen in scattered light images and the case study by Montesinos et al. (2016) for the HD 142527 disk, we aim at a systematic exploration of the dynamical consequences of shadows cast onto outer PPDs. As an initial effort, we restrict ourselves to vertically-integrated systems using 2D hydrodynamic simulations. We follow the evolution of a passive, viscous gaseous disk with a thermal relaxation prescription towards a target temperature, which is set by stellar irradiation subject to shadowing. By exploring a large suite of simulations varying the viscosity, cooling time, and shadow geometry, we find that shadows can result in the formation of a wide variety of disk substructures, and we perform a statistical analysis of all the substructures generated in our simulations. This paper is organized as follows.
We detail our simulation setup in Section 2, followed by a description of representative features of shadow-driven disk substructures in Section 3. We present a statistical analysis of all our simulations and describe the substructure-forming process from the linear to the nonlinear regime in Section 4. We examine the linear regime, rotating shadows, and the dependence on disk aspect ratio in Section 5. Finally, we discuss the caveats and conclude in Section 6.

## 2 Numerical Methods

### 2.1 Simulation Setup

We solve the vertically integrated viscous hydrodynamic equations using the grid-based higher-order Godunov code ATHENA++ (Stone et al., 2020) in cylindrical coordinates $(r,\varphi)$. The governing equations in conservative form are: $\frac{\partial\Sigma}{\partial t}+\nabla\cdot(\Sigma\textbf{v})=0,$ (1) $\frac{\partial\Sigma\textbf{v}}{\partial t}+\nabla\cdot(\Sigma\textbf{v}\textbf{v}+P\mathcal{I}+\mathcal{T}_{vis})=-\Sigma\nabla\Phi,$ (2) $\frac{\partial E}{\partial t}+\nabla\cdot\left[(E+P)\textbf{v}+\mathcal{T}_{vis}\cdot\textbf{v}\right]=-\Sigma\textbf{v}\cdot\nabla\Phi+\Lambda,$ (3) where $\Sigma$ is the gas surface density in the disk, $\textbf{v}$ is the gas velocity, $P=\Sigma T$ is the vertically integrated pressure where $T$ is the disk temperature, $\mathcal{I}$ is the identity tensor, $\Phi$ is the gravitational potential written as $\Phi=-GM_{*}/r$ where $M_{*}$ is the mass of the central star, $E$ is the total energy density, and $\Lambda$ is the cooling source term. The viscous stress tensor $\mathcal{T}_{vis}$ in the momentum equation reads $\mathcal{T}_{vis}=-\Sigma\nu\left(\frac{\partial v_{i}}{\partial x_{j}}+\frac{\partial v_{j}}{\partial x_{i}}-\frac{2}{3}\frac{\partial v_{k}}{\partial x_{k}}\delta_{ij}\right),$ (4) with $\nu$ being the kinematic viscosity. The total energy density $E$ is given by the combination of kinetic energy and internal energy: $E=\frac{1}{2}\Sigma v^{2}+\frac{P}{\gamma-1},$ (5) where $\gamma=7/5$ is the adiabatic index for molecular gas.
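The relations between conserved and primitive variables implied by Eq. (5) and the equation of state $P=\Sigma T$ can be sketched as follows (a minimal NumPy illustration of our own, not the actual ATHENA++ implementation; all names are ours):

```python
import numpy as np

GAMMA = 7.0 / 5.0  # adiabatic index for molecular gas

def cons_to_prim(Sigma, mom_r, mom_phi, E):
    """Recover velocities, pressure and temperature from conserved variables.

    Uses E = 0.5*Sigma*v^2 + P/(gamma-1) together with the vertically
    integrated equation of state P = Sigma*T (so T equals c_s^2).
    """
    v_r = mom_r / Sigma
    v_phi = mom_phi / Sigma
    kinetic = 0.5 * Sigma * (v_r**2 + v_phi**2)
    P = (GAMMA - 1.0) * (E - kinetic)
    T = P / Sigma  # temperature = isothermal sound speed squared in code units
    return v_r, v_phi, P, T

def prim_to_cons(Sigma, v_r, v_phi, T):
    """Inverse map: build momenta and total energy density from primitives."""
    P = Sigma * T
    E = 0.5 * Sigma * (v_r**2 + v_phi**2) + P / (GAMMA - 1.0)
    return Sigma * v_r, Sigma * v_phi, E
```

The two maps are exact inverses of each other, which is a convenient consistency check when post-processing simulation snapshots.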
Note that viscous heating is automatically included in the energy equation, although it is generally unimportant in the outer PPDs. The gas temperature $T$ is associated with the isothermal sound speed as $T=c_{s}^{2}$, which yields the disk scale height $H=c_{s}/\Omega_{K}$, where $\Omega_{K}=(GM_{*}/r^{3})^{1/2}$ is the Keplerian angular frequency. The disk aspect ratio is then given by $h=H/r$. With this, the viscosity follows the standard $\alpha$ prescription (Shakura & Sunyaev, 1973), $\nu=\alpha c_{s}H$. It is worth noting that the viscosity varies as the disk evolves.

Figure 1: Illustration of our shadow morphology depicted by normalized temperature in disks with $\sigma_{\phi}=0.236$ and $0.079$ for fast ($\beta=0.001$) and slow ($\beta=1$) cooling processes. With fast cooling, the low-temperature regions caused by shadows largely reflect the target temperature we prescribe at the shadow location. With a longer cooling timescale, these regions deviate from the prescribed shadow positions towards the leading side, and exhibit significantly weaker amplitudes.

We choose the initial density profile to be a power law: $\Sigma_{\rm init}=\Sigma_{0}\left(\frac{r}{r_{0}}\right)^{d},$ (6) where $\Sigma_{0}$ is the density at the reference radius $r_{0}$, which we take to be the radius of the inner boundary. We specify the disk initial temperature as $T_{\rm init}\equiv c_{s_{0}}^{2}\left(\frac{r}{r_{0}}\right)^{p}=\left(h_{0}r_{0}\Omega_{K_{0}}\right)^{2}\left(\frac{r}{r_{0}}\right)^{p},$ (7) where $c_{s_{0}}$, $h_{0}$, $\Omega_{K_{0}}$ are the values of the isothermal sound speed, aspect ratio and Keplerian angular velocity at the reference radius $r_{0}$.
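A minimal NumPy sketch of these initial conditions (our own illustration, not the actual setup code); the rotation and accretion velocities follow from the radial force balance given in the following equations, and the grid parameters are those quoted in this section:

```python
import numpy as np

# Code units: G*M_star = Sigma_0 = r_0 = 1, with fiducial h_0 = 0.1 (Eqs. 6-7).
GM, SIGMA0, R0, H0 = 1.0, 1.0, 1.0, 0.1

def initial_equilibrium(r, d=-0.5, p=-1.0, alpha=1e-3):
    """Power-law initial profiles; steady-state accretion needs d + p = -3/2."""
    assert np.isclose(d + p, -1.5), "steady-state accretion requires d + p = -3/2"
    Sigma = SIGMA0 * (r / R0) ** d                               # Eq. (6)
    cs2 = (H0 * R0 * np.sqrt(GM / R0**3)) ** 2 * (r / R0) ** p   # Eq. (7), T = c_s^2
    omega_K = np.sqrt(GM / r**3)
    v_phi = np.sqrt((p + d) * cs2 + GM / r)   # radial force balance
    v_r = -1.5 * alpha * cs2 / (r * omega_K)  # viscous accretion velocity
    return Sigma, cs2, v_phi, v_r

# Logarithmic radial grid: 512 cells from r_in = 1 to r_out = 30 (cell centers
# taken as geometric means of the cell edges).
edges = np.geomspace(1.0, 30.0, 513)
r = np.sqrt(edges[:-1] * edges[1:])
Sigma, T, v_phi, v_r = initial_equilibrium(r)
```

Note that $v_\varphi$ is slightly sub-Keplerian because the pressure gradient term $(p+d)c_s^2$ is negative for the profiles used here.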
The radial force balance leads to the initial rotation profile calculated by $v_{\varphi}(r)=\left[\left(p+d\right)c_{s}^{2}+\frac{GM_{*}}{r}\right]^{1/2}\ .$ (8) With viscosity, the radial velocity is set by the accretion velocity given by $v_{r}(r)=-\frac{3}{2}\frac{\alpha c_{s}^{2}}{r\Omega_{K}}\ .$ (9) To ensure steady-state accretion in the initial equilibrium (shadow not included) with constant $\alpha$, the initial temperature and density profiles should satisfy $d+p=-3/2$. Our simulations are scale-free, adopting $GM_{*}=\Sigma_{0}=r_{0}=1$ in code units, with $h_{0}=0.1$. As a result, we have $\Omega_{K_{0}}=c_{s_{0}}=1$. The computational domain ranges from $r_{\rm in}=1$ to $r_{\rm out}=30$ in code units, to ensure sufficient dynamical range. We employ a logarithmic grid in the radial direction and a uniform grid in the azimuthal direction with $N_{r}\times N_{\varphi}=512\times 512$, achieving a grid resolution of 15 cells per $H$ in $r$ while keeping the cell aspect ratio $\Delta r\approx 0.5r\Delta\phi$.

#### 2.1.1 Shadow prescription

In our simulations, we assume an obscuring structure present in the disk inner region, which is outside of our simulation domain (inside the inner boundary). As an initial study, we tentatively take this obscuring structure to be a slightly misaligned inner disk (inclination angle $\sim h$). In this case, nearly half of the outer disk (in azimuth) is illuminated from one hemisphere, with the opposite side illuminated from the other hemisphere. In the vertically-integrated sense, the two sides of the “pie-chart” are heated largely equally. It is the transition region, where the disk can be largely blocked by the inner disk from both hemispheres, that is most affected by the shadow. The shadow then introduces a thermal forcing to the system, causing the system’s temperature to approach the target temperature $T_{\rm tar}$.
For simplicity, we prescribe this target temperature by $T_{\rm tar}(r,\phi)=T_{\rm init}(r)\left(1-\epsilon e^{-\frac{\phi^{2}}{2\sigma_{\phi}^{2}}}\right)\left(1-\epsilon e^{-\frac{(\phi-\pi)^{2}}{2\sigma_{\phi}^{2}}}\right),$ (10) where $\epsilon$ reflects the amplitude of the shadow and $\sigma_{\phi}$ characterizes the azimuthal width of the shadow. Although we have argued that two-sided shadows are the most basic case, we still examine the one-sided case in Appendix C for reference. Also, in most cases in this paper, except for the simulations mentioned in Section 5.3, the shadow is static with pattern speed $\Omega_{\rm shadow}=0$. The final temperature structure depends on the heating and cooling processes, which are often modeled using the $\beta$ cooling approximation (Gammie, 2001). The cooling term is given by thermal relaxation towards the target temperature $\Lambda=-\frac{\Sigma}{\gamma-1}\times\frac{T-T_{\rm tar}}{t_{\rm cool}}\ ,$ (11) where the cooling timescale is specified by the dimensionless parameter $\beta$: $t_{\rm cool}=\beta\Omega_{K}^{-1}\ .$ (12) It describes the disk’s thermodynamic timescale, which can range from $\sim 10^{-3}$ (approaching the isothermal limit) to at least $\sim 10$ (approaching the adiabatic limit) in our simulations. Figure 1 shows the expected temperature structure for four representative shadow prescriptions, with different shadow widths ($15^{\circ}$ and $45^{\circ}$) and cooling times, calculated by following fluid elements undergoing heating and cooling on circular orbits. We fix the shadow amplitude at $\epsilon=0.8$. With fast cooling ($\beta=0.001$), we see that the shadow aligns with its expected position, and the temperature at its center is approximately $0.2T_{0}$ as desired. When cooling is inefficient, the observed shadow center shifts to the leading side of its expected location, and the lowest temperature becomes well above $0.2T_{0}$.
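The shadow prescription of Eq. (10) and the $\beta$-cooling relaxation of Eqs. (11)-(12) can be sketched as follows. Dividing Eq. (11) by $\Sigma/(\gamma-1)$ gives the equivalent temperature update $dT/dt=-(T-T_{\rm tar})/t_{\rm cool}$, which is what the second function implements (a minimal explicit-update sketch of our own, not the ATHENA++ source term):

```python
import numpy as np

def target_temperature(T_init, phi, eps=0.8, sigma_phi=0.236):
    """Two-sided shadow target temperature, Eq. (10)."""
    dip1 = 1.0 - eps * np.exp(-phi**2 / (2.0 * sigma_phi**2))
    dip2 = 1.0 - eps * np.exp(-(phi - np.pi)**2 / (2.0 * sigma_phi**2))
    return T_init * dip1 * dip2

def cool_step(T, T_tar, omega_K, beta, dt):
    """One explicit beta-cooling update, Eqs. (11)-(12): relax T toward
    T_tar on the cooling timescale t_cool = beta / Omega_K."""
    t_cool = beta / omega_K
    return T + dt * (T_tar - T) / t_cool
```

At the shadow center ($\phi=0$) the target temperature is $(1-\epsilon)T_{\rm init}=0.2\,T_{\rm init}$ for $\epsilon=0.8$, consistent with the $0.2T_{0}$ floor quoted above.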
It should be noted that our shadow and cooling prescriptions are highly simplified and are not necessarily always physical (for instance, a flat disk with $p=-1$ would not be irradiated). We emphasize that the goal of this work is not to precisely model any particular system, but to explore the general phenomenology in a qualitative manner.

#### 2.1.2 Boundary Conditions

We use modified outflow boundary conditions, where hydrodynamic variables are copied from the last grid zone assuming $\Sigma\propto r^{d}$, $P\propto r^{d+p}$, $v_{\phi}\propto r^{-1/2}$, with $v_{r}$ unchanged, except that we set $v_{r}=0$ in case of inflow. To further dampen unphysical waves, we adopt wave-killing functions in the form described by de Val-Borro et al. (2006): $\frac{dx}{dt}=-\frac{x-x_{0}}{\tau_{\rm damp}}R(r)$ (13) where $x$ represents any fluid quantity (e.g. $\Sigma$, $\textbf{v}$, etc.). The damping timescale $\tau_{\rm damp}$ is defined as $\tau_{\rm damp}=\eta\Omega_{K}^{-1}$, where $\eta$ is the damping rate and is set to 1 for all of our simulations. The function $R(r)$ is a parabolic function expressed as: $R(r)=\left(\frac{r-r_{\rm damp}}{L_{\rm damp}}\right)^{2},\ {\rm for}\left|r-r_{\rm damp}\right|<L_{\rm damp}\ ,$ (14) where $r_{\rm damp}$ is the boundary of the damping area, which we take to be $2.08$ and $26.57$ in the inner and outer parts of our computational domain, respectively, and $L_{\rm damp}$ is the length of the wave-killing zone.

### 2.2 Simulation Runs

In order to comprehensively investigate the dynamical effects of shadows in PPDs, we conducted a wide parameter scan. Five main parameters are included in our simulations: the dimensionless cooling timescale $\beta$ ranging from $10^{-3}$ to $10$, the viscosity coefficient $\alpha$ ranging from $0$ to $10^{-2}$, the shadow amplitude coefficient $\epsilon=0.5$ and $\epsilon=0.8$, the shadow width $\sigma_{\phi}=0.236$ and $\sigma_{\phi}=0.079$, and the temperature slope $p=-1$ (flat case) and $p=-0.5$ (flared case).
In the viscous simulations, these translate to density gradients $d=-0.5$ and $d=-1$ to ensure steady-state accretion. In most simulations, the shadows do not rotate, and we fix the disk aspect ratio $h_{0}=0.1$ at $r=1$, thus $h=0.1$ is constant in most $p=-1$ (flat) cases. All of these simulations will be discussed in Section 4. In Sections 5.3 and 5.4, we also briefly explore simulations with rotating shadows and vary $h_{0}$ from $0.05$ to $0.15$. To further comment on our choice of parameters, we first note that in outer disk conditions, we generally expect $\beta\lesssim 1$ (e.g. Lin & Youdin, 2015; Pfeil & Klahr, 2019), though the finite thermal coupling between dust and gas may significantly enhance the effective $\beta$ (e.g. Bae et al., 2021). In inviscid simulations, we further examine the influence of the density profile ($d=-0.5$ and $d=-1$, which affects the thermal forcing), since this parameter is no longer free when viscosity is included (it is then set by $p$). Note that in the new paradigm of wind-driven accretion, the disk is more laminar and the surface density profile can be more arbitrary (e.g. Bai, 2016; Suzuki et al., 2016; Tabone et al., 2022). Although we do not incorporate wind-driven accretion, this exploration also serves to partly mimic “windy” disk conditions. On the choice of shadow amplitudes, note that given the $T^{4}$ dependence, the two choices correspond to the shadowed region receiving about $0.2^{4}\approx 0.002$ and $0.5^{4}\approx 0.06$ of the stellar irradiation compared to the non-shadowed regions. In all of our simulations, the total run time is chosen to be $T=20000P_{0}$, where $P_{0}=2\pi/\Omega_{K_{0}}$ is the orbital period at the inner boundary. This is significantly longer than the timescales for substructure formation, which we find to be within $5000P_{0}$ for most cases.
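The parameter grid described above can be enumerated directly; the sketch below (our own bookkeeping illustration) reproduces the 160 "S-h-all" combinations, the steady-state relation $d=-3/2-p$, and the $T^4$ flux fractions quoted in the text:

```python
import itertools

# Parameter survey grid of Section 2.2 (the "S-h-all" runs).
sigma_phi = [0.236, 0.079]
epsilon = [0.5, 0.8]
alpha = [0.0, 1e-4, 1e-3, 1e-2]
beta = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
p_slope = [-1.0, -0.5]

runs = []
for sp, eps, a, b, p in itertools.product(sigma_phi, epsilon, alpha, beta, p_slope):
    d = -1.5 - p  # steady-state accretion requires d + p = -3/2
    # Fraction of stellar irradiation received in the shadow relative to
    # unshadowed regions: flux scales as T^4 and the floor is (1 - eps)*T_init.
    flux_fraction = (1.0 - eps) ** 4
    runs.append(dict(sigma_phi=sp, epsilon=eps, alpha=a, beta=b, p=p,
                     d=d, flux_fraction=flux_fraction))

print(len(runs))  # 2*2*4*5*2 = 160 parameter combinations
```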
In only a few cases (especially $\beta\sim 10$, $\alpha\sim 0$), even on the timescale of $20000P_{0}$, we cannot unambiguously identify the dominant form of disk substructure. However, we can infer their evolution trend from a statistical point of view. To facilitate comparison of the various simulations discussed in the following sections, we provide a list of all our runs and their parameters in Table 1. Our naming convention is structured as follows. We use “L” for runs in the linear regime ($\epsilon=0.001$) and “NL” for runs in the nonlinear regime ($\epsilon=0.5,0.8$). The labels “hs”, “hm” and “hl” indicate runs with $h_{0}=0.05$, $h_{0}=0.1$, and $h_{0}=0.15$, respectively, with $h_{0}=0.1$ as fiducial. To specify the dominant substructure, we use “S” for spiral-dominant, “R” for ring-dominant, and “V” for vortex-dominant. Shadow precession speeds are denoted as “NR” for nonrotating (fiducial), “FR” for fast rotating, “MR” for moderately rotating, and “SR” for slow rotating. For the simulations dedicated to the parameter search discussed in Section 4, we use the label “S-h-all,” as we do not discuss individual runs for these simulations.
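The naming convention can be expressed as a small helper (a hypothetical illustration of ours; the paper does not provide such code):

```python
def run_name(eps, h0, substructure, rotation):
    """Compose a run label following the naming convention:
    regime-aspect ratio-dominant substructure-shadow rotation."""
    regime = "L" if eps <= 0.001 else "NL"          # linear vs. nonlinear
    h_label = {0.05: "hs", 0.1: "hm", 0.15: "hl"}[h0]
    assert substructure in ("S", "R", "V")           # spiral / ring / vortex
    assert rotation in ("NR", "FR", "MR", "SR")      # shadow precession speed
    return f"{regime}-{h_label}-{substructure}-{rotation}"
```

For example, `run_name(0.8, 0.1, "S", "NR")` yields the fiducial spiral-dominant label "NL-hm-S-NR" listed in Table 1.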
Table 1: Summary of All Highlighted Simulations.(1)

| Run | $\sigma_{\phi}$ | $\epsilon$ | $\alpha$ | $\beta$ | $p$ | $h_{0}$ | $\Omega_{\rm shadow}$ |
|---|---|---|---|---|---|---|---|
| **Representative runs (Section 3)** | | | | | | | |
| NL-hm-S-NR | 0.236 | 0.8 | $10^{-3}$ | 10 | $-1.0$ | 0.1 | 0 |
| NL-hm-R-NR | 0.236 | 0.5 | $10^{-4}$ | 1 | $-1.0$ | 0.1 | 0 |
| NL-hm-V-NR | 0.236 | 0.5 | 0 | $10^{-3}$ | $-1.0$ | 0.1 | 0 |
| **Statistical runs (Section 4)** | | | | | | | |
| S-h-all(2) | (0.236, 0.079) | (0.5, 0.8) | (0, $10^{-4}$, $10^{-3}$, $10^{-2}$) | ($10^{-3}$, $10^{-2}$, $10^{-1}$, 1, 10) | ($-1.0$, $-0.5$) | 0.1 | 0 |
| **Linear run (Section 5.1)** | | | | | | | |
| L-hm-S-NR | 0.236 | 0.001 | 0 | $10^{-3}$ | $-1.0$ | 0.1 | 0 |
| **Rotating shadow runs (Section 5.3)** | | | | | | | |
| L-hm-S-FR(3) | 0.236 | 0.001 | 0 | $10^{-3}$ | $-1.0$ | 0.1 | $\Omega_{0}$ |
| L-hm-S-MR | 0.236 | 0.001 | 0 | $10^{-3}$ | $-1.0$ | 0.1 | $0.03\Omega_{0}$ |
| L-hm-S-SR | 0.236 | 0.001 | 0 | $10^{-3}$ | $-1.0$ | 0.1 | $0.003\Omega_{0}$ |
| **Aspect ratio test runs (Section 5.4)** | | | | | | | |
| NL-hs-S-NR | 0.236 | 0.8 | $10^{-3}$ | 10 | $-1.0$ | 0.05 | 0 |
| NL-hs-R-NR | 0.236 | 0.5 | $10^{-4}$ | 1 | $-1.0$ | 0.05 | 0 |
| NL-hs-V-NR | 0.236 | 0.5 | 0 | $10^{-3}$ | $-1.0$ | 0.05 | 0 |
| NL-hl-S-NR | 0.236 | 0.8 | $10^{-3}$ | 10 | $-1.0$ | 0.15 | 0 |
| NL-hl-R-NR | 0.236 | 0.5 | $10^{-4}$ | 1 | $-1.0$ | 0.15 | 0 |
| NL-hl-V-NR | 0.236 | 0.5 | 0 | $10^{-3}$ | $-1.0$ | 0.15 | 0 |

(1) Simulations mentioned in Appendix A and Appendix C are not included. (2) The unified name for the parameter-survey simulations, with parameters being all combinations of those listed in this row, totaling 160 simulations. (3) The resolution of this run is set to $N=2048$.

Note. — $\sigma_{\phi}$: shadow width parameter; $\epsilon$: shadow amplitude parameter; $\alpha$: viscosity parameter; $\beta$: cooling rate parameter; $p$: temperature slope; $h_{0}$: disk aspect ratio at the inner boundary; $\Omega_{\rm shadow}$: shadow precession angular frequency; $\Omega_{0}$: Keplerian angular velocity at $r=1$.
All runs, except for run L-hm-S-FR, use a resolution of $N=512$.

### 2.3 Diagnostics of Substructures

As we will demonstrate, our simulations generate a variety of substructures of all types. In this section, we provide the diagnostics we employ to identify and characterize these substructures. To minimize the influence of the boundaries and wave-killing, we restrict the analysis domain to $r\in[3,21]$. Vortices manifest as anti-cyclonic flows with pressure maxima at their centers that can potentially be strong dust traps. They are identified as regions with negative vorticity, defined as $\nabla\times\delta\textbf{v}$, with $\delta\textbf{v}$ being the difference between the current fluid velocity and the background fluid velocity. We quantify individual vortices based on their mean vorticity (normalized by the background Keplerian angular velocity), density contrast, spacing, and aspect ratio. In doing so, we first choose the vortex boundary to be where the density is $10\%$ of the density at the vortex center after subtracting the background, while ensuring that the vorticity remains below zero. This is motivated by the analytical work of Lyra & Lin (2013) while being robust to the influence of density waves. In our simulations, vortices are constantly generated and destroyed; only the largest vortices are chosen (which usually survive for at least 100 local orbits). We measure the density contrast by comparing the average density in the vortex with the average density at the same radius. The spacing of vortices is calculated from the radial distances between neighboring vortices, normalized by the local scale height at the midpoint radius between the two vortices. As vortices can be highly time variable, all quantities are calculated and averaged over several snapshots (see Section 4). In ring-forming disks, we measure the density contrast, width, spacing and eccentricity of the rings.
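The vortex identification above can be sketched as follows (a minimal NumPy illustration of ours, not the analysis pipeline itself); the curl is evaluated in polar coordinates, and the boundary criterion uses the 10% density-excess threshold described in the text:

```python
import numpy as np

def vorticity_z(r, phi, dv_r, dv_phi):
    """z-component of the curl of the velocity perturbation on a polar
    (r, phi) grid: (1/r) d(r*dv_phi)/dr - (1/r) d(dv_r)/dphi.
    Arrays are shaped (Nr, Nphi); anti-cyclonic vortices give negative values."""
    d_rvphi_dr = np.gradient(r[:, None] * dv_phi, r, axis=0)
    d_vr_dphi = np.gradient(dv_r, phi, axis=1)
    return (d_rvphi_dr - d_vr_dphi) / r[:, None]

def vortex_mask(vort, Sigma, Sigma_bg, frac=0.1):
    """Candidate vortex cells: negative vorticity, with the density excess
    above `frac` of the peak excess (the 10% boundary criterion)."""
    excess = Sigma - Sigma_bg
    return (vort < 0.0) & (excess > frac * excess.max())
```

As a sanity check, a solid-body velocity perturbation $\delta v_\varphi=\Omega r$ should return a vorticity of $2\Omega$ everywhere.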
We identify the rings by first fitting the background density as a power law, and consider peaks/troughs above/below the fitted profile as ring peaks/gap centers. The boundaries of a ring are identified as the radii at the midpoints between the peak and valley densities, with the ring width being the distance between the two boundaries. Density contrast is calculated by comparing the density between the peaks and the boundaries. The final ring width is obtained by averaging the widths of all identified rings, with each width normalized by the local scale height of the disk. Ring spacing is measured as the radial distance between the boundaries of two neighboring rings, normalized in a way similar to that for vortices, and averaged over several snapshots. In the above, we have treated the rings as axisymmetric by working with 1D profiles, whereas in practice we have found that the rings can be eccentric. For identified rings, we further track the maximum density in the 2D data and measure their eccentricity by fitting an ellipse. Incomplete rings at the boundary of the analysis domain are excluded from the statistics.

For spirals, we quantify their density contrast, number of spiral arms, pattern speed, and pitch angle. The density contrast is obtained by comparing the density along the spiral spine with the fitted background density at the same radii. In our simulations, we obtain the spiral phase at each radius using Fourier decomposition, and the pitch angle is obtained by fitting the phase angle with a logarithmic function $\varphi=m(\tan\alpha_{p})^{-1}\ln r+\phi_{0}$, where $\alpha_{p}$ is the pitch angle and $m$ is the number of spiral arms. The constant $\phi_{0}$ is further employed to measure the pattern speed of the spirals.

## 3 Representative Results

In this section, we present representative outcomes of shadow-driven substructures at fixed disk aspect ratio $h$ before giving more comprehensive statistical results.
The three representative runs, denoted "NL-hm-S-NR," "NL-hm-R-NR," and "NL-hm-V-NR," can be found in Table 1. We show snapshots of the major fluid quantities of interest (i.e., $\Sigma,T,v_{r},v_{\phi},\nabla\times\textbf{v}$) from our simulations, and discuss the results below.

### 3.1 Spirals

Figure 2: Density (a), temperature (b), radial velocity (c) and azimuthal velocity (d) evolution in a spiral-forming disk ($\sigma_{\phi}=0.236,\epsilon=0.8,\alpha=10^{-3},\beta=10,p=-1.0$). All quantities except for $v_{r}$ are normalized by their initial values, and $v_{r}$ is normalized by the initial $v_{\phi}$. The color range for temperature differs between the formation process of the early stage (first and second rows) and the relative quasi-steady state (third to fifth rows). The white solid lines represent the fitted spirals based on density contrast.

Spirals typically form in disks where the shadow is relatively weak, such as those characterized by slow cooling or weak shadow amplitude. We choose spirals formed in a disk with the following parameters as an example: $\sigma_{\phi}=0.236,\epsilon=0.8,\alpha=10^{-3},\beta=10,p=-1.0$ (run NL-hm-S-NR). As depicted in Figure 2, spirals form relatively quickly (first row of Figure 2), typically within approximately 20 local orbits, and once formed, they remain highly stable (the growth in density perturbations observed in the last two rows of Figure 2 is primarily due to the combined effects of strong viscous heating and the influence of wave damping zones). These spirals are clearly density waves, showing spiral patterns in all diagnostic physical quantities. The spiral patterns are stationary (i.e., their pattern speed is zero), which is related to the fact that our shadow patterns have zero angular velocity. Further discussion of the relationship between the properties of the spirals and the other two substructures will be provided in Section 5.2.
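The stationary pattern speed and the pitch angle of these spirals are measured with the Fourier diagnostic described in Section 2.3. A minimal sketch of that procedure in Python (our own illustration; the function name and grid layout are assumptions, not the paper's actual code):

```python
import numpy as np

def spiral_pitch_angle(r, phi, sigma, m=2):
    """Pitch angle of an m-armed spiral via the Fourier diagnostic of
    Section 2.3: extract the phase of the m-th azimuthal mode at each
    radius, then fit phase = (m / tan(alpha_p)) * ln(r) + phi_0.

    r     : (Nr,)  radial grid
    phi   : (Nphi,) uniform azimuthal grid covering [0, 2*pi)
    sigma : (Nr, Nphi) surface density
    Returns the pitch angle alpha_p in degrees.
    """
    # phase of the m-th azimuthal Fourier mode at each radius
    mode = (sigma * np.exp(1j * m * phi)).sum(axis=1)
    phase = np.unwrap(np.angle(mode))
    # slope of phase vs ln(r) is m / tan(alpha_p); the sign of the
    # slope only encodes trailing vs leading, so take its magnitude
    slope, _ = np.polyfit(np.log(r), phase, 1)
    return np.degrees(np.arctan(m / abs(slope)))
```

Applied to a synthetic logarithmic spiral, this recovers the input pitch angle to high accuracy.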
In addition, by examining the second column of Figure 2, we see that with inefficient cooling, the overall temperature is systematically cooler than the initial temperature by $\sim 15\%$. The azimuthal temperature profile varies smoothly through the shadowed regions, with a maximum temperature variation of about $4\%$.

### 3.2 Rings

Figure 3: Density (a), temperature (b), radial velocity (c) and azimuthal velocity (d) evolution in a ring-forming disk ($\sigma_{\phi}=0.236,\epsilon=0.5,\alpha=10^{-4},\beta=1,p=-1.0$). All quantities except for $v_{r}$ are normalized by their initial values, and $v_{r}$ is normalized by the initial $v_{\phi}$. The black dashed lines represent the fourth and fifth fitted rings in this disk.

The conditions for ring formation generally require either slow cooling or a combination of moderate viscosity and shadow amplitude (for details, see Section 4). In Figure 3, we adopt the parameters $\sigma_{\phi}=0.236,\epsilon=0.5,\alpha=10^{-4},\beta=1,p=-1.0$ (run NL-hm-R-NR) to illustrate the typical formation process and properties of ring structures.

–Formation. The formation of rings begins with the appearance of two-arm spirals following a transient period (first to third rows of Figure 3). These spirals appear only marginally stable; they later break apart and reconnect to form concentric rings in surface density (fourth and fifth rows of Figure 3), which takes a relatively long time of $\sim 100$ local orbits. On the other hand, the spiral patterns remain in the velocity structure even after ring formation, although they are distorted (in contrast to the spirals discussed in Section 3.1) and can become distorted ring patterns in some cases.

–Evolution and main properties. Once formed, the amplitudes of the rings continue to increase slowly, reaching a steady state over a few hundred local orbits, where the gas density in the rings is about $10\%$ higher than the background.
However, the density within a ring at the quasi-steady state is unevenly distributed, with the surface density near the break/reconnection locations being smaller; this will be further discussed in Section 5.2 and Appendix A. The typical ring width is approximately twice the local scale height, and the spacing is regular (about $4H$ between the peaks of two neighboring rings) across the disk (further discussed in Section 4). We find the rings to be eccentric (but centered on the star), with the eccentricity measured to be $e\sim 0.12$. As can be inferred from the third and fourth columns of Figure 3, the ratio $v_{r}/v_{\phi}$ is approximately $10^{-3}\ll e$, suggesting that these rings do not directly correspond to gas moving on eccentric orbits. Also, the eccentric rings do not precess, analogous to the spiral patterns that remain stationary, thus corroborating the fact that the rings emerge as the aftermath of the spiral patterns. With moderate cooling, the azimuthal temperature contrast reaches 15$\%$ and may cause azimuthal brightness variations in observed rings, although we caution that our thermodynamic treatment is highly simplified (further discussed in Section 5.5).

### 3.3 Vortices and Crescents

Figure 4: Density (a), temperature (b), vorticity (c) and radial velocity (d) evolution in a vortex-forming disk ($\sigma_{\phi}=0.236,\epsilon=0.5,\alpha=0,\beta=0.001,p=-1.0$). The color ranges for density, vorticity and radial velocity differ between the early spiral formation stage (first and second rows) and the vortex-dominant stage (third and fourth rows). Simulation time increases from top to bottom. The white frames delineate the boundaries of the identified vortices; there are no significant structures inside the white frames in the temperature plot (b) (compare with density (a), vorticity (c) and radial velocity (d)) due to rapid cooling. A white star within each frame indicates the center of the vortex.
The red numbers highlight the selected vortices.

Crescents can be described as rings that exhibit an azimuthal variation in intensity (Bae et al., 2023). Physically, the crescents discussed in this paper are all induced by vortices; we therefore use "vortices" and "crescents" interchangeably. Figure 4 shows an example of the shadow-driven formation of vortices/crescents. This usually occurs with strong shadow amplitude and rapid cooling, and thus strong thermal forcing; we adopt $\epsilon=0.5,\beta=0.001$ in this example (run NL-hm-V-NR).

–Formation. With rapid cooling, the disk temperature almost instantly relaxes to the target temperature both within and outside of the shadow region, resulting in a 50$\%$ variation in azimuthal temperature given our setup. This leaves two symmetric low-pressure regions that form quickly at the shadow locations. In the initial stages (first and second rows of Figure 4), this leads to the appearance of spiral features in surface density. With strong thermal forcing constantly perturbing the disk, the system subsequently becomes more chaotic (third row of Figure 4), and the velocity field undergoes significant alterations. Although the physical process is not entirely clear, vortex/crescent formation ensues, as identified in the fourth row of Figure 4. Selected vortices and crescents are marked by white frames in Figure 4.

–Evolution and main properties. Shadow-driven vortices are all anti-cyclonic in nature, which can be observed either from the negative vorticity (third column of Figure 4) or from the change in the sign of the radial velocity across the vortex center (changing from negative to positive when viewed along the direction of rotation, i.e., counterclockwise, as shown in the fourth column of Figure 4). We observe that vortices start small and are continuously generated.
They merge to form larger ones under the influence of differential rotation within approximately 60 local orbits, ultimately manifesting as relatively large crescent-shaped structures. In Figure 4, the vortices labeled 4a and 4b are undergoing a merger into a single vortex. We find that these patterns largely corotate with the gas, as expected, and their azimuthal locations are largely random, with no preference to stay in or out of the shadows. The disk gas remains turbulent and chaotic throughout the evolution due to the strong perturbations from thermal forcing. Velocity deviations from local Keplerian inside the vortex region are around $0.5c_{s}$. Additionally, the local level of turbulence, measured in terms of root mean square (rms) velocity fluctuations averaged in azimuth, is approximately $10\%$ of the local sound speed. The typical aspect ratio of the vortices/crescents is about 6, with a density contrast of 1.4. The normalized vorticity in this case is 0.2. Despite the modest to strong level of turbulence, the large vortices are relatively long-lived, with typical lifetimes of at least 300 local orbits.

## 4 Statistics of Substructures

To gain deeper insights into the dynamical consequences of shadows, we conducted a comprehensive exploration of the parameter space. We performed a total of 160 simulations (run S-h-all), encompassing a wide combination of parameters. Most results resemble one of the three representative cases above, so we primarily summarize the outcomes in a statistical manner. For simulations that exhibit the formation of rings and spirals, we only measure their properties at the end of the simulations, when the system has already reached a steady state. For simulations with vortex/crescent formation, which are generically chaotic, we select four specific snapshots, denoted as $P_{\rm orb1}=5000P_{0}$, $P_{\rm orb2}=10000P_{0}$, $P_{\rm orb3}=15000P_{0}$, and $P_{\rm orb4}=20000P_{0}$.
The statistical values for the vorticity, density contrast, spacing, and aspect ratio of the vortices are calculated by averaging the results at these snapshots.

The simplified statistical results are presented in Figure 5, and more detailed ones are provided in Figures 14 and 15. It is important to emphasize that panels shaded with red or blue lines are actually undergoing a vortex-ring or ring-spiral transition (see the discussion in Appendix A). Generally speaking, shadows are capable of generating different kinds of substructures under different parameter settings. Additionally, we find that the dominant form of shadow-driven substructure changes from spirals to rings, and eventually to vortices/crescents, as the cooling timescale and/or viscosity decreases. Where exactly the transitions occur depends on other parameters, such as the shadow amplitude, shadow width, and disk aspect ratio; these will be discussed in more detail in the following subsections.

### 4.1 Statistics for Spirals

Two-arm spirals are fundamental substructures in our simulations, dominating in disks with cooling timescales significantly longer than the dynamical timescale, high viscosity ($\alpha>10^{-3}$), or very weak shadow amplitude (see Section 5). Here, we focus on their density contrast, pattern speed, and pitch angle.

–Density contrast. In general, stronger thermal forcing (higher shadow amplitude, wider shadow width, etc.) leads to stronger density contrast in the spirals. However, as the spiral-dominated regime generally requires weak thermal forcing, the spiral amplitudes are typically low (at most only $1\%$ higher than the background density at the same radius).

–Pattern speed. The spirals found in our simulations are density wave patterns with zero pattern speed, which also results in non-precessing rings. More generally, the spiral pattern speed exactly matches the shadow's pattern speed, as will be further discussed in Section 5.3.

–Pitch angle.
The pitch angle is affected solely by the disk aspect ratio. With weak thermal forcing, we consider the dispersion relation of spiral density waves in the linear regime under the WKB approximation (Lin & Shu, 1964)

$m^{2}(\Omega_{p}-\Omega)^{2}=k^{2}c_{s}^{2}+\kappa^{2}.$ (15)

Here, $\Omega_{p}$ represents the spiral pattern speed, $k$ is the radial wave number, and $\kappa\approx\Omega_{K}$ is the epicyclic frequency. The spiral pitch angle can be estimated by $\alpha_{p}=\partial r/(r\partial\phi)\approx m/(|k|r)$. With $\Omega_{p}=0$ and $m=2$, we obtain $\alpha_{p}\sim\frac{2}{\sqrt{3}}h=\mathrm{constant}$ for $p=-1$ disks and $\alpha_{p}\sim\frac{2}{\sqrt{3}}h_{0}r^{0.25}$ for $p=-0.5$ disks. Taking the disk parameters used in our simulations (with $h_{0}=0.1$) and averaging over radius gives $\alpha_{p}=6.6^{\circ}$ for disks with $p=-1$ and $\alpha_{p}=12.1^{\circ}$ for disks with $p=-0.5$. These estimates agree well with our simulation results of $7.344_{-0.535}^{+0.607}{}^{\circ}$ and $13.202_{-1.766}^{+2.16}{}^{\circ}$, respectively (see Figure 15).

Figure 5: Statistics of shadow-driven substructures based on 80 of the 160 simulations with constant disk aspect ratio ($h=0.1$). The figure is divided into two parts by a dashed line, with the shadow width being 45 degrees on the left and 15 degrees on the right. Within each part, the left column corresponds to disks with $\epsilon=0.5$ and the right column to disks with $\epsilon=0.8$. Each subfigure in the $\beta$-$\alpha$ sections represents a specific combination of parameters. Within each $\beta$-$\alpha$ section, there are three rows representing the dominant structures in the disk: vortices/crescents, rings, and spirals from top to bottom. Each type of structure is represented by a different type of square marker, whose color represents the density contrast of the substructures.
The figure also includes red and blue line-shaded areas, indicating disks undergoing vortex-ring and ring-spiral transitions, respectively.

Figure 6: The average and error of the normalized ring spacing. The black solid line represents simulations with $p=-1.0$, while the red solid line represents simulations with $p=-0.5$. Points marked with an 'x' have no error bars; in these cases, there are at most two selected rings within the disk. Upward arrows represent lower limits, where there is only one ring within the disk.

Figure 7: The average and error of the normalized vortex/crescent spacing. The black solid line represents simulations with $p=-1.0$, while the red solid line represents simulations with $p=-0.5$. Points marked with an 'x' have no error bars; in these cases, there are at most two selected vortices/crescents within the disk. Upward arrows represent lower limits, where there is only one selected vortex/crescent within the disk.

### 4.2 Statistics for Rings

In our simulations, rings dominate in disks with cooling timescales comparable to the dynamical timescale ($\beta\sim 1$) when $\alpha$ is roughly below $10^{-3}$. For much higher viscosity, rings dominate even when the cooling rate approaches the isothermal limit ($\beta=10^{-3}$); this viscosity threshold is typically $\alpha=10^{-2}$ for disks with $\sigma_{\phi}=0.236$ and $\alpha=10^{-3}$ for disks with $\sigma_{\phi}=0.079$. Overall, the parameter space where rings dominate corresponds to modest thermal forcing, in between the cases that form vortices/crescents (strong forcing, see the next subsection) and spirals (weak forcing).
In fact, we posit that rings can be viewed either as "reconnected spirals" (as stated in Section 3.2) or as "failed vortices," where the latter connection arises from the finding that vortex-ring transitions often involve crescents with very large aspect ratios, although the boundary of this transition is not necessarily clear-cut and will be further discussed in Appendix A. Below, we focus on "normal" rings (not under transition) and discuss the density contrast, radial width, spacing, and eccentricity of the rings, as well as the parameters that have a strong influence on them.

–Density contrast. As shown in Figures 5 and 14, gas densities are typically $1-20\%$ higher than the background density in ring-dominant disks, and the ring density contrast is enhanced by larger shadow amplitude and width. The density contrast can reach very small values, such as $0.3\%$, in the ring-spiral transition, and very large values, such as $50\%$, in the vortex-ring transition.

–Width and spacing. The ring widths in our simulations are usually about twice the local scale height, regardless of the shadow parameters. Similarly, for almost all cases, the spacing between neighboring rings is approximately $4H$, as depicted in Figure 6. Deviations from the mean are very small, indicating a highly uniform distribution of rings within the disk.

–Eccentricity. As will be discussed in Section 5, ring structures are generated following the "reconnection" of two-armed spirals in the early stages of disk evolution, causing the rings to become eccentric with zero pattern speed (as the shadows are stationary). A more flared disk morphology results in larger spiral pitch angles, making the spirals less tightly wound; as a result, the rings formed in this case tend to be more eccentric. Additionally, we find that viscosity has a strong impact on eccentricity. Typically, the ring eccentricity varies from 0.1 to 0.7 as $\alpha$ increases from $0$ to $10^{-2}$ in our simulations (see Figure 15 for details).
The angle between the ring's major axis and the effective shadow center (e.g., $\phi=0^{\circ},180^{\circ}$ when $\beta=0.001$) is typically between $80^{\circ}$ and $110^{\circ}$.

### 4.3 Statistics for Vortices and Crescents

As mentioned in Section 3.3, and as better seen in Figure 5, vortices/crescents tend to dominate in disks characterized by fast cooling ($\beta<1$), low viscosity ($\alpha=0,10^{-4}$), high shadow amplitudes ($\epsilon=0.8$), and wide shadow widths ($\sigma_{\phi}=0.236$). Such parameter settings all point to strong thermal forcing. Below, we discuss the properties of the shadow-driven vortices/crescents, focusing on the density contrast, spacing, and aspect ratio under the influence of these parameters.

–Vorticity and density contrast. The density contrast of substructures is a crucial factor, as it directly influences their detectability. From our explorations, the densities of the crescents are typically $10-50\%$ higher than the average density at the same radius for all vortex-dominated disks. The density contrast is generally slightly higher for stronger shadow intensity, larger shadow width, and faster cooling, but the trend is not definitive given the chaotic nature of the system. The normalized vorticity ranges from 0.1 to 0.6 in vortex-dominated disks; it is around 0.2 in most cases, reaching up to 0.6 in the most extreme cases (large $\epsilon$ and $\sigma_{\phi}$). No clear relationship is found between vorticity and density contrast due to the high turbulence level, which is around $0.1c_{s}$. The velocity deviations from local Keplerian inside the vortex region range from 0.4 to 1.2 $c_{s}$, indicative of strong rotation in the vortices.

–Spacing. The statistics of the spacing of vortices/crescents are plotted in Figure 7. In all simulations, the distance between neighboring vortices/crescents is typically between $2H$ and $4H$.
The spacing is less uniform than for rings, which is related to the fact that vortex-dominated disks are usually turbulent. Note that the small error bars in a few cases reflect the very limited number of vortices/crescents (2 or 3); the lower limit points represent cases where there is only one vortex-induced crescent in the disk. Similar to the case shown in Figure 4, the azimuthal locations of the vortices/crescents are largely random, with no direct correlation with the positions of the shadows.

–Aspect ratio. The aspect ratio of crescents/vortices is less affected by the different parameters. Typically, in vortex-dominated disks, this value is about 6. However, for cases close to (for example, $\sigma_{\phi}=0.079$, $p=-1$, $\epsilon=0.5$, $\alpha=0$, $\beta=0.01$) or undergoing (for example, $\sigma_{\phi}=0.236$, $p=-0.5$, $\epsilon=0.8$, $\alpha=10^{-4}$, $\beta=1$) the vortex-ring transition in parameter space, the aspect ratio can be very large (greater than 12). More detailed results are shown in Figure 15 in Appendix B.

## 5 Discussion

Figure 8: Density (a), temperature (b), radial velocity (c) and azimuthal velocity (d) evolution in a disk in the linear regime ($\sigma_{\phi}=0.236,\epsilon=0.001,\alpha=0,\beta=0.001,p=-1.0$). All quantities except for $v_{r}$ are normalized by their initial values, and $v_{r}$ is normalized by the initial $v_{\phi}$.

Figure 9: Formation processes of substructures from the linear to the nonlinear regime. All plots show the density contrast in disks on a logarithmic scale.

In this paper, we have conducted simple numerical experiments to study the dynamical consequences of shadows cast from the inner disk onto the outer disk as a result of thermal forcing. We have restricted ourselves to a small number of parameters, and the discussion has been largely phenomenological.
In this section, while not going into full detail, we conduct additional studies to help better understand the origin and trends of shadow-driven substructures, and briefly discuss their potential implications.

### 5.1 Linear regime

Based on the analysis and discussions in the previous sections, here we provide further analysis to gain better physical insight into shadow-driven substructure formation. We observe that, in all cases, substructure formation starts with the formation of two-armed spirals under our shadow prescriptions. This suggests that spirals are the most fundamental form of shadow-driven substructure, and it is instructive to look into how spirals form and evolve under very weak thermal forcing, where nonlinear effects can be neglected. We thus conducted a series of 2D inviscid hydrodynamic simulations with varying perturbation strengths ($\epsilon=0.001$, $\epsilon=0.01$, $\epsilon=0.1$) while keeping the cooling timescale fixed ($\beta=0.001$). Without viscosity, the simulations start in hydrostatic equilibrium before thermal forcing is introduced.

In Figure 8, we present the results from the $\epsilon=0.001$ simulation (run L-hm-S-NR in Table 1). When the shadow is introduced, gas flows into the shadowed region in a counterclockwise manner. The gas between the shadow center (pressure minimum) and its rear edge, i.e., between $315^{\circ}$ and $360^{\circ}$ in the first row of Figure 8, is accelerated, while the gas between the shadow center and its leading edge, i.e., between $0^{\circ}$ and $45^{\circ}$ in the first row of Figure 8, is decelerated. This leads to gas piling up near the shadow center while the neighboring gas is slightly rarefied, which naturally launches density waves. As the disk evolves, these density waves wind up due to differential rotation (see the second row of Figure 8). In the meantime, the periodic forcing at the shadow location continues to launch new density waves, leading to interference.
After a few local orbits, the system reaches a relatively steady pattern of two-arm spirals (see the third and fourth rows of Figure 8), which remains stable over the long term. The spirals share the same pattern speed as the shadows (in this case, zero), and the pitch angle also remains unchanged. We note that this is very different from planet-induced spirals: a planet launches density waves through discrete Lindblad resonances, whereas shadows are cast over a wide range of radii, so each radius can excite its own density waves. In our case, the pattern speed of the shadow is zero, and the only relevant resonance condition is simply given by $\Omega=\kappa/m$, where $m=1,2,\cdots$. However, taking $m=2$, we see that with $\kappa\approx\Omega$ for Keplerian disks, no resonance condition is satisfied. In other words, the two-armed spirals are not driven by Lindblad resonances, but are the effective eigenstate of thermally forced oscillations.

### 5.2 Towards the nonlinear regime

We note that even in the linear regime, the spiral patterns are distorted by thermal forcing. This can be seen most easily in the velocity perturbations in the last three columns of Figure 8. It is also present in the density perturbations, where the amplitude of the spirals varies across the shadow region. The form of the distortion can depend on the system parameters, and is found to be different in Figure 2, where the cooling time is significantly longer. We speculate that such distortions are the source of instability when thermal forcing enters the nonlinear regime.

Based on our discussions in the previous sections, we summarize the formation of shadow-driven substructures in Figure 9. Irrespective of whether the thermal forcing is linear or nonlinear, the initial phase of the development is similar, involving the formation of two-armed spirals, as shown in Figure 9(a)-(b).
The spirals persist under linear and weakly nonlinear thermal forcing, as seen in the "linear branch" and "spiral branch" in Figure 9(c)-(f). The properties of the spirals, in terms of pitch angle and pattern speed, are similar between the linear and weakly nonlinear regimes. When the thermal forcing becomes slightly stronger, the spiral arms undergo a relatively quiescent transformation by "reconnecting" into eccentric rings (see Figure 9(g), (h)). The eccentricity of these rings is largely set by the pitch angle of the original two-arm spiral stage and the disk viscosity. However, when the thermal forcing becomes too strong, the spirals break up in a highly chaotic manner (see Figure 9(i), (j)), leading to the formation of more localized vortices/crescents.

Figure 10: Normalized density perturbation at the final steady state for simulations with $\Omega_{\text{shadow}}=1\Omega_{0}$, $0.03\Omega_{0}$, and $0.003\Omega_{0}$, with parameters $\sigma_{\phi}=0.236,\epsilon=0.001,\alpha=0,\beta=0.001,p=-1.0$. The radii of the corotation resonance (CR), inner Lindblad resonance (ILR), and outer Lindblad resonance (OLR) are shown as yellow, green, and purple dashed lines, respectively.

### 5.3 Rotating shadows

So far, we have only discussed the situation where the shadow's pattern speed is zero. However, if the misaligned inner disk precesses around the central star, the shadow cast from the inner region would have a nonzero pattern speed, which changes the resonance condition discussed in Section 5.1. To extend our study to more general conditions, we have conducted additional simulations with rotating shadows in the linear regime, with three different shadow pattern speeds: $\Omega_{\text{shadow}}=1\Omega_{0}$ (run L-hm-S-FR), $0.03\Omega_{0}$ (run L-hm-S-MR), and $0.003\Omega_{0}$ (run L-hm-S-SR). Here, $\Omega_{0}$ is the Keplerian angular velocity at $r=1$. The detailed parameter settings can be found in Table 1.
The density structures at the final states of these simulations are shown in Figure 10. We measure the pattern speed of the spirals $\Omega_{p}$ in these situations, and we confirm that in all three cases the spirals have $\Omega_{p}=\Omega_{\text{shadow}}$. Given the pattern speed, the radii of the corotation resonance (CR) and the inner and outer Lindblad resonances (ILR, OLR) can be calculated from $\Omega_{p}=\Omega$ and $\Omega_{p}=\Omega\pm\kappa/m$ (with $m=2$), and are shown as yellow, green, and purple dashed lines in Figure 10. With the WKB dispersion relation (Equation 15), the permitted regions for density wave propagation are outside the Lindblad resonances. In the fast-rotating case ($\Omega_{p}=\Omega_{0}$), density waves are permitted beyond the OLR, and the spirals are tightly wound towards the outer radii, with pitch angle $\alpha_{p}\sim h(\Omega/\Omega_{p})\sim h(r/r_{0})^{-3/2}$. Even the resolution of $N=2048$ used in Figure 10 is insufficient to resolve the spirals across the entire disk, so the spirals in the outer disk are weakened by numerical dissipation. With the intermediate $\Omega_{\rm shadow}=0.03\Omega_{0}$, the ILR and OLR are located at $r=6.5$ and $r=13.5$, respectively. Clearly, there are well-defined spirals outside the Lindblad resonances, which break down in the region between them. In the slow-rotating case with $\Omega_{\rm shadow}=0.003\Omega_{0}$, even the ILR is beyond the computational domain, and the results are largely identical to the stationary case described in Section 5.1. Given the discussion above, we expect the results presented in this paper to largely apply to regions inside the ILR for slowly precessing shadows. Although not the focus of this paper, it is worth noting the significance of moderately rotating shadows, where the corotation radius lies within the disk region.
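As a sanity check, the resonance radii quoted above follow directly from inverting the Keplerian rotation profile. A minimal sketch (our own illustration, assuming $\kappa=\Omega=\Omega_{0}r^{-3/2}$ with $\Omega_{0}=1$):

```python
def resonance_radii(omega_p, m=2):
    """Corotation and Lindblad resonance radii for pattern speed omega_p.

    With kappa ~ Omega (Keplerian), the conditions are Omega_p = Omega
    at corotation, Omega_p = Omega*(1 - 1/m) at the ILR, and
    Omega_p = Omega*(1 + 1/m) at the OLR; each is inverted via
    Omega(r) = r**(-3/2) (Omega_0 = 1 at r = 1).
    """
    r_cr = omega_p ** (-2.0 / 3.0)
    r_ilr = (omega_p / (1.0 - 1.0 / m)) ** (-2.0 / 3.0)
    r_olr = (omega_p / (1.0 + 1.0 / m)) ** (-2.0 / 3.0)
    return r_ilr, r_cr, r_olr

# Intermediate pattern speed of run L-hm-S-MR:
r_ilr, r_cr, r_olr = resonance_radii(0.03)  # ILR ~ 6.5, OLR ~ 13.5
```

For $\Omega_{\rm shadow}=0.03\Omega_{0}$ this gives ILR $\approx 6.5$ and OLR $\approx 13.5$, consistent with the values quoted above.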
Our findings are morphologically similar to those of Montesinos & Cuello (2018), who used radiative transfer modeling to demonstrate that the morphology of shadow-driven spirals closely resembles the planetary wakes caused by embedded planets in the disk. For a better comparison with planet-induced spirals, a more detailed investigation with more realistic physics (especially dust and radiative processes) is necessary for the moderately rotating case, especially in regions between the ILR and OLR.

### 5.4 Dependence on disk aspect ratio

Figure 11: Normalized density perturbation for runs with different disk aspect ratios. The parameters are identical to those in the representative runs (NL-hm runs) in Section 3, except for the disk aspect ratio. Detailed parameter settings are provided in Table 1. (a): snapshot at $P_{\rm orb}=20000P_{0}$ for run NL-hs-S-NR ($h_{0}=0.05,\sigma_{\phi}=0.236,\epsilon=0.8,\alpha=10^{-3},\beta=10,p=-1.0$). (b): snapshot at $P_{\rm orb}=20000P_{0}$ for run NL-hs-R-NR ($h_{0}=0.05,\sigma_{\phi}=0.236,\epsilon=0.5,\alpha=10^{-4},\beta=1,p=-1.0$). (c): snapshot at $P_{\rm orb}=20000P_{0}$ for run NL-hs-V-NR ($h_{0}=0.05,\sigma_{\phi}=0.236,\epsilon=0.5,\alpha=0,\beta=0.001,p=-1.0$). (d): snapshot at $P_{\rm orb}=3000P_{0}$ for run NL-hl-S-NR ($h_{0}=0.15,\sigma_{\phi}=0.236,\epsilon=0.8,\alpha=10^{-3},\beta=10,p=-1.0$). (e): snapshot at $P_{\rm orb}=10000P_{0}$ for run NL-hl-R-NR ($h_{0}=0.15,\sigma_{\phi}=0.236,\epsilon=0.5,\alpha=10^{-4},\beta=1,p=-1.0$). (f): snapshot at $P_{\rm orb}=1500P_{0}$ for run NL-hl-V-NR ($h_{0}=0.15,\sigma_{\phi}=0.236,\epsilon=0.5,\alpha=0,\beta=0.001,p=-1.0$).

In the preceding discussion, we observed that shadow-driven substructures are closely tied to thermal forcing, which is influenced not only by the cooling process but also by the disk temperature. Additionally, detailed characteristics of the substructures, such as the pitch angle or eccentricity, are affected by the disk aspect ratio $h$.
Therefore, it is natural to further investigate the influence of $h_{0}$. We conducted additional simulations with $h_{0}$ ranging from 0.03 to 0.15, focusing on $h_{0}=0.05$ and $h_{0}=0.15$. These simulations, denoted as NL-hs-S-NR, NL-hs-R-NR, NL-hs-V-NR and NL-hl-S-NR, NL-hl-R-NR, NL-hl-V-NR respectively, maintained the same parameters as the representative runs discussed in Section 3 except for $h_{0}$ (see Table 1). We note that here "S", "R", and "V" do not necessarily indicate the dominant form of substructure but rather serve to remind the reader that these runs only vary $h_{0}$ compared to the representative runs. In the NL-hs run series ($h_{0}=0.05$), with a lower target temperature, we see that the NL-hs-S-NR (Figure 11) and NL-hs-R-NR (Figure 11) runs maintain spirals and rings as the primary substructure, respectively. The spirals are more tightly wound, and the ring spacing remains uniform while becoming smaller; the changes are in exact proportion to $h_{0}$, and the general properties of the rings and spirals are otherwise identical to those discussed in the NL-hm runs. For the NL-hs-V-NR run, while the vortices are clearly dominant, many of the overdensities close a full circle, and we identify this run as being in the vortex-ring transition state. In the NL-hl run series ($h_{0}=0.15$), with a higher target temperature, we see that all three NL-hl runs retain spirals, rings, and crescents/vortices as the dominant substructures, respectively. Similarly, the spirals are more open, the rings are more eccentric, and the vortices are larger and more widely spaced, as expected. Overall, we find that varying $h$ slightly alters the boundary where different forms of substructures dominate, while the general properties of individual substructures remain largely consistent with what we found in the fiducial simulations with $h_{0}=0.1$. 
### 5.5 Observational implications Given the diverse dynamical consequences of shadowing, such disks are expected to exhibit a variety of signatures that are potentially observable. However, it should be noted that our work serves as a general exploration without detailed modeling of radiation transport, dust dynamics, and shadow precession rates (e.g., Pinilla et al., 2015; Stolker et al., 2016; Wolff et al., 2016; Debes et al., 2017), and realistic shadow morphologies may differ from our prescription (e.g., Muro-Arena et al., 2020; Debes et al., 2017). Additionally, a variety of other mechanisms can drive substructures (e.g., see reviews by Andrews, 2020; Bae et al., 2023; Benisty et al., 2023), such as planet-disk interactions and icelines. Our shadowed-disk simulations implicitly assume a smooth disk to start with, and it is conceivable that the final outcome is set by the interplay between existing substructures and shadowing. Besides such dynamical interplay, substructures themselves can self-shadow (e.g., Zhang et al., 2021), which can further complicate the situation. Therefore, a systematic observational comparison with specific sources is beyond the scope of this work. Below, we mainly discuss general aspects of potential observational implications. –Spirals. Spirals generated by shadows may not be easily detectable in the sub-mm continuum or in kinematics, but may be observable in scattered light. Nearly all spiral-dominant disks correspond to weak thermal forcing, resulting in gas densities only about $0.1\%$ higher than the background. This makes pressure variations across the spirals too small for efficient dust trapping; only sufficiently small particles with a stopping time shorter than the spiral crossing time (typically requiring a Stokes number much less than 0.1) can potentially be trapped by the spiral (e.g. Sturm et al., 2020; Speedie et al., 2022). 
With the weak spirals, the gas velocity shows very small deviations from Keplerian ($\sim 0.1\%$ $v_{K}$, as opposed to the $\gtrsim 0.5\%$ $v_{K}$ typical of ALMA detections; Pinte et al., 2023), making kinematic detection difficult. On the other hand, such spirals may be detectable in scattered light, as suggested by Montesinos et al. (2016) for the HD 142527 disk, thanks to azimuthal variations of the disk scale height across the spirals, though three-dimensional simulations are needed for proper characterization. –Rings. For full disks, our simulations predict the presence of multiple gas rings that are uniformly spaced and weakly eccentric. The relatively high density contrast in our simulations suggests that these rings likely concentrate dust, making them readily observable at sub-mm wavelengths. While the resulting dust rings are also likely uniformly spaced, whether they can be eccentric remains uncertain (as the eccentric gas ring is a pattern and does not reflect real motion), requiring simulations incorporating dust dynamics. From all simulations, we find that the azimuthal temperature contrast in ring-dominant disks is typically greater than $8\%$ and can reach up to $50\%$ as they approach the vortex-ring transition in disks with high viscosity and rapid cooling. Such azimuthal temperature variations should result in azimuthal brightness variations in the mm continuum image, which, however, have not been revealed in real shadowed disks with rings (e.g., HD 143006). This suggests that thermal forcing by shadows in these systems is likely not as strong as given in our prescriptions, but we caution that without detailed modeling of shadow morphology, radiation transport, and dust dynamics, we cannot make specific predictions for individual systems. 
On the other hand, we comment that both the weakly eccentric ring pattern and the low-level azimuthal temperature variation, if present, may affect the interpretation of azimuthal asymmetries seen in multi-ring systems (e.g. Doi & Kataoka, 2021; Liu et al., 2022). Finally, we note that detection by kinematic signatures, with velocity disturbances being $\sim 1\%$ of the Keplerian velocity, is possible but challenging since they are close to ALMA's detection limits. –Crescents. Vortices generate significant velocity perturbations and are favored sites for dust trapping. Given the turbulent viscosity parameter $\alpha_{t}\gtrsim 10^{-2}$ in most vortex-dominated simulations, dust with Stokes number $St>\alpha_{t}\sim 10^{-2}$ is expected to concentrate inside vortices, overcoming turbulent diffusion (Birnstiel et al., 2013), and can be readily observable at sub-millimeter wavelengths (Zhu et al., 2014). Previous studies have found that detecting kinematic signatures of vortices can be possible but challenging (Huang et al., 2018), despite the relatively large vorticity (typically around $0.2$) and significant velocity deviations from local Keplerian ($\delta v$ up to $1.2c_{s}$) inside the vortex region. It is expected that sources with modest inclination favor detection but require long integration times with ALMA (more than 10h) to achieve the necessary signal-to-noise ratio. ## 6 Summary and Future Prospects In this work, we have systematically studied the dynamical consequences of thermal forcing by shadows cast onto the outer protoplanetary disk. With a large survey of parameters, we have identified diverse forms of substructures generated by shadows and studied their trends under different thermodynamic and viscosity prescriptions. Our results apply in regimes where the shadow is static or slowly rotating (prograde), so that the corotation radius is farther out than the regions of interest. The main findings of our studies are as follows. 1\. 
Two-arm spirals with pattern speed identical to that of the shadow are the fundamental substructures generated by weak thermal forcing ($\epsilon\leq 0.5$, $\sigma_{\phi}=0.079$, $\beta>1$) or high viscosity ($\alpha>10^{-3}$). They represent the linear response to thermal forcing, and their pitch angle agrees well with standard density waves. Both the density contrast (0.1-1$\%$ higher than the background) and the velocity disturbance (up to $0.5\%$ $v_{K}$) are small and scale with the strength of thermal forcing. 2\. Disks with moderate thermal forcing are dominated by ring-like substructures. In this regime (the parameter space between crescent/vortex-dominant and spiral-dominant disks), the gas density contrast reaches 1-20$\%$ above the background. The rings are uniformly spaced ($\Delta r/H\sim 4$) and exhibit pattern eccentricities on the order of $h/r$ or higher, which rotate at the same rate as the shadow. 3\. Crescents/vortices dominate disks under strong thermal forcing ($\epsilon>0.5$, $\beta\lesssim 0.1$, $\sigma_{\phi}=0.236$) and low viscosity ($\alpha\leq 10^{-4}$). In this case, the density contrast is typically 10-50$\%$ higher than the average density at the same radius. The vortices in our simulations exhibit relatively large vorticity (ranging from 0.1 to 0.6, typically around 0.2) and significant velocity deviations from local Keplerian inside the vortex region (ranging from 0.4 to 1.2 $c_{s}$). Due to the chaotic nature of the vortex-dominant disk (the local turbulence level is 0.1 $c_{s}$), these structures are not uniformly spaced, with $\Delta r/H$ between 2 and 4. 4\. Thermodynamics and viscosity significantly influence the formation of shadow-driven disk substructures. The dominant substructure transitions from spirals to rings and eventually to vortices as cooling timescales and/or viscosity decrease. 5\. Owing to the simplicity of our problem setup, it is premature to definitively assess the observability of such shadow-driven substructures. 
We anticipate that the azimuthal brightness contrast in the sub-mm continuum will offer important constraints on the strength of the thermal forcing, while detecting in-plane kinematic signatures is likely challenging. Through our suite of physically-motivated yet highly simplified simulations, we highlight the importance of the dynamical impact of shadows, or more generally, inhomogeneous stellar irradiation, on the gas dynamics of PPDs through thermal forcing. Given that shadows are often observed in scattered light images of disks, our results call for proper consideration and incorporation of such effects for adequate modeling of such systems. Our simulations can be considered a starting point for understanding the dynamical effects of shadows on PPDs, yet real systems are likely much more complex. This leaves several aspects to be considered and tested in the future. Properly characterizing disk thermodynamics is a prerequisite to accurately model thermal forcing from shadows, which requires better modeling of the shadow geometry, together with self-consistent radiation transport. Such modeling under typical disk parameters (which are likely nearly optically thin) will likely reduce the azimuthal temperature contrast due to in-plane radiation transport. Incorporation of dust dynamics is essential to obtain the dust response to shadow-driven substructures. Such simulations are expected to link the results with specific sources, and we are aware of efforts underway (Ziampras et al., in preparation). We have also assumed the shadows are cast onto a full disk, whereas shadows are also observed in transition disks, and it is also pertinent to account for the interplay with other physical mechanisms that cause disk substructures, with the additional effect of self-shadowing. 
Finally, all existing studies of shadow-driven disk dynamics are conducted in 2D in the disk plane, whereas shadow-driven thermal forcing is also expected to drive oscillations in the vertical direction (Liu & Bai, in preparation). Future studies should incorporate 3D effects, which are essential to further assess the fidelity of 2D simulation results and to make more realistic observational predictions and comparisons. ## Acknowledgements We thank Yanqin Wu and Shangjia Zhang for useful discussions, Pinghui Huang for helpful instructions on problem setup, and Alexandros Ziampras for constructive exchanges. This work is supported by the National Science Foundation of China under grants No. 12233004 and 12325304. We also acknowledge the China Center of Advanced Science and Technology for hosting the Protoplanetary Disk and Planet Formation Summer School in 2022, when this work was initiated. Numerical simulations were conducted on the Orion and Sirius clusters at the Department of Astronomy, Tsinghua University, and on TianHe-1 (A) at the National Supercomputer Center in Tianjin, China. ## Appendix A Transition State The vortex-ring transition represents the parameter regime where the features of both vortices/crescents and rings can be observed in the disk. Four examples of the vortex-ring transition are illustrated in Figure 12. They are recognized as vortex-ring transitions based on two criteria: rings and vortices/crescents are simultaneously present in the disk (Figure 12), or the basic morphology appears as rings but with significant asymmetry (Figure 12, 12, 12). In Figures 14 and 15, the left side of the vortex-ring transition cases depicts vortex-dominated disks, while the right side illustrates ring-dominated disks. Further decreases in $\beta$ or $\alpha$ lead to the disk being completely dominated by vortices/crescents. The ring-spiral transition represents the parameter regime where the features of both rings and spirals can be identified in the disk. 
Four examples of ring-spiral transitions are shown in Figure 13. They either exhibit regularly broken rings (Figure 13 and 13) or clearly display both rings and spirals within the same disk (Figure 13 and 13). These transition regions lie between ring-dominated and spiral-dominated disks in Figures 14 and 15. The disk becomes dominated by spirals as $\beta$ or $\alpha$ increases. From the transition states shown in Figures 12 and 13, we can verify that rings exhibit characteristics of both vortices/crescents and spirals, as discussed in Section 4.2. Slightly excessive thermal forcing, relative to ring-dominant disks, can hamper the reconnection (Figure 12) mentioned in Section 5.2, leading the disk into a vortex-ring transition state with strongly asymmetric rings (Figure 12) or crescents with large aspect ratios (Figure 12). Conversely, with weak thermal forcing, the breaking of two-armed spirals is partial (Figure 13), placing the disk into a ring-spiral transition state. Figure 12: Demonstration of the vortex-ring transition. The parameters of these 4 plots are $\sigma_{\phi}=0.236,p=-0.5,\epsilon=0.5,\alpha=0.0001,\beta=1$; $\sigma_{\phi}=0.236,p=-0.5,\epsilon=0.8,\alpha=0.01,\beta=0.001$; $\sigma_{\phi}=0.236,p=-1.0,\epsilon=0.8,\alpha=0.01,\beta=0.001$; $\sigma_{\phi}=0.079,p=-0.5,\epsilon=0.8,\alpha=0,\beta=0.1$ respectively. All these cases can be found in Figure 14 and Figure 15 as vortex or ring blocks covered by red hatch lines. Figure 13: Demonstration of the ring-spiral transition. The parameters of these 4 plots are $\sigma_{\phi}=0.236,p=-0.5,\epsilon=0.8,\alpha=0,\beta=10$; $\sigma_{\phi}=0.079,p=-1.0,\epsilon=0.8,\alpha=0.001,\beta=1$; $\sigma_{\phi}=0.079,p=-0.5,\epsilon=0.5,\alpha=0,\beta=10$; $\sigma_{\phi}=0.079,p=-0.5,\epsilon=0.5,\alpha=0.001,\beta=1$ respectively. All these cases can be found in Figure 14 and Figure 15 as ring or spiral blocks covered by blue hatch lines. 
## Appendix B Simulation statistics The detailed statistical plots of vorticity and density contrast (Figure 14), along with other parameters of the substructures (Figure 15), are presented here. These two figures share the same structure. Each figure is divided into two sections by a dashed line, representing shadow ranges of 45 degrees ($\sigma_{\phi}=0.236$) and 15 degrees ($\sigma_{\phi}=0.079$), respectively. In the left column, disks with a temperature slope of $-1$ are shown, while the right column represents disks with a temperature slope of $-0.5$. Each row, from top to bottom, corresponds to shadow amplitudes of $\epsilon=0.5$ and $\epsilon=0.8$. The $x$-axis of the subfigures represents $\beta$, while the $y$-axis represents $\alpha$. Within each $\beta$-$\alpha$ section, there are three rows indicating the dominant structures in the disk: vortices/crescents, rings, and spirals, each represented by a different type of square, colored by the relevant physical properties as indicated in the color bars. The figures also include red and blue hatched areas, indicating disks undergoing the vortex-ring and ring-spiral transitions, respectively. We note that the inviscid ($\alpha=0$) simulations maintain the same temperature gradient with a slope of $-1$ (indicating that the $p$ value shown in the title of each subfigure only applies to viscid runs) and vary the density gradient between $d=-0.5$ and $d=-1$ in the left and right columns, respectively, which helps us exclude the influence of the density gradient. Figure 14: Statistics of the substructures' vorticity and density ratio for 160 simulations. The structure of the figure is described in Appendix B and is similar to Figure 5. Figure 15: Statistics of the vortices' aspect ratio, rings' eccentricity, and spirals' pitch angles for 160 simulations. The structure of the figure is the same as that of Figure 14. 
## Appendix C One-sided shadow test In this Appendix, we briefly examine how the morphology and form of substructures can be affected by the morphology of the shadow region. As an experiment, we performed simulations with only the right side of the shadow shown in Figure 1 present, with the target temperature taken as $T_{\rm tar}(r,\phi)=T_{\rm init}(r)\left(1-\epsilon e^{-\frac{\phi^{2}}{2\sigma_{\phi}^{2}}}\right).$ (C1) The remaining parameters for the disk and shadow are the same as those in the representative simulations (NL-hm runs). For detailed parameter settings for the NL-hm runs, please refer to Table 1. It can be seen from Figure 16 that the types of dominant substructures have not changed compared with the NL-hm runs. The dominant spiral now has $m=1$, and the rings become asymmetric (with $m=1$, as opposed to eccentric with $m=2$), while crescents are generated as usual. These outcomes similarly follow the formation process described in Section 5.2. These simulations illustrate that, besides a morphological change from $m=2$ to $m=1$, the general trends of shadow-driven substructures are not sensitive to the shadow prescription. Figure 16: Normalized density perturbation for a one-sided shadow. Left panel: Spiral-forming disk with parameters taken to be the same as run NL-hm-S-NR (Figure 2). Middle panel: Ring-forming disk with parameters taken to be the same as run NL-hm-R-NR (Figure 3). Right panel: Vortex-forming disk with parameters taken to be the same as run NL-hm-V-NR (Figure 4). ## References * ALMA Partnership et al. (2015) ALMA Partnership, Brogan, C. L., Pérez, L. M., et al. 2015, ApJ, 808, L3, doi: 10.1088/2041-8205/808/1/L3 * Andrews (2020) Andrews, S. M. 2020, ARA&A, 58, 483, doi: 10.1146/annurev-astro-031220-010302 * Avenhaus et al. (2018) Avenhaus, H., Quanz, S. P., Garufi, A., et al. 2018, ApJ, 863, 44, doi: 10.3847/1538-4357/aab846 * Bae et al. (2023) Bae, J., Isella, A., Zhu, Z., et al. 2023, in Astronomical Society of the Pacific Conference Series, Vol. 
534, Protostars and Planets VII, ed. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, & M. Tamura, 423, doi: 10.48550/arXiv.2210.13314 * Bae et al. (2021) Bae, J., Teague, R., & Zhu, Z. 2021, ApJ, 912, 56, doi: 10.3847/1538-4357/abe45e * Bai (2016) Bai, X.-N. 2016, ApJ, 821, 80, doi: 10.3847/0004-637X/821/2/80 * Benisty et al. (2015) Benisty, M., Juhasz, A., Boccaletti, A., et al. 2015, A&A, 578, L6, doi: 10.1051/0004-6361/201526011 * Benisty et al. (2018) Benisty, M., Juhász, A., Facchini, S., et al. 2018, A&A, 619, A171, doi: 10.1051/0004-6361/201833913 * Benisty et al. (2023) Benisty, M., Dominik, C., Follette, K., et al. 2023, in Astronomical Society of the Pacific Conference Series, Vol. 534, Protostars and Planets VII, ed. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, & M. Tamura, 605, doi: 10.48550/arXiv.2203.09991 * Birnstiel et al. (2013) Birnstiel, T., Dullemond, C. P., & Pinilla, P. 2013, A&A, 550, L8, doi: 10.1051/0004-6361/201220847 * Casassus et al. (2018) Casassus, S., Avenhaus, H., Pérez, S., et al. 2018, MNRAS, 477, 5104, doi: 10.1093/mnras/sty894 * de Val-Borro et al. (2006) de Val-Borro, M., Edgar, R. G., Artymowicz, P., et al. 2006, MNRAS, 370, 529, doi: 10.1111/j.1365-2966.2006.10488.x * Debes et al. (2017) Debes, J. H., Poteet, C. A., Jang-Condell, H., et al. 2017, ApJ, 835, 205, doi: 10.3847/1538-4357/835/2/205 * Doi & Kataoka (2021) Doi, K., & Kataoka, A. 2021, ApJ, 912, 164, doi: 10.3847/1538-4357/abe5a6 * Gammie (2001) Gammie, C. F. 2001, ApJ, 553, 174, doi: 10.1086/320631 * Garufi et al. (2018) Garufi, A., Benisty, M., Pinilla, P., et al. 2018, A&A, 620, A94, doi: 10.1051/0004-6361/201833872 * Ginski et al. (2021) Ginski, C., Facchini, S., Huang, J., et al. 2021, ApJ, 908, L25, doi: 10.3847/2041-8213/abdf57 * Gratton et al. (2019) Gratton, R., Ligi, R., Sissa, E., et al. 2019, A&A, 623, A140, doi: 10.1051/0004-6361/201834760 * GRAVITY Collaboration et al. (2021) GRAVITY Collaboration, Sanchez-Bermudez, J., Caratti O Garatti, A., et al. 
2021, A&A, 654, A97, doi: 10.1051/0004-6361/202039600 * Huang et al. (2020) Huang, J., Andrews, S. M., Dullemond, C. P., et al. 2020, ApJ, 891, 48, doi: 10.3847/1538-4357/ab711e * Huang et al. (2018) Huang, P., Isella, A., Li, H., Li, S., & Ji, J. 2018, ApJ, 867, 3, doi: 10.3847/1538-4357/aae317 * Isella et al. (2018) Isella, A., Huang, J., Andrews, S. M., et al. 2018, ApJ, 869, L49, doi: 10.3847/2041-8213/aaf747 * Kuffmeier et al. (2021) Kuffmeier, M., Dullemond, C. P., Reissl, S., & Goicovic, F. G. 2021, A&A, 656, A161, doi: 10.1051/0004-6361/202039614 * Lin & Shu (1964) Lin, C. C., & Shu, F. H. 1964, ApJ, 140, 646, doi: 10.1086/147955 * Lin & Youdin (2015) Lin, M.-K., & Youdin, A. N. 2015, ApJ, 811, 17, doi: 10.1088/0004-637X/811/1/17 * Liu et al. (2022) Liu, Y., Flock, M., & Fang, M. 2022, Science China Physics, Mechanics, and Astronomy, 65, 269511, doi: 10.1007/s11433-021-1891-8 * Lyra & Lin (2013) Lyra, W., & Lin, M.-K. 2013, ApJ, 775, 17, doi: 10.1088/0004-637X/775/1/17 * Monnier et al. (2017) Monnier, J. D., Harries, T. J., Aarnio, A., et al. 2017, ApJ, 838, 20, doi: 10.3847/1538-4357/aa6248 * Montesinos & Cuello (2018) Montesinos, M., & Cuello, N. 2018, MNRAS, 475, L35, doi: 10.1093/mnrasl/sly001 * Montesinos et al. (2016) Montesinos, M., Perez, S., Casassus, S., et al. 2016, ApJ, 823, L8, doi: 10.3847/2041-8205/823/1/L8 * Muro-Arena et al. (2018) Muro-Arena, G. A., Dominik, C., Waters, L. B. F. M., et al. 2018, A&A, 614, A24, doi: 10.1051/0004-6361/201732299 * Muro-Arena et al. (2020) Muro-Arena, G. A., Benisty, M., Ginski, C., et al. 2020, A&A, 635, A121, doi: 10.1051/0004-6361/201936509 * Nealon et al. (2019) Nealon, R., Pinte, C., Alexander, R., Mentiplay, D., & Dipierro, G. 2019, MNRAS, 484, 4951, doi: 10.1093/mnras/stz346 * Pérez et al. (2018) Pérez, L. M., Benisty, M., Andrews, S. M., et al. 2018, ApJ, 869, L50, doi: 10.3847/2041-8213/aaf745 * Pfeil & Klahr (2019) Pfeil, T., & Klahr, H. 
2019, ApJ, 871, 150, doi: 10.3847/1538-4357/aaf962 * Pinilla et al. (2015) Pinilla, P., Birnstiel, T., & Walsh, C. 2015, A&A, 580, A105, doi: 10.1051/0004-6361/201425539 * Pinte et al. (2023) Pinte, C., Teague, R., Flaherty, K., et al. 2023, in Astronomical Society of the Pacific Conference Series, Vol. 534, Protostars and Planets VII, ed. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, & M. Tamura, 645, doi: 10.48550/arXiv.2203.09528 * Pohl et al. (2017) Pohl, A., Benisty, M., Pinilla, P., et al. 2017, ApJ, 850, 52, doi: 10.3847/1538-4357/aa94c2 * Rich et al. (2020) Rich, E. A., Wisniewski, J. P., Sitko, M. L., et al. 2020, ApJ, 902, 4, doi: 10.3847/1538-4357/abb2a3 * Rich et al. (2019) Rich, E. A., Wisniewski, J. P., Currie, T., et al. 2019, ApJ, 875, 38, doi: 10.3847/1538-4357/ab0f3b * Shakura & Sunyaev (1973) Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337 * Speedie et al. (2022) Speedie, J., Booth, R. A., & Dong, R. 2022, ApJ, 930, 40, doi: 10.3847/1538-4357/ac5cc0 * Stolker et al. (2016) Stolker, T., Dominik, C., Avenhaus, H., et al. 2016, A&A, 595, A113, doi: 10.1051/0004-6361/201528039 * Stone et al. (2020) Stone, J. M., Tomida, K., White, C. J., & Felker, K. G. 2020, ApJS, 249, 4, doi: 10.3847/1538-4365/ab929b * Sturm et al. (2020) Sturm, J. A., Rosotti, G. P., & Dominik, C. 2020, A&A, 643, A92, doi: 10.1051/0004-6361/202038919 * Suzuki et al. (2016) Suzuki, T. K., Ogihara, M., Morbidelli, A., Crida, A., & Guillot, T. 2016, A&A, 596, A74, doi: 10.1051/0004-6361/201628955 * Tabone et al. (2022) Tabone, B., Rosotti, G. P., Cridland, A. J., Armitage, P. J., & Lodato, G. 2022, MNRAS, 512, 2290, doi: 10.1093/mnras/stab3442 * Uyama et al. (2018) Uyama, T., Hashimoto, J., Muto, T., et al. 2018, AJ, 156, 63, doi: 10.3847/1538-3881/aacbd1 * van Boekel et al. (2017) van Boekel, R., Henning, T., Menu, J., et al. 2017, ApJ, 837, 132, doi: 10.3847/1538-4357/aa5d68 * van der Marel et al. (2016) van der Marel, N., Cazzoletti, P., Pinilla, P., & Garufi, A. 
2016, ApJ, 832, 178, doi: 10.3847/0004-637X/832/2/178 * Varga et al. (2021) Varga, J., Hogerheijde, M., van Boekel, R., et al. 2021, A&A, 647, A56, doi: 10.1051/0004-6361/202039400 * Villenave et al. (2023) Villenave, M., Stapelfeldt, K. R., Duchene, G., et al. 2023, arXiv e-prints, arXiv:2311.07668, doi: 10.48550/arXiv.2311.07668 * Wolff et al. (2016) Wolff, S. G., Perrin, M., Millar-Blanchaer, M. A., et al. 2016, ApJ, 818, L15, doi: 10.3847/2041-8205/818/1/L15 * Zhang et al. (2021) Zhang, S., Hu, X., Zhu, Z., & Bae, J. 2021, ApJ, 923, 70, doi: 10.3847/1538-4357/ac2c82 * Zhu et al. (2014) Zhu, Z., Stone, J. M., Rafikov, R. R., & Bai, X.-n. 2014, ApJ, 785, 122, doi: 10.1088/0004-637X/785/2/122
# Onset of metallic transition in molecular liquid hydrogen Jianqing Guo School of Physics, Peking University, Beijing 100871, People’s Republic of China International Center for Quantum Materials, Peking University, Beijing 100871, People’s Republic of China Bingqing Cheng The Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria Limei Xu School of Physics, Peking University, Beijing 100871, People’s Republic of China International Center for Quantum Materials, Peking University, Beijing 100871, People’s Republic of China Interdisciplinary Institute of Light-Element Quantum Materials and Research Center for Light-Element Advanced Materials, Peking University, Beijing, China. Enge Wang School of Physics, Peking University, Beijing 100871, People’s Republic of China International Center for Quantum Materials, Peking University, Beijing 100871, People’s Republic of China Interdisciplinary Institute of Light-Element Quantum Materials and Research Center for Light-Element Advanced Materials, Peking University, Beijing, China. Songshan Lake Materials Lab, Institute of Physics, Chinese Academy of Sciences, Guangdong, China. School of Physics, Liaoning University, Shenyang, China Ji Chen<EMAIL_ADDRESS>School of Physics, Peking University, Beijing 100871, People’s Republic of China Interdisciplinary Institute of Light-Element Quantum Materials and Research Center for Light-Element Advanced Materials, Peking University, Beijing, China. Collaborative Innovation Center of Quantum Matter, Beijing 100871, People’s Republic of China ###### Abstract The liquid-liquid phase transition of hydrogen lies at the center of the hydrogen phase diagram as a promising route towards emergent properties such as Wigner-Huntington metallization, superconductivity, and superfluidity. Here we report a study of the liquid-liquid phase transition of hydrogen using state-of-the-art diffusion quantum Monte Carlo and density functional theory calculations. 
Our results suggest that the metallization process happens at lower pressures and temperatures than the structural phase transition from molecular to atomic hydrogen. The consequence is that metallized molecular hydrogen is stable over a wide range of pressures and temperatures. Our study breaks the conventional assumption that metallization coincides with the dissociation of hydrogen molecules, and the molecular metallic liquid hydrogen phase is likely to become the frontier of studying hydrogen phase transitions. Hydrogen, the simplest and lightest element in nature, is famous for its complex phase diagram and rich variety of phase transitions. Among the many phases of hydrogen, the liquid-liquid phase transition (LLPT) has been of particular interest ever since metallic hydrogen was predicted in 1935, and the LLPT is considered a path to metallic hydrogen McMahon _et al._ (2012); Nellis (2021). Experimentally, the LLPT of hydrogen has been achieved under dynamic and static compression in the fluid phase, and the evidence of metallization was mostly provided by changes in optical reflectivity Dzyabura _et al._ (2013); Zaghoo _et al._ (2016, 2018); Jiang _et al._ (2020); Weir _et al._ (1996); Knudson _et al._ (2015); Celliers _et al._ (2018). Nevertheless, challenges and controversies still exist in experiments, especially at low temperatures, including the uncertainty in determining the extreme pressure and temperature conditions, the difficulty of structural characterization, and the lack of accurate transport measurements of conductivity McMahon _et al._ (2012); Fang _et al._ (2019); Silvera and Dias (2021). 
Theoretically, the LLPT of hydrogen is often described as a transition from the insulating molecular phase to the metallic atomic phase, supported by computational simulations Scandolo (2003); Holst _et al._ (2008); Morales _et al._ (2010); Lorenzen _et al._ (2010); Morales _et al._ (2013); Geng _et al._ (2019); Hinz _et al._ (2020); Tian _et al._ (2020); van de Bund _et al._ (2021), indicating or assuming that the molecular-atomic transition (MAT) and the insulating-metallic transition (IMT) occur simultaneously. In recent years, new attention has been paid to the supercritical behavior and the location of the liquid-liquid critical point in the computational community, involving studies using density functional theory (DFT), machine learning potentials, and quantum Monte Carlo (QMC) Li _et al._ (2015); Gorelov _et al._ (2020); Cheng _et al._ (2020); Karasiev _et al._ (2021); Cheng _et al._ (2021); Tirelli _et al._ (2021). Based on these studies, it is understood that below the critical point, if the temperature is above the melting line, the LLPT is a single phase transition in which $\text{H}_{2}$ molecules dissociate and metallize simultaneously; above the critical point, dissociation and metallization happen in a smooth and continuous manner. In the literature, both the molecular-atomic transition and the insulating-metallic transition have been analysed in detail using state-of-the-art computational methods (see e.g. Refs. Pierleoni _et al._ , 2018; Gorelov _et al._ , 2020). However, to the best of our knowledge, there is no theoretical guarantee or experimental evidence that the metallization of molecular hydrogen should exactly coincide with dissociation, regardless of whether the LLPT is first order or not. On the contrary, experimental studies performed in the solid regime of dense hydrogen have suggested that metallization can happen without a molecular-atomic transition, motivating a re-examination of the nature of the LLPT Eremets _et al._ (2019). 
In this paper, we study the molecular-atomic and insulating-metallic phase transitions of liquid hydrogen, to answer the basic question of whether they coincide with each other or one occurs before the other. To provide microscopic insight, we propose the molecular fraction of liquid hydrogen as the key order parameter to compare the two kinds of phase transitions on the same footing. We use the state-of-the-art diffusion quantum Monte Carlo (DMC) method to benchmark DFT functionals and find that vdW-DF can accurately describe the structural phase transition of dense liquid hydrogen. We also study the electronic phase transition of liquid hydrogen by calculating the fundamental gap as a function of the molecular fraction using DMC and DFT methods. We find that the insulating-metallic transition begins when around 20% of the hydrogen molecules dissociate into hydrogen atoms, a much lower degree of dissociation than the half-dissociation point marking the molecular-atomic phase transition. By mapping these results onto the pressure-temperature phase diagram, we estimate that the pressure of the IMT is tens of GPa lower than that of the structural MAT. Figure 1: Molecular and atomic liquid hydrogen from AIMD simulations. (a-b) Snapshots of typical molecular liquid hydrogen (x = 0.96) and half-molecular-half-atomic liquid hydrogen (x = 0.48) from the vdW-DF simulations. The criterion for drawing an H-H bond is 0.9 Å. (c) Molecular fraction as a function of temperature using different DFT functionals. The molecular fraction order parameter x is defined in detail in the main text and the Supplementary Material. Qualitatively, the molecular fraction x describes the portion of $\text{H}_{2}$ molecules in liquid hydrogen. (d) HH pair correlation functions $\text{G}_{\text{HH}}(\text{r})$ at different temperatures calculated from vdW-DF AIMD simulations. Simulations were carried out at a density of $\text{r}_{\text{s}}$ = 1.50 Bohr in the NVT ensemble with 128 hydrogen atoms. 
DFT-based ab initio molecular dynamics (AIMD) simulations were carried out using the Quantum Espresso (QE) package Giannozzi _et al._ (2009, 2017) with the Perdew-Burke-Ernzerhof (PBE) Perdew _et al._ (1996), BLYP-D2 Grimme (2006), vdW-DF Dion _et al._ (2004), vdW-DF2 Klimeš _et al._ (2009) and SCAN Sun _et al._ (2015) functionals. The interactions between the valence electrons were treated with Hamann-Schlüter-Chiang-Vanderbilt pseudopotentials Hamann _et al._ (1979); Vanderbilt (1985). Each simulation was performed in the canonical (NVT) ensemble, and the temperature was controlled by a Nosé-Hoover thermostat Martyna _et al._ (1992). DFT total energy calculations with the PBE, vdW-DF, BLYP-D2, SCAN, and HSE06 Heyd _et al._ (2003) functionals were performed with both the QE package and the VASP code Kresse and Furthmüller (1996). DMC calculations were performed using the CASINO package Needs _et al._ (2020), with the size-consistent DMC algorithm ZSGMA Zen _et al._ (2016). The recently developed energy-consistent correlated electron pseudopotentials (eCEPPs) Trail and Needs (2017) were used. Further computational details can be found in the Supplemental Material si . To investigate the molecular-atomic phase transition we use an order parameter, namely the molecular fraction. As suggested in previous studies, a molecular fraction of 0.5 is a good criterion to characterize the structural molecular-atomic phase transition Tamblyn and Bonev (2010); Geng _et al._ (2019); Cheng _et al._ (2020). Specifically, we follow the definition of Cheng et al. Cheng _et al._ (2020), where the molecular fraction x is defined as the fraction of atoms with one neighbour, counted with a smooth cutoff function that starts from 0.8 Å and decays to 0 at 1.1 Å (see Supplementary Material II for more details and further validation using an agnostic classification scheme si ). 
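As a concrete illustration of this order parameter, the sketch below computes the molecular fraction from a single snapshot. It is only a sketch: the exact switching function of Cheng et al. is not reproduced here, so a cosine switch between 0.8 Å and 1.1 Å and a simple threshold on the smoothed coordination number are assumed.

```python
import numpy as np

def smooth_cutoff(r, r_on=0.8, r_off=1.1):
    """Switching function: 1 below r_on, 0 above r_off.
    A cosine switch is assumed here; the exact form used by
    Cheng et al. may differ."""
    s = np.ones_like(r)
    s[r >= r_off] = 0.0
    mid = (r > r_on) & (r < r_off)
    s[mid] = 0.5 * (1.0 + np.cos(np.pi * (r[mid] - r_on) / (r_off - r_on)))
    return s

def molecular_fraction(positions, box):
    """Fraction of H atoms whose smoothed coordination number is
    close to 1, i.e. atoms bound in an H2 molecule.
    positions: (N, 3) array in Angstrom; box: (3,) orthorhombic cell."""
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)              # minimum-image convention
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)               # exclude self-pairs
    coord = smooth_cutoff(r).sum(axis=1)      # smoothed coordination number
    return float(np.mean(np.abs(coord - 1.0) < 0.5))
```

For a set of well-separated H2 molecules this returns 1.0 and for isolated atoms 0.0; in production the quantity is averaged over many AIMD frames.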
There are other definitions of the molecular fraction Tamblyn and Bonev (2010); Geng _et al._ (2019) and other order parameters, e.g. density/volume, heat capacity, compressibility, and the diffusion coefficient, that can be used to describe the molecular-atomic transition on the temperature-pressure phase diagram Li _et al._ (2015); Cheng _et al._ (2020); Karasiev _et al._ (2021). A detailed discussion of the thermodynamic properties involved in the molecular-atomic LLPT is beyond the scope of this study. Here we focus on the molecular fraction, which provides the clearest microscopic insight into the molecular-atomic transition. Fig. 1 (a) and (b) show snapshots of two structures with a high molecular fraction (x = 0.96) and a low molecular fraction (x = 0.48), respectively. In Fig. 1 (c) we plot the molecular fraction as a function of temperature from systematic AIMD simulations using different DFT exchange-correlation functionals at the density of $\text{r}_{\text{s}}$ = 1.50 Bohr with a system size of 128 H atoms. Tests on e.g. size effects and density dependence are further presented and discussed in Fig. S2 and S3 si . The simulations confirm that the molecular fraction can be used to monitor the molecular-atomic LLPT as a function of temperature. The structural transition is further illustrated by the pair correlation functions in Fig. 1 (d) and Fig. S1 si , where the first peak is suppressed when the molecular fraction decreases to $\sim$0.5. However, different functionals show quite different transition temperatures at a constant density, consistent with the shifts of phase boundaries on the pressure-temperature phase diagram observed in previous studies. Among the functionals considered, we find that PBE predicts the lowest phase transition temperature and vdW-DF2 the highest. The other three functionals, namely SCAN, vdW-DF, and BLYP-D2, show behaviors that lie in between those of PBE and vdW-DF2. 
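The pair correlation functions shown in Fig. 1 (d) can be estimated from simulation snapshots with a standard histogram estimator. The sketch below (plain NumPy, a single frame, orthorhombic periodic box) is illustrative; in practice $\text{G}_{\text{HH}}(\text{r})$ is averaged over many AIMD frames.

```python
import numpy as np

def pair_correlation(positions, box, r_max=3.0, n_bins=150):
    """Radial distribution function g(r) for one snapshot of an
    orthorhombic periodic box (positions in Angstrom).
    r_max must stay below half the shortest box length."""
    n = len(positions)
    vol = np.prod(box)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)            # minimum-image convention
    r = np.linalg.norm(d, axis=-1)
    r = r[np.triu_indices(n, k=1)]          # unique pairs only
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = shell * n * (n - 1) / 2 / vol   # expected pair count for an ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal
```

For a uniform random gas the estimator fluctuates around 1, while a molecular liquid shows the intramolecular first peak near the H-H bond length that is suppressed as the molecular fraction drops.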
Overall, these computational observations of the qualitative differences among DFT functionals agree with the comparisons reported in the literature Clay _et al._ (2014); Geng _et al._ (2019). Figure 2: Benchmark of DFT against DMC. 14 structures covering a wide range of molecular fraction x were selected for the benchmark. The structures are collected from vdW-DF AIMD and machine learning potential simulations of 128 hydrogen atoms with a density of $\text{r}_{\text{s}}$ = 1.5 Bohr at different temperatures (see Supplementary Material II for more details si ). The relative binding energy is calculated as $\text{E}_{\text{b}}^{\text{DFT}}-\text{E}_{\text{b}}^{\text{QMC}}$, where $\text{E}_{\text{b}}=(\text{E}_{\text{liquid}}-\text{N}_{\text{H}}\times\text{E}_{\text{H}})/\text{N}_{\text{H}}$ is the binding energy. The differing results from the AIMD simulations suggest that it is necessary to further evaluate the accuracy of DFT functionals for liquid hydrogen. Therefore, we calculate the binding energies of a set of structures via DMC and use them as the benchmark to evaluate the performance of DFT functionals, a strategy that has been adopted successfully for solid hydrogen Chen _et al._ (2014); Clay _et al._ (2014); Drummond _et al._ (2015). In Fig. 2, we plot the relative binding energies of liquid hydrogen structures as a function of the molecular fraction with respect to the DMC calculations ($\text{E}_{\text{b}}^{\text{DFT}}-\text{E}_{\text{b}}^{\text{QMC}}$). The binding energy is defined as $\text{E}_{\text{b}}=(\text{E}_{\text{liquid}}-\text{N}_{\text{H}}\times\text{E}_{\text{H}})/\text{N}_{\text{H}}$, where $\text{E}_{\text{liquid}}$ is the total energy computed using DFT or DMC, and $\text{E}_{\text{H}}$ is the energy of a single H atom. As one can see, vdW-DF and BLYP-D2 describe the interactions in liquid hydrogen most accurately among all the functionals. 
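Once total energies are available, the benchmark quantity of Fig. 2 reduces to simple arithmetic. The helper below sketches the per-atom relative binding energy and its slope versus molecular fraction; all numerical inputs are illustrative, not values from the paper.

```python
import numpy as np

def binding_energy(e_total, n_h, e_atom):
    """Per-atom binding energy E_b = (E_liquid - N_H * E_H) / N_H."""
    return (np.asarray(e_total) - n_h * e_atom) / n_h

def benchmark_slope(x, e_dft, e_qmc, n_h, e_h_dft, e_h_qmc):
    """Relative binding energy (DFT minus DMC) per atom for a set of
    structures, and its slope vs. molecular fraction x. A positive
    slope (relative energy rising toward the molecular side) means the
    functional over-stabilizes the atomic phase relative to DMC."""
    rel = binding_energy(e_dft, n_h, e_h_dft) - binding_energy(e_qmc, n_h, e_h_qmc)
    slope = np.polyfit(np.asarray(x), rel, 1)[0]
    return rel, slope
```

A flat curve near zero, as reported for vdW-DF and BLYP-D2, is the desirable outcome: it indicates a balanced description of the molecular and atomic phases.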
From the atomic side to the molecular side, the relative binding energy differs by less than 10 meV per atom, and the small slope of the curve also benefits the prediction of the phase boundary between the molecular and atomic phases. We also find that PBE and SCAN tend to overestimate the stability of liquid hydrogen, and a large positive slope means a severe overestimate of the relative stability of the atomic phase. In contrast, vdW-DF2 underestimates the stability of liquid hydrogen, and its underestimation of the stability of the atomic phase is even stronger. Based on the benchmark, we conclude that vdW-DF and BLYP-D2 describe the structural phase transition of liquid hydrogen most accurately, whereas PBE and SCAN tend to facilitate the molecular-atomic transition and vdW-DF2 hinders the hydrogen molecules from dissociating into atoms. The benchmark conclusions are in line with the AIMD results: the vdW-DF and BLYP-D2 results coincide with each other within a small variation; PBE and SCAN predict lower phase transition temperatures while vdW-DF2 gives a higher phase transition temperature. Note that the DFT methods we used differ in both the nature of the underlying exchange-correlation functional and the van der Waals correction scheme. The importance of van der Waals interactions in dense hydrogen has been noted in some studies; however, further analyses show that the differences in describing molecular and atomic liquid hydrogen are mainly dominated by the underlying exchange-correlation functional, while the van der Waals corrections do not play a significant role (Fig. S4 si ). Figure 3: Fundamental energy gap of liquid hydrogen as a function of molecular fraction x calculated using different methods. The fundamental gap is defined as $\Delta=\text{E}(\text{N}+1)+\text{E}(\text{N}-1)-2\text{E}(\text{N})$, where $\text{N}=128$ is the number of electrons, which equals the number of hydrogen atoms in the system. 
Lines with different colors show the fundamental gap computed using different methods. The structures are selected from those used in Fig. 2, and are detailed in the Supplementary Material. AMH, MMH and MIH indicate the regimes of atomic metallic hydrogen, molecular metallic hydrogen and molecular insulating hydrogen, respectively. The background is colored by the molecular fraction. We now consider the electronic insulating-metallic phase transition of liquid hydrogen. The fundamental gap of liquid hydrogen as a function of molecular fraction calculated by DMC is shown in Fig. 3, computed according to $\Delta=\text{E}(\text{N}+1)+\text{E}(\text{N}-1)-2\text{E}(\text{N})$. We find that the fundamental gap $\Delta$ closes at molecular fraction $x=0.8$ (red dashed line), which is far from the half molecular-atomic phase transition boundary $x=0.5$ (green dashed line). A fraction of $x=0.8$ means liquid hydrogen becomes conductive when about 20% of the hydrogen molecules have dissociated into atomic hydrogen. Note that the structures used in computing the fundamental gap are collected from simulations covering a wide range of temperatures and pressures si . Therefore, even though hydrogen metallizes more easily under higher pressures, the molecular fraction is a robust order parameter to characterize the metallization of hydrogen. We can also compute the fundamental gap using DFT methods. As shown in Fig. 3, DFT calculations generally underestimate the fundamental gap compared with DMC. Specifically, HSE06 and vdW-DF underestimate the gap by up to 30%, whereas PBE may underestimate it by more than 50%. However, an interesting fact is that the molecular fraction at which the fundamental gap closes is independent of the calculation method. All DFT functionals consistently predict a threshold molecular fraction of $x=0.8$ for gap closing, so we can directly calculate the fundamental gap using more economical methods to test the size effects in fundamental gap calculations. 
We therefore select more configurations from vdW-DF AIMD simulations with 512 hydrogen atoms at different densities ($\text{r}_{\text{s}}$ = 1.35 Bohr, $\text{r}_{\text{s}}$ = 1.50 Bohr and $\text{r}_{\text{s}}$ = 1.65 Bohr, respectively) and calculate their fundamental gaps using PBE, as shown in Fig. S5 si . We find that the insulating-metallic phase transition boundary is always around molecular fraction $x=0.8$. Figure 4: LLPT on the pressure-temperature phase diagram of hydrogen. The insulating-metallic transition and molecular-atomic transition boundaries determined in this work are shown as dark red and green solid lines, respectively. NVT AIMD simulations of 512 hydrogen atoms were performed with the vdW-DF functional at different temperatures and densities, from which the molecular fraction and pressure were calculated. The two transition boundaries were estimated by the temperature and the pressure that reproduce the molecular fraction x=0.8 (dark red line) and x=0.5 (green line), respectively. Further details can be found in Fig. S6 of the Supplemental Material si . MIH, MMH and AMH indicate the molecular-insulating hydrogen, molecular-metallic hydrogen and the atomic-metallic hydrogen in the liquid regime. The light red line indicates the melting line adapted from Ref. Nellis, 2021. Symbols with dashed lines (dark red) show several experimental measurements of the insulating-metallic phase transition. The dark blue dashed line shows the MAT phase boundary obtained by Geng et al. using the vdW-DF functional Geng _et al._ (2019). The light blue squares are IMT points calculated by Gorelov et al. Gorelov _et al._ (2020) using quantum Monte Carlo. Recent experimental reports of the insulating-metallic transition boundary by Celliers et al. Celliers _et al._ (2018), Knudson et al. Knudson _et al._ (2015) and Zaghoo et al. Zaghoo _et al._ (2016); Zaghoo and Silvera (2017) are presented by the dark red points. 
A clear prediction from the above results is that dense liquid hydrogen can be metallized before the molecular-atomic phase transition, which implies that experiments can find metallic hydrogen at lower temperatures/pressures than atomic hydrogen. To provide a more quantitative prediction, we draw the insulating-metallic ($x=0.8$) and molecular-atomic ($x=0.5$) phase transition boundaries in the pressure-temperature phase diagram, using AIMD simulations with the benchmarked vdW-DF functional (see Supplementary Material I and Fig. S6 for more details si ). As shown by the pink shaded area in Fig. 4, at a constant temperature the insulating-metallic transition pressure is shifted downward by 30-60 GPa relative to the molecular-atomic transition pressure. At a constant pressure, the insulating-metallic transition temperature is lower than the molecular-atomic transition temperature by 300 to 1000 K. The two phase boundaries mean that liquid hydrogen can be divided into three phases, namely the molecular insulating hydrogen (MIH), the molecular metallic hydrogen (MMH), and the atomic metallic hydrogen (AMH), as labelled in Fig. 3 and Fig. 4. A molecular semi-metallic phase has been observed in the low-temperature solid regime of dense hydrogen Eremets _et al._ (2019), and partially dissociated molecular phases of solid hydrogen have also been discussed experimentally and theoretically Dalladay-Simpson _et al._ (2016); Monserrat _et al._ (2018). Here our prediction suggests that there is a wide range of molecular metallic hydrogen in the liquid regime, where the intriguing electronic transitions of hydrogen can be explored further. We note that a different definition of the molecular fraction was used in Ref. Geng _et al._ , 2019 to determine the MAT boundary, and our estimates agree quite well with theirs, suggesting that the conclusions reached are insensitive to the definition of the molecular fraction. 
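Operationally, each boundary point above amounts to locating, for each density, the temperature at which the simulated molecular fraction x(T) crosses a threshold (x = 0.8 for the IMT, x = 0.5 for the MAT). A minimal linear-interpolation sketch, assuming x decreases monotonically with temperature, is:

```python
import numpy as np

def crossing_temperature(temps, x_vals, x_c):
    """Temperature at which the molecular fraction x(T) crosses the
    threshold x_c, by linear interpolation between simulated points.
    Assumes x decreases monotonically with temperature."""
    temps = np.asarray(temps, dtype=float)
    x_vals = np.asarray(x_vals, dtype=float)
    # np.interp needs an increasing abscissa, so interpolate T as a
    # function of x after sorting into ascending order in x
    order = np.argsort(x_vals)
    return float(np.interp(x_c, x_vals[order], temps[order]))
```

Applying this with x_c = 0.8 and x_c = 0.5 to the same x(T) data directly yields, at that density, the gap between the insulating-metallic and molecular-atomic boundaries.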
In experiments on liquid hydrogen, most measurements monitor the metallization rather than the structural phase transition. Therefore, the separation of the two phase transition boundaries allows us to make further quantitative comparisons with experimental data, some of which are also plotted in Fig. 4. Apart from the original results of Knudson et al., which were later re-interpreted by Celliers et al. Celliers _et al._ (2018), our new prediction of the insulating-metallic transition boundary is in better agreement with experimental data Zaghoo and Silvera (2017); Celliers _et al._ (2018). Vibrational measurements using e.g. Raman and infrared spectroscopy have been performed to identify structural transitions of dense solid hydrogen Howie _et al._ (2015), and further extension of such techniques is desired to determine the molecular-atomic LLPT. Theoretical studies of the LLPT, on the other hand, have mostly focused on structural phase transitions, and there is room for further development of theoretical methods, e.g. employing other electronic structure calculations. Nuclear quantum effects can also shift both boundaries by a small amount Morales _et al._ (2013); van de Bund _et al._ (2021), but it remains to be investigated whether nuclear quantum effects on the MAT and the IMT are effectively the same, and how sensitive nuclear quantum effects are to the choice of DFT functional. To conclude, we have performed DMC calculations and AIMD simulations of liquid hydrogen over a wide range of pressures and temperatures. Our main finding is that the LLPT of hydrogen is in fact two separate transitions that do not coincide with each other. Specifically, the insulating-metallic transition occurs before the molecular-atomic phase transition as the temperature and pressure increase, leading to a regime of molecular metallic hydrogen with a width of 30-60 GPa and 300-1000 K. 
Our results provide an important addition to the current understanding of the phase diagram of hydrogen. Last but not least, the onset of the metallic transition in molecular liquid hydrogen is encouraging for the experimental pursuit of the holy grail of metallic hydrogen and the search for high-temperature superconductors in dense hydrides. ###### Acknowledgements. The authors thank X.-Z. Li and A. Zen for helpful discussions. This work was supported by the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB33000000, and the National Natural Science Foundation of China under Grant No. 11974024, 92165101, 11935002. We are grateful for computational resources provided by TianHe-1A, the High Performance Computing Platform of Peking University, Shanghai Supercomputer Center, and the Platform for Data Driven Computational Materials Discovery of the Songshan Lake Materials Lab. ## References * McMahon _et al._ (2012) J. M. McMahon, M. A. Morales, C. Pierleoni, and D. M. Ceperley, Rev. Mod. Phys. 84, 1607 (2012). * Nellis (2021) W. J. Nellis, J. Phys. Chem. Lett. 12, 7972 (2021). * Dzyabura _et al._ (2013) V. Dzyabura, M. Zaghoo, and I. F. Silvera, Proc. Natl. Acad. Sci. U.S.A. 110, 8040 (2013). * Zaghoo _et al._ (2016) M. Zaghoo, A. Salamat, and I. F. Silvera, Phys. Rev. B 93, 155128 (2016). * Zaghoo _et al._ (2018) M. Zaghoo, R. J. Husband, and I. F. Silvera, Phys. Rev. B 98, 104102 (2018). * Jiang _et al._ (2020) S. Jiang, N. Holtgrewe, Z. M. Geballe, S. S. Lobanov, M. F. Mahmood, R. S. McWilliams, and A. F. Goncharov, Advanced Science 7, 1901668 (2020). * Weir _et al._ (1996) S. T. Weir, A. C. Mitchell, and W. J. Nellis, Phys. Rev. Lett. 76, 1860 (1996). * Knudson _et al._ (2015) M. D. Knudson, M. P. Desjarlais, A. Becker, R. W. Lemke, K. R. Cochrane, M. E. Savage, D. E. Bliss, T. R. Mattsson, and R. Redmer, Science 348, 1455 (2015). * Celliers _et al._ (2018) P. M. Celliers, M. Millot, S. Brygoo, R. S. McWilliams, D. E. 
Fratanduono, J. R. Rygg, A. F. Goncharov, P. Loubeyre, J. H. Eggert, J. L. Peterson, N. B. Meezan, S. L. Pape, G. W. Collins, R. Jeanloz, and R. J. Hemley, Science 361, 677 (2018). * Fang _et al._ (2019) W. Fang, J. Chen, Y. Feng, X.-Z. Li, and A. Michaelides, International Reviews in Physical Chemistry 38, 35 (2019). * Silvera and Dias (2021) I. F. Silvera and R. Dias, Advances in Physics: X 6, 1961607 (2021). * Scandolo (2003) S. Scandolo, Proc. Natl. Acad. Sci. U.S.A. 100, 3051 (2003). * Holst _et al._ (2008) B. Holst, R. Redmer, and M. P. Desjarlais, Phys. Rev. B 77, 184201 (2008). * Morales _et al._ (2010) M. A. Morales, C. Pierleoni, E. Schwegler, and D. M. Ceperley, Proc. Natl. Acad. Sci. U. S. A 107, 12799 (2010). * Lorenzen _et al._ (2010) W. Lorenzen, B. Holst, and R. Redmer, Phys. Rev. B 82, 195107 (2010). * Morales _et al._ (2013) M. A. Morales, J. M. McMahon, C. Pierleoni, and D. M. Ceperley, Phys. Rev. Lett. 110, 065702 (2013). * Geng _et al._ (2019) H. Y. Geng, Q. Wu, M. Marqués, and G. J. Ackland, Phys. Rev. B 100, 134109 (2019). * Hinz _et al._ (2020) J. Hinz, V. V. Karasiev, S. X. Hu, M. Zaghoo, D. Mejía-Rodríguez, S. B. Trickey, and L. Calderín, Phys. Rev. Research 2, 032065 (2020). * Tian _et al._ (2020) C. Tian, F. Liu, H. Yuan, H. Chen, and Y. Gan, J. Phys. Condens. Matter 33, 015401 (2020). * van de Bund _et al._ (2021) S. van de Bund, H. Wiebe, and G. J. Ackland, Phys. Rev. Lett. 126, 225701 (2021). * Li _et al._ (2015) R. Li, J. Chen, X. Li, E. Wang, and L. Xu, New Journal of Physics 17, 063023 (2015). * Gorelov _et al._ (2020) V. Gorelov, D. M. Ceperley, M. Holzmann, and C. Pierleoni, Phys. Rev. B 102, 195133 (2020). * Cheng _et al._ (2020) B. Cheng, G. Mazzola, C. J. Pickard, and M. Ceriotti, Nature 585, 217 (2020). * Karasiev _et al._ (2021) V. V. Karasiev, J. Hinz, S. X. Hu, and S. B. Trickey, Nature 600 (2021). * Cheng _et al._ (2021) B. Cheng, G. Mazzola, C. J. Pickard, and M. Ceriotti, Nature 600, E15 (2021). 
* Tirelli _et al._ (2021) A. Tirelli, G. Tenti, K. Nakano, and S. Sorella, arXiv:2112.11099 (2021). * Pierleoni _et al._ (2018) C. Pierleoni, M. Holzmann, and D. M. Ceperley, Contributions to Plasma Physics 58, 99 (2018). * Eremets _et al._ (2019) M. I. Eremets, A. P. Drozdov, P. P. Kong, and H. Wang, Nature Physics 15, 1246 (2019). * Giannozzi _et al._ (2009) P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, J. Phys.: Condens. Matter 21, 395502 (2009). * Giannozzi _et al._ (2017) P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. Buongiorno Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. Dal Corso, S. de Gironcoli, P. Delugas, R. A. DiStasio, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Küçükbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H.-V. Nguyen, A. Otero-de-la Roza, L. Paulatto, S. Poncé, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, J. Phys.: Condens. Matter 29, 465901 (2017). * Perdew _et al._ (1996) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). * Grimme (2006) S. Grimme, J. Comput. Chem. 27, 1787 (2006). * Dion _et al._ (2004) M. Dion, H. Rydberg, E. Schröder, D. C. Langreth, and B. I. Lundqvist, Phys. Rev. Lett. 92, 246401 (2004). * Klimeš _et al._ (2009) J. Klimeš, D. R. Bowler, and A. Michaelides, J. Phys. Condens. Matter 22, 022201 (2009). * Sun _et al._ (2015) J. Sun, A. 
Ruzsinszky, and J. P. Perdew, Phys. Rev. Lett. 115, 036402 (2015). * Hamann _et al._ (1979) D. R. Hamann, M. Schlüter, and C. Chiang, Phys. Rev. Lett. 43, 1494 (1979). * Vanderbilt (1985) D. Vanderbilt, Phys. Rev. B 32, 8412 (1985). * Martyna _et al._ (1992) G. J. Martyna, M. L. Klein, and M. Tuckerman, J. Chem. Phys. 97, 2635 (1992). * Heyd _et al._ (2003) J. Heyd, G. E. Scuseria, and M. Ernzerhof, J. Chem. Phys. 118, 8207 (2003). * Kresse and Furthmüller (1996) G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996). * Needs _et al._ (2020) R. J. Needs, M. D. Towler, N. D. Drummond, P. López Ríos, and J. R. Trail, J. Chem. Phys. 152, 154106 (2020). * Zen _et al._ (2016) A. Zen, S. Sorella, M. J. Gillan, A. Michaelides, and D. Alfè, Phys. Rev. B 93, 241118 (2016). * Trail and Needs (2017) J. R. Trail and R. J. Needs, J. Chem. Phys. 146, 204107 (2017). * (44) See Supplemental Material for more computational details, extended analyses, additional figures, and structure files. * Tamblyn and Bonev (2010) I. Tamblyn and S. A. Bonev, Phys. Rev. Lett. 104, 065702 (2010). * Clay _et al._ (2014) R. C. Clay, J. Mcminis, J. M. McMahon, C. Pierleoni, D. M. Ceperley, and M. A. Morales, Phys. Rev. B 89, 184106 (2014). * Chen _et al._ (2014) J. Chen, X. Ren, X.-Z. Li, D. Alfè, and E. Wang, J. Chem. Phys. 141, 024501 (2014). * Drummond _et al._ (2015) N. D. Drummond, B. Monserrat, J. H. Lloyd-Williams, P. López Ríos, C. J. Pickard, and R. J. Needs, Nat. Commun. 6, 7794 (2015). * Zaghoo and Silvera (2017) M. Zaghoo and I. F. Silvera, Proc. Natl. Acad. Sci. U.S.A. 114, 11873 (2017). * Dalladay-Simpson _et al._ (2016) P. Dalladay-Simpson, R. T. Howie, and E. Gregoryanz, Nature 529, 63 (2016). * Monserrat _et al._ (2018) B. Monserrat, N. D. Drummond, P. Dalladay-Simpson, R. T. Howie, P. López Ríos, E. Gregoryanz, C. J. Pickard, and R. J. Needs, Phys. Rev. Lett. 120, 255701 (2018). * Howie _et al._ (2015) R. T. Howie, P. Dalladay-Simpson, and E. 
Gregoryanz, Nature Materials 14 (2015).
# Wideband Audio Waveform Evaluation Networks: Efficient, Accurate Estimation of Speech Qualities Andrew Catellier, and Stephen Voran A. Catellier and S. Voran are with the Institute for Telecommunication Sciences, Boulder, CO, 80305 USA e-mail: <EMAIL_ADDRESS> ###### Abstract Wideband Audio Waveform Evaluation Networks (WAWEnets) are convolutional neural networks that operate directly on wideband audio waveforms in order to produce evaluations of those waveforms. In the present work these evaluations give qualities of telecommunications speech (e.g., noisiness, intelligibility, overall speech quality). WAWEnets are no-reference networks because they do not require “reference” (original or undistorted) versions of the waveforms they evaluate. Our initial WAWEnet publication introduced four WAWEnets and each emulated the output of an established full-reference speech quality or intelligibility estimation algorithm. We have updated the WAWEnet architecture to be more efficient and effective. Here we present a single WAWEnet that closely tracks seven different quality and intelligibility values. We create a second network that additionally tracks four subjective speech quality dimensions. We offer a third network that focuses on just subjective quality scores and achieves very high levels of agreement. This work has leveraged 334 hours of speech in 13 languages, over two million full-reference target values and over 93,000 subjective mean opinion scores. We also interpret the operation of WAWEnets and identify the key to their operation using the language of signal processing: ReLUs strategically move spectral information from non-DC components into the DC component. The DC values of 96 output signals define a vector in a 96-D latent space and this vector is then mapped to a quality or intelligibility value for the input waveform. 
###### Index Terms: convolutional neural network, no-reference objective estimator, speech intelligibility, speech quality, subjective testing, wideband speech. ## I Introduction Wired and wireless telecommunications options continue to proliferate and evolve. Speech quality and speech intelligibility can vary dramatically between systems and devices, and further variation is driven by changes in acoustic environment, network loading, and radio conditions. Efficient real-time measurements of received speech quality and intelligibility are invaluable. Researchers have been developing such measurement tools for decades, but telecommunications systems, devices, and use cases continue to evolve. This can cause entirely new classes of speech impairments, and existing measurement tools may then fail to give meaningful results. This scenario motivates the development of new tools that do give meaningful results in the current environment. Measurement tools fall into two major classifications—full-reference (FR) tools require reference (transmitted) and impaired (received) speech and are practical in off-line applications. No-reference (NR) tools operate on impaired speech alone and are much more practical for real-time, in-service monitoring. Some examples of FR tools developed over the years can be found in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. The most effective FR tools apply psycho-acoustic transformations to reference and impaired speech and then compare these internal representations in ways that mimic key attributes of human judgement. NR tools eliminate the need for time alignment and the issue of comparisons, but they require an embodied model for how speech should sound, independent of what speech was sent. This is a significant challenge, but success allows for practical real-time, in-service monitoring of speech quality or intelligibility. 
Thus these tools have also been named “single-ended,” “in-service,” “non-intrusive,” or “output only.” Some examples of early NR tools are found in [14, 15, 16]. ### I-A Existing Machine Learning Approaches As machine learning (ML) has become more powerful and accessible, numerous research groups have sought to apply ML to develop NR tools [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]. Some of these NR tools produce estimates of subjective test scores that report speech or sound quality mean opinion score (MOS) [17, 18, 19, 24, 25, 26, 27, 30, 35, 38, 40, 41, 46, 47], naturalness [28, 34, 36], listening effort [23], noise intrusiveness [47], and speech intelligibility [20, 32]. The non-intrusive speech quality assessment model called NISQA [50] produces estimates of subjective speech quality as well as four constituent dimensions: noisiness, coloration, discontinuity, and loudness. Other NR tools produce estimates of objective values including FR speech quality values [22, 29, 31, 42, 48], FR speech intelligibility values [29, 31, 42, 49], speech transmission index [21], codec bit-rate [43], and detection of specific impairments, artifacts, or noise types [33, 37, 39, 49]. Some of these tools perform a single task and others perform multiple tasks. The works cited here cover a wide variety of ML architectures. They address application areas that include room acoustics (noise and reverberation), speech enhancers, telecommunications systems, and hearing aids. Each addresses one or more of narrowband (NB) (nominally 300 Hz–3.5 kHz), wideband (WB) (nominally 50 Hz–7 kHz), super-wideband (SWB) (nominally 50 Hz–16 kHz), or fullband (FB) (nominally 20 Hz–20 kHz) speech, except for [33, 43], which address music. Recent work shows that NR tools can measure the speech quality at a system input despite the fact that such tools can only access the system output [44]. 
It is natural to consider agreement with subjective test results as the ultimate goal for NR tools. For ML-based tools this requires large datasets of subjective test results (most commonly speech quality MOS values) for training and validation. Sufficient datasets are rare and expensive to generate through laboratory testing, so crowd-sourced tests are becoming common. Joint training or transfer learning can leverage objective FR quality values [25, 30, 47] or impairment categories [46] alongside MOS values to maximize the benefit of those MOS values. Semi-supervised learning [40] is also an effective way to compensate for scarce MOS values. It is important to recognize that the MOS scale is relative, not absolute, and the use of the scale in any given subjective test will depend on the conditions included in that test. For example, a given condition might be rated 4.0 when it appears in a test with lower quality conditions. But that same condition might be rated 3.0 when it appears in a test with higher quality conditions. Per-subject corrections are explored in [47]. A method to learn per-test correction factors is given in [45] and this approach may be viewed as the ML version of the linear algebra solution previously given in [51]. Key considerations when using ML to develop NR tools are the total amount of data available, how that data is used to both train and test a model, and the homogeneity or diversity of the training and testing data. The number of talkers, languages, and sources of impairment are also potentially important factors, depending on the application. Because the speech quality measurement community has not yet settled on standardized datasets, some published work is backed by extensive data and rigorous testing while other work could be described as an initial proof-of-concept for some specific architecture or application area, backed by a domain-specific dataset. 
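As a rough sketch of what per-test alignment can look like, the least-squares model below jointly fits per-condition qualities and additive per-test offsets from pooled MOS data. This simplified additive model is illustrative only; the actual formulations in [45] and [51] differ in detail and are not reproduced here.

```python
import numpy as np

def align_tests(scores, cond_idx, test_idx, n_cond, n_test):
    """Jointly estimate per-condition quality q and per-test offset b
    via least squares on the model: score ~ q[cond] + b[test].
    A simplified additive sketch; real formulations may add per-test
    gains or other terms."""
    n_obs = len(scores)
    A = np.zeros((n_obs, n_cond + n_test))
    A[np.arange(n_obs), cond_idx] = 1.0
    A[np.arange(n_obs), n_cond + np.asarray(test_idx)] = 1.0
    # pin the first test's offset to 0 to remove the additive
    # gauge freedom (shifting all q up and all b down equally)
    A = np.vstack([A, np.eye(1, n_cond + n_test, n_cond)])
    y = np.concatenate([scores, [0.0]])
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:n_cond], sol[n_cond:]
```

With consistent data the fit recovers the shared condition qualities exactly; with real MOS data the offsets absorb each test's use of the scale.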
For this reason three separate datasets with three different application areas are considered in this work. ### I-B The WAWEnet Machine Learning Approach The most common ML approach applies ML to a set of features that form a spectral description, often one that uses a perceptual frequency scale (e.g. mel, gammatone, or third-octave bands) and that may use the log of the spectral magnitudes since the log function can very roughly approximate the human perception of loudness. Alternately, ML has been applied to features such as mel-frequency cepstral coefficients, delta MFCCs, perceptual linear prediction coefficients, pitch, voice activity, frame energy, and features used in the legacy NR tools [15, 16]. One distinction of WAWEnets is that they apply ML directly to speech waveforms instead of applying it to features extracted from speech waveforms. Extracting features from waveforms is a data compression step—it reduces the number of inputs per unit time that the ML algorithm must process. Good features identify the most useful information in the speech waveform so that the ML algorithm may more easily and efficiently map that information to the desired target value. But in general, feature extraction is not lossless data compression. The most popular features apply the magnitude function to complex-valued time-to-frequency transform results. This clearly removes information, and that information cannot be restored. Feature extraction makes assumptions about what information is important and what can be discarded. While many feature-based approaches have been successful in this problem space, WAWEnets demonstrate that using waveforms directly is both practical and effective. 
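The information loss caused by the magnitude function is easy to demonstrate: a waveform and its time reversal are different signals, yet for real input their DFT magnitude spectra are identical, so any magnitude-based feature extractor maps both to the same features.

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.standard_normal(256)   # stand-in for one frame of speech
reversed_frame = frame[::-1]       # a genuinely different waveform

mag = np.abs(np.fft.rfft(frame))
mag_rev = np.abs(np.fft.rfft(reversed_frame))

# Time reversal of a real signal conjugates the spectrum (up to a
# linear phase), so the magnitudes coincide while the waveforms differ.
assert not np.allclose(frame, reversed_frame)
assert np.allclose(mag, mag_rev)
```

The discarded phase is exactly the kind of information that remains available to a network operating on the waveform itself.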
Thanks to the convolutional neural network architecture, WAWEnets are not excessively large, yet they can process waveform samples directly, thus gaining access to all the information those samples carry and effectively generating the best features for the task internally, rather than having potentially sub-optimal features mandated externally. Before our initial WAWEnet publication [31] we found just one other NR research effort that considered speech waveforms for input to ML. In [21] ML is applied to a spectrogram representation of audio in order to estimate speech transmission index. But this spectrogram representation is also learned—waveforms are processed with 128 filters, each with length 128, and those filters are “initialized with a Fourier basis (sine waves at different frequencies).” So in effect, ML is actually applied to the audio waveform, but with a strong suggestion that the first ML step should be to calculate a spectral representation. This spectral representation does not use the magnitude function and thus it has the potential to preserve all of the information in the audio waveform. We are aware of three additional waveform-oriented research efforts that have emerged since [31] was published. In [40] ML is applied to $\mu$-law compressed speech in order to estimate speech quality MOS. But the value of $\mu$ is learned (it is initialized to 256 as in G.711 [52]) so in effect, ML is actually applied to the speech waveform, but with the strong suggestion that the first ML step should be compression. No quantization is added, so this approach can preserve all of the information in the speech waveform. In [42] the MOS estimation task is learned in two different ways. Speech waveforms are first processed by either a learnable 1-D convolution layer, or by the “conventional STFT,” and then supplied to the main ML architecture. 
The authors report that compared to the STFT, “the learnable 1-D convolution layer leads to slight improvements for all targets in nearly all criteria.” This result suggests that allowing ML to operate on waveforms is preferable to providing it a spectral representation. And finally, in [53] a WAWEnet-style architecture is applied to waveforms to generate quality information to guide dialog remixing.

In [31] we introduced four WAWEnets. Three of these emulated FR speech quality tools POLQA [10], WB-PESQ [8], and PEMO [7]. The fourth WAWEnet emulated the FR speech intelligibility tool STOI [54]. Since that time we have updated the WAWEnet architecture to be more efficient and effective and have leveraged large amounts of data to train three very effective WAWEnets. Section II describes the new architecture. In Section III we describe how we used 252 hours of speech from 13 languages to train and test individual WAWEnets to very closely emulate seven FR tools (four speech-quality and three speech-intelligibility). For comparison purposes we provide results from FR versions of these WAWEnets as well. Next we present a single WAWEnet that emulates all seven FR tools at once. Section IV introduces 30 hours of subjectively-rated speech and we train another WAWEnet to produce values for four subjective rating dimensions as well as the seven FR quality and intelligibility values. In Section V we train and evaluate a third WAWEnet using 52 hours of speech with MOS scores. This WAWEnet tracks MOS scores quite closely, confirming that the single architecture can be trained to produce very good results for both FR objective targets and subjective targets.

Using ML to develop NR tools for evaluating speech and audio is presently a rich and active field, with many successful results already published.
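For background, the μ-law companding discussed above can be sketched as follows. This is the standard fixed-μ, unquantized compressor; [40] makes μ learnable (initialized near the G.711 value), which this sketch does not attempt to reproduce:

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """Textbook mu-law compressor for x in [-1, 1] (G.711 uses mu = 255).
    Monotonic and invertible, so no information is lost absent quantization."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of mu_law_compress."""
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

# Round trip recovers the waveform exactly (up to floating-point error)
x = np.linspace(-1.0, 1.0, 101)
assert np.allclose(mu_law_expand(mu_law_compress(x)), x)
```

Because the mapping is invertible, applying it before ML reshapes the waveform's amplitude distribution without discarding information, consistent with the observation that [40] "can preserve all of the information in the speech waveform."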
Our work is novel because we adopt a relatively simple convolutional architecture, we apply it directly to speech waveforms, we successfully train to closely emulate eleven different targets, and we use an unprecedented quantity and diversity of speech data. Finally, Section VII uses the language of signal processing to explain how WAWEnets process speech signals to produce evaluations of those signals. To our knowledge, interpretations of this sort have never before been published.

Figure 1: Diagram depicting the overall shape of our WAWEnet model. Each rectangle represents one section and the enclosed text describes the shape of the section’s input vector. Starting with 48,000 speech samples, the model coalesces information in the time domain until a 96-dimension feature vector remains.

## II WAWEnet Architecture

Similar to the model we described in [55], WAWEnets accept 3 seconds of speech sampled at 16,000 smp/s as input. The architecture is composed of sections that use one-dimensional, 96 channel convolutional layers to filter the signal and pooling layers which effectively downsample the signal. This model differs from the model described in [55] in a few key ways. We added additional convolutional sections that downsample all the way to one “sample” per channel for each of the 96 channels, thus yielding a 96-dimension feature vector. A dense layer then maps the feature vector to an estimate. This model therefore only uses convolutional layers and downsampling to coalesce temporal information, whereas the previous model also used a dense layer to coalesce information across a temporal and a feature dimension. All PReLU activations have been replaced with ReLU, and average pooling is used throughout. The first section downsamples the input signal by 4, thereby significantly reducing memory and computation requirements. In this work we allow the dense layer in $S_{14}$ to generate $N_{T}\geq 1$ estimates. This model is visualized in Fig.
1 and is fully described by Tables I and II. With $N_{T}=1$, the new formulation has a total of 335,905 parameters, a roughly 50% increase over the 225,025 parameters of our previous model. For brevity, we refer to specific WAWEnet implementations using a subscript that can indicate the type and number ($N_{T}$) of estimates produced. For example, $\text{WAWEnet}_{11}$ is a WAWEnet with one input channel and 11 estimates. In addition, $\text{WAWEnet}_{\text{S}1}$ estimates one subjective target and $\text{WAWEnet}_{\text{O}1}$ estimates one objective FR target. We also made a separate WAWEnet configuration that allows $S_{1}$ to accept two 48,000 sample vectors. We used this configuration to create an FR WAWEnet that uses reference and impaired speech ($\text{WAWEnet}_{1}^{\text{FR}}$) and, for comparison purposes, a WAWEnet that uses two identical copies of the impaired speech ($\text{WAWEnet}_{1}^{\text{2}}$). TABLE I: WAWEnet architecture: sections $S_{1}$–$S_{13}$ are composed of one of the two section types listed in Table II. Number of input and output samples per channel are given by $l_{in}$ and $l_{out}$, effective sample rate by $\hat{f_{s}}$, and effective sample spacing by $s_{l}$. The dense layer $S_{14}$ maps $f_{n}=96$ scalar outputs from $S_{13}$ to the $N_{T}$ estimates. 
$S$ | section type | $\hat{f_{s}}$ (Hz) | $l_{in}$ | $s_{l}$ (ms) | $l_{out}$
---|---|---|---|---|---
$S_{1}$ | Conv A-4 | 16,000 | 48,000 | 0.0625 | 12,000
$S_{2}$ | Conv A-2 | 4,000 | 12,000 | 0.25 | 6,000
$S_{3}$ | Conv A-2 | 2,000 | 6,000 | 0.5 | 3,000
$S_{4}$ | Conv A-4 | 1,000 | 3,000 | 1 | 750
$S_{5}$ | Conv A-2 | 250 | 750 | 4 | 375
$S_{6}$ | P Conv A-2 | 125 | 376 | 8 | 188
$S_{7}$ | Conv A-2 | 62.50 | 188 | 16 | 94
$S_{8}$ | Conv A-2 | 31.25 | 94 | 32 | 47
$S_{9}$ | P Conv A-2 | 15.63 | 48 | 64 | 24
$S_{10}$ | Conv A-2 | 7.81 | 24 | 128 | 12
$S_{11}$ | Conv A-2 | 3.91 | 12 | 256 | 6
$S_{12}$ | Conv A-2 | 1.95 | 6 | 512 | 3
$S_{13}$ | Conv A-3 | 0.98 | 3 | 1024 | 1
$S_{14}$ | Dense | — | $f_{n}$ | — | $N_{T}$

TABLE II: WAWEnet section types. Each section contains a 1-D convolution layer C-$f_{n}$-$f_{l}$ with $f_{n}=96$ filters per channel and $f_{n}$ channels, filter length $f_{l}=3$, stride of 1, and zero padding $\lfloor\frac{f_{l}}{2}\rfloor=1$. Padding layer $\text{Pad}(a,b)$ prepends $a$ zeros and appends $b$ zeros to the input vector. ReLU indicates a ReLU activation. $k$ denotes average pooling layer kernel size.

name | Conv A-$k$ | P Conv A-$k$
---|---|---
 | | $\text{Pad}(0,1)$
 | C-$f_{n}$-$f_{l}$ | C-$f_{n}$-$f_{l}$
layers | BatchNorm | BatchNorm
 | ReLU | ReLU
 | AvgPool-$k$ | AvgPool-$k$

## III WAWEnet for Seven Objective FR Targets

### III-A Data

The Institute for Telecommunication Sciences (ITS) dataset is formed from high-quality WB speech recordings. We carefully selected these from an array of sources including [56, 57, 58, 59, 60, 61, 62, 63, 64, 65] and also from recordings made in our lab. We extracted three-second “reference segments” from these recordings such that each reference segment has a speech activity factor (SAF) of 50% or greater (determined by the P.56 Speech Voltmeter found in [66]) and any segment has at most 50% (1.5 sec) content in common with any other segment.
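The segment-selection rules just described (three-second segments, SAF of 50% or greater, at most 50% content in common) could be sketched roughly as follows. The `speech_activity_factor` helper is a simplified hypothetical stand-in for the P.56 Speech Voltmeter that the paper actually uses, and the threshold is illustrative:

```python
import numpy as np

def speech_activity_factor(segment, thresh=0.01):
    """Hypothetical stand-in for the P.56 Speech Voltmeter: fraction of
    10 ms frames whose RMS exceeds a fixed threshold."""
    n = len(segment) // 160 * 160  # whole 10 ms frames at 16,000 smp/s
    frames = segment[:n].reshape(-1, 160)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return np.mean(rms > thresh)

def extract_reference_segments(x, fs=16000, seg_s=3.0, min_saf=0.5):
    """Slide by half a segment length (so any two kept segments share at
    most 50% content) and keep candidates with SAF of at least min_saf."""
    seg = int(seg_s * fs)
    hop = seg // 2  # enforces the 50% maximum-overlap rule
    out = []
    for start in range(0, len(x) - seg + 1, hop):
        cand = x[start:start + seg]
        if speech_activity_factor(cand) >= min_saf:
            out.append(cand)
    return out
```

A production implementation would use the G.191 software tools for the activity measurement; this sketch only illustrates the selection logic.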
We then normalized each reference segment to have an active speech level of $26\pm 0.2$ dB below the overload point, again using the P.56 Speech Voltmeter. This process provided 84 hours of speech in the form of 100,681 reference segments, representing 13 languages and 1230 different talkers. Spanish, Mandarin, and North American English each account for 29% of the segments. Korean contributes 6% of the segments, African-accented French 3%, Japanese 2%, while Italian, French, German, Portuguese, Hindi, British English, Finnish, Dutch, and Polish combine to contribute the remaining 2%. We strategically split the reference segments into training, testing, and validation groups, as well as an unseen group. We generated the unseen group by reserving 10% of the talkers in each language, to the extent possible. The resulting 127 talkers associated with the unseen group do not contribute any segments to the remaining data and are only used for evaluation. The unseen dataset contains 10,391 segments (9 hours) of speech. We split the remaining segments into training (50%), testing (40%), and validation groups (10%) with approximate sizes of 38, 30, and 7 hours respectively. We processed each reference segment with software to simulate the impairments found in a wide range of current telecommunications environments. These impairments cover three classes: background noise and suppression, speech codecs, and packet loss and concealment. Background noises include coffee shop, party, and street noise at SNRs between 5 and 25 dB. Noise suppression ranged from mild (minimal artifacts but significant noise) to very aggressive (significant artifacts and distortion). We applied 6 WB codec algorithms: EVS, AMR-WB, Opus, G.711.1, G.722.1, and G.722. We selected bit-rates ranging from 8 to 64 kbps for a total of 49 WB codec modes. We used 13 different NB codecs including EVS, AMR, Opus, G.711, G.729, G.723.1, G.726, MELP and others. 
Bit-rates range from 1.2 to 64 kbps for a total of 40 NB codec modes. We applied both independent and bursty packet losses at rates ranging from 5 to 40% followed by packet loss concealment (PLC). Finally, we normalized each impaired segment to have an active speech level of $26\pm 0.2$ dB below the overload point. Each reference segment was impaired in three different ways: a randomly selected NB noise or codec impairment, a randomly selected WB noise or codec impairment, and a randomly selected NB or WB impairment that combined a codec with noise or PLC or both. This produced roughly 302,000 segments of impaired speech. This is 252 hours (114 training, 90 testing, 21 validation, and 27 unseen). A high-level summary of the impairment distribution is given in Table III. In each row 50% of the speech is NB and 50% is WB. A total of 321 distinct conditions are present in the ITS dataset.

TABLE III: Distribution of impairments in ITS dataset.

Hours | Noise & Suppression | Speech Codecs | Packet Loss & Concealment
---|---|---|---
84 | ✓ | |
84 | | ✓ |
28 | ✓ | ✓ |
28 | | ✓ | ✓
28 | ✓ | ✓ | ✓

Each impaired segment in the dataset was then labeled with values from seven established FR estimators: WB-PESQ [8], POLQA [10], ViSQOL (compliance version c310) [12], and PEMO (software available via [67]) estimate the quality of the impaired segment, while STOI [54], ESTOI [9], and SIIBGauss [68] give estimates of the speech intelligibility. Each of these seven FR tools compared every impaired segment with the corresponding WB reference segment. We then trained WAWEnets to estimate these FR values using only the impaired segments.

### III-B Training

To prepare each segment for the training process, we converted the speech data to float and normalized it to the range [-1, 1].
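As an illustration of that last preprocessing step, and assuming the source audio is 16-bit PCM (the paper does not state the source sample format), the conversion might look like:

```python
import numpy as np

def pcm16_to_float(x_int16):
    """Convert 16-bit PCM samples to float32 in [-1, 1].
    Assumes 16-bit integer input; other source formats would need
    a different scale factor."""
    return x_int16.astype(np.float32) / 32768.0
```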
We used affine transformations to map values from PESQ ([1.02, 4.64]), POLQA ([1, 4.75]), PEMO ([0, 1]), ViSQOL ([1, 5]), STOI ([0.45, 1]), ESTOI ([0.23, 1]), and SIIBGauss ([0, 750]) to [-1, 1] before use as targets. As in [29], we performed inverse phase augmentation (IPA) to augment all datasets in order to train WAWEnet to learn invariance to waveform phase inversion. This augmentation increased the amount of data available to just over 500 hours of total speech data.

When training our model from scratch on the ITS dataset we used one set of initial weights for each training process. This set of weights was initialized using the Kaiming-Normal initialization method [69]. In the cases where $N_{T}>1$, the weights in the last layer were duplicated $N_{T}$ times, resulting in a shape of $N_{T}\times 96$. We seeded all random number generators such that batch order and batch contents were consistent for every training run. WAWEnets were trained using mini-batches that were as large as GPU memory would allow; in this case, 60 segments per batch. We used the Adam optimizer [70] with $10^{-4}$ learning rate, and $L_{2}$ regularization parameter set to $10^{-5}$. When the network had trained for an entire epoch we evaluated the validation set and logged the epoch RMSE (root mean-squared error) loss $E_{l}$ and per-segment correlation between the target and the WAWEnet output, $\rho_{seg}$. In the case that $E_{l}$ on the validation set had not decreased by at least $10^{-4}$ for 5 epochs, we multiplied the learning rate by $10^{-1}$. The network was trained for 30 epochs. Training $\text{WAWEnet}_{\text{O}1}$ on one NVIDIA GTX 1070 took about 14 hours.¹

¹ Certain products are mentioned in this paper to describe the experiment design. The mention of such entities should not be construed as any endorsement, approval, recommendation, prediction of success, or that they are in any way superior to or more noteworthy than similar entities not mentioned.
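The affine target mapping and the inverse phase augmentation described above can be sketched as follows; the target ranges are those quoted in the text, and the function names are ours:

```python
import numpy as np

# Native ranges for each FR target, as quoted in the text
TARGET_RANGES = {
    "WB-PESQ": (1.02, 4.64), "POLQA": (1.0, 4.75), "PEMO": (0.0, 1.0),
    "ViSQOL": (1.0, 5.0), "STOI": (0.45, 1.0), "ESTOI": (0.23, 1.0),
    "SIIBGauss": (0.0, 750.0),
}

def to_training_range(value, target):
    """Affine map from a target's native range [lo, hi] to [-1, 1]."""
    lo, hi = TARGET_RANGES[target]
    return 2.0 * (value - lo) / (hi - lo) - 1.0

def inverse_phase_augment(x):
    """Inverse phase augmentation (IPA): the phase-inverted waveform is an
    additional training example carrying the same target value."""
    return -x
```

The same affine form, with a different range, also covers the subjective-score mappings used later in the paper.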
WAWEnets are NR tools, but for completeness of the research effort, we also created some FR versions. An NR WAWEnet processes 48,000 samples ($3\text{s}\times 16,000$ smp/s). The FR version ($\text{WAWEnet}_{1}^{\text{FR}}$) processes reference speech and the corresponding impaired speech. Thus the first convolutional layer of $\text{WAWEnet}_{1}^{\text{FR}}$ has more parameters (input size $2\ \text{channels}\times 48,000\ \text{samples}\times 96\ \text{filters}$) compared to the NR WAWEnet (input size $1\ \text{channel}\times 48,000\ \text{samples}\times 96\ \text{filters}$). During initialization, all weights in $S_{1}$ were duplicated, resulting in a shape of $2\times 48,000\times 96$. In order to make a fair comparison, we also created a dual input NR version called $\text{WAWEnet}_{1}^{\text{2}}$. $\text{WAWEnet}_{1}^{\text{2}}$ and $\text{WAWEnet}_{1}^{\text{FR}}$ have identical architecture and number of parameters. $\text{WAWEnet}_{1}^{\text{2}}$ processes two identical copies of the impaired speech. Results for all three versions are presented and compared below.

### III-C Individual network for each of seven FR targets

We trained individual WAWEnets ($\text{WAWEnet}_{\text{O}1}$) for each of the seven FR targets. Table IV gives the resulting per-segment Pearson correlations and Table V shows the corresponding per-segment RMS errors. To allow for direct comparisons, these errors are normalized and shown as a percentage of the full scale for each target. The network outputs are highly correlated to the FR target values across a vast amount of data that spans a wide range of impairment types, talkers and languages, in spite of the fact that the networks have no access to the reference speech for comparison purposes. In effect, the networks embody very effective generalized models for speech quality (or intelligibility).
These generalized models are invariant to speech content, talker, and language and this allows them to operate without comparison to a reference speech signal. The “Dual-NR” column of Table IV shows that the additional weights in $\text{WAWEnet}_{1}^{\text{2}}$’s $S_{1}$ have minimal effect on correlation. And the “FR” column ($\text{WAWEnet}_{1}^{\text{FR}}$) shows that access to reference speech (and necessarily increasing the network size) does produce some additional benefit, as expected. Table V tells the same story in the RMSE domain. The networks produce impressive estimation error values that range from 5 to 9% of full scale. These values barely change when network size is increased, but are further reduced when reference speech is provided.

TABLE IV: Pearson correlations between predictions from three WAWEnets with $N_{T}=1$ and seven objective FR targets, unseen portion of ITS dataset, individual network for each target, correlations calculated per-segment.

Target | NR $\text{WAWEnet}_{\text{O}1}$ | Dual-NR $\text{WAWEnet}_{1}^{\text{2}}$ | FR $\text{WAWEnet}_{1}^{\text{FR}}$
---|---|---|---
WB-PESQ | 0.94 | 0.95 | 0.98
POLQA | 0.93 | 0.93 | 0.97
ViSQOL | 0.95 | 0.95 | 0.98
PEMO | 0.95 | 0.95 | 0.97
STOI | 0.91 | 0.90 | 0.98
ESTOI | 0.95 | 0.95 | 0.99
SIIBGauss | 0.96 | 0.97 | 0.99

TABLE V: Normalized RMSE between predictions from three WAWEnets with $N_{T}=1$ and seven objective FR targets, unseen portion of ITS dataset, individual network for each target, errors calculated per-segment, values are percent of full scale.
Target | NR $\text{WAWEnet}_{\text{O}1}$ | Dual-NR $\text{WAWEnet}_{1}^{\text{2}}$ | FR $\text{WAWEnet}_{1}^{\text{FR}}$
---|---|---|---
WB-PESQ | 9.2 | 9.1 | 4.9
POLQA | 9.4 | 9.4 | 6.6
ViSQOL | 7.2 | 7.0 | 4.9
PEMO | 6.1 | 6.2 | 5.1
STOI | 5.1 | 5.2 | 2.4
ESTOI | 6.1 | 6.1 | 3.2
SIIBGauss | 4.9 | 4.6 | 2.2

### III-D Single network for all seven FR targets

The outstanding success of individual WAWEnets for each target suggests that the WAWEnet architecture has plenty of capacity for this class of problems. This then suggests the possibility of a single network that maps speech signals to points in a single latent space such that those points can then be mapped to all four quality and all three intelligibility targets. Speech quality indicates how pleasing a speech signal is to the ear and speech intelligibility measures the information transferred by the speech. These are certainly different quantities, but they are also related, and this bodes well for the possibility of a single network. We trained a single WAWEnet to produce four, five, six, and all seven target values and found that this co-training is both possible and beneficial. Seven networks are replaced with a single network of nearly identical size without compromise in performance. In fact, co-training with seven targets appears to regularize the problem and results in improved performance. Table VI provides correlation and RMSE values for the single network called $\text{WAWEnet}_{7}$. The single network correlation values are better than those for the individual networks (see Table IV) in four cases and they are matched in the other three cases. The single network RMSE values are better than those for the individual networks (see Table V) in six cases and they are matched in the seventh case. Fig. 2 contains two-dimensional histograms that show the joint distribution of $\text{WAWEnet}_{7}$ per-segment estimates and actual target values for four of the seven targets.
TABLE VI: Pearson correlation, RMSE, and normalized RMSE between seven estimates from a single network ($\text{WAWEnet}_{7}$) and seven objective FR targets, unseen portion of ITS dataset, correlation and error calculated per-segment, error shown as actual values and percent of full scale.

 | Correlation | RMSE | RMSE (%)
---|---|---|---
WB-PESQ | 0.95 | 0.31 | 7.8
POLQA | 0.94 | 0.33 | 8.3
ViSQOL | 0.95 | 0.28 | 6.9
PEMO | 0.96 | 0.06 | 6.1
STOI | 0.92 | 0.05 | 4.7
ESTOI | 0.95 | 0.06 | 5.8
SIIBGauss | 0.96 | 35.1 | 4.7

Figure 2: Two-dimensional histograms showing joint distributions of $\text{WAWEnet}_{7}$ per-segment estimates and actual target values with per-segment Pearson correlation and unnormalized RMSE for four of the seven targets on the unseen portion of the ITS dataset: (a) WB-PESQ, (b) POLQA, (c) ViSQOL, (d) STOI. Number of segments per bin is given by the scale at the right.

## IV WAWEnet for Seven Objective FR Targets and Four Subjective Targets

Having successfully developed a single WAWEnet that emulates seven FR tools, we asked if that network might be further trained to also emulate subjective scores.

### IV-A Data

We added to our collection the dataset described in [50] and generously provided by the Quality and Usability Lab of the Technische Universität Berlin. We designate this the TUB dataset. It contains a variety of speech sources, simulated impairments (added background noise, selected codecs, packet loss, bandpass filtering, and clipping), and live impairments (background noise, landline-to-mobile calls, and VoIP calls). Additional details are given in [50]. We successfully computed seven FR target values for 14,220 of the TUB speech files.²

² The “TEST_LIVETALK” database had no reference files so FR targets could not be calculated. FR estimators occasionally fail to produce valid results so this reduces the usable number of files as well.
Each of these files also has crowd-sourced subjective ratings of overall speech quality, noisiness, coloration, discontinuity, and loudness. We used the subjective ratings “overall speech quality,” “noisiness,” “coloration,” and “discontinuity” (labeled as MOS, NOI, COL, and DIS, respectively) as targets for WAWEnet training. The “loudness” rating is not a practical target for WAWEnets because WAWEnets use normalized input speech levels. This removes variation in overall signal level which is a dominating factor behind the “loudness” ratings. This normalization could be removed if training for “loudness” is desired.

We divided the dataset in two ways. The first was according to the labeling that was provided with it. We used the 10,903 files (77%) from “TRAIN_LIVE” and “TRAIN_SIM” for training, the 642 (4%) files from “TEST_FOR,” “TEST_NSC,” and “TEST_P501” for testing, and the 2,675 files (19%) from “VAL_LIVE” and “VAL_SIM” for validation. Note that “TEST_NSC” contains German-language speech and the remainder of the dataset is English-language speech. The second way we divided this dataset was through random sampling: 50% (7,110 files) were used for training, 40% (5,688 files) for testing, and 10% (1,422 files) for validation. The results presented below are based on the testing portion in both cases.

File lengths range from 4.5 s to 14.6 s with a mean of 8.8 s and a median of 9.0 s. WAWEnets work on three-second segments where the SAF is at least 50 percent. For each file we find all such distinct segments in the file: 98% of the files produce at least two segments, 48% produce at least three, and 10% produce at least four. The result is approximately 28,200 training segments, 6,800 validation segments, and 1,400 testing segments, for a total of 30 hours of speech. Using G.191 tools [66], we converted the data from 48,000 smp/s to 16,000 smp/s and normalized each segment to $26\pm 0.2$ dB below the overload point. Each subjective rating is based on an entire file.
For training we replicate that file rating to create an identical target for each segment extracted from that file. If the file shows little temporal variation in the rated attribute, then this target replication incurs only a minor approximation. But if there is major variation (e.g. localized packet loss or non-stationary background noise) then replication of targets can be a significant approximation and source of error. For testing, the correlations and RMSE values compare each per-segment WAWEnet output with the subjective rating of the corresponding file. Signal bandwidths create an additional approximation in this work. The subjective ratings in the TUB dataset are based on FB speech signals but WAWEnets are WB and only analyze the lower 8 kHz of these signals.

Figure 3: Two-dimensional histograms showing joint distributions of $\text{WAWEnet}_{11}$ per-segment estimates and actual target values with per-segment Pearson correlation and unnormalized RMSE for four of the eleven targets on the test portion of the TUB dataset when using the 50/40/10 split: (a) MOS, (b) DIS, (c) WB-PESQ, (d) STOI. Number of segments per bin is given by the scale at the right.

### IV-B Training and Results

Starting with the weights from $\text{WAWEnet}_{7}$ we allowed the optimizer to update the weights in each section. This strategy improved overall performance compared to the random initialization strategy used in Section III-B. We used affine transformations to map all subjective scores from [1, 5] to [-1, 1]. Apart from these two changes, we followed the training process described in Section III-B. We trained a WAWEnet to emulate the four subjective ratings: $\text{WAWEnet}_{\text{S}4}$. The per-segment correlation and normalized RMSE values are shown in Table VII. We also trained WAWEnets to emulate the four subjective ratings and three, four, five, six, or all seven of the FR values.
Table VII also shows the result for a single WAWEnet that emulates four subjective and seven objective FR targets: $\text{WAWEnet}_{11}$. Fig. 3 contains two-dimensional histograms that show the joint distribution of $\text{WAWEnet}_{11}$ per-segment estimates and actual target values for four of the eleven targets using the 50/40/10 split. Table VII makes clear that co-training with the objective FR targets improves correlations and RMS errors for the four subjective targets. It appears that the extra constraints regularize the problem and lead to a better solution. Considering the test data prescribed by the TUB dataset, Table VII shows dramatic correlation drops and RMS error increases for the FR targets. But the correlation drop is smaller when evaluated on the random split. Table VI and Table VII are based on different and dissimilar datasets so the comparison is not exact.

Estimators are often judged by per-condition statistics. Target values are averaged for all results from each condition (e.g., codec mode or noise environment) and the same is done for the estimates. These per-condition averages are then compared by correlation or error statistics. This averaging reflects a common and relevant estimation situation: it removes variation due to talkers, utterances, and other factors so we can draw clear conclusions about the systems we are testing. Table VIII provides per-condition results analogous to those in Table VII. Here we see that $\text{WAWEnet}_{11}$ gives per-condition correlations of 0.85 or better for four subjective and three objective FR targets.

TABLE VII: Per-segment Pearson correlation ($\rho_{seg}$) and normalized RMSE between single WAWEnet trained for 4 ($\text{WAWEnet}_{\text{S}4}$) or 11 ($\text{WAWEnet}_{11}$) targets. Testing portion of TUB dataset. Error values are percent of full scale.
 | 77/4/19 Split | | | | 50/40/10 Split | | |
 | $\rho_{seg}$ | | RMSE (%) | | $\rho_{seg}$ | | RMSE (%) |
$N_{T}$ | 4 | 11 | 4 | 11 | 4 | 11 | 4 | 11
---|---|---|---|---|---|---|---|---
MOS | 0.80 | 0.82 | 16 | 15 | 0.84 | 0.85 | 15 | 14
NOI | 0.80 | 0.82 | 14 | 13 | 0.80 | 0.82 | 14 | 14
COL | 0.79 | 0.81 | 13 | 12 | 0.78 | 0.80 | 15 | 14
DIS | 0.75 | 0.78 | 17 | 16 | 0.71 | 0.73 | 17 | 16
WB-PESQ | - | 0.79 | - | 19 | - | 0.89 | - | 15
POLQA | - | 0.80 | - | 17 | - | 0.88 | - | 14
ViSQOL3 | - | 0.83 | - | 14 | - | 0.91 | - | 12
PEMO | - | 0.58 | - | 17 | - | 0.85 | - | 8
STOI | - | 0.52 | - | 16 | - | 0.85 | - | 8
ESTOI | - | 0.58 | - | 22 | - | 0.88 | - | 10
SIIBGauss | - | 0.75 | - | 16 | - | 0.82 | - | 19

TABLE VIII: Per-condition Pearson correlation ($\rho$) and normalized RMSE between single WAWEnet trained for 4 ($\text{WAWEnet}_{\text{S}4}$) or 11 ($\text{WAWEnet}_{11}$) targets. Testing portion of TUB dataset. Error values are percent of full scale.

 | 77/4/19 Split | | | | 50/40/10 Split | | |
 | $\rho$ | | RMSE (%) | | $\rho$ | | RMSE (%) |
$N_{T}$ | 4 | 11 | 4 | 11 | 4 | 11 | 4 | 11
---|---|---|---|---|---|---|---|---
MOS | 0.91 | 0.91 | 12 | 12 | 0.96 | 0.96 | 9 | 9
NOI | 0.91 | 0.91 | 10 | 9 | 0.97 | 0.97 | 9 | 9
COL | 0.91 | 0.91 | 9 | 9 | 0.97 | 0.97 | 7 | 8
DIS | 0.90 | 0.90 | 12 | 12 | 0.97 | 0.96 | 10 | 10
WB-PESQ | - | 0.85 | - | 15 | - | 0.94 | - | 10
POLQA | - | 0.91 | - | 10 | - | 0.96 | - | 10
ViSQOL3 | - | 0.89 | - | 12 | - | 0.97 | - | 8
PEMO | - | 0.66 | - | 18 | - | 0.93 | - | 13
STOI | - | 0.66 | - | 19 | - | 0.95 | - | 11
ESTOI | - | 0.60 | - | 25 | - | 0.91 | - | 15
SIIBGauss | - | 0.80 | - | 13 | - | 0.88 | - | 10

## V WAWEnet for Subjective Scores Only

We also trained a WAWEnet to closely emulate just subjective speech quality scores.

### V-A Data

We are very grateful that the Audio, Speech, and Information Retrieval Group at Indiana University Bloomington provided us with the dataset described in [71]. We call this the IUB dataset; it is better suited to WAWEnets than the TUB dataset.
The IUB dataset is WB, so WAWEnets need not approximate FB subjective scores from WB signals as was the case with the TUB dataset. In addition, the IUB file lengths range from 2.0 to 7.8 s with a mean of 3.8 s and a median of 3.7 s, thus providing a much closer match to the WAWEnet three-second window than was possible with the TUB data. The IUB dataset includes high-quality speech from close-talking microphones and lower quality speech from more distant microphones. The distant microphones necessarily capture more natural and artificial environmental noise (SNRs reported in the range -10 to +11 dB) and natural reverberation (speech-to-reverberation ratios reported in the range -5 to +4 dB) [71]. In addition, some recordings were subjected to 3.4 kHz low-pass filtering in order to create anchor conditions for subjective testing. Subjective testing was crowd-sourced. Scores were collected, filtered, and normalized [71] to produce speech quality scores on a scale of zero to ten. The result is 36,000 speech files, each with a scaled subjective speech quality MOS value.

The IUB dataset contains 35,428 files that are six seconds or shorter. From each of these files we selected all disjoint three-second segments with SAF at or above 50%. Any file shorter than three seconds was zero-padded to create a single three-second segment. By this process 27,004 files produced two segments each and the remaining 8,424 files produced one segment each. This gives over 62,000 segments or 52 hours of speech data. Each segment was assigned the scaled MOS value of the file that it came from. Using only files six seconds or shorter means that a three-second segment contains at least half of the file. This minimizes error associated with assigning per-file MOS values to individual segments when speech quality is not constant throughout the file. Using G.191 tools [66], we normalized each file to $26\pm 0.2$ dB below the overload point.
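The active-level normalization used throughout this work (an active speech level of 26 dB below the overload point) can be approximated as follows. The frame-based activity detector here is a crude stand-in for the P.56 Speech Voltmeter in the G.191 tools, and the activity threshold is a hypothetical value:

```python
import numpy as np

def normalize_active_level(x, target_dbov=-26.0, thresh_db=-40.0):
    """Scale waveform x (float, overload point = 1.0) so that its active
    speech level is target_dbov dB relative to overload.
    Crude stand-in for the P.56 Speech Voltmeter: 'active' frames are
    10 ms frames whose RMS exceeds thresh_db relative to overload."""
    n = len(x) // 160 * 160  # whole 10 ms frames at 16,000 smp/s
    frames = x[:n].reshape(-1, 160)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    active = rms > 10 ** (thresh_db / 20)
    if not active.any():
        return x  # no detected activity; leave the signal unchanged
    level_db = 20 * np.log10(np.sqrt(np.mean(frames[active] ** 2)))
    return x * 10 ** ((target_dbov - level_db) / 20)
```

A real pipeline would run the actual P.56 algorithm and iterate if the gain change alters the set of active frames; this sketch only shows the gain computation.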
### V-B Training and Results

We used affine transformations to map all subjective scores from [0, 10] to [-1, 1]. Starting with the weights from $\text{WAWEnet}_{7}$ we allowed the optimizer to update the weights in each section. However, with this dataset equivalent results are achieved using the initialization method described in Section III-B. We randomly selected 50% of the segments for training, 10% for validation and 40% for testing. The rest of the training process was similar to the process described in Section III-B except we allowed training for 60 epochs in this case. The results that follow are based on the approximately 25,000 testing segments, which comprise about 21 hours of speech.

$\text{WAWEnet}_{\text{S}1}$ achieves a per-segment correlation to MOS of 0.96 and normalized RMSE of 6% of full scale. Fig. 4 shows this correlation graphically. Six other recently proposed NR tools are applied to the IUB dataset in [42] and the resulting Pearson correlations to MOS range from 0.93 to 0.96. The $\text{WAWEnet}_{\text{S}1}$ MOS correlation of 0.965 matches the best of these. Mean absolute errors (MAEs) given in [42] range from 0.40 to 0.50 and the $\text{WAWEnet}_{\text{S}1}$ MAE is 0.41, which places $\text{WAWEnet}_{\text{S}1}$ near the best previous result. Note that this is not a complete comparison because the best tools in [42] produce MOS and three additional estimates, while $\text{WAWEnet}_{\text{S}1}$ produces just MOS. But there is no question that this WAWEnet architecture performs on par with these other top performers on this dataset.

Figure 4: Two-dimensional histogram showing joint distribution of $\text{WAWEnet}_{\text{S}1}$ per-segment estimates and actual target values with per-segment Pearson correlation and unnormalized RMSE for scaled MOS on the test portion of the IUB dataset. Number of segments per bin is given by the scale at the right.
## VI Discussion and Extensions

We have shown that the NR WAWEnet architecture can evaluate wideband speech signals in a manner consistent with a variety of objective and subjective evaluations without using any reference speech signal. $\text{WAWEnet}_{7}$ can simultaneously and very closely emulate seven diverse objective targets. $\text{WAWEnet}_{\text{S}1}$ closely emulates MOS values. And simultaneous emulation of seven objective targets and four subjective targets with $\text{WAWEnet}_{11}$ appears to improve performance on subjective targets compared to emulating solely subjective targets. The weights present in $\text{WAWEnet}_{7}$ appear to be relevant to both the TUB and the IUB datasets. Training on the TUB dataset with the 50/40/10 split with a random initialization and 11 targets resulted in MOS, NOI, COL, and DIS per-segment correlations of 0.80, 0.80, 0.75, and 0.64 respectively, all lower than the correlations achieved when starting from $\text{WAWEnet}_{7}$. When training on the IUB dataset, the weights present in $\text{WAWEnet}_{7}$ do not improve estimation performance, but the training process converged to the best performance roughly 5 epochs sooner. With both the TUB and the IUB datasets, solely allowing the weights in $S_{14}$ of $\text{WAWEnet}_{7}$ to be trained resulted in worse performance than allowing the weights in all sections to be trained. It is possible to extend the WAWEnet architecture to wider bandwidths (higher sample rates) by adding one or more sections to the input of the network. For example, adding a section with $k=3$ would allow WAWEnets to process 3 seconds of speech with a sample rate of 48 kHz. Likewise, there are several strategies suitable for extending WAWEnets to process longer signals. One strategy would be to insert a section with $k=2$ between $S_{13}$ and $S_{14}$. This would allow WAWEnets to generate estimates for six-second signals.
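A quick check of the sample-count arithmetic behind these two extensions (assuming the original 3 s, 16 kHz input window):

```python
BASE_RATE, WINDOW_S = 16_000, 3        # original WAWEnet input: 3 s at 16 kHz
base_samples = BASE_RATE * WINDOW_S    # 48,000 samples

# Wider bandwidth: a new first section with k=3 sub-samples 3 s of
# 48 kHz audio down to the original 48,000-sample input length.
assert (48_000 * WINDOW_S) // 3 == base_samples

# Longer signals: a k=2 section between S13 and S14 lets a 6 s,
# 16 kHz signal undergo the same total length reduction as a 3 s
# signal did in the original network.
assert (BASE_RATE * 6) // 2 == base_samples
```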
Another strategy would be to use either the 96-D feature vector or the target estimates as an input to a recurrent neural network of some kind, e.g., a long short-term memory (LSTM) or gated recurrent unit (GRU) network, thus allowing WAWEnets to process arbitrary-length speech. With sufficient data, either strategy might learn to properly account for the various principles at work when speech quality varies. These principles include (see [72]): long-term ratings are lower-bounded by minima and upper-bounded by averages, larger and more frequent quality variations reduce quality, and more recent quality dominates ratings (recency). Alternatively, longer signals can be accommodated by a sliding three-second WAWEnet processing window, followed by averaging or more sophisticated processing of the multiple results produced.

## VII Signal Processing Interpretation

We have established that the WAWEnet architecture offers an efficient and effective tool for evaluating wideband speech waveforms. It does this by mapping a wideband speech waveform to a 96-dimensional vector in a latent space. Different projections in that space produce scalar values that can track different objective or subjective values related to speech quality or intelligibility. We also seek to understand, to the extent possible, _how_ the WAWEnet architecture maps waveforms to this space. As is often the case with algorithms developed through machine learning, a fully satisfying interpretation is elusive. But we can describe the signal processing that a WAWEnet applies and give a high-level description of how this signal processing converts a waveform into an evaluation of that waveform.

### VII-A Functions

To map waveforms to the latent space, a WAWEnet uses 13 sections and each section consists of four layers. In the language of ML, these four layers are convolution, batch normalization, ReLU, and average pooling. Table IX shows how the four layers map to six signal-processing (SP) functions.
Linear time-invariant (LTI) systems are often relatively amenable to analysis. Table IX shows that WAWEnets are linear except for the bias and half-wave rectification (HWR). Of course these non-linear functions are what enable WAWEnets (and many other ML-based algorithms) to accomplish the assigned tasks and also prevent the network from simplifying into a trivial and ineffective one. Time-invariance is satisfied by all functions except the sub-sampling (which shows time-invariance only for time shifts that are integral multiples of the output sampling period).

TABLE IX: WAWEnet ML layers expanded into signal processing functions

ML Layer | SP Function | Equation | Linear | Time-Invar.
---|---|---|---|---
Conv. | FIR Filtering | $y_{k}=\sum\limits_{i=0}^{2}h_{i}x_{k-i}$ | ✓ | ✓
Batch Norm. | Gain | $y_{k}=ax_{k}$ | ✓ | ✓
 | Bias | $y_{k}=x_{k}+b$ | | ✓
ReLU | Half-Wave Rect. | $y_{k}=\max(x_{k},0)$ | | ✓
Avg. Pool | FIR Filtering | $y_{k}=\frac{1}{m}\sum\limits_{i=0}^{m-1}x_{k-i}$ | ✓ | ✓
 | Sub-sampling | $y_{k}=x_{mk}$ | ✓ |

$S_{1}$ has one input channel and 96 output channels. $S_{1}$ begins by splitting the input audio signal into 96 identical copies which then feed into the 96 channels. These 96 channels are processed in parallel and independent of each other. Each channel starts with FIR (finite impulse response) filtering, followed by the application of gain and bias, then HWR, and finally a low-pass FIR filter and sub-sampling with a factor of four. $S_{2}$ through $S_{13}$ have 96 input channels and 96 output channels. The filtering layer of these sections can be described as a full matrix of filtering. That is, $96^{2}=9216$ filters are used to produce 96 filtered versions of each input channel. Then each of the 96 output channels is formed by summing one filtered version of each input channel (96 signals in each sum).
The remaining layers of $S_{2}$ through $S_{13}$ are the same as those in $S_{1}$, although the sub-sampling may use a factor of two, three, or four. In each of these layers the 96 channels are processed in parallel and independent of each other. $S_{6}$ and $S_{9}$ start with zero padding (appending one zero at the end of the signal) to allow sub-sampling by a factor of two at the end of the section.

### VII-B Per-Function Operation

A time-domain description of the operations says that WAWEnets replace samples with weighted local averages, add up signals from 96 different channels, scale and shift all samples in a channel uniformly, replace negative samples with zero, and replace blocks of samples with their average value. A frequency-domain description provides more insight, so we next describe how these functions change the spectra of the signals as they move through the network. We describe separately changes to the DC component of the signal spectra and changes to the other components. This distinction is key to the description of the overall operation that follows. The second-order FIR filters have three unconstrained real coefficients, resulting in either a pair of complex-conjugate zeros, or two zeros on the real line. The result is gentle spectral peaks and gentle or deep spectral nulls, depending on the location of the zeros relative to the unit circle. Forty percent of these filters are lowpass, 29% are highpass, 19% are bandstop, and 12% are bandpass. These proportions change by less than 1% between $\text{WAWEnet}_{7}$, $\text{WAWEnet}_{11}$, and $\text{WAWEnet}_{\text{S}1}$. This commonality is consistent with the fact that both $\text{WAWEnet}_{11}$ and $\text{WAWEnet}_{\text{S}1}$ use the weights from $\text{WAWEnet}_{7}$ as their initial state during the training process. The gain function simply scales all values of the spectrum (DC and non-DC) by a single value. The bias function adjusts only the DC value of the spectrum and leaves the rest unchanged.
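The time-domain layer stack of one section (FIR filtering, gain, bias, HWR, then the pooling FIR filter with sub-sampling) can be collected into a short NumPy sketch. This is an illustrative per-channel version with arbitrary parameters; it omits the full cross-channel filter matrix that $S_{2}$ through $S_{13}$ would add.

```python
import numpy as np

def wawenet_section(x, h, gain, bias, m):
    """One WAWEnet section as the six SP functions of Table IX.
    x: (channels, samples); h: (channels, 3) second-order FIR taps;
    gain, bias: (channels,); m: sub-sampling factor (2, 3, or 4)."""
    # second-order FIR filtering (causal, zero initial conditions)
    y = np.stack([np.convolve(xc, hc)[:x.shape[1]] for xc, hc in zip(x, h)])
    y = gain[:, None] * y + bias[:, None]   # per-channel gain and bias
    y = np.maximum(y, 0.0)                  # half-wave rectification
    n = y.shape[1] // m * m                 # trim to a multiple of m
    # avgpool: length-m FIR with taps 1/m combined with m-to-1 sub-sampling
    return y[:, :n].reshape(y.shape[0], -1, m).mean(axis=2)
```

With sub-sampling factor four, a 48-sample input per channel becomes 12 samples, and all outputs are non-negative because pooling follows the rectifier.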
The effect of HWR is controlled by the bias. If the bias forces all samples to be negative, HWR removes the signal. If the bias forces all samples to be positive, HWR does nothing. If the bias is such that the signal is bipolar, the most common and prominent spectral effect is the creation of new spectral components, thus increasing the spectral density. An example is given in the middle panel of Fig. 5. The effect of HWR on the DC value of the signal depends strongly on the signal.

Figure 5: Top panel shows spectra of two tones (345 Hz, shown in light blue, and 6789 Hz, shown in gold), middle panel shows the result of half-wave rectification with zero bias, bottom panel shows the additional effect of average pooling (pooling factor is 2). Colors emphasize non-linearity: light blue shows spectral components that are produced by the 345 Hz tone alone, gold shows components produced by the 6789 Hz tone alone, blue shows components that only appear when both tones are present. A significant DC component will be produced in any of these three cases and it is shown in black.

The FIR filtering that precedes the subsampling is length $m$ with all coefficients equal to $m^{-1}$ ($m=2$, 3, or 4). These are low-pass filters and each one has a perfect null at any frequency that would alias to DC, so the subsequent subsampling cannot change the DC value of the spectrum. (This is consistent with the fact that averaging cannot change the DC value of a signal.) The subsampling will remove all spectral content above the new Nyquist frequency and can produce aliasing at any non-DC frequency below the new Nyquist frequency. The aliasing is significant because these FIR low-pass filters are very short and have responses that are far from the near brick-wall responses needed to achieve alias-free sub-sampling. For example, when $m=2$, spectral components just above the new Nyquist frequency are aliased to those just below the new Nyquist frequency with only 3 dB of attenuation.
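The spectral effect of HWR on a bipolar two-tone signal can be reproduced numerically. This sketch uses bin-aligned tones rather than the exact 345 Hz and 6789 Hz of Fig. 5 so that spectral leakage does not cloud the picture:

```python
import numpy as np

n, sr = 4096, 16000
t = np.arange(n) / sr
# stand-ins for the two tones of Fig. 5, placed on exact FFT bins 8 and 64
f1, f2 = 8 * sr / n, 64 * sr / n
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

X = np.abs(np.fft.rfft(x)) / n   # before HWR: two spectral lines, no DC
y = np.maximum(x, 0.0)           # half-wave rectification with zero bias
Y = np.abs(np.fft.rfft(y)) / n   # after HWR: a DC term plus many new components
```

Comparing `X` and `Y` shows a positive DC component and additional spectral lines that were absent before rectification, i.e., the increased spectral density described above.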
Aliasing involves addition of complex values, so aliasing may reduce or increase the original spectral magnitudes depending on the relative phases of the two addends. The bottom panel of Fig. 5 shows an example. Note that in conventional sample rate reduction, a filter calculates one output sample for each input sample, then subsampling retains every $m^{th}$ sample. The avgpool function integrates filtering and subsampling so that only every $m^{th}$ sample is calculated. Because the filtering is FIR and the filter length matches the downsampling factor, this unconventional approach produces the same results as the conventional approach.

### VII-C Overall Operation

We can view the end-to-end mapping from audio signals to the 96-D latent space as 96 individual (but coupled) signal processors. The job of these processors is to shorten the signals and to strategically shape and move relevant spectral information to DC. In $S_{13}$ the length-3 filtering and 3-to-1 sub-sampling (the avgpool layer) serve to extract the DC value of a signal that has 3 samples. The ensemble of the 96 DC values defines a vector in the latent space which is then mapped to a final output value by $S_{14}$. The particulars of shaping spectral information and moving it to DC are shown graphically for one section in Fig. 6. In each section, non-DC spectral values of the signals are modified by FIR filtering, gain, HWR, the pooling FIR filter, and sub-sampling. The DC spectral values of the signals are modified by just four of the six functions. In the FIR filtering, gain, and bias functions the modification of the DC value is determined solely by the processor. These modifications are independent of the signal itself. But in the HWR function the modification of the DC spectral component is driven by the non-DC spectral components. The HWR is the stage where relevant non-DC spectral information is strategically moved to DC.
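The claimed equivalence between avgpool and the conventional filter-then-subsample route is easy to verify numerically; function names here are illustrative:

```python
import numpy as np

def avgpool(x, m):
    """Average pooling: replace each block of m samples with its mean."""
    return x[:len(x) // m * m].reshape(-1, m).mean(axis=1)

def filter_then_subsample(x, m):
    """Conventional route: length-m FIR with all taps 1/m, then keep
    every m-th output sample."""
    z = np.convolve(x, np.ones(m) / m, mode="valid")
    return z[::m]

rng = np.random.default_rng(0)
x = rng.standard_normal(48)
for m in (2, 3, 4):
    assert np.allclose(avgpool(x, m), filter_then_subsample(x, m))
```

The two routes agree for every pooling factor used in the network precisely because the FIR length matches the downsampling factor, as noted above.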
Figure 6: In each section, four functions change the DC component and five change non-DC components. Non-DC components affect the DC component _only_ in the half-wave rectification function. Sections sequentially move non-DC information to DC.

For example, consider a speech signal and a noise signal, each with a DC value of zero. When added, the noisy speech signal still has a DC value of zero. But after HWR, the original and noisy speech signals can have very different DC values, and this DC value can thus serve to indicate that noise was present in the speech signal. This is a very simple example, but by using many sections of intricate spectral shaping and folding, this processor can also accurately assess a broad array of much more nuanced perturbations to speech signals. Fig. 6 emphasizes the fact that every function modifies either the DC component or the non-DC component, or both, and that the HWR is the sole function where non-DC information influences the DC value. The input audio signal will typically have a DC value near zero, but as the processing continues, the spectral shaping and the flow of information to DC at HWRs result in DC values that describe important characteristics of the original audio signal. Fig. 7 gives a visual example of DC values in the first 13 sections of $\text{WAWEnet}_{\text{S}1}$ (1 input signal plus the outputs of 6 SP functions $\times$ 13 sections gives 79 rows) for each of the 96 channels. Input speech is shown at the top of the figure and DC values there are all zero as expected. As signals move through the network (down the figure) DC values build in various channels for part or all of a section. No continuous downward “flow” of DC appears because channels are fully connected with each other in every section and channel numbers are arbitrary. The most dramatic distribution of DC values is in the second half of the network (lower half of the figure), and this distribution then moderates to form the output.
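The speech-plus-noise example can be made concrete with a stand-in tone and zero-mean noise (the signals here are illustrative, not actual speech):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sr = 16000, 16000
t = np.arange(n) / sr
speech = np.sin(2 * np.pi * 200 * t)   # zero-DC stand-in for speech
noise = rng.standard_normal(n)
noise -= noise.mean()                  # force the noise DC to exactly zero
noisy = speech + noise                 # the noisy signal also has zero DC

dc_clean = np.maximum(speech, 0.0).mean()   # DC after HWR, clean signal
dc_noisy = np.maximum(noisy, 0.0).mean()    # DC after HWR, noisy signal
```

Both inputs have (near-)zero DC, yet the rectified signals have clearly different DC values, so the DC component after HWR carries information about the presence of noise.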
Figure 7: Example DC values (dark purple for large negative, bright yellow for large positive) in the first 13 sections (shown as labeled macro-rows) and all 96 channels (shown as columns) of $\text{WAWEnet}_{\text{S}1}$. The six rows within each section show the DC output value of the six signal processing functions (FIR, gain, bias, HWR, FIR, sub-sampling, in that order) in that section. The DC value of the input signal is replicated 96 times above $S_{1}$.

In effect, the wideband input speech signal passes through 96 parallel, coupled signal processors (composed of $S_{1}$–$S_{13}$) and the 96 DC values of the 96 output signals form a vector that is then mapped by $S_{14}$ to an estimate of some quality of that speech signal. Fig. 8 shows examples of these 96-D $\text{WAWEnet}_{7}$ outputs for 30 different conditions (1,320 segments averaged for each condition). These examples show how different dimensions respond to different attributes while they work together to produce an estimate of speech quality or intelligibility.

Figure 8: 96 outputs of $S_{13}$ for 30 conditions. From left to right, six increasing bit-rates for each of four codecs, then three noise suppression thresholds for each of two SNRs. Example observations: channel 32 responds strongly to WB vs. NB and also to noise suppression thresholds, channel 17 responds to codec bit-rates and to noise levels.

## VIII Conclusion

WAWEnets are no-reference wideband audio waveform evaluation networks that process waveforms directly into evaluations of those waveforms. We have trained and evaluated multiple WAWEnets, and this work is based on 334 hours of speech in 13 different languages produced by well over 1000 different talkers. We have demonstrated that $\text{WAWEnet}_{7}$ can produce speech quality and intelligibility estimates that agree (correlations of 0.92 to 0.96) with values from seven established FR objective estimators.
$\text{WAWEnet}_{11}$ demonstrates that this architecture is useful for estimating both objective and subjective speech qualities at the same time. $\text{WAWEnet}_{\text{S}1}$ agrees with the subjective MOS values of the IUB dataset with a per-segment correlation of 0.965, commensurate with top results from other current approaches. We expect that WAWEnets will generate accurate predictions of speech qualities for applications where speech impairments resemble those represented in our wide-ranging training dataset. In addition, we expect they are useful for narrower applications (for example, measuring reverb present in a room) if trained to do so. WAWEnets’ small size makes them approachable for applications where computing power is at a premium. WAWEnets are distinct from the vast majority of alternatives because they apply ML directly to speech waveforms instead of applying it to features extracted from speech waveforms. This gives WAWEnets access to all the information in the waveforms and allows them to effectively generate the best features for the task internally, rather than having potentially sub-optimal features mandated externally. Our work shows that this is indeed a viable approach. We have proposed several strategies to extend WAWEnets to wider bandwidths and longer signals as data becomes available. WAWEnets are composed of six common signal processing functions and, as expected, the non-linear functions (bias and HWR) are critical. These functions move spectral information (shaped by FIR filtering and gains) to DC so that the DC values of the final, very short, signals provide a 96-D description of the original, much longer signals. We encourage further experimentation and development with WAWEnets; this is enabled by the software repository at https://github.com/NTIA/WEnets.
## Acknowledgments

Sincere thanks to Gabriel Mittag, Sebastian Möller, and the members of the Quality and Usability Lab of the Technische Universität Berlin for creating and sharing the TUB dataset; to Zhuohuang Zhang, Donald Williamson, and the members of the Audio, Speech, and Information Retrieval Group at Indiana University for creating and providing the IUB dataset; and to Michael Chinen, Andrew Hines, and Jan Skoglund for support with ViSQOL 3.0.

## References

* [1] S. Quackenbush, T. Barnwell, and M. Clements, _Objective Measures of Speech Quality_. Englewood Cliffs, New Jersey: Prentice Hall, 1988.
* [2] S. Wang, A. Sekey, and A. Gersho, “An objective measure for predicting subjective quality of speech coders,” _IEEE Journal on Selected Areas in Communications_, vol. 10, no. 5, pp. 819–829, June 1992.
* [3] J. G. Beerends and J. A. Stemerdink, “A perceptual speech-quality measure based on a psychoacoustic sound representation,” _J. Audio Eng. Soc._, vol. 42, no. 3, pp. 115–123, 1994. [Online]. Available: http://www.aes.org/e-lib/browse.cfm?elib=6957
* [4] S. Voran, “Objective estimation of perceived speech quality — Part I: Development of the measuring normalizing block technique,” _IEEE Transactions on Speech and Audio Processing_, vol. 7, no. 4, pp. 371–382, July 1999.
* [5] “Results of objective speech quality assessment including receiving terminals using the advanced TOSQA2001,” ITU-T, SG12, Q9, Contribution 20, Jan. 9, 2001.
* [6] J. Beerends, A. Hekstra, A. Rix, and M. Hollier, “Perceptual evaluation of speech quality (PESQ) — The new ITU standard for end-to-end speech quality assessment, Part II — Psychoacoustic model,” _J. Audio Engineering Society_, vol. 50, pp. 765–778, Oct. 2002.
* [7] R. Huber and B. Kollmeier, “PEMO-Q—A new method for objective audio quality assessment using a model of auditory perception,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 14, no. 6, pp. 1902–1911, Nov. 2006.
* [8] _ITU-T Recommendation P.862.2, Wideband Extension to Recommendation P.862 for the Assessment of Wideband Telephone Networks and Speech Codecs_, Geneva, 2007.
* [9] J. Jensen and C. H. Taal, “An algorithm for predicting the intelligibility of speech masked by modulated noise maskers,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, vol. 24, no. 11, pp. 2009–2022, 2016.
* [10] _ITU-T Recommendation P.863, Perceptual objective listening quality prediction_, Geneva, 2018.
* [11] S. Van Kuyk, W. B. Kleijn, and R. C. Hendriks, “An instrumental intelligibility metric based on information theory,” _IEEE Signal Processing Letters_, vol. 25, no. 1, pp. 115–119, 2018.
* [12] M. Chinen, F. S. C. Lim, J. Skoglund, N. Gureev, F. O’Gorman, and A. Hines, “ViSQOL v3: An open source production ready objective speech and audio metric,” in _Proc. Twelfth International Conference on Quality of Multimedia Experience_, 2020.
* [13] W. A. Jassim, J. Skoglund, M. Chinen, and A. Hines, “WARP-Q: Quality prediction for generative neural speech codecs,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2021, pp. 401–405.
* [14] J. Liang and R. Kubichek, “Output-based objective speech quality,” in _Proc. IEEE Vehicular Technology Conference_, vol. 3, June 1994, pp. 1719–1723.
* [15] L. Malfait, J. Berger, and M. Kastner, “P.563—The ITU-T standard for single-ended speech quality assessment,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 14, no. 6, pp. 1924–1934, Nov. 2006.
* [16] D. Kim and A. Tarraf, “ANIQUE+: A new American national standard for non-intrusive estimation of narrowband speech quality,” _Bell Labs Technical Journal_, vol. 12, no. 1, pp. 221–236, Spring 2007.
* [17] T. H. Falk and W. Chan, “Single-ended speech quality measurement using machine learning methods,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 14, no. 6, pp. 1935–1947, Nov. 2006.
* [18] M. H. Soni and H. A. Patil, “Novel deep autoencoder features for non-intrusive speech quality assessment,” in _Proc. European Signal Processing Conference_, Nov. 2016, pp. 2315–2319.
* [19] M. Hakami and W. B. Kleijn, “Machine learning based non-intrusive quality estimation with an augmented feature set,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2017, pp. 5105–5109.
* [20] C. Spille, S. D. Ewert, B. Kollmeier, and B. T. Meyer, “Predicting speech intelligibility with deep neural networks,” _Computer Speech & Language_, vol. 48, pp. 51–66, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0885230817300803
* [21] P. Seetharaman, G. J. Mysore, P. Smaragdis, and B. Pardo, “Blind estimation of the speech transmission index for speech quality prediction,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, April 2018, pp. 591–595.
* [22] S. Fu, Y. Tsao, H. Hwang, and H. Wang, “Quality-Net: An end-to-end non-intrusive speech quality assessment model based on BLSTM,” in _Proc. Interspeech_, 2018, pp. 1873–1877. [Online]. Available: http://dx.doi.org/10.21437/Interspeech.2018-1802
* [23] R. Huber, M. Krüger, and B. T. Meyer, “Single-ended prediction of listening effort using deep neural networks,” _Hearing Research_, vol. 359, pp. 40–49, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0378595517304951
* [24] H. Salehi, D. Suelzle, P. Folkeard, and V. Parsa, “Learning-based reference-free speech quality measures for hearing aid applications,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, vol. 26, no. 12, pp. 2277–2288, Dec. 2018.
* [25] G. Mittag and S. Möller, “Non-intrusive speech quality assessment for super-wideband speech communication networks,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, May 2019, pp. 7125–7129.
* [26] J. Ooster and B. T. Meyer, “Improving deep models of speech quality prediction through voice activity detection and entropy-based measures,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, May 2019, pp. 636–640.
* [27] J. F. Santos and T. H. Falk, “Towards the development of a non-intrusive objective quality measure for DNN-enhanced speech,” in _Proc. Eleventh International Conference on Quality of Multimedia Experience_, 2019.
* [28] C.-C. Lo, S.-W. Fu, W.-C. Huang, X. Wang, J. Yamagishi, Y. Tsao, and H.-M. Wang, “MOSNet: Deep learning-based objective assessment for voice conversion,” in _Proc. Interspeech_, Sep. 2019.
* [29] A. A. Catellier and S. D. Voran, “WEnets: A convolutional framework for evaluating audio waveforms,” _arXiv e-prints 1909.09024_, Sep. 2019.
* [30] H. Gamper, C. K. A. Reddy, R. Cutler, I. J. Tashev, and J. Gehrke, “Intrusive and non-intrusive perceptual speech quality assessment using a convolutional neural network,” in _Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics_, Oct. 2019, pp. 85–89.
* [31] A. A. Catellier and S. D. Voran, “WAWEnets: A no-reference convolutional waveform-based approach to estimating narrowband and wideband speech quality,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2020, pp. 331–335.
* [32] M. B. Pedersen, A. Heidemann Andersen, S. H. Jensen, and J. Jensen, “A neural network for monaural intrusive speech intelligibility prediction,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2020, pp. 336–340.
* [33] N. Simou, Y. Mastorakis, and N. Stefanakis, “Towards blind quality assessment of concert audio recordings using deep neural networks,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2020, pp. 3477–3481.
* [34] Y. Choi, Y. Jung, and H. Kim, “Deep MOS predictor for synthetic speech using cluster-based modeling,” in _Proc. Interspeech_, Oct. 2020.
* [35] G. Mittag, R. Cutler, Y. Hosseinkashi, M. Revow, S. Srinivasan, N. Chande, and R. Aichner, “DNN no-reference PSTN speech quality prediction,” in _Proc. Interspeech_, Oct. 2020.
* [36] G. Mittag and S. Möller, “Deep learning based assessment of synthetic speech naturalness,” in _Proc. Interspeech_, Oct. 2020.
* [37] Y. Saishu, A. H. Poorjam, and M. G. Christensen, “A CNN-based approach to identification of degradations in speech signals,” _EURASIP Journal on Audio, Speech, and Music Processing_, Feb. 2021.
* [38] M. Liu, J. Wang, W. Yi, and F. Liu, “Neural network-based non-intrusive speech quality assessment using attention pooling function,” _EURASIP Journal on Audio, Speech, and Music Processing_, May 2021.
* [39] H. Nylén, S. Chatterjee, and S. Ternström, “Detecting signal corruptions in voice recordings for speech therapy,” in _Proc. International Conference on Acoustics, Speech and Signal Processing_, 2021, pp. 386–390.
* [40] J. Serrà, J. Pons, and S. Pascual, “SESQA: Semi-supervised learning for speech quality assessment,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2021, pp. 381–385.
* [41] Y. Leng, X. Tan, S. Zhao, F. Soong, X.-Y. Li, and T. Qin, “MBNET: MOS prediction for synthesized speech with mean-bias network,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2021, pp. 391–395.
* [42] Z. Zhang, P. Vyas, X. Dong, and D. S. Williamson, “An end-to-end non-intrusive model for subjective and objective real-world speech assessment using a multi-task framework,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2021, pp. 316–320.
* [43] X. Zheng and C. Zhang, “Towards blind audio quality assessment using a convolutional-recurrent neural network,” in _Proc. Thirteenth International Conference on Quality of Multimedia Experience_, 2021, pp. 91–96.
* [44] S. Voran, “Measuring speech quality of system input while observing only system output,” in _Proc. Thirteenth International Conference on Quality of Multimedia Experience_, 2021, pp. 125–128.
* [45] G. Mittag, S. Zadtootaghaj, T. Michael, B. Naderi, and S. Möller, “Bias-aware loss for training image and speech quality prediction models from multiple datasets,” in _Proc. Thirteenth International Conference on Quality of Multimedia Experience_, 2021, pp. 97–102.
* [46] A. Ragano, E. Benetos, and A. Hines, “More for less: Non-intrusive speech quality assessment with limited annotations,” in _Proc. Thirteenth International Conference on Quality of Multimedia Experience_, 2021, pp. 103–108.
* [47] N. Nessler, M. Cernak, P. Prandoni, and P. Mainar, “Non-intrusive speech quality assessment with transfer learning and subject-specific scaling,” in _Proc. Interspeech_, 2021, pp. 2406–2410.
* [48] M. Yu, C. Zhang, Y. Xu, S.-X. Zhang, and D. Yu, “MetricNet: Towards improved modeling for non-intrusive speech quality assessment,” in _Proc. Interspeech_, 2021, pp. 2142–2146.
* [49] Ĺuboš Marcinek, M. Stone, R. Millman, and P. Gaydecki, “N-MTTL SI model: Non-intrusive multi-task transfer learning-based speech intelligibility prediction model with scenery classification,” in _Proc. Interspeech_, 2021, pp. 3365–3369.
* [50] G. Mittag, B. Naderi, A. Chehadi, and S. Möller, “NISQA: A deep CNN-self-attention model for multidimensional speech quality prediction with crowdsourced datasets,” in _Proc. Interspeech_, 2021, pp. 2127–2131.
* [51] S. Voran, “An iterated nested least-squares algorithm for fitting multiple data sets,” U.S. Department of Commerce, National Telecommunications and Information Administration, Institute for Telecommunication Sciences, Tech. Rep. TM-03-397, 2002.
* [52] _ITU-T Recommendation G.711, Pulse code modulation (PCM) of voice frequencies_, Geneva, 1988.
* [53] M. Torcoli, J. Paulus, T. Kastner, and C. Uhle, “Controlling the remixing of separated dialogue with a non-intrusive quality estimate,” in _Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics_, Oct. 2021, pp. 91–95.
* [54] C. H. Taal, R. C. Hendriks, R. Heusdens, and J. Jensen, “An algorithm for intelligibility prediction of time–frequency weighted noisy speech,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 19, no. 7, pp. 2125–2136, 2011.
* [55] A. A. Catellier and S. D. Voran, “WAWEnets: A no-reference convolutional waveform-based approach to estimating narrowband and wideband speech quality,” in _Proc. IEEE International Conference on Acoustics, Speech and Signal Processing_, 2020, pp. 331–335.
* [56] _ITU-T P Series Supplement 23 Speech Database_, Geneva, 1998.
* [57] _ITU-T Recommendation P.501, Test signals for use in telephonometry_, Geneva, 2017.
* [58] McGill University, Telecommunications and Signal Processing Laboratory, “Speech database,” available at http://www-mmsp.ece.mcgill.ca/Documents/Data/.
* [59] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren, “DARPA TIMIT acoustic phonetic continuous speech corpus CDROM,” 1993.
* [60] A. Rousseau, P. Deléglise, and Y. Estève, “TED-LIUM: An automatic speech recognition dedicated corpus,” in _Proc. Eighth International Conference on Language Resources and Evaluation_, May 2012.
* [61] C. D. Hernandez-Mena, “TEDx Spanish corpus. Audio and transcripts in Spanish taken from the TEDx talks; shared under the CC BY-NC-ND 4.0 license,” web download, 2019.
* [62] D. Wang, X. Zhang, and Z. Zhang, “THCHS-30: A free Chinese speech corpus,” 2015. [Online]. Available: http://arxiv.org/abs/1512.01882
* [63] Y. Choi and B. Lee, “Pansori: ASR corpus generation from open online video contents,” in _Proc. IEEE Seoul Section Student Paper Contest_, Nov. 2018, pp. 117–121.
* [64] Recordings of African Accented French speech, available at http://openslr.org/57.
* [65] Open Speech Repository, available at https://www.voiptroubleshooter.com/.
* [66] _ITU-T Recommendation P.191, Software tools for speech and audio coding standardization_, Geneva, 2005.
* [67] V. Emiya, E. Vincent, N. Harlander, and V. Hohmann, “Subjective and objective quality assessment of audio source separation,” _IEEE Transactions on Audio, Speech, and Language Processing_, vol. 19, no. 7, pp. 2046–2057, Sep. 2011.
* [68] S. Van Kuyk, W. B. Kleijn, and R. C. Hendriks, “An instrumental intelligibility metric based on information theory,” _IEEE Signal Processing Letters_, vol. 25, no. 1, pp. 115–119, 2018.
* [69] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in _Proc. IEEE International Conference on Computer Vision_, 2015, pp. 1026–1034.
* [70] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _Proc. Third International Conference on Learning Representations_, Y. Bengio and Y. LeCun, Eds., May 2015. [Online]. Available: http://arxiv.org/abs/1412.6980
* [71] X. Dong and D. S. Williamson, “A pyramid recurrent network for predicting crowdsourced speech-quality ratings of real-world signals,” in _Proc. Interspeech_, Oct. 2020.
* [72] S. Voran, “A basic experiment on time-varying speech quality,” in _Proc. Fourth International MESAQIN (Measurement of Speech and Audio Quality in Networks) Conference_, Prague, Czech Republic, Jun. 2005, pp. 51–64.

Andrew Catellier (Senior Member, IEEE) attended the University of Wyoming, where he earned a B.S. in Electrical Engineering in 2006 and an M.S. in Electrical Engineering in 2007. Since then, Andrew conducted traditional and crowd-sourced subjective experiments for audio and video signals and explored objective speech quality assessment using ML techniques at the Institute for Telecommunication Sciences (ITS) in Boulder, Colorado.
He worked to facilitate crop yield measurements, forecasts, and classification via remote sensing, computer vision, and ML at GeoVisual Analytics from 2017 until 2022. Currently, Andrew uses computer vision and ML to validate identification documents and identity at Nametag, Inc.

Stephen Voran (Senior Member, IEEE) received the B.A. degree in Mathematics from Carleton College, Northfield, MN, in 1985, and the M.S. degree in Electrical Engineering from the University of Colorado, Boulder, in 1989. Since 1990 he has been with the Institute for Telecommunication Sciences, Boulder, Colorado, and has been contributing to signal-processing based advances in objective assessment of telecommunications speech quality and intelligibility, as well as subjective audio testing, speech coding, and audio quality enhancement.
Entangling transmons with low-frequency protected superconducting qubits Andrea Maiani, Morten Kjaergaard, and Constantin Schrade Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark Novel qubits with intrinsic noise protection constitute a promising route for improving the coherence of quantum information in superconducting circuits. However, many protected superconducting qubits exhibit relatively low transition frequencies, which could make their integration with conventional transmon circuits challenging. In this work, we propose and study a scheme for entangling a tunable transmon with a Cooper-pair parity-protected qubit, a paradigmatic example of a low-frequency protected qubit that stores quantum information in opposite Cooper-pair parity states on a superconducting island. By tuning the external flux on the transmon, we show that non-computational states can mediate a two-qubit entangling gate that preserves the Cooper-pair parity independent of the detailed pulse sequence. Interestingly, the entangling gate bears similarities to a controlled-phase gate in conventional transmon devices. Hence, our results suggest that standard high-precision gate calibration protocols could be repurposed for operating hybrid qubit devices. Superconducting transmon qubits [1] are a highly promising platform for noisy intermediate-scale quantum (NISQ) devices [2] and error-corrected quantum computers [3, 4, 5, 6] with applications ranging from quantum simulations [7, 8, 9, 10] to the first experiments on quantum advantage [11, 12]. Among the most attractive features of the transmon circuit are its reproducibility, insensitivity to charge noise-induced dephasing, and coherence times that have seen steady improvements over the past decade [13]. Interestingly though, despite notable advances in prolonging the coherence of transmons, the transmon circuit does not exhibit intrinsic protection to qubit relaxation errors. 
It is thus an important question how transmon devices can be further optimized with complementary qubit technologies to accelerate the path to fault-tolerant quantum computation. Motivated by the challenge of exploring complementary qubit modalities, several alternative qubit encodings have been proposed [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32] and experimentally studied [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. A particular class of such novel qubit encodings are Cooper-pair parity-protected qubits (PPQ) [30, 31, 32, 33, 34], which rely on a special Josephson element that only permits tunneling of pairs of Cooper-pairs. Similar to the transmon qubit, the two nearly-degenerate ground states of the PPQ have a nearly flat charge dispersion, which makes them insensitive to charge-noise induced dephasing. Similar to the fluxonium qubit [39, 40, 41, 42, 43, 44], the two qubit states also have disjoint support, since they carry opposite Cooper-pair parity. This disjoint support ensures that, if the qubit-environment coupling conserves the Cooper-pair parity, relaxation errors between the computational states are prevented. While considerable efforts have been devoted to the development of a gate set for protected superconducting qubits [16, 46, 47, 48, 49, 50] for their use as an independent quantum computing platform, a different approach is to integrate protected qubits as memory elements in a conventional transmon-based quantum computing architecture. In such a scheme, the qubit state would be stored on the protected qubit during idle times and transferred to the transmon qubits for fast, high-fidelity operations, using the full machinery of well-established high-fidelity transmon operation. However, many protected qubits exhibit relatively low qubit transition frequencies, which could make this integration with transmon devices challenging. 
This motivates the question of efficiently generating entanglement between protected superconducting qubits and transmon qubits. In this work, we propose and study a capacitive coupling scheme for entangling a tunable transmon with a PPQ, a paradigmatic example of a protected superconducting qubit featuring a nearly-flat charge dispersion and qubit states with disjoint support. By tuning the external flux on the transmon, we show that non-computational states can mediate an entangling gate that preserves the Cooper-pair parity irrespective of the detailed pulse sequence. Besides opening the way to coherent state transfer, the proposed entangling gate also bears similarities with a controlled-phase gate in conventional capacitively coupled transmon qubits. Consequently, our results suggest that standard high-precision two-qubit calibration protocols could be repurposed for the operation of hybrid qubit devices. § SETUP As depicted in Fig. <ref>(a), we consider a direct capacitive coupling between a frequency-tunable transmon qubit and a PPQ, realized by a capacitively-shunted $\cos(2\phi)$ Josephson element for the tunneling of pairs of Cooper-pairs. The individual Hamiltonians of the transmon, $H_{t}$, and of the PPQ, $H_{p}$, are given by, \begin{equation} \begin{split} H_{t} &= 4 E^{}_{C,t} (n_{t} - n^{}_{g,t})^2 - E^{}_{J,t}\, \text{cos}(\phi_{t}) \,, \\ H_{p} &= 4 E^{}_{C,p} (n_{p} - n_{g,p})^2 - E^{}_{J,p} \, \text{cos}(2\phi_{p})\,. \end{split} \label{Eq1} \end{equation} Here, $(n_{t},\phi_{t})$ and $(n_{p},\phi_{p})$ denote Cooper-pair charge and phase degrees of freedom of the transmon and PPQ. Moreover, $E_{J,t}$ is the transmon Josephson energy and $E_{J,p}$ is the two-Cooper-pair tunneling amplitude of the PPQ. The charging energies of the two qubit circuits are $E^{}_{C,t} =e^{2}/2C_{t}$ and $E^{}_{C,p} =e^{2}/2C_{p}$ with the shunt capacitances $C_{t}$ and $C_{p}$. Both Hamiltonians in Eq.
(<ref>) can be diagonalized exactly by rewriting the eigenvalue problems as Mathieu equations. For the transmon [1], the energy splitting between the ground and first-excited state, which form the qubit basis $\left|0_{t}\right\rangle$ and $\left|1_{t}\right\rangle$, is $\omega_{t}=\sqrt{8E_{J,t}E_{C,t}}+\delta\omega_{t}$ with $\delta \omega_t\propto \text{exp}(-\sqrt{8E_{J,t}/E_{C,t}}) \cos(2\pi n_{g,t})$ for $E_{J,t}\gg E_{C,t}$, see the left panel of Fig. <ref>(b). For the PPQ [31], the qubit basis is given by the two lowest-energy states with even and odd Cooper-pair parity, $\left|0_{p}\right\rangle$ and $\left|1_{p}\right\rangle$. These states have an exponentially suppressed energy splitting, $\omega_{p}\propto \text{exp}(-\sqrt{2E_{J,p}/E_{C,p}}) |\cos(\pi n_{g,p})|$ for $E_{J,p}\gg E_{C,p}$, see the right panel of Fig. <ref>(b). Unlike the transmon, the PPQ is thus a low-frequency qubit with the computational states exhibiting an exact degeneracy if $\text{cos}(\pi n_{g,p})=0$ and a near-degeneracy otherwise. However, like the transmon, the energy splitting of the PPQ is insensitive to variations in $n_{g,p}$ if $E_{J,p}\gg E_{C,p}$, which ensures insensitivity to charge noise dephasing. To explain the protection of the PPQ against parity-preserving relaxation errors, we consider the wavefunctions of the computational basis. In phase space, these wavefunctions are symmetric/anti-symmetric combinations of states that are localized in the $0$- and $\pi$-valleys of the Josephson potential, see Fig. <ref>(c). In charge space, the same wavefunctions are superpositions of states with even/odd Cooper-pair number. Due to this disjoint support of the charge space wavefunctions, $\left\langle 0_{p} | \mathcal{O} | 1_{p}\right\rangle = 0$ for any operator $\mathcal{O}$ that preserves the Cooper-pair parity, which is the condition for protection against parity-preserving relaxation errors [30]. Hybrid qubit setup. 
(a) A frequency-tunable transmon (red) coupled to a PPQ (blue) realized by a capacitively-shunted tunneling element for pairs of Cooper-pairs (Josephson junction symbol with double lines). The two qubits are coupled via a coupling capacitance $C_c$. (b) Zoom-in on the charge dispersion relation for the transmon (left panel) and the PPQ (right panel). In the gray area, the ground (first excited) state carries odd (even) Cooper-pair parity. In the white area, the order is inverted. (c) Wavefunctions of the decoupled system $(C_c=0)$. The system parameters are $(E_{J,t}, E_{J,p},E_{C,t},E_{C,p}) = 2\pi (10,3,0.25,0.25)\,\text{GHz}$. Low-energy spectrum of the hybrid qubit setup and $\textsf{CZ}_\phi$ gate. (a) Low-energy spectrum of the hybrid qubit setup for $n_{g, p}=0$ and $(E_{J,t}, E_{J,p},E_{C,t},E_{C,p},E_{C,c}) = 2\pi (12,2.7,0.2,0.15,0.025)\,\text{GHz}$ as a function of the external flux $\Phi_{\text{ext}}$ of the tunable transmon. The $\textsf{CZ}_\phi$ gate is realized by a rapid excursion from $\Phi_{\text{ext}}=0$ to the vicinity of the $\left|1_{t},0_{p}\right\rangle\leftrightarrow\left|0_{t},3_{p}\right\rangle$ anti-crossing at $\Phi_{\text{ext}}=\Phi^{*}_{\text{ext}}$. The bottom panel shows the coupling strengths, $g^{yz}$ and $g^{zz}$, upon approaching the anti-crossing. (b) Same as (a) but for $n_{g, p}=0.5$. Each shown energy level is now exactly two-fold degenerate. Having introduced the two decoupled qubit circuits with the associated computational subspace $\mathcal{P}_0= \{ \left|1_{t},1_{p}\right\rangle, \left|1_{t},0_{p}\right\rangle, \left|0_{t},1_{p}\right\rangle, \left|0_{t},0_{p}\right\rangle \} $, we proceed by coupling the qubits via a standard capacitive coupling (see also Fig. <ref>a) corresponding to a coupling Hamiltonian given by \begin{equation} H_{c} = 4 E_{C,c} (n_{p} - n_{g,p})(n_{t} - n_{g,t}). \label{Eq2} \end{equation} Here, $E_{C,c} = e^2 C_{c}/(C_{p}C_{t})$ with the coupling capacitance $C_{c}$. 
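Both qubit Hamiltonians of Eq. (<ref>) are band matrices in the Cooper-pair charge basis, where a $\cos(k\phi)$ element couples $|n\rangle$ and $|n\pm k\rangle$. The following numerical sketch (our own illustration with truncated charge basis and parameter values similar to the figure captions, not the authors' code) reproduces the two qualitative features used above: a transmon splitting near $\sqrt{8E_{J,t}E_{C,t}}$ with exponentially small charge dispersion, and a nearly degenerate PPQ doublet below a large gap to the next doublet.

```python
import numpy as np

def charge_hamiltonian(E_C, E_J, n_g, k=1, n_max=20):
    """H = 4 E_C (n - n_g)^2 - E_J cos(k*phi) in the charge basis;
    cos(k*phi) couples |n> and |n +/- k> with amplitude -E_J/2."""
    n = np.arange(-n_max, n_max + 1)
    H = np.diag(4.0 * E_C * (n - n_g) ** 2)
    for i in range(len(n) - k):
        H[i, i + k] = H[i + k, i] = -E_J / 2
    return H

# Transmon (E_J >> E_C): splitting ~ sqrt(8 E_J E_C), nearly flat in n_g
E_C_t, E_J_t = 0.25, 10.0   # illustrative values, GHz

def qubit_splitting(n_g):
    e = np.linalg.eigvalsh(charge_hamiltonian(E_C_t, E_J_t, n_g))
    return e[1] - e[0]

omega_t = qubit_splitting(0.0)                       # ~ sqrt(8 E_J E_C) - O(E_C)
dispersion = abs(qubit_splitting(0.0) - qubit_splitting(0.5))  # exponentially small

# PPQ: the cos(2*phi) element yields a nearly degenerate parity doublet
E_C_p, E_J_p = 0.25, 3.0
levels_p = np.linalg.eigvalsh(charge_hamiltonian(E_C_p, E_J_p, 0.0, k=2))
doublet = levels_p[1] - levels_p[0]   # small splitting of the qubit doublet
gap = levels_p[2] - levels_p[0]       # large gap to the next doublet
```

The same routine evaluated on a grid of $n_{g}$ values reproduces the dispersion relations sketched in Fig. <ref>(b).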
In summary, the full Hamiltonian of our setup is $H_{}=H_{p}+H_{t}+H_{c}$. In the next section, we will derive the effective qubit interaction due to this direct capacitive coupling. § EFFECTIVE HAMILTONIAN To motivate the derivation of the effective qubit interaction, we first recall the case of two capacitively coupled transmon qubits, $t1$ and $t2$, which are both `high-frequency' qubits. In this example, the capacitive coupling mediates a $\sigma_{t1}^{y}\sigma_{t2}^{y}$ interaction when projected onto the computational subspace and a $\sigma_{t1}^{z}\sigma_{t2}^{z}$ interaction due to the mixing of computational and non-computational states [51]. In our setup, which involves the coupling of a `high-frequency' transmon qubit and a `low-frequency' PPQ, we will show that the couplings to non-computational states will play an even more essential role. To anticipate this result, we note that the coupling Hamiltonian of Eq. (<ref>) at $n_{g,p}=0$ vanishes exactly when projected onto the computational subspace, $\langle s_{t}, s'_{p}|H_{c}|s''_{t},s'''_{p}\rangle=0$ for any two states $|s_{t},s'_{p}\rangle, |s''_{t},s'''_{p}\rangle$ in $\mathcal{P}_0$ since $\langle 0_{t}|n_{t}|0_{t}\rangle=\langle 1_{t}|n_{t}|1_{t}\rangle=0$ and $\langle 0_{p}|n_{p}|1_{p}\rangle=0$. A direct coupling within the computational subspace is thus fully absent at $n_{g,p}=0$ and any qubit interaction, if present, is necessarily mediated by virtual transitions through non-computational states. §.§ Special case: $\boldsymbol{n_{g,p}}=0$ To identify the origin of such virtual transitions, we initially compare two special cases with the offset charge on the PPQ set to either $n_{g,p}=0$ or $n_{g,p}=0.5$. Starting with the $n_{g,p}=0$ case, we show the low-energy spectrum as a function of external magnetic flux $\Phi^{\text{ext}}_{t}$ of the tunable transmon in Fig. <ref>(a). 
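The vanishing of $H_{c}$ within $\mathcal{P}_0$ at $n_{g,p}=0$ can be checked directly from the charge-basis eigenvectors: $\langle 0_{t}|n_{t}|0_{t}\rangle=0$ by symmetry, $\langle 0_{p}|n_{p}|1_{p}\rangle=0$ because the two PPQ states have disjoint charge support, while the parity-preserving charge matrix elements out of $\mathcal{P}_0$ remain finite. A short numerical check (our own sketch with illustrative parameters, not the authors' code):

```python
import numpy as np

def eigensystem(E_C, E_J, n_g, k, n_max=20):
    """Charge operator and eigenvectors of 4 E_C (n-n_g)^2 - E_J cos(k*phi)."""
    n = np.arange(-n_max, n_max + 1)
    H = np.diag(4.0 * E_C * (n - n_g) ** 2)
    for i in range(len(n) - k):
        H[i, i + k] = H[i + k, i] = -E_J / 2
    _, vecs = np.linalg.eigh(H)
    return np.diag(n.astype(float)), vecs

n_t, v_t = eigensystem(0.25, 10.0, 0.0, k=1)   # transmon
n_p, v_p = eigensystem(0.25, 3.0, 0.0, k=2)    # PPQ

# H_c ~ n_t n_p: all matrix elements within P_0 vanish at n_{g,p} = 0 ...
diag_t = abs(v_t[:, 0] @ n_t @ v_t[:, 0])      # <0_t|n_t|0_t> = 0
parity = abs(v_p[:, 0] @ n_p @ v_p[:, 1])      # <0_p|n_p|1_p> = 0 (disjoint parity)
# ... while the parity-preserving coupling out of P_0 stays finite,
# e.g. the product of elements behind the |1t,1p> <-> |0t,2p> anti-crossing:
lam = abs(v_t[:, 1] @ n_t @ v_t[:, 0]) * abs(v_p[:, 1] @ n_p @ v_p[:, 2])
```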
The spectrum comprises not only the four qubit levels of $\mathcal{P}_0$ but also two additional levels corresponding to the $\left|0_{t},2_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$ states of the uncoupled system. Interestingly, the non-computational states exhibit two anti-crossings with the computational states, $\left|1_{t},1_{p}\right\rangle\leftrightarrow\left|0_{t},2_{p}\right\rangle$ and $\left|1_{t},0_{p}\right\rangle\leftrightarrow\left|0_{t},3_{p}\right\rangle$, at certain values of external flux. These anti-crossings arise because the respective couplings preserve the Cooper-pair parity on the PPQ. On the other hand, anti-crossings between $\left|1_{t},0_{p}\right\rangle\leftrightarrow\left|0_{t},2_{p}\right\rangle$ and $\left|1_{t},1_{p}\right\rangle\leftrightarrow\left|0_{t},3_{p}\right\rangle$ are absent from the spectrum in Fig. <ref>(a), as such couplings violate the Cooper-pair parity conservation on the PPQ. We will now show that in the vicinity of the two anti-crossings, virtual transitions in-and-out of the computational subspace are enhanced and, consequently, can induce a sizable effective qubit interaction between the transmon and the PPQ. For computing the effective qubit interaction at $n_{g,p}=0$, we initially project our setup Hamiltonian $H$ onto the four qubit states of $\mathcal{P}_0$ and on the additional $\left|0_{t},2_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$ states.
This yields the following low-energy Hamiltonian, \begin{equation} \begin{pNiceArray}{cccc|cc} \omega_{11} & 0 & 0 & 0 &\lambda' & 0 \\ 0 & \omega_{10} & 0 & 0 & 0 & -\lambda'' \\ 0 & 0 & \omega_{01} & 0 & 0 & 0 \\ 0 & 0 & 0 & \omega_{00} & 0& 0 \\ \hline \lambda' & 0 & 0 & 0 & \omega_{02} & 0 \\ 0 & -\lambda'' & 0 & 0 & 0 & \omega_{03}\\ \end{pNiceArray} \label{lowenergy1} \end{equation} Here, the frequency of the $\left|s_{t},s'_{p}\right\rangle$ state in the uncoupled system is denoted by $\omega_{ss'}=\omega_{t,s}(\Phi^\mathrm{ext}_t)+\omega_{p,s'}$. Moreover, the coupling matrix elements are given by $\lambda'=\left\langle 1_{t},1_{p}|H_{c}|0_{t},2_{p}\right\rangle$ and $\lambda''=\left\langle 1_{t},0_{p}|H_{c}|0_{t},3_{p}\right\rangle$, where we picked a wavefunction gauge for which ($\lambda',\lambda''$) are real-valued. We point out that the low-energy Hamiltonian of Eq. (<ref>) is different from the one of capacitively-coupled transmons [51], because the conservation of Cooper-pair parity prohibits a coupling of the $\left|0_{t},1_{p}\right\rangle$ to the $\left|1_{t},0_{p}\right\rangle$ state. Also, for two coupled transmons only the highest energy computational state exhibits a crossing with non-computational states. In our case, the two computational states, $\left|1_{t},0_{p}\right\rangle$ and $\left|1_{t},1_{p}\right\rangle$, both cross with non-computational states, albeit at different values of external flux. Next, we integrate out the non-computational states to second order in $\lambda'$ and $\lambda''$ by a Schrieffer-Wolff transformation [52].
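The effect of the Schrieffer-Wolff step can be illustrated directly on the $6\times 6$ matrix above: to second order, the dressed $\left|1_{t},1_{p}\right\rangle$ and $\left|1_{t},0_{p}\right\rangle$ levels are shifted by $\lambda'^{2}/(\omega_{11}-\omega_{02})$ and $\lambda''^{2}/(\omega_{10}-\omega_{03})$, respectively. A short numerical check with illustrative frequencies (chosen by us, not the paper's values):

```python
import numpy as np

# illustrative frequencies and couplings (GHz), not the paper's values
w11, w10, w01, w00, w02, w03 = 7.0, 5.5, 1.5, 0.0, 4.2, 4.9
lam1, lam2 = 0.10, 0.10   # lambda', lambda''

H = np.diag([w11, w10, w01, w00, w02, w03])
H[0, 4] = H[4, 0] = lam1    # |1t,1p> <-> |0t,2p>  (parity-preserving)
H[1, 5] = H[5, 1] = -lam2   # |1t,0p> <-> |0t,3p>

evals = np.linalg.eigvalsh(H)
# exact shift of the dressed |1t,1p> level vs the second-order
# Schrieffer-Wolff prediction; they agree up to O(lambda^4) terms
shift_exact = evals[np.argmin(abs(evals - w11))] - w11
shift_sw = lam1**2 / (w11 - w02)
```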
Provided that $\lambda'^{2}\ll |\omega_{02}-\omega_{11}|$ and $\lambda''^{2}\ll |\omega_{03}-\omega_{10}|$, we find that the effective qubit Hamiltonian reads [53], \begin{align} H^{(0)}_{\text{eff}}=\left(\omega_{t}+\frac{g^{zz}_{+}}{2}\right)\frac{\sigma^{z}_{t}}{2}+\left(\omega_{p}+\frac{g^{zz}_{-}}{2}\right)\frac{\sigma^{z}_{p}}{2}+g^{zz}_{-}\,\frac{\sigma^{z}_{t}}{2}\frac{\sigma^{z}_{p}}{2}, \label{Heffng0} \end{align} where $\omega_{p/t}=\omega_{p/t,1}-\omega_{p/t,0}$ denote the bare qubit frequencies. The key insight from Eq. (<ref>) is that the interaction between the two qubits is of $\sigma^{z}_{p}\sigma^{z}_{t}$ type. As anticipated, this interaction arises from a two-step perturbative sequence involving virtual transitions in-and-out of the $\left|0_{t},2_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$ state. For example, in a perturbative sequence close to the $\left|1_{t},1_{p}\right\rangle\leftrightarrow\left|0_{t},2_{p}\right\rangle$ anti-crossing, the system exhibits a first virtual transition from the computational state $\left|1_{t},1_{p}\right\rangle$ to the non-computational state $\left|0_{t},2_{p}\right\rangle$ and, subsequently, a second virtual transition back to $\left|1_{t},1_{p}\right\rangle$. Such a sequence preserves the state of the transmon, which explains why the interaction is $\propto\sigma^{z}_{t}$. The dependence of the interaction on $\sigma^{z}_{p}$ arises because the coupling Hamiltonian of Eq. (<ref>) preserves the Cooper-pair parity. §.§ Special case: $\boldsymbol{n_{g,p}}=0.5$ Having derived the qubit interaction at $n_{g,p}=0$, we want to compare the results of Eq. (<ref>) with the $n_{g,p}=0.5$ case. We therefore plot the low-energy spectrum at $n_{g,p}=0.5$ in Fig. <ref>(b). Unlike in the previous case, we find that each depicted energy level exhibits an exact two-fold degeneracy, corresponding to opposite Cooper-pair parity sectors. This finding is consistent with our results of Fig.
<ref>(b), where we pointed out that the levels on the uncoupled PPQ are exactly degenerate at $n_{g,p}=0.5$. In particular, the anti-crossings $\left|1_{t},1_{p}\right\rangle\leftrightarrow\left|0_{t},2_{p}\right\rangle$ and $\left|1_{t},0_{p}\right\rangle\leftrightarrow\left|0_{t},3_{p}\right\rangle$ occur now at the same value of external flux and overlap exactly. Couplings between $\left|1_{t},0_{p}\right\rangle\leftrightarrow\left|0_{t},2_{p}\right\rangle$ and $\left|1_{t},1_{p}\right\rangle\leftrightarrow\left|0_{t},3_{p}\right\rangle$ remain absent (they are forbidden since the states belong to a different parity sector). We will now show that this new scenario at $n_{g,p}=0.5$ will lead to a different effective qubit Hamiltonian compared to Eq. (<ref>). We begin again by projecting the setup Hamiltonian $H$ onto the qubit subspace $\mathcal{P}_{0}$ and onto the states $\left|0_{t},2_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$. The resulting low-energy Hamiltonian reads, \begin{equation} \begin{pNiceArray}{cccc|cc} \omega_{11} & 0 & -i\eta & 0 & \lambda & 0 \\ 0 & \omega_{10} & 0 & i\eta & 0 & -\lambda \\ i\eta & 0 & \omega_{01} & 0 & 0 & 0 \\ 0 & -i\eta & 0 & \omega_{00} & 0 & 0 \\ \hline \lambda & 0 & 0 & 0 & \omega_{02} & 0 \\ 0& -\lambda & 0 & 0& 0 & \omega_{03} \end{pNiceArray}. \end{equation} Here, we have $\lambda=\left\langle 1_{t},1_{p}|H_{c}|0_{t},2_{p}\right\rangle=-\left\langle 1_{t},0_{p}|H_{c}|0_{t},3_{p}\right\rangle$ and $\eta=i\left\langle 1_{t},1_{p}|H_{c}|0_{t},1_{p}\right\rangle=-i\left\langle 1_{t},0_{p}|H_{c}|0_{t},0_{p}\right\rangle$ in a wavefunction gauge for which $(\lambda,\eta)$ are real-valued. By inserting the coupling Hamiltonian in the expressions for the matrix elements, we note that $\eta\propto\left\langle s_{p}|n_{p}|s_{p}\right\rangle-n_{g,p}$. In the previous case when $n_{g,p}=0$, we had $\left\langle s_{p}|n_{p}|s_{p}\right\rangle=0$ and, consequently, $\eta$ vanished. 
In the present case when $n_{g,p}=0.5$ and $E_{J,p}\gtrsim E_{C,p}$, we have $\left\langle s_{p}|n_{p}|s_{p}\right\rangle\neq n_{g,p}$ so that $\eta$ is finite yet gets successively smaller upon increasing $E_{J,p}$. In particular, when $E_{J,p}\gg E_{C,p}$, we have $\left\langle s_{p}|n_{p}|s_{p}\right\rangle\rightarrow n_{g,p}$ so that the contribution of $\eta$ to the low-energy Hamiltonian is negligible. Lastly, we note that due to the degeneracy of the PPQ levels, $\omega_{p,0}=\omega_{p,1}$ and $\omega_{p,2}=\omega_{p,3}$. Hence, the frequencies of the hybrid setup satisfy $\omega_{11}=\omega_{10}$, $\omega_{01}=\omega_{00}$, and $\omega_{02}=\omega_{03}$. We now proceed by integrating out the effects of the non-computational states, $\left|0_{t},2_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$, with a Schrieffer-Wolff transformation. The resulting effective Hamiltonian is of the form, \begin{equation} H^{(0.5)}_{\text{eff}}=\left(\omega_{t}+\frac{g^{zz}_{+}}{2}\right)\frac{\sigma^{z}_{t}}{2}+g^{yz}\,\sigma^{y}_{t}\sigma^{z}_{p}. \end{equation} Contrasting this result with Eq. (<ref>), we note that both terms $\propto\sigma^{z}_{p}$ and $\propto\sigma^{z}_{t}\sigma^{z}_{p}$ have vanished because $\omega_{p}=0$ and $g^{zz}_{-}=0$. As a result, the effective qubit interaction is not of $\sigma^{z}_{t}\sigma^{z}_{p}$ but rather of $\sigma^{y}_{t}\sigma^{z}_{p}$ type. The physical origin of the interaction at $n_{g,p} = 0.5$ is different from the $n_{g,p} = 0$ case, since it arises directly from the finite matrix elements of the charge operators of the parity-protected qubit and the transmon qubits. The non-computational states induce only a renormalization of the transmon frequency through the $g^{zz}_{+}$ contribution in the coefficient of $\sigma^{z}_{t}$. §.§ General case So far, we have seen that the capacitive coupling between the transmon and the parity-protected qubit induces a qubit interaction that is substantially different for $n_{g,p}=0$ and $n_{g,p}=0.5$.
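The statement that $\eta\propto\left\langle s_{p}|n_{p}|s_{p}\right\rangle-n_{g,p}$ shrinks with increasing $E_{J,p}$ is easy to verify numerically within one Cooper-pair parity sector, where the exact degeneracy at $n_{g,p}=0.5$ causes no ambiguity. A sketch (our own illustration with illustrative parameters, restricted to the even-parity sector, where $n_{p}=2m$ behaves as a transmon):

```python
import numpy as np

def even_sector_residual(E_J, E_C=0.25, n_g=0.5, m_max=15):
    """<0_p|n_p|0_p> - n_g in the even Cooper-pair parity sector (n = 2m)."""
    m = np.arange(-m_max, m_max + 1)
    n = 2.0 * m
    H = np.diag(4.0 * E_C * (n - n_g) ** 2)
    for i in range(len(m) - 1):
        H[i, i + 1] = H[i + 1, i] = -E_J / 2   # cos(2*phi) hops m -> m +/- 1
    _, v = np.linalg.eigh(H)
    return v[:, 0] @ (n * v[:, 0]) - n_g

# eta is proportional to this residual; it shrinks as E_J,p grows,
# consistent with <s_p|n_p|s_p> -> n_g,p for E_J,p >> E_C,p
residuals = [abs(even_sector_residual(E_J)) for E_J in (1.0, 3.0, 10.0)]
```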
In a last step, we want to interpolate between those two representative cases. This interpolation is achieved by studying the dependence on the offset charge $n_{g,p}$ of the various matrix elements. Since the procedure for obtaining the effective interaction is otherwise identical to the special cases, we only note that for generic values of $n_{g,p}$ the effective Hamiltonian acquires both a $\sigma^{z}_{t}\sigma^{z}_{p}$ and a $\sigma^{y}_{t}\sigma^{z}_{p}$ interaction term [53], \begin{align} H^{(n_{g,p})}_{\text{eff}}&=\left(\omega_{t}+\frac{g^{zz}_{+}}{2}\right)\frac{\sigma^{z}_{t}}{2}+\left(\omega_{p}+\frac{g^{zz}_{-}}{2}\right)\frac{\sigma^{z}_{p}}{2}+g^{zz}_{-}\,\frac{\sigma^{z}_{t}}{2}\frac{\sigma^{z}_{p}}{2}\nonumber \\ &\quad+g^{y}\,\sigma^{y}_{t}+g^{yz}\,\sigma^{y}_{t}\sigma^{z}_{p}. \end{align} While $g^{zz}_{\pm}$ are defined as in Eq. (<ref>), the definition of $g^{yz}$ is now slightly generalized to $g^{yz}=(\eta'+\eta'')/2$ with $\eta'=i\left\langle 1_{t},1_{p}|H_{c}|0_{t},1_{p}\right\rangle$ and $\eta''=-i\left\langle 1_{t},0_{p}|H_{c}|0_{t},0_{p}\right\rangle$; correspondingly, $g^{y}=(\eta'-\eta'')/2$. The transition from a pure $\sigma^{z}_{t}\sigma^{z}_{p}$ interaction at $n_{g,p}=0$ to a pure $\sigma^{y}_{t}\sigma^{z}_{p}$ interaction at $n_{g,p}=0.5$ is gradual. As for the dependence on the transmon offset charge, we remark that in the deep-transmon regime, $E_{J,t}\gg E_{C,t}$, the qubit interaction is almost independent of $n_{g,t}$. So far, we have derived the effective qubit interaction and have demonstrated that it depends on the anti-crossings with the non-computational states, $\left|0_{t},2_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$. To realize the respective anti-crossings, we note that it is essential that, \begin{equation} \omega_{02}<\omega_{10}. \label{condition} \end{equation} The transmon energy levels are approximated by $\omega_{t,n}\approx\sqrt{8E_{J,t}E_{C,t}}(n+1/2)-E_{J,t}$ while the PPQ energy levels are approximated by $\frac{\omega_{p,2} + \omega_{p,3}}{2} \approx2\sqrt{8E_{J,p}E_{C,p}}-4 E_{J,p}$.
Neglecting the anharmonicity corrections on both qubits, we find that the necessary condition in Eq. (<ref>) simplifies to $2\sqrt{E_{J,p}E_{C,p}}<\sqrt{E_{J,t}E_{C,t}}$. This condition is satisfied for the parameters chosen in Fig. <ref>. § QUANTUM GATES We will now use the effective Hamiltonian for the hybrid PPQ/transmon setup to implement a controlled-phase gate ($\textsf{CZ}_\phi$), which will preserve the Cooper-pair parity irrespective of the detailed pulse sequence. In addition, we will also discuss a complete set of single-qubit gates realized by controllably driving the system in-and-out of protection [37]. In combination with the $\textsf{CZ}_\phi$ gate, these single-qubit gates will permit coherent state transfer, a $\textsf{SWAP}$ operation, between the transmon and the PPQ. §.§ $\textsf{CZ}_\phi$ gate For deriving the $\textsf{CZ}_\phi$ gate protocol, we initially move to the frame that rotates with the bare qubit frequencies, $\tilde{H}^{(n_{g,p})}_{\text{eff}}=U^{\dag}(t)H^{(n_{g,p})}_{\text{eff}}U(t)-iU^{\dag}(t)\dot{U}(t)$ with $U(t)=e^{-i(\omega_{t}\sigma^{z}_{t}+\omega_{p}\sigma^{z}_{p})t/2}$. Within this rotating frame, the effective Hamiltonian reads, \begin{equation} \begin{split} \tilde{H}^{(n_{g,p})}_{\text{eff}}&=\frac{g^{zz}_{+}}{2}\frac{\sigma^{z}_{t}}{2}+\frac{g^{zz}_{-}}{2}\frac{\sigma^{z}_{p}}{2}+g^{zz}_{-}\,\frac{\sigma^{z}_{t}}{2}\frac{\sigma^{z}_{p}}{2} \\ &\quad+[e^{i\omega_{t}t}(g^{yz}\,\sigma^{+}_{t}\sigma^{z}_{p}+g^{y}\,\sigma^{+}_{t})+\text{H.c.}], \label{rotframeHamiltonian} \end{split} \end{equation} where we introduced $\sigma^{\pm}_{}=(\sigma^{x}_{}\pm i\sigma^{y}_{})/2$. We note that the terms $\propto g^{yz}\sigma^{\pm}_{t}\sigma^{z}_{p}$ and $\propto g^{y}\sigma^{\pm}_{t}$ vanish if $n_{g,p}=0$. In this situation, the free evolution of the effective Hamiltonian can implement a $\textsf{CZ}^{10}_\phi$ gate.
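At $n_{g,p}=0$ all remaining rotating-frame terms are mutually commuting $\sigma^{z}$-type operators, so the phase bookkeeping behind the $\textsf{CZ}^{10}_{\phi}$ gate reduces to adding diagonal energies. A minimal sketch (our own illustration, using one consistent $\sigma^{z}/2$ normalization of the ZZ terms and an illustrative coupling strength, with $g^{zz}_{-}=-g^{zz}_{+}$ as holds near the anti-crossing):

```python
import numpy as np

# sigma^z eigenvalues in the basis [|1t,1p>, |1t,0p>, |0t,1p>, |0t,0p>]
st = np.array([1, 1, -1, -1])
sp = np.array([1, -1, 1, -1])

def zz_energies(gzz_plus, gzz_minus):
    # diagonal rotating-frame energies; each sigma^z enters as sigma^z/2
    return gzz_plus * st / 4 + gzz_minus * sp / 4 + gzz_minus * st * sp / 4

g = 0.02                       # illustrative coupling strength
E = zz_energies(-g, g)         # g^{zz}_- = -g^{zz}_+ near the anti-crossing
t_star = np.pi / g             # free-evolution time chosen for phi = pi
phases = np.exp(-1j * E * t_star)
phases = phases / phases[3]    # remove the global phase via |0t,0p>
# only |1t,0p> acquires a relative phase: phases ~ [1, -1, 1, 1]
```

The computation shows that, in this convention, the accumulated relative phase lands entirely on the $\left|1_{t},0_{p}\right\rangle$ state, as described for the gate protocol.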
For executing this $\textsf{CZ}^{10}_\phi$ gate, we carry out a rapid excursion from $\Phi^{\text{ext}}_t\approx 0$ to a flux $\Phi^{\text{ext}}_t\approx \Phi^{\text{ext}}_{*}$ close to the anti-crossing $\left|1_{t},0_{p}\right\rangle\leftrightarrow\left|0_{t},3_{p}\right\rangle$. We then let the system evolve freely for a time $t_{*}$ set by the target phase $\phi$. This free evolution gives rise to a rotation in the space of $\left|1_{t},0_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$. After the time $t_{*}$, the $\left|1_{t},0_{p}\right\rangle$ state will have acquired a finite phase factor and we rapidly return to the idle configuration at $\Phi^{\text{ext}}_t\approx 0$. Because $g^{zz}_{-}\approx-g^{zz}_{+}$ near the anti-crossing, the result of this rapid excursion is a $\textsf{CZ}^{10}_{\phi}$ gate of the form, \begin{equation} \begin{split} \textsf{CZ}^{10}_\phi&=|0_{t}\rangle\langle 0_{t}| \otimes \textsf{I}_{p}+|1_{t}\rangle\langle 1_{t}| \otimes \textsf{P}_{p}, \\ \textsf{P}_{p}&=e^{-i\phi}|0_{p}\rangle\langle 0_{p}|+|1_{p}\rangle\langle 1_{p}|. \end{split} \end{equation} Unlike for the case of two capacitively coupled transmons $t1$ and $t2$, we remark that the phase factor is not acquired by the $\left|1_{t1},1_{t2}\right\rangle$ state but by the $\left|1_{t},0_{p}\right\rangle$ state. Also, as announced at the beginning of this section, we highlight that the Cooper-pair parity is preserved for the full duration of the $\textsf{CZ}^{10}_\phi$ gate. In the protocol for the $\textsf{CZ}^{10}_{\phi}$ gate, we have assumed that the offset charge on the PPQ is gate-tuned to $n_{g,p}=0$. Such a tuning is beneficial as it maximizes the coefficient of the $\sigma^{z}_{t}\sigma^{z}_{p}$ terms, thereby allowing for improved gate speed. Furthermore, the tuning should always be achievable because higher levels of the PPQ are strongly offset charge sensitive and can be used for adjusting $n_{g,p}$. However, the fine-tuning to $n_{g,p}=0$ is not essential for the gate protocol.
To see this, we note that the terms $\propto g^{yz}\sigma^{\pm}_{t}\sigma^{z}_{p}$ and $\propto g^{y}\sigma^{\pm}_{t}$ in Eq. (<ref>), which appear when $n_{g,p}$ is detuned from zero, share a fast-oscillating prefactor $\propto e^{i\omega_{t}t}$. This fast-oscillating prefactor suggests that such terms average to zero when invoking a `rotating-wave approximation'. For making this argument more precise, we have integrated out the fast-oscillating terms to second order in $g^{y}$ and $g^{yz}$ with a time-dependent Schrieffer-Wolff transformation [54, 55]. The resulting modified effective Hamiltonian reads [53], \begin{align} \tilde{H}^{(n_{g,p})}_{\text{eff}}(t)&\approx\left(\frac{g^{zz}_{+}}{2}-\frac{4[\tilde{g}^{y}(t)^{2}+\tilde{g}^{yz}(t)^{2}]}{\omega_{t}}\right)\frac{\sigma^{z}_{t}}{2}+\frac{g^{zz}_{-}}{2}\frac{\sigma^{z}_{p}}{2}\nonumber \\ &\quad+\left(g^{zz}_{-}-\frac{16\tilde{g}^{y}(t)\tilde{g}^{yz}(t)}{\omega_{t}}\right)\frac{\sigma^{z}_{t}}{2}\frac{\sigma^{z}_{p}}{2}, \label{rotframeHamiltonian2} \end{align} with $\tilde{g}(t)=g\sin(\omega_{t}t/2)$. Provided that $g^{y}\ll \omega_{t}$ and $g^{yz}\ll \omega_{t}$, we see that the correction terms to the effective Hamiltonian are indeed negligibly small. For the realistic parameters chosen in Fig. <ref>, we have $g^{y} / (2 \pi) =\SI{345}{\kilo\Hz}$ and $g^{yz}/ (2\pi) =\SI{3.88}{\mega\Hz}$ if $n_{g,p}=0.1$. Single-qubit gates. (a) A generalized PPQ circuit with three circuit elements; a $\cos(2\phi_{p})$ element (blue), a $\cos(\phi_p)$ element (red), and a $\sin(\phi_p)$ element (green). No magnetic flux is threading the gray areas of the circuit. (b) Dependence of the matrix elements $h^{x}$ (resulting from the $\cos(\phi_p)$ element) and $h^{y}$ (resulting from the $\sin(\phi_p)$ element) as a function of $n_{g,p}$. The system parameters are $(E_{J, p}, E_{C, p}) = 2\pi (2.7, 0.18)\,\text{GHz}$ and $(\varepsilon_x/E_{J, p},\varepsilon_y/E_{J, p})=(0.04,0.2)$.
(c) The $\cos(\phi_p)$ element induces rotations around the $x$-axis of the Bloch sphere (red arrow). The $\sin(\phi_p)$ element induces rotations around the $y$-axis of the Bloch sphere. The wavefunctions of the PPQ in the $z$-basis ($|0_{p}\rangle$,$|1_{p}\rangle$) and in the $x$-basis ($|+_{p}\rangle$,$|-_{p}\rangle$) are shown schematically. §.§ Single-qubit gates Having introduced the $\textsf{CZ}_\phi$ gate, we now discuss the implementation of single-qubit gates on the PPQ. For implementing these single-qubit gates, we consider the generalized circuit for a PPQ depicted in Fig. <ref>(a). The circuit comprises not only a $\cos(2\phi_{p})$ element for the tunneling of pairs of Cooper-pairs, but also a $\cos(\phi_{p})$ and $\sin(\phi_{p})$ element that describe the tunneling of single Cooper-pairs. The Hamiltonians for these additional circuit elements are given by, \begin{equation} \begin{split} H^{x}&=-\varepsilon^{x}\,\cos(\phi_{p}), \\ H^{y}&=-\varepsilon^{y}\,\sin(\phi_{p}). \label{single-qubit-perturbations} \end{split} \end{equation} While both additional circuit elements permit single Cooper-pair tunnelings and temporarily lift the qubit protection, they are typically tuned by different control parameters, depending on the experimental implementation of the PPQ [33, 34]. For example, if the PPQ is realized in a nanowire Josephson interferometer, then the sinusoidal term arises if the interferometer junctions are tuned out of balance by local gate electrodes. In contrast, the cosinusoidal term arises when the interferometer magnetic flux is biased away from half flux quantum [32, 33]. We now project the Hamiltonians $H_{p}+H^{x}_{}$ and $H_{p}+H^{y}_{}$ onto the computational subspace of the PPQ. The resulting qubit Hamiltonians read, \begin{align} H_{p}+H^{x}&\rightarrow\delta\omega_{p}\cos(\pi n_{g,p})\,\sigma^{z}_{p}/2+\delta h^{x}\,\sigma^{x}_{p},\nonumber \\ H_{p}+H^{y}&\rightarrow\delta\omega_{p}\cos(\pi n_{g,p})\,\sigma^{z}_{p}/2+\delta h^{y}\sin(\pi n_{g,p})\,\sigma^{y}_{p}.
\label{singlequbitham} \end{align} From this result, we see that the $\cos(\phi_{p})$ and $\sin(\phi_{p})$ elements induce rotations around the $x$- and $y$-axis of the Bloch sphere. The respective matrix elements are given by $\delta h^{x}=\left\langle 0_{p}| H^{x}_{}|1_{p}\right\rangle$ and $\delta h^{y}\sin(\pi n_{g,p})=\left\langle 0_{p}|H^{y}_{}|1_{p}\right\rangle$. The dependence of these matrix elements on the offset charge $n_{g,p}$ is shown in Fig. <ref>(b). Since we can reach any point on the Bloch sphere through a combined rotation around the $x$- and $y$-axis, we conclude that the free time evolution of the Hamiltonians in Eq. (<ref>) can implement a complete set of single-qubit gates. However, we also emphasize that these single-qubit gates break the Cooper-pair parity conservation so that the PPQ is prone to relaxation errors during the operation time of the single-qubit gates. §.§ $\textsf{CNOT}$ and $\textsf{SWAP}$ gate We now combine the proposed method for single-qubit gates with the $\textsf{CZ}^{10}_\phi$ (with $\phi = \pi$) gate to realize a $\textsf{CNOT}_{tp}$ gate with the transmon as control and the PPQ as target by the gate sequence, \begin{equation} \textsf{CNOT}_{tp} = \begin{quantikz}[column sep=0.2cm, row sep = 0.75cm] & \ctrl{1} & \qw \\ & \targ{} & \qw \end{quantikz} =\begin{quantikz}[column sep=0.2cm, row sep = 0.35cm] \qw & \qw &\gate[wires=2]{\textsf{CZ}^{10}} & \qw & \qw\\ \qw & \gate{\textsf{Y}_{\frac{\pi}{2}}} & & \gate{\textsf{Y}_{-\frac{\pi}{2}}} & \qw \end{quantikz} \end{equation} A $\textsf{CNOT}_{pt}$ gate that uses the PPQ as control and the transmon as target is similarly given by $\textsf{CNOT}_{pt}=\textsf{H}_{t}\cdot \textsf{H}_{p} \cdot \textsf{CNOT}_{tp}\cdot \textsf{H}_{t}\cdot \textsf{H}_{p}$ with the Hadamards, $\textsf{H}_{t/p}=(\sigma^{x}_{t/p}+\sigma^{z}_{t/p})/\sqrt{2}$. 
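The $\textsf{CNOT}_{tp}$ decomposition can be verified numerically. Assuming the standard rotation convention $\textsf{Y}_{\theta}=e^{-i\theta\sigma^{y}/2}$ and writing $\textsf{CZ}^{10}_{\pi}$ as the diagonal gate that flips the sign of $\left|1_{t},0_{p}\right\rangle$ (i.e. $\phi=\pi$ in the gate definition above), the circuit reproduces the textbook CNOT matrix exactly:

```python
import numpy as np

def Ry(theta):
    """Y rotation, e^{-i*theta*sigma_y/2} (assumed convention)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

I2 = np.eye(2, dtype=complex)
# basis ordering |t, p>: index = 2*t + p
CZ10 = np.diag([1, 1, -1, 1]).astype(complex)   # phase -1 on |1t,0p>
CNOT_tp = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                    [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# time-ordered circuit: Y_{pi/2} on the PPQ, then CZ^{10}_pi, then Y_{-pi/2}
U = np.kron(I2, Ry(-np.pi / 2)) @ CZ10 @ np.kron(I2, Ry(np.pi / 2))
# U coincides with CNOT_tp with no leftover single-qubit phase corrections
```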
Most notably, the $\textsf{CNOT}_{tp}$ and $\textsf{CNOT}_{pt}$ gates can now be combined to realize a $\textsf{SWAP}=\textsf{CNOT}_{tp}\cdot \textsf{CNOT}_{pt}\cdot \textsf{CNOT}_{tp}$ operation. The operation enables the coherent transfer of quantum information between the transmon and the PPQ. Interestingly, this coherent state transfer also gives a novel read-out method for the PPQ by swapping the quantum information onto the transmon and performing the read-out on the latter. § POSSIBLE ERRORS In the previous sections, we have focused on deriving a scheme for a $\textsf{CZ}_\phi$ gate within our hybrid qubit setup. For our scheme, we have assumed that the Cooper-pair parity on the PPQ is conserved during the gate operation time. An interesting question is whether the gate protocol is modified if errors due to unintentional single Cooper-pair tunneling terms, as given by Eq. (<ref>), are present on the PPQ. §.§ $\boldsymbol{\sin(\phi_{p})}$ errors To address this question, we consider the PPQ at its $n_{g,p}=0$ operation point for optimal gate-speed. We initially consider an error term, $H^{y}=-\varepsilon^{y}\,\sin(\phi_{p})$, with an amplitude $\varepsilon^{y}$ that is small compared to the remaining energy scales of the setup. This $\sin(\phi_p)$ error arises in a PPQ realized by a nanowire Josephson interferometer if the two interferometer junctions are not in balance [32]. Due to the error term, we find that the low-energy Hamiltonian of Eq.
(<ref>) changes to, \begin{equation} H^{(n_{g,p}=0)}_{\text{low}}\rightarrow \begin{pNiceArray}{cccc|cc} \omega_{11} & 0 & 0 & 0 &\lambda' & 0 \\ 0 & \omega_{10} & 0 & 0 & 0 & -\lambda'' \\ 0 & 0 & \omega_{01} & 0 & 0 & \kappa \\ 0 & 0 & 0 & \omega_{00} & \kappa & 0 \\ \hline \lambda' & 0 & 0 & \kappa & \omega_{02} & 0 \\ 0 & -\lambda'' & \kappa & 0 & 0 & \omega_{03} \end{pNiceArray} \end{equation} Here, we introduced the real-valued matrix element $\kappa=\left\langle 0_{t},1_{p}|H^{y}|0_{t},3_{p}\right\rangle=\left\langle 0_{t},0_{p}|H^{y}|0_{t},2_{p}\right\rangle$. Moreover, in accordance with Eq. (<ref>), couplings of states with opposite Cooper-pair parity within the qubit subspace $\mathcal{P}_0$ are found to be absent at $n_{g,p}=0$, since the matrix element $\delta h^{y}\sin(\pi n_{g,p})$ vanishes at this operation point. Next, we integrate out the non-computational states, $\left|0_{t},2_{p}\right\rangle$ and $\left|0_{t},3_{p}\right\rangle$, with a Schrieffer-Wolff transformation and move to the rotating frame of the bare qubit frequencies. The effective rotating frame Hamiltonian of Eq. (<ref>) then modifies to, \begin{align} \tilde{H}^{(n_{g,p}=0)}_{\text{eff}}(t)&=\frac{g^{zz}_{+}}{2}\,\frac{\sigma^{z}_{t}}{2}+\frac{g^{zz}_{-}}{2}\,\frac{\sigma^{z}_{p}}{2}+g^{zz}_{-}\,\frac{\sigma^{z}_{t}}{2}\,\frac{\sigma^{z}_{p}}{2} \\ &\quad+(g^{++}e^{i(\omega_{t}+\omega_{p})t}\,\sigma^{+}_{t}\sigma^{+}_{p}+g^{+-}e^{i(\omega_{t}-\omega_{p})t}\,\sigma^{+}_{t}\sigma^{-}_{p}+\text{H.c.}),\nonumber \end{align} with the coefficients, \begin{align} g^{++}&=\frac{\kappa\lambda'}{2(\omega_{11}-\omega_{02})}, \quad g^{+-}=\frac{\kappa\lambda''}{2(\omega_{03}-\omega_{10})}. \end{align} It is now instructive to compare this result to the case of two capacitively coupled transmons, $t1$ and $t2$, near the operation point of the $\textsf{iSWAP}$ gate [51]. In the latter case, the effective Hamiltonian comprises similar terms, $\propto \sigma^{+}_{t1}\sigma^{-}_{t2}$ and $\propto \sigma^{+}_{t1}\sigma^{+}_{t2}$, that are `rotating' with a factor $e^{i(\omega_{t1}-\omega_{t2})t}$ and `counter-rotating' with a factor $e^{i(\omega_{t1}+\omega_{t2})t}$, respectively.
For $\omega_{t1}\approx\omega_{t2}$, the `counter-rotating' terms, which are fast-oscillating, average to zero within a `rotating-wave approximation'. Only the `rotating' terms, which oscillate slowly, are thus retained in the effective qubit Hamiltonian. In our case, the situation is very different. Because $\omega_{t}\gg\omega_{p}$, both factors, $e^{i(\omega_{t}+\omega_{p})t}$ and $e^{i(\omega_{t}-\omega_{p})t}$, are fast-oscillating. Within a `rotating-wave approximation', we thus expect that both error terms average to zero. To formalize this `rotating-wave approximation' argument, we integrate out the fast-oscillating terms with a time-dependent Schrieffer-Wolff transformation. To second order in $g^{++}$ and $g^{+-}$, we find that [53], \begin{equation} \begin{split} \tilde{H}^{(n_{g,p})}_{\text{eff}}(t)\approx&\left(\frac{g^{zz}_{+}}{2}+\frac{2[\tilde{g}^{xx}(t)-\tilde{g}^{yy}(t)]^{2}}{\omega_{t}}\right)\frac{\sigma^{z}_{t}}{2} \\ &+\left(\frac{g^{zz}_{-}}{2}+\frac{2[\tilde{g}^{xx}(t)-\tilde{g}^{yy}(t)]^{2}}{\omega_{t}}\right)\frac{\sigma^{z}_{p}}{2} \\ &+\left(g^{zz}_{-}+\frac{4[\tilde{g}^{xx}(t)+\tilde{g}^{yy}(t)]^{2}}{\omega_{t}}\right)\frac{\sigma^{z}_{t}}{2}\,\frac{\sigma^{z}_{p}}{2}, \label{rotframeHamiltonian3} \end{split} \end{equation} with $\tilde{g}(t)=g\sin(\omega_{t}t/2)$ and with $g^{xx}=(g^{++}+g^{+-})/2$ and $g^{yy}=(g^{+-}-g^{++})/2$. From this expression for the effective rotating-frame Hamiltonian, we conclude that the mitigation of the effects of $\sin(\phi_{p})$ errors requires us to operate the setup in the regime where $g^{xx}\ll \omega_{t}$ and $g^{yy}\ll \omega_{t}$. §.§ $\boldsymbol{\cos(\phi_{p})}$ errors It is now interesting to compare our results for $\sin(\phi_p)$ errors with $\cos(\phi_p)$ errors that are described by an error term $H^{x}=-\varepsilon^x\,\cos(\phi_{p})$ in the Hamiltonian. Such an error term can arise in an implementation of the PPQ with a nanowire Josephson interferometer if the external flux threading the interferometer loop is detuned from half a flux quantum [32].
In this situation, the low-energy Hamiltonian of Eq. (<ref>) modifies to, \begin{equation} H^{(n_{g,p}=0)}_{\text{low}}\rightarrow \begin{pNiceArray}{cccc|cc} \omega_{11} & \delta h^{x} & 0 & 0 &\lambda' & 0 \\ \delta h^{x} & \omega_{10} & 0 & 0 & 0 & -\lambda'' \\ 0 & 0 & \omega_{01} & \delta h^{x} & 0 & 0 \\ 0 & 0 & \delta h^{x} & \omega_{00} & 0 & 0 \\ \hline \lambda' & 0 & 0 & 0 & \omega_{02} & \chi \\ 0 & -\lambda'' &0 & 0 & \chi & \omega_{03} \end{pNiceArray} \end{equation} with the matrix element $\chi=\left\langle 0_{t},2_{p}|H^{x}|0_{t},3_{p}\right\rangle$. Importantly, we see that the $\cos(\phi_p)$ errors do not lead to off-diagonal terms that couple the matrix blocks representing the qubit subspace $\mathcal{P}_0$ and the non-computational subspace $\{|0_{t},2_{p}\rangle,|0_{t},3_{p}\rangle\}$. Consequently, we note that the $\cos(\phi_p)$ errors primarily induce mixing of opposite-parity states on the PPQ as described by $H^{x}_{\text{eff}}$ in Eq. (<ref>). A cosine error on the PPQ: low-energy spectrum of the hybrid qubit setup for $n_{g, p}=0$ and $(E_{J,t}, E_{J,p},E_{C,t},E_{C,p},E_{C,c}) = 2\pi (12,2.7,0.2,0.15,0.025)\,\text{GHz}$ as a function of the external flux $\Phi_{\text{ext}}$ of the tunable transmon in the presence of a cosine error, $H^{x} =- \varepsilon^x\,\cos(\phi_{p})$ with $\varepsilon^x=0.05 E_{J, p}$. The error term introduces additional anti-crossings between states of opposite Cooper-pair parity. In summary, we have found that the nature of $\sin(\phi_p)$ errors and $\cos(\phi_p)$ errors is different in our hybrid qubit. While the $\sin(\phi_p)$ errors lead primarily to additional two-qubit interactions that become less relevant in the limit when $g^{xx}\ll \omega_{t}$ and $g^{yy}\ll \omega_{t}$, the $\cos(\phi_p)$ errors lead primarily to additional single-qubit terms. Finding strategies for mitigating such flux errors, for example by concatenating multiple imperfect PPQs [35, 32], is an important open challenge of the field.
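The parity structure underlying this distinction can be made concrete in a truncated charge basis. The sketch below is ours (parameter values are hypothetical, not the simulation behind the spectrum figure): a $\cos(2\phi_p)$ element only couples charge states differing by two Cooper pairs and hence commutes with the Cooper-pair parity operator, while a $\cos(\phi_p)$ (or $\sin(\phi_p)$) term breaks that symmetry.

```python
import numpy as np

# charge basis n = -N..N (Cooper pairs); hypothetical PPQ parameters
N, EC, EJ, eps = 10, 0.15, 2.7, 0.05
n = np.arange(-N, N + 1)
dim = n.size

def hop(k):
    """cos(k*phi) in the charge basis: (|n><n+k| + h.c.)/2 shifts the charge by k."""
    M = np.zeros((dim, dim))
    M[np.arange(dim - k), np.arange(k, dim)] = 0.5
    return M + M.T

# protected qubit: charging energy plus a pure cos(2*phi) element
H_p = np.diag(4.0 * EC * n.astype(float) ** 2) - EJ * hop(2)
parity = np.diag(np.where(n % 2 == 0, 1.0, -1.0))  # Cooper-pair parity (-1)^n

# cos(2*phi) conserves parity ...
assert np.allclose(H_p @ parity - parity @ H_p, 0.0)

# ... while a cos(phi) error term breaks it
H_err = H_p - eps * hop(1)
assert not np.allclose(H_err @ parity - parity @ H_err, 0.0)
```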
§ CONCLUSION To conclude, we have proposed a coupling scheme for entangling a parity-protected superconducting qubit with a conventional transmon qubit and discussed coherent state transfer as an application. While our scheme could open the way for using PPQs as quantum memories in a transmon architecture, it could also allow for a comparison of coherence times of the two qubit types within the same device. § ACKNOWLEDGEMENTS We gratefully acknowledge discussions with K. Flensberg and A. Gyenis. This work was supported by the Danish National Research Foundation and the Danish Council for Independent Research, Natural Sciences. MK gratefully acknowledges support from the Villum Foundation (grant 37467) through a Villum Young Investigator grant. We acknowledge support from the Microsoft Corporation. [1] J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Phys. Rev. A 76, 042319 (2007). [2] J. Preskill, Quantum 2, 79 (2018). [3] C. Kraglund Andersen, A. Remm, S. Lazar, S. Krinner, N. Lacroix, G. J. Norris, M. Gabureac, C. Eichler, and A. Wallraff, Nat. Phys. 16, 875 (2020). [4] J. F. Marques, B. M. Varbanov, M. S. Moreira, H. Ali, N. Muthusubramanian, C. Zachariadis, F. Battistel, M. Beekman, N. Haider, W. Vlothuizen, A. Bruno, B. M. Terhal, L. DiCarlo, Nat. Phys. 18, 80 (2022). [5] S. Krinner, N. Lacroix, A. Remm, A. Di Paolo, E. Genois, C. Leroux, C. Hellings, S. Lazar, F. Swiadek, J. Herrmann, G. J. Norris, C. Kraglund Andersen, M. Müller, A. Blais, C. Eichler, and A. Wallraff, arXiv:2112.03708 [quant-ph]. [6] Google Quantum AI, Nature 595, 383 (2021). [7] F. Arute et al., Science 369, 1084 (2020). [8] X. Mi et al., Science 374, 1479 (2021). [9] J. Braumüller, A. H. Karamlou, Y. Yanay, B. Kannan, D. Kim, M. Kjaergaard, A. Melville, B. M. Niedzielski, Y. Sung, A. Vepsäläinen, R. Winik, J. L. Yoder, T. P. Orlando, S. Gustavsson, C. Tahan, and W. D. Oliver, arXiv:2102.11751 [quant-ph]. [10] A.
H. Karamlou, J. Braumüller, Y. Yanay, A. Di Paolo, P. Harrington, B. Kannan, D. Kim, M. Kjaergaard, A. Melville, S. Muschinske, B. Niedzielski, A. Vepsäläinen, R. Winik, J. L. Yoder, M. Schwartz, C. Tahan, T. P. Orlando, S. Gustavsson, and W. D. Oliver, arXiv:2107.05035 [quant-ph]. [11] F. Arute et al., Nature 574, 505 (2019). [12] Y. Wu et al., Phys. Rev. Lett. 127, 180501 (2021). [13] M. Kjaergaard, M. E. Schwartz, J. Braumüller, P. Krantz, J. I.-J. Wang, S. Gustavsson, and W. D. Oliver, Annu. Rev. Condens. Matter Phys. 11, 369 (2020). [14] A. Kitaev, arXiv:cond-mat/0609441 [cond-mat.mes-hall]. [15] D. K. Weiss, Andy C. Y. Li, D. G. Ferguson, and J. Koch, Phys. Rev. B 100, 224507 (2019). [16] P. Brooks, A. Kitaev, and J. Preskill, Phys. Rev. A 87, 052306 (2013). [17] J. M. Dempster, B. Fu, D. G. Ferguson, D. I. Schuster, and J. Koch, Phys. Rev. B 90, 094518 (2014). [18] P. Groszkowski, A. Di Paolo, A. L. Grimsmo, A. Blais, D. I. Schuster, A. A. Houck, and J. Koch, New J. Phys. 20 043053 (2018). [19] A. Di Paolo, A. L. Grimsmo, P. Groszkowski, J. Koch, and A. Blais, New J. Phys. 21 043002 (2019). [20] T. Karzig, C. Knapp, R. M. Lutchyn, P. Bonderson, M. B. Hastings, C. Nayak, J. Alicea, K. Flensberg, S. Plugge, Y. Oreg, C. M. Marcus, M. H. Freedman, Phys. Rev. B 95, 235305 (2017). [21] S. Hoffman, C. Schrade, J. Klinovaja, and D. Loss, Phys. Rev. B 94, 045316 (2016). [22] C. Schrade and L. Fu, Phys. Rev. Lett. 120, 267002 (2018). [23] C. Schrade and L. Fu, Phys. Rev. Lett. 121, 267002 (2018). [24] G. Blatter, V. B. Geshkenbein, and L. B. Ioffe, Phys. Rev. B 63, 174511 (2001). [25] L. B. Ioffe and M. V. Feigel’man, Phys. Rev. B 66, 224503 (2002). [26] B. Douçot, J. Vidal, Phys. Rev. Lett. 88, 227005 (2002). [27] B. Douçot, M. V. Feigel’man, L. B. Ioffe, and A. S. Ioselevich, Phys. Rev. B 71, 024505 (2005). [28] S. Gladchenko, D. Olaya, E. Dupont-Ferrier, B. Douçot, L. B. Ioffe, and M. E. Gershenson, Nature Physics 5, 48 (2008). [29] B. Douçot and L. B. 
Ioffe, Reports on Progress in Physics 75, 072001 (2012). [30] A. Gyenis, A. D. Paolo, J. Koch, A. Blais, A. A. Houck, and D. I. Schuster, PRX Quantum 2, 030101 (2021). [31] W. C. Smith, A. Kou, X. Xiao, U. Vool, and M. H. Devoret, npj Quantum Information 6, 8 (2020). [32] C. Schrade, C. M. Marcus, and A. Gyenis, arXiv:2112.06907 [quant-ph] (2021). [33] T.W. Larsen, M. E. Gershenson, L. Casparis, A. Kringhøj, N. J. Pearson, R. P. G. McNeil, F. Kuemmeth, P. Krogstrup, K. D. Petersson, and C. M. Marcus, Phys. Rev. Lett. 125, 056801 (2020). [34] W. C. Smith, M. Villiers, A. Marquet, J. Palomo, M. R. Delbecq, T. Kontos, P. Campagne-Ibarcq, B. Douçot, Z. Leghtas, arXiv:2010.15488 [quant-ph]. [35] M. T. Bell, J. Paramanandam, L. B. Ioffe, and M. E. Gershenson, Phys. Rev. Lett. 112, 167001 (2014). [36] A. Gyenis, P. S. Mundada, A. Di Paolo, T. M. Hazard, X. You, D. I. Schuster, J. Koch, A. Blais, and A. A. Houck, PRX Quantum 2, 010339 (2021). [37] K. Kalashnikov, W. T. Hsieh, W. Zhang, W.-S. Lu, P. Kamenov, A. Di Paolo, A. Blais, M. E. Gershenson, and M. Bell, PRX Quantum 1, 010307 (2020). [38] P. Campagne-Ibarcq, A. Eickbusch, S. Touzard, E. Zalys-Geller, N. E. Frattini, V. V. Sivak, P. Reinhold, S. Puri, S. Shankar, R. J. Schoelkopf, L. Frunzio, M. Mirrahimi and M. H. Devoret, Nature, 584, 368-372 (2020). [39] V. E. Manucharyan, J. Koch, L. I. Glazman, and M. H. Devoret, Science 326, 113 (2009). [40] N. Earnest, S. Chakram. Y. Lu, N. Irons, R. K. Naik, N. Leung, L. Ocola, D. A. Czaplewski, B. Baker, J. Lawrence, J. Koch, and D. I. Schuster, Phys. Rev. Lett. 120, 150504 (2018). [41] Y.-H. Lin, L. B. Nguyen, N. Grabon, J. San Miguel, N. Pankratova, V. E. Manucharyan, Phys. Rev. Lett. 120, 150503 (2018) [42] L. B. Nguyen, Y.-H. Lin, A. Somoroff, R. Mencia, N. Grabon, and V. E. Manucharyan, Phys. Rev. X 9, 041041 (2019) [43] T. M. Hazard, A. Gyenis, A. Di Paolo, A. T. Asfaw, S. A. Lyon, A. Blais, and A. A. Houck, Phys. Rev. Lett. 122, 010504 (2019). [44] A. Somoroff, Q. 
Ficheux, R. A. Mencia, H. Xiong, R. V. Kuzmin, and V. E. Manucharyan, arXiv:2103.08578 [quant-ph]. [45] F. Hassani, M. Peruzzo, L. N. Kapoor, A. Trioni, M. Zemlicka, and J. M. Fink, arXiv:2202.13917 [cond-mat.mes-hall]. [46] A. R. Klots and L. B. Ioffe, Phys. Rev. B 104, 144502 (2021). [47] M. Abdelhafez, B. Baker, A. Gyenis, P. Mundada, A. A. Houck, D. Schuster, and J. Koch, Phys. Rev. A 101, 022321 (2020). [48] K. N. Nesterov, I. V. Pechenezhskiy, C. Wang, V. E. Manucharyan, M. G. Vavilov, Phys. Rev. A 98, 030301 (2018) [49] Q. Ficheux, L. B. Nguyen, A. Somoroff, H. Xiong, K. N. Nesterov, M. G. Vavilov, and V. E. Manucharyan, Phys. Rev. X 11, 021026 (2021). [50] K. N. Nesterov, Q. Ficheux, V. E. Manucharyan, and M. G. Vavilov, PRX Quantum 2, 020345 (2021). [51] P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver, Applied Physics Reviews 6, 021318 (2019). [52] S. Bravyi, D. P. DiVincenzo, and D. Loss, Ann. Phys. 326, 2793 (2011). [53] In the Supplemental Material, we provide details on the derivations of the effective Hamiltonians presented in the main text. We also provide more details on the energy level structure of the hybrid qubit setup. [54] M. Eckstein, J. H. Mentink, and P. Werner, arXiv:1703.03269 [cond-mat.str-el]. [55] A. Petrescu, C. Le Calonnec, C. Leroux, A. Di Paolo, P. Mundada, S. Sussman, A. Vrajitoarea, A. A. Houck, and A. Blais, arXiv:2107.02343 [quant-ph] Supplemental Material to `Entangling transmons with low-frequency protected superconducting qubits' Andrea Maiani, Morten Kjaergaard, and Constantin Schrade Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark In the Supplemental Material, we provide details on the derivations of the effective Hamiltonians presented in the main text. We also provide more details on the energy level structure of the hybrid qubit setup. 
§ TIME-INDEPENDENT EFFECTIVE HAMILTONIANS In this section of the Supplemental Material, we give details on the derivation of the time-independent effective Hamiltonians for the coupled qubit setup as presented in Eq. (7) of the main text. As a starting point, we project the setup Hamiltonian $H$ onto $\{\left|0_{t},0_{p}\right\rangle,\left|1_{t},0_{p}\right\rangle,\left|0_{t},1_{p}\right\rangle,\left|1_{t},1_{p}\right\rangle,\left|0_{t},2_{p}\right\rangle,\left|0_{t},3_{p}\right\rangle \}$, which correspond to the relevant low-energy states of the uncoupled Hamiltonian $H_{0}$. The resulting projected Hamiltonian is given by $H^{(n_{g,p})}_{\text{low}}=H^{(0)}_{\text{low}}+H^{(1)}_{\text{low}}+H^{(2)}_{\text{low}}$ with, \begin{equation} H^{(0)}_{\text{low}}=\begin{pNiceArray}{cccc|cc} \omega_{11} & 0 & 0 & 0 & 0 & 0 \\ 0 & \omega_{10} & 0 & 0 & 0 & 0 \\ 0 & 0 & \omega_{01}& 0 & 0& 0 \\ 0 & 0 & 0 & \omega_{00} & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & \omega_{02} & 0 \\ 0 & 0 & 0 & 0 & 0 & \omega_{03} \end{pNiceArray}, \quad H^{(1)}_{\text{low}}=\begin{pNiceArray}{cccc|cc} 0 & 0 & -i\eta' & 0 & 0 & 0 \\ 0 & 0 & 0 & i\eta'' & 0 & 0 \\ i\eta' & 0 &0& 0 & 0& 0 \\ 0 & -i\eta'' & 0 & 0& 0 & 0 \\ \hline 0 & 0 & 0 & 0 &0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pNiceArray}, \quad H^{(2)}_{\text{low}}=\begin{pNiceArray}{cccc|cc} 0 & 0 & 0 & 0 & \lambda' & 0 \\ 0 & 0 & 0 & 0 & 0 & -\lambda'' \\ 0 & 0 & 0& 0 & 0& 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \lambda' & 0 & 0 &0 &0 & 0 \\ 0 & -\lambda'' &0 & 0 & 0 & 0 \end{pNiceArray}. \end{equation} Here we have chosen a gauge of the wavefunctions in the uncoupled system such that $\langle 2_{p}|n_{p}|1_{p}\rangle$, $\langle 3_{p}|n_{p}|0_{p}\rangle$, and $\langle 0_{t}|n_{t}|1_{t}\rangle$ are purely imaginary-valued.
As a result of this gauge choice, the following quantities are real-valued, \begin{equation} \begin{split} \lambda'&=\left\langle 1_{t},1_{p}|H_{c}|0_{t},2_{p}\right\rangle, \\ -\lambda''&=\left\langle 1_{t},0_{p}|H_{c}|0_{t},3_{p}\right\rangle, \\ -i\eta'&=\left\langle 1_{t},1_{p}|H_{c}|0_{t},1_{p}\right\rangle, \\ i\eta''&=\left\langle 1_{t},0_{p}|H_{c}|0_{t},0_{p}\right\rangle. \end{split} \end{equation} Next, we integrate out the effects of the non-computational states $\{\left|0_{t},2_{p}\right\rangle,\left|0_{t},3_{p}\right\rangle \}$ by means of a Schrieffer-Wolff transformation [52, 57]. As the generator of the Schrieffer-Wolff transformation, we use \begin{equation} S=\begin{pNiceArray}{cccc|cc} 0 & 0 & 0 & 0 & -\Omega'/\lambda' & 0 \\ 0 & 0 & 0 & 0 & 0 & \Omega''/\lambda'' \\ 0 & 0 & 0& 0 & 0& 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \Omega'/\lambda' & 0 & 0 & 0 &0 & 0 \\ 0 & - \Omega''/\lambda''& 0 & 0 & 0 & 0 \end{pNiceArray} \quad \text{with} \quad \Omega'=\frac{\lambda'^{2}}{\omega_{11}-\omega_{02}}\quad\text{and} \quad \Omega''=\frac{\lambda''^{2}}{\omega_{10}-\omega_{03}}. \end{equation} We note that the generator satisfies $[H^{(0)}_{\text{low}},S]=-H^{(2)}_{\text{low}}$. To second order in $\lambda'$ and $\lambda''$, the Schrieffer-Wolff generator produces an effective Hamiltonian $H^{(n_{g,p})}_{\text{eff}}=H^{(0)}_{\text{low}}+H^{(1)}_{\text{low}}+[H^{(2)}_{\text{low}},S]/2$. Evaluating this expression yields, \begin{equation} \begin{split} H^{(n_{g,p})}_{\text{eff}}=&\left(\omega_{t}+\frac{g^{zz}_{+}}{2}\right)\frac{\sigma^{z}_{t}}{2}+\left(\omega_{p}+\frac{g^{zz}_{-}}{2}\right)\frac{\sigma^{z}_{p}}{2}+g^{zz}_{-}\,\frac{\sigma^{z}_{t}}{2}\,\frac{\sigma^{z}_{p}}{2}+g^{y}\,\sigma^{y}_{t}+g^{yz}\,\sigma^{y}_{t}\sigma^{z}_{p}, \quad \text{with} \\ &g^{zz}_{\pm}=\Omega'\pm\Omega'', \quad g^{y}=\frac{\eta'-\eta''}{2}, \quad g^{yz}=\frac{\eta'+\eta''}{2}. \end{split} \end{equation} This corresponds to the effective Hamiltonian presented in Eq. (7) of the main text. § EFFECTIVE HAMILTONIANS FOR THE TIME-EVOLUTION In this section of the Supplemental Material, we give details on the derivation of the effective Hamiltonian in Eq.
(10) of the main text that approximates the time-evolution of our hybrid qubit system. §.§ Time-dependent Schrieffer-Wolff transformation As a starting point, we provide a brief review of the time-dependent version of the Schrieffer-Wolff transformation. Given some time-dependent Hamiltonian $H(t)$, the time-dependent Schrieffer-Wolff transformation generates an effective Hamiltonian $H_{\text{eff}}(t)$ via a unitary transformation with a time-dependent generator $S(t)$ satisfying $S(t)=-S(t)^{\dagger}$. By using the Baker-Campbell-Hausdorff formula, we can formulate the action of the time-dependent Schrieffer-Wolff unitary transformation on the Hamiltonian $H(t)$ as, \begin{equation} \begin{split} H_{\text{eff}}(t)&=e^{-S(t)}H(t)e^{S(t)}+i\left(\partial_{t}e^{-S(t)}\right)e^{S(t)} \\ &=H(t)+\left[ H(t),S(t) \right]+\frac{1}{2} \left[ \left[ H(t),S(t) \right], S(t) \right]+\dots \\ &\quad-i\dot{S}(t)-\frac{i}{2} \left[ \dot{S}(t), S(t) \right]+\dots \end{split} \end{equation} Next, we choose the time-dependent Hamiltonian to be of the specific form, \begin{equation} \begin{split} H(t)&=H^{(0)}+\xi H^{(2)}(t). \end{split} \end{equation} Here, $H^{(0)}$ is a time-independent unperturbed Hamiltonian and $ H^{(2)}(t)$ is a time-dependent perturbation. The parameter $\xi$ is an aid to count the order in perturbation theory and can be set to $\xi=1$ at the end of the derivation. Besides specifying the form of the time-dependent Hamiltonian, we also require that the generator of the time-dependent Schrieffer-Wolff transformation satisfies the following differential equation, \begin{equation} \begin{split} \xi H^{(2)}(t) + \left[ H^{(0)}, S(t) \right] - i \dot{S}(t) =0.
\end{split} \end{equation} Using these two conditions on the time-dependent Hamiltonian and the generator $S(t)$, we find that the expression for the effective Hamiltonian $H_{\text{eff}}(t)$ can be simplified to, \begin{equation} \begin{split} H_{\text{eff}}(t)&=H(t)+\left[ H(t),S(t) \right]+\frac{1}{2} \left[ \left[ H(t),S(t) \right], S(t) \right]-i\dot{S}(t)-\frac{i}{2} \left[ \dot{S}(t), S(t) \right]+\dots \\ &=H^{(0)}+\frac{1}{2} \left[ \xi H^{(2)}(t),S(t) \right]+\frac{1}{2} \left[ \left[ \xi H^{(2)}(t),S(t) \right], S(t) \right]+\dots \end{split} \end{equation} We now proceed by assuming that the generator $S(t)$ can be expanded in a perturbative series, \begin{equation} \begin{split} S(t)&=\xi S_{1}(t)+\xi^{2} S_{2}(t) + \cdots \end{split} \end{equation} Inserting this series into the expression for the effective Hamiltonian $H_{\text{eff}}$ and only retaining terms up to order $\xi^{2}$, we find that, \begin{equation} \begin{split} H_{\text{eff}}(t)&=H^{(0)}+\frac{\xi^{2}}{2} \left[ H^{(2)}(t),S_{1}(t) \right] +\mathcal{O}(\xi^{3}). \end{split} \end{equation} Finally, we set $\xi=1$ and arrive at the following form of the effective Hamiltonian, \begin{equation} \begin{split} H_{\text{eff}}(t)&=H^{(0)}+\frac{1}{2} \left[ H^{(2)}(t),S_{1}(t) \right]. \end{split} \end{equation} §.§ Rotating frame for the hybrid qubit setup We now want to apply the time-dependent Schrieffer-Wolff transformation to our hybrid qubit setup. For that purpose, it is helpful to initially move to a rotating reference frame, which is achieved by separating the full qubit entangling Hamiltonian into, \begin{equation} \begin{split} H^{(n_{g,p})}_{\text{eff}}&=H^{(0)}+H^{(2)}, \quad \text{with} \\ H^{(0)}&=\left(\omega_{t}+\frac{g^{zz}_{+}}{2}\right)\frac{\sigma^{z}_{t}}{2}+\left(\omega_{p}+\frac{g^{zz}_{-}}{2}\right)\frac{\sigma^{z}_{p}}{2}+g^{zz}_{-}\,\frac{\sigma^{z}_{t}}{2}\,\frac{\sigma^{z}_{p}}{2}, \quad H^{(2)}=g^{y}\,\sigma^{y}_{t}+g^{yz}\,\sigma^{y}_{t}\sigma^{z}_{p}, \end{split} \end{equation} and applying the following time-dependent unitary transformation, \begin{equation} U(t)=e^{-i(\omega_{t}\sigma^{z}_{t}+\omega_{p}\sigma^{z}_{p})t/2}. \end{equation}
\end{equation} This transformation yields the qubit entangling Hamiltonian in the frame that rotates at the bare qubit frequencies with components, \begin{equation} \begin{split} \tilde{H}^{(0)}&=U^{\dag}(t)H^{(0)}U(t)-iU^{\dag}(t)\dot{U}(t)=\frac{g^{zz}_{+}}{2} \frac{ \sigma^{z}_{t} \frac{g^{zz}_{-}}{2} \frac{ \sigma^{z}_{p} \sigma^{z}_{t} \frac{ \sigma^{z}_{p} } ,\\ \tilde{H}^{(2)}(t)&=U^{\dag}(t)H^{(2)}U(t)-iU^{\dag}(t)\dot{U}(t)= g^{yz}\cos(\omega_{t}t)\,\sigma^{y}_{t}\sigma^{z}_{p} . \end{split} \end{equation} The rotating frame Hamiltonian can also be explicitly written in matrix form as, \begin{equation} \begin{split} \tilde{H}^{(0)}&= \frac{1}{4} \begin{pmatrix} g^{zz}_{+}+2g^{zz}_{-} & 0 &0 & 0 \\ 0 & g^{zz}_{+}-2g^{zz}_{-} & 0 &0 \\ 0 & 0 & -g^{zz}_{+}& 0 \\ 0 & 0 & 0 & -g^{zz}_{+} \end{pmatrix}, \\ \tilde{H}^{(2)}(t)&= \begin{pmatrix} 0 & 0 &-i (g^{y}+g^{yz}) e^{i\omega_{t}t} & 0 \\ 0 & 0 & 0 &-i (g^{y}-g^{yz}) e^{i\omega_{t}t} \\ i (g^{y}+g^{yz})e^{-i\omega_{t}t} & 0 &0& 0 \\ 0 & i (g^{y}-g^{yz}) e^{-i\omega_{t}t} & 0 &0 \end{pmatrix}. \end{split} \end{equation} §.§ Time-dependent Schrieffer-Wolff transformation for hybrid qubit setup We now want to perform a time-dependent Schrieffer-Wolff transformation that eliminates the fast-oscillating terms, $\propto e^{\pm i\omega_{t}t}$, in $\tilde{H}^{(2)}(t)$ to second order in $g^{y}$ and $g^{yz}$. 
We therefore introduce the following Schrieffer-Wolff generator, \begin{equation} S_{1}(t)=\begin{pmatrix} 0 & 0 &f_{1}(t)& 0 \\ 0 & 0 & 0 &f_{2}(t) \\ -f_{1}(t)^{*} & 0 &0& 0 \\ 0 &-f_{2}(t)^{*} & 0 &0 \end{pmatrix} \end{equation} with the functions, \begin{equation} f_{1}(t)=-\frac{2i(g^{y}+g^{yz})\left(e^{-i(g^{zz}_{+}+g^{zz}_{-})t/2}-e^{i\omega_{t}t}\right)}{g^{zz}_{-}+g^{zz}_{+}+2\omega_{t}}, \quad f_{2}(t)=-\frac{2i(g^{y}-g^{yz})\left(e^{-i(g^{zz}_{+}-g^{zz}_{-})t/2}-e^{i\omega_{t}t}\right)}{g^{zz}_{+}-g^{zz}_{-}+2\omega_{t}}. \end{equation} This generator satisfies, \begin{equation} S_{1}(t)=-S_{1}(t)^{\dagger}, \quad \tilde{H}^{(2)}(t) + \left[ \tilde{H}^{(0)}, S_{1}(t) \right] - i \dot{S}_{1}(t) =0, \quad \text{and} \quad S_{1}(0)=0. \end{equation} Moreover, the generator allows us to compute the effective correction term to $\tilde{H}^{(0)}$ to second order in $g^{y}$ and $g^{yz}$, \begin{equation} \begin{split} \frac{1}{2} \left[ \tilde{H}^{(2)}(t),S_{1}(t) \right]= \begin{pmatrix} h_{1}(t)& 0 &0 & 0 \\ 0 & h_{2}(t) & 0 &0 \\ 0 & 0 & -h_{1}(t) & 0 \\ 0 & 0 & 0 & -h_{2}(t) \end{pmatrix}, \end{split} \end{equation} with the functions, \begin{equation} \begin{split} &h_{1}(t) = \frac{4(g^{y}+g^{yz})^{2}\sin([g^{zz}_{-}+g^{zz}_{+}+2\omega_{t}]t/4)^{2}}{g^{zz}_{-}+g^{zz}_{+}+2\omega_{t}}, \quad h_{2}(t) = -\frac{4(g^{y}-g^{yz})^{2}\sin([g^{zz}_{-}-g^{zz}_{+}-2\omega_{t}]t/4)^{2}}{g^{zz}_{-}-g^{zz}_{+}-2\omega_{t}}. \end{split} \end{equation} Provided that $\omega_{t}\gg g^{zz}_{\pm}$, we neglect the terms in the denominators and sine functions that are $\propto g^{zz}_{\pm}$. We then add the correction term to $\tilde{H}^{(0)}$, which yields the full effective Hamiltonian, \begin{equation} \begin{split} \tilde{H}^{(n_{g,p})}_{\text{eff}}(t)&\approx \left( \frac{g^{zz}_{+}}{2}+\frac{4[(g^{y})^{2}+(g^{yz})^{2}]\sin(\omega_{t}t/2)^{2}}{\omega_{t}} \right) \frac{\sigma^{z}_{t}}{2}+\frac{g^{zz}_{-}}{2}\,\frac{\sigma^{z}_{p}}{2} \\ &\quad+\left( g^{zz}_{-}+\frac{16g^{y}g^{yz}\sin(\omega_{t}t/2)^{2}}{\omega_{t}} \right) \frac{\sigma^{z}_{t}}{2}\,\frac{\sigma^{z}_{p}}{2}. \end{split} \end{equation} This concludes our derivation of the effective Hamiltonian for the time-evolution of our hybrid qubit setup.
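The generator entry for the $(1,3)$ block can be cross-checked numerically (our sketch, with arbitrary sample parameters): writing that entry as $f_{1}(t)=\frac{c_{1}}{\omega_{t}+\Delta}\left(e^{-i\Delta t}-e^{i\omega_{t}t}\right)$ with $c_{1}=-i(g^{y}+g^{yz})$ and $\Delta=(g^{zz}_{+}+g^{zz}_{-})/2$, it solves the stated differential equation with $f_{1}(0)=0$, and the diagonal correction $-\mathrm{Re}[c_{1}e^{i\omega_{t}t}f_{1}^{*}(t)]$ reproduces $h_{1}(t)$:

```python
import numpy as np

# arbitrary sample parameters (ours, not from the paper)
w_t, gzz_p, gzz_m, g_y, g_yz = 50.0, 0.3, 0.2, 0.8, 0.5
c1 = -1j * (g_y + g_yz)          # rotating-frame coupling of the (1,3) block
Delta = (gzz_p + gzz_m) / 2.0    # level splitting of that block
nu = w_t + Delta

f1 = lambda t: (c1 / nu) * (np.exp(-1j * Delta * t) - np.exp(1j * w_t * t))
df1 = lambda t: (c1 / nu) * (-1j * Delta * np.exp(-1j * Delta * t)
                             - 1j * w_t * np.exp(1j * w_t * t))

t = np.linspace(0.0, 2.0, 201)
# generator equation for this matrix element: c1*exp(i*w_t*t) + Delta*f1 - i*df1 = 0
residual = c1 * np.exp(1j * w_t * t) + Delta * f1(t) - 1j * df1(t)
assert np.allclose(residual, 0.0, atol=1e-9)

# diagonal correction from (1/2)[H2, S1] reproduces h1(t) as given in the text
h1 = (4 * (g_y + g_yz) ** 2 * np.sin((gzz_m + gzz_p + 2 * w_t) * t / 4) ** 2
      / (gzz_m + gzz_p + 2 * w_t))
assert np.allclose(-np.real(c1 * np.exp(1j * w_t * t) * np.conj(f1(t))), h1, atol=1e-9)
```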
§ MORE DETAILS ON THE POSSIBLE ERRORS In this section of the Supplemental Material, we provide details on the derivation of the effective Hamiltonians presented in Eq. (15) and Eq. (16) of the main text. These Hamiltonians account for the presence of $\sin(\phi_p)$ error terms in our hybrid qubit setup. First, we note that the derivations for the effective Hamiltonians with the $\sin(\phi_p)$ error terms are very similar to the derivations for the effective Hamiltonians without the $\sin(\phi_p)$ error terms. Since the latter derivations have been discussed in great detail in the previous sections of the Supplemental Material, we will focus only on the main modifications. §.§ Time-independent effective Hamiltonian For deriving the time-independent effective Hamiltonian of Eq. (15), we note that the low-energy Hamiltonian at $n_{g,p}=0$ is given by $H^{(n_{g,p}=0)}_{\text{low}}=H^{(0)}_{\text{low}}+H^{(2)}_{\text{low}}$ with, \begin{equation} H^{(0)}_{\text{low}}=\begin{pNiceArray}{cccc|cc} \omega_{11} & 0 & 0 & 0 & 0 & 0 \\ 0 & \omega_{10} & 0 & 0 & 0 & 0 \\ 0 & 0 & \omega_{01}& 0 & 0& 0 \\ 0 & 0 & 0 & \omega_{00} & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & \omega_{02} & 0 \\ 0 & 0 & 0 & 0 & 0 & \omega_{03} \end{pNiceArray}, \quad H^{(2)}_{\text{low}}=\begin{pNiceArray}{cccc|cc} 0 & 0 & 0 & 0 & \lambda' & 0 \\ 0 & 0 & 0 & 0 & 0 & -\lambda'' \\ 0 & 0 & 0& 0 & 0& \kappa \\ 0 & 0 & 0 & 0 & \kappa & 0 \\ \hline \lambda' & 0 & 0 &\kappa &0 & 0 \\ 0 & -\lambda'' &\kappa & 0 & 0 & 0 \end{pNiceArray}. \end{equation} Here, the matrix element $\kappa=\left\langle 0_{t},1_{p}|H^{y}|0_{t},3_{p}\right\rangle=\left\langle 0_{t},0_{p}|H^{y}|0_{t},2_{p}\right\rangle$ is real-valued (in the same gauge as the one used in the first section of the Supplemental Material) and accounts for the presence of the $\sin(\phi_p)$ error terms.
Next, we write down the generator of the Schrieffer-Wolff transformation, \begin{equation} S=\begin{pmatrix} 0 & 0 & 0 & 0 & -\Omega'/\lambda' & 0 \\ 0 & 0 & 0 & 0 & 0 & \Omega''/\lambda'' \\ 0 & 0 & 0& 0 &0& \Gamma' \\ 0 & 0 & 0 & 0 & \Gamma'' & 0 \\ \Omega'/\lambda' & 0 & 0 & -\Gamma'' &0 & 0 \\ 0 & - \Omega''/\lambda''& -\Gamma' & 0 & 0 & 0 \end{pmatrix} \quad \text{with} \quad \Gamma'=\frac{\kappa}{\omega_{03}-\omega_{01}}\quad\text{and} \quad \Gamma''=\frac{\kappa}{\omega_{02}-\omega_{00}}. \end{equation} The generator satisfies $[H^{(0)}_{\text{low}},S]=-H^{(2)}_{\text{low}}$ and yields the effective Hamiltonian, $H^{(n_{g,p}=0)}_{\text{eff}}=H^{(0)}_{\text{low}}+[H^{(2)}_{\text{low}},S]/2$. Projected onto the qubit subspace $\mathcal{P}_0$, the effective Hamiltonian evaluates to, \begin{equation} \begin{split} H^{(n_{g,p}=0)}_{\text{eff}}=&\left(\omega_{t}+\frac{g^{zz}_{+}}{2}\right)\frac{\sigma^{z}_{t}}{2}+\left(\omega_{p}+\frac{g^{zz}_{-}}{2}\right)\frac{\sigma^{z}_{p}}{2}+g^{zz}_{-}\,\frac{\sigma^{z}_{t}}{2}\,\frac{\sigma^{z}_{p}}{2} \\ &+g^{xx}\,\sigma^{x}_{t}\sigma^{x}_{p}+g^{yy}\,\sigma^{y}_{t}\sigma^{y}_{p}, \end{split} \end{equation} with the coefficients, \begin{equation} g^{xx}=\frac{\kappa}{4} \left( \frac{\lambda'}{\omega_{11}-\omega_{02}}+\frac{\lambda''}{\omega_{03}-\omega_{10}} \right), \quad g^{yy}=\frac{\kappa}{4}\left( \frac{\lambda'}{\omega_{02}-\omega_{11}}+\frac{\lambda''}{\omega_{03}-\omega_{10}} \right). \end{equation} Here, we have dropped terms $\propto 1/(\omega_{01}-\omega_{03})$ and $\propto 1/(\omega_{00}-\omega_{02})$ due to the large separation of the respective energy levels. §.§ Time-dependent effective Hamiltonian For deriving the time-dependent effective Hamiltonian of Eq. (16), we transform the effective Hamiltonian, $H^{(n_{g,p}=0)}_{\text{eff}}$, to the frame that rotates with the bare qubit frequencies.
The rotating-frame Hamiltonian is of the form $\tilde{H}^{(0)}+\tilde{H}^{(2)}(t)$ with the two contributions, \begin{equation} \begin{split} \tilde{H}^{(0)}&= \frac{1}{4} \begin{pmatrix} g^{zz}_{+}+2g^{zz}_{-} & 0 &0 & 0 \\ 0 & g^{zz}_{+}-2g^{zz}_{-} & 0 &0 \\ 0 & 0 & -g^{zz}_{+}& 0 \\ 0 & 0 & 0 & -g^{zz}_{+} \end{pmatrix}, \\ \tilde{H}^{(2)}(t)&= \begin{pmatrix} 0 & 0 &0 & (g^{xx}-g^{yy}) e^{i(\omega_{t}+\omega_{p})t} \\ 0 & 0 & (g^{xx}+g^{yy}) e^{i(\omega_{t}-\omega_{p})t} &0 \\ 0& (g^{xx}+g^{yy}) e^{-i(\omega_{t}-\omega_{p})t} &0& 0 \\ (g^{xx}-g^{yy}) e^{-i(\omega_{t}+\omega_{p})t} &0& 0 &0 \end{pmatrix}. \end{split} \end{equation} We now introduce the generator of the time-dependent Schrieffer-Wolff transformation, \begin{equation} S_{1}(t)=\begin{pmatrix} 0 & 0 &0& f_{1}(t) \\ 0 & 0 & f_{2}(t) &0 \\ 0 & -f_{2}(t)^{*} &0& 0 \\ -f_{1}(t)^{*} &0 & 0 &0 \end{pmatrix}, \end{equation} with the functions, \begin{equation} f_{1}(t)=\frac{(g^{xx}-g^{yy})\left(e^{-i(g^{zz}_{+}+g^{zz}_{-})t/2}-e^{i(\omega_{t}+\omega_{p})t}\right)}{\omega_{t}+\omega_{p}+(g^{zz}_{+}+g^{zz}_{-})/2}, \quad f_{2}(t)=\frac{(g^{xx}+g^{yy})\left(e^{-i(g^{zz}_{+}-g^{zz}_{-})t/2}-e^{i(\omega_{t}-\omega_{p})t}\right)}{\omega_{t}-\omega_{p}+(g^{zz}_{+}-g^{zz}_{-})/2}. \end{equation} This generator satisfies, \begin{equation} S_{1}(t)=-S_{1}(t)^{\dagger}, \quad \tilde{H}^{(2)}(t) + \left[ \tilde{H}^{(0)}, S_{1}(t) \right] - i \dot{S}_{1}(t) =0, \quad \text{and} \quad S_{1}(0)=0. \end{equation} The generator yields the effective Hamiltonian, $\tilde{H}^{(n_{g,p}=0)}_{\text{eff}}(t)=\tilde{H}^{(0)}+[\tilde{H}^{(2)}(t),S_{1}(t)]/2$, which evaluates to, \begin{equation} \begin{split} \tilde{H}^{(n_{g,p}=0)}_{\text{eff}}(t)&\approx \left(\frac{g^{zz}_{+}}{2}+\frac{2(g^{xx}-g^{yy})^{2}\sin(\omega_{t}t/2)^{2}}{\omega_{t}} \right) \frac{\sigma^{z}_{t}}{2}+\left(\frac{g^{zz}_{-}}{2}+\frac{2(g^{xx}-g^{yy})^{2}\sin(\omega_{t}t/2)^{2}}{\omega_{t}} \right) \frac{\sigma^{z}_{p}}{2} \\ &\quad+\left( g^{zz}_{-}+\frac{4(g^{xx}+g^{yy})^{2}\sin(\omega_{t}t/2)^{2}}{\omega_{t}} \right)\,\frac{\sigma^{z}_{t}}{2}\,\frac{\sigma^{z}_{p}}{2}. \end{split} \end{equation} This concludes our derivation of the effective Hamiltonians presented in Eq. (15) and Eq. (16) of the main text.
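The time-independent step of this section is easy to sanity-check numerically (our sketch; all numerical values are arbitrary): the generator with the $\Gamma'$, $\Gamma''$ entries must satisfy $[H^{(0)}_{\text{low}},S]=-H^{(2)}_{\text{low}}$ exactly, and the resulting effective Hamiltonian then acquires the $\sigma^{+}_{t}\sigma^{+}_{p}$- and $\sigma^{+}_{t}\sigma^{-}_{p}$-type matrix elements within the qubit block:

```python
import numpy as np

# arbitrary sample energies and couplings; basis |11>,|10>,|01>,|00>,|02>,|03>
w11, w10, w01, w00, w02, w03 = 7.2, 5.1, 2.3, 0.1, 11.0, 13.5
lam1, lam2, kappa = 0.30, 0.25, 0.10   # lambda', lambda'', kappa

H0 = np.diag([w11, w10, w01, w00, w02, w03])
H2 = np.zeros((6, 6))
H2[0, 4] = H2[4, 0] = lam1
H2[1, 5] = H2[5, 1] = -lam2
H2[2, 5] = H2[5, 2] = kappa
H2[3, 4] = H2[4, 3] = kappa

Om1 = lam1**2 / (w11 - w02)   # Omega'
Om2 = lam2**2 / (w10 - w03)   # Omega''
G1 = kappa / (w03 - w01)      # Gamma'
G2 = kappa / (w02 - w00)      # Gamma''

S = np.zeros((6, 6))
S[0, 4], S[4, 0] = -Om1 / lam1, Om1 / lam1
S[1, 5], S[5, 1] = Om2 / lam2, -Om2 / lam2
S[2, 5], S[5, 2] = G1, -G1
S[3, 4], S[4, 3] = G2, -G2

# generator condition [H0_low, S] = -H2_low holds exactly
assert np.allclose(H0 @ S - S @ H0, -H2)

# H_eff = H0 + [H2, S]/2 acquires |11><00|- and |10><01|-type couplings
Heff = H0 + (H2 @ S - S @ H2) / 2
assert abs(Heff[0, 3]) > 1e-4 and abs(Heff[1, 2]) > 1e-4
```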
§ NUMERICAL SCHRIEFFER-WOLFF TRANSFORMATION In this section of the Supplemental Material, we provide more details on the theory and application of the Schrieffer-Wolff transformation for the coupled-qubit problem. The numerical Schrieffer-Wolff method is used in the text to derive the 6-level effective Hamiltonians by integrating out the effect of the high-energy levels. Let us consider the Hamiltonian of the coupled system $H = H_0 + H_c$ with $H_0 = H_t \otimes \mathbb{I} + \mathbb{I} \otimes H_p$ being the decoupled system Hamiltonian and $H_c$ the capacitive coupling Hamiltonian. The Hilbert space of the system can be decomposed as $\mathcal{H} = \mathcal{P}_{0} \oplus \mathcal{Q}_0 = \mathcal{P} \oplus \mathcal{Q}$, where $\mathcal{P}_0$ and $\mathcal{P}$ are the low-energy Hilbert spaces of the decoupled and coupled system. The computational space of the system is identified with $\mathcal{P}_0$. For this reason, we are interested in finding a unitary $U \in \mathrm{End}(\mathcal{H})$ that maps the low-energy subspace of the interacting Hamiltonian $\mathcal{P}$ to that of the uncoupled one, $\mathcal{P}_{0}$. In other words, defining $P$ and $P_0$ as the orthogonal projectors on the low-energy subspaces \begin{align} P = \sum_{i}^{d} \ket{\psi_i}\bra{\psi_i} \\ P_0 = \sum_{i}^{d} \ket{\psi_i^0}\bra{\psi_i^0} \end{align} where $\ket{\psi_i}$ and $\ket{\psi_i^0}$ are, respectively, the coupled and decoupled system eigenstates and $d=4$, we are seeking a unitary $U$ that satisfies \begin{equation} U P U^\dag = P_0 \qquad \Rightarrow \qquad U P = P_0 U \,. \end{equation} This is achieved by the Schrieffer-Wolff transformation [56].
One way to see this is as a direct rotation that can be written as the square root of the product of two reflections, \begin{equation} U = \sqrt{\mathcal{M}_{\mathcal{P}_0}\mathcal{M}_{\mathcal{P}}} = \sqrt{(P_0 - Q_0)(P - Q)} = \sqrt{(2 P_0 - \mathbb{I})(2 P - \mathbb{I})}, \end{equation} where $Q$ and $Q_0$ are the orthogonal projectors on the high-energy subspaces and $\mathcal{M}_j$ are the reflections about the low-energy subspaces. The low-energy Hamiltonian is then \begin{equation} H_\mathrm{eff} = P_0 U H U^\dag P_0 = U P H P U^\dag. \end{equation} An efficient way to tackle the problem numerically is the procedure developed in [58, 59], which we now discuss. Since the final objective is the effect of the Schrieffer-Wolff transformation only in the low-energy sector, we can focus on the operator product \begin{equation} P_0 U = U P = \sum_{ij} A_{ij} \ket{\psi_i^0}\bra{\psi_j} = A, \end{equation} where $A \in \mathrm{Hom}\qty(\mathcal{P}_0, \mathcal{P})$ is a rank-$d$ operator. For later use, we also introduce the rank-$d$ operator $B \in \mathrm{Hom}\qty(\mathcal{P}_0, \mathcal{P})$ defined as $B = P_0 P$. Since both operators belong to $\mathrm{Hom}\qty(\mathcal{P}_0, \mathcal{P})$, $P_0$ acts as a left identity and $P$ as a right identity. Moreover, $A$ and $B$ are related; indeed, \begin{equation} (U P)^2 = (P_0 A P)^2 = P_0 A P P_0 A P = P_0 A B^\dag A P = A B^\dag A, \end{equation} while at the same time \begin{equation} ( U P)^2 = P_0 U^2 P = P_0 (P_0 - Q_0)(P - Q) P = P_0 P = B, \end{equation} and therefore $A B^\dag A = B$. Using the singular value decomposition, we can write $B = W \Sigma V^\dag$, where $W$ and $V$ are unitaries and $\Sigma$ is a diagonal matrix. The equation $A B^\dag A = B$ is then solved by $A = W V^\dag$.
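As an illustration of the $A=WV^{\dagger}$ construction, here is a minimal pure-Python sketch on a hypothetical three-level Hamiltonian, chosen so that every quantity is available in closed form (a real implementation would instead call a numerical eigensolver and an SVD routine):

```python
import math

# Toy model (illustration only): H0 = diag(0, 1, 10); the perturbation
# couples levels 0 and 2 with strength g. Low-energy (computational)
# space: span{|0>, |1>}, so d = 2.
g, E0, E1, E2 = 0.5, 0.0, 1.0, 10.0

# Exact low eigenvalue of the coupled {|0>, |2>} block.
mean = (E0 + E2) / 2
half = math.hypot((E0 - E2) / 2, g)
E_low = mean - half

# Its eigenvector: E0*x + g*y = E_low*x fixes y/x, then normalize.
y_over_x = (E_low - E0) / g
nrm = math.hypot(1.0, y_over_x)
psi_low = [1.0 / nrm, 0.0, y_over_x / nrm]   # in the {|0>,|1>,|2>} basis
psi_mid = [0.0, 1.0, 0.0]                    # |1> is left untouched

# B_ij = <psi_i^0 | psi_j> with psi_1^0 = |0>, psi_2^0 = |1>.
B = [[psi_low[0], psi_mid[0]],
     [psi_low[1], psi_mid[1]]]

# Here B is already diagonal with positive entries, so its SVD
# B = W Sigma V^dag has W = V = I, hence A = W V^dag = I.
A = [[1.0, 0.0], [0.0, 1.0]]

# H_eff = A * diag(exact low eigenvalues) * A^dag = diag(E_low, E1).
H_eff = [[E_low, 0.0], [0.0, E1]]

# Sanity check against second-order perturbation theory:
# E_low ~ E0 - g^2 / (E2 - E0).
assert abs(H_eff[0][0] - (E0 - g * g / (E2 - E0))) < 1e-3
```

In this toy the overlap matrix $B$ is already diagonal with positive entries, so $A$ reduces to the identity; with several coupled low-energy states, $B$ acquires off-diagonal elements and the SVD supplies a genuine rotation back to the computational basis.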
In a practical implementation, starting from the Hamiltonians $H_0$ and $H$ in an arbitrary basis, we can calculate the incomplete low-energy $d$-dimensional orthogonal eigenbases $V_0$ and $V$ ($n \times d$ matrices) with eigenvalue matrices $W_0$ and $W$ ($d \times d$ diagonal matrices) by using the Lanczos algorithm. With these, we can calculate $B = V_0^\dag V$, which is a $d \times d$ matrix, and perform an SVD to calculate the unitary $A$. The effective low-energy Hamiltonian in the computational basis is then $H_\mathrm{eff} = A W A^\dag$. The drawback of this method is that we lose the information on the dressed states, which is encoded in the matrix $U$. §.§ Smooth gauge for parametric sweeps The purpose of the Schrieffer-Wolff transformation is to derive an effective Hamiltonian $H_\mathrm{eff}$ written in the basis of the unperturbed system $H_0$. Since $H_\mathrm{eff}$ is not an observable of the system, the effective Hamiltonian is not unique but depends on the choice of gauge for the unperturbed eigenstates $\qty{\ket{\psi_i^0}}$. This means that it is crucial to fix a smooth gauge when sweeping over a parameter that also appears in the unperturbed Hamiltonian $H_0$. Notice that, when numerical diagonalization is employed, control over the global phase of the eigenvectors is not guaranteed. Therefore, a smooth gauging algorithm needs to be applied after the diagonalization. In the case addressed in this paper, we are free to fix the gauge of the two qubits independently since they are decoupled in $H_0$. When the charge offsets $n_{g,t}$ and $n_{g,p}$ are both zero, it is possible to fix the gauge consistently by imposing that the wavefunction is real at a reference point. A convenient choice is to pick $\phi=0$ for $\ket{0_p}$, $\ket{1_p}$ and $\ket{0_t}$, and $\phi = \pi/4$ for $\ket{1_t}$. To keep the gauge smooth during the sweep of the offset charge, we use the following procedure.
We first discretize the $n_{g, i}$ axes into a set of $N$ points in the interval $\qty[0, k]$, where $k$ is $1$ for the transmon and $2$ for the parity qubit. For simplicity, we assume a homogeneous discretization with inter-site distance $\Delta n = k / (N-1)$. First, an arbitrary gauge fixing, like the one described in the previous paragraph, is applied to the wavefunction at point $i=0$. Next, for each wavefunction we impose that the overlap integral with the previous point is real. In other words, we calculate the fixing phase $\beta_{n,i}$ as \begin{equation} \beta_{n, i} = \sum_{j=0}^{i-1} \Im \ln \bra{\psi_{n, j}^0}\ket{\psi_{n, j+1}^0} \end{equation} and then the wavefunctions are updated as $\ket{\tilde{\psi}^0_{n,i}} = e^{-i \beta_{n,i}} \ket{\psi^0_{n,i}}$. This is possible because we have assumed that the index $n$ identifies corresponding eigenstates at different indexes $i$. In other words, if the index $n$ orders the states by energy, we are assuming that there are no level crossings in the charge-Brillouin zone. This is not true for the parity qubit, but in this case we have labeled by $n=0,1$ the lowest even and odd states, which can be identified, for example, by comparing the amplitude at $\phi=0$. A more general method is available for the case in which it is not possible to easily identify corresponding eigenstates at different values of the parameters. In that case, the application of an additional unitary point by point is necessary. § ADDITIONAL RESULTS ON THE ENERGY LEVELS In this section of the Supplemental Material, we provide additional details on the energy spectrum of the two-qubit system and design principles of a hybrid qubit. We recall that, for the transmon, it is possible to approximate the eigenvalue spectrum by expanding the transmon Hamiltonian to fourth order. In this way, the Hamiltonian is mapped to a quantum Duffing oscillator [1].
This gives for the transmon the following approximate spectrum: \begin{equation} E_{t, m} = - E_{J, t} + \sqrt{8 E_{C, t} E_{J, t}}\qty(m + 1/2) - \frac{E_{C, t}}{12}\qty(6 m^2 + 6 m + 3)\,. \end{equation} A similar approach can be used to model the parity-protected qubit in the transmon regime by treating it as a double Duffing oscillator. We can expand the potential in the two wells as \begin{equation} V(\phi) = E_{J, p} \cos(2 \phi - \pi/4) = -E_{J, p} + 4 E_{J, p} \frac{(\phi-\phi_l)^2}{2} - 16 E_{J, p} \frac{(\phi-\phi_l)^4}{24} + o(\phi^5) \end{equation} with $\phi_l = \pm \pi/4$. Next we introduce a hopping amplitude between the two wells. For simplicity, we allow hopping only between the same energy levels in the left and right wells, i.e. $t_{mm'} = t_m \delta_{m m'}$. Therefore, the approximate Hamiltonian is \begin{equation} H = \sum_{l=L, R} \qty[\omega_p a^\dag_l a_l - E_{J, p} - \frac{E_{C, p}}{3} (a_l + a_l^\dag)^4] + \sum_{m} \qty[ t_m \qty(1 + e^{i 2 \pi n_{g,p}})( a^\dag_L)^m(a_R)^m] \end{equation} where $\omega_p = 2 \sqrt{8 E_{J, p} E_{C, p}}$. The eigenvalues of the Hamiltonian can be calculated by first-order perturbation theory using the number basis $\qty{\ket{m_L m_R}}$. The double Duffing oscillator spectrum is composed of pairs of states located around the mean value \begin{equation} \mu_{n, n+1} = \frac{E_{n} + E_{n+1}}{2} \simeq - E_{J, p} + 2 \sqrt{8 E_{C, p} E_{J, p}}\qty(\frac{n}{2} + 1/2) - 4 \frac{E_{C, p}}{12}\qty(6 \qty(\frac{n}{2})^2 + 6 \frac{n}{2} + 3)\,, \end{equation} for even $n$, with a splitting $\delta_{n, n+1} = E_{n+1}-E_{n} \simeq \frac{t_n}{2} \cos(\pi n_{g, p})$. Each state belongs to either the even or the odd parity sector. In the regime $-0.5< n_{g, p} < 0.5$ the order of the states is even, odd, odd, even, …, while in the regime $0.5< n_{g, p} <1.5$ it is odd, even, even, odd, …. In the PPQ the $E_{J, p} / E_{C, p}$ ratio plays a twofold role.
On the one hand, higher ratios reduce the splitting between pairs of states (i.e., $\omega_{p, 1}$ and $\delta\omega_{p, 23} = \omega_{p, 2} - \omega_{p,3}$); on the other hand, they increase the separation between the pairs of states (i.e., $\mu_{p, 23}$). Figure: Low-energy spectrum of (a) a transmon with $E_{C, t} = (2\pi) \SI{0.2}{\giga\Hz}$ and (b) a parity-protected qubit with $E_{C, p} = (2\pi)\SI{0.18}{\giga\Hz}$, as a function of the Josephson energy and at zero offset charge. Panel (c): low-energy spectrum of a parity-protected qubit as a function of the offset charge with parameters $(E_{J, p}, E_{C, p}) = (2 \pi) (\SI{2.7}{\giga\Hz}, \SI{0.18}{\giga\Hz})$. At the optimal point $n_{g, p}=0$, the state $\vert 0_t, 2_p\rangle$ belongs to the odd sector while $\vert 0_t, 3_p\rangle$ belongs to the even sector. This pair of states shows a splitting $\propto \abs{\cos (\pi n_{g, p})}$, and its mean is located approximately at energy $\mu_{p, 23} = \qty(\omega_{p, 2} + \omega_{p,3})/2 \simeq 2 \sqrt{8 E_{C, p} E_{J, p}} - 4 E_{C, p}$. Depending on the parameters of the system, the pair of states can be placed above the $\vert 1_t, 0_p\rangle$, $\vert 1_t, 1_p\rangle$ pair (when $\mu_{p, 23} \gtrsim \omega_{t, 1}$) or below it (when $\mu_{p, 23} \lesssim \omega_{t, 1}$). In the presence of a sizable splitting of the pair, the situation in which one of the two states lies below and one above is also possible. To obtain a controllable coupling, it is necessary that at least one of the two excited states lies below the pair of computational states. For this reason, the choice of the parameters is crucial: the approximate condition \begin{equation} \sqrt{8 E_{C, t} E_{J, t}} - E_{C, t} > 2 \sqrt{8 E_{C, p} E_{J, p}} - 4 E_{C, p} \end{equation} has to be satisfied. [56] S. Bravyi, D. P. DiVincenzo, and D. Loss, Schrieffer-Wolff transformation for quantum many-body systems, Ann. Phys. 326, 2793 (2011). [57] M. J. Rancic, S. Hoffman, C. Schrade, J. Klinovaja, and D. Loss, Phys. Rev. B 99, 165306 (2019). [58] G. Consani and P. A. Warburton, Effective Hamiltonians for interacting superconducting qubits: local basis reduction and the Schrieffer-Wolff transformation, New J. Phys. 22, 053040 (2020). [59] M. Hita-Pérez, G. Jaumà, M. Pino, and J. J. García-Ripoll, Three-Josephson Junctions Flux Qubit Couplings, arXiv:2111.05373 [quant-ph].
# Proximity and Remoteness in Graphs: a survey Mustapha Aouchichea and Bilal Ahmad Ratherb,1 aPolytechnique Montreal, Montreal, QC, Canada <EMAIL_ADDRESS> bDepartment of Mathematical Sciences, College of Science, United Arab Emirates University, Al Ain 15551, Abu Dhabi, UAE <EMAIL_ADDRESS> 1Corresponding author. ###### Abstract The proximity $\displaystyle\pi=\pi(G)$ of a connected graph $\displaystyle G$ is the minimum, over all vertices, of the average distance from a vertex to all others. Similarly, the maximum is called the remoteness and denoted by $\displaystyle\rho=\rho(G)$. The concepts of proximity and remoteness, first defined in 2006, attracted the attention of several researchers in Graph Theory. Their investigation led to a considerable number of publications. In this paper, we present a survey of this research work. Keywords: Distance; Transmission; Proximity; Remoteness; Extremal graphs AMS subject classification: 05C12, 05C35, 15A18. ## 1 Introduction Models involving paths, distances and location on graphs are much studied in operations research and mathematics. Models from operations research (see e.g. [11, 25]) usually use weighted graphs to describe some well-defined class of problems or some specific applications. Models from mathematics most often consider unweighted graphs and relations between graph invariants, i.e., numerical quantities whose values do not depend on the labelling of edges or vertices. In this paper, we present a survey of two graph invariants: proximity and remoteness, defined as the minimum and maximum of the average distance from a vertex to all others. Let $\displaystyle G=(V,E)$ denote a simple and connected graph, with vertex set $\displaystyle V$ and edge set $\displaystyle E$, containing $\displaystyle n=|V|$ vertices and $\displaystyle m=|E|$ edges. All the graphs considered in the present paper are finite, simple and connected.
For a vertex $\displaystyle u\in V$, the set of its neighbors in $\displaystyle G$ is denoted as $\displaystyle N_{u}=\\{v\in V:uv\in E\\}.$ The degree of $\displaystyle u$ is the number of its neighbors, i.e., $\displaystyle d(u)=d_{u}=|N_{u}|$. The maximum degree $\displaystyle\Delta$ and the minimum degree $\displaystyle\delta$ of $\displaystyle G$ are defined as $\displaystyle\Delta=\max_{u\in V}d(u)\qquad\mbox{ and }\qquad\delta=\min_{u\in V}d(u).$ A graph $\displaystyle G$ is said to be $\displaystyle d$-regular, or regular of degree $\displaystyle d$, if all of its vertices have degree $\displaystyle d$. The distance between two vertices $\displaystyle u$ and $\displaystyle v$ in $\displaystyle G$, denoted by $\displaystyle d(u,v)$, is the length of a shortest path between $\displaystyle u$ and $\displaystyle v$. The average distance between all pairs of vertices in $\displaystyle G$ is denoted by $\displaystyle\overline{l}$. The eccentricity $\displaystyle e(v)$ of a vertex $\displaystyle v$ in $\displaystyle G$ is the largest distance from $\displaystyle v$ to another vertex of $\displaystyle G$. The minimum eccentricity in $\displaystyle G$, denoted by $\displaystyle r$, is the radius of $\displaystyle G$. The maximum eccentricity of $\displaystyle G$, denoted by $\displaystyle D$, is the diameter of $\displaystyle G$. The average eccentricity of $\displaystyle G$ is denoted $\displaystyle ecc$. 
That is $\displaystyle r=\min_{v\in V}e(v),\qquad D=\max_{v\in V}e(v) \qquad\mbox{ and }\qquad ecc=\frac{1}{n}\sum_{v\in V}e(v).$ The Wiener index $\displaystyle W=W(G)$, introduced in [55], of a connected graph $\displaystyle G$ is defined to be the sum of all distances in $\displaystyle G$, i.e., $\displaystyle W(G)=\frac{1}{2}\sum_{u,v\in V}d(u,v).$ The transmission $\displaystyle t(v)$ of a vertex $\displaystyle v$ is defined to be the sum of the distances from $\displaystyle v$ to all other vertices in $\displaystyle G$, i.e., $\displaystyle t(v)=\sum_{u\in V}d(u,v).$ A connected graph $\displaystyle G=(V,E)$ is said to be $\displaystyle k$–transmission regular if $\displaystyle t(v)=k$ for every vertex $\displaystyle v\in V$. The transmission regular graphs are exactly the distance–balanced graphs introduced in [28]. They are also called self–median graphs in [14]. It is clear that any vertex–transitive graph (a graph $\displaystyle G$ in which for every two vertices $\displaystyle u$ and $\displaystyle v$, there exists an automorphism $\displaystyle f$ of $\displaystyle G$ such that $\displaystyle f(u)=v$) is a transmission regular graph. The converse is not true in general. Indeed, the graph on $\displaystyle 9$ vertices illustrated in Figure 1 is a $\displaystyle 14$–transmission regular graph but not degree regular, and therefore not vertex–transitive. Actually, the graph in Figure 1 is the smallest transmission regular graph that is not degree regular (see e.g. [3, 32]). For more examples of transmission regular but not degree regular graphs see [3, 6, 32, 34, 54]. Figure 1: The transmission regular but not degree regular graph with the smallest order The proximity $\displaystyle\pi=\pi(G)$ of $\displaystyle G$ is the minimum average distance from a vertex of $\displaystyle G$ to all others. Similarly, the remoteness $\displaystyle\rho=\rho(G)$ of $\displaystyle G$ is the maximum average distance from a vertex to all others.
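Computationally, $\pi$ and $\rho$ follow directly from the transmissions: one breadth-first search per vertex suffices for an unweighted connected graph. A minimal sketch (adjacency-list representation assumed):

```python
from collections import deque

def transmissions(adj):
    """Transmission t(v) = sum of distances from v, via one BFS per vertex."""
    n = len(adj)
    t = []
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        t.append(sum(dist))
    return t

def proximity_remoteness(adj):
    """pi = min t(v)/(n-1), rho = max t(v)/(n-1)."""
    n = len(adj)
    t = transmissions(adj)
    return min(t) / (n - 1), max(t) / (n - 1)

# Path P_5: pi is attained at the middle vertex, rho at an endpoint.
path5 = [[1], [0, 2], [1, 3], [2, 4], [3]]
pi, rho = proximity_remoteness(path5)
print(pi, rho)   # 1.5 and 2.5 (= n/2 for P_5)
```

For the path $P_{5}$ this returns $\pi=1.5$ and $\rho=2.5$, attained at the middle vertex and at an endpoint, respectively.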
The two last concepts were introduced recently in [1, 2]. They are close to the concept of the transmission $\displaystyle t(v)$ of a vertex $\displaystyle v$. That is, if we denote by $\displaystyle\tilde{t}(v)$ the average distance from a vertex $\displaystyle v$ to all other vertices in $\displaystyle G$, we have $\displaystyle\pi=\min_{v\in V}\tilde{t}(v)=\min_{v\in V}\frac{t(v)}{n-1} \qquad\mbox{ and }\qquad\rho=\max_{v\in V}\tilde{t}(v)=\max_{v\in V}\frac{t(v)}{n-1}.$ The transmission of a vertex is also known as the distance of a vertex [24], and the minimum distance (transmission) of a vertex is studied in [46]. A notion very close to the average distance from a vertex is the vertex deviation introduced by Zelinka [57] as $\displaystyle m_{1}(v)=\frac{1}{n}\sum_{u\in V}d(u,v)=\frac{t(v)}{n}.$ The vector composed of the vertex transmissions in a graph was first introduced by Harary [29] in 1959, under the name of the status of a graph, as a measure of the "weights" of individuals in social networks. The same vector was called the distance degree sequence by Bloom, Kennedy and Quintas [12]. It was used to tackle the problem of graph isomorphism. Randić [47] conjectured that two graphs are isomorphic if and only if they have the same distance degree sequence. The conjecture was refuted by several authors such as Slater [51], Buckley and Harary [13], and Entringer, Jackson and Snyder [24]. The transmission of a graph was also introduced by Sabidussi [48] in 1966 as a measure of centrality in social networks. The notion of centrality is widely used in different branches of science (see for example [36] and the references therein) such as transportation–network theory, communication network theory, electrical circuit theory, psychology, sociology, geography, game theory and computer science. Notions closely related to that of the distance from a vertex are those of the center and the centroid, introduced by Jordan [35] in 1869.
For mathematical properties of these two concepts, see the survey [52] and the references therein. In 1964, Hakimi [27] used for the first time the sum of distances in solving facility location problems. In fact, Hakimi [27] considered two problems, subsequently considered in many works: the first problem is to determine a vertex $\displaystyle u\in V$ so as to minimize $\displaystyle\max_{v\in V}d(u,v)$, i.e., the center of a graph; and the second problem is to determine a vertex $\displaystyle u\in V$ so as to minimize the sum of distances from $\displaystyle u$, i.e., the centroid. Interpretations of these problems can be found, for instance, in [26]. In view of the interest of the transmission vector in different domains of science, it is natural to study the properties of its extremal values themselves, and their relations with other graph parameters. The study of proximity and remoteness, which are closely related to the minimum and the maximum transmission respectively, appears to be convenient, especially in relation with other metric invariants, such as the diameter, radius, average eccentricity and average distance. Indeed, it follows from the definitions that $\displaystyle\pi\leq r\leq ecc\leq D,\qquad\pi\leq\overline{l}\leq\rho\leq D\qquad\mbox{ and }\qquad\overline{l}=\frac{1}{n(n-1)}\sum_{v\in V}t(v).$ In addition to these inequalities, several related ones can be found in the graph theory literature. Recall that a subset $\displaystyle S$ of vertices of $\displaystyle G$ is said to be independent if its vertices are pairwise nonadjacent. The maximum cardinality of such a subset is called the independence number of $\displaystyle G$ and is denoted by $\displaystyle\alpha=\alpha(G)$. Then $\displaystyle\overline{l}\leq\alpha$ [15] and $\displaystyle r\leq\alpha$ [23]. Recall, also, that a matching in a graph is a set of disjoint edges.
The maximum possible cardinality of a matching in a graph $\displaystyle G$ is called the matching number of $\displaystyle G$ and denoted by $\displaystyle\mu=\mu(G)$. The inequality $\displaystyle r\leq\mu$ can be found in [23]. It was proved in [4] that $\displaystyle\rho\leq ecc$. All these inequalities are illustrated in Figure 2, where vertices correspond to invariants $\displaystyle a,b,\ldots$, and directed arcs to the relations $\displaystyle a\leq b$. Observe that all relations mentioned are tight as all of them but $\displaystyle r\leq\mu$ become equalities for the complete graph $\displaystyle K_{n}$ (all invariants but $\displaystyle\mu$ being equal to 1) and $\displaystyle r=\mu=1$ for the star $\displaystyle S_{n}$. Figure 2: Relations between the invariants. Since their introduction in [1, 2], the proximity and the remoteness attracted the attention of several authors. ## 2 Proximity and Remoteness As for any graph invariant, the first questions asked about the proximity and the remoteness are: “what are their minimum and maximum values for given order $\displaystyle n$?” and “which extremal graphs are associated with these extremal values for a given order $\displaystyle n$?” Both questions are answered in the following proposition proved in [4]. ###### Proposition 2.1 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with proximity $\displaystyle\pi$ and remoteness $\displaystyle\rho$. 
Then $\displaystyle 1\leq\pi\leq\left\\{\begin{array}[]{ccl}\frac{n+1}{4}&&\mbox{ if $\displaystyle n$ is odd}\\\ \\\ \frac{n+1}{4}+\frac{1}{4(n-1)}&&\mbox{ if $\displaystyle n$ is even}\end{array}\right.$ and $\displaystyle 1\leq\rho\leq\frac{n}{2}.$ The lower bound on $\displaystyle\pi$ is reached if and only if $\displaystyle G$ contains a dominating vertex; the upper bound on $\displaystyle\pi$ is attained if and only if $\displaystyle G$ is either the cycle $\displaystyle C_{n}$ or the path $\displaystyle P_{n}$; the lower bound on $\displaystyle\rho$ is reached if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$; the upper bound on $\displaystyle\rho$ is attained if and only if $\displaystyle G$ is the path $\displaystyle P_{n}$. Since the proximity $\displaystyle\pi$ and the remoteness $\displaystyle\rho$ are respectively the minimum and the maximum of the same function on a graph, it is natural to ask about how large can the difference $\displaystyle\rho-\pi$ be, or in other words, how large can the spread of the average distances from vertices be. This problem was solved in [4]. ###### Proposition 2.2 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and proximity $\displaystyle\pi$. Then $\displaystyle\rho-\pi\leq\left\\{\begin{array}[]{lll}\frac{n-1}{4}&&\mbox{if $\displaystyle n$ is odd, }\\\ \frac{n-1}{4}-\frac{1}{4n-4}&&\mbox{if $\displaystyle n$ is even. }\end{array}\right.$ Equality holds if and only if $\displaystyle G$ is a graph obtained from a path $\displaystyle P_{\left\lceil\frac{n}{2}\right\rceil}$ and any connected graph $\displaystyle H$ on $\displaystyle\left\lfloor n/2\right\rfloor+1$ vertices by a coalescence of an endpoint of the path with any vertex of $\displaystyle H$. 
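The extremal values and extremal graphs of Proposition 2.1 can be checked numerically for small orders. A self-contained sketch (BFS-based average distances recomputed here) verifying that $C_{n}$ attains the upper bound on $\pi$, that $P_{n}$ attains $\rho=n/2$, and that a graph with a dominating vertex attains $\pi=1$:

```python
from collections import deque

def avg_dists(adj):
    """Average distance from each vertex to all others (BFS per vertex)."""
    n = len(adj)
    out = []
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        out.append(sum(dist) / (n - 1))
    return out

def cycle(n):
    return [[(i - 1) % n, (i + 1) % n] for i in range(n)]

def path(n):
    return [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]

def star(n):
    # Vertex 0 dominates all others.
    return [list(range(1, n))] + [[0]] * (n - 1)

for n in range(4, 12):
    pi_bound = (n + 1) / 4 if n % 2 else (n + 1) / 4 + 1 / (4 * (n - 1))
    assert abs(min(avg_dists(cycle(n))) - pi_bound) < 1e-12  # C_n: max pi
    assert abs(max(avg_dists(path(n))) - n / 2) < 1e-12      # P_n: max rho
    assert abs(min(avg_dists(star(n))) - 1.0) < 1e-12        # dominating vertex
```

The same loop can be extended to enumerate all small connected graphs, which is how such extremal conjectures are typically stress-tested.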
The problem of finding bounds on $\displaystyle\pi$ and $\displaystyle\rho$ for a graph $\displaystyle G$ with given order $\displaystyle n$ and diameter $\displaystyle D$ was considered in [4]. Actually, the best possible lower bound on $\displaystyle\pi(G)$ and the best possible upper bound on $\displaystyle\rho(G)$ were established. These involve a particular class of graphs, described next. A double-tailed comet $\displaystyle DTC_{n,p,q}$ (see Figure 3 for an example, i.e., $\displaystyle DTC_{15,4,4}$), with $\displaystyle n\geq p+q+1$, $\displaystyle p\geq 2$ and $\displaystyle q\geq 2$, is the tree obtained from a path $\displaystyle u_{0}u_{1}\cdots u_{p}u_{p+1}\cdots u_{p+q}$ by attaching $\displaystyle n-p-q-1$ pendant edges to $\displaystyle u_{p}$. It is said to be balanced if $\displaystyle p=q$. It is easy to see that the diameter of $\displaystyle DTC_{n,p,q}$ is $\displaystyle D=p+q$ and its maximum degree is $\displaystyle\Delta=d(u_{p})=n-p-q+1$. Assuming, without loss of generality, that $\displaystyle p\geq q$, we have $\displaystyle\displaystyle\pi(DTC_{n,p,q})$ $\displaystyle\displaystyle=$ $\displaystyle\displaystyle\frac{p(p+1)}{2(n-1)}+\frac{q(q+1)}{2(n-1)}-\frac{p+q}{n-1}+1$ $\displaystyle\displaystyle=$ $\displaystyle\displaystyle\frac{p(p-1)}{2(n-1)}+\frac{q(q-1)}{2(n-1)}+1$ and $\displaystyle\rho(DTC_{n,p,q})=\frac{(p+q)(p+q+1)}{2(n-1)}-\frac{(p+1)(p+q)}{n-1}+p+1.$ Figure 3: The double-tailed comet $\displaystyle DTC_{15,4,4}$. ###### Proposition 2.3 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n$ vertices with diameter $\displaystyle D$. Then $\displaystyle\pi(G)\geq\frac{p(p-1)}{2(n-1)}+\frac{q(q-1)}{2(n-1)}+1$ where $\displaystyle p=\left\lceil D/2\right\rceil$ and $\displaystyle q=\left\lfloor D/2\right\rfloor$. The bound is best possible as shown by the double-tailed comet $\displaystyle DTC_{n,p,q}$.
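A quick way to validate such closed-form expressions is to build the graph explicitly and compare. The sketch below constructs $DTC_{n,p,q}$ and checks the proximity formula together with the stated diameter and maximum degree:

```python
from collections import deque

def dtc(n, p, q):
    """Double-tailed comet DTC_{n,p,q}: path u_0 ... u_{p+q} with
    n-p-q-1 pendant edges attached to u_p (adjacency lists)."""
    adj = [[] for _ in range(n)]
    for i in range(p + q):              # the path u_0 ... u_{p+q}
        adj[i].append(i + 1)
        adj[i + 1].append(i)
    for v in range(p + q + 1, n):       # pendants hanging off u_p
        adj[p].append(v)
        adj[v].append(p)
    return adj

def bfs_dist(adj, s):
    dist = [-1] * len(adj)
    dist[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if dist[w] < 0:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

n, p, q = 15, 4, 4                      # the example DTC_{15,4,4}
adj = dtc(n, p, q)
avg = [sum(bfs_dist(adj, v)) / (n - 1) for v in range(n)]

# pi(DTC) matches the closed form, and D = p+q, Delta = n-p-q+1.
assert abs(min(avg) - (p * (p - 1) + q * (q - 1)) / (2 * (n - 1)) - 1.0) < 1e-12
assert max(max(bfs_dist(adj, v)) for v in range(n)) == p + q
assert max(len(nb) for nb in adj) == n - p - q + 1
```

The minimum average distance is attained at the hub $u_{p}$, in agreement with the derivation of $\pi(DTC_{n,p,q})$.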
A comet $\displaystyle CO_{n,\Delta}$ is the tree obtained from a star $\displaystyle S_{\Delta+1}$ and a path $\displaystyle P_{n-\Delta}$ by coalescence of an endpoint of $\displaystyle P_{n-\Delta}$ with a pendant vertex of $\displaystyle S_{\Delta+1}$. Easy computations lead to the following expressions for the diameter and the remoteness of a comet: $\displaystyle D(CO_{n,\Delta})=n-\Delta+1$ and $\displaystyle\rho(CO_{n,\Delta})=\frac{(n-\Delta+1)(n+\Delta-2)}{2(n-1)}.$ A kite $\displaystyle KI_{n,\omega}$ is the connected graph obtained from a clique $\displaystyle K_{\omega}$ and a path $\displaystyle P_{n-\omega}$ by adding an edge between an endpoint of the path and a vertex from the clique. A pseudo-kite $\displaystyle PKI_{n,p}$ is any connected graph which is a spanning subgraph of the kite $\displaystyle KI_{n,p}$ and that contains the comet $\displaystyle CO_{n,p}$ as a spanning tree. Note that $\displaystyle CO_{n,p}$, $\displaystyle KI_{n,p}$ and $\displaystyle PKI_{n,p}$ have the same proximity and the same remoteness. The first relationship between the pseudo-kites and the notion of remoteness is given in the following proposition. ###### Proposition 2.4 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with diameter $\displaystyle D$. Then $\displaystyle\rho(G)\leq\rho(PKI_{n,n-D+1})$ with equality if and only if $\displaystyle G$ is a pseudo-kite $\displaystyle PKI_{n,n-D+1}$. Let $\displaystyle G$ be a graph and $\displaystyle\overline{G}$ its complement. If $\displaystyle I$ is an invariant of $\displaystyle G$, we denote by $\displaystyle\overline{I}$ the same invariant but in $\displaystyle\overline{G}$. 
Nordhaus-Gaddum relations for a graph invariant $\displaystyle I$ are inequalities of the following form $\displaystyle L_{1}(n)\leq I+\overline{I}\leq U_{1}(n)\qquad\mbox{ and }\qquad L_{2}(n)\leq I\cdot\overline{I}\leq U_{2}(n),$ where $\displaystyle L_{1}(n)$ and $\displaystyle L_{2}(n)$ are lower bounding functions of the order $\displaystyle n$, and $\displaystyle U_{1}(n)$ and $\displaystyle U_{2}(n)$ are upper bounding functions of the order $\displaystyle n$. Note that sometimes, in addition to the order $\displaystyle n$, other graph invariants are used in the bounds. Relations of this type are named after Nordhaus and Gaddum [41], who were the first authors to give such relations, namely $2\sqrt{n}\leq\chi+\overline{\chi}\leq n+1\qquad\mbox{ and }\qquad n\leq\chi\cdot\overline{\chi}\leq\left(\frac{n+1}{2}\right)^{2},$ (1) where $\displaystyle\chi$ is the chromatic number of a graph. Since then, many graph theorists have been interested in finding such relations for various graph invariants. See [7] for a survey of Nordhaus–Gaddum type results. For proximity and remoteness, Nordhaus–Gaddum inequalities were proved in [5]. ###### Theorem 2.5 ([5]). For any connected graph $\displaystyle G$ on $\displaystyle n\geq 5$ vertices for which $\displaystyle\overline{G}$ is connected $\displaystyle\frac{2n}{n-1}\leq\pi+\overline{\pi}\leq\left\\{\begin{array}[]{ccl}\frac{n+1}{4}+\frac{n+1}{n-1}&&\mbox{ if $\displaystyle n$ is odd, }\\\ \frac{n}{4}+\frac{n}{4(n-1)}+\frac{n+1}{n-1}&&\mbox{ if $\displaystyle n$ is even. }\end{array}\right.$ The lower bound is attained if and only if $\displaystyle\Delta(G)=\Delta(\overline{G})=n-2$. The upper bound is attained if and only if either $\displaystyle G$ or $\displaystyle\overline{G}$ is the cycle $\displaystyle C_{n}$. ###### Theorem 2.6 ([5]).
For any connected graph $\displaystyle G$ on $\displaystyle n\geq 5$ vertices for which $\displaystyle\overline{G}$ is connected $\displaystyle\frac{n^{2}}{(n-1)^{2}}\leq\pi\cdot\overline{\pi}\leq\left\\{\begin{array}[]{ccl}\frac{(n+1)^{2}}{4(n-1)}&&\mbox{ if $\displaystyle n$ is odd, }\\\ \frac{n(n+1)}{4(n-1)}+\frac{n(n+1)}{4(n-1)^{2}}&&\mbox{ if $\displaystyle n$ is even. }\end{array}\right.$ The lower bound is attained if and only if $\displaystyle\Delta(G)=\Delta(\overline{G})=n-2$. The upper bound is attained if and only if either $\displaystyle G$ or $\displaystyle\overline{G}$ is the cycle $\displaystyle C_{n}$. Recall that a comet $\displaystyle Co_{n,\Delta}$ is obtained from a star $\displaystyle S_{\Delta+1}$ by appending a path $\displaystyle P_{n-\Delta-1}$ to one of its pendant vertices. Moreover, a path-complete graph $\displaystyle PK_{n,m}$ on $\displaystyle n$ vertices and $\displaystyle m$ edges is the graph obtained from a path $\displaystyle P_{k}$, $\displaystyle k\geq 1$, and a clique $\displaystyle K_{n-k}$ by adding at least one edge between one endpoint of the path and the vertices of $\displaystyle K_{n-k}$, where $\displaystyle(n-k)(n-k-1)/2+k\leq m\leq(n-k+1)(n-k)/2+k-1$. One can verify that there is exactly one path-complete graph $\displaystyle PK_{n,m}$ for all $\displaystyle n$ and $\displaystyle m$ such that $\displaystyle 1\leq n-1\leq m\leq n(n-1)/2$. ###### Theorem 2.7 ([5]). For any connected graph $\displaystyle G$ on $\displaystyle n\geq 6$ vertices for which $\displaystyle\overline{G}$ is connected $\displaystyle 3\leq\rho+\overline{\rho}\leq\frac{n+2}{2}+\frac{2}{n-1}.$ The lower bound is attained if and only if $\displaystyle n\geq 8$, $\displaystyle G$ is regular and $\displaystyle D=\overline{D}=2$.
The upper bound is attained if and only if $\displaystyle G$ or $\displaystyle\overline{G}$ is the path $\displaystyle P_{n}$, the comet $\displaystyle Co_{n,3}$ or the path-complete graph $\displaystyle PK_{n,n}$ when $\displaystyle n\geq 7$, and if and only if $\displaystyle G$ or $\displaystyle\overline{G}$ is the path $\displaystyle P_{6}$, the comet $\displaystyle Co_{6,3}$, the path-complete graph $\displaystyle PK_{6,6}$ or one of the graphs in Fig. 4. Figure 4: Graphs with $\displaystyle D=3$ that maximize $\displaystyle\rho+\overline{\rho}$ for $\displaystyle n=6$. ###### Theorem 2.8 ([5]). For any connected graph $\displaystyle G$ on $\displaystyle n\geq 7$ vertices for which $\displaystyle\overline{G}$ is connected $\displaystyle\rho\cdot\overline{\rho}\leq\left\\{\begin{array}[]{lll}\frac{16n+20}{27}+\frac{8}{9(n-1)}+\frac{4}{27(n-1)^{2}}& \mbox{ if }&n=\,\,0\,\,(mod\,\,3),\\\ \frac{16n+20}{27}+\frac{2}{3(n-1)}& \mbox{ if }&n=\,\,1\,\,(mod\,\,3),\\\ \frac{16n+20}{27}+\frac{8}{9(n-1)}+\frac{5}{27(n-1)^{2}}& \mbox{ if }&n=\,\,2\,\,(mod\,\,3).\\\ \end{array}\right.$ The upper bound is the best possible as shown by the comets $\displaystyle Co_{n,\left\lceil\frac{n}{3}\right\rceil+1}$, and $\displaystyle Co_{n,\left\lceil\frac{n}{3}\right\rceil}$ if $\displaystyle n=\,\,1\,\,(mod\,\,3)$. Note that the bound provided in Theorem 2.8 is not valid for $\displaystyle n=4,5,6$ (if $\displaystyle n\leq 3$, then $\displaystyle G$ and $\displaystyle\overline{G}$ cannot be connected simultaneously). For the path $\displaystyle P_{4}$ (resp. the comets $\displaystyle Co_{5,3}$ and $\displaystyle Co_{6,4}$), $\displaystyle\rho\cdot\overline{\rho}=4$ (resp. 9/2 and 24/5) while the corresponding value provided by Theorem 2.8 is $\displaystyle 10/3$ (resp. 63/16 and 112/25). Czabarka et al. [16] obtained results on proximity in triangulations and quadrangulations, that is, maximal planar graphs and maximal bipartite planar graphs, respectively.
###### Theorem 2.9 ([16]). Let $\displaystyle G$ be a planar graph of order $\displaystyle n$ and $\displaystyle v$ a central vertex of $\displaystyle G$. * (i) If $\displaystyle G$ is a triangulation, then $\displaystyle\pi\leq\frac{n+19}{12}+\frac{25}{3(n-1)}.$ * (ii) If $\displaystyle G$ is a $\displaystyle 4$-connected triangulation, then $\displaystyle\pi\leq\frac{n+35}{16}+\frac{91}{4(n-1)}.$ * (iii) If $\displaystyle G$ is a $\displaystyle 5$-connected triangulation, then $\displaystyle\pi\leq\frac{n+57}{20}+\frac{393}{10(n-1)}.$ * (iv) If $\displaystyle G$ is a quadrangulation, then $\displaystyle\pi\leq\frac{n+11}{8}+\frac{9}{2(n-1)}.$ * (v) If $\displaystyle G$ is a $\displaystyle 3$-connected quadrangulation, then $\displaystyle\pi\leq\frac{n+25}{12}+\frac{169}{12(n-1)}.$ Pei et al. [45] gave the following interesting results relating proximity, average eccentricity and domination, which were conjectured earlier in [1]. The domination number $\displaystyle\gamma(G)$ is the minimum cardinality of a dominating set of $\displaystyle G$. Recall that $\displaystyle\gamma(G)\leq\left\lfloor\frac{n}{2}\right\rfloor.$ In fact, the graphs with domination number $\displaystyle\left\lfloor\frac{n}{2}\right\rfloor$ have been determined in the following result. Figure 5: Graphs in family $\displaystyle\mathcal{A}$ Figure 6: Graphs in family $\displaystyle\mathcal{B}$ ###### Lemma 2.10 ([42]). A connected graph $\displaystyle G$ of order $\displaystyle n$ satisfies $\displaystyle\gamma(G)=\left\lfloor\frac{n}{2}\right\rfloor$ if and only if $\displaystyle G\in\bigcup_{i=1}^{6}\mathcal{G}_{i}$, where $\displaystyle\mathcal{G}_{i},~{}i=1,\dots,6,$ is the set defined in the following. Let $\displaystyle H$ be any graph with vertex set $\displaystyle\\{v_{1},\dots,v_{k}\\}$.
Denote by $\displaystyle f(H)$ the graph obtained from $\displaystyle H$ by adding new vertices $\displaystyle u_{1},\dots,u_{k}$ and the edges $\displaystyle v_{i}u_{i},~{}i=1,\dots,k.$ Define $\displaystyle\mathcal{G}_{1}=\\{C_{4}\\}\cup\\{G|G=f(H)~{}\text{for some connected graph}~{}H\\}.$ Let $\displaystyle\mathcal{F}=\mathcal{A}\cup\mathcal{B}$ and $\displaystyle\mathcal{G}_{2}=\mathcal{F}-\\{C_{4}\\},$ where $\displaystyle\mathcal{A}=\\{C_{4},G(7,i)|i=1,\dots,6\\}$ and $\displaystyle\mathcal{B}=\\{K_{3},G(5,i)|i=1,\dots,4\\},$ as shown in Fig. 5 and Fig. 6, respectively. For any graph $\displaystyle H$, let $\displaystyle\phi(H)$ be the set of connected graphs, each of which can be formed from $\displaystyle f(H)$ by adding a new vertex $\displaystyle x$ and edges joining $\displaystyle x$ to one or more vertices of $\displaystyle H$. Then define $\displaystyle\mathcal{G}_{3}=\\{G|G=\phi(H)~{}\text{for some graph}~{}H\\}.$ Let $\displaystyle G\in\mathcal{G}_{3}$ and $\displaystyle y$ be a vertex of a copy of $\displaystyle C_{4}$. Denote by $\displaystyle\theta(G)$ the graph obtained by joining $\displaystyle G$ to $\displaystyle C_{4}$ with the single edge $\displaystyle xy,$ where $\displaystyle x$ is the new vertex added in forming $\displaystyle G$. 
Then define $\displaystyle\mathcal{G}_{4}=\\{G|G=\theta(H)~{}\text{for some graph H}~{}\in\mathcal{G}_{3}\\}.$ Let $\displaystyle u,v,w$ be the vertex sequence of a path $\displaystyle P_{3}.$ For any graph $\displaystyle H,$ let $\displaystyle\mathcal{P}(H)$ be the set of connected graphs which may be formed from $\displaystyle f(H)$ by joining each of $\displaystyle u$ and $\displaystyle w$ to one or more vertices of $\displaystyle H$. Then define $\displaystyle\mathcal{G}_{5}=\\{G|G=\mathcal{P}(H)~{}\text{for some graph}~{}H\\}.$ For a graph $\displaystyle X\in\mathcal{B}$, let $\displaystyle U\subset V(X)$ be a set of vertices such that no fewer than $\displaystyle\gamma(X)$ vertices of $\displaystyle X$ dominate $\displaystyle V(X)\setminus U.$ Let $\displaystyle\mathcal{R}(H,X)$ be the set of connected graphs which may be formed from $\displaystyle f(H)$ by joining each vertex of $\displaystyle U$ to one or more vertices of $\displaystyle H$ for some set $\displaystyle U$ as defined above and any graph $\displaystyle H.$ Then define $\displaystyle\mathcal{G}_{6}=\\{G|G\in\mathcal{R}(H,X)~{}\text{for some}~{}X\in\mathcal{B}~{}\text{and some}~{}H\\}.$ ###### Lemma 2.11 ([45]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ with $\displaystyle\gamma(G)\geq 2,\Delta(G)\leq n-2$ and $\displaystyle\delta(G)=1.$ Then $\displaystyle ecc\geq 2+\frac{\gamma(G)}{n}.$ ###### Lemma 2.12 ([45]). Let $\displaystyle G$ be a connected graph with $\displaystyle\Delta(G)\leq n-2$ and $\displaystyle\delta(G)=2.$ Then $\displaystyle\gamma(G)=2$ or $\displaystyle ecc\geq 2+\frac{2}{n}.$ ###### Lemma 2.13 ([45]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 2$. If $\displaystyle\Delta(G)=n-1,$ then $\displaystyle\frac{\gamma(G)}{ecc}\leq 1$ with equality if and only if $\displaystyle G\cong K_{n}.$ ###### Lemma 2.14 ([45]). 
Let $\displaystyle G$ be a connected graph on $\displaystyle n$ vertices, where $\displaystyle 2\leq n\leq 5.$ Then $\displaystyle\frac{\gamma(G)}{ecc}\leq 1,$ with equality if and only if $\displaystyle G\in\\{P_{2},C_{3},K_{4},C_{4}\\}$ when $\displaystyle n\leq 4$, and $\displaystyle G\in\\{K_{5},G(5,1),G(5,2),G(5,3),G(5,4)\\}\subseteq\mathcal{B}$ when $\displaystyle n=5,$ see Fig. 6. ###### Theorem 2.15 ([45]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 2.$ Then $\displaystyle\frac{\gamma(G)}{ecc}\leq\begin{cases}1&\text{if}~{}n\leq 5,\\\ \frac{n\cdot\left\lfloor\frac{n}{2}\right\rfloor}{\left\lfloor\frac{5n}{2}\right\rfloor}&\text{if}~{}n\geq 6,\end{cases}$ with equality if and only if $\displaystyle G\in\\{P_{2},C_{3},K_{4},C_{4},K_{5},G(5,1),G(5,2),G(5,3),G(5,4)\\}$ when $\displaystyle n\leq 5$, or $\displaystyle G\in\\{K_{\left\lceil\frac{n}{2}\right\rceil,\left\lfloor\frac{n}{2}\right\rfloor},G(7,1),G(7,2),G^{\prime},G^{\prime\prime}\\}$ when $\displaystyle n\geq 6,$ where $\displaystyle G^{\prime},G^{\prime\prime}$ are defined in [44]. The first observation in [45] is that for a vertex $\displaystyle v$ in a connected graph $\displaystyle G$, the proximity is $\displaystyle\pi(v)\geq\frac{2n-2-d(v)}{n-1},$ with equality if and only if $\displaystyle e(v)\leq 2.$ We define a subset of $\displaystyle\mathcal{G}$. 
Namely, $\displaystyle\mathcal{G}^{\ast}=\cup_{i\in\\{1,2,3,5,6\\}}\mathcal{G}_{i}^{\ast},$ where $\displaystyle\mathcal{G}_{i}^{\ast}$ is shown as follows, $\displaystyle i=1,2,3,5,6.$ We continue with the meaning of the notations $\displaystyle x,u,v,w$ in the definitions of $\displaystyle\mathcal{G}_{3}$ and $\displaystyle\mathcal{G}_{5}$, where $\displaystyle x$ is the vertex added in forming a graph $\displaystyle G\in\mathcal{G}_{3}$, and $\displaystyle u,v,w$ is the vertex sequence of the path $\displaystyle P_{3}$ mentioned in the construction of $\displaystyle\mathcal{G}_{5}.$ Let $\displaystyle H^{\ast}$ be any connected graph with vertex set $\displaystyle\\{v_{1},\dots,v_{k}\\}$ and $\displaystyle\triangle(H^{\ast})=k-1.$ Assume that $\displaystyle d_{H^{\ast}}(v_{i_{0}})=\triangle(H^{\ast})$ for some $\displaystyle i_{0}\in\\{1,\dots,k\\},$ that is, $\displaystyle\\{v_{i_{0}}v_{j}|j=1,\dots,i_{0}-1,i_{0}+1,\dots,k\\}\subseteq E(H^{\ast}).$ Let $\displaystyle\displaystyle\mathcal{G}_{1}^{\ast}$ $\displaystyle\displaystyle=\\{G|G=f(H^{\ast})~{}\text{for some connected graph}~{}H^{\ast}\\},$ $\displaystyle\displaystyle\mathcal{G}_{2}^{\ast}$ $\displaystyle\displaystyle=\\{G(5,2),G(5,3),G(5,4),G(7,2),G(7,5),G(7,6)\\},$ $\displaystyle\displaystyle\mathcal{G}_{3}^{\ast}$ $\displaystyle\displaystyle=\\{G|G=\phi(H^{\ast})~{}\text{and}~{}v_{i_{0}}x\in E(G)\\},$ $\displaystyle\displaystyle\mathcal{G}_{5}^{\ast}$ $\displaystyle\displaystyle=\\{G|G=\mathcal{P}(H^{\ast}),v_{i_{0}}u\in E(G)~{}\text{and}~{}v_{i_{0}}w\in E(G)\\}$ and $\displaystyle\displaystyle\mathcal{G}_{6}^{\ast}$ $\displaystyle\displaystyle=\mathcal{G}_{6}^{1}\cup\mathcal{G}_{6}^{2}\cup\mathcal{G}_{6}^{(3,2)}\cup\mathcal{G}_{6}^{(3,3)}\cup\mathcal{G}_{6}^{(3,4)},$ where $\displaystyle\displaystyle\mathcal{G}_{6}^{1}$ $\displaystyle\displaystyle=\\{G|G=\mathcal{R}(K_{3},H^{\ast})~{}\text{with}~{}|U|=2,~{}\text{and}~{}v_{i_{0}}s\in E(G)~{}\text{for each vertex}~{}s\in U\\}.$ $\displaystyle\displaystyle\mathcal{G}_{6}^{2}$ 
$\displaystyle\displaystyle=\\{G|G\in\mathcal{G}_{6}~{}\text{with}~{}X=K_{3},~{}\text{and}~{}sv_{i}\in E(G)~{}\text{for some vertex}~{}s\in U~{}\text{and}~{}i=1,\dots,k\\}.$ For $\displaystyle i=2,3,4,$ $\displaystyle\mathcal{G}_{6}^{(3,i)}=\\{G|G\in\mathcal{G}_{6}~{}\text{with}~{}X=G(5,i),~{}\text{and}~{}sv_{j}\in E(G)~{}\text{for}~{}j=1,\dots,k,~{}\text{for some vertex}~{}s\in U~{}\text{with}~{}d_{G(5,i)}(s)=3\\}.$ ###### Lemma 2.16 ([45]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 5$ with $\displaystyle\gamma(G)=\left\lfloor\frac{n}{2}\right\rfloor$. Then $\displaystyle\pi\geq\begin{cases}\frac{3}{2}-\frac{1}{2(n-1)},&n~{}\text{is even}\\\ \frac{3}{2}-\frac{1}{n-1},&n~{}\text{is odd}\end{cases}$ with equality if and only if $\displaystyle G\in\mathcal{G}^{\ast}.$ ###### Theorem 2.17 ([45]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 2$. Then $\displaystyle\gamma(G)-\pi\leq\begin{cases}\frac{n-3}{2}+\frac{1}{2(n-1)},&\text{if $\displaystyle n$ is even}\\\ \frac{n-4}{2}+\frac{1}{n-1},&\text{if $\displaystyle n$ is odd}\end{cases}$ with equality if and only if $\displaystyle G\in\mathcal{G}^{\ast}\cup\\{P_{2},P_{3},C_{3},P_{4},C_{4}\\}.$ Recently Pei [43] obtained the following results for the domination number and the remoteness of a graph. ###### Lemma 2.18 ([43]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 6$ with $\displaystyle\gamma(G)=\left\lfloor\frac{n}{2}\right\rfloor-1.$ Then $\displaystyle\gamma(G)-\rho(G)<\begin{cases}\frac{n-5}{2}+\frac{3}{2n-2},&n~{}\text{is even}\\\ \frac{n-6}{2}+\frac{2}{n-1},&n~{}\text{is odd and}~{}n\geq 9\\\ \frac{n-3}{4},&n=7.\end{cases}$ ###### Lemma 2.19 ([43]). 
Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ with $\displaystyle 1\leq\gamma(G)=\left\lfloor\frac{n}{2}\right\rfloor-2.$ Then $\displaystyle\gamma(G)-\rho(G)<\begin{cases}\frac{n-5}{2}+\frac{3}{2n-2},&n~{}\text{is even}\\\ \frac{n-6}{2}+\frac{2}{n-1},&n~{}\text{is odd and}~{}n\geq 9\\\ \frac{n-3}{4},&n~{}\text{is odd and}~{}n\leq 7.\end{cases}$ ###### Lemma 2.20 ([43]). If $\displaystyle G$ is a connected graph with order $\displaystyle n(\geq 2)$ and $\displaystyle\gamma(G)=\left\lfloor\frac{n}{2}\right\rfloor,$ then $\displaystyle\gamma(G)-\rho(G)\leq\begin{cases}\frac{2}{3},&n=4\\\ \frac{n-5}{2}+\frac{3}{2n-2},&n~{}\text{is even and}~{}n\neq 4\\\ \frac{n-3}{4},&n~{}\text{is odd and}~{}n\leq 7\\\ \frac{n-6}{2}+\frac{2}{n-1},&n~{}\text{is odd and}~{}n\geq 9,\\\ \end{cases}$ with equality if and only if $\displaystyle G\in\\{C_{4},K_{\frac{n}{2},\frac{n}{2}}~{}|~{}n~{}\text{is even and}~{}n\neq 4\\}\cup(\mathcal{G}_{2}-\\{G^{5}_{7}\\})\cup\\{K_{\lceil\frac{n}{2}\rceil,\lfloor\frac{n}{2}\rfloor},G^{\prime},G^{\prime\prime}$ $\displaystyle|~{}n~{}\text{is odd and}~{}n\geq 9\\},$ where $\displaystyle G^{\prime},G^{\prime\prime}$ are defined in [44]. ###### Lemma 2.21 ([43]). If $\displaystyle G$ is a connected graph with $\displaystyle n\leq 7$ vertices and $\displaystyle\gamma(G)=\left\lfloor\frac{n}{2}\right\rfloor-1,$ then $\displaystyle n\geq 4$ and $\displaystyle\gamma(G)-\rho(G)\leq\begin{cases}\frac{4}{5},&n=6\\\ 0,&4\leq n\leq 5\end{cases}$ with equality if and only if $\displaystyle G$ is $\displaystyle 4$-regular when $\displaystyle n=6$, and $\displaystyle G\cong K_{n}$ when $\displaystyle 4\leq n\leq 5.$ Dankelmann, Jonck and Mafunda [20] obtained bounds for $\displaystyle\pi$ and $\displaystyle\rho$ in triangle-free and $\displaystyle C_{4}$-free graphs in terms of order and minimum degree. 
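The two smallest equality cases above are easy to confirm by brute force. The sketch below (pure Python; `domination_number` is an exhaustive subset search, and the helper names are mine, suitable only for tiny graphs) checks that $\displaystyle\gamma(C_{4})-\rho(C_{4})=2-\frac{4}{3}=\frac{2}{3}$, the $\displaystyle n=4$ case of Lemma 2.20, and that for the $\displaystyle 4$-regular octahedron $\displaystyle K_{2,2,2}$ one gets $\displaystyle\gamma-\rho=2-\frac{6}{5}=\frac{4}{5}$, the $\displaystyle n=6$ equality case of Lemma 2.21:

```python
from collections import deque
from itertools import combinations
from fractions import Fraction

def avg_dist(adj, s):
    """Exact average distance from s to the other vertices (BFS)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return Fraction(sum(dist.values()), len(adj) - 1)

def remoteness(adj):
    return max(avg_dist(adj, v) for v in adj)

def domination_number(adj):
    """Smallest k such that some k-subset S satisfies N[S] = V (exhaustive search)."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for subset in combinations(verts, k):
            covered = set(subset).union(*(adj[v] for v in subset))
            if len(covered) == len(verts):
                return k

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
# Octahedron K_{2,2,2}: vertex v is adjacent to everything except its antipode (v+3) mod 6.
octa = {v: {w for w in range(6) if w not in (v, (v + 3) % 6)} for v in range(6)}
print(domination_number(c4) - remoteness(c4))      # 2 - 4/3 = 2/3
print(domination_number(octa) - remoteness(octa))  # 2 - 6/5 = 4/5
```

Exact rationals (`fractions.Fraction`) are used so the equality cases are checked without floating-point slack.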
Let $\displaystyle n,\delta$ be positive integers with $\displaystyle\delta\geq 3.$ Let $\displaystyle A=(1,1,\delta-1,\delta-1,1,1,\delta-1,\delta-1,\dots)$ be the infinite sequence repeating the $\displaystyle(1,1,\delta-1,\delta-1)$-pattern indefinitely. Define the finite sequence $\displaystyle X_{n,\delta}$ by $\displaystyle X_{n,\delta}=(1,\delta,\delta-1,a_{1},a_{2},\dots,a_{l(A,n-4\delta)},\delta,r_{n,\delta}),$ where $\displaystyle r_{n,\delta}=n-3\delta-\sum_{i=1}^{l(A,n-4\delta)}a_{i}.$ Define the sequential sum of graphs $\displaystyle G=G_{1}+G_{2}+\dots+G_{n}$ to be the sequential join such that the vertex set $\displaystyle V(G)=V(G_{1})\cup V(G_{2})\cup\dots\cup V(G_{n})$ and the edge set $\displaystyle E(G)=E(G_{1})\cup E(G_{2})\cup\dots\cup E(G_{n})\cup\bigcup_{i=1}^{n-1}\\{uv|u\in V(G_{i}),v\in V(G_{i+1})\\}.$ For a finite sequence $\displaystyle X=(x_{0},x_{1},\dots,x_{d})$ of positive integers we define the graph $\displaystyle G(X)$ by $\displaystyle G(X)=\overline{K}_{x_{0}}+\overline{K}_{x_{1}}+\dots+\overline{K}_{x_{d}}.$ ###### Theorem 2.22 ([20]). Let $\displaystyle G$ be a connected, triangle-free graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta$, where $\displaystyle\delta\geq 3$ and $\displaystyle n\geq 6\delta.$ Then $\displaystyle\rho\leq\rho(G(X_{n,\delta})).$ ###### Corollary 2.23 ([20]). If $\displaystyle G$ is a connected triangle-free graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle\rho\leq 2\left\lceil\frac{n-3\delta}{2\delta}\right\rceil+2-\frac{\delta}{n-1},$ and this bound is sharp. ###### Theorem 2.24 ([20]). If $\displaystyle G$ is a connected, triangle-free graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle\pi\leq\frac{n}{2\delta}+2-\frac{5}{2\delta}-\frac{21\delta^{2}-8\delta-3}{2\delta(n-1)}.$ ###### Theorem 2.25 ([20]). 
If $\displaystyle G$ is a connected, $\displaystyle C_{4}$-free graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle\rho\leq\frac{5}{2}\left\lfloor\frac{n}{\delta^{2}-2\left\lfloor\frac{\delta}{2}\right\rfloor+1}\right\rfloor+2.$ The next theorem shows that for many values of $\displaystyle\delta$ the bound on remoteness is close to being best possible, in the sense that the ratio of the coefficients of $\displaystyle n$ in the bound and in the construction approaches $\displaystyle 1$ as $\displaystyle\delta$ gets large. ###### Theorem 2.26 ([20]). Let $\displaystyle\delta\geq 3$ be an integer such that $\displaystyle\delta=q-1$ for some prime power $\displaystyle q$. Then there exists an infinite number of $\displaystyle C_{4}$-free graphs $\displaystyle G$ of minimum degree at least $\displaystyle\delta$ with $\displaystyle\displaystyle\rho$ $\displaystyle\displaystyle=\frac{5}{2}\frac{n}{\delta^{2}+3\delta+2}+O(1),$ $\displaystyle\displaystyle\pi$ $\displaystyle\displaystyle=\frac{5}{4}\frac{n}{\delta^{2}+3\delta+2}+O(1),$ where $\displaystyle n$ is the order of $\displaystyle G.$ ###### Theorem 2.27 ([20]). If $\displaystyle G$ is a connected $\displaystyle C_{4}$-free graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle\pi\leq\frac{5}{4}\left\lfloor\frac{n}{\delta^{2}-2\left\lfloor\frac{\delta}{2}\right\rfloor+1}\right\rfloor+\frac{147}{32}.$ Dankelmann and Mafunda [21] gave results on the difference $\displaystyle\rho-\pi$ in triangle-free and $\displaystyle C_{4}$-free graphs. ###### Theorem 2.28 ([21]). If $\displaystyle G$ is a connected, triangle-free graph of order $\displaystyle n\geq 7$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle\rho-\pi\leq\frac{n+1}{2\delta}+4.$ ###### Theorem 2.29 ([21]). 
If $\displaystyle G$ is a connected, $\displaystyle C_{4}$-free graph of order $\displaystyle n\geq 6$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle\rho-\pi\leq\frac{5(n+1)}{4\left(\delta^{2}-2\left\lfloor\frac{\delta}{2}\right\rfloor+1\right)}+\frac{101}{20}.$ ###### Theorem 2.30 ([22]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n,$ minimum degree $\displaystyle\delta$ and maximum degree $\displaystyle\triangle$. If $\displaystyle\triangle>\frac{n}{2}-1,$ then $\displaystyle\pi\leq\frac{3(n-\triangle)^{2}}{2(n-1)(\delta+1)}+\frac{13}{2}.$ If $\displaystyle\triangle\leq\frac{n}{2}-1,$ then $\displaystyle\pi\leq\frac{3(n^{2}-2\triangle^{2})}{4(n-1)(\delta+1)}+\frac{35}{4}.$ ###### Theorem 2.31 ([22]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$, minimum degree $\displaystyle\delta$ and maximum degree $\displaystyle\triangle$. Then there exists a spanning tree $\displaystyle T$ of $\displaystyle G$ with $\displaystyle\rho(T)\leq\frac{3(n^{2}-\triangle^{2})}{2(n-1)(\delta+1)}+7.$ Since $\displaystyle\rho(G)\leq\rho(T)$ for every spanning tree $\displaystyle T$ of a connected graph $\displaystyle G,$ the following result follows from Theorem 2.31. ###### Corollary 2.32 ([22]). If $\displaystyle G$ is a connected graph of order $\displaystyle n$, minimum degree $\displaystyle\delta$ and maximum degree $\displaystyle\triangle$, then $\displaystyle\rho\leq\frac{3(n^{2}-\triangle^{2})}{2(n-1)(\delta+1)}+7.$ A 2022 thesis by Mallu about the proximity and the remoteness of graphs, jointly supervised by Dankelmann and Mafunda, can be found in [38]. ## 3 Proximity and Remoteness Compared to Metric Invariants Aouchiche and Hansen in 2011 compared proximity and remoteness with the metric invariants of a graph [4]. ###### Proposition 3.1 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with diameter $\displaystyle D$ and proximity $\displaystyle\pi$. 
Then $\displaystyle D-\pi\leq\left\\{\begin{array}[]{ll}\frac{3n-5}{4}&\quad\mbox{ if $\displaystyle n$ is odd, }\\\ \frac{3n-5}{4}-\frac{1}{4n-4}&\quad\mbox{ if $\displaystyle n$ is even, }\end{array}\right.$ with equality if and only if $\displaystyle G$ is a path $\displaystyle P_{n}$. ###### Proposition 3.2 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and diameter $\displaystyle D$. Then $\displaystyle D-\rho\leq\frac{n-2}{2},$ with equality if and only if $\displaystyle G$ is the path $\displaystyle P_{n}$. ###### Proposition 3.3 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with radius $\displaystyle r$ and proximity $\displaystyle\pi$. Then $\displaystyle r-\pi\leq\left\\{\begin{array}[]{ll}\frac{n-1}{4}-\frac{1}{4(n-1)}&\mbox{if $\displaystyle n$ is even, }\\\ \frac{n-1}{4}-\frac{1}{n-1}&\mbox{ if $\displaystyle n$ is odd. }\end{array}\right.$ The bound is best possible as shown by the graph composed of a cycle with an additional edge forming a triangle or two additional crossed edges on four successive vertices of the cycle if $\displaystyle n$ is odd, and by the path $\displaystyle P_{n}$ or the cycle $\displaystyle C_{n}$ if $\displaystyle n$ is even. ###### Proposition 3.4 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and radius $\displaystyle r$. 
Then $\displaystyle\rho-r\leq\left\\{\begin{array}[]{lll}\frac{n+1}{8}+\frac{1}{8(n-1)}&&\mbox{ if }n=0\,\,(mod\,\, 4),\\\ \frac{n+1}{8}&&\mbox{ if }n=1\,\,(mod\,\, 4),\\\ \frac{n+1}{8}-\frac{3}{8(n-1)}&&\mbox{ if }n=2\,\,(mod\,\, 4),\\\ \frac{n+1}{8}&&\mbox{ if }n=3\,\,(mod\,\, 4).\\\ \end{array}\right.$ The bound is attained if and only if $\displaystyle G$ is a pseudo-kite $\displaystyle PKI_{n,n-2r^{*}+1}$ where $\displaystyle r^{*}=\left\\{\begin{array}[]{lll}\frac{n}{4}&&\mbox{ if }n=0\,\,(mod\,\, 4),\\\ \frac{n-1}{4}&&\mbox{ if }n=1\,\,(mod\,\, 4),\\\ \frac{n-2}{4}\mbox{ or }\frac{n+2}{4}&&\mbox{ if }n=2\,\,(mod\,\, 4),\\\ \frac{n+1}{4}&&\mbox{ if }n=3\,\,(mod\,\, 4).\\\ \end{array}\right.$ ###### Proposition 3.5 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and average eccentricity $\displaystyle ecc$. Then $\displaystyle\rho\leq ecc$ with equality if and only if $\displaystyle G$ is a complete graph $\displaystyle K_{n}$. Ma, Wu and Zhang [40] gave the following lemmas and proved a conjecture that consists of an upper bound on $\displaystyle ecc-\pi$ together with the corresponding extremal graphs. A vertex $\displaystyle v\in V$ is called a centroidal vertex if $\displaystyle\pi(v)=\pi(G)$, and the set of all centroidal vertices is the centroid (sometimes known as median or barycenter) of $\displaystyle G.$ For an edge $\displaystyle e\in E(G)$ and a vertex $\displaystyle u\in V(G)$, we denote by $\displaystyle(G-e)_{u}$ the component of $\displaystyle G-e$ containing $\displaystyle u$, and let $\displaystyle V_{u}(e)=V((G-e)_{u})$ and $\displaystyle n_{u}(e)=|V_{u}(e)|$. Clearly, for any edge $\displaystyle e=uv\in E(G),~{}n_{u}(e)+n_{v}(e)=|V(G)|$. The following lemmas are given in [40]. ###### Lemma 3.6 ([40]). Let $\displaystyle G$ be a tree of order $\displaystyle n$. 
For any edge $\displaystyle e=uv\in E(G),$ $\displaystyle \pi(u)+\frac{1}{n-1}n_{u}(e)=\pi(v)+\frac{1}{n-1}n_{v}(e).$ ###### Lemma 3.7 ([40]). The following holds for a tree $\displaystyle G$ of order $\displaystyle n$. * (i) If $\displaystyle x$ is a centroidal vertex of $\displaystyle G$, then $\displaystyle ecc(x)\leq\left\lfloor\frac{n}{2}\right\rfloor,$ with equality if and only if there exists a path of length $\displaystyle\left\lfloor\frac{n}{2}\right\rfloor$ in $\displaystyle G$, which joins $\displaystyle x$ to a pendent vertex of $\displaystyle G$ with the property that the degree of every internal vertex of it is equal to two in $\displaystyle G$. * (ii) If there is a path $\displaystyle P$ of length $\displaystyle\left\lfloor\frac{n}{2}\right\rfloor$ in $\displaystyle G$, which joins a vertex $\displaystyle y$ and a pendent vertex of $\displaystyle G$ with the property that the degree of every internal vertex of it is equal to two in $\displaystyle G$, then $\displaystyle y$ is a centroidal vertex of $\displaystyle G$. ###### Lemma 3.8 ([40]). Let $\displaystyle G$ be a tree of order $\displaystyle n\geq 3$. Let $\displaystyle v_{0}v_{1}\dots v_{d}$ be a longest path in $\displaystyle G$. Set $\displaystyle V_{0}=\\{v\in V(G)|d(v,v_{0})\geq d(v,v_{d})\\},~{}V_{d}=\\{v\in V(G)|d(v,v_{d})\geq d(v,v_{0})\\}$. Without loss of generality, let $\displaystyle|V_{0}|\geq|V_{d}|$. If $\displaystyle G$ is not a path, then the following holds: * (1) there is a pendent vertex $\displaystyle v^{\ast}\in V_{0}$ distinct from $\displaystyle v_{d}$, * (2) $\displaystyle ecc(G)-ecc(G^{\prime})\geq\frac{1}{2}$, where $\displaystyle G^{\prime}$ is the tree obtained from $\displaystyle G$ by deleting the edge incident with $\displaystyle v^{\ast}$ and joining $\displaystyle v^{\ast}$ and $\displaystyle v_{0}$, * (3) $\displaystyle\pi(G)-\pi(G^{\prime})<\frac{1}{2}$, where $\displaystyle G^{\prime}$ is the tree as defined in (2). 
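Lemma 3.6 lends itself to a direct computational check. The sketch below (pure Python; the helper names are mine) verifies the identity $\displaystyle\pi(u)+\frac{n_{u}(e)}{n-1}=\pi(v)+\frac{n_{v}(e)}{n-1}$ over every edge of a small non-path tree, using exact rational arithmetic:

```python
from collections import deque
from fractions import Fraction

def bfs_dist(adj, s):
    """Single-source distances in an unweighted graph given as {v: set of neighbours}."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def proximity_of(adj, v):
    """pi(v): exact average distance from v to the other vertices."""
    return Fraction(sum(bfs_dist(adj, v).values()), len(adj) - 1)

def side_size(adj, u, v):
    """n_u(e): order of the component of T - uv containing u (BFS skipping the edge uv)."""
    seen = {u}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if (x, y) == (u, v):  # pretend the edge uv has been deleted
                continue
            if y not in seen:
                seen.add(y)
                q.append(y)
    return len(seen)

# A 7-vertex tree that is not a path.
tree = {0: {1}, 1: {0, 2, 5}, 2: {1, 3, 4}, 3: {2}, 4: {2}, 5: {1, 6}, 6: {5}}
n = len(tree)
for u in tree:
    for v in tree[u]:
        lhs = proximity_of(tree, u) + Fraction(side_size(tree, u, v), n - 1)
        rhs = proximity_of(tree, v) + Fraction(side_size(tree, v, u), n - 1)
        assert lhs == rhs  # Lemma 3.6 holds on this edge
```

The identity reflects the usual tree argument: crossing the edge $\displaystyle uv$ changes the total distance by $\displaystyle n_{v}(e)-n_{u}(e)$.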
Lemmas 3.6, 3.7 and 3.8 help in establishing the following result and verify a conjecture of [4]. ###### Theorem 3.9 ([40]). For a connected graph $\displaystyle G$ on $\displaystyle n\geq 3$ vertices, $\displaystyle ecc-\pi\leq\left\\{\begin{array}[]{lll}\frac{(3n+1)(n-1)}{4n}-\frac{n+1}{4}&&\mbox{ if $\displaystyle n$ is odd,}\\\ \frac{n-1}{2}-\frac{n}{4(n-1)}&&\mbox{ if $\displaystyle n$ is even,}\end{array}\right.$ with equality if and only if $\displaystyle G$ is the path $\displaystyle P_{n}$. Sedlar [50] studied three AutoGraphiX conjectures involving proximity and remoteness. She solved one of them by using the following graph transformation for trees. Let $\displaystyle G$ be a tree. Then the transformation transforms $\displaystyle G$ into $\displaystyle G^{\prime}$, where $\displaystyle G^{\prime}$ is either * 1) the path $\displaystyle P_{n}$, * 2) a tree consisting of four paths of equal length with a common end point, or * 3) a tree consisting of three paths of almost equal length with a common end point. Next, the following results prove that among those graphs the difference $\displaystyle\overline{\ell}-\pi$ is maximal for the last one. ###### Lemma 3.10 ([50]). The difference $\displaystyle\overline{\ell}-\pi$ is greater for a tree $\displaystyle G$ on $\displaystyle n\geq 4$ vertices consisting of three paths of almost equal length with a common end point than for the path $\displaystyle P_{n}$. ###### Lemma 3.11 ([50]). The difference $\displaystyle\overline{\ell}-\pi$ is greater for the tree $\displaystyle G$ on $\displaystyle n\geq 9$ vertices, where $\displaystyle n=\,\,1\,\,(mod\,\,4)$, consisting of three paths of almost equal length with a common end point than for the tree $\displaystyle G^{\prime}$ on $\displaystyle n$ vertices consisting of four paths of equal length, where $\displaystyle G^{\prime}$ is the tree obtained from $\displaystyle G$ by the transformation above. ###### Lemma 3.12 ([50]). 
Let $\displaystyle G$ be a tree on $\displaystyle n\geq 6$ vertices with at least four leaves (a leaf is a vertex of degree $\displaystyle 1$ in a tree). Then there is a tree $\displaystyle G^{\prime}$ on $\displaystyle n$ vertices with three leaves for which the difference $\displaystyle\overline{\ell}-\pi$ is greater than or equal to that for $\displaystyle G$. ###### Lemma 3.13 ([50]). Among trees with three leaves, the difference $\displaystyle\overline{\ell}-\pi$ is maximal for the tree $\displaystyle G$ on $\displaystyle n$ vertices consisting of three paths of almost equal length with a common end vertex. Lemmas 3.10, 3.11, 3.12 and 3.13 can be summarized into the following theorem. ###### Theorem 3.14 ([50]). Among all trees on $\displaystyle n\geq 4~{}(n\neq 5)$ vertices with average distance $\displaystyle\overline{\ell}$ and proximity $\displaystyle\pi$, the difference $\displaystyle\overline{\ell}-\pi$ is maximal for a tree $\displaystyle G$ composed of three paths of almost equal lengths with a common end vertex. From Theorem 3.14, the following result follows and settles a conjecture involving proximity and remoteness. ###### Theorem 3.15 ([50]). Among all connected graphs $\displaystyle G$ on $\displaystyle n\geq 3$ vertices with average distance $\displaystyle\overline{\ell}$ and proximity $\displaystyle\pi$, the difference $\displaystyle\overline{\ell}-\pi$ is maximum for a graph $\displaystyle G$ composed of three paths of almost equal lengths with a common end point. Sedlar [50] also proved partial results related to another conjecture: while the conjecture is stated for all connected graphs, the results are proved for trees only. ###### Lemma 3.16 ([50]). Let $\displaystyle G$ be a tree on $\displaystyle n$ vertices with diameter $\displaystyle D$ and let $\displaystyle P=v_{0}v_{1}\dots v_{D}$ be a diametric path in $\displaystyle G$. 
If there is $\displaystyle j\leq\frac{D}{2}$ such that the degree of $\displaystyle v_{k}$ is at most $\displaystyle 2$ for $\displaystyle k\geq j+1$, then the difference $\displaystyle ecc-\rho$ is at least as large for the path $\displaystyle P_{n}$ as for $\displaystyle G.$ ###### Theorem 3.17 ([50]). Among all trees on $\displaystyle n\geq 3$ vertices, the difference $\displaystyle ecc-\rho$ is maximum for the path $\displaystyle P_{n}$. The following sequence of results, relating the remoteness $\displaystyle\rho$ to the radius $\displaystyle r$, is given by Sedlar [50]. ###### Lemma 3.18 ([50]). Let $\displaystyle G$ be a tree on $\displaystyle n$ vertices. There is a caterpillar tree $\displaystyle G^{\prime}$ on $\displaystyle n$ vertices for which the difference $\displaystyle\rho-r$ is less than or equal to that for $\displaystyle G$. ###### Lemma 3.19 ([50]). Let $\displaystyle G\neq P_{n}$ be a caterpillar tree on $\displaystyle n$ vertices with diameter $\displaystyle D$, remoteness $\displaystyle\rho$ and only one centroidal vertex. Let $\displaystyle P=v_{0}v_{1}\dots v_{D}$ be the diametric path in $\displaystyle G$ such that $\displaystyle v_{j}\in P$ is the only centroidal vertex in $\displaystyle G$ and each of the vertices $\displaystyle v_{j+1},\dots,v_{D}$ has degree at most $\displaystyle 2$. Then there is a caterpillar tree $\displaystyle G^{\prime}$ on $\displaystyle n$ vertices of diameter $\displaystyle D+1$ and remoteness at most $\displaystyle\rho+\frac{1}{2}.$ ###### Lemma 3.20 ([50]). Let $\displaystyle G\neq P_{n}$ be a caterpillar tree on $\displaystyle n$ vertices with diameter $\displaystyle D$, remoteness $\displaystyle\rho$ and exactly two centroidal vertices. Let $\displaystyle P=v_{0}v_{1}\dots v_{D}$ be a diametric path in $\displaystyle G$ such that $\displaystyle v_{j},v_{j+1}\in P$ are centroidal vertices and each of the vertices $\displaystyle v_{j+1},\dots,v_{D}$ has degree at most $\displaystyle 2$. 
Then there is a caterpillar tree $\displaystyle G^{\prime}$ on $\displaystyle n$ vertices of diameter $\displaystyle D+1$ and remoteness at most $\displaystyle\rho+\frac{1}{2}.$ ###### Lemma 3.21 ([50]). Let $\displaystyle G\neq P_{n}$ be a caterpillar tree on $\displaystyle n$ vertices with diameter $\displaystyle D$, remoteness $\displaystyle\rho$ and exactly two centroidal vertices of different degrees. Let $\displaystyle P=v_{0}v_{1}\dots v_{D}$ be a diametric path in $\displaystyle G$ such that $\displaystyle v_{j},v_{j+1}\in P$ are centroidal vertices and each of the vertices $\displaystyle v_{0},\dots,v_{j-1},v_{j+2},\dots,v_{D}$ has degree at most $\displaystyle 2$. Then there is a caterpillar tree $\displaystyle G^{\prime}$ on $\displaystyle n$ vertices of diameter $\displaystyle D+1$ and remoteness at most $\displaystyle\rho+\frac{1}{2}.$ ###### Lemma 3.22 ([50]). Let $\displaystyle G\neq P_{n}$ be a caterpillar tree on $\displaystyle n$ vertices with diameter $\displaystyle D$, remoteness $\displaystyle\rho$ and exactly two centroidal vertices of equal degrees. Let $\displaystyle P=v_{0}v_{1}\dots v_{D}$ be a diametric path in $\displaystyle G$ such that $\displaystyle v_{j},v_{j+1}\in P$ are centroidal vertices and each of the vertices $\displaystyle v_{0},\dots,v_{j-1},v_{j+2},\dots,v_{D}$ has degree at most $\displaystyle 2$. Then the difference $\displaystyle\rho-r$ is at most as large for the path $\displaystyle P_{n}$ as for $\displaystyle G.$ ###### Lemma 3.23 ([50]). Let $\displaystyle G$ be a caterpillar tree on $\displaystyle n$ vertices. If $\displaystyle n$ is odd, then the difference $\displaystyle\rho-r$ is at most as large for the path $\displaystyle P_{n}$ as for $\displaystyle G$. If $\displaystyle n$ is even, then the difference $\displaystyle\rho-r$ is at most as large for the path $\displaystyle P_{n-1}$ with a leaf appended to a central vertex as for $\displaystyle G$. 
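For orientation, the path values that these lemmas compare against can be computed directly: $\displaystyle\rho(P_{n})=\frac{n}{2}$ (attained at an endpoint, since $\displaystyle\sum_{i=1}^{n-1}i=\frac{n(n-1)}{2}$) and $\displaystyle r(P_{n})=\left\lceil\frac{n-1}{2}\right\rceil=\left\lfloor\frac{n}{2}\right\rfloor$. A short check in pure Python (helper names are mine):

```python
from collections import deque
from fractions import Fraction

def bfs_dist(adj, s):
    """Single-source distances in an unweighted graph given as {v: set of neighbours}."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def remoteness(adj):
    """rho(G): maximum exact average distance over all vertices."""
    n = len(adj)
    return max(Fraction(sum(bfs_dist(adj, v).values()), n - 1) for v in adj)

def radius(adj):
    """r(G): minimum eccentricity over all vertices."""
    return min(max(bfs_dist(adj, v).values()) for v in adj)

def path(n):
    """The path P_n on vertices 0, ..., n-1."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

for n in range(3, 12):
    p = path(n)
    assert remoteness(p) == Fraction(n, 2)  # rho(P_n) = n/2
    assert radius(p) == n // 2              # r(P_n) = floor(n/2)
    # hence rho - r equals 1/2 when n is odd and 0 when n is even
```

The loop confirms the two closed-form path values for all small $\displaystyle n$ at once.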
Lemmas 3.18, 3.19, 3.20, 3.21, 3.22 and 3.23 can be summarized in the following result, which gives minimal trees for $\displaystyle\rho-r.$ ###### Theorem 3.24 ([50]). Let $\displaystyle G$ be a tree on $\displaystyle n$ vertices. If $\displaystyle n$ is odd, then the difference $\displaystyle\rho-r$ is at most as large for the path $\displaystyle P_{n}$ as for $\displaystyle G$. If $\displaystyle n$ is even, then the difference $\displaystyle\rho-r$ is at most as large for the path $\displaystyle P_{n-1}$ with a leaf appended to a central vertex as for $\displaystyle G$. Let $\displaystyle G$ be a connected graph. A vertex $\displaystyle u\in V(G)$ is called a peripheral vertex if $\displaystyle\sigma(u)=\rho(G)$. For a vertex $\displaystyle u\in V(G),$ let $\displaystyle V_{i}(u)=\\{v\in V(G)|d(u,v)=i\\}$ and $\displaystyle n_{i}(u)=|V_{i}(u)|$ for each $\displaystyle i\in\\{1,2,\dots,d\\},$ where $\displaystyle d=e_{G}(u).$ In what follows, $\displaystyle V_{i}(u)$ is simply denoted by $\displaystyle V_{i}$ for a peripheral vertex $\displaystyle u$ of $\displaystyle G.$ Wu and Zhang [56] proved several lemmas and two theorems that were first conjectured in [4]. ###### Lemma 3.25 ([56]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 3.$ Let $\displaystyle u$ be a peripheral vertex of $\displaystyle G$ and let $\displaystyle d=e_{G}(u).$ Let $\displaystyle G^{\prime}$ be the graph obtained from $\displaystyle G$ by joining each pair of all nonadjacent vertices $\displaystyle x,y$ of $\displaystyle G,$ where $\displaystyle x,y\in V_{j}\cup V_{j+1}$ for some $\displaystyle j\in\\{1,2,\dots,d-1\\}$. We have $\displaystyle\rho(G^{\prime})-\overline{\ell}(G^{\prime})\geq\rho(G)-\overline{\ell}(G),$ with equality if and only if $\displaystyle G^{\prime}=G.$ ###### Lemma 3.26 ([56]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 3$. 
Let $\displaystyle u$ be a peripheral vertex of $\displaystyle G$ and $\displaystyle e_{G}(u)=d.$ Assume that $\displaystyle G[V_{j}\cup V_{j+1}]$ is a clique for each $\displaystyle j\in\\{0,1,\dots,d-1\\}$. Let $\displaystyle G^{\prime}$ be the graph with $\displaystyle V(G^{\prime})=V(G)$ and $\displaystyle E(G^{\prime})=E(G)\cup\\{xy:x\in V_{d},y\in V_{d},x\neq y\\}.$ If $\displaystyle d>\left\lfloor\frac{n+1}{2}\right\rfloor,$ then $\displaystyle\rho(G^{\prime})-\overline{\ell}(G^{\prime})\leq\rho(G)-\overline{\ell}(G),$ with equality if and only if $\displaystyle n$ is even and $\displaystyle d=\frac{n}{2}+1.$ ###### Lemma 3.27 ([56]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 3$. Let $\displaystyle u$ be a peripheral vertex of $\displaystyle G$ and $\displaystyle e_{G}(u)=d.$ Assume that $\displaystyle G[V_{j}\cup V_{j+1}]$ is a clique for each $\displaystyle j\in\\{0,1,\dots,d-1\\}$. Let $\displaystyle i$ be the smallest integer in $\displaystyle\\{1,2,\dots,d\\}$ such that $\displaystyle n_{i}(u)\geq 2.$ Let $\displaystyle V_{i-1}(u)=\\{u_{i-1}\\}$ and $\displaystyle v$ a vertex in $\displaystyle V_{i}(u)$. Denote by $\displaystyle G^{\prime}$ the graph with $\displaystyle V(G^{\prime})=V(G)$ and $\displaystyle E(G^{\prime})=E(G)\setminus(\\{u_{i-1}y:y\in V_{i}\setminus\\{v\\}\\}\cup A),$ where $\displaystyle A=\\{vx:x\in V_{i+1}\\},$ if $\displaystyle i\leq d-1$, and $\displaystyle A=\emptyset$ otherwise. If $\displaystyle d<\left\lfloor\frac{n+1}{2}\right\rfloor$, then $\displaystyle\rho(G^{\prime})-\overline{\ell}(G^{\prime})>\rho(G)-\overline{\ell}(G).$ ###### Lemma 3.28 ([56]). 
Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 3.$ Let $\displaystyle u$ be a peripheral vertex of $\displaystyle G$ and $\displaystyle e_{G}(u)=d.$ Assume that $\displaystyle G[V_{j}\cup V_{j+1}]$ is a clique for each $\displaystyle j\in\\{0,1,\dots,d-1\\}$ and that $\displaystyle n_{i}(u)\geq 2$ for some $\displaystyle i\in\\{1,2,\dots,d-1\\}$. Further, assume that $\displaystyle i$ is the minimum subject to the above condition. Let $\displaystyle v$ be a vertex in $\displaystyle V_{i}(u)$ and $\displaystyle V_{i-1}=\\{u_{i-1}\\}$. Let $\displaystyle G^{\prime}$ be the graph with $\displaystyle V(G^{\prime})=V(G)$ and $\displaystyle E(G^{\prime})=E(G)\cup A\setminus\\{vu_{i-1}\\},$ where $\displaystyle A=\\{vy:y\in V_{i+2}\\}$ if $\displaystyle i\leq d-2$, and $\displaystyle A=\emptyset$ otherwise. If $\displaystyle d=\left\lfloor\frac{n+1}{2}\right\rfloor,$ then $\displaystyle\rho(G^{\prime})-\overline{\ell}(G^{\prime})>\rho(G)-\overline{\ell}(G).$ A Soltés or path-complete graph [53] is the graph obtained from a clique and a path by adding at least one edge between an endpoint of the path and the clique. The Soltés graphs are known to maximize the average distance $\displaystyle\overline{\ell}$ when the number of vertices and of edges are fixed [53]. Lemmas 3.25, 3.26, 3.27 and 3.28 lead to the following result. ###### Theorem 3.29 ([56]). Among all connected graphs $\displaystyle G$ on $\displaystyle n\geq 3$ vertices with average distance $\displaystyle\overline{\ell}$ and remoteness $\displaystyle\rho$, the Soltés graphs with diameter $\displaystyle\lfloor(n+1)/2\rfloor$ maximize the difference $\displaystyle\rho-\overline{\ell}$. Theorem 3.29 can be equivalently stated as ###### Theorem 3.30 ([56]). 
Among all connected graphs $\displaystyle G$ on $\displaystyle n\geq 3$ vertices with average distance $\displaystyle\overline{\ell}$ and remoteness $\displaystyle\rho$, the maximum value of $\displaystyle\rho-\overline{\ell}$ is attained by the Soltés graphs with diameter $\displaystyle D$, where $\displaystyle\begin{cases}D=\frac{n+1}{2},&\text{if $\displaystyle n$ is odd}\\\ D\in\\{\frac{n}{2},\frac{n}{2}+1\\},&\text{if $\displaystyle n$ is even}.\end{cases}$ Wu and Zhang [56] proved the following corollary, which helped them in proving Theorem 3.32, first conjectured in [4]. ###### Corollary 3.31 ([56]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 5$. If $\displaystyle n$ is odd and $\displaystyle r=\frac{n-1}{2}$, then $\displaystyle\rho\geq\dfrac{n+1}{4}$, with equality if and only if $\displaystyle G$ is the cycle $\displaystyle C_{n}$ or the graph composed of the cycle $\displaystyle C_{n}$ together with two crossed edges on four successive vertices of the cycle. ###### Theorem 3.32 ([56]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and radius $\displaystyle r$. Then $\displaystyle\rho-r\geq\left\\{\begin{array}[]{lll}\frac{3-n}{4}&&\mbox{ if $\displaystyle n$ is odd }\\\ \frac{n^{2}}{4n-4}-\frac{n}{2}&&\mbox{ if $\displaystyle n$ is even.}\end{array}\right.$ The inequality is best possible as shown by the cycle $\displaystyle C_{n}$ if $\displaystyle n$ is even and by the graph composed of the cycle $\displaystyle C_{n}$ together with two crossed edges on four successive vertices of the cycle if $\displaystyle n$ is odd. Figure 7: Graphs $\displaystyle C_{n}^{1},C_{n}^{2}$ and $\displaystyle C_{n}^{3}$ ($\displaystyle n$ is odd) Hau, Chen and Das [31] obtained the following result, earlier conjectured in [4]. ###### Theorem 3.33 ([31]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and radius $\displaystyle r$. 
If $\displaystyle n$ is odd, then $\displaystyle\rho-r\geq\frac{3-n}{4}$ with equality if and only if $\displaystyle G\cong C_{n}$ or $\displaystyle C_{n}^{i},~{}i=1,2,3$ (see Fig. 7), and if $\displaystyle n$ is even, then $\displaystyle\rho-r\geq\frac{2n-n^{2}}{4(n-1)}$ with equality if and only if $\displaystyle G\cong C_{n}.$ Dankelmann related proximity and remoteness to the minimum degree and gave several interesting results [18]. ###### Theorem 3.34 ([18]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta$, where $\displaystyle\delta\geq 2$. Then there exists a spanning tree $\displaystyle T$ of $\displaystyle G$ with $\displaystyle\pi(T)\leq\frac{3n}{4(\delta+1)}+3$ and $\displaystyle\rho(T)\leq\frac{3n}{2(\delta+1)}+\frac{7}{2}.$ The following is an immediate consequence of Theorem 3.34. ###### Corollary 3.35 ([18]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta$, where $\displaystyle\delta\geq 2$. Then $\displaystyle\pi(G)\leq\frac{3n}{4(\delta+1)}+3$ and $\displaystyle\rho(G)\leq\frac{3n}{2(\delta+1)}+\frac{7}{2}.$ Next, we have a result giving an upper bound for $\displaystyle\rho-\pi$ in terms of the order $\displaystyle n$ and minimum degree $\displaystyle\delta.$ ###### Theorem 3.36 ([18]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta$, where $\displaystyle\delta\geq 2$. Then $\displaystyle\rho-\pi\leq\frac{3n}{4(\delta+1)}+3.$ Dankelmann [19] obtained some new bounds on proximity and remoteness. ###### Theorem 3.37 ([19]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta$, where $\displaystyle n\geq 20$ and $\displaystyle\delta\geq 2$. Then $\displaystyle D-\pi\leq\frac{9n}{4(\delta+1)}+\frac{3\delta}{4}.$ The next result gives a sharp bound for remoteness in terms of the diameter and the order of the graph. ###### Proposition 3.38 ([19]). 
Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ and diameter $\displaystyle D$. Then $\displaystyle\rho\geq\frac{nD}{2(n-1)},$ and this bound is sharp for all $\displaystyle n$ and $\displaystyle D$ with $\displaystyle n\geq D+1\geq 3$ for which $\displaystyle nD$ is even. ###### Corollary 3.39 ([19]). Let $\displaystyle G$ be a connected graph of diameter $\displaystyle D.$ Then $\displaystyle\rho>\frac{D}{2},$ and the coefficient $\displaystyle\frac{1}{2}$ is best possible. ###### Theorem 3.40 ([19]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ and minimum degree $\displaystyle\delta$, where $\displaystyle\delta<\frac{n}{4}-1.$ Then $\displaystyle r-\pi\leq\frac{3n}{4(\delta+1)}+\frac{8\delta+5}{4(\delta+1)},$ and this bound is best possible, apart from an additive constant. ###### Corollary 3.41 ([19]). Let $\displaystyle G$ be a connected graph. Then $\displaystyle\rho>\frac{r}{2}.$ The girth $\displaystyle g$ of a graph $\displaystyle G$ is the length of its shortest cycle. A lollipop $\displaystyle L_{n,g}$ is the graph obtained from a cycle $\displaystyle C_{g}$ and a path $\displaystyle P_{n-g}$ by adding an edge between an endpoint of $\displaystyle P_{n-g}$ and a vertex of the cycle $\displaystyle C_{g}$. The lollipop $\displaystyle L_{11,7}$ is illustrated in Figure 8. For a lollipop $\displaystyle L_{n,g}$, we have $\displaystyle\rho(L_{n,g})=\left\\{\begin{array}[]{ccc}\frac{n}{2}-\frac{g(g-2)}{4(n-1)}&&\mbox{ if $\displaystyle g$ is even }\\\ \frac{n}{2}-\frac{g(g-2)+1}{4(n-1)}&&\mbox{ if $\displaystyle g$ is odd. }\end{array}\right.$ A turnip $\displaystyle T_{n,g}$, with $\displaystyle n\geq g\geq 3$, is the graph obtained from a cycle $\displaystyle C_{g}$ by attaching $\displaystyle n-g$ pendant edges to one vertex of the cycle. The turnip $\displaystyle T_{10,5}$ is illustrated in Figure 9. If $\displaystyle g=n$, the turnip $\displaystyle T_{n,g}=T_{n,n}$ is the cycle $\displaystyle C_{n}$. 
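The closed formula for $\displaystyle\rho(L_{n,g})$ above can be checked directly from the definition $\displaystyle\rho=\max_{v}\sigma(v)$, where $\displaystyle\sigma(v)$ is the average distance from $\displaystyle v$. The following sketch (an illustrative computation, not code from the cited papers; the helper names are ours) recomputes the remoteness of the lollipop $\displaystyle L_{11,7}$ of Figure 8 by breadth-first search:

```python
from collections import deque

def avg_dists(adj):
    """For each vertex v, return sigma(v) = (sum of d(v, u)) / (n - 1),
    computed by BFS on an unweighted connected graph."""
    n = len(adj)
    out = []
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    q.append(w)
        out.append(sum(dist) / (n - 1))
    return out

def lollipop(n, g):
    """L_{n,g}: cycle C_g on vertices 0..g-1, plus a path on the
    remaining n-g vertices attached by an edge to vertex 0."""
    adj = [set() for _ in range(n)]
    def add(u, v):
        adj[u].add(v); adj[v].add(u)
    for i in range(g):
        add(i, (i + 1) % g)
    for i in range(g, n):
        add(i - 1 if i > g else 0, i)
    return [sorted(s) for s in adj]

sigma = avg_dists(lollipop(11, 7))
rho = max(sigma)  # remoteness, attained at the far endpoint of the path
# closed formula for odd g: n/2 - (g(g-2)+1)/(4(n-1))
print(rho, 11 / 2 - (7 * 5 + 1) / (4 * 10))  # both 4.6
```

The maximizing vertex is the endpoint of the path farthest from the cycle, in line with the intuition that lollipops push remoteness up while keeping the girth fixed.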
For a turnip $\displaystyle T_{n,g}$, we have $\displaystyle\pi(T_{n,g})=\left\\{\begin{array}[]{ccl}\frac{g^{2}-4g+4n-1}{4(n-1)}&&\mbox{ if $\displaystyle g$ is odd}\\\ \frac{g^{2}-4g+4n}{4(n-1)}&&\mbox{ if $\displaystyle g$ is even.}\end{array}\right.$ Figure 8: The lollipop $\displaystyle L_{11,7}$. Figure 9: The turnip $\displaystyle T_{10,5}$. ###### Lemma 3.42 ([10]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with girth $\displaystyle g$. $\displaystyle(i)$ If $\displaystyle g=3$, then $\displaystyle\pi\geq 1$ with equality if and only if $\displaystyle G$ contains a dominating vertex. $\displaystyle(ii)$ If $\displaystyle g=4$, then $\displaystyle\pi\geq n/(n-1)$ with equality if and only if $\displaystyle G$ contains the turnip $\displaystyle T_{n,4}$ as a spanning subgraph and is a spanning subgraph of the complete bipartite graph $\displaystyle K_{n-2,2}$. $\displaystyle(iii)$ If $\displaystyle g\geq 5$, then $\displaystyle\pi\geq\pi(T_{n,g})$ with equality if and only if $\displaystyle G$ is the turnip $\displaystyle T_{n,g}$. ###### Theorem 3.43 ([10]). 
For any connected graph $\displaystyle G$ on $\displaystyle n\geq 3$ vertices with a finite girth $\displaystyle g$ and proximity $\displaystyle\pi$, we have $\left.\begin{array}[]{lr}\mbox{if $\displaystyle n$ is odd, }&\frac{1-3n}{4}\\\ \mbox{if $\displaystyle n$ is even, }&\frac{4n-3n^{2}}{4n-4}\end{array}\right\\}\leq\pi-g\leq\left\\{\begin{array}[]{ll}\frac{n-11}{4}-\frac{1}{n-1}&\mbox{if $\displaystyle n$ is odd, }\\\ \frac{n-11}{4}-\frac{3}{4(n-1)}&\mbox{if $\displaystyle n$ is even; }\end{array}\right.$ (2) $4\leq\pi+g\leq\left\\{\begin{array}[]{lr}\frac{5n+1}{4}&\mbox{if $\displaystyle n$ is odd, }\\\ \frac{5n^{2}-4n}{4(n-1)}&\mbox{if $\displaystyle n$ is even; }\end{array}\right.$ (3) $\frac{1}{2\left\lfloor\sqrt{n}\right\rfloor+1}+\frac{\left\lfloor\sqrt{n}\right\rfloor(\left\lfloor\sqrt{n}\right\rfloor-1)}{(2\left\lfloor\sqrt{n}\right\rfloor+1)(n-1)}\leq\frac{\pi}{g}\leq\left\\{\begin{array}[]{ll}\frac{n^{2}-4}{12n-12}&\mbox{if $\displaystyle n$ is even, }\\\ \frac{n+1}{12}-\frac{1}{3n-3}&\mbox{if $\displaystyle n$ is odd; }\end{array}\right.$ (4) $3\leq\pi\cdot g\leq\left\\{\begin{array}[]{lr}\frac{n^{2}+n}{4}&\mbox{if $\displaystyle n$ is odd, }\\\ \frac{n^{3}}{4(n-1)}&\mbox{if $\displaystyle n$ is even; }\end{array}\right.$ (5) The lower bound in (2) and the upper bounds in (3) and (5) are reached if and only if $\displaystyle G$ is the cycle $\displaystyle C_{n}$. The upper bounds in (2) and (4) are reached if and only if $\displaystyle G$ is the lollipop $\displaystyle L_{n,3}$. The lower bounds in (3) and (5) are reached if and only if $\displaystyle G$ contains a dominating vertex. 
The lower bound in (4) is reached if and only if $\displaystyle G$ is the turnip $\displaystyle T_{n,s}$, where $\displaystyle s=2\left\lfloor\sqrt{n}\right\rfloor+1$ when $\displaystyle\sqrt{n}$ is not an integer, and if and only if $\displaystyle G$ is any one of the turnips $\displaystyle T_{n,2\sqrt{n}-1}$, $\displaystyle T_{n,2\sqrt{n}}$ or $\displaystyle T_{n,2\sqrt{n}+1}$ when $\displaystyle\sqrt{n}$ is an integer. ###### Lemma 3.44 ([10]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 4$ vertices with girth $\displaystyle g\leq n-1$ and remoteness $\displaystyle\rho$. Then $\displaystyle\rho\leq\rho(L_{n,g})$ with equality if and only if $\displaystyle G$ is the lollipop $\displaystyle L_{n,g}$. ###### Theorem 3.45 ([10]). For any connected graph $\displaystyle G$ on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and girth $\displaystyle g$, we have $\left.\begin{array}[]{lr}\mbox{if $\displaystyle n$ is even, }&\frac{4n-3n^{2}}{4n-4}\\\ \mbox{if $\displaystyle n$ is odd, }&\frac{1-3n}{4}\end{array}\right\\}\leq\rho-g\leq\frac{(n+1)(n-2)}{2n-2}-3;$ (6) $4\leq\rho+g\leq\left\\{\begin{array}[]{ll}\frac{5n^{2}-4n}{4n-4}&\mbox{if $\displaystyle n$ is even, }\\\ \frac{5n+1}{4}&\mbox{if $\displaystyle n$ is odd; }\end{array}\right.$ (7) $\frac{\rho}{g}\leq\frac{(n+1)(n-2)}{6n-6};$ (8) $3\leq\rho\cdot g\leq\rho(L_{n,g^{*}})\cdot g^{*}$ (9) where $\displaystyle g^{*}$ is the girth for which $\displaystyle\rho(L_{n,g_{i}})\cdot g_{i}$, $\displaystyle i=1,\ldots,4$, is maximum with $\displaystyle g_{1}=\left\lfloor\frac{2+\sqrt{6n^{2}-6n+4}}{3}\right\rfloor$, $\displaystyle g_{2}=\left\lceil\frac{2+\sqrt{6n^{2}-6n+4}}{3}\right\rceil$, $\displaystyle g_{3}=\left\lfloor\frac{2+\sqrt{6n^{2}-6n+7}}{3}\right\rfloor$ and $\displaystyle g_{4}=\left\lceil\frac{2+\sqrt{6n^{2}-6n+7}}{3}\right\rceil$. The lower bound in (6) and the upper bound in (7) are reached if and only if $\displaystyle G$ is the cycle $\displaystyle C_{n}$. 
The upper bounds in (6) and (8) are reached if and only if $\displaystyle G$ is the lollipop $\displaystyle L_{n,3}$. The lower bounds in (7) and (9) are reached if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$. The upper bound in (9) is reached if and only if $\displaystyle G$ is the lollipop $\displaystyle L_{n,g^{*}}$. The lower bound on $\displaystyle\rho/g$, first conjectured using AGX [1], was proved in [30]. ###### Theorem 3.46 ([30]). For any connected graph $\displaystyle G$ on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and girth $\displaystyle g$, $\displaystyle\frac{\rho}{g}\geq\left\\{\begin{array}[]{ll}\frac{n}{4n-4}&\mbox{if $\displaystyle n$ is even, }\\\ \frac{n+1}{4n}&\mbox{if $\displaystyle n$ is odd, }\end{array}\right.$ with equality if and only if $\displaystyle G$ is a cycle $\displaystyle C_{n}$. ###### Theorem 3.47 ([30]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 2$ vertices with proximity $\displaystyle\pi$ and average distance $\displaystyle\overline{\ell}$. Then $\displaystyle\frac{\pi}{\overline{\ell}}\geq\frac{n}{2(n-1)},$ with equality if and only if $\displaystyle G$ is isomorphic to the star $\displaystyle S_{n}$. The following is an immediate consequence of Theorem 3.47. ###### Corollary 3.48 ([30]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 2$ vertices with proximity $\displaystyle\pi$, average degree $\displaystyle\overline{d}$ and average distance $\displaystyle\overline{\ell}$. Then $\displaystyle\pi\cdot\overline{d}\geq\overline{\ell},$ with equality if and only if $\displaystyle G$ is isomorphic to the star $\displaystyle S_{n}$. Dankelmann and Mafunda [21] gave results relating $\displaystyle\pi$ to the diameter and radius of triangle-free and $\displaystyle C_{4}$-free graphs. ###### Theorem 3.49 ([21]). 
If $\displaystyle G$ is a connected, triangle-free graph of order $\displaystyle n\geq 8$, minimum degree $\displaystyle\delta\geq 3$ and diameter $\displaystyle D$, then $\displaystyle\pi\geq\frac{\delta(D-4)(D-1)}{8(n-1)}.$ ###### Corollary 3.50 ([21]). If $\displaystyle G$ is a connected, triangle-free graph of order $\displaystyle n\geq 8$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle D-\pi\leq\frac{3(n-1)}{2\delta}+\frac{5}{2}.$ This bound is sharp apart from an additive constant. ###### Theorem 3.51 ([21]). If $\displaystyle G$ is a connected, triangle-free graph of order $\displaystyle n\geq 6$, minimum degree $\displaystyle\delta\geq 3$ and radius $\displaystyle r\geq 1$, then $\displaystyle\pi\geq\frac{\delta}{2(n-1)}\left(r^{2}-7r+\frac{47}{8}\right).$ ###### Corollary 3.52 ([21]). If $\displaystyle G$ is a connected, triangle-free graph of order $\displaystyle n\geq 6$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle r-\pi\leq\frac{n-1}{2\delta}+\frac{11}{2}.$ This bound is sharp apart from an additive constant. ###### Theorem 3.53 ([21]). If $\displaystyle G$ is a connected, $\displaystyle C_{4}$-free graph of order $\displaystyle n\geq 16$, minimum degree $\displaystyle\delta\geq 3$ and radius $\displaystyle r$, then $\displaystyle\pi\geq\frac{\delta^{2}-2\left\lfloor\frac{\delta}{2}\right\rfloor+1}{5(n-1)}\left(r^{2}-8r+\frac{127}{8}\right).$ ###### Corollary 3.54 ([21]). If $\displaystyle G$ is a connected, $\displaystyle C_{4}$-free graph of order $\displaystyle n\geq 16$ and minimum degree $\displaystyle\delta\geq 3$, then $\displaystyle r-\pi\leq\frac{5(n-1)}{4\left(\delta^{2}-2\left\lfloor\frac{\delta}{2}\right\rfloor+1\right)}+4.$ Czabarka et al. [17] gave upper bounds on the remoteness $\displaystyle\rho$ in triangulations and quadrangulations. A plane graph $\displaystyle G$ is a triangulation (resp. quadrangulation) if every face is a triangle (resp. a 4-cycle). ###### Proposition 3.55 ([17]). 
* (a) Let $\displaystyle G$ be a $\displaystyle 5$-connected triangulation of order $\displaystyle n$. Then $\displaystyle\rho\leq\frac{n+4}{10}+\epsilon_{n},$ where $\displaystyle\epsilon_{n}=-\frac{3}{5(n-1)}$ if $\displaystyle n\equiv 0$ (mod $\displaystyle 5$), $\displaystyle\epsilon_{n}=-\dfrac{1}{n-1}$ if $\displaystyle n\equiv 1$ (mod $\displaystyle 5$), $\displaystyle\epsilon_{n}=\dfrac{2}{5(n-1)}$ if $\displaystyle n\equiv 2$ (mod $\displaystyle 5$), and $\displaystyle\epsilon_{n}=-\frac{2}{5(n-1)}$ if $\displaystyle n\equiv 3,4$ (mod $\displaystyle 5$). * (b) If $\displaystyle G$ is a $\displaystyle 3$-connected quadrangulation of order $\displaystyle n$, then $\displaystyle\rho\leq\frac{n+2}{6}+\epsilon_{n},$ where $\displaystyle\epsilon_{n}=-\frac{5}{3(n-1)}$ if $\displaystyle n\equiv 0$ (mod $\displaystyle 3$), $\displaystyle\epsilon_{n}=-\frac{1}{n-1}$ if $\displaystyle n\equiv 1$ (mod $\displaystyle 3$), and $\displaystyle\epsilon_{n}=\dfrac{1}{3(n-1)}$ if $\displaystyle n\equiv 2$ (mod $\displaystyle 3$). ## 4 Proximity and Remoteness Compared to Other Invariants Aouchiche and Hansen [4] related remoteness with the independence number and presented the following results. ###### Proposition 4.1 ([4]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 8$ vertices with remoteness $\displaystyle\rho$ and independence number $\displaystyle\alpha$. Then $\displaystyle\rho-\alpha\geq 3-n-\frac{1}{n-1}$ and $\displaystyle\rho-\alpha\leq\left\\{\begin{array}[]{lcl}\frac{n-3}{8}-\frac{3}{8(n-1)}&&\mbox{ if } n=0\,\,(mod\,\,4),\\\ \frac{n-3}{8}&&\mbox{ if } n=1\,\,(mod\,\,2),\\\ \frac{n-3}{8}+\frac{1}{8(n-1)}&&\mbox{ if } n=2\,\,(mod\,\,4).\end{array}\right.$ The lower bound is attained if and only if $\displaystyle G$ is the star $\displaystyle S_{n}$. The upper bound is best possible as shown by the kites $\displaystyle Ki_{n,n_{0}}$ where $\displaystyle n_{0}=(n+n\,\,(mod\,\, 4))/2$. ###### Proposition 4.2 ([4]). 
Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and matching number $\displaystyle\mu$. Then $\displaystyle\rho-\mu\leq\left\\{\begin{array}[]{lll}\frac{n+1}{8}+\frac{1}{8(n-1)}&&\mbox{ if }n=0\,\,(mod\,\, 4),\\\ \frac{n+1}{8}&&\mbox{ if }n=1\,\,(mod\,\, 4),\\\ \frac{n+1}{8}-\frac{3}{8(n-1)}&&\mbox{ if }n=2\,\,(mod\,\, 4),\\\ \frac{n+1}{8}&&\mbox{ if }n=3\,\,(mod\,\, 4),\\\ \end{array}\right.$ with equality if and only if $\displaystyle G$ is the comet $\displaystyle CO_{n,n-D^{*}+1}$ where $\displaystyle D^{*}=2\left\lfloor\frac{n+2}{4}\right\rfloor$. The (vertex) connectivity $\displaystyle\nu$ of a connected graph $\displaystyle G$ is the minimum number of vertices whose removal disconnects $\displaystyle G$ or reduces it to a single vertex. The algebraic connectivity $\displaystyle a$ of a graph $\displaystyle G$ is the second smallest eigenvalue of its Laplacian $\displaystyle L=Diag-A$, where $\displaystyle Diag$ is the diagonal square matrix indexed by the vertices of $\displaystyle G$ whose diagonal entries are the degrees in $\displaystyle G$, and $\displaystyle A$ is the adjacency matrix of $\displaystyle G$. ###### Theorem 4.3 ([49]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 2$ vertices with vertex connectivity $\displaystyle\nu$, algebraic connectivity $\displaystyle a$ and remoteness $\displaystyle\rho$. Then $\displaystyle\nu\cdot\rho\leq n-1$ with equality if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$; and $\displaystyle a\cdot\rho\leq n$ with equality if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$. Moreover, if $\displaystyle G$ is not complete, then $\displaystyle a\cdot\rho\leq n-1-\frac{1}{n-1}$ with equality if and only if $\displaystyle G\cong K_{n}-M$, where $\displaystyle M$ is any nonempty set of disjoint edges. ###### Theorem 4.4 ([30]). 
Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with remoteness $\displaystyle\rho$ and maximum degree $\displaystyle\triangle$. If $\displaystyle\triangle\geq\left\lceil\frac{n}{4}\right\rceil+1,$ then $\displaystyle\rho+\triangle\geq\begin{cases}\frac{n+9}{4}&\text{if $\displaystyle n$ is odd},\\\ 2+\frac{n^{2}}{4n-4}&\text{if $\displaystyle n$ is even},\end{cases}$ with equality if and only if $\displaystyle G\cong C_{3}$ or $\displaystyle G\cong C_{4}.$ ###### Theorem 4.5 ([30]). Let $\displaystyle G$ be a connected graph on $\displaystyle n$ vertices, $\displaystyle m$ edges with proximity $\displaystyle\pi$ and average degree $\displaystyle\overline{d}$. If $\displaystyle m\leq\frac{2(n-1)^{2}}{n},$ then $\displaystyle\pi\cdot\overline{d}\leq n-1.$ ###### Theorem 4.6 ([30]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with clique number $\displaystyle\omega$ and remoteness $\displaystyle\rho$. Then $\displaystyle\rho\leq\frac{n^{2}-\omega^{2}-n+3\omega-2}{2(n-1)},$ with equality if and only if $\displaystyle G\cong Ki_{n,\omega}.$ ###### Theorem 4.7 ([30]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with clique number $\displaystyle\omega$, remoteness $\displaystyle\rho$ and proximity $\displaystyle\pi$. Then $\displaystyle\rho+\pi\leq\begin{cases}\frac{3n^{2}-2\omega^{2}-2n+6\omega-5}{4(n-1)}&\text{if $\displaystyle n$ is odd},\\\ \frac{3n^{2}-2\omega^{2}-2n+6\omega-4}{4(n-1)}&\text{if $\displaystyle n$ is even},\end{cases}$ with equality if and only if $\displaystyle G\cong P_{n}.$ The next result gives an upper bound on $\displaystyle\rho-\pi$ in terms of $\displaystyle n$ and $\displaystyle\omega.$ ###### Theorem 4.8 ([30]). Let $\displaystyle G$ be a connected graph on $\displaystyle n\geq 3$ vertices with clique number $\displaystyle\omega$, remoteness $\displaystyle\rho$ and proximity $\displaystyle\pi$. 
Then $\displaystyle\rho-\pi\leq\frac{(n-\omega)(n+\omega-3)}{2(n-1)},$ with equality if and only if $\displaystyle G\cong Ki_{n,n-1}.$ ###### Theorem 4.9 ([30]). Let $\displaystyle G$ be a connected bipartite graph with each partite set of cardinality $\displaystyle n\geq 2$. If the average distance $\displaystyle\overline{\ell}\leq\frac{3}{2}+\frac{n-2}{2n(2n-1)},$ then $\displaystyle G$ is Hamiltonian. ###### Corollary 4.10 ([30]). Let $\displaystyle G$ be a connected bipartite graph with each partite set of cardinality $\displaystyle n\geq 2$. If $\displaystyle\rho\leq\frac{3}{2}+\frac{n-2}{2n(2n-1)},$ then $\displaystyle G$ is Hamiltonian. The distance eigenvalues of a connected graph, denoted by $\displaystyle\partial_{1},\partial_{2},\ldots,\partial_{n}$, are those of its distance matrix, and are indexed such that $\displaystyle\partial_{1}\geq\partial_{2}\geq\ldots\geq\partial_{n}$. For a survey on distance spectra of graphs see [8]. Next, we have results relating proximity $\displaystyle\pi$ and remoteness $\displaystyle\rho$ with the distance eigenvalues of graphs. ###### Theorem 4.11 ([9]). Let $\displaystyle G$ be a graph on $\displaystyle n\geq 4$ vertices with largest distance eigenvalue $\displaystyle\partial_{1}$, proximity $\displaystyle\pi$ and remoteness $\displaystyle\rho$. Then $\displaystyle\pi\leq\overline{\ell}\leq\frac{\partial_{1}}{n-1}\leq\rho$ with equalities throughout if and only if $\displaystyle G$ is a transmission regular graph. ###### Corollary 4.12 ([9]). Let $\displaystyle G$ be a graph on $\displaystyle n\geq 2$ vertices with largest distance eigenvalue $\displaystyle\partial_{1}$ and proximity $\displaystyle\pi$. Then $\displaystyle\partial_{1}-\pi\geq n-2$ with equality if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$. ###### Corollary 4.13 ([9]). 
Let $\displaystyle G$ be a graph on $\displaystyle n\geq 4$ vertices with second largest distance eigenvalue $\displaystyle\partial_{2}$ and remoteness $\displaystyle\rho$. Then $\displaystyle\rho+\partial_{2}\geq 0$ with equality if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$. The bound in the above corollary is best possible among the bounds of the form $\displaystyle\rho+\partial_{k}\geq 0$, with a fixed integer $\displaystyle k$, over the class of all connected graphs. Indeed, if we consider the complete bipartite graphs $\displaystyle K_{\left\lfloor n/2\right\rfloor,\left\lceil n/2\right\rceil}$, on $\displaystyle n\geq 3$ vertices, by direct calculation we get $\displaystyle\rho(K_{\left\lfloor n/2\right\rfloor,\left\lceil n/2\right\rceil})+\partial_{3}(K_{\left\lfloor n/2\right\rfloor,\left\lceil n/2\right\rceil})=\left\\{\begin{array}[]{lcl}-\frac{1}{2}&&\mbox{ if $\displaystyle n$ is odd,}\\\ -\frac{1}{2}-\frac{1}{2(n-1)}&&\mbox{ if $\displaystyle n$ is even,}\end{array}\right.$ which is negative for $\displaystyle n\geq 3$. ###### Proposition 4.14 ([9]). Let $\displaystyle T$ be a tree on $\displaystyle n\geq 4$ vertices with remoteness $\displaystyle\rho$, diameter $\displaystyle D$ and distance spectrum $\displaystyle\partial_{1}\geq\partial_{2}\geq\cdots\geq\partial_{n}$. Then $\displaystyle\rho+\partial_{\left\lfloor\frac{D}{2}\right\rfloor}>0.$ ###### Corollary 4.15 ([9]). Let $\displaystyle G$ be a graph on $\displaystyle n\geq 4$ vertices with least distance eigenvalue $\displaystyle\partial_{n}$ and proximity $\displaystyle\pi$. Then $\displaystyle\pi+\partial_{n}\leq 0$ with equality if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$. ###### Theorem 4.16 ([9]). Let $\displaystyle G$ be a graph on $\displaystyle n\geq 4$ vertices with largest distance eigenvalue $\displaystyle\partial_{1}$ and remoteness $\displaystyle\rho$. 
Then $\displaystyle\partial_{1}-\rho\geq n-2$ with equality if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$. ###### Proposition 4.17 ([9]). Let $\displaystyle G$ be a graph on $\displaystyle n\geq 4$ vertices with least distance eigenvalue $\displaystyle\partial_{n}$ and remoteness $\displaystyle\rho$. Then $\displaystyle\partial_{n}+\rho\leq 0$ with equality if and only if $\displaystyle G$ is $\displaystyle K_{n}$. ###### Theorem 4.18 ([9]). Let $\displaystyle G$ be a graph on $\displaystyle n\geq 4$ vertices with second largest distance eigenvalue $\displaystyle\partial_{2}$ and proximity $\displaystyle\pi$. Then $\displaystyle\pi+\partial_{2}\geq 0$ with equality if and only if $\displaystyle G$ is the complete graph $\displaystyle K_{n}$. ###### Proposition 4.19 ([9]). Let $\displaystyle T$ be a tree on $\displaystyle n\geq 4$ vertices with proximity $\displaystyle\pi$, diameter $\displaystyle D$ and distance spectrum $\displaystyle\partial_{1}\geq\partial_{2}\geq\cdots\geq\partial_{n}$. Then $\displaystyle\pi+\partial_{\left\lfloor\frac{D}{2}\right\rfloor}>0.$ Lin, Das and Wu [37] proved two theorems, stated earlier as conjectures in [9]. The following lemmas and results were given. ###### Lemma 4.20 ([37]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ with diameter $\displaystyle D$ and remoteness $\displaystyle\rho$. Then $\displaystyle\rho\geq\frac{D}{2}.$ ###### Theorem 4.21 ([37]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with diameter $\displaystyle D$, remoteness $\displaystyle\rho$ and distance eigenvalues $\displaystyle\partial_{1}\geq\dots\geq\partial_{n}.$ Then we have the following statements. 
* (i) If $\displaystyle D=2,$ then $\displaystyle\rho+\partial_{3}\geq\frac{\left\lceil\frac{n}{2}\right\rceil-2}{n-1}-1,$ with equality holding if and only if $\displaystyle G\cong K_{n_{1},n_{2}}.$ * (ii) If $\displaystyle D\geq 3$, then $\displaystyle\rho+\partial_{3}>\frac{D}{2}-1.2.$ ###### Theorem 4.22 ([37]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with diameter $\displaystyle D$, remoteness $\displaystyle\rho$ and distance spectrum $\displaystyle\partial_{1}\geq\dots\geq\partial_{n}.$ Then $\displaystyle\rho+\partial_{\left\lfloor\frac{7D}{8}\right\rfloor}>0.$ Besides the above results, more results regarding remoteness and distance eigenvalues were given in [37]. Denote by $\displaystyle H_{n-D}$ $\displaystyle(n>D)$ a graph of order $\displaystyle n-D$ such that $\displaystyle V(H_{n-D})=V(\overline{K}_{n-D})$ and $\displaystyle E(H_{n-D})\supseteq E(\overline{K}_{n-D}),$ where $\displaystyle\overline{K}_{n-D}$ is a null graph of order $\displaystyle n-D.$ Let $\displaystyle H_{n,D}$ be a graph of order $\displaystyle n$ with diameter $\displaystyle D$ obtained by joining $\displaystyle n-D$ edges between one end of the path $\displaystyle P_{D}$ and each vertex of $\displaystyle H_{n-D}.$ ###### Lemma 4.23 ([37]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ with diameter $\displaystyle D$ and remoteness $\displaystyle\rho$. Then $\displaystyle\rho\leq D-\frac{D^{2}-D}{2(n-1)},$ with equality holding if and only if $\displaystyle G\cong H_{n,D}.$ ###### Theorem 4.24 ([37]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ with diameter $\displaystyle D$ and remoteness $\displaystyle\rho$. Then $\displaystyle\rho+\partial_{n}\leq-\frac{D^{2}-D}{2(n-1)},$ with equality holding if and only if $\displaystyle G\cong K_{n}.$ ###### Lemma 4.25 ([37]). 
Let $\displaystyle G$ be a connected graph of order $\displaystyle n$ with diameter $\displaystyle D\geq 3.$ Then $\displaystyle\partial_{1}>n-2+D.$ ###### Theorem 4.26 ([37]). Let $\displaystyle G\ncong K_{n}$ be a connected graph of order $\displaystyle n$ and remoteness $\displaystyle\rho$. Then $\displaystyle\partial_{1}-\rho\geq\frac{n-1+\sqrt{(n-1)^{2}+8}}{2}-\frac{n}{n-1},$ with equality holding if and only if $\displaystyle G\cong K_{n}-e,$ where $\displaystyle e$ is an edge of $\displaystyle G.$ Jia and Song [33] obtained various results relating remoteness to the distance and distance (signless) Laplacian eigenvalues of graphs. ###### Theorem 4.27 ([33]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle n\leq\rho+\partial_{1}\leq\rho(P_{n})+\partial_{1}(P_{n}),$ with the left equality holding if and only if $\displaystyle G\cong K_{n}$ and the right equality holding if and only if $\displaystyle G\cong P_{n}.$ ###### Theorem 4.28 ([33]). Let $\displaystyle G\ncong K_{n}$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\rho+\partial_{1}\geq\frac{n}{n-1}+\frac{n-1+\sqrt{(n-1)^{2}+8}}{2},$ with equality holding if and only if $\displaystyle G\cong K_{n}-e.$ ###### Theorem 4.29 ([33]). Let $\displaystyle G\ncong(K_{n},K_{n}-e)$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\rho+\partial_{1}\geq\frac{n}{n-1}+\frac{n-1+\sqrt{(n-1)^{2}+16}}{2},$ with equality holding if and only if $\displaystyle G\cong K_{n}-2e,$ where $\displaystyle 2e$ are two matching edges. ###### Theorem 4.30 ([33]). Let $\displaystyle G$ be a complete bipartite graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\rho+\partial_{2}\geq n-\frac{1}{n-1}-\sqrt{n^{2}-3n+3},$ with equality holding if and only if $\displaystyle G$ is a star. 
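Equality cases such as the one in Theorem 4.28 are easy to confirm numerically. The sketch below (an illustrative computation, not code from [33]; the helper names are ours) builds $\displaystyle K_{5}-e$, computes its remoteness by BFS and its distance-spectral radius $\displaystyle\partial_{1}$ by power iteration, which converges here since the distance matrix is nonnegative and irreducible:

```python
import math
from collections import deque

def distance_matrix(adj):
    # all-pairs distances of an unweighted connected graph via BFS
    n = len(adj)
    D = []
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    q.append(w)
        D.append(dist)
    return D

def perron_root(M, iters=500):
    # power iteration with sup-norm scaling; for a nonnegative
    # irreducible matrix the scale factor tends to the largest
    # eigenvalue, here the distance-spectral radius partial_1
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

n = 5
# K_5 minus the edge {0, 1}
adj = [[j for j in range(n) if j != i and {i, j} != {0, 1}] for i in range(n)]
D = distance_matrix(adj)
rho = max(sum(row) for row in D) / (n - 1)   # remoteness: 5/4
d1 = perron_root(D)                          # about 2 + sqrt(6)
bound = n / (n - 1) + (n - 1 + math.sqrt((n - 1) ** 2 + 8)) / 2
print(abs(rho + d1 - bound) < 1e-9)          # True: equality in Theorem 4.28
```

The same helpers also confirm the left inequality of Theorem 4.27, since $\displaystyle\rho+\partial_{1}\approx 5.70>n=5$ for this graph.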
For connected graphs Jia and Song [33] proposed the following conjecture. ###### Conjecture 1 ([33]). Let $\displaystyle G\ncong(K_{n},K_{n}-e)$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\rho+\partial_{2}\geq\frac{n}{n-1}+\frac{n-1-\sqrt{(n-1)^{2}+8}}{2},$ with equality holding if and only if $\displaystyle G\cong K_{n}-2e,$ where $\displaystyle 2e$ are two matching edges. ###### Theorem 4.31 ([33]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\rho-\partial_{n}\geq 2,$ with equality holding if and only if $\displaystyle G\cong K_{n}.$ ###### Theorem 4.32 ([33]). Let $\displaystyle G\ncong K_{n}$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\rho-\partial_{n}\geq 3+\frac{1}{n-1},$ with equality holding if and only if $\displaystyle G\cong K_{n}-me,$ where $\displaystyle me$ denotes $\displaystyle m$ matching edges. The distance Laplacian eigenvalues of a connected graph, denoted by $\displaystyle\partial_{1}^{L},\partial_{2}^{L},\ldots,\partial_{n}^{L}$, are those of its distance Laplacian matrix, and are indexed such that $\displaystyle\partial_{1}^{L}\geq\partial_{2}^{L}\geq\ldots\geq\partial_{n}^{L}$. ###### Theorem 4.33 ([33]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle n+1\leq\rho+\partial_{1}^{L}\leq\rho(P_{n})+\partial_{1}^{L}(P_{n}),$ with the left equality holding if and only if $\displaystyle G\cong K_{n}$ and the right equality holding if and only if $\displaystyle G\cong P_{n}.$ ###### Theorem 4.34 ([33]). Let $\displaystyle G\ncong K_{n}$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. 
Then $\displaystyle\rho+\partial_{1}^{L}\geq n+\frac{1}{n-1}+3,$ with equality holding if and only if $\displaystyle G\cong K_{n}-me,$ where $\displaystyle me$ denotes $\displaystyle m$ matching edges. ###### Theorem 4.35 ([33]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\partial_{1}^{L}-\rho\geq n-1,$ with equality holding if and only if $\displaystyle G\cong K_{n}.$ ###### Theorem 4.36 ([33]). Let $\displaystyle G\ncong K_{n}$ be a connected graph of order $\displaystyle n\geq 5$ with remoteness $\displaystyle\rho$. Then $\displaystyle\partial_{1}^{L}-\rho\geq n+1-\frac{1}{n-1},$ with equality holding if and only if $\displaystyle G\cong K_{n}-me,$ where $\displaystyle 1\leq m\leq\left\lfloor\frac{n}{2}\right\rfloor.$ The distance signless Laplacian eigenvalues of a connected graph, denoted by $\displaystyle\partial_{1}^{Q},\partial_{2}^{Q},\ldots,\partial_{n}^{Q}$, are those of its distance signless Laplacian matrix, and are indexed such that $\displaystyle\partial_{1}^{Q}\geq\partial_{2}^{Q}\geq\ldots\geq\partial_{n}^{Q}$. ###### Theorem 4.37 ([33]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle 2n\leq 2\rho+\partial_{1}^{Q}\leq 2\rho(P_{n})+\partial_{1}^{Q}(P_{n}),$ with the left equality holding if and only if $\displaystyle G\cong K_{n}$ and the right equality holding if and only if $\displaystyle G\cong P_{n}.$ ###### Theorem 4.38 ([33]). Let $\displaystyle G\ncong K_{n}$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle 2\rho+\partial_{1}^{Q}\geq\frac{3n-2+\sqrt{(n-2)^{2}+16}}{2}+\frac{2n}{n-1},$ with equality holding if and only if $\displaystyle G\cong K_{n}-e.$ ###### Theorem 4.39 ([33]). Let $\displaystyle G\ncong(K_{n},K_{n}-e)$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. 
Then $\displaystyle 2\rho+\partial_{1}^{Q}\geq\frac{3n-2+\sqrt{(n-2)^{2}+32}}{2}+\frac{2n}{n-1},$ with equality holding if and only if $\displaystyle G\cong K_{n}-2e,$ where $\displaystyle 2e$ are two matching edges. ###### Theorem 4.40 ([33]). Let $\displaystyle G$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho.$ Then $\displaystyle\partial_{1}^{Q}-2\rho\geq 2n-4,$ with equality holding if and only if $\displaystyle G\cong K_{n}.$ ###### Theorem 4.41 ([33]). Let $\displaystyle G\ncong K_{n}$ be a connected graph of order $\displaystyle n\geq 4$ with remoteness $\displaystyle\rho$. Then $\displaystyle\partial_{1}^{Q}-2\rho\geq\frac{3n-2+\sqrt{(n-2)^{2}+16}}{2}-\frac{2n}{n-1},$ with equality holding if and only if $\displaystyle G\cong K_{n}-e$. Mojallal and Hansen [39] obtained a relation between proximity and the third largest distance eigenvalue of a graph, proving some results conjectured earlier in [9]. Figure 10: The tree $\displaystyle T_{1}$ and the graph $\displaystyle G_{1}$ ###### Lemma 4.42 ([39]). Let $\displaystyle T_{1}$ be the tree of order $\displaystyle n$ given in Fig. 10. Then $\displaystyle\pi(T_{1})+\partial_{3}(T_{1})>0.$ ###### Lemma 4.43 ([39]). Let $\displaystyle G_{1}$ be the graph of order $\displaystyle n$ given in Fig. 10. Then $\displaystyle\pi(G_{1})+\partial_{3}(G_{1})>0.$ ###### Lemma 4.44 ([39]). Let $\displaystyle G$ be a graph of order $\displaystyle n$ with diameter $\displaystyle D=3$ and let $\displaystyle i_{1}$ and $\displaystyle i_{4}$ be two vertices of $\displaystyle G$ with distance $\displaystyle 3$. If $\displaystyle d_{G}(i_{1})\geq 2$ and $\displaystyle d_{G}(i_{4})\geq 2$, then $\displaystyle\pi+\partial_{3}>0.$ ###### Lemma 4.45 ([39]). Let $\displaystyle G$ be a graph of order $\displaystyle n$ with diameter $\displaystyle D=3$ and let $\displaystyle i_{1}$ and $\displaystyle i_{4}$ be two vertices of $\displaystyle G$ with distance $\displaystyle 3$. 
If $\displaystyle d_{G}(i_{1})=1$ or $\displaystyle d_{G}(i_{4})=1$, then $\displaystyle\pi+\partial_{3}>0.$ ###### Theorem 4.46 ([39]). Let $\displaystyle G$ be a graph with diameter $\displaystyle D\geq 3$, proximity $\displaystyle\pi(G)$ and third largest distance eigenvalue $\displaystyle\partial_{3}.$ Then $\displaystyle\pi+\partial_{3}>0.$ ###### Corollary 4.47 ([39]). Let $\displaystyle G$ be a graph with diameter $\displaystyle D\geq 3,$ and the third largest distance eigenvalue $\displaystyle\partial_{3}.$ Then $\displaystyle\displaystyle(i)~{}D+\partial_{3}>0\quad(ii)~{}\rho+\partial_{3}>0\quad(iii)~{}\overline{\ell}+\partial_{3}>0\quad(iv)~{}ecc+\partial_{3}>0\quad(v)~{}r+\partial_{3}>0.$ ## 5 Conclusion This survey collects the results related to proximity $\displaystyle\pi$ and remoteness $\displaystyle\rho$ published in various journals from 2006 to September 2023. ## Data Availability There is no data associated with this article. ## Conflict of interest The authors declare that they have no competing interests. ## References * [1] M. Aouchiche, Comparaison Automatisée d’Invariants en Théorie des Graphes. PhD Thesis, École Polytechnique de Montréal, February 2006. * [2] M. Aouchiche, G. Caporossi and P. Hansen, Variable neighborhood search for extremal graphs. 20. Automated comparison of graph invariants. MATCH Commun. Math. Comput. Chem. 58 (2007) 365–384. * [3] M. Aouchiche, G. Caporossi and P. Hansen, Variable neighborhood search for extremal graphs. 8. Variations on Graffiti 105. Congr. Numer. 148 (2001) 129–144. * [4] M. Aouchiche and P. Hansen, Proximity and remoteness in graphs: results and conjectures. Networks 58 (2011) 95–102. * [5] M. Aouchiche and P. Hansen, Nordhaus-Gaddum Relations for Proximity and Remoteness in Graphs. Comput. Math. Appl. 59 (2010) 2827–2835. * [6] M. Aouchiche and P. Hansen, On a conjecture about the Szeged index. European J. Combin. 31 (2010) 1662–1666. * [7] M. Aouchiche and P. 
Hansen, A survey of Nordhaus–Gaddum type relations. Discrete Appl. Math. 161 (2013) 466–546. * [8] M. Aouchiche and P. Hansen, Distance spectra of graphs: a survey. To appear in Linear Algebra Appl. 2014. * [9] M. Aouchiche and P. Hansen, Proximity, remoteness and distance eigenvalues of a graph. Discrete Appl. Math. 213 (2016) 17–25. * [10] M. Aouchiche and P. Hansen, Proximity, remoteness and girth in graphs. Discrete Appl. Math. 222 (2017) 31–39. * [11] C. Barnhart and G. Laporte, (eds), “Transportation”, Handbooks in Operations Research and Management Science. Volume 14, Elsevier, Amsterdam, 2007. * [12] G. S. Bloom, J. W. Kennedy and L. V. Quintas, Distance degree regular graphs. In Proc. of the Fourth International Conference on the Theory and Applications of Graphs. Western Michigan University, Kalamazoo, MI, May 6–9, 1980. * [13] F. Buckley and F. Harary, Distance in graphs. Addison-Wesley Publishing Company, Redwood, 1990. * [14] S. Cabello and P. Lukšić, The complexity of obtaining a distance-balanced graph. Electron. J. Combin. 18, 1 (2011) Paper 49. * [15] F. Chung, The average distance and the independence number. J. Graph Theory 12 (1988) 229–235. * [16] E. Czabarka, P. Dankelmann, T. Olsen and L. A. Székely, Proximity in triangulations and quadrangulations. arXiv:2001.09012 (2020). * [17] E. Czabarka, P. Dankelmann, T. Olsen and L. A. Székely, Wiener index and remoteness in triangulations and quadrangulations. Discrete Math. Theor. Comp. Sci. 23, 1 (2021), #3. * [18] P. Dankelmann, Proximity, remoteness and minimum degree. Discrete Appl. Math. 184 (2015) 223–228. * [19] P. Dankelmann, New bounds on proximity and remoteness in graphs. Comm. Combin. Optim. 1, 1 (2016) 29–41. * [20] P. Dankelmann, E. Jonck and S. Mafunda, Proximity and remoteness in triangle-free and $\displaystyle C_{4}$-free graphs in terms of order and minimum degree. Discrete Math. 344 (2021) 112513. * [21] P. Dankelmann and S. 
Mafunda, On the difference between proximity and other distance parameters in triangle-free graphs and $\displaystyle C_{4}$-free graphs. Discrete Appl. Math. 321 (2022) 295–307. * [22] P. Dankelmann, S. Mafunda and S. Mallu, Proximity, remoteness and maximum degree in graphs. Discrete Math. Theor. Comp. Sci. 24(2) (2022) https://doi.org/10.46298/dmtcs.9432. * [23] S. Fajtlowicz, On conjectures of Graffiti, II. Congr. Numer. 60 (1987) 187–197. * [24] R. C. Entringer, D. E. Jackson and D. A. Snyder, Distance in graphs. Czechoslovak Math. J. 26(101) (1976) 283–296. * [25] G. Ghiani, G. Laporte and R. Musmanno, Introduction to Logistics Systems Planning and Control. Wiley, Chichester, 2004. * [26] A. J. Goldman, Minimax location of a facility in a network. Transportation Sci. 6 (1972) 407–418. * [27] S. L. Hakimi, Optimum locations of switching centers and the absolute centers and medians of a graph. Operations Res. 12 (1964) 450–459. * [28] K. Handa, Bipartite graphs with balanced $\displaystyle(a,b)$–partitions. Ars Combin. 51 (1999) 113–119. * [29] F. Harary, Status and contrastatus. Sociometry, 22 (1959) 23–43. * [30] H. Hua and K. Ch. Das, Proof of conjectures on remoteness and proximity in graphs. Discrete Appl. Math. 171 (2014) 72–80. * [31] H. Hua, Y. Chen and K. Ch. Das, Proof of conjectures on remoteness and proximity in graphs. Discrete Appl. Math. 187 (2015) 103–110. * [32] A. Ilić, S. Klavžar, M. Milanović, On distance-balanced graphs. European J. Combin. 31 (2010) 733–737. * [33] H. Jia and H. Song, Remoteness and distance, distance (signless) Laplacian eigenvalues of a graph. J. Ineq. Appl. (2018) 69. * [34] J. Jerebic, S. Klavžar, D. F. Rall, Distance-balanced graphs. Annals of Combinatorics 12 (2008) 71–79. * [35] C. Jordan, Sur les assemblages de lignes. J. Reine Angew. Math. 70 (1869) 185–190. * [36] D. J. Klein, Centrality measure in graphs. J. Math. Chem. 47 (2010) 1209–1223. * [37] H. Lin, K. C. Das and B. 
Wu, Remoteness and distance eigenvalues of a graph. Discrete Appl. Math. 215 (2016) 218–224. * [38] S. A. Mallu, Proximity and Remoteness in Graphs, University of Johannesburg (South Africa), 2022, available at https://hdl.handle.net/10210/498379. * [39] S. A. Mojallal and P. Hansen, A relation between proximity and the third largest distance eigenvalues of a graph. Discrete Appl. Math. 293 (2021) 50–58. * [40] B. Ma, B. Wu and W. Zhang, Proximity and average eccentricity of a graph. Inf. Process. Lett. 112 (2012) 392–395. * [41] E. A. Nordhaus and J. W. Gaddum, On Complementary Graphs. Amer. Math. Monthly 63 (1956) 175–177. * [42] C. Payan and N. H. Xuong, Domination-balanced graphs. J. Graph Theory 6 (1982) 23–32. * [43] L. Pei, About AutoGraphiX conjecture on domination number and remoteness of graphs. Mathematics 10(19) (2022) 3706. * [44] L. Pei, X. Pan, K. Wang, J. Tian and G. Peng, On AutoGraphiX conjectures regarding domination number and average eccentricity. Filomat 33 (2019) 699–710. * [45] L. Pei, X. Pan, K. Wang and J. Tian, Proofs of the AutoGraphiX conjectures on the domination number, average eccentricity and proximity. Discrete Appl. Math. 289 (2021) 292–301. * [46] O. E. Polansky and D. Bonchev, The minimum distance number of trees. MATCH Commun. Math. Comput. Chem. 21 (1986) 341–344. * [47] M. Randić, Characterizations of atoms, molecules and classes of molecules based on paths enumerations. Proc. of Bremen Konferenz zur Chemie Univ. Bremen, Bremen, 1978, Part 11, Match No. 7 (1979) 5–64. * [48] G. Sabidussi, The centrality index of a graph. Psychometrika, 31 (1966) 581–603. * [49] J. Sedlar, D. Vukčević, M. Aouchiche, P. Hansen, Variable Neighborhood Search for Extremal Graphs: 25. Products of Connectivity and Distance Measure. Graph Theory Notes of New York 55 (2008) 6–13. * [50] J. Sedlar, Remoteness, proximity and few other distance invariants in graphs. Filomat 27 (2013) 1425–1435. * [51] P.J. 
Slater, Counterexamples to Randić’s conjecture on distance degree sequences for trees. J. Graph Theory, 6 (1982) 89–92. * [52] S. Smart and P. Slater, Center, median, and centroid subgraphs. Networks 34 (1999) 303–311. * [53] L. Soltés, Transmission in Graphs: a Bound and Vertex Removing. Math. Slovaca 41 (1991) 11–16. * [54] M. Tavakoli, F. Rahbarnia, A. R. Ashrafi, Further results on distance-balanced graphs. UPB Scientific Bulletin, Series A: Applied Mathematics and Physics 75 (2013) 77–84. * [55] H. Wiener, Structural determination of the Paraffin Boiling Points. J. Amer. Chem. Soc. 69 (1947) 17–20. * [56] B. Wu and W. Zhang, Average distance, radius and remoteness of a graph. Ars Mathematica Contemporanea 7 (2014) 441–452. * [57] B. Zelinka, Medians and peripherians of trees. Archivum Mathematicum 4 (1968) 87–95.
(see also Fig. 5). Firstly, the number of bound states at a given mass is shown on the left of Fig. 6. We have argued that production of a bound state with a given $S$ and $L>0$ will generically lead to production (via one or more decays) of an $L=0$ bound state with the same $S$, so it is unnecessary to compute the rate to produce SM particles directly from those $L>0$ states. We confirm this numerically on the right of Fig. 6: however it is measured, the branching fraction is always greater than 90%. For instance, at 100 TeV, the deepest $Q=0$ $L+S$ odd $p$-wave bound state decays to the deepest $s$-wave state with 99.6% probability and to the second deepest $s$-wave state with 0.4% probability. (The next most dominant transition is to the third deepest $s$-wave bound state, and only occurs with a probability of 0.006%.) We also discussed decays of the $S=1,\,L=0$ bound states, noting that they generate only a power suppressed contribution to the photon endpoint spectrum. Nevertheless, the contribution from the $S=1$ states can be comparable to that from the $S=0$ states, after accounting for both their (enhanced) production rate and (suppressed) endpoint spectrum from annihilation. However, in practice this means that the contributions to the endpoint spectrum from both the $S=0$ and $S=1$ bound states are suppressed compared to direct annihilation (either due to power suppression or because of the small formation rate). For the aforementioned reasons, we focus on the $L=S=0$ bound states to estimate the size of the (generally subdominant) bound-state contribution. Doing so implies we can reuse our SCET results from the direct annihilation case, with appropriate modification of the wavefunction factors. This choice does mean that the total bound-state contribution to the endpoint photon spectrum could increase by an $\mathcal{O}(1)$ factor in an improved calculation (once the $S=1$ states are included). 
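The statement that an $L>0$ state funnels almost entirely into $s$-wave states can be phrased as an absorbing Markov chain over the level diagram. A toy sketch using only the two leading channels quoted above for the deepest $p$-wave state at 100 TeV (the matrix layout is ours, purely illustrative):

```python
import numpy as np

# Toy level scheme: states 0 and 1 are s-wave (treated as absorbing,
# since their subsequent decays go directly to SM particles);
# state 2 is the p-wave state. B[i, j] is the single-decay branching
# ratio i -> j, with the 99.6% / 0.4% values quoted in the text.
B = np.array([
    [1.0,   0.0,   0.0],   # deepest s-wave: absorbing
    [0.0,   1.0,   0.0],   # second-deepest s-wave: absorbing
    [0.996, 0.004, 0.0],   # p-wave -> s-wave branchings
])

# Iterate the chain until all probability sits in the absorbing states;
# the result is the total probability of ending in each s-wave level.
p = np.array([0.0, 0.0, 1.0])   # start in the p-wave state
for _ in range(50):
    p = p @ B
print(p[:2])
```

With a longer cascade (several intermediate $L>0$ levels) the same matrix-power construction applies, and the text's observation is that the absorbed probability in $s$-wave states always exceeds 90%.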
Because the bound-state contribution to the endpoint spectrum is suppressed, this typically corresponds to a percent-level theoretical uncertainty in the overall endpoint spectrum, with larger uncertainties at specific mass points where the bound state formation cross section is enhanced relative to direct annihilation. The contribution to continuum photons (not near the endpoint) from bound state formation can be markedly larger, compared to the effect on the endpoint spectrum, as there is no power suppression in the continuum contribution from annihilation of the $S=1$ bound states. We will discuss each of these contributions to the total spectrum in the next section. ## 5 The Combined Photon Spectrum and Numerical Results At this stage we have all ingredients required to determine the quintuplet annihilation spectrum, including both direct annihilation and the contribution of bound states. In this section we collect our results to determine the full energy distribution of photons the quintuplet generates, primarily at the thermal mass of 13.6 TeV but also for a wider range of masses. We will estimate the impact of several uncertainties on our results, such as the residual theoretical uncertainty on the NLL computations, as well as astrophysical uncertainties such as the distribution of $v$ values and the DM density in the inner Galaxy. Finally, we will put these results together to estimate the sensitivity of existing and upcoming IACTs to quintuplet DM. ### 5.1 Predictions for the spectrum and rate of photon production A central goal of this work is to accurately determine the distribution of photons that emerge when two SU(2) quintuplets annihilate. This spectrum forms the signal template for telescopes searching for high energy photons, and therefore is a central theoretical input. To achieve this, throughout we have computed differential cross sections $d\sigma/dz$, both for the direct annihilation in Eq. 
(31), and also for the bound state contribution by combining the results of the previous sections with Eq. (1). For indirect detection, observables are sensitive to $d\langle\sigma v\rangle/dz$. To begin with we will assume the DM states are incident with a fixed $v=10^{-3}$, revisiting the validity of this approximation in the next subsection. In order to extract the shape of the photon distribution from the differential cross section, it is common in indirect detection to introduce a photon spectrum $dN/dE$, and our convention for doing so is the following:\footnote{Further discussion of the connection between spectra used in indirect detection and the corresponding field theoretic quantities can be found in, for instance, Ref. Bauer:2020jay .} $\displaystyle\frac{d\langle\sigma v\rangle}{dE}=\langle\sigma v\rangle_{\rm line}\times\frac{dN}{dE}.$ (84) This choice follows Ref. Baumgart:2017nsr , and implies that our spectrum is normalized with respect to the _line cross section_ , $\langle\sigma v\rangle_{\rm line}\equiv\langle\sigma v\rangle_{\gamma\gamma}+\tfrac{1}{2}\langle\sigma v\rangle_{\gamma Z}$, which is defined as the rate to produce two photons at exactly $E=M_{\chi}$. By construction, $dN/dE$ will contain a contribution of exactly $2\delta(E-M_{\chi})$ for the line, but it will also contain contributions from endpoint photons, bound state decays, and continuum photons arising primarily from the unstable particles the direct annihilations or bound state decays can produce. For each of these latter components, however, their additions to $dN/dE$ will be weighted by a branching fraction $\langle\sigma v\rangle_{i}/\langle\sigma v\rangle_{\rm line}$, with $\langle\sigma v\rangle_{i}$ the cross section for that particular contribution. 
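This bookkeeping is simple to make concrete: every non-line channel enters $dN/dE$ weighted by its cross-section ratio to $\langle\sigma v\rangle_{\rm line}$, per Eq. (84). A schematic sketch (the function name and toy inputs are ours, not the actual analysis pipeline):

```python
import numpy as np

def assemble_dNdE(E, sv_line, channels):
    """Sum the non-line contributions to dN/dE following Eq. (84):
    each channel enters weighted by <sigma v>_i / <sigma v>_line.
    The line itself contributes 2*delta(E - M_chi) and is kept symbolic.
    `channels` maps a label to a pair (sv_i, dNdE_i), with dNdE_i a
    callable returning that channel's photon spectrum on the grid E."""
    total = np.zeros_like(E, dtype=float)
    for sv_i, dNdE_i in channels.values():
        total += (sv_i / sv_line) * dNdE_i(E)
    return total

# Toy usage: one flat "continuum" channel with twice the line rate,
# so every bin carries weight 2.0 / 1.0.
E = np.linspace(0.1, 10.0, 5)
out = assemble_dNdE(E, sv_line=1.0,
                    channels={"cont": (2.0, lambda E: np.ones_like(E))})
```

In the actual calculation the `channels` entries would be the endpoint, bound-state, and continuum spectra discussed below, each with its computed cross section.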
The rationale for anchoring our calculations to the line cross section is that $\chi\chi\to\gamma\gamma$, which has a spectrum of exactly $dN/dE=2\delta(E-M_{\chi})$, is a common experimental target, and therefore there are a large number of existing constraints on $\langle\sigma v\rangle_{\rm line}$ which we can then directly compare with. Further discussion of this point can be found in Ref. Baumgart:2017nsr . Figure 7: The cross section for line photons, breaking down the contributions from the direct annihilation and bound states. While generically the direct annihilation dominates, for isolated masses near Sommerfeld peaks the bound state contribution can be the leading one. For all masses lower than those shown bound states are strictly subdominant. The spectra of line and endpoint photons produced by decay of the bound states are computed using the methods of Sec. 4.\footnote{In this work we do not track the much lower-energy photons radiated in bound state formation and transitions; these signals have been studied for the quintuplet in Refs. Mitridate:2017izz ; Mahbubani:2020knq .} For our bound state formation and transition calculations, we include only states with $L=0,1,2$. Capture into $L=3$ and higher states requires at least $L=2$ for the initial state, and we expect the contributions from components of the initial state with high $L$ to be suppressed at low velocities, by a factor that is parametrically $(M_{\chi}v/m_{\scriptscriptstyle W})^{2L}$ (although for sufficiently high masses, $M_{\chi}\gtrsim 100$ TeV, this suppression is lifted). It is also worth noting that for essentially all the parameter space most relevant for experimental searches with H.E.S.S., at $M_{\chi}<20$ TeV, we find that no $L=3+$ bound states exist in the spectrum. 
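The parametric suppression factor $(M_{\chi}v/m_{W})^{2L}$ quoted above is quick to evaluate, and also shows why the suppression is lifted near $M_{\chi}\sim 100$ TeV (we plug in the standard $m_{W}\simeq 80.4$ GeV; the helper name is ours):

```python
# Suppression of capture from an initial partial wave L,
# parametrically (M_chi * v / m_W)^(2L).
m_W = 80.4  # GeV

def suppression(M_chi_TeV, v=1e-3, L=2):
    x = (M_chi_TeV * 1e3) * v / m_W   # M_chi * v in GeV, over m_W
    return x ** (2 * L)

# At the thermal mass 13.6 TeV, L=2 components are suppressed by ~1e-3...
print(suppression(13.6))
# ...while at 100 TeV the ratio M_chi*v/m_W exceeds 1, so no suppression.
print(suppression(100.0))
```

The crossover $M_{\chi}v\sim m_{W}$ at $v\sim 10^{-3}$ sits at $M_{\chi}\sim 80$ TeV, consistent with the saturation threshold quoted in the next subsection.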
We independently expect capture into states with high principal quantum number $n$ (which is required for high $L$) to be suppressed, as (1) the finite range of the potential means only a limited number of states are bound at all, so unlike in unbroken gauge theories there is no infinite tower of high-$n$ states, (2) capture into weakly-bound states is suppressed by a phase space factor, and (3) analytic approximations (App. F) suggest that we can expect the leading contribution to the capture cross section to be exponentially suppressed for large $n$. In practice, our numerical calculation expresses the bound states as a linear combination of 30 basis states for each combination of $L$, $Q$ and $(-1)^{L+S}$, allowing us to access up to 30 distinct bound states indexed by different values of $n$ (although as we approach this upper bound we expect the spectrum to become less accurate), and we include all these states in our calculation. We have checked at sample mass points that our binding energies and cross sections for capture into lower-$n$ states are not significantly affected by doubling the number of basis states. For the reasons given above, we generally expect the error due to the omission of higher-$n$ states to be small. Before showing the full distributions of line and endpoint photons, we can already consider one measure of the importance of bound states to the resulting photon signal: their contribution to $\langle\sigma v\rangle_{\text{line}}$. This is shown in Fig. 7, where we separate the contribution to the line from direct annihilation from that of processes involving an intermediate bound state. At this stage we only include bound-state contributions that produce line photons, with energy essentially at $M_{\chi}$. The figure makes clear a point already estimated earlier: direct annihilation generally dominates the production of line photons at $E\simeq M_{\chi}$ by $1-2$ orders of magnitude. 
However, the bound-state contribution can be significant and even dominate at isolated mass points, for instance at $M_{\chi}=68.1~{}{\rm TeV}$, and therefore a reliable prediction at arbitrary masses must include this contribution. Figure 8: The quintuplet annihilation spectrum for two masses, the thermal mass of 13.6 TeV (left), and a mass where bound state contributions are appreciable, 68.1 TeV (right). For each mass we show results convolved with the H.E.S.S. energy resolution (top) and unsmeared (below). The full spectrum is broken into five individual components: the line, endpoint, bound state line and endpoint, the direct annihilation continuum, and bound state contribution to the continuum. Details of each are provided in the text. Moving beyond the line, in Fig. 8 we show the full spectrum, broken down by various contributions, for two masses: the thermal mass of $M_{\chi}=13.6~{}{\rm TeV}$, and a mass where bound state contributions are significant, $68.1~{}{\rm TeV}$. For each mass we show two versions of the spectrum. In the lower panels, we show the unsmeared spectra, which are the distributions of photons that emerge from the annihilations. (Note in this case the line contribution is simply $dN/dE=2\delta(E-M_{\chi})$ and so is represented by a vertical line.) In the upper panels, we have convolved the raw spectra with a finite experimental energy resolution in order to model what would actually be seen at a realistic instrument. For this we take the energy resolution of the H.E.S.S. telescope, determined from Ref. HESS:2013rld . In detail, we fix the relative width $\Delta E/E$ to 0.17 and 0.11 for $E=500~{}{\rm GeV}$ and $E=10~{}{\rm TeV}$, respectively, and then vary logarithmically between these endpoints, freezing the ratio on either side. From this we compute $(dN/dE)_{\rm smeared}$ as $dN/dE$ convolved with a Gaussian of width equal to the energy resolution. In terms of these two notions of the spectra, Fig. 
8 shows five contributions to the photon distributions for the two masses. The first three of these are: 1. the direct annihilation line; 2. direct annihilation endpoint; and 3. the bound state contribution to the line and endpoint. Again we see clearly a point noted for the wino in Refs. Baumgart:2017nsr ; Baumgart:2018yed : the endpoint contribution makes a considerable modification to the observed number of photons with $E\sim M_{\chi}$, with the peak smeared spectra enhanced by factors of 1.9 and 3.1 for $M_{\chi}=13.6$ and 68.1 TeV. The bound state contribution is more modest: it is effectively negligible at the thermal mass, and a factor of 1.7 enhancement at 68.1 TeV, which again is a mass with an anomalously large bound state contribution to the hard photon spectrum. Beyond these three, we also show the contribution of two continuum sources, both of which can generate lower energy photons. The first of these is the continuum emission arising from direct annihilation. This results from tree-level annihilation of the quintuplets into $W$ or $Z$ bosons. The $Z$ continuum arises from $\gamma Z$ and $ZZ$ final states, and is a simple reweighting of the line cross section as $\frac{\langle\sigma v\rangle_{ZZ}+\tfrac{1}{2}\langle\sigma v\rangle_{Z\gamma}}{\langle\sigma v\rangle_{\gamma\gamma}+\tfrac{1}{2}\langle\sigma v\rangle_{\gamma Z}}=\frac{c_{\scriptscriptstyle W}^{2}}{s_{\scriptscriptstyle W}^{2}}.$ (85) For the $W^{+}W^{-}$ final state, the tree-level annihilation rate, with the Sommerfeld effect included, can be computed as, $\displaystyle\langle\sigma v\rangle_{\scriptscriptstyle WW}=\frac{\pi\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}$ $\displaystyle\Big{[}18|s_{00}|^{2}+25|s_{0\pm}|^{2}+4|s_{0\pm\pm}|^{2}$ (86) $\displaystyle+30\sqrt{2}{\rm Re}(s_{00}s_{0\pm}^{*})+12\sqrt{2}{\rm Re}(s_{00}s_{0\pm\pm}^{*})+20{\rm Re}(s_{0\pm}s_{0\pm\pm}^{*})\Big{]}.$ As discussed for the case of the wino in Ref. 
Hryczuk:2011vi , higher order corrections to this should not be appreciable, and so we do not include them. We can then add the $W^{+}W^{-}$ final state to $dN/dE$ with weighting $\langle\sigma v\rangle_{\scriptscriptstyle WW}/\langle\sigma v\rangle_{\rm line}$ along with the $Z$ contribution. In each case, to determine the spectrum of photons that result from these electroweak bosons we use PPPC4DMID Cirelli:2010xx with the electroweak corrections turned off, to avoid any double counting of the endpoint contributions we computed. As seen in Fig. 8, these contributions are important for $E_{\gamma}\ll M_{\chi}$. The final contribution to the spectrum we consider is continuum photons arising from the bound state decays. This contribution is not the main focus of this work, but to get an estimate of its size and spectrum, we assume the most common SM decay products are light quarks (equally weighted between flavors), and employ the corresponding gamma-ray spectrum from PPPC4DMID. The motivation for this choice is that the bound states will decay through their couplings to (on-shell or off-shell) $W$ and $Z$ bosons, with the exact channel depending on their $L$ and $S$ quantum numbers, and the gauge bosons in turn decay dominantly to quarks due to their large number of associated degrees of freedom. We weight the continuum spectrum by the ratio between the bound state capture cross section and $\langle\sigma v\rangle_{\textrm{line}}$, similar to how we weight the $Z$ and $W$ continuum components above. At the thermal mass, this ratio is roughly 31, and the visible contribution can be seen in Fig. 8. The contribution is dominated by the $Q=0$ $p\to s$ capture cross section, which sits near a Sommerfeld peak in this capture rate at 13.6 TeV (cf. Fig. 4). To highlight this, at the edges of the uncertainty band on the thermal mass, 12.8 and 14.4 TeV, the equivalent ratio is reduced significantly, to 0.15 and 0.70, respectively. 
At 68.1 TeV the ratio is larger still – just over 471 – and is dominated by the $Q=0$ and $Q=1$ $p\to s$ capture rates. Figure 9: The full quintuplet spectrum for three different masses, $M_{\chi}=14,\,16,$ and $18~{}{\rm TeV}$. As shown, the spectrum can change significantly as a function of mass, a feature which does not arise for the wino or higgsino. Figure 8 highlights the various contributions to the spectrum, but does not capture the variation of the spectrum as a function of mass. The variation can be considerable, as shown in Fig. 9. From the definition, the line contribution to this spectrum is fixed at $2\delta(E-M_{\chi})$ independent of mass. What is not fixed, however, are the endpoint and continuum contributions, which can vary significantly even for small changes in mass. (The bound state contributions are not significant for the masses shown.) As shown in Ref. Montanari:2022buj , such rapid variations can lead to sharp features in the instrumental sensitivity to $\langle\sigma v\rangle_{\rm line}$, as the shape of the DM signal being searched for varies rapidly with mass. These effects do not occur for the wino or higgsino, where the spectra vary relatively smoothly with mass (see Ref. Montanari:2022buj ). The origin of this behavior seems to be interference between the different Sommerfeld factors, associated with the distinct mass eigenstates for the final annihilation: $\chi^{0}\chi^{0}$, $\chi^{+}\chi^{-}$, and $\chi^{++}\chi^{--}$. These states have different branching ratios into the various SM final states, and the positions of the resonance peaks can differ between the interfering Sommerfeld factors. As we vary the mass, we can move rapidly from the sharp turn-off of one Sommerfeld peak to the sharp turn-on of another, and in doing so transition rapidly in the strength of the associated endpoint and continuum contributions, as seen in Fig. 9. 
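For reference, the finite-resolution smearing applied to the spectra in Figs. 8 and 9 amounts to convolving $dN/dE$ with a Gaussian whose width follows the H.E.S.S.-like model described in the previous subsection ($\Delta E/E$ log-interpolated between 0.17 at 500 GeV and 0.11 at 10 TeV, frozen outside). A simple quadrature sketch (our own, not the actual analysis code):

```python
import numpy as np

def rel_width(E):
    """Relative energy resolution Delta E / E: 0.17 at 0.5 TeV and 0.11
    at 10 TeV (values from the text), log-interpolated in E between the
    anchor points and frozen outside them. E is in TeV."""
    lo, hi = np.log(0.5), np.log(10.0)
    t = np.clip((np.log(E) - lo) / (hi - lo), 0.0, 1.0)
    return 0.17 + t * (0.11 - 0.17)

def smear(E_grid, dNdE):
    """Convolve a spectrum with a Gaussian of width rel_width(E)*E,
    approximating the convolution integral by a sum over the grid."""
    out = np.zeros_like(E_grid)
    dE = np.gradient(E_grid)
    for Ei, Ni, dEi in zip(E_grid, dNdE, dE):
        sigma = rel_width(Ei) * Ei
        out += Ni * dEi * np.exp(-0.5 * ((E_grid - Ei) / sigma) ** 2) \
               / (np.sqrt(2 * np.pi) * sigma)
    return out
```

A $\delta$-function line at $M_{\chi}$ simply becomes a Gaussian of width $\mathrm{rel\_width}(M_{\chi})\cdot M_{\chi}$ under this operation, which is why the smeared line in Fig. 8 appears as a finite-width peak.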
One might ask why this behavior was not seen for the wino, which also has multiple Sommerfeld factors that can interfere with each other. For the line cross section, one might suspect that the issue is that only the $\chi^{+}\chi^{-}$ state can annihilate to photons at tree-level; however, we also do not see sharp quintuplet-like features in annihilation of wino DM to W bosons, which is allowed at tree-level from both the $\chi^{0}\chi^{0}$ and $\chi^{+}\chi^{-}$ states. More insight can be gained by working in the basis of eigenstates of the potential at small $r$, rather than mass eigenstates; this corresponds to the basis of potential eigenstates in the limit of unbroken SU(2), as discussed in App. F. In the limit of unbroken SU(2), the relevant potential for the quintuplet (coupling states with total charge zero and even $L+S$) has two eigenstates that experience attractive interactions and one eigenstate that experiences repulsive interactions. We expect that the linear combination of Sommerfeld factors corresponding to the repulsed eigenstate at small $r$ should be suppressed, as the SU(2) symmetry is restored in the small-$r$ regime. We have confirmed numerically that this suppression is quite pronounced, typically several orders of magnitude at the velocities we consider. We would also expect a difference in the linear combination of Sommerfeld factors corresponding to the two attracted eigenstates, with the larger-magnitude eigenvalue yielding a larger Sommerfeld enhancement, but this difference is much less dramatic, and so the two attracted eigenstates still experience meaningful interference. We thus attribute the sharp features observed in the quintuplet case to this interference between the different attracted eigenstates (in the small-$r$ potential-dominated regime); its absence in the wino case is presumably related to the fact that the wino has only one attracted eigenstate (e.g. Ref. Asadi:2016ybp ). 
We thus expect this behavior (sharp variations in the spectrum with mass) to be ubiquitous for larger SU(2) representations. ### 5.2 Uncertainty associated with the velocity distribution of dark matter The complete initial-state wavefunctions naturally depend on the relative velocity of the incoming DM particles, which in the discussion so far we have simply set to $v=10^{-3}$. In this subsection we explore the systematic uncertainties associated with our modeling of the velocity. We first discuss the detailed dependence of the cross sections on the relative DM velocity, and then explore the effects on our spectra of averaging over different plausible velocity distributions. The effects of the long-range potential on the wavefunction saturate when $v\lesssim m_{\scriptscriptstyle W}/M_{\chi}$, which is true for halo velocities ($v\sim 10^{-3}$) for $M_{\chi}\lesssim 80~{}{\rm TeV}$; consequently, except near resonances or for very heavy DM, we do not expect the Sommerfeld enhancement from the weak interactions to depend sensitively on the velocity distribution. However, the bound-state formation rate from $L>0$ partial-wave components of the initial state will have a non-trivial velocity dependence even below this threshold. Furthermore, for the quintuplet the thermal mass is only a factor of a few below the saturation threshold, and in systems with higher velocities than the Sun’s neighborhood – such as galaxy clusters – both the direct annihilation cross section and the bound-state formation rates are expected to depend sensitively on the velocity. Figure 10: The dependence of the cross sections on $v$ for four different fixed masses. We show the case of direct endpoint annihilation (left, the analogue of Fig. 11), and bound state capture (right, as in Fig. 12). In Fig. 10 we show how annihilation and capture vary as a function of velocity at four different masses. 
We observe that for a mass of $M_{\chi}=14$ TeV, noticeable velocity dependence is present at $v\gtrsim 4\times 10^{-3}$. As we discuss in depth in App. F, the oscillatory behavior observed at high velocities can be understood in the limit of unbroken SU(2). This behavior originates from interference between the different eigenvalues of the potential. At low velocities, by contrast, SU(2) breaking effects are expected to suppress the oscillations. For higher DM masses, where the velocity dependence is relevant even for $v\lesssim 10^{-3}$, our previous annihilation cross section plots should be taken as illustrative estimates. A full calculation at high mass would involve integrating the formulae given in this paper over the true velocity distribution in the region of interest. The oscillatory behavior of the cross section at high velocities means that assuming a single velocity could in principle lead to large errors in this case. We now estimate the effect of averaging over the velocity distribution. The characteristic scale of the DM velocity dispersion should be comparable to the circular velocity of the visible matter, which in the vicinity of the Sun has been measured to be $v_{\text{circ}}\simeq 240$ km/s 2016MNRAS.463.2623H . Since the Milky Way’s rotation curve is roughly flat at the Sun’s location, we expect the velocity dispersion to be of a similar order over much of the Galaxy. However, close to the Galactic Center the DM velocity is not well-known. In DM-only simulations the velocity dispersion falls as one approaches the Galactic Center (e.g. Ref. Navarro:2008kc ), but simulations including baryons have demonstrated the opposite behavior (e.g. Refs. 2010MNRAS.406..922T ; Board:2021bwj ; 2022MNRAS.513…55M ). Even at the Sun’s location, the full DM velocity distribution is not well-understood: the distribution is often treated as Maxwellian up to some escape velocity, although this is only a crude approximation (e.g. Ref. Herzog-Arbeitman:2017fte ).
The escape velocity is determined to be $\sim$500 km/s at the location of the solar system Necib:2021vxr ; kh2021 ; mfcEtAl2018 . (Footnote 17: Ref. mfcEtAl2018 finds that the escape velocity increases slightly toward the Galactic Center. However, they only present results down to a radius of around 5 kpc, where it is closer to 650 km/s. The precise value of this cutoff is numerically unimportant for this work, though, due to the exponential suppression in the distribution (cf. Eq. (87)). In practice, we truncate the particle velocity at 500 km/s, but the numerical difference between this and 2400 km/s is at most a part per million in the annihilation rate.) Within the Maxwellian approximation, the distribution is specified by that escape velocity and the velocity dispersion, with the latter having a greater effect on the annihilation rate. For the Milky Way, we use the velocity dispersion values obtained in Ref. Boddy:2018ike for a variety of NFW-profiles. In particular, we take the slowest and fastest velocities for locations interior to the solar system. This gives a range $v_{\rm disp}\in[130,330]$ km/s. As a function of $v_{\rm disp}$, the magnitude of the relative WIMP velocity is drawn from the following 1D probability distribution, $f(v)=\sqrt{\frac{27}{4\pi}}\,\frac{v^{2}}{v_{\rm disp}^{3}}e^{-3v^{2}/(4v_{\rm disp}^{2})}.$ (87) Here $v_{\rm disp}$ is the RMS velocity for a single DM particle, which is equal to the three-dimensional velocity dispersion $\sigma_{v,3d}$ defined in Ref. Boddy:2018ike by $v_{\rm disp}^{2}=\sigma_{v,3d}^{2}=\frac{\int dv\,v^{4}f(r,v)}{\int dv\,v^{2}f(r,v)}.$ (88) Here $f(r,v)$ is the speed distribution for a single DM particle at a distance $r$ from the Galactic Center.

Figure 11: An estimate of the range of uncertainty in our results associated with the DM velocity distribution for the dominant direct annihilation. We use two Maxwell distributions with $v_{\rm disp}$ – cf. Eq. (87) – at the extremal values found by Ref. Boddy:2018ike .
We divide their resulting $\langle\sigma v\rangle$ by that of the simplified case of all quintuplets annihilating with $v=10^{-3}$.

In Fig. 11, we plot the leading contribution to endpoint photon production, direct annihilation from an $s$-wave initial state, for two different velocity distributions, normalized to the simple assumption of all quintuplets having a fixed $v=10^{-3}$. Except on resonances, the uncertainty is typically negligible. We also see that off resonance, and particularly at lower masses, the simple fixed-velocity assumption is a good approximation to either of the more realistic models. The reason for this is simply that we are in the saturation regime, as seen in Fig. 10. Therefore, we conclude that in general the fixed velocity assumption is a good one at low masses, although at higher masses one is generically underestimating the actual cross section, sometimes by more than an order of magnitude. Accordingly, for an actual experimental analysis, completeness would require an appropriate weighting of the cross section according to the specific region of interest studied.

Figure 12: Capture rate by emission of a dipole photon from a $p$-wave initial state to an $s$-wave quintuplet bound state for $Q=0$ states. The three choices of velocity mirror those in Fig. 11: two values of $v_{\rm disp}$ are plotted, normalized by the capture rate at fixed $v=10^{-3}$.

For bound state capture, the off-resonance uncertainties are generally larger than for direct annihilation, as anticipated. This is demonstrated in Fig. 12, where we show $p$-to-$s$ capture, the leading single-photon dipole transition. As we see by comparing the rates with those in Fig. 7, though, capture is generally far subdominant to direct annihilation. In this channel, however, the simple assumption of all DM having $v=10^{-3}$ generally provides a result in the middle of the band given by the remaining two options.
Even where it does not, we see that it still provides a good approximation to the more realistic velocity profiles. Accordingly, just as with direct annihilation, we will take this value as a representative approximation of this subleading contribution to endpoint photons when forecasting experimental sensitivity.

### 5.3 Estimating the experimental sensitivity to quintuplet DM

Finally, we turn to an estimate of the experimental sensitivity to the quintuplet DM hypothesis using the spectra we have computed. Using our definition of the spectrum in Eq. (84), the average DM-generated flux an instrument observes from a region of interest (ROI) of solid angle $\Omega_{\rm ROI}$ is, $\frac{d\Phi}{dE}=\frac{\langle\sigma v\rangle_{\rm line}}{8\pi M_{\chi}^{2}}\left(\frac{dN}{dE}\right)_{\rm smeared}\left(\frac{1}{\Omega_{\rm ROI}}\int ds\,d\Omega\,\rho_{\chi}^{2}\right).$ (89) As defined, the flux has units of $[{\rm counts}/{\rm cm}^{2}/{\rm s}/{\rm TeV}/{\rm sr}]$ (for a detailed discussion of the units, see Ref. Lisanti:2017qoz ). The final term in parentheses here is often referred to as the $J$-factor, and is an integral over the DM density squared in the region being observed. If the DM density $\rho_{\chi}$ were known exactly, then for a model like the thermal quintuplet the flux is fully determined, as we have computed both the cross section and spectrum (up to residual uncertainties from higher-order terms in the theory prediction, and from the velocity distribution, as discussed in the last subsection). To test the quintuplet hypothesis, we need to compare the flux in Eq. (89) to experimental measurements. For this, we will estimate the sensitivity of H.E.S.S. to the endpoint photon signal using the “mock analysis” method described in Ref. Baumgart:2017nsr . The approach in that work was to make use of the publicly available H.E.S.S. data in Ref.
HESS:2013rld , where a search for $\chi\chi\to\gamma\gamma$ was performed using 112 hours of Galactic Center observations taken by the instrument between $2004-2008$. This is a small fraction of the observations H.E.S.S. has taken towards the Galactic Center: as emphasized in Ref. Montanari:2022buj , the collaboration has already collected roughly 800 hours of data in the region, and continues to collect roughly 150 hours each year. A further limitation of our approach is that the analysis in Ref. HESS:2013rld was a search purely for line photons, and therefore adopted a flexible background model that absorbed all smooth components. The analysis is therefore unsuitable for constraining continuum contributions, which can play an important role in these sorts of analyses, as emphasized in, for instance, Ref. Rinchiuso:2020skh . Our rationale for adopting this mock analysis, however, is that Ref. HESS:2013rld provided enough information that the full analysis they undertook can be reconstructed, as was shown in Ref. Baumgart:2017nsr . Later on we will provide a rough estimate of how sensitive more recent and upcoming analyses could be.

Figure 13: (Left) The estimated sensitivity of H.E.S.S. I using 112 hours of Galactic Center (GC) data to the quintuplet. Assuming an Einasto profile, we show sensitivity to $\langle\sigma v\rangle_{\rm line}$ as defined in Eq. (84), which can then be compared to the equivalent theoretical prediction. Across the entire mass range considered here, H.E.S.S. would be able to exclude the quintuplet assuming an Einasto profile. (Right) If the DM profile is cored as in Eq. (91), we show the core sizes that would be required to make a non-observation consistent with our quintuplet predictions. At the thermal mass, 13.6 TeV, a core of 1 kpc is required, whereas at the upper edge of the thermal band, 14.4 TeV, the results would be consistent with a 0.5 kpc core.
For the mock analysis, we fit the data provided in Ref. HESS:2013rld to a combination of the flux in Eq. (89) and a parametric background model adopted by the experiment—full details are provided in Ref. Baumgart:2017nsr . If we have a prediction for $\rho_{\chi}$, this approach combined with our prediction for the spectrum can be used to obtain an estimated sensitivity for $\langle\sigma v\rangle_{\rm line}$. For this we adopt the Einasto profile 1965TrAlm…5…87E used by the H.E.S.S. analysis (based on Ref. Pieri:2009je ), $\rho_{\rm Einasto}(r)\propto\exp\left[-\frac{2}{\alpha}\left(\left(\frac{r}{r_{s}}\right)^{\alpha}-1\right)\right],$ (90) with $\alpha=0.17$, $r_{s}=20~{}{\rm kpc}$, and the normalization fixed to $0.39~{}{\rm GeV/cm}^{3}$ at the solar radius, $r=8.5~{}{\rm kpc}$. The resulting estimated constraint is shown in the left of Fig. 13. (Footnote 18: One can find an early attempt to make projected $\gamma$-ray constraints from the H.E.S.S. Galactic Center data in Ref. Cirelli:2015bda . Their Fig. 7 is analogous to our Fig. 13. The earlier paper did not include LL (or NLL) resummation, nor the NLO corrections to the electroweak potential. For similar Einasto parameters, their bounds on the line cross section are about an order of magnitude weaker than ours (however, note that our predicted line cross section is also smaller).) We emphasize that even though this is a constraint on $\langle\sigma v\rangle$ it is not based solely on the line prediction—the results are based on the estimated detectability of the entire spectrum resulting from line, endpoint, and bound state photons. For the mass range considered, the bound state contribution is negligible (cf. Fig. 7); however, the endpoint is not: at 13.6 TeV, it enhances the sensitivity by a factor of 1.9. As mentioned above, by default, the results do not include either continuum contribution considered in Fig. 8. The motivation for this is that the particular background model adopted in Ref.
HESS:2013rld was designed solely to search for a narrow line feature, and there can be considerable degeneracy with the continuum emission (see the discussion in Ref. Baumgart:2017nsr ). Nevertheless, we have tested adding the continuum emission from direct annihilation to $W$ and $Z$ final states, and found for most masses it has no impact on the estimated limits, although there is a slight fluctuation around the thermal mass, which increases the sensitivity by $\simeq 20\%$. We also note that there are several locations, such as just below $M_{\chi}=3~{}{\rm TeV}$ and just above the thermal mass, where there is a larger theoretical error. This results from the sharp variations in the spectra observed in Fig. 9. In fact, the sensitivity to these features is reduced by the insensitivity of the background model used in this work to smooth features. These results can also be compared to Ref. Montanari:2022buj , where an alternate H.E.S.S. analysis is performed using the spectra from this work, and much sharper variations are observed in their sensitivity. (We note that the results of that work made use of the LO Sommerfeld potential calculations, not the NLO results we have adopted here.) The results of the mock analysis suggest that for the central value of the thermal mass, even 112 hours of H.E.S.S. data can exclude the thermal prediction by a factor of 10. Nevertheless, this varies considerably across the uncertainty band on the thermal mass: at 12.8 TeV, the exclusion factor is 55, whereas it is only 4 at 14.4 TeV. The sharp variation is a result of the thermal window sitting near a Sommerfeld resonance, as shown in Fig. 13. We emphasize once more that although we use the NLO potential in our computations, the thermal mass range was computed with the LO potential, and given the sensitivity of our findings to the exact mass, computing the thermal mass at NLO will be important for determining the fate of the thermal quintuplet.
If we relax the thermal cosmology assumption and consider a broader range of masses, we see the quintuplet is excluded across the full $0.5-20~{}{\rm TeV}$ mass range. Yet this statement is contingent on the form of $\rho_{\chi}$ adopted, about which there is considerable uncertainty. In particular, the density profile may flatten toward the inner Galaxy. As the annihilation signal is sensitive to $\rho_{\chi}^{2}$, flattening the profile has a marked impact on the flux, making this one of the dominant uncertainties in $\rho_{\chi}$ for our purposes. We parameterize a possible flattening of the profile by replacing the Einasto density profile with a constant value for Galactocentric distances $r<r_{c}$, where we will refer to $r_{c}$ as the “core size”: (Footnote 19: To be explicit, we fix the normalization of the Einasto profile in Eq. (90) to the solar radius before we impose the core restriction of Eq. (91). This implies that for a core size larger than the solar radius, the profile will predict less than 0.39 GeV/cm3 at our location. Of course, the more significant concern is that such a large core is not consistent with observations, as discussed in the text, and should solely be viewed as a proxy for how much the $J$-factor needs to be reduced.) $\rho(r)=\begin{cases}\rho_{\rm Einasto}(r)&r>r_{c},\\ \rho_{\rm Einasto}(r_{c})&r<r_{c}.\end{cases}$ (91) We can then ask what choice of $r_{c}$ would raise the estimated constraint on $\langle\sigma v\rangle_{\text{line}}$ above the theoretical prediction. This is plotted in the right-hand panel of Fig. 13, both employing our full endpoint spectrum and in the case where we (incorrectly) use only the line cross section in setting the bounds.
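To make the effect of coring concrete, the following sketch evaluates the Einasto profile of Eq. (90) with the parameters quoted above, applies the core of Eq. (91), and integrates $\rho_{\chi}^{2}$ along the single sightline through the Galactic Center. This is a simplification of the ROI-averaged $J$-factor in Eq. (89) — a true average over the H.E.S.S. window would be less extreme — and all function names are ours:

```python
import numpy as np

ALPHA, R_S = 0.17, 20.0        # Einasto shape parameters from Eq. (90)
R_SUN, RHO_SUN = 8.5, 0.39     # solar radius [kpc] and local density [GeV/cm^3]

def rho_einasto(r):
    """Eq. (90), normalized to RHO_SUN at the solar radius."""
    shape = np.exp(-2.0 / ALPHA * ((np.asarray(r, dtype=float) / R_S) ** ALPHA - 1.0))
    norm = np.exp(-2.0 / ALPHA * ((R_SUN / R_S) ** ALPHA - 1.0))
    return RHO_SUN * shape / norm

def rho_cored(r, r_c):
    """Eq. (91): hold the profile at its r_c value for r < r_c."""
    return np.where(np.asarray(r, dtype=float) > r_c, rho_einasto(r), rho_einasto(r_c))

def j_gc_sightline(rho, s_max=30.0, n=200_000):
    """Integral of rho^2 along the sightline through the GC (arbitrary units)."""
    s = np.linspace(0.0, s_max, n)      # distance from the observer [kpc]
    r = np.abs(R_SUN - s)               # galactocentric radius along this ray
    return np.sum(rho(r) ** 2) * (s[1] - s[0])

suppression = j_gc_sightline(lambda r: rho_cored(r, 1.0)) / j_gc_sightline(rho_einasto)
# a 1 kpc core removes the bulk of this sightline's J-factor
```

Because the uncored Einasto profile is steeply peaked toward the center, even a modest core drastically reduces the central sightline's contribution, which is why the required core sizes discussed below translate into large changes in the expected flux.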
To provide an estimate for what core sizes are consistent with data, we note that simulations of Milky Way-like galaxies can generate ${\cal O}(1~{}{\rm kpc})$ cores Chan:2015tna ; however, measurements of stars in the Bulge seem to disfavor $r_{c}\gtrsim 2~{}{\rm kpc}$ 2015MNRAS.448..713P ; Hooper:2016ggc . At the lower end of the thermal mass range, 12.8 TeV, the thermal quintuplet would already be in tension with this, requiring a 2.8 kpc core. At the central (13.6 TeV) and upper (14.4 TeV) end, however, the required core size is 1.0 and 0.5 kpc, respectively, and therefore not obviously in tension with our mock analysis. A more recent study claims evidence for a few kpc core that could potentially saturate the earlier limits Ou:2023adg . If confirmed, this could suppress the $J$-factor by nearly an order of magnitude. That would challenge the indirect-detection community to set aggressive limits, but as we estimate, even cores of this size are in reach of CTA and possibly H.E.S.S. (cf. Fig. 14). As already mentioned, the mock analysis we consider makes use of only a very small amount of existing data, and with that we forecast a sensitivity to $\langle\sigma v\rangle_{\rm line}\simeq 8.5\times 10^{-27}~{}{\rm cm^{3}/s}$ at the central thermal mass. (The error on this value due to uncertainty on the spectrum from our NLL calculation is less than 1%, far smaller than the variation across the thermal mass range, which is closer to 10%.) Using 500 hours of H.E.S.S. data and an identical Einasto profile, Ref. Montanari:2022buj forecast a sensitivity at the thermal mass of $\simeq 9.3\times 10^{-28}~{}{\rm cm^{3}/s}$, almost a factor of ten better than the estimate used here. This is significantly more than the naive $\sqrt{5}$ the additional data would suggest, which can be primarily attributed to the fact that that work used H.E.S.S. II observations and a different analysis, whereas we adopt the sensitivity from H.E.S.S. I.
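The comparison of the two forecasts reduces to simple arithmetic; spelling it out (the sensitivity values are those quoted above, and the variable names are ours):

```python
import math

sigma_v_this_work = 8.5e-27   # cm^3/s, our 112-hour H.E.S.S. I mock-analysis forecast
sigma_v_montanari = 9.3e-28   # cm^3/s, 500-hour forecast of Ref. Montanari:2022buj

# Observed gain is almost a factor of ten ...
improvement = sigma_v_this_work / sigma_v_montanari      # ~9.1

# ... while naive sqrt(exposure) scaling (500 vs 112 hours, roughly 5x the data)
# would suggest only a factor of ~sqrt(5) ~ 2.2:
naive_scaling = math.sqrt(500 / 112)                     # ~2.1
```

The gap between the two numbers is what the text attributes to the different instrument (H.E.S.S. II vs H.E.S.S. I) and analysis choices rather than exposure alone.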
With such a sensitivity, we would require a core size slightly larger than $3.5~{}{\rm kpc}$ to save the thermal quintuplet, which would be in tension with observations. Nevertheless, repeating the process at 14.4 TeV, the required core size would only be $1.6~{}{\rm kpc}$, and therefore not yet clearly excluded. We can also give a crude estimate for the sensitivity the upcoming CTA could have to the quintuplet. Although no dedicated forecast for the quintuplet has been performed using the spectra in our work, we can estimate the improved sensitivity as follows. Reference Baumgart:2018yed performed an identical mock analysis to ours for the NLL wino spectrum, estimating sensitivity at $M_{\chi}=13.6~{}{\rm TeV}$ of $\langle\sigma v\rangle_{\rm line}\simeq 8\times 10^{-27}~{}{\rm cm^{3}/s}$, slightly stronger than the sensitivity to the quintuplet. Using the identical NLL spectrum, Ref. Rinchiuso:2020skh then estimated that with 500 hours of data, CTA could reach $\simeq 1\times 10^{-28}~{}{\rm cm^{3}/s}$, a factor of eighty improvement. Assuming the same improvement for the quintuplet, CTA would be sensitive to $\langle\sigma v\rangle_{\rm line}\simeq 1.1\times 10^{-28}~{}{\rm cm^{3}/s}$, excluding the thermal value by a factor of eight hundred. To not have seen the thermal quintuplet, we would need to core $\rho_{\chi}$ out to almost $8.6~{}{\rm kpc}$ – beyond the solar radius – which is simply inconsistent with observations. Even at the upper end of the mass range, a core of $6.4~{}{\rm kpc}$ would be required. In this sense, CTA would provide the definitive word on whether the thermal quintuplet is the DM of our Universe.

Figure 14: An estimate for the required core size of the Einasto profile, given that no preference for a signal is seen in our H.E.S.S. mock analysis, and if no signal emerges in an analysis of the full H.E.S.S. dataset, or at CTA.
The central values correspond to 13.6 TeV, the central thermal mass, whereas the upper and lower error bars correspond to 12.8 and 14.4 TeV, the edges of the thermal mass window. The dashed line corresponds to the rough core size constraint from Ref. Hooper:2016ggc . Our results suggest that H.E.S.S. can already considerably test the quintuplet, with the final word likely being left to CTA.

These results are summarized in Fig. 14, where the point shows the core size required for the central thermal mass, 13.6 TeV, whereas the upper and lower error bars correspond to the lower and upper ends of the thermal mass range. We also show the core size disfavored by the analysis in Ref. Hooper:2016ggc . The figure summarizes the conclusion reached above: CTA will seemingly have the final word on the thermal quintuplet; however, if a full analysis of H.E.S.S. data sees no sign of the signal, the model would already begin to be disfavored. There are two important caveats to this conclusion. The first is that our findings are based on an extrapolation from a mock analysis of H.E.S.S. I data, and are no substitute for a full analysis or projection using the present and forecast H.E.S.S. and CTA instrumental responses. To give one example of what could change, an analysis that accounts for the continuum emission could test the quintuplet even more strongly. Secondly, the range of masses we have considered is the thermal mass window of $13.6\pm 0.8~{}{\rm TeV}$ that was determined using the LO potential, whereas the remainder of our calculations use the NLO results, as emphasized several times already. Updating the thermal mass using the NLO potentials will be important. To give a sense for the impact this could have, repeating the analysis in Fig. 14 for 13.6 TeV using the LO potential in our calculations, the core size would change from 1.0, 3.5, and 8.5 kpc, to 0.7, 2.5, and 7.1 kpc.
## 6 Conclusions

For all the vastness of the DM parameter space, a thermal WIMP has remained a constant focus for decades. Minimal DM is an exemplar of thermal DM, and through indirect detection, many of the associated models are on the verge of being detected or firmly excluded, as we have shown for the quintuplet in the present work. Either way, these are important times in the search for DM. With this in mind, the present work has computed the quintuplet annihilation spectrum to NLL accuracy, and established the formalism to straightforwardly extend this to higher odd SU(2) representations. We plot the spectrum along with projected limits from a simple extension of the H.E.S.S. I analysis in Fig. 13. In doing so, we have demonstrated the power of the EFT of Heavy DM, and also extended this formalism to include the contribution from the rich set of bound states the model contains. While the bound states can make a significant contribution to the continuum photon emission, their impact on the number of photons with $E_{\gamma}\sim M_{\chi}$ is minimal, except at isolated masses. As seen in earlier studies of the wino, the same cannot be said for endpoint photons from direct annihilation, which again provide an ${\cal O}(1)$ correction to the line signal seen in IACTs. Taken together, we estimated that with our spectra, H.E.S.S. should almost be able to probe the entire allowed range for the quintuplet once uncertainties on the DM density in the inner galaxy are accounted for. Performing this analysis using the existing data, and the soon-to-be-collected data with CTA, will be critical (cf. Fig. 14). The use of background models that retain sensitivity to smooth features such as the continuum, together with the full contribution from the endpoint-photon spectrum, can provide an additional piece of experimental leverage beyond conventional line searches.
On the theory side, the thermal abundance should be recomputed using NLO potentials, as the sensitivity to the quintuplet depends strongly on where in the predicted thermal mass range one sits. Finally, it will be interesting to extend the techniques in this work to additional representations, such as a $\mathbf{7}$ of SU(2), where we expect key features of the quintuplet, such as the strong variation in the spectrum as a function of mass, to appear and be even more pronounced.

###### Acknowledgements.

The work we have presented benefited from useful discussions with Tobias Binder, Marco Cirelli, Tongyan Lin, Alessandro Montanari, Emmanuel Moulin, Nikhil Raghuram, and Diego Redigolo. MB is supported by the DOE (HEP) Award DE-SC0019470. VV is supported by startup funds from the University of South Dakota. TRS’ work is supported by the Simons Foundation (Grant Number 929255, T.R.S), by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions), and by the U.S. Department of Energy, Office of Science, Office of High Energy Physics of U.S. Department of Energy under grant Contract Number DE-SC0012567. TRS thanks the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452, for hospitality during the completion of this work.

## Appendix

## Appendix A Quintuplet Dark Matter: A Brief Review

Here we provide a brief review of quintuplet DM, also referred to as 5-plet electroweak DM. In particular, we first outline the relevant group theory necessary to specify the interactions used for the calculations in the main text. We will then review phenomenological aspects of the model beyond indirect detection.

### A.1 Interactions

Quintuplet DM consists of adding to the SM five Majorana fermions that transform together in the $\mathbf{5}$ representation of SU(2), and as a singlet under the remaining SM forces.
Above the electroweak symmetry breaking scale, we collect the five fields into a multiplet $\chi=(\chi^{1},\,\ldots,\,\chi^{5})^{T}$, in terms of which the DM Lagrangian takes the following form $\displaystyle\mathcal{L}_{\scriptscriptstyle\textrm{DM}}=\frac{1}{2}\bar{\chi}(i\not{D}-M_{\chi})\chi=\frac{1}{2}\bar{\chi}([i\not{\partial}-M_{\chi}]\mathbb{1}+g_{\scriptscriptstyle W}T_{\mathbf{5}}^{a}\not{W}^{a})\chi.$ (92) In the final expression, the first two contributions represent the kinetic terms for each of the fields, which are diagonal amongst the multiplet components, as indicated by $\mathbb{1}$. The final term describes the interaction between the additional fields and the SM electroweak bosons. Importantly, we emphasize that the interaction strength is specified by the SM SU(2) gauge coupling, $g_{\scriptscriptstyle W}$, and is not a free parameter. Instead, $M_{\chi}$ is the unique free parameter in the theory. If we assume a conventional thermal origin for the quintuplet, then even the mass can be fixed by the observed relic density to $M_{\chi}=13.6\pm 0.8~{}{\rm TeV}$ Mitridate:2017izz ; Bottaro:2021snn . (Footnote 20: As emphasized above, this value was computed with the LO electroweak potential. Redoing the analysis with the NLO potential would reduce an important theoretical uncertainty.) The 13.6 TeV quintuplet is therefore a zero-parameter DM model. After electroweak symmetry breaking, the five Majorana fermions rearrange themselves into three mass eigenstates: a Majorana neutral fermion $\chi^{0}$, and two charged Dirac fermions $\chi^{+}$ and $\chi^{++}$. At leading order, each of these states has a mass of $M_{\chi}$. However, radiative corrections to the charged states break this degeneracy, raising the masses of the charged fermions, singling out $\chi^{0}$ as the lightest state and the DM candidate.
These corrections have been computed; in detail, $\delta_{0}=M_{\chi^{+}}-M_{\chi^{0}}\simeq 164~{}{\rm MeV}$ and $\delta_{+}=M_{\chi^{++}}-M_{\chi^{+}}=3\delta_{0}$ Cirelli:2005uq ; Ibe:2012sx . For most aspects of our calculations these mass splittings will be irrelevant and we will take $\delta_{0}\simeq\delta_{+}\simeq 0$. However, we do include them in the electroweak potential used to compute Sommerfeld factors, and scattering- and bound-state wavefunctions. This is done by adding $2\delta_{0}$ to the diagonal term in the potential matrix corresponding to the $\chi^{+}\chi^{-}$ component of any state, $8\delta_{0}$ to the diagonal term corresponding to the $\chi^{++}\chi^{--}$ component, and similar, appropriate shifts for diagonal elements in the potential matrix corresponding to components of $Q\neq 0$ states (the shift is given by the difference between the rest mass of the state constituents and $\chi^{0}\chi^{0}$). We now turn to the interaction term in Eq. (92), $\tfrac{1}{2}g_{\scriptscriptstyle W}\bar{\chi}T_{\mathbf{5}}^{a}\not{W}^{a}\chi$. Here $a=1,2,3$ indexes the electroweak gauge bosons, which transform together in an adjoint of SU(2). In the broken theory, these are mapped to the charge and mass eigenstates in the usual way, $\displaystyle W^{1}_{\mu}=\frac{1}{\sqrt{2}}(W^{+}_{\mu}+W^{-}_{\mu}),\;\;W^{2}_{\mu}=\frac{i}{\sqrt{2}}(W^{+}_{\mu}-W^{-}_{\mu}),\;\;W^{3}_{\mu}=s_{\scriptscriptstyle W}A_{\mu}+c_{\scriptscriptstyle W}Z_{\mu}.$ (93) The only part of Eq. (92) that remains undetermined is $T_{\mathbf{5}}^{a}$, the three generators of SU(2) in the quintuplet representation. A convenient basis in which to specify $T_{\mathbf{5}}^{a}$ is the basis of charged states discussed above, where the DM can be cleanly identified. We can determine the charged states through their couplings to the bosons in Eq. (93). In particular, as $A_{\mu}$ couples to charge, we can read off the charges of the states as soon as we know $T^{3}$.
It will also be convenient to introduce $T^{\pm}=\frac{1}{\sqrt{2}}(T^{1}\pm iT^{2}),$ (94) in terms of which $T^{a}W^{a}=T^{+}W^{+}+W^{-}T^{-}+W^{3}T^{3}$, independent of representation. Before evaluating the generators for the quintuplet, let us review how the argument proceeds for the simpler case of the wino—a triplet or $\mathbf{3}$ of SU(2). In this case, it is conventional to exploit the fact that the generators are given by the structure constants of SU(2), so that $\bar{\chi}T_{\mathbf{3}}^{a}\gamma^{\mu}\chi=\bar{\chi}_{b}(T_{\mathbf{3}}^{a})_{bc}\gamma^{\mu}\chi_{c}=-i\epsilon_{abc}\bar{\chi}_{b}\gamma^{\mu}\chi_{c}.$ (95) This approach depended on the fact that we already knew a representation of the generators for the adjoint, and so does not simply generalize to larger representations. However, we can derive a representation more systematically as follows. Recall that one representation of $\mathbf{n}$ in SU(2) is an $(n-1)$-index symmetric tensor, with each index transforming in the fundamental. In this representation we denote the adjoint as $\chi^{ij}$, and the quintuplet as $\chi^{ijkl}$, where the indices take values 1 and 2. Beginning with the wino, $\chi^{ij}$ has three unique components, which we can embed into a vector as (Footnote 21: The $\sqrt{2}$ ensures that $\chi$ as it appears in $\bar{\chi}\not{\partial}\chi/2=\bar{\chi}^{ij}\not{\partial}\chi^{ij}/2$ is canonically normalized. An identical argument explains the coefficients in Eq. (100).) $\chi=\begin{pmatrix}\chi^{1}\\ \chi^{2}\\ \chi^{3}\end{pmatrix}=\begin{pmatrix}\chi^{11}\\ \sqrt{2}\chi^{12}\\ \chi^{22}\end{pmatrix}.$ (96) We can use this representation to explicitly construct $T^{a}_{\mathbf{3}}$ as follows. The key is that we know exactly how the generators act on $\chi^{ij}$ as each index transforms in the fundamental.
Accordingly, $T^{a}(\chi^{ij})=[(T^{a}_{F})^{i}_{k}\delta^{j}_{l}+\delta^{i}_{k}(T^{a}_{F})^{j}_{l}]\chi^{kl},$ (97) where $T^{a}_{F}=\sigma^{a}/2$, with $\sigma^{a}$ the Pauli matrices. Consider a generic infinitesimal transformation, $U=1+iu$, with $u=u_{a}T^{a}$. If we take $u^{a}=(0,0,\kappa)$ with $\kappa\ll 1$, then the components of $\chi$ transform as $\displaystyle\delta\chi^{1}=i\kappa\chi^{1},\;\;\delta\chi^{2}=0,\;\;\delta\chi^{3}=-i\kappa\chi^{3}.$ (98) From this, we can read off the action of an infinitesimal $U$ on $\chi$, and hence infer that we must have $T^{3}_{\mathbf{3}}={\rm diag}(+1,\,0,\,-1).$ (99) We can now identify $\chi^{1}$, $\chi^{2}$, and $\chi^{3}$ as having charges $+1$, $0$, and $-1$, yielding the expected spectrum in the broken phase. (Footnote 22: The representation in the charge basis is not unique. For instance, the transformation $\chi^{1,2}\to e^{\pm i\phi}\chi^{1,2}$ leaves the charge assignments unchanged, but will introduce a phase into the off-diagonal $W^{\pm}$ couplings. The same will be true for the off-diagonal quintuplet couplings.) The remaining components of $T^{a}_{\mathbf{3}}$ can be derived identically. This approach readily generalizes to the quintuplet. The five unique components of $\chi^{ijkl}$ can be embedded into a vector as follows, $\chi=\begin{pmatrix}\chi^{1}\\ \chi^{2}\\ \chi^{3}\\ \chi^{4}\\ \chi^{5}\end{pmatrix}=\begin{pmatrix}\chi^{1111}\\ 2\chi^{1112}\\ \sqrt{6}\chi^{1122}\\ 2\chi^{1222}\\ \chi^{2222}\end{pmatrix}=\begin{pmatrix}\chi^{++}\\ \chi^{+}\\ \chi^{0}\\ \chi^{-}\\ \chi^{--}\end{pmatrix}.$ (100) To justify the charge assignments, we repeat the above analysis to find $T^{3}_{\mathbf{5}}={\rm diag}(+2,\,+1,\,0,\,-1,\,-2).$ (101) Further, we can compute $\displaystyle T^{+}_{\mathbf{5}}=\begin{pmatrix}0&\sqrt{2}&0&0&0\\ 0&0&\sqrt{3}&0&0\\ 0&0&0&\sqrt{3}&0\\ 0&0&0&0&\sqrt{2}\\ 0&0&0&0&0\end{pmatrix},$ (102) and $T^{-}_{\mathbf{5}}=(T^{+}_{\mathbf{5}})^{T}$.
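The construction above is easy to verify numerically: building the spin-$j$ generators in the charge basis, with $T^{\pm}=(T^{1}\pm iT^{2})/\sqrt{2}$ as in Eq. (94), reproduces Eqs. (99), (101), and (102), satisfies $[T^{+},T^{-}]=T^{3}$, and gives the quadratic Casimir $j(j+1)\mathbb{1}$. A minimal sketch (the function name is ours):

```python
import numpy as np

def su2_generators(n):
    """T3, T+, T- for the n-dimensional (spin j = (n-1)/2) representation,
    ordered from highest to lowest charge, with T+- = (T1 +- i T2)/sqrt(2)."""
    j = (n - 1) / 2
    m = np.arange(j, -j - 1, -1)                          # j, j-1, ..., -j
    t3 = np.diag(m)
    raising = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))  # standard J+ matrix elements
    tp = np.diag(raising, k=1) / np.sqrt(2)               # T+ = J+/sqrt(2)
    return t3, tp, tp.T

t3, tp, tm = su2_generators(5)
# t3 = diag(2, 1, 0, -1, -2), matching Eq. (101);
# tp has superdiagonal (sqrt(2), sqrt(3), sqrt(3), sqrt(2)), matching Eq. (102)
```

Setting `n=3` recovers the wino case of Eq. (99) from the same routine, which is the sense in which the tensor construction generalizes beyond the adjoint.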
In the main body we will exclusively work in the charge basis of Eq. (100), using the form of $T_{\mathbf{5}}^{a}$ as above whenever necessary. ### A.2 Phenomenology We end this section with a brief description of quintuplet phenomenology beyond indirect detection. As already noted, the case where $M_{\chi}\simeq 13.6$ TeV is particularly appealing, as this is the mass singled out from a conventional cosmology via a WIMP-miracle-style argument. The interactions of the quintuplet, reviewed in the previous subsection, are sufficient to keep it in thermal equilibrium in the early Universe. As the Universe cools, eventually the quintuplet undergoes a conventional freeze-out. When this occurs dictates the final abundance, and matching to the observed DM density fixes the single parameter of the theory, $M_{\chi}$. For larger (smaller) values of $M_{\chi}$ than 13.6 TeV, one naively overproduces (underproduces) the observed DM density. Nevertheless, it is entirely possible that there are effects in the early Universe that modify this simple picture. The presence of additional beyond-the-SM states that either decay to the SM or directly to the quintuplet can dilute or increase its abundance, respectively, making a wider mass range viable. Further, for $M_{\chi}\leq 13.6$ TeV, even with no additional states, the quintuplet would represent a well motivated (and predictable) fraction of DM. For these reasons, in the main body we considered a wide range of quintuplet masses, although we emphasize once more that the scenario where $M_{\chi}\simeq 13.6$ TeV is compelling. As with any DM candidate, one can also consider searching for the quintuplet either with direct detection or at a collider. For direct detection, as the quintuplet carries no U(1) hypercharge, it does not couple to the $Z$ boson at tree level. Nevertheless, couplings to SM nucleons can arise at loop level.
The spin-independent cross-section is $\sigma\simeq(1.0\pm 0.3)\times 10^{-46}~{}{\rm cm}^{2}$ Bottaro:2021snn (see also Ref. Hisano:2015rsa ; Chen:2023bwg ), which at 13.6 TeV is beyond the reach of current searches, and even of next-generation instruments such as LZ Mount:2017qzi . Nevertheless, the cross-section is a factor of $\sim 4$ above the neutrino floor, and is within reach of generation-3 instruments such as DARWIN DARWIN:2016hyl . Detection at colliders is similarly challenging, but also potentially within reach of future instruments. Existing LHC searches reach masses of $\sim 270~\text{GeV}$; even the high-luminosity dataset will only reach $\sim 520~\text{GeV}$ Ostdiek:2015aga . Even future hadron colliders are unlikely to reach the thermal mass. A future 100 TeV hadron collider will just be able to reach the thermal masses for canonical neutralinos such as the higgsino and wino Cirelli:2014dsa ; Low:2014cba . Given these two candidates have significantly lower thermal masses of 1 and 2.9 TeV respectively Hisano:2006nn ; Cirelli:2007xd ; Hryczuk:2010zi ; Beneke:2016ync , the prospects of probing the 13.6 TeV quintuplet appear discouraging. Future muon colliders operating at lower center-of-mass energies could reach the quintuplet, although they would still need to obtain $\sqrt{s}\simeq 35~{}{\rm TeV}$ Bottaro:2021snn (see also Refs. Han:2020uak ; Han:2022ubw ). Taken together, in the short term indirect detection remains the most likely avenue for probing quintuplet DM. ## Appendix B Operators for higher-$L$ Bound State annihilation As argued in Sec. 4.1, the higher-$L$ bound states preferentially decay to the deeper bound states with lower $L$ instead of directly annihilating to SM particles. Nevertheless, for completeness here we provide the complete set of relevant operators up to ${\cal O}(v)$. We consider the structure of the operators which support up to $p$-wave bound states.
Thus we need to keep operators (at the amplitude level) suppressed by at most one power of the DM 3-momentum; the $\mathcal{O}(v^{0})$ operators will support both the direct annihilation and the $s$-wave bound state annihilation. By matching to the full tree level amplitude, we can obtain the structure that supports $S=0,\,L=1$ states (throughout this appendix we will work with 4-component DM fields and not reduce them to 2 components, as was done for the $L=S=0$ operator in the main text), $\mathcal{O}_{1}=\mathbf{v}_{\chi}\cdot\mathbf{n}\left(\bar{\chi}\Big{[}T^{a},T^{b}\Big{]}\gamma^{0}\gamma^{5}\chi\right)i\epsilon^{ijk}(n-\bar{n})^{k}{\cal B}_{\perp n}^{i,a}{\cal B}_{\perp\bar{n}}^{j,b},$ (103) where it is understood that the subscript $\chi$ on $\mathbf{v}_{\chi}$ indicates that the velocity vector is the velocity of the state created/annihilated by the $\chi$ field (as opposed to the $\bar{\chi}$ field). As is evident, this operator supports a bound state with $L=1$ and $S=0$. Likewise we can write down the next set of operators, $\displaystyle\mathcal{O}_{2}$ $\displaystyle=\mathbf{v}_{\chi}\cdot\mathbf{n}\left(\bar{\chi}\Big{\\{}T^{a},T^{b}\Big{\\}}\gamma^{i}\chi\right)(n-\bar{n})^{i}{\cal B}_{\perp n}^{\mu,a}{\cal B}^{b}_{\perp\bar{n}\mu},$ (104) $\displaystyle\mathcal{O}_{3}$ $\displaystyle=\mathbf{v}_{\chi}\cdot{\cal B}^{a}_{\perp n}{\cal B}^{b}_{\perp\bar{n}\mu}\left(\bar{\chi}T^{b}T^{a}\gamma^{\mu\perp}\chi\right)\\!,$ $\displaystyle\mathcal{O}_{4}$ $\displaystyle=\mathbf{v}_{\chi}\cdot{\cal B}^{b}_{\perp\bar{n}}{\cal B}^{a}_{\perp n\mu}\left(\bar{\chi}T^{a}T^{b}\gamma^{\mu\perp}\chi\right)\\!.$ From their Dirac structure, it is clear that these operators support $L=1,S=1$ bound states. There is also another operator which supports an $L+S$ odd bound state, specifically the $L=1$, $S=0$ bound state, which arises from a correction to an ultra-soft gauge boson emission off the heavy DM particles.
The details of this operator are involved and hence are separately given further below. (A complete analysis and resummation of this channel at NLL would require a full two-loop computation in order to recover the anomalous dimension as well as the matching to stage II of the EFT; we leave this work for the future.) We note that decays of SU(2)-singlet bound states with $L+S$ odd into two (transverse) gauge bosons are constrained by charge conjugation invariance and parity. This is because the underlying Lagrangian in Eq. (92) which couples the electroweak gauge sector and a Majorana quintuplet fermion is $C$ and $P$ invariant. A fermion-antifermion bound state has $C$ eigenvalue $(-1)^{L+S}$ and $P$ eigenvalue $(-1)^{L+1}$. A $\gamma\gamma$, $\gamma Z$, or $ZZ$ final state has $C$ eigenvalue $+1$, thereby forbidding an $L+S$ odd bound state decay into them by $C$ alone. Decays to the $W^{+}W^{-}$ final state are allowed regardless of the $L,\,S$ quantum numbers of the bound state. (One might think the Landau-Yang theorem also forbids decay from $L+S$-odd initial states into two bosons at all orders, but the application of the theorem to non-Abelian theories is subtle; there is a generalized Landau-Yang theorem that holds so long as the decay products are in a color-singlet state Beenakker:2015mra , but this is not the case for the decay of $L+S$-odd states. In Ref. Asadi:2016ybp , one can see explicitly that for wino-onium, decays to $W^{+}W^{-}$ are allowed for all combinations of initial-state $L$ and $S$.) In all cases, decays to longitudinal gauge bosons are potentially allowed, as well. Finally, let us return to the additional operator, sub-leading in velocity, that we can write down by considering the emission of another gauge boson of the kind that usually contributes to the $Y_{v}$ Wilson line.
We do not get any such correction from the $Y_{n}$ or the $Y_{\bar{n}}$ Wilson line since we do not wish to consider sub-leading terms in the SCET power counting parameter. We start again with our ${\cal O}(v^{0})$ operator at the amplitude level before we dress it with soft Wilson lines $\displaystyle\bar{\chi}\\{T_{\chi}^{a},T_{\chi}^{b}\\}\Gamma\chi{\cal B}_{\perp n}^{a}{\cal B}^{b}_{\perp\bar{n}},$ (105) where $\Gamma$ is an arbitrary Lorentz structure. Consider the emission of an SCET ultra-soft gauge boson of momentum $k$ off the initial $\chi$ or $\bar{\chi}$ particle. First looking at the $\chi$ particle, $\displaystyle u_{e}(\tilde{p})+ig\frac{i(\not{p}-\not{k}+M_{\chi})}{(p-k)^{2}-M_{\chi}^{2}+i\epsilon}\gamma^{\mu}\epsilon_{\mu}^{a}(T_{\chi}^{a})_{ce}u_{e}(p)$ (106) Here $\tilde{p}$ and $p$ denote the momenta of the $\chi$ particle. Let us now expand this result to $\mathcal{O}(v)$ (except inside the spinor). We see that apart from the usual Wilson line contribution, we also get two other relevant terms, $\displaystyle u_{e}(\tilde{p})-g_{\scriptscriptstyle W}(T_{\chi}^{a})_{ce}\left(-\frac{\epsilon_{0}^{a}}{k_{0}}+\frac{\mathbf{v}\cdot\mathbf{\epsilon}^{a}(k)}{k_{0}}-\frac{\epsilon_{0}^{a}(k)\mathbf{v}\cdot\mathbf{k}}{k_{0}^{2}}\right)u_{e}(M_{\chi}).$ (107) If we sum the infinite series of gauge bosons with one of the propagators expanded out to $\mathcal{O}(v)$, we then have a structure $\displaystyle\chi\rightarrow Y_{v}\mathbf{v}\cdot\mathbf{B_{s}}\chi$ (108) where we have defined the following Hermitian operator, $\displaystyle\mathbf{B}_{s}=\frac{Y^{\dagger}_{v}\left(\mathbf{\mathcal{P}}-g_{\scriptscriptstyle W}\mathbf{A}_{s}\right)Y_{v}}{v\cdot\mathcal{P}},$ (109) where $\mathcal{P}$ is the momentum label operator. If this term is included in a larger expression, the $v\cdot\mathcal{P}$ factor in the denominator only acts on the terms in the numerator, while the $\mathcal{P}$ in the numerator acts only on the $Y_{v}$ Wilson line to the right.
We can now combine this with the emissions off $\bar{\chi}$ to give us an effective operator, now dressed with soft Wilson lines $\displaystyle\bar{\chi}\Big{\\{}Y_{v}^{\dagger}\\{T^{a},T^{b}\\}\Gamma Y_{v},\mathbf{v}\cdot\mathbf{B}_{s}\Big{\\}}\chi{\cal B}_{\perp n}^{a^{\prime}}{\cal B}^{b^{\prime}}_{\perp\bar{n}}Y_{n}^{aa^{\prime}}Y_{\bar{n}}^{bb^{\prime}}.$ (110) We can once again use our Wilson line identity to write this in terms of our usual soft function $\displaystyle\bar{\chi}\Big{\\{}\\{T^{a},T^{b}\\}\Gamma,\mathbf{v}\cdot\mathbf{B}_{s}\Big{\\}}\chi{\cal B}_{\perp n}^{c}{\cal B}^{d}_{\perp\bar{n}}Y^{abcd}.$ (111) We can then formally define a new object $\mathbfcal{B}_{s}$ to explicitly separate out all the soft fields from the heavy DM fields $\displaystyle\mathbfcal{B}_{s}^{a}T^{a}=\mathbf{B}_{s}.$ (112) Then we see that $\displaystyle\mathbfcal{B}_{s}^{a}=\mathrm{Tr}[\mathbf{B}_{s}T^{a}],$ (113) where we have used the normalization $\mathrm{Tr}[T^{a}T^{b}]=\delta^{ab}$. Our operator now becomes $\displaystyle\bar{\chi}\Big{\\{}\\{T^{a},T^{b}\\}\Gamma,T^{e}\Big{\\}}\chi\,{\cal B}_{\perp n}^{c}{\cal B}^{d}_{\perp\bar{n}}Y^{abcd}\mathbf{v}\cdot\mathbfcal{B}_{s}^{e}.$ (114) In summary, we have an additional soft function which is explicitly suppressed in velocity. The Dirac structure for this operator is $\gamma^{0}\gamma^{5}$, so that this is the only operator that supports an $L+S$ odd bound state. However, it is clear that the soft operator only begins at one loop, and hence we are always forced into at least a 3 gauge boson final state. At the amplitude squared level, we simply have the square of this operator, and there will be no interference terms with other bound state operators since all other operators support an $L+S$ even bound state.
Let us consider the soft operator, $\displaystyle S^{abea^{\prime}b^{\prime}e^{\prime}}=\langle 0|(Y^{a^{\prime}b^{\prime}3d}\mathbf{v}\cdot\mathbfcal{B}_{s}^{e^{\prime}})^{\dagger}\mathcal{M}|X_{s}\rangle\langle X_{s}|Y^{ab3d}\mathbf{v}\cdot\mathbfcal{B}_{s}^{e}|0\rangle,$ (115) where $\mathcal{M}$ is the measurement performed on the soft operator and $X_{s}$ are the soft modes. This soft operator now has 6 free indices which must be contracted into the DM wavefunction factor. Since the soft final state is completely inclusive, we can simplify the operator as $\displaystyle S^{abea^{\prime}b^{\prime}e^{\prime}}=|\mathbf{v}|^{2}\langle 0|(Y^{a^{\prime}b^{\prime}3d}\mathbfcal{B}_{s}^{e^{\prime}i})^{\dagger}\mathcal{M}|X_{s}\rangle\langle X_{s}|Y^{ab3d}\mathbfcal{B}_{s}^{ei}|0\rangle.$ (116) We can now move the $|\mathbf{v}|^{2}$ factor to the DM wavefunction and instead redefine a soft operator $\displaystyle S^{abea^{\prime}b^{\prime}e^{\prime}}=\langle 0|(Y^{a^{\prime}b^{\prime}3d}\mathbfcal{B}_{s}^{e^{\prime}i})^{\dagger}\mathcal{M}|X_{s}\rangle\langle X_{s}|Y^{ab3d}\mathbfcal{B}_{s}^{ei}|0\rangle.$ (117) The only term that contributes at one loop is $\mathbfcal{B}_{s}$, since it vanishes at ${\cal O}(\alpha_{\scriptscriptstyle W}^{0})$. So we can set all other Wilson lines to their tree level values. Explicitly, $\displaystyle Y^{ab3d}\big{|}_{\textrm{tree}}=(Y_{v}^{fa}Y_{n}^{f3})(Y_{v}^{gb}Y_{\bar{n}}^{gd})\big{|}_{\textrm{tree}}=\delta^{a3}\delta^{bd}.$ (118) Thus, our operator at one loop becomes $\displaystyle S^{abea^{\prime}b^{\prime}e^{\prime}}_{1\mathchar 45\relax\mathrm{loop}}=\delta^{a3}\delta^{a^{\prime}3}\delta^{bb^{\prime}}\langle 0|(\mathbfcal{B}_{s}^{e^{\prime},i})^{\dagger}\mathcal{M}|X_{s}\rangle\langle X_{s}|\mathbfcal{B}_{s}^{e,i}|0\rangle.$ (119) We will calculate this operator only to one loop, to elucidate its properties.
The one-loop integrand, up to an overall factor, takes the form $\displaystyle I=$ $\displaystyle 2\delta^{ee^{\prime}}g_{\scriptscriptstyle W}^{2}\int\frac{d^{d}k}{(2\pi)^{(d-1)}}\frac{\delta^{+}(k^{2}-m_{\scriptscriptstyle W}^{2})\delta(q^{+}-k^{+})}{k_{0}^{2}}$ (120) $\displaystyle-$ $\displaystyle\delta^{ee^{\prime}}g_{\scriptscriptstyle W}^{2}\int\frac{d^{d}k}{(2\pi)^{(d-1)}}\frac{\delta^{+}(k^{2}-m_{\scriptscriptstyle W}^{2})\delta(q^{+}-k^{+})k^{2}}{k_{0}^{4}},$ where $q^{+}$ is the contribution to the final photon momentum from the soft function. The second term is proportional to $m_{\scriptscriptstyle W}^{2}$ and gives a power correction in $m_{\scriptscriptstyle W}^{2}/(q^{+})^{2}$ and hence can be ignored. The first term does not give a UV divergence, but will contribute a log which will be relevant for stage 2 of the EFT. In detail, $2\delta^{ee^{\prime}}g_{\scriptscriptstyle W}^{2}\int\frac{d^{d}k}{(2\pi)^{(d-1)}}\frac{\delta^{+}(k^{2}-m_{\scriptscriptstyle W}^{2})\delta(q^{+}-k^{+})}{k_{0}^{2}}=2\delta^{ee^{\prime}}\frac{\alpha_{\scriptscriptstyle W}}{\pi}\frac{q^{+}}{(q^{+})^{2}+m_{\scriptscriptstyle W}^{2}}.$ (121) Going to Laplace space and expanding out in the limit $m_{\scriptscriptstyle W}\rightarrow 0$, we have $\displaystyle I=-2\delta^{ee^{\prime}}\frac{\alpha_{\scriptscriptstyle W}}{\pi}\ln(m_{\scriptscriptstyle W}se^{\gamma_{E}}).$ (122) This result has an IR divergence. At first glance, this may not be that surprising since all our soft operators in the direct-channel annihilation were also IR divergent. In those cases, we could trace back the IR divergence to the violation of the KLN theorem due to the semi-inclusive nature of the final state. In this case however, the interesting point is that, even when the final state is completely inclusive, i.e., we do not constrain the final state to be just a photon, our soft function is still IR divergent. Here we can trace this to the exclusive nature of the initial state where we demand that our operator support an $L=1$, $S=0$ state.
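As a numerical cross-check of the step from Eq. (121) to Eq. (122), one can Laplace-transform $q^{+}/((q^{+})^{2}+m^{2})$ directly for a small mass regulator $m$ and compare with $-\ln(mse^{\gamma_{E}})$. A sketch (the values of $m$ and $s$ are illustrative):

```python
import numpy as np
from scipy.integrate import quad

m, s = 1e-3, 1.0   # illustrative values with m*s << 1

f = lambda q: np.exp(-s * q) * q / (q**2 + m**2)
# split the integral so quadrature resolves the sharp feature near q ~ m
val = quad(f, 0.0, 1.0, points=[m])[0] + quad(f, 1.0, np.inf)[0]

approx = -np.log(m * s) - np.euler_gamma   # -ln(m s e^{gamma_E})
print(val, approx)
```

The two numbers agree up to corrections that vanish as $ms\to 0$, as the expansion requires.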
This forces the emission of the third gauge boson which has no virtual counterpart, leading to an IR divergence. This is very similar to the IR divergence that appears in the computation of PDFs in QCD. ## Appendix C Unstable Particle Effective Theory In this section, we justify our use of Eq. (61) for computing the decay rate of bound states. Let us now look at the effective theory for resonances systematically. In the literature, this is referred to as unstable particle effective theory (for a review see Ref. Beneke:2015vfa ). To begin with, if we have an intermediate resonance state we expect a propagator in our amplitude of the form $\displaystyle D=\frac{i}{p^{2}-M_{*}^{2}},$ (123) where $M_{*}$ is a complex pole. If we write $M_{*}=M+i\Gamma/2$, then we have the result $\displaystyle D=\frac{i}{p^{2}-M^{2}-i\Gamma M+\Gamma^{2}/4}.$ (124) We wish to work in a regime of narrow width, i.e. $p^{2}-M^{2}\sim\Gamma M\ll M^{2}$, so that the propagator is well approximated by $\displaystyle D\simeq\frac{i}{p^{2}-M^{2}-i\Gamma M},$ (125) and we want to develop an effective theory with an expansion in the small parameter $\lambda=\Gamma/M$. For the case of inclusive decays of our DM bound state, $\Gamma M_{\chi}\sim\alpha_{\scriptscriptstyle W}^{5}M_{\chi}^{2}$; we are interested in the inclusive decay rate, so there are no large logarithms and the perturbative cross section begins at $\alpha_{\scriptscriptstyle W}^{2}$, and due to the nontrivial wavefunction, we have an additional factor of at least $\alpha_{\scriptscriptstyle W}^{3}$. The effective coupling thus scales as $\lambda\sim\alpha_{\scriptscriptstyle W}^{5}\ll 1$. The hard scale in this process is just the resonance mass, which in our case is simply $\sim$$M_{\chi}$. We can now treat this within HQET, writing $p=M_{\chi}v+k$, where $v$ is the four-velocity of the resonance with $v^{2}=1$ and $k$ is the residual momentum.
Given the scaling above of $p^{2}-M^{2}\sim\Gamma M$, we can immediately see that $k\sim\Gamma$, so that we have a soft mode $k^{\mu}\sim(\Gamma,\Gamma,\Gamma)\equiv M_{\chi}(\alpha_{\scriptscriptstyle W}^{5},\alpha_{\scriptscriptstyle W}^{5},\alpha_{\scriptscriptstyle W}^{5})$. The obvious question is how this scale relates to the mass scale $m_{\scriptscriptstyle W}\sim vM_{\chi}$ that we already have. Now the $\alpha_{\scriptscriptstyle W}$ in the decay rate is evaluated at the scale $M_{\chi}$ while the $\alpha_{\scriptscriptstyle W}$ in the HQET scaling is at the scale $m_{\scriptscriptstyle W}$; however, we can see by the equations relating $\alpha_{\scriptscriptstyle W}$ at the two scales that they are parametrically of the same order. If that is the case, then our mode has $k^{\mu}\sim M_{\chi}(\alpha_{\scriptscriptstyle W}^{5},\alpha_{\scriptscriptstyle W}^{5},\alpha_{\scriptscriptstyle W}^{5})$ with $k^{2}\ll m_{\scriptscriptstyle W}^{2}$ and hence can only be populated by a massless mode such as the photon. Given that in our case $\lambda\sim\alpha_{\scriptscriptstyle W}^{5}$, any corrections of order $\lambda^{1}$ are minute and will be sub-dominant in any error band at the accuracy we are aiming for. So we can safely work at leading order in $\lambda$. Following Beneke:2015vfa , it is clear that at this order the only term that exists is the propagator with the 1PI self-energy corrections. Any communication between the production and decay states via radiative emissions of our mode $k^{\mu}$ only occurs at $\mathcal{O}(\lambda)$ and hence is severely suppressed. Additionally, since $\Gamma\sim\alpha_{W}^{5}M_{\chi}$ while the splitting between bound states is $\Delta E_{n}\sim\alpha_{W}^{2}M_{\chi}$, any interference between bound states is also subleading. Therefore, we can safely ignore any radiative corrections by this mode.
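The claim that $\alpha_{\scriptscriptstyle W}$ evaluated at the two scales is parametrically comparable can be illustrated with one-loop running. In the sketch below, the input value $\alpha_{\scriptscriptstyle W}(m_{Z})\simeq 0.0338$ and the SM one-loop coefficient $b_{0}=19/6$ for SU(2)$_{L}$ are assumptions of the illustration, not taken from the text:

```python
import math

# one-loop SU(2)_L running: 1/alpha(mu) = 1/alpha(m_Z) + b0/(2 pi) ln(mu/m_Z)
alpha_mz = 0.0338          # assumed input: alpha_W at the Z mass
m_z, M_chi = 91.2, 13.6e3  # GeV
b0 = 19.0 / 6.0            # SM SU(2)_L coefficient (assumption of this sketch)

inv_alpha = 1.0 / alpha_mz + b0 / (2.0 * math.pi) * math.log(M_chi / m_z)
alpha_Mchi = 1.0 / inv_alpha
print(alpha_Mchi)   # ~0.031, within ~10% of alpha_W(m_Z)
```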
This suppression is a manifestation of the length separation in space-time between the process of production and decay. Now let us look at a cross-section for production of a resonance and its subsequent decay. We assume that we are in a regime $p^{2}-M^{2}\sim\Gamma M$, where $p$ is the momentum of the intermediate resonance state and $M$ is the real part of the pole. Let $N$ be our initial scattering state that will create the resonance, and we focus on the cross section to produce an observed final state $f$ and an ultrasoft photon $\gamma_{\textrm{us}}$ indicating that a bound state was formed. In detail, the differential cross section is $\displaystyle\frac{d\sigma}{dz}=\frac{1}{\mathcal{N}}\int d\Pi_{\gamma}d\Pi_{f}|{\cal M}(N\rightarrow f+\gamma_{\textrm{us}})|^{2}\delta^{(4)}(p_{\gamma}+p_{f}-p_{N})\mathcal{M}_{z}(f),$ (126) where ${\cal M}_{z}$ is the measurement (in this case the photon energy) function on the final state particles, and ${\cal N}$ is a normalizing kinematic factor. Since the photon emitted during the bound state formation is an ultrasoft photon, there is no measurement on it (via a multipole expansion of the measurement function) and its phase space is integrated over fully. The amplitude squared will contain the squared propagator for the resonance, $\displaystyle J=\frac{1}{(p^{2}-M^{2})^{2}+\Gamma^{2}M^{2}}.$ (127) Now, the key point is that if we are not interested in the details of the variation of the cross section near resonance, and we are sufficiently inclusive over $p^{2}$ around the resonance (by a value $\gg\Gamma M$), then we can make the following substitution $\displaystyle\lim_{\Gamma/M\rightarrow 0}J\rightarrow\frac{\pi}{\Gamma M}\delta(p^{2}-M^{2}).$ (128) This substitution is true only in the distribution sense, i.e. under the integral which at least encompasses the region of the size of the width about the resonance. This is the narrow width approximation. 
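The distributional substitution in Eq. (128) is easy to verify numerically: integrating $J$ over a window about $p^{2}=M^{2}$ that is wide compared to $\Gamma M$ reproduces $\pi/(\Gamma M)$. A sketch with illustrative numbers:

```python
import numpy as np
from scipy.integrate import quad

M, Gamma = 100.0, 1e-3        # illustrative: Gamma / M << 1
J = lambda p2: 1.0 / ((p2 - M**2)**2 + Gamma**2 * M**2)

# window around p2 = M^2 that is wide compared to Gamma * M
w = 1e3 * Gamma * M
val = quad(J, M**2 - w, M**2 + w, points=[M**2])[0]
print(val, np.pi / (Gamma * M))
```

The agreement improves as the window grows relative to $\Gamma M$, exactly the sense in which the narrow width approximation holds.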
This substitution then puts the intermediate resonance on-shell. The cross section can then be written as $\displaystyle\frac{d\sigma}{dz}$ $\displaystyle=\frac{1}{\mathcal{N}}\frac{\pi}{\Gamma M}\int d\Pi_{\gamma}d\Pi_{f}|{\cal M}(N\rightarrow B(p)+\gamma_{\textrm{us}})|^{2}|{\cal M}(B(p)\rightarrow f)|^{2}$ (129) $\displaystyle\times\delta^{(4)}(p_{\gamma}+p_{f}-p_{N})\delta(p^{2}-M^{2})\mathcal{M}_{z}(f).$ Here $B(p)$ represents a bound state with momentum $p$, and again this result is true at leading order in $\Gamma/M$, which forbids any communication between the production and decay states. If we then insert a factor of unity, $1=\int d^{4}p\,\delta^{(4)}(p-p_{f})$, the result can be rearranged to yield, $\displaystyle\frac{d\sigma}{dz}=$ $\displaystyle\frac{1}{\mathcal{N}}\frac{\pi}{M}\Big{[}\int d\Pi_{\gamma}d\Pi_{R}|{\cal M}(N\rightarrow B(p)+\gamma_{\textrm{us}})|^{2}\delta^{(4)}(p_{\gamma}+p-p_{N})\Big{]}$ (130) $\displaystyle\times$ $\displaystyle\frac{1}{\Gamma}\Big{[}\int d\Pi_{f}|{\cal M}(B(p)\rightarrow f)|^{2}\delta^{(4)}(p-p_{f})\mathcal{M}_{z}(f)\Big{]}$ $\displaystyle=$ $\displaystyle\sigma(N\rightarrow B+\gamma_{\textrm{us}})\frac{1}{\Gamma}\frac{d\Gamma_{B\rightarrow f}}{dz},$ which is simply the product of the production cross section and the differential branching ratio. The above separation holds where the process proceeds solely through the long lived bound state, but in practice the result should be summed over all bound states in the spectrum, as well as the direct annihilation case where no bound states are formed. Applied to our specific case we arrive at Eq. (61). ## Appendix D Proof of the Wilson Line Identity In this section, we prove the identity involving soft Wilson lines used in Eq. (10) that eventually leads to a universal factorization of the IR physics in terms of soft and jet functions independent of the representation. 
The property that we wish to show is $\displaystyle S_{v}T^{a}S_{v}^{\dagger}=T^{a^{\prime}}S_{v}^{a^{\prime}a},$ (131) where $T$ is a generator in an arbitrary representation (we used $\mathbf{5}$ for the quintuplet), and on the left hand side we have two Wilson lines in the same representation, whereas on the right it is in the adjoint. (In the main text we used $Y_{v}$ for the latter; we keep all as $S$ here for notational convenience.) In the main text we actually used $S_{v}^{\dagger}T^{a}S_{v}=S_{v}^{aa^{\prime}}T^{a^{\prime}}$, although this follows from the above by applying various inverses. In position space, the Wilson lines are defined as $\displaystyle S_{v}(x)$ $\displaystyle=Pe^{ig\int_{-\infty}^{x}ds\,v\cdot A_{s}(vs)}=Pe^{ig\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\,v\cdot A_{s}(\bar{v}\cdot y)},$ (132) $\displaystyle S_{v}^{\dagger}(x)$ $\displaystyle=\bar{P}e^{-ig\int_{-\infty}^{x}ds\,v\cdot A_{s}(vs)},$ where $v\cdot A_{s}=v\cdot A_{s}^{a}T^{a}$, with $T^{a}$ in the appropriate representation for the Wilson line, whilst $P$ is path ordering and $\bar{P}$ indicates anti-path ordering. The variable $s$ parametrizes the path along the direction $v$ from $x$ to $-\infty$. The statement in Eq. (131) is a generalization to other group representations of the identity applied in QCD Becher:2014oda . The soft Wilson line obeys the equation $\displaystyle\frac{d}{d\bar{v}\cdot x}S_{v}(x)$ $\displaystyle=ig\,v\cdot A_{s}(\bar{v}\cdot x)S_{v}(x),$ (133) $\displaystyle\frac{d}{d\bar{v}\cdot x}S^{\dagger}_{v}(x)$ $\displaystyle=-S_{v}^{\dagger}(x)ig\,v\cdot A_{s}(\bar{v}\cdot x).$ Coming back to our question, let us define $U^{a}(x)=S_{v}(x)T^{a}S_{v}^{\dagger}(x)$. We can then immediately see $\displaystyle\frac{d}{d\bar{v}\cdot x}U^{a}(x)=\left[igv\cdot A_{s}(\bar{v}\cdot x),U^{a}(x)\right]\\!.$ (134) We will solve this equation by recursion, order by order in the coupling $g$ to build up the full solution.
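Before carrying out the recursion, the finite form of Eq. (131), namely that conjugation by a group element rotates the generators by the corresponding adjoint-representation matrix, can be checked numerically for the quintuplet. The sketch below assumes the adjoint convention $(T^{b}_{\rm adj})_{ac}=-i\epsilon_{bac}$ and standard quintuplet generators; the group parameters are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

# quintuplet generators in the charge basis
T3 = np.diag([2.0, 1.0, 0.0, -1.0, -2.0]).astype(complex)
Tp = np.diag([np.sqrt(2), np.sqrt(3), np.sqrt(3), np.sqrt(2)], k=1).astype(complex)
T = [(Tp + Tp.T) / np.sqrt(2), (Tp - Tp.T) / (1j * np.sqrt(2)), T3]

# adjoint generators (T^b_adj)_{ac} = -i eps_{bac}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
T_adj = [-1j * eps[b] for b in range(3)]

theta = np.array([0.3, -1.1, 0.7])   # arbitrary group parameters
U = expm(1j * sum(th * t for th, t in zip(theta, T)))
U_adj = expm(1j * sum(th * t for th, t in zip(theta, T_adj)))

# finite-transformation analogue of Eq. (131): U T^a U^dag = T^{a'} (U_adj)^{a'a}
for a in range(3):
    lhs = U @ T[a] @ U.conj().T
    rhs = sum(U_adj[ap, a] * T[ap] for ap in range(3))
    print(np.allclose(lhs, rhs))   # True
```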
The tree level result is simply $U^{a(0)}(x)=T^{a}=T^{a^{\prime}}\delta^{a^{\prime}a}$. At the next order, $\displaystyle U^{a(1)}(x)$ $\displaystyle=\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\left[igv\cdot A_{s}(\bar{v}\cdot y),T^{a}\right]$ (135) $\displaystyle=ig\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\,v\cdot A_{s}^{b}(\bar{v}\cdot y)if^{baa^{\prime}}T^{a^{\prime}}$ $\displaystyle=T^{a^{\prime}}ig\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y\,v\cdot(A_{s}(\bar{v}\cdot y))^{a^{\prime}a}.$ At ${\cal O}(g^{2})$, $\displaystyle U^{a(2)}(x)$ $\displaystyle=\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot y_{1}}d\bar{v}\cdot y_{2}\left[igv\cdot A_{s}(\bar{v}\cdot y_{1}),\left[igv\cdot A_{s}(\bar{v}\cdot y_{2}),T^{a}\right]\right]$ (136) $\displaystyle=\frac{1}{2}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{2}P\Big{\\{}\left[igv\cdot A_{s}(\bar{v}\cdot y_{1}),\left[igv\cdot A_{s}(\bar{v}\cdot y_{2}),T^{a}\right]\right]\Big{\\}}$ $\displaystyle=T^{a^{\prime}}\frac{(ig)^{2}}{2}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{2}P\Big{\\{}\left(v\cdot A_{s}(\bar{v}\cdot y_{1})v\cdot A_{s}(\bar{v}\cdot y_{2})\right)^{a^{\prime}a}\Big{\\}}.$ From here, we can see that the $n^{th}$ term will be $\displaystyle U^{a(n)}(x)$ $\displaystyle=\frac{1}{n!}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{2}\ldots\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{n}$ (137) $\displaystyle\times P\Big{\\{}\left[igv\cdot A_{s}(\bar{v}\cdot y_{1}),\left[igv\cdot A_{s}(\bar{v}\cdot y_{2}),\left[\ldots\left[igv\cdot A_{s}(\bar{v}\cdot y_{n}),T^{a}\right]\right]\right]\right]\Big{\\}}$ $\displaystyle=T^{a^{\prime}}\frac{(ig)^{n}}{n!}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{1}\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{2}\ldots\int_{-\infty}^{\bar{v}\cdot x}d\bar{v}\cdot y_{n}$
$\displaystyle\times P\Big{\\{}\left(v\cdot A_{s}(\bar{v}\cdot y_{1})v\cdot A_{s}(\bar{v}\cdot y_{2})\ldots v\cdot A_{s}(\bar{v}\cdot y_{n})\right)^{a^{\prime}a}\Big{\\}}.$ Summing to all orders then proves our result. ## Appendix E Subtle Signs in the Bound-state Formation and Decay Calculations In the bulk of the paper, we have freely used results from Ref. Harz:2018csl that are written in terms of two-body states of the form $|ij\rangle$. However, in general our 2-body states for non-identical particles will in fact be combinations of the form $\frac{1}{\sqrt{2}}(|ij\rangle+(-1)^{L+S}|ji\rangle)$. (This convention choice is also discussed in the context of Sommerfeld enhancement in Ref. Beneke:2014gja , where these two approaches are labeled “method-1” and “method-2”; we largely adopt “method-2”, where we treat $|ij\rangle$ and $|ji\rangle$ as components of a single state, rather than tracking them separately.) The factor of $(-1)^{L+S}$ arises from a factor of $(-1)^{S+1}$ from the behavior of the spin configuration under particle exchange, a factor of $(-1)^{L}$ from the parity of the spatial wavefunction, and a factor of $(-1)$ from the exchange of two fermions. This means that when considering a transition of the form $|ij\rangle\rightarrow|i^{\prime}j^{\prime}\rangle$, we also need to include transitions between the $|ji\rangle$ and $|j^{\prime}i^{\prime}\rangle$ states. In many cases this does not make a difference and it is adequate to represent states purely by one component $|ij\rangle$. 
For example, if the $|ij\rangle\rightarrow|i^{\prime}j^{\prime}\rangle$ and $|ji\rangle\rightarrow|j^{\prime}i^{\prime}\rangle$ processes have equal rates, but $|ij\rangle\rightarrow|j^{\prime}i^{\prime}\rangle$ and $|ji\rangle\rightarrow|i^{\prime}j^{\prime}\rangle$ are forbidden (for example, this occurs if $|ij\rangle=|0,++\rangle$), then the combined rate is the same as what one would obtain from purely considering the $|ij\rangle\rightarrow|i^{\prime}j^{\prime}\rangle$ process. However, this behavior is not universal. As an example of a case where this matters, consider the transition between $Q=1$ bound state components $|\\!+0\rangle\rightarrow|\\!+0\rangle$. Writing out the individual components of these states (labeled as $23$ and $32$, following the notation of Sec. 3; one might be tempted to write the 23 state as $|\\!+0\rangle$ and 32 as $|0+\rangle$, but we reserve $|\\!+0\rangle$ for the full quantum state $|\\!+0\rangle=(|2\,3\rangle+(-1)^{L+S}|3\,2\rangle)/\sqrt{2}$), the full matrix element should be: $\displaystyle\mathcal{M}=\frac{1}{2}$ $\displaystyle\left(\mathcal{M}_{22,33}+(-1)^{(L+S)_{i}}\mathcal{M}_{32,23}+(-1)^{(L+S)_{f}}\mathcal{M}_{23,32}\right.$ (138) $\displaystyle\left.+(-1)^{(L+S)_{i}+(L+S)_{f}}\mathcal{M}_{33,22}\right)\\!.$ Now we can write $(L+S)_{f}=(L+S)_{i}+1\,\textrm{(mod 2)}$ for dipole transitions, and consequently: $\mathcal{M}=\frac{1}{2}\left(\mathcal{M}_{22,33}-\mathcal{M}_{33,22}+(-1)^{(L+S)_{i}}\left(\mathcal{M}_{32,23}-\mathcal{M}_{23,32}\right)\right)\\!.$ (139) Now as calculated in Sec.
3, if $\psi_{i}$ and $\psi_{f}$ denote the initial- and final-state wavefunctions, we have: $\displaystyle\mathcal{M}^{3}_{22,33}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\left[(T^{3})_{22}-(T^{3})_{33}\right]\int d^{3}\mathbf{r}\,\psi_{f}^{*}\nabla\psi_{i}$ (140) $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\int d^{3}\mathbf{r}\,\psi_{f}^{*}\nabla\psi_{i},{}$ $\displaystyle\mathcal{M}^{3}_{33,22}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\left[(T^{3})_{33}-(T^{3})_{22}\right]\int d^{3}\mathbf{r}\,\psi_{f}^{*}\nabla\psi_{i}{}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\int d^{3}\mathbf{r}\,(-\psi_{f}^{*}\nabla\psi_{i}),{}$ $\displaystyle\mathcal{M}^{3}_{23,32}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\left\\{\left[-i\left((T^{1})_{32}(T^{2})_{23}-(T^{2})_{32}(T^{1})_{23}\right)\right]M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\psi_{f}^{*}\psi_{i}\right\\}{}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\left\\{-3M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\psi_{f}^{*}\psi_{i}\right\\}\\!,{}$ $\displaystyle\mathcal{M}^{3}_{32,23}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\left\\{\left[-i\left((T^{1})_{23}(T^{2})_{32}-(T^{2})_{23}(T^{1})_{32}\right)\right]M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\psi_{f}^{*}\psi_{i}\right\\}{}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\left\\{3M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\psi_{f}^{*}\psi_{i}\right\\}\\!,{}$ recalling that the “3” superscript is for $\gamma,\,Z$ emission, depending on the value of $\alpha_{\text{rad}}$, the coupling of the emitted boson to the charged particle that radiated it. We also recall that the terms with $\alpha_{\text{NA}}$ correspond to emission off the virtual particles sourcing the potential. The $\alpha_{\text{NA}}$ factor is thus the coupling of the virtual line to the WIMPs.
For the case of capture by a radiated $\gamma$, $\alpha_{\text{rad}}=\alpha_{\text{em}}$ and $\alpha_{\text{NA}}=\alpha_{W}$. Thus, overall we have $\mathcal{M}_{32,23}=-\mathcal{M}_{23,32}$ and $\mathcal{M}_{33,22}=-\mathcal{M}_{22,33}$, and consequently: $\displaystyle\mathcal{M}$ $\displaystyle=\mathcal{M}_{22,33}+(-1)^{(L+S)_{i}}\mathcal{M}_{32,23}{}$ $\displaystyle=i\sqrt{2^{6}\pi\alpha_{\text{rad}}M_{\chi}}\left[\int d^{3}\mathbf{r}\,\psi_{f}^{*}\nabla\psi_{i}+(-1)^{(L+S)_{i}}3M_{\chi}\alpha_{\text{NA}}\int d^{3}\mathbf{r}\,\hat{\mathbf{r}}\psi_{f}^{*}\psi_{i}\right]\\!.$ (141) The $(-1)^{(L+S)_{i}}$ factor is obtained by treating the different components correctly, and is required to ensure the correct behavior of the matrix element under time reversal. ## Appendix F Analytic Approximate Results for Annihilation and Bound-state Formation In this final appendix we provide analytic estimates for the annihilation and bound state formation rate of DM. We consider the quintuplet case of interest first, followed by providing equivalent results for a general representation. ### F.1 Results for the quintuplet In the limit of unbroken SU(2), the wavefunctions and their integrals, and hence the bound-state capture rate, can be computed analytically in the low- velocity limit. In this regime the Sommerfeld enhancement can also be computed analytically. These results can also be applied (approximately, and with caveats we will discuss below) to the case where SU(2) is broken but the DM mass is very heavy relative to the symmetry breaking scale. These calculations can be useful both as a cross-check on our numerical results, and to develop intuition for which channels are likely to dominate the overall annihilation signal. As such, we present the details of these analytic calculations below, beginning with the DM in the quintuplet representation. 
#### F.1.1 Capture and annihilation rates As an opening example, let us estimate the spin-averaged capture rate into the spin-triplet ground state via photon emission. The total cross section for this process is given by (see App. C of Ref. Asadi:2016ybp; this expression as written includes capture only from the components of the incoming state that experience an attractive potential, and we expect the contribution from repulsed incoming states to be suppressed, due to their small overlap with the bound states): $\displaystyle\sigma v$ $\displaystyle=\frac{3}{2}\times\frac{2^{8}\pi\alpha\,k}{3}\left|\sum_{i}({\bf I}\cdot\eta_{i})\alpha_{\scriptscriptstyle W}(\alpha_{\scriptscriptstyle W}\lambda_{f}M_{\chi}/2)^{-3/2}e^{-2\lambda_{i}/\lambda_{f}}e^{\pi\alpha_{\scriptscriptstyle W}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{\scriptscriptstyle W}\lambda_{i}/v)\right.$ $\displaystyle\left.\times\eta_{f}^{\dagger}\left[\lambda_{i}\hat{C}_{1}+\hat{C}_{2}\lambda_{i}/\lambda_{f}\right]\eta_{i}\right|^{2}.$ (142) The notation we employ follows Ref. Asadi:2016ybp; $\eta_{i}$ and $\eta_{f}$ are potential eigenvectors, which for the quintuplet are given (in our basis) by $\eta_{f}=\{-2,1,0\}/\sqrt{5}$, and for $i=1,2$, $\eta_{1}=\{\sqrt{2},-\sqrt{2},1\}/\sqrt{5}$, $\eta_{2}=\{-2,-1,\sqrt{2}\}/\sqrt{7}$, with corresponding attractive eigenvalues $\lambda_{f}=5$, $\lambda_{1}=6$, $\lambda_{2}=3$. The ${\bf I}$ vector describes the fraction of the incoming plane wave in each state; we will choose ${\bf I}=\{0,0,1\}$, as the state asymptotes to two noninteracting, neutral DM particles. The energy of the outgoing photon is $k$, which at low velocities can be approximated as the binding energy of the ground state, $\lambda_{f}^{2}\alpha_{\scriptscriptstyle W}^{2}M_{\chi}/4$. 
Lastly, the $\hat{C}_{1}$ and $\hat{C}_{2}$ matrices describe the couplings between the different components of the initial and final states; for capture via photon (or $Z$) emission, they take the form: $\hat{C}_{1}=\begin{pmatrix}2&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix},\quad\hat{C}_{2}=\begin{pmatrix}0&2&0\\ -2&0&3\sqrt{2}\\ 0&-3\sqrt{2}&0\end{pmatrix}.$ (143) In the wino and positronium cases studied in Ref. Asadi:2016ybp, there was only one eigenstate that experienced an attractive initial-state potential, and so the sum in Eq. (142) was trivial. There is a simple expression for $|e^{\pi\alpha_{\scriptscriptstyle W}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{\scriptscriptstyle W}\lambda_{i}/v)|^{2}\simeq 2\pi\alpha_{\scriptscriptstyle W}\lambda_{i}/v$ in the limit of small $v$ (for positive $\lambda_{i}$), which scales purely as $1/v$, and consequently in those cases (wino and positronium) $\sigma v$ had a simple $1/v$ scaling at low relative velocities. However, in the quintuplet case, we see there are multiple terms in the sum that can interfere with each other, and so even in the limit of unbroken SU(2), we expect there to be a non-trivial velocity dependence in the capture cross section. Similarly, the Sommerfeld factors can be read off from the components of the scattering-state wavefunction at the origin, and in the unbroken limit this wavefunction has the form $\sum_{i}({\bf I}\cdot\eta_{i})\eta_{i}\,\phi(\lambda_{i}\alpha_{\scriptscriptstyle W},r)$, where $\phi(\alpha,r)$ is the solution to the scalar Schrödinger equation with an attractive Coulomb potential with coupling $\alpha$. In principle this sum runs over both positive and negative eigenvalues of the potential (corresponding to both attracted and repulsed eigenstates), but for low velocities we expect the contribution of the eigenstates experiencing a repulsive interaction to be very small. 
Nonetheless, where (as in the quintuplet case) there are two eigenstates experiencing an attractive potential, the value of the wavefunction at the origin (and hence the Sommerfeld factors) will experience a non-trivial interference between the two contributions. This can give rise to a velocity dependence differing from the simpler case where there is only one attractive eigenstate. Similar interference effects can be seen in the form of rapid changes in spectrum with respect to $M_{\chi}$ in the case of broken SU(2) symmetry, where the interference occurs between the various Sommerfeld factors with resonances at different positions (as discussed in the context of Fig. 9). In contrast, the manifestation of the eigenstate interference identified here persists in the SU(2)-symmetric limit and does not require any resonance structure, only differing (velocity-dependent) phases between the interfering contributions. However, as noted in Ref. Schutz:2014nka ; Asadi:2016ybp , at low velocities the system is often in an “adiabatic” regime where the incoming particle wavefunction evolves such that at short distances it has complete overlap with the eigenvector with the largest-magnitude attractive eigenvalue. The criterion for this behavior is roughly $v\lesssim\delta/m_{\scriptscriptstyle W}$, where $\delta$ is the mass splitting between the states; for $\delta=164$ MeV, we expect this behavior to hold roughly for $v\lesssim 2\times 10^{-3}$, i.e. for Milky-Way-scale velocities and lower. Note that this criterion is independent of the DM mass, so even when the DM is very heavy and the ratio $m_{\scriptscriptstyle W}/M_{\chi}$ is small, the effect of SU(2) breaking can still be seen in the presence of this adiabatic regime. In this case, the interference will be suppressed for both bound-state formation and Sommerfeld enhancement, with only the $i=1$, $\lambda_{i}=6$ term contributing significantly, and with the coefficient $I\cdot\eta_{i}$ replaced with $\delta_{i1}$. 
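The numerical threshold quoted above follows directly from the stated inputs (a one-line sketch; the W mass value is a standard input, not taken from this appendix):

```python
# Adiabatic-regime criterion v <~ delta/m_W, with delta = 164 MeV (from the
# text) and the standard W boson mass m_W ~ 80.4 GeV (assumed input).
delta, m_W = 0.164, 80.4   # both in GeV
print(f"{delta / m_W:.1e}")  # ~2.0e-03, i.e. Milky-Way-scale velocities
```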
This is an important simplifying approximation within its regime of validity. Note that the presence of this regime relies on SU(2) being broken, and also on low velocity ($v\lesssim\delta/m_{\scriptscriptstyle W}$); it will not appear if an unbroken symmetry ensures the degeneracy of the mass eigenstates, and it will also not generally be relevant in the early universe (e.g. for relic density calculations) where velocities are much higher. However, it is well-suited to the case of indirect detection in the Milky Way halo. For example, within this approximation, we obtain the spin-averaged capture rate to the ground state as: $\sigma v\simeq\frac{2^{8}}{5}\frac{\pi\alpha\alpha_{\scriptscriptstyle W}}{M_{\chi}^{2}}\times\frac{3^{3}\cdot 2^{9}}{5^{2}}e^{-24/5}\frac{\pi\alpha_{\scriptscriptstyle W}}{v}=\frac{2^{17}\cdot 3^{3}}{5^{3}}e^{-24/5}\frac{\pi^{2}\alpha\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}v}\simeq\frac{233\pi^{2}}{v}\frac{\alpha\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}.$ (144) Here we have employed the low-velocity approximation $|e^{\pi\alpha_{\scriptscriptstyle W}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{\scriptscriptstyle W}\lambda_{i}/v)|^{2}\simeq 2\pi\alpha_{\scriptscriptstyle W}\lambda_{i}/v$. In the same regime, where the initial state rotates into the most-attracted eigenstate, the $s$-wave direct annihilation cross section to gauge bosons can be computed as $\sigma v\simeq\frac{720\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}.$ (145) In this unbroken limit, the effective branching ratio to the line (i.e. 
to $\gamma\gamma$ + half the branching ratio to $\gamma Z$) should be given by $(s_{\scriptscriptstyle W}^{4}+s_{\scriptscriptstyle W}^{2}c_{\scriptscriptstyle W}^{2})/3=s_{\scriptscriptstyle W}^{2}/3$, so the line cross section should be: $(\sigma v)_{\text{line}}\simeq\frac{240\pi^{2}\alpha_{\scriptscriptstyle W}^{2}\alpha}{M_{\chi}^{2}v}.$ (146) As for the bound-state formation, at higher velocities ($v\gtrsim 2\times 10^{-3}$), we expect to see the onset of interference between the contributions from the two attracted eigenstates, resulting in a non-monotonic dependence of the cross section on velocity even in the $s$-wave case. This behavior, and its onset at roughly Milky Way-scale velocities, can be observed in Fig. 10. (Note that the velocity dependence can also be suppressed if the DM mass is small enough that the Sommerfeld enhancement is fully saturated, such that the velocity dependence of the individual Sommerfeld factors is very different from the case of unbroken SU(2) symmetry.) This would suggest the cross-sections for capture (to the spin-triplet ground state) and for annihilation producing a line should be very similar, at least when this adiabatic approximation holds (numerical calculations indicate the non-adiabatic cross section for bound-state capture, as estimated in Eq. 142, can range between larger than the adiabatic result by a factor of $\sim$$2$ and smaller by a factor of $\sim$$5$, as $v$ is varied). However, for $v\ll m_{\scriptscriptstyle W}/M_{\chi}$, we have the SU(2)-breaking effect that $p$-wave processes should be parametrically suppressed by a factor of order $(vM_{\chi}/m_{\scriptscriptstyle W})^{2}$, which at a 13.6 TeV mass can suppress the $p$-wave capture cross section by $\sim$$2$ orders of magnitude. We can also study the cross-section for spin-singlet $s\rightarrow p$ capture to an $n=2$, $l=1$ state. 
In this case we have $k=(25/16)\alpha_{\scriptscriptstyle W}^{2}M_{\chi}$, and $\displaystyle\sigma v$ $\displaystyle=\frac{2^{13}\pi\alpha k}{3^{3}}\frac{1}{\alpha_{\scriptscriptstyle W}}M_{\chi}^{-3}\frac{1}{\lambda_{f}^{3}}\left|\sum_{i}({\bf I}\cdot\eta_{i})e^{-4\lambda_{i}/\lambda_{f}}e^{\pi\alpha_{\scriptscriptstyle W}\lambda_{i}/(2v)}\Gamma(1-i\alpha_{\scriptscriptstyle W}\lambda_{i}/v)\eta_{f}^{\dagger}\right.$ (147) $\displaystyle\left.\times\left[\lambda_{i}\left(\frac{4\lambda_{i}}{\lambda_{f}}-3\right)\hat{C}_{1}+\hat{C}_{2}\left(3-12\frac{\lambda_{i}}{\lambda_{f}}+8\frac{\lambda_{i}^{2}}{\lambda_{f}^{2}}\right)\right]\eta_{i}\right|^{2}$ $\displaystyle=\frac{2^{9}\pi\alpha\alpha_{\scriptscriptstyle W}}{5\cdot 3^{3}M_{\chi}^{2}}\Bigg|\frac{3}{25}\sqrt{\frac{2}{5}}e^{-24/5}e^{3\pi\alpha_{\scriptscriptstyle W}/(2v)}\left(37e^{12/5}\Gamma(1-3i\alpha_{\scriptscriptstyle W}/v)\right.$ $\displaystyle\left.+89e^{3\pi\alpha_{\scriptscriptstyle W}/(2v)}\Gamma(1-6i\alpha_{\scriptscriptstyle W}/v)\right)\Bigg|^{2},$ or in the adiabatic regime, $\displaystyle\sigma v$ $\displaystyle\rightarrow\frac{2^{9}\pi\alpha\alpha_{\scriptscriptstyle W}}{5\cdot 3^{3}M_{\chi}^{2}}\left|\frac{267}{25}\sqrt{2}e^{-24/5}e^{6\pi\alpha_{\scriptscriptstyle W}/(2v)}\Gamma(1-6i\alpha_{\scriptscriptstyle W}/v)\right|^{2}$ (148) $\displaystyle=\frac{2^{12}\cdot 89^{2}\pi^{2}}{5^{5}\,v}e^{-48/5}\frac{\alpha\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}$ $\displaystyle\simeq\frac{0.70\pi^{2}}{v}\frac{\alpha\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}.$ We see that the scale for this cross-section is naturally two orders of magnitude smaller than the previous ones, which arises from the various numerical prefactors, primarily the factor of $e^{-48/5}$ compared to $e^{-24/5}$ for the capture to the $n=1$ state, corresponding to a suppression factor of $8\times 10^{-3}$. If these exponential terms were removed, the other prefactors would differ by less than a factor of 3. 
(Numerical calculations indicate the non-adiabatic cross section is larger than the adiabatic one in this case, by factors between 1 and 3.6.) This is suggestive that only the capture to the ground-state is likely to be comparable to direct annihilation, and the capture from the $p$-wave initial-state component suffers from a $v^{2}$ suppression once $v$ drops below $m_{\scriptscriptstyle W}/M_{\chi}$, which renders it subdominant at the $\mathcal{O}(1\%)$ level for our 13.6 TeV benchmark point. This suppression for higher-$n$ capture also suggests that the contribution to the endpoint hard photon spectrum from bound state formation and decay will be suppressed, as these contributions are dominated by capture into states with odd $L$ (and thus $n>1$) and $S=0$ that decay to $L=S=0$ states before annihilating (see Sec. 4 for a more in-depth discussion). Capture to the ground-state via emission of a dipole gauge boson changes $L$ by 1, thus requiring an initial $L=1$ state (which must then have $S=1$ if it contains identical DM particles), and $S=1$ states do not produce a leading-power contribution to the endpoint spectrum when they decay. For the $Q=1$ sector, let us again consider capture from the spin-triplet $p$-wave incoming wave to the spin-triplet $s$-wave state. In the unbroken limit the potential matrix for the final state takes the form: $V=\begin{pmatrix}-2&\sqrt{6}\\ \sqrt{6}&-3\end{pmatrix}.$ (149) Here the first row refers to the $++-$ state and the second to the $+\,0$ state. The transition matrices are now, $\hat{C}_{1}=\begin{pmatrix}-\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0\\ 0&-\frac{\sqrt{3}}{2}&\sqrt{\frac{3}{2}}\end{pmatrix},\quad\hat{C}_{2}=\begin{pmatrix}2\sqrt{2}&\sqrt{2}&0\\ 0&\sqrt{3}&0\end{pmatrix}.$ (150) In this case we must also replace the $\alpha$ prefactor in the cross section with $\alpha_{\scriptscriptstyle W}$. 
The attractive eigenvalue for the final state has $\lambda_{f}=5$, $\eta_{f}=\{-\sqrt{2},\sqrt{3}\}/\sqrt{5}$. Again, the binding energy of the ground state is $k=\alpha_{\scriptscriptstyle W}^{2}\lambda_{f}^{2}M_{\chi}/4=(25/4)\alpha_{\scriptscriptstyle W}^{2}M_{\chi}$ (and since in the unbroken limit $m_{\scriptscriptstyle W}=0$, we do not need to include a kinematic suppression for the $W$ mass). Then we obtain for the unbroken limit: $\displaystyle\sigma v=\frac{2^{7}3^{2}\pi\alpha_{\scriptscriptstyle W}^{2}}{5^{4}M_{\chi}^{2}}$ $\displaystyle\left|16e^{-12/5}e^{6\pi\alpha_{\scriptscriptstyle W}/(2v)}\Gamma(1-6i\alpha_{\scriptscriptstyle W}/v)\right.$ (151) $\displaystyle\left.+7e^{-6/5}e^{3\pi\alpha_{\scriptscriptstyle W}/(2v)}\Gamma(1-3i\alpha_{\scriptscriptstyle W}/v)\right|^{2}.$ When we assume the adiabatic regime, the result reduces to, $\sigma v=\frac{2^{17}\cdot 3^{3}\pi^{2}}{5^{3}}e^{-24/5}\frac{\alpha_{\scriptscriptstyle W}^{3}}{vM_{\chi}^{2}}\simeq 233\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{vM_{\chi}^{2}}.$ (152) This is for capture to the $Q=+1$ state; there is an equal rate for capture to the $Q=-1$ state. Note the $\alpha_{\scriptscriptstyle W}$ prefactor (rather than $\alpha$); including formation of the $Q=0$ state through $Z$ emission (as well as photon emission) would similarly promote that capture rate to have a prefactor of $\alpha_{\scriptscriptstyle W}$ rather than $\alpha$, for an overall capture rate (summing across all three channels) of $\sim 700\pi^{2}\alpha_{\scriptscriptstyle W}^{3}/(M_{\chi}^{2}v)$, similar to the full $s$-wave direct annihilation rate. The primary difference between this $p\rightarrow s$ capture rate and the inclusive direct annihilation rate will arise from velocity suppression of the $p\rightarrow s$ capture cross section in the broken-SU(2) case (with this suppression being lifted in the truly unbroken limit). 
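The numerical prefactors quoted in Eqs. (144), (148) and (152), and the diagonalization of the $Q=1$ potential in Eq. (149), can be checked with a few lines of arithmetic (a sketch; the only inputs are the formulas quoted in this subsection):

```python
import math

# Prefactor of Eqs. (144) and (152): 2^17 * 3^3 / 5^3 * e^{-24/5}
c144 = 2**17 * 3**3 / 5**3 * math.exp(-24 / 5)
print(round(c144))       # 233, as quoted

# Prefactor of Eq. (148): 2^12 * 89^2 / 5^5 * e^{-48/5}
c148 = 2**12 * 89**2 / 5**5 * math.exp(-48 / 5)
print(round(c148, 2))    # 0.7, as quoted

# n=2 vs n=1 exponential suppression: e^{-48/5}/e^{-24/5} = e^{-24/5}
print(f"{math.exp(-24 / 5):.1e}")  # ~8.2e-03, the quoted 8 x 10^-3

# Q=1 potential of Eq. (149): V = [[-2, sqrt(6)], [sqrt(6), -3]].
# Its eigenvalues solve x^2 + 5x + (6 - 6) = 0, giving x = 0 and x = -5,
# i.e. a single attractive eigenvalue lambda_f = 5 in the text's convention.
s6 = math.sqrt(6.0)
trace, det = -5.0, (-2.0) * (-3.0) - s6 * s6
disc = math.sqrt(trace**2 - 4 * det)
eigs = sorted([(trace - disc) / 2, (trace + disc) / 2])
assert math.isclose(eigs[0], -5.0) and abs(eigs[1]) < 1e-12

# eta_f = (-sqrt(2), sqrt(3))/sqrt(5) is the corresponding eigenvector
# (the common 1/sqrt(5) normalization is dropped below):
ex, ey = -math.sqrt(2.0), math.sqrt(3.0)
assert math.isclose(-2.0 * ex + s6 * ey, -5.0 * ex)  # first row of V.eta_f
assert math.isclose(s6 * ex - 3.0 * ey, -5.0 * ey)   # second row of V.eta_f
```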
#### F.1.2 Presence of metastable bound states The unbroken-SU(2) limit is also helpful for studying the question of whether there could be $L>0$ states in the spectrum whose decays to more deeply bound states are highly suppressed, leading them to decay through annihilation to SM particles with a substantial branching ratio. States which are degenerate in the unbroken limit are likely to remain close in energy as we reduce $M_{\chi}$, and therefore decays between them will be suppressed (although if this is decisive in determining whether a state is metastable, a more careful analysis will be required). The $L+S$-even potential for the quintuplet has two attractive eigenvalues, $Z=6$ and $Z=3$, whereas the $L+S$-odd potential has a single attractive eigenvalue, $Z=5$. Thus for spin-singlet states ($S=0$) we expect $L$-even bound states with energies $E_{n}/\alpha_{\scriptscriptstyle W}^{2}M_{\chi}=-9/n^{2},-2.25/n^{2}$, and $L$-odd bound states with energies $E_{n}/\alpha_{\scriptscriptstyle W}^{2}M_{\chi}=-6.25/n^{2}$. For spin-triplet states ($S=1$) we expect $L$-odd states with energies $E_{n}/\alpha_{\scriptscriptstyle W}^{2}M_{\chi}=-9/n^{2},-2.25/n^{2}$, and $L$-even states with energies $E_{n}/\alpha_{\scriptscriptstyle W}^{2}M_{\chi}=-6.25/n^{2}$. We first consider the case of $L$-odd states. Given a spin-singlet $L$-odd state with $n>L>0$ (binding energy $6.25/n^{2}$), there should always be a more deeply bound state with $n^{\prime}=n$, $L^{\prime}=L-1$ (binding energy $9/n^{2}$), which is accessible through a dipole transition. So in the spin-singlet case and Coulomb limit there should be no metastable states with $L>0$. For the spin-triplet, the case is slightly more complicated, as given an $L$-odd state with $n>L>0$ (binding energy $9/n^{2}$ or $2.25/n^{2}$), the accompanying state with $n^{\prime}=n$, $L^{\prime}=L-1$ has binding energy $6.25/n^{2}$. 
We see that the $L$-odd states with binding energies $2.25/n^{2}$ will always have an accompanying more-deeply-bound $L$-even spin-triplet state, to which they can decay, but this is not necessarily true for the states with binding energies $9/n^{2}$. A state with $L^{\prime}=L-1$ will be available if $9/n^{2}<6.25/m^{2}$ for some $m$ with $L\leq m<n$, i.e. if $m<n\sqrt{6.25/9}=n/1.2$ is consistent with $m\geq L$. This will be true for $n>1.2L$, so the dangerous range is states with $L<n\leq 1.2L$. In order for this range to include an integer, we must have $L>5$. For example, consider the spin-triplet state with $L=7$ and $n=8$, with dimensionless binding energy $9/8^{2}\simeq 0.14$ in the Coulombic limit. The lowest-lying $L=6$ spin-triplet state that is accessible via $\Delta L=1$, $\Delta S=0$ transitions has $n=7$, and consequently binding energy $6.25/7^{2}=0.13$ in the Coulombic limit; thus the $L=7$ state cannot decay through such a transition. For the $L=5$ case, in the Coulombic limit the states are degenerate, and so we would need to perform a more careful calculation. If we now consider the case of even-$L$ states, the situation is reversed between the spin-singlet and the spin-triplet; in the spin-triplet case we expect the even-$L$ states will always be able to decay to their accompanying, more deeply bound state with $L^{\prime}=L-1$, with the exception of the $L=0$ case where no such state exists (we will consider the $L=0$ case below). In the spin-singlet case, the same argument as previously tells us that for $L>0$, a state with $L^{\prime}=L-1$ will be available except in the case where $L<n\leq 1.2L$, which in the case of even $L$ is potentially relevant for $L\geq 6$. 
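The counting argument above can be checked by brute force (a minimal sketch; odd-$L$ rows apply to the spin-triplet deep tower, even-$L$ rows to the spin-singlet one):

```python
# Brute-force check of the metastability window L < n <= 1.2 L.
# Deep-tower binding energy: 9/n^2; the shallow tower reached by a dipole
# (Delta L = 1) transition has binding 6.25/m^2, where m >= L because an
# L' = L - 1 state requires principal quantum number m > L - 1.
def can_decay(L, n):
    return any(6.25 / m**2 > 9.0 / n**2 for m in range(L, n))

trapped = [(L, n) for L in range(1, 9) for n in range(L + 1, 2 * L + 2)
           if not can_decay(L, n)]
print(trapped)  # [(5, 6), (6, 7), (7, 8), (8, 9)]
# Nothing appears below L = 5, as argued; (5, 6) is the degenerate boundary
# case (9/36 == 6.25/25) flagged in the text, and (7, 8) is the worked example.
```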
Therefore, based on the Coulombic limit, we would predict that the only possible (meta)stable states with $L>0$ are $L+S$-even states (spin-triplet with $L$ odd or spin-singlet with $L$ even) with $L\geq 5$, $L<n\leq 1.2L$, and the eigenstate structure corresponding to the $Z=6$ eigenvalue. These high-$L$ states may not even be bound for masses of interest to us, and in any case the capture rate into them is likely to be very small. ### F.2 Results for general representations Let us now consider the more general situation where the DM is the lightest component of an SU(2) multiplet in a real representation of odd dimension $N$. Larger representations require higher DM masses to obtain the correct relic density (e.g. Ref. Bottaro:2021snn ), and hence the unbroken-SU(2) approximation is likely to be better at their thermal masses. Recall, however, that the condition to be in the adiabatic regime is mass-independent, $v\lesssim\delta/m_{\scriptscriptstyle W}$, so while this is a feature of the broken SU(2) symmetry, we expect it to be retained at sufficiently low velocities even for very heavy DM. In the unbroken-SU(2) limit we can use the results of Ref. Mitridate:2017izz for general representations. They proceed by decomposing the two-particle state into eigenstates of isospin $I$ ($I=1,3,\cdots,2N-1$); this corresponds to identifying the eigenstates of the potential in our language. They find the eigenvalue associated with the state with isospin $I$ is $\lambda=(2N^{2}-1-I^{2})/8$ (where positive eigenvalues correspond to attracted states, as per our previous convention) and so the most-attracted channel is the singlet, where $\lambda=(N^{2}-1)/4$. The isospin singlet corresponds to an $L+S$-even state with total charge $Q=0$, and for the quintuplet $\lambda=6$, as discussed above. In general, states with $I<\sqrt{2N^{2}-1}$ can support bound states; for the quintuplet this means we have $I=1,3,5$ bound states. 
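The eigenvalue formula quoted from Ref. Mitridate:2017izz can be checked against the quintuplet values used throughout F.1 (a sketch; inputs are only the formulas just quoted):

```python
def lam(N, I):
    # Potential eigenvalue of the isospin-I two-particle channel,
    # lambda = (2 N^2 - 1 - I^2)/8 (positive = attractive).
    return (2 * N**2 - 1 - I**2) / 8

# Quintuplet (N = 5): reproduces the eigenvalues 6, 5, 3 for I = 1, 3, 5
assert [lam(5, I) for I in (1, 3, 5)] == [6.0, 5.0, 3.0]

# The singlet channel gives the most-attractive eigenvalue (N^2 - 1)/4:
assert all(lam(N, 1) == (N**2 - 1) / 4 for N in (3, 5, 7, 9))

# Bound states require lambda > 0, i.e. I < sqrt(2 N^2 - 1) = 7 for N = 5:
print([I for I in range(1, 10, 2) if lam(5, I) > 0])  # [1, 3, 5]
```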
In the adiabatic regime, SU(2)-breaking effects cause the lowest-energy state at large distances (i.e. the $L+S$-even, $Q=0$ state of two identical DM particles) to smoothly transition into the isospin-singlet state at short distances. Thus in this regime, we expect both the Sommerfeld-enhanced direct annihilation and the bound state capture rates to be consistent with an initial isospin-singlet state. Since the dominant bound-state capture process (dipole emission of a gauge boson) changes isospin by 2, the final state must then have $I=3$, i.e. it is a SU(2) adjoint (this state is $L+S$-odd and the three components have $Q=0,\pm 1$). Consequently, within the adiabatic approximation, we only need to concern ourselves with (Sommerfeld enhanced) direct annihilation from the isospin-singlet state, and capture from an isospin-singlet initial state to an isospin-triplet final state, with the relevant eigenvalues being $\lambda_{i}=(N^{2}-1)/4$ and $\lambda_{f}=(N^{2}-5)/4$. (Transitions amongst bound states can involve higher-$I$ states; in particular $I=5$ for the quintuplet, with $\lambda=3$.) We will thus focus in this appendix on singlet-to-adjoint transitions. We reiterate that this approximation is not appropriate if the gauge symmetry is actually unbroken or at the high velocities associated with thermal freezeout in the early universe (as studied in e.g. Ref. Binder:2023ckj), where other transitions can also contribute significantly and may dominate. The quality of this approximation—i.e. the degree to which the incoming state retains non-singlet components at small $r$, which could contribute significantly to the capture rate—is an interesting question, but we ignore it here, as our main purpose is simply to develop some simple intuition for the importance of bound-state effects for the gamma-ray endpoint signal. 
The corresponding approximation for the quintuplet appears to do a reasonable job of estimating the relative size of bound-state capture and annihilation, as we see in Fig. 4. For these singlet-to-adjoint transitions, we can write the group theory coefficients for bound state formation from Ref. Mitridate:2017izz in the simplified form: $\displaystyle C^{a1b}_{\mathcal{J}}$ $\displaystyle=\frac{1}{\sqrt{T_{R}d_{R}}}\text{Tr}(T^{b}T^{a}),$ (153) $\displaystyle C^{a1b}_{\tau}$ $\displaystyle=i\frac{1}{\sqrt{T_{R}d_{R}}}\text{Tr}(T^{b}T^{c}T^{d})f^{acd}=-\frac{1}{\sqrt{T_{R}d_{R}}}\text{Tr}(T^{b}T^{c}T^{d})(T^{a}_{\text{adj}})^{cd}.$ We can now use $\text{Tr}(T^{a}T^{b})=T_{R}\delta^{ab}$, and also $\displaystyle\text{Tr}(T^{b}T^{c}T^{d})(T^{a}_{\text{adj}})^{cd}$ $\displaystyle=\frac{1}{2}T_{R}T_{\text{adj}}\delta^{ab}.$ (154) Thus, finally we obtain: $\displaystyle C^{a1b}_{\mathcal{J}}$ $\displaystyle=\sqrt{\frac{T_{R}}{d_{R}}}\delta^{ab},\quad C^{a1b}_{\tau}=-\frac{1}{2}\sqrt{\frac{T_{R}}{d_{R}}}T_{\text{adj}}\delta^{ab},$ (155) where we will show how the $C^{a1b}_{\mathcal{J}}$ and $C^{a1b}_{\tau}$ coefficients enter the bound state capture rate in Secs. F.2.2 and F.2.3. Now if $R$ is the representation of size $N$, for SU(2) we have $T_{R}=N(N^{2}-1)/12$, and so $T_{\text{adj}}=2$, while $d_{R}=N$ (and in particular $d_{\text{adj}}=3$). Thus for SU(2) we obtain the coefficients: $\displaystyle C^{a1b}_{\mathcal{J}}$ $\displaystyle=\sqrt{\frac{N^{2}-1}{12}}\delta^{ab},\quad C^{a1b}_{\tau}=-C^{a1b}_{\mathcal{J}}.$ (156) Let us also note that we can now extend the argument given in App. F.1.2 to general representations. A bound state of isospin $I$ will generally have open decay channels to states with lower isospin (by 2 units) which are hence more deeply bound due to the larger eigenvalue $\lambda$. 
The exception is $I=1$ states, which must decay to $I=3$ states which are more shallowly bound for the same principal quantum number. For this reason, excited $I=1$ states can be metastable if they have sufficiently large $L$ that all the $I=3$ states differing by only one unit in $L$ are more shallowly bound. This can occur for a general representation if $L<n\leq\frac{N^{2}-1}{N^{2}-5}L=\left(1+\frac{4}{N^{2}-5}\right)L$; this range will contain an integer if $L>(N^{2}-5)/4$. Thus the threshold $L$ at which this effect can occur increases as the representation size goes up. #### F.2.1 Direct annihilation In this case, if we can evaluate the tree-level cross section for annihilation from an isospin-singlet initial state to any desired SM final state, we can account for the Sommerfeld enhancement by simply multiplying the tree-level cross section by $S=2\pi\alpha_{\scriptscriptstyle W}\lambda_{i}/v$, in the low-velocity limit. This cross section is given for Majorana fermion DM by Ref. Mitridate:2017izz as: $\displaystyle(\sigma v)_{\text{tree,I=1}}$ $\displaystyle=\frac{\pi\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}\frac{T_{R}^{2}d_{\text{adj}}}{d_{R}}$ $\displaystyle=\frac{\pi\alpha_{\scriptscriptstyle W}^{2}}{2^{4}\times 3\times M_{\chi}^{2}}N(N^{2}-1)^{2}.$ (157) Multiplying by the Sommerfeld factor gives: $\displaystyle(\sigma v)_{\text{I=1}}$ $\displaystyle=\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{2^{5}\times 3\times M_{\chi}^{2}v}N(N^{2}-1)^{3}\rightarrow\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\frac{N^{7}}{96},$ (158) where in the final step we have assumed $N\gg 1$. Checking, for the wino and quintuplet this yields: $\displaystyle(\sigma v)_{\text{I=1}}$ $\displaystyle=\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\begin{cases}16,&N=3,\\ 720,&N=5.\end{cases}$ (159) For the quintuplet this agrees with our calculation above. This also agrees with the wino result from Ref. 
Asadi:2016ybp, accounting for our assumption that the adiabatic condition holds. #### F.2.2 Capture to the ground state At small velocities, from Ref. Mitridate:2017izz we can read off the low-velocity bound state capture cross section to the $n=1$ state as: $\displaystyle(\sigma v)^{n=1,l=0}_{\text{bsf}}=\frac{\pi\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}\frac{2S+1}{g_{\chi}^{2}}\frac{2^{11}\pi}{3}\sum_{ab}|C^{a1b}_{\mathcal{J}}+(1/\lambda_{f})C^{a1b}_{\tau}|^{2}\frac{\lambda_{i}^{3}\alpha_{\scriptscriptstyle W}}{\lambda_{f}v}e^{-4\lambda_{i}/\lambda_{f}}.$ (160) This expression involves an average over initial states, with degrees of freedom denoted by $g_{\chi}$; since we are interested in the case where $100\%$ of the DM captures from the singlet state, we only need to average over spin degrees of freedom, so $g_{\chi}=2$. Our initial state must be $L+S$-even and thus for capture to an $L=0$ final state, it must have $S=1$. Thus we obtain: $\displaystyle(\sigma v)^{n=1,l=0}_{\text{bsf}}$ $\displaystyle=\frac{\pi\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}\frac{3}{4}\frac{2^{11}\pi}{3}d_{\text{adj}}\left(\frac{N^{2}-1}{12}\right)|1-(1/\lambda_{f})|^{2}\frac{\lambda_{i}^{3}\alpha_{\scriptscriptstyle W}}{\lambda_{f}v}e^{-4\lambda_{i}/\lambda_{f}}$ $\displaystyle=\frac{8\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\frac{(N^{2}-9)^{2}(N^{2}-1)^{4}}{(N^{2}-5)^{3}}e^{-4(N^{2}-1)/(N^{2}-5)},$ (161) where $d_{\text{adj}}=3$ counts the number of generators and arises from $\sum_{ab}|\delta^{ab}|^{2}=d_{\text{adj}}$. 
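The general-$N$ expressions can be checked against the quintuplet and wino numbers quoted earlier (a sketch; Eq. (161) sums over all adjoint gauge bosons, so for $N=5$ it should reproduce the $\sim 700\,\pi^{2}\alpha_{W}^{3}/(M_{\chi}^{2}v)$ total capture rate of F.1.1, i.e. three times the single-channel coefficient 233 of Eq. (144)):

```python
import math

def sigma_ann_coeff(N):
    # Coefficient of pi^2 alpha_W^3/(M_chi^2 v) in Eq. (158)
    return N * (N**2 - 1)**3 / 96

assert sigma_ann_coeff(3) == 16.0    # wino, Eq. (159)
assert sigma_ann_coeff(5) == 720.0   # quintuplet, Eq. (159)

def sigma_bsf_coeff(N):
    # Coefficient of pi^2 alpha_W^3/(M_chi^2 v) in Eq. (161)
    lam_i, lam_f = (N**2 - 1) / 4, (N**2 - 5) / 4
    return (8 * (N**2 - 9)**2 * (N**2 - 1)**4 / (N**2 - 5)**3
            * math.exp(-4 * lam_i / lam_f))

print(round(sigma_bsf_coeff(5)))     # 699, i.e. ~3 x 233
assert abs(sigma_bsf_coeff(5) - 3 * 233) < 2
```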
In the limit of $N\gg 3$, which is helpful for comparison against direct annihilation, we obtain the simplified result: $\displaystyle(\sigma v)^{n=1,l=0}_{\text{bsf}}$ $\displaystyle\rightarrow\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}8N^{6}e^{-4}\simeq\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\frac{N^{6}}{6.8},\quad N\gg 3.$ (162) (Note in the quintuplet case this approximate value is larger than the truth by about a factor of 3; it will be a better approximation for larger $N$.) #### F.2.3 Capture to the $n=2$ states Now let us consider capture to the $n=2$, $L=1$ states, as their subsequent decays and annihilations can give rise to endpoint photons, unlike capture directly to the $n=1$ state. In this case the final state must have $S=0$ (so the initial state has $L+S$ even). From Ref. Mitridate:2017izz in the low-velocity limit the $s$-wave (1st line) and $d$-wave (2nd line) contributions are: $\displaystyle(\sigma v)^{n=2,l=1}_{\text{bsf}}$ $\displaystyle=\frac{\pi\alpha_{\scriptscriptstyle W}^{2}}{M_{\chi}^{2}}\left(\frac{2S+1}{g_{\chi}^{2}}\right)\frac{2^{12}\pi\lambda_{i}}{9\lambda_{f}^{5}}\frac{\alpha_{\scriptscriptstyle W}}{v}\left[\sum_{ab}\left|C^{a1b}_{\mathcal{J}}\lambda_{f}\lambda_{i}(3\lambda_{f}-4\lambda_{i})+C^{a1b}_{\tau}(-3\lambda_{f}^{2}+12\lambda_{f}\lambda_{i}-8\lambda_{i}^{2})\right|^{2}\right.$ $\displaystyle\left.+\sum_{ab}2^{5}\lambda_{i}^{4}\left|C^{a1b}_{\mathcal{J}}\lambda_{f}+2C^{a1b}_{\tau}\right|^{2}\right]e^{-8\lambda_{i}/\lambda_{f}}$ $\displaystyle=\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\frac{2^{8}\lambda_{i}}{9\lambda_{f}^{5}}(N^{2}-1)\left[\left|\lambda_{f}\lambda_{i}(3\lambda_{f}-4\lambda_{i})-(-3\lambda_{f}^{2}+12\lambda_{f}\lambda_{i}-8\lambda_{i}^{2})\right|^{2}\right.$ $\displaystyle\left.+2^{5}\lambda_{i}^{4}|\lambda_{f}-2|^{2}\right]e^{-8\lambda_{i}/\lambda_{f}}$ $\displaystyle=\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\frac{2^{4}(N^{2}-1)^{2}}{9(N^{2}-5)^{5}}\left[(N^{6}+9N^{4}-165N^{2}-37)^{2}+2^{5}(N^{2}-1)^{4}(N^{2}-13)^{2}\right]e^{-8(N^{2}-1)/(N^{2}-5)}.$ (163) The first line here agrees with the $s\rightarrow p$ quintuplet result computed in Eq. (148), once we multiply that result (which was for photon-mediated capture into a specific $n=2$ $L=1$ bound state) by a factor of 3 to account for $W$- and $Z$-mediated capture and a second factor of 3 to account for the $m=0,\pm 1$ states. In the limit of large $N$, this expression has the scaling: $\displaystyle(\sigma v)^{n=2,l=1}_{\text{bsf}}$ $\displaystyle\rightarrow\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\frac{2^{4}N^{6}}{9}\left[1+2^{5}\right]e^{-8}\simeq\frac{\pi^{2}\alpha_{\scriptscriptstyle W}^{3}}{M_{\chi}^{2}v}\frac{N^{6}}{1700}\left[1+32\right],\quad N\gg 5.$ (164) So we see that compared to direct annihilation, in the large-$N$ limit we expect the various contributions to scale as: * • $p\rightarrow s$ capture to the $n=1,L=0,S=1$ state (contributions to endpoint photons are power-suppressed): direct annihilation rate $\times 14/N$, in addition to (when SU(2) is broken) any kinematic suppression of $W/Z$ emission (by a factor as small as $\alpha/(3\alpha_{\scriptscriptstyle W})$) or velocity suppression due to the $p$-wave initial state (parametrically $\mathcal{O}(M_{\chi}^{2}v^{2}/m_{\scriptscriptstyle W}^{2})$ for $v\lesssim m_{\scriptscriptstyle W}/M_{\chi}$) (as mentioned above, the large-$N$ approximation overestimates this ratio for the quintuplet by about a factor of 3, and so the rates are actually very comparable), 
* • $s\rightarrow p$ capture to the $n=2,L=1$ states collectively: direct annihilation rate $\times 0.06/N$, in addition to any kinematic suppression of $W/Z$ emission (up to a factor of $3\alpha_{\scriptscriptstyle W}/\alpha$), * • $d\rightarrow p$ capture to the $n=2,L=1$ states collectively: direct annihilation rate $\times 1.8/N$, in addition to any kinematic suppression of $W/Z$ emission (up to a factor of $3\alpha_{\scriptscriptstyle W}/\alpha$) or velocity suppression due to the $d$-wave initial state (parametrically $\mathcal{O}(M_{\chi}^{4}v^{4}/m_{\scriptscriptstyle W}^{4})$ for $v\lesssim m_{\scriptscriptstyle W}/M_{\chi}$). We see that the only contribution that is not suppressed at low velocities and that gives rise to leading-power contributions to the endpoint photon spectra (via its decay and subsequent annihilation) is generically expected to have a cross section 2 or more orders of magnitude below direct annihilation. Consequently, it is quite plausible for bound state formation to be a large or even dominant contribution to the inclusive annihilation rate when the velocity suppression for higher partial waves is mild or absent and the gauge bosons are massless (as in the case of freezeout), while simultaneously having a generically small effect on the endpoint spectrum for indirect detection, especially at low velocities ($v\ll m_{\scriptscriptstyle W}/M_{\chi}$) or where the $n=2$ states are too loosely bound to allow $W$\- or $Z$-mediated capture. One might ask about the contribution from capture into states with $n>2$. While in the unbroken limit bound states with large $L$ and $n$ may play a large role in the capture rate (e.g. Binder:2023ckj ), for the parameter space we have considered in this paper, the number of bound states is always truncated by the non-zero SU(2) breaking scale, preventing large enhancements from the proliferation of high-$n$ states. 
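The numerical ratios in the bullets above follow from Eqs. (158), (162) and (164); the arithmetic can be reproduced directly (a sketch; only the quoted large-$N$ coefficients enter):

```python
import math

# Large-N coefficients of pi^2 alpha_W^3 N^6/(M_chi^2 v):
ann = 1 / 96                              # direct annihilation, Eq. (158), one extra power of N
bsf_n1 = 8 * math.exp(-4)                 # n=1 capture, Eq. (162): ~N^6/6.8
bsf_n2_s = (16 / 9) * math.exp(-8)        # s-wave piece of Eq. (164)
bsf_n2_d = (16 / 9) * 32 * math.exp(-8)   # d-wave piece of Eq. (164)

# Ratios to direct annihilation (each carries an extra 1/N):
print(round(bsf_n1 / ann, 1))    # 14.1  -> the quoted "14/N"
print(round(bsf_n2_s / ann, 3))  # 0.057 -> the quoted "0.06/N"
print(round(bsf_n2_d / ann, 2))  # 1.83  -> the quoted "1.8/N"
```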
Furthermore, we expect velocity suppressions (of order $(M_{\chi}v/m_{\scriptscriptstyle W})^{2L}$) for all capture rates with initial $L>0$. Finally, within our adiabatic approximation both initial and final states always experience attractive interactions, with couplings that obey $\lambda_{i}/\lambda_{f}>1$, leading to increasingly strong exponential suppression for large-$n$ states and avoiding the potentially unitarity-violating region of parameter space identified in Ref. Binder:2023ckj.
# Retention of Water in Terrestrial Magma Oceans and Carbon-rich Early Atmospheres

Dan J. Bower, Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland

Kaustubh Hakim, Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland

Paolo A. Sossi, Institute of Geochemistry and Petrology, Department of Earth Sciences, ETH Zurich, Clausiusstrasse 25, 8092 Zurich, Switzerland

Patrick Sanan, Institute of Geophysics, ETH Zurich, Sonneggstrasse 5, 8092 Zurich, Switzerland

(Received July 14, 2021; Revised October 12, 2021; Accepted March 21, 2022)

###### Abstract

Massive steam and CO2 atmospheres have been proposed for magma ocean outgassing of Earth and terrestrial planets. Yet the formation of such atmospheres depends on volatile exchange with the molten interior, governed by volatile solubilities and redox reactions. We determine the evolution of magma ocean–atmosphere systems for a range of oxygen fugacities, C/H ratios, and hydrogen budgets, with a model that includes redox reactions for hydrogen (H2–H2O), carbon (CO–CO2), and methane (CH4), as well as solubility laws for H2O and CO2. We find that small initial budgets of hydrogen, high C/H ratios, and oxidizing conditions suppress outgassing of hydrogen until the late stage of magma ocean crystallization. Hence early atmospheres in equilibrium with magma oceans are dominantly carbon-rich, and specifically CO-rich except at the most oxidizing conditions. The high solubility of H2O limits its outgassing to melt fractions below $\sim$30%, the fraction at which the mantle transitions from vigorous to sluggish convection with melt percolation. Sluggish melt percolation could enable a surface lid to form, trapping water in the interior and thereby maintaining a carbon-rich atmosphere (equilibrium crystallization). Alternatively, efficient crystal settling could maintain a molten surface, promoting a transition to a water-rich atmosphere (fractional crystallization).
However, additional processes, including melt trapping and H dissolution in crystallizing minerals, further conspire to limit the extent of H outgassing, even for fractional crystallization. Hence, much of the water delivered to planets during their accretion can be safely harbored in their interiors during the magma ocean stage, particularly at oxidizing conditions.

Keywords: Planetary interior — Planetary atmospheres — Planetary structure — Super Earths — Extrasolar rocky planets — Exoplanet atmospheric composition — Exoplanet evolution

Journal: The Planetary Science Journal. Software: Simulating Planetary Interior Dynamics with Extreme Rheology (SPIDER) (Bower et al., 2018, 2019) is open source and hosted at https://github.com/djbower/spider. SPIDER Version 0.2.1 was used in this study and is also available on Zenodo (Bower et al., 2021).

## 1 Introduction

The origin and distribution of light volatile elements such as C, H, and O in the major planetary reservoirs (core, mantle, and atmosphere) regulate how terrestrial planet atmospheres form and evolve (Gaillard et al., 2021, and references therein). During planetary accretion and contemporaneous iron-core formation—both of which promote the formation of a global magma ocean—the siderophile nature of both C (Corgne et al., 2008) and H (Tagawa et al., 2021) strongly partitions them into the core. Nevertheless, the mantle retains some fraction of C at core-forming conditions, in addition to replenishment of both C and H after core formation (Hirschmann, 2016); hence, these volatiles subsequently participate in short- and long-term geochemical cycling (e.g., Dasgupta, 2013). Here we examine how the inventories of C and H exchange between the interior and atmosphere as the magma ocean evolves after core formation. The magma ocean outgasses volatiles as it cools from molten to mostly solid silicate rock to form a secondary atmosphere.
Therefore, chemical and physical processes operating in the magma ocean, both at its surface and at depth, exert strong controls on the size and composition of such early atmospheres. Early atmospheres establish the initial conditions for the subsequent long-term evolution of planetary interiors and their atmospheres through interaction with the surface environment via geochemical cycles. Magma ocean atmospheres therefore provide the connection between the volatile budgets emplaced during planet formation and the establishment of long-term climate states. The shock degassing of substantial H2O from hydrated minerals (Lange & Ahrens, 1982) to simulate impacts during planetary accretion motivated investigation of the blanketing effect of a steam atmosphere above the molten early Earth (Abe & Matsui, 1985). Additional justification for the widespread presence of H2O and CO2 is provided by experiments and accompanying equilibrium models for the devolatilization of ordinary and carbonaceous chondrites, which are shown to produce oxidized species in abundance (Gooding & Muenow, 1977; Schaefer & Fegley, 2010; Thompson et al., 2021). Consequently, coupled magma ocean–atmosphere models have focused almost exclusively on the role of oxidized species (H2O and CO2) in controlling the lifetime of magma oceans and growth of early atmospheres (e.g., Lebrun et al., 2013; Hier-Majumder & Hirschmann, 2017; Salvador et al., 2017; Nikolaou et al., 2019; Barth et al., 2021). More reduced materials (such as enstatite chondrites that are thought to have contributed to Earth’s accretion on isotopic grounds) may instead produce H2-rich or CO-rich gases (Schaefer & Fegley, 2010). This has partly inspired calculations of the chemistry and cooling timescales of reducing atmospheres (Zahnle et al., 2020; Lichtenberg et al., 2021). However, chondrites may not be appropriate analogs for the materials that degas to produce early atmospheres around terrestrial planets (Sossi, 2021).
This is because mass transfer is much more efficient when materials are liquid rather than solid, allowing for the more rapid replenishment of surface material via convection and thus facilitating atmosphere formation. This has further motivated coupled models of the astrophysical observables of magma oceans, with the aim of interpreting spectroscopic data from next-generation telescopes (Hamano et al., 2015; Bonati et al., 2019; Bower et al., 2019; Katyal et al., 2020). The redox state (more precisely, the oxygen fugacity, $f{\rm O_{2}}$) of the mantle depends on pressure, temperature, and composition and dictates the speciation of outgassing products. It controls the relative fugacities of reduced and oxidized species (fugacity is equivalent to partial pressure at the low total pressures of terrestrial atmospheres). Core formation on Earth established $f{\rm O_{2}}$ around two log10 units below the iron-wüstite (IW) buffer at equilibrium at depth, based on the Fe content of the mantle and core (e.g., Rubie et al., 2015). However, the volume change of the reaction between ferric and ferrous iron means that silicate melt transported to the surface defines higher $f{\rm O_{2}}$ than that set by core formation (Armstrong et al., 2019; Hirschmann, 2012). The surface $f{\rm O_{2}}$, rather than the $f{\rm O_{2}}$ at depth, imposes the redox state at the magma ocean–atmosphere interface and thereby controls outgassing chemistry (Sossi et al., 2020). Evolutionary models of magma oceans are usually derived from geochemical or geodynamic considerations. Chemical models focus on tracking the compositional evolution of cumulate mineral assemblages that form as the magma ocean cools (e.g., Elkins-Tanton, 2008). However, they ignore dynamics during the crystallization process and often assume fractional crystallization.
In (stepwise) fractional crystallization, cumulates that form owing to cooling are isolated from the molten magma ocean, and the evolving composition of the melt is determined from mass balance. By contrast, dynamic models track thermal energy transport but at the expense of reduced compositional complexity. However, dynamic formulations that include a local representation of melt–solid separation (Abe, 1993; Bower et al., 2018) can replicate fractional crystallization. They can also replicate equilibrium crystallization, in which melt and solid freeze together before any significant relative motion has occurred. Equilibrium chemistry calculations predict a $\sim$100 bar CO-rich atmosphere for an Earth-like magma ocean around 2200 K (Sossi et al., 2020), although it remains unclear how the atmospheric size and speciation change during the cooling and crystallization of the magma ocean. In particular, it is unknown whether all early atmospheres are expected to be CO-rich and whether they can transition to atmospheres that are instead dominated by hydrogen species, such as H2, H2O, and CH4—with implications for habitability and the formation of surface water oceans. Furthermore, it remains to be determined how the transition is influenced by planetary conditions (e.g., $f{\rm O_{2}}$, C/H ratio, H budget), which is pertinent given the diversity of formation environments for terrestrial planets outside the solar system. To this end, we combine mass balance and equilibrium chemistry in a self-consistent and time-evolved model to probe magma ocean–atmosphere exchange. Unlike previous models, our model accounts for the CO2/CO, H2O/H2, and CH4/CO2 ratios set by the oxygen fugacity, C/H ratio, and temperature of the magma ocean, as well as their relative solubilities. We explore a range of initial endowments of the hydrogen and carbon volatile inventory and quantify the relationship between these variables and the partial pressures of the degassed species.
Finally, atmospheric escape is also modeled to determine the extent to which the preferential loss of hydrogen can influence the subsequent outgassing of dissolved volatiles.

## 2 Interior–Atmosphere Coupling

### 2.1 Overview

The SPIDER code (Bower et al., 2018, 2019) solves for the coupled evolution of the silicate mantle and atmosphere, where the mantle can be molten, solid, or a mixture (partial melt). It considers interior energy transport by convection, conduction, and relative motion of melt and solid through mixing and separation and is a true 1D model in the sense that energy fluxes are determined locally (Abe, 1993; Bower et al., 2018). Cooling of the interior is regulated by radiative transfer in the atmosphere, and the atmosphere itself grows through outgassing of the interior as the mantle solidifies owing to cooling. We consider a two-phase system of single composition (MgSiO3) for simplicity, where the thermophysical properties of solid and melt are determined from Mosenfelder et al. (2009) and Wolf & Bower (2018), respectively. The melting curves adhere to peridotite melting data in the upper mantle and measurements on chondritic material in the lower mantle (Andrault et al., 2011); they are not perturbed according to the mantle volatile content. The atmosphere is treated as gray with no scattering, and the two-stream approximation is applied to solve for radiative transfer (Abe & Matsui, 1985); radiation limits are not considered. Opacities of the gas species are provided in Appendix A. We modified the SPIDER code to additionally account for mass exchange between volatiles according to equilibrium chemistry (Section 2.3), as well as atmospheric escape (Section 2.4). The mathematical description of the volatile mass balance and its adherence to equilibrium chemistry is given in detail in Appendix C.
For each chemical reaction, we introduce a term into the volatile mass balance to account for the exchange of mass necessary to retain chemical equilibrium between participating gas species as dictated by oxygen fugacity and temperature. Similarly, another term in the mass balance equation tracks volatile loss due to atmospheric escape. Hence, chemical equilibrium and escape are self-consistently determined as part of the same system of equations that are integrated to describe the coupled interior–atmosphere evolution. Our parameter choices are similar to those in Bower et al. (2018, 2019), so we only present pertinent details here. We consider a planet with Earth dimensions and a magma ocean that crystallizes from the bottom up, an approach that is justified because the melting curves are steeper than the mantle adiabat. Convection and melt–solid mixing are determined by an eddy diffusivity that is based on a constant mixing length, and internal heat sources are not considered because their influence is negligible over the lifetime of a magma ocean. Viscosity controls the dynamic regime of the mantle and varies from $10^{2}$ Pa s for pure silicate melt to $10^{22}$ Pa s for pure solid, increasing abruptly at a melt fraction of 40% (Costa et al., 2009; Bower et al., 2018). Other controls on viscosity, such as temperature and composition, are second order and therefore neglected. The crystal size, which controls the efficiency of melt–solid separation, is set at either 1 mm or 5 cm (Appendix B) to simulate equilibrium or fractional crystallization, respectively. We track the reservoir evolution of five volatile species during the magma ocean stage: H2, H2O, CO, CO2, and CH4. For convenience, we refer to H2O and H2 as hydrogen species and CO, CO2, and CH4 as carbon species. The dissolved content of H2O and CO2 in the magma ocean is determined by their solubility, and reduced species are set to zero solubility (Section 2.2).
Redox reactions impose H2–H2O, CO–CO2, and CO2–H2–CH4 equilibrium (Section 2.3). Partitioning of volatiles into solids (crystals) is not considered because its effect is small compared to the partitioning of volatiles between the melt and atmosphere. Therefore, the amount of volatiles stored in the mantle during solidification is a minimum estimate. We simulate cooling of a magma ocean from an initial surface temperature of 2700 K until it reaches 1650 K. For surface temperatures cooler than 1650 K the assumption of equilibrium dissolution begins to break down. This is because a rheological transition caused by increasing crystal fraction results in mass transfer that is too sluggish to maintain equilibrium with the atmosphere; for conciseness we subsequently refer to this event as "surface lid formation."

### 2.2 Solubility

A solubility law relates the dissolved volatile content of a particular species in melt $X$ to its fugacity $f$. Pressures at the magma ocean–atmosphere interface of an Earth-sized planet are expected to be around several hundred bars, which motivates our subsequent choice of solubility laws. In the following, the dissolved content of a given volatile in the magma ocean is set solely by its fugacity at the interface and the relevant solubility law. The volatile species are considered to be homogeneously distributed in the magma ocean.

#### 2.2.1 Solubility of H2O

We employ a general expression for solubility with a power-law dependence: $X=\alpha f^{1/\beta},$ (1) where $\alpha$ and $\beta$ are constants specific to a particular equilibrium between a gas species and its dissolved component. In Equation 1, $\alpha$ is an empirically determined coefficient that encompasses the temperature, pressure, and composition of the liquid in which the solubility is being determined, while $\beta$ reflects the stoichiometric relationship between the mole fraction of the dissolved species and the fugacity of the gas species.
Henry’s law is recovered when $\beta=1$, which indicates no chemical reaction (hence no change in speciation) when the gas dissolves in the solvent. Throughout this work we assume gas ideality such that fugacity and partial pressure are equivalent, and hence symbols for fugacity $f$ and partial pressure $p$ are used interchangeably. In Equation 1, the relevant partial pressure (fugacity) is determined at the interface of the magma ocean and atmosphere. At low H2O fugacities, water dissolves in silicate melts as hydroxyl (OH-) groups, so $\beta_{\rm H_{2}O}=2$ describes the solubility of H2O in melt (e.g., McMillan, 1994). However, this relationship breaks down at high H2O fugacities because H2O replaces OH- as the predominant melt species (above $\sim 2000$ bars, Berndt et al., 2002; Stolper, 1982). Therefore, a single $\beta_{\rm H_{2}O}$ determined by simply best-fitting data across large $f$H2O ranges yields spurious values of $\beta_{\rm H_{2}O}$ because different reactions govern solubility at different $f$H2O. Table 1 summarizes recent experimental constraints on H2O solubility for compositions, pressures, and temperatures relevant to the magma ocean–atmosphere interface of a terrestrial planet, i.e. a few hundred bars and greater than about 1500 K. The table is ordered based on the silica content of the sample, increasing from less than 45 wt% SiO2 to more than 50 wt% SiO2. Experiments on evolved compositions (e.g. rhyolitic/granitic melts) are not included because they are not representative of compositions during the magma ocean stage: these are instead summarized in Iacono-Marziano et al. (2012). Peridotite is a rock composed predominantly of olivine, which is the major constituent of Earth’s mantle and likely exoplanetary mantles (Putirka & Rarick, 2019).
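As a concrete illustration (a minimal sketch of Equation 1 using the peridotite fit from Table 1, $\alpha=534$ ppmw bar$^{-1/2}$ and $\beta=2$; this is our code, not the paper's):

```python
# Sketch of the power-law solubility of Equation 1, X = alpha * f**(1/beta).
# alpha = 534 ppmw/bar^(1/2) and beta = 2 are the peridotite values from Table 1
# (Sossi et al. 2022); the beta = 2 (hydroxyl) form only holds at low fH2O,
# breaking down above roughly 2000 bars.

def dissolved_h2o_ppmw(f_h2o_bar, alpha=534.0, beta=2.0):
    """Dissolved H2O (ppmw) in melt at water fugacity f_h2o_bar (bars)."""
    return alpha * f_h2o_bar ** (1.0 / beta)

# The square-root form means doubling the dissolved water content requires
# quadrupling the fugacity, which is why H2O is retained so effectively:
for f in (1.0, 10.0, 100.0):
    print(f, dissolved_h2o_ppmw(f))  # e.g. 100 bars -> 5340 ppmw (~0.5 wt%)
```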
The solubility of water in liquid peridotite (red line, Figure 1) is determined at high temperature and low $f$H2O, so its solubility law is most appropriate for a terrestrial magma ocean (Sossi et al., 2022). For the relevant pressure–temperature conditions of the magma ocean surface, the mole fraction of H2O dissolved depends largely on variations in $f$H2O, while the effect of temperature is a subordinate, if poorly constrained, variable (Hamilton & Oxtoby, 1986). As such, temperature does not explicitly appear as a functional dependence in Equation 1, and $\alpha$ is taken to be constant.

#### 2.2.2 Solubility of CO2

There are no experimental constraints on the solubility of CO2 in peridotite, so instead we use a solubility law formulated for basalt as a proxy. At low CO2 fugacities, carbon dioxide dissolves in silicate melts as either molecular CO2 or the CO${}_{3}^{2-}$ group (Stolper & Holloway, 1988). Because the stability of the carbonate ion increases according to melt basicity (Mg2+, Ca2+ cations, Holloway & Blank (1994)), the Mg-rich nature of planetary mantles ensures that it occurs exclusively as CO${}_{3}^{2-}$ in such liquids. Determining the mole fraction of dissolved carbonate for pressures up to 815 bars $f$CO2, Dixon et al. (1995) constrained the solubility of CO2 in mid-ocean ridge basalt (MORB) melt: $X^{m}_{\rm CO_{2}}=(3.8\times 10^{-7})f{\rm CO_{2}}\exp\left(-23\ \frac{f{\rm CO_{2}}-1}{10RT}\right),$ (2) where $X^{m}_{\rm CO_{2}}$ is the mole fraction of CO2 in the melt, $f{\rm CO_{2}}$ is fugacity in bars, and $T$ is temperature and $R$ the gas constant, both in SI units. The exponential function on the right-hand side is known as the Poynting factor, which captures the influence of pressure and temperature on CO2 fugacity.
The abundance of CO2 (ppmw) in melt is then $X_{\rm CO_{2}}=10^{4}\left(\frac{4400X^{m}_{\rm CO_{2}}}{36.594-44X^{m}_{\rm CO_{2}}}\right).$ (3) This solubility law compares favorably to experimental results at 10 kbar (Pan et al., 1991). At water concentrations less than about 3 wt% there is a negligible influence of the presence of dissolved H2O on CO2 solubility (Dixon et al., 1995). Above about 6 wt%, water can enhance the solubility of CO2 by around a factor of two (Iacono-Marziano et al., 2012), although CO2 remains insoluble compared to H2O. In our model, the water concentration only increases beyond 3 wt% at the end of the magma ocean stage for the most oxidizing condition and largest initial hydrogen budget; otherwise, the water concentration is significantly less. Hence, we ignore the minor influence of both carbon and hydrogen on each other’s solubility.

#### 2.2.3 Solubility of H2, CO, and CH4

We assume that the solubilities of H2, CO, and CH4 are negligible compared to H2O and CO2, respectively, and therefore set $\alpha_{\rm H_{2}}=\alpha_{\rm CO}=\alpha_{\rm CH_{4}}=0$ (Equation 1). This is motivated by high-pressure experiments that find that the equilibrium constants for the dissolution of H2 gas in basaltic melts result in solubilities more than two orders of magnitude lower than that for H2O (Li et al., 2015b; Hirschmann et al., 2012). Similarly, the mole fraction of reduced carbon dissolved in silicate melts is typically one to two orders of magnitude lower than for CO2 (Yoshioka et al., 2019). Methane dissolved in silicate melts has been identified by spectroscopy in quenched glasses (Mysen et al., 2011; Ardia et al., 2013; Wetzel et al., 2013; Armstrong et al., 2015; Dalou et al., 2019). These studies together demonstrate that the fraction of dissolved methane increases in melts equilibrated under increasingly reducing, high-pressure and H-rich conditions.
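The contrast between the soluble oxidized species and the effectively insoluble reduced species can be made concrete with a short sketch of Equations 2 and 3 (our illustration; the constants are those quoted above, and the temperature and fugacity values are arbitrary examples):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def co2_melt_mole_fraction(f_co2_bar, T_K):
    """Equation 2 (Dixon et al. 1995 fit for MORB): mole fraction of dissolved CO2."""
    poynting = math.exp(-23.0 * (f_co2_bar - 1.0) / (10.0 * R * T_K))
    return 3.8e-7 * f_co2_bar * poynting

def co2_ppmw(x_m):
    """Equation 3: convert the melt mole fraction of CO2 to an abundance in ppmw."""
    return 1e4 * (4400.0 * x_m) / (36.594 - 44.0 * x_m)

# At 100 bars fCO2 and 1473 K this gives only ~45 ppmw dissolved CO2, roughly
# two orders of magnitude below H2O at the same fugacity, which is why carbon
# outgasses early while water stays dissolved.
print(co2_ppmw(co2_melt_mole_fraction(100.0, 1473.0)))
```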
To quantify the solubility of methane in the melt at a given methane fugacity, Ardia et al. (2013) performed experiments with Fe-free basaltic melts at pressures between 0.7 and 3 GPa. The equilibrium constant of the reaction CH4(l) = CH4(g) results in an order of magnitude less dissolved C for a given $f$CH4 compared to an equivalent $f$CO2. Because estimated CH4 contents are lower than several hundred ppmw even at $f$CH4 of 1000 bars (Ardia et al., 2013), its solubility is neglected. It is noteworthy that solubility data for CH4 in silicate melts at low total pressures relevant to magma oceans ($<$0.7 GPa) are lacking, preventing an accurate assessment of its solubility behavior.

Table 1: H2O Solubilities Constrained by Experimental Studies

${\alpha}^{\ddagger}$ | $\beta$ | Composition | Temp. (K) | Pres. (bars) | $f{\rm O_{2}}$ ($\Delta$IW) | Reference
---|---|---|---|---|---|---
534 | 2.0 | Peridotite | 2173 | 1.013 | –1.4 to +6.5 | Sossi et al. (2022)
683 | 2.0 | Lunar Glass | 1623 | 1.013 | –3.0 to +4.8 | Newcombe et al. (2017)
727 | 2.0 | Anorthite-Diopside | 1623 | 1.013 | –3.0 to +4.8 | Newcombe et al. (2017)
965 | 2.0 | Basalt (MORB) | 1473 | 176-717 | +3.9 to +13.2 | Dixon et al. (1995)
1007 | 2.0 | Basalt (MORB) | 1473 | 503-2021 | +3.5 and +7.9 | Berndt et al. (2002)
215 | 1/0.7 | Basalt | 1373 | 1034-6067 | unbuffered | Wilson & Head III (1981)†

Note. — † Used in several modeling studies, e.g. Lebrun et al. (2013), Salvador et al. (2017), Nikolaou et al. (2019), and Bower et al. (2019). ‡ $\alpha$ has units of ppmw/bar$^{1/\beta}$ and where relevant is determined by refits to existing data by constraining $\beta=2$ and no solubility at zero fugacity. We only consider experimental studies of basaltic or more mafic melts (i.e., Mg and Fe rich) at relatively low $f{\rm H_{2}O}$ ($\lesssim$2000 bars).

Figure 1: Experimentally determined solubility data of H2O for several melt compositions and temperatures, for fugacities from 0 to 200 bars (Table 1).
Dotted lines denote $\beta=2.0$, and $\alpha$ is reported at the end of the lines (Equation 1).

### 2.3 Redox Reactions

When at equilibrium, the magma ocean and atmosphere must be considered as a single thermodynamic unit. This is because, based on the scaling of convective velocity (Solomatov, 2000), the magma ocean mixing timescale is a few weeks, which is considerably shorter than the cooling timescale of around 1 Myr. Accordingly, $f{\rm O_{2}}$ defined by the magma ocean at its surface is equivalent to the oxygen partial pressure (or fugacity) in the atmosphere. Therefore, the fugacity ratios of gaseous redox couples (H2/H2O and CO/CO2), as well as CO2–H2–CH4, satisfy chemical equilibrium at the $T$ and $f{\rm O_{2}}$ imposed by the magma ocean at its surface. We consider three reactions, where all species exist in the gas phase, albeit partially dissolved in the melt according to their respective solubilities (Section 2.2). For the hydrogen redox couple, $\displaystyle\rm{H}_{2}+0.5\rm{O}_{2}$ $\displaystyle=\rm{H}_{2}O,$ (4) $\displaystyle\log_{10}{\rm K_{eq}}$ $\displaystyle=\frac{13152}{T}-3.039,$ for the carbon redox couple, $\displaystyle\rm{CO}+0.5\rm{O}_{2}$ $\displaystyle=\rm{CO}_{2},$ (5) $\displaystyle\log_{10}{\rm K_{eq}}$ $\displaystyle=\frac{14468}{T}-4.348,$ and for CO2–H2–CH4, $\displaystyle\rm{CO}_{2}+2\rm{H}_{2}$ $\displaystyle=\rm{CH}_{4}+\rm{O}_{2},$ (6) $\displaystyle\log_{10}{\rm K_{eq}}$ $\displaystyle=\frac{-16276}{T}-5.4738.$ The equilibrium constants $\rm{K_{eq}}$ for the redox couples are determined from the Gibbs free energy of reaction using data from the JANAF database fit for temperatures between 1500 and 3000 K (Chase, 1998). The equilibrium constant for the CO2–H2–CH4 reaction is taken from the IVTANTHERMO database fit for temperatures between 300 and 2000 K (Schaefer & Fegley, 2017).
The most abundant redox-sensitive element in Earth’s mantle is iron, so the amount of oxygen that is free to participate in reactions will be approximately regulated by Gibbs free energy changes along the IW buffer during the magma ocean stage. Due to the presence of melt and the difficulty of determining the thermodynamic properties of a nonstoichiometric phase (wüstite), it is preferable to consider equilibrium of solid metallic Fe with liquid FeO (the IW buffer), where the Gibbs free energy of reaction constrains $f{\rm{O_{2}}}$ (O’Neill & Eggins, 2002): $\displaystyle\rm{Fe}+0.5\rm{O}_{2}=\rm{FeO},$ (7) $\displaystyle\log_{10}{f{\rm O_{2},IW}}=\frac{-244118+115.559T-8.474T\ln T}{0.5\ln(10)RT}.$ It is convenient to define oxygen fugacity in log10 units relative to the IW buffer ($\Delta{\rm IW}$): $\Delta{\rm IW}=\log_{10}f{\rm O_{2}}-\log_{10}f{\rm O_{2},IW}.$ (8) In our nominal models we set $\Delta{\rm IW}=0.5$ based on a recent determination of the inferred oxygen fugacity of Earth’s magma ocean at its surface using the Fe3+/Fe2+ ratio from a global compilation of peridotites (Sossi et al., 2020). Hence, by fixing $f{\rm O_{2}}$ relative to the IW buffer, H2O/H2 (Equation 4) and CO2/CO (Equation 5) are constrained as a function of temperature, and by extension CH4/CO2 (Equation 6). Therefore, while solubility controls the abundance of species in the atmosphere versus the interior, redox reactions dictate the relative abundance of oxidized to reduced species in the atmosphere. The thermodynamic coupling between the atmosphere and the interior is set by the surface temperature at the magma ocean–atmosphere interface. Appendices C.1 and C.3 describe how reactions are self-consistently incorporated into the time stepper that evolves the magma ocean–atmosphere system. 
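The chain from the $K_{\rm eq}$ fits through Equations 7 and 8 can be sketched numerically (our illustration, assuming $f{\rm O_{2}}$ in bars and the ideal-gas equivalence of fugacity and partial pressure adopted above):

```python
import math

R = 8.314  # J mol^-1 K^-1

def log10_fo2_iw(T_K):
    """Equation 7 (O'Neill & Eggins 2002): log10 fO2 of the IW buffer."""
    return (-244118.0 + 115.559 * T_K - 8.474 * T_K * math.log(T_K)) \
        / (0.5 * math.log(10.0) * R * T_K)

def fugacity_ratio(log10_keq, T_K, delta_iw):
    """Oxidized/reduced fugacity ratio, e.g. H2O/H2 = Keq * fO2^(1/2) from Eq. 4."""
    log10_fo2 = log10_fo2_iw(T_K) + delta_iw  # Equation 8
    return 10.0 ** (log10_keq + 0.5 * log10_fo2)

def h2o_over_h2(T_K, delta_iw=0.5):
    return fugacity_ratio(13152.0 / T_K - 3.039, T_K, delta_iw)  # Equation 4

def co2_over_co(T_K, delta_iw=0.5):
    return fugacity_ratio(14468.0 / T_K - 4.348, T_K, delta_iw)  # Equation 5

# At Delta IW = 0.5 the atmosphere is CO-dominated at 2700 K, and both couples
# shift toward the oxidized species as the surface cools to 1650 K.
for T in (2700.0, 1650.0):
    print(T, h2o_over_h2(T), co2_over_co(T))
```

This reproduces the qualitative behavior described in the text: solubility sets how much of each element reaches the atmosphere, while the $f{\rm O_{2}}$-controlled ratios set its speciation once there.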
### 2.4 Atmospheric Escape

Atmospheric escape from terrestrial planets is challenging to parameterize given the complex interaction of thermal and nonthermal processes, as well as uncertainty regarding the history and details of the stellar environment and planetary magnetic field (e.g., Gronoff et al., 2020). In the context of this study, we concern ourselves with the impact of H2 loss from the atmosphere on the evolution of the magma ocean–atmosphere system. We focus on hydrogen since it is the lightest atmospheric component and therefore the most prone to escape. Escape of H2 may be buffered by outgassing from the interior, resulting in efficient depletion of hydrogen from the magma ocean. We apply a model that obeys an upper limit for H2 loss based on the diffusion rate of H2 through a hydrostatic carbon-dominated atmosphere (Zahnle et al., 2019). Below this limit, the escape flux follows the energy limit, which accounts for the stellar energy input necessary for a volatile to escape Earth's gravitational field. Therefore, the H2 escape flux is (Zahnle et al., 2019)

$\phi_{\rm H_{2}}\approx\frac{\gamma 10^{16}\rm{[VMR]}_{\rm H_{2}}S}{N_{A}\sqrt{1+0.006S^{2}}}\qquad\text{moles}\ {\rm m}^{-2}\ {\rm s}^{-1},$ (9)

where $N_{A}$ is the Avogadro constant and ${\rm[VMR]}_{\rm H_{2}}$ is the volume mixing ratio of H2, which couples the evolving atmospheric speciation to H2 escape. Here $\gamma$ is a scaling factor (unity by default), and $S$ is the solar irradiation normalized to Earth's present-day value:

$S(t)=F_{\rm XUV}/F_{\rm XUV\odot},$ (10)

where XUV combines the influence of X-ray, extreme ultraviolet, and far ultraviolet radiation. We adopt an upper estimate of $S=40$ at 4.0 Ga (Tu et al., 2015). For ${\rm[VMR]}_{\rm H_{2}}$ of 30% and $\gamma=1$, the H2 mass-loss rate is 2.1$\times 10^{5}$ kg s$^{-1}$ (Equation C4).
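A quick evaluation of Equation 9 with the quoted numbers (${\rm[VMR]}_{\rm H_{2}}=30\%$, $\gamma=1$, $S=40$) is sketched below; the conversion to a global mass-loss rate assumes Earth's radius and the molar mass of H2, so it is indicative only and does not reproduce the fuller treatment of Equation C4:

```python
import math

N_A = 6.022e23  # Avogadro constant, mol^-1

def h2_escape_flux(vmr_h2, S, gamma=1.0):
    """H2 escape flux in mol m^-2 s^-1 (Equation 9; Zahnle et al. 2019)."""
    return gamma * 1.0e16 * vmr_h2 * S / (N_A * math.sqrt(1.0 + 0.006 * S ** 2))

phi = h2_escape_flux(vmr_h2=0.3, S=40.0)  # values quoted in the text

# Back-of-envelope global rate (assumed here: Earth's radius and the
# molar mass of H2; Equation C4 carries additional prefactors).
R_EARTH = 6.371e6  # m
M_H2 = 2.016e-3    # kg mol^-1
rate = phi * M_H2 * 4.0 * math.pi * R_EARTH ** 2
print(f"flux ~ {phi:.2e} mol m^-2 s^-1, global rate ~ {rate:.1e} kg/s")
```

This simple area integral lands within a factor of a few of the 2.1$\times 10^{5}$ kg s$^{-1}$ quoted from Equation C4, and both $\gamma$ and ${\rm[VMR]}_{\rm H_{2}}$ enter linearly.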
Both the escape prefactor $\gamma$ (constant for a given case) and ${\rm[VMR]}_{\rm H_{2}}$ (time dependent) scale the mass-loss rate linearly.

## 3 Results

### 3.1 Outgassing of Hydrogen (One Earth Ocean)

Figure 2: Coupled interior and atmosphere evolution during magma ocean outgassing at $\Delta$IW=0.5 with a hydrogen budget of one Earth ocean (no carbon), where E1 shows equilibrium crystallization and F1 shows fractional crystallization. For E1 and F1: (a, d) Interior temperature with gray showing the mixed phase region where melt and solid coexist; (b, e) interior melt fraction. (c) Atmospheric partial pressure of H2 and H2O for E1 and F1, where the pressures of F1 (black lines) initially follow E1 (gray lines) before departing at late time. Colored vertical ticks and lines show times in panels (a), (b), (d), and (e), and labels show when a surface lid forms for E1 and F1. (f) Interior depletion of melt (‘Melt’) and hydrogen (‘Hydr.’) for E1 and F1.

We first present magma ocean outgassing models (Cases E1 and F1) where the total hydrogen (H) budget is equivalent to the mass of H in Earth’s present-day water ocean (hereinafter one ocean) (Figure 2). This budget is equivalent to a fully outgassed atmosphere of 270 bars for a single-species atmosphere consisting of H2O. These models allow us to describe H2 and H2O outgassing before subsequently introducing further complexity in the form of carbon species, redox variation, and atmospheric escape. The total budget of H is partitioned between a reduced (H2) and an oxidized (H2O) species according to the $f{\rm O_{2}}$ buffer set at $\Delta$IW=0.5, with an initial surface temperature of 2700 K. We recall that in our model H2 has zero solubility in the melt and so can only reside in the atmosphere. The only parameter difference between Case E1 (hereinafter E1) and Case F1 (hereinafter F1) is the crystal size, where $a=$ 1 mm for E1 and $a=$ 5 cm for F1.
Both of these crystal sizes are reasonable for a crystallizing magma ocean and demonstrate the end-member scenarios of equilibrium and fractional crystallization that arise depending on the efficiency of melt–solid separation, which scales according to the crystal size squared (Appendix B). Both E1 and F1 chart a similar thermal evolution when the melt fraction is high. Cooling proceeds quickly since most H remains in the magma ocean in the form of H2O; this keeps the atmospheric pressure low ($<$ 1 bar). The atmospheric composition (mole fraction of H2 compared to H2O) is determined by chemical equilibrium, where $f{\rm O_{2}}$ decreases with decreasing surface temperature (Equation 4) but parallels the IW buffer. Vigorous convection keeps crystals mostly in suspension, assuming efficient re-entrainment of crystals (Solomatov et al., 1993; Solomatov & Stevenson, 1993a), and the interior is approximately adiabatic. Due to the curvature of the melting curves and the two-phase adiabat, the deepest mantle has the lowest melt fraction and hence reaches the ‘rheological transition’ first. The rheological transition is defined by an abrupt increase in viscosity around 30–40% melt fraction, where the formation of an interconnected solid matrix begins to dictate the convective timescale. The ‘rheological front’ is the interface between melt-dominated dynamics above (lower pressure) and solid creep below (higher pressure), and it moves upward through the mantle as cooling proceeds. Hence, convection in the deep mantle becomes sluggish, akin to the present-day solid mantle, acting as a brake on deep mantle cooling (Andrault et al., 2016). Gravitational separation of melt and solid becomes a dominant driver of energy transport at the rheological transition (Abe, 1993). Hence, the E1 and F1 models exhibit different interior and outgassing evolution because of the timescale of melt–solid separation (percolation) compared to the net cooling rate.
E1 represents equilibrium crystallization, where the timescale for melt–solid separation is longer than the timescale for the advancement of the rheological front (Tonks & Melosh, 1990; Solomatov & Stevenson, 1993b). The rheological front rapidly advances through the mantle, causing the thermal profile to collapse on top of the rheological transition (Figure 2b). When it reaches the surface, it initiates lid formation since convection has become so sluggish that heat transport by convection to the surface cannot prevent top-down cooling. Lid formation brings about the end of interior–atmosphere dissolution equilibrium (‘E1 lid’ at 4.3 kyr; Figure 2). At this time, the mantle is around 30% molten since it remains pinned at the rheological transition because it has not had time to undergo any significant differentiation through melt–solid separation prior to the formation of a surface lid. Therefore, the mantle contains sufficient melt to trap the majority of H as dissolved H2O beneath the surface lid (Figure 2f). Hence, the subsequent interaction of these sequestered volatiles with the surface and atmosphere will be regulated by geological processes operating over long timescales (millions to billions of years), rather than by dissolution equilibrium with a comparatively short-lived magma ocean. By contrast, for F1 the melt percolation velocity keeps pace with the upward velocity of the rheological front. This enables the complete solidification of deep mantle layers below the rheological front by crystal settling, as well as stalling the upward progression of the rheological front. This occurs because efficient upward draining of melt keeps the upper regions molten at the expense of cooling and fully crystallizing the lowermost mantle (Figure 2e). Therefore, dissolution equilibrium with the atmosphere is maintained while crystals form and are displaced to depth.
Hence, F1 represents fractional crystallization, where formed solids deep in the mantle can be considered chemically isolated from the molten reservoir above. Below the rheological transition, efficient melt–solid separation causes the thermal profile to abruptly drop to the solidus, where now subsolidus cooling is greatly restricted by the viscous transport timescale and partly by the core that buffers the cooling of the mantle (Figure 2d). This ultimately enables a substantial reduction of the melt reservoir ($98\%$ crystallized) at a surface temperature of 1650 K compared to E1, where the crystal fraction only reaches 70% (Figure 2f). Due to the high solubility of H2O in melt, for F1 the additional $\approx 30\%$ crystallization compared to E1 further depletes H from the interior by $65\%$ before a surface lid forms. We note that fractional crystallization could also occur for E1 after the surface lid has formed, but in this case the melt reservoir is separated from the atmosphere by a surface lid, and therefore outgassing is not driven by dissolution equilibrium. The atmospheric pressures of H2O and H2 increase in unison during cooling (Figure 2c) because the relative abundance of these species is controlled by chemical equilibrium at $\Delta$IW=0.5 (Equation 4). Since H2O is highly soluble in silicate liquid, early in the lifetime of the magma ocean (prior to 1 kyr while melt fractions are high) the atmospheric mass remains mostly constant and the partial pressures are controlled by the temperature dependence of the $f{\rm O_{2}}$ buffer and equilibrium constant (Figure 2c, Equation 4). After 1 kyr, the extent to which H outgasses is proportional to the degree of crystallinity (i.e., continued cooling and crystallization) according to the lever rule. Despite the mantle reaching 70% crystal fraction for E1 before a surface lid forms, the atmospheric pressures of H2O and H2 only reach 4 and 2.4 bars, respectively (E1 lid, Figure 2c). 
For F1, the atmospheric pressures of H2 and H2O (and hence their depletion in the magma ocean) track E1 until around 2 kyr (gray lines, Figure 2c,f). At this time, efficient melt–solid separation enables growth of a more substantial H2O (80 bars) and H2 (50 bars) atmosphere prior to surface lid formation (‘F1 lid’, Figure 2c). It takes around 4.3 and 26 kyr for a surface lid to form for E1 and F1, respectively, which compares favorably to a timescale of 10 kyr determined by a model with a more advanced treatment of radiative transfer (Lichtenberg et al., 2021). When a surface lid forms, the atmospheric pressures of E1 and F1 differ by more than an order of magnitude.

### 3.2 Outgassing of Hydrogen (1–10 Earth Oceans)

Planet formation models predict substantial delivery of water to the inner solar system (Raymond & Izidoro, 2017; Raymond et al., 2007), which can account for the 1–10 oceans of water currently stored in Earth (Lécuyer et al., 1998). Recent geochemical estimates propose that around 2.5 oceans are currently stored in the mantle (Marty, 2012; Hirschmann, 2018). Therefore, we now present cases identical to E1 and F1 but with initial H budgets of up to 10 Earth oceans. Since the main features of the coupled evolution of the interior–atmosphere are described for E1 and F1 (Section 3.1), only differences relative to these fiducial cases are now presented. Figure 3 reveals the interior depletion of H as a percentage of the H inventory, where the circles on solid lines labeled ‘1’ correspond to E1 (Figure 3a) and F1 (Figure 3b). For both equilibrium and fractional crystallization, as the H inventory is increased from 1 to 10 oceans, the magma ocean lifetime increases by around two orders of magnitude, reaching a maximum duration slightly less than 1 Myr. This is because the optical depth of the atmosphere scales with pressure, and a larger inventory results in a larger outgassed atmosphere and hence a higher atmospheric pressure.
Therefore, due to the thicker atmosphere, the magma ocean takes longer to cool before the rheological front reaches the surface and initiates lid formation. In addition, the interior depletion of H increases for larger total inventories as a direct consequence of power-law solubility where the exponent $\beta=2$. This is because, during cooling, the atmospheric reservoir of H grows with the water mole fraction to the second power (see Appendix C.6 for analysis). Hence, for H outgassing, both the pressure of the atmosphere and the relative depletion of the interior increase with the total inventory at a given melt fraction. For a volatile species that obeys a linear solubility relation ($\beta=1$), its relative depletion at a given melt fraction is independent of its total inventory (Appendix C.6). Water is expected to transition to such a linear solubility above around 10 ocean masses on Earth owing to the prevalence of dissolved H2O.

Figure 3: Depletion of hydrogen (H) from the interior during magma ocean outgassing at $\Delta$IW=0.5 for H budgets from 1 to 10 Earth oceans: (a) equilibrium crystallization, and (b) fractional crystallization. Lines show constant melt fraction, and circles show the H budget. The magma ocean stage ends at around 30% melt for equilibrium crystallization and 2% melt for fractional crystallization. Cases E1 and F1 (Figure 2) correspond to 1 H ocean for equilibrium and fractional crystallization, respectively. Dashed lines in panel (a) with a ‘W’ suffix denote models that only include H as H2O, i.e., a completely oxidized mantle.

For equilibrium crystallization, lid formation occurs soon after 30% melt is reached (Figure 3a). For all equilibrium cases, melt–solid separation is slow compared to the cooling timescale, so the melt fraction at lid formation is only marginally lower for cases with long cooling timescales than for those with short ones.
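The inventory dependence discussed above can be illustrated with a toy mass balance (all constants are arbitrary and ours, not the paper's): a dissolved abundance $X$ (arbitrary units) in the melt is linked to the atmospheric reservoir by a schematic power-law solubility, $m_{\rm atm}\propto X^{\beta}$, and the partitioning is solved by bisection:

```python
def atmosphere_fraction(total, melt_frac, beta, k=1.0):
    """Toy partitioning: melt holds X * melt_frac, atmosphere holds k * X**beta.
    Returns the fraction of `total` residing in the atmosphere (bisection on X)."""
    lo, hi = 0.0, total / melt_frac
    for _ in range(200):
        X = 0.5 * (lo + hi)
        if X * melt_frac + k * X ** beta > total:
            hi = X
        else:
            lo = X
    return k * X ** beta / total

# beta = 1 (linear solubility): relative depletion is inventory independent
d_small = atmosphere_fraction(1.0, 0.3, beta=1)
d_large = atmosphere_fraction(10.0, 0.3, beta=1)

# beta = 2 (water-like): larger inventories are relatively more outgassed
e_small = atmosphere_fraction(1.0, 0.3, beta=2)
e_large = atmosphere_fraction(10.0, 0.3, beta=2)
print(d_small, d_large, e_small, e_large)
```

This mirrors the Appendix C.6 result: for $\beta=1$ the atmospheric fraction at a given melt fraction does not change with the total budget, whereas for $\beta=2$ it grows with the budget.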
At 30% melt, a maximum of around 23% of the total H budget is outgassed for 10 oceans, although this decreases to about 4% for 1 ocean (Figure 3a). For fractional crystallization, the melt fraction decreases below 30% until lid formation occurs around 2% melt. Compared to equilibrium crystallization, this extra depletion of melt drives outgassing of hydrogen to around 80% of its total budget (Figure 3b). Regardless of total inventory, depletion must tend to 100% when the melt fraction reaches zero because we assume that no volatiles are retained in solids. This explains the flattening of the depletion curve for 2% melt fraction compared to, for example, 10% melt fraction. The magma ocean lifetimes for $\Delta{\rm IW}=0.5$ cases (solid lines, Figure 3a) are comparable to cases that are fully oxidized, where H can only exist as water (W cases, dashed lines, Figure 3a). This is because for all cases H2O dominates the opacity of the atmosphere. However, H depletion of the $\Delta$IW=0.5 cases and that of the fully oxidized (W) cases differ by up to about 11%. This is because at $\Delta$IW=0.5 outgassing of H2O must be accompanied by commensurate growth of H2 in the atmosphere to maintain chemical equilibrium. Since the H budget is conserved, increasing H2 in the atmosphere necessitates a decrease in the H2O reservoir dissolved in the magma ocean. In short, H2O solubility determines H2O outgassing, but oxygen fugacity dictates the additional depletion of interior hydrogen to maintain equilibrium between H2 and H2O in the atmosphere.

### 3.3 Outgassing of Hydrogen and Carbon

Figure 4: Atmospheric composition during magma ocean outgassing at $\Delta$IW=0.5. Columns from left to right show hydrogen (H) budgets of 1, 3, and 10 oceans, respectively. Rows from top to bottom show C/H by weight of 0, 0.1, 1, and 5. Magma ocean lifetime corresponds to the termination of the colored region (at $\approx$ 2% melt for fractional crystallization, between $10^{4}$ and $10^{7}$ yr).
White dotted lines correspond to 30% melt, which indicates both the duration and atmospheric composition if instead the mantle underwent equilibrium crystallization. Initial partial pressures of volatile species ($p_{\rm init}$) are for a completely molten magma ocean and $T_{\rm surf}=2700$ K. Final partial pressures ($p_{\rm final}$) are for complete solidification and outgassing of all volatile species and $T_{\rm surf}=1400$ K. In contrast to the high solubility of water in silicate melt, carbon dioxide is relatively insoluble (e.g., Dixon et al., 1995). Hence, due to the high abundance of C in rocky planets, the presence of gaseous CO, CO2, and CH4 may regulate the lifetime of a magma ocean and thereby the timescale over which H outgasses. Carbon also suppresses the outgassing of H through its influence on the mean molar mass of the atmosphere (Bower et al., 2019). Therefore, we now present fractional crystallization cases that additionally include C species (CO2, CO, and CH4), with their relative abundances in the gas phase determined by redox reactions. The initial carbon-to-hydrogen ratio (C/H by weight) is varied from 0.1 to 5, which is a range compatible with C/H ratios in chondritic meteorites and in the bulk silicate Earth (1.1 and 1.4 by mass, respectively (Hirschmann, 2016)). We independently verified the results of our model by tabulating the final total pressure and composition (in terms of moles of H, C, and O) of select outgassed atmospheres calculated using our model (Appendix E). These were used as input parameters for the Equilib module of FactSage 8.0 (Bale et al., 2009), which calculates the equilibrium partial pressures of gases and stable condensed phases using a Gibbs free energy minimizer with a database of more than 40 gas species in the H-C-O system. For the sake of comparison, we assumed ideal gas behavior. 
Provided that graphite is not predicted to precipitate (Section 4.4), the agreement with our models, which utilize a comparatively simple chemical network and five gas species, is commendable; the partial pressures differ by at most a few percent (Table E). Figure 4 compares the influence of the C/H ratio on magma ocean outgassing for an H budget of 1, 3, and 10 oceans. Cases F1 (Figure 4a), F3 (Figure 4b), and F10 (Figure 4c) serve as carbon-free reference cases. We first focus attention on increasing C/H for an H budget of one ocean (left column, Figure 4). Even for an addition of only 10% C by weight (i.e., C/H=0.1), the atmosphere is dominated by C species (almost 90% volume mixing ratio) for a fully molten mantle. This is due to the low solubility of CO2 compared to H2O, coupled with equilibrium chemistry that establishes the partial pressure of CO around a factor of seven greater than the partial pressure of CO2. During magma ocean cooling, the CO pressure in the atmosphere steadily decreases. This occurs as a result of the compounding effects of the equilibrium constant of Equation 5, which drives a decrease of $f{\rm CO}/f{\rm CO_{2}}$ as the surface temperature decreases, and the continued outgassing of CO2 as the melt fraction decreases. The latter also drives an initial increase in CO2 pressure, although the CO2 pressure decreases later once H begins to outgas in earnest. Hence, even though the ever-decreasing melt fraction drives outgassing of both H and C throughout the magma ocean evolution, the partial pressure of volatile species can either mostly decrease (CO), mostly increase (H2, H2O), or increase and then decrease (CO2) (Figure 5).

Figure 5: Atmospheric pressures during magma ocean outgassing for different C/H and $f{\rm O_{2}}$. Columns from left to right show hydrogen (H) budgets of 1, 3, and 10 oceans, respectively. Rows from top to bottom show $f{\rm O_{2}}$ of $\Delta$IW=$-2$, $\Delta$IW=0.5, and $\Delta$IW=4.
Colors denote volatiles (red: CO; green: CO2; orange: H2; blue: H2O; purple: CH4), where solid lines show C/H=1 and dashed lines show C/H=5.

For an H budget of one ocean, the initial pressures of H2 and H2O are 0.5 and 0.4 bars, respectively, regardless of C/H. However, at higher initial C/H ratios, the H species constitute a smaller fraction (volume mixing ratio) of the atmosphere owing to the relative insolubility of C-bearing species compared to H2O. For C/H=5, hydrogen species constitute less than 1% of the atmosphere during the early evolution of the magma ocean ($>$30% melt). For this case, the mixing ratio of H species increases by a factor of 230 as the magma ocean cools from fully molten to fully solid (Figure 4j). In contrast, for C/H=0.1 the mixing ratio of H species only increases by a factor of about nine (Figure 4d). Even though the total inventory of H is fixed, the final outgassed pressures of H2 and H2O depend on C/H, increasing from 62 and 133 bars (C/H=0) to 102 and 221 bars (C/H=5), respectively. White dotted lines in Figure 4 denote a melt fraction of 30%, which corresponds to when a surface lid forms for equilibrium crystallization. Hence, for equilibrium crystallization, outgassed atmospheres are usually more dominated by C species versus H species. For C/H=0.1, the mixing ratio of H species increases by about a factor of two from 30% melt to 2% melt (Figure 4d). In contrast, for C/H=5, H species increase from about 1% to more than 50% over the same range of melt fraction (Figure 4j).

Figure 6: Depletion of hydrogen (H) versus melt fraction at $\Delta$IW=0.5 for H budgets of 1, 3, and 10 oceans and different C/H by weight. For each shaded region, the upper bound is C/H=0, the lower bound is C/H=5, and the black line is C/H=1. Symbols indicate times of C/H=1 cases near 20% melt and 5% melt.
For an inventory of three oceans (middle column, Figure 4), a larger fraction of the atmosphere consists of H species during the evolution compared to an inventory of one ocean. This is a direct consequence of the power-law solubility of H2O and the higher amount of H (Appendix C.6). When no melt remains, as at the end of fractional crystallization, the entire initial inventory of H and C sets the atmospheric composition, where equilibrium chemistry continues to dictate the partitioning between reducing and oxidizing species. Therefore, for fixed C/H at the end of fractional crystallization, the volume mixing ratios are also set. If CH4 is insignificant, the final partial pressures ($p_{\rm final}$) scale according to the initial inventory; for example, the final pressures for C/H=0.1 are a factor of three larger for a three-ocean inventory compared to a one-ocean inventory (compare $p_{\rm final}$ in Figure 4d,e). For large H inventories, the volume mixing ratio of H species changes less during outgassing because initially more H (compared to C) resides in the atmosphere owing to the power-law solubility of H2O. The case with a three-ocean budget and C/H=1 (Figures 4h and 5e) is similar to the bulk silicate Earth calculated in Sossi et al. (2020). At high temperatures, a CO-dominated atmosphere forms with $\sim$200 bars of C-bearing species and $\sim$10 bars of H-bearing species, in which $f{\rm H_{2}}$ and $f{\rm H_{2}O}$ are subequal, while $f{\rm CO}>f{\rm CO_{2}}$.

Figure 7: Depletion of hydrogen (H) versus time at $\Delta$IW=0.5 for different C/H, where colored lines show constant melt fraction. Dashed lines show C/H=0 for H budgets from 1 to 10 Earth oceans; these are the solid lines in Figure 3b. Departing from the dashed lines, circles show 3 oceans and C/H=X, where X is 0, 1, or 5. Similarly, rectangles show 10 oceans and C/H=X, where X is 0, 1, or 5.

Figure 6 summarizes the interior depletion of H for 1–10 oceans and C/H by weight varying from zero to five.
To first order, depletion at a given melt fraction is dictated by the total inventory of H due to the power-law solubility of H2O. For 10% melt, this gives rise to around a 40% increase in depletion as the H inventory increases from 1 to 10 oceans. A secondary influence on depletion is C/H, where higher C/H suppresses the outgassing and hence depletion of H. This effect is not connected to the solubility of H2O, but rather is due to the dominant presence of C species in the atmosphere that increases the mean molar mass of the atmosphere relative to H species (Appendix C.1). Figure 7 reveals the influence of C/H on the depletion during magma ocean outgassing and can be compared with Figure 3. For a fixed H inventory, increasing C/H prolongs the time taken to reach a given melt fraction since the optical depth of the atmosphere increases. Increasing C/H increases the volume mixing ratio of C species in the atmosphere, and this alone would actually decrease the opacity of the atmosphere since CO has the lowest opacity of all the considered volatiles (Appendix A). However, increasing C/H also increases the total surface pressure substantially because CO2 is relatively insoluble compared to H2O. Hence, the optical depth increases in response to the larger surface pressure, and this increase predominates over the change in speciation; the net outcome is prolonged cooling for larger C/H. Figure 7 further demonstrates the trend of Figure 6, in which increasing C/H suppresses the outgassing and hence interior depletion of H at a given melt fraction.

### 3.4 Oxygen Fugacity

Figure 8: Atmospheric composition during magma ocean outgassing for C/H=1 and different $f{\rm O_{2}}$. Columns from left to right show hydrogen (H) budgets of 1, 3, and 10 oceans, respectively. Rows from top to bottom show $f{\rm O_{2}}$ of $\Delta$IW=$-2$, $\Delta$IW=0.5, $\Delta$IW=2, and $\Delta$IW=4.
Magma ocean lifetime corresponds to the termination of the colored region (at $\approx$ 2% melt for fractional crystallization, between $10^{4}$ and $10^{7}$ yr). White dotted lines correspond to 30% melt, which indicates both the duration and atmospheric composition if instead the mantle underwent equilibrium crystallization. Initial volatile partial pressures ($p_{\rm init}$) are for a completely molten magma ocean and $T_{\rm surf}=2700$ K. Final partial pressures ($p_{\rm final}$) are for complete solidification and outgassing of all volatiles and $T_{\rm surf}=1400$ K. Figure 9: Depletion of hydrogen (H) vs. melt fraction for H budgets of 1 and 10 oceans and different $f{\rm O_{2}}$. For each shaded region, the upper bound is $\Delta$IW=$-2$, lower bound is $\Delta$IW=4, and black line (same as Figure 6) is $\Delta$IW=0.5. Symbols indicate times of $\Delta$IW=0.5 cases near 20% and 5% melt. Figure 10: Depletion of hydrogen (H) vs. time for different $f{\rm O_{2}}$, where colored lines show constant melt fraction. Dashed lines show $\Delta$IW=0.5 and C/H=0 for H budgets from 1 to 10 Earth oceans; these are the solid lines in Figure 3b. Departing from the dashed lines, circles show 3 oceans and C/H=1 for $\Delta$IW=X, where X is $-2$, 0.5, or 4. Similarly, rectangles show 10 oceans and C/H=1 for $\Delta$IW=X, where X is $-2$, 0.5, or 4. Iron-core formation established the deep Earth’s redox state at $\Delta$IW=$-2$ whereas the present-day upper mantle redox is $\Delta$IW=4. This indicates that mantle redox can vary with both time (e.g., Scaillet & Gaillard, 2011) and pressure (Armstrong et al., 2019), potentially giving rise to atmospheres that are more reduced or oxidized than our nominal cases at $\Delta$IW=0.5 (e.g., Hirschmann, 2012). Hence, we supplement our calculations at $\Delta$IW=0.5 by additionally considering outgassing scenarios at $\Delta$IW=$-2$, $\Delta$IW=2, and $\Delta$IW=4. 
Figure 8 summarizes the results for C/H=1, and detailed figures for all C/H cases at each $f{\rm O_{2}}$ are presented in Appendix D. Increasing the oxygen fugacity from $\Delta$IW=0.5 to $\Delta$IW=4 progressively increases the ratio of oxidized to reduced species in the atmosphere at a given temperature. Since oxidized species of C and H have a higher molar mass than their reduced counterparts, this also increases the mean molar mass of the atmosphere, which influences partial pressures through mass balance (e.g., Equation C1). Cooling times are generally extended for oxidized versus reduced atmospheres, due to the higher surface pressure and the intrinsically higher opacity of oxidized species (Appendix A). However, for $\Delta$IW=$-2$ and C/H=5, production of CH4 increases the cooling time to be comparable to that of an oxidized atmosphere dominated by CO2 and H2O. Oxidized interiors mitigate the interior depletion of H relative to reducing conditions (e.g., compare $\Delta$IW=$-2$ and $\Delta$IW=4 in Figures 9 and 10). At 30% melt, H depletion is around a factor of 10 larger for $\Delta$IW=$-2$ compared to $\Delta$IW=4. At 2% melt, H depletion at $\Delta$IW=$-2$ is approximately 20% larger than at $\Delta$IW=4; we recall that depletion for all cases must reach 100% when no melt remains. Depletion of H is largest for $\Delta$IW=$-2$ because H exists mainly as the less soluble H2 and CH4 in the gas, so only a small quantity of H is dissolved in the melt reservoir as H2O. Whereas increasing C/H mitigates H depletion and extends cooling times, decreasing $f{\rm O_{2}}$ greatly enhances H depletion (i.e., outgassing) and generally reduces cooling times. For C/H=1, early atmospheres are almost always dominated by carbon species, as either CO (reduced cases) or CO2 (oxidized cases). For 10 H oceans and $\Delta$IW=$-2$, the early atmosphere is instead dominated by H2.
Even for the most oxidized scenario when $\Delta$IW=4, H2O is never the dominant species in the atmosphere until the melt fraction drops below about 20% or even less. Hence, magma oceans that undergo fractional crystallization are more likely to produce oxidized and H2O-rich atmospheres versus equilibrium crystallization, which produces reduced and often CO-rich atmospheres.

### 3.5 Atmospheric Escape of H During Outgassing

Figure 11: Comparison of atmospheric composition and interior depletion at $\Delta$IW=0.5 for (1) C/H=0.1, three H oceans, and $\gamma=1$; (2) C/H=1, three H oceans, and $\gamma=1$; (3) C/H=0.1, one H ocean, and $\gamma=1$; and (4) ‘large escape’ with C/H=0.1, three H oceans, and an escape prefactor $\gamma=1000$ (Equation 9). (a, b) H2 partial pressure and volume mixing ratio; (d, e) CO partial pressure and volume mixing ratio; (c, f) interior depletion of H and C, respectively.

During the magma ocean stage, escape of H2, the lightest atmospheric component, could impact the reservoir evolution of volatiles, so we explore this possibility in Figure 11. The reference case has C/H=0.1, three H oceans, and a unity escape prefactor ($\gamma=1$, Equation 9). However, an equivalent case with no escape is visually indistinguishable from $\gamma=1$. This immediately demonstrates that H2 escape due to irradiation of a carbon–hydrogen atmosphere does not appreciably alter the volatile reservoirs during the magma ocean stage for an Earth-sized body at 1 AU, largely because of the magma ocean’s short lifetime ($\sim 10^{5}$ yr). Even for a larger H budget of 10 oceans where cooling is relatively prolonged, $\gamma\lesssim 100$ has no appreciable impact on the evolution of the volatile reservoirs. Therefore, we show a case in which escape is increased by setting $\gamma=1000$ (‘large escape’) to investigate an extreme end-member escape scenario, which may arise, for example, as a result of tidal effects.
For comparison, $\gamma=1000$ gives a mass-loss rate of around $2.1\times 10^{8}$ kg s$^{-1}$, which is within the range of rates determined for volatile stripping of the proto-Moon by Earth (Charnoz et al., 2021) or for a Mars-sized body near 1 AU undergoing EUV-driven thermal escape (Benedikt et al., 2020). For large escape, the inventory of hydrogen decreases by almost 50% over the lifetime of the magma ocean of around $10^{5}$ yr. By comparison, the magma ocean lifetime for an equivalent case without escape is around $1.5\times 10^{5}$ yr. Early on (equivalently, at times when melt $>$40%), the H2 partial pressure of all cases with three H oceans follows a comparable trajectory. At later times (equivalently, $<$40% melt), the cases with three oceans and $\gamma=1$ continue to track a similar trend, although the final H2 partial pressure depends on the initial C/H (e.g., Figure 4e,h). For a high escape rate, however, the growth of H2 in the atmosphere is mitigated by escape such that the H2 partial pressure is reduced compared to the cases with an unchanging H inventory of three oceans. Nevertheless, equilibrium chemistry continues to dictate $f{\rm H_{2}}/f{\rm H_{2}O}$, so only the magnitudes of the partial pressures of H species are modulated by escape. The buffering of the atmosphere by the magma ocean presupposes that the loss of H2 is not sufficient to influence the redox state of the magma ocean itself. In practice, the preferential loss of H2 promotes oxidation of the residual mantle (e.g., Olson & Sharp, 2019). The precise amount depends on the abundance of H, but the net effect is to self-arrest the loss of H as H2. This is because, as escape proceeds, necessarily more H2 is converted to H2O, given that O-bearing species are heavier and thus escape less readily than H2. Since for large escape almost 50% of the initial three-ocean inventory is lost, $f{\rm H_{2}}$ is steered toward the final pressure (at 0% melt) of the one-ocean case, which has the same C/H=0.1.
For fixed C/H=0.1, the volume mixing ratio of H2 is the same at 0% melt regardless of the final H inventory. However, for large escape, H is lost relative to C, and hence the H2 volume mixing ratio at the end of the magma ocean stage is slightly reduced compared to cases with unchanging C/H=0.1. Atmospheric escape modulates two controls on outgassing during magma ocean cooling that we have previously investigated: (1) escape decreases the inventory of H and hence modulates the partitioning of H between the interior and atmosphere when melt is present according to the power-law solubility of H2O, and (2) escape increases C/H, resulting in a final atmosphere richer in C compared to H. For example, the loss of H2 for large escape increases the volume mixing ratio of CO at later times compared to C/H=0.1. This is consistent with our previous models, which show a larger contribution of carbon species to the atmosphere for a larger C/H. For large escape, C/H by weight increases from 0.1 initially to 0.2 at the end of the magma ocean stage. For minimal escape, decreasing the H inventory and/or increasing C/H suppresses the interior depletion of H at a given melt fraction, relative to the total inventory (Figure 6). However, for large escape, the depletion of H is enhanced relative to other cases (Figure 11). This demonstrates that the direct loss of H due to escape is a stronger control on its depletion in the magma ocean than the secondary effects on its solubility (through the H inventory) and the suppression of H outgassing due to a heavier, carbon-rich atmosphere. In short, H2 escape driven by irradiation, given best estimates for the XUV flux of the young Sun, has an insignificant impact on the volatile reservoir evolution of an Earth-sized body at 1 AU during the magma ocean stage. However, more extreme escape scenarios can reduce the partial pressure of H species in the atmosphere and drive faster depletion of the interior H reservoir. 
## 4 Discussion ### 4.1 Atmospheric Composition and Evolution Figure 12: Summary of atmospheric composition and hydrogen depletion of the interior during magma ocean cooling at 100%, 30%, and 0% melt, where the surface temperature is given below the melt fraction. The transition from carbon-dominated to hydrogen-dominated atmospheres depends on surface lid formation and the style of magma ocean crystallization, which may prevent or hinder interior outgassing below 30% melt fraction. For a redox state inferred for the early Earth, the earliest outgassed atmospheres of terrestrial planets are dominantly CO-rich by volume mixing ratio when the mantle is mostly molten (Figure 12). This is borne out by the equilibrium chemistry of CO–CO2 and the low solubility of C species compared to H species. With C present, H species can in principle constitute more than 50% of the atmosphere if the H budget is greater than about 6 Earth oceans owing to the power-law solubility of H2O. However, this is only possible if C/H by weight is low—around an order of magnitude less than for the bulk silicate Earth. As C/H increases to unity (Earth-like) and beyond, the atmosphere is CO-rich regardless of the H budget. Hence, only for C/H$\lesssim 0.1$ is an H-rich atmosphere dominant during the early cooling phase, when most of the mantle is molten. Throughout the magma ocean stage, H2 and H2O have roughly constant mixing ratios owing to the weak temperature dependence of the equilibrium constant (Equation 4), while $f{\rm H_{2}}/f{\rm H_{2}O}$ is approximately unity at $\Delta$IW=0.5 at magmatic temperatures. For C/H$\gtrsim$1, H species can only begin outgassing in earnest once the melt fraction drops below about 30%, regardless of the redox state. 
However, this is contingent on the surface remaining molten to ensure equilibrium between the interior and atmosphere; this is less likely if the magma ocean undergoes equilibrium crystallization where crystals and melt freeze together and a surface lid forms quickly. In comparison, fractional crystallization—driven by melt–solid separation—can more likely maintain a molten surface while the deeper mantle crystallizes and exsolves volatiles. Hence, the transition from a C-rich to H-rich atmosphere depends on the style of crystallization, and crucially the behavior of the mantle as it transitions from mostly molten to mostly solid. A large steam-dominated atmosphere can only form for a relatively oxidized mantle toward the end of mantle crystallization, although it may persist for a longer duration than the early carbon-rich atmosphere in the absence of an efficient loss mechanism. An atmosphere that contains initially more H species by volume mixing ratio (C/H$\lesssim 0.1$ and particularly for H inventories $>$ 6 oceans) undergoes less modification during outgassing to reach its final volatile mixing ratios when the mantle has fully crystallized. The chemical boundary layer at the surface of the magma ocean facilitates interior–atmosphere equilibrium, where the equilibrium timescale is rapid relative to a crystallization timescale of around a million years (Pahlevan et al., 2019). For low $f{\rm O_{2}}$ or small atmospheres, the crystallization timescale decreases by no more than two orders of magnitude ($\sim 10^{4}$ yr) and this decrease would be further mitigated by a greater H2 greenhouse forcing (Lichtenberg et al., 2021). Other mechanisms to facilitate equilibrium, such as bubble formation due to volatile supersaturation (Elkins-Tanton, 2008), would also reduce the equilibrium timescale. Nevertheless, if crystallization proceeds too rapidly compared to the equilibrium timescale, then disequilibrium chemistry may arise between the interior and atmosphere. 
### 4.2 Hydrogen in the Interior For otherwise highly soluble volatiles, such as H2O, to exist in the atmosphere, the magma ocean must have crystallized below 30% melt fraction before a surface lid could form and persist at the surface. Otherwise, thermodynamic communication between the interior and atmosphere is broken, and dissolved volatiles remain trapped in the mantle. Below 30% melt fraction, continued outgassing of dissolved volatiles requires that significant solidification occurs through melt percolation and solid compaction before the rheological front reaches the surface or a quench crust forms (fractional crystallization). In an end-member scenario, this can enable near-complete outgassing of all volatiles, which has received the most attention in the literature. At a given melt fraction, we find that a larger fraction of H can be retained in the interior for smaller total inventories of H, larger C/H by weight, and more oxidized interiors. Although C does not influence H solubility directly, it impacts H retention since it suppresses H outgassing through its influence on the mean molar mass of the atmosphere (Bower et al., 2019). Equilibrium crystallization results in a substantial reservoir of melt and hence volatiles trapped in the interior when the surface forms a lid. This latter possibility has received less attention in the context of volatile evolution during the magma ocean stage, but it provides a mechanism to safely harbor a significant quantity of volatiles in the mantle to protect against loss from atmospheric escape and impacts. Hence, the high solubility of H2O in silicate melt may be a crucial property of this life-supporting molecule that enables it to survive during the violent early years of a terrestrial planet’s life. 
Volatile retention would be further enhanced if a melt–crystal density crossover enabled the formation of a voluminous basal magma ocean (Labrosse et al., 2007; Caracas et al., 2019) or if melt is captured as the rheological front advances through the mantle (Hier-Majumder & Hirschmann, 2017). Trapped melt, and hence soluble volatiles, could then be sequestered in the deep mantle by Rayleigh-Taylor instability (Maurice et al., 2017; Miyazaki & Korenaga, 2021). Moreover, we assume no solubility of H in crystallizing minerals, whereas experimental data indicate significant quantities of water may be stored in ringwoodite (Fei & Katsura, 2020). In short, there are additional processes not included in our models that further conspire against the complete outgassing of soluble volatiles, namely, water, during the magma ocean stage. Furthermore, estimates of the volatile concentration in the bulk silicate Earth are incompatible with complete outgassing (e.g., Hier-Majumder & Hirschmann, 2017). Therefore, complete outgassing of soluble volatiles—frequently an outcome of magma ocean models—only occurs with fractional crystallization if the aforementioned processes are absent or inefficient. Furthermore, it requires that a surface lid or quench crust, if present, does not hinder volatile outgassing as fractional crystallization proceeds. Hence, at the end of the magma ocean stage it is reasonable to expect that a trapped reservoir of H2O in the interior will interact with the surface environment and atmosphere. This could occur owing to post-magma ocean cumulate overturn or owing to processes operating over geological timescales (millions to billions of years). The detailed chemical and physical processes governing the formation and sustenance of cumulate layers and a surface lid in a dynamic magma ocean require further investigation; external influences such as projectile bombardment could also stifle lid formation (e.g., Perera et al., 2018). 
For an Earth-like planet orbiting a Sun-like star at 1 AU during its magma ocean stage, we find that atmospheric escape of H2 due to irradiation (energy and diffusion limited) does not significantly impact the evolution of volatile reservoirs, in agreement with Hamano et al. (2013) and Katyal et al. (2020). This is because escape rates are sufficiently low and the magma ocean duration is sufficiently short (at most a few million years). In future work, a feedback to probe is that loss of H2 increases $f{\rm O_{2}}$ slightly, causing more H2 to convert to H2O and thereby partly self-arresting the loss process. Furthermore, we have not considered photodissociation of H2O, but this could interplay with the included geochemical reactions that depend on mantle redox. ### 4.3 Crystallization Style Based on energetic considerations of convection versus gravitational settling, Solomatov & Stevenson (1993a) propose a critical crystal size of 1 cm above which fractional crystallization is inevitable. Furthermore, they show that the critical crystal size is weakly dependent on crystal fraction and therefore depth. Hence, their results are compatible with our fractional crystallization models with a constant crystal size of 5 cm. In detail, gravitational settling in our models becomes dominant at the rheological transition owing to the substantial decrease of the turbulence that had kept crystals mostly in suspension. It is reasonable to assume that crystals are suspended at high melt fraction (Tonks & Melosh, 1990), although they could begin to settle and thereby initiate fractional crystallization before the rheological transition is reached (Patočka et al., 2020), assuming inefficient re-entrainment (Solomatov et al., 1993) or a high planetary rotation rate (Maas & Hansen, 2019). Settling also depends on mineral buoyancy, and if the minerals float rather than sink, this could hinder interior–atmosphere communication. 
For example, preferential partitioning of Fe into melt during crystallization continuously changes the level of neutral buoyancy between crystals and melt (Caracas et al., 2019). In contrast to fractional crystallization, equilibrium crystallization prevents differentiation of the mantle and occurs for a smaller crystal size of 1 mm (Solomatov & Stevenson, 1993b). Most previous dynamic models of magma ocean cooling do not consider melt–solid separation (Lebrun et al., 2013; Hamano et al., 2013; Schaefer et al., 2016; Nikolaou et al., 2019); rather, the interior is assumed to always be adiabatic. Yet some of these previous models report agreement with the solidification timescale derived from geochemical models that explicitly consider fractional crystallization. The previous models implicitly assume that although the deep mantle reaches the rheological transition first, it continues cooling as efficiently by convection as the uppermost mantle; this allows the melt fraction to continue decreasing at a similar rate. However, this is unlikely given the rheological behavior of a melt–solid aggregate (e.g., Costa et al., 2009), which at the rheological transition predicts a large reduction in the convective velocity and rapid melt–solid separation (Abe, 1993). Hence, somewhat coincidentally, previous models decrease the melt reservoir and outgas volatiles at a similar rate to geochemical fractional crystallization models, even though the key ingredient to justify fractional crystallization (i.e., melt–solid separation) is not included. ### 4.4 Graphite, Diamond, and Water Precipitation Our model assumes that all volatile species dissolved at the surface of the magma ocean continue to remain so, irrespective of pressure (depth of the magma ocean) and temperature (cooling of the magma ocean). Phase transformations involving these volatile elements are likely to occur as pressure and temperature change (e.g., Hirschmann, 2012). 
However, experimental data for the speciation and partitioning of H-, C-, and O-bearing volatile species rarely exceed 7 GPa (Section 2.2), which precludes a holistic, bottom-up model of magma ocean crystallization. Nevertheless, we now systematically investigate possible phase transformations of H and C to identify scenarios under which the pressures of outgassed atmospheres could diverge from those calculated by our model. Reducing conditions may induce graphite or diamond precipitation (Hirschmann, 2012; Takahashi et al., 2013; Keppler & Golabek, 2019). The precipitation of graphite occurs if the CO fugacity of the atmosphere exceeds that defined by the graphite–carbon monoxide (CCO) buffer: $\rm{C(s)}+0.5\,\rm{O_{2}(g)}=\rm{CO(g)},\qquad\log_{10}{\rm K_{eq}}=\frac{6254}{T}+4.334,$ (11) where $\mathrm{K_{eq}}$ as a function of temperature is provided by the JANAF database (Chase, 1998). Equation 11 indicates that $\log_{10}f$CO in equilibrium with graphite scales as $0.5\log_{10}f$O2, so it decreases under more reducing conditions and, for $f$O2 fixed relative to the IW buffer, it increases with temperature. Thus, graphite precipitation is favored under more reducing conditions and lower temperatures (Figure 13). For our cases at $f\rm{O_{2}}$ of $\Delta$IW=0.5 or more oxidized, graphite precipitation would not occur because $f$CO never exceeds that defined by the CCO buffer. This suggests that it is unlikely that a terrestrial magma ocean was saturated in graphite at high melt fractions ($>30$%). However, graphite may precipitate from most atmospheres as the surface cools below the temperature at which a surface lid forms, around 1650 K (Sossi et al., 2020). At this stage, however, the atmosphere and mantle are not necessarily in equilibrium and the atmosphere can evolve as a near-closed system. 
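Equation 11 can be applied directly as a saturation test. The following minimal sketch implements the buffer; the temperature and $f$O2 values used in the test are illustrative, not drawn from a modeled case:

```python
import math

def log10_fco_ccob(T, log10_fo2):
    """log10 fCO (bar) buffered by graphite, from Equation 11.

    C(s) + 0.5 O2(g) = CO(g) with unit graphite activity gives
    log10 fCO = log10 Keq + 0.5 * log10 fO2.
    """
    log10_keq = 6254.0 / T + 4.334  # JANAF fit quoted in the text
    return log10_keq + 0.5 * log10_fo2

def graphite_saturated(fco_actual, T, log10_fo2):
    """Graphite precipitates when the actual fCO exceeds the buffered value."""
    return math.log10(fco_actual) > log10_fco_ccob(T, log10_fo2)
```

Because $f$O2 at fixed $\Delta$IW rises steeply with temperature, evaluating the buffer along an $f$O2(T) parameterization (not included here) reproduces the behavior shown in Figure 13.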
For a magma ocean more reduced than IW, the modeled cases at $f\rm{O_{2}}$ of $\Delta$IW=$-2$ and C/H=5 show that graphite precipitation would occur in a cooling atmosphere, even accounting for the production of CH4 that buffers $f$CO (Figure 13). Specifically, higher initial C budgets result in higher $f$CO, where, for C/H=5, the scenario with 10 oceans crosses the CCO buffer at $\sim$2300 K, while it does so at $\sim$2000 K for five oceans. As such, the quoted fugacities of carbon-bearing species for these two scenarios are upper limits. Cases at C/H=1 and $\Delta$IW=$-2$ do not result in graphite saturation at any temperature, irrespective of the number of oceans, owing to the lower $f$CO. Figure 13: The fugacity of CO in equilibrium with graphite (Equation 11) assuming ideal gas behavior as a function of temperature at $\Delta$IW=$-2$ (dashed black line) and $\Delta$IW=0.5 (dashed gray line). All cases at $\Delta$IW=$-2$ and C/H=5 (colored lines) intercept the CCO buffer at $\Delta$IW=$-2$, indicating graphite precipitation. Only cases that intercept the CCO buffer are plotted. We presume that the magma ocean is well mixed, such that it remains isochemical throughout its depth. However, the saturation of graphite in equilibrium with silicate melt depends on pressure and temperature (Dasgupta et al., 2013; Stanley et al., 2014; Chi et al., 2014). As such, chemical gradients could arise from its precipitation during magma ocean cooling and/or at depth followed by its isolation from the melt (Hirschmann, 2012). To assess the potential effect of graphite/diamond precipitation on the calculated fugacities of carbon-bearing species, we compare the C contents dissolved in the magma ocean predicted by our models with the graphite/diamond precipitation curve. Above the IW buffer, C dissolves predominantly as CO${}_{3}^{2-}$ in mafic and ultramafic melts (Armstrong et al., 2015; Duncan et al., 2017). Holloway et al. 
(1992) devised an expression to determine CO${}_{3}^{2-}$ solubility at graphite saturation. To calculate whether graphite is expected to precipitate from the silicate melt in our simulations, we adopt the calibration of Duncan et al. (2017) extrapolated to peridotite compositions. At constant pressure (1 GPa) and for the temperature range of cooling considered in our models (1650 to 2500 K), little or no C precipitation occurs in the magma ocean under any of the scenarios modeled at IW+0.5 and above. At 1650 K and 1 GPa, the model of Duncan et al. (2017) predicts that $\sim$350 ppmw CO2 is dissolved in a peridotite melt or 100 ppmw for a komatiitic melt (expected for an evolved magma ocean) at graphite saturation, compared to $\sim$300 ppmw in the melt in the most extreme, C/H=5, 10-ocean case. Below the IW buffer, CO${}_{3}^{2-}$ is no longer the prevailing melt species of dissolved carbon (instead, carbon likely occurs as some CO-bearing molecule, Wetzel et al., 2013; Dalou et al., 2019); therefore other expressions for the prediction of C solubility at graphite saturation need to be considered. Here we adopt the calibration of Yoshioka et al. (2019) that links CO dissolved in the melt to its fugacity, $f\rm{CO}$. We fix $f\rm{CO}$ as a function of pressure in equilibrium with graphite at IW$-2$ using thermodynamic data and the equation of state of Jakobsson & Oskarsson (1994). At 2 GPa, this yields an $f\rm{CO}$ of $\sim$8.3 GPa, which, using the expression of Yoshioka et al. (2019), leads to 500 ppmw of dissolved CO. These estimates far exceed those derived by solely considering CO${}_{3}^{2-}$ as the dissolved species ($\sim$2 ppmw, Duncan et al., 2017). Given that dissolved C contents reach 20 ppmw at most in our simulations at IW$-2$, this implies that graphite and diamond should not precipitate at any stage in the magma ocean, even under reducing conditions. 
Another important phase change can occur in cooling atmospheres, namely, that between liquid water and steam: ${\rm H_{2}O(l)}={\rm H_{2}O(g)}.$ (12) Its equilibrium constant as a function of temperature indicates that, at 1400 K, H2O condenses at about 600-bar $f$H2O. This phase transformation is neglected in our simulations primarily because these pressures and temperatures exceed the critical point of water, which is 650 K and 220 bars. Hence at 600-bar $f$H2O the atmosphere is a supercritical fluid rather than an ideal gas as modeled herein. Moreover, the mixing of CO2 and H2O is nonideal, such that departures from ideal gas behavior are expected in these solutions (Duan et al., 1992; Frost & Wood, 1997). ### 4.5 Meteoritic Degassing Theoretical calculations (Schaefer & Fegley, 2010) and outgassing experiments (Thompson et al., 2021) predict the composition of gases evolved from the degassing of meteoritic materials, some of which may be representative of planetary building blocks. By examining equilibrium between gas and condensed phases, thermodynamic models find that H2 and CO gases form from reduced chondrites (such as EH) at high temperature (Schaefer & Fegley, 2010). Only for the most oxidizing materials (CI chondritic composition) is H2O predicted to be a major constituent of the gas mixture. Since water is only an important gaseous species during degassing of one particular class of chondrite, this calls into question the applicability of steam atmospheres to the evolution of terrestrial planets at large. Although instructive, directly correlating gases produced from meteorites to the composition of planetary atmospheres is complicated for several reasons. 
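The regime argument above can be made explicit with a small check against the critical point quoted in the text (650 K, 220 bar); the function name and return strings are for illustration only:

```python
# Critical point of water, approximate values as quoted in the text.
T_CRIT = 650.0   # K
P_CRIT = 220.0   # bar

def h2o_regime(T, p_h2o):
    """Classify whether liquid-water condensation (Equation 12) can apply."""
    if T > T_CRIT:
        # No liquid phase exists above the critical temperature.
        return "supercritical fluid" if p_h2o > P_CRIT else "gas"
    return "gas or liquid (check the saturation curve)"
```

At the conditions discussed (1400 K, ~600 bar $f$H2O) the classifier returns "supercritical fluid", which is why the ideal-gas condensation treatment is not applied in the model.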
Meteorites are imperfect analogs of the material that went on to form the terrestrial planets (Sossi, 2021), particularly as regards volatile elements due to the ubiquitous alteration processes occurring on their parent body (Brearley, 2006), as well as thermal metamorphism and the likelihood that planetesimals are already differentiated when they accrete. More importantly, the likely presence of a deep magma ocean during and immediately after accretion would have modified the budgets of elements available to degas at the planetary surface (e.g., Grewal et al., 2020; Dalou et al., 2019). As shown in our work, equilibrium reactions between silicate melt and volatiles in the gas phase influence the partitioning of gas species between the atmosphere and magma ocean according to their solubility. Furthermore, some volatiles (including H and C) may be permanently sequestered into an iron core owing to their siderophile nature (e.g., Tagawa et al., 2021). Therefore, connecting gas mixtures to atmospheric composition necessitates an assumption that the starting bulk composition (usually derived from chondrites) adequately captures elemental abundances in the atmosphere during and after the magma ocean phase. This assumes that volatiles did not experience significant partitioning due to magma ocean and core formation processes and that atmosphere formation proceeds by the breakdown of solid mineral assemblages (i.e., degassing occurs subsolidus). Solid bodies cannot efficiently replenish surface material from deeper regions, whereas for molten bodies the entire melt mass can potentially communicate with the atmosphere, subject to the interior–atmosphere equilibration timescale (e.g., Pahlevan et al., 2019). Alternatively, partitioning can be permitted if ongoing impact-induced outgassing of chondritic materials after a surface lid or quench crust forms (e.g.,
in a late-veneer scenario or during the late heavy bombardment) is sufficient to displace or dilute any previously equilibrated atmosphere (Zahnle et al., 2020). Extraction of volatiles to solids in the core or mantle locks them away from outgassing to the atmosphere over a short timescale, and perhaps indefinitely. By contrast, volatiles partitioned in melt are readily available to outgas during mantle crystallization as long as the interior and atmosphere remain in thermodynamic communication (e.g., for fractional crystallization). Accounting for thermodynamic equilibrium with silicate melt, we demonstrate that the earliest atmosphere evolves in both mass and composition during magma ocean cooling. An atmosphere can transition from dominantly CO-rich ("reducing") to H2O-rich ("oxidizing") owing to solubility and redox reactions as the magma ocean cools (Figure 12). Therefore, the solubility of volatiles in silicate melt prevents drawing a simple connection between the composition of planetary building blocks and the earliest atmospheres of terrestrial planets. ## 5 Summary and Conclusions The investigation of catastrophic outgassing of CO2 and H2O from terrestrial magma oceans—as well as their influence on the nature of early atmospheres around rocky planets—was originally motivated by the degassing of hydrated minerals and oxidized chondritic materials. However, for redox conditions appropriate for the early Earth, we find that CO is usually the dominant atmospheric species during the early molten stage, where the melt fraction is greater than 30% (Figure 12). This is due to the relatively high solubility of hydrogen species (H2O) compared to carbon species (CO2), as well as redox reactions that govern CO–CO2, H2–H2O, and CO2–H2–CH4 equilibria. Only when C/H by weight is small (C/H$<$0.1) and the H budget large ($>$6 oceans) can H species constitute more than 50% of the atmosphere, with H2 and H2O in approximately equal abundance (Figure 4). 
For more oxidizing conditions, early atmospheres are dominated by CO2 with H2O a minor species (Figure 8). For the late molten stage ($<30\%$ melt), more H is retained (relative to the total budget) for smaller H budgets, larger C/H, and more oxidizing conditions (Figure 12). At this stage, continued outgassing driven by dissolution equilibrium requires that the surface remains molten and a persistent lid does not form; this requirement can be satisfied if a magma ocean undergoes fractional crystallization. However, for magma oceans that freeze more quickly by equilibrium crystallization, formation of a surface lid around 30% melt can prevent subsequent outgassing of highly soluble volatiles such as H2O. This would leave the atmosphere carbon (CO) dominated by stifling the formation of a steam atmosphere. Methane only forms for reduced conditions ($\Delta$IW$\lesssim$0.5) when pressures are sufficiently high or temperatures sufficiently low; otherwise, CH4 is absent. Graphite precipitation from the atmosphere during magma ocean cooling is expected only for very high C budgets (higher than those anticipated for Earth) and for $\Delta$IW$\lesssim$0.5. Nevertheless, for all cases, graphite in the atmosphere may precipitate with continued cooling. Complete outgassing of volatiles during magma ocean solidification can arise as a result of fractional crystallization. However, by additionally considering equilibrium crystallization and other processes, we expect that a substantial reservoir of H could remain in planetary mantles and outgas over geological timescales, potentially impacting the depth of surface oceans and the susceptibility of planets to desiccation. In this case, hydrogen species would only play a minor role in determining atmospheric opacity or modulating atmospheric escape during the early magma ocean; rather, the behavior of carbon species is dominant. 
The style of magma ocean crystallization—fractional or equilibrium—therefore controls the composition and mass of early atmospheres and the efficiency of volatile delivery to the planetary atmosphere and surface. Ultimately, the high solubility of H2O in magma oceans may enable its safe storage during the tumultuous phase of planet formation. ## 6 Data Availability All of the data generated as part of this study can be obtained by contacting Dan J. Bower. ## Acknowledgments DJB acknowledges Swiss National Science Foundation (SNSF) Ambizione Grant 173992. KH is supported by the European Research Council via Consolidator Grant ERC-2017-CoG-771620-EXOKLEIN. PAS acknowledges Swiss National Science Foundation (SNSF) Ambizione Grant 180025. PS acknowledges financial support from the Swiss University Conference and the Swiss Council of Federal Institutes of Technology through the Platform for Advanced Scientific Computing (PASC) program. This research was partly inspired by discussions and interactions within the framework of the National Center for Competence in Research (NCCR) PlanetS supported by the SNSF. The calculations were performed on UBELIX (http://www.id.unibe.ch/hpc), the HPC cluster at the University of Bern. We thank S. Grimm and D. Kitzmann for computing Rosseland mean opacities, and A. Wolf, M. Tian, and T. Lichtenberg for discussions and feedback on the manuscript. Three anonymous referees provided comments that further enhanced the manuscript. ## Appendix A Opacities We computed Rosseland mean opacities at 1 bar between 1700 and 2700 K for H2O, H2, CO2, CO, and CH4 using HELIOS-K (Grimm et al., 2021; Grimm & Heng, 2015). The opacity (Rosseland mean mass absorption coefficient) for H2O using the full BT2 list (Barber et al., 2006) is $1.5\times 10^{-2}$ m$^{2}$ kg$^{-1}$ at 1700 K and $5.0\times 10^{-4}$ m$^{2}$ kg$^{-1}$ at 2700 K; the opacity at 1800 K compares favorably with $10^{-2}$ m$^{2}$ kg$^{-1}$, which has been used extensively in previous magma ocean studies. 
For simplicity we also adopt a constant value (i.e., independent of temperature) of $10^{-2}$ m$^{2}$ kg$^{-1}$. The opacity of H2 is determined by collision-induced absorption, and using HITRAN (Karman et al., 2019; Abel et al., 2011) it is essentially constant ($5\times 10^{-5}$ m$^{2}$ kg$^{-1}$) from 1700 to 2700 K. For CO2, HITEMP (Rothman et al., 2010) provided the line list, and we determined an opacity of $1.5\times 10^{-4}$ m$^{2}$ kg$^{-1}$ at 1700 K and $3\times 10^{-5}$ m$^{2}$ kg$^{-1}$ at 2700 K. Hence the CO2 opacity at 1700 K is compatible with $10^{-4}$ m$^{2}$ kg$^{-1}$, which is the lowest opacity considered for CO2 in previous magma ocean models. To be consistent with our selection of the H2O opacity around 1800 K, we similarly adopt a constant CO2 opacity of $10^{-4}$ m$^{2}$ kg$^{-1}$. For CO we determined an opacity of $1.2\times 10^{-5}$ m$^{2}$ kg$^{-1}$ at 1700 K and $3\times 10^{-6}$ m$^{2}$ kg$^{-1}$ at 2700 K using HITEMP (Li et al., 2015a). Again, we selected an opacity of $10^{-5}$ m$^{2}$ kg$^{-1}$ based on the value at 1800 K. We computed a CH4 opacity of $10^{-2}$ m$^{2}$ kg$^{-1}$ at 1800 K based on the line lists of Yurchenko et al. (2013); Yurchenko & Tennyson (2014). The opacities of species at 1 bar are reference opacities that are used to compute a pressure-dependent opacity (Equation A22, Abe & Matsui, 1985). ## Appendix B Melt–Solid Separation The fluxes to describe energy transport in a planetary mantle (which is molten, partially molten, or solid) are described in detail in Bower et al. (2018); Abe (1995). However, we recap the physics underpinning melt–solid separation owing to its importance for determining whether a mantle undergoes fractional or equilibrium crystallization. 
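The adopted constant opacities can be collected into a simple lookup table. The mass-fraction-weighted mixing below is a simplifying assumption for illustration only: a rigorous Rosseland mean of a mixture must be computed from the combined wavelength-dependent absorption of all species, not by averaging the per-species means.

```python
# Adopted constant Rosseland mean reference opacities at 1 bar (m^2 kg^-1),
# taken from the values selected near 1800 K in the text.
KAPPA_REF = {"H2O": 1e-2, "H2": 5e-5, "CO2": 1e-4, "CO": 1e-5, "CH4": 1e-2}

def mean_opacity(mass_fractions):
    """Crude mass-fraction-weighted mean opacity of a mixture (assumption)."""
    return sum(KAPPA_REF[s] * x for s, x in mass_fractions.items())

# Example: a CO-dominated atmosphere with minor H2O; even a 10% H2O mass
# fraction dominates the mean because its opacity is ~1000x that of CO.
kappa = mean_opacity({"CO": 0.9, "H2O": 0.1})
```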
At high melt fraction, the settling or flotation of crystals within a magma ocean is determined by Stokes’s law: $u_{m}-u_{s}=\frac{2a^{2}g(\rho_{m}-\rho_{s})}{9\eta_{m}},$ (B1) where $u$ is velocity (positive radially outward), $a$ crystal size, $g$ gravity (negative), $\rho$ density, $\eta$ viscosity, and subscripts $m$ and $s$ denote melt and solid, respectively. Interaction amongst crystals is not considered, and therefore free settling or flotation of crystals is an upper limit on the efficiency of melt–solid separation at high melt fraction. At low melt fraction, separation occurs via Darcy flow: $u_{m}-u_{s}=\frac{k_{p}g(\rho_{m}-\rho_{s})}{\phi_{p}\eta_{m}},$ (B2) where $k_{p}$ is permeability and $\phi_{p}$ porosity (synonymous with the volume fraction of melt). The Rumpf–Gupte permeability law is appropriate at intermediate porosity (Rumpf & Gupte, 1971), $k_{rg}=\frac{a^{2}\phi_{p}^{5.5}}{1.4},$ (B3) and the Blake–Kozeny–Carman permeability law at low porosity, $k_{bkc}=\frac{a^{2}\phi_{p}^{3}}{1000(1-\phi_{p})^{2}}.$ (B4) The constants in these permeability laws are constrained by experiments (see McKenzie, 1984; Abe, 1995, for discussion). The flow laws can be represented by a single description (Abe, 1995): $u_{m}-u_{s}=\frac{a^{2}g(\rho_{m}-\rho_{s})}{\eta_{m}}f,$ (B5) $f=\begin{cases}\dfrac{2}{9}&\text{for }0.77\leq\phi_{p},\\ \dfrac{k_{rg}}{a^{2}\phi_{p}}&\text{for }0.08<\phi_{p}<0.77,\\ \dfrac{k_{bkc}}{a^{2}\phi_{p}}&\text{for }\phi_{p}\leq 0.08,\end{cases}$ (B6a–c) where the range of applicability of each flow law ensures that $f$ is a continuous function of porosity. Through consideration of the local barycentric velocity, the mass flux of melt is $J_{m}=\rho\phi(1-\phi)(u_{m}-u_{s}),$ (B7) where $\phi$ is melt fraction (the mass fraction of melt). Equation B7 naturally satisfies the requirement that $J_{m}=0$ for $\phi=0$ and $\phi=1$. 
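The piecewise flow law translates directly into code. This sketch implements Equations B5 and B6a–c in SI units; the material properties in the usage line (crystal size, densities, melt viscosity) are illustrative values, not parameters quoted from the model:

```python
def separation_prefactor(phi_p):
    """Dimensionless prefactor f of Equation B5, piecewise per Equations B6a-c."""
    if phi_p >= 0.77:                  # free Stokes settling (B6a)
        return 2.0 / 9.0
    if phi_p > 0.08:                   # Rumpf-Gupte Darcy flow (B6b): k_rg/(a^2 phi_p)
        return phi_p ** 4.5 / 1.4
    # Blake-Kozeny-Carman Darcy flow (B6c): k_bkc/(a^2 phi_p)
    return phi_p ** 2 / (1000.0 * (1.0 - phi_p) ** 2)

def relative_velocity(phi_p, a, g, rho_m, rho_s, eta_m):
    """u_m - u_s from Equation B5 (SI units; g is negative, per the text)."""
    return a ** 2 * g * (rho_m - rho_s) / eta_m * separation_prefactor(phi_p)

# With g < 0 and rho_m < rho_s, u_m - u_s > 0: melt drains upward.
u_rel = relative_velocity(0.9, a=0.05, g=-9.81, rho_m=2900.0, rho_s=3200.0, eta_m=0.1)
```

Note that the branch boundaries make $f$ only approximately continuous in practice: at $\phi_{p}=0.77$ the Rumpf–Gupte branch gives $\approx 0.220$ against the Stokes value $2/9\approx 0.222$.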
The energy flux of melt–solid separation is therefore $F_{\rm grav}=J_{m}T_{\rm fus}\Delta S_{\rm fus},$ (B8) where $T_{\rm fus}$ is the temperature at 50% melt fraction and $\Delta S_{\rm fus}$ is the entropy of fusion, both of which are functions of pressure according to the melting curves. Equation B5 reveals the common factors appearing in all of the flow laws and emphasizes the importance of crystal size owing to its squared dependence: larger crystals enhance melt–solid separation. Since we consider a single-component mantle (MgSiO3), everywhere $\rho_{m}-\rho_{s}<0$ and hence crystals always sink and melt always drains upward to the surface. ## Appendix C Magma ocean and atmosphere equilibrium ### C.1 Volatile mass balance and evolution We extended Bower et al. (2019) to express the mass balance for a volatile $v$ in a chemically reactive environment with atmospheric escape: $\displaystyle X_{v}k_{v}M^{s}+X_{v}M^{l}+4\pi R_{p}^{2}\left(\frac{\mu_{v}}{\bar{\mu}}\right)\frac{p_{v}}{|g|}+m_{v}^{e}+\sum_{w=1}^{r}m_{v}^{w}$ $\displaystyle=X_{v}^{0}M^{m},$ (C1) where: * • $X_{v}$ is the volatile abundance in the pure melt; * • $k_{v}$ is the distribution coefficient between solid and melt; * • $M^{s}$ is the mantle mass of solid; * • $M^{l}$ is the mantle mass of melt; * • $M^{m}=M^{s}+M^{l}$ is the total mantle mass, which is constant; * • $R_{p}$ is the planetary surface radius; * • $g$ is the surface gravity; * • $\mu_{v}$ is the molar mass of the volatile; * • $\bar{\mu}$ is the mean molar mass of the atmosphere; * • $p_{v}$ is the surface partial pressure of the volatile; and * • $X_{v}^{0}$ is the initial total volatile abundance relative to the total mantle mass. A solubility law relates the volatile concentration in the melt ($X_{v}$) to the partial pressure of the volatile in the atmosphere ($p_{v}$). 
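As a concrete illustration of Equation C1, its left-hand-side reservoirs can be evaluated term by term. The sketch below is our own, with hypothetical round numbers (SI units) rather than values from the model; the escape and reaction reservoirs are passed through unchanged.

```python
import math

def volatile_reservoirs(X_v, k_v, M_s, M_l, R_p, g, mu_v, mu_bar, p_v,
                        m_e=0.0, m_reactions=0.0):
    """Left-hand side of Eq. C1 (kg): solid + melt + atmosphere + escape + reactions."""
    solid = X_v * k_v * M_s                                  # dissolved in solid mantle
    melt = X_v * M_l                                         # dissolved in melt
    atmos = 4 * math.pi * R_p**2 * (mu_v / mu_bar) * p_v / abs(g)
    return solid + melt + atmos + m_e + m_reactions

# Example: a 1 bar steam atmosphere over a half-molten Earth-sized mantle
# (all numbers illustrative, not from the paper)
total = volatile_reservoirs(X_v=1e-4, k_v=1e-2, M_s=2e24, M_l=2e24,
                            R_p=6.371e6, g=-9.8, mu_v=0.018, mu_bar=0.018,
                            p_v=1.0e5)
X_v0 = total / 4e24   # consistent initial abundance, i.e. RHS of Eq. C1 over M^m
```

Setting $X_v=0$ recovers the familiar result that a 1 bar single-species atmosphere on an Earth-sized planet has a mass of roughly $5\times 10^{18}$ kg.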
The first three terms on the left-hand side of Equation C1 represent the mass of the volatile stored in the solid mantle, the molten mantle, and the atmosphere. The final two terms on the left-hand side are fictitious reservoirs used to account for mass loss due to escape ($m_{v}^{e}$; Appendix C.2), and mass transfer due to participation of this volatile in reaction $w$ ($m_{v}^{w}$, Appendix C.3). Taking the time derivative of Equation C1 yields the evolution equation (cf. Equation A4, Bower et al., 2019): $\displaystyle\frac{dX_{v}}{dt}\left(k_{v}M^{m}+(1-k_{v})M^{l}\right)+X_{v}(1-k_{v})\frac{dM^{l}}{dt}$ $\displaystyle+\frac{4\pi R_{p}^{2}\mu_{v}}{|g|}\left[\frac{1}{\bar{\mu}}\frac{dp_{v}}{dt}-\frac{p_{v}}{\bar{\mu}^{2}}\sum_{w=1}^{n}\frac{\mu_{w}}{P_{s}}\left(\frac{dp_{w}}{dt}-\frac{p_{w}}{P_{s}}\frac{dP_{s}}{dt}\right)\right]$ $\displaystyle+\frac{dm_{v}^{e}}{dt}+\sum_{w=1}^{r}\frac{dm_{v}^{w}}{dt}=0,$ (C2) where $P_{s}$ is the total surface pressure, which is equal to the summation of the partial pressures of all volatiles according to Dalton’s law. The derivative of $X_{v}$ with respect to $t$ can be determined from the solubility law relevant for volatile $v$, which for power-law solubility (Equation 1) is $\frac{dX_{v}}{dt}=\frac{dX_{v}}{dp_{v}}\frac{dp_{v}}{dt}=\left(\frac{\alpha_{v}{p_{v}}^{1/\beta_{v}-1}}{\beta_{v}}\right)\frac{dp_{v}}{dt}.$ (C3) Similarly, $X_{v}$ can be eliminated from Equation C2 using the solubility law. This results in a system of nonlinear equations that can be solved simultaneously to determine $dp_{v}/dt$ for each volatile $v$. Solving for $dp_{v}/dt$ rather than $dX_{v}/dt$ enables us to treat the case of zero solubility. Oxygen fugacity $f{\rm O_{2}}$ (Equation 7) depends on surface temperature, so oxygen is not explicitly tracked by a mass balance, unlike the other volatiles.
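The chain-rule step in Equation C3 follows from the power-law solubility $X_v=\alpha_v p_v^{1/\beta_v}$ and can be checked numerically; this is a standalone verification (our own, not part of the model code) using a central finite difference with arbitrary parameter values.

```python
def X(p, alpha, beta):
    """Power-law solubility (Equation 1): X = alpha * p**(1/beta)."""
    return alpha * p ** (1.0 / beta)

def dXdp_closed(p, alpha, beta):
    """Closed-form derivative dX/dp quoted in Equation C3."""
    return (alpha * p ** (1.0 / beta - 1.0)) / beta

# Central finite difference agrees with the closed form
alpha, beta, p, h = 2.3, 2.0, 4.0, 1e-6
numeric = (X(p + h, alpha, beta) - X(p - h, alpha, beta)) / (2 * h)
assert abs(numeric - dXdp_closed(p, alpha, beta)) < 1e-6
```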
### C.2 Mass loss due to atmospheric escape

The escape flux of H2 (Equation 9) is used to determine a mass-loss rate of H2 (e.g., Katyal et al., 2020): $\frac{dm_{H_{2}}^{e}}{dt}=4\pi R_{p}^{2}\phi_{\rm H_{2}}\mu_{\rm H_{2}}.$ (C4) The loss rate depends on the time-dependent escape flux $\phi_{\rm H_{2}}$, which is a function of the volume mixing ratio of H2; in turn, the volume mixing ratio of H2 is computed from the partial pressure of H2 and the total surface pressure. In the present work, the mass-loss rate for volatiles other than H2 is set to zero.

### C.3 Mass exchange due to chemical reactions

During the magma ocean stage, the silicate melt and overlying atmosphere can accommodate mass exchange via chemical reactions. We consider a general reaction with two reactants ($A$, $B$) and two products ($C$, $D$): $aA+bB\rightleftharpoons cC+dD,$ (C5) where $a$, $b$, $c$, and $d$ are stoichiometric coefficients. The reaction quotient $Q$ is a function of time $t$: $Q(t)=\frac{\\{C\\}^{c}\\{D\\}^{d}}{\\{A\\}^{a}\\{B\\}^{b}}\doteq\frac{Q_{p}(t)}{Q_{r}(t)},$ (C6) where curly brackets $\\{\\}$ denote the activity of the species and the activity of a gas is its fugacity divided by a reference pressure. At chemical equilibrium, the reaction quotient is equal to the equilibrium constant $K$, which is a function of temperature. An appropriate temperature is that of the magma ocean–atmosphere interface; hence, $\frac{\\{C\\}^{c}\\{D\\}^{d}}{\\{A\\}^{a}\\{B\\}^{b}}-K(T)=0.$ (C7) The reaction in Equation C5 can also be expressed in terms of a mass balance of reactants and products: $m_{A}+m_{B}\rightleftharpoons m_{C}+m_{D}.$ (C8) This is because changes in mole number translate into changes in mass through the molar masses, so, correlating terms in Equations C5 and C8: $m_{B}=\left(\frac{b\mu_{B}}{a\mu_{A}}\right)m_{A},\quad m_{C}=\left(\frac{c\mu_{C}}{a\mu_{A}}\right)m_{A},\quad{\rm etc.}$ (C9) where $\mu$ is the molar mass.
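The proportionality in Equation C9 guarantees that total mass is conserved by the reaction, since $a\mu_A+b\mu_B=c\mu_C+d\mu_D$. A quick check, using the water–gas shift reaction CO + H2O $\rightleftharpoons$ CO2 + H2 as our own illustrative choice:

```python
def reaction_masses(m_A, stoich, molar):
    """Masses of B, C, D implied by reaction mass m_A (Eq. C9).

    stoich = (a, b, c, d); molar = (mu_A, mu_B, mu_C, mu_D).
    """
    a, b, c, d = stoich
    mu_A, mu_B, mu_C, mu_D = molar
    m_B = (b * mu_B) / (a * mu_A) * m_A
    m_C = (c * mu_C) / (a * mu_A) * m_A
    m_D = (d * mu_D) / (a * mu_A) * m_A
    return m_B, m_C, m_D

# Water-gas shift: CO + H2O <-> CO2 + H2 (molar masses in g/mol: 28, 18, 44, 2)
m_B, m_C, m_D = reaction_masses(1.0, (1, 1, 1, 1), (28.0, 18.0, 44.0, 2.0))

# Reactant mass consumed equals product mass created (Eq. C8)
assert abs((1.0 + m_B) - (m_C + m_D)) < 1e-12
```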
It is convenient in Equation C9 to express the mass of the reactants and products using the mass of the first reactant ($m_{A}$) multiplied by a constant factor that depends only on molar masses and reaction stoichiometry; mathematically this choice is arbitrary, and another mass could equally be used. Hence, each chemical reaction introduces a single unknown “reaction mass” ($m_{A}$) and one equation (Equation C7) to the complete system of equations that we solve. The functional dependencies of all quantities in Equation C7 are known, since we assume ideal gas behavior to connect gas activities to volatile partial pressures, and the functional dependence of the equilibrium constant on temperature $T$ is documented in thermochemical tables (Chase, 1998). Time derivatives of Equations C7 and C9 are necessary to time-step the model. The time derivatives of the reactant and product masses are trivial to compute since stoichiometric coefficients and molar masses are constant. For example (cf. Equation C9): $\frac{dm_{B}}{dt}=\left(\frac{b\mu_{B}}{a\mu_{A}}\right)\frac{dm_{A}}{dt},$ (C10) and the time derivatives of the product masses ($m_{C}$, $m_{D}$) can be similarly computed. The time derivative of the equilibrium condition (Equation C7) is: $\frac{1}{Q_{r}}\frac{dQ_{p}}{dt}-\frac{Q_{p}}{Q_{r}^{2}}\frac{dQ_{r}}{dt}-\frac{dK}{dT}\frac{dT}{dt}=0,$ (C11) where the reaction quotient numerator $Q_{p}=\\{C\\}^{c}\\{D\\}^{d}$ and denominator $Q_{r}=\\{A\\}^{a}\\{B\\}^{b}$ depend on the volatile partial pressures and reaction stoichiometry. The equilibrium constant $K$ depends only on temperature $T$ (Equations 4 and 5) and therefore $dK/dT$ is straightforward to evaluate. The code also solves for $dT/dt$ in the mantle at each staggered mesh point, although only the change of surface temperature is required since this temperature defines the interface of the magma ocean and atmosphere.
Hence, to include reactions in the time stepper, Equation C11 constitutes an extra equation per reaction that must be satisfied, where the extra unknowns we solve for are the change of reaction mass for each reaction ($dm_{A}/dt$). Equation C10 relates the change of reaction mass for reactant $B$ and products $C$ and $D$ to the solution quantity ($dm_{A}/dt$). For each volatile, we can thus evaluate the total rate of change of its mass due to all reactions that it participates in; this is the final term on the left-hand side of Equation C2. ### C.4 Initial volatile abundances For the initial condition, we solve for the abundance of each volatile in the melt using a set of coupled equations consisting of all the volatile mass balances (Equation C1) and $r$ chemical reactions (Equation C7). To do this, estimates of the initial total volatile abundances $X_{v}^{\rm 0,est}$ are prescribed, and the initial atmospheric escape masses are set to zero ($m_{v}^{e}=0$). We solve for the partial pressure of each volatile in the atmosphere and the reaction masses, the latter of which are likely to be nonzero. Hence, it is convenient—although formally not necessary—to offset the estimates of the initial total volatile abundances ($X_{v}^{\rm 0,est}$) using the solved reaction masses to enforce that the initial total volatile abundances ($X_{v}^{0}$) adhere to chemical equilibrium constraints with zero reaction mass: $X_{v}^{0}=X_{v}^{\rm 0,est}-\frac{1}{M^{m}}\sum_{w}^{r}m_{v}^{w}.$ (C12) Hence, the reaction masses are zero by definition for the initial condition and can increase or decrease during the time integration. ### C.5 Alternative using thermodynamic components The system of volatiles we consider can be represented by two components: C and H2, where we recall that oxygen is not explicitly tracked. Hence, an alternative approach to solving the mass balance for each volatile is to instead solve for the total abundance of C and H2 moles. 
Conservation laws for C and H2 abundances can be derived by converting the relevant volatile mass balances to C or H2 moles and then summing. Without chemical reactions, conserving species mass or abundance is the same because there is no exchange of material between species. With reactions, one can either continue to track species mass by accounting for mass exchange by reactions (as we choose to do), or instead track the abundance of components. Considering species mass allows one to retain the familiar mass formulation derived without reactions, but at the expense of more equations (e.g., five volatile species plus three reactions). By contrast, considering components reduces the number of equations (e.g., two components plus three reactions), but requires a departure from the traditional species mass balance approach. Ultimately, both approaches are valid and equivalent and will likely find utility in different applications.

### C.6 Simplified mass balance and interior depletion

The partitioning of volatiles between a fully molten magma ocean and atmosphere can be analyzed using a reduced form of the volatile mass balance (Equation C1). This aids the interpretation of the results—notably demonstrating their universality—as well as providing analytical expressions and scaling relations. We consider the mass balance for a single volatile that is only partitioned between the melt and atmosphere. However, we retain the mean molar mass of the atmosphere $\bar{\mu}$ to provide insight for the case in which the atmosphere is dominated by another volatile (or a mixture of volatiles) that is less soluble than the volatile under consideration.
Following Equation C1: $X_{v}M^{l}+4\pi R_{p}^{2}\left(\frac{\mu_{v}}{\bar{\mu}}\right)\frac{p_{v}}{|g|}=X_{v}^{0}M^{m}.$ (C13) For power-law solubility (Equation 1), $\frac{\phi_{g}}{C}X_{v}+{X_{v}}^{\beta_{v}}=\frac{X_{v}^{0}}{C},$ (C14) where $\phi_{g}=M^{l}/M^{m}$ is the global melt fraction and $C=\frac{4\pi R_{p}^{2}\mu_{v}{\alpha_{v}}^{-\beta_{v}}}{M^{m}\bar{\mu}|g|}.$ (C15) For a single species ($\bar{\mu}=\mu_{v}$), an Earth-sized mantle, and values of $\alpha_{v}$ and $\beta_{v}$ from the peridotite solubility law (Table 1), the coefficient $C\approx 17/4$. #### C.6.1 Volatile depletion for $\beta_{v}=2$ For the peridotite solubility law (as well as other laws), $\beta_{v}=2$ and Equation C14 is a quadratic equation that can be solved for the volatile abundance in the melt: $X_{v}=\frac{-\phi_{g}}{2C}+\frac{1}{2}\sqrt{\frac{\phi_{g}^{2}}{C^{2}}+\frac{4X_{v}^{0}}{C}}.$ (C16) Hence, comparing to Equation C14, the scaled volatile mass in the liquid is $m_{v}^{l}=\frac{-\phi_{g}^{2}}{2C^{2}}+\frac{\phi_{g}}{2C}\sqrt{\frac{\phi_{g}^{2}}{C^{2}}+\frac{4X_{v}^{0}}{C}},$ (C17) the scaled volatile mass in the atmosphere is $m_{v}^{a}=\frac{X_{v}^{0}}{C}+\frac{\phi_{g}^{2}}{2C^{2}}-\frac{\phi_{g}}{2C}\sqrt{\frac{\phi_{g}^{2}}{C^{2}}+\frac{4X_{v}^{0}}{C}},$ (C18) and the scaled total volatile mass is $m_{v}^{t}=\frac{X_{v}^{0}}{C}.$ (C19) Therefore, the volatile depletion $\mathcal{D}$ expressed as a fraction relative to the total volatile mass is $\mathcal{D}=1-\frac{m_{v}^{l}}{m_{v}^{t}}=\frac{m_{v}^{a}}{m_{v}^{t}}=1+\frac{\phi_{g}^{2}}{2CX_{v}^{0}}-\frac{\phi_{g}}{2X_{v}^{0}}\sqrt{\frac{\phi_{g}^{2}}{C^{2}}+\frac{4X_{v}^{0}}{C}}.$ (C20) Taking the derivative of Equation C20 with respect to the initial volatile abundance $X_{v}^{0}$, we obtain $\frac{\partial\mathcal{D}}{\partial X_{v}^{0}}=\frac{\phi_{g}\left(-\phi_{g}\sqrt{\phi_{g}^{2}+4CX_{v}^{0}}+\phi_{g}^{2}+2CX_{v}^{0}\right)}{2(X_{v}^{0})^{2}C\sqrt{\phi_{g}^{2}+4X_{v}^{0}C}}.$ (C21) All variables must be 
positive to be physically meaningful. The denominator is clearly always positive and the numerator is also positive since the following condition is always satisfied: $\phi_{g}^{2}+2CX_{v}^{0}>\phi_{g}\sqrt{\phi_{g}^{2}+4CX_{v}^{0}}\,.$ (C22) Therefore, $\frac{\mathcal{\partial D}}{\partial X_{v}^{0}}>0\,.$ (C23) Hence, at a given melt fraction $\phi_{g}$, the relative depletion of the interior increases with the initial volatile abundance $X_{v}^{0}$ (Figure 3). #### C.6.2 Volatile depletion for $\beta_{v}=1$ For a volatile that obeys Henrian solubility behavior ($\beta_{v}=1$), similar to Appendix C.6.1, we can write: $X_{v}=\frac{X_{v}^{0}}{C+\phi_{g}},\quad m_{v}^{l}=\frac{\phi_{g}X_{v}^{0}}{C(C+\phi_{g})},\quad m_{v}^{a}=\frac{X_{v}^{0}}{C+\phi_{g}}.$ (C24) Hence, the interior depletion $\mathcal{D}$ is $\mathcal{D}=\frac{C}{C+\phi_{g}},\qquad\frac{\partial\mathcal{D}}{\partial X_{v}^{0}}=0.$ (C25) Notably, interior depletion is independent of the initial volatile abundance $X_{v}^{0}$ when $\beta_{v}=1$. ## Appendix D Oxygen fugacity and atmosphere speciation Figure 4 shows the atmospheric composition at an oxygen fugacity of $\Delta$IW=0.5. The following figures show the same at $\Delta$IW=$-2$ (Figure 14), $\Delta$IW=2 (Figure 15), and $\Delta$IW=4 (Figure 16). Figure 14: Atmospheric composition during magma ocean outgassing at $f{\rm O_{2}}$ of $\Delta$IW=$-2$. See Figure 4 caption. Figure 15: Atmospheric composition during magma ocean outgassing at $f{\rm O_{2}}$ of $\Delta$IW=2. See Figure 4 caption. Figure 16: Atmospheric composition during magma ocean outgassing at $f{\rm O_{2}}$ of $\Delta$IW=4. See Figure 4 caption. ## Appendix E Comparison with FactSage 8.0 We independently verified the results of our model by comparison to FactSage calculations, assuming ideal gas behavior (Table E). 
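Equations C16–C25 can be cross-checked numerically by solving the mass balance (Equation C14) directly. The sketch below is our own, using the illustrative value $C=17/4$ quoted above; it verifies the quadratic root, the monotonicity stated in Equation C23, and the $X_v^0$-independence of the Henrian case (Equation C25).

```python
import math

C = 17 / 4   # illustrative coefficient for the peridotite law, Earth-sized mantle

def melt_abundance(phi_g, X0, beta):
    """Solve Eq. C14 for X_v: quadratic (beta = 2, Eq. C16) or linear (beta = 1, Eq. C24)."""
    if beta == 2:
        return -phi_g / (2 * C) + 0.5 * math.sqrt(phi_g**2 / C**2 + 4 * X0 / C)
    elif beta == 1:
        return X0 / (C + phi_g)
    raise ValueError("only beta = 1 or 2 handled here")

def depletion(phi_g, X0, beta):
    """Interior depletion D = m_a / m_t (Eqs. C20 and C25)."""
    X = melt_abundance(phi_g, X0, beta)
    m_l = phi_g * X / C      # scaled melt reservoir
    m_t = X0 / C             # scaled total
    return 1 - m_l / m_t

# The beta = 2 root satisfies the quadratic mass balance (Eq. C14)
X = melt_abundance(1.0, 0.01, 2)
assert abs((1.0 / C) * X + X**2 - 0.01 / C) < 1e-12

# Depletion increases with initial abundance for beta = 2 (Eq. C23) ...
assert depletion(1.0, 0.02, 2) > depletion(1.0, 0.01, 2)

# ... but is independent of it for beta = 1, equal to C / (C + phi_g) (Eq. C25)
assert abs(depletion(1.0, 1e-4, 1) - C / (C + 1.0)) < 1e-12
assert abs(depletion(1.0, 1.0, 1) - C / (C + 1.0)) < 1e-12
```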
Table E: Comparison of Select Outgassed Atmospheres Calculated Using Our Model and FactSage

H Oceans | C/H | $f\rm O_{2}$ ($\Delta$IW) | CO | CO2 | H2 | H2O | CH4 | CO | CO2 | H2 | H2O | CH4
---|---|---|---|---|---|---|---|---|---|---|---|---
3 | 1 | -2.0 | 7.1 | 0.4 | 186 | 23 | 42 | 7.2 | 0.4 | 187.5 | 22.4 | 41
3 | 1 | +0.5 | 51 | 48 | 213 | 460 | 22 | 48.9 | 49.7 | 216.1 | 458.6 | 20.7
1 | 0.1 | +2.0 | 0.7 | 3.7 | 20 | 240 | 0 | 0.6 | 3.7 | 20.2 | 239.9 | 0.00043
1 | 5 | +4.0 | 7 | 368 | 3.7 | 444 | 0 | 6.4 | 367.6 | 3.7 | 445 | 0

Note. — The first three columns specify the case; columns 4–8 list partial pressures from our model (bars) and columns 9–13 the corresponding FactSage values (bars). This comparison reveals that the partial pressure of a given volatile differs by at most a few percent between the two calculations.

## References

* Abe (1993) Abe, Y. 1993, Thermal Evolution and Chemical Differentiation of the Terrestrial Magma Ocean, ed. E. Takahashi, R. Jeanloz, & D. Rubie, Vol. 74 (AGU, Washington, D.C.), 41–54, doi: https://doi.org/10.1029/GM074p0041 * Abe (1995) —. 1995, in The Earth’s Central Part: Its Structure and Dynamics, ed. T. Yukutake (Terra Sci. Pub. Com., Tokyo), 215–230. https://www.terrapub.co.jp/e-library/ecp/pdf/EC0215.PDF * Abe & Matsui (1985) Abe, Y., & Matsui, T. 1985, JGRB, 90, C545, doi: https://doi.org/10.1029/JB090iS02p0C545 * Abel et al. (2011) Abel, M., Frommhold, L., Li, X., & Hunt, K. L. C. 2011, JPCA, 115, 6805, doi: https://doi.org/10.1021/jp109441f * Andrault et al. (2011) Andrault, D., Bolfan-Casanova, N., Nigro, G. L., et al. 2011, E&PSL, 304, 251, doi: https://doi.org/10.1016/j.epsl.2011.02.006 * Andrault et al. (2016) Andrault, D., Monteux, J., Bars, M. L., & Samuel, H. 2016, E&PSL, 443, 195, doi: https://doi.org/10.1016/j.epsl.2016.03.020 * Ardia et al. (2013) Ardia, P., Hirschmann, M., Withers, A., & Stanley, B. 2013, GeCoA, 114, 52, doi: https://doi.org/10.1016/j.gca.2013.03.028 * Armstrong et al. (2019) Armstrong, K., Frost, D. J., McCammon, C. A., Rubie, D. C., & Boffa Ballaran, T. 2019, Sci, 365, 903, doi: https://doi.org/10.1126/science.aax8376 * Armstrong et al.
(2015) Armstrong, L. S., Hirschmann, M. M., Stanley, B. D., Falksen, E. G., & Jacobsen, S. D. 2015, GeCoA, 171, 283, doi: https://doi.org/10.1016/j.gca.2015.07.007 * Bale et al. (2009) Bale, C., Bélisle, E., Chartrand, P., et al. 2009, Calphad: Computer Coupling of Phase Diagrams and Thermochemistry, 33, 295, doi: https://doi.org/10.1016/j.calphad.2008.09.009 * Barber et al. (2006) Barber, R. J., Tennyson, J., Harris, G. J., & Tolchenov, R. N. 2006, MNRAS, 368, 1087, doi: https://doi.org/10.1111/j.1365-2966.2006.10184.x * Barth et al. (2021) Barth, P., Carone, L., Barnes, R., et al. 2021, AsBio, doi: https://doi.org/10.1089/ast.2020.2277 * Benedikt et al. (2020) Benedikt, M., Scherf, M., Lammer, H., et al. 2020, Icarus, 347, 113772, doi: https://doi.org/10.1016/j.icarus.2020.113772 * Berndt et al. (2002) Berndt, J., Liebske, C., Holtz, F., et al. 2002, AmMin, 87, 1717, doi: https://doi.org/10.2138/am-2002-11-1222 * Bonati et al. (2019) Bonati, I., Lichtenberg, T., Bower, D. J., Timpe, M., & Quanz, S. 2019, A&A, 621, doi: https://doi.org/10.1051/0004-6361/201833158 * Bower et al. (2019) Bower, D. J., Kitzmann, D., Wolf, A. S., et al. 2019, A&A, 631, doi: https://doi.org/10.1051/0004-6361/201935710 * Bower et al. (2018) Bower, D. J., Sanan, P., & Wolf, A. S. 2018, PEPI, 274, 49, doi: https://doi.org/10.1016/j.pepi.2017.11.004 * Bower et al. (2021) —. 2021, SPIDER: Simulating Planetary Interior Dynamics with Extreme Rheology, Version 0.2.1, Zenodo, doi: https://doi.org/10.5281/zenodo.5682523 * Brearley (2006) Brearley, A. J. 2006, The Action of Water, ed. D. S. Lauretta & H. Y. McSween (University of Arizona Press), 587–624 * Caracas et al. (2019) Caracas, R., Hirose, K., Nomura, R., & Ballmer, M. D. 2019, E&PSL, 516, 202, doi: https://doi.org/10.1016/j.epsl.2019.03.031 * Charnoz et al. (2021) Charnoz, S., Sossi, P. A., Lee, Y.-N., et al. 2021, Icarus, 364, 114451, doi: https://doi.org/10.1016/j.icarus.2021.114451 * Chase (1998) Chase, M. 
1998, NIST-JANAF Thermochemical Tables, 4th Edition (American Institute of Physics), doi: https://doi.org/10.18434/T42S31 * Chi et al. (2014) Chi, H., Dasgupta, R., Duncan, M. S., & Shimizu, N. 2014, GeCoA, 139, 447, doi: https://doi.org/10.1016/j.gca.2014.04.046 * Corgne et al. (2008) Corgne, A., Wood, B. J., & Fei, Y. 2008, GeCoA, 72, 2409, doi: https://doi.org/10.1016/j.gca.2008.03.001 * Costa et al. (2009) Costa, A., Caricchi, L., & Bagdassarov, N. 2009, GGG, 10, doi: https://doi.org/10.1029/2008GC002138 * Dalou et al. (2019) Dalou, C., Hirschmann, M. M., Jacobsen, S. D., & Le Losq, C. 2019, GeCoA, 265, 32, doi: https://doi.org/10.1016/j.gca.2019.08.029 * Dasgupta (2013) Dasgupta, R. 2013, RvMG, 75, 183, doi: https://doi.org/10.2138/rmg.2013.75.7 * Dasgupta et al. (2013) Dasgupta, R., Chi, H., Shimizu, N., Buono, A. S., & Walker, D. 2013, GeCoA, 102, 191, doi: https://doi.org/10.1016/j.gca.2012.10.011 * Dixon et al. (1995) Dixon, J. E., Stolper, E. M., & Holloway, J. R. 1995, JPet, 36, 1607, doi: https://doi.org/10.1093/oxfordjournals.petrology.a037267 * Duan et al. (1992) Duan, Z., Møller, N., & Weare, J. H. 1992, GeCoA, 56, 2605, doi: https://doi.org/10.1016/0016-7037(92)90347-L * Duncan et al. (2017) Duncan, M. S., Dasgupta, R., & Tsuno, K. 2017, E&PSL, 466, 115, doi: https://doi.org/10.1016/j.epsl.2017.03.008 * Elkins-Tanton (2008) Elkins-Tanton, L. 2008, E&PSL, 271, 181, doi: https://doi.org/10.1016/j.epsl.2008.03.062 * Fei & Katsura (2020) Fei, H., & Katsura, T. 2020, E&PSL, 531, 115987, doi: https://doi.org/10.1016/j.epsl.2019.115987 * Frost & Wood (1997) Frost, D. J., & Wood, B. J. 1997, GeCoA, 61, 3301, doi: https://doi.org/10.1016/S0016-7037(97)00168-3 * Gaillard et al. (2021) Gaillard, F., Bouhifd, M. A., Füri, E., et al. 2021, SSRv, 217, 22, doi: https://doi.org/10.1007/s11214-021-00802-1 * Gooding & Muenow (1977) Gooding, J. L., & Muenow, D. W. 1977, Metic, 12, 401, doi: https://doi.org/10.1111/j.1945-5100.1977.tb00458.x * Grewal et al. 
(2020) Grewal, D. S., Dasgupta, R., & Farnell, A. 2020, GeCoA, 280, 281, doi: https://doi.org/10.1016/j.gca.2020.04.023 * Grimm & Heng (2015) Grimm, S. L., & Heng, K. 2015, ApJ, 808, 182, doi: https://doi.org/10.1088/0004-637X/808/2/182 * Grimm et al. (2021) Grimm, S. L., Malik, M., Kitzmann, D., et al. 2021, ApJS, 253, 30, doi: https://doi.org/10.3847/1538-4365/abd773 * Gronoff et al. (2020) Gronoff, G., Arras, P., Baraka, S. M., et al. 2020, JGRA, 125, e2019JA027639, doi: https://doi.org/10.1029/2019JA027639 * Hamano et al. (2013) Hamano, K., Abe, Y., & Genda, H. 2013, Natur, 497, 607, doi: https://doi.org/10.1038/nature12163 * Hamano et al. (2015) Hamano, K., Kawahara, H., Abe, Y., Onishi, M., & Hashimoto, G. L. 2015, ApJ, 806, 216, doi: https://doi.org/10.1088/0004-637X/806/2/216 * Hamilton & Oxtoby (1986) Hamilton, D., & Oxtoby, S. 1986, JG, 94, 626 * Hier-Majumder & Hirschmann (2017) Hier-Majumder, S., & Hirschmann, M. M. 2017, GGG, 18, doi: https://doi.org/10.1002/2017GC006937 * Hirschmann et al. (2012) Hirschmann, M., Withers, A., Ardia, P., & Foley, N. 2012, E&PSL, 345–348, 38, doi: https://doi.org/10.1016/j.epsl.2012.06.031 * Hirschmann (2012) Hirschmann, M. M. 2012, E&PSL, 341–344, 48, doi: https://doi.org/10.1016/j.epsl.2012.06.015 * Hirschmann (2016) —. 2016, AmMin, 101, 540, doi: https://doi.org/10.2138/am-2016-5452 * Hirschmann (2018) —. 2018, E&PSL, doi: https://doi.org/10.1016/j.epsl.2018.08.023 * Holloway & Blank (1994) Holloway, J. R., & Blank, J. G. 1994, in Volatiles in magmas, ed. M. R. Carroll & J. R. Holloway, Vol. 30 (Mineralogical Society of America), 187–230, doi: https://doi.org/10.1515/9781501509674-012 * Holloway et al. (1992) Holloway, J. R., Pan, V., & Gudmundsson, G. 1992, EJMin, 4, 105 * Iacono-Marziano et al. (2012) Iacono-Marziano, G., Morizet, Y., Trong, E. L., & Gaillard, F. 2012, GeCoA, 97, 1, doi: https://doi.org/10.1016/j.gca.2012.08.035 * Jakobsson & Oskarsson (1994) Jakobsson, S., & Oskarsson, N. 
1994, GeCoA, 58, 9 * Karman et al. (2019) Karman, T., Gordon, I. E., van der Avoird, A., et al. 2019, Icarus, 328, 160, doi: https://doi.org/10.1016/j.icarus.2019.02.034 * Katyal et al. (2020) Katyal, N., Ortenzi, G., Grenfell, J. L., et al. 2020, A&A, 643, doi: https://doi.org/10.1051/0004-6361/202038779 * Keppler & Golabek (2019) Keppler, H., & Golabek, G. 2019, Geochemical Perspectives Letters, 11, doi: https://doi.org/10.7185/geochemlet.1918 * Labrosse et al. (2007) Labrosse, S., Hernlund, J. W., & Coltice, N. 2007, Natur, 450, 866, doi: https://doi.org/10.1038/nature06355 * Lange & Ahrens (1982) Lange, M. A., & Ahrens, T. J. 1982, Icarus, 51, 96, doi: https://doi.org/10.1016/0019-1035(82)90031-8 * Lebrun et al. (2013) Lebrun, T., Massol, H., Chassefière, E., et al. 2013, JGRE, 118, 1155, doi: https://doi.org/10.1002/jgre.20068 * Lécuyer et al. (1998) Lécuyer, C., Gillet, P., & Robert, F. 1998, ChGeo, 145, 249, doi: https://doi.org/10.1016/S0009-2541(97)00146-0 * Li et al. (2015a) Li, G., Gordon, I. E., Rothman, L. S., et al. 2015a, ApJS, 216, 15, doi: https://doi.org/10.1088/0067-0049/216/1/15 * Li et al. (2015b) Li, Y., Dasgupta, R., & Tsuno, K. 2015b, E&PSL, 415, 54, doi: https://doi.org/10.1016/j.epsl.2015.01.017 * Lichtenberg et al. (2021) Lichtenberg, T., Bower, D. J., Hammond, M., et al. 2021, JGRE, e2020JE006711, doi: https://doi.org/10.1029/2020JE006711 * Maas & Hansen (2019) Maas, C., & Hansen, U. 2019, E&PSL, 513, 81, doi: https://doi.org/10.1016/j.epsl.2019.02.016 * Marty (2012) Marty, B. 2012, E&PSL, 313-314, 56, doi: https://doi.org/10.1016/j.epsl.2011.10.040 * Maurice et al. (2017) Maurice, M., Tosi, N., Samuel, H., et al. 2017, JGRE, 122, 577, doi: https://10.1002/2016JE005250 * McKenzie (1984) McKenzie, D. 1984, JPet, 25, 713, doi: https://doi.org/10.1093/petrology/25.3.713 * McMillan (1994) McMillan, P. F. 1994, in Volatiles in Magmas, ed. M. R. Carroll & J. R. Holloway, Vol. 
30 (Mineralogical Society of America), 131–156, doi: https://doi.org/10.1515/9781501509674-010 * Miyazaki & Korenaga (2021) Miyazaki, Y., & Korenaga, J. 2021, Does detecting water vapors on rocky planets indicate the presence of oceans?: An insight from self-consistent mantle degassing models. https://arxiv.org/abs/2108.03759 * Mosenfelder et al. (2009) Mosenfelder, J. L., Asimow, P. D., Frost, D. J., Rubie, D. C., & Ahrens, T. J. 2009, JGRB, 114, doi: https://doi.org/10.1029/2008JB005900 * Mysen et al. (2011) Mysen, B. O., Kumamoto, K., Cody, G. D., & Fogel, M. L. 2011, GeCoA, 75, 6183, doi: 10.1016/j.gca.2011.07.035 * Newcombe et al. (2017) Newcombe, M., Brett, A., Beckett, J., et al. 2017, GeCoA, 200, 330, doi: https://doi.org/10.1016/j.gca.2016.12.026 * Nikolaou et al. (2019) Nikolaou, A., Katyal, N., Tosi, N., et al. 2019, ApJ, 875, 11, doi: https://doi.org/10.3847/1538-4357/ab08ed * Olson & Sharp (2019) Olson, P. L., & Sharp, Z. D. 2019, PEPI, 294, 106294, doi: https://doi.org/10.1016/j.pepi.2019.106294 * O’Neill & Eggins (2002) O’Neill, H. S., & Eggins, S. M. 2002, ChGeo, 186, 151, doi: https://doi.org/10.1016/S0009-2541(01)00414-4 * Pahlevan et al. (2019) Pahlevan, K., Schaefer, L., & Hirschmann, M. M. 2019, E&PSL, 526, 115770, doi: https://doi.org/10.1016/j.epsl.2019.115770 * Pan et al. (1991) Pan, V., Holloway, J. R., & Hervig, R. L. 1991, GeCoA, 55, 1587, doi: https://doi.org/10.1016/0016-7037(91)90130-W * Patoc̆ka et al. (2020) Patoc̆ka, V., Calzavarini, E., & Tosi, N. 2020, PhRvF, 5, doi: https://doi.org/10.1103/PhysRevFluids.5.114304 * Perera et al. (2018) Perera, V., Jackson, A. P., Elkins-Tanton, L. T., & Asphaug, E. 2018, JGRP, 123, 1168, doi: https://doi.org/10.1029/2017JE005512 * Putirka & Rarick (2019) Putirka, K. D., & Rarick, J. C. 2019, AmMin, 104, 817, doi: https://doi.org/10.2138/am-2019-6787 * Raymond & Izidoro (2017) Raymond, S. N., & Izidoro, A. 2017, Icarus, 297, 134, doi: https://doi.org/10.1016/j.icarus.2017.06.030 * Raymond et al. 
(2007) Raymond, S. N., Quinn, T., & Lunine, J. I. 2007, AsBio, 7, 66, doi: https://doi.org/10.1089/ast.2006.06-0126 * Rothman et al. (2010) Rothman, L. S., Gordon, I. E., Barber, R. J., et al. 2010, JQSRT, 111, 2139, doi: https://doi.org/10.1016/j.jqsrt.2010.05.001 * Rubie et al. (2015) Rubie, D., Nimmo, F., & Melosh, H. 2015, in Treatise on Geophysics, 2nd edn., ed. G. Schubert, Vol. 9 (Elsevier), 43–79, doi: https://doi.org/10.1016/B978-0-444-53802-4.00154-8 * Rumpf & Gupte (1971) Rumpf, H. C. H., & Gupte, A. R. 1971, Chemie Ingenieur Technik, 43, 367, doi: https://doi.org/10.1002/cite.330430610 * Salvador et al. (2017) Salvador, A., Massol, H., Davaille, A., et al. 2017, JGRE, 122, 1458, doi: https://doi.org/10.1002/2017JE005286 * Scaillet & Gaillard (2011) Scaillet, B., & Gaillard, F. 2011, Natur, 480, 48, doi: https://doi.org/10.1038/480048a * Schaefer & Fegley (2010) Schaefer, L., & Fegley, B. 2010, Icarus, 208, 438, doi: https://doi.org/10.1016/j.icarus.2010.01.026 * Schaefer & Fegley (2017) —. 2017, ApJ, 843, 120, doi: https://doi.org/10.3847/1538-4357/aa784f * Schaefer et al. (2016) Schaefer, L., Wordsworth, R. D., Berta-Thompson, Z., & Sasselov, D. 2016, ApJ, 829, 63, doi: https://doi.org/10.3847/0004-637X/829/2/63 * Solomatov (2000) Solomatov, V. S. 2000, in Origin of the Earth and Moon, ed. R. M. Canup & K. Righter, Space Science Series (The University of Arizona Press), 323–338 * Solomatov et al. (1993) Solomatov, V. S., Olson, P., & Stevenson, D. J. 1993, E&PSL, 120, 387, doi: https://doi.org/10.1016/0012-821X(93)90252-5 * Solomatov & Stevenson (1993a) Solomatov, V. S., & Stevenson, D. J. 1993a, JGRE, 98, 5375, doi: https://doi.org/10.1029/92JE02948 * Solomatov & Stevenson (1993b) —. 1993b, JGRE, 98, 5391, doi: https://doi.org/10.1029/92JE02579 * Sossi (2021) Sossi, P. A. 2021, NatAs, doi: https://doi.org/10.1038/s41550-021-01353-9 * Sossi et al. (2020) Sossi, P. A., Burnham, A. D., Badro, J., et al. 
2020, SciA, 6, doi: https://doi.org/10.1126/sciadv.abd1387 * Sossi et al. (2022) Sossi, P. A., Tollan, P., Badro, J., & Bower, D. J. 2022, E&PSL * Stanley et al. (2014) Stanley, B. D., Hirschmann, M. M., & Withers, A. C. 2014, GeCoA, 129, 54, doi: https://doi.org/10.1016/j.gca.2013.12.013 * Stolper (1982) Stolper, E. 1982, GeCoA, 46, 2609, doi: https://doi.org/10.1016/0016-7037(82)90381-7 * Stolper & Holloway (1988) Stolper, E., & Holloway, J. R. 1988, E&PSL, 87, 397, doi: https://doi.org/10.1016/0012-821X(88)90004-0 * Tagawa et al. (2021) Tagawa, S., Sakamoto, N., Hirose, K., et al. 2021, NatCo, 12, 2588, doi: https://doi.org/10.1038/s41467-021-22035-0 * Takahashi et al. (2013) Takahashi, S., Ohtani, E., Terasaki, H., et al. 2013, PCM, 40, 647, doi: https://doi.org/10.1007/s00269-013-0600-x * Thompson et al. (2021) Thompson, M. A., Telus, M., Schaefer, L., et al. 2021, NatAs, doi: https://doi.org/10.1038/s41550-021-01338-8 * Tonks & Melosh (1990) Tonks, W., & Melosh, H. 1990, in Origin of the Earth, ed. H. E. Newsom & J. H. Jones (Oxford University Press, New York), 151–174 * Tu et al. (2015) Tu, L., Johnstone, C. P., Güdel, M., & Lammer, H. 2015, A&A, 577, L3, doi: https://doi.org/10.1051/0004-6361/201526146 * Wetzel et al. (2013) Wetzel, D. T., Rutherford, M. J., Jacobsen, S. D., Hauri, E. H., & Saal, A. E. 2013, PNAS, 110, 8010, doi: https://doi.org/10.1073/pnas.1219266110 * Wilson & Head III (1981) Wilson, L., & Head III, J. W. 1981, JGRB, 86, 2971, doi: https://doi.org/10.1029/JB086iB04p02971 * Wolf & Bower (2018) Wolf, A. S., & Bower, D. J. 2018, PEPI, 278, 59, doi: https://doi.org/10.1016/j.pepi.2018.02.004 * Yoshioka et al. (2019) Yoshioka, T., Nakashima, D., & Keppler, H. 2019, GeCoA, 259, 129, doi: https://doi.org/10.1016/j.gca.2019.06.007 * Yurchenko & Tennyson (2014) Yurchenko, S. N., & Tennyson, J. 2014, MNRAS, 440, 1649, doi: https://doi.org/10.1093/mnras/stu326 * Yurchenko et al. (2013) Yurchenko, S. N., Tennyson, J., Barber, R. J., & Thiel, W. 
2013, JMoSp, 291, 69, doi: https://doi.org/10.1016/j.jms.2013.05.014 * Zahnle et al. (2019) Zahnle, K. J., Gacesa, M., & Catling, D. C. 2019, GeCoA, 244, 56, doi: https://doi.org/10.1016/j.gca.2018.09.017 * Zahnle et al. (2020) Zahnle, K. J., Lupu, R., Catling, D. C., & Wogan, N. 2020, PSJ, 1, 11, doi: https://doi.org/10.3847/psj/ab7e2c
# Bounds on Dao numbers and applications to regular local rings Antonino Ficarra Department of Mathematics and Computer Sciences, Physics and Earth Sciences, University of Messina, Viale Ferdinando Stagno d’Alcontres 31, 98166 Messina, Italy<EMAIL_ADDRESS>, Cleto B. Miranda-Neto Departamento de Matemática, Universidade Federal da Paraíba, 58051-900 João Pessoa, PB, Brazil<EMAIL_ADDRESS>and Douglas S. Queiroz Departamento de Matemática, Universidade Federal da Paraíba, 58051-900 João Pessoa, PB, Brazil<EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract. The so-called Dao numbers are a sort of measure of the asymptotic behaviour of full properties of certain product ideals in a Noetherian local ring $R$ with infinite residue field and positive depth. In this paper, we answer a question of H. Dao on how to bound such numbers. The auxiliary tools range from Castelnuovo-Mumford regularity of appropriate graded structures to reduction numbers of the maximal ideal. In particular, we substantially improve previous results (and answer questions) by the authors. As an application, we provide new characterizations of when $R$ is regular; for instance, we show that this holds if and only if the maximal ideal of $R$ can be generated by a $d$-sequence (in the sense of Huneke) if and only if the third Dao number of any (minimal) reduction of the maximal ideal vanishes. ###### Key words and phrases: Full ideal, Castelnuovo-Mumford regularity, Rees algebra, reduction number, regular local ring ###### 2020 Mathematics Subject Classification: Primary: 13C05, 13C13; Secondary: 13A30, 13H99 ## 1\. Motivation: Dao’s problem on the fullness of certain ideals Throughout this note, by a ring we mean a commutative, Noetherian, unital ring. Let $R$ be either a local ring with residue field $K$ and maximal ideal $\mathfrak{m}$, or a standard graded algebra over a field $K$ having a unique graded maximal ideal $\mathfrak{m}$. 
We will assume throughout that $K$ is infinite and $\mathrm{depth}\ R>0$ (i.e., $\mathfrak{m}$ contains a non- zerodivisor), and in addition $I\subset R$ stands for an ideal which we assume to be homogeneous whenever $R$ is graded. In this paper we focus on the properties of $\mathfrak{m}$-fullness, fullness, and weak $\mathfrak{m}$-fullness (to be recalled in the next section) of certain ideals. More precisely, we are interested in the so-called Dao numbers of the given ideal $I$, i.e., three non-negative integers ${\mathfrak{d}}_{i}(I)$, $i=1,2,3$, defined below, which in some sense provide a measure for the asymptotic behaviour of the full properties of certain product ideals involving $I$. ###### Definition 1.1. The Dao numbers of $I$ are defined as: $\displaystyle{\mathfrak{d}}_{1}(I)$ $\displaystyle=$ $\displaystyle{\rm min}\\{t\geq 0\mid\mbox{$I\mathfrak{m}^{k}$ is $\mathfrak{m}$-full for all $k\geq t$\\}};$ $\displaystyle{\mathfrak{d}}_{2}(I)$ $\displaystyle=$ $\displaystyle{\rm min}\\{t\geq 0\mid\mbox{$I\mathfrak{m}^{k}$ is full for all $k\geq t$\\}};$ $\displaystyle{\mathfrak{d}}_{3}(I)$ $\displaystyle=$ $\displaystyle{\rm min}\\{t\geq 0\mid\mbox{$I\mathfrak{m}^{k}$ is weakly $\mathfrak{m}$-full for all $k\geq t$\\}}.$ It is worth observing that, if $(R,\mathfrak{m},K)$ and $I$ are as above, then as shown in [17, Proposition 2.2] the basic relations among the Dao numbers of $I$ are ${\mathfrak{d}}_{2}(I)\leq{\mathfrak{d}}_{1}(I)={\mathfrak{d}}_{3}(I).$ Our motivation is the following problem suggested by H. Dao, which was first addressed in [17] and then in [6]. ###### Question 1.2. ([4, Question 4.5]) Can we find good lower and upper bounds for the ${\mathfrak{d}}_{i}(I)$’s? 
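Before proceeding, a minimal sanity check may help fix ideas. The following toy computation is standard and is not taken from the paper: in a discrete valuation ring, every nonzero ideal has all three Dao numbers equal to zero.

```latex
% Assumed toy example: R = k[[x]] a DVR, m = (x), and I = (x^a) with a >= 1.
% Every nonzero ideal is a power of m, so colons are computed by comparing exponents:
\[
  I\mathfrak{m}^{k}\mathfrak{m} : \mathfrak{m}
  \;=\; (x^{a+k+1}) : (x)
  \;=\; (x^{a+k})
  \;=\; I\mathfrak{m}^{k}
  \qquad (k \geq 0),
\]
% so I m^k is weakly m-full for every k; the same exponent count shows it is
% m-full and full as well (taking the element x in m \ m^2). Hence
\[
  \mathfrak{d}_{1}(I) \;=\; \mathfrak{d}_{2}(I) \;=\; \mathfrak{d}_{3}(I) \;=\; 0,
\]
% in agreement with the relations d_2(I) <= d_1(I) = d_3(I) recorded above.
```

This is also consistent with the characterizations of regular local rings in Section 5, where the vanishing of $\mathfrak{d}_{3}$ on reductions of $\mathfrak{m}$ detects regularity.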
In the case where $I$ is a reduction of $\mathfrak{m}$, a lower bound for ${\mathfrak{d}}_{3}(I)$ is given by ${\rm r}_{I}(\mathfrak{m})$, the reduction number of $\mathfrak{m}$ with respect to $I$, which follows immediately from [17, Theorem 3.4] (the question as to whether ${\rm r}_{I}(\mathfrak{m})\leq{\mathfrak{d}}_{2}(I)$ remains unanswered). Otherwise, for a general $I$ (not necessarily a reduction of $\mathfrak{m}$), a more elaborate lower bound was established in [6, Proposition 1.8]. So, in the present paper, our main goal is to answer the upper bound part of Dao’s question, by using two fundamental numerical invariants in commutative algebra: the Castelnuovo-Mumford regularity (of appropriate graded structures) and, again, the reduction number. The connection between these numbers and Dao’s problem was first exploited in [17] and later developed even further (concerning specifically the Castelnuovo-Mumford regularity) in [6]. The present work establishes, in fact, generalizations and substantial improvements of the results given in these two papers. Additionally, we shall derive, as an application of some of our results, new characterizations of regular local rings.

## 2\. Auxiliary notions and basic properties

In this section, we invoke some basic concepts and facts which we shall freely use in this note (without explicit mention).

### 2.1. Full properties of ideals

Let $K$ be an infinite field and $(R,\mathfrak{m})$ be either a local ring with residue field $K$ or a standard graded $K$-algebra having a unique homogeneous maximal ideal $\mathfrak{m}$. Assume $\mathrm{depth}\ R>0$, and let $I\subset R$ stand for an ideal (homogeneous whenever $R$ is graded).

###### Definition 2.1.

The following notions are central in this paper:

1. (a) $I$ is $\mathfrak{m}$-full if $I\mathfrak{m}:x=I$ for some element $x\in\mathfrak{m}\setminus{\mathfrak{m}}^{2}$;

2.
(b) $I$ is full if $I:x=I:\mathfrak{m}$ for some element $x\in\mathfrak{m}\setminus{\mathfrak{m}}^{2}$; 3. (c) $I$ is weakly $\mathfrak{m}$-full if $I\mathfrak{m}:\mathfrak{m}=I$. For completeness, we recall a few interesting properties. It is clear that $\mathfrak{m}$-full ideals are weakly $\mathfrak{m}$-full. If $I$ is $\mathfrak{m}$-primary, then $I$ is weakly $\mathfrak{m}$-full if and only if $I$ is basically full in the sense of [8]. Moreover, $\mathfrak{m}$-full ideals satisfy the so-called Rees property, and if $R$ is a normal domain then any integrally closed ideal is $\mathfrak{m}$-full; see [24] (also [7]). ### 2.2. Castelnuovo-Mumford regularity Let $S=\bigoplus_{m\geq 0}S_{m}$ be a finitely generated standard graded algebra over a ring $S_{0}$. As usual, by standard we mean that $S$ is generated by $S_{1}$ as an $S_{0}$-algebra. We write $S_{+}=\bigoplus_{m\geq 1}S_{m}$ for the ideal generated by all elements of $S$ of positive degree. For a graded $S$-module $A=\bigoplus_{m\in{\mathbb{Z}}}A_{m}$ satisfying $A_{m}=0$ for all $m\gg 0$, we let $a(A)=\textrm{max}\\{m\in{\mathbb{Z}}\ |\ A_{m}\neq 0\\}$ if $A\neq 0$, and $a(A)=-\infty$ if $A=0$. Now, for a finitely generated graded $S$-module $N\neq 0$ and an integer $j\geq 0$, we take $A=H_{S_{+}}^{j}(N)$ and use the notation $a_{j}(N)\,:=\,a(H_{S_{+}}^{j}(N)),$ where $H_{S_{+}}^{j}(-)$ stands for the $j$-th local cohomology functor with respect to the ideal $S_{+}$. It is known that $H_{S_{+}}^{j}(N)$ is a graded module with $H_{S_{+}}^{j}(N)_{n}=0$ for all $n\gg 0$ (see, e.g., [1, Proposition 15.1.5(ii)]). Thus, $a_{j}(N)<+\infty$. ###### Definition 2.2. Maintain the above setting and notations. 
The Castelnuovo-Mumford regularity of $N$ is defined as $\mathrm{reg}_{S}N\,:=\,\mathrm{max}\\{a_{j}(N)+j\,\mid\,j\geq 0\\}.$ It is well-known that $\mathrm{reg}\,N$ governs the complexity of the graded structure of $N$ and is relevant in commutative algebra and algebraic geometry, for example in the study of degrees of syzygies over polynomial rings (see, e.g., [1, Chapter 15]). ###### Remark 2.3. A classical instance of interest is when $S=\mathcal{R}(J)$, the Rees algebra of an ideal $J$ in a ring $R$ (to be recalled in the next subsection), which is known to be a finitely generated standard graded $R$-algebra. In particular, we can consider the case where $R$ is local and $J=\mathfrak{m}$, the maximal ideal of $R$. We recall below a few basic rules about this invariant. For details, we refer to [2, p. 277, (a), (c) and (d)], [5, Corollary 20.19] and [10, Lemma 3.1]. * (i) As usual, given an integer $j$, we denote by $N(j)$ the module $N$ with degrees shifted by $j$, that is, $N(j)_{i}=N_{i+j}$ for all $i$. Then, $\mathrm{reg}_{S}N(j)=\mathrm{reg}_{S}N-j.$ * (ii) Let $0\rightarrow M\rightarrow N\rightarrow P\rightarrow 0$ be a short exact sequence of finitely generated graded $S$-modules. Then: * $\bullet$ $\mathrm{reg}_{S}N\leq\mathrm{max}\\{\mathrm{reg}_{S}M,\mathrm{reg}_{S}P\\},$ with equality if $\mathrm{reg}_{S}P\neq\mathrm{reg}_{S}M-1$ or $M_{k}=0$ for $k\gg 0$. * $\bullet$ $\mathrm{reg}_{S}M\leq\mathrm{max}\\{\mathrm{reg}_{S}N,\mathrm{reg}_{S}P+1\\},$ with equality if $\mathrm{reg}_{S}N\neq\mathrm{reg}_{S}P.$ * $\bullet$ $\mathrm{reg}_{S}P\leq\mathrm{max}\\{\mathrm{reg}_{S}N,\mathrm{reg}_{S}M-1\\},$ with equality if $\mathrm{reg}_{S}N\neq\mathrm{reg}_{S}M.$ * (iii) If $N_{j}=0$ for all $j\gg 0$, then $\mathrm{reg}_{S}N=a(N)$. ### 2.3. Rees algebra and Dao module Here, we consider some useful graded structures. Let $R$ be a ring and $J$ an ideal of $R$. ###### Definition 2.4. 
The Rees algebra of $J$ is the graded ring $\mathcal{R}(J)=\bigoplus_{k\geq 0}J^{k}=R\oplus J\oplus J^{2}\oplus\cdots$ ###### Notation 2.5. Given an $R$-module $M$, it is customary to write $\mathcal{R}(J,M)=\bigoplus_{k\geq 0}J^{k}M$ for the Rees module of $J$ relative to $M$. In this paper, if $I$ is another $R$-ideal, we will be particularly interested in $\mathcal{R}(J,I)=\bigoplus_{k\geq 0}IJ^{k}.$ Note $\mathcal{R}(J,I)=I\mathcal{R}(J)$, the extension of $I$ to the ring $\mathcal{R}(J)$, which is therefore a finitely generated ideal of $\mathcal{R}(J)$. Now let $(R,\mathfrak{m})$ be as in Section 1. Its associated graded ring is defined by $\mathrm{gr}_{\mathfrak{m}}(R)=\bigoplus_{k\geq 0}\mathfrak{m}^{k}/\mathfrak{m}^{k+1}$. In [6], the following graded structure is introduced for a given ideal $I$ of $R$. ###### Definition 2.6. The Dao module of $I$ is given by $\mathfrak{D}_{\mathfrak{m}}(I)=\bigoplus_{k\geq 0}\frac{I\mathfrak{m}^{k+1}:\mathfrak{m}}{I\mathfrak{m}^{k}},$ which is a graded $\mathcal{R}(\mathfrak{m})$-module. ###### Remark 2.7. The $k$th component of the Dao module vanishes if and only if the ideal $I{\mathfrak{m}}^{k}$ is weakly $\mathfrak{m}$-full. Since $\mathfrak{D}_{\mathfrak{m}}(I)_{k}=0\quad\mbox{for\, all}\quad k\geq{\mathfrak{d}}_{3}(I),$ it follows that $\mathfrak{D}_{\mathfrak{m}}(I)$ has finite length and therefore is a finitely generated graded $\mathcal{R}(\mathfrak{m})$-module, satisfying $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathfrak{D}_{\mathfrak{m}}(I)={\mathfrak{d}}_{3}(I)-1$ whenever ${\mathfrak{d}}_{3}(I)\geq 1$ (e.g., if $I$ is not weakly $\mathfrak{m}$-full). ### 2.4. Ratliff-Rush operation Let $I$ be an ideal of a ring $R$. ###### Definition 2.8. The Ratliff-Rush closure $\widetilde{I}$ of the ideal $I$ is given by $\widetilde{I}=\bigcup_{m\geq 1}\,I^{m+1}:I^{m}.$ This is an ideal of $R$ containing $I$ which in fact refines the integral closure of $I$, so that $\widetilde{I}=I$ whenever $I$ is integrally closed. 
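The containment $I\subset\widetilde{I}$ can be strict. The following computation is a classical example from the Ratliff-Rush literature (not taken from this paper), carried out in $R=k[\\![x,y]\\!]$ with $\mathfrak{m}=(x,y)$.

```latex
% Classical example (assumed, standard in the literature): in R = k[[x,y]]
% take the m-primary monomial ideal I = (x^4, x^3*y, x*y^3, y^4).
% Multiplying x^2 y^2 against each generator of I lands in I^2:
\[
  x^{2}y^{2}\cdot x^{4}=(x^{3}y)^{2},\qquad
  x^{2}y^{2}\cdot x^{3}y=x^{4}\cdot xy^{3},\qquad
  x^{2}y^{2}\cdot xy^{3}=x^{3}y\cdot y^{4},\qquad
  x^{2}y^{2}\cdot y^{4}=(xy^{3})^{2}.
\]
% Hence x^2 y^2 lies in I^2 : I, and therefore in the Ratliff-Rush closure of I,
% while x^2 y^2 is divisible by no generator of the monomial ideal I, so it is not in I:
\[
  x^{2}y^{2}\in I^{2}:I\subset\widetilde{I},
  \qquad x^{2}y^{2}\notin I,
  \qquad\mbox{so}\qquad I\subsetneq\widetilde{I}.
\]
```

In particular $I$ is not integrally closed, as predicted by the fact that $\widetilde{I}=I$ for integrally closed ideals.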
For details, see [18]. Now suppose $I$ contains a regular element, i.e., a non-zerodivisor on $R$. Then it is well-known that $\widetilde{I}$ is the largest ideal that shares with $I$ the same sufficiently high powers; hence, $\widetilde{I^{m}}=I^{m}\quad\mbox{for\, all}\quad m\gg 0.$ This enables us to consider the following helpful number (inspired by [19, Proposition 4.2]). ###### Notation 2.9. If $I$ contains a regular element, we set $s(I)=\mathrm{min}\,\\{n\geq 1\,\mid\,\widetilde{I^{i}}=I^{i}\ \mathrm{for\ all}\ i\geq n\\}.$ ###### Remark 2.10. Let us invoke a couple of useful properties. First, according to [14, Lemma 2.2] we can write $\widetilde{I^{k+1}}:I=\widetilde{I^{k}}\quad\mbox{for \,all}\quad k\geq 0.$ Moreover, if $\mathrm{gr}_{I}(R)=\bigoplus_{k\geq 0}I^{k}/I^{k+1}$ denotes the associated graded ring of $I$, then by [19, Remark 1.6] we get that $\widetilde{I^{k}}=I^{k}$ for all $k\geq 0$ (i.e., $s(I)=1$) if and only if $\mathrm{depth}\,\mathrm{gr}_{I}(R)>0.$ ### 2.5. Reduction number One last auxiliary notion is in order. ###### Definition 2.11. Let $J$ be a proper ideal of a ring $R$. An ideal $I\subset J$ is said to be a reduction of $J$ if $IJ^{r}=J^{r+1}$ for some integer $r\geq 0$. Such a reduction $I$ is minimal if it is minimal with respect to inclusion. If $I$ is a reduction of $J$, we define the reduction number of $J$ with respect to $I$ as the number ${\rm r}_{I}(J)=\mathrm{min}\,\\{m\in\mathbb{N}\,\mid\,IJ^{m}=J^{m+1}\\},$ and the reduction number of $J$ as ${\rm r}(J)=\mathrm{min}\,\\{{\rm r}_{I}(J)\,\mid\,\mbox{$I$ is a minimal reduction of $J$}\\}.$ Of special interest in this paper will be the case where $(R,\mathfrak{m})$ is a local ring and $J=\mathfrak{m}$. ## 3\. Upper bounds on Dao numbers via Castelnuovo-Mumford regularity Before establishing the main results of the section, we fix a piece of notation. 
For a graded $R$-module $M=\bigoplus_{k\geq 0}M_{k}$ and an integer $\ell\geq 0$, we can consider the truncation $M_{\geq\ell}=\bigoplus_{k\geq\ell}M_{k}$. Specifically, with notation as in (2.5) and in case $(R,\mathfrak{m},K)$ is a local ring or a standard graded $K$-algebra, we will be interested in the truncation $\mathcal{R}(\mathfrak{m},I)_{\geq 1}=\bigoplus_{k\geq 1}I{\mathfrak{m}}^{k}.$ Here we are interested in tackling the upper bound part of Question 1.2 in terms of the Castelnuovo-Mumford regularity of appropriate graded structures. The first result in this direction, in case $R$ is local and $I$ is a reduction of $\mathfrak{m}$, was proved in [17] and can be stated as follows. ###### Theorem 3.1. $($[17, Theorem 3.10]$)$ Let $(R,\mathfrak{m})$ be a local ring with infinite residue field and ${\rm depth}\,R>0$, and let $I$ be a reduction of $\mathfrak{m}$. Then, ${\mathfrak{d}}_{3}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m}).$ Later, in [6], the following general answer to Question 1.2 was provided. ###### Theorem 3.2. $($[6, Theorem 1.1]$)$ Let $(R,\mathfrak{m},K)$ be either a local ring or a standard graded $K$-algebra, with $K$ infinite and $\mathrm{depth}\,R>0$. Let $I\subset R$ be an ideal $($homogeneous if $R$ is graded$)$. Then, ${\mathfrak{d}}_{3}(I)\leq\mathrm{max}\\{\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m},I),\ \mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m},I)_{\geq 1}:_{\mathcal{R}(R)}\mathfrak{m}\\}.$ Here, we prove the following result, which establishes [6, Conjecture 0.1] in its full generality. ###### Theorem 3.3. Let $R$ and $I$ be as in Theorem 3.2. Then, ${\mathfrak{d}}_{3}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m},I).$ Proof. We may suppose $\mathfrak{d}_{3}(I)>0$, otherwise there is nothing to prove. 
Let $\mathfrak{M}=\mathcal{R}(\mathfrak{m})_{+}=\bigoplus_{k>0}\mathfrak{m}^{k}$ be the homogeneous maximal ideal of $\mathcal{R}(\mathfrak{m})$. It was shown in [6, proof of Theorem 1.3] that (1) $\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))\ =\ 0:_{\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I)}\mathfrak{M}\ =\ \bigoplus_{k\geq 0}\frac{(I\mathfrak{m}^{k+1}:\mathfrak{m})\cap\mathfrak{m}^{k}}{I\mathfrak{m}^{k}}.$ Now, since $\frac{(I\mathfrak{m}^{k+1}:\mathfrak{m})\cap\mathfrak{m}^{k}}{I\mathfrak{m}^{k}}\ \subset\ \frac{I\mathfrak{m}^{k+1}:\mathfrak{m}}{I\mathfrak{m}^{k}}\quad\mbox{for\, all}\quad k\geq 0,$ we have (2) $\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))\ \subset\ \mathfrak{D}_{\mathfrak{m}}(I)$ as a graded $\mathcal{R}(\mathfrak{m})$-submodule of $\mathfrak{D}_{\mathfrak{m}}(I)$. Next, let $t=\max\\{k\mid\mathfrak{D}_{\mathfrak{m}}(I)_{k}\neq 0\\}$. Then, $t=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathfrak{D}_{\mathfrak{m}}(I)=\mathfrak{d}_{3}(I)-1$. Because $\mathfrak{M}\cdot\mathfrak{D}_{\mathfrak{m}}(I)_{t}=0$, it follows that $\mathfrak{D}_{\mathfrak{m}}(I)_{t}\ \subset\ [0:_{\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I)}\mathfrak{M}]_{t}\ =\ \mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))_{t}$ and note that the opposite inclusion always holds by equation (2). Therefore, $\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))_{t}=\mathfrak{D}_{\mathfrak{m}}(I)_{t}\neq 0$ and consequently $\displaystyle\mathfrak{d}_{3}(I)-1\ =\ t\ $ $\displaystyle\leq\ \max\\{j\mid\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))_{j}\neq 0\\}$ $\displaystyle\leq\ \textrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I)$ $\displaystyle=\ \textrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)-1.$ The assertion follows. $\square$ Naturally, we can ask when the inclusion (2) is actually an equality. 
###### Proposition 3.4.

Let $R$ and $I$ be as in Theorem 3.2. If $R$ is a standard graded $K$-algebra or $\mathrm{depth}\,\mathrm{gr}_{\mathfrak{m}}(R)>0$, then $\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))=\mathfrak{D}_{\mathfrak{m}}(I)$.

Proof. In view of equation (1), it suffices to show that $I\mathfrak{m}^{k+1}:\mathfrak{m}\ \subset\ \mathfrak{m}^{k}\quad\mbox{for\, all}\quad k\geq 0.$ First, if $R$ is a standard graded $K$-algebra, we can represent it as a quotient $S/J$ where $S=K[x_{1},\dots,x_{n}]$ is a standard graded polynomial ring and $J\subset S$ is a homogeneous ideal. Now let $f\in I\mathfrak{m}^{k+1}:\mathfrak{m}$ be a homogeneous element. Then $(x_{1}+J)f\in I\mathfrak{m}^{k+1}\subset\mathfrak{m}^{k+1}$ and so $\deg f+1\geq k+1$. Thus $f\in\mathfrak{m}^{k}$ and consequently $I\mathfrak{m}^{k+1}:\mathfrak{m}\subset\mathfrak{m}^{k}$ for all $k\geq 0$. Otherwise, if $\mathrm{depth}\,\mathrm{gr}_{\mathfrak{m}}(R)>0$ then, as we know, $\widetilde{\mathfrak{m}^{k}}=\mathfrak{m}^{k}$ for all $k\geq 0$. Consequently, $I\mathfrak{m}^{k+1}:\mathfrak{m}\subset\mathfrak{m}^{k+1}:\mathfrak{m}=\widetilde{\mathfrak{m}^{k+1}}:\mathfrak{m}=\widetilde{\mathfrak{m}^{k}}=\mathfrak{m}^{k}.$ $\square$

We conclude this section with some other consequences of Theorem 3.3.

###### Corollary 3.5.

Let $R$ and $I$ be as in Theorem 3.2. If $\mathfrak{d}_{3}(I)>0$, then $\mathfrak{d}_{3}(I)$ is equal to the highest socle degree of $\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I)$ increased by one, that is, $\mathfrak{d}_{3}(I)\ =\ \max\\{j\mid\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))_{j}\neq 0\\}+1.$

Proof. Set $h=\max\\{j\mid\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))_{j}\neq 0\\}$. In the proof of Theorem 3.3 we have seen that $\mathfrak{d}_{3}(I)\leq h+1$.
On the other hand, since $\mathfrak{d}_{3}(I)>0$, we can write $\mathfrak{d}_{3}(I)=\max\\{j\mid\mathfrak{D}_{\mathfrak{m}}(I)_{j}\neq 0\\}+1$. By equation (2) we have an inclusion $\mathrm{Soc}(\mathcal{R}(\mathfrak{m})/\mathcal{R}(\mathfrak{m},I))\subset\mathfrak{D}_{\mathfrak{m}}(I)$, so that $\mathfrak{d}_{3}(I)\ =\ \max\\{j\mid\mathfrak{D}_{\mathfrak{m}}(I)_{j}\neq 0\\}+1\ \geq\ h+1,$ thus completing the proof. $\square$

Next, we estimate the Castelnuovo-Mumford regularity of some natural blowup algebras that are related to the Dao module of $I$.

###### Definition 3.6.

The associated graded module of $\mathfrak{m}$ relative to $I$ is the $\mathcal{R}(\mathfrak{m})$-module $\mathrm{gr}_{\mathfrak{m}}(I)=\mathcal{R}(\mathfrak{m},I)/\mathfrak{m}\mathcal{R}(\mathfrak{m},I)=\bigoplus_{k\geq 0}I\mathfrak{m}^{k}/I\mathfrak{m}^{k+1}.$

Note that, for each $k\geq 0$, we have the following commutative diagram with exact rows and columns:

$\begin{array}{ccccccccc} & & 0 & & 0 & & & & \\\\ & & \downarrow & & \downarrow & & & & \\\\ & & I\mathfrak{m}^{k+1} & = & I\mathfrak{m}^{k+1} & & & & \\\\ & & \downarrow & & \downarrow & & & & \\\\ 0 & \rightarrow & I\mathfrak{m}^{k} & \rightarrow & I\mathfrak{m}^{k+1}:\mathfrak{m} & \rightarrow & \cfrac{I\mathfrak{m}^{k+1}:\mathfrak{m}}{I\mathfrak{m}^{k}} & \rightarrow & 0 \\\\ & & \downarrow & & \downarrow & & \| & & \\\\ 0 & \rightarrow & \cfrac{I\mathfrak{m}^{k}}{I\mathfrak{m}^{k+1}} & \rightarrow & \cfrac{I\mathfrak{m}^{k+1}:\mathfrak{m}}{I\mathfrak{m}^{k+1}} & \rightarrow & \cfrac{I\mathfrak{m}^{k+1}:\mathfrak{m}}{I\mathfrak{m}^{k}} & \rightarrow & 0 \\\\ & & \downarrow & & \downarrow & & & & \\\\ & & 0 & & 0 & & & & \end{array}$

###### Notation 3.7.

For simplicity, we set $\mathcal{P}_{\mathfrak{m}}(I)=\mathcal{R}(\mathfrak{m},I)_{\geq 1}:_{\mathcal{R}(R)}\mathfrak{m}\quad\mbox{and}\quad\mathcal{Q}_{\mathfrak{m}}(I)=(\mathcal{R}(\mathfrak{m},I)_{\geq 1}:_{\mathcal{R}(R)}\mathfrak{m})/\mathcal{R}(\mathfrak{m},I)_{\geq 1}.$

Taking the direct sum in the above diagram, we obtain the following commutative diagram of finitely generated graded $\mathcal{R}(\mathfrak{m})$-modules with exact rows and columns:

$\begin{array}{ccccccccc} & & 0 & & 0 & & & & \\\\ & & \downarrow & & \downarrow & & & & \\\\ & & \mathcal{R}(\mathfrak{m},I)_{\geq 1}(1) & = & \mathcal{R}(\mathfrak{m},I)_{\geq 1}(1) & & & & \\\\ & & \downarrow & & \downarrow & & & & \\\\ 0 & \rightarrow & \mathcal{R}(\mathfrak{m},I) & \rightarrow & \mathcal{P}_{\mathfrak{m}}(I)(1) & \rightarrow & \mathfrak{D}_{\mathfrak{m}}(I) & \rightarrow & 0 \\\\ & & \downarrow & & \downarrow & & \| & & \\\\ 0 & \rightarrow & \mathrm{gr}_{\mathfrak{m}}(I) & \rightarrow & \mathcal{Q}_{\mathfrak{m}}(I)(1) & \rightarrow & \mathfrak{D}_{\mathfrak{m}}(I) & \rightarrow & 0 \\\\ & & \downarrow & & \downarrow & & & & \\\\ & & 0 & & 0 & & & & \end{array}$

The lemma below is a special case of [25, Corollary 3].

###### Lemma 3.8.

There is an equality $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathrm{gr}_{\mathfrak{m}}(I)$. Now we prove:

###### Corollary 3.9.

Let $R$ and $I$ be as in Theorem 3.2. Then, $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{P}_{\mathfrak{m}}(I),\,\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{Q}_{\mathfrak{m}}(I)\ \leq\ \mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)+1.$ In particular, $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{P}_{\mathfrak{m}}(I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{Q}_{\mathfrak{m}}(I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)+1$ if either $\mathfrak{d}_{3}(I)=0$ or $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)>\mathfrak{d}_{3}(I)>0$.

Proof.
There are short exact sequences of finitely generated graded $\mathcal{R}(\mathfrak{m})$-modules $0\rightarrow\mathcal{R}(\mathfrak{m},I)\rightarrow\mathcal{P}_{\mathfrak{m}}(I)(1)\rightarrow\mathfrak{D}_{\mathfrak{m}}(I)\rightarrow 0,$ $0\rightarrow\mathrm{gr}_{\mathfrak{m}}(I)\rightarrow\mathcal{Q}_{\mathfrak{m}}(I)(1)\rightarrow\mathfrak{D}_{\mathfrak{m}}(I)\rightarrow 0.$ Recall that Lemma 3.8 gives $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathrm{gr}_{\mathfrak{m}}(I)$. First, assume that $\mathfrak{d}_{3}(I)=0$. Then, $\mathfrak{D}_{\mathfrak{m}}(I)=0$. In this case, $\mathcal{P}_{\mathfrak{m}}(I)(1)\cong\mathcal{R}(\mathfrak{m},I)$ and $\mathcal{Q}_{\mathfrak{m}}(I)(1)\cong\mathrm{gr}_{\mathfrak{m}}(I)$, so that $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{P}_{\mathfrak{m}}(I)-1=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathrm{gr}_{\mathfrak{m}}(I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{Q}_{\mathfrak{m}}(I)-1.$ Now, suppose $\mathfrak{d}_{3}(I)>0$. In particular $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathfrak{D}_{\mathfrak{m}}(I)=\mathfrak{d}_{3}(I)-1$. 
On the other hand, Theorem 3.3 yields $\mathfrak{d}_{3}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathrm{gr}_{\mathfrak{m}}(I)$ and thus the above exact sequence implies $\displaystyle\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{P}_{\mathfrak{m}}(I)(1)\ $ $\displaystyle\leq\ \max\\{\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I),\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathfrak{D}_{\mathfrak{m}}(I)\\}$ $\displaystyle=\ \max\\{\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I),\mathfrak{d}_{3}(I)-1\\}$ $\displaystyle=\ \mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I).$ Similarly, $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{Q}_{\mathfrak{m}}(I)(1)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)$. Hence, $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{P}_{\mathfrak{m}}(I),\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{Q}_{\mathfrak{m}}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)+1.$ Finally, if we suppose that $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)>\mathfrak{d}_{3}(I)>0$, we derive $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)-1\neq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathfrak{D}_{\mathfrak{m}}(I),$ and the equalities $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{P}_{\mathfrak{m}}(I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{Q}_{\mathfrak{m}}(I)=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m},I)+1$ follow. $\square$ Next, we strengthen [6, Proposition 1.6] by finding the relation between the regularity of an ideal as in (2.5) and that of the ambient Rees algebra $\mathcal{R}(J)$. ###### Lemma 3.10. Let $(R,\mathfrak{m},K)$ be either a local ring or a standard graded $K$-algebra. Let $I,J$ be ideals of $R$. 
Then, $\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J)\leq\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J,I),$ with equality if $I$ is a reduction of $J$. Proof. Since $\mathcal{R}(J,I)$ is a homogeneous ideal of $\mathcal{R}(J)$, the short exact sequence $0\rightarrow\mathcal{R}(J,I)\rightarrow\mathcal{R}(J)\rightarrow\frac{\mathcal{R}(J)}{\mathcal{R}(J,I)}\rightarrow 0$ yields $\displaystyle\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J)$ $\displaystyle\leq$ $\displaystyle\mathrm{max}\Big{\\{}\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J,I),\mathrm{reg}_{\mathcal{R}(J)}\frac{\mathcal{R}(J)}{\mathcal{R}(J,I)}\Big{\\}}$ $\displaystyle=$ $\displaystyle\mathrm{max}\Big{\\{}\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J,I),\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J,I)-1\Big{\\}}$ $\displaystyle=$ $\displaystyle\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J,I).$ For the equality part, the proof of $\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J)\geq\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J,I)$ is exactly the same as the one given in [6, Proposition 1.6] for the case $J=\mathfrak{m}$. $\square$ ###### Remark 3.11. Notice that, as a direct consequence of Theorem 3.3 and Lemma 3.10, we rediscover Theorem 3.1. In our view, and in connection to Lemma 3.10, it is worth asking the following. ###### Question 3.12. Let $(R,\mathfrak{m},K)$ be either a local ring or a standard graded $K$-algebra. Let $I\subset J$ be ideals of $R$. When does the condition $\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J)=\mathrm{reg}_{\mathcal{R}(J)}\mathcal{R}(J,I)$ force $I$ to be a reduction of $J$? Inspired by Theorem 3.1 and Theorem 3.3, we might wonder whether the comparison ${\mathfrak{d}}_{3}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})$ holds in general. However, in the case where $I$ is not a reduction of $\mathfrak{m}$, the relationship with the regularity of $\mathcal{R}(\mathfrak{m})$ may become rather wild, as we can see in the examples below. ###### Example 3.13. 
Let $(R,\mathfrak{m})$ be a local ring with infinite residue field and ${\rm depth}\,R>0$. By [20, Proposition 1.5], we have $\mathfrak{m}^{n+1}:\mathfrak{m}=\mathfrak{m}^{n}$ for all $n\geq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})$. Now set $I={\mathfrak{m}}^{k}$ for any given $k\geq 2$ (note $I$ cannot be a reduction of $\mathfrak{m}$). We can write (3) $I\mathfrak{m}^{n}=\mathfrak{m}^{n+k}=\mathfrak{m}^{n+k+1}:\mathfrak{m}=I\mathfrak{m}^{n+1}:\mathfrak{m}$ for all $n\geq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})$ and therefore ${\mathfrak{d}}_{3}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})$. Furthermore notice that, by using (3) whenever $n\geq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})-k\geq 0$, we obtain ${\mathfrak{d}}_{3}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})-k<\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m}).$ Finally, if $k\geq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})$ then it is easy to see that ${\mathfrak{d}}_{3}(I)=0$. In particular, if $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})\leq 2$ then ${\mathfrak{d}}_{3}(I)=0$. ###### Example 3.14. Let $R$ be as in [17, Example 4.3]. As observed there, $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})=8$. Consider the ideal $I={\mathfrak{m}}^{2}$. By Example 3.13 above, we can write ${\mathfrak{d}}_{3}(I)\leq\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m})-2=6,$ whereas, on the other hand, a computation shows $I{\mathfrak{m}}^{5}\neq I{\mathfrak{m}}^{6}:\mathfrak{m}$. Therefore, we must have ${\mathfrak{d}}_{3}(I)=6$. We are also able to illustrate the opposite situation, as follows. ###### Example 3.15. Let $I=(x^{a},y^{a})\subset R=k[\\![x,y]\\!]$, where $k$ is a field and $a\geq 2$. Clearly, $I$ is not a reduction of $\mathfrak{m}=(x,y)$. 
From [4, Example 4.1] we can write ${\mathfrak{d}}_{3}(I)=a-1>0=\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\,\mathcal{R}(\mathfrak{m}),$ where the last equality holds because $R$ is a regular local ring (see Theorem 5.3 for a more general statement). ## 4\. Approach (and a conjecture) via reduction numbers We begin recalling the following result in dimension one. ###### Proposition 4.1. $($[17, Corollary 3.7]$)$ If $(R,\mathfrak{m})$ is a one-dimensional Cohen- Macaulay local ring with infinite residue field, then $\mathfrak{d}_{3}(I)={\rm r}(\mathfrak{m})$ for any minimal reduction $I$ of $\mathfrak{m}$. It is worth pointing out that this proposition is no longer true if the reduction $I$ is not required to be minimal, as exemplified in [17, Example 4.2]. In higher dimension, there is the conjecture below. ###### Conjecture 4.2. $($[17, Conjecture 3.8]$)$ If $(R,\mathfrak{m})$ is a Cohen-Macaulay local ring with infinite residue field and ${\rm dim}\,R\geq 2$, then $\mathfrak{d}_{3}(I)={\rm r}_{I}(\mathfrak{m})$ for any minimal reduction $I$ of $\mathfrak{m}$. Our main purpose in this section is to provide partial answers to this conjecture. A crucial tool in this investigation is given by the following result. ###### Theorem 4.3. $($[17, Theorem 3.4]$)$ Let $(R,\mathfrak{m})$ be a local ring with infinite residue field and ${\rm depth}\,R>0$, and let $I$ be a reduction of $\mathfrak{m}$. Then $\mathfrak{d}_{3}(I)=\mathrm{max}\\{{\rm r}_{I}(\mathfrak{m}),s(\mathfrak{m})-1\\}.$ Now we can illustrate Conjecture 4.2 in the case ${\rm dim}\,R=2$. ###### Example 4.4. Let $(R,\mathfrak{m})$ be the local ring of a rational triple point as in [17, Example 4.1], where we highlighted that $s(\mathfrak{m})=1$. Then, by Theorem 4.3, we obtain $\mathfrak{d}_{3}(I)={\rm r}_{I}(\mathfrak{m})$ for any reduction (in particular, minimal reduction) $I$ of $\mathfrak{m}$. 
The above example actually shows that Conjecture 4.2 is true in the case $s(\mathfrak{m})=1$, or equivalently, ${\rm depth}\,{\rm gr}_{\mathfrak{m}}(R)>0$ (see also Proposition 4.9 below). Furthermore, it is clear that $\widetilde{\mathfrak{m}}=\mathfrak{m}$, which forces $s(\mathfrak{m})\neq 2$. Therefore, in tackling the conjecture we can suppose $s(\mathfrak{m})\geq 3$. ###### Theorem 4.5. Conjecture 4.2 holds true in case $s:=s(\mathfrak{m})\geq 3$ and $I\widetilde{\mathfrak{m}^{s-2}}=I\mathfrak{m}^{s-2}.$ Proof. By virtue of Theorem 4.3, it suffices to show that ${\rm r}_{I}(\mathfrak{m})\geq s-1$. Suppose, by way of contradiction, that ${\rm r}_{I}(\mathfrak{m})\leq s-2.$ Since $I$ is a minimal reduction of $\mathfrak{m}$, we can apply [14, Proposition 2.4 and Theorem 2.10] to obtain $I\widetilde{\mathfrak{m}^{k}}=\widetilde{\mathfrak{m}^{k+1}}$ for all $k\geq{\rm r}_{I}(\mathfrak{m})$. Consequently, $\widetilde{\mathfrak{m}^{s-1}}=I\widetilde{\mathfrak{m}^{s-2}}=I\mathfrak{m}^{s-2}=\mathfrak{m}^{s-1},$ which contradicts the definition of $s$. $\square$ Since $\mathfrak{m}=\widetilde{\mathfrak{m}}$ is always true, the case $s(\mathfrak{m})=3$ is a straightforward consequence of Theorem 4.5. ###### Corollary 4.6. Conjecture 4.2 is true if $s(\mathfrak{m})=3$. ###### Example 4.7. Let $K$ be an infinite field and $R=\frac{K[\\![x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7}]\\!]}{(x_{1}^{2},x_{1}x_{2},x_{1}x_{3},x_{1}x_{4},x_{2}x_{3},x_{2}x_{4},x_{3}x_{4},x_{2}^{3}-x_{1}x_{5},x_{3}^{3}-x_{1}x_{6},x_{4}^{3}-x_{1}x_{7})},$ which is a three-dimensional Cohen-Macaulay local ring. Notice that $x_{1}\in\widetilde{\mathfrak{m}^{2}}\setminus\mathfrak{m}^{2}$ and $\widetilde{\mathfrak{m}^{n}}=\mathfrak{m}^{n}$ for all $n\geq 3$. Thus, $s(\mathfrak{m})=3$. By Corollary 4.6, we obtain $\mathfrak{d}_{3}(I)={\rm r}_{I}(\mathfrak{m})$ for any minimal reduction $I$ of $\mathfrak{m}$. 
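A one-dimensional sanity check for Theorem 4.3 may be instructive here. The following is a standard computation on a numerical semigroup ring, not taken from the paper.

```latex
% Assumed toy example: R = K[[t^3, t^4, t^5]], a one-dimensional Cohen-Macaulay
% local ring with m = (t^3, t^4, t^5), and the minimal reduction I = (t^3).
% Since m^2 = (t^6, t^7, t^8) = t^3 * m, while m is not principal, the
% reduction number of m with respect to I is
\[
  {\rm r}_{I}(\mathfrak{m}) \;=\; 1.
\]
% Moreover R has minimal multiplicity (e(R) = 3 = embdim(R) - dim(R) + 1),
% so gr_m(R) is Cohen-Macaulay; in particular depth gr_m(R) > 0, whence s(m) = 1.
% Theorem 4.3 then gives
\[
  \mathfrak{d}_{3}(I)
  \;=\; \mathrm{max}\{{\rm r}_{I}(\mathfrak{m}),\, s(\mathfrak{m})-1\}
  \;=\; \mathrm{max}\{1, 0\}
  \;=\; 1,
\]
% matching Proposition 4.1, which predicts d_3(I) = r(m) = 1 in dimension one.
```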
When $s:=s(\mathfrak{m})\geq 4$, it may be difficult to determine whether $I\widetilde{\mathfrak{m}^{s-2}}=I\mathfrak{m}^{s-2}$. However, by employing a different argument, we are able to establish Conjecture 4.2 in the case $s(\mathfrak{m})=4$. ###### Proposition 4.8. Conjecture 4.2 is true if $s(\mathfrak{m})=4$. Proof. Suppose ${\rm r}_{I}(\mathfrak{m})\leq s(\mathfrak{m})-2=2$. Then, by [3, Theorem 3.6], the ring $\mathrm{gr}_{\mathfrak{m}}(R)$ must be Cohen-Macaulay. Hence ${\rm depth}\,\mathrm{gr}_{\mathfrak{m}}(R)={\rm dim}\,\mathrm{gr}_{\mathfrak{m}}(R)={\rm dim}\,R>0$. But this implies $s(\mathfrak{m})=1$, which is a contradiction. Therefore, ${\rm r}_{I}(\mathfrak{m})\geq s(\mathfrak{m})-1$ and the result follows from Theorem 4.3. $\square$ We close the section with yet another affirmative case of the conjecture. ###### Proposition 4.9. Conjecture 4.2 is true if $R$ has minimal multiplicity. Proof. For such $R$, the ring ${\rm gr}_{\mathfrak{m}}(R)$ is Cohen-Macaulay (see [22, Theorem 2]), hence its depth is equal to ${\rm dim}\,R\geq 2>0$. In this case, as we already know, the conjecture holds true. $\square$ ## 5\. Application: Regular local rings In this section, as applications of some of our results, we provide new characterizations of regular local rings, and describe a potential approach to the long-standing Zariski-Lipman conjecture about derivations. ### 5.1. Characterizations of regular local rings Our result in this part is Theorem 5.3 below. We observe that the implication (a) $\Rightarrow$ (d) recovers [17, Corollary 3.11]. Moreover, a crucial fact here (which, as far as we know, is new) is given by (b) $\Rightarrow$ (a), i.e., $(R,\mathfrak{m})$ must be regular if $\mathfrak{m}$ is generated by a $d$-sequence (see [11], [12] for the definition of this type of sequence and its properties), which in particular solves the problem suggested in [17, Remark 3.12].
Finally, the equivalence between assertions (a) and (e) reveals the curious role played by ${\mathfrak{d}}_{3}(I)$ in regard to the theory of regular local rings, which can be re-expressed by means of equivalence to the structural assertion (f). For the proof, the following two interesting facts will be useful. ###### Lemma 5.1. ([23, Corollary 5.2]) Let $(R,\mathfrak{m})$ be a local ring with infinite residue field. Then, $\mathfrak{m}$ is generated by a $d$-sequence if and only if $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m})=0$. ###### Lemma 5.2. ([15, Theorem 1.1]) Let $(R,\mathfrak{m})$ be a local ring with ${\rm depth}\,R>0$. If $Q$ is a parameter ideal of $R$ such that $Q^{n}$ is $\mathfrak{m}$-full for some $n\geq 1$, then $R$ is regular. ###### Theorem 5.3. Let $(R,\mathfrak{m},K)$ be either a local ring or a standard graded $K$-algebra, with $K$ infinite and $\mathrm{depth}\,R>0$. The following assertions are equivalent: * (a) $R$ is regular; * (b) $\mathfrak{m}$ is generated by a $d$-sequence; * (c) $s(\mathfrak{m})=1$ and ${\rm r}_{I}(\mathfrak{m})=0$, for any (minimal) reduction $I$ of $\mathfrak{m}$; * (d) ${\mathfrak{d}}_{1}(I)={\mathfrak{d}}_{2}(I)={\mathfrak{d}}_{3}(I)=0$, for any (minimal) reduction $I$ of $\mathfrak{m}$; * (e) ${\mathfrak{d}}_{3}(I)=0$, for any (minimal) reduction $I$ of $\mathfrak{m}$; * (f) $\mathcal{R}(\mathfrak{m},I)=(\mathcal{R}(\mathfrak{m},I)_{\geq 1}:_{\mathcal{R}(R)}\mathfrak{m})(1)$, for any (minimal) reduction $I$ of $\mathfrak{m}$. Proof. (a) $\Rightarrow$ (b) If $R$ is regular, then $\mathfrak{m}$ is generated by a regular sequence, which is therefore a $d$-sequence. (b) $\Rightarrow$ (c) According to [20, Theorem 2.1(ii)], we have ${\rm max}\\{\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m}),1\\}\geq s(\mathfrak{m})$. Hence, using Lemma 5.1, we obtain $s(\mathfrak{m})=1$. 
In order to deal with ${\rm r}_{I}(\mathfrak{m})$, consider first the case where the reduction $I$ is minimal. Applying [20, Theorem 1.3] and [21, p. 12], we derive $\mathrm{reg}_{\mathcal{R}(\mathfrak{m})}\mathcal{R}(\mathfrak{m})\geq{\rm r}_{I}(\mathfrak{m}),$ which gives ${\rm r}_{I}(\mathfrak{m})=0$. Now, if $I$ is not minimal, then it necessarily contains a minimal reduction $J$ of $\mathfrak{m}$, so that ${\rm r}_{I}(\mathfrak{m})\leq{\rm r}_{J}(\mathfrak{m})=0$. (c) $\Rightarrow$ (d) This follows readily from the fact (proved in [17, Theorem 3.4]) that ${\mathfrak{d}}_{3}(I)\leq\mathrm{max}\\{{\rm r}_{I}(\mathfrak{m}),s(\mathfrak{m})-1\\}.$ Now, by [17, Proposition 2.2], the vanishing of ${\mathfrak{d}}_{3}(I)$ forces that of ${\mathfrak{d}}_{1}(I)$ and ${\mathfrak{d}}_{2}(I)$. (d) $\Rightarrow$ (e) This is obvious. (e) $\Rightarrow$ (a) In the present setting, $\mathfrak{m}$ admits a reduction $Q$ which is a parameter ideal (see, e.g., [13, Exercise 8.11(ii)]). As ${\mathfrak{d}}_{3}(Q)={\mathfrak{d}}_{1}(Q)$, we obtain ${\mathfrak{d}}_{1}(Q)=0$ and consequently $Q$ is $\mathfrak{m}$-full. We are now in a position to apply Lemma 5.2 with $n=1$ to conclude that $R$ is regular. It remains to prove that assertions (a) and (f) are equivalent. To this end, simply note that (f) holds if and only if $(\mathcal{R}(\mathfrak{m},I))_{k}=(\mathcal{R}(\mathfrak{m},I)_{k\geq 1}:_{\mathcal{R}(R)}\mathfrak{m})(1)_{k}$ for all $k\geq 0$ and any (minimal) reduction $I$ of $\mathfrak{m}$, which means $I\mathfrak{m}^{k}=I\mathfrak{m}^{k+1}:\mathfrak{m}\quad\mbox{for\, all}\quad k\geq 0.$ This is, by definition, tantamount to ${\mathfrak{d}}_{3}(I)=0$, which as we have shown above is equivalent to $R$ being regular. $\square$ ### 5.2. 
Potential approach to a classical conjecture

For a field $K$ and a $K$-algebra $R$, we write as usual ${\rm Der}_{K}(R)$ for the module of $K$-derivations of $R$, i.e., the additive maps $D\colon R\to R$ that vanish on $K$ and satisfy the Leibniz rule: $D(\alpha\beta)=\alpha D(\beta)+\beta D(\alpha)$ for all $\alpha,\beta\in R$. Now assume that $R$ is a positive-dimensional local ring which is either (4) ${K}[x_{1},\ldots,x_{m}]_{\mathfrak{q}}/I\ \ \ ({\mathfrak{q}}\in{\rm Spec}\,{K}[x_{1},\ldots,x_{m}])\quad\mbox{or}\quad{K}[\\![x_{1},\ldots,x_{m}]\\!]/I,$ with $I$ a proper radical ideal and $x_{1},\ldots,x_{m}$ indeterminates over a field ${K}$ of characteristic zero. In this setting, there is the following long-held classical problem. ###### Conjecture 5.4. (Zariski-Lipman) Let $R$ be as in (4). If ${\rm Der}_{K}(R)$ is free, then $R$ is regular. This problem has a long and interesting history, featuring in particular a strong geometric counterpart, and remains open in some cases. For details and references, see [9, Section 2] (in particular, see [9, Theorem 2.3] for a simple proof of the Zariski-Lipman conjecture in the graded case). Additionally, [16, Section 4] shows a relation between this conjecture and the $\mathfrak{m}$-full property of ideals in the (open) two-dimensional local case, which makes it somewhat natural to expect further connections. We point out that, under the hypotheses of the conjecture, $R$ must be a (normal) domain, and we can write ${\rm Der}_{K}(R)=\bigoplus_{1\leq j\leq t}RD_{j}\cong R^{t},\quad t={\rm dim}\,R,$ for some free basis $\\{D_{j}\\}$ consisting of precisely $t$ derivations. Conversely, if ${\rm Der}_{K}(R)$ admits a free basis $\\{D_{1},\ldots,D_{s}\\}$, then necessarily $s=t$. ###### Remark 5.5. Let $(R,\mathfrak{m})$ be as in (4).
If, as above, the $R$-module ${\rm Der}_{K}(R)$ is free, then our guess is that, for each minimal reduction $I$ of $\mathfrak{m}$, there exist $\alpha_{j}^{(I)},\beta_{j}^{(I)}\in R$, $j=1,\ldots,t$, such that the element given by ${\ell}^{(I)}=\sum_{j=1}^{t}\beta_{j}^{(I)}D_{j}(\alpha_{j}^{(I)})$ satisfies ${\ell}^{(I)}\in\mathfrak{m}\setminus{\mathfrak{m}}^{2}$ and $I{\mathfrak{m}}^{k+1}\colon({\ell}^{(I)})=I{\mathfrak{m}}^{k}$ for all $k\geq 0$. This would confirm the validity of Conjecture 5.4, because such conditions force $I{\mathfrak{m}}^{k}$ to be $\mathfrak{m}$-full for all $k\geq 0$, which means $\mathfrak{d}_{1}(I)=0$ and hence, as we already know, $\mathfrak{d}_{3}(I)=0$. Now, Theorem 5.3 ensures that $R$ is regular. Acknowledgements. The second-named author was partially supported by CNPq (grants 406377/2021-9 and 313357/2023-4). ## References * [1] M. Brodmann, R. Sharp, Local Cohomology: An Algebraic Introduction with Geometric Applications, Cambridge Univ. Press, Cambridge, 1998. * [2] W. Bruns, A. Conca, C. Raicu, M. Varbaro, Determinants, Gröbner bases and cohomology, Springer, 2022. * [3] A. Corso, C. Polini, M. E. Rossi, Depth of associated graded rings via Hilbert coefficients of ideals, J. Pure Appl. Algebra 201 (2005), 126-141. * [4] H. Dao, On colon operations and special types of ideals, Palestine J. Math. 10 (2021), 383-388. * [5] D. Eisenbud, Commutative Algebra with a View toward Algebraic Geometry, Springer-Verlag, 1995. * [6] A. Ficarra, Dao numbers and the asymptotic behaviour of fullness, 2024, preprint arxiv.org/abs/2402.05555. * [7] S. Goto, F. Hayasaka, Finite homological dimension and primes associated to integrally closed ideals, Proc. Amer. Math. Soc. 130 (2002), 3159-3164. * [8] W. Heinzer, L. J. Ratliff, D. E. Rush, Basically full ideals in local rings, J. Algebra 250 (2002), 371-396. * [9] J.
Herzog, The module of differentials, Lecture notes, Workshop on Commutative Algebra and its Relation to Combinatorics and Computer Algebra, International Centre for Theoretical Physics (Trieste, Italy, 1994). * [10] L. T. Hoa, N. D. Tam, On some invariants of a mixed product of ideals, Arch. Math. 94 (2010), 327-337. * [11] C. Huneke, On the symmetric and Rees algebras of an ideal generated by a $d$-sequence, J. Algebra 62 (1980), 268-275. * [12] C. Huneke, The theory of $d$-sequences and powers of ideals, Adv. Math. 46 (1982), 249-279. * [13] C. Huneke, I. Swanson, Integral Closure of Ideals, Rings and Modules, London Math. Soc. Lecture Note Ser. 336, Cambridge Univ. Press, 2006. * [14] A. Mafi, Ratliff-Rush ideal and reduction numbers, Comm. Algebra 46 (2018), 1272-1276. * [15] N. Matsuoka, On $\mathfrak{m}$-full powers of parameter ideals, Tokyo J. Math. 29 (2006), 405-411. * [16] C. B. Miranda-Neto, Free logarithmic derivation modules over factorial domains, Math. Res. Lett. 24 (2017), 153-172. * [17] C. B. Miranda-Neto, D. S. Queiroz, Dao’s question on the asymptotic behaviour of fullness, 2023, preprint arxiv.org/abs/2308.03997. * [18] L. J. Ratliff, D. E. Rush, Two notes on reductions of ideals, Indiana Univ. Math. J. 27 (1978), 929-934. * [19] M. E. Rossi, I. Swanson, Notes on the behaviour of the Ratliff-Rush filtration, Contemp. Math. 331 (2003), 313-328. * [20] M. E. Rossi, D. T. Trung, N. V. Trung, Castelnuovo-Mumford regularity and Ratliff-Rush closure, J. Algebra 504 (2018), 568-586. * [21] M. E. Rossi, G. Valla, Hilbert functions of filtered modules, UMI Lect. Notes 9, Springer, 2010. * [22] J. D. Sally, On the associated graded ring of a local Cohen-Macaulay ring, J. Math. Kyoto Univ. 17 (1977), 19-21. * [23] N. V. Trung, The Castelnuovo regularity of the Rees algebra and the associated graded ring, Trans. Amer. Math. Soc. 350 (1998), 2813-2832. * [24] J. Watanabe, $\mathfrak{m}$-full ideals, Nagoya Math. J. 106 (1987), 101-111. * [25] N.
Zamani, Regularity of the Rees and Associated Graded Modules, Eur. J. Pure Appl. Math. 7 (2014), 429-436.
# Conformal invariance and multifractality at Anderson transitions in arbitrary dimensions Jaychandran Padayasi and Ilya A. Gruzberg, Department of Physics, The Ohio State University, Columbus, OH 43210, USA ###### Abstract Electronic wave functions at Anderson transitions exhibit multifractal scaling characterized by a continuum of generalized multifractal exponents $\Delta_{\gamma}$ with vector indices $\gamma=(q_{1},\ldots,q_{n})$. In a field theory description of the transitions, there are corresponding multifractal operators $\mathcal{O}_{\gamma}$ with scaling dimensions $\Delta_{\gamma}$. Assuming conformal invariance and using the conformal bootstrap framework, we derive a constraint that implies that the generalized multifractal spectrum $\Delta_{\gamma}$ must be quadratic in all $q_{i}$ in any dimension $d>2$. As several numerical studies have shown deviations from parabolicity, we argue that conformal invariance is likely absent at Anderson transitions in dimensions $d>2$. Introduction. Continuous Anderson transitions (ATs) between metals and insulators, as well as transitions between topologically different localized phases, are a major focal point in the study of disordered systems [1]. Despite major breakthroughs in the scaling theory [2] and symmetry classification [3], critical properties at ATs have eluded a rigorous theoretical treatment because of the strongly coupled nature of the critical points. A remarkable property of ATs is the multifractality of critical wave functions, or the local density of states $\nu(r)$: their moments scale (in any spatial dimension $d$) with the system size $L$ as $\overline{\nu^{q}(r)}\sim L^{-\Delta_{q}}$, where the exponents $\Delta_{q}$, called the multifractal (MF) spectrum, depend on the continuous (complex) parameter $q$ in a non-linear way. (The bar denotes disorder averaging.)
Also, there are more general combinations $P_{\gamma}$ of critical wave functions [4, 5, 6, 7] labeled by vectors $\gamma=(q_{1},\ldots,q_{n})$ of complex numbers $q_{i}$, with scaling dimensions $\Delta_{\gamma}$ (note that what we denote as $\Delta_{q}$ and $\Delta_{\gamma}$ here is usually denoted by $x_{q}$ and $x_{\gamma}$ in other papers on Anderson multifractality). Field theories of Anderson localization are non-linear sigma models whose target spaces are cosets $\mathcal{G}/\mathcal{K}$ of certain Lie supergroups [9, 10, 11, 1, 12]. In these models, the generalized MF observables $P_{\gamma}$ are represented by gradientless composite operators $\mathcal{O}_{\gamma}$. A key fact is that $\mathcal{O}_{\gamma}$ can be constructed as highest-weight vectors under the action of the Lie superalgebra of $\mathcal{G}$ with weights $\gamma$. Then the $\mathcal{G}$-symmetry of the target space (not broken at the critical point) strongly restricts the fusion of MF operators and implies certain symmetries of the generalized ($\Delta_{\gamma}$) and simple ($\Delta_{q}$) MF spectra [13, 4]. Specifically, the MF operators $\mathcal{O}_{\gamma}$ satisfy 1) _Abelian fusion_ (addition of highest weights) $\displaystyle\mathcal{O}_{\gamma_{1}}\times\mathcal{O}_{\gamma_{2}}\sim\mathcal{O}_{\gamma_{1}+\gamma_{2}}+\ldots,$ (1) where the ellipses denote derivatives of $\mathcal{O}_{\gamma_{1}+\gamma_{2}}$; 2) _Weyl symmetry_ of the MF spectra $\Delta_{\gamma}=\Delta_{w\gamma}$, $w\in W$, where $W$ is the Weyl group that acts in the space of weights $\gamma$ and is generated by the sign inversions of ${\tilde{q}}_{j}\equiv q_{j}+c_{j}/2$ and interchanges of ${\tilde{q}}_{i}$ and ${\tilde{q}}_{j}$: $\displaystyle q_{i}$ $\displaystyle\rightarrow-c_{i}-q_{i},$ $\displaystyle q_{i}\rightarrow q_{j}+(c_{j}-c_{i})/2.$ (2) The parameters $c_{i}$ are the coefficients of the half-sum of the positive roots $\rho_{b}=\sum_{j=1}^{n}c_{j}e_{j}$ in a standard basis $e_{j}$.
They are known for all families of symmetric superspaces [14, 13, 4, 5, 6, 7, 15]. A consequence of Eq. (2) is the existence of the operator $\mathcal{O}_{-\rho_{b}}=\mathcal{O}_{(-c_{1},-c_{2},\ldots,-c_{n})}$ with vanishing scaling dimension $\Delta_{-\rho_{b}}=0$. The simple MF spectrum $\Delta_{q}$ corresponds to $\gamma=(q,0,0,\ldots,0)$. In this case one denotes $c_{1}=-q_{*}$, and the Weyl symmetry reduces to $\Delta_{q}=\Delta_{q_{*}-q}$. The Weyl symmetry is fully supported by numerical and analytical results for various symmetry classes and dimensions $d\geq 2$ [16, 17, 14, 18, 19, 20, 21, 22, 23, 24, 25, 5, 6, 7, 15]. As for other critical points, the scale invariance at ATs may be enhanced to conformal invariance (though this is not guaranteed [26, 27]), in which case critical properties are described by a conformal field theory (CFT). In this letter we derive a general constraint on the MF spectra in an _arbitrary_ dimension $d$ and for _any_ symmetry class, which results from the properties (1)–(2) and the assumption that conformal symmetry is present at ATs. Our main result is that in this situation the MF spectra have exactly parabolic form: $\displaystyle\Delta_{\gamma}$ $\displaystyle=-b\sum_{i}q_{i}(q_{i}+c_{i}),$ $\displaystyle\Delta_{q}$ $\displaystyle=bq(q_{*}-q),$ (3) where the overall prefactor $b$ cannot be determined from symmetry considerations alone. Our result is very general, and should apply to other situations where multifractality and conformal invariance may both be present, such as critical points in disordered systems [28, 29], turbulence [30, 31, 32], and other systems. Multifractality and CFT in $d=2$. Two-dimensional (2D) CFTs possess the infinite-dimensional Virasoro symmetry. In this setting, the ellipses in Eq. (1) represent Virasoro descendants, and lead to a single Virasoro block in a 4-point function of MF operators, and a Vafa-Lewellen [33, 34] constraint on the MF spectra.
The unique solution of this constraint subject to the symmetries (2) is the parabolic spectrum (3) [35, 36, 5]. The MF operators appear as vertex operators in a 2D Coulomb gas theory (Gaussian free field with a background charge). Both the simple and the generalized parabolicity (3) were tested analytically and numerically, and were found to be violated at 2D ATs in various symmetry classes [21, 22, 37, 5, 6, 7, 15]. This has led to the understanding that conformal invariance might be lost at these critical points. Similarly, numerical studies of multifractality in $d=3,4,5$ have found strong deviations from parabolicity [25, 38, 39, 40] but there has not been any prediction in $d>2$ from a CFT perspective. Our paper provides such a prediction. The conformal bootstrap program [41] has brought the study of higher-dimensional CFTs into the limelight with extensive work on both analytical and numerical fronts. The bootstrap philosophy attempts to solve the _crossing symmetry_ conditions coming from associativity of the operator product expansion (OPE) of the CFT operators, with minimal inputs from global symmetry and expected fusion rules for the operators. Crossing symmetry relates three possible ways (or channels) of reducing a 4-point function $\langle\prod_{i=1}^{4}O_{i}(x_{i})\rangle$ to 2-point functions by replacing pairs of operators with their OPEs (Fig. 1). The $s$-channel fusion ($1\to 2$, $3\to 4$) and the $t$-channel fusion ($1\to 4$, $2\to 3$) result in expansions of the 4-point function that have overlapping regions of convergence and give the crossing equation $\displaystyle\sum_{O_{s}}\lambda_{12}^{O_{s}}\lambda_{34}^{O_{s}}W_{O_{s}}=\sum_{O_{t}}\lambda_{14}^{O_{t}}\lambda_{23}^{O_{t}}W_{O_{t}}.$ (4) Figure 1: Schematic representation of the $s$-$t$ crossing equation.
Here the factors $W_{O}$ are fully determined by conformal symmetry, while the CFT data $\\{\Delta_{i},\lambda_{ijk}\\}$ consisting of the spectrum of scaling dimensions and the OPE coefficients are to be found. Solutions of the crossing equations determine consistent CFTs fully defined by the CFT data, circumventing the need for a Lagrangian description. The $s$ and $t$ channels are obtained from each other by interchanges of indices of the operators (and their points of insertion): $s\leftrightarrow t\equiv 1\leftrightarrow 3$. Accordingly, starting with a function $f^{(s)}\equiv f(x_{1},x_{2};x_{3},x_{4})$ of four ordered arguments we obtain, by permuting $1\leftrightarrow 3$, another function $f^{(t)}\equiv f(x_{3},x_{2};x_{1},x_{4})$. Using this notation, we can write the 4-point function as a product of a conformally-covariant kinematic factor $\mathbb{K}_{4}^{(c)}$ and a $G$-function (different in each channel) $\displaystyle\Big{\langle}\prod_{i=1}^{4}O_{i}(x_{i})\Big{\rangle}=\mathbb{K}_{4}^{(s)}G^{(s)}=\mathbb{K}_{4}^{(t)}G^{(t)}.$ (5) The $G$-functions depend on the cross ratios $\displaystyle u$ $\displaystyle=\frac{x_{12}^{2}x_{34}^{2}}{x_{13}^{2}x_{24}^{2}},$ $\displaystyle v$ $\displaystyle=\frac{x_{14}^{2}x_{23}^{2}}{x_{13}^{2}x_{24}^{2}},$ where $\displaystyle x_{ij}$ $\displaystyle=x_{i}-x_{j},$ (6) and operator labels. The cross ratios get transformed upon crossing so that $\displaystyle G^{(s)}$ $\displaystyle=G_{12,34}(u,v),$ $\displaystyle G^{(t)}$ $\displaystyle=G_{32,14}(v,u).$ (7) In terms of the $G$-functions the $s$-$t$ crossing equation is $\displaystyle G^{(s)}u^{-\frac{\Delta_{1}+\Delta_{2}}{2}}$ $\displaystyle=G^{(t)}v^{-\frac{\Delta_{2}+\Delta_{3}}{2}}.$ (8) Much of the bootstrap formalism is geared towards solving Eq. (8) self-consistently for unitary CFTs.
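As a concrete illustration of the $1\leftrightarrow 3$ exchange described above, the swap of the cross ratios (6) under crossing can be checked numerically; a quick sketch (ours, with four arbitrary sample points):

```python
import numpy as np

def cross_ratios(x1, x2, x3, x4):
    """Cross ratios u, v of four points, Eq. (6)."""
    sq = lambda a, b: np.sum((a - b) ** 2)  # squared distance x_ij^2
    denom = sq(x1, x3) * sq(x2, x4)
    u = sq(x1, x2) * sq(x3, x4) / denom
    v = sq(x1, x4) * sq(x2, x3) / denom
    return u, v

# four generic points in d = 3 (arbitrary choice for illustration)
pts = [np.array(p, dtype=float) for p in
       [(0.0, 0.0, 0.0), (1.0, 0.3, -0.2), (2.5, 1.1, 0.7), (-1.0, 2.0, 0.4)]]

u, v = cross_ratios(*pts)
# s <-> t crossing is the interchange 1 <-> 3, under which u and v swap
u_t, v_t = cross_ratios(pts[2], pts[1], pts[0], pts[3])
assert np.isclose(u_t, v) and np.isclose(v_t, u)
```

This confirms the rule $G^{(t)}=G_{32,14}(v,u)$ at the level of the cross ratios themselves.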
Since any putative CFT for MF correlators contains infinitely many relevant operators and, thus, is non-unitary, we resort to novel, unorthodox methods that focus on the $G$-function and various physical inputs (similar in spirit to the “inverse bootstrap” method in [42]). We start by studying the Coulomb gas theories in the language of modern conformal bootstrap and use them as signposts to generalize the notion of Abelian fusion to higher dimensions. Then we show that the generalized Abelian fusion and crossing symmetry together yield a constraint on the spectrum of scaling dimensions in any $d$ that is analogous to the Vafa-Lewellen constraints [33, 34] known in CFT in $d=2$. Finally, additional physical assumptions specific to MF observables allow us to solve the constraint, leading to a quadratic dependence of the MF spectrum $\Delta_{\gamma}$ on $q_{i}$ in any dimension $d$. Coulomb gas theories with global conformal blocks. In $d=2$, the Coulomb gas theories arise out of breaking the U(1) symmetry of the free boson $\phi$ by including a background charge $Q$ in the action [43, 44, 45]. We will be interested in the correlators of the vertex operators $V_{\alpha}\sim e^{d\alpha\phi}$. A Coulomb gas CFT can be defined [44, 45] in any dimension $d\in\mathbb{N}$ by considering an action with a possibly non-local kinetic term $\propto\phi(-\square)^{\frac{d}{2}}\phi$. Such CFTs also arise as limits of generalized free fields, where the scaling dimension of the free boson is tuned to $\Delta_{\phi}=0$. In this limit $\langle\phi\phi\rangle$ is logarithmic in any dimension which allows us to study vertex operators. 
Following the conventions in [44, 45], the spectrum and the multi-point functions in such higher-dimensional Coulomb gas theories are $\displaystyle\Delta_{\alpha}$ $\displaystyle=d\alpha(Q-\alpha),$ $\displaystyle\Big{\langle}\prod V_{\alpha_{i}}(x_{i})\Big{\rangle}$ $\displaystyle=\prod_{i<j}|x_{ij}|^{-2d\alpha_{i}\alpha_{j}},$ (9) where the correlators satisfy the charge neutrality $\sum{\alpha_{i}}=Q$. Our first goal is to derive the OPE of vertex operators in terms of primaries of the global conformal group by studying the conformal block expansion. Consider a four-point function of vertex operators which can be written in the form (5) with the $G$-function $\displaystyle G^{(s)}_{\text{CG}}$ $\displaystyle=u^{\frac{d}{2}(\alpha_{3}+\alpha_{4})(\alpha_{1}+\alpha_{2})}v^{-d\alpha_{2}\alpha_{3}}$ $\displaystyle=u^{\frac{1}{2}\Delta_{\alpha_{1}+\alpha_{2}}}v^{\frac{1}{2}(\Delta_{\alpha_{2}+\alpha_{3}}-\Delta_{\alpha_{2}}-\Delta_{\alpha_{3}})}.$ (10) This function is explicitly crossing symmetric [satisfies Eq. (8)] and has a convergent conformal block expansion [41] in the $s$-channel in any dimensionality $d$, $\displaystyle G^{(s)}_{\text{CG}}=\sum_{O}\lambda_{12}^{O}\lambda_{34}^{O}\,g_{\Delta_{O},l_{O}}(u,v).$ (11) The conformal blocks $g_{\Delta_{O},l_{O}}$ are known analytically in even dimensions. (Their dependence on the dimensions $\Delta_{1}$–$\Delta_{4}$ of the external operators will not be important for us.) The blocks are often written in conformal frame coordinates $(z,\bar{z})$ related to the cross ratios $(u,v)$ by $\displaystyle u$ $\displaystyle=z\bar{z},$ $\displaystyle v$ $\displaystyle=(1-z)(1-\bar{z}).$ (12) In the $s$-channel limit, $x_{12}\approx x_{34}\ll x_{13}\approx x_{24}\approx x_{23}\approx x_{14}$, and thus $u\rightarrow 0$, $v\rightarrow 1$, see Eq (6). 
This corresponds to $z,\bar{z}\rightarrow 0$, and the $G$-function (10) has the form $\displaystyle G^{(s)}$ $\displaystyle=(z\bar{z})^{\frac{\Delta^{(s)}}{2}}f(z,\bar{z}),$ (13) where $f(z,\bar{z})$ is a Taylor series symmetric in $(z,\bar{z})$, and $\Delta^{(s)}=\Delta_{\alpha_{1}+\alpha_{2}}$. In the Supplementary material [46] we use the leading behavior of the conformal blocks in the $s$-channel [47] to show that any $G$-function of the form (13) admits the conformal block expansion $\displaystyle G^{(s)}=\sum_{n,l\geq 0}\mu^{(n,l)}g_{\Delta^{(s)}+2n+l,l}(z,\bar{z})$ (14) in arbitrary dimensions $d\geq 2$. Conversely, any $G$-function that can be expanded as in Eq. (14) can also be written in the form of Eq. (13). Let us write global primaries in any dimension as $[\Delta,l]$ specifying their scaling dimension $\Delta$ and spin $l$. Then we say that the expansion (14) contains just one twist family [42] consisting of the leading primary $[\Delta^{(s)},0]$ and subleading operators $[\Delta^{(s)}+2n+l,l]$ which are constructed from its derivatives. The superscript of the product of the OPE coefficients $\mu^{(n,l)}\equiv\lambda_{1,2}^{(n,l)}\lambda_{3,4}^{(n,l)}$ identifies the operator $[\Delta^{(s)}+2n+l,l]$ in the twist family. Expanding the Coulomb gas $G$-function (10) in global conformal blocks gives the OPE of $V_{\alpha_{1}}\times V_{\alpha_{2}}$ as [46] $\displaystyle[\Delta_{\alpha_{1}},0]\times[\Delta_{\alpha_{2}},0]\sim\sum_{n,l\geq 0}\lambda^{(n,l)}_{1,2}[\Delta_{\alpha_{1}+\alpha_{2}}+2n+l,l],$ (15) where $n,l$ are non-negative integers, and the $(n,l)=(0,1)$ term is absent in the OPE.
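The earlier claim that the Coulomb gas $G$-function satisfies the crossing equation (8) can also be verified symbolically; a small sympy sketch (our check, using only the spectrum (9) and charge neutrality):

```python
import sympy as sp

d, Q, a1, a2, a3 = sp.symbols('d Q alpha1 alpha2 alpha3')
a4 = Q - a1 - a2 - a3                  # charge neutrality, Eq. (9)
Delta = lambda a: d * a * (Q - a)      # vertex-operator dimensions, Eq. (9)

# Exponents of u and v on the s-channel side of Eq. (8),
# read off from the Coulomb gas G-function of Eq. (10):
Eu_s = sp.Rational(1, 2) * d * (a3 + a4) * (a1 + a2) - (Delta(a1) + Delta(a2)) / 2
Ev_s = -d * a2 * a3
# t-channel side: indices 1 <-> 3 and cross ratios u <-> v
Eu_t = -d * a2 * a1
Ev_t = sp.Rational(1, 2) * d * (a1 + a4) * (a3 + a2) - (Delta(a2) + Delta(a3)) / 2

assert sp.simplify(Eu_s - Eu_t) == 0   # powers of u agree in both channels
assert sp.simplify(Ev_s - Ev_t) == 0   # powers of v agree in both channels
```

Both differences simplify to zero identically in $d$, $Q$, and the charges, so crossing holds for arbitrary $\alpha_{i}$ subject to neutrality.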
For two identical operators ($\alpha_{1}=\alpha_{2}$), their OPE is completely specified by the conformal block expansion since we can extract the squared OPE coefficients (see [46] for explicit expressions in the $d=2$ and $d=4$ cases). Generalized Abelian fusion. In the strict sense, Abelian fusion (1) cannot hold in CFTs in $d>2$, since an OPE written with finitely many global conformal primaries cannot satisfy crossing [48, 49, 50, 51]. Thus, we need to generalize the notion of Abelian fusion to $d>2$. Global conformal block expansions of Coulomb gas correlators exhibit certain features that we adopt as the _definition_ of Abelian fusion in $d>2$: * • All primary multifractal operators of the CFT can be grouped into twist families. * • The OPE of any two leading multifractal primaries contains only one twist family $\displaystyle[\Delta_{1},0]\times[\Delta_{2},0]\sim\sum_{n,l\geq 0}\lambda_{1,2}^{(n,l)}[\Delta+2n+l,l].$ (16) This definition is more general than Eq. (15), where the three operators involved, $V_{\alpha_{1}}$, $V_{\alpha_{2}}$, and $V_{\alpha_{1}+\alpha_{2}}$, are labeled by continuous parameters that add upon fusion. The generalized Abelian fusion (16) and the related conformal block expansion (14) constrain a general scalar 4-point $G$-function to have the form (13). Since $z\bar{z}=u$ and $z+\bar{z}=u+1-v$ form a basis in the ring of symmetric functions of $z$ and $\bar{z}$, Eq. (13) can also be written as $G^{(s)}=u^{\frac{\Delta^{(s)}}{2}}\sum_{n\geq 0}f^{(s)}_{n}(v)u^{n}.$ (17) All powers of $u$ in this expansion differ from the leading power $\Delta^{(s)}/2$ by non-negative integers $n$, consistent with the even integer gaps between the spin-zero conformal blocks in Eq. (14). The functions $f_{n}(v)$ are arbitrary so far, and quantities with the superscript $(s)$ depend on the external dimensions in a channel-covariant manner.
At this point, one can make a further simplification by assuming that the functions $f_{n}(v)$ can be represented as (possibly infinite) sums of power laws in $v$, i.e. $G^{(s)}=u^{\frac{\Delta^{(s)}}{2}}\sum_{n,m\geq 0}C_{nm}v^{\sigma^{(s)}_{m}}u^{n}$ (18) where $\sigma_{m}$ are unrelated real numbers and the coefficients $C_{nm}$ do not depend on the cross-ratios $(u,v)$ or the external dimensions. The Coulomb gas theories are of this form with a single term, see Eq. (10). The generalized free field correlators [50] with $\Delta_{\phi}\geq 0$ are similarly composed of sums of power laws in $u$ and $v$, although they do not satisfy Abelian fusion (integer gaps in powers of $u$). The above ansatz made an appearance for the case of a correlator of identical operators in Ref. [42], which discussed the idea of building crossing-symmetric $G$-functions. The issue of its generality was raised and left unanswered there. Similarly, the authors of Ref. [51] use a version where the coefficients $C_{nm}$ are functions of $\log u,\log v$ (the logarithms come from anomalous dimensions of the subleading operators in the twist family). As our definition of Abelian fusion, Eq. (16), exactly fixes the dimensions of all subleading operators, the logarithms are unnecessary in our treatment. Constraints on the $G$-function from crossing. Now we explore constraints on the Abelian $G$-function (18) imposed by crossing symmetry.
Substituting this function into the crossing equation (8), we obtain $\displaystyle u^{\frac{\Delta^{(s)}}{2}-\frac{\Delta_{1}+\Delta_{2}}{2}}\sum_{n,m\geq 0}C_{nm}v^{\sigma^{(s)}_{m}}u^{n}$ $\displaystyle=v^{\frac{\Delta^{(t)}}{2}-\frac{\Delta_{2}+\Delta_{3}}{2}}\sum_{n,m\geq 0}C_{nm}u^{\sigma^{(t)}_{m}}v^{n}.$ (19) This equation enforces a structure on the $G$-function that is understandable in terms of crossing-symmetric building blocks [42], such that any truncation up to $(N,M)$ of the double sum in Eq. (18) is also crossing symmetric. In this case all terms in Eq. (18) fall into two groups: single terms that are crossing-symmetric by themselves, and pairs of terms that get exchanged under crossing. The result is the $G$-function $\displaystyle G^{(s)}(u,v)$ $\displaystyle=u^{\frac{\Delta^{(s)}}{2}}v^{\frac{\Delta^{(t)}}{2}-\frac{\Delta_{2}+\Delta_{3}}{2}}\bigg{(}\sum_{k\geq 0}S_{k}(uv)^{k}$ $\displaystyle\quad+\sum_{j\geq 1,k\geq 0}D_{jk}(uv)^{k}(u^{j}+v^{j})\bigg{)},$ (20) where we use $S_{k}$ and $D_{jk}$ to represent the coefficients of the crossing-symmetric terms and pairs, respectively. Thus, we adopt Eq. (20) as the generic form of the Abelian $G$-function (18) that also satisfies $s$-$t$ crossing. Excluding the spin-1 operator. Focusing on the last part of the puzzle, we expand the $G$-function (20) in conformal blocks in arbitrary dimensions to the first few orders in $z$ and $\bar{z}$. By construction, the first block that appears in the expansion is $[\Delta^{(s)},0]$. The product of the OPE coefficients of the leading block is read off as $\mu^{(0,0)}\equiv S_{0}$.
The coefficient $\mu^{(0,1)}$ of the spin-1 block $[\Delta^{(s)}+1,1]$ can be obtained by matching the coefficients of the series for the order $\sim(z\bar{z})^{\Delta^{(s)}/2}(z+\bar{z})$ as $\displaystyle\frac{\mu^{(0,1)}}{S_{0}}$ $\displaystyle=\frac{\Delta_{2}+\Delta_{3}-\Delta^{(t)}}{2}-\frac{(\Delta^{(s)}-\Delta_{12})(\Delta^{(s)}+\Delta_{34})}{4\Delta^{(s)}}$ $\displaystyle+\frac{\Delta_{2}+\Delta_{3}-\Delta^{(t)}}{2}\sum_{j\geq 1}\frac{D_{j0}}{S_{0}}-\sum_{j\geq 1}\frac{jD_{j0}}{S_{0}}.$ (21) Both sums over $j$ must converge for the OPE coefficient to be well-defined, and thus we introduce $\displaystyle\sum_{j\geq 1}\frac{D_{j0}}{S_{0}}$ $\displaystyle\equiv\mathcal{P},$ $\displaystyle\sum_{j\geq 1}\frac{jD_{j0}}{S_{0}}$ $\displaystyle\equiv\mathcal{Q}.$ (22) In the context of MF correlators, we identify $\Delta^{(s)}\equiv\Delta_{\gamma_{1}+\gamma_{2}}$, $\Delta^{(t)}\equiv\Delta_{\gamma_{3}+\gamma_{2}}$. The neutrality condition $\sum_{i}\gamma_{i}=-\rho_{b}$ fixes $\Delta_{4}=\Delta_{-\rho_{b}-\gamma_{1}-\gamma_{2}-\gamma_{3}}=\Delta_{\gamma_{1}+\gamma_{2}+\gamma_{3}}$. A spin-1 operator cannot appear in the OPE of two identical scalar operators, on general grounds. Indeed, $O(x_{1})O(x_{2})$ is even with respect to the interchange $x_{1}\leftrightarrow x_{2}$, but a spin-1 operator must appear in the OPE as $\sim(x_{1}-x_{2})\cdot\partial O_{\Delta^{(s)}}((x_{1}+x_{2})/2)$, which is odd. Exploiting this fact, we set $O_{2}\equiv O_{1}$, in which case $\mu^{(0,1)}=0$, and Eq. (21) becomes a constraint on $\Delta$’s: $\displaystyle\Delta^{(s)}+\Delta_{3}-\Delta_{4}+4\mathcal{Q}=2(\Delta_{1}+\Delta_{3}-\Delta^{(t)})(1+\mathcal{P}).$ (23) Furthermore, the continuity of MF spectra allows us to choose $\gamma_{1}=\gamma_{2}=\epsilon e_{i}$, where $e_{i}=(0,\ldots,1,\ldots,0)$ (unit in the $i$-th place), with $\epsilon\ll 1$, and $\gamma_{3}=\gamma$ in Eq.
(23), which gives $\displaystyle\Delta_{2\epsilon e_{i}}+\Delta_{\gamma}-\Delta_{\gamma+2\epsilon e_{i}}+4\mathcal{Q}$ $\displaystyle\quad=2(\Delta_{\epsilon e_{i}}+\Delta_{\gamma}-\Delta_{\gamma+\epsilon e_{i}})(1+\mathcal{P}).$ (24) Now we expand in orders of $\epsilon$. At order $\epsilon^{0}$ we get $4\mathcal{Q}=(1+2\mathcal{P})\Delta_{0}=0$, because $\Delta_{0}=0$ by definition. At order $\epsilon$ we get $\mathcal{P}\partial_{q_{i}}(\Delta_{0}-\Delta_{\gamma})=0$ which implies either $\mathcal{P}=0$ or $\partial_{q_{i}}\Delta_{0}=\partial_{q_{i}}\Delta_{\gamma}$. The second condition [together with Eq. (2)] yields the trivial spectrum $\Delta_{\gamma}=\text{const}=0$. Hence, we consider the case $\mathcal{P}=0$. Then at order $\epsilon^{2}$ we get $\partial_{q_{i}}^{2}(\Delta_{0}-\Delta_{\gamma})=0$, which implies that $\Delta_{\gamma}$ is a quadratic polynomial in all $q_{i}$. Any such polynomial that vanishes at $\gamma=0$ and satisfies the symmetry properties (2) is exactly of the form given in Eq. (3) [5]. Thus, we arrive at our main result:

> _The only MF spectrum $\Delta_{\gamma}$ which satisfies generalized Abelian fusion and crossing symmetry has the form given in Eq. (3)._

Going back to Eq. (23), we substitute $\mathcal{P}=\mathcal{Q}=0$ and the quadratic solution for $\Delta_{\gamma}$ to find that the constraint $2\Delta_{\gamma_{1}}+\Delta_{\gamma_{3}}-2\Delta_{\gamma_{1}+\gamma_{3}}-\Delta_{2\gamma_{1}}+\Delta_{2\gamma_{1}+\gamma_{3}}=0$ (25) correctly picks out Abelian CFTs in $d\geq 2$, and thus is the appropriate generalization of the 2D Vafa-Lewellen constraint with a single exchanged Virasoro primary. Summary and Outlook. Using conformal invariance, we have shown that any Abelian CFT in $d>2$ must be intimately related to the Coulomb gas theory, and have a quadratic spectrum. To summarize, the set of assumptions used to prove the exact parabolicity of the MF spectrum in $d\geq 2$ is the following: 1.
The MF operators $\mathcal{O}_{\gamma}$ are global primaries in some CFT, so that their correlators solve the crossing equations. 2. The fusion of two MF operators is of the general form (16) (generalized Abelian fusion). 3. The $G$-functions of the charge-neutral 4-point correlators of MF operators admit an expansion of the form $G^{(s)}=\sum_{n,m}C_{nm}u^{\Delta_{\gamma_{1}+\gamma_{2}}/2+n}v^{\sigma_{m}}$. The second and third assumptions are our strongest, and are fundamentally related to each other. As in the case of weakly perturbed CFTs [51], it remains to be seen if the generalized Abelian CFT defined here could be perturbed so that the derivative operators gain anomalous dimensions. Finally, let us discuss the implication of the first assumption, that of conformal invariance. As we summarized earlier, the perturbative analytical results in $d=2+\epsilon$ and numerical simulations in $d=3,4,5$ have shown that the MF spectra for generic Anderson transitions are, in fact, not parabolic. In light of our result, it follows that conformal symmetry is likely lost at Anderson transitions. The only alternative is that the symmetries of the sigma models that were used to deduce Abelian fusion and Weyl symmetry are lost at the critical point. We believe this to be a very unlikely scenario that would contradict the vast body of literature on Anderson transitions. Thus, we identify Anderson transitions as examples of systems where scale invariance does not imply conformal invariance. Going forward, we believe that it is very important to understand the relationship between scale and conformal invariance, especially for non-unitary theories. Anderson transitions may provide a fertile ground for such understanding. Moreover, it would be interesting to translate our analysis to other physical systems exhibiting multifractal scaling. Finally, it is exciting to consider possible generalizations of rational CFTs to $d>2$.
This line of inquiry may provide important answers in the much bigger project of tracking the evolution of CFTs with $d$. Acknowledgements. We thank Dalimil Mazáč, Marco Meineri, Alexander Mirlin, Yaron Oz, Lorenzo Quintavalle, Sylvain Ribault, and Bernardo Zan for useful discussions. This research was supported by Grant No. 2020193 from the United States-Israel Binational Science Foundation (BSF). ## References * Evers and Mirlin [2008] F. Evers and A. D. Mirlin, Anderson transitions, Rev. Mod. Phys. 80, 1355 (2008), arXiv:0707.4378 [cond-mat.mes-hall] . * Abrahams _et al._ [1979] E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan, Scaling theory of localization: absence of quantum diffusion in two dimensions, Phys. Rev. Lett. 42, 673 (1979). * Altland and Zirnbauer [1997] A. Altland and M. R. Zirnbauer, Nonstandard symmetry classes in mesoscopic normal-superconducting hybrid structures, Phys. Rev. B 55, 1142 (1997), arXiv:cond-mat/9602137 . * Gruzberg _et al._ [2013] I. A. Gruzberg, A. D. Mirlin, and M. R. Zirnbauer, Classification and symmetry properties of scaling dimensions at Anderson transitions, Phys. Rev. B 87, 125144 (2013). * Karcher _et al._ [2021] J. F. Karcher, N. Charles, I. A. Gruzberg, and A. D. Mirlin, Generalized multifractality at spin quantum Hall transition, Annals Phys. 435, 168584 (2021), arXiv:2107.06414 [cond-mat.dis-nn] . * Karcher _et al._ [2022a] J. F. Karcher, I. A. Gruzberg, and A. D. Mirlin, Generalized multifractality at the spin quantum Hall transition: Percolation mapping and pure-scaling observables, Phys. Rev. B 105, 184205 (2022a), arXiv:2203.12617 [cond-mat.dis-nn] . * Karcher _et al._ [2022b] J. F. Karcher, I. A. Gruzberg, and A. D. Mirlin, Generalized multifractality at metal-insulator transitions and in metallic phases of two-dimensional disordered systems, Phys. Rev. B 106, 104202 (2022b), arXiv:2206.12226 [cond-mat.dis-nn] . 
* Note [1] Notice that what we denote as $\Delta_{q}$ and $\Delta_{\gamma}$ here is usually denoted by $x_{q}$ and $x_{\gamma}$ in other papers on Anderson multifractality. * Efetov [1983] K. Efetov, Supersymmetry and theory of disordered metals, Advances in Physics 32, 53 (1983). * Efetov [1997] K. Efetov, _Supersymmetry in Disorder and Chaos_ (Cambridge University Press, Cambridge, England, 1997). * Mirlin [2000] A. D. Mirlin, Statistics of energy levels and eigenfunctions in disordered systems, Physics Reports 326, 259 (2000). * Wegner [2016] F. Wegner, _Supermathematics and its Applications in Statistical Physics. Grassmann Variables and the Method of Supersymmetry_ (Springer, Heidelberg, Germany, 2016). * Gruzberg _et al._ [2011] I. A. Gruzberg, A. W. W. Ludwig, A. D. Mirlin, and M. R. Zirnbauer, Symmetries of multifractal spectra and field theories of Anderson localization, Phys. Rev. Lett. 107, 086403 (2011). * Mirlin _et al._ [2006] A. D. Mirlin, Y. V. Fyodorov, A. Mildenberger, and F. Evers, Exact Relations between Multifractal Exponents at the Anderson Transition, Phys. Rev. Lett. 97, 046803 (2006), arXiv:cond-mat/0603378 [cond-mat.mes-hall] . * Karcher _et al._ [2023] J. F. Karcher, I. A. Gruzberg, and A. D. Mirlin, Generalized multifractality in two-dimensional disordered systems of chiral symmetry classes, Phys. Rev. B 107, 104202 (2023). * Evers _et al._ [2003] F. Evers, A. Mildenberger, and A. D. Mirlin, Multifractality at the spin quantum Hall transition, Phys. Rev. B 67, 041303 (2003), arXiv:cond-mat/0203134 [cond-mat.mes-hall] . * Mirlin _et al._ [2003] A. D. Mirlin, F. Evers, and A. Mildenberger, Wavefunction statistics and multifractality at the spin quantum Hall transition, Journal of Physics A Mathematical General 36, 3255 (2003), arXiv:cond-mat/0208451 [cond-mat.mes-hall] . * Mildenberger _et al._ [2007] A. Mildenberger, A. R. Subramaniam, R. Narayanan, F. Evers, I. A. Gruzberg, and A. D. 
Mirlin, Boundary multifractality in critical one-dimensional systems with long-range hopping, Phys. Rev. B 75, 094204 (2007), arXiv:cond-mat/0611713 [cond-mat.mes-hall] . * Mildenberger and Evers [2007] A. Mildenberger and F. Evers, Wave function statistics at the symplectic two-dimensional Anderson transition: Bulk properties, Phys. Rev. B 75, 041303 (2007), arXiv:cond-mat/0608560 [cond-mat.mes-hall] . * Obuse _et al._ [2007] H. Obuse, A. R. Subramaniam, A. Furusaki, I. A. Gruzberg, and A. W. W. Ludwig, Multifractality and Conformal Invariance at 2D Metal-Insulator Transition in the Spin-Orbit Symmetry Class, Phys. Rev. Lett. 98, 156802 (2007), arXiv:cond-mat/0609161 [cond-mat.dis-nn] . * Obuse _et al._ [2008] H. Obuse, A. R. Subramaniam, A. Furusaki, I. A. Gruzberg, and A. W. W. Ludwig, Boundary multifractality at the integer quantum Hall plateau transition: implications for the critical theory, Phys. Rev. Lett. 101, 116802 (2008), arXiv:0804.2409 [cond-mat.mes-hall] . * Evers _et al._ [2008] F. Evers, A. Mildenberger, and A. D. Mirlin, Multifractality at the Quantum Hall Transition: Beyond the Parabolic Paradigm, Phys. Rev. Lett. 101, 116803 (2008), arXiv:0804.2334 [cond-mat.mes-hall] . * Vasquez _et al._ [2008] L. J. Vasquez, A. Rodriguez, and R. A. Römer, Multifractal analysis of the metal-insulator transition in the three-dimensional Anderson model. I. Symmetry relation under typical averaging, Phys. Rev. B 78, 195106 (2008), arXiv:0807.2217 [cond-mat.dis-nn] . * Rodriguez _et al._ [2008] A. Rodriguez, L. J. Vasquez, and R. A. Römer, Multifractal analysis of the metal-insulator transition in the three-dimensional Anderson model. II. Symmetry relation under ensemble averaging, Phys. Rev. B 78, 195107 (2008), arXiv:0807.2209 [cond-mat.dis-nn] . * Rodriguez _et al._ [2011] A. Rodriguez, L. J. Vasquez, K. Slevin, and R. A. Römer, Multifractal finite-size scaling and universality at the Anderson transition, Phys. Rev. 
B 84, 134209 (2011), arXiv:1107.5736 [cond-mat.dis-nn] . * Riva and Cardy [2005] V. Riva and J. Cardy, Scale and conformal invariance in field theory: a physical counterexample [rapid communication], Physics Letters B 622, 339 (2005), arXiv:hep-th/0504197 [hep-th] . * Nakayama [2015] Y. Nakayama, Scale invariance vs conformal invariance, Phys. Reports 569, 1 (2015). * Ludwig [1990] A. W. W. Ludwig, Infinite hierarchies of exponents in a diluted ferromagnet and their interpretation, Nuclear Physics B 330, 639 (1990). * Duplantier and Ludwig [1991] B. Duplantier and A. W. W. Ludwig, Multifractals, operator-product expansion, and field theory, Phys. Rev. Lett. 66, 247 (1991). * Bernard _et al._ [2006] D. Bernard, G. Boffetta, A. Celani, and G. Falkovich, Conformal invariance in two-dimensional turbulence, Nature Physics 2, 124 (2006), arXiv:nlin/0602017 [nlin.CD] . * Bernard _et al._ [2007] D. Bernard, G. Boffetta, A. Celani, and G. Falkovich, Inverse turbulent cascades and conformally invariant curves, Phys. Rev. Lett. 98, 024501 (2007), arXiv:nlin/0609069 [nlin.CD] . * Falkovich [2007] G. Falkovich, Conformal invariance in hydrodynamic turbulence, Russian Mathematical Surveys 62, 497 (2007). * Vafa [1988] C. Vafa, Toward Classification of Conformal Theories, Phys. Lett. B 206, 421 (1988). * Lewellen [1989] D. C. Lewellen, Constraints for Conformal Field Theories on the Plane: Reviving the Conformal Bootstrap, Nucl. Phys. B 320, 345 (1989). * Bondesan _et al._ [2017] R. Bondesan, D. Wieczorek, and M. R. Zirnbauer, Gaussian free fields at the integer quantum Hall plateau transition, Nucl. Phys. B 918, 52 (2017), arXiv:1612.04109 [cond-mat.dis-nn] . * Zirnbauer [2019] M. R. Zirnbauer, The integer quantum Hall plateau transition is a current algebra after all, Nuclear Physics B 941, 458 (2019), arXiv:1805.12555 [math-ph] . * Puschmann _et al._ [2021] M. Puschmann, D. Hernangómez-Pérez, B. Lang, S. Bera, and F. 
Evers, Quartic multifractality and finite-size corrections at the spin quantum Hall transition, Phys. Rev. B 103, 235167 (2021), arXiv:2104.12079 [cond-mat.dis-nn] . * Ujfalusi and Varga [2015] L. Ujfalusi and I. Varga, Finite-size scaling and multifractality at the Anderson transition for the three Wigner-Dyson symmetry classes in three dimensions, Phys. Rev. B 91, 184206 (2015), arXiv:1501.02147 [cond-mat.dis-nn] . * Lindinger and Rodríguez [2017] J. Lindinger and A. Rodríguez, Multifractal finite-size scaling at the Anderson transition in the unitary symmetry class, Phys. Rev. B 96, 134202 (2017), arXiv:1707.05701 [cond-mat.dis-nn] . * Tarquini _et al._ [2017] E. Tarquini, G. Biroli, and M. Tarzia, Critical properties of the Anderson localization transition and the high-dimensional limit, Phys. Rev. B 95, 094204 (2017), arXiv:1612.04753 [cond-mat.dis-nn] . * Poland _et al._ [2019] D. Poland, S. Rychkov, and A. Vichi, The conformal bootstrap: theory, numerical techniques, and applications, Rev. Mod. Phys. 91, 015002 (2019), arXiv:1805.04405 [hep-th] . * Li [2018] W. Li, Inverse Bootstrapping Conformal Field Theories, JHEP 01, 077, arXiv:1706.04054 [hep-th] . * Di Francesco _et al._ [1997] P. Di Francesco, P. Mathieu, and D. Senechal, _Conformal Field Theory_, Graduate Texts in Contemporary Physics (Springer-Verlag, New York, 1997). * Levy and Oz [2018] T. Levy and Y. Oz, Liouville Conformal Field Theories in Higher Dimensions, JHEP 06, 119, arXiv:1804.02283 [hep-th] . * Kislev _et al._ [2022] A. C. Kislev, T. Levy, and Y. Oz, Odd dimensional nonlocal Liouville conformal field theories, Journal of High Energy Physics 2022, 150 (2022), arXiv:2206.10884 [hep-th] . * [46] J. Padayasi and I. A. Gruzberg, Supplementary material. * Dolan and Osborn [2001] F. A. Dolan and H. Osborn, Conformal four point functions and the operator product expansion, Nucl. Phys. B 599, 459 (2001), arXiv:hep-th/0011040 . * Rattazzi _et al._ [2008] R. Rattazzi, V. S. Rychkov, E. 
Tonni, and A. Vichi, Bounding scalar operator dimensions in 4D CFT, JHEP 12, 031, arXiv:0807.0004 [hep-th] . * Simmons-Duffin [2017] D. Simmons-Duffin, The Lightcone Bootstrap and the Spectrum of the 3d Ising CFT, JHEP 03, 086, arXiv:1612.08471 [hep-th] . * Fitzpatrick _et al._ [2013] A. L. Fitzpatrick, J. Kaplan, D. Poland, and D. Simmons-Duffin, The Analytic Bootstrap and AdS Superhorizon Locality, JHEP 12, 004, arXiv:1212.3616 [hep-th] . * Alday and Zhiboedov [2016] L. F. Alday and A. Zhiboedov, Conformal Bootstrap With Slightly Broken Higher Spin Symmetry, JHEP 06, 091, arXiv:1506.04659 [hep-th] . * Hogervorst and Rychkov [2013] M. Hogervorst and S. Rychkov, Radial Coordinates for Conformal Blocks, Phys. Rev. D 87, 106004 (2013), arXiv:1303.1111 [hep-th] . * Padayasi [2023] J. Padayasi, Coulomb gas expansion notebooks (2023).

## Supplementary material

### .1 Subleading behavior of global conformal blocks

Start with the Casimir equation solved by the spin-0 conformal blocks $\mathcal{D}g_{\Delta,0}^{\Delta_{12},\Delta_{34}}(z,\bar{z})=C_{\Delta,0}g_{\Delta,0}^{\Delta_{12},\Delta_{34}}(z,\bar{z})$ (26) where $\mathcal{D}$ is the Casimir operator of $SO(d+1,1)$: $\mathcal{D}=\mathcal{D}_{z}+\mathcal{D}_{\bar{z}}+2(d-2)\frac{z\bar{z}}{z-\bar{z}}\left[(1-z)\partial_{z}-(1-\bar{z})\partial_{\bar{z}}\right],$ (27) $\mathcal{D}_{z}=2z^{2}(1-z)\partial_{z}^{2}-(2+\Delta_{34}-\Delta_{12})z^{2}\partial_{z}+\frac{\Delta_{12}\Delta_{34}}{2}z$ (28) and $C_{\Delta,0}\equiv\Delta(\Delta-d)$ is the eigenvalue of the Casimir operator. Based on the leading form of $g_{\Delta,0}$ and the symmetry with respect to $z$ and $\bar{z}$, we use the following power series as an ansatz for the solution: $g_{\Delta,0}(z,\bar{z})=(z\bar{z})^{\alpha}\sum_{r,s=0}^{\infty}\frac{\kappa_{rs}}{1+\delta_{rs}}(z^{r}\bar{z}^{s}+z^{s}\bar{z}^{r}).$ (29) For the subleading behavior of the block, we need to compute the coefficient $\kappa_{10}$.
Substituting the ansatz into the Casimir equation and extracting the terms at leading order gives $C_{\Delta,0}=\Delta(\Delta-d)=4\alpha(\alpha-1)-2(d-2)\alpha$ (30) which is solved by $\alpha=\Delta/2$ as expected. $\kappa_{00}$ is a normalization that we can set to unity. Now considering the subleading order terms, we have $\displaystyle 2\kappa_{10}\bigg{(}\alpha(\alpha+1)+\alpha(\alpha-1)-(d-2)\alpha\bigg{)}-2\alpha(\alpha-1)-\alpha(2+\Delta_{34}-\Delta_{12})+\frac{\Delta_{12}\Delta_{34}}{2}=\kappa_{10}\Delta(\Delta-d).$ (31) $\kappa_{10}$ is read off from the equation as $\kappa_{10}=\frac{(\Delta-\Delta_{12})(\Delta+\Delta_{34})}{4\Delta}$ (32) with no dependence on dimension $d$.

### .2 Global conformal block decomposition of Coulomb gas-like correlator

Consider a $G$-function which admits a series expansion at $z,\bar{z}\rightarrow 0$ of the form $G(z,\bar{z})=(z\bar{z})^{p}\sum_{i,j=0}^{\infty}\mathscr{C}_{ij}z^{i}\bar{z}^{j}$ (33) with $\mathscr{C}_{ij}=\mathscr{C}_{ji}$. The Coulomb gas correlator, reproduced in Eq. (47) below, is a particular example, with $2p=\Delta_{\alpha_{1}+\alpha_{2}}\qquad\text{and}\qquad\mathscr{C}_{ij}=\binom{i+d\alpha_{2}\alpha_{3}-1}{i}\binom{j+d\alpha_{2}\alpha_{3}-1}{j}.$ (34) Let us show that the given $G(z,\bar{z})$ can be expanded in terms of global conformal blocks in any dimension $d$ with a single double-trace family $[2p+2n+l,l].$ In other words, we show that $G(z,\bar{z})=\sum_{n,l\geq 0}\mu^{(n,l)}g_{2p+2n+l,l}(z,\bar{z})$ (35) where $\mu^{(n,l)}$ is a product of two OPE coefficients. The proof will be based upon the idea that the expansion Eq. 35 can be uniquely fixed order-by-order in powers of $z$ and $\bar{z}$.
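The subleading coefficient (32) from .1 can be checked with a computer algebra system. The sketch below (assuming sympy is available) solves the subleading Casimir condition for $\kappa_{10}$; the two quadratic terms multiplying $\kappa_{10}$ are written out explicitly with a relative plus sign, which is what consistency with Eqs. (30) and (32) requires:

```python
import sympy as sp

a, d, D12, D34, k10 = sp.symbols('alpha d Delta12 Delta34 kappa10')
Delta = 2*a  # alpha = Delta/2, as fixed at leading order by Eq. (30)

# subleading Casimir condition, cf. Eq. (31)
eq = sp.Eq(
    2*k10*(a*(a + 1) + a*(a - 1) - (d - 2)*a)
    - 2*a*(a - 1) - a*(2 + D34 - D12) + D12*D34/2,
    k10*Delta*(Delta - d),
)

sol = sp.solve(eq, k10)[0]
# reproduces Eq. (32); the dependence on d drops out
assert sp.simplify(sol - (Delta - D12)*(Delta + D34)/(4*Delta)) == 0
```

The $d$-dependence cancels between the cross term of the Casimir operator and the eigenvalue, which is why the result (32) is dimension-independent.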
As noted in the main text, the leading-order behavior of the global conformal blocks in any dimension is $g_{\Delta,l}\sim\mathcal{N}_{d,l}(z\bar{z})^{\Delta/2}\text{Geg}_{l}^{d/2-1}\left(\frac{z+\bar{z}}{2\sqrt{z\bar{z}}}\right)$ (36) where we work with the normalization $\mathcal{N}_{d,l}=\frac{l!}{\left(d/2-1\right)_{l}}.$ (37) Focusing on the double-trace blocks $g_{2p+2n+l,l}$, we know further that the subleading terms should be expandable as a power series $g_{\Delta+2n+l,l}(z,\bar{z})=(z\bar{z})^{\Delta/2}\left(\mathcal{N}_{d,l}(z\bar{z})^{l/2+n}\text{Geg}_{l}^{d/2-1}\left(\frac{z+\bar{z}}{2\sqrt{z\bar{z}}}\right)+\sum_{r+s>(n+l/2)}\frac{\kappa^{(n,l)}_{rs}}{1+\delta_{rs}}z^{r}\bar{z}^{s}\right)$ (38) where $r,s$ are integers and $\kappa_{rs}=\kappa_{sr}$ [52]. The Gegenbauer polynomial for $l=0$ is just 1, so for the leading degree, the block $g_{2p,0}\sim(z\bar{z})^{p}$ has the correct power-law to match $G(z,\bar{z})=\mathscr{C}_{00}(z\bar{z})^{p}+\ldots$. Next, consider degree 1 terms in the series, $(z\bar{z})^{p}(\mathscr{C}_{10}(z+\bar{z})+\ldots)$. On the conformal blocks side, the subleading contribution from the leading block at this degree can be balanced by adding a new term $[2p+1,1]$ with the correct power-law behavior, $g_{2p+1,1}\sim(z\bar{z})^{p+\frac{1}{2}}\left(\frac{z+\bar{z}}{2\sqrt{z\bar{z}}}\right).$ (39) The OPE coefficient is read off by equating the two series up to this order, $\mu^{(0,1)}=\mathscr{C}_{10}-\mu^{(0,0)}\frac{\partial}{\partial z}\left.\frac{g_{2p,0}(z,\bar{z})}{(z\bar{z})^{p}}\right|_{z=\bar{z}=0}=\mathscr{C}_{10}-\mathscr{C}_{00}\frac{(2p-\Delta_{12})(2p+\Delta_{34})}{8p}$ (40) where we have used the subleading coefficient of the spin-zero block derived in .1. Now, consider an arbitrary term in the $G$-function expansion $\sim\mathscr{C}_{ij}z^{i}\bar{z}^{j}$.
There is always a conformal block in the double-trace family (in this case $g_{2p+i+j,i-j}$) with the correct leading behavior to fix the series at this order. Looking closely at Eq. 36, we have (with $j=n$ and $i-j=l$) $\displaystyle g_{2p+2n+l,l}$ $\displaystyle\sim(z\bar{z})^{p+n+l/2}\frac{l!}{(d/2-1)_{l}}\text{Geg}_{l}^{d/2-1}\left(\frac{z+\bar{z}}{2\sqrt{z\bar{z}}}\right)$ (41) $\displaystyle=(z\bar{z})^{p+n+l/2}\frac{l!}{(d/2-1)_{l}}\sum_{k=0}^{\lfloor\frac{l}{2}\rfloor}\frac{\left(d/2-1\right)_{l-k}}{k!(l-2k)!}(-1)^{k}\left(\frac{z+\bar{z}}{\sqrt{z\bar{z}}}\right)^{l-2k}$ (42) $\displaystyle=(z\bar{z})^{p}\sum_{k=0}^{\lfloor\frac{l}{2}\rfloor}\frac{(-1)^{k}(l-2k)_{2k}}{k!(d/2+l-k-1)_{k}}(z\bar{z})^{k+n}(z+\bar{z})^{l-2k}.$ (43) Setting $k=0$ in Eq. (43) yields a term proportional to $(z\bar{z})^{p}(z^{i}\bar{z}^{j})$ with coefficient 1 as desired. There are additional terms even at the leading order in the conformal block, but all of them are of the same degree in $z,\bar{z}$: $i+j$. Therefore, for each degree, the terms in the series of the same degree must be fixed in descending order in $l$, so that the $l=0$ or $l=1$ term is fixed last. For example, in the quadratic degree, there are two terms in the G-function series, $\mathscr{C}_{20}(z\bar{z})^{p}(z^{2}+\bar{z}^{2})$ and $\mathscr{C}_{11}(z\bar{z})^{p}(z\bar{z})$. Of the two corresponding blocks $g_{2p+2,2}$ and $g_{2p+2,0}$, $g_{2p+2,2}\sim(z\bar{z})^{p}(z^{2}+\bar{z}^{2}+z\bar{z})$ contributes to the $z\bar{z}$ term but the block $g_{2p+2,0}\sim(z\bar{z})^{p+1}$ does not feed back into the former. 
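The quadratic-degree example above can be checked directly: with the normalization (37), the leading Gegenbauer factor of $g_{2p+2,2}$ in $d=4$, stripped of the overall $(z\bar{z})^{p}$, expands to exactly $z^{2}+\bar{z}^{2}+z\bar{z}$. A short sympy sketch:

```python
import sympy as sp

z, zb = sp.symbols('z zbar', positive=True)
d, l = 4, 2

# normalization N_{d,l} = l! / (d/2 - 1)_l, Eq. (37); rf is the rising factorial
N = sp.factorial(l) / sp.rf(sp.Rational(d, 2) - 1, l)

# leading behavior of the block, Eq. (36), without the (z zbar)^{Delta/2} prefactor
x = (z + zb) / (2*sp.sqrt(z*zb))
lead = sp.expand(N * sp.gegenbauer(l, sp.Rational(d, 2) - 1, x) * (z*zb)**sp.Rational(l, 2))

# g_{2p+2,2} ~ (z zbar)^p (z^2 + zbar^2 + z zbar), as quoted in the text
assert sp.simplify(lead - (z**2 + zb**2 + z*zb)) == 0
```

Changing `l` to other spins shows the same structure: the top term of $(z+\bar{z})^{l}$ always enters with unit coefficient, which is what makes the order-by-order fixing of $\mu^{(n,l)}$ unique.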
In summary, the recursive process can be codified into a formula for calculating $\mu^{(n,l)}$ given $\mathscr{C}_{ij}$ as $\mu^{(j,i-j)}=\mathscr{C}_{ij}-\frac{1}{i!j!}\frac{\partial^{i}}{\partial z^{i}}\frac{\partial^{j}}{\partial\bar{z}^{j}}\left(\frac{1}{(z\bar{z})^{p}}\left[\sum_{2\alpha<(i+j)}\sum_{\beta=0}^{i+j-1-2\alpha}\mu^{(\alpha,\beta)}g_{2p+2\alpha+\beta,\beta}+\sum_{\alpha=0}^{j-1}\mu^{(\alpha,i+j-2\alpha)}g_{2p+i+j,i+j-2\alpha}\right]\right)_{z=\bar{z}=0}.$ (44) Using the form of double-trace blocks in Eq. 38, this becomes $\mu^{(j,i-j)}=\mathscr{C}_{ij}-\left[\sum_{2\alpha<(i+j)}\sum_{\beta=0}^{i+j-1-2\alpha}\mu^{(\alpha,\beta)}\kappa^{(\alpha,\beta)}_{ij}+\sum_{\alpha=0}^{j-1}\mu^{(\alpha,i+j-2\alpha)}\mathscr{K}_{\alpha}\right].$ (45) where the numbers $\mathscr{K}_{\alpha}$ are given by $\mathscr{K}_{\alpha}=\sum_{k=j-\alpha}^{\lfloor\frac{i+j}{2}-\alpha\rfloor}\binom{i+j-2\alpha-2k}{i-\alpha-k}\frac{(-1)^{k}(i+j-2\alpha-2k)_{2k}}{k!(d/2+i+j-2\alpha-k-1)_{k}}.$ (46) Of course, to calculate $\mu^{(j,i-j)}$ more explicitly one requires the coefficients $\kappa_{ij}$ in the definition of the conformal blocks, which are not available in generic dimensions. But we still have the result:

> Any $G$-function of the form $G(z,\bar{z})=(z\bar{z})^{p}f(z,\bar{z})$, where $f(z,\bar{z})$ has a convergent Taylor series expansion at $z=\bar{z}=0$, can be expanded in conformal blocks from a single twist family $[2p+2n+l,l]$.

### .3 Coulomb gas OPE coefficients

Following the procedure outlined in .2, we performed the global conformal block expansion of the Coulomb gas $G$-function in $d=2$ and $d=4$. In this section we present some interesting observations and the first few coefficients in the OPE of identical scalars.
Consider the Coulomb gas $G$-function reproduced here for convenience, $G(u,v)=u^{\frac{d}{2}(\alpha_{3}+\alpha_{4})}v^{-d\alpha_{2}\alpha_{3}}.$ (47) One sees that it can be rewritten as a series expansion of the kind discussed in .2, $G(z,\bar{z})=(z\bar{z})^{\frac{\Delta_{\alpha_{1}+\alpha_{2}}}{2}}\left(\sum_{i,j=0}^{\infty}\binom{i+d\alpha_{2}\alpha_{3}-1}{i}\binom{j+d\alpha_{2}\alpha_{3}-1}{j}z^{i}\bar{z}^{j}\right)$ (48) where we have used $\Delta_{\alpha_{1}+\alpha_{2}}=d(\alpha_{1}+\alpha_{2})(\alpha_{3}+\alpha_{4})$ and charge neutrality. At leading order in any $d$, the coefficient $\mathscr{C}_{00}=1$ in the series above. Thus the leading block in the expansion is $[\Delta_{\alpha_{1}+\alpha_{2}},0]$ with coefficient $\mu^{(0,0)}=1$. At the next degree, the only block to be considered is $[\Delta_{\alpha_{1}+\alpha_{2}}+1,1]$, with the product OPE coefficient given by Eq. 40, which turns out to be $\mu^{(0,1)}=\binom{d\alpha_{2}\alpha_{3}}{1}-\frac{(\Delta_{\alpha_{1}+\alpha_{2}}-\Delta_{12})(\Delta_{\alpha_{1}+\alpha_{2}}+\Delta_{34})}{4\Delta_{\alpha_{1}+\alpha_{2}}}.$ (49) As the key argument of the main text anticipates, we notice upon simplification that, generically, for all Coulomb gas theories, $\mu^{(0,1)}=0.$ (50) Let us stress that for $d=2$, it is expected that this OPE coefficient vanishes in the global block expansion. It is a manifestation of the result that in a Verma module there are no quasiprimaries at level 1. But as we have shown, this fact explicitly generalizes to higher dimensions for Coulomb gas theories. Computing the expansion to higher degrees in $d=2$, we observe that the twist-two $(n=1)$ family of operators $[\Delta_{\alpha_{1}+\alpha_{2}}+l+2,l]$ is missing from the OPE (their OPE coefficients vanish for any $G$-function, independent of $\alpha_{i}$). Additionally, as we expect, in the OPE of identical operators the coefficients of odd-spin operators vanish.
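The vanishing (50) can be verified symbolically. A hedged sympy sketch: it assumes the Coulomb gas dimensions take the form $\Delta_{\alpha}=d\,\alpha(Q-\alpha)$, so that $\Delta_{\alpha_{1}+\alpha_{2}}=d(\alpha_{1}+\alpha_{2})(\alpha_{3}+\alpha_{4})$ under the neutrality condition $\sum_{i}\alpha_{i}=Q$ used above:

```python
import sympy as sp

d, Q, a1, a2, a3 = sp.symbols('d Q alpha1 alpha2 alpha3')
a4 = Q - a1 - a2 - a3                     # charge neutrality

def Dim(a):
    """Coulomb gas dimension Delta_a = d*a*(Q - a) (assumption stated above)."""
    return d*a*(Q - a)

Ds = Dim(a1 + a2)                          # Delta_{alpha1+alpha2}
D12 = Dim(a1) - Dim(a2)
D34 = Dim(a3) - Dim(a4)

# Eq. (49), using binom(d a2 a3, 1) = d a2 a3
mu01 = d*a2*a3 - (Ds - D12)*(Ds + D34)/(4*Ds)
assert sp.simplify(mu01) == 0              # Eq. (50), for arbitrary charges
```

The cancellation holds identically in $d$, $Q$ and all three independent charges, in line with the statement that the vanishing is generic for Coulomb gas theories.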
For example, $\mu^{(0,3)}=\frac{2\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}(\alpha_{1}-\alpha_{2})(\alpha_{3}-\alpha_{4})}{3(1+(\alpha_{1}+\alpha_{2})(\alpha_{3}+\alpha_{4}))(2+(\alpha_{1}+\alpha_{2})(\alpha_{3}+\alpha_{4}))}$ (51) and we see that $\mu^{(0,3)}=0$ if $\alpha_{1}=\alpha_{2}$ or $\alpha_{3}=\alpha_{4}$. The non-zero coefficients in the OPE of two identical scalars are tabulated in Table 1. (For the case of non-identical scalars, see [53].)

$(n,l)$ | $(\lambda^{(n,l)})^{2}$
---|---
$(0,0)$ | 1
$(0,2)$ | $\dfrac{2\alpha^{4}}{8\alpha^{2}+1}$
$(0,4)$ | $\dfrac{\alpha^{4}(2\alpha^{2}+1)^{2}}{2(64\alpha^{4}+64\alpha^{2}+15)}$
$(2,0)$ | $\dfrac{4\alpha^{8}}{(8\alpha^{2}+1)^{2}}$
$(0,6)$ | $\dfrac{\alpha^{4}(2\alpha^{4}+3\alpha^{2}+1)^{2}}{3(8\alpha^{2}+5)(8\alpha^{2}+7)(8\alpha^{2}+9)}$
$(2,2)$ | $\dfrac{\alpha^{8}(2\alpha^{2}+1)^{2}}{(8\alpha^{2}+1)(8\alpha^{2}+3)(8\alpha^{2}+5)}$
$(0,8)$ | $\dfrac{\alpha^{4}(4\alpha^{6}+12\alpha^{4}+11\alpha^{2}+3)^{2}}{24(8\alpha^{2}+7)(8\alpha^{2}+9)(8\alpha^{2}+11)(8\alpha^{2}+13)}$
$(2,4)$ | $\dfrac{2\alpha^{8}(2\alpha^{4}+3\alpha^{2}+1)^{2}}{3(8\alpha^{2}+1)(8\alpha^{2}+5)(8\alpha^{2}+7)(8\alpha^{2}+9)}$
$(4,0)$ | $\dfrac{\alpha^{8}(2\alpha^{2}+1)^{4}}{4(64\alpha^{4}+64\alpha^{2}+15)^{2}}$

Table 1: All non-zero OPE coefficients up to degree 8 in $d=2$ for the Coulomb gas correlator $\langle V_{\alpha}V_{\alpha}V_{\alpha}V_{\alpha}\rangle$ in the $s$-channel, with $\alpha\equiv Q/4$ to satisfy charge neutrality.

Similarly, for $d=4$, we obtain the global conformal block expansion of the $G$-function. We notice that the odd-spin OPE coefficients still vanish, but the twist-two family is no longer missing in the OPE. We also see that not all OPE coefficient squares are positive definite, owing to the non-unitary nature of the theory. The coefficients are tabulated in Table 2.
$(n,l)$ | $(\lambda^{(n,l)})^{2}$
---|---
$(0,0)$ | 1
$(0,2)$ | $\dfrac{8\alpha^{4}}{16\alpha^{2}+1}$
$(1,0)$ | $\dfrac{8\alpha^{4}}{1-16\alpha^{2}}$
$(0,4)$ | $\dfrac{2\alpha^{4}(4\alpha^{2}+1)^{2}}{(16\alpha^{2}+3)(16\alpha^{2}+5)}$
$(1,2)$ | $-\dfrac{2\alpha^{4}(4\alpha^{2}+1)^{2}}{3+64(4\alpha^{4}+\alpha^{2})}$
$(2,0)$ | $\dfrac{64\alpha^{8}}{256\alpha^{4}-1}$
$(0,6)$ | $\dfrac{4\alpha^{4}(8\alpha^{4}+6\alpha^{2}+1)^{2}}{3(16\alpha^{2}+5)(16\alpha^{2}+7)(16\alpha^{2}+9)}$
$(1,4)$ | $-\dfrac{4\alpha^{4}(8\alpha^{4}+6\alpha^{2}+1)^{2}}{3(16\alpha^{2}+3)(16\alpha^{2}+5)(16\alpha^{2}+7)}$
$(2,2)$ | $\dfrac{16\alpha^{8}(4\alpha^{2}+1)^{2}}{(16\alpha^{2}-1)(16\alpha^{2}+3)(16\alpha^{2}+5)}$
$(3,0)$ | $-\dfrac{16\alpha^{8}(4\alpha^{2}+1)^{2}}{(16\alpha^{2}+1)^{2}(16\alpha^{2}+3)}$

Table 2: All non-zero OPE coefficients up to degree 6 in $d=4$ for the Coulomb gas correlator $\langle V_{\alpha}V_{\alpha}V_{\alpha}V_{\alpha}\rangle$ in the $s$-channel, with $\alpha\equiv Q/4$ to satisfy charge neutrality.
# Yang-Gaudin model: A paradigm of many-body physics

Xi-Wen Guan, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, APM, Chinese Academy of Sciences, Wuhan 430071, China; Department of Theoretical Physics, Research School of Physics and Engineering, Australian National University, Canberra ACT 0200, Australia. Hai-Qing Lin, Beijing Computational Science Research Center, Beijing 100193, China; Department of Physics, Beijing Normal University, Beijing 100875, China.

###### Abstract

Using Bethe’s hypothesis, C N Yang exactly solved the one-dimensional (1D) delta-function interacting spin-1/2 Fermi gas with an arbitrary spin imbalance in 1967. At that time, using a different method, M Gaudin solved the problem of interacting fermions in the spin-balanced case. Later, the 1D delta-function interacting fermion problem was named the Yang-Gaudin model. It is generally agreed that a key discovery of C N Yang’s work was the cubic matrix equation for the solvability conditions. This equation was later independently found by R J Baxter for commuting transfer matrices of 2D exactly solvable vertex models. The equation has since been referred to as the Yang-Baxter equation, the master equation of integrability. The Yang-Baxter equation has been used to solve a wide range of 1D many-body problems in physics, such as the 1D Hubbard model, $SU(N)$ Fermi gases, the Kondo impurity problem and strongly correlated electronic systems. In this paper, we briefly discuss recent developments of the Yang-Gaudin model on several breakthroughs in many-body phenomena, ranging from universal thermodynamics to the Luttinger liquid, spin-charge separation, the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO)-like pairing state and quantum criticality. These developments demonstrate that the Yang-Gaudin model has laid out a profound legacy of the Yang-Baxter equation.
###### pacs: 03.75.Ss, 03.75.Hh, 02.30.Ik, 05.30.Fk

The Bethe ansatz (BA), i.e. a particular form of wavefunction, was first introduced in 1931 by Hans Bethe Bethe:1931 as a way to obtain the eigenspectrum of the one-dimensional (1D) spin-1/2 Heisenberg chain. In Bethe’s method, the $N!$ plane waves are $N$-fold products of individual exponential phase factors $e^{\mathrm{i}k_{i}x_{j}}$, where the $N$ distinct wave numbers, $\left\\{k_{i}\right\\}$, are permuted among the $N$ distinct coordinates, $x_{j}$. Each of the $N!$ plane waves has an amplitude coefficient in each of the regions, i.e. the wave function is a superposition of all possible plane waves of $N$ particles. However, it was only a few decades later that physicists, including L Hulthén Hulthen , R Orbach Orbach , L R Walker Walker , R B Griffith Griffith , J des Cloizeaux and Pearson Cloizeau , and a few others, studied Bethe’s method and the Heisenberg chain in terms of Bethe’s solution. In the mid-1960s, C N Yang and C P Yang YY-1 ; YY-2 ; YY-3 presented a systematic study of the BA equations for the Heisenberg spin chain throughout the full range of the anisotropy parameter $\Delta$ in the presence of a magnetic field; in these works they coined the term Bethe’s hypothesis for Bethe’s method. In the 1960s, Bethe’s hypothesis proved invaluable for the field of exactly solvable models in statistical mechanics. In 1963, using Bethe’s hypothesis, E Lieb and W Liniger Lieb-Liniger solved the 1D Bose gas with a delta-function interaction. In 1967, C N Yang Yang solved the 1D delta-function interacting Fermi gas, discovering the necessary condition for Bethe ansatz solvability, which is now known as the Yang-Baxter equation, i.e. the factorization condition: the scattering matrix of a quantum many-body system can be factorized into a product of many two-body scattering matrices. In the same year, M Gaudin also rigorously derived the BA equations for the spin-balanced spin-$1/2$ Fermi gas Gaudin .
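The two-body matching that underlies Bethe’s hypothesis can be illustrated in the simplest (bosonic, Lieb-Liniger) case. The sketch below is an illustration, not taken from the text: it assumes units $2m=\hbar=1$ with coupling $g_{1D}=2c$, so the $\delta$-interaction imposes the derivative-jump condition $2\varphi'(0^{+})=c\,\varphi(0)$ on the symmetric relative wave function, and checks that the standard two-body amplitude ratio satisfies it:

```python
import sympy as sp

r = sp.symbols('r', real=True)
k1, k2, c = sp.symbols('k1 k2 c', real=True, positive=True)

kap = (k2 - k1)/2                                 # relative momentum
# two-body scattering amplitude ratio (Lieb-Liniger form, in this convention)
B = (sp.I*(k2 - k1) - c)/(sp.I*(k2 - k1) + c)

# relative wave function in the region r = x2 - x1 > 0
phi = sp.exp(sp.I*kap*r) + B*sp.exp(-sp.I*kap*r)

# delta-function matching: 2 phi'(0+) = c phi(0)   (units 2m = hbar = 1, g_1D = 2c)
lhs = 2*sp.diff(phi, r).subs(r, 0)
rhs = c*phi.subs(r, 0)
assert sp.simplify(lhs - rhs) == 0
```

Continuity plus this single jump condition fixes the amplitude ratio for every pair of colliding particles, which is exactly why the $N$-body wave function can be built from two-body data.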
In 1972, R J Baxter Baxter independently showed that such a factorization relation also occurred as the condition for commuting transfer matrices in 2D vertex models in statistical mechanics. The Bethe ansatz approach has since found success in the realm of condensed matter physics, such as the 1D Hubbard model Lieb-Wu , $SU(N)$ interacting Fermi gases Sutherland:1968 , Kondo impurity problems Andrei:1983 , BCS pairing models Dukelsky:2004 , strongly correlated electron systems 1D-Hubbard ; Korepin ; Sutherland-book ; Takahashi-b ; Wang-book , spin chain and ladder compounds Batchelor:2007 , quantum gases of cold atoms Cazalilla:2011 ; yangyou ; Guan:2013 ; Batchelor:2016 ; Mistakidis:2022 , and also many other problems in physics and mathematics. The next significant progress was made by C N Yang and C P Yang Yang-Yang in 1969 on the finite temperature problem for the Lieb-Liniger Bose gas. They showed that the thermodynamics of the model can be determined from the minimization conditions of the Gibbs free energy subject to the Bethe ansatz equations. Later Takahashi showed Takahashi:1971 ; Takahashi:1972 that the Yang-Yang method was an elegant way to analytically obtain the thermodynamics of integrable models, e.g. 1D Heisenberg spin chains, the Hubbard model, etc. Recent developments on exactly solvable models in ultracold atoms have shown Guan:2013 ; Zhao ; Guan2007 ; Guan:2013PRL ; Yang:2017 ; He:2020 ; PhysRevB.101.035149 that the Yang-Yang method provides an elegant way to study universal thermodynamics, Luttinger liquids, spin-charge separation, transport properties and critical phenomena for a wide range of low-dimensional quantum many-body systems. In this short review, we shall discuss how the exact solution of the Yang-Gaudin model provides a rigorous understanding of such fundamental many-body physics in terms of the legacy of the Yang-Baxter equation and Yang-Yang thermodynamics.
In this paper, we present the Bethe ansatz solution of the Yang-Gaudin model in Section I and discuss the ground state, spin charge separation, universal thermodynamics and quantum criticality for the model with a repulsive interaction in Section II. In Section III, we briefly review the novel quantum phases of pairing and depairing, the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) pairing correlation, multicomponent Luttinger liquids and dimensionless ratios for the Yang-Gaudin model with an attractive interaction. A brief conclusion and a short discussion of future studies of the Yang-Gaudin model are presented in Section IV. ## I I. The Yang-Gaudin Model The Hamiltonian for the 1D contact-interacting fermion problem Yang ; Gaudin $H=-\frac{\hbar^{2}}{2m}\sum_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{2}}+g_{1D}\sum_{1\leq i<j\leq N}\delta(x_{i}-x_{j})$ (1) describes $N$ fermions of the same mass $m$ with two internal spin states, confined to a 1D system of length $L$ and interacting via a $\delta$-function potential. In this Hamiltonian, we denote the numbers of fermions in the two hyperfine levels $|\uparrow\rangle$ and $|\downarrow\rangle$ as $N_{\uparrow}$ and $N_{\downarrow}$, respectively, and the total number of fermions and the magnetization as $N=N_{\uparrow}+N_{\downarrow}$ and $M^{z}=(N_{\uparrow}-N_{\downarrow})/2$. The coupling constant can be expressed in terms of the interaction strength $c$ via $g_{1D}=\hbar^{2}c/m$ with $c=-2/a_{1D}$, where $a_{1D}$ is the effective 1D scattering length Olshanii_PRL_1998 . In the following discussion, we set $2m=\hbar=1$ for convenience. We define a dimensionless interaction strength $\gamma=c/n$ for our later physical analysis. Here the linear density is defined by $n=N/L$. For a repulsive interaction, $c>0$, and for an attractive interaction, $c<0$. 
Using Bethe’s hypothesis, C. N. Yang solved the model (1) with the following many-body wave function $\psi=\sum_{P}A_{\sigma_{1}\ldots\sigma_{N}}(P|Q)\exp\textrm{i}(k_{P1}x_{Q1}+\ldots+k_{PN}x_{QN})$ (2) for the domain $0<x_{Q1}<x_{Q2}<\ldots<x_{QN}<L$, where $\\{k_{i}\\}$ denote a set of unequal wave numbers and $\sigma_{i}$ with $i=1,\ldots,N$ indicate the spin coordinates. Both $P$ and $Q$ are permutations of the indices $\\{1,2,\ldots,N\\}$, i.e. $P=\left\\{P_{1},\ldots,P_{N}\right\\}$ and $Q=\left\\{Q_{1},\ldots,Q_{N}\right\\}$. The sum runs over all $N!$ permutations $P$, and the coefficients of the exponentials are column vectors with each of the $N!$ components representing a permutation $Q$. To determine the wave function associated with the irreducible representations of the permutation group $S_{N}$ and the irreducible representation of the Young tableau for different up- and down-spin fermions, one first needs to consider the boundary conditions of the wave function, i.e. the continuity of the wave function and the discontinuity of its derivative. It follows that the two adjacent coefficients must satisfy the two-body scattering relation $A_{\sigma_{1}\ldots\sigma_{N}}(P_{i}P_{j}|Q_{i}Q_{j})=Y_{ij}(k_{Pj}-k_{Pi})A_{\sigma^{\prime}_{1}\ldots\sigma^{\prime}_{N}}(P_{j}P_{i}|Q_{i}Q_{j}),$ (3) where the two-body scattering matrix $Y_{ij}$ is given by $Y_{ij}(k_{Pj}-k_{Pi})=\left[\frac{\textrm{i}(k_{Pj}-k_{Pi})T_{ij}+cI}{\textrm{i}(k_{Pj}-k_{Pi})-c}\right]^{\sigma^{\prime}_{1}\ldots\sigma^{\prime}_{N}}_{\sigma_{1}\ldots\sigma_{N}}$ (4) with the operator $T_{ij}=-P_{ij}$, where $P_{ij}$ is the permutation operator. 
The key discovery of C. N. Yang’s work is that the two-body scattering matrix acting on the three linear tensor spaces $V_{1}\otimes V_{2}\otimes V_{3}$, $Y_{ij}(u)=\frac{\textrm{i}uT_{ij}+cI}{\textrm{i}u-c}$ (5) satisfies the following cubic equation $\displaystyle Y_{12}(k_{2}-k_{1})Y_{23}(k_{3}-k_{1})Y_{12}(k_{3}-k_{2})$ $\displaystyle=Y_{23}(k_{3}-k_{2})Y_{12}(k_{3}-k_{1})Y_{23}(k_{2}-k_{1}),$ (6) which has become known as the Yang-Baxter equation. This equation guarantees that there is no diffraction in the three-particle scattering process, i.e. $(k^{\prime}_{1},k^{\prime}_{2},k^{\prime}_{3})=(k_{1},k_{2},k_{3})$. In a graphical representation, the Yang-Baxter equation expresses a kind of topological invariance under interchanging the three operators along the two paths, see Fig. 1. This equation was immediately seen as a necessary condition for quantum integrability. Figure 1: The graphical representation of the Yang-Baxter equation Eq. (6). In general, the scattering matrix $Y_{ab}(u)$ satisfies the following relations: $Y_{ab}(u)Y_{cd}(v)=Y_{cd}(v)Y_{ab}(u)$, $Y_{ab}(u)Y_{bc}(u+v)Y_{ab}(v)=Y_{bc}(v)Y_{ab}(u+v)Y_{bc}(u)$, and $Y_{ab}(u)Y_{ba}(-u)=1$. Let us further introduce the operator $R_{ij}=T_{ij}Y_{ij}$, which acts on the space $\prod_{n=1}^{N}\otimes V_{n}$. 
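As a concreteness check, the cubic relation (6) and the unitarity relation $Y_{ab}(u)Y_{ba}(-u)=1$ can be verified numerically for the operator of Eq. (5) acting on $(\mathbb{C}^{2})^{\otimes 3}$. The sketch below (plain Python; the helper names are illustrative, not from the original papers) builds $T_{ij}=-P_{ij}$ from $8\times 8$ permutation matrices:

```python
DIM = 8  # (C^2)^{tensor 3}; basis index i = 4*s1 + 2*s2 + s3, s_i in {0,1}

def perm_matrix(a, b):
    """Permutation operator P_ab exchanging tensor legs a and b."""
    M = [[0.0] * DIM for _ in range(DIM)]
    for i in range(DIM):
        s = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
        s[a], s[b] = s[b], s[a]
        M[(s[0] << 2) | (s[1] << 1) | s[2]][i] = 1.0
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(DIM)) for j in range(DIM)]
            for i in range(DIM)]

def Y(u, P, c):
    """Eq. (5): Y(u) = (i*u*T + c*I) / (i*u - c), with T = -P."""
    d = 1j * u - c
    return [[(-1j * u * P[i][j] + (c if i == j else 0.0)) / d
             for j in range(DIM)] for i in range(DIM)]

def max_diff(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(DIM) for j in range(DIM))

c = 1.7
k1, k2, k3 = 0.3, -0.9, 2.1  # arbitrary real quasimomenta
P12, P23 = perm_matrix(0, 1), perm_matrix(1, 2)

# Eq. (6): Y12(k2-k1) Y23(k3-k1) Y12(k3-k2) = Y23(k3-k2) Y12(k3-k1) Y23(k2-k1)
lhs = matmul(Y(k2 - k1, P12, c), matmul(Y(k3 - k1, P23, c), Y(k3 - k2, P12, c)))
rhs = matmul(Y(k3 - k2, P23, c), matmul(Y(k3 - k1, P12, c), Y(k2 - k1, P23, c)))
ybe_err = max_diff(lhs, rhs)

# unitarity: Y_ab(u) Y_ba(-u) = 1
I8 = [[1.0 if i == j else 0.0 for j in range(DIM)] for i in range(DIM)]
uni_err = max_diff(matmul(Y(0.7, P12, c), Y(-0.7, P12, c)), I8)
```

Both residuals vanish to machine precision for any choice of real $k_{1},k_{2},k_{3}$ and $c$, as guaranteed by Yang's factorization condition.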
Then we define the following monodromy matrix: $\mathcal{T}_{N}(u)=L_{N}(k_{N}-u)\ldots L_{2}(k_{2}-u)L_{1}(k_{1}-u)$ (7) where $L_{i}(k_{i}-u)\equiv R_{i,a}(k_{i}-u)$. Taking the trace over the auxiliary space $a$ defines the transfer matrix $\tau(u)=\textrm{Tr}_{a}(\mathcal{T}_{N}(u))$, which satisfies $\tau(u)|_{u=k_{i}}=\mathfrak{R}_{i}(k_{i}),$ (8) where $\displaystyle\mathfrak{R}_{i}(k_{i})$ $\displaystyle=$ $\displaystyle R_{i+1,i}(k_{i+1}-k_{i})\ldots R_{N,i}(k_{N}-k_{i})R_{1,i}(k_{1}-k_{i})\ldots$ (9) $\displaystyle\times R_{i-1,i}(k_{i-1}-k_{i}).$ The above notations are borrowed from the quantum inverse scattering method, which was developed in the 1980s by Faddeev and others Korepin . To solve the eigenvalue problem of $N$ interacting particles (1) in a periodic box of length $L$, one needs to apply the periodic boundary condition $\psi(x_{1},\ldots,x_{i},\ldots,x_{N})=\psi(x_{1},\ldots,x_{i}+L,\ldots,x_{N}).$ (10) This is equivalent to solving the following eigenvalue problem $\mathfrak{R}_{i}(k_{i})A_{E}(P|Q)=\exp(\textrm{i}k_{i}L)A_{E}(P|Q)$ (11) with the transfer matrix $\mathfrak{R}_{i}(k_{i})$ given by (9). By some algebraic manipulations, the eigenvalue of the transfer matrix $\tau(u)=\textrm{Tr}_{a}(\mathcal{T}_{N}(u))$ can be obtained straightforwardly. One then arrives at C. N. Yang’s Bethe ansatz equations (BAE) Yang for the Fermi gas $\displaystyle{\rm e}^{ik_{j}L}=\prod_{\alpha=1}^{M}\frac{k_{j}-\lambda_{\alpha}+ic/2}{k_{j}-\lambda_{\alpha}-ic/2},$ (12) $\displaystyle\prod_{j=1}^{N}\frac{\lambda_{\alpha}-k_{j}+ic/2}{\lambda_{\alpha}-k_{j}-ic/2}=-\prod_{\beta=1}^{M}\frac{\lambda_{\alpha}-\lambda_{\beta}+ic}{\lambda_{\alpha}-\lambda_{\beta}-ic},$ (13) with $j=1,2,\cdots,N$ and $\alpha=1,2,\cdots,M$. Here $M$ is the number of atoms with down spins. The energy eigenspectrum is given in terms of the quasimomenta $\left\\{k_{i}\right\\}$ of the fermions via $E=\frac{\hbar^{2}}{2m}\sum_{j=1}^{N}k_{j}^{2}$. 
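For the simplest nontrivial case, $N=2$, $M=1$ (two fermions forming a spin singlet), symmetry fixes $\lambda=0$ and $k_{2}=-k_{1}=-k$, and the logarithmic form of Eqs. (12) and (13) collapses to the single equation $kL=\pi-2\arctan(2k/c)$ for the ground state. A minimal bisection solver (assuming $L=1$ for illustration):

```python
import math

def solve_two_body(c, L=1.0, tol=1e-12):
    """Ground-state root of k*L = pi - 2*atan(2k/c), bracketed on (0, pi/L)."""
    f = lambda k: k * L - math.pi + 2.0 * math.atan(2.0 * k / c)
    lo, hi = 0.0, math.pi / L  # f(lo) = -pi < 0 and f(hi) > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_weak = solve_two_body(0.1)    # weak repulsion: smaller k (nearly free pair)
k_strong = solve_two_body(1e6)  # strong repulsion: k -> pi/L (fermionized)
energy = lambda k: 2.0 * k * k  # E = k^2 + (-k)^2
```

In the Tonks-like limit $c\to\infty$ the root approaches $\pi/L$, reproducing the free spinless-fermion quantization, while for $c\to 0^{+}$ it drops toward zero.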
All quasimomenta $\left\\{k_{i}\right\\}$ are distinct and uniquely determine the wave function of the model, Eq. (2). The fundamental physics of the model (1) can in principle be obtained by solving the transcendental Bethe ansatz equations (12) and (13). However, the difficulty with this interacting fermion problem lies in finding all the solutions to these Bethe ansatz equations. In the following discussion, we briefly review recent breakthroughs in the study of the Bethe ansatz solutions of the Yang-Gaudin model (1). ## II II. Spin Charge Separation and Spin Incoherent Liquid for the Repulsive Fermi Gas Finding the solution of the Bethe ansatz equations (12) and (13) is cumbersome. In the thermodynamic limit, i.e., $L,N\to\infty$ with $N/L$ finite, the above Bethe ansatz equations can be written as the generalized Fredholm equations $\displaystyle{\rho}(k)$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi}+\int_{-B_{2}}^{B_{2}}a_{1}(k-\lambda){\sigma_{1}}(\lambda)d\lambda,$ (14) $\displaystyle{\sigma_{1}}(\lambda)$ $\displaystyle=$ $\displaystyle\int_{-B_{1}}^{B_{1}}a_{1}(\lambda-k){\rho}(k)dk$ (15) $\displaystyle-\int_{-B_{2}}^{B_{2}}a_{2}(\lambda-\lambda^{\prime}){\sigma_{1}}(\lambda^{\prime})d\lambda^{\prime}.$ The associated integration boundaries $B_{1}$, $B_{2}$ are determined by the relations $\displaystyle n$ $\displaystyle\equiv$ $\displaystyle N/L=\int_{-B_{1}}^{B_{1}}{\rho}(k)dk,$ $\displaystyle n_{\downarrow}$ $\displaystyle\equiv$ $\displaystyle N_{\downarrow}/L=\int_{-B_{2}}^{B_{2}}{\sigma_{1}}(k)dk,$ (16) where $n$ denotes the linear density and $n_{\downarrow}$ is the density of spin-down fermions. In the above equations we introduced the quasimomentum distribution function $\rho(k)$ and the distribution function of the spin rapidity $\sigma_{1}(\lambda)$ for the ground state. 
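The coupled Fredholm equations (14) and (15) are well suited to fixed-point iteration on a quadrature grid. A minimal sketch (pure Python; the Lorentzian kernel $a_{\ell}(x)=\frac{1}{2\pi}\frac{\ell c}{(\ell c/2)^{2}+x^{2}}$ is taken from Eq. (17), and the values $c=2$, $B_{1}=B_{2}=1$ are arbitrary illustrative choices rather than a self-consistent solution of Eq. (16)):

```python
import math

def kernel(ell, c, x):
    # a_ell(x) = (1/2pi) * ell*c / ((ell*c/2)^2 + x^2)
    return (ell * c / (2.0 * math.pi)) / ((ell * c / 2.0) ** 2 + x * x)

def trapz(vals, h):
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

c, B1, B2, m = 2.0, 1.0, 1.0, 201
ks = [-B1 + 2.0 * B1 * i / (m - 1) for i in range(m)]
ls = [-B2 + 2.0 * B2 * i / (m - 1) for i in range(m)]
hk, hl = ks[1] - ks[0], ls[1] - ls[0]

rho = [1.0 / (2.0 * math.pi)] * m  # quasimomentum distribution rho(k)
sig = [0.0] * m                    # spin-rapidity distribution sigma_1(lambda)

for _ in range(200):  # fixed-point iteration of Eqs. (14)-(15)
    rho = [1.0 / (2.0 * math.pi)
           + trapz([kernel(1, c, k - l) * s for l, s in zip(ls, sig)], hl)
           for k in ks]
    sig = [trapz([kernel(1, c, l - k) * r for k, r in zip(ks, rho)], hk)
           - trapz([kernel(2, c, l - lp) * s for lp, s in zip(ls, sig)], hl)
           for l in ls]

n = trapz(rho, hk)       # density, Eq. (16)
n_down = trapz(sig, hl)  # spin-down density, Eq. (16)
energy = trapz([k * k * r for k, r in zip(ks, rho)], hk)  # Eq. (18)
```

With a finite $B_{2}$ the state is partially polarized, so $n_{\downarrow}<n/2$; both densities stay positive, and the energy is bounded by $B_{1}^{2}n$ since $k^{2}\leq B_{1}^{2}$ on the integration interval.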
The boundary $B_{1}$ characterizes the Fermi point in the quasimomentum space, whereas the boundary $B_{2}$ characterizes the spin rapidity distribution interval with respect to the polarization. They can be obtained by solving the equations given in (16). In the above equations, we denoted the kernel function by $a_{\ell}(x)=\frac{1}{2\pi}\frac{\ell c}{(\ell c/2)^{2}+x^{2}}.$ (17) The ground state energy per unit length is given by $E=\int_{-B_{1}}^{B_{1}}k^{2}{\rho}(k)dk.$ (18) The ground state energy for weak and strong interactions can be calculated directly from the integral forms of the Bethe ansatz equations (14) and (15), namely $\displaystyle E$ $\displaystyle=$ $\displaystyle\frac{1}{12}n^{3}\pi^{2}+\frac{1}{2}n^{2}c+O(c^{2}),\,\,{\rm for}\,c\ll 1,$ (19) $\displaystyle E$ $\displaystyle=$ $\displaystyle\frac{n^{3}\pi^{2}}{3}\left[1-\frac{4\ln 2}{\gamma}+\frac{12(\ln 2)^{2}}{\gamma^{2}}-\frac{32(\ln 2)^{3}}{\gamma^{3}}\right.$ (20) $\displaystyle\left.+\frac{8\pi^{2}\zeta(3)}{5\gamma^{3}}\right]+O(c^{-4})\,\,{\rm for}\,c\gg 1.$ This is a good approximation for the spin-balanced Fermi gas with weakly and strongly repulsive interactions. A more detailed study of the solutions of the BA equations was presented in Guan:PRA12 . In general, for a repulsive interaction, the Bethe ansatz equations (12) and (13) only admit real roots in the charge degree of freedom $k_{j}$, whereas in the spin sector the spin string states are given by $\displaystyle\lambda_{\alpha}^{n,j}=\lambda_{\alpha}^{n}+\frac{ic}{2}(n+1-2j),\quad j=1,2,\cdots,n,$ (21) which are called the length-$n$ spin strings. Such a spin structure exhibits rich magnetism similar to that of the 1D Heisenberg spin chain. 
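The weak- and strong-coupling expansions (19) and (20) are straightforward to evaluate numerically. A small sketch (setting $n=1$; `ZETA3` is the numerical value of $\zeta(3)$):

```python
import math

LN2 = math.log(2.0)
ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def e_weak(n, c):
    """Eq. (19): ground-state energy density for c << 1 (balanced gas)."""
    return n ** 3 * math.pi ** 2 / 12.0 + 0.5 * n ** 2 * c

def e_strong(n, c):
    """Eq. (20): strong-coupling expansion in gamma = c/n >> 1."""
    g = c / n
    series = (1.0 - 4.0 * LN2 / g + 12.0 * LN2 ** 2 / g ** 2
              - 32.0 * LN2 ** 3 / g ** 3
              + 8.0 * math.pi ** 2 * ZETA3 / (5.0 * g ** 3))
    return n ** 3 * math.pi ** 2 / 3.0 * series

# c = 0 recovers two free Fermi seas of density n/2 each;
# gamma -> infinity recovers a single spinless ("fermionized") Fermi sea.
free_balanced = math.pi ** 2 / 12.0
tonks = math.pi ** 2 / 3.0
```

At strong coupling the energy approaches the spinless value $\pi^{2}n^{3}/3$ from below, growing monotonically with $\gamma$.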
Accordingly, the Bethe ansatz equations (12) and (13) with the string hypothesis (21) reduce to the following two sets of BA equations associated with the quantum numbers $\\{I_{j}\\}$ and $\\{J_{\alpha}^{n}\\}$ $\displaystyle k_{j}L=2\pi I_{j}-\sum_{n=1}^{\infty}\sum_{\alpha=1}^{M_{n}}\theta\left(\frac{2(k_{j}-\lambda_{\alpha}^{n})}{nc}\right),$ (22) $\displaystyle\sum_{j=1}^{N}\theta\left(\frac{2(k_{j}-\lambda_{\alpha}^{n})}{nc}\right)=2\pi J_{\alpha}^{n}$ $\displaystyle+\sum_{m=1}^{\infty}\sum_{\beta=1}^{M_{m}}\Theta_{mn}\left(\frac{2(\lambda_{\alpha}^{n}-\lambda_{\beta}^{m})}{c}\right),\ $ (23) where $j=1,2,\cdots,N$, $\alpha=1,2,\cdots,M_{n},\;n\geq 1$ and $M_{n}$ is the number of length-$n$ strings, $\theta(x)=2\tan^{-1}(x)$, and the function $\Theta_{mn}(x)$ is defined by $\displaystyle\Theta_{mn}(x)=\left\\{\begin{array}[]{rcl}&&\theta\left(\frac{x}{|n-m|}\right)+2\theta\left(\frac{x}{|n-m|+2}\right)+\cdots\\\ &&+2\theta\left(\frac{x}{n+m-2}\right)+\theta\left(\frac{x}{m+n}\right),\,\,\text{for}\,\,n\neq m,\\\ &&2\theta\left(\frac{x}{2}\right)+2\theta\left(\frac{x}{4}\right)+\cdots\\\ &&+2\theta\left(\frac{x}{2n-2}\right)+\theta\left(\frac{x}{2n}\right),\quad\text{for}\quad n=m.\end{array}\right.$ (28) The quantum numbers $I_{j}$ for the charge degree of freedom take distinct integer (or half-odd integer) values for even (odd) $\sum_{\alpha}M_{\alpha}$, explicitly $I_{j}\in\sum_{n=1}^{\infty}\frac{M_{n}}{2}+\mathbb{Z}.$ (29) The spin quantum numbers $J_{\alpha}^{n}$ are distinct integers (half-odd integers) for odd (even) $N-M_{n}$, which satisfy $\displaystyle J_{\alpha}^{n}\in\frac{N-M_{n}}{2}+\frac{1}{2}+\mathbb{Z},$ $\displaystyle|J_{\alpha}^{n}|\leq J_{+}^{n}=\frac{N}{2}-\sum_{m=1}^{n}mM_{m}-n\sum_{m=n+1}^{\infty}M_{m}+\frac{M_{n}}{2}-\frac{1}{2},$ $\displaystyle J_{\alpha}^{n}=-J_{+}^{n},-J_{+}^{n}+1,-J_{+}^{n}+2,\cdots,J_{+}^{n}-1,J_{+}^{n}.$ The total momentum of the system is given by 
$K=\sum_{j=1}^{N}k_{j}=\frac{2\pi}{L}\left(\sum_{j=1}^{N}I_{j}+\sum_{\alpha=1}^{M_{n}}\sum_{n=1}^{\infty}J_{\alpha}^{n}\right).$ (30) Figure 2: Exact low energy excitation spectra for charge (yellow green) and spin (dark brown) at $\gamma=5.03$ with the Fermi surface $k_{F}=n\pi$, density $n=N/L=3\times 10^{6}$. The yellow green spectrum shows the particle-hole continuum excitation. The dark brown spectrum shows the notable two-spinon excitation spectra. In the long-wavelength limit, i.e. $\Delta K\ll 1$, the spin and charge spectra show two independent linear dispersions with velocities $v_{s}$ and $v_{c}$, respectively. From He et al. He:2020 . Significantly, it was found He:2020 from the above Bethe ansatz equations (12) and (13) that the excitations in the charge sector display a particle-hole continuum spectrum, see Fig. 2. It shows a linear dispersion structure for arbitrarily strongly interacting fermions in the long-wavelength limit $\omega(q)=v_{c}|q|\pm\frac{\hbar q^{2}}{2m^{*}}+\cdots,$ (31) where the charge velocity and the effective mass are given by the following expressions $\displaystyle v_{c}$ $\displaystyle=$ $\displaystyle\frac{\varepsilon_{c}^{\prime}(k_{0})}{2\pi\rho_{c}(k_{0})},$ (32) $\displaystyle\frac{1}{2m^{*}}$ $\displaystyle=$ $\displaystyle\frac{\varepsilon_{c}^{\prime\prime}(k_{0})}{2(2\pi\rho_{c}(k_{0}))^{2}}-\frac{\pi\rho_{c}^{\prime}(k_{0})\varepsilon_{c}^{\prime}(k_{0})}{(2\pi\rho_{c}(k_{0}))^{3}},$ (33) respectively. In the strong coupling limit, the charge velocity and the effective mass are given by He:2020 $\displaystyle v_{c}\approx 2\pi n_{c}\left(1-\frac{4\ln 2}{\gamma}\right),\quad m^{*}=m\left(1+\frac{4\ln 2}{\gamma}\right).$ (34) The low-energy spin flipping excitation in the spin sector is also displayed in Fig. 2, showing the typical two-spinon spectrum of elementary spin excitations of the Yang-Gaudin model. 
The total excited momentum is given by $\Delta K_{\rm spinon}=n\pi-2\pi\sum_{j=1}^{2}\int_{0}^{\lambda_{j}^{\rm h}}\rho_{s}^{0}(\lambda){\rm d}\lambda,$ (35) while the energy of the two-spinon excitation is given by $\displaystyle\Delta E_{\rm spinon}=-\sum_{j=1}^{2}\phi_{s}^{0}(\lambda_{j}^{\rm h}),$ (36) revealing the microscopic origin of two deconfined spin-$1/2$ spinons, i.e. a fractional excitation. This result confirms antiferromagnetic ordering in the Fermi gas with internal degrees of freedom. However, as the interaction increases, the spin excitation band becomes lower, and it even vanishes in the limit $\gamma\to\infty$. Moreover, Fig. 2 remarkably displays the origin of the separated excitations in the spin and charge sectors. In fact, in one dimension such low-lying excitations with different sound velocities form two collective motions of bosons, the so-called spin-charge separation: fermions dissolve into spinons and holons. The spin-charge separation phenomenon is the hallmark of one-dimensional physics, and for a long time it was not unambiguously confirmed by experiments, either in solids Kim:1996 ; auslaender2005spin ; Kim:2006 ; Jompol:2009 or in ultracold atoms Hulet:2018 ; Vijayan:2020 . A theoretical understanding of this unique 1D many-body phenomenon requires the low-temperature thermodynamics and the dynamical correlation functions of the excitations in the spin and charge degrees of freedom. This novel spin-charge separation physics has recently been observed with ultracold atoms Senaratne:2021 . On the other hand, the finite temperature problem for the Lieb-Liniger Bose gas was solved by C. N. Yang and C. P. Yang in 1969 Yang-Yang . 
Extension to the Yang-Gaudin model in terms of the Bethe ansatz equations (22) and (23) was done some time ago by Lai Lai:1971 ; Lai:1973 and M. Takahashi Takahashi:1971 ; namely, the so-called thermodynamic Bethe ansatz (TBA) equations for the Yang-Gaudin model are given by $\displaystyle\varepsilon(k)$ $\displaystyle=$ $\displaystyle k^{2}-\mu-\frac{H}{2}-T\sum_{n=1}^{\infty}a_{n}*{\rm ln}[1+{\rm e}^{-\phi_{n}(\lambda)/T}],\quad$ (37) $\displaystyle\phi_{n}(\lambda)$ $\displaystyle=$ $\displaystyle nH-Ta_{n}*\ln[1+{\rm e}^{-\varepsilon(k)/T}]$ (38) $\displaystyle+$ $\displaystyle T\sum_{m=1}^{\infty}T_{mn}*{\rm ln}[1+{\rm e}^{-\phi_{m}(\lambda)/T}]$ with $n=1,\ldots,\infty$. Here $*$ denotes the convolution, $\varepsilon(k)$ and $\phi_{n}(\lambda)$ are the dressed energies for the charge sector and the length-$n$ spin strings, respectively, with $k$’s and $\lambda$’s being the rapidities; the function $T_{mn}(x)=d\Theta_{mn}(x)/dx$ is given in Ref. Takahashi:1971 . The pressure is given by $p=\frac{T}{2\pi}\int_{-\infty}^{\infty}{\rm ln}[1+{\rm e}^{-\varepsilon(k)/T}]{\rm d}k,$ (39) from which all the thermal and magnetic quantities can be derived according to the standard statistical relations. Figure 3: (color online) Contour plot of the dimensionless specific heat $\tilde{c}_{V}=c_{v}/T$, showing the phase diagram in the $\tilde{T}$-$\tilde{H}$ plane. Here the dimensionless chemical potential is $\tilde{\mu}=2.5$, and $\tilde{H}_{c}=2.9145$. The black dashed lines denote the peak positions of the specific heat, whereas the white-dot-dashed line shows the boundary of the linear $T$ dependence of the specific heat. $COR1$ denotes the crossover region between QC and the TLL, giving a spin incoherent liquid. From He et al. He:2020 . At low temperatures, the TBA equations (37) and (38) are extremely hard to solve either numerically or analytically. 
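In the limit where the spin-string contributions drop out (e.g. a fully polarized gas at large $H$), Eq. (37) reduces to a free dressed energy $\varepsilon(k)=k^{2}-\mu_{\rm eff}$ and the pressure (39) can be checked against the closed form $p=-\frac{T^{3/2}}{2\sqrt{\pi}}\mathrm{Li}_{3/2}(-e^{\mu_{\rm eff}/T})$. A numerical sketch (assumed effective chemical potential $\mu_{\rm eff}=-0.5$ and $T=1$; the series polylogarithm converges here since $e^{\mu_{\rm eff}/T}<1$):

```python
import math

def pressure_numeric(mu, T, kmax=12.0, steps=48000):
    """Eq. (39) with eps(k) = k^2 - mu: p = (T/2pi) * int ln(1+e^{-eps/T}) dk."""
    h = 2.0 * kmax / steps
    total = 0.0
    for i in range(steps + 1):
        k = -kmax + i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoidal end-point weights
        total += w * math.log1p(math.exp((mu - k * k) / T))
    return T / (2.0 * math.pi) * total * h

def polylog(s, z, terms=4000):
    """Li_s(z) = sum_{k>=1} z^k / k^s, valid for |z| < 1."""
    return sum(z ** k / k ** s for k in range(1, terms + 1))

mu_eff, T = -0.5, 1.0
p_num = pressure_numeric(mu_eff, T)
p_closed = -T ** 1.5 / (2.0 * math.sqrt(math.pi)) \
    * polylog(1.5, -math.exp(mu_eff / T))
```

The agreement follows from expanding the logarithm and integrating the resulting Gaussians term by term, which is exactly the structure exploited in the Sommerfeld-type analysis of the TBA at low temperatures.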
However, one can solve these coupled integral equations in certain physical regimes, for example $k_{B}T\ll\mu,H$ or $c^{2}\gg E_{F}$, etc. For a fixed chemical potential, the contour plot of the specific heat in the temperature-magnetic field plane naturally reveals different critical regions near the quantum phase transition from the spin-charge separated Tomonaga-Luttinger liquid (TLL) phase to the fully-polarized (FP) phase, see Fig. 3. The quantum critical region fans out from the critical point, forming a critical cone. For $T\ll E_{F}$, we can safely neglect the contributions from the higher strings and retain just the leading length-1 string in the TBA equations (37) and (38). Throughout the TLL phase, $H<H_{c}$, where $H_{c}$ is the critical field for a fixed chemical potential, $H_{c}=\left(\frac{c^{2}}{2\pi}+2\pi n^{2}\right)\tan^{-1}\left(\frac{2\pi n}{c}\right)-cn,$ (40) the pressure can in general be written as $p-p_{0}=\frac{\pi T^{2}}{6}\left(\frac{1}{v_{c}}+\frac{1}{v_{s}}\right),$ (41) where $p_{0}=\int_{-k_{0}}^{k_{0}}\varepsilon(k)dk$ is the pressure at $T=0$ and the charge and spin velocities are given by $v_{c}=\frac{t_{c}}{2\pi\rho(k_{0})}\,,\;\;\;\;v_{s}=\frac{t_{s}}{2\pi\sigma_{1}(\lambda_{0})}\,,$ (42) respectively, with $\rho$ and $\sigma_{1}$ being the distribution functions at the Fermi points $k_{0}$ and $\lambda_{0}$ for the charge and spin sectors, respectively. Here $t_{c}=\frac{{\rm d}\varepsilon(k)}{{\rm d}k}\bigg{|}_{k=k_{0}}$ and $t_{s}=\frac{{\rm d}\phi_{1}(\lambda)}{{\rm d}\lambda}\bigg{|}_{\lambda=\lambda_{0}}$ are the respective slopes of the dispersions at the Fermi points. Although the proof of the form (41) is rather cumbersome, it gives a simple universal low-temperature form of the spin-charge separation theory Affleck1986 . The velocities can be numerically obtained for arbitrary interaction strength. 
In the strongly interacting regime, the charge and spin excitation velocities are given by $v_{c}\approx 2\pi n\left(1-\frac{4\ln 2}{\gamma}\right),\qquad v_{s}\approx\frac{2\pi^{3}n}{3\gamma}\left(1-\frac{6\ln 2}{\gamma}\right),$ respectively. For an external field approaching the saturation field $H_{c}$, the charge and spin velocities can be derived from the relations (42). The leading terms in the velocities are then found to be $\displaystyle v_{c}$ $\displaystyle=$ $\displaystyle 2\pi n\left(1-\frac{12}{\pi\gamma}\sqrt{1-\frac{H}{H_{c}}}\right),\,\,v_{s}=\frac{H_{c}}{n}\sqrt{1-\frac{H}{H_{c}}}.$ In the TLL regime, the dispersion relations for the Yang-Gaudin model are approximately linear. Conformal field theory predicts that the energy per unit length has a universal finite size scaling form $E=E_{0}+\Delta/L^{2}$, where $E_{0}$ is the ground state energy per unit length for the infinite system and $\Delta$ is a universal term. In this scenario, Cardy Cardy1986 showed that the two-point correlation function between primary fields can be directly derived from a conformal mapping using transfer matrix techniques, and expressed the conformal dimensions in terms of finite-size corrections to the energy spectrum. 
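The strong-coupling expressions above give an immediate feel for the separation of energy scales between the two liquids, and feed directly into the Luttinger-liquid pressure correction of Eq. (41). A quick numerical sketch (density $n=1$ and the $\gamma$, $T$ values are illustrative choices):

```python
import math

LN2 = math.log(2.0)

def v_charge(n, gamma):
    # strong-coupling expansion: v_c ~ 2*pi*n*(1 - 4*ln2/gamma)
    return 2.0 * math.pi * n * (1.0 - 4.0 * LN2 / gamma)

def v_spin(n, gamma):
    # strong-coupling expansion: v_s ~ (2*pi^3*n/(3*gamma))*(1 - 6*ln2/gamma)
    return (2.0 * math.pi ** 3 * n / (3.0 * gamma)) * (1.0 - 6.0 * LN2 / gamma)

def luttinger_pressure_correction(T, vc, vs):
    # Eq. (41): p - p0 = (pi*T^2/6) * (1/vc + 1/vs)
    return math.pi * T ** 2 / 6.0 * (1.0 / vc + 1.0 / vs)

n, gamma = 1.0, 20.0
vc, vs = v_charge(n, gamma), v_spin(n, gamma)
dp = luttinger_pressure_correction(0.01, vc, vs)
```

Since $v_{s}\ll v_{c}$ at strong coupling (with $v_{s}\to 0$ as $\gamma\to\infty$), the $1/v_{s}$ term dominates: the spin sector controls the low-temperature thermodynamics well before the spin incoherent regime is reached.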
For example, at zero temperature and zero magnetic field, by using conformal field theory Belavin1984 ; Blote1986 , the asymptotics of the single-particle correlation function can be given explicitly $\displaystyle G_{\uparrow}(x,t)\sim\langle\psi_{\uparrow}^{\dagger}(x,t)\psi_{\uparrow}(0,0)\rangle$ $\displaystyle\approx\frac{A_{\uparrow,1}e^{-\mathrm{i}k_{F\uparrow}x}}{(x-\mathrm{i}v_{c}t)^{2\Delta_{c}^{+}}(x+\mathrm{i}v_{c}t)^{2\Delta_{c}^{-}}(x-\mathrm{i}v_{s}t)^{2\Delta_{s}^{+}}}+h.c.$ (43) with the conformal dimensions $\displaystyle 2\Delta_{c}^{+}$ $\displaystyle=$ $\displaystyle\frac{9}{16}-\frac{3\ln 2}{4\gamma}+\frac{3}{2\pi^{2}}\left(\frac{H}{H_{c}}\right)+\frac{3}{2\gamma}\left(\frac{H}{H_{c}}\right)$ (44) $\displaystyle-\frac{\ln 2}{\pi^{2}\gamma}\left(\frac{H}{H_{c}}\right),$ $\displaystyle 2\Delta_{c}^{-}$ $\displaystyle=$ $\displaystyle\frac{1}{16}-\frac{3\ln 2}{4\gamma}-\frac{1}{2\pi^{2}}\left(\frac{H}{H_{c}}\right)+\frac{1}{2\gamma}\left(\frac{H}{H_{c}}\right)$ (45) $\displaystyle+\frac{3\ln 2}{\pi^{2}\gamma}\left(\frac{H}{H_{c}}\right),$ $\displaystyle 2\Delta_{s}^{+}$ $\displaystyle=$ $\displaystyle\frac{1}{2}-\frac{2}{\pi^{2}}\left(\frac{H}{H_{c}}\right)-\frac{1}{\gamma}\left(\frac{H}{H_{c}}\right)+\frac{4\ln 2}{\pi^{2}\gamma}\left(\frac{H}{H_{c}}\right).$ (46) We see clearly that the correlation function decays as a power of distance governed by the critical exponents. On the other hand, universal scaling behaviour can be derived in the vicinity of the critical point at low temperatures. Near the critical point, the spin dressed energy $\phi_{1}(\lambda)$ in (38) has only a small negative part, which makes the major contribution to the spin dressed energy at low temperatures. Therefore, we can expand the integration kernel $a_{n}(k-\lambda)$ in terms of functions of the small variable $\lambda$. 
After a tedious calculation, we obtain the pressure of the Yang-Gaudin model (1) near the phase transition from the TLL phase to the FP phase $\displaystyle p$ $\displaystyle\approx$ $\displaystyle p_{0}-\frac{\arctan\left(\frac{2}{c}k_{0}\right)T^{3/2}}{\pi^{3/2}\sqrt{(a+c_{2})}}\operatorname{Li}_{\frac{3}{2}}\left(-{\rm e}^{\frac{s_{0}\Delta H-c_{1}}{T}}\right)$ (47) $\displaystyle+\frac{T^{5/2}}{4\pi^{3/2}(a+c_{2})^{3/2}}\frac{ck_{0}}{\left(c^{2}/4+k_{T}^{2}\right)^{2}}\operatorname{Li}_{\frac{5}{2}}\left(-{\rm e}^{\frac{s_{0}\Delta H-c_{1}}{T}}\right),$ where the regular part $p_{0}$ of the pressure is given by $\displaystyle p_{0}=\frac{\pi T^{2}}{6v_{c}}+\frac{2}{3\pi}\left(\mu_{c}+\frac{H}{2}\right)^{3/2}=p_{0}^{\rm Liquid}+p_{0}^{\rm BG}.$ (48) The term $p_{0}^{\rm BG}=\frac{2}{3\pi}\left(\mu_{c}+\frac{H}{2}\right)^{3/2}$ can be regarded as the background part of the charge pressure, whereas $p_{0}^{\rm Liquid}=\frac{\pi T^{2}}{6v_{c}}$ denotes the Luttinger liquid contribution from the charge degrees of freedom. However, the Luttinger liquid in the spin sector dissolves into free fermion quantum criticality in the quantum critical (QC) region, see Fig. 3. It is worth noting that in the QC region the pressure (47) presents a universal scaling form of the equation of state $\displaystyle p=p_{0}^{\rm Liquid}+p_{0}^{\rm BG}+T^{\frac{1}{z}+1}\mathcal{G}\left(\frac{s_{0}\Delta H}{T^{1/\nu z}}\right),$ (49) from which one reads off the dynamical critical exponent $z=2$ and the correlation length critical exponent $\nu=1/2$. Consequently, the scaling functions of all thermodynamic quantities can be derived based on this exact expression of the equation of state. The two crossover temperatures of the QC region are given by $T_{l}^{*}=\alpha_{1}|H-H_{c}|^{\nu z}$ and $T_{r}^{*}=\alpha_{2}|H-H_{c}|$, where $\alpha_{1,2}=s_{0}/y_{1,2}$ and $y_{1}=-1.5629$, $y_{2}=3.6205$ are numerical constants. 
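The scaling form (49) can be read off directly from the leading polylogarithm term of Eq. (47): the singular pressure behaves as $T^{1/z+1}\mathcal{G}(s_{0}\Delta H/T^{1/\nu z})$ with $z=2$, $\nu=1/2$. A toy sketch (nonuniversal constants $s_{0}$, $c_{1}$ and prefactors set to one, purely for illustration) confirms the data collapse numerically:

```python
import math

def polylog(s, z, terms=20000):
    """Series polylogarithm Li_s(z), valid for |z| <= 1 (alternating for z < 0)."""
    return sum(z ** k / k ** s for k in range(1, terms + 1))

def p_singular(dH, T):
    # leading singular term of Eq. (47), nonuniversal constants set to 1:
    # p_s ~ T^{3/2} * (-Li_{3/2}(-e^{dH/T})), with 1/z + 1 = 3/2, 1/(nu*z) = 1
    return -T ** 1.5 * polylog(1.5, -math.exp(dH / T))

# collapse: p_s / T^{3/2} depends only on the scaled variable dH / T
r1 = p_singular(-0.5, 1.0) / 1.0 ** 1.5
r2 = p_singular(-1.0, 2.0) / 2.0 ** 1.5

# right at the critical point (dH = 0) the pure power law T^{3/2} emerges
ratio = p_singular(0.0, 2.0) / p_singular(0.0, 1.0)
```

Replotting $p_{s}/T^{3/2}$ against $\Delta H/T$ for different temperatures collapses all the curves onto the single scaling function, which is how $z$ and $\nu$ are extracted in practice.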
In the QC region, all thermodynamic quantities can be cast into universal scaling forms. In Fig. 3, we also identify a crossover region COR1 in the regime $E_{\rm spin}\ll k_{B}T\ll E_{F}$. Here $E_{\rm spin}$ and $E_{F}$ are the energy scale of the spin sector and the Fermi energy, respectively. In this region, the interplay between the spin and charge degrees of freedom leads to a large deviation from the linear dispersion in the spin sector, and hence to a large disruption of the spin-charge coherent TLLs. Consequently, the spin-spin correlation function decays exponentially, while the charge-charge correlation function still decays as a power law with distance. The breakdown of conformal field theory gives rise to the asymptotic single-particle correlation function $\displaystyle G_{\uparrow}(x,t)$ $\displaystyle\approx$ $\displaystyle\frac{A_{\uparrow,1}e^{-\mathrm{i}k_{F\uparrow}x}}{(x-\mathrm{i}v_{c}t)^{2\Delta_{c}^{+}}(x+\mathrm{i}v_{c}t)^{2\Delta_{c}^{-}}}$ (50) $\displaystyle\times\frac{(\pi T/v_{s})^{2\Delta^{+}_{s}}}{e^{\frac{2\pi T}{v_{s}}x{\Delta_{s}^{+}}}}+h.c.$ The conformal dimensions are given by Eqs. (44)-(46). In this regime, the temperature is low enough in comparison with the Fermi energy, whereas it is high enough in comparison with the spin excitation energy. Therefore the spin velocity $v_{s}\to 0$ in this crossover region COR1, which is reminiscent of the spin incoherent liquid Fiete:2007 ; Cheianov:2004 . The novel phase diagram of Fig. 3 reveals a key concept: the low-lying excitations near the Fermi points dissolve into two separate collective motions of charge and spin, i.e., TLLs of charge and spin. Evidence for spin-charge separation was reported in solid-state materials, but none of those experiments provided a conclusive observation of the spin-charge separated Luttinger liquids. 
A recent experiment with ultracold atoms presented a conclusive observation of this phenomenon through the spin and charge dynamic structure factors Senaratne:2021 . ## III III. FFLO Pairing Correlation and Universal Thermodynamics for the Attractive Fermi Gas In the attractive regime, i.e. $c<0$, the root patterns of the BA equations (12) and (13) are significantly different from those of the model with $c>0$. For an attractive interaction, the quasimomenta $\left\\{k_{i}\right\\}$ of fermions with different spins form two-body bound states Takahashi-a ; Gu-Yang , i.e., $k_{i}=\lambda_{i}\pm\mathrm{i}\frac{1}{2}c$, accompanied by the real spin parameter $\lambda_{i}$. Here $i=1,\ldots,N_{\downarrow}$. The excess fermions have real quasimomenta $\left\\{k_{j}\right\\}$ with $j=1,\ldots,N-2N_{\downarrow}$. Thus the Bethe ansatz equations for the ground state reduce to Fredholm equations for the density of pairs $\rho_{2}(\lambda)$ and the density of single fermionic atoms $\rho_{1}(k)$ Yang ; Takahashi-a $\displaystyle\rho_{1}(k)$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi}+\int_{-Q_{2}}^{Q_{2}}a_{1}(k-\lambda)\rho_{2}(\lambda)d\lambda,$ (51) $\displaystyle\rho_{2}(\lambda)$ $\displaystyle=$ $\displaystyle\frac{2}{2\pi}+\int_{-Q_{1}}^{Q_{1}}a_{1}(\lambda-k^{\prime})\rho_{1}(k^{\prime})dk^{\prime}$ (52) $\displaystyle+\int_{-Q_{2}}^{Q_{2}}a_{2}(\lambda-\lambda^{\prime})\rho_{2}(\lambda^{\prime})d\lambda^{\prime}$ with the integration boundaries $Q_{1}$ and $Q_{2}$, which are the Fermi points of the single particles and the pairs, respectively. 
Here $Q_{1}$ and $Q_{2}$ can be determined by the conditions $\displaystyle n$ $\displaystyle\equiv$ $\displaystyle\frac{N}{L}=2\int_{-Q_{2}}^{Q_{2}}\rho_{2}(k)dk+\int^{Q_{1}}_{-Q_{1}}\rho_{1}(k)dk,$ $\displaystyle n_{\downarrow}$ $\displaystyle\equiv$ $\displaystyle\frac{N_{\downarrow}}{L}=\int_{-Q_{2}}^{Q_{2}}\rho_{2}(k)dk.$ (53) The ground state energy per unit length is given by $E=\int_{-Q_{2}}^{Q_{2}}\left(2k^{2}+E_{B}\right)\rho_{2}(k)dk+\int_{-Q_{1}}^{Q_{1}}k^{2}\rho_{1}(k)dk.$ (54) In the above equation, the binding energy is given by $E_{B}=c^{2}/2$. For an attractive interaction, there exist two Fermi seas in the charge degree of freedom at the ground state, so that the Fermi points $Q_{1}$ and $Q_{2}$ are finite. This allows us to asymptotically solve the BA equations (51) and (52). For weak and strong attractions with an arbitrary polarization $P=(n_{\uparrow}-n_{\downarrow})/(n_{\uparrow}+n_{\downarrow})$, the ground state energy is given respectively by Guan:PRA12 $\displaystyle E$ $\displaystyle=$ $\displaystyle\frac{1}{3}n_{\uparrow}^{3}\pi^{2}+\frac{1}{3}n_{\downarrow}^{3}\pi^{2}-2|c|n_{\uparrow}n_{\downarrow}+O(c^{2}),$ (55) $\displaystyle E$ $\displaystyle\approx$ $\displaystyle\frac{\hbar^{2}n^{3}}{2m}\left\\{-\frac{(1-P)\gamma^{2}}{4}+\frac{\pi^{2}(1-3P+3P^{2}+15P^{3})}{48}\right.$ (56) $\displaystyle\left.+\frac{\pi^{2}(1-P)(1+P-5P^{2}+67P^{3})}{48|\gamma|}\right.$ $\displaystyle\left.+\frac{\pi^{2}(1-P)^{2}(1+5P+3P^{2}+247P^{3})}{64\gamma^{2}}\right.$ $\displaystyle\left.-\frac{\pi^{2}(1-P)}{1440|\gamma|^{3}}\left[-15+31125{P}^{4}+1861{\pi}^{2}{P}^{5}\right.\right.$ $\displaystyle\left.\left.-15765P^{5}-659{\pi}^{2}{P}^{4}+346{\pi}^{2}{P}^{3}-14{\pi}^{2}{P}^{2}\right.\right.$ $\displaystyle\left.\left.+\pi^{2}P+{\pi}^{2}-105P-150P^{2}-15090P^{3}\right]\right\\}.$ This result was also obtained from the dressed energy equations Guan2007 ; Wadati . 
We notice that the energy (55) connects continuously to the repulsive ground state energy (19) at $c\to 0$. However, this does not mean that the energy connects analytically, because of the divergence in the small region $c\to\mathrm{i}0$, see the discussion in Guan:FP ; Takahashi:1970-m . At finite temperatures, in addition to the two-body bound states $k_{j}=\lambda_{j}\pm\mathrm{i}\frac{1}{2}c$ in the charge sector, the spin quasimomenta of the BA equations (12) and (13) form complex strings $\lambda_{\alpha,j}^{n}=\lambda^{n}_{\alpha}+\mathrm{i}\frac{1}{2}(n+1-2j)c$ with $j=1,\ldots,n$ Takahashi:1971 ; Guan2007 . Here $\alpha=1,\ldots,N_{n}$, where $N_{n}$ is the number of length-$n$ strings. The equilibrium states are determined by the minimization condition of the Gibbs free energy, which gives rise to a set of coupled nonlinear integral equations. In terms of the dressed energies $\epsilon_{2}(k):=T\ln(\rho_{2}^{h}(k)/\rho_{2}(k))$ and $\epsilon_{1}(k):=T\ln(\rho_{1}^{h}(k)/\rho_{1}(k))$ for paired and unpaired fermions Takahashi:1971 ; Guan2007 ; Schlottmann:1993 , the thermodynamic Bethe ansatz equations for the 1D attractive Fermi gas are given by $\displaystyle\epsilon_{2}(k)$ $\displaystyle=$ $\displaystyle 2(k^{2}-\mu-\frac{1}{4}{c^{2}})+Ta_{2}*\ln(1+\mathrm{e}^{-\epsilon_{2}(k)/T})$ $\displaystyle+\,Ta_{1}*\ln(1+\mathrm{e}^{-\epsilon_{1}(k)/{T}}),$ $\displaystyle\epsilon_{1}(k)$ $\displaystyle=$ $\displaystyle k^{2}-\mu-\frac{1}{2}{H}+Ta_{1}*\ln(1+\mathrm{e}^{-\epsilon_{2}(k)/{T}})$ $\displaystyle-T\sum_{n=1}^{\infty}a_{n}*\ln(1+\eta_{n}^{-1}(k)),$ $\displaystyle\ln\eta_{n}(\lambda)$ $\displaystyle=$ $\displaystyle\frac{nH}{T}+a_{n}*\ln(1+\mathrm{e}^{-\epsilon_{1}(\lambda)/{T}})$ (57) $\displaystyle+\sum_{m=1}^{\infty}T_{mn}*\ln(1+\eta^{-1}_{m}(\lambda)),$ where $n=1,\ldots,\infty$ and the function $\eta_{n}(\lambda):=\xi^{h}(\lambda)/\xi(\lambda)$ is the ratio of the string densities. The function $T_{mn}(\lambda)$ is given in Takahashi:1971 ; Guan2007 . 
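The stated continuity at $c\to 0$ is easy to check from the leading expansions: for a balanced gas ($P=0$, $n_{\uparrow}=n_{\downarrow}=n/2$) the weak-coupling attractive energy (55) and the weak-coupling repulsive energy (19) reduce to the same linear form $E\approx\pi^{2}n^{3}/12+cn^{2}/2$. A minimal numerical sketch (with $n=1$):

```python
import math

def e_repulsive_weak(n, c):
    """Eq. (19), leading order, valid for small c > 0."""
    return math.pi ** 2 * n ** 3 / 12.0 + 0.5 * n ** 2 * c

def e_attractive_weak(n, c):
    """Eq. (55) at zero polarization (n_up = n_down = n/2), small |c|, c < 0."""
    nu = nd = 0.5 * n
    return (math.pi ** 2 * nu ** 3 / 3.0 + math.pi ** 2 * nd ** 3 / 3.0
            - 2.0 * abs(c) * nu * nd)

free_energy = math.pi ** 2 / 12.0  # common c -> 0 limit for n = 1
gap = abs(e_attractive_weak(1.0, -1e-8) - e_repulsive_weak(1.0, 1e-8))
```

Both branches meet at $\pi^{2}n^{3}/12$ with the same slope $n^{2}/2$ in $c$ (since $-2|c|n_{\uparrow}n_{\downarrow}=cn^{2}/2$ for $c<0$), so the non-analyticity mentioned above only shows up beyond this leading order.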
The Gibbs free energy per unit length, i.e. the pressure, is given by $\displaystyle p$ $\displaystyle=$ $\displaystyle\frac{T}{\pi}\int_{-\infty}^{\infty}dk\ln(1+\mathrm{e}^{-\epsilon_{2}(k)/{T}})$ (58) $\displaystyle+\,\frac{T}{2\pi}\int_{-\infty}^{\infty}dk\ln(1+\mathrm{e}^{-\epsilon_{1}(k)/{T}}).$ This serves as the equation of state, from which all thermal and magnetic quantities can be derived according to the standard relations of statistical mechanics. From the TBA equations (57), at low temperatures, the pressure of the attractive Fermi gas with a strong interaction can be written as a sum of two components, $p=p_{1}+p_{2}$, where $p_{1}$ is the pressure of the unpaired fermions and $p_{2}$ that of the pairs, given explicitly in Guan:2011-QC $p_{1}=F^{1}_{\frac{3}{2}}\left[1+\frac{p_{2}}{4|c|^{3}}\right],\,p_{2}=F^{2}_{\frac{3}{2}}\left[1+\frac{4p_{1}}{|c|^{3}}+\frac{p_{2}}{4|c|^{3}}\right],$ (59) respectively. Here $F^{r}_{a}=-\sqrt{{r}/{4\pi}}T^{a}{\rm Li}_{a}[-\exp({A_{r}/T})]$ ($r=1,2$) with $A_{1}=\mu_{1}-\frac{2p_{2}}{|c|}+\frac{1}{4|c|^{3}}F^{2}_{\frac{5}{2}}+Te^{-\frac{H}{T}}e^{-\frac{J}{T}}I_{0}\left(\frac{J}{T}\right)$ and $A_{2}=2\mu_{2}-\frac{4p_{1}}{|c|}-\frac{p_{2}}{|c|}+\frac{8}{|c|^{3}}F^{1}_{\frac{5}{2}}+\frac{1}{4|c|^{3}}F^{2}_{\frac{5}{2}}$. In the above equations $\mathrm{Li}_{n}(x)=\sum_{k=1}^{\infty}{x^{k}}/{k^{n}}$ is the polylogarithm function, $J=2p_{1}/|c|$ and $I_{0}(x)=\sum_{k=0}^{\infty}\left(x/2\right)^{2k}/{(k!)^{2}}$ is the zeroth modified Bessel function. This is a remarkable result for the low temperature thermodynamics of the Yang-Gaudin model Guan:2011-QC . Based on this result, exact results on the quantum criticality of the attractive Yang-Gaudin model can be found in Guan:2011-QC ; Yin:2011 ; Chen:2014 ; He-WB:2016 . Figure 4: (Color online) Contour plot of the susceptibility Wilson ratio $R_{W}^{\chi}$ (60) of the attractive Fermi gas for the dimensionless interaction strength $|\gamma|=10$. In this figure the reduced temperature $t=T/\varepsilon_{b}$ is used. 
$\varepsilon_{b}$ denotes the binding energy. The susceptibility Wilson ratio is temperature independent in the FFLO-like phase. The dashed lines indicate the crossover temperature $T^{*}=\alpha|H-H_{c}|$ separating the relativistic TLL from the free fermion quantum criticality. $R_{W}=0$ for both the TLL of pairs (PP) and the TLL of excess fermions (F). From Guan et al. Guan:2013PRL . A significant feature of the Yang-Gaudin model with an attractive interaction is the existence of the FFLO pairing state Larkin1965 ; Fulde1964 ; Yang2001 . The FFLO physics in the Yang-Gaudin model has received extensive study in theory Orso ; HuiHu ; Feiguin2007 ; Rizzi2008 ; Zhao2008 ; Lee2011 ; Schlottmann2012 and experiment Liao ; Revelle:2016 . In 2007, three groups Guan2007 ; Orso ; HuiHu predicted the phase diagram of the attractive Fermi gas, in which the novel FFLO-like pairing phase occupies a large region of parameter space. The phase diagram can be mapped out from the specific heat or from dimensionless ratios, such as the Grüneisen parameter Peng:2019 ; Yu:2020 and the Wilson ratios Guan:2013PRL ; Yu:2016 . The dimensionless susceptibility $\chi$ and compressibility $\kappa$ Wilson ratios are given by $\displaystyle R_{W}^{\chi}=\frac{4}{3}\left(\frac{\pi k_{B}}{g\mu_{B}}\right)^{2}\frac{\chi}{c_{V}/T},\quad R_{W}^{\kappa}=\frac{\pi^{2}}{3}\frac{\kappa}{c_{V}/T}$ (60) that map out the phase diagram of the model in the low temperature limit Guan:2013PRL ; Yu:2016 . It is very interesting to note Guan:2013PRL ; Yu:2016 that the additivity rules of thermodynamical properties within subsystems of the FFLO-like phase are reminiscent of the rules for multi-resistor networks in series and parallel. Such simple additivity rules indicate a novel and useful characteristic of multi-component TLLs regardless of the microscopic details of the systems. For the $SU(2)$ Fermi gas (1) with an attraction, we have the relations $H/2=\left({\mu}_{1}-{\mu}_{2}\right)+E_{B}/2$ and $n=2n_{2}+n_{1}$. 
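The series defining the polylogarithm and the modified Bessel function $I_{0}$ quoted below Eq. (59) are straightforward to evaluate directly; a minimal check (ours) against known closed-form values, $\mathrm{Li}_{2}(1/2)=\pi^{2}/12-\ln^{2}2/2$ and $I_{0}(1)\approx 1.2660658778$:

```python
import numpy as np
from math import factorial

def polylog_series(n, x, terms=200):
    # Li_n(x) = sum_{k>=1} x^k / k^n, convergent for |x| <= 1 (n > 1 at x = 1)
    k = np.arange(1, terms + 1)
    return float(np.sum(x**k / k**n))

def I0_series(x, terms=40):
    # I_0(x) = sum_{k>=0} (x/2)^{2k} / (k!)^2
    return sum((x / 2)**(2 * k) / factorial(k)**2 for k in range(terms))
```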
Thus we may define the susceptibilities of subsystems in the FFLO state $\chi_{1}=(\mu_{\rm B}{g}_{\mathrm{Lande}})^{2}\left({\partial n_{1}}/{\partial\mu_{1}}\right)_{n}$, $\chi_{2}=2(\mu_{\rm B}{g}_{\mathrm{Lande}})^{2}\left({\partial n_{2}}/{\partial\mu_{2}}\right)_{n}$, and the compressibility $\kappa_{1}=\left({\partial n_{1}}/{\partial\mu_{1}}\right)_{H}$ and $\kappa_{2}=2\left({\partial n_{2}}/{\partial\mu_{2}}\right)_{H}$ in the grand canonical ensemble. Here $\mu_{B}$ is the Bohr magneton and $g_{\mathrm{Lande}}$ is the Lande factor. Consequently, the compressibility and susceptibility satisfy the following additivity rules $\displaystyle\kappa$ $\displaystyle=$ $\displaystyle\kappa_{1}+\kappa_{2},$ (61) $\displaystyle\frac{1}{\chi}$ $\displaystyle=$ $\displaystyle\frac{1}{\chi_{1}}+\frac{1}{\chi_{2}}.$ (62) The compressibilities and susceptibilities are given explicitly in units of $\hbar^{2}/(2m)$ $\displaystyle\kappa_{\mathrm{2}}^{-1}$ $\displaystyle\approx$ $\displaystyle\frac{\pi^{2}n_{2}}{4}\left(1+\frac{6n_{1}}{|c|}+\frac{4n_{2}}{|c|}+\frac{n_{2}^{2}}{2|c|n_{1}}+\frac{24n_{1}^{2}}{c^{2}}\right.$ (63) $\displaystyle\left.+\frac{24n_{1}n_{2}}{c^{2}}+\frac{17n_{2}^{2}}{c^{2}}-\frac{2n_{2}^{3}}{c^{2}n_{1}}+\frac{n_{2}^{4}}{4c^{2}n_{1}^{2}}\right),$ $\displaystyle\kappa_{\mathrm{1}}^{-1}$ $\displaystyle\approx$ $\displaystyle 2\pi^{2}n_{1}\left(1+\frac{12n_{2}}{|c|}+\frac{16n_{1}^{2}}{|c|n_{2}}+\frac{96n_{2}^{2}}{c^{2}}+\frac{384n_{1}^{2}}{c^{2}}\right.$ (64) $\displaystyle\left.-\frac{8n_{1}n_{2}}{c^{2}}-\frac{96n_{1}^{3}}{c^{2}n_{2}}+\frac{256n_{1}^{4}}{c^{2}n_{2}}\right)$ and $\displaystyle\chi_{1}^{-1}$ $\displaystyle=$ $\displaystyle\pi^{2}n_{2}\left[1+\frac{4}{|c|}(n-3n_{2})+\frac{3}{c^{2}}(4n^{2}\right.$ (65) $\displaystyle\left.-24nn_{2}+30n_{2}^{2})\right],$ $\displaystyle\chi_{2}^{-1}$ $\displaystyle=$ $\displaystyle 8\pi^{2}n_{1}\left[1+\frac{4}{|c|}(n-2n_{1})+\frac{4}{c^{2}}(2n^{2}\right.$ (66) 
$\displaystyle\left.+10n_{1}^{2}-12nn_{1})\right].$ Moreover, the interaction effect is revealed by the sound velocities $v_{1}$, $v_{2}$ of the excess fermions and the bound pairs; for strong attraction, for example, $\displaystyle v_{1}$ $\displaystyle\approx$ $\displaystyle\frac{\hbar}{2m}2\pi n_{1}\left(1+8n_{2}/|c|+48n_{2}^{2}/c^{2}\right),$ $\displaystyle v_{2}$ $\displaystyle\approx$ $\displaystyle\frac{\hbar}{2m}\pi n_{2}\left(1+2A/|c|+3A^{2}/c^{2}\right),$ (67) with $A=2n_{1}+n_{2}$. We similarly find that the specific heat satisfies $c_{V}=c_{V,1}+c_{V,2},$ (68) where $c_{V,r}=\frac{\pi k_{B}^{2}T}{3\hbar}\frac{1}{v_{r}}$ with $r=1,\,2$. Remarkably, the additivity rules for the compressibility (61), the susceptibility (62) and the rescaled specific heat $c_{V}/T$ with (68) do not depend on the temperature in the FFLO-like state below a certain temperature. Such additivity rules are a characteristic of multi-component TLLs Yu:2016 . This presents a universal feature of the thermodynamics of a two-component TLL of pairs and single fermions in 1D. Moreover, it was proved Yu:2016 that the susceptibility and compressibility Wilson ratios $\displaystyle R^{\chi}_{\mathrm{W},r^{\prime}}$ $\displaystyle=$ $\displaystyle\left(\sum_{r=1}^{2}\frac{D_{r}^{\chi}}{r^{2}}\right)^{-1}\left(\sum_{r=1}^{2}\frac{1}{v_{r}}\right)^{-1},$ (69) $\displaystyle R^{\kappa}_{\mathrm{W}}$ $\displaystyle=$ $\displaystyle\left(\sum_{r=1}^{2}\frac{r^{2}}{D_{r}^{\kappa}}\right)\left(\sum_{r=1}^{2}\frac{1}{v_{r}}\right)^{-1}$ (70) are dimensionless and uniquely determined by the sound velocities and stiffnesses. 
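In units $\hbar=2m=k_{B}=1$, the strong-coupling velocities (67) and the additivity rule (68) are simple to evaluate; the sketch below (ours) also checks the free limits $v_{1}\to 2\pi n_{1}$ and $v_{2}\to\pi n_{2}$ as $|c|\to\infty$:

```python
import numpy as np

def v_unpaired(n1, n2, c):
    # sound velocity of excess fermions, strong attraction (Eq. 67)
    return 2 * np.pi * n1 * (1 + 8*n2/abs(c) + 48*n2**2/c**2)

def v_pairs(n1, n2, c):
    # sound velocity of bound pairs (Eq. 67), with A = 2 n1 + n2
    A = 2*n1 + n2
    return np.pi * n2 * (1 + 2*A/abs(c) + 3*A**2/c**2)

def cv_over_T(n1, n2, c):
    # rescaled specific heat from the additivity rule (68): c_V/T = (pi/3)(1/v1 + 1/v2)
    return (np.pi / 3) * (1/v_unpaired(n1, n2, c) + 1/v_pairs(n1, n2, c))
```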
In the above expressions, the individual stiffnesses $D^{\kappa}_{r}$ and $D^{\chi}_{r}$ can be given in terms of $\kappa_{r}$ and $\chi_{r}$ via $\displaystyle D_{r}^{\kappa}$ $\displaystyle=$ $\displaystyle\frac{r^{2}}{\pi\hbar}\frac{1}{\kappa_{r}},\qquad D_{r}^{\chi}=\frac{r^{2}}{\pi\hbar}\frac{1}{\chi_{r}}.$ (71) In this context the Wilson ratios of the Yang-Gaudin model with polarization elegantly determine the TLL nature of the FFLO-like state, see Fig. 4. They also indicate a universal feature of the quantum criticality of the attractive Fermi gas. Figure 5: Contour plot of the negative Grüneisen parameter (72), i.e. $-\Gamma$, mapping out the full phase diagram of the Yang-Gaudin model with an attractive interaction in the $h-\mu$ plane, see the main text. Here we set the dimensionless temperature $t=0.001$. From Yu et al. Yu:2020 . On the other hand, the Grüneisen parameter Gruneisen_AdP_1908 , introduced at the beginning of the 20th century in the study of the effect of volume changes of a crystal lattice on its vibrational frequencies, has been extensively studied in the exploration of caloric effects in solids and of phase transitions associated with volume change. Recently, one of the authors of this paper and coworkers Peng:2019 ; Yu:2020 studied interaction- and chemical potential-driven caloric effects and the Grüneisen parameter in 1D ultracold atomic gases. 
By using Maxwell’s relations, the Grüneisen parameter of quantum gases in the grand canonical ensemble is given by Yu:2020 $\Gamma=V\frac{\left.\frac{\mathrm{d}p}{\mathrm{d}T}\right|_{V,N}}{\left.\frac{\mathrm{d}E}{\mathrm{d}T}\right|_{V,N}}=\frac{1}{T}\frac{\frac{\partial^{2}p}{\partial\mu^{2}}\frac{\partial p}{\partial T}-\frac{\partial^{2}p}{\partial\mu\partial T}\frac{\partial p}{\partial\mu}}{\frac{\partial^{2}p}{\partial\mu^{2}}\frac{\partial^{2}p}{\partial T^{2}}-\left(\frac{\partial^{2}p}{\partial\mu\partial T}\right)^{2}},$ (72) which remarkably maps out the phase diagram of the Yang-Gaudin model with an attractive interaction, see Fig. 5. This is intimately related to the expansionary caloric effect $\displaystyle\left.\frac{\partial T}{\partial V}\right|_{S,N,H}$ $\displaystyle=$ $\displaystyle\frac{T}{V}\Gamma.$ (73) Similarly one can find $\displaystyle\left.\frac{\partial T}{\partial H}\right|_{S,N,V}$ $\displaystyle=$ $\displaystyle\frac{T}{H}\Gamma_{\text{mag}},$ (74) $\displaystyle\left.\frac{\partial T}{\partial c}\right|_{S,N,V,H}$ $\displaystyle=$ $\displaystyle\frac{T}{c}\Gamma_{\text{int}}$ (75) which establish important relations between the magnetocaloric and the interaction-driven caloric effects and the corresponding Grüneisen parameters, respectively. 
In the above equations, the magnetic and interaction Grüneisen parameters in the grand canonical ensemble are given by $\displaystyle\Gamma_{\text{mag}}$ $\displaystyle=$ $\displaystyle-\frac{H}{T}\frac{\frac{\partial^{2}p}{\partial\mu^{2}}\frac{\partial^{2}p}{\partial H\partial T}-\frac{\partial^{2}p}{\partial\mu\partial H}\frac{\partial^{2}p}{\partial\mu\partial T}}{\frac{\partial^{2}p}{\partial\mu^{2}}\frac{\partial^{2}p}{\partial T^{2}}-(\frac{\partial^{2}p}{\partial\mu\partial T})^{2}},$ (76) $\displaystyle\Gamma_{\text{int}}$ $\displaystyle=$ $\displaystyle-\frac{\frac{\partial^{2}p}{\partial\mu^{2}}\frac{\partial^{2}p}{\partial c\partial T}-\frac{\partial^{2}p}{\partial\mu\partial c}\frac{\partial^{2}p}{\partial\mu\partial T}}{\frac{\partial^{2}p}{\partial\mu^{2}}\frac{\partial^{2}p}{\partial T^{2}}-(\frac{\partial^{2}p}{\partial\mu\partial T})^{2}}\frac{c}{T},$ (77) respectively. It is interesting to note that, like adiabatic demagnetization cooling in solids, the interaction ramp-up and ramp-down in quantum gases provides a promising protocol for quantum refrigeration. The Yang-Gaudin model (1) with an attractive interaction exhibits three quantum phases at zero temperature: a fully-paired phase with polarization $P=0$, a partially-polarized FFLO-like phase with $0<P<1$, and a fully-polarized phase with $P=1$, see Fig. 5. The Grüneisen parameter shows a sudden enhancement near the quantum phase transition, with a universal divergent scaling $\Gamma\sim t^{-1/2}$. The key features of this phase diagram were experimentally confirmed using finite temperature density profiles of trapped fermionic ${}^{6}$Li atoms Liao . 
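Given any equation of state $p(\mu,T)$, the grand-canonical Grüneisen parameter (72) can be evaluated by finite differences. As a sketch (ours, not from the text), we test the scheme on the 1D classical ideal gas, $p\propto T^{3/2}\mathrm{e}^{\mu/T}$, for which (72) gives exactly $\Gamma=2$:

```python
import numpy as np

def gruneisen(p, mu, T, h=1e-4):
    """Evaluate Eq. (72) by second-order central finite differences of p(mu, T)."""
    p_mu   = (p(mu + h, T) - p(mu - h, T)) / (2 * h)
    p_T    = (p(mu, T + h) - p(mu, T - h)) / (2 * h)
    p_mumu = (p(mu + h, T) - 2 * p(mu, T) + p(mu - h, T)) / h**2
    p_TT   = (p(mu, T + h) - 2 * p(mu, T) + p(mu, T - h)) / h**2
    p_muT  = (p(mu + h, T + h) - p(mu + h, T - h)
              - p(mu - h, T + h) + p(mu - h, T - h)) / (4 * h * h)
    num = p_mumu * p_T - p_muT * p_mu
    den = p_mumu * p_TT - p_muT**2
    return num / (T * den)

# 1D classical ideal gas: p = T^{3/2} e^{mu/T} (an illustrative toy, Gamma = 2)
p_classical = lambda mu, T: T**1.5 * np.exp(mu / T)
```

The same routine, fed the TBA pressure (58) instead of the toy model, would reproduce the contour plot of Fig. 5.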
In the FFLO-like phase, at zero temperature, the leading order of the long distance asymptotics of the pair correlation function $G_{p}(x,t)=\langle\psi_{\uparrow}^{\dagger}(x,t)\psi_{\downarrow}^{\dagger}(x,t)\psi_{\uparrow}(0,0)\psi_{\downarrow}(0,0)\rangle$ oscillates with wave number $\pi(n_{\uparrow}-n_{\downarrow})\equiv\pi n_{f}P$, where $n_{f}=n_{\uparrow}+n_{\downarrow}$. The proof is straightforward using conformal field theory Lee2011 , i.e. $\displaystyle G_{p}(x,t)$ $\displaystyle\approx$ $\displaystyle\frac{A_{p,1}\cos\left(\pi(n_{\uparrow}-n_{\downarrow})x\right)}{|x+\mathrm{i}v_{u}t|^{\theta_{1}}|x+\mathrm{i}v_{b}t|^{\theta_{2}}},$ where $\theta_{1}\approx\frac{1}{2}$ and $\theta_{2}\approx\frac{1}{2}-\frac{(1-P)}{2|\gamma|}$ for a strong attraction. The oscillations in $G_{p}(x,t)$ are caused by the imbalance in the densities of spin-up and spin-down fermions, $n_{\uparrow}-n_{\downarrow}$, i.e. a mismatch between the Fermi surfaces of the two species. The spatial modulation is characteristic of the FFLO state. The backscattering among the Fermi points of bound pairs and unpaired fermions results in a 1D analog of the FFLO state and reveals a microscopic origin of the FFLO pairing correlation. These results are consistent with the Larkin-Ovchinnikov phase Larkin1965 , and the wave numbers coincide with the ones discovered through the density matrix renormalization group method Feiguin2007 ; Tezuka2008 ; Rizzi2008 , the quantum Monte Carlo method Batrouni2008 , the mean field approach Liu2008 ; Zhao2008 and the bosonization technique Yang2001 . ## IV Conclusion and Outlook We have presented a brief review of recent developments on the Yang-Gaudin model from the integrability perspective. It turns out that the Bethe ansatz solution of the model provides a rigorous understanding of many-body phenomena ranging from fractional excitations to spin-charge separation, the FFLO pairing state, and universal thermodynamics and quantum criticality. 
The legacy of the Yang-Baxter equation contributes significantly to the development of analytical methods for cold atoms, spin liquids and condensed matter physics, particularly with regard to the fundamental many-body physics to be gained from exactly solvable quantum many-body systems. In this short review, we have also discussed a number of the authors' contributions to the study of the Yang-Gaudin model, which have led to direct applications in recent breakthrough experiments on the low-dimensional many-body physics of ultracold atoms Liao ; Yang:2017 ; Hulet:2018 ; Revelle:2016 ; Senaratne:2021 . An outlook for future research on the Yang-Gaudin model includes: (a) The observation of the spin-charge separation phenomenon is a promising direction in many-body physics. Several important experiments Kim:1996 ; auslaender2005spin ; Kim:2006 ; Jompol:2009 ; Hulet:2018 ; Vijayan:2020 have probed spin-charge separation. A recent experiment Senaratne:2021 has provided a conclusive observation of spin-charge separated Luttinger liquids. The experimental confirmation of this uniquely 1D phenomenon should include: 1) identification of the separate collective excitation spectra of charge and spin; 2) confirmation of the spin and charge dynamical response correlation functions; 3) determination of the independent spin and charge sound velocities and their Luttinger parameters. This opens the way to further study of spin-coherent and spin-incoherent Luttinger liquids in quantum gases with higher spin symmetries. Applications of such unique 1D behaviour in quantum metrology and quantum information are highly anticipated. (b) The wave function of the Yang-Gaudin model is largely unexplored due to the complexity of Bethe's superposition of $N!$ plane waves. So far there has been little understanding of the correlation functions and dynamical response functions for the ground state and excited states of this model. 
Such a lack of study prevents access to quantum entanglement behaviour and metrologically useful information for quantum technology. It was recently shown Hauke:2016 that the dynamical response function can be used to measure multipartite entanglement in quantum spin systems. This opens a promising opportunity to further explore realistic applications of fractional excitations, spin liquids and impurities in quantum metrology. (c) Cooling fermions in ultracold atomic experiments remains elusive. To achieve this goal, it is essential to understand the caloric effects induced by magnetic field, trapping potential and dynamical interaction in quantum gases. Therefore in this direction there remain many open questions regarding adiabatic processes and heat exchange between the system and baths. The Yang-Gaudin model has rich phases of quantum matter which hold promise for studying quantum transport, hydrodynamics, quantum heat engines and quantum refrigeration by driving external trapping potentials and interactions. Acknowledgement This article is dedicated to the centenary of Professor C. N. Yang's birth. XWG is very grateful to Professor C. N. Yang for his mentoring, constant help and encouragement since they first met in 2010. He also acknowledges the Institute for Advanced Study, Tsinghua University, and the Beijing Computational Science Research Center for their kind hospitality. X.-W. G. is supported by NSFC Key Grant No. 12134015 and NSFC Grant No. 11874393. HQL thanks Professor C. N. Yang for his endless support and great contributions to the physics department at the Chinese University of Hong Kong. ## References * (1) H. Bethe, Z. Physik, 71, 205 (1931). * (2) L. Hulthén, Arkiv. Mat. Astron. Fysik 26 A, 11 (1938). * (3) R. Orbach, Phys. Rev. 112, 309 (1958). * (4) L. R. Walker, Phys. Rev. 116, 1089 (1959). * (5) R. B. Griffiths, Phys. Rev. 133, A768 (1964). * (6) J. des Cloizeaux and J. J. Pearson, Phys. Rev. 128, 2131 (1962). * (7) C. N. Yang and C. P. 
Yang, Phys. Rev. 150, 321 (1966); * (8) C. N. Yang and C. P. Yang, Phys. Rev. 150, 327 (1966); * (9) C. N. Yang and C. P. Yang, Phys. Rev. 151, 258 (1966). * (10) E. H. Lieb and W. Liniger, Phys. Rev. 130, 1605 (1963). * (11) C. N. Yang, Phys. Rev. Lett. 19, 1312 (1967). * (12) M. Gaudin, Phys. Lett. A 24, 55 (1967). * (13) R. J. Baxter, Ann. Phys. (N. Y.) 70, 193 (1972); R. J. Baxter, Ann. Phys. (N. Y.) 70, 323 (1972). * (14) E. H. Lieb and F. Y. Wu, Phys. Rev. Lett. 20, 1445 (1968). * (15) B. Sutherland, Phys. Rev. Lett. 20, 98 (1968). * (16) N. Andrei, K. Furuya, and J. H. Lowenstein, Rev. Mod. Phys. 55, 331 (1983). * (17) J. Dukelsky, S. Pittel, and G. Sierra, Rev. Mod. Phys. 76, 643 (2004). * (18) Essler F H L, Frahm H, Göhmann F, Klümper A and Korepin V E 2005 The One-Dimensional Hubbard Model (Cambridge: Cambridge University Press) * (19) V. E. Korepin, A. G. Izergin, and N. M. Bogoliubov, Quantum Inverse Scattering Method and Correlation Functions (Cambridge: Cambridge University Press), 1993. * (20) B. Sutherland, Beautiful Models: 70 years of exactly solved quantum many-body problems (Singapore: World Scientific), 2004. * (21) M. Takahashi Thermodynamics of One-Dimensional Solvable Models (Cambridge: Cambridge University Press), 1999. * (22) Y.-P. Wang, W.-L. Yang, J. Cao and K.-J. Shi, Off-Diagonal Bethe Ansatz for Exactly Solvable Models, (Springer-Verlag Berlin Heidelberg), 2015. * (23) M. T. Batchelor, X. W. Guan, N. Oelkers, and Z. Tsuboi, Adv. Phys. 56, 465 (2007). * (24) M. A. Cazalilla, R. Citro, T. Giamarchi, E. Orignac and M. Rigol, Rev. Mod. Phys. 83 1405 (2011). * (25) C. N. Yang and Y. Z. You, Chin. Phys. Lett. 28, 020503 (2011). * (26) X. W. Guan, M. T. Batchelor and C. Lee, Rev. Mod. Phys. 85, 1633 (2013). * (27) M. T. Batchelor and A. Foerster, J. Phys. A: Math. Theor. 49, 173001 (2016). * (28) S. I. Mistakidis, et. al. arXiv:2202.11071 * (29) C. N. Yang and C. P. Yang, J. Math. Phys. 10, 1115 (1969). * (30) M. Takahashi, Prog. Theor. 
Phys. 46, 401 (1971); M. Takahashi, Prog. Theor. Phys. 46, 1388 (1971). * (31) M. Takahashi, Prog. Theor. Phys. 47, 69 (1972); M. Takahashi, Prog. Theor. Phys. 50, 1519 (1973); M. Takahashi, Prog. Theor. Phys. 52, 103 (1973). * (32) X. W. Guan, M. T. Batchelor, C. Lee and M. Bortz, Phys. Rev. B 76, 085120 (2007) * (33) E. Zhao, X.-W. Guan, W. V. Liu, M. T. Batchelor and M. Oshikawa, Phys. Rev. Lett. 103, 140404 (2009). * (34) X.-W. Guan, X.-G. Yin, A. Foerster, M. T. Batchelor, C.-H. Lee, and H.-Q. Lin, Phys. Rev. Lett., 111, 130401 (2013). * (35) Bing Yang, Yang-Yang Chen, Yong-Guang Zheng, Hui Sun, Han-Ning Dai, Xi-Wen Guan, Zhen-Sheng Yuan, and Jian-Wei Pan, Physical review letters, 119, 165701 (2017). * (36) Feng He, Yu-Zhu Jiang, Hai-Qing Lin, Randall G. Hulet, Han Pu, and Xi-Wen Guan, Physical review letters, 125, 190401 (2020). * (37) Ovidiu I. Pâţu, Andreas Klümper, and Angela Foerster. Physical Review B, 101, 035149 (2020). * (38) M. Olshanii, Physical Review Letters, 81, 938 (1998). * (39) X.-W. Guan, Z.-Q. Ma, Phys. Rev. A 85, 033632 (2012). * (40) C Kim, AY Matsuura, Z-X Shen, N Motoyama, H Eisaki, S Uchida, Takami Tohyama, and S Maekawa, Physical review letters, 77, 4054 (1996). * (41) OM Auslaender, H Steinberg, A Yacoby, Y Tserkovnyak, BI Halperin, KW Baldwin, LN Pfeiffer, and KW West, Science, 308, 88 (2005). * (42) B. J. Kim, H. Koh, E. Rotenberg, S.-J. Oh, H. Eisaki, N. Motoyama, S. Uchida, T. Tohyama, S. Maekawa, Z.-X. Shen, C. Kim. Nature Physics, 2, 397 (2006). * (43) Y. Jompol1, C. J. B. Ford, J. P. Griffiths, I. Farrer, G. A. C. Jones, D. Anderson, D. A. Ritchie, T. W. Silk, A. J. Schofield. Science, 325, 597 ( 2009). * (44) TL Yang, P Grišins, YT Chang, ZH Zhao, CY Shih, Thierry Giamarchi, and RG Hulet. Physical review letters, 121, 103001 (2018). * (45) Jayadev Vijayan, Pimonpan Sompet, Guillaume Salomon, Joannis Koepsell, Sarah Hirthe, Annabelle Bohrdt, Fabian Grusdt, Immanuel Bloch, Christian Gross. Science, 367, 186 (2020). * (46) R. 
Senaratne, D. Cavazos-Cavazos, S. Wang, F. He, Y.-T. Chang, A. Kafle, H. Pu, X.-W. Guan, R. G. Hulet 2021 Science in press. * (47) C. K. Lai, Phys. Rev. Lett. 26, 1472 (1971). * (48) C. K. Lai, Phys. Rev. A 8, 2567 (1973). * (49) I. Affleck, Phys. Rev. Lett. 56, 746 (1986). * (50) J. L. Cardy, Nucl. Phys. B 270 [FS16], 186 (1986). * (51) A. A. Belavin, A. M. Polyakov and A. B. Zamolodchikov, Nucl. Phys. B 241, 333 (1984). * (52) H. W. Blöte, J. L. Cardy and M. P. Nightingale, Phys. Rev. Lett. 55, 742 (1986). * (53) Gregory A. Fiete, Review of Modern Physics, 79, 801 (2007). * (54) Vadim V. Cheianov and M. B. Zvonarev. Phys. Rev. Lett., 92, 176401 (2004). * (55) M. Takahashi, Prog. Theor. Phys. 44, 899 (1970). * (56) C. H. Gu and C. N. Yang, Commun. Math. Phys. 122, 105 (1989). * (57) T. Iida and M. Wadati, J. Phys. Soc. Jpn, 77, 024006 (2008). * (58) X.-W. Guan, Front. Phys., 7, 8 (2012). * (59) M. Takahashi, Prog. Theor. Phys., 44, 11 (1970). * (60) P. Schlottmann, J. Phys. Condens. Matter 5, 5869 (1993). * (61) X.-W. Guan and T.-L. Ho, Phys. Rev. A 84, 023616 (2011). * (62) X. Yin, X.-W. Guan, M. T. Batchelor and S. Chen, Phys. Rev. A 83, 013602 (2011). * (63) Y.-Y. Chen, Y.-Z. Jiang, X.-W. Guan and Q. Zhou, Nat. Comms., 5: 5140 (2014). * (64) W.-B. He, Y.-Y. Chen, S.-Z Zhang and X.-W. Guan, Phys. Rev. A, 94, 031604(R) (2016). * (65) A. I. Larkin and Yu. N. Ovchinnikov, Sov. Phys. JETP 20, 762 (1965) * (66) P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964) * (67) K. Yang, Phys. Rev. B 63, 140511(R) (2001) * (68) Y. Liao _et al._ , Nature 467, 567 (2010). * (69) M. C. Revelle, J. A. Fry, B. A. Olsen, and R. G. Hulet, Phys. Rev. Lett. 117, 235301 (2016). * (70) G. Orso, Phys. Rev. Lett. 98, 070402 (2007). * (71) H. Hu, X.-J. Liu and P. D. Drummond, Phys. Rev. Lett. 98, 070403 (2007). * (72) A. E. Feiguin and F. Heidrich-Meisner, Phys. Rev. B 76 220508 (2008). * (73) M. Rizzi, M. Polini, M. A. Cazalilla, M. R. Bakhtiari, M. P. Tosi and R. Fazio, Phys. Rev. 
B 77, 245105 (2008). * (74) E. Zhao and W. V. Liu, Phys. Rev. A 78, 063605 (2008). * (75) J.-Y. Lee and X.-W. Guan, Nucl. Phys. B 853, 125 (2011). * (76) P. Schlottmann and A. A. Zvyagin, Phys. Rev. B 85, 205129 (2012). * (77) L. Peng, Y.-C. Yu and X.-W. Guan, Phys. Rev. B 100, 245435 (2019). * (78) Y.-C. Yu, S.-Z. Zhang and X.-W. Guan, Phys. Rev. Research, 2, 043066 (2020). * (79) Y.-C. Yu, Y.-Y. Chen, H.-Q. Lin, R. A. Roemer, X.-W. Guan, Phys. Rev. B 94, 195129 (2016). * (80) E. Grüneisen, Annalen der Physik, 331, 211 (1908); Annalen der Physik, 344, 257 (1912). * (81) G. G. Batrouni, M. H. Huntley, V. G. Rousseau and R. T. Scalettar, Phys. Rev. Lett. 100, 116405 (2008) * (82) X.-J. Liu, H. Hu and P. D. Drummond, Phys. Rev. A 78, 023601 (2008) * (83) M. Tezuka and M. Ueda, Phys. Rev. Lett. 100, 110403 (2008) * (84) H. Frahm and V. E. Korepin, Phys. Rev. B 43, 5653 (1991) * (85) P. Hauke, M. H. Heyl, L. Tagliacozzo and P. Zoller, Nat. Phys. 12, 778 (2016).
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. # DeepScaleTool: A Tool for the Accurate Estimation of Technology Scaling in the Deep-Submicron Era Satyabrata Sarangi and Bevan Baas Department of Electrical and Computer Engineering University of California, Davis Email: {ssarangi<EMAIL_ADDRESS> ###### Abstract Classical CMOS “constant-field” or “Dennard” scaling methods, which define scaling factors for various dimensional and electrical parameters, have become less accurate in the deep-submicron regime, which drives the need for better estimation approaches, especially in the educational and research domains. We present _DeepScaleTool,_ a tool for the accurate estimation of deep-submicron technology scaling, built by modeling and curve fitting data published by a leading commercial fabrication company for silicon fabrication technology generations from 130 nm to 7 nm for the key parameters of area, delay, and energy. Compared to 10 nm–7 nm scaling data published by a leading foundry, DeepScaleTool achieves an error of 1.7% in area, 2.5% in delay, and 5% in power. This compares favorably with a leading academic estimation method, which achieves an error of 24% in area, 9.1% in delay, and 24.9% in power. ## I Introduction Moore’s law [1] has been pivotal in the advancement of the semiconductor industry for decades; it projects a doubling of the transistors on an IC every two years. Similarly, Dennard scaling [2, 3] pioneered this progress by showing scaling across physical dimensions, substrate doping, and supply voltage, which in turn results in lower area, delay, and power dissipation for MOSFETs. 
The resulting changes due to scaling are expressed in terms of a scaling factor $K$, defined as the ratio of two technology nodes. These scaling factors are discussed in various textbooks [4, 5] and the literature [6, 7], and are shown in Table I. The key points from the traditional scaling factors shown in Table I are the following: transistor physical dimensions shrink by the scaling factor $K$, which in turn scales down the transistor area by a factor of $K^{2}$. Similarly, the speed of the transistor increases by a factor of $K$ as the delay reduces by $1{/}K$. The traditional scaling factors were accurate until the advent of the deep-submicron era. As transistors get smaller in the deep-submicron regime, due to short-channel effects, leakage current, thermal runaway, and process variation, the traditional scaling estimations are no longer accurate [7, 8]. Moreover, the resulting performance gain over recent technology generations is minimal, unlike the predictions of traditional scaling estimations. The inaccuracy of the traditional scaling factors is also evident from real silicon data from various foundries across technology fabrication generations. Notably, Holt [9] discusses transistor scaling and its effect on transistor area, gate delay, switching energy, and energy-delay product in the deep-submicron regime based on Intel’s data. Bohr and Young [6] describe Intel’s scaling trends for area, transistor performance, and cost per transistor over the past decade. A method for accurate scaling predictions has been demonstrated using data from PTM [10], ITRS [11], and simulated measurements of a fan-out-of-4 (FO4) circuit [7]. However, the accuracy achieved by this method differs significantly from both the TSMC [12] silicon technology scaling data and the estimations presented in this article, as discussed in Section IV. 
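The classical factors of Table I below are trivial to encode; the sketch here (our illustration, not part of the tool) maps a node ratio $K$ to the traditional predictions which, as argued above, no longer hold in the deep-submicron regime:

```python
def dennard_scaling(K):
    """Traditional constant-field scaling factors (Table I) for a node ratio K > 1."""
    return {
        "dimension":     1 / K,      # W, L, t_ox
        "doping":        K,          # N_a
        "voltage":       1 / K,      # V
        "current":       1 / K,      # I
        "capacitance":   1 / K,      # eps * A / t
        "delay":         1 / K,      # VC/I
        "power":         1 / K**2,   # VI
        "power_density": 1.0,        # VI/A stays constant
        "area":          1 / K**2,   # derived: W * L
    }
```

For example, scaling from 130 nm to 90 nm gives $K=130/90\approx 1.44$ and a predicted area shrink of $1/K^{2}\approx 0.48$.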
TABLE I: MOSFET device parameters and traditional scaling factors

Parameter | Scaling factor
---|---
Device dimension $(W,L,t_{ox})$ | $1{/}K$
Doping concentration $N_{a}$ | $K$
Voltage $V$ | $1{/}K$
Current $I$ | $1{/}K$
Capacitance $\epsilon{A{/}t}$ | $1{/}K$
Delay time $VC{/}I$ | $1{/}K$
Power dissipation $VI$ | $1{/}{K^{2}}$
Power density $VI{/}A$ | $1$

The prediction of accurate scaling factors is also important for a fair comparison of design performance and other metrics across different technology fabrication nodes. Although the ITRS and PTM based simulation and modeling approach looks viable for predicting scaling factors in the deep-submicron regime, supply voltage information is not always publicly released, and it is required for modeling the delay and power scaling equations in [7]. Moreover, such an estimation method doesn’t necessarily align with the actual silicon technology scaling trends, which are discussed in Section IV. In academia, popular textbooks [4, 5] that cover digital VLSI design and the scaling of CMOS transistors describe traditional scaling factors and the reasons for the discontinuity in traditional scaling trends. However, an accurate estimation of scaling factors across technology fabrication nodes, and the correlation between traditional scaling factors and the scaling factors observed in actual silicon, are usually not covered. Therefore, we propose a scaling tool based on industrial technology scaling trends and a polynomial-based curve-fitting approach for an easy and accurate estimation of scaling factors in the deep-submicron era. The major contributions of our work are as follows: * • We demonstrate DeepScaleTool, a spreadsheet-based tool for accurate estimation of scaling factors for area, delay, and energy from 130 nm to 7 nm for educational and research purposes. 
* • We show and analyze the percentage errors between classical scaling factors and scaling factors estimated from real silicon data in the deep-submicron range. * • We also illustrate examples of scaling factor estimation using DeepScaleTool and compare the results both with the PTM and ITRS based modeling [7] and with scaling data from TSMC [12]. The remainder of the paper is organized as follows. Section II describes the published silicon technology scaling trends [6, 9], the scaling data modeling method adopted in this work, and the tool framework. Section III describes the usage of DeepScaleTool and gives examples of scaling factor computations using the tool. Section IV compares traditional scaling factors with accurately estimated scaling factors and discusses differences among scaling estimation methods. Section V concludes the paper. ## II Transistor Scaling Trends, Data Modeling, and DeepScaleTool Framework Figure 1: Procedure of Data Collection and Framework

Figure 1 provides an overview of the steps leading to the design of DeepScaleTool. We analyze published transistor scaling trends [6, 9], curve fit the scaling data available for certain technology nodes using second-order polynomial models, and then extrapolate the scaling data for the remaining technology nodes. Finally, we design and update the spreadsheet-based framework using modeled scaling factors for any combination of starting and target technology nodes. ### II-A Transistor Scaling Trends The following two notable transistor scaling trends, based on Intel’s silicon results, have been analyzed for our work. Holt [9] discusses generational technology benefits in the reduction of gate delay, switching energy, and energy-delay product. All the presented scaling data in that article span the 65 nm to 10 nm technology nodes and are relative to 65 nm. Moreover, the normalized transistor area across the 130 nm to 14 nm technology nodes has been demonstrated. 
The scaling trends shown in the article over technology fabrication generations imply the following for circuits: acceleration in transistor density, higher performance, and lower power. Similarly, Bohr and Young [6] discuss the scaling of logic circuit area from 45 nm to 10 nm. The key takeaway from the presented circuit area scaling data is that, with the advent of newer technology nodes and transistor-level innovations, more aggressive scaling is possible than the traditional scaling estimation suggests. For example, both the 14 nm and 10 nm technology nodes achieve a 0.37 times logic area scaling relative to the previous generation. This article also presents the trends in improved transistor performance, active power, and performance per watt metrics. However, due to the unavailability of proper axis labeling, the corresponding trends have not been considered for data modeling purposes in this work.

### II-B Data Extraction and Modeling

The g3data [13] tool has been used to extract the digitized data from the plots with technology scaling trends shown in the articles [6, 9]. Only the plots with proper axis labeling have been considered for data extraction. To obtain scaling trends from 130 nm to 7 nm for key circuit parameters like area, delay, and energy, the scaling data available across technology fabrication generations have been extrapolated to obtain the scaling data of the corresponding parameter at the missing technology generations. The polynomial based extrapolation models used to curve fit the various circuit parameters yield a coefficient of determination ($R^{2}$) value equal to or greater than 0.99. The small differences in area scaling factors obtained after modeling the scaling trends in the two articles [6, 9] have been offset by taking the average of the corresponding two scaling factors for any given starting and target technology generations.
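The curve-fitting step described in Section II-B can be sketched as follows. This is an illustrative sketch only: the generation indices and scaling values below are hypothetical placeholders chosen to lie on a quadratic (so the fit is exact), not the actual data extracted from [6, 9].

```python
# Sketch of the second-order polynomial curve fit / extrapolation
# described in Section II-B. The data points are hypothetical
# placeholders, NOT the scaling data extracted from [6, 9].
import numpy as np

# Generation index 0..4 and hypothetical relative scaling values
# for some circuit parameter (relative to the starting node).
gen = np.array([0, 1, 2, 3, 4], dtype=float)
val = np.array([1.0, 0.725, 0.5, 0.325, 0.2])

coeffs = np.polyfit(gen, val, 2)   # second-order polynomial fit
model = np.poly1d(coeffs)

# Coefficient of determination R^2 of the fit (paper reports >= 0.99).
resid = val - model(gen)
r2 = 1.0 - resid.var() / val.var()

# Extrapolate the scaling value at a missing generation (index 5).
missing = float(model(5.0))
print(round(r2, 4), round(missing, 4))  # -> 1.0 0.125
```

In the tool itself, such fits are performed per parameter (area, delay, energy), and the small per-source differences are averaged as described above.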
### II-C DeepScaleTool Framework

Instead of providing large tables of scaling factors across technology fabrication nodes for each circuit parameter, we present a spreadsheet-based framework for the automated generation of scaling factors for various circuit parameters. The framework is designed using the Visual Basic for Applications (VBA) programming language. The scaling factor value fields in the spreadsheet are programmed for any of the supported current and target technology nodes. DeepScaleTool is available as an open source tool and can be accessed at https://sourceforge.net/projects/deepscaletool/ [14]. Figure 2 depicts a screenshot of the tool. The tool can easily be updated for future nodes as future scaling trends become available.

## III Usage of DeepScaleTool and Scaling Factor Computation Examples

### III-A DeepScaleTool Usage

Figure 2: Screenshot of the DeepScaleTool page showing the values given for current node and target node, generated scaling factors, and user instructions.

Using DeepScaleTool is simple and involves three steps, as shown in Figure 2. Currently, the tool supports scaling factors for the following fabrication nodes in units of nm: 130, 90, 65, 45, 40, 32, 28, 22, 14, 10, and 7. The user inputs one of those values for the current node and the target node. Next, the user can press the corresponding button for any performance metric or parameter to display the scaling factor. Finally, the value of any metric at a target node can be found from the value of that metric at the current node and the resulting scaling factor using the following equation, where $x$ and $y$ denote the target node and the current node respectively:

$Value_{x}=Value_{y}\,/\,Scaling\ factor$ (1)

### III-B Examples of Scaling Factor Computation

The following examples are shown to illustrate the scaling factor computation procedure.
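Equation (1) translates directly into code. The minimal sketch below applies it to the two worked examples of Section III-B; the factors 8.3 (area, 130 nm to 45 nm) and 1.238 (power, 45 nm to 32 nm) are the DeepScaleTool outputs quoted there, not values computed here.

```python
# Equation (1): Value_x = Value_y / ScalingFactor, where y is the
# current node and x the target node. The numeric factors below are
# the DeepScaleTool outputs quoted in the worked examples (Sec. III-B).

def scale_value(value_current_node, scaling_factor):
    """Return the metric value at the target node per equation (1)."""
    return value_current_node / scaling_factor

area_45nm = scale_value(100.0, 8.3)      # 100 um^2 at 130 nm -> 45 nm
power_32nm = scale_value(100.0, 1.238)   # 100 mW at 45 nm -> 32 nm
print(round(area_45nm, 2), round(power_32nm, 3))  # -> 12.05 80.775
```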
To scale a circuit from 130 nm to 45 nm, the area scaling factor is 8.3 in the current version of the tool, as shown in Figure 2. If the circuit occupies an area of 100 $\mu m^{2}$ at the 130 nm node, the resulting area at the 45 nm node using equation (1) will be 100 / 8.3 = 12.05 $\mu m^{2}$. Similarly, to scale a circuit from 45 nm to 32 nm, the tool displays a power scaling factor of 1.238. If the circuit dissipates 100 mW at the 45 nm node, the resulting power dissipation at the 32 nm node using equation (1) will be 100 / 1.238 = 80.775 mW. The scaling computations for delay, energy, energy delay product, and throughput can be performed by generating the corresponding scaling factors from the tool and applying equation (1) as in the above examples. The scaling factors for derived metrics like throughput/area and power density can also be derived from the scaling factors for the corresponding primary metrics generated by the tool.

## IV Comparison of Scaling Factors Estimation Methods and Accuracy with Traditional Scaling

Figure 3: Scaling trends for area, delay, and power from 130 nm to 7 nm based on the modeling presented in this work.

### IV-A Scaling Trends and Accuracy Analysis with Traditional Scaling

Figure 3 shows the scaling trends for transistor area, delay, energy, throughput, and power based on the modeling presented in this work. Among all the considered parameters, transistor area achieves a remarkable scaling over the years, better than the traditional scaling trends. Delay and throughput achieve only minimal scaling at the recent technology nodes.

Figure 4: Traditional scaling factors vs. modeled scaling factors presented in this work for the deep-submicron regime.

Figure 4 shows the percentage error of the scaling factor value modeled for each parameter with respect to the traditional scaling factors. The percentage errors shown for each technology node are relative to the values at 130 nm.
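For reference, the traditional constant-field factors of Table I, against which these percentage errors are measured, can be computed for any dimension ratio $K$ between two nodes. This is an illustrative sketch only and not part of DeepScaleTool:

```python
# Traditional (Dennard) constant-field scaling factors from Table I,
# for a dimension ratio k between two nodes (e.g. k = 130/65 = 2.0).
# Illustrative sketch only; not part of DeepScaleTool.

def traditional_factors(k):
    """Multiplicative factor applied to each parameter when all
    device dimensions shrink by 1/k at constant electric field."""
    return {
        "dimension":     1.0 / k,      # W, L, tox
        "doping":        k,            # Na
        "voltage":       1.0 / k,      # V
        "current":       1.0 / k,      # I
        "capacitance":   1.0 / k,      # eps*A/t
        "delay":         1.0 / k,      # V*C/I
        "power":         1.0 / k**2,   # V*I
        "power_density": 1.0,          # V*I/A stays constant
    }

f = traditional_factors(130.0 / 65.0)  # one full node step, k = 2
print(f["delay"], f["power"])          # -> 0.5 0.25
```

The percentage errors in Figure 4 are the deviations of the silicon-based modeled factors from these idealized values.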
Transistor area shows the least variation among all parameters, as transistor area still scales per the traditional estimation or even better with the advent of transistor innovations like high-dielectric metal gates, FinFET, and 3D FinFET structures [6]. However, the delay and power dissipation trends vary significantly as compared to the traditional scaling improvements of $1{/}K$ and $1{/}K^{2}$. Due to the effect of leakage current, threshold voltage scaling does not happen aggressively, and supply voltage scaling is limited for the same reason. Therefore, energy efficiency scaling falls short of the traditional scaling factor of $1{/}K^{3}$. Similarly, transistor gate delay scaling is primarily limited by the slower interconnect scaling. The poor trends in gate delay and power dissipation scaling shown in Figure 4 affect the energy efficiency scaling, and thereby a larger percentage of error is observed for energy scaling. Moreover, as we advance to the recent technology nodes we observe a greater percentage of error in the relative scaling values with respect to 130 nm, due to the cumulative effect of the error margins across technology nodes.

### IV-B Comparison of Scaling Factor Estimation Methods

Figure 5: Comparison of area scaling factors between our work and Stillmaker [7] for starting nodes from 130 nm to 14 nm with a target node of 14 nm, where our work shows better correlation to the reference data by Holt [9].

TABLE II: Comparison of area, delay, and power scaling (% reduction) factors for 10 nm to 7 nm scaling. The reference scaling data [12] corresponds to TSMC’s technology scaling.

Source | Area | Delay | Power
---|---|---|---
Cai [12] | 30 – 35 | 10 | 35
Stillmaker [7] | 59 | 19.1 | 9.1
This work | 36.7 | 7.5 | 30

Table II shows the variation in area, delay, and power percentage scaling from 10 nm to 7 nm.
The scaling factor values modeled in this work, which are based on Intel’s silicon technology scaling trends [6, 9], achieve better correlation with the TSMC based scaling data [12] than the ITRS and PTM based modeling approach [7] in terms of area, delay, and power. The delay and power scaling data for the article [7] have been calculated using the given corresponding coefficients, supply voltage per ITRS data, and modeling expressions. The errors between the modeled data presented in this work and TSMC’s scaling data are 1%, 2.5%, and 5% for area, delay, and power respectively, which demonstrates the reliability of DeepScaleTool across major foundries. However, the area, delay, and power scaling factors presented in the article [7] differ from the TSMC based data by 24–29%, 9.1%, and 24.9% respectively, which is significant given the percentage scaling occurring for those parameters between the said technology nodes. Moreover, the area scaling factors presented in the article [7] vary significantly when compared to the modeling based on silicon data as presented by DeepScaleTool, as shown in Figure 5. For example, 130 nm to 7 nm technology scaling brings down the area by a factor of 110 per the article [7], while the current version of DeepScaleTool suggests a corresponding area scaling factor of 754.55. The latter scaling factor seems more accurate, since area scales down by a factor of approximately 303 in general over the eight generations from 130 nm to 7 nm, and the scaling more aggressive than the normal per-generation rate of 0.49 pushes the overall factor up to 754.55. Therefore, DeepScaleTool provides a more accurate estimation of scaling factors for various design parameters irrespective of major foundries, and it avoids the prediction inaccuracy of the ITRS and PTM models.

## V Conclusion

We present DeepScaleTool, a tool designed to provide accurate estimation of scaling factors using published silicon trends and a polynomial based curve-fitting method.
The scaling factors presented in this work show that the traditional scaling factors become obsolete in the deep-submicron era. Although the primary data sets considered for this work belong to Intel’s published technology scaling trends, the proposed tool achieves good correlation with TSMC based scaling trends as well. Moreover, we show that modeling and estimation based on published silicon data is more accurate than the simulation based modeling and data per ITRS and PTM, which is the state of the art in estimating scaling factors in the deep-submicron regime. In conclusion, DeepScaleTool provides an easy platform to obtain reliable scaling factors for various design parameters in the deep-submicron era, to understand the discrepancies with traditional scaling factors, and to help perform fair comparisons of circuit performance across different technology nodes.

## References

* [1] G. E. Moore, “Cramming more components onto integrated circuits,” _Proceedings of the IEEE_ , vol. 86, no. 1, pp. 82–85, 1998. * [2] R. H. Dennard, F. H. Gaensslen, H. Yu, V. L. Rideout, E. Bassous, and A. R. LeBlanc, “Design of ion-implanted MOSFET’s with very small physical dimensions,” _IEEE Journal of Solid-State Circuits_ , vol. 9, no. 5, pp. 256–268, 1974. * [3] G. Baccarani, M. R. Wordeman, and R. H. Dennard, “Generalized scaling theory and its application to a 1/4 micrometer MOSFET design,” _IEEE Transactions on Electron Devices_ , vol. 31, no. 4, pp. 452–462, 1984. * [4] J. M. Rabaey, A. P. Chandrakasan, and B. Nikolić, _Digital Integrated Circuits: A Design Perspective_. Pearson Education, 2003, vol. 7. * [5] J. Uyemura, _Introduction to VLSI Circuits and Systems_ , 1st ed. Hoboken, NJ: John Wiley & Sons, Inc., 2002. * [6] M. T. Bohr and I. A. Young, “CMOS scaling trends and beyond,” _IEEE Micro_ , vol. 37, no. 6, pp. 20–29, 2017. * [7] A. Stillmaker and B.
Baas, “Scaling equations for the accurate prediction of CMOS device performance from 180 nm to 7 nm,” _Integration, the VLSI Journal_ , vol. 58, pp. 74–81, 2017, http://vcl.ece.ucdavis.edu/pubs/2017.02.VLSIintegration.TechScale/. * [8] K. J. Kuhn, “CMOS transistor scaling past 32 nm and implications on variation,” in _2010 IEEE/SEMI Advanced Semiconductor Manufacturing Conference (ASMC)_ , 2010, pp. 241–246. * [9] W. M. Holt, “1.1 Moore’s law: A path going forward,” in _2016 IEEE International Solid-State Circuits Conference (ISSCC)_ , 2016, pp. 8–13. * [10] (2015, Oct.) Predictive technology model. [Online]. Available: http://ptm.asu.edu/ * [11] (2015, Oct.) International technology roadmap for semiconductors. [Online]. Available: http://www.itrs.net/ * [12] M. Cai _et al._ , “7nm mobile SoC and 5G platform technology and design co-development for PPA and manufacturability,” in _2019 Symposium on VLSI Technology_ , 2019, pp. T104–T105. * [13] (2019, Oct.) g3data, a tool for extracting data from scanned graphs. [Online]. Available: https://github.com/pn2200/g3data * [14] S. Sarangi and B. Baas. (2021, Feb.) DeepScaleTool. [Online]. Available: https://sourceforge.net/projects/deepscaletool/
# Topological phase transitions in strongly correlated systems: application to Co3Sn2S2

V. Yu. Irkhin, Yu. N. Skryabin

M. N. Mikheev Institute of Metal Physics, 620108 Ekaterinburg, Russia

###### Abstract

The topological transition in the strongly correlated half-metallic ferromagnetic compound Co3Sn2S2 from a Weyl semimetal (hosting chiral massless fermions) to a non-magnetic state is treated. This transition goes with a change in topological invariant and is accompanied by a non-topological transition from the saturated ferromagnetic to the paramagnetic state, the minority Fermi surface being transformed from ghost (hidden) to real. A corresponding description is given in terms of the slave fermion representation for the effective narrow-band Hubbard model. The system Co3Sn2S2 provides a bright example of the coexistence of non-trivial topology and strong low-dimensional ferromagnetism. A comparison is performed with other compounds where frustrations result in the formation of a correlated paramagnetic state.

## 1 Introduction

Recently, the layered kagome lattice compound Co3Sn2S2 has been the subject of numerous experimental [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] and theoretical [14, 15] investigations. In particular, a giant topological anomalous Hall effect was observed in this system [3, 6, 8]. Its electronic structure contains Weyl points, Fermi arcs and nodal rings (Fig. 1), which play an important role in the anomalous Hall effect [3, 8, 14]. Single-crystal experimental data on the Co3InxSn2-xS2 kagome system [18] show that these systems have an almost two-dimensional itinerant magnetism and a chiral spin state; in addition, a strongly correlated state with a high electronic heat capacity is formed. The important role of correlations is confirmed by a considerable enhancement of the $\gamma T$-linear specific heat even in the ferromagnetic phase [19, 18], especially on approaching the magnetic-nonmagnetic critical point somewhat below $x=1$.
At $x=1$, $\gamma$ vanishes, but strongly increases with further increasing $x$ [18]. The ferromagnetism in Co3Sn2S2 breaks time-reversal $T$-symmetry and is necessary for the existence of the topological Weyl points. The strongly correlated metallic state in this system emerges as a result of magnetic fluctuations. Above $T_{C}$, the intrinsic magnetic field disappears, the Weyl points annihilate and the Dirac points acquire a gap. This restores $T$-symmetry and eliminates the topological behavior.

Figure 1: Schematic energy spectra for the Dirac semimetal (a), Weyl semimetal (b), and nodal line semimetal (c), according to Refs. [16, 17].

A similar, but quantum, transition occurs with the disappearance of ferromagnetism in the Co3InxSn2-xS2 system at hole doping [6, 14]. The doping shifts the Weyl nodes away from the Fermi level. For small doping, the nodal rings are located around the Fermi energy, and for $x\sim 0.2$, the nodal lines surrounding the L point in the Brillouin zone cross the Fermi surfaces (Fig. 2). With further increasing $x$, the nodal lines split into two rings, as with the annihilation of Weyl points in the presence of spin-orbit coupling (the role of the latter is discussed in detail in Ref. [12]). For $x>0.6$, the nodal lines are located far from the Fermi level, resulting in a small Berry curvature on the whole Fermi surfaces [14]. At $x=1$ the system becomes insulating [6]. According to [18], this anomalous nonmetallic state may originate from the Fermi energy tuning through a Dirac point.

Figure 2: Schematic Fermi surfaces (solid green lines) for Co3Sn2S2 in the $k_{x}-k_{z}$ plane at $k_{y}=0$, according to Ref. [14]. The thin black solid line shows the nodal lines in the absence of spin-orbit coupling for (a) $x=0.2$ and (b) $x=0.4$. The upper and lower triangles on the nodal lines in (a) stand for the Weyl points with topological charges $+1$ and $-1$ in the presence of spin-orbit coupling.
In the present work we treat the model picture of correlated half-metallic ferromagnetism in Co3Sn2S2 and provide a description of these transitions within the topological classification [20, 21, 22].

## 2 Half-metallic ferromagnetism and Weyl points in the effective Hubbard model

The half-metallic ferromagnetism of Co3Sn2S2 occurs in the partially filled Co 3$d_{x^{2}-y^{2}}$ band which crosses the Fermi level. The itinerant magnetism cannot be fully described by a simple one-electron approach including band splitting. In particular, the persistence of local moments in the Curie-Weiss susceptibility suggests a many-body correlated behavior [15]. The associated moment of 1$\mu_{B}$ is spread over three Co atoms, in agreement with the 0.33$\mu_{B}$ per Co magnetic moment from first-principles calculations and the experimental moment, which is slightly less than 1$\mu_{B}$/f.u. The configuration with five electrons and one hole results in a spin-1/2 ferromagnetic state. This enables one to formulate a local Hubbard model for the three Co atom cluster [15]. According to [15], across the magnetic transition, Co3Sn2S2 evolves from a Mott ferromagnet to a correlated metallic state. In fact, the “Mott ferromagnet” is a half-metallic ferromagnetic state, so that we have a partial Mott transition in the minority spin subband. The physical reason for half-metallic ferromagnetism in a wide doping region is the presence of a Van Hove singularity at the Fermi level, which is connected with the quasi-two-dimensional kagome structure (see Fig. 2 in Ref. [14]). First-principles calculations [14] and experimental data [6, 18] demonstrate that the doping in the Co3InxSn2-xS2 system preserves half-metallic ferromagnetism with a linearly decreasing magnetic moment. The picture of half-metallic ferromagnetism can be qualitatively described by the simplest narrow-band Hubbard model with large on-site repulsion $U$.
In this model, doubly occupied states (doubles) are absent owing to the Hubbard splitting, but states with both spin projections are still present. Thus the situation is different from the Stoner model where spin splitting becomes infinitely large. The corresponding Hamiltonian reads $\mathcal{H}_{\mathrm{H}}=\sum_{ij\sigma}t_{ij}\tilde{c}^{\dagger}_{i\sigma}\tilde{c}_{j\sigma}$ (1) where $\tilde{c}^{\dagger}_{i\sigma}=X_{i}(0,\sigma)=|i0\rangle\langle i\sigma|$ (2) are the Hubbard projection operators describing motion of holes in the correlated state on the background of magnetic moments. We can use the slave fermion representation $X_{i}(0,\sigma)=f_{i}^{\dagger}b_{i\sigma},\,X_{i}(+,-)=b^{\dagger}_{i\uparrow}b_{i\downarrow},$ (3) where $f_{i}$ are fermions and $b_{i\sigma}$ are Schwinger boson operators (cf. [23, 24]), so that $\sum_{\sigma}b^{\dagger}_{i\sigma}b_{i\sigma}+f_{i}^{\dagger}f_{i}=1.$ (4) In the saturated ferromagnetic state the $b_{i\uparrow}$ boson is condensed, $\langle b_{i\uparrow}\rangle\simeq 1$, and $b_{i\downarrow}$ becomes magnon annihilation operator. Consider the hole Green’s functions for a saturated ferromagnet in the representation (3). $G_{\mathbf{k}\sigma}(E)=\langle\langle X_{\mathbf{k}}(\sigma,0)|X_{\mathbf{-k}}(0,\sigma)\rangle\rangle_{E}.$ (5) The spin-up (majority) states propagate freely on the background of strong ferromagnetic ordering, $G_{\mathbf{k}\uparrow}(E)=(E-t_{\mathbf{k}})^{-1}$ (6) with $t_{\mathbf{k}}$ the bare band spectrum. For our system, these possess an exotic spectrum of chiral Weyl fermions in the internal field owing to ferromagnetic ordering of local moments. This just leads to unusual topological properties. 
The spin-down (minority) Green’s function in the leading approximation is obtained as a convolution of the Green’s functions for free fermions and bosons, so that $G_{\mathbf{k}\downarrow}(E)=\sum_{\mathbf{q}}\frac{N_{B}(\omega_{\mathbf{q}})+f(t_{\mathbf{k}+\mathbf{q}})}{E-t_{\mathbf{k}+\mathbf{q}}+\omega_{\mathbf{q}}}$ (7) where $N_{B}(\omega)$ and $f(E)$ are the Bose and Fermi functions, and $\omega_{\mathbf{q}}$ is the magnon spectrum. Similar results for a Hubbard ferromagnet were obtained earlier in the many-electron representation of X-operators [25, 26], the analogy with Anderson’s spinons [27] being discussed. The Green’s function (7) has a purely non-quasiparticle nature. Because of the very weak $\mathbf{k}$-dependence of the corresponding distribution function, the non-quasiparticle (incoherent) states do not carry current. As more exact calculations demonstrate [26], at small doping the spin-down Green’s function (7) has no poles above the Fermi level of holes $E_{F}$, so that the above conclusions are not changed. However, with increasing doping, it can acquire a spin-polaron pole above $E_{F}$, and the half-metallic ferromagnetism is destroyed (Fig. 3). Figure 3: Schematic picture of the density of states in the narrow-band Hubbard model illustrating the doping-induced Lifshitz transition; the data are from Ref. [29]. Dashed line is the bare (and also spin-up) density of states; solid and dot-dashed lines are the spin-down densities of states for the half-metallic and usual ferromagnetic states. Despite such unusual properties, the minority states cannot be fully neglected.
In fact, their number is equal to the number of majority states $n_{0}$ owing to the sum rule $\sum_{\mathbf{k}}\langle X_{\mathbf{-k}}(0,\sigma)X_{\mathbf{k}}(\sigma,0)\rangle=\langle X_{i}(0,0)\rangle=n_{0}$ (8) for both projections $\sigma$ (which is satisfied in the approximation (7)), so that the current carriers (Hubbard’s holes) are in a sense spinless (see Fig. 3: the corresponding areas below the Fermi level, which determine the occupation numbers, are nearly equal). The minority states at the Fermi surface are absent in the ground state and occur there at finite temperatures owing to thermal magnon excitations. The physics does not qualitatively change in the case of finite Hubbard $U$, since the doubles are automatically eliminated in the saturated half-metallic state [26]. The description of the transition to the half-metallic state (vanishing of the quasiparticle residue for one spin projection) is similar to that of the Mott transition in the paramagnetic Hubbard model [28]. Here, the Fermi surface becomes ghost (hidden) after the metal-insulator transition in the insulating (Mott) phase, and a fractionalization of electron states occurs, including spin-charge separation into a neutral fermion (spinon) and a charged boson (holon). Our situation can be described as a partial Mott transition in the minority spin subband, cf. the discussion of the orbital-selective Mott transition [30]. The electron states in a strongly correlated system need not have a purely quasiparticle nature. They can be described by both poles and branch cuts of the Green’s function. For minority states, the quasiparticle residue $Z$ is completely suppressed, which implies the formation of an energy gap and takes place, e.g., at the Mott transition [20]. The violation of the standard Fermi-liquid picture can be described in terms of the formation of the Luttinger surface, which is the surface of zeros of the electron Green’s function.
The Lifshitz transitions with vanishing quasiparticle poles can be viewed as quantum phase transitions with a change of the topology of the Fermi surface, but without symmetry breaking. Indeed, the Fermi surface itself is the singularity in the Green’s function, which is characterized by the topological invariant $N_{1}$ and is topologically protected: it is the vortex line in the frequency-momentum space [20, 21]. In the gapped phase, the usual Fermi surface does not exist, but its topology is preserved if we take into account the Luttinger contribution. Then the Luttinger theorem (the conservation of the volume enclosed by the Fermi surface) is still valid [20]. Thus the Fermi surface becomes ghost (hidden) in the Mott phase for both spin projections, and in our half-metallic situation for minority states, since the Fermi level lies in the corresponding gap. On the contrary, the transition with disappearance of the Weyl points is essentially topological: topological invariants are changed. In the Weyl semimetal phase, the Weyl points have topological charges $N_{3}=+1$ and $-1$ and annihilate at the critical Dirac semimetal. Further on, in the normal paramagnetic state the topology owing to the Berry curvature vanishes. Thus the conservation law for the topological charge [20] is fulfilled. In the insulator case, we have a transition from a topological to a normal insulator with restoration of time-reversal symmetry. A still more complicated situation occurs in the case of Chern insulators with a change of the Chern number [31, 7, 32]. At hole doping in the Co3InxSn2-xS2 system, the suppression of ferromagnetism is also accompanied by a decrease of the Berry curvature, and in the paramagnetic phase both the Weyl points and nodal rings vanish [14].

## 3 Discussion

As stated in Ref. [15], the topological properties and strong correlations in Co3Sn2S2 are intricately linked, so that one cannot be adequately considered without the other.
The scanning tunnelling microscopy data in Co3Sn2S2 demonstrated a pronounced peak at the Fermi level, which was identified as arising from the kinetically frustrated kagome flat band [9]. The occurrence of an electronic band connecting the two Weyl cones and flattened by electronic correlations was demonstrated in Ref. [10], the Coulomb-interaction strength being estimated as $U\sim 4$ eV. The electron correlations in the pyrochlore iridates $R_{2}$Ir2O7 were discussed via first-principles calculations in Ref. [33]. Depending on the strength of correlations $U$, a Mott insulating phase with a magnetic structure or a Weyl semimetal phase with Weyl points at the Fermi energy were treated. A variety of phases was predicted, ranging from a normal metal at small $U$ to a Weyl semimetal at intermediate $U\sim$ 1.5 eV and a Mott insulating phase at $U$ above 2 eV with non-collinear all-in/all-out magnetic ordering. The giant anomalous Hall effect observed in Nd2Mo2O7, a pyrochlore ferromagnet with a geometrically frustrated lattice structure, is mostly due to the spin chirality and the associated Berry phase originating from the Mo spin tilting [34]. However, the situation in pyrochlores with strong spin-orbit coupling differs from half-metallic ferromagnetism. We have demonstrated that in the half-metallic ferromagnetic state Hubbard correlations do not result in narrowing of the bare bands for majority states, but in the paramagnetic state the situation changes: we come to the regime of narrow correlated bands for both spin projections. These may be characterized either by a strongly renormalized quasiparticle residue, or even by a non-Fermi-liquid (e.g., marginal Fermi-liquid [28]) behavior. Besides the absence of a $T$-breaking internal magnetic field in the paramagnetic phase, this can be important for the vanishing of topological effects. Since magnetic frustration effects in the kagome lattice should play a role, the magnetic structure of Co3Sn2S2 is to be discussed in more detail.
According to [13], at finite temperatures this includes the out-of-plane ferromagnetism, in-plane antiferromagnetism, and hidden phases. A metastable magnetic phase exists in some interval $T_{com}<T<T_{C}$. The corresponding values of the transition temperatures are $T_{C}=182$ K, $T_{N}=177$ K, and $T_{com}=150$ K. The out-of-plane magnetization of Co3Sn2S2-xSex demonstrates a first-order phase transition, which may again indicate strong half-metallic magnetism. This may also be important for a combined description of the non-topological ferromagnetic and topological transitions. A possibility of a chiral spin state is discussed in [18]. From the $\mu$SR measurements, the ground ferromagnetic state with the moment along the c axis exists in Co3Sn2S2 below $T^{*}_{C}$ = 90 K, and a coexistence of the ferromagnetic order and an in-plane 120∘ antiferromagnetic order was proposed at $T>T^{*}_{C}$ [11] (see, however, Ref. [35]). The antiferromagnetic volume fraction grows with increasing temperature and dominates around 170 K, before it disappears at $T_{C2}=172$ K. Above $T_{C2}$, the sample has a small volume fraction with the out-of-plane ferromagnetic order, the rest of the volume being occupied by the paramagnetic state up to $T_{C1}=177$ K. The temperature-dependent magnetic fraction shows a rather sharp transition between the paramagnetic and magnetic states, with coexistence of their regions in the interval $T_{C2}<T<T_{C1}$. Thus, although frustrations in Co3Sn2S2 seem to be important, they turn out to be insufficient for the formation of a paramagnetic spin-liquid-like state. The situation seems to be different for the correlated kagome systems YCr6Ge6 [36] and Na2/3CoO2 [37] without magnetic ordering. Besides pyrochlores, a comparison can be made with other three-dimensional systems.
Two topological Weyl cones with band crossing points were identified around the X point for the Heusler alloy Co2MnGe, which can induce a large anomalous Hall effect owing to the Berry flux in the half-metallic ferromagnet structure [38]. To conclude, Co3Sn2S2 provides a bright example of the coexistence of non-trivial topology and half-metallic ferromagnetism in a quasi-two-dimensional system. Both these factors are important for the unusual phase transitions and anomalies of electronic properties, including the giant anomalous Hall effect. The authors are grateful to A. V. Zarubin for the help in preparing the manuscript. The research was carried out within the state assignment of FASO of Russia (theme “Flux” No. AAAA-A18-118020190112-8 and theme “Quantum” No. AAAA-A18-118020190095-4).

## References

* [1] Q. Wang, Y. Xu, R. Lou, Zh. Liu, M. Li, Y. Huang, D. Shen, H. Weng, Sh. Wang, H. Lei, Nature Comm. 9, 3681 (2018). * [2] Q. Xu, E. Liu, W. Shi, L. Muechler, J. Gayles, C. Felser, and Y. Sun, Phys. Rev. B 97, 1 (2018). * [3] E. Liu, Y. Sun, N. Kumar, L. Muechler, A. Sun, L. Jiao, Sh.-Y. Yang, D. Liu, A. Liang, Q. Xu, J. Kroder, V. Seuss, H. Borrmann, Ch. Shekhar, Zh. Wang, Ch. Xi, W. Wang, W. Schnelle, S. Wirth, Y. Chen, S. T. B. Goennenwein and C. Felser, Nature Phys. 14, 1125 (2018). * [4] D. F. Liu, A. J. Liang, E. K. Liu, Q. N. Xu, Y. W. Li, C. Chen, D. Pei, W. J. Shi, S. K. Mo, P. Dudin, T. Kim, C. Cacho, G. Li, Y. Sun, L. X. Yang, Z. K. Liu, S. S. P. Parkin, C. Felser, and Y. L. Chen, Science 365, 1282 (2019). * [5] L. Jiao, Q. Xu, Y. Cheon, Y. Sun, C. Felser, E. Liu, and S. Wirth, Phys. Rev. B 99, 1 (2019). * [6] H. Zhou, G. Chang, G. Wang, X. Gui, X. Xu, J.-X. Yin, Z. Guguchia, S. S. Zhang, T.-R. Chang, H. Lin, W. Xie, M. Z. Hasan, Sh. Jia, Phys. Rev. B 101, 125121 (2020). * [7] L. Muechler, E. Liu, J. Gayles, Q. Xu, C. Felser, Y. Sun, Phys. Rev. B 101, 115106 (2020). * [8] M. Tanaka, Y. Fujishiro, M. Mogi, Y. Kaneko, T. Yokosawa, N. Kanazawa, S. Minami, T.
# Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections Zhenzhang Ye, Tarun Yenamandra, Florian Bernard, Daniel Cremers Technical University of Munich {zz.ye, tarun.yenamandra, f.bernard<EMAIL_ADDRESS> ###### Abstract Graph matching aims to establish correspondences between vertices of graphs such that both the node and edge attributes agree. Various learning-based methods were recently proposed for finding correspondences between image key points based on deep graph matching formulations. While these approaches mainly focus on learning node and edge attributes, they completely ignore the 3D geometry of the underlying 3D objects depicted in the 2D images. We fill this gap by proposing a trainable framework that takes advantage of graph neural networks for learning a deformable 3D geometry model from inhomogeneous image collections, i.e. a set of images that depict different instances of objects from the same category. Experimentally we demonstrate that our method outperforms recent learning-based approaches for graph matching considering both accuracy and cycle-consistency error, while we in addition obtain the underlying 3D geometry of the objects depicted in the 2D images. [Figure 1 panel labels: previous approach [55]; our proposed approach; sparse non-rigid geometry; 3D reconstruction.] Figure 1: We consider a deep graph matching approach for bringing 2D image key points into correspondence. Left: Existing deep graph matching methods completely ignore the underlying 3D geometry of the 3D objects depicted in the 2D images. In addition, they lead to cycle errors, as shown by the red line. Middle: Our method obtains the underlying 3D geometry from a collection of inhomogeneous 2D images (indicated by the coloured points and the bike sketch in the centre), while at the same time guaranteeing cycle consistency.
Right: To model nonlinear 3D object deformations, we infer coarse 3D geometry and in addition use a 3D deformation module to refine the underlying 3D geometry based on the 2D image key point observations.

## 1 Introduction

Graph matching is a widely studied problem in computer vision, graphics and machine learning due to its universal nature and the broad range of applications. Intuitively, the objective of graph matching is to establish correspondences between the nodes of two given weighted graphs, so that the weights of corresponding edges agree as well as possible. Diverse visual computing tasks fit into the generic graph matching framework. In this work we focus in particular on the task of matching 2D key points defined in images, which has a high relevance for 3D reconstruction, tracking, deformation model learning, localisation, and many more. In this case, a graph is constructed for each image by using the key points as graph nodes, and by connecting neighbouring key points with edges, according to some suitable neighbourhood criterion. The edges typically contain information about geometric relations, such as the Euclidean distance between nodes in the simplest case. Image key point matching was traditionally addressed based on finding nearest neighbours between feature descriptors such as SIFT [33], SURF [3] or learned features [23]. A downside of this approach is that the geometric relations between the key points are completely ignored, which is in particular problematic if there are repetitive structures that lead to similar feature descriptors. Instead, we can use a graph matching formulation to establish correspondences between key points while taking into account geometric relations between points. Yet, the sequential nature of first computing features and then bringing them into correspondence may lead to sub-optimal results, since both tasks are solved independently from each other, despite their mutual dependence.
More recently, several deep learning-based graph matching methods have been proposed that learn task-specific optimal features while simultaneously solving graph matching in an end-to-end manner [58, 51, 55, 41]. While such deep graph matching approaches lead to state-of-the-art results in terms of the matching accuracy, they have profound disadvantages, particularly in the context of 2D key point matching in image collections. On the one hand, most existing approaches only consider the matching of pairs of images, rather than the entire collection. This has the negative side-effect that so-obtained matchings are generally not cycle-consistent, i.e. matching image A via B to C may be different than matching image A directly to C. To circumvent this, there are approaches that use a post-processing procedure [52] to establish cycle consistency based on permutation synchronisation [6, 37]. Yet, they do not directly obtain cycle-consistent matchings but rather achieve it based on post-processing. On the other hand, and perhaps more importantly, approaches that use graph matching for 2D image key point matching have the strong disadvantage that the underlying 3D structure of the objects whose 2D projections are depicted in the images is not adequately considered. In particular, the spatial relations in the 2D image plane are highly dependent on the 3D geometric structure of the object, as well as on the camera parameters. Hence, learning graph features directly based on the image appearance and/or 2D image coordinates is sub-optimal, at best, since the neural network implicitly needs to learn the difficult task of reasoning about the underlying 3D structure. In this work we address these issues by proposing a deep multi-graph matching approach that simultaneously learns the 3D structure of objects. 
We summarise our main contributions as follows:

* For the first time we propose a solution for jointly considering deep multi-graph matching and inferring sparse 3D geometry from inhomogeneous 2D image collections, see Fig. 1.
* To effectively deal with the inhomogeneity of the image collection, in which different instances of objects of the same category are present (e.g. different types of bikes as shown in Fig. 1), we introduce a novel deformable 3D model that we directly learn from the image collection based on a graph neural network.
* Rather than performing pairwise image-to-image matching, we consider an image-to-deformable-3D-model matching formulation that by construction guarantees cycle consistency.
* Our approach substantially outperforms the previous state of the art in learning-based graph matching approaches considering accuracy and cycle error simultaneously.

## 2 Related Work

In the following we summarise the works that we consider most relevant to our approach. For a more detailed background on image key point matching we refer interested readers to the recent survey paper [34].

Feature-Based Matching. Feature descriptors extracted from images at key point locations, e.g. based on SIFT [33], SURF [3], or deep neural networks [23], are often used for image matching. In order to bring extracted features into correspondence, commonly a nearest neighbour strategy [4] or a linear assignment problem (LAP) [13] formulation is used. However, these methods suffer from the problem that the geometric relations between the key points in the images are not taken into account.

Graph Matching and Geometric Consistency. Geometric relations can be taken into account by modelling feature matching as a graph matching problem.
Here, the image key points represent the graph nodes, and the edges in the graph encode geometric relations between key points (e.g. spatial distances). Mathematically, graph matching can be phrased in terms of a quadratic assignment problem [24, 38, 30, 13]. There are many existing works for addressing the graph matching problem in visual computing, including [17, 61, 45, 18, 5]. A drawback of these approaches is that they mostly rely on handcrafted graph attributes and/or respective graph matching cost functions based on affinity scores. In [59], a learning-based approach that directly obtains affinity scores from data was introduced. More recently, the differentiation of the power iteration method has been considered in a deep graph matching approach [58]. A more general blackbox differentiation approach was introduced in [41]. Various other deep learning approaches have been proposed for graph matching [28, 20], and some approaches also address image key point matching [51, 60, 55]. In this case, optimal graph features are directly learned from the image appearance and/or 2D image coordinates, while simultaneously solving graph matching in an end-to-end manner. Although these methods consider geometric consistency, they are tailored towards matching a pair of graphs and thus lead to cycle-inconsistent matchings when pairwise matchings of more than two graphs are computed.

Synchronisation and Multi-Matching. Cycle-consistency is often obtained as a post-processing step after obtaining pairwise matchings. Methods that establish cycle consistency in the set of pairwise matchings are commonly referred to as permutation synchronisation methods [37, 62, 35, 6, 8]. There are also methods for directly obtaining cycle-consistent multi-matchings [49, 50, 7].
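To make the quadratic assignment formulation concrete, here is a brute-force toy sketch (not any of the cited solvers, which rely on relaxations): for two tiny graphs with edge-weight matrices, it enumerates all permutations and keeps the one that maximises the agreement of corresponding edge weights.

```python
import itertools
import numpy as np

def qap_match(A1, A2):
    """Brute-force quadratic assignment for tiny graphs.

    A1, A2: (n, n) edge-weight matrices of two graphs with the same number
    of nodes. Returns the permutation p maximising
    sum_ij A1[i, j] * A2[p[i], p[j]]. Only feasible for very small n.
    """
    n = A1.shape[0]
    best_score, best_perm = -np.inf, None
    for perm in itertools.permutations(range(n)):
        p = np.array(perm)
        # A2[np.ix_(p, p)][i, j] == A2[p[i], p[j]]
        score = (A1 * A2[np.ix_(p, p)]).sum()
        if score > best_score:
            best_score, best_perm = score, p
    return best_perm

# Two triangles with identical (distinct) edge weights, nodes relabelled
# by p_true = (1, 2, 0); the recovered matching is the inverse relabelling.
A = np.array([[0., 1., 2.], [1., 0., 3.], [2., 3., 0.]])
p_true = [1, 2, 0]
B = A[np.ix_(p_true, p_true)]
print(qap_match(A, B))  # [2 0 1], the inverse of p_true
```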
Recently, permutation synchronisation has also been considered in a deep graph matching framework, where a separate permutation synchronisation module is utilised to generalise a two-graph matching approach to the matching of multiple graphs [52]. However, when applying such multi-matching approaches to image key point matching they have the significant shortcoming that they ignore the underlying 3D geometry of the 2D points. This makes it extremely difficult to establish correct matchings across images, which after all depict 2D projections of 3D objects in different poses, possibly even under starkly varying perspective projections.

3D Reconstruction. Obtaining 3D geometric information from 2D data is a widely studied problem known as 3D reconstruction. When relying on a monocular input only, 3D reconstruction is generally an ill-posed problem. To circumvent this, statistical 3D models [9, 32, 39] are commonly used as priors in such reconstruction tasks. Reconstruction from a single image or video using a deformable 3D prior has for example been achieved by fitting a 3D morphable model of a specific object class such as human bodies, faces, or cars, and then finding the parameters of the model that best explain the image [47, 10, 53]. However, the availability of a suitable 3D prior is a rather strong assumption. Video-based methods are able to utilise temporal information and temporal consistency constraints to reconstruct dynamic scenes. For the specific object class of faces, recently a method was proposed that is able to simultaneously learn a 3D face prior model while performing 3D reconstruction [46]. There are also some methods that do not assume explicit 3D prior models, but they exploit different geometric and photometric properties of 3D shapes to reconstruct 3D information from images [21, 48, 56]. An alternative to address the ill-posedness of single-view reconstruction is to consider multiple views.
Traditional static scene reconstruction methods use a collection of images from one or many cameras to match the images, obtain camera parameters, and reconstruct an object [43, 42, 1]. More recent methods for multi-view reconstruction assume camera parameters and use self-supervised learning based on a neural renderer to reconstruct static and dynamic objects with novel 3D representations [36, 40, 31, 44]. A downside of multi-view reconstruction methods is that they require many different images of the same object, which is often hard to obtain and unavailable in existing datasets. Contrary to existing approaches, we simultaneously solve deep multi-graph matching and infer sparse 3D geometry from inhomogeneous 2D image collections. In particular, our approach obtains cycle-consistent multi-matchings and does not rely on a hand-crafted template or any other prior 3D model.

## 3 Problem Formulation & Preliminaries

Symbol | Meaning
---|---
$N$ | number of graphs/images
$m_{j}$ | number of vertices in $j$-th graph $\mathcal{G}_{j}$
$d$ | number of vertices in universe graph $\mathcal{U}$
$X_{jk}$ | pairwise matching between $\mathcal{G}_{j}$ and $\mathcal{G}_{k}$
$X_{j}$ | matching between $\mathcal{G}_{j}$ and $\mathcal{U}$
$U$ | homogeneous coordinates of universe points
$\mathcal{V}_{j}$ | homogeneous coordinates of pixels in $j$-th image
$j,k,l$ | indices used to refer to images and graphs
$i$ | index used to refer to the $i$-th point

Table 1: Overview of our notation.

In this section we summarise how to achieve cycle-consistency for multiple graph matching by utilising the notion of universe points. In order to explicitly construct such universe points, we formalise it as a sparse reconstruction of the 3D key points from multiple 2D images.
### 3.1 Multi-Matching and Cycle Consistency

We are given a set $\\{\mathcal{G}_{j}\\}_{j=1}^{N}$ of $N$ undirected graphs, where each graph $\mathcal{G}_{j}=(\mathcal{V}_{j},\mathcal{E}_{j})$ comprises a total of $m_{j}$ nodes $\mathcal{V}_{j}=\\{v_{1},\dots,v_{m_{j}}\\}$ and a total of $n_{j}$ edges $\mathcal{E}_{j}=\\{e_{1},\dots,e_{n_{j}}\\}$ that connect pairs of nodes in $\mathcal{V}_{j}$. We assume that each node represents an image key point, and that the node $v_{i}\in\mathbb{R}^{2}$ is identified with the respective 2D image coordinates. The pairwise graph matching problem is to find a node correspondence $X_{jk}\in\mathbb{P}_{m_{j}m_{k}}$ between $\mathcal{G}_{j}$ and $\mathcal{G}_{k}$. Here, $\mathbb{P}_{m_{j}m_{k}}$ is the set of $(m_{j}{\times}m_{k})$-dimensional partial permutation matrices. Let $\mathcal{X}=\\{X_{jk}\in\mathbb{P}_{m_{j}m_{k}}\\}_{j,k=1}^{N}$ be the set of pairwise matchings between all graphs in $\\{\mathcal{G}_{j}\\}_{j=1}^{N}$. The set $\mathcal{X}$ is said to be cycle-consistent if for all $j,k,l\in\\{1,\dots,N\\}$, the following three properties hold [22, 49, 6]:

1. $X_{jj}=\mathbb{I}_{m_{j}}$, where $\mathbb{I}_{m_{j}}$ denotes the identity matrix of size $m_{j}$.
2. $X_{jk}=X_{kj}^{T}$.
3. $X_{jk}X_{kl}\preceq X_{jl}$ (element-wise).

When solving multi-graph matchings in terms of pairwise matching, cycle consistency is desirable since it is an intrinsic property of the (typically unknown) ground truth matching. Rather than explicitly imposing the above three constraints, it is possible to achieve cycle consistency by representing the pairwise matchings in terms of a universe graph [22, 49, 6]:

###### Lemma 1

The set $\mathcal{X}$ of pairwise matchings is cycle-consistent, if there exists a collection $\\{X_{j}\in\mathbb{P}_{m_{j}d}:X_{j}\mathbf{1}_{d}=\mathbf{1}_{m_{j}}\\}_{j=1}^{N}$ such that for each $X_{jk}\in\mathcal{X}$ it holds that $X_{jk}=X_{j}X_{k}^{T}$.
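Lemma 1 can be checked numerically: if every pairwise matching is induced by object-to-universe assignments as $X_{jk}=X_{j}X_{k}^{T}$, the three properties hold by construction. A small numpy sketch (all sizes and the random assignment are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5              # number of universe points
sizes = [3, 4, 5]  # nodes per graph

# Object-to-universe matchings: each X_j assigns every node of G_j to a
# distinct universe point (a partial permutation with row sums of one).
Xs = []
for m in sizes:
    cols = rng.choice(d, size=m, replace=False)
    X = np.zeros((m, d))
    X[np.arange(m), cols] = 1
    Xs.append(X)

def pair(j, k):
    """Pairwise matching induced by Lemma 1: X_jk = X_j X_k^T."""
    return Xs[j] @ Xs[k].T

N = len(Xs)
for j in range(N):
    assert np.allclose(pair(j, j), np.eye(sizes[j]))       # property 1
    for k in range(N):
        assert np.allclose(pair(j, k), pair(k, j).T)       # property 2
        for l in range(N):
            # property 3: X_jk X_kl <= X_jl element-wise
            assert np.all(pair(j, k) @ pair(k, l) <= pair(j, l) + 1e-9)
print("cycle-consistent")
```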
Here, the $X_{j}$ can be considered as the pairwise matching between the graph $\mathcal{G}_{j}$ and a universe graph $\mathcal{U}=(\mathcal{V},\mathcal{E})$, defined analogously to $\mathcal{G}_{j}$, where $\mathcal{V}=\\{u_{1},\dots,u_{d}\\}$ denotes the universe points and $\mathcal{E}=\\{e_{1},\dots,e_{n}\\}$ the universe edges. Intuitively, the universe graph can be interpreted as assigning each point in $\mathcal{G}_{j}$ to exactly one of the $d$ universe points in $\mathcal{U}$. Therefore, rather than modelling the cubic number of cycle consistency constraints on $\\{\mathcal{G}_{j}\\}_{j=1}^{N}$ explicitly, we use an object-to-universe matching formulation based on the $\\{X_{j}\\}_{j=1}^{N}$.

### 3.2 3D Reconstruction

Though the idea of the universe graph is a crucial ingredient for synchronisation approaches [37, 22, 6], the universe graph is never explicitly instantiated in these methods. That is because it is merely used as an abstract entity that must exist in order to ensure cycle consistency in multi-matchings. Considering that the graphs in this work come from image collections, we assume that the nodes $u_{i}\in\mathbb{R}^{3}$ of the universe graph represent 3D points, which will allow us to address their explicit instantiation based on multiple-view geometry. We denote the homogeneous coordinate representation of the universe point $u_{i}\in\mathbb{R}^{3}$ (represented in world coordinates) as $U_{i}=(u_{i},1)\in\mathbb{R}^{4}$.
Its projection onto the $j$-th image plane, denoted by $\mathcal{V}_{ij}=(v_{ij},1)\in\mathbb{R}^{3}$, is given by $\mathcal{V}_{ij}=\lambda_{ij}K_{j}\underbrace{\left(\begin{matrix}1&0&0&0\\\ 0&1&0&0\\\ 0&0&1&0\end{matrix}\right)}_{\Pi_{0}}\underbrace{\left(\begin{matrix}R_{j}&T_{j}\\\ 0&1\end{matrix}\right)}_{g_{j}}U_{i}.$ (1) Here, $g_{j}$ represents the world-to-camera space rigid-body transformation comprising the rotation $R_{j}\in\mathbb{R}^{3\times 3}$ and the translation $T_{j}\in\mathbb{R}^{3}$, $\Pi_{0}$ is the canonical projection matrix, $K_{j}\in\mathbb{R}^{3\times 3}$ is the intrinsic camera matrix, and $\lambda_{ij}\in\mathbb{R}$ is the scale parameter. For brevity, we define the general projection matrix $\Pi_{j}=K_{j}\Pi_{0}g_{j}$. Let $U\in\mathbb{R}^{4\times d}$ be the stacked universe points in homogeneous coordinates, let $\mathcal{V}_{j}\in\mathbb{R}^{3\times d}$ be the respective projection onto the $j$-th image plane, and let $\Lambda_{j}=\operatorname{diag}(\lambda_{1j},\ldots,\lambda_{dj})\in\mathbb{R}^{d\times d}$ be the diagonal matrix with the $\lambda_{ij}$ on its diagonal. The matrix formulation of Eq. (1) is $\mathcal{V}_{j}=\Pi_{j}U\Lambda_{j}.$ (2) Once we have a collection of $N$ images of different objects from the same category (not necessarily the same object instance, e.g. two images of different bicycles), reconstructing the universe points $U$ can be phrased as finding the minimiser of the least-squares problem $\displaystyle\operatorname{arg\,min}_{U}\sum_{j=1}^{N}||\Pi_{j}U\Lambda_{j}-\mathcal{V}_{j}||_{F}^{2}.$ (3) Note that in practice the variables $U,\\{\Lambda_{j}\\}$ and $\\{\Pi_{j}\\}$ are generally unknown, so that without further constraints this is an under-constrained problem. In the next section, we will elaborate on how we approach this. Figure 2: Overview of our algorithm.
Given an image with features, we predict the corresponding features in 3D as a combination of universe 3D points and deformations. The universe 3D points are learned during training for a given class of objects, while the deformations are predicted per image. We create edges among the 2D and 3D features and find a matching between the two graphs using a graph matching network. Since the matchings are between universe points and images, our algorithm is cycle consistent.

## 4 Proposed Method

Our learning framework consists of four main components. The first two components have the purpose to obtain 3D universe points, along with a deformation of these 3D points representing the underlying 3D structure of the 2D key points in the $j$-th image. The purpose of the other two components is to predict the matching between the 2D points of $\mathcal{G}_{j}$ and the 3D points of $\mathcal{U}$. Thus, rather than learning pairwise matchings between $\mathcal{G}_{j}$ and $\mathcal{G}_{k}$, we utilise an object-to-universe matching formulation. Therefore, the underlying 3D structure and cycle-consistent multi-matchings are both attained by our method. The whole pipeline is illustrated in Fig. 2 and comprises the following four main components:

1. Learnable 3D Universe Points: the 2D key points $\\{\mathcal{V}_{j}\\}_{j=1}^{N}$ of all images in the collection are used to reconstruct the 3D universe points $U$ by incorporating a reconstruction loss that approximates Eq. (3).
2. Deformation Module: the retrieved universe points $U$ are static and therefore they cannot accurately model the geometric variability present in different instances of an object from the same category (e.g. different bicycles). To address this, the universe points are non-linearly deformed by the deformation module that takes the 2D points and the (learned) 3D universe points as input.
3.
Assignment Graph Generation: by connecting the 2D points and universe points, respectively, the 2D graph and the 3D universe graph are constructed. The assignment graph is then constructed as the product of these two graphs.
4. Graph Matching Network: a graph matching network performs graph convolutions on the assignment graph, and eventually performs a binary node classification on the assignment graph representing the matching between the 2D graph and the universe graph.

### 4.1 Learnable 3D Universe Points

As discussed in Sec. 3.2, the universe points can be retrieved by minimising (3). This problem, however, is generally under-determined, since $U,\\{\Lambda_{j}\\}$ and $\\{\Pi_{j}\\}$ in (3) are generally unknown in most practical settings. Additionally, although all objects share a similar 3D geometry, the nonlinear deformations between different instances are disregarded in (3). Thus, instead of an exact solution we settle for an approximation that we later refine in our pipeline. To this end, we assume a weak perspective projection model, i.e. all universe points are assumed to have the same distance from the camera. With this condition, the diagonal of $\Lambda_{j}$ is constant and can be absorbed into $\Pi_{j}$. This leads to the least-squares problem $\operatorname{arg\,min}_{U}\sum_{j=1}^{N}||\Pi_{j}U-\mathcal{V}_{j}||_{F}^{2},$ (4) which can be solved in an end-to-end manner during neural network training based on commonly used ‘backpropagable’ pseudo-inverse implementations. The variable $\Pi_{j}$ can be written as $\Pi_{j}=\mathcal{V}_{j}U^{+}$ where $U^{+}$ is the right pseudo-inverse that satisfies $UU^{+}=\mathbb{I}_{4}$.
Therefore, we introduce the reconstruction loss $\mathcal{L}_{\text{rec}}=\frac{1}{N}\sum_{j=1}^{N}||\mathcal{V}_{j}U^{+}U-\mathcal{V}_{j}||^{2}_{F}.$ (5)

### 4.2 Deformation Module

The universe points retrieved in the previous step are static, meaning that they can only reflect the rough geometric information of the underlying 3D object, but that they are unable to accurately represent finer-scale variations between different instances of the same object category. Thus, we introduce the deformation module to learn an additional nonlinear deformation. This module takes the universe points $U$ and the 2D points $\mathcal{V}_{j}$ as input. As shown in the bottom left of Fig. 2, $\mathcal{V}_{j}$ is passed to a 2D Point Encoder. The encoder first performs a nonlinear feature transform of all input points based on a multi-layer perceptron (MLP), and then performs a max pooling to get a global feature representing the input object. As can be seen in the top left in Fig. 2, an MLP is utilised to perform a nonlinear feature transform for each of the 3D points in $U$. Each 3D point feature is then concatenated with the same global feature from the 2D Point Encoder. The concatenated per-point features are fed into an MLP to compute the motion of each point. The output is a set of per-point offsets $S_{j}\in\mathbb{R}^{3\times d}$ that are added to $U$ to generate the deformed 3D universe points. The computation of the per-point offsets is summarised as $\displaystyle S_{j}$ $\displaystyle=\text{MLP}\left(\text{MLP}(U)\circ\text{Encoder}(\mathcal{V}_{j})\right),$ (6) where $\circ$ represents the concatenation operation. Similarly as for the reconstruction loss, the projection of the deformed universe points onto the image plane should be close to the observed 2D points. Additionally, since the static 3D universe points should reflect the rough geometry of the underlying 3D object, the offset $S_{j}$ should be small.
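The pseudo-inverse reconstruction loss of Eq. (5) can be sketched in a few lines of numpy on synthetic weak-perspective data (the paper instead backpropagates through the pseudo-inverse inside the training loop; the data here is purely illustrative):

```python
import numpy as np

def reconstruction_loss(U, Vs):
    """Eq. (5): mean squared Frobenius error of V_j U^+ U - V_j.

    U  : (4, d) homogeneous universe points (d >= 4, full row rank).
    Vs : list of (3, d) homogeneous 2D observations, one per image.
    V_j U^+ recovers the weak-perspective Pi_j in closed form (Sec. 4.1),
    so V_j U^+ U is the corresponding reprojection of the universe.
    """
    U_pinv = np.linalg.pinv(U)  # right pseudo-inverse: U @ U_pinv = I_4
    return np.mean([np.linalg.norm(V @ U_pinv @ U - V, "fro") ** 2
                    for V in Vs])

# Sanity check: under an exact weak-perspective model the loss vanishes,
# since V_j = Pi_j U implies V_j U^+ U = Pi_j (U U^+) U = V_j.
rng = np.random.default_rng(1)
d = 10
U = np.vstack([rng.standard_normal((3, d)), np.ones((1, d))])
Pis = [rng.standard_normal((3, 4)) for _ in range(4)]
Vs = [Pi @ U for Pi in Pis]
print(reconstruction_loss(U, Vs))  # ~0 up to floating point
```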
Therefore, we introduce the deformed reconstruction loss and the offset regulariser as $\displaystyle\mathcal{L}_{\text{def}}$ $\displaystyle=\frac{1}{N}\sum_{j=1}^{N}||\mathcal{V}_{j}(U{+}S_{j})^{+}(U{+}S_{j})-\mathcal{V}_{j}||^{2}_{F},\text{ and}$ (7) $\displaystyle\mathcal{L}_{\text{off}}$ $\displaystyle=||S_{j}||^{2}_{F}.$ (8)

### 4.3 Assignment Graph Generation

To obtain graphs from the 2D points and the deformed 3D universe points, respectively, we apply the Delaunay algorithm [11] to generate edges, see Fig. 2. Moreover, we define the attribute of each edge as the concatenation of the coordinates of the respective connecting points. Once the 3D universe graph $\mathcal{U}$ and the 2D graph $\mathcal{G}_{j}$ are generated, we construct the assignment graph $\mathcal{G}^{A}_{j}$ as the product graph of $\mathcal{U}$ and $\mathcal{G}_{j}$ following [25]. To be more specific, the nodes in $\mathcal{G}^{A}_{j}$ are defined as the product of the two node sets $\mathcal{V}_{j}$ and $\mathcal{V}$ of $\mathcal{G}_{j}$ and $\mathcal{U}$, respectively, i.e. $\mathcal{V}^{A}_{j}=\\{v_{jk}:v_{jk}=(v_{j},u_{k})\in\mathcal{V}_{j}\times\mathcal{V}\\}$. The edges in $\mathcal{G}^{A}_{j}$ are built between nodes $v_{jk}$, $v_{mn}\in\mathcal{V}^{A}_{j}$ if and only if there is an edge between $v_{j}$ and $v_{m}$ in $\mathcal{E}_{j}$, as well as between $u_{k}$ and $u_{n}$ in $\mathcal{E}$. The attribute of each node and edge in $\mathcal{G}^{A}_{j}$ is again the concatenation of the attributes of the corresponding nodes and edges in $\mathcal{G}_{j}$ and $\mathcal{U}$, respectively.

### 4.4 Graph Matching Network

The graph matching problem is converted to a binary classification problem on the assignment graph $\mathcal{G}^{A}_{j}$. For example, an assignment graph is shown on the top right of Fig. 2.
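The product construction of Sec. 4.3 can be sketched as follows; edge sets are given directly rather than generated by Delaunay, and attribute concatenation is omitted for brevity (names are illustrative, not the paper's code):

```python
import itertools

def assignment_graph(edges_2d, edges_3d, n2d, n3d):
    """Product (assignment) graph of a 2D graph and the universe graph.

    Nodes are all pairs (v, u); an edge connects (v, u) and (v', u')
    iff (v, v') is a 2D edge and (u, u') is a universe edge.
    Undirected edges are passed as iterables of index pairs.
    """
    nodes = list(itertools.product(range(n2d), range(n3d)))
    e2 = {frozenset(e) for e in edges_2d}
    e3 = {frozenset(e) for e in edges_3d}
    edges = [
        (a, b)
        for a, b in itertools.combinations(nodes, 2)
        if frozenset((a[0], b[0])) in e2 and frozenset((a[1], b[1])) in e3
    ]
    return nodes, edges

# A triangle matched against a triangle yields 3*3 = 9 assignment nodes.
nodes, edges = assignment_graph(
    [(0, 1), (1, 2), (0, 2)], [(0, 1), (1, 2), (0, 2)], 3, 3)
print(len(nodes), len(edges))  # 9 18
```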
Classifying nodes $\\{1c,2b,3a\\}$ as positive is equivalent to matching point $1$ to $c$, $2$ to $b$ and $3$ to $a$, where numeric nodes correspond to the 2D graph, and alphabetic nodes correspond to the 3D universe graph. The assignment graph is then passed into the graph matching network [55], which encodes the attributes of the assignment graph into a latent representation. This is achieved by alternatingly applying edge convolutions and node convolutions. The edge convolution assembles the attributes of the connected nodes, while the node convolution aggregates the information from its adjacent edges and updates the attributes of each node. The overall architecture is based on the graph network from [2].

[Figure 3 panels: Car and Duck (Willow dataset); Bicycle and Cow (Pascal VOC dataset).] Figure 3: Qualitative results of the proposed method on the Willow Dataset and Pascal VOC Dataset. It can be seen that our method achieves accurate results for non-deformable objects of different types (car, bike), and that it achieves still reasonable results for different instances of articulated objects (duck, cow).

### 4.5 Loss Function

Similarly to existing deep graph matching approaches, we train our network in a supervised way based on the ground-truth matching matrix $X_{j}^{\text{gt}}$ between $\mathcal{G}_{j}$ and $\mathcal{U}$. To this end, we adopt the mean squared loss to compute a matching loss $\mathcal{L}_{\text{match}}=\frac{1}{N}\sum_{j=1}^{N}||X_{j}^{\text{gt}}-X_{j}||^{2}.$ (9) Furthermore, one-to-one matching is a reasonable prior for graph matching, which was already adopted in previous work [54, 55].
To include a one-to-one matching soft-constraint, we first convert the predicted permutation matrix $X_{j}$ to a binary node label matrix $Y_{j}\in\\{0,1\\}^{m_{j}d\times 2}$ that we define as $Y_{j}=\begin{pmatrix}1{-}\text{vec}(X_{j}),\text{vec}(X_{j})\end{pmatrix}.$ (10) Here, $\text{vec}(X_{j})$ is the vectorisation of $X_{j}$. We can compute the corresponding index vector $y_{j}\in\\{0,1\\}^{m_{j}d}$ defined as $(y_{j})_{i}=\operatorname{arg\,max}_{k\in\\{1,2\\}}(Y_{j})_{ik}.$ (11) By leveraging the auxiliary matrix $B\in\\{0,1\\}^{(m_{j}+d)\times m_{j}d}$ and the ground-truth permutation matrix $X_{j}^{\text{gt}}$ [54], the one-to-one matching regularisation is $\mathcal{L}_{\text{reg}}=||B(y_{j}-\text{vec}(X_{j}^{\text{gt}}))||^{2}.$ (12) The total loss that we minimise during training is $\mathcal{L}=\omega_{\text{m}}\mathcal{L}_{\text{match}}+\omega_{\text{d}}\mathcal{L}_{\text{def}}+\omega_{\text{r}}\mathcal{L}_{\text{rec}}+\omega_{\text{o}}\mathcal{L}_{\text{off}}+\omega_{\text{reg}}\mathcal{L}_{\text{reg}}.$ (13)

### 4.6 Training

We train a single network that is able to handle multiple object categories at the same time. To this end, we learn separate 3D universe points for each category, and in addition we introduce a separate learnable linear operator for each category that is applied to the global feature obtained by the 2D Point Encoder. The linear operator has the purpose to transform the generic global feature to a category-specific representation, and also helps in resolving ambiguities between categories with objects that are somewhat similar (e.g. cat and dog). In practice, we apply a warm start for learning the universe points $U$, which are randomly initialised for each category. To this end, at the beginning of the training, $\omega_{\text{r}}$ is set to 1 and all other weights are 0.
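The one-to-one regularisation of Eq. (12) can be sketched numerically. The auxiliary matrix $B$ is built here as stacked row- and column-sum operators, which penalise any node of either graph being matched more than once; this construction follows [54] in spirit, though the exact matrix used there may differ.

```python
import numpy as np

def one_to_one_reg(y, X_gt):
    """Eq. (12): soft one-to-one penalty via a row/column-sum operator B.

    y    : (m*d,) binary node labels predicted on the assignment graph.
    X_gt : (m, d) ground-truth partial permutation matrix.
    vec() is taken row-major here; it must be consistent for y and X_gt.
    """
    m, d = X_gt.shape
    B_rows = np.kron(np.eye(m), np.ones((1, d)))  # (m, m*d): row sums
    B_cols = np.kron(np.ones((1, m)), np.eye(d))  # (d, m*d): column sums
    B = np.vstack([B_rows, B_cols])               # (m+d, m*d), 0/1 entries
    r = B @ (y - X_gt.reshape(-1))
    return float(r @ r)

X_gt = np.eye(3)
y_good = X_gt.reshape(-1)              # perfect prediction: penalty 0
y_bad = y_good.copy()
y_bad[1] = 1.0                         # point 1 of the 2D graph matched twice
print(one_to_one_reg(y_good, X_gt), one_to_one_reg(y_bad, X_gt))  # 0.0 2.0
```

The double match inflates exactly one row sum and one column sum by one each, which is why the penalty for `y_bad` is 2.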
After a certain number of iterations, 4k in our setting, we replace the reconstruction loss $\mathcal{L}_{\text{rec}}$ by the deformed reconstruction loss $\mathcal{L}_{\text{def}}$ by setting $\omega_{\text{r}}=0$, $\omega_{\text{m}}=1.0$, $\omega_{\text{d}}=0.5$, $\omega_{\text{o}}=0.05$ and $\omega_{\text{reg}}=0.1$ (in all our experiments). The batch size is 16 and the number of iterations after warm start is 150k. The learning rate is $0.008$ and scheduled to decrease exponentially by 0.98 after each 3k iterations.

## 5 Experiments

In the following, we evaluate our method in various settings. We compare our method to different state-of-the-art methods on two datasets, and we evaluate our deformation module based on a dataset of 3D objects. Additional results can be found in the supplementary material.

### 5.1 Comparisons to State of the Art

For the comparison experiments, we follow the testing protocol that was used in CSGM [55]. In the following, we summarise the experimental setting for each dataset and discuss our results. Parts of the matching results are visualised in Fig. 3.

#### 5.1.1 Willow Dataset

We simultaneously train our model for all categories of the Willow dataset [14]. The dataset consists of images from $5$ classes: car, duck, face, motorbike, and wine bottle. It is compiled from the Caltech-256 and Pascal VOC 2007 datasets, and it consists of more than $40$ images per class with $10$ manually labelled distinctive features each. To compare our method against existing methods, we use the same training/test split as in CSGM [55]. For training, $20$ images are randomly chosen from each class and the rest are used for testing. For non-learning-based methods, the affinity matrix is constructed using the appearance similarity of SIFT descriptors [33] as done in [54]. Delaunay triangulation is applied to build graph edges.
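Delaunay-based edge generation, as used for both graph construction in the method (Sec. 4.3) and the baselines, can be sketched with SciPy (the function name is illustrative, not from the paper's code):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    """Undirected graph edges from a Delaunay triangulation of 2D key points.

    points: (m, 2) array of image coordinates. Every triangle contributes
    its three sides; duplicates shared by neighbouring triangles are merged.
    """
    tri = Delaunay(points)
    edges = set()
    for a, b, c in tri.simplices:
        edges |= {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
    return sorted(tuple(sorted(e)) for e in edges)

# Four points in convex position: two triangles sharing one diagonal,
# i.e. 5 unique edges.
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1.2, 1.1]])
print(delaunay_edges(pts))
```

Per-edge attributes as used in the paper (coordinate concatenation for the learned method, or distance plus absolute angle for the affinity baselines) can then be computed directly from each returned index pair.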
Each edge $(k,l)\in\mathcal{E}_{j}$ requires two features $w_{kl}$ and $\theta_{kl}$, where $w_{kl}$ is the pairwise distance between the connected nodes $v_{k}$ and $v_{l}$, and $\theta_{kl}$ is the absolute angle between the edge and the horizontal line with $0\leq\theta_{kl}\leq\pi/2$. The edge affinity between edges $(k,l)$ in $\mathcal{G}_{1}$ and $(a,b)$ in $\mathcal{G}_{2}$ is computed as $e_{(k,a),(l,b)}=\text{exp}(-(|w_{kl}-w_{ab}|+|\theta_{kl}-\theta_{ab}|)/2)$. In our method, we use the 2D key point coordinates as attributes of nodes in $\mathcal{G}_{i}$, while the nodes in $\mathcal{U}$ have the 3D coordinates of the (learned) universe points as attributes. The attributes of edges are the concatenation of the coordinates of the connected nodes. The attributes of nodes and edges in the assignment graph are described in Sec. 4.3.

| | car | duck | face | motor. | wine. | avg. acc. | cyc.-cons. |
|---|---|---|---|---|---|---|---|
| IPFP [26] | 74.8 | 60.6 | 98.9 | 84.0 | 79.0 | 79.5 | ✗ |
| RRWM [15] | 86.3 | 75.5 | 100 | 94.9 | 94.3 | 90.2 | ✗ |
| PSM [19] | 88.0 | 76.8 | 100 | 96.4 | 97.0 | 91.6 | ✗ |
| GNCCP [29] | 86.4 | 77.4 | 100 | 95.6 | 95.7 | 91.0 | ✗ |
| ABPF [54] | 88.4 | 80.1 | 100 | 96.2 | 96.7 | 92.3 | ✗ |
| HARG [14] | 71.9 | 72.2 | 93.9 | 71.4 | 86.1 | 79.1 | ✗ |
| GMN [58] | 74.3 | 82.8 | 99.3 | 71.4 | 76.7 | 80.9 | ✗ |
| PCA [51] | 84.0 | 93.5 | 100 | 76.7 | 96.9 | 90.2 | ✗ |
| CSGM [55] | 91.2 | 86.2 | 100 | 99.4 | 97.9 | 94.9 | ✗ |
| Ours | 98.8 | 90.3 | 99.9 | 99.8 | 100 | 97.8 | ✓ |

Table 2: Matching accuracy on the Willow dataset. Our method beats the previous state of the art and achieves the best average accuracy. Table 2 shows the accuracy of our method compared to other methods on the Willow dataset. It can be seen that on average, our method achieves an accuracy of $97.8\%$, outperforming other methods by at least $2.9\%$, while also being able to reconstruct the 3D structure of objects, see Fig.
In the car category, our method outperforms the others noticeably. Additionally, although there is non-rigid motion in the duck category that is caused by articulation, our method can still achieve a reasonable accuracy. Further, our method is the only one that can guarantee cycle-consistent matchings thanks to the object-to-universe matching formulation. #### 5.1.2 Pascal VOC Keypoints Dataset The Pascal VOC Keypoints dataset [12] contains $20$ categories of objects with labelled key point annotations. The number of key points varies from 6 to 23 for each category. Following [55], we use $7020$ images for training and $1682$ for testing. The accuracy computations are the same as described in Sec. 5.1.1.

| | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | avg. acc. | cyc.-cons. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GMN [58] | 31.9 | 47.2 | 51.9 | 40.8 | 68.7 | 72.2 | 53.6 | 52.8 | 34.6 | 48.6 | 72.3 | 47.7 | 54.8 | 51.0 | 38.6 | 75.1 | 49.5 | 45.0 | 83.0 | 86.3 | 55.3 | ✗ |
| PCA [51] | 40.9 | 55.0 | 65.8 | 47.9 | 76.9 | 77.9 | 63.5 | 67.4 | 33.7 | 65.5 | 63.6 | 61.3 | 68.9 | 62.8 | 44.9 | 77.5 | 67.4 | 57.5 | 86.7 | 90.9 | 63.8 | ✗ |
| CSGM [55] | 46.9 | 58.0 | 63.6 | 69.9 | 87.8 | 79.8 | 71.8 | 60.3 | 44.8 | 64.3 | 79.4 | 57.5 | 64.4 | 57.6 | 52.4 | 96.1 | 62.9 | 65.8 | 94.4 | 92.0 | 68.5 | ✗ |
| Ours | 41.1 | 64.7 | 64.9 | 64.6 | 94.5 | 79.3 | 69.9 | 56.6 | 66.0 | 68.6 | 69.5 | 56.8 | 53.6 | 67.2 | 57.3 | 98.3 | 54.2 | 58.8 | 86.3 | 69.7 | 67.1 | ✓ |

Table 3: Results on the Pascal VOC Keypoints dataset [12]. Note that in terms of accuracy we achieve comparable results to the previous state-of-the-art method CSGM [55], while we are the only one that additionally achieves cycle consistency.
We randomly sample from the training data to train our model. In Table 3 it can be seen that in terms of matching accuracy our method is almost on par with the previous state of the art [55]. Moreover, if a key point is not visible in both images that are considered for obtaining the pairwise matching, the other methods discard this point and merely aim to find correspondences between key points common to both images. This procedure is not necessary for our method because the universe graph contains all possible key points in one category. Additionally, unlike any earlier methods, our method achieves cycle-consistent matchings, and also infers 3D geometry. We emphasise that accuracy alone is not a fully descriptive measure of performance, since the ground truth matchings must be cycle-consistent. Figure 4: Illustration of 3D universe points obtained from 2D projections of D3DFACs heads. Left: the coarse 3D universe points (error to ground truth: 0.3388). Right: the 3D universe points after refinement via our deformation module (error to ground truth: 0.0905). ### 5.2 3D Geometry and Deformation Evaluation As our network learns 3D universe points along with a nonlinear deformation model, we evaluate our 3D deformation model by training it on samples from the 3D head dataset D3DFACs [16, 27]. We use a similar pre-processing pipeline as in i3DMM [57] to obtain $8$ facial landmarks on each head in the dataset. We refer interested readers to [57] for more details. Please note that the landmarks are all in correspondence, as we use a template-registered dataset [27]. For our evaluation, we compute 2D projections of the 3D landmarks using a pinhole camera model and randomly sampled rotations and translations. The goal of this experiment is to show that the reconstructed 3D universe points are plausible, and that our deformation module is indeed useful for modelling nonlinear deformations. As shown in Fig.
(right) and Fig. 4, the learned 3D universe points (before applying the deformation module) can only reflect rough 3D geometry. Subsequently, the deformation module is able to refine the 3D universe points. The average error between the ground truth 3D points and the obtained 3D universe points is 0.356 before the deformation module, and 0.148 after the deformation module, confirming the merits of the deformation module. Although the reconstructed 3D geometry is not perfect, considering that we are inferring 3D geometry from multiple images that depict different instances of an object, we believe the obtained results clearly confirm the utility of our approach. ## 6 Conclusion We have introduced a novel graph neural network approach that jointly learns multi-graph matching and 3D geometry from an inhomogeneous image collection. For the first time, we achieve several favourable properties simultaneously: Our matchings are guaranteed to be cycle-consistent, which is an important property since the (unknown) ground truth matchings are cycle-consistent. Our approach does not rely on the availability of an initial 3D geometry model, so that we can train it on virtually any object category, as opposed to object-specific approaches that are for example tailored towards faces only [47]. Instead, during training we learn a (sparse) deformable 3D geometric model directly from 2D image data. Moreover, our method merely requires multiple images of _different object instances_ of the same category. This is in contrast to typical multi-view reconstruction approaches that require multiple images of the _same object_ from different views.
We believe that the joint consideration of deep graph matching and 3D geometry inference will open up interesting new research directions and that our approach may serve as inspiration for follow-up works on matching problems, 3D reconstruction, 3D deformation model learning, and many more. ## References * [1] Sameer Agarwal, Yasutaka Furukawa, Noah Snavely, Ian Simon, Brian Curless, Steven M. Seitz, and Richard Szeliski. Building rome in a day. Communications of the ACM, 54(10):105–112, Oct. 2011. * [2] Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. * [3] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (surf). Computer Vision and Image Understanding, 110(3):346–359, 2008. Similarity Matching in Computer Vision and Multimedia. * [4] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, Sept. 1975. * [5] Florian Bernard, Christian Theobalt, and Michael Moeller. Ds*: Tighter lifting-free convex relaxations for quadratic matching problems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. * [6] Florian Bernard, Johan Thunberg, Jorge Goncalves, and Christian Theobalt. Synchronisation of partial multi-matchings via non-negative factorisations. Pattern Recognition, 92, 2018. * [7] Florian Bernard, Johan Thunberg, Paul Swoboda, and Christian Theobalt. HiPPI: Higher-order projected power iterations for scalable multi-matching. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019. * [8] Tolga Birdal and Umut Simsekli. Probabilistic permutation synchronization using the riemannian structure of the birkhoff polytope.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. * [9] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, USA, 1999. * [10] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J. Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In European Conference on Computer Vision (ECCV). Springer International Publishing, 2016. * [11] Mario Botsch, Leif Kobbelt, Mark Pauly, Pierre Alliez, and Bruno Lévy. Polygon Mesh Processing. CRC press, 2010. * [12] Lubomir D. Bourdev and Jitendra Malik. Poselets: Body part detectors trained using 3d human pose annotations. Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1365–1372, 2009. * [13] Rainer Burkard, Mauro Dell’Amico, and Silvano Martello. Assignment Problems: Revised Reprint. Society for Industrial and Applied Mathematics, 2012. * [14] Minsu Cho, Karteek Alahari, and Jean Ponce. Learning graphs to match. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013. * [15] Minsu Cho, Jungmin Lee, and Kyoung Mu Lee. Reweighted random walks for graph matching. In European Conference on Computer Vision (ECCV), 2010. * [16] Darren Cosker, Eva Krumhuber, and Adrian Hilton. A facs valid 3d dynamic action unit database with applications to 3d dynamic morphable facial modeling. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2011. * [17] Timothee Cour, Praveen Srinivasan, and Jianbo Shi. Balanced Graph Matching. Advances in Neural Information Processing Systems, 2006. * [18] Nadav Dym, Haggai Maron, and Yaron Lipman. Ds++: A flexible, scalable and provably tight relaxation for matching problems. ACM Transactions on Graphics, 36(6), Nov. 2017. * [19] Amir Egozi, Yosi Keller, and Hugo Guterman. 
A probabilistic approach to spectral graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(1):18–27, 2012. * [20] Matthias Fey, Jan E. Lenssen, Christopher Morris, Jonathan Masci, and Nils M. Kriege. Deep graph matching consensus. In International Conference on Learning Representations (ICLR), 2020. * [21] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019. * [22] Qi-Xing Huang and Leonidas Guibas. Consistent shape maps via semidefinite programming. In Computer Graphics Forum, volume 32, 2013. * [23] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2012. * [24] Eugene L. Lawler. The quadratic assignment problem. Management Science, 9(4):586–599, 1963. * [25] Marius Leordeanu and Martial Hebert. A spectral technique for correspondence problems using pairwise constraints. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2005. * [26] Marius Leordeanu, Martial Hebert, and Rahul Sukthankar. An integer projected fixed point method for graph matching and map inference. In Advances in Neural Information Processing Systems (NeurIPS). Citeseer, 2009. * [27] Tianye Li, Timo Bolkart, Michael J. Black, Hao Li, and Javier Romero. Learning a model of facial shape and expression from 4D scans. ACM Transactions on Graphics, 36(6):194:1–194:17, 2017. * [28] Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, and Pushmeet Kohli. Graph matching networks for learning the similarity of graph structured objects. In International Conference on Machine Learning (ICML). PMLR, 2019. * [29] Zhi-Yong Liu and Hong Qiao. Gnccp—graduated nonconvexity and concavity procedure. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(6):1258–1267, 2013. * [30] Eliane Loiola, Nair Abreu, Paulo Boaventura-Netto, Peter Hahn, and Tania Querido.
A survey of the quadratic assignment problem. European Journal of Operational Research, 176:657–690, 2007. * [31] Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. Neural volumes: Learning dynamic renderable volumes from images. ACM Transactions on Graphics, 38(4):65:1–65:14, July 2019. * [32] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics, 34(6):248:1–248:16, Oct. 2015. * [33] David Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004. * [34] Jiayi Ma, Xingyu Jiang, Aoxiang Fan, Junjun Jiang, and Junchi Yan. Image matching from handcrafted to deep features: A survey. International Journal of Computer Vision, 129(1):23–79, 2021. * [35] Eleonora Maset, Federica Arrigoni, and Andrea Fusiello. Practical and Efficient Multi-View Matching. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017. * [36] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision (ECCV), 2020. * [37] Deepti Pachauri, Risi Kondor, and Vikas Singh. Solving the multi-way matching problem by permutation synchronization. In Advances in Neural Information Processing Systems (NeurIPS). Curran Associates, Inc., 2013. * [38] Panos Pardalos, Franz Rendl, and Henry Wolkowitz. The quadratic assignment problem: A survey and recent developments. Quadratic assignment and related problems. DIMACS: Series in Discrete Mathematics and Theoretical Computer Science, pages 1–42, 1994. * [39] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. * [40] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B. Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. Deformable neural radiance fields. arXiv preprint arXiv:2011.12948, 2020. * [41] Michal Rolínek, Paul Swoboda, Dominik Zietlow, Anselm Paulus, Vít Musil, and Georg Martius. Deep graph matching via blackbox differentiation of combinatorial solvers. In European Conference on Computer Vision (ECCV). Springer, 2020. * [42] Johannes Lutz Schönberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. * [43] Johannes Lutz Schönberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In European Conference on Computer Vision (ECCV), 2016. * [44] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems (NeurIPS), 2019. * [45] Paul Swoboda, Carsten Rother, Hassan Abu Alhaija, Dagmar Kainmüller, and Bogdan Savchynskyy. Study of lagrangean decomposition and dual ascent solvers for graph matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. * [46] Ayush Tewari, Florian Bernard, Pablo Garrido, Gaurav Bharaj, Mohamed Elgharib, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, and Christian Theobalt. Fml: Face model learning from videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. * [47] Ayush Tewari, Michael Zollhöfer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Pérez, and Christian Theobalt. MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction.
In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017. * [48] Eno Toeppe, Claudia Nieuwenhuis, and Daniel Cremers. Volume constraints for single view reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, USA, 2013. * [49] Roberto Tron, Xiaowei Zhou, Carlos Esteves, and Kostas Daniilidis. Fast multi-image matching via density-based clustering. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017. * [50] Qianqian Wang, Xiaowei Zhou, and Kostas Daniilidis. Multi-image semantic matching by mining consistent features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. * [51] Runzhong Wang, Junchi Yan, and Xiaokang Yang. Learning combinatorial embedding networks for deep graph matching. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019. * [52] Runzhong Wang, Junchi Yan, and Xiaokang Yang. Neural graph matching network: Learning lawler’s quadratic assignment problem with extension to hypergraph and multiple-graph matching. arXiv preprint arXiv:1911.11308, 2019. * [53] Rui Wang, Nan Yang, Joerg Stueckler, and Daniel Cremers. Directshape: Photometric alignment of shape priors for visual vehicle pose and shape estimation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2020. * [54] Tao Wang, Haibin Ling, Congyan Lang, and Songhe Feng. Graph matching with adaptive and branching path following. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 40(12):2853–2867, 2018. * [55] Tao Wang, He Liu, Yidong Li, Yi Jin, Xiaohui Hou, and Haibin Ling. Learning combinatorial solver for graph matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. * [56] Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi. 
Unsupervised learning of probably symmetric deformable 3d objects from images in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. * [57] T Yenamandra, A Tewari, F Bernard, HP Seidel, M Elgharib, D Cremers, and C Theobalt. i3dmm: Deep implicit 3d morphable model of human heads. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2021. * [58] Andrei Zanfir and Cristian Sminchisescu. Deep learning of graph matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. * [59] Quanshi Zhang, Xuan Song, Xiaowei Shao, Huijing Zhao, and Ryosuke Shibasaki. Learning graph matching: Oriented to category modeling from cluttered scenes. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013. * [60] Zhen Zhang and Wee Sun Lee. Deep graphical feature learning for the feature matching problem. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019. * [61] Feng Zhou and Fernando De la Torre. Factorized Graph Matching. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 38(9):1774–1789, 2016. * [62] Xiaowei Zhou, Menglong Zhu, and Kostas Daniilidis. Multi-image matching via fast alternating minimization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015. ## Appendix A Ablation Study To confirm the importance of the individual components of our approach, we conducted an ablation study. To this end, we evaluate the accuracy on the Pascal VOC dataset in cases where we omit individual terms of the loss function, omit the warm start for learning the universe points $\mathcal{U}$, or omit the deformation module, see Table 4. When we omit the one-to-one matching regulariser by setting $\omega_{\text{reg}}$ to 0, the matching accuracy drops substantially.
When we do not conduct a warm start for finding initial universe points, the matching accuracy deteriorates. Similarly, the matching accuracy is lower without the use of our deformation module. Further, the offset regularisation and the deformed reconstruction loss refine the universe points for each object, which leads to a better matching accuracy, as shown in the last two experiments. Overall, the accuracy is highest when using all components together. ## Appendix B Cycle Consistency We further provide quantitative evaluations of the cycle consistency on the Pascal VOC dataset, as shown in Table 5. We quantify in terms of the cycle consistency score, which is computed as follows: 1. Given three graphs $\\{\mathcal{G}_{j}\\}$, $\\{\mathcal{G}_{k}\\}$ and $\\{\mathcal{G}_{l}\\}$, we use the trained network to predict $X_{jk}$, $X_{jl}$ and $X_{kl}$. 2. We compute the composed pairwise matching between $\\{\mathcal{G}_{k}\\}$ and $\\{\mathcal{G}_{l}\\}$ as $X^{\prime}_{kl}=X_{jk}^{T}X_{jl}$. 3. We denote the number of points for which $X^{\prime}_{kl}$ equals $X_{kl}$ as $m_{\text{cycle}}$, and the number of points in $X_{kl}$ as $m_{kl}$. The cycle consistency score is then computed as $\text{cycle consistency score}=100\times\frac{m_{\text{cycle}}}{m_{kl}}\%.$ (14) Note that in this case, we only consider the common points that are observed in $\\{\mathcal{G}_{j}\\}$, $\\{\mathcal{G}_{k}\\}$ and $\\{\mathcal{G}_{l}\\}$.

| Loss | Avg. acc. |
|---|---|
| $\omega_{\text{reg}}=0$ | 58.11 |
| w/o warm start | 58.49 |
| w/o deformation module | 60.33 |
| $\omega_{\text{o}}=0$ | 64.19 |
| $\omega_{\text{d}}=0$ | 64.81 |
| Ours | 67.1 |

Table 4: Matching accuracy on the Pascal VOC dataset with variants of the regularisation terms and training strategies. Figure 5: The average matching accuracy and cycle consistency score of three learning-based methods (PCA [51], CSGM [55], and ours) on the Pascal VOC dataset.
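The three-step score above can be written compactly. One detail left implicit in the text is what it means for $X^{\prime}_{kl}$ to "equal" $X_{kl}$ per point; the sketch below assumes the natural reading that $m_{\text{cycle}}$ counts the matched points of $X_{kl}$ that the composed matching reproduces:

```python
import numpy as np

def cycle_consistency_score(X_jk, X_jl, X_kl):
    """Eq. (14): compose the matchings through graph j and count how many
    of the direct matches in X_kl are reproduced by the composition."""
    X_comp = X_jk.T @ X_jl            # composed matching X'_kl from k to l
    matched = X_kl == 1               # positions of direct matches
    m_cycle = int(np.sum(X_comp[matched] == 1))  # direct matches reproduced
    m_kl = int(np.sum(matched))       # number of points matched in X_kl
    return 100.0 * m_cycle / m_kl
```

Because our method predicts object-to-universe matchings, any pairwise matching derived from them composes exactly, which is why the score is 100 for all categories in Table 5.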
| | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PCA [51] | 40.92 | 15.48 | 44.91 | 45.30 | 14.55 | 41.83 | 55.97 | 42.97 | 35.99 | 44.30 | 41.59 | 49.10 | 43.68 | 33.33 | 35.04 | 24.67 | 53.93 | 45.87 | 44.00 | 29.39 | 39.19 |
| CSGM [55] | 49.08 | 51.50 | 60.13 | 67.84 | 81.13 | 80.36 | 67.40 | 57.10 | 51.26 | 61.42 | 56.16 | 55.28 | 61.61 | 54.17 | 54.57 | 96.84 | 60.71 | 58.30 | 96.6 | 93.60 | 65.75 |
| Ours | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |

Table 5: Cycle consistency scores (in percent) of three learning-based methods on the Pascal VOC Keypoints dataset. Our method is the only one that guarantees cycle consistency for all categories. In Fig. 5, we show the average matching accuracy and cycle consistency score of our method and compare it with PCA [51] and CSGM [55]. It is clear that our method can achieve comparable accuracy and the best cycle consistency at the same time. ## Appendix C More Deformation Results We provide more qualitative results for our deformation module, see Fig. 6. As shown in the figure, the deformation module is able to refine the 3D universe points. Although the 3D reconstructions are not perfect, we can observe that they represent the overall 3D structure well, and are thus valuable for matching respective key points. [Figure 6 grid: ground truth, universe points, and Cases 1–8, each shown from the right, front, and left views.] Figure 6: Qualitative results of the deformation module.
The top-left part shows the ground truth points on a reference shape, and the top-right part shows the universe points before the deformation module is applied. The remaining parts show individual cases, where it can be seen that the deformation module adequately deforms the universe points (top right), and that it is able to approximate the overall 3D geometry of the face well.
# T-STAR: Truthful Style Transfer using AMR Graph as Intermediate Representation Anubhav Jangra Preksha Nema* Aravindan Raghuveer Google Research, India <EMAIL_ADDRESS> (* denotes equal contribution) ###### Abstract Unavailability of parallel corpora for training text style transfer (TST) models is a very challenging yet common scenario. Also, TST models implicitly need to preserve the content while transforming a source sentence into the target style. To tackle these problems, an intermediate representation is often constructed that is devoid of style while still preserving the meaning of the source sentence. In this work, we study the usefulness of the Abstract Meaning Representation (AMR) graph as the intermediate style-agnostic representation. We posit that semantic notations like AMR are a natural choice for an intermediate representation. Hence, we propose T-STAR: a model comprising two components, a text-to-AMR encoder and an AMR-to-text decoder. We propose several modeling improvements to enhance the style agnosticity of the generated AMR. To the best of our knowledge, T-STAR is the first work that uses AMR as an intermediate representation for TST. With thorough experimental evaluation we show T-STAR significantly outperforms state-of-the-art techniques by achieving on average $15.2$% higher content preservation with negligible loss ($\sim$3%) in style accuracy. Through detailed human evaluation with $90,000$ ratings, we also show that T-STAR has up to $50$% fewer hallucinations compared to state-of-the-art TST models. ## 1 Introduction A well accepted definition of style refers to the manner (via linguistic elements like word choices, syntactic structures, metaphors) in which the semantics of a sentence are expressed McDonald and Pustejovsky (1985); Jin et al. (2020). Text Style Transfer (TST) is the task of rephrasing an input sentence to contain specific stylistic properties without altering the meaning of the sentence Prabhumoye et al. (2018).
We refer the reader to Jin et al. (2020) for a detailed survey of approaches to TST problem formulation, metrics and models. In the practical scenario that we consider in this paper, where a large corpus of parallel data is not available Niu and Bansal (2018); Ma et al. (2020); Wu et al. (2020), two families of approaches have been proposed in the literature Jin et al. (2020). 1. Disentanglement: Content and style are disentangled in a latent space and only the style information is varied to transform the sentence. 2. Prototype editing: Style-bearing words in the source sentence are replaced with those corresponding to the target style. The sentence may be further re-arranged for fluency and naturalness. Both of the above approaches have drawbacks in the way the style-agnostic intermediate representation is constructed, described as follows. First, in the disentangling approaches, it is not easy to verify the efficacy of the separation between style and content. Recent approaches Subramanian et al. (2018); Samanta et al. (2021) even state that, as content and style are so subtly entangled in text, it is difficult to disentangle the two in a latent space. Consequently, this affects the model’s interpretability in that it is hard to attribute an effect we see in the output to the latent intermediate vector or the style vector. Second, with prototype editing approaches for linguistic styles (such as author, formality), where the content and style are tightly coupled, it is not feasible to segregate style- and content-carrying words (word examples: cometh, thou). Furthermore, style-marker detection is a non-trivial NLP task and needs to be addressed for every new style that is added to the system, causing scalability concerns Jin et al. (2020). Figure 1: Different syntactic variations lead to the same AMR, as they all have the same meaning.
In this paper, we propose T-STAR (Truthful Style Transfer using AMR Graph as Intermediate Representation), which uses a symbolic semantic graph notation called Abstract Meaning Representation (AMR) as the style-agnostic intermediate stage. AMR Banarescu et al. (2013) is designed to capture the semantics of a given sentence in its entirety while abstracting away syntactic variations, inflections and function words. In other words, two sentences with the same meaning but written in very different styles will have a very similar AMR, if not exactly the same one (see Figure 1). This addresses the shortcomings of the disentanglement and prototype editing approaches. First, AMR being a representation with well-defined semantics, we can inspect, interpret and measure the quality of the intermediate representation and the provenance of knowledge transfer between the source and target sentence. Second, AMR being a well-known standard, it has high-quality, robust reference implementations, especially for head languages (e.g., English). Our contributions are as follows: 1. We propose a novel TST approach with AMRs, an interpretable, symbolic intermediate representation, to achieve better content preservation. To this end, we enhance AMR parsing techniques to better suit the TST task. To the best of our knowledge, we are the first work to use AMR representations for the style transfer task (Sections 3, 4). 2. Through novel experimentation, we show that an AMR, as a style-agnostic intermediate representation, preserves more content and carries less style information of the given source sentence compared to competitive baselines. (Sections 5, 6) 3. On multiple datasets we show T-STAR is able to beat competitive baselines by producing sentences with significantly higher content preservation at similar style transfer scores. (Section 7) 4.
With thorough human evaluations spanning 90,000 ratings we show T-STAR has $\sim 70$% better content preservation compared to the state-of-the-art baseline, with $50$% fewer hallucinations. (Section 8) Figure 2: AMR comparison for the MRPC (a), QQP (b), and STS (c) datasets. ## 2 Related Work ### 2.1 Unsupervised Text Style Transfer TST systems broadly use two families of approaches: disentanglement Shen et al. (2017) and prototype editing Jin et al. (2020). Prior works Hu et al. (2017); John et al. (2019); Fu et al. (2017); Singh and Palod (2018); Logeswaran et al. (2018) disentangle the content and style information in latent space using a style-based classifier or adversarial learning. Prototype-editing based approaches Li et al. (2018); Madaan et al. (2020); Sudhakar et al. (2019) are used to gain better controllability and interpretability. Recently, some works propose to jointly optimize for content and style information, to overcome the limitations of explicitly disentangling the style and content information. Yamshchikov et al. (2019b) illustrates that architectures with higher quality of information decomposition perform better on style transfer tasks. Subramanian et al. (2018) argues that it is often easy to fool style discriminators without explicitly disentangling content and style information, which may lead to low content preservation Xu et al. (2018). Instead, Subramanian et al. (2018); Logeswaran et al. (2018) use back-translation to optimize for content preservation. Luo et al. (2019); Liu et al. (2020) use a reinforcement learning framework with explicit rewards designed for content preservation. Wang et al. (2019) pushes the entangled latent representation in the targeted-style direction using style discriminators. Samanta et al. (2019) uses normalizing flows to infuse the content and style information back before passing it to the decoder. Cheng et al.
(2020) proposes a context-aware text style transfer framework using two separate encoders for the input sentence and the context. Similar to our work, Krishna et al. (2020) also generates an interpretable intermediate representation. The authors first paraphrase the given source to convert it to a destylized version before passing it to the targeted style-specific decoder. Complementary to our work, Shi et al. (2021) uses syntactic graphs (dependency trees) as an intermediate representation for attribute transfer to retain the linguistic structure. We, on the other hand, focus on retaining the semantics using AMR graphs as the intermediate representation while modifying the linguistic structure (authorship style).

### 2.2 Text $\Longleftrightarrow$ AMR

In order to improve parsing performance for AMRs, neural models are receiving increasing attention. Neural AMR parsers can be divided into the following categories: i) sequence-to-sequence AMR parsers Xu et al. (2020a), and ii) sequence-to-graph AMR parsers Zhang et al. (2019), where the graph is built incrementally, one node at a time. A more detailed survey of related work can be found in Appendix A.

## 3 AMR as an Intermediate Representation

Abstract Meaning Representation (AMR) is a semantic formalism that abstracts away syntactic information and preserves only the semantic meaning, as a rooted, directed, acyclic graph. In Figure 1, we present certain syntactic variations (changing the voice and tense) of a sentence without altering its meaning. All variations result in the same AMR graph. The nodes in AMR graphs (“produce-01”, “they”, “before”, etc.) are concepts (entities/predicates) that are canonicalized and mapped to semantic role annotations present in Propbank framesets (https://propbank.github.io/). The edges (“ARG0”, “time”, “duration”, etc.) are relations between these concepts.
In other words, AMRs aim at decoupling “what to say” from “how to say it” in an interpretable way. We posit that this could be beneficial for text style transfer, where the goal is to alter the “how to say” aspect while preserving “what to say”. Recently, the semantic meaning representation property of AMRs has been shown to be useful in other generation tasks. In abstractive summarization, Liao et al. (2018) uses AMR as an intermediate representation to first obtain a summary AMR graph from a document and then generate a summary from it. Hardy and Vlachos (2018) use AMRs to compensate for the lack of explicit semantic modeling in sequence-to-sequence models. In machine translation, Song et al. (2019) adopted AMRs to enforce meaning preservation while translating from English to German. For paraphrase generation, Cai et al. (2021) found that using AMRs as an intermediate representation reduces semantic drift. Moreover, incorporating symbolic representations as an intermediate representation provides a way to effectively understand the reasons behind a model’s shortcomings. We utilize this advantage to analyse the weaknesses of T-STAR in Section 8.2. In order to demonstrate the semantic meaning preservation property of AMRs, we design an experiment using three publicly available paraphrase benchmarks, i.e., MRPC Dolan and Brockett (2005), QQP (https://www.quora.com/profile/Ricky-Riche-2/First-Quora-Dataset-Release-Question-Pairs), and STS. MRPC and QQP are sentence-pair datasets with each pair labeled yes if the sentences are paraphrases of each other and no otherwise. The STS dataset assigns a score from 0 (not similar) to 5 (exactly similar) to a sentence pair. We hypothesize that if AMRs are indeed semantic meaning preserving, two sentences with similar meaning should have highly similar AMRs. To measure the similarity between two AMRs, we use the Smatch score Cai and Knight (2013), which calculates the number of overlapping triplets between two AMRs.
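To make the triple-overlap idea concrete: Smatch represents each AMR as a set of triples (instance, relation, attribute) and scores the overlap between two triple sets as an F-score. The sketch below is a simplification under a stated assumption: it takes the node variables as already aligned, whereas the real Smatch scorer additionally searches over variable mappings (via hill climbing) to maximize the match. The triples themselves are hypothetical illustrations.

```python
def triple_f1(pred_triples, gold_triples):
    """Precision/recall/F1 over AMR triples, assuming node variables
    are already aligned (real Smatch searches over variable mappings)."""
    pred, gold = set(pred_triples), set(gold_triples)
    if not pred or not gold:
        return 0.0
    matched = len(pred & gold)
    precision = matched / len(pred)
    recall = matched / len(gold)
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical triples for "They produced it before" vs. a near-paraphrase
# whose parse dropped the "before" concept:
gold = {("instance", "p", "produce-01"), ("instance", "t", "they"),
        ("ARG0", "p", "t"), ("time", "p", "b"), ("instance", "b", "before")}
pred = {("instance", "p", "produce-01"), ("instance", "t", "they"),
        ("ARG0", "p", "t"), ("time", "p", "b")}
score = triple_f1(pred, gold)  # precision 4/4, recall 4/5 -> F1 = 8/9
```

Identical triple sets score 1.0, which is why paraphrase pairs are expected to sit near the top of the Smatch distribution.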
We use an off-the-shelf AMR parser (https://amrlib.readthedocs.io/en/latest/) to generate an AMR given a sentence. We plot the distribution of the Smatch scores for the MRPC, QQP and STS datasets in Figure 2. For MRPC, we infer that the Smatch scores for paraphrases are significantly higher than the Smatch scores for non-paraphrases. Similarly, for QQP, the quartile distribution of Smatch scores for paraphrases is higher in comparison to non-paraphrases. For the STS dataset, we observe a gradual increase in the quartile distribution of Smatch scores as we move towards more similar sentences. These experiments corroborate the claim that AMRs can preserve meaning under lexical variations like paraphrasing, tense and voice changes. The recent research discussed earlier has successfully used this property to show task improvements. Building on the above qualitative, quantitative and prior-research evidence, we further explore the applicability of AMR to the TST task.

| Sentence | Vanilla T-STAR Encoder | T-STAR Encoder |
|---|---|---|
| To make us feel existence, and to shew | (a/and :op1(m/make-02 :ARG1(f/feel-01 :ARG0(w/we) :ARG1(e/exist-01 :ARG1 w))) :op2 (s / shew-01 :ARG0 w)) | (h/have-purpose-91 :ARG2(a/and :op1(m/make-02 :ARG1(f/feel-01 :ARG0(w/we) :ARG1(e/exist-01 :ARG1 w))) :op2(s/show-01 :ARG0 w :ARG1 e))) |
| But trust not this; too easy Youth, beware! | (m/multi-sentence :snt1(c/contrast-01 :ARG2(t/trust-01 :polarity - :mode imperative :ARG0(y/you) :ARG1(t2/this))) :snt2(b/beware-01 :mode imperative :ARG0(y2/youth :ARG1-of(e/easy-05 :ARG2-of(h/have-degree-91 :ARG1 y2 :ARG3 (t3/too)))))) | (c/contrast-01 :ARG2 (a/and :op1 (t/trust-02 :polarity - :mode imperative :ARG0 (y/you :mod (y2/youth)) :ARG1 (t2/this)) :op2 (h/have-degree-91 :ARG1 t2 :ARG2 (e/easy-05 :ARG1 t2) :ARG3 (t3/too)))) |

Table 1: Comparison between AMRs from the vanilla T-STAR Encoder and the T-STAR Encoder. The T-STAR Encoder generates better AMRs for stylized sentences.
| Dimension | Metric | Description | Metric used in related works |
|---|---|---|---|
| AMR similarity | SMATCH $\uparrow$ | SMATCH Cai and Knight (2013) measures the degree of overlap between the AMR graphs of _S_ and _T_. The score is computed from triplet (edge) overlap by finding a variable-node mapping that maximizes the count of matching triplets. | Used extensively to measure similarity between two AMRs across the AMR-parsing literature. |
| Lexical Diversity | Self-BLEU $\downarrow$ | BLEU-4 Papineni et al. (2002) between _S_ and _T_. | Hu et al. (2017); Li et al. (2018); Logeswaran et al. (2018); Xu et al. (2018) |
| Content Preservation (C.P.) | WMD $\downarrow$ | Word Mover Distance Kusner et al. (2015) measures dissimilarity between _S_ and _T_ as the minimum distance between their embedded words. | Yamshchikov et al. (2020) states that WMD correlates best with human evaluations on semantic similarity. |
| Content Preservation (C.P.) | SIM $\uparrow$ | SIM Wieting et al. (2019a) uses an embedding model proposed in Wieting et al. (2019b) to measure semantic similarity. | Krishna et al. (2020); Luo et al. (2019) |
| Style Transfer (S.T.) | Style Accuracy $\uparrow$ | Score of 4-way and 2-way fine-tuned RoBERTa-Large Liu et al. (2019) models for styles in the CDS and Author-Imitation datasets respectively. | Krishna et al. (2020)*; Hu et al. (2017); Fu et al. (2017); Madaan et al. (2020)* |
| C.P. & S.T. | Weighted Style Accuracy $\uparrow$ | Style Accuracy weighted by the corresponding semantic similarity scores, averaged across all test instances. | Krishna et al. (2020)*; Li et al. (2018)* |

Table 2: Evaluation metrics; source and target sentences are denoted _S_ and _T_ respectively. We use all dimensions except AMR similarity to measure TST performance. Note that Weighted Style Accuracy is the only metric that encompasses the two crucial dimensions, C.P. and S.T., in one number. AMR similarity is used to select the best-performing T-STAR Encoder. * marks works that use slight variations of the mentioned metrics.
Note that across all metrics we report the average score over all test instances.

Figure 3: An overview of the T-STAR model architecture. It consists of two modules: the T-STAR Encoder, which transforms a given sentence $s_{i}$ in style $i$ to its AMR representation $A_{i}$; to convert the sentence to style $j$, $A_{i}$ is passed to the T-STAR Decoder specific to style $j$.

## 4 Proposed Solution

Our proposed model T-STAR consists of two modules (see Figure 3). First, the T-STAR Encoder generates an AMR given a source sentence in style $i$. Second, the T-STAR Decoder generates a sentence in style $j$ with the same meaning as preserved in the generated intermediate AMR. We take the T5-Base Raffel et al. (2020) pre-trained model as the basic seq2seq architecture for both modules. In order to feed an AMR to T5 as a sequence, we borrow the DFS-traversal linearization from Bevilacqua et al. (2021), who thoroughly study the effect of various traversals on AMR parsing.

Figure 4: Generic sentences from the AMR 3.0 corpus are stylized using a TST model. The corresponding AMRs and stylized sentences are mapped together as a silver training dataset to fine-tune the T-STAR Encoder.

### 4.1 T-STAR Encoder

We train our simplest encoder, called the vanilla T-STAR Encoder, by fine-tuning T5-Base on the open-source AMR 3.0 dataset Knight et al. (2021). The AMR 3.0 dataset consists of roughly 59,000 generic English sentences (denoted $s_{i}$ for the $i^{th}$ sentence) and their corresponding AMRs (denoted $A_{i}$). In a qualitative analysis, we observe that the vanilla T-STAR Encoder underperforms in two significant ways, as illustrated in Table 1. First, style-bearing words (such as “shew”) become concepts in the AMRs (sentence 1 in Table 1) instead of being canonicalized to their respective Propbank role annotations. Second, the meaning of stylized sentences gets incorrectly encoded in the AMRs, as shown in the second example in Table 1.
To overcome this, we propose a style-agnostic fine-tuning strategy as follows.

Style-agnostic Fine-Tuning: We hypothesize that the vanilla Text-to-AMR encoder is unable to effectively transform stylized sentences to AMRs because it has only been trained on generic English sentences. Therefore, we propose a data-augmentation strategy (see Figure 4), where we use an off-the-shelf style transfer model, e.g., STRAP Krishna et al. (2020), to stylize a generic English sentence $s_{i}$ into style $p$, yielding $\hat{s}^{p}_{i}$. Converting $s_{i}$ to $\hat{s}^{p}_{i}$ alters the style of the original sentence while keeping the meaning intact. For a high-quality synthetic dataset, we filter out samples with low semantic similarity between $s_{i}$ and $\hat{s}^{p}_{i}$. We provide a detailed empirical analysis of this filtering strategy in Appendix C. Since the meaning is preserved, we can map $\hat{s}^{p}_{i}$ to the same AMR $A_{i}$. We then fine-tune our T-STAR Encoder on $\bar{\mathbf{S}}=\mathbf{S}\cup\mathbf{\hat{S}}$, where $\mathbf{\hat{S}}=\{\hat{s}^{p}_{i},A_{i}\}_{N}\ \forall p\in P$ and $P$ is the set of styles in the dataset.
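The augmentation-and-filtering step above can be sketched as follows. The similarity function and the threshold here are illustrative stand-ins (a toy token-overlap score instead of an embedding-based metric such as SIM; the paper's actual filtering criteria are analyzed in its Appendix C):

```python
def jaccard_sim(a: str, b: str) -> float:
    """Toy semantic-similarity proxy; a real setup would use an
    embedding-based score such as SIM."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def filter_augmented(examples, sim_fn=jaccard_sim, threshold=0.5):
    """Keep stylized rewrites that stay close in meaning to the
    original sentence s_i, and map each survivor to the same AMR A_i."""
    kept = []
    for s_i, s_styled, amr_i in examples:
        if sim_fn(s_i, s_styled) >= threshold:
            kept.append((s_styled, amr_i))  # stylized text -> original AMR
    return kept

# Hypothetical (original, stylized rewrite, AMR) triples:
examples = [
    ("they produced it before", "they did produce it before", "(p / produce-01 ...)"),
    ("they produced it before", "lo the heavens wept", "(p / produce-01 ...)"),
]
pairs = filter_augmented(examples)  # only the faithful rewrite survives
```

The surviving (stylized sentence, AMR) pairs then extend the fine-tuning set for the T-STAR Encoder.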
```
Algorithm 1: Iterative T-STAR
Input: parallel corpus S = {s_i, A_i}_N, mono-style corpora R^p for all p in P

Ŝ := {ŝ^p_i, A_i}_N for all p in P, using an existing TST model (e.g., STRAP)
X := S ∪ Ŝ
while not converged:
    fine-tune the T-STAR Encoder f_amr(.) on X
    Ŝ := {}
    for p in P:
        use f_amr(.) to create R̂^p := {r^p_i, Ã^p_i}_M
        fine-tune the T-STAR Decoder f_p(.) for style p on R̂^p
        use the T-STAR Decoder to get Ŝ^p := {ŝ^p_i, A_i}_N
        Ŝ := Ŝ ∪ Ŝ^p
    X := S ∪ Ŝ
```

### 4.2 T-STAR Decoders

Due to the unavailability of parallel style corpora, we are provided $P$ mono-style corpora $\mathbf{R}^{p}=\{r^{p}_{i}\}_{M_{p}}$ written in style $p$, where $P$ refers to the total number of styles and $r^{p}_{i}$ refers to the $i^{th}$ sentence of the style-$p$ dataset, which has $M_{p}$ sentences. Training a style-specific decoder ($f_{p}(.)$) to generate a sentence in style $p$ from an AMR consists of two steps. First, we use our fine-tuned T-STAR Encoder (Section 4.1) to generate an AMR $\hat{A}^{p}_{i}$ for every sentence $r^{p}_{i}$ of every style corpus. Second, we fine-tune a T5-Base model to recover the original sentence $r^{p}_{i}$ given the AMR $\hat{A}^{p}_{i}$, obtaining style-specific decoders. In other words, we fine-tune on the $M_{p}$ pairs ($r^{p}_{i}$, $\hat{A}^{p}_{i}$) constructed in the first step. Note that we experimented with a data augmentation technique for the decoders as well; however, it did not lead to an improvement in the style transfer performance (see Appendix E).
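The two-step construction of decoder training pairs can be sketched with stubs standing in for the fine-tuned T5 modules (all function names here are illustrative, not the paper's code):

```python
def build_decoder_data(encode_amr, style_corpus):
    """Step 1: parse each mono-style sentence r_i into an AMR with the
    (stubbed) T-STAR Encoder. Step 2 would fine-tune a seq2seq decoder
    on the resulting (AMR input, sentence target) pairs."""
    return [(encode_amr(r), r) for r in style_corpus]

def stub_encoder(sentence: str) -> str:
    """Placeholder for the fine-tuned T5 T-STAR Encoder; returns a
    trivially wrapped pseudo-AMR rather than a real parse."""
    return f'(s / say-01 :ARG1 "{sentence}")'

corpus_p = ["Thus with a kiss I die.", "Dead art thou, dead!"]
train_pairs = build_decoder_data(stub_encoder, corpus_p)
# Each pair: (linearized AMR as model input, original sentence as target)
```

Because the decoder's target is always the human-written sentence in style $p$, the decoder never needs parallel style data, only the mono-style corpus and the encoder's parses.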
Once style-specific decoders have been trained for every style in $P$, we can use the T-STAR Encoder in tandem with the T-STAR Decoders to convert between arbitrary style combinations, as in Krishna et al. (2020).

### 4.3 Iterative T-STAR

The performance of our modules, the T-STAR Encoder and the T-STAR Decoders, depends on the quality of the synthetic datasets ($\mathbf{\hat{S}}$, $\mathbf{\hat{R}^{p}}$) generated by their complementary modules. We adopt the iterative back-translation technique used in unsupervised machine translation Hoang et al. (2018). Training proceeds in rounds of fitting an encoder and decoders from mono-style data. In every round, we aim to improve the quality of the encoder and decoder modules by generating increasingly better synthetic data than in the previous round. We describe this process in Algorithm 1.

## 5 Experimental Setup

In this section we briefly describe the T-STAR variations analyzed in the subsequent sections, the baselines, and the implementation details. The models are validated against the metrics summarized in Table 2.

### 5.1 T-STAR variations

Vanilla T-STAR: The T-STAR Encoder used in this version is only trained on the AMR 3.0 dataset and not fine-tuned for stylized sentences.

T-STAR: We train the encoder and decoders using Algorithm 1 for only one iteration.

Iterative T-STAR: We follow two iterations of Algorithm 1 to obtain a better-quality synthetic dataset.

### 5.2 Baselines

UNMT: Subramanian et al. (2018) models style transfer as an unsupervised machine translation task. DLSM He et al. (2020) is a deep generative model that unifies back-translation and an adversarial loss. RLPrompt Deng et al. (2022a) uses a discrete prompt optimization approach based on reinforcement learning; we adopt it in a zero-shot setting with Distil-BERT Sanh et al. (2019) and run the optimization for 1k steps. STRAP Krishna et al.
(2020) first normalizes the style information by paraphrasing the source text into a generic English sentence, which is then passed through a style-specific GPT-2 based model to generate the styled output.

### 5.3 Datasets

We evaluate the performance of T-STAR on two English datasets that capture linguistic styles Jin et al. (2020). First, the Shakespeare Author Imitation dataset Xu et al. (2012) consists of $18$K pairs of sentences written in two styles; original Shakespeare plays are written in Early Modern English, a significantly different style. Second, the Corpus of Diverse Styles (CDS) Krishna et al. (2020) consists of non-parallel sentences written in 11 different styles. We present our results on a subset of four styles: Bible, Poetry, Shakespeare and Switchboard, which consist of $34.8$K, $29.8$K, $27.5$K and $148.8$K instances respectively (CDS uses the MIT License).

### 5.4 Implementation Details

We use the pre-trained T5-Base model architecture for both the encoder and the decoder. Following the iterative back-translation literature Kumari et al. (2021), we run Iterative T-STAR for two iterations. The AMRs are preprocessed and postprocessed as described in Appendix B. The best modules are selected based on performance on the validation set. Finer details about the model architecture and hyperparameters can be found in Appendix B.

| Style | Model | WMD $\downarrow$ | SIM $\uparrow$ | S.R. $\uparrow$ |
|---|---|---|---|---|
| Bible | STRAP | 0.200 | 0.715 | 0.979 |
| Bible | T-STAR | 0.170 | 0.792 | 0.979 |
| Poetry | STRAP | 0.290 | 0.664 | 0.969 |
| Poetry | T-STAR | 0.215 | 0.760 | 0.965 |
| Shakespeare | STRAP | 0.328 | 0.610 | 0.971 |
| Shakespeare | T-STAR | 0.222 | 0.754 | 0.972 |
| Switchboard | STRAP | 0.222 | 0.751 | 0.999 |
| Switchboard | T-STAR | 0.163 | 0.848 | 1.0 |

Table 3: Reconstructing the original sentences via the intermediate semantic representation outperforms the baseline by a significant margin on content preservation. T-STAR is on par on style retention (S.R.)
with the baseline model.

## 6 Robustness of AMRs for Text Style Transfer

An ideal style-agnostic intermediate representation for TST should (a) encode complete semantic information and (b) carry minimal style information. We quantitatively measure AMR’s efficacy along these two dimensions. We compare T-STAR with STRAP Krishna et al. (2020), as it also uses a human-readable intermediate representation.

### 6.1 Semantic Containment of AMR

If the semantics of the input sentence are completely preserved in the intermediate representation, we should be able to reconstruct the input sentence from it. To evaluate the robustness of AMRs across all styles for content preservation, we first generate the intermediate AMR for a sentence in style $p$ using our encoder and then reconstruct the same sentence using our decoder $f_{p}(.)$ for style $p$. We study how close the generated sentence (from STRAP or T-STAR) is to the original sentence. We can infer from Table 3 that AMRs as an intermediate representation perform significantly better on content preservation than STRAP across all styles. Specifically, they give an average of $0.10$ and $0.06$ absolute improvement on SIM and WMD scores across the four styles respectively. We also get comparable performance on retaining the original style. We present an ablation study on the AMR parser in Appendix C, and propose a new unsupervised Text-AMR evaluation metric to measure the content preservation of AMR parsing, along with its results on the CDS dataset, in Appendix D.

| Style | Original | Paraphrase | AMR (Vanilla T-STAR) | AMR (T-STAR) |
|---|---|---|---|---|
| Bible | 97.72 | 79.97 | 74.64 | 71.90 |
| Switchboard | 99.47 | 89.79 | 87.74 | 64.66 |
| Poetry | 83.17 | 81.04 | 49.05 | 61.61 |
| Shakespeare | 70.07 | 78.07 | 88.8 | 80.73 |
| Average | 87.61 | 82.22 | 75.06 | 69.73 |

Table 4: Accuracy of style classifiers trained on original, paraphrase and AMR inputs. AMR yields the lowest accuracy, indicating it encodes the least style information.
### 6.2 Style Agnosticity of AMR

A style classifier $C$ assigns a style class label to an input sequence $S$. If $S$ does not encode style information, the classifier should perform poorly. We use this observation to evaluate the style agnosticity of AMRs as follows. We train 4-way style classifiers on four input types: original sentences, paraphrased sentences (as used in STRAP), and AMRs generated by two models, the vanilla T-STAR Encoder and the T-STAR Encoder. For all versions of the style classifier, we mask content-bearing words like entities, numbers, common nouns and AMR tags in the input sequences. The accuracy of the four classifiers is shown in Table 4. First, even the vanilla T-STAR Encoder yields lower classifier accuracy for three out of four styles in comparison to original and paraphrased sentences. Second, with the T-STAR Encoder, we observe a further reduction in classifier performance, obtaining average absolute drops of 15.19% and 7.1% compared to paraphrases and the vanilla T-STAR Encoder respectively. We illustrate some examples in Table 1, where the T-STAR Encoder generates better AMRs than the vanilla T-STAR Encoder. In the first example, the T-STAR Encoder, unlike the vanilla T-STAR Encoder, maps the style-specific word “shew” to the valid concept “show” and also associates “existence” with it. In the second example, the T-STAR Encoder does not split the AMR into two sentences while parsing; additionally, it makes the association between “you” and “youth”. Through the above quantitative and qualitative analysis, we demonstrate that the T-STAR Encoder generates AMRs that robustly preserve the meaning of the source sentence while predominantly losing style information.
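One plausible implementation of the masking step above can be sketched as follows. The regex heuristics here are illustrative assumptions; a real setup would likely use a POS tagger or NER system to find entities and common nouns:

```python
import re

def mask_content(text: str, is_amr: bool = False) -> str:
    """Replace content-bearing tokens with a placeholder so the style
    classifier can only rely on stylistic cues. Heuristic sketch:
    masks AMR concept labels (e.g. "produce-01"), bare numbers, and
    capitalized words standing in for entities."""
    if is_amr:
        # Mask AMR concepts such as "produce-01".
        text = re.sub(r"\b\w+-\d+\b", "<mask>", text)
    text = re.sub(r"\b\d+\b", "<mask>", text)          # numbers
    text = re.sub(r"\b[A-Z][a-z]+\b", "<mask>", text)  # capitalized words
    return text

masked = mask_content("Romeo sent 3 letters")
# -> "<mask> sent <mask> letters"
```

The masked sequences (rather than the raw inputs) are what each classifier variant would be trained on, so any remaining accuracy reflects residual style signal.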
## 7 Performance on Style Transfer Tasks

In this section, we compare T-STAR and Iterative T-STAR to the baselines across two datasets: the Shakespeare Author Imitation dataset and the CDS dataset.

**Original to Modern Style**

| Model | SIM $\uparrow$ | WMD $\downarrow$ | Self-BLEU $\downarrow$ | S.T. | WSAcc |
|---|---|---|---|---|---|
| UNMT | 0.461 | 0.318 | 0.118 | 0.586 | 0.256 |
| DLSM | 0.447 | 0.369 | 0.079 | 0.192 | 0.115 |
| RLPrompt | 0.508 | 0.387 | 0.164 | 0.354 | 0.292 |
| STRAP | 0.647 | 0.337 | 0.118 | 0.886 | 0.552 |
| Vanilla T-STAR | 0.848 | 0.182 | 0.269 | 0.601 | 0.497 |
| T-STAR | 0.754 | 0.257 | 0.175 | 0.754 | 0.554 |
| Iterative T-STAR | 0.799 | 0.227 | 0.209 | 0.715 | 0.556 |

**Modern to Original Style**

| Model | SIM $\uparrow$ | WMD $\downarrow$ | Self-BLEU $\downarrow$ | S.T. | WSAcc |
|---|---|---|---|---|---|
| UNMT | 0.373 | 0.375 | 0.057 | 0.414 | 0.158 |
| DLSM | 0.421 | 0.373 | 0.086 | 0.391 | 0.174 |
| RLPrompt | 0.550 | 0.348 | 0.261 | 0.547 | 0.203 |
| STRAP | 0.656 | 0.332 | 0.139 | 0.681 | 0.433 |
| Vanilla T-STAR | 0.897 | 0.14 | 0.379 | 0.47 | 0.418 |
| T-STAR | 0.842 | 0.185 | 0.324 | 0.490 | 0.402 |
| Iterative T-STAR | 0.853 | 0.181 | 0.329 | 0.540 | 0.446 |

Table 5: Comparison of T-STAR models with baselines on the Shakespeare Author Imitation dataset for both directions. Iterative T-STAR outperforms all baselines on Weighted Style Accuracy.

| Direction | Model | S-BLEU $\downarrow$ | WMD $\downarrow$ | SIM $\uparrow$ | S.R. $\downarrow$ | S.T. $\uparrow$ | WSAcc $\uparrow$ |
|---|---|---|---|---|---|---|---|
| poetry → bible | STRAP | 0.067 | 0.314 | 0.571 | 0.335 | 0.548 | 0.289 |
| poetry → bible | Van-TSTAR | 0.170 | 0.183 | 0.812 | 0.430 | 0.289 | 0.226 |
| poetry → bible | TSTAR | 0.084 | 0.268 | 0.670 | 0.202 | 0.618 | 0.391 |
| poetry → bible | Itr-TSTAR | 0.106 | 0.241 | 0.721 | 0.231 | 0.566 | 0.383 |
| shak. → bible | STRAP | 0.073 | 0.343 | 0.535 | 0.291 | 0.677 | 0.346 |
| shak. → bible | Van-TSTAR | 0.212 | 0.194 | 0.817 | 0.540 | 0.422 | 0.337 |
| shak. → bible | TSTAR | 0.123 | 0.286 | 0.656 | 0.277 | 0.705 | 0.437 |
| shak. → bible | Itr-TSTAR | 0.149 | 0.260 | 0.712 | 0.271 | 0.701 | 0.477 |
| switch. → bible | STRAP | 0.042 | 0.323 | 0.476 | 0.062 | 0.670 | 0.307 |
| switch. → bible | Van-TSTAR | 0.128 | 0.196 | 0.729 | 0.085 | 0.456 | 0.330 |
| switch. → bible | TSTAR | 0.758 | 0.260 | 0.605 | 0.007 | 0.725 | 0.419 |
| switch. → bible | Itr-TSTAR | 0.097 | 0.241 | 0.637 | 0.006 | 0.745 | 0.456 |
| poetry → shak. | STRAP | 0.067 | 0.327 | 0.571 | 0.159 | 0.810 | 0.450 |
| poetry → shak. | Van-TSTAR | 0.207 | 0.166 | 0.821 | 0.398 | 0.576 | 0.460 |
| poetry → shak. | TSTAR | 0.131 | 0.244 | 0.717 | 0.241 | 0.733 | 0.509 |
| poetry → shak. | Itr-TSTAR | 0.164 | 0.207 | 0.768 | 0.278 | 0.704 | 0.522 |
| switch. → shak. | STRAP | 0.045 | 0.328 | 0.489 | 0.012 | 0.956 | 0.461 |
| switch. → shak. | Van-TSTAR | 0.152 | 0.187 | 0.740 | 0.034 | 0.948 | 0.696 |
| switch. → shak. | TSTAR | 0.089 | 0.250 | 0.651 | 0.009 | 0.971 | 0.628 |
| switch. → shak. | Itr-TSTAR | 0.122 | 0.22 | 0.696 | 0.010 | 0.973 | 0.675 |
| bible → shak. | STRAP | 0.110 | 0.242 | 0.634 | 0.231 | 0.764 | 0.465 |
| bible → shak. | Van-TSTAR | 0.214 | 0.151 | 0.842 | 0.377 | 0.613 | 0.508 |
| bible → shak. | TSTAR | 0.100 | 0.227 | 0.713 | 0.309 | 0.676 | 0.472 |
| bible → shak. | Itr-TSTAR | 0.126 | 0.210 | 0.743 | 0.299 | 0.683 | 0.499 |
| bible → switch. | STRAP | 0.088 | 0.238 | 0.625 | 0.080 | 0.918 | 0.565 |
| bible → switch. | Van-TSTAR | 0.171 | 0.162 | 0.810 | 0.186 | 0.813 | 0.649 |
| bible → switch. | TSTAR | 0.084 | 0.227 | 0.713 | 0.082 | 0.916 | 0.646 |
| bible → switch. | Itr-TSTAR | 0.101 | 0.213 | 0.73 | 0.083 | 0.911 | 0.658 |
| shak. → switch. | STRAP | 0.039 | 0.344 | 0.534 | 0.000 | 0.998 | 0.533 |
| shak. → switch. | Van-TSTAR | 0.138 | 0.212 | 0.767 | 0.002 | 0.988 | 0.755 |
| shak. → switch. | TSTAR | 0.089 | 0.283 | 0.661 | 0.000 | 0.998 | 0.660 |
| shak. → switch. | Itr-TSTAR | 0.110 | 0.249 | 0.719 | 0.002 | 0.991 | 0.712 |
| poetry → switch. | STRAP | 0.054 | 0.312 | 0.616 | 0.007 | 0.993 | 0.610 |
| poetry → switch. | Van-TSTAR | 0.164 | 0.184 | 0.825 | 0.028 | 0.972 | 0.801 |
| poetry → switch. | TSTAR | 0.113 | 0.250 | 0.740 | 0.015 | 0.985 | 0.728 |
| poetry → switch. | Itr-TSTAR | 0.141 | 0.213 | 0.791 | 0.019 | 0.981 | 0.774 |
| bible → poetry | STRAP | 0.060 | 0.275 | 0.633 | 0.346 | 0.621 | 0.374 |
| bible → poetry | Van-TSTAR | 0.094 | 0.195 | 0.777 | 0.533 | 0.417 | 0.315 |
| bible → poetry | TSTAR | 0.054 | 0.253 | 0.684 | 0.323 | 0.597 | 0.394 |
| bible → poetry | Itr-TSTAR | 0.064 | 0.237 | 0.700 | 0.348 | 0.559 | 0.375 |
| shak. → poetry | STRAP | 0.071 | 0.353 | 0.550 | 0.160 | 0.818 | 0.449 |
| shak. → poetry | Van-TSTAR | 0.155 | 0.219 | 0.753 | 0.228 | 0.748 | 0.551 |
| shak. → poetry | TSTAR | 0.108 | 0.286 | 0.641 | 0.166 | 0.815 | 0.512 |
| shak. → poetry | Itr-TSTAR | 0.128 | 0.258 | 0.693 | 0.190 | 0.794 | 0.538 |
| switch. → poetry | STRAP | 0.037 | 0.331 | 0.524 | 0.094 | 0.844 | 0.432 |
| switch. → poetry | Van-TSTAR | 0.101 | 0.215 | 0.715 | 0.261 | 0.486 | 0.334 |
| switch. → poetry | TSTAR | 0.064 | 0.268 | 0.631 | 0.106 | 0.701 | 0.427 |
| switch. → poetry | Itr-TSTAR | 0.084 | 0.239 | 0.673 | 0.158 | 0.603 | 0.388 |
| All Styles Avg. | STRAP | 0.063 | 0.311 | 0.563 | 0.148 | 0.801 | 0.440 |
| All Styles Avg. | Van-TSTAR | 0.159 | 0.189 | 0.784 | 0.259 | 0.644 | 0.497 |
| All Styles Avg. | TSTAR | 0.093 | 0.256 | 0.674 | 0.145 | 0.787 | 0.519 |
| All Styles Avg. | Itr-TSTAR | 0.116 | 0.232 | 0.715 | 0.158 | 0.768 | 0.538 |

Table 6: Performance comparison of T-STAR models with STRAP across 12 transfer directions over four styles. T-STAR and Iterative T-STAR beat STRAP in 11 out of 12 directions. S.R. - Style Retention, S.T. - Style Transfer, WSAcc - Weighted Style Accuracy.

### 7.1 Performance Analysis on the Shakespeare Imitation Dataset

In Table 5, we present the results of T-STAR and Iterative T-STAR in comparison with the baselines on the Shakespeare Author Imitation dataset for both directions: original to modern style and vice versa. Weighted Style Accuracy is the primary metric, as it has been shown to effectively combine style accuracy and content preservation Krishna et al. (2020).
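The primary metric can be computed directly from per-instance quantities; a sketch (assuming the style-classifier decisions and semantic-similarity scores are already computed per test instance):

```python
def weighted_style_accuracy(style_correct, sim_scores):
    """Weighted Style Accuracy: the per-instance style-classifier
    decision (1 if the target style was achieved, else 0) weighted by
    the semantic similarity to the source, averaged over the test set."""
    assert len(style_correct) == len(sim_scores)
    total = sum(c * s for c, s in zip(style_correct, sim_scores))
    return total / len(style_correct)

# Toy example: 3 of 4 transfers hit the target style.
wsacc = weighted_style_accuracy([1, 1, 0, 1], [0.9, 0.6, 0.8, 0.5])
# (0.9 + 0.6 + 0 + 0.5) / 4 = 0.5
```

This makes explicit why a model cannot game the metric: a style-accurate output with low similarity to the source contributes little, and vice versa.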
We observe that T-STAR performs slightly better than STRAP for original to modern style, but has lower performance for modern to original style. Iterative T-STAR, however, outperforms all baselines in both directions on Weighted Style Accuracy. We observe that STRAP achieves its very high style accuracy by compromising on content preservation; through human evaluation (Section 8) we see that STRAP hallucinates significantly more to achieve style transfer. Vanilla T-STAR, on the other hand, achieves high content preservation via significant copying from the source, as seen from its substantially high Self-BLEU score. Iterative T-STAR finds the middle ground, achieving style transfer while not compromising on content preservation. We also found that the length of the generated sentence is similar to that of the input sentence, with on average a one-word difference. In the subsequent sections, we compare T-STAR only to the best-performing baseline, STRAP.

### 7.2 Performance Analysis on the CDS Dataset

In Table 6, we compare the performance of T-STAR and Iterative T-STAR against STRAP and Vanilla T-STAR across all 12 directions for the {Poetry, Shakespeare, Switchboard, Bible} styles. We make the following observations. First, both T-STAR and Iterative T-STAR outperform the state-of-the-art baseline, STRAP, in 11 out of 12 directions, with average absolute improvements of 7.7% and 9.7% respectively. Second, Vanilla T-STAR is a stronger baseline than STRAP, as it beats STRAP in 8 out of 12 directions. Third, Iterative T-STAR beats Vanilla T-STAR in 7 out of 12 directions, with an average absolute improvement of 8% across those 7 directions, compared to an average absolute loss of 2% in the remaining 5.
Fourth, the Iterative T-STAR model outperforms the T-STAR model in 9 out of 12 directions, which shows that the improvement in synthetic dataset quality boosts performance on the downstream task. Consistent with the findings of the previous experiment, we see that Iterative T-STAR finds the middle ground of transferring style without compromising on content preservation.

| Direction | Input Sentence | STRAP | Vanilla T-STAR | T-STAR | Iterative T-STAR |
|---|---|---|---|---|---|
| Poetry → Bible | “Dear Lord! It has a fiendish look”– | But they laughed, and said, Oh my God! | Thou lookest fiendish, O LORD | Behold, it beholdeth to be fiendish, saith the LORD. | Dear Lord, it looketh fiendish unto thee. |
| Bible → Poetry | And he said unto another, Follow me. | And bade the other follow me; | And another: “O follow me!” said he, | And follow me! and I will be be ye gone, | And thou, my love, will follow me. |
| Shake. → Switch. | Dead art thou, dead! | did you get a uh a uh you have a dead art | you’re dead i say you are dead | oh you’d die | you’re dead yeah |
| Shake. → Switch. | Thus with a kiss I die. | so i’m i’m dying to get a kiss | so i die with a kiss | i die with a kiss | so i die with a kiss |

Table 7: Examples of generated stylized sentences from the STRAP, Vanilla T-STAR, T-STAR and Iterative T-STAR models for the given input sentences.

#### 7.2.1 Qualitative Analysis

In Table 7, we enumerate a few examples of stylized sentences generated by STRAP and the T-STAR variations. We can infer that, although STRAP performs well in transforming a sentence into the given style, it alters the meaning (rows 1, 3 and 4). On the other hand, Vanilla T-STAR does not always transform the style (rows 1 and 2). T-STAR and Iterative T-STAR, however, are able to transform the style while keeping the meaning intact. We further quantify these observations through an extensive set of human annotations, as described below.
## 8 Human Evaluations

| Model | > STRAP | < STRAP | = STRAP |
|---|---|---|---|
| T-STAR | 70.8% | 26.4% | 2.8% |
| Itr. T-STAR | 77.5% | 20.5% | 2.0% |

Table 8: Comparison of the T-STAR and Iterative T-STAR models against STRAP on content preservation.

Automatic metrics are insufficient to thoroughly understand subjective quality measures like content preservation. Therefore, we conduct an extensive case study with human evaluations. Our analysis is twofold: first, we compare STRAP with our models T-STAR and Iterative T-STAR on meaning preservation; second, we investigate the various categories of meaning-loss failures. For both human evaluation tasks, the criteria for choosing annotators were (i) proficiency in English with a minimum education of a diploma, and (ii) annotators first had to qualify on two simpler questions, otherwise they were not allowed to continue on the task. Each instance is annotated by three taskers, and the final annotation is a majority vote.

### 8.1 Comparison on Meaning Preservation

In order to study the faithfulness of the T-STAR models, we run a side-by-side human evaluation. In this task, a source sentence is shown with two stylized target sentences (one from T-STAR and another from STRAP). We present the annotators with three options to judge content preservation with respect to the source sentence: left better than right, right better than left, or both equal. We compare the two models extensively across all 12 directions for the four styles. For each direction we randomly sample 500 instances. Each instance is rated by 3 annotators, leading to a total of 18,000 ratings. We summarize our findings in Table 8. Both T-STAR and Iterative T-STAR significantly outperform STRAP in terms of content preservation (the “> STRAP” column in Table 8). Further, Iterative T-STAR has 7% higher meaning preservation compared to T-STAR.
In addition to the quantitative content preservation metrics discussed in Section 7, this analysis gathers additional qualitative evidence that AMRs are an effective intermediate representation for content-preserving style transfer. The complete statistics per direction are available in Appendix F.

| Model | None $\uparrow$ | Hal. $\downarrow$ | Sem. Drift $\downarrow$ | Incomp. $\downarrow$ |
|---|---|---|---|---|
| STRAP | 10.48% | 39.38% | 35.38% | 14.75% |
| T-STAR | 19.36% | 24.46% | 33.71% | 22.45% |
| Itr. T-STAR | 24.83% | 22.6% | 36.98% | 15.58% |

Table 9: Aggregate error analysis over 6,000 samples for the error types Hallucinations, Semantic Drift and Incomplete. With Iterative T-STAR, the number of samples with no errors increases and hallucinations decrease significantly.

### 8.2 Error Analysis

In the next study, we aim to understand the nature of the meaning-loss errors made by style transfer models. We categorize these errors into three categories: (i) Hallucinations: new information not present in the source sentence is added to the target; (ii) Semantic Drift: the target sentence has a different meaning from the source sentence; (iii) Incomplete: some important content information is missing from the target. The taskers also have the option to select “No Error” if the meaning is preserved in the generated target. As in the previous experiment, we collect 18,000 ratings; the results are summarized in Table 9. We observe that our models T-STAR and Iterative T-STAR consistently beat STRAP in the “No Error” category. Furthermore, the share of hallucinations drops significantly from 39.38% for STRAP to 24.46% with T-STAR, and further to 22.6% with Iterative T-STAR, across all styles. The reduction in hallucination can be clearly seen as a benefit of encoding the critical information of the source sentence in a semantic parse representation like AMR. As a sign of improving AMR parsing quality, Iterative T-STAR further reduces Incomplete errors to 15.58%. For further details refer to Appendix F.
#### 8.2.1 Usefulness of an interpretable intermediate representation

Since the intermediate AMRs are interpretable, it is possible to broadly determine whether such errors emerge from the encoder or the decoder module. To intuitively understand the high rates of Incomplete and Semantic Drift errors, we qualitatively analyzed a number of instances along with their generated intermediate AMRs; these examples are listed in Appendix F. For Incomplete errors, we observed that the generated AMRs did not encode the complete information, so the error percolated from the T-STAR encoder. For the majority of these instances, either some entities were missing or, when a clause was separated using “:”, “;”, or “,”, it was not parsed into the intermediate graph. Semantic Drift errors indicate shortcomings in both modules: for some instances the encoder does not abstract out the meaning efficiently, and for others the decoder is unable to generate sentences with the meaning encoded in the intermediate AMR.

## 9 Conclusion

We explored the use of AMR graphs as an intermediate representation for the TST task. The proposed method T-STAR surpasses state-of-the-art techniques in content preservation with comparable style accuracy. Through qualitative analysis we show that obtaining very high style accuracy scores without altering meaning is indeed a challenging problem.

## 10 Limitations

T-STAR-based models have the following limitations. First, although our proposed models perform better on the joint objective of content preservation and style transfer, they do not outperform vanilla T-STAR (the overall best-performing model for CP) and STRAP (the overall best-performing model for ST) on their respective single objectives. Boosting performance in both dimensions without compromising either is a promising future direction.
Second, we do not incorporate graph structure in our models, so there could be some information loss while interpreting and generating the AMRs. Third, based on our error analysis, although our T-STAR encoder generates better AMRs for stylized sentences than the vanilla T-STAR model, it still produces a significant number of incomplete AMRs that miss entities and relations important for preserving the meaning of the source sentence. Fourth, similar to prior research on generating synthetic datasets, the initial iteration of our model depends on an existing off-the-shelf TST model; however, the quality of the generated AMRs improves significantly with the described data augmentation strategy. Fifth, our work depends on a robust AMR parsing approach, which makes it challenging to adopt for other languages. However, with recent advancements in multilingual AMR parsing, this should become feasible in future work.

## References

* Bai et al. (2022) Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022. Graph pre-training for amr parsing and generation. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 6001–6015. * Bai et al. (2020) Xuefeng Bai, Linfeng Song, and Yue Zhang. 2020. Online back-parsing for amr-to-text generation. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1206–1219. * Banarescu et al. (2013) Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In _Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse_ , pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. * Bevilacqua et al. (2021) Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021.
One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In _Proceedings of AAAI_. * Cai and Lam (2020) Deng Cai and Wai Lam. 2020. AMR parsing via graph-sequence iterative inference. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 1290–1301, Online. Association for Computational Linguistics. * Cai and Knight (2013) Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 748–752. * Cai et al. (2021) Yitao Cai, Yue Cao, and Xiaojun Wan. 2021. Revisiting pivot-based paraphrase generation: Language is not the only optional pivot. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 4255–4268, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. * Cheng et al. (2020) Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, and Jingjing Liu. 2020. Contextual text style transfer. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 2915–2924. * Deng et al. (2022a) Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P Xing, and Zhiting Hu. 2022a. Rlprompt: Optimizing discrete text prompts with reinforcement learning. _arXiv preprint arXiv:2205.12548_. * Deng et al. (2022b) Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, and Patricia Riddle. 2022b. Interpretable amr-based question decomposition for multi-hop question answering. _arXiv preprint arXiv:2206.08486_. * Dolan and Brockett (2005) William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In _Proceedings of the Third International Workshop on Paraphrasing (IWP2005)_. * Du and Flanigan (2021) Wenchao Du and Jeffrey Flanigan. 2021.
Avoiding overlap in data augmentation for amr-to-text generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_ , pages 1043–1048. * Fan and Gardent (2020) Angela Fan and Claire Gardent. 2020. Multilingual amr-to-text generation. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 2889–2901. * Flanigan et al. (2016) Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In _Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)_ , pages 1202–1206, San Diego, California. Association for Computational Linguistics. * Flanigan et al. (2014) Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1426–1436, Baltimore, Maryland. Association for Computational Linguistics. * Fu et al. (2017) Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2017. Style transfer in text: Exploration and evaluation. _CoRR_ , abs/1711.06861. * Hardy and Vlachos (2018) Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 768–773, Brussels, Belgium. Association for Computational Linguistics. * He et al. (2020) Junxian He, Xinyi Wang, Graham Neubig, and Taylor Berg-Kirkpatrick. 2020. A probabilistic formulation of unsupervised text style transfer. _CoRR_ , abs/2002.03912. * Hoang et al.
(2018) Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back-translation for neural machine translation. In _Proceedings of the 2nd Workshop on Neural Machine Translation and Generation_ , pages 18–24, Melbourne, Australia. Association for Computational Linguistics. * Holtzman et al. (2019) Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In _International Conference on Learning Representations_. * Hu et al. (2017) Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Controllable text generation. _CoRR_ , abs/1703.00955. * Jin et al. (2020) Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2020. Deep learning for text style transfer: A survey. _CoRR_ , abs/2011.00416. * John et al. (2019) Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 424–434, Florence, Italy. Association for Computational Linguistics. * Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_. * Knight et al. (2021) Kevin Knight, Bianca Badarau, Laura Baranescu, Claire Bonial, Madalina Bardocz, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, Tim O’Gorman, et al. 2021. Abstract meaning representation (amr) annotation release 3.0. * Krishna et al. (2020) Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as paraphrase generation. _CoRR_ , abs/2010.05700. * Kumari et al. (2021) Surabhi Kumari, Nikhil Jaiswal, Mayur Patidar, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, and Lovekesh Vig. 2021. Domain adaptation for nmt via filtered iterative back-translation.
In _Proceedings of the Second Workshop on Domain Adaptation for NLP_ , pages 263–271. * Kusner et al. (2015) Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In _Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015_ , volume 37 of _JMLR Workshop and Conference Proceedings_ , pages 957–966. JMLR.org. * Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_. * Li et al. (2018) Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: A simple approach to sentiment and style transfer. _CoRR_ , abs/1804.06437. * Liao et al. (2018) Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract Meaning Representation for multi-document summarization. In _Proceedings of the 27th International Conference on Computational Linguistics_ , pages 1178–1190, Santa Fe, New Mexico, USA. Association for Computational Linguistics. * Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. _CoRR_ , abs/1907.11692. * Liu et al. (2020) Yixin Liu, Graham Neubig, and John Wieting. 2020. On learning text style transfer with direct rewards. _CoRR_ , abs/2010.12771. * Logeswaran et al. (2018) Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. 2018. Content preserving text generation with attribute controls. In _Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada_ , pages 5108–5118. * Luo et al. 
(2019) Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. _CoRR_ , abs/1905.10060. * Ma et al. (2020) Xinyao Ma, Maarten Sap, Hannah Rashkin, and Yejin Choi. 2020. Powertransformer: Unsupervised controllable revision for biased language correction. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020_ , pages 7426–7441. Association for Computational Linguistics. * Madaan et al. (2020) Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabás Póczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W. Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. _CoRR_ , abs/2004.14257. * McDonald and Pustejovsky (1985) David D McDonald and James Pustejovsky. 1985. A computational theory of prose style for natural language generation. In _Second Conference of the European Chapter of the Association for Computational Linguistics_. * Niu and Bansal (2018) Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. _Trans. Assoc. Comput. Linguistics_ , 6:373–389. * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA_ , pages 311–318. ACL. * Peng et al. (2017) Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the data sparsity issue in neural AMR parsing. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 366–375, Valencia, Spain. Association for Computational Linguistics. * Prabhumoye et al. (2018) Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018.
Style transfer through back-translation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 866–876, Melbourne, Australia. Association for Computational Linguistics. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21:1–67. * Samanta et al. (2021) Bidisha Samanta, Mohit Agrawal, and Niloy Ganguly. 2021. A hierarchical VAE for calibrating attributes while generating text using normalizing flow. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 2405–2415, Online. Association for Computational Linguistics. * Samanta et al. (2019) Bidisha Samanta, Niloy Ganguly, and Soumen Chakrabarti. 2019. Improved sentiment detection via label transfer from monolingual to synthetic code-switched text. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3528–3537, Florence, Italy. Association for Computational Linguistics. * Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. _arXiv preprint arXiv:1910.01108_. * Shazeer and Stern (2018) Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In _International Conference on Machine Learning_ , pages 4596–4604. PMLR. * Shen et al. (2017) Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. _Advances in neural information processing systems_ , 30. * Shi et al.
(2021) Yukai Shi, Sen Zhang, Chenxing Zhou, Xiaodan Liang, Xiaojun Yang, and Liang Lin. 2021. Gtae: Graph transformer–based auto-encoders for linguistic-constrained text style transfer. 12(3). * Singh and Palod (2018) Ayush Singh and Ritu Palod. 2018. Sentiment transfer using seq2seq adversarial autoencoders. _CoRR_ , abs/1804.04003. * Song et al. (2019) Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. _CoRR_ , abs/1902.07282. * Song et al. (2018) Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amr-to-text generation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1616–1626. * Subramanian et al. (2018) Sandeep Subramanian, Guillaume Lample, Eric Michael Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and Y-Lan Boureau. 2018. Multiple-attribute text style transfer. _CoRR_ , abs/1811.00552. * Sudhakar et al. (2019) Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. “Transforming” delete, retrieve, generate approach for controlled text style transfer. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019_ , pages 3267–3277. Association for Computational Linguistics. * Takase et al. (2016) Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on Abstract Meaning Representation. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 1054–1059, Austin, Texas. Association for Computational Linguistics. * Tikhonov et al. (2019) Alexey Tikhonov, Viacheslav Shibaev, Aleksander Nagaev, Aigul Nugmanova, and Ivan P Yamshchikov. 2019. Style transfer for texts: Retrain, report errors, compare with rewrites.
In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3936–3945. * Wang et al. (2019) Ke Wang, Hang Hua, and Xiaojun Wan. 2019. Controllable unsupervised text attribute transfer via editing entangled latent representation. _CoRR_ , abs/1905.12926. * Wang et al. (2020) Tianming Wang, Xiaojun Wan, and Hanqi Jin. 2020. Amr-to-text generation with graph transformer. _Transactions of the Association for Computational Linguistics_ , 8:19–33. * Wang et al. (2021) Tianming Wang, Xiaojun Wan, and Shaowei Yao. 2021. Better amr-to-text generation with graph structure reconstruction. In _Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence_ , pages 3919–3925. * Wieting et al. (2019a) John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019a. Beyond BLEU: training neural machine translation with semantic similarity. In _Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers_ , pages 4344–4355. Association for Computational Linguistics. * Wieting et al. (2019b) John Wieting, Kevin Gimpel, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019b. Simple and effective paraphrastic similarity from parallel translations. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4602–4608, Florence, Italy. Association for Computational Linguistics. * Wu et al. (2020) Yu Wu, Yunli Wang, and Shujie Liu. 2020. A dataset for low-resource stylized sequence-to-sequence generation. 
In _The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020_ , pages 9290–9297. AAAI Press. * Xia et al. (2021) Qingrong Xia, Zhenghua Li, Rui Wang, and Min Zhang. 2021. Stacked amr parsing with silver data. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 4729–4738. * Xu et al. (2020a) Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, and Guodong Zhou. 2020a. Improving amr parsing with sequence-to-sequence pre-training. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 2501–2511. * Xu et al. (2020b) Jin Xu, Yinuo Guo, and Junfeng Hu. 2020b. Incorporate semantic structures into machine translation evaluation via UCCA. _CoRR_ , abs/2010.08728. * Xu et al. (2018) Jingjing Xu, Xu Sun, Qi Zeng, Xuancheng Ren, Xiaodong Zhang, Houfeng Wang, and Wenjie Li. 2018. Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. _arXiv preprint arXiv:1805.05181_. * Xu et al. (2012) Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In _Proceedings of COLING 2012_ , pages 2899–2914, Mumbai, India. The COLING 2012 Organizing Committee. * Yamshchikov et al. (2019a) Ivan Yamshchikov, Viascheslav Shibaev, and Alexey Tikhonov. 2019a. Dyr bul shchyl. proxying sound symbolism with word embeddings. In _Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP_ , pages 90–94, Minneapolis, USA. Association for Computational Linguistics. * Yamshchikov et al. (2020) Ivan P. Yamshchikov, Viacheslav Shibaev, Nikolay Khlebnikov, and Alexey Tikhonov. 2020. Style-transfer and paraphrase: Looking for a sensible semantic similarity metric. _CoRR_ , abs/2004.05001. * Yamshchikov et al. 
(2019b) Ivan P Yamshchikov, Viacheslav Shibaev, Aleksander Nagaev, Jürgen Jost, and Alexey Tikhonov. 2019b. Decomposing textual information for style transfer. _EMNLP-IJCNLP 2019_ , page 128. * Zhang et al. (2019) Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. Amr parsing as sequence-to-graph transduction. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 80–94. * Zhou et al. (2020) Qiji Zhou, Yue Zhang, Donghong Ji, and Hao Tang. 2020. Amr parsing with latent structural information. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4306–4319.

## Appendix A Extended Related Works

### A.1 TST Metrics

Table 2 summarizes different metrics that have been used to measure content preservation and style transfer efficacy. Yamshchikov et al. (2020) present a comprehensive analysis and categorization of several such metrics with respect to human evaluations. Tikhonov et al. (2019) also point out flaws in traditional evaluation techniques and insist on using human-written reformulations for better evaluation (due to the difficulty of our task, we instead restrict ourselves to human evaluations rather than obtaining human gold-standard benchmarks).

### A.2 Text to AMR

Recent works Bevilacqua et al. (2021); Cai and Lam (2020); Zhou et al. (2020) on the text-to-AMR task have pushed the state of the art, making it feasible to automatically construct an AMR for a given sentence. As a consequence, semantic-preserving NLG tasks such as Neural Machine Translation Song et al. (2019); Xu et al. (2020b), Abstractive Summarization Takase et al. (2016), and Question Decomposition for multi-hop question answering Deng et al. (2022b) use AMRs as intermediate representations. However, AMRs had not been explored for style transfer tasks before our work. The increasing adoption of AMRs for seq2seq tasks is due to the boost in the quality of AMR parsers.
Earlier works relied on statistical approaches Peng et al. (2017); Flanigan et al. (2014, 2016) to generate AMRs for a given piece of text. With the emergence of deep learning, various AMR parsers have been proposed, which can be divided into the following categories: i) sequence-to-sequence based AMR parsers Xu et al. (2020a); ii) sequence-to-graph based AMR parsers Zhang et al. (2019), where the graph is built incrementally by spanning one node at a time. More recently, several works have adopted pretrained models for AMR parsing and observed a boost in performance. Bai et al. (2022) uses a BART model, poses AMR parsing as a seq2seq task, and generates a traversal of the AMR graph as the output; it also incorporates a pretraining strategy to better encode graph information in the BART architecture. Xu et al. (2020a) uses sentence encodings generated from a BERT model. In this work, we adopt a pretraining-based AMR parser to generate high-quality AMRs for the given stylized sentences. Although off-the-shelf AMR parsers work well for some problems Fan and Gardent (2020), they often need to be modified to be useful in downstream tasks. For instance, Deng et al. (2022b) proposed a graph segmentation strategy to perform question decomposition on multi-hop queries. Xia et al. (2021) and Du and Flanigan (2021) illustrated that silver data augmentation can help improve AMR parsing. In this work, we likewise illustrate the benefit of using silver data to improve the style-agnosticity of AMR graphs as an intermediate representation.

### A.3 AMR to Text

Similar to text-to-AMR models, AMR-to-text frameworks can also be categorized into two types: i) sequence-to-sequence generation frameworks, and ii) graph-encoder based frameworks Song et al. (2018); Wang et al. (2021, 2020). Bai et al. (2020) propose a decoder that back-predicts projected AMR graphs to preserve the input meaning better than standard decoders.
Bai et al. (2022) argues that PLMs are pretrained on textual data, making them sub-optimal for modeling structural knowledge, and hence proposes self-supervised graph-based training objectives to improve the quality of AMR-to-text generation.

## Appendix B Implementation Details

Offensive language: We used the “List of Dirty, Naughty, Obscene or Otherwise Bad Words” (https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) to validate that the source and the generated target text do not contain any offensive text.

Model Architecture: We use a standard t5-base encoder-decoder model as described in Raffel et al. (2020). The pre-trained HuggingFace (https://github.com/huggingface/transformers) T5 transformer is used for both the text-to-AMR and AMR-to-text parts of the proposed architecture. The model is pre-trained on the Colossal Clean Crawled Corpus (C4; https://www.tensorflow.org/datasets/catalog/c4), comprising $\sim$750 GB of text articles. The model comprises 220 million parameters and is pre-trained for $2^{19}$ steps before fine-tuning. For pre-training, the AdaFactor optimizer Shazeer and Stern (2018) is used with an “inverse square root” learning rate schedule.

AMR Graph Construction: We use the SPRING model Bevilacqua et al. (2021) to generate AMR graphs from source-style text, via the amrlib package (https://github.com/bjascob/amrlib). This implementation uses T5-base Raffel et al. (2020) as its underlying model, as opposed to the BART model Lewis et al. (2019) in the original SPRING architecture. It is trained on the AMR 3.0 (LDC2020T02) dataset Knight et al. (2021), which consists of 59K manually created sentence-AMR pairs. The model is trained for 8 epochs using a learning rate of $10^{-4}$. The source and target sequence lengths are restricted to 100 and 512 tokens, respectively. Note that the t5-based SPRING model achieves a SMATCH score of $83.5$, which ensures the quality of the obtained AMR representations $z_{i}$.
AMR-Based Style Transfer: We use the T5wtense (T5 with tense) architecture from the amrlib package. The T5wtense architecture attaches part-of-speech (POS) tags to the concepts in the AMR graph, which helps the generation model predict the tense of the output sentence, since AMR graphs do not retain any tense information from the corresponding sentence. This model outperforms the standard T5-based model by 10 BLEU points on the AMR 3.0 (LDC2020T02) dataset Knight et al. (2021). To keep the training steps comparable across the subsets of the CDS dataset Krishna et al. (2020), we train this t5-base model for $20$ epochs on the Bible, Romantic Poetry, and Shakespeare datasets, and $5$ epochs on the Switchboard dataset. The model was also trained for $20$ epochs on the Shakespeare Author Imitation Dataset Xu et al. (2012). We used a learning rate of $10^{-4}$ for both datasets and restricted source and target sequence lengths to 512 and 90 tokens, respectively, throughout. Everything else was kept the same as in the amrlib implementation to keep the results consistent.

STRAP baseline: We train the model with the same hyperparameter configuration as reported in Krishna et al. (2020). We train each style-specific decoder for 3 epochs with a learning rate of $5\times 10^{-5}$ using the Adam optimizer Kingma and Ba (2014). During inference we set the p-value for nucleus sampling Holtzman et al. (2019) to 0.7 to achieve an appropriate balance between content preservation and style accuracy scores.

Style classifiers: Similar to Krishna et al. (2020), we fine-tune a RoBERTa-large model Liu et al. (2019) using the official implementation in the fairseq package (https://github.com/pytorch/fairseq) to train the style classifiers mentioned in Table 3. For all classifier variants, a learning rate of $10^{-5}$ and a mini-batch size of $32$ were used. The models were trained for 10 epochs using the Adam optimizer Kingma and Ba (2014).
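The nucleus-sampling p-value used for the STRAP baseline bounds the decoder's candidate pool at each step: the smallest set of highest-probability tokens whose cumulative mass reaches $p$. A sketch of that pool construction, with a made-up next-token distribution rather than actual decoder outputs:

```python
def nucleus_pool(probs, p=0.7):
    """Smallest set of highest-probability tokens whose cumulative
    probability reaches p; sampling then happens within this pool."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    pool, cum = [], 0.0
    for tok, pr in ranked:
        pool.append(tok)
        cum += pr
        if cum >= p:
            break
    return pool

# Made-up next-token distribution: at p=0.7 only the top two tokens survive.
print(nucleus_pool({"the": 0.5, "a": 0.3, "thou": 0.15, "ye": 0.05}))
# ['the', 'a']
```

A lower p restricts generation to high-probability tokens (safer, more faithful output), while a higher p admits rarer tokens (more stylistic variety); $p=0.7$ is the balance point chosen for the baseline.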
We also masked out named entities, nouns, and numbers from the input before training the classifier, using the spacy package (https://spacy.io/) to obtain the named entities and POS tags. For AMR graphs, the named entities and POS tags extracted from the original sentences were used.

Train-Validation-Test splits: The data splits were kept the same as for the baseline model STRAP Krishna et al. (2020) and can be found in Table 10.

Dataset | Subset | Train split | Validation split | Test split
---|---|---|---|---
CDS | Bible | 31,404 | 1,714 | 1,714
CDS | Switchboard | 145,823 | 1,487 | 1,488
CDS | Poetry | 26,880 | 1,464 | 1,470
CDS | Shakespeare | 24,852 | 1,313 | 1,293
SAID | Original | 18,395 | 1,128 | 1,462
SAID | Modern | 18,395 | 1,218 | 1,462

Table 10: Size of train, validation, and test sets for the CDS dataset Krishna et al. (2020) and the Shakespeare Author Imitation Dataset (SAID) Xu et al. (2012).

Computational time and device setup: All experiments were run on a 16 GB NVIDIA V100 GPU system with a 120 GB n1-standard-32 Intel Broadwell CPU. Training the AMR-to-text models took $\sim$16 hrs for the Shakespeare Author Imitation dataset Xu et al. (2012) and $\sim$25 hrs for the CDS dataset.

Evaluation Metrics: We used the gensim package (https://radimrehurek.com/gensim/) to compute the Word Mover Distance (WMD) Kusner et al. (2015), the nltk package (https://www.nltk.org/) to compute BLEU scores Papineni et al. (2002), the smatch package (https://github.com/snowblink14/smatch) to compute the SMATCH score Cai and Knight (2013), and the implementation by Krishna et al. (2020) (https://github.com/martiansideofthemoon/style-transfer-paraphrase) for cosine similarity using SIM embeddings Wieting et al. (2019a).

License of the packages used: The following packages use the MIT License (https://opensource.org/licenses/MIT): amrlib, spacy, fairseq, smatch, and STRAP.
The following packages use the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0): nltk and huggingface's transformers. The following package uses the GNU LGPL 3.0 license (https://www.gnu.org/licenses/lgpl-3.0.en.html): gensim.

## Appendix C T-STAR-Encoder Ablation Study

The performance of our T-STAR-Encoder heavily depends on the quality of the synthetic dataset generated by stylizing the sentences in the AMR 3.0 dataset. Therefore, we conducted a thorough empirical analysis to identify a filtering strategy that boosts the performance of the encoder. To obtain the initial set of stylized sentences, we use the state-of-the-art model for transforming generic English sentences into the relevant styles {Poetry, Shakespeare, Bible, Switchboard}, i.e., the seq2seq inverse paraphrase module of Krishna et al. (2020). We then fine-tune our T-STAR-Encoder on synthetic datasets obtained from different filtering strategies and compare the performance on the test split of AMR 3.0, to ensure that the quality of the generated AMRs does not drop significantly. We present our findings in Table 11. We observe that using the whole set of generated samples leads to a significant drop in performance (the unfiltered T-STAR-Encoder row in Table 11). Therefore, we filter out augmented stylized sentences whose SIM similarity score Wieting et al. (2019a) falls below a threshold $\delta$. This filtering strategy significantly improves T-STAR-Encoder performance, giving results competitive with the non-augmented vanilla T-STAR-Encoder. We select the best-performing $\delta$ based on the performance on the test split of AMR 3.0.
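The filtering step itself reduces to a similarity threshold over the synthetic pairs. A minimal sketch, where the triples and their SIM scores are hypothetical stand-ins for the actual augmentation data:

```python
def filter_augmented(triples, delta=0.7):
    """Keep synthetic (source, stylized) pairs whose similarity score
    clears the threshold delta; drop low-fidelity stylizations."""
    return [(src, tgt) for src, tgt, sim in triples if sim >= delta]

# Hypothetical synthetic pairs with their similarity scores.
synthetic = [
    ("he spoke plainly", "he speaketh plainly", 0.82),    # kept
    ("he spoke plainly", "lo, a raven cried out", 0.41),  # dropped
]
print(filter_augmented(synthetic, delta=0.7))
```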
Note that we have used the best performing threshold, $\delta=0.7$.

Model | Precision | Recall | F-Score
---|---|---|---
Vanilla T-STAR Encoder | 0.829 | 0.794 | 0.811
T-STAR-Encoder | 0.671 | 0.390 | 0.493
T-STAR Encoder-Flt ($\delta$=0.5) | 0.830 | 0.790 | 0.810
T-STAR Encoder-Flt ($\delta$=0.6) | 0.807 | 0.753 | 0.779
T-STAR Encoder-Flt ($\delta$=0.7) | 0.836 | 0.798 | 0.816
T-STAR Encoder-Flt ($\delta$=0.8) | 0.828 | 0.793 | 0.810
Iterative T-STAR-Encoder ($\delta$=0.7) | 0.829 | 0.794 | 0.811

Table 11: SMATCH scores of various versions of the T-STAR-Encoder on the AMR 3.0 dataset's test split.

## Appendix D Unsupervised Evaluation of AMR parsing

Since we use AMR graphs as the intermediate representation for the TST task, it is important to validate the quality of the generated AMRs in terms of content preservation with respect to the input sentence. However, there does not exist an unsupervised metric to evaluate content overlap between an AMR graph and a sentence. Hence we propose to use a slight variation of the Word Mover Distance Kusner et al. (2015) for this purpose. We choose WMD over other content preservation metrics for the following reasons:

* • Yamshchikov et al. (2019a) illustrate the efficacy of WMD for evaluating text style transfer over other metrics, based on correlation with human evaluations. This means that it is more robust to the domain difference between the input and the output sentence, making it an ideal candidate for text-AMR similarity measurement.
* • Syntactic metrics like BLEU would not be able to compute the content overlap between an AMR graph and a sentence, because word representation in an AMR graph discards noun forms and tense information, and some verb tokens are mapped to a different PropBank verb. These modifications, along with the disparity between sequential and graphical representations, make syntactic metrics infeasible for the task.
* • Semantic representations like SIM are fragile to the input sequence order, and are affected by non-content-bearing words as well. WMD, by contrast, adopts a bag-of-words paradigm, making it more suitable for the task.

We propose the following two variants of WMD:

* • WMD Overall - In this variant, we aim to keep the content-bearing tokens from the sentence and the AMR graph. For sentences, we removed the stopwords (after doing a detailed corruption study on sentences; refer to Table 13), while for AMR graphs we removed AMR-notation-specific tokens (like ":op?", "ARG?"), punctuation (like "(", ")"), assigned variables, and PropBank codes for verbs (e.g., changing "s / say-01" to "say").
* • WMD Verb Overall - In this variant we specifically want to compute the similarity of the verbs in the parsed AMR graph and the input sentence. For this, we use the nltk POS tagging tool to extract verbs from the input sentence, and directly extract the PropBank-based verbs from the AMR graph.

Refer to Table 12 for an example of the preprocessing strategy adopted.

Input Sentence:
Malaysian vice-prime minister Anwar ended a visit to China this afternoon, and left Shanghai for Tokyo.

Extracted content from Input Sentence:
malaysian vice-prime minister anwar ended visit china afternoon , left shanghai tokyo.
Extracted Verbs from Input Sentence:
ended left

Input AMR:
(a2 / and :op1 (e2 / end-01 :ARG0 (p / person :name (n / name :op1 ”Anwar”) :ARG0-of (h / have-org-role-91 :ARG1 (c7 / country :name (n3 / name :op1 ”Malaysia”)) :ARG2 (m / minister :mod (p2 / prime) :mod (v / vice)))) :ARG1 (v2 / visit-01 :ARG0 p :ARG1 (c6 / country :name (n2 / name :op1 ”China”))) :time (d / date-entity :dayperiod (a3 / afternoon) :mod (t / today))) :op2 (l / leave-11 :ARG0 p :ARG1 (c8 / city :name (n4 / name :op1 ”Shanghai”)) :ARG2 (c9 / city :name (n5 / name :op1 ”Tokyo”))))

Extracted sequence:
and end person name Anwar have-org-role country name Malaysia minister prime vice visit country name China afternoon today leave city name Shanghai city name Tokyo

Corresponding verb extraction (AMR):
end visit leave

Table 12: Illustrative example of the preprocessing done in the proposed text-AMR unsupervised WMD Overall and WMD Verb Overall metrics.

Corruption strategy | WMD Mean / Std dev. | SIM Mean / Std dev.
---|---|---
Original | 0.0 / 0.0 | 1.0 / 0.0
Stop Words | 0.1049 / 0.0761 | 0.9373 / 0.0734
Lowercase | 0.0754 / 0.1319 | 0.9946 / 0.0321
Stop Words + Lowercase | 0.1663 / 0.1347 | 0.9373 / 0.0734
POS | 0.1081 / 0.0731 | 0.8680 / 0.1929
Stop Words + Lowercase + POS | 0.1966 / 0.1438 | 0.8492 / 0.1957
Synonym Replacement | 0.1864 / 0.1266 | 0.6946 / 0.1832
Synonym + Stop Words | 0.2228 / 0.1335 | 0.6545 / 0.1979
Synonym + Lowercase | 0.2133 / 0.1511 | 0.6946 / 0.1832
Synonym + Stop Words + Lowercase | 0.2476 / 0.1534 | 0.6545 / 0.1979
Synonym + POS | 0.2320 / 0.1306 | 0.5948 / 0.2209
Synonym + Stop Words + Lowercase + POS | 0.2686 / 0.1525 | 0.5817 / 0.2243

Table 13: WMD scores for various corruption strategies against the original sentence on the test set of the AMR 3.0 dataset. POS refers to removing tags other than nouns, verbs and adjectives from the sentence.

We also study the behavior of the text-AMR WMD score juxtaposed with text-text WMD scores under varying degrees of similarity between the compared sequences.
For this, we use the diverse paraphraser trained by Krishna et al. (2020) on the test set of the AMR 3.0 dataset, and generate paraphrases with varying nucleus sampling p-values Holtzman et al. (2019) from 0.0 to 1.0 with a step size of 0.1. We notice that the text-text WMD scores (blue line in Fig. 5) and the text-AMR WMD scores (red line in Fig. 5) remain similar to each other throughout.

Figure 5: WMD scores on the AMR 3.0 dataset for sentence-paraphrase pairs (blue) and AMR-paraphrase pairs (red). The shaded region denotes standard deviations from the mean.

We present the WMD Overall and WMD Verb Overall scores for the CDS dataset in Table 14, validating the content retention of the parsed AMR graphs for the different strategies. We notice that the Iterative T-STAR Encoder outperforms the T-STAR Encoder-Flt ($\delta$=0.7) and T-STAR Encoder (unfiltered) baselines on both WMD Overall and WMD Verb Overall. Even though comparable, its content preservation is still somewhat worse than that of the Vanilla T-STAR Encoder. However, we attribute this to more style information being retained by the Vanilla T-STAR Encoder (refer to Section 6.2), which leads to poorer performance on the downstream text style transfer task.

Models | Styles | WMD Overall | Verb WMD Overall
---|---|---|---
Vanilla T-STAR Encoder | Bible | 0.243 | 0.409
 | Poetry | 0.272 | 0.532
 | Shakespeare | 0.322 | 0.484
 | Switchboard | 0.292 | 0.467
T-STAR Encoder (Unfiltered) | Bible | 0.339 | 0.511
 | Poetry | 0.357 | 0.599
 | Shakespeare | 0.403 | 0.552
 | Switchboard | 0.341 | 0.537
T-STAR Encoder Flt ($\delta$=0.7) | Bible | 0.290 | 0.461
 | Poetry | 0.322 | 0.568
 | Shakespeare | 0.365 | 0.522
 | Switchboard | 0.323 | 0.512
Iterative T-STAR Encoder Flt ($\delta$=0.7) | Bible | 0.281 | 0.439
 | Poetry | 0.300 | 0.550
 | Shakespeare | 0.344 | 0.500
 | Switchboard | 0.301 | 0.491

Table 14: WMD Overall and WMD Verb Overall scores for unsupervised evaluation of content preservation of different models across different styles in the CDS dataset Krishna et al. (2020).
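The AMR-side preprocessing for WMD Overall can be sketched as follows. This is our rough re-implementation of the description above (the function name is ours): it drops role tokens, parentheses, variable assignments and PropBank sense suffixes from a linearized AMR string. Re-entrant variables (a bare variable reused without `/`) are kept rather than resolved, so it only approximates the paper's extraction.

```python
import re

ROLE = re.compile(r"^:")       # AMR role tokens such as :ARG0, :op1, :mod
SENSE = re.compile(r"-\d+$")   # PropBank sense suffix, e.g. "say-01" -> "say"

def amr_content_tokens(amr):
    """Extract content-bearing tokens from a linearized AMR string."""
    # Pad parentheses so they become standalone tokens, then split on whitespace.
    toks = amr.replace("(", " ( ").replace(")", " ) ").split()
    out, i = [], 0
    while i < len(toks):
        t = toks[i]
        if t in ("(", ")", "/") or ROLE.match(t):
            i += 1
            continue
        # Skip a variable when it introduces a concept, e.g. "e / eat-01".
        if i + 1 < len(toks) and toks[i + 1] == "/":
            i += 2
            continue
        # Strip surrounding (straight or curly) quotes and the sense suffix.
        out.append(SENSE.sub("", t.strip('"\u201c\u201d')))
        i += 1
    return out
```

On the AMR of row 1 in Table 19, for instance, this yields the content sequence `eat dog under table crumb child still`, which can then be compared against the stopword-filtered sentence via gensim's WMD.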
## Appendix E Data Augmentation for T-STAR Decoder

We hypothesize that the T-STAR decoder's performance will improve if the underlying model is better at generating text given an AMR graph. To this end, we create a synthetic dataset using sentences from the Wikipedia corpus. We sample 10 million sentences from it and generate the corresponding AMRs using our vanilla T-STAR model. We further filter the samples using the WMD Overall metric described in Appendix D, keeping samples with a WMD score below 0.15, which results in 2.3M instances. We first fine-tune the T5-Base model for the AMR-to-Text task on this filtered dataset. We obtain a BLEU score of 49.13 on the Gold AMR test set. Note that this performance is very close to the state-of-the-art result of 49.2 BLEU for this task Bai et al. (2022). Table 15 lists the different filtering strategies and dataset sizes we experimented with to identify the best strategy to improve the performance.

Dataset | BLEU Score
---|---
GoldAMR (Baseline) | 44.36
Wikipedia 1M (Unfiltered) | 48.14
Wikipedia 120K - Filter ($<$0.15) | 44.56
Wikipedia 1M - Filter ($<$0.15) | 48.58
Wikipedia 2.3M - Filter ($<$0.15) | 49.14

Table 15: Fine-tuning the model for the AMR-to-Text generation task with data augmentation using Wikipedia sentences.

We then compare the performance of the best performing model on the style transfer task against STRAP. We observe that this model does not beat the STRAP performance across the various style directions. Therefore, we conclude that vanilla fine-tuning of the model for the AMR-to-Text task does not necessarily boost performance on the downstream tasks.

## Appendix F Error Analysis

### F.1 Comparison on Meaning Preservation

We present the results across all 12 directions for the comparative content preservation analysis in Tables 16 and 17 respectively. We can observe that for every direction our models are consistently better in content preservation than the STRAP model.
Direction | TSTAR $>$ STRAP | TSTAR $<$ STRAP | TSTAR $=$ STRAP
---|---|---|---
bible $\rightarrow$ poetry | 60.8% | 37.6% | 1.6%
bible $\rightarrow$ shak. | 66.6% | 32.2% | 1.2%
bible $\rightarrow$ switch. | 67% | 30.4% | 2.6%
poetry $\rightarrow$ bible | 68.6% | 29.4% | 2%
poetry $\rightarrow$ shak. | 78.6% | 19.6% | 1.8%
poetry $\rightarrow$ switch. | 69.4% | 27.4% | 3.2%
shak. $\rightarrow$ bible | 73% | 24.8% | 2.2%
shak. $\rightarrow$ poetry | 66.2% | 27.4% | 3.2%
shak. $\rightarrow$ switch. | 73.6% | 24% | 2.4%
switch. $\rightarrow$ bible | 73% | 24% | 3%
switch. $\rightarrow$ poetry | 73.6% | 23.4% | 3%
switch. $\rightarrow$ shak. | 79.4% | 18.6% | 4%

Table 16: Comparative analysis of the TSTAR and STRAP models to understand which model generates more meaning-preserving outputs.

Direction | TSTAR $=$ STRAP | TSTAR $<$ STRAP | TSTAR $>$ STRAP
---|---|---|---
bible $\rightarrow$ switch. | 1.8 | 31.6 | 66.6
poetry $\rightarrow$ switch. | 2 | 15.6 | 82.4
shak. $\rightarrow$ switch. | 4 | 20.6 | 75.4
switch. $\rightarrow$ bible | 1.4 | 19.4 | 79.2
switch. $\rightarrow$ poetry | 2.2 | 16.2 | 81.6
switch. $\rightarrow$ shak. | 0.8 | 13 | 86.2
bible $\rightarrow$ poetry | 1.8 | 27.8 | 70.4
bible $\rightarrow$ shak. | 0.8 | 28.8 | 70.4
poetry $\rightarrow$ bible | 2 | 20.2 | 77.8
poetry $\rightarrow$ shak. | 2.8 | 12.2 | 85
shak. $\rightarrow$ bible | 2.4 | 19.6 | 78

Table 17: Comparison on content preservation using human evaluations of Iterative T-STAR and STRAP across all 12 directions.

 | STRAP | TSTAR | Itr-TSTAR | STRAP | TSTAR | Itr-TSTAR | STRAP | TSTAR | Itr-TSTAR | STRAP | TSTAR | Itr-TSTAR
---|---|---|---|---|---|---|---|---|---|---|---|---
Direction | No Error $\uparrow$ | | | Hallucination $\downarrow$ | | | Incomplete $\downarrow$ | | | Semantic Drift $\downarrow$ | |
bible$\rightarrow$poetry | 42 | 43 | 62 | 98 | 68 | 70 | 239 | 270 | 228 | 121 | 119 | 140
bible$\rightarrow$shakespeare | 45 | 72 | 93 | 224 | 76 | 87 | 55 | 190 | 149 | 176 | 162 | 171
bible$\rightarrow$switchboard | 54 | 98 | 106 | 203 | 65 | 87 | 59 | 189 | 132 | 184 | 148 | 175
poetry$\rightarrow$bible | 46 | 85 | 110 | 257 | 232 | 175 | 22 | 27 | 15 | 175 | 156 | 200
poetry$\rightarrow$shakespeare | 43 | 134 | 166 | 224 | 89 | 79 | 50 | 53 | 47 | 183 | 224 | 208
poetry$\rightarrow$switchboard | 80 | 142 | 160 | 171 | 63 | 39 | 63 | 82 | 65 | 186 | 213 | 236
shakespeare$\rightarrow$bible | 44 | 69 | 120 | 253 | 203 | 186 | 19 | 43 | 14 | 184 | 185 | 180
shakespeare$\rightarrow$poetry | 44 | 55 | 85 | 187 | 190 | 161 | 77 | 89 | 69 | 192 | 166 | 187
shakespeare$\rightarrow$switchboard | 70 | 132 | 161 | 151 | 69 | 62 | 42 | 98 | 47 | 237 | 201 | 230
switchboard$\rightarrow$bible | 50 | 85 | 129 | 241 | 213 | 221 | 42 | 54 | 14 | 167 | 148 | 135
switchboard$\rightarrow$poetry | 49 | 98 | 114 | 165 | 113 | 97 | 140 | 165 | 123 | 146 | 124 | 166
switchboard$\rightarrow$shakespeare | 62 | 149 | 186 | 189 | 87 | 92 | 77 | 87 | 31 | 172 | 177 | 191

Table 18: Error-type analysis across the three models STRAP, TSTAR and Iterative-TSTAR across all four styles.

### F.2 Error Analysis per direction

In this section, we present the error analysis for each direction in Table 18. We observe that the T-STAR and Iterative T-STAR models are consistently better on No Error and Hallucination across all the directions.
Moreover, we observe that for 7 out of 12 directions, the Iterative T-STAR model is better than STRAP on the Incompleteness error. Note that while our T-STAR model was under-performing here, another iteration of model improvement increases that number significantly. Note that all the models show high error rates for Semantic Drift, and improving the model for this type of error can be explored in future work.

S.No | Input Stylized Sentence | Output Stylized Sentence | TSTAR-Encoder AMR
---|---|---|---
1. | And she answered and said unto him, Yes, Lord: yet the dogs under the table eat of the children's crumbs. | Yet the dogs under the table Eat children's crumbs. | (e / eat-01 :ARG0 (d / dog :location (u / under :op1 (t / table))) :ARG1 (c / crumb :poss (c2 / child)) :mod (s / still))
2. | And the priest said unto them, Go in peace: before the LORD is your way wherein ye go. | Before the Lord go in peace on high; | (g / go-02 :mode imperative :ARG0 (y / you) :manner (p / peace) :time (b / before :op1 (p2 / person :name (n / name :op1 ”Lord”))))
3. | And there were made on them, on the doors of the temple, cherubims and palm trees, like as were made upon the walls; and there were thick planks upon the face of the porch without. | The cherubims and palm-trees are at the temple doors, And the thick planks on the porch face. | (a / and :op1 (b / be-located-at-91 :ARG1 (a2 / and :op1 (c / cherubim) :op2 (t / tree :mod (p / palm))) :ARG2 (d / door :part-of (t2 / temple))) :op2 (b2 / be-located-at-91 :ARG1 (p2 / plank :ARG1-of (t3 / thick-03)) :ARG2 (f / face :part-of (p3 / porch))))
4. | and uh they've had catalytic con- you know catalytic converters on those on uh i think they're required in California and Oregon and and Washington | In Caledonia, Oregon, and Washington catalytic conversion required. | (r / require-01 :ARG1 (t / thing :ARG0-of (c / convert-01 :mod (c2 / catalytic))) :location (a / and :op1 (s / state :name (n / name :op1 ”California”)) :op2 (s2 / state :name (n2 / name :op1 ”Oregon”)) :op3 (s3 / state :name (n3 / name :op1 ”Washington”))))
5. | That crowns a lofty clift, which proudly towers | crowns the lofty towers | (c / crown-01 :ARG0 (ii / it) :ARG1 (t / tower :mod (l / lofty)))
6. | And the rough goat is the king of Grecia: and the great horn that is between his eyes is the first king. | The rough goat o'er the king of Greece | (h / have-org-role-91 :ARG0 (g / goat :ARG1-of (r / rough-04)) :ARG1 (c / country :name (n / name :op1 ”Greece”)) :ARG2 (k / king))

Table 19: Samples from the human evaluations where the T-STAR output was marked incomplete. We observe that entities and relations missing in the output sentence were not present in their corresponding intermediate AMRs.

Input Stylized Sentence | Output Stylized Sentence | Input generated AMR | Potential Error
---|---|---|---
Ay, mine own fortune in my misery. | my fortune is my misery | (m / misery :poss (ii / i) :domain (f / fortune :poss ii)) | T-STAR Decoder
yeah it makes you wonder um had the jury been a different group of people | It maketh you to wonder at the diversities of the jury. | (m / make-02 :ARG0 (ii / it) :ARG1 (w / wonder-01 :ARG0 (y / you) :ARG1 (d / differ-02 :ARG1 (j / jury)))) | T-STAR Decoder
Tis not Maria's whispering call; | It is not whispered by Maria. | (w / whisper-01 :polarity - :ARG0 (p / person :name (n / name :op1 ”Maria”)) :ARG1 (ii / it)) | T-STAR Encoder
Yea,” quoth he, “Dost thou fall upon thy face? | Quoth he, ”Dost thou fall thy face, | (q / quote-01 :ARG1 (f / fall-01 :ARG1 (y / you :part (f2 / face)) :ARG4 f2 :polarity (a / amr-unknown)) :ARG2 (h / he)) | T-STAR Encoder
Around my grave they'll wither. | They will wither about my grave. | (w / wither-01 :ARG1 (t / they) :location (a / around :op1 (g / grave :poss (ii / i)))) | T-STAR Decoder
Justice is sworn 'gainst tears, and hers would crave | uh Justice has sworn to cried and cried | (s / swear-01 :ARG0 (p / person :name (n / name :op1 ”Justice”)) :ARG1 (a / and :op1 (c / cry-02 :ARG0 p) :op2 (c2 / cry-02 :ARG0 p))) | T-STAR Encoder

Table 20: Various instances that exhibit Semantic Drift as the type of error. We manually analyzed each instance and hypothesize that the potential error lies in the listed module.

### F.3 Qualitative Analysis for Incompleteness and Semantic Drift

As we use interpretable intermediate representations, it is easy to understand the intuition behind these errors, and to broadly identify which module (encoder or decoder) needs to be improved further. We therefore studied a few instances and analyzed the generated intermediate AMRs to understand the reasons for the high numbers of Semantic Drift and Incomplete errors. We list some instances in Table 20 and Table 19. Across the various instances that we analyzed, we observe that the generated AMRs did not encode the complete information themselves: either some entities were missing (examples 1, 4, 5 and 6 in Table 19), or, when clauses were separated using ":" or ";", only one of the clauses was parsed into the intermediate graph (examples 2 and 3 in Table 19). For Semantic Drift, we observed that the errors arose from shortcomings in both modules, i.e., the meaning changed either when the encoder did not generate a faithful AMR graph, or when the decoder was not able to interpret the AMR correctly. We list the module that could be the potential source of the error in the last column of Table 20. It is important to note that the source of errors is easy to identify precisely because we use a robust, interpretable and symbolic representation as the pivot to transfer from style A to style B.
We have also provided a case study of the performance of the various baselines and the proposed model on the CDS dataset in Table 7.
# On Fully Nonlinear Loewner-Nirenberg Problem of Ricci curvature

Zhenan Sui Institute for Advanced Study in Mathematics of HIT, Harbin Institute of Technology, Harbin, China<EMAIL_ADDRESS>

###### Abstract.

We prove the existence of a smooth complete conformal metric with constant $k$th elementary symmetric function of negative Ricci curvature under certain conditions on general domains in Euclidean space. We then formulate this problem for more general equations.

###### 2010 Mathematics Subject Classification: Primary 53C21; Secondary 35J60

## 1\. Introduction

In this paper, we discuss the fully nonlinear version of the Loewner-Nirenberg problem: let $\Omega\subsetneq\mathbb{R}^{n}$ be a domain in Euclidean space with $n\geq 3$ and $g$ the Euclidean metric. Assume that $\partial\Omega$ consists of a finite number of disjoint, non-self-intersecting, smooth, compact surfaces. We want to find a smooth complete metric $g_{u}=u^{\frac{4}{n-2}}g$ with $u>0$, which satisfies (1.1) $\sigma_{k}\Big{(}-g_{u}^{-1}\text{Ric}_{g_{u}}\Big{)}:=\sigma_{k}\Big{(}\lambda\big{(}-g_{u}^{-1}\text{Ric}_{g_{u}}\big{)}\Big{)}=1\quad\text{ in }\Omega,$ where $\text{Ric}_{g}$ is the Ricci tensor of $g$, $\lambda(A)=(\lambda_{1},\ldots,\lambda_{n})$ are the eigenvalues of the matrix $A$, and $\sigma_{k}(\lambda)=\sum\limits_{1\leq i_{1}<\cdots<i_{k}\leq n}\lambda_{i_{1}}\cdots\lambda_{i_{k}}$ is the $k$th elementary symmetric function defined on Garding’s cone $\Gamma_{k}=\\{\lambda\in\mathbb{R}^{n}:\sigma_{j}(\lambda)>0,\,j=1,\ldots,k\\}.$ Under the conformal deformation of metric $g_{u}=u^{\frac{4}{n-2}}g$, the Ricci tensors $\text{Ric}_{g_{u}}$ and $\text{Ric}_{g}$ are related by the formula $\frac{n-2}{2}\mbox{Ric}_{g_{u}}=\frac{n-2}{2}\mbox{Ric}_{g}-(n-2)\frac{\nabla^{2}u}{u}-\bigg{(}\frac{\Delta u}{u}+\frac{|\nabla u|^{2}}{u^{2}}\bigg{)}g+\frac{n}{u^{2}}du\otimes du,$ where $\nabla{u}$, $\nabla^{2}u$ and $\Delta u$ are the gradient, Hessian and Laplace-Beltrami operator of $u$ with respect to the
background metric $g$, respectively. We note that $\mbox{Ric}_{g}\equiv 0$ if $g$ is Euclidean. Consequently, the geometric problem (1.1) is equivalent to the second order partial differential equation (1.2) $\sigma_{k}^{\frac{1}{k}}\big{(}W[u]\big{)}=\frac{n-2}{2}u^{\frac{n+2}{n-2}}\quad\text{ in }\Omega$ along with the boundary condition (1.3) $u=\infty\quad\text{ on }\partial\Omega,$ where (1.4) $W[u]=g^{-1}\bigg{(}(n-2)\nabla^{2}u-\frac{n}{u}du\otimes du+\Big{(}\Delta u+\frac{|\nabla u|^{2}}{u}\Big{)}g-\frac{n-2}{2}u\text{Ric}_{g}\bigg{)}.$ We shall call a $C^{2}$ function $u$ $k$-admissible (or admissible if there is no ambiguity) in $\Omega$ if $\lambda\big{(}W[u]\big{)}(x)\in\Gamma_{k}$ for any $x\in\Omega$. We note that equation (1.2) is elliptic if $u$ is admissible. For convenience, we call equation (1.2) the conformal $k$-Ricci curvature equation. Our goal in this paper is to seek smooth positive admissible solutions to (1.2). When $k=1$, (1.2)-(1.3) reduces to (1.5) $\left\\{\begin{aligned} \frac{4(n-1)}{n-2}\Delta u-S_{g}u=&u^{\frac{n+2}{n-2}}\quad\text{ in }\,\,\Omega,\\\ u=&\infty\quad\text{ on }\,\,\partial\Omega,\end{aligned}\right.$ where $S_{g}$ is the scalar curvature with respect to $g$. In this case, the above problem is known as the Loewner-Nirenberg problem. If $\Gamma$ is a smooth compact submanifold of $\mathbb{R}^{n}$ of codimension $m$ such that $\partial\Omega\setminus\Gamma$ is also compact, Loewner and Nirenberg [13] proved the existence of a smooth conformally flat metric with constant negative scalar curvature in $\Omega$ satisfying $u(x)\rightarrow\infty\quad\text{ as }x\rightarrow\Gamma$ if $m<\frac{n}{2}+1$. They also gave the nonexistence result if $m>\frac{n}{2}+1$ and conjectured the nonexistence for the borderline case $m=\frac{n}{2}+1$, which was later proved by Aviles and Véron [2, 15]. When $k>1$, equation (1.2) becomes fully nonlinear. Following [5], we call the above problem the fully nonlinear Loewner-Nirenberg problem.
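As a quick check of the reduction to (1.5) stated above (this computation is ours, not part of the original text): taking the trace of $W[u]$ in (1.4),

```latex
\sigma_{1}\big(W[u]\big)
  =(n-2)\Delta u-\frac{n}{u}|\nabla u|^{2}
   +n\Big(\Delta u+\frac{|\nabla u|^{2}}{u}\Big)
   -\frac{n-2}{2}\,S_{g}u
  =2(n-1)\Delta u-\frac{n-2}{2}\,S_{g}u,
```

so (1.2) with $k=1$ reads $2(n-1)\Delta u-\frac{n-2}{2}S_{g}u=\frac{n-2}{2}u^{\frac{n+2}{n-2}}$, and multiplying by $\frac{2}{n-2}$ gives exactly the first equation of (1.5).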
If $\text{Ric}_{g}$ in (1.1) is replaced by the Schouten tensor $A_{g}$: $A_{g}=\frac{1}{n-2}\Big{(}Ric_{g}-\frac{S_{g}}{2(n-1)}g\Big{)},$ González, Li and Luc [5] proved the existence of a Lipschitz continuous complete conformally flat metric $g_{u}=u^{\frac{4}{n-2}}g$ satisfying $\sigma_{k}\Big{(}-g_{u}^{-1}A_{g_{u}}\Big{)}=1$ if the vector $u_{m}=\big{(}\underbrace{1,\ldots,1}_{n-m+1},\underbrace{-1,\ldots,-1}_{m-1}\big{)}\in\Gamma_{k}.$ They also gave the nonexistence result if $u_{m}\notin\overline{\Gamma}_{k}$. The borderline case (1.6) $u_{m}\in\partial\Gamma_{k}$ remains open. Motivated by the above literature, we investigate the fully nonlinear Loewner-Nirenberg problem of Ricci curvature. Different from the Schouten tensor case, we are able to obtain smooth solutions, thanks to the interior second order estimates of Guan [6] and Evans-Krylov theory [3, 11]. Our first result is an existence result when $\partial\Omega$ is composed of hypersurfaces. ###### Theorem 1.1. Let $\Omega\subsetneq\mathbb{R}^{n}$ be a domain whose boundary $\partial\Omega$ consists of finitely many disjoint non-self-intersecting smooth compact hypersurfaces. Then there exists a smooth complete metric $g_{u}=u^{\frac{4}{n-2}}g$ satisfying (1.1), or equivalently, there exists a smooth positive admissible solution $u$ to (1.2)-(1.3). Moreover, the conformal factor $u$ has the following growth rate (1.7) ${\rho}^{\frac{n}{2}-1}(x)u(x)\rightarrow\big{(}(n-1)(C_{n}^{k})^{\frac{1}{k}}\big{)}^{\frac{n-2}{4}}\quad\text{ as }x\rightarrow\partial\Omega,$ where $\rho(x)$ is the distance of $x$ to $\partial\Omega$. If $\Omega$ is bounded or $k=1$, the solution $u$ is unique. We remark that on smooth compact Riemannian manifolds with boundary, the growth rate (1.7) was discovered by Gursky, Streets and Warren [10]. In Theorem 2.8, we shall give a simplified proof for general smooth manifolds with compact boundaries.
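As a consistency check on the constant in (1.7) (this computation is ours): the radial model solution on a ball given later in Proposition 2.4, $u=c\,(s^{2}-|x-x_{0}|^{2})^{1-\frac{n}{2}}$ with $c=\big(4(n-1)(C_{n}^{k})^{\frac{1}{k}}s^{2}\big)^{\frac{n-2}{4}}$, attains this rate exactly: near $\partial B_{s}(x_{0})$, with $\rho=s-|x-x_{0}|$,

```latex
s^{2}-|x-x_{0}|^{2}=\rho\,(2s-\rho)\sim 2s\rho,
\qquad
\rho^{\frac{n}{2}-1}u\;\longrightarrow\;c\,(2s)^{1-\frac{n}{2}}
  =\Big(4(n-1)(C_{n}^{k})^{\frac{1}{k}}\Big)^{\frac{n-2}{4}}\,2^{-\frac{n-2}{2}}
  =\Big((n-1)(C_{n}^{k})^{\frac{1}{k}}\Big)^{\frac{n-2}{4}},
```

since $4^{\frac{n-2}{4}}=2^{\frac{n-2}{2}}$, which matches the boundary rate of (1.7).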
Our main result on the solvability of fully nonlinear Loewner-Nirenberg problem of Ricci curvature is as follows. ###### Theorem 1.2. Let $\Omega\subsetneq\mathbb{R}^{n}$ be a domain. Suppose that the boundary $\partial\Omega$ consists of finitely many disjoint non-self-intersecting smooth compact surfaces, and the maximal solution $u_{\Omega}$ of (1.2) satisfies $u_{\Omega}\not\equiv 0$. If each component of $\partial\Omega$ has codimension $m$ satisfying (1.8) $v_{m}=\big{(}\underbrace{n-m,\ldots,n-m}_{n-m+1},\underbrace{2-m,\ldots,2-m}_{m-1}\big{)}\in\Gamma_{k},$ then there exists a smooth complete metric $g_{u}=u^{\frac{4}{n-2}}g$ satisfying (1.1), or equivalently, there exists a smooth positive admissible solution to (1.2)-(1.3). If some component has codimension $m$ satisfying $v_{m}\notin\overline{\Gamma}_{k}$, then there does not exist a complete conformal metric satisfying (1.1), or equivalently, equation (1.2)-(1.3) is not solvable. The borderline case $v_{m}\in\partial\Gamma_{k}$ is still open. The difficulty lies in the lack of a suitable upper barrier. We observe that when $k=1$, the borderline case (1.6) agrees with (1.8), which was solved in [2, 15] by some integral estimates. We wish to reinvestigate the borderline case in the future. Motivated by [9], the second goal in this paper is to investigate the existence of complete conformal metrics subject to the following more general equation (1.9) $\sigma_{k}\Big{(}-g_{u}^{-1}\text{Ric}_{g_{u}}\Big{)}+\alpha(x)\sigma_{k-1}\Big{(}-g_{u}^{-1}\text{Ric}_{g_{u}}\Big{)}=\alpha_{0}(x),$ where $\alpha_{0}(x)>0$ and $\alpha(x)$ are real valued smooth functions on $\overline{\Omega}$. 
Under the conformal deformation of metric $g_{u}=u^{\frac{4}{n-2}}g$, (1.9) is equivalent to (1.10) $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\alpha(x)\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\alpha_{0}(x).$ As proved in [9], equation (1.10) is elliptic when $u$ is $(k-1)$-admissible. In what follows, we may assume that $k\geq 2$ in (1.10). Our first result associated to (1.10) shows that, similar to (1.2), there exist smooth positive $(k-1)$-admissible solutions to Dirichlet problems. ###### Theorem 1.3. Let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$ whose boundary is composed of finitely many disjoint non-self-intersecting smooth compact hypersurfaces. For any smooth positive function $\varphi$ defined on $\partial\Omega$, there exists a smooth positive $(k-1)$-admissible solution $u$ to equation (1.10) which satisfies the boundary condition (1.11) $u=\varphi\quad\text{ on }\partial\Omega.$ The proof of Theorem 1.3 relies on the establishment of $C^{2}$ a priori estimates (see Theorems 4.2, 4.4, 4.5) and the existence of a subsolution. We notice that the method for the $C^{2}$ estimates is similar to that of Guan [6] but is more complicated due to some extra terms. In addition, the estimates in section 4 depend explicitly on $\inf\alpha_{0}$. Also, we give a new proof of the second order boundary estimate. For deriving preliminary estimates and conducting the continuity process, we construct a smooth subsolution to the Dirichlet problem (1.10)–(1.11), synthesizing the ideas of Guan [6] and Guan [8]. Our next results associated to (1.10) concern the formulation of the fully nonlinear Loewner-Nirenberg type problem. When the boundary $\partial\Omega$ is composed of hypersurfaces, we have the following result. ###### Theorem 1.4. Let $\Omega$ be a domain in $\mathbb{R}^{n}$ with nonempty smooth compact boundary which is composed of closed hypersurfaces.
Then there exists a smooth complete metric $g_{u}=u^{\frac{4}{n-2}}g$ satisfying (1.9), or equivalently, there exists a smooth positive $(k-1)$-admissible solution $u$ to equation (1.10)–(1.3), provided that $\inf\limits_{\Omega}\alpha_{0}(x)>0,\quad\inf\limits_{\Omega}\alpha(x)>-\infty,\quad\sup\limits_{\Omega}\alpha_{0}(x)<\infty,\quad\sup\limits_{\Omega}\alpha(x)<\infty.$ The proof of Theorem 1.4 relies further on the interior second order estimates for equation (1.10), which are derived in Theorem 4.1, 4.3. Also, we need to find a global upper barrier and a global lower barrier to conduct the diagonal process. The latter is given by Proposition 5.4 and the former is provided by Loewner and Nirenberg [13]. When $\Omega$ is a general domain in $\mathbb{R}^{n}$ with smooth compact boundary, in order to define the maximal solution (see Definition 5.6), we have to assume that $\alpha(x)\leq 0$ in $\Omega$ so that we can apply the maximum principle Theorem 5.3. We obtain the following result. ###### Theorem 1.5. Let $\Omega\subsetneq\mathbb{R}^{n}$ be a domain with finitely many disjoint non-self-intersecting smooth compact surfaces as the boundary. Suppose that $\alpha(x)\leq 0\text{ in }\Omega,\quad\inf\limits_{\Omega}\alpha_{0}(x)>0,\quad\inf\limits_{\Omega}\alpha(x)>-\infty,\quad\sup\limits_{\Omega}\alpha_{0}(x)<\infty.$ In addition, suppose that the maximal solution $u_{\Omega}$ of (1.10) satisfies $u_{\Omega}\not\equiv 0\quad\text{ in }\Omega.$ If each component of $\partial\Omega$ has codimension $m$ satisfying $v_{m}\in\Gamma_{k}$, then there exists a smooth complete metric $g_{u}=u^{\frac{4}{n-2}}g$ satisfying (1.9), or equivalently, there exists a smooth positive $k$-admissible solution to (1.10)–(1.3). If some component has codimension $m$ satisfying $v_{m}\notin\overline{\Gamma}_{k}$, then there does not exist a complete conformal metric satisfying (1.9), or equivalently, equation (1.10)–(1.3) is not solvable. This paper is organized as follows. 
We prove Theorem 1.1 in section 2 and Theorem 1.2 in section 3. Section 4 is devoted to $C^{2}$ a priori estimates and interior $C^{2}$ estimates for equation (1.10). Then we apply the continuity method and degree theory to prove Theorem 1.3. In section 5, we discuss the fully nonlinear Loewner-Nirenberg problem associated to equation (1.10) and give the proofs of Theorem 1.4 and Theorem 1.5. ## 2\. Solutions on Euclidean domains with smooth compact boundary consisting of closed hypersurfaces In this paper, we shall mainly discuss admissible solutions to equation (1.2)-(1.3) in Euclidean domains which have smooth compact boundaries. Within this section, we focus on the case when these boundaries are composed of closed hypersurfaces. ### 2.1. Preliminaries ###### Definition 2.1. A function $0<\underline{u}\in C^{2}(\Omega)$ is a subsolution of (1.2) in $\Omega$ if $\lambda\big{(}W[\underline{u}]\big{)}\in\Gamma_{k}\text{ and }\sigma_{k}^{\frac{1}{k}}\big{(}W[\underline{u}]\big{)}\geq\frac{n-2}{2}\underline{u}^{\frac{n+2}{n-2}}\quad\text{ in }\Omega.$ A function $0<\overline{u}\in C^{2}(\Omega)$ is a supersolution of (1.2) in $\Omega$ if $\text{either }\lambda\big{(}W[\overline{u}]\big{)}\notin\Gamma_{k}\text{ or }\sigma_{k}^{\frac{1}{k}}\big{(}W[\overline{u}]\big{)}\leq\frac{n-2}{2}\overline{u}^{\frac{n+2}{n-2}}\quad\text{ in }\Omega.$ In view of the form of equation (1.2), we first give a general property of supersolutions and subsolutions. ###### Proposition 2.2. If $u$ is a positive subsolution of (1.2), so is $cu$ for any constant $0<c<1$. If $u$ is a positive supersolution of (1.2), so is $cu$ for any constant $c>1$; if in addition $\lambda\big{(}W[u]\big{)}\notin\Gamma_{k}$ everywhere in $\Omega$, then $cu$ is a supersolution of (1.2) for any $c>0$. ###### Proof.
The conclusion follows from the fact that $W[cu]=cW[u]$ and $\sigma_{k}^{\frac{1}{k}}(W[cu])=\sigma_{k}^{\frac{1}{k}}(cW[u])=c\sigma_{k}^{\frac{1}{k}}(W[u])\quad\text{if }\lambda\big{(}W[u]\big{)}\in\Gamma_{k}.$ ∎ We will also need the following maximum principle. The proof is similar to that in [10]. ###### Theorem 2.3. Let $M$ be a smooth compact manifold with boundary. Suppose that $u$ and $v$ are a $C^{2}$ positive subsolution and supersolution of (1.2), respectively, on $M$. If $u\leq v$ on $\partial M$, then $u\leq v$ on $M$. ###### Proof. Suppose that $u>v$ somewhere in the interior of $M$. Let $C$ be the maximum of $\frac{u}{v}$ on $M$, which is attained at $x_{0}$ in the interior of $M$. Since $C>1$, by Proposition 2.2 we know that $w=\frac{u}{C}$ is a strict subsolution, that is, $\sigma_{k}^{\frac{1}{k}}(W[w])>\frac{n-2}{2}w^{\frac{n+2}{n-2}}.$ On the other hand, since $w(x_{0})=v(x_{0})$ while $w\leq v$ near $x_{0}$, at $x_{0}$ we have $\nabla w(x_{0})=\nabla v(x_{0}),\quad\nabla^{2}w(x_{0})\leq\nabla^{2}v(x_{0})$ and consequently $W[w](x_{0})\leq W[v](x_{0})$. It follows that $\sigma_{k}^{\frac{1}{k}}(W[v])(x_{0})\geq\sigma_{k}^{\frac{1}{k}}(W[w])(x_{0})>\frac{n-2}{2}w^{\frac{n+2}{n-2}}(x_{0})=\frac{n-2}{2}v^{\frac{n+2}{n-2}}(x_{0}),$ contradicting the fact that $v$ is a supersolution. ∎ Next, we provide some special solutions of equation (1.2) on balls and exteriors of balls in $\mathbb{R}^{n}$, which will be used frequently as supersolutions or subsolutions. In Euclidean coordinates, $W[u]$ can be expressed as (2.1) $W_{ij}[u]=(n-2)u_{ij}-n\frac{u_{i}u_{j}}{u}+\Big{(}\Delta u+\frac{|\nabla u|^{2}}{u}\Big{)}\delta_{ij}.$ Let $B_{R}(x_{0})=\big{\\{}x\in\mathbb{R}^{n}\big{|}|x-x_{0}|<R\big{\\}}.$ We can verify the following proposition. ###### Proposition 2.4.
For any fixed $s>0$, $u(x)=\Big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}s^{2}\Big{)}^{\frac{n-2}{4}}\Big{(}s^{2}-{|x-x_{0}|}^{2}\Big{)}^{1-\frac{n}{2}}\quad\text{ in }B_{s}(x_{0}),$ $v(x)=\Big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}s^{2}\Big{)}^{\frac{n-2}{4}}\Big{(}{|x-x_{0}|}^{2}-s^{2}\Big{)}^{1-\frac{n}{2}}\quad\text{ in }\mathbb{R}^{n}\setminus\overline{B_{s}(x_{0})}$ are admissible solutions of (1.2) which approach $\infty$ on $\partial B_{s}(x_{0})$. ###### Proof. We first prove that $u(x)$ is an admissible solution of (1.2). For convenience, denote $c:=\Big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}s^{2}\Big{)}^{\frac{n-2}{4}},\quad x=(x^{1},\ldots,x^{n}),\quad x_{0}=(x_{0}^{1},\ldots,x_{0}^{n}).$ Direct calculation shows that $W_{ij}[u]=2(n-1)(n-2)c\Big{(}s^{2}-|x-x_{0}|^{2}\Big{)}^{-\frac{n}{2}-1}s^{2}\delta_{ij}.$ Hence $\sigma_{k}^{\frac{1}{k}}\big{(}W[u]\big{)}=2(n-1)(n-2)c\Big{(}s^{2}-|x-x_{0}|^{2}\Big{)}^{-\frac{n}{2}-1}s^{2}\big{(}C_{n}^{k}\big{)}^{\frac{1}{k}},$ which agrees with the right-hand side $\frac{n-2}{2}u^{\frac{n+2}{n-2}}=\frac{n-2}{2}c^{\frac{n+2}{n-2}}\Big{(}s^{2}-|x-x_{0}|^{2}\Big{)}^{-\frac{n}{2}-1}.$ $v(x)$ can be verified similarly. ∎ The following result is a direct consequence of Proposition 2.4 and Theorem 2.3. ###### Corollary 2.5. For any positive admissible solution $u$ of (1.2) in $B_{R}(x_{0})$, we have (2.2) $u(x)\leq\Big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}R^{2}\Big{)}^{\frac{n-2}{4}}\Big{(}R^{2}-{|x-x_{0}|}^{2}\Big{)}^{1-\frac{n}{2}}\quad\text{ in }B_{R}(x_{0}).$ In particular, we have (2.3) $u(x_{0})\leq\Big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}\Big{)}^{\frac{n-2}{4}}R^{1-\frac{n}{2}}.$ As a consequence of (2.3), we immediately obtain ###### Corollary 2.6. There does not exist a positive admissible solution to (1.2) on $\mathbb{R}^{n}$. ###### Proof. If $u$ is a positive admissible solution to (1.2) on $\mathbb{R}^{n}$, then $u$ is obviously a positive admissible solution on $B_{R}(0)$ for any $R>0$. 
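As a quick symbolic sanity check (illustrative only, not part of the argument), Proposition 2.4 can be verified with SymPy in the model case $n=4$, $k=1$, $x_{0}=0$: taking the trace of (2.1), the gradient terms cancel and $\sigma_{1}(W[u])=2(n-1)\Delta u$, so equation (1.2) reads $6\Delta u=u^{3}$.

```python
import sympy as sp

# Sanity check of Proposition 2.4 for n = 4, k = 1, centered at the origin.
# From (2.1), trace(W[u]) = 2(n-1)*Laplacian(u), so (1.2) becomes
# 6*Laplacian(u) = u**3 when n = 4, k = 1.
x1, x2, x3, x4, s = sp.symbols('x1 x2 x3 x4 s', positive=True)
r2 = x1**2 + x2**2 + x3**2 + x4**2

# c = (4(n-1)(C_n^k)^{1/k} s^2)^{(n-2)/4} = sqrt(48)*s for n = 4, k = 1
u = sp.sqrt(48) * s * (s**2 - r2)**(-1)

lap = sum(sp.diff(u, v, 2) for v in (x1, x2, x3, x4))
residual = sp.simplify(6 * lap - u**3)
print(residual)  # 0
```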
By (2.3) we have $u(0)\leq\Big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}\Big{)}^{\frac{n-2}{4}}R^{1-\frac{n}{2}}.$ Letting $R\rightarrow\infty$ yields $u(0)\leq 0,$ which contradicts the fact that $u(0)>0$. ∎ We also obtain the decay property at $\infty$ of positive admissible solutions of (1.2). ###### Corollary 2.7. Let $\Omega$ be an unbounded domain in $\mathbb{R}^{n}$ with smooth compact boundary. If $u$ is a smooth positive admissible solution of (1.2), then (2.4) $u(x)\rightarrow 0\quad\text{ as }\,\,|x|\rightarrow\infty.$ ###### Proof. Applying (2.3) in a ball centered at $x$ of radius $|x|/2$ for $|x|$ large, we have $u(x)\leq\Big{(}4(n-1)\big{(}C_{n}^{k}\big{)}^{\frac{1}{k}}\Big{)}^{\frac{n-2}{4}}\Big{(}{\frac{|x|}{2}}\Big{)}^{1-\frac{n}{2}},$ which implies (2.4). ∎ ### 2.2. Growth rate near codimension $1$ boundary In this subsection, we assume that the boundary of $\Omega$ is composed of smooth compact hypersurfaces. For positive admissible solutions of (1.2) which approach $\infty$ at $\partial\Omega$, we characterize the growth rate. ###### Theorem 2.8. Let $(M,g)$ be a smooth manifold with boundary. Assume that $\partial M$ is compact. If $u$ is a smooth positive admissible solution of (1.2) on $M$ which approaches $\infty$ at $\partial M$, then (2.5) ${\rho}^{\frac{n}{2}-1}(x)u(x)\rightarrow\big{(}(n-1)(C_{n}^{k})^{\frac{1}{k}}\big{)}^{\frac{n-2}{4}}\quad\text{ as }x\rightarrow\partial M,$ where $\rho(x)$ is the distance from $x$ to $\partial M$. ###### Proof. We first construct a positive admissible subsolution in a neighborhood of $\partial M$. Consider $\underline{u}=c_{0}(\rho+\epsilon)^{1-\frac{n}{2}}e^{w},$ where $\epsilon$ is a small positive constant, $c_{0}=\big{(}(n-1)(C_{n}^{k})^{\frac{1}{k}}\big{)}^{\frac{n-2}{4}},$ and $w=w(\rho)=\frac{1}{\rho+\delta}-\frac{1}{\delta}$ with $\delta$ a small positive constant to be chosen later. 
For any point $x\in M\setminus\partial M$ at which $\rho$ is smooth, we choose a local orthonormal frame $e_{1},\ldots,e_{n}$ around $x$ such that $e_{1}=\frac{\partial}{\partial\rho}$ (recall that $\nabla\rho=\frac{\partial}{\partial\rho}$ and $|\nabla\rho|=1$ at $x$). Elementary calculation shows that at $x$, $\nabla\underline{u}=c_{0}e^{w}(\rho+\epsilon)^{-\frac{n}{2}}\Big{(}1-\frac{n}{2}+(\rho+\epsilon)w^{\prime}\Big{)}\nabla\rho,$ $\displaystyle\nabla^{2}\underline{u}=$ $\displaystyle c_{0}e^{w}(\rho+\epsilon)^{-\frac{n}{2}-1}\Bigg{(}\bigg{(}\Big{(}1-\frac{n}{2}\Big{)}(\rho+\epsilon)+(\rho+\epsilon)^{2}w^{\prime}\bigg{)}\nabla^{2}\rho$ $\displaystyle+\bigg{(}\frac{n}{2}\Big{(}\frac{n}{2}-1\Big{)}+(\rho+\epsilon)^{2}w^{\prime\prime}-(n-2)(\rho+\epsilon)w^{\prime}+(\rho+\epsilon)^{2}w^{\prime 2}\bigg{)}\nabla\rho\otimes\nabla\rho\Bigg{)},$ and consequently, $\displaystyle W[\underline{u}]=$ $\displaystyle c_{0}e^{w}(\rho+\epsilon)^{-\frac{n}{2}-1}\Bigg{(}\Big{(}(n-2)(\rho+\epsilon)^{2}w^{\prime}-\frac{1}{2}(n-2)^{2}(\rho+\epsilon)\Big{)}\nabla^{2}\rho$ $\displaystyle+\Big{(}(n-2)(\rho+\epsilon)^{2}w^{\prime\prime}+2(n-2)(\rho+\epsilon)w^{\prime}-2(\rho+\epsilon)^{2}w^{\prime 2}\Big{)}\nabla\rho\otimes\nabla\rho$ $\displaystyle+\bigg{(}\Big{(}(\rho+\epsilon)^{2}w^{\prime}-\frac{1}{2}(n-2)(\rho+\epsilon)\Big{)}\Delta\rho$ $\displaystyle+(\rho+\epsilon)^{2}w^{\prime\prime}-2(n-2)(\rho+\epsilon)w^{\prime}+2(\rho+\epsilon)^{2}w^{\prime 2}+\frac{1}{2}(n-1)(n-2)\bigg{)}I$ $\displaystyle-\frac{n-2}{2}(\rho+\epsilon)^{2}\text{Ric}_{g}\Bigg{)}.$ For $\rho_{0}>0$, denote $M_{\rho_{0}}=\big{\\{}x\in M\big{|}\rho(x)<\rho_{0}\big{\\}}.$ We may choose $\rho_{0}$ sufficiently small such that on $M_{\rho_{0}}$, $u>c_{0}$ and (2.6) $\Delta\rho I+(n-2)\nabla^{2}\rho\leq\lambda I,\quad\text{Ric}_{g}\leq C_{0}I,$ where $I$ is the identity matrix and $\lambda$, $C_{0}$ are positive constants depending only on $g$. 
Denoting $\displaystyle\Psi=$ $\displaystyle(n-2)(\rho+\epsilon)^{2}w^{\prime\prime}\nabla\rho\otimes\nabla\rho+(n-2)(\rho+\epsilon)^{2}w^{\prime}\nabla^{2}\rho+(\rho+\epsilon)^{2}w^{\prime\prime}I$ $\displaystyle+2(\rho+\epsilon)^{2}w^{\prime 2}(I-\nabla\rho\otimes\nabla\rho)+(\rho+\epsilon)^{2}w^{\prime}\Delta\rho I-\frac{n-2}{2}(\rho+\epsilon)^{2}\text{Ric}_{g},$ $\displaystyle\Phi=$ $\displaystyle-\frac{1}{2}(n-2)(\rho+\epsilon)\Delta\rho I-\frac{1}{2}(n-2)^{2}(\rho+\epsilon)\nabla^{2}\rho$ $\displaystyle+2(n-2)(\rho+\epsilon)w^{\prime}(\nabla\rho\otimes\nabla\rho-I)+\frac{1}{2}(n-1)(n-2)I,$ then $W[\underline{u}]=c_{0}e^{w}(\rho+\epsilon)^{-\frac{n}{2}-1}(\Psi+\Phi).$ Note that $w^{\prime}=\frac{-1}{(\rho+\delta)^{2}}<0,\quad w^{\prime\prime}=\frac{2}{(\rho+\delta)^{3}}>0.$ In view of (2.6) we obtain $\Psi\geq(\rho+\epsilon)^{2}\Big{(}w^{\prime\prime}+\lambda w^{\prime}-\frac{n-2}{2}C_{0}\Big{)}I.$ We note that $w^{\prime\prime}+\lambda w^{\prime}-\frac{n-2}{2}C_{0}=\frac{2}{(\rho+\delta)^{3}}-\frac{\lambda}{(\rho+\delta)^{2}}-\frac{n-2}{2}C_{0}\geq 0$ for $\rho_{0}+\delta$ sufficiently small depending only on $n$, $\lambda$ and $C_{0}$. Hence we conclude that $\Psi\geq 0$. 
Also, by (2.6) and for $\rho_{0}+\delta\leq\sqrt{\frac{2}{\lambda}}$, we have $\displaystyle\Phi\geq$ $\displaystyle-\frac{n-2}{2}(\rho+\epsilon)\lambda I+\frac{2(n-2)(\rho+\epsilon)}{(\rho+\delta)^{2}}(I-\nabla\rho\otimes\nabla\rho)+\frac{1}{2}(n-1)(n-2)I$ $\displaystyle\geq$ $\displaystyle\frac{1}{2}(n-1)(n-2)\left(\begin{array}[]{cccc}1-\frac{\lambda(\rho+\epsilon)}{n-1}&&&\\\ &1+\frac{\lambda(\rho+\epsilon)}{n-1}&&\\\ &&\ddots&\\\ &&&1+\frac{\lambda(\rho+\epsilon)}{n-1}\\\ \end{array}\right).$ Letting $\epsilon<\rho_{0}$ and $\rho_{0}<\frac{n-1}{2\lambda}$, we have $\Phi>0$ and $\displaystyle\sigma_{k}^{\frac{1}{k}}(W[\underline{u}])\geq$ $\displaystyle c_{0}e^{w}(\rho+\epsilon)^{-\frac{n}{2}-1}\sigma_{k}^{\frac{1}{k}}(\Phi)$ $\displaystyle\geq$ $\displaystyle\frac{1}{2}(n-1)(n-2)c_{0}e^{w}(\rho+\epsilon)^{-\frac{n}{2}-1}(C_{n}^{k})^{\frac{1}{k}}\cdot$ $\displaystyle\Big{(}1+\frac{\lambda(\rho+\epsilon)}{n-1}\Big{)}^{\frac{k-1}{k}}\bigg{(}1+\Big{(}1-\frac{2k}{n}\Big{)}\frac{\lambda(\rho+\epsilon)}{n-1}\bigg{)}^{\frac{1}{k}}.$ Elementary calculation shows that $\Big{(}1+\frac{\lambda(\rho+\epsilon)}{n-1}\Big{)}^{k-1}\bigg{(}1+\Big{(}1-\frac{2k}{n}\Big{)}\frac{\lambda(\rho+\epsilon)}{n-1}\bigg{)}\geq 1$ when $k\leq\frac{n}{2}$, or when $k>\frac{n}{2}$ and $\rho_{0}\leq\frac{(n-1)(n-2)k}{2(2k-n)(k-1)\lambda}$. Therefore, $\sigma_{k}^{\frac{1}{k}}(W[\underline{u}])\geq\frac{1}{2}(n-1)(n-2)c_{0}(C_{n}^{k})^{\frac{1}{k}}e^{w}(\rho+\epsilon)^{-\frac{n}{2}-1}.$ On the other hand, $\frac{n-2}{2}\underline{u}^{\frac{n+2}{n-2}}=\frac{n-2}{2}{c_{0}}^{\frac{n+2}{n-2}}(\rho+\epsilon)^{-\frac{n}{2}-1}e^{\frac{n+2}{n-2}w}.$ Note that $w\leq 0$. Hence $\underline{u}$ is a subsolution of (1.2). 
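The elementary inequality invoked above, $(1+t)^{k-1}\big{(}1+(1-\frac{2k}{n})t\big{)}\geq 1$ with $t=\frac{\lambda(\rho+\epsilon)}{n-1}$, can be confirmed symbolically for sample values of $n$ and $k$; the following SymPy sketch is illustrative only.

```python
import sympy as sp

# Check of (1 + t)^(k-1) * (1 + (1 - 2k/n) t) >= 1 for sample (n, k),
# with t = lambda*(rho + eps)/(n - 1) >= 0.
t = sp.symbols('t', nonnegative=True)

def g(n, k):
    # left-hand side minus 1
    return (1 + t)**(k - 1) * (1 + (1 - sp.Rational(2 * k, n)) * t) - 1

# Case k <= n/2 (n = 5, k = 2): both factors are >= 1 for all t >= 0
print(sp.expand(g(5, 2)))   # polynomial in t with nonnegative coefficients

# Case k > n/2 (n = 3, k = 2): g = t(2 - t)/3, nonnegative for 0 <= t <= 2,
# which covers the range permitted by the stated bound on rho_0
print(sp.factor(g(3, 2)))
```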
Observe that $\underline{u}=c_{0}\epsilon^{1-\frac{n}{2}}\leq u=\infty\quad\text{ on }\partial M,$ and on $\rho=\rho_{0}$, $\underline{u}\leq c_{0}{\rho_{0}}^{1-\frac{n}{2}}e^{(\rho_{0}+\delta)^{-1}-{\delta}^{-1}}\leq c_{0}{\rho_{0}}^{1-\frac{n}{2}}e^{-\delta^{-1}/2}\leq c_{0}\leq u$ for $\delta$ sufficiently small depending in addition on $\rho_{0}$. By the maximum principle, $\underline{u}\leq u\quad\text{ on }M_{\rho_{0}}.$ Letting $\epsilon\rightarrow 0^{+}$ and then $x\rightarrow\partial M$ we obtain $\liminf\limits_{x\rightarrow\partial M}\rho^{\frac{n}{2}-1}u\geq c_{0}.$ We next prove (2.7) $\limsup\limits_{x\rightarrow\partial M}\rho^{\frac{n}{2}-1}u\leq c_{0}.$ For this, we construct a supersolution in $M_{\rho_{1}}\setminus\overline{M_{\epsilon}}$ for any $0<\epsilon<\rho_{1}$: $\overline{u}=c_{0}(\rho-\epsilon)^{1-\frac{n}{2}}e^{v},$ where $v=v(\rho)=\frac{1}{2}\Big{(}\ln(\delta+\rho)-\ln\delta\Big{)}$ with $\rho_{1}$, $\delta$ small positive constants to be chosen later. From the above calculation, it is easy to obtain $\displaystyle\sigma_{1}\big{(}W[\overline{u}]\big{)}=$ $\displaystyle c_{0}e^{v}(\rho-\epsilon)^{-\frac{n}{2}-1}\bigg{(}\Big{(}2(n-1)(\rho-\epsilon)^{2}v^{\prime}-(n-2)(n-1)(\rho-\epsilon)\Big{)}\Delta\rho$ $\displaystyle+2(n-1)(\rho-\epsilon)^{2}v^{\prime\prime}-2(n-2)(n-1)(\rho-\epsilon)v^{\prime}+2(n-1)(\rho-\epsilon)^{2}v^{\prime 2}$ $\displaystyle+\frac{n(n-1)(n-2)}{2}-\frac{n-2}{2}(\rho-\epsilon)^{2}S_{g}\bigg{)},$ where $S_{g}$ is the scalar curvature on $M$. We may choose $\rho_{1}$ sufficiently small such that on $M_{\rho_{1}}$, $|\Delta\rho|\leq C_{1},\quad S_{g}\geq-C_{1},$ where $C_{1}$ is a positive constant depending only on $g$. 
Note that $v^{\prime}=\frac{1}{2(\delta+\rho)},\quad v^{\prime\prime}=-\frac{1}{2(\delta+\rho)^{2}}.$ It follows that $\displaystyle 2(n-1)v^{\prime}\Delta\rho+2(n-1)v^{\prime\prime}+2(n-1)v^{\prime 2}-\frac{n-2}{2}S_{g}$ $\displaystyle\leq$ $\displaystyle\frac{(n-1)C_{1}}{\delta+\rho}-\frac{n-1}{2(\delta+\rho)^{2}}+\frac{n-2}{2}C_{1}\leq 0,$ and $\displaystyle-(n-2)(n-1)\Delta\rho-2(n-2)(n-1)v^{\prime}\leq(n-2)(n-1)\Big{(}C_{1}-\frac{1}{\delta+\rho}\Big{)}\leq 0$ for $\rho_{1}+\delta$ sufficiently small depending only on $n$ and $C_{1}$. Consequently, $\sigma_{1}\big{(}W[\overline{u}]\big{)}\leq c_{0}e^{v}(\rho-\epsilon)^{-\frac{n}{2}-1}\frac{n(n-1)(n-2)}{2}.$ Then applying Newton-Maclaurin inequality we arrive at $\sigma_{k}^{\frac{1}{k}}\big{(}W[\overline{u}]\big{)}\leq\frac{(C_{n}^{k})^{\frac{1}{k}}}{n}\sigma_{1}\big{(}W[\overline{u}]\big{)}\leq\frac{n-2}{2}{\overline{u}}^{\frac{n+2}{n-2}}\quad\text{ in }M_{\rho_{1}}\setminus\overline{M_{\epsilon}}.$ Also, we note that on $\rho=\epsilon$, $\overline{u}=\infty>u,$ and on $\rho=\rho_{1}$, $\overline{u}\geq c_{0}{\rho_{1}}^{1-\frac{n}{2}}\sqrt{\frac{\delta+\rho_{1}}{\delta}}\geq u$ if $\delta$ is chosen sufficiently small depending in addition on $\rho_{1}$ and $u$. By the maximum principle, we have $\overline{u}\geq u\quad\text{ on }M_{\rho_{1}}\setminus M_{\epsilon}.$ Letting $\epsilon\rightarrow 0^{+}$ and then $x\rightarrow\partial M$ we obtain (2.7). ∎ ###### Remark 2.9. In [10], a global subsolution $\underline{u}=c_{0}(\rho+\epsilon)^{1-\frac{n}{2}}e^{A\big{(}(\rho+\delta)^{-p}-{\delta}^{-p}\big{)}}$ is constructed on compact manifold with boundary which yields the lower bound of the growth rate. We simplify the proof by defining the subsolution only in a neighborhood of the boundary $\partial M$. For the upper bound, we also adopt a new supersolution near $\partial M$. 
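The explicit radial solutions of Proposition 2.4 attain the boundary rate (2.5) exactly, which can be seen symbolically in the model case $n=4$, $k=1$ on the ball $B_{s}(0)$, where $c_{0}=\big{(}(n-1)(C_{n}^{k})^{\frac{1}{k}}\big{)}^{\frac{n-2}{4}}=\sqrt{12}$ (a SymPy sketch, not part of the proof):

```python
import sympy as sp

# The ball solution u = sqrt(48)*s/(s^2 - r^2) of Proposition 2.4
# (n = 4, k = 1) realizes the rate (2.5): rho^{n/2-1} u -> sqrt(12).
r, s = sp.symbols('r s', positive=True)
u = sp.sqrt(48) * s / (s**2 - r**2)
rho = s - r  # distance to the boundary sphere |x| = s

rate = sp.limit(rho**(sp.Rational(4, 2) - 1) * u, r, s, dir='-')
print(sp.simplify(rate - sp.sqrt(12)))  # 0
```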
For the special case when $M$ is a domain in $\mathbb{R}^{n}$, we may use González-Li-Nguyen’s method (see Lemma 3.4 in [5]) to give an easier proof. ### 2.3. Existence of smooth solutions We now cite an important result of Loewner and Nirenberg [13] for the scalar case ($k=1$), when the boundary of the domain is composed of hypersurfaces; it will be used as an upper barrier later. ###### Lemma 2.10. Let $\Omega$ be a domain in $\mathbb{R}^{n}$ having smooth compact hypersurfaces as boundary. There exists a unique positive solution $u\in C^{\infty}(\Omega)$ of (1.5). In addition, if $\rho(x)$ denotes the distance of $x$ to the boundary of $\Omega$, then ${\rho}^{\frac{n}{2}-1}(x)u(x)\rightarrow\big{(}n(n-1)\big{)}^{\frac{n-2}{4}}\quad\text{ as }x\rightarrow\partial\Omega.$ If $\Omega$ is unbounded, the solution $u(x)$ has the property that $|x|^{n-2}u(x)\rightarrow c\quad\text{ as }|x|\rightarrow\infty,$ where $c$ is a positive constant depending only on $n$ and $\Omega$. We can now state the main theorem of this section. ###### Theorem 2.11. Let $\Omega$ be a domain in $\mathbb{R}^{n}$ with nonempty smooth compact boundary which is composed of closed hypersurfaces. Then there exists a positive admissible solution $u\in C^{\infty}(\Omega)$ to equation (1.2) which tends to $\infty$ on $\partial\Omega$. If $\Omega$ is bounded or $k=1$, the solution is unique. ###### Proof. If $\Omega$ is a bounded domain, by Corollary 1.3 in [6], there exists a positive admissible solution $u\in C^{\infty}(\Omega)$ to equation (1.2) with infinite boundary value. When $\Omega$ is unbounded, we may assume without loss of generality that $0\notin\overline{\Omega}$. Let $B_{s}(0)$ be a fixed ball such that $\overline{\Omega}\subset\mathbb{R}^{n}\setminus\overline{B_{s}(0)}$. 
By Proposition 2.4, there exists a smooth solution $\underline{u}=C_{1}(|x|^{2}-s^{2})^{1-\frac{n}{2}},\quad C_{1}=\big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}s^{2}\big{)}^{\frac{n-2}{4}}$ to equation (1.2) in $\mathbb{R}^{n}\setminus\overline{B_{s}(0)}$. For any $R>\max\limits_{\partial\Omega}\underline{u}$ large enough such that $\partial\Omega\subset B_{R}(0)$, by Corollary 5.4 in [6], there exists a smooth positive admissible solution $u_{R}$ to the Dirichlet problem (2.8) $\left\\{\begin{aligned} &\sigma_{k}^{\frac{1}{k}}(W[u])=\frac{n-2}{2}{u}^{\frac{n+2}{n-2}}\quad\text{ in }B_{R}(0)\cap\Omega,\\\ &u=R\quad\text{ on }\partial\Omega,\\\ &u=\underline{u}\quad\text{ on }\partial B_{R}(0).\end{aligned}\right.$ By Lemma 2.10, we are able to find a positive solution $\bar{u}\in C^{\infty}(\Omega)$ of (1.5) which tends to $\infty$ at $\partial\Omega$ with the growth rate $\rho^{\frac{n}{2}-1}(x)\overline{u}(x)\rightarrow\big{(}n(n-1)\big{)}^{\frac{n-2}{4}}\quad\text{ as }x\rightarrow\partial\Omega,$ and decays to $0$ at $\infty$ with the decay rate $|x|^{n-2}\overline{u}(x)\rightarrow c\quad\text{ as }|x|\rightarrow\infty,$ where $c$ is the positive constant in Lemma 2.10. Comparing the decay rate as $|x|\rightarrow\infty$, we have on $\partial B_{R}(0)$ with $R$ sufficiently large, $u_{R}(x)=\underline{u}(x)\leq\frac{2C_{1}}{c}\overline{u}(x).$ Meanwhile, we have $u_{R}(x)=R<\infty=\frac{2C_{1}}{c}\overline{u}(x)\quad\text{ on }\partial\Omega.$ By the maximum principle, we obtain $u_{R}(x)\leq\max\Big{\\{}\frac{2C_{1}}{c},1\Big{\\}}\overline{u}(x)\quad\text{ on }B_{R}(0)\cap\Omega.$ In proving the above inequality, we have applied the Newton-Maclaurin inequality as well as Proposition 2.2. 
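The Newton-Maclaurin inequality applied above, in the form $\sigma_{k}^{\frac{1}{k}}(\lambda)\leq\frac{(C_{n}^{k})^{\frac{1}{k}}}{n}\sigma_{1}(\lambda)$ for $\lambda$ in the positive cone, can be spot-checked on sample eigenvalue vectors (an illustrative SymPy sketch for $n=4$, $k=2$):

```python
import sympy as sp
from itertools import combinations

# Spot check of sigma_k^{1/k}(lam) <= (C_n^k)^{1/k} * sigma_1(lam) / n
# on sample positive vectors, n = 4, k = 2.
def sigma(lam, j):
    # j-th elementary symmetric polynomial of lam
    return sum(sp.prod(c) for c in combinations(lam, j))

n, k = 4, 2
cnk = sp.binomial(n, k)
samples = ([1, 2, 3, 4], [sp.Rational(1, 3), 5, 5, 7], [10, 1, 1, 1])
for lam in samples:
    lhs = sigma(lam, k)**sp.Rational(1, k)
    rhs = cnk**sp.Rational(1, k) * sigma(lam, 1) / n
    assert lhs <= rhs
print("Newton-Maclaurin inequality holds on all samples")
```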
On the other hand, by the maximum principle we have $u_{R}(x)\geq\underline{u}(x)\quad\text{ on }B_{R}(0)\cap\Omega.$ Now we can apply the interior estimates of Guan (see Theorems 2.1 and 3.1 in [6]), followed by the Evans-Krylov interior estimates [3, 11] and a standard diagonal process to obtain a smooth positive admissible solution $u$ of (1.2) on $\Omega$ which tends to $\infty$ at $\partial\Omega$. When $\Omega$ is bounded, we shall prove the uniqueness. Suppose $v$ is another positive admissible solution satisfying (1.2)-(1.3). By Theorem 2.8, $u$ and $v$ have the same growth rate as $x\rightarrow\partial\Omega$. Hence for any $\epsilon>0$, on $\\{\rho=c\\}$ where $c>0$ is a sufficiently small positive constant, we have $u\leq(1+\epsilon)v.$ By Proposition 2.2 and the maximum principle, $u\leq(1+\epsilon)v\quad\text{ in }\,\,\\{x\in\Omega|\rho(x)>c\\}.$ Letting $c\rightarrow 0$, we have $u\leq(1+\epsilon)v\quad\text{ in }\,\,\Omega.$ Then let $\epsilon\rightarrow 0$. It follows that $u\leq v\quad\text{ in }\,\,\Omega.$ Similarly, we can show that $v\leq u$ in $\Omega$. Hence we proved the uniqueness. ∎ ## 3\. Maximal solution on general domain in $\mathbb{R}^{n}$ For any domain $\Omega\subset\mathbb{R}^{n}$ with smooth compact boundary $\partial\Omega$, we hope to associate a smooth positive admissible solution $u_{\Omega}$ of (1.2) which is maximal, in the sense that it is greater than or equal to any smooth positive admissible solution of (1.2) in $\Omega$. We will then investigate when $u_{\Omega}$ tends to $\infty$ on $\partial\Omega$. ### 3.1. Construction of the maximal solution Let $\Omega_{(1)}\Subset\Omega_{(2)}\Subset\ldots$ be an increasing sequence of bounded subdomains of $\Omega$ with smooth compact boundaries $\partial\Omega_{(j)}$ which are closed hypersurfaces such that $\Omega=\cup\Omega_{(j)}$. 
By Theorem 2.11, we can find a unique smooth positive admissible solution $u_{(j)}$ of (1.2) in $\Omega_{(j)}$ which tends to $\infty$ on the boundary $\partial\Omega_{(j)}$. By the maximum principle, we see that $\\{u_{(j)}\\}$ is a monotone decreasing sequence of positive functions. It follows that $u_{(j)}$ converges to a nonnegative function $u_{\Omega}$ in $\Omega$. We have the following dichotomy, which is similar to [5]. ###### Lemma 3.1. We have either $u_{\Omega}>0$ in $\Omega$ or $u_{\Omega}\equiv 0$ in $\Omega$. ###### Proof. For the sake of completeness, we provide the proof. Suppose that $u_{\Omega}\not\equiv 0$. Then we must have $u_{\Omega}>c$ on some $B(x_{0},r_{0})\subset\Omega$ for some constant $c>0$. We may choose $0<r_{1}<r_{0}$ such that the solution $v=\big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}r_{1}^{2}\big{)}^{\frac{n-2}{4}}\big{(}{|x-x_{0}|}^{2}-r_{1}^{2}\big{)}^{1-\frac{n}{2}}\quad\text{ in }\mathbb{R}^{n}\setminus\overline{B_{r_{1}}(x_{0})}$ (see Proposition 2.4) satisfies $\big{(}4(n-1)(C_{n}^{k})^{\frac{1}{k}}r_{1}^{2}\big{)}^{\frac{n-2}{4}}\big{(}r_{0}^{2}-r_{1}^{2}\big{)}^{1-\frac{n}{2}}=c.$ Let $\Omega_{(j)}$ and $u_{(j)}$ be as above. By the maximum principle, we know that $u_{(j)}\geq v$ in $\Omega_{(j)}\setminus B(x_{0},r_{0})$ for any $j$. Hence $u_{\Omega}\geq v>0$ in $\Omega\setminus B(x_{0},r_{0})$. We thus have $u_{\Omega}>0$ in $\Omega$. ∎ ###### Remark 3.2. Several remarks are as follows. 1. (1) In view of Lemma 3.1 and the interior regularity of Guan [6] and Evans-Krylov [3, 11], we know that $u_{\Omega}$ is smooth. 2. (2) If there exists a positive admissible subsolution $v$ of (1.2) in $\Omega$ (for example, when $\mathbb{R}^{n}\setminus\overline{\Omega}\neq\emptyset$), then $u_{\Omega}>0$ in $\Omega$. 3. (3) When $u_{\Omega}>0$ in $\Omega$, $u_{\Omega}$ is the so-called maximal solution of equation (1.2) in $\Omega$. 
In fact, if $w$ is any smooth positive admissible solution of (1.2) in $\Omega$, then $u_{(j)}\geq w$ on $\Omega_{(j)}$ by the maximum principle. Hence $u_{\Omega}\geq w$ in $\Omega$. 4. (4) When $\Omega$ is bounded and $\partial\Omega$ is composed of closed hypersurfaces, $u_{\Omega}$ coincides with the solution given by Theorem 2.11. As in [13], we call a compact subset $\Gamma\subset\partial\Omega$ regular, if (3.1) $u_{\Omega}(x)\rightarrow\infty\quad\text{ as }x\rightarrow\Gamma.$ The rest of this section discusses the validity of (3.1). ### 3.2. Regularity of a portion on the boundary We consider a portion $\Gamma\subset\partial\Omega$ which is a smooth compact non-self-intersecting surface of codimension $m$. We also assume that $\partial\Omega\setminus\Gamma$ is smooth compact. As in [13, 5], we first give a necessary and sufficient condition for (3.1). ###### Theorem 3.3. Let $\Omega$ be a domain in $\mathbb{R}^{n}$ and $\Gamma$ be a compact subset of $\partial\Omega$ such that $\partial\Omega\setminus\Gamma$ is also compact. Suppose that $u_{\Omega}\not\equiv 0$. Then $u_{\Omega}(x)\rightarrow\infty$ as $x\rightarrow\Gamma$ if and only if there exists an open neighborhood $U$ of $\Gamma$ and a $C^{2}$ positive admissible subsolution $\phi(x)$ of (1.2) defined in $\Omega\cap U$ which tends to $\infty$ as $x\rightarrow\Gamma$. ###### Proof. Necessity is obvious. For sufficiency, without loss of generality we may assume that $\overline{U}$ is compact, $\overline{U}\cap(\partial\Omega\setminus\Gamma)=\emptyset$ and $0<\phi\in C^{2}(\overline{U}\cap\Omega)$. For $j$ sufficiently large, we have $\partial(U\cap\Omega_{(j)})=\partial U\cup(U\cap\partial\Omega_{(j)}).$ Since $u_{\Omega}$ is positive in $\Omega$ and $\partial U\subset\Omega$ is compact, we have $\inf\limits_{\partial U}u_{\Omega}:=m>0$. Denote $M:=\sup\limits_{\partial U}\phi(x)>0$ and $A:=\max\\{\frac{M}{m},1\\}$. 
We note that $u_{(j)}\geq u_{\Omega}\geq m\geq\frac{\phi}{A}$ on $\partial U$, and $u_{(j)}=\infty>\frac{\phi}{A}$ on $U\cap\partial\Omega_{(j)}$. By Proposition 2.2, $\frac{\phi}{A}$ is again a positive admissible subsolution of (1.2). By the maximum principle, we arrive at $u_{(j)}\geq\frac{\phi}{A}$ in $U\cap\Omega_{(j)}$. Consequently, we obtain $u_{\Omega}\geq\frac{\phi}{A}$ in $U\cap\Omega$. Hence we proved the sufficiency. ∎ Now we aim at constructing $\phi(x)$ in a neighborhood of $\Gamma$ as in Theorem 3.3. Let $\rho(x)$ denote the distance of $x$ to $\Gamma$. Define $\Gamma_{\rho_{0}}=\\{x\in\Omega|\rho(x)<\rho_{0}\\}.$ For $\rho_{0}$ sufficiently small, $\rho$ is a smooth function in $\Gamma_{\rho_{0}}\setminus\Gamma$. Define on $\Gamma_{\rho_{0}}\setminus\Gamma$ $\phi(x)=c\rho^{1-\frac{n}{2}}(x),$ where $c>0$ is a constant to be determined later. Recall that $|\nabla\rho|=1$. Direct calculation shows that $\phi_{i}=c\Big{(}1-\frac{n}{2}\Big{)}{\rho}^{-\frac{n}{2}}\rho_{i},$ $\phi_{ij}=c\Big{(}\frac{n}{2}-1\Big{)}\frac{n}{2}{\rho}^{-\frac{n}{2}-1}\rho_{i}\rho_{j}+c\Big{(}1-\frac{n}{2}\Big{)}{\rho}^{-\frac{n}{2}}\rho_{ij},$ $|\nabla\phi|^{2}=c^{2}\Big{(}1-\frac{n}{2}\Big{)}^{2}{\rho}^{-n},$ $\Delta\phi=c\Big{(}\frac{n}{2}-1\Big{)}\frac{n}{2}{\rho}^{-\frac{n}{2}-1}+c\Big{(}1-\frac{n}{2}\Big{)}{\rho}^{-\frac{n}{2}}\Delta\rho.$ Thus, we have $W_{ij}[\phi]=\frac{1}{2}(n-2)c{\rho}^{-\frac{n}{2}-1}\Big{(}(n-1-\rho\Delta\rho)\delta_{ij}-(n-2)\rho\rho_{ij}\Big{)}.$ As shown in [5], as $\rho\rightarrow 0$, $\lambda(\rho\nabla^{2}\rho)\rightarrow\big{(}\underbrace{0,\ldots,0}_{n-m+1},\underbrace{1,\ldots,1}_{m-1}\big{)}.$ Consequently, as $\rho\rightarrow 0$, $\lambda\big{(}{\rho}^{\frac{n}{2}+1}W_{ij}[\phi]\big{)}\rightarrow\frac{1}{2}(n-2)c\big{(}\underbrace{n-m,\ldots,n-m}_{n-m+1},\underbrace{2-m,\ldots,2-m}_{m-1}\big{)}.$ If we assume that $v_{m}=\big{(}\underbrace{n-m,\ldots,n-m}_{n-m+1},\underbrace{2-m,\ldots,2-m}_{m-1}\big{)}\in\Gamma_{k},$ then we have as 
$\rho\rightarrow 0$, ${\rho}^{\frac{n}{2}+1}\sigma_{k}^{\frac{1}{k}}\big{(}W_{ij}[\phi]\big{)}\rightarrow\frac{1}{2}(n-2)c\sigma_{k}^{\frac{1}{k}}(v_{m}).$ On the other hand, $\frac{n-2}{2}\phi^{\frac{n+2}{n-2}}=\frac{n-2}{2}c^{\frac{n+2}{n-2}}\rho^{-\frac{n}{2}-1}.$ If we choose $0<c<\Big{(}\sigma_{k}^{\frac{1}{k}}(v_{m})\Big{)}^{\frac{n-2}{4}},$ then $\phi$ is the one as in Theorem 3.3, which implies that $u_{\Omega}\rightarrow\infty$ as $x\rightarrow\Gamma$ if $u_{\Omega}\not\equiv 0$. ### 3.3. The case when $v_{m}\notin\overline{\Gamma}_{k}$ We adopt the same function as [5] $\psi=\psi_{c,d}=(c\rho^{-\alpha}+d)^{\beta},$ where $\alpha$, $\beta$, $c$ and $d$ are positive constants. Recall that $\rho(x)$ is the distance of $x$ to $\Gamma$. By direct calculation, we have $\displaystyle\psi_{i}=$ $\displaystyle-\alpha\beta c(c\rho^{-\alpha}+d)^{\beta-1}\rho^{-\alpha-1}\rho_{i},$ $\displaystyle\psi_{ij}=$ $\displaystyle\alpha^{2}\beta(\beta-1)c^{2}(c\rho^{-\alpha}+d)^{\beta-2}\rho^{-2\alpha-2}\rho_{i}\rho_{j}$ $\displaystyle+\alpha(\alpha+1)\beta c(c\rho^{-\alpha}+d)^{\beta-1}\rho^{-\alpha-2}\rho_{i}\rho_{j}-\alpha\beta c(c\rho^{-\alpha}+d)^{\beta-1}\rho^{-\alpha-1}\rho_{ij},$ $\displaystyle|\nabla\psi|^{2}=$ $\displaystyle\alpha^{2}\beta^{2}c^{2}(c\rho^{-\alpha}+d)^{2\beta-2}\rho^{-2\alpha-2},$ $\displaystyle\Delta\psi=$ $\displaystyle\alpha^{2}\beta(\beta-1)c^{2}(c\rho^{-\alpha}+d)^{\beta-2}\rho^{-2\alpha-2}+\alpha(\alpha+1)\beta c(c\rho^{-\alpha}+d)^{\beta-1}\rho^{-\alpha-2}$ $\displaystyle-\alpha\beta c(c\rho^{-\alpha}+d)^{\beta-1}\rho^{-\alpha-1}\Delta\rho.$ It follows that $\displaystyle\psi^{-\frac{n+2}{n-2}}W_{ij}[\psi]=$ $\displaystyle\alpha\beta c\rho^{-\alpha-2}(c\rho^{-\alpha}+d)^{-\frac{4\beta}{n-2}-1}\bigg{(}-(n-2)\rho\rho_{ij}$ $\displaystyle+\Big{(}\alpha(-n-2\beta+2)\frac{c\rho^{-\alpha}}{c\rho^{-\alpha}+d}+(n-2)(\alpha+1)\Big{)}\rho_{i}\rho_{j}$ 
$\displaystyle+\Big{(}\alpha(2\beta-1)\frac{c\rho^{-\alpha}}{c\rho^{-\alpha}+d}+\alpha+1-\rho\Delta\rho\Big{)}\delta_{ij}\bigg{)}.$ Choosing an appropriate coordinate system as in [5] such that as $\rho\rightarrow 0$, $\rho\nabla^{2}\rho=\left(\begin{array}[]{ccc}\mathcal{O}(\rho)_{(n-m)\times(n-m)}&&\\\ &0&\\\ &&I_{(m-1)\times(m-1)}\\\ \end{array}\right)$ and $\nabla\rho\otimes\nabla\rho=\left(\begin{array}[]{ccc}0_{(n-m)\times(n-m)}&&\\\ &1&\\\ &&0_{(m-1)\times(m-1)}\\\ \end{array}\right),$ we thus obtain (3.2) $\frac{1}{\alpha\beta c}\rho^{\alpha+2}(c\rho^{-\alpha}+d)^{\frac{4\beta}{n-2}+1}\psi^{-\frac{n+2}{n-2}}W_{ij}[\psi]=A_{m}+B_{\alpha\beta}(\zeta)+\mathcal{O}(\rho),$ where $\zeta=\frac{c\rho^{-\alpha}}{c\rho^{-\alpha}+d}$, $A_{m}=\left(\begin{array}[]{ccc}(n-m)I&&\\\ &n-m&\\\ &&(2-m)I\\\ \end{array}\right),$ and $\tiny{B_{\alpha\beta}(\zeta)=\left(\begin{array}[]{ccc}\big{(}2\alpha\beta\zeta+\alpha(1-\zeta)+2-n\big{)}I&&\\\ &(n-1)\alpha(1-\zeta)&\\\ &&\big{(}2\alpha\beta\zeta+\alpha(1-\zeta)+2-n\big{)}I\\\ \end{array}\right).}$ Since $v_{m}\notin\overline{\Gamma}_{k}$, we may choose $\alpha$ sufficiently small, and then an appropriate $\beta$ such that $\alpha\beta$ is slightly larger than $\frac{n}{2}-1$ in order to make $\lambda(A_{m}+B_{\alpha\beta}(\zeta))\notin\overline{\Gamma}_{k}$ for all $0<\zeta<1$. Then we can choose a sufficiently small $0<\rho_{0}<1$ such that on $0<\rho<\rho_{0}$ and for all $0<\zeta<1$, the righthand side of (3.2) satisfies $\lambda\big{(}A_{m}+B_{\alpha\beta}(\zeta)+\mathcal{O}(\rho)\big{)}\notin\overline{\Gamma}_{k}.$ It follows that $\lambda\big{(}W_{ij}[\psi]\big{)}\notin\overline{\Gamma}_{k}$ on $\\{0<\rho<\rho_{0}\\}$ for all $c>0$ and $d>0$, which means that $\psi$ is a supersolution of (1.2). Now we compare $u_{\Omega}$ and $\psi$. Choosing $d$ sufficiently large depending on $\rho_{0}$ such that on $\rho=\rho_{0}$ we have $\psi\geq d^{\beta}\geq u_{\Omega}$. 
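The eigenvalue asymptotics $\lambda(\rho\nabla^{2}\rho)\rightarrow\big{(}\underbrace{0,\ldots,0}_{n-m+1},\underbrace{1,\ldots,1}_{m-1}\big{)}$ quoted from [5] can be confirmed symbolically in the flat model case where $\Gamma$ is a coordinate subspace (a SymPy sketch for $n=4$, codimension $m=2$, illustrative only):

```python
import sympy as sp

# Model case Gamma = {x1 = x2 = 0} in R^4 (n = 4, m = 2), where
# rho = sqrt(x1^2 + x2^2); expect n - m + 1 = 3 zero eigenvalues
# and m - 1 = 1 unit eigenvalue of rho * Hess(rho).
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
rho = sp.sqrt(x1**2 + x2**2)
vars_ = (x1, x2, x3, x4)
H = sp.Matrix(4, 4, lambda i, j: sp.diff(rho, vars_[i], vars_[j]))
M = sp.simplify(rho * H)

# Evaluate at a sample point off Gamma; the spectrum is point-independent here.
Mnum = M.subs({x1: 1, x2: 2, x3: 0, x4: 5})
eigs = sorted(Mnum.eigenvals(multiple=True))
print(eigs)  # [0, 0, 0, 1]
```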
By comparing $u_{\Omega}$ with the first function in Proposition 2.4 in the ball $B_{\rho(x)}(x)$ we can deduce that $u_{\Omega}(x)\leq C(n,k)\rho^{1-\frac{n}{2}}\leq c^{\beta}\rho^{-\alpha\beta}<\psi$ in $0<\rho\leq\delta$, where $\delta$ is a sufficiently small positive constant depending on $c$. By the maximum principle we have $u_{\Omega}\leq\psi$ in $\delta<\rho<\rho_{0}$. Letting $\delta\rightarrow 0$, we arrive at $u_{\Omega}\leq\psi$ in $0<\rho<\rho_{0}$. Next letting $c\rightarrow 0$ we deduce that $u_{\Omega}\leq d^{\beta}$ in $0<\rho<\rho_{0}$. ## 4\. More general equations By the change of variable $u=e^{\frac{n-2}{2}v}$, equation (1.10) is equivalent to (4.1) $\sigma_{k}\bigg{(}(n-2)e^{-2v}\mathcal{W}[v]\bigg{)}+\alpha(x)\sigma_{k-1}\bigg{(}(n-2)e^{-2v}\mathcal{W}[v]\bigg{)}=\alpha_{0}(x),$ where $\mathcal{W}[v]=g^{-1}\bigg{(}\nabla^{2}v-dv\otimes dv+\Big{(}\frac{\Delta v}{n-2}+|\nabla v|^{2}\Big{)}g-\frac{\text{Ric}_{g}}{n-2}\bigg{)}.$ Now we derive a priori estimates for $(k-1)$-admissible solutions to equation (4.1) on a smooth compact manifold with boundary, that is, on $\overline{M}:=M\cup\partial M$. We write (4.1) in the following form (4.2) $F\big{(}\mathcal{W}[v]\big{)}:=\frac{\sigma_{k}\big{(}\mathcal{W}[v]\big{)}}{\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}}-\frac{\alpha_{0}(x)e^{2kv}}{(n-2)^{k}}\frac{1}{\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}}=-\frac{\alpha(x)}{n-2}e^{2v}.$ As shown in [9], for $(k-1)$-admissible $v$, namely $\lambda\big{(}\mathcal{W}[v]\big{)}\in\Gamma_{k-1}$, equation (4.2) is elliptic and the operator $F\big{(}\mathcal{W}[v]\big{)}$ is concave with respect to $\\{\mathcal{W}_{ij}\\}$. ### 4.1. Gradient estimates Let $v\in C^{3}(M)\cap C^{1}(\overline{M})$ be a $(k-1)$-admissible solution of (4.1). We adopt the same test function as in [6]: $\Phi=\zeta(x)we^{\eta(v)}$, where $w=\frac{1}{2}|\nabla v|^{2}$, and $\zeta$ and $\eta$ are functions to be chosen later. Assume that $\Phi$ attains its maximum at an interior point $x_{0}\in M$. 
Choose a local orthonormal frame $e_{1},\ldots,e_{n}$ about $x_{0}$. Then, at $x_{0}$, we have (4.3) $\frac{\nabla_{i}\zeta}{\zeta}+\frac{\nabla_{i}w}{w}+\eta^{\prime}\nabla_{i}v=0,$ (4.4) $F^{ij}\Big{(}\frac{\nabla_{ij}\zeta}{\zeta}-\frac{\nabla_{i}\zeta\nabla_{j}\zeta}{\zeta^{2}}+\frac{\nabla_{ij}w}{w}-\frac{\nabla_{i}w\nabla_{j}w}{w^{2}}+\eta^{\prime}\nabla_{ij}v+\eta^{\prime\prime}\nabla_{i}v\nabla_{j}v\Big{)}\leq 0,$ where $F^{ij}=\frac{\partial F}{\partial\mathcal{W}_{ij}}\big{(}\mathcal{W}[v]\big{)}$. By direct calculation, $\nabla_{i}w=\nabla_{im}v\nabla_{m}v,$ $\nabla_{ij}w=\nabla_{ijm}v\nabla_{m}v+\nabla_{im}v\nabla_{jm}v.$ By (4.3), we have (4.5) $F^{ij}\frac{\nabla_{i}w\nabla_{j}w}{w^{2}}\leq 3F^{ij}\frac{\nabla_{i}\zeta\nabla_{j}\zeta}{\zeta^{2}}+\frac{3\eta^{\prime 2}}{2}F^{ij}\nabla_{i}v\nabla_{j}v.$ Hence (4.4) becomes (4.6) $\displaystyle F^{ij}\Big{(}\frac{\nabla_{ij}\zeta}{\zeta}-\frac{5\nabla_{i}\zeta\nabla_{j}\zeta}{2\zeta^{2}}\Big{)}+\frac{1}{w}\Big{(}\delta_{lm}-\frac{\nabla_{l}v\nabla_{m}v}{2w}\Big{)}F^{ij}\nabla_{il}v\nabla_{jm}v$ $\displaystyle+\frac{F^{ij}\nabla_{ijm}v\nabla_{m}v}{w}+\eta^{\prime}F^{ij}\nabla_{ij}v+\Big{(}\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}\Big{)}F^{ij}\nabla_{i}v\nabla_{j}v\leq 0.$ Choose a smooth function $\zeta$ such that $0\leq\zeta\leq 1,\quad|\nabla\zeta|\leq b_{0}\sqrt{\zeta},\quad|\nabla^{2}\zeta|\leq b_{0}.$ Then (4.6) reduces to (4.7) $\frac{F^{ij}\nabla_{ijm}v\nabla_{m}v}{w}+\eta^{\prime}F^{ij}\nabla_{ij}v+\Big{(}\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}\Big{)}F^{ij}\nabla_{i}v\nabla_{j}v\leq\frac{C}{\zeta}\sum F^{ii}.$ Similarly, we have (4.8) $\displaystyle\frac{\Delta\nabla_{m}v\nabla_{m}v}{w}+\eta^{\prime}\Delta v+\Big{(}\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}\Big{)}|\nabla v|^{2}\leq\frac{C}{\zeta}.$ Differentiating (4.2) yields (4.9) 
$F^{ij}\nabla_{l}\mathcal{W}_{ij}-\frac{\nabla_{l}\alpha_{0}e^{2kv}+\alpha_{0}e^{2kv}2k\nabla_{l}v}{(n-2)^{k}}\frac{1}{\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}}=-\frac{\nabla_{l}\alpha e^{2v}+\alpha e^{2v}2\nabla_{l}v}{n-2},$ where $\nabla_{l}\mathcal{W}_{ij}[v]=\nabla_{lij}v-\nabla_{li}v\nabla_{j}v-\nabla_{i}v\nabla_{lj}v+\Big{(}\frac{\nabla_{l}\Delta v}{n-2}+2\nabla_{l}w\Big{)}\delta_{ij}-\nabla_{l}\mathcal{R}_{ij}.$ Since $\nabla_{ijl}v=\nabla_{lij}v+R_{lij}^{m}\nabla_{m}v,$ we have $\nabla_{l}\Delta v=\Delta\nabla_{l}v-\sum\limits_{m}R_{lmm}^{s}\nabla_{s}v.$ Assume that $|\nabla v|\geq 1$. Combining (4.7), (4.8) and (4.9) we obtain (4.10) $\displaystyle\eta^{\prime}F^{ij}\nabla_{ij}v+\eta^{\prime}\frac{\Delta v}{n-2}\sum F^{ii}$ $\displaystyle+\Big{(}\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}\Big{)}F^{ij}\nabla_{i}v\nabla_{j}v+\Big{(}\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}\Big{)}\frac{|\nabla v|^{2}}{n-2}\sum F^{ii}$ $\displaystyle+\frac{e^{2kv}\nabla_{l}\alpha_{0}\nabla_{l}v}{(n-2)^{k}\sigma_{k-1}w}+\frac{4k\alpha_{0}e^{2kv}}{(n-2)^{k}\sigma_{k-1}}-\frac{e^{2v}\nabla_{l}\alpha\nabla_{l}v}{(n-2)w}-\frac{4\alpha e^{2v}}{n-2}$ $\displaystyle\leq$ $\displaystyle\frac{C}{\zeta}\sum F^{ii}-\frac{F^{ij}\nabla_{li}v\nabla_{j}v\nabla_{l}v}{w}-\frac{F^{ij}\nabla_{i}v\nabla_{lj}v\nabla_{l}v}{w}+\frac{2\nabla_{l}w\nabla_{l}v}{w}\sum F^{ii}.$ By (4.3), (4.11) $\displaystyle-\frac{F^{ij}\nabla_{li}v\nabla_{j}v\nabla_{l}v}{w}-\frac{F^{ij}\nabla_{i}v\nabla_{lj}v\nabla_{l}v}{w}+\frac{2\nabla_{l}w\nabla_{l}v}{w}\sum F^{ii}$ $\displaystyle=$ $\displaystyle F^{ij}\Big{(}\frac{\nabla_{i}\zeta}{\zeta}+\eta^{\prime}\nabla_{i}v\Big{)}\nabla_{j}v+F^{ij}\nabla_{i}v\Big{(}\frac{\nabla_{j}\zeta}{\zeta}+\eta^{\prime}\nabla_{j}v\Big{)}$ $\displaystyle-2\nabla_{l}v\Big{(}\frac{\nabla_{l}\zeta}{\zeta}+\eta^{\prime}\nabla_{l}v\Big{)}\sum F^{ii}$ $\displaystyle\leq$ $\displaystyle\frac{C}{\sqrt{\zeta}}|\nabla v|\sum 
F^{ii}+2\eta^{\prime}F^{ij}\nabla_{i}v\nabla_{j}v-2\eta^{\prime}|\nabla v|^{2}\sum F^{ii}.$ Also, we have $F^{ij}\mathcal{W}_{ij}=\frac{\sigma_{k}}{\sigma_{k-1}}+(k-1)\frac{\alpha_{0}e^{2kv}}{(n-2)^{k}\sigma_{k-1}},$ and $F^{ij}\mathcal{W}_{ij}=F^{ij}\nabla_{ij}v-F^{ij}\nabla_{i}v\nabla_{j}v+\frac{\Delta v}{n-2}\sum F^{ii}+|\nabla v|^{2}\sum F^{ii}-F^{ij}\mathcal{R}_{ij}.$ Therefore, (4.12) $\displaystyle\eta^{\prime}F^{ij}\nabla_{ij}v+\eta^{\prime}\frac{\Delta v}{n-2}\sum F^{ii}$ $\displaystyle\geq$ $\displaystyle\eta^{\prime}\frac{\sigma_{k}}{\sigma_{k-1}}+(k-1)\frac{\alpha_{0}e^{2kv}}{(n-2)^{k}\sigma_{k-1}}\eta^{\prime}$ $\displaystyle+\eta^{\prime}F^{ij}\nabla_{i}v\nabla_{j}v-\eta^{\prime}|\nabla v|^{2}\sum F^{ii}-C|\eta^{\prime}|\sum F^{ii}.$ Taking (4.11) and (4.12) into (4.10), and in view of (4.2), we obtain (4.13) $\displaystyle\frac{e^{2kv}}{(n-2)^{k}\sigma_{k-1}}\Big{(}k\eta^{\prime}\alpha_{0}+4k\alpha_{0}+\frac{\nabla_{l}\alpha_{0}\nabla_{l}v}{w}\Big{)}$ $\displaystyle+\Big{(}\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}-\eta^{\prime}\Big{)}F^{ij}\nabla_{i}v\nabla_{j}v+\Big{(}\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}+(n-2)\eta^{\prime}\Big{)}\frac{|\nabla v|^{2}}{n-2}\sum F^{ii}$ $\displaystyle\leq$ $\displaystyle C\Big{(}\frac{1}{\zeta}+\frac{|\nabla v|}{\sqrt{\zeta}}+|\eta^{\prime}|\Big{)}\sum F^{ii}+\Big{(}\eta^{\prime}\alpha+4\alpha+\frac{\nabla_{l}\alpha\nabla_{l}v}{w}\Big{)}\frac{e^{2v}}{n-2}.$ Choose $\eta(v)=\Big{(}\frac{3}{2}+v-\inf_{\\{\zeta>0\\}}v\Big{)}^{-N},$ where $N\geq 1$ is sufficiently large such that $\displaystyle\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}-\eta^{\prime}$ $\displaystyle\geq$ $\displaystyle\eta^{\prime\prime}-\frac{3}{4}\eta^{\prime 2}+(n-2)\eta^{\prime}$ $\displaystyle\geq$ $\displaystyle\frac{N^{2}}{2}\Big{(}\frac{3}{2}+v-\inf_{\\{\zeta>0\\}}v\Big{)}^{-N-2}.$ By Newton-Maclaurin inequality $\sigma_{k}\sigma_{k-2}\leq\frac{(n-k+1)(k-1)}{(n-k+2)k}\sigma_{k-1}^{2},$ we know that $\displaystyle\sum F^{ii}=$ 
$\displaystyle(n-k+1)-\frac{(n-k+2)\sigma_{k}\sigma_{k-2}}{\sigma_{k-1}^{2}}+\frac{(n-k+2)\alpha_{0}e^{2kv}\sigma_{k-2}}{(n-2)^{k}\sigma_{k-1}^{2}}$ $\displaystyle\geq$ $\displaystyle\frac{n-k+1}{k}.$ Therefore, $\bigg{(}\eta^{\prime}\alpha+4\alpha+\frac{\nabla_{l}\alpha\nabla_{l}v}{w}\bigg{)}\frac{e^{2v}}{n-2}\leq C\bigg{(}|\eta^{\prime}|+1+\frac{1}{|\nabla v|}\bigg{)}\sum F^{ii}.$ Also, we know that $k\eta^{\prime}\alpha_{0}+4k\alpha_{0}+\frac{\nabla_{l}\alpha_{0}\nabla_{l}v}{w}\geq\alpha_{0}\bigg{(}-kN\Big{(}\frac{2}{3}\Big{)}^{N+1}+4k-\frac{C}{\alpha_{0}|\nabla v|}\bigg{)}>0$ for $N$ sufficiently large and $|\nabla v|\geq\frac{C}{2k\inf\alpha_{0}}$. Hence, (4.13) reduces to $|\nabla v|^{2}\leq C\Big{(}\frac{1}{\zeta}+\frac{|\nabla v|}{\sqrt{\zeta}}\Big{)}.$ Consequently, (4.14) $\sqrt{\zeta}|\nabla v|\leq C.$ Taking $\zeta$ to be a standard cutoff function in a geodesic ball $B_{r}$ of radius $r>0$ satisfying $|\nabla\zeta|\leq\frac{C}{r},$ by (4.14) we obtain ###### Theorem 4.1. If $v\in C^{3}(B_{r})$ is a $(k-1)$-admissible solution of (4.1) in a geodesic ball $B_{r}\subset M$ of radius $r>0$, then $\sup\limits_{B_{\frac{r}{2}}}|\nabla v|\leq C,$ where $C$ depends on $r^{-1}$, $n$, $k$, $\|v\|_{C^{0}(B_{r})}$, $\|\alpha\|_{C^{1}(B_{r})}$, $\|\alpha_{0}\|_{C^{1}{(B_{r})}}$ and $\inf\limits_{B_{r}}\alpha_{0}$. Let $\zeta\equiv 1$. By (4.14) we have ###### Theorem 4.2. Let $v\in C^{3}(M)\cap C^{1}(\overline{M})$ be a $(k-1)$-admissible solution of (4.1) in $\overline{M}$. Then $\max\limits_{\overline{M}}|\nabla v|\leq C,$ where $C$ depends on $n$, $k$, $\|v\|_{C^{0}(\overline{M})}$, $\max\limits_{\partial M}|\nabla v|$, $\|\alpha\|_{C^{1}(M)}$, $\|\alpha_{0}\|_{C^{1}{(M)}}$ and $\inf\limits_{M}\alpha_{0}$. ### 4.2. Interior and global estimates for second derivatives Let $v\in C^{4}(M)\cap C^{2}(\overline{M})$ be a $(k-1)$-admissible solution of (4.1). 
Consider the test function $\Psi(x)=\zeta e^{\eta(w)}\nabla_{\xi\xi}v$, where $\xi$ is a unit tangent vector to $\overline{M}$ at $x$, $w=\frac{1}{2}|\nabla v|^{2}$, $\zeta$ and $\eta$ are functions to be chosen later, with $\zeta$ smooth and satisfying $0\leq\zeta\leq 1,\quad|\nabla\zeta|\leq b_{0},\quad|\nabla^{2}\zeta|\leq b_{0}.$ Assume that the maximum of $\Psi$ is attained at some interior point $x_{0}\in M$ and for some unit vector $\xi\in T_{x_{0}}\overline{M}$. Choose a smooth local orthonormal frame $e_{1},\ldots,e_{n}$ about $x_{0}$ such that $e_{1}(x_{0})=\xi$ and $\big{\\{}\mathcal{W}_{ij}[v](x_{0})\big{\\}}$ is diagonal. Assume that at $x_{0}$, $\nabla_{11}v\geq 1$. Since $\tilde{\Psi}:=\zeta e^{\eta(w)}\nabla_{11}v$ attains a local maximum at $x_{0}$, we have at $x_{0}$, (4.15) $\frac{\nabla_{i11}v}{\nabla_{11}v}+\nabla_{i}\eta+\frac{\nabla_{i}\zeta}{\zeta}=0,$ and (4.16) $F^{ii}\bigg{(}\frac{\nabla_{ii11}v}{\nabla_{11}v}-\Big{(}\frac{\nabla_{i11}v}{\nabla_{11}v}\Big{)}^{2}+\nabla_{ii}\eta+\frac{\nabla_{ii}\zeta}{\zeta}-\Big{(}\frac{\nabla_{i}\zeta}{\zeta}\Big{)}^{2}\bigg{)}\leq 0.$ By (4.15) and the Cauchy-Schwarz inequality, $\bigg{(}\frac{\nabla_{i11}v}{\nabla_{11}v}\bigg{)}^{2}\leq(1+\epsilon)(\nabla_{i}\eta)^{2}+\Big{(}1+\frac{1}{\epsilon}\Big{)}\Big{(}\frac{\nabla_{i}\zeta}{\zeta}\Big{)}^{2},\quad\epsilon>0.$ Therefore, (4.16) reduces to (4.17) $F^{ii}\bigg{(}\frac{\nabla_{ii11}v}{\nabla_{11}v}+\nabla_{ii}\eta-(1+\epsilon)\big{(}\nabla_{i}\eta\big{)}^{2}\bigg{)}\leq\frac{C}{\zeta^{2}}\sum F^{ii}.$ Similarly, (4.18) $\frac{\Delta\nabla_{11}v}{\nabla_{11}v}+\Delta\eta-(1+\epsilon)|\nabla\eta|^{2}\leq\frac{C}{\zeta^{2}}.$ Besides, we notice that (4.19) $\displaystyle F^{ii}\nabla_{11}\bigg{(}\nabla_{ii}v+\frac{\Delta v}{n-2}\bigg{)}=F^{ii}\nabla_{11}\Big{(}\mathcal{W}_{ii}+\big{(}\nabla_{i}v\big{)}^{2}-|\nabla v|^{2}+\mathcal{R}_{ii}\Big{)}$ $\displaystyle\geq$ $\displaystyle
F^{ii}\nabla_{11}\mathcal{W}_{ii}+2F^{ii}\nabla_{i}v\nabla_{11i}v+2F^{ii}\big{(}\nabla_{1i}v\big{)}^{2}$ $\displaystyle-2\sum\limits_{k}\nabla_{11k}v\nabla_{k}v\sum\limits_{i}F^{ii}-2\sum\limits_{k}\big{(}\nabla_{1k}v\big{)}^{2}\sum\limits_{i}F^{ii}-C\sum F^{ii}.$ By (4.15) and the formula for interchanging order of covariant derivatives, $2F^{ii}\nabla_{i}v\nabla_{11i}v-2\sum\limits_{k}\nabla_{11k}v\nabla_{k}v\sum\limits_{i}F^{ii}\geq-C\bigg{(}|\nabla\eta|+\frac{1}{\zeta}\bigg{)}\nabla_{11}v\sum F^{ii}.$ Applying the formula for interchanging order of covariant derivatives, (4.19) reduces to (4.20) $\displaystyle F^{ii}\nabla_{ii11}v+\frac{\Delta\nabla_{11}v}{n-2}\sum F^{ii}$ $\displaystyle\geq$ $\displaystyle F^{ii}\nabla_{11}\mathcal{W}_{ii}+2F^{ii}\big{(}\nabla_{1i}v\big{)}^{2}-2\sum\limits_{k}\big{(}\nabla_{1k}v\big{)}^{2}\sum\limits_{i}F^{ii}$ $\displaystyle-C\bigg{(}|\nabla\eta|\nabla_{11}v+\frac{\nabla_{11}v}{\zeta}+\sum\limits_{j,l}|\nabla_{jl}v|\bigg{)}\sum F^{ii}.$ Differentiating equation (4.2) twice and by the concavity of $\frac{\sigma_{k}}{\sigma_{k-1}}$, (4.21) $\displaystyle F^{ii}\nabla_{11}\mathcal{W}_{ii}+\frac{\alpha_{0}e^{2kv}}{(n-2)^{k}\sigma_{k-1}}\bigg{(}\frac{\sigma_{k-1}^{ij,rs}}{\sigma_{k-1}}-\frac{2\sigma_{k-1}^{ij}\sigma_{k-1}^{rs}}{\sigma_{k-1}^{2}}\bigg{)}\nabla_{1}\mathcal{W}_{ij}\nabla_{1}\mathcal{W}_{rs}$ $\displaystyle+\frac{\big{(}2\nabla_{1}\alpha_{0}e^{2kv}+4k\alpha_{0}e^{2kv}\nabla_{1}v\big{)}\sigma_{k-1}^{ii}\nabla_{1}\mathcal{W}_{ii}}{(n-2)^{k}\sigma_{k-1}^{2}}$ $\displaystyle-\frac{\nabla_{1}\alpha_{0}e^{2kv}4k\nabla_{1}v+\alpha_{0}e^{2kv}4k^{2}\big{(}\nabla_{1}v\big{)}^{2}+\alpha_{0}e^{2kv}2k\nabla_{11}v+\nabla_{11}\alpha_{0}e^{2kv}}{(n-2)^{k}\sigma_{k-1}}$ $\displaystyle\geq$ $\displaystyle-\frac{\nabla_{11}\alpha e^{2v}+4\nabla_{1}\alpha e^{2v}\nabla_{1}v+4\alpha e^{2v}\big{(}\nabla_{1}v\big{)}^{2}+2\alpha e^{2v}\nabla_{11}v}{n-2}.$ By the concavity of $\sigma_{k-1}^{\frac{1}{k-1}}$, 
$\bigg{(}\frac{\sigma_{k-1}^{ij,rs}}{\sigma_{k-1}}+\Big{(}\frac{1}{k-1}-1\Big{)}\frac{\sigma_{k-1}^{ij}\sigma_{k-1}^{rs}}{\sigma_{k-1}^{2}}\bigg{)}\nabla_{1}\mathcal{W}_{ij}\nabla_{1}\mathcal{W}_{rs}\leq 0.$ Consequently, $\displaystyle\frac{\alpha_{0}e^{2kv}}{(n-2)^{k}\sigma_{k-1}}\bigg{(}\frac{\sigma_{k-1}^{ij,rs}}{\sigma_{k-1}}-\frac{2\sigma_{k-1}^{ij}\sigma_{k-1}^{rs}}{\sigma_{k-1}^{2}}\bigg{)}\nabla_{1}\mathcal{W}_{ij}\nabla_{1}\mathcal{W}_{rs}$ $\displaystyle+\frac{\big{(}2\nabla_{1}\alpha_{0}e^{2kv}+4k\alpha_{0}e^{2kv}\nabla_{1}v\big{)}\sigma_{k-1}^{ii}\nabla_{1}\mathcal{W}_{ii}}{(n-2)^{k}\sigma_{k-1}^{2}}$ $\displaystyle\leq$ $\displaystyle-\frac{\alpha_{0}e^{2kv}}{(n-2)^{k}\sigma_{k-1}}\frac{k}{k-1}\frac{\sigma_{k-1}^{ij}\sigma_{k-1}^{rs}}{\sigma_{k-1}^{2}}\nabla_{1}\mathcal{W}_{ij}\nabla_{1}\mathcal{W}_{rs}$ $\displaystyle+\frac{\big{(}2\nabla_{1}\alpha_{0}e^{2kv}+4k\alpha_{0}e^{2kv}\nabla_{1}v\big{)}\sigma_{k-1}^{ii}\nabla_{1}\mathcal{W}_{ii}}{(n-2)^{k}\sigma_{k-1}^{2}}$ $\displaystyle\leq$ $\displaystyle\frac{(k-1)e^{2kv}\big{(}\nabla_{1}\alpha_{0}+2k\alpha_{0}\nabla_{1}v\big{)}^{2}}{k(n-2)^{k}\alpha_{0}\sigma_{k-1}}.$ By (4.2) and Newton-Maclaurin inequality, $\displaystyle\frac{\alpha_{0}}{\sigma_{k-1}}=\frac{(n-2)^{k}}{e^{2kv}}\bigg{(}\frac{\sigma_{k}}{\sigma_{k-1}}+\frac{\alpha e^{2v}}{n-2}\bigg{)}\leq C\Big{(}\sigma_{k-1}^{\frac{1}{k-1}}+1\Big{)}\leq C(\sigma_{1}+1)\leq C\nabla_{11}v.$ Also, $|\nabla_{ij}v|\leq C\nabla_{11}v.$ Therefore, (4.21) reduces to $F^{ii}\nabla_{11}\mathcal{W}_{ii}\geq-C\nabla_{11}^{2}v.$ Combining this inequality with (4.20), (4.17) and (4.18) yields, (4.22) $\displaystyle F^{ii}\bigg{(}\nabla_{ii}\eta-(1+\epsilon)\big{(}\nabla_{i}\eta\big{)}^{2}\bigg{)}+\frac{1}{n-2}\bigg{(}\Delta\eta-(1+\epsilon)|\nabla\eta|^{2}\bigg{)}\sum F^{ii}$ $\displaystyle\leq$ $\displaystyle C\bigg{(}|\nabla\eta|+\nabla_{11}v+\frac{1}{\zeta^{2}}\bigg{)}\sum F^{ii}.$ Let $\eta(w)=\bigg{(}1-\frac{3w}{4M}\bigg{)}^{-1/2},$ where $w=\frac{|\nabla 
v|^{2}}{2},\quad M=\sup\limits_{\\{\zeta>0\\}}w.$ Choosing $\epsilon=\frac{1}{2}$, we have $\eta^{\prime\prime}-(1+\epsilon)\eta^{\prime 2}=\frac{9}{64M^{2}}\bigg{(}1-\frac{3w}{4M}\bigg{)}^{-\frac{5}{2}}\Bigg{(}3-(1+\epsilon)\bigg{(}1-\frac{3w}{4M}\bigg{)}^{-\frac{1}{2}}\Bigg{)}\geq 0.$ Therefore, $\displaystyle F^{ii}\bigg{(}\nabla_{ii}\eta-(1+\epsilon)\big{(}\nabla_{i}\eta\big{)}^{2}\bigg{)}+\frac{1}{n-2}\bigg{(}\Delta\eta-(1+\epsilon)|\nabla\eta|^{2}\bigg{)}\sum F^{ii}$ $\displaystyle\geq$ $\displaystyle\eta^{\prime}F^{ii}\big{(}\nabla_{iil}v\nabla_{l}v+\big{(}\nabla_{il}v\big{)}^{2}\big{)}+\frac{1}{n-2}\eta^{\prime}\bigg{(}\Delta\nabla_{l}v\nabla_{l}v+\sum\limits_{jl}\big{(}\nabla_{jl}v\big{)}^{2}\bigg{)}\sum F^{ii}.$ By (4.9) and interchanging order of covariant derivatives, $F^{ii}\nabla_{iil}v\nabla_{l}v+\frac{1}{n-2}\Delta\nabla_{l}v\nabla_{l}v\sum F^{ii}\geq-C\nabla_{11}v\sum F^{ii}.$ We note that $\frac{3}{8M}\leq\eta^{\prime}\leq\frac{3}{M},\quad|\nabla\eta|\leq C\nabla_{11}v.$ Hence (4.22) reduces to $\big{(}\nabla_{11}v\big{)}^{2}\leq C\bigg{(}\nabla_{11}v+\frac{1}{\zeta^{2}}\bigg{)}.$ Consequently, (4.23) $\zeta\nabla_{11}v\leq C.$ Finally, choosing $\zeta$ to be an appropriate cutoff function with support in $B_{r}$, we obtain the second order interior estimate for equation (4.1). ###### Theorem 4.3. Let $v\in C^{4}(B_{r})$ be a $(k-1)$-admissible solution of (4.1) in a geodesic ball $B_{r}\subset M$ of radius $r>0$. Then $\sup\limits_{B_{\frac{r}{2}}}|\nabla^{2}v|\leq C,$ where $C$ depends on $r^{-1}$, $n$, $k$, $\|v\|_{C^{1}(B_{r})}$, $\|\alpha\|_{C^{2}(B_{r})}$, $\|\alpha_{0}\|_{C^{2}{(B_{r})}}$ and $\inf\limits_{B_{r}}\alpha_{0}$. If we choose $\zeta\equiv 1$ in (4.23), we obtain the global second order estimate. ###### Theorem 4.4. Let $v\in C^{4}(M)\cap C^{2}(\overline{M})$ be a $(k-1)$-admissible solution of (4.1). 
Then $\max\limits_{\overline{M}}|\nabla^{2}v|\leq C,$ where $C$ depends on $n$, $k$, $\|v\|_{C^{1}(\overline{M})}$, $\max\limits_{\partial M}|\nabla^{2}v|$, $\|\alpha\|_{C^{2}(M)}$, $\|\alpha_{0}\|_{C^{2}{(M)}}$ and $\inf\limits_{M}\alpha_{0}$. ### 4.3. Boundary estimates for second derivatives ###### Theorem 4.5. Let $v\in C^{3}(\overline{M})$ be a $(k-1)$-admissible solution of (4.1) satisfying $v=\varphi\quad\text{ on }\partial M,$ where $\varphi\in C^{\infty}(\overline{M})$. Then we have the estimate $|\nabla^{2}v|\leq C\quad\text{ on }\partial M,$ where $C>0$ is a constant depending on $n$, $k$, $\|v\|_{C^{1}(\overline{M})}$, $\|\varphi\|_{C^{3}(\overline{M})}$, $\|\alpha\|_{C^{1}(\overline{M})}$, $\|\alpha_{0}\|_{C^{1}(\overline{M})}$ and $\min\limits_{\overline{M}}\alpha_{0}$. ###### Proof. Let $x_{0}$ be an arbitrary point on $\partial M$. Choose a smooth local orthonormal frame $e_{1},\ldots,e_{n}$ around $x_{0}$ such that $e_{n}$ is the interior unit normal vector field to $\partial M$ along $\partial M$. We obtain the pure tangential second derivative bound immediately (4.24) $\big{|}\nabla_{st}v(x_{0})\big{|}\leq C,\quad\forall\,s,t<n.$ For the tangential-normal second derivative estimate (4.25) $\big{|}\nabla_{sn}v(x_{0})\big{|}\leq C,\quad\forall\,s<n,$ we use the function $\Theta=\beta\bigg{(}\rho-\frac{N}{2}\rho^{2}\bigg{)},$ where $\beta$ and $N$ are positive constants, and $\rho(x)$ is the distance from $x$ to $\partial M$. 
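We briefly indicate how (4.24) follows; this is a standard sketch, in which the sign convention for the second fundamental form $\Pi$ of $\partial M$ with respect to $e_{n}$ is immaterial. Differentiating the boundary condition $v=\varphi$ twice along $\partial M$, for tangential directions $s,t<n$ we have on $\partial M$ $\nabla_{st}(v-\varphi)=-\Pi_{st}\nabla_{n}(v-\varphi),$ and hence $\big{|}\nabla_{st}v\big{|}\leq\big{|}\nabla_{st}\varphi\big{|}+C\big{|}\nabla(v-\varphi)\big{|}\leq C,$ using the bound on $\|v\|_{C^{1}(\overline{M})}$.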
Set $M_{\delta}=\\{x\in M\,|\,\rho(x)<\delta\\}.$ Choose $\delta>0$ sufficiently small such that $\rho(x)$ is smooth in $M_{\delta}$, on which, we have $|\nabla\rho|=1,\quad|\nabla^{2}\rho|\leq C.$ We consider the linearized operator $\mathcal{L}$ which is locally defined by $\mathcal{L}\eta=F^{ij}\big{(}\nabla_{ij}\eta-2\nabla_{i}v\nabla_{j}\eta\big{)}+\bigg{(}\frac{\Delta\eta}{n-2}+2\nabla_{m}v\nabla_{m}\eta\bigg{)}\sum F^{ii}.$ It can be seen that $\displaystyle\mathcal{L}\Theta\leq\beta\bigg{(}C(1+N\delta)-\frac{N}{n-2}\bigg{)}\sum F^{ii}.$ Choosing $N\geq(n-2)(2C+1),\quad\text{ and }\quad\delta\leq\frac{1}{N},$ we have (4.26) $\mathcal{L}\Theta\leq-\beta\sum F^{ii}\quad\text{ in }M_{\delta}$ and (4.27) $\Theta\geq\frac{\beta}{2}\rho\quad\text{ in }M_{\delta}.$ We note that $\nabla_{ij}(\nabla_{k}v)=\nabla_{kij}v+\Gamma_{ik}^{l}\nabla_{jl}v+\Gamma_{jk}^{l}\nabla_{il}v+\nabla_{k}\Gamma_{ij}^{l}\nabla_{l}v.$ Therefore, for $t<n$, $\displaystyle\mathcal{L}\big{(}\nabla_{t}(v-\varphi)\big{)}$ $\displaystyle=$ $\displaystyle F^{ij}\Big{(}\nabla_{tij}(v-\varphi)+\Gamma_{it}^{l}\nabla_{jl}(v-\varphi)+\Gamma_{jt}^{l}\nabla_{il}(v-\varphi)+\nabla_{t}\Gamma_{ij}^{l}\nabla_{l}(v-\varphi)\Big{)}$ $\displaystyle-2F^{ij}\nabla_{i}v\Big{(}\nabla_{jt}(v-\varphi)+\Gamma_{jt}^{l}\nabla_{l}(v-\varphi)\Big{)}$ $\displaystyle+\Bigg{(}\frac{\nabla_{t}\Delta(v-\varphi)+2\sum_{j}\Gamma_{jt}^{l}\nabla_{jl}(v-\varphi)+\sum_{j}\nabla_{t}\Gamma_{jj}^{l}\nabla_{l}(v-\varphi)}{n-2}$ $\displaystyle+2\nabla_{m}v\Big{(}\nabla_{mt}(v-\varphi)+\Gamma_{mt}^{l}\nabla_{l}(v-\varphi)\Big{)}\Bigg{)}\sum F^{ii}.$ By (4.9), we have $\displaystyle F^{ij}\Big{(}\nabla_{tij}v-2\nabla_{i}v\nabla_{tj}v\Big{)}+\Bigg{(}\frac{\nabla_{t}\Delta v}{n-2}+\nabla_{t}|\nabla v|^{2}\Bigg{)}\sum F^{ii}$ $\displaystyle\leq$ $\displaystyle C\sum F^{ii}+\frac{C\alpha_{0}}{\sigma_{k-1}}\leq C\sum F^{ii}+C\big{(}\sigma_{1}+1\big{)}\leq C\sum F^{ii}+C\sigma_{1}.$ Consequently, for $t<n$, (4.28) 
$\displaystyle\mathcal{L}\big{(}\nabla_{t}(v-\varphi)\big{)}$ $\displaystyle\leq$ $\displaystyle C\sum F^{ii}+C\sigma_{1}+C\sqrt{\sum_{jl}(\nabla_{jl}v)^{2}}\sum F^{ii}$ $\displaystyle\leq$ $\displaystyle C\Bigg{(}1+\sqrt{\sum_{jl}(\nabla_{jl}v)^{2}}\Bigg{)}\sum F^{ii}.$ Similarly, we may verify that (4.29) $\displaystyle\mathcal{L}\big{(}-\nabla_{t}(v-\varphi)\big{)}\leq C\Bigg{(}1+\sqrt{\sum_{jl}(\nabla_{jl}v)^{2}}\Bigg{)}\sum F^{ii}.$ We also have (4.30) $\displaystyle\mathcal{L}\Bigg{(}\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\Bigg{)}=\sum\limits_{s<n}2\nabla_{s}(v-\varphi)\mathcal{L}\big{(}\nabla_{s}(v-\varphi)\big{)}$ $\displaystyle+\sum\limits_{s<n}2F^{ij}\nabla_{i}\big{(}\nabla_{s}(v-\varphi)\big{)}\nabla_{j}\big{(}\nabla_{s}(v-\varphi)\big{)}+\sum\limits_{s<n}\frac{2\Big{|}\nabla\big{(}\nabla_{s}(v-\varphi)\big{)}\Big{|}^{2}}{n-2}\sum F^{ii}$ $\displaystyle\geq$ $\displaystyle-C\Bigg{(}1+\sqrt{\sum_{jl}(\nabla_{jl}v)^{2}}\Bigg{)}\sum F^{ii}+0+\frac{\sum\limits_{l}\sum\limits_{s<n}(\nabla_{ls}v)^{2}}{n-2}\sum F^{ii}-C\sum F^{ii}.$ We observe that if $\nabla_{nn}v\leq 0$, by the fact that $0\leq\sigma_{1}\big{(}\mathcal{W}[v]\big{)}\leq\frac{2(n-1)}{n-2}\Delta v+C,$ we have $0\geq\nabla_{nn}v\geq-\sum\limits_{s<n}\nabla_{ss}v-C.$ If $\nabla_{nn}v\geq 0$, by the concavity of $F$ with respect to $\\{\mathcal{W}_{ij}\\}$, that is, $\sum\limits_{ij}F^{ij}\big{(}\delta_{ij}-\mathcal{W}_{ij}\big{)}\geq F\big{(}I\big{)}-F\big{(}\mathcal{W}[v]\big{)}\geq-C,$ we have $\displaystyle\sum F^{ii}+C\geq\sum\limits_{ij}F^{ij}\mathcal{W}_{ij}$ $\displaystyle\geq$ $\displaystyle\sum\limits_{ij}F^{ij}\bigg{(}\nabla_{ij}v+\frac{\Delta v}{n-2}\delta_{ij}\bigg{)}-C\sum F^{ii}$ $\displaystyle=$ $\displaystyle\sum\limits_{i}\sum\limits_{s<n}F^{is}\nabla_{is}v+\sum\limits_{s<n}F^{sn}\nabla_{sn}v+F^{nn}\nabla_{nn}v+\frac{\Delta v}{n-2}\sum F^{ii}-C\sum F^{ii}$ $\displaystyle\geq$ $\displaystyle-C\sqrt{\sum_{l}\sum_{s<n}(\nabla_{ls}v)^{2}}\sum F^{ii}+\frac{\Delta 
v}{n-2}\sum F^{ii}-C\sum F^{ii},$ which implies that $0\leq\nabla_{nn}v\leq C+C\sqrt{\sum_{l}\sum_{s<n}(\nabla_{ls}v)^{2}}.$ By the above observation, (4.26), (4.28), (4.29) and (4.30), we have $\displaystyle\mathcal{L}\Bigg{(}\Theta-\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\pm\nabla_{t}(v-\varphi)\Bigg{)}$ $\displaystyle\leq$ $\displaystyle-\beta\sum F^{ii}+C\sqrt{\sum\limits_{l}\sum\limits_{s<n}(\nabla_{ls}v)^{2}}\sum F^{ii}-\frac{\sum\limits_{l}\sum\limits_{s<n}(\nabla_{ls}v)^{2}}{n-2}\sum F^{ii}+C\sum F^{ii}.$ If $\sqrt{\sum\limits_{l}\sum\limits_{s<n}(\nabla_{ls}v)^{2}}\leq C(n-2),$ then we obtain $\mathcal{L}\Bigg{(}\Theta-\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\pm\nabla_{t}(v-\varphi)\Bigg{)}\leq-\beta\sum F^{ii}+C^{2}(n-2)\sum F^{ii}+C\sum F^{ii}.$ If $\sqrt{\sum\limits_{l}\sum\limits_{s<n}(\nabla_{ls}v)^{2}}\geq C(n-2),$ then we obtain $\mathcal{L}\Bigg{(}\Theta-\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\pm\nabla_{t}(v-\varphi)\Bigg{)}\leq-\beta\sum F^{ii}+C\sum F^{ii}.$ Thus, choosing $\beta$ sufficiently large, we arrive at $\mathcal{L}\Bigg{(}\Theta-\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\pm\nabla_{t}(v-\varphi)\Bigg{)}\leq 0.$ Also, $\Theta-\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\pm\nabla_{t}(v-\varphi)=0\quad\text{ on }\partial M,$ and by (4.27), we may choose $\beta$ further large depending on $\delta$ such that $\Theta-\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\pm\nabla_{t}(v-\varphi)\geq\frac{\beta}{2}\delta-C\geq 0\quad\text{ on }\\{x\in M\,|\,\rho(x)=\delta\\}.$ By the maximum principle, $\Theta-\sum\limits_{s<n}\big{|}\nabla_{s}(v-\varphi)\big{|}^{2}\pm\nabla_{t}(v-\varphi)\geq 0\quad\text{ in }M_{\delta}.$ Hence we obtain (4.25). 
Next, we derive the double normal second derivative estimate (4.31) $\nabla_{nn}v(x_{0})\leq C.$ By (4.1), that is, $\sigma_{k}\big{(}\mathcal{W}[v]\big{)}+\frac{\alpha e^{2v}}{n-2}\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}=\frac{\alpha_{0}e^{2kv}}{(n-2)^{k}},$ if $\nabla_{nn}v(x_{0})$ is sufficiently large, in view of (4.24) and (4.25), we have at $x_{0}$, $\sigma_{k}\bigg{(}\frac{\Delta v}{n-2}\delta_{ij}-C\delta_{ij}\bigg{)}-C\sigma_{k-1}\bigg{(}\frac{\Delta v}{n-2}\delta_{ij}+\nabla_{nn}v\delta_{ij}+C\delta_{ij}\bigg{)}\leq C,$ which further implies that $\bigg{(}\frac{1}{n-2}\nabla_{nn}v-C\bigg{)}^{k}\sigma_{k}(I)-C\bigg{(}\frac{n-1}{n-2}\nabla_{nn}v+C\bigg{)}^{k-1}\sigma_{k-1}(I)\leq C.$ Hence we obtain (4.31). ∎ ###### Remark 4.6. When deriving the estimates, we can also directly obtain $\frac{1}{\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}}\leq C$ by (4.1). In fact, we have $\displaystyle\inf\alpha_{0}\leq$ $\displaystyle\alpha_{0}(x)=\sigma_{k}\bigg{(}(n-2)e^{-2v}\mathcal{W}[v]\bigg{)}+\alpha(x)\sigma_{k-1}\bigg{(}(n-2)e^{-2v}\mathcal{W}[v]\bigg{)}$ $\displaystyle\leq$ $\displaystyle\sigma_{k-1}^{\frac{k}{k-1}}\bigg{(}(n-2)e^{-2v}\mathcal{W}[v]\bigg{)}+\sup\alpha\sigma_{k-1}\bigg{(}(n-2)e^{-2v}\mathcal{W}[v]\bigg{)}.$ Next, we shall use the continuity method and degree theory to prove the existence of a smooth $(k-1)$-admissible solution to the Dirichlet problem (4.32) $\left\\{\begin{aligned} F\big{(}\mathcal{W}[v]\big{)}=&-\frac{\alpha(x)}{n-2}e^{2v}\quad&\text{ in }\Omega,\\\ v=&\varphi\quad&\text{ on }\partial\Omega,\end{aligned}\right.$ where $\Omega$ is a bounded domain in $\mathbb{R}^{n}$ with smooth compact boundary which is composed of closed hypersurfaces. ### 4.4. Existence of subsolutions In this subsection, we construct a subsolution synthesizing the ideas of Guan [6] and Guan [8].
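The construction rests on an elementary computation for a radial barrier, which we record first (a direct calculation, for the Euclidean metric $g$ on $\mathbb{R}^{n}$): for $w(x)=-\ln\big(R^{2}-|x|^{2}\big)-C$ and $|x|<R$, $\nabla_{i}w=\frac{2x_{i}}{R^{2}-|x|^{2}},\qquad\nabla_{ij}w=\frac{2\delta_{ij}}{R^{2}-|x|^{2}}+\frac{4x_{i}x_{j}}{\big(R^{2}-|x|^{2}\big)^{2}}\geq\frac{2\delta_{ij}}{R^{2}-|x|^{2}},$ and hence, taking traces and using $|x|^{2}\leq R^{2}$, $\nabla^{2}w+\frac{\Delta w}{n-2}g\geq\frac{(4n-4)R^{2}-4(n-2)|x|^{2}}{(n-2)\big(R^{2}-|x|^{2}\big)^{2}}g\geq\frac{4R^{2}}{(n-2)\big(R^{2}-|x|^{2}\big)^{2}}g.$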
We note that there exist sufficiently large $R>r>0$ such that $\Omega\subset B_{r}(0)$, and $w(x)=-\ln\big{(}R^{2}-|x|^{2}\big{)}-C$ satisfies $\mathcal{W}[w]\geq\nabla^{2}w+\frac{\Delta w}{n-2}g\geq\frac{4R^{2}}{(n-2)\big{(}R^{2}-|x|^{2}\big{)}^{2}}g.$ It follows that $(n-2)e^{-2w}\mathcal{W}[w]\geq e^{2C}4R^{2}g>e^{2C}2R^{2}g.$ Choosing $C>0$ sufficiently large such that (4.33) $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}(n-2)e^{-2w}\mathcal{W}[w]\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}(n-2)e^{-2w}\mathcal{W}[w]\bigg{)}}$ $\displaystyle\geq$ $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}e^{2C}2R^{2}g\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}e^{2C}2R^{2}g\bigg{)}}>-\alpha(x)\quad\text{ on }\overline{\Omega}$ and $w\leq-\ln\big{(}R^{2}-r^{2}\big{)}-C<\varphi\quad\text{ on }\partial\Omega.$ Also, we consider $\eta=2\ln\delta-\ln(\rho+\delta^{2})+\varphi.$ For $\delta>0$ sufficiently small, $\mathcal{W}[\eta]\geq\nabla^{2}\eta+\frac{\Delta\eta}{n-2}g\geq\frac{1}{2(n-2)(\rho+\delta^{2})^{2}}g\quad\text{ on }\\{0\leq\rho\leq\delta\\}.$ Consequently, $(n-2)e^{-2\eta}\mathcal{W}[\eta]\geq\frac{e^{-2\varphi}}{2\delta^{4}}g>\frac{e^{-2\varphi}}{4\delta^{4}}g\quad\text{ on }\\{0\leq\rho\leq\delta\\}.$ Choosing $\delta>0$ further small such that (4.34) $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}(n-2)e^{-2\eta}\mathcal{W}[\eta]\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}(n-2)e^{-2\eta}\mathcal{W}[\eta]\bigg{)}}$ $\displaystyle\geq$ $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}\frac{e^{-2\varphi}}{4\delta^{4}}g\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}\frac{e^{-2\varphi}}{4\delta^{4}}g\bigg{)}}>-\alpha(x)\quad\text{ on }\\{0\leq\rho\leq\delta\\},$ and $\eta<2\ln\delta-\ln\delta+\varphi<w\quad\text{ on }\rho=\delta.$ Now we need a lemma from Guan [8]. ###### Lemma 4.7. For all $\epsilon>0$, there is an even function $h(t)\in C^{\infty}(\mathbb{R})$ such that 1. 
(1) $h(t)\geq|t|$ for all $t\in\mathbb{R}$, and $h(t)=|t|$ for all $|t|\geq\epsilon$; 2. (2) $\big{|}h^{\prime}(t)\big{|}\leq 1$ and $h^{\prime\prime}(t)\geq 0$ for all $t\in\mathbb{R}$, and $h^{\prime}(t)\geq 0$ for all $t\geq 0$. Define $\underline{v}=\left\\{\begin{aligned} &\frac{1}{2}(\eta+w)+\frac{1}{2}h(\eta-w)\quad\text{ on }\\{0\leq\rho\leq\delta\\}\\\ &w\quad\text{ on }\\{\rho\geq\delta\\}\end{aligned}\right..$ Direct calculation shows that on $\\{0\leq\rho\leq\delta\\}$, $\nabla_{i}\underline{v}=\frac{1}{2}\Big{(}\nabla_{i}\eta+\nabla_{i}w\Big{)}+\frac{1}{2}h^{\prime}(\eta-w)\Big{(}\nabla_{i}\eta-\nabla_{i}w\Big{)},$ $\displaystyle\nabla_{ij}\underline{v}=$ $\displaystyle\frac{1}{2}\Big{(}\nabla_{ij}\eta+\nabla_{ij}w\Big{)}+\frac{1}{2}h^{\prime}\Big{(}\nabla_{ij}\eta-\nabla_{ij}w\Big{)}$ $\displaystyle+\frac{1}{2}h^{\prime\prime}\Big{(}\nabla_{i}\eta-\nabla_{i}w\Big{)}\Big{(}\nabla_{j}\eta-\nabla_{j}w\Big{)}.$ It follows that within $\\{0\leq\rho\leq\delta\\}$, $\displaystyle\mathcal{W}[\underline{v}]\geq$ $\displaystyle\nabla^{2}\underline{v}+\frac{\Delta\underline{v}}{n-2}g$ $\displaystyle\geq$ $\displaystyle\frac{1}{2}\Big{(}\nabla^{2}\eta+\nabla^{2}w\Big{)}+\frac{1}{2}h^{\prime}\Big{(}\nabla^{2}\eta-\nabla^{2}w\Big{)}$ $\displaystyle+\frac{1}{2(n-2)}\Big{(}\Delta\eta+\Delta w\Big{)}g+\frac{1}{2(n-2)}h^{\prime}\Big{(}\Delta\eta-\Delta w\Big{)}g$ $\displaystyle=$ $\displaystyle\frac{1+h^{\prime}}{2}\bigg{(}\nabla^{2}\eta+\frac{\Delta\eta}{n-2}g\bigg{)}+\frac{1-h^{\prime}}{2}\bigg{(}\nabla^{2}w+\frac{\Delta w}{n-2}g\bigg{)}$ $\displaystyle\geq$ $\displaystyle\frac{1+h^{\prime}}{2}\frac{1}{2(n-2)(\rho+\delta^{2})^{2}}g+\frac{1-h^{\prime}}{2}\frac{4R^{2}}{(n-2)\big{(}R^{2}-|x|^{2}\big{)}^{2}}g.$ Since on $\\{0\leq\rho\leq\delta\\}$, $\underline{v}\leq\sup\\{\eta,w\\}+\frac{\epsilon}{2},$ we have $\displaystyle(n-2)e^{-2\underline{v}}\mathcal{W}[\underline{v}]$ $\displaystyle\geq$ 
$\displaystyle(n-2)e^{-2\underline{v}}\frac{1+h^{\prime}}{2}\frac{1}{2(n-2)(\rho+\delta^{2})^{2}}g+(n-2)e^{-2\underline{v}}\frac{1-h^{\prime}}{2}\frac{4R^{2}}{(n-2)\big{(}R^{2}-|x|^{2}\big{)}^{2}}g$ $\displaystyle\geq$ $\displaystyle\left\\{\begin{aligned} (n-2)e^{-2\eta-\epsilon}\frac{1}{2}\frac{1}{2(n-2)(\rho+\delta^{2})^{2}}g\quad\text{ on }\\{0\leq\rho\leq\delta\\}\cap\\{\eta\geq w\\}\\\ (n-2)e^{-2w-\epsilon}\frac{1}{2}\frac{4R^{2}}{(n-2)\big{(}R^{2}-|x|^{2}\big{)}^{2}}g\quad\text{ on }\\{0\leq\rho\leq\delta\\}\cap\\{\eta\leq w\\}\end{aligned}\right.$ $\displaystyle\geq$ $\displaystyle\left\\{\begin{aligned} \frac{1}{4\delta^{4}}e^{-2\varphi-\epsilon}g\quad\text{ on }\\{0\leq\rho\leq\delta\\}\cap\\{\eta\geq w\\}\\\ e^{2C-\epsilon}2R^{2}g\quad\text{ on }\\{0\leq\rho\leq\delta\\}\cap\\{\eta\leq w\\}\end{aligned}\right..$ In view of (4.33) and (4.34), we may choose $\epsilon>0$ sufficiently small such that $\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}(n-2)e^{-2\underline{v}}\mathcal{W}[\underline{v}]\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}(n-2)e^{-2\underline{v}}\mathcal{W}[\underline{v}]\bigg{)}}>-\alpha(x)\quad\text{ on }\overline{\Omega}.$ ### 4.5. Preliminary estimates For a $(k-1)$-admissible function $v\in C^{2}(\overline{\Omega})$ with $v\geq\underline{v}\text{ in }\overline{\Omega}\quad\text{ and }v=\varphi\text{ on }\partial\Omega,$ we change $v$ back into $u$ by $v=\frac{2}{n-2}\ln u$ to see that $\sigma_{1}\big{(}W[u]\big{)}=2(n-1)\Delta u\geq 0.$ Let $h$ be the solution to $\left\\{\begin{aligned} \Delta h=&0\quad&\text{ in }\Omega,\\\ h=&e^{\frac{n-2}{2}\varphi}\quad&\text{ on }\partial\Omega.\end{aligned}\right.$ By the maximum principle, $u\leq h$ in $\Omega$ and hence $v\leq\overline{v}:=\frac{2}{n-2}\ln h\quad\text{ in }\Omega.$ Then we have $\nabla_{\nu}\underline{v}\leq\nabla_{\nu}v\leq\nabla_{\nu}\overline{v}\quad\text{ on }\partial\Omega,$ where $\nu$ is the interior unit normal to $\partial\Omega$. ### 4.6. 
Existence of solutions Denote $\displaystyle G[v]:=$ $\displaystyle G(\nabla^{2}v,\nabla v,v)=F\big{(}\mathcal{W}[v]\big{)},$ $\displaystyle G^{ij}[v]:=$ $\displaystyle G^{ij}(\nabla^{2}v,\nabla v,v)=\frac{\partial G}{\partial\nabla_{ij}v},$ $\displaystyle G^{i}[v]:=$ $\displaystyle G^{i}(\nabla^{2}v,\nabla v,v)=\frac{\partial G}{\partial\nabla_{i}v},$ $\displaystyle G_{v}[v]:=$ $\displaystyle G_{v}(\nabla^{2}v,\nabla v,v)=\frac{\partial G}{\partial v}.$ Let $C_{0}$ be a positive constant such that (4.35) $G[\underline{v}]=G(\nabla^{2}\underline{v},\nabla\underline{v},\underline{v})>-C_{0}\quad\text{ in }\overline{\Omega}.$ For $t\in[0,1]$, we consider the following two equations (similar construction of the equations can be found in Su [14]). (4.36) $\left\\{\begin{aligned} G[v]=&(1-t)G[\underline{v}]-tC_{0}\quad&\text{ in }\Omega,\\\ v=&\varphi\quad&\text{ on }\partial\Omega.\end{aligned}\right.$ (4.37) $\left\\{\begin{aligned} G[v]=&-(1-t)C_{0}-t\frac{\alpha(x)}{n-2}e^{2v}\quad&\text{ in }\Omega,\\\ v=&\varphi\quad&\text{ on }\partial\Omega.\end{aligned}\right.$ ###### Remark 4.8. For $x\in\overline{\Omega}$ and a $C^{2}$ function $v$ which is $(k-1)$-admissible near $x$, we have $G_{v}(x)=-\frac{\alpha_{0}(x)e^{2kv}2k}{(n-2)^{k}}\frac{1}{\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}}<0$ if $\alpha_{0}(x)>0$. ###### Lemma 4.9. For $t\in[0,1]$, let $\underline{V}$ and $v$ be any $(k-1)$-admissible subsolution and solution of (4.36). Then $v\geq\underline{V}$ in $\Omega$. In particular, (4.36) has at most one $(k-1)$-admissible solution. ###### Proof. 
Suppose that $\underline{V}-v$ achieves a positive maximum at $x_{0}\in\Omega$, at which we have $\underline{V}(x_{0})>v(x_{0}),\quad\nabla\underline{V}(x_{0})=\nabla v(x_{0}),\quad\nabla^{2}\underline{V}(x_{0})\leq\nabla^{2}v(x_{0}).$ It follows that $\mathcal{W}[\underline{V}](x_{0})\leq\mathcal{W}[v](x_{0})$, and therefore $\displaystyle F\big{(}\mathcal{W}[\underline{V}]\big{)}(x_{0})=$ $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\big{(}\mathcal{W}[\underline{V}]\big{)}(x_{0})-\frac{\alpha_{0}(x_{0})e^{2k\underline{V}(x_{0})}}{(n-2)^{k}}\frac{1}{\sigma_{k-1}\big{(}\mathcal{W}[\underline{V}]\big{)}(x_{0})}$ $\displaystyle\leq$ $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\big{(}\mathcal{W}[v]\big{)}(x_{0})-\frac{\alpha_{0}(x_{0})e^{2k\underline{V}(x_{0})}}{(n-2)^{k}}\frac{1}{\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}(x_{0})}$ $\displaystyle<$ $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\big{(}\mathcal{W}[v]\big{)}(x_{0})-\frac{\alpha_{0}(x_{0})e^{2kv(x_{0})}}{(n-2)^{k}}\frac{1}{\sigma_{k-1}\big{(}\mathcal{W}[v]\big{)}(x_{0})}$ $\displaystyle=$ $\displaystyle F\big{(}\mathcal{W}[v]\big{)}(x_{0}).$ But $F\big{(}\mathcal{W}[\underline{V}]\big{)}(x_{0})\geq(1-t)G[\underline{v}](x_{0})-tC_{0}=F\big{(}\mathcal{W}[v]\big{)}(x_{0}),$ which is a contradiction. ∎ ###### Theorem 4.10. For $t\in[0,1]$, (4.36) has a unique $(k-1)$-admissible solution $v\geq\underline{v}$. ###### Proof. By Lemma 4.9, we immediately obtain uniqueness. For existence, we use the standard continuity method. By assumption (4.35), $\underline{v}$ is a subsolution of (4.36). We note that the $C^{2}$ estimate for a $(k-1)$-admissible solution $v\geq\underline{v}$ of (4.36) implies uniform ellipticity of this equation and hence gives a $C^{2,\alpha}$ estimate by Evans-Krylov theory [3, 11] (4.38) $\|v\|_{C^{2,\alpha}(\overline{\Omega})}\leq C,$ where $C$ is independent of $t$.
Denote $C_{0}^{2,\alpha}(\overline{\Omega})=\\{w\in C^{2,\alpha}(\overline{\Omega})\,|\,w=0\text{ on }\partial\Omega\\},$ and consider the open subset of $C_{0}^{2,\alpha}(\overline{\Omega})$ $\mathcal{U}=\\{w\in C_{0}^{2,\alpha}(\overline{\Omega})\,|\,\underline{v}+w\text{ is }(k-1)\text{-admissible in }\overline{\Omega}\\}.$ Define $\mathcal{L}:\mathcal{U}\times[0,1]\rightarrow C^{\alpha}(\overline{\Omega})$, $\mathcal{L}(w,t)=G[\underline{v}+w]-(1-t)G[\underline{v}]+tC_{0},$ and set $\mathcal{S}=\\{t\in[0,1]\,|\,\mathcal{L}(w,t)=0\text{ has a solution }w\text{ in }\mathcal{U}\\}.$ $\mathcal{S}\neq\emptyset$ since $\mathcal{L}(0,0)=0$. $\mathcal{S}$ is open in $[0,1]$. In fact, for any $t_{0}\in\mathcal{S}$, there exists $w_{0}\in\mathcal{U}$ such that $\mathcal{L}(w_{0},t_{0})=0$. Note that the Fréchet derivative of $\mathcal{L}$ with respect to $w$ at $(w_{0},t_{0})$ is a linear elliptic operator from $C^{2,\alpha}_{0}(\overline{\Omega})$ to $C^{\alpha}(\overline{\Omega})$, $\mathcal{L}_{w}\big{|}_{(w_{0},t_{0})}(h)=G^{ij}[\underline{v}+w_{0}]\nabla_{ij}h+G^{i}[\underline{v}+w_{0}]\nabla_{i}h+G_{v}[\underline{v}+w_{0}]h.$ Remark 4.8 implies that $\mathcal{L}_{w}\big{|}_{(w_{0},t_{0})}$ is invertible. Thus a neighborhood of $t_{0}$ is also contained in $\mathcal{S}$ by the implicit function theorem. $\mathcal{S}$ is closed in $[0,1]$. In fact, let $t_{i}$ be a sequence in $\mathcal{S}$ converging to $t_{0}\in[0,1]$ and $w_{i}\in\mathcal{U}$ be the unique solution to $\mathcal{L}(w_{i},t_{i})=0$. Lemma 4.9 implies $w_{i}\geq 0$. By (4.38), $v_{i}:=\underline{v}+w_{i}$ is a bounded sequence in $C^{2,\alpha}(\overline{\Omega})$, which possesses a subsequence converging to a $(k-1)$-admissible solution $v_{0}$ of (4.36). Since $w_{0}=v_{0}-\underline{v}\in\mathcal{U}$ and $\mathcal{L}(w_{0},t_{0})=0$, we know that $t_{0}\in\mathcal{S}$. ∎ Next we may assume that $\underline{v}$ is not a solution of (4.32). ###### Lemma 4.11.
If $v\geq\underline{v}$ is a $(k-1)$-admissible solution of (4.37), then $v>\underline{v}$ in $\Omega$ and $\nabla_{\nu}(v-\underline{v})>0$ on $\partial\Omega$. ###### Proof. We note that $G[\underline{v}]-G[v]\geq-t\frac{\alpha(x)}{n-2}(e^{2\underline{v}}-e^{2v}),$ and $\displaystyle G[\underline{v}]-G[v]=F\big{(}\mathcal{W}[\underline{v}]\big{)}-F\big{(}\mathcal{W}[v]\big{)}$ $\displaystyle=$ $\displaystyle\int_{0}^{1}\frac{d}{ds}\Bigg{(}\frac{\sigma_{k}}{\sigma_{k-1}}\Big{(}(1-s)\mathcal{W}[v]+s\mathcal{W}[\underline{v}]\Big{)}-\frac{\alpha_{0}(x)e^{2k((1-s)v+s\underline{v})}}{(n-2)^{k}\sigma_{k-1}\Big{(}(1-s)\mathcal{W}[v]+s\mathcal{W}[\underline{v}]\Big{)}}\Bigg{)}ds$ $\displaystyle=$ $\displaystyle\underbrace{\int_{0}^{1}\Bigg{(}\Big{(}\frac{\sigma_{k}}{\sigma_{k-1}}\Big{)}^{ij}+\frac{\alpha_{0}(x)e^{2k((1-s)v+s\underline{v})}\sigma_{k-1}^{ij}}{(n-2)^{k}\sigma_{k-1}^{2}}\Bigg{)}\Big{(}(1-s)\mathcal{W}[v]+s\mathcal{W}[\underline{v}]\Big{)}ds}_{\text{denoted by }a_{ij}}$ $\displaystyle\cdot\Big{(}\mathcal{W}_{ij}[\underline{v}]-\mathcal{W}_{ij}[v]\Big{)}-\underbrace{\int_{0}^{1}\frac{\alpha_{0}(x)e^{2k((1-s)v+s\underline{v})}2k}{(n-2)^{k}\sigma_{k-1}\Big{(}(1-s)\mathcal{W}[v]+s\mathcal{W}[\underline{v}]\Big{)}}ds}_{\text{denoted by }c}\cdot(\underline{v}-v)$ $\displaystyle=$ $\displaystyle a_{ij}\bigg{(}\nabla_{ij}(\underline{v}-v)+\frac{\Delta(\underline{v}-v)}{n-2}\delta_{ij}\bigg{)}-a_{ij}\nabla_{i}\underline{v}\nabla_{j}(\underline{v}-v)-a_{ij}\nabla_{j}v\nabla_{i}(\underline{v}-v)$ $\displaystyle+\nabla(\underline{v}+v)\cdot\nabla(\underline{v}-v)\sum a_{ii}-c(\underline{v}-v).$ Applying the maximum principle and Lemma H (see p. 212 of [4]), we prove the lemma. ∎ ###### Theorem 4.12. For any $t\in[0,1]$, there is a $(k-1)$-admissible solution $v\geq\underline{v}$ to the Dirichlet problem (4.37). ###### Proof.
We obtain by classical Schauder theory the $C^{4,\alpha}$ estimate (4.39) $\|v\|_{C^{4,\alpha}(\overline{\Omega})}<C_{4}.$ Also, we have (4.40) $\text{dist}\big{(}\lambda(\mathcal{W}[v]),\partial\Gamma_{k-1}\big{)}>c_{2}>0\quad\text{ in }\overline{\Omega},$ where $C_{4}$ and $c_{2}$ are independent of $t$. Denote $C_{0}^{4,\alpha}(\overline{\Omega})=\big{\\{}w\in C^{4,\alpha}(\overline{\Omega})\,|\,w=0\text{ on }\partial\Omega\big{\\}}$ and the open bounded subset of $C_{0}^{4,\alpha}(\overline{\Omega})$ $\mathcal{O}=\Bigg{\\{}w\in C_{0}^{4,\alpha}(\overline{\Omega})\left|\footnotesize\begin{aligned} &w>0\text{ in }\Omega,\nabla_{\nu}w>0\text{ on }\partial\Omega,\|w{\|}_{C^{4,\alpha}(\overline{\Omega})}<C_{4}+\|\underline{v}\|_{C^{4,\alpha}(\overline{\Omega})},\\\ &\underline{v}+w\text{ is }(k-1)\text{-admissible in }\overline{\Omega},\text{dist}\Big{(}\lambda\big{(}\mathcal{W}[\underline{v}+w]\big{)},\partial\Gamma_{k-1}\Big{)}>c_{2}\text{ in }\overline{\Omega}\end{aligned}\right.\Bigg{\\}}.$ Define a map $\mathcal{M}_{t}(w):\mathcal{O}\times[0,1]\rightarrow C^{2,\alpha}(\overline{\Omega})$, $\mathcal{M}_{t}(w)=G[\underline{v}+w]+(1-t)C_{0}+t\frac{\alpha(x)}{n-2}e^{2(\underline{v}+w)}.$ By Theorem 4.10 and Lemma 4.9, there is a unique $(k-1)$-admissible solution $v^{0}$ of (4.36) at $t=1$, which is also the unique $(k-1)$-admissible solution of (4.37) at $t=0$. By Lemma 4.9, $w^{0}=v^{0}-\underline{v}\geq 0$ in $\Omega$. Consequently, $w^{0}>0$ in $\Omega$ and $\nabla_{\nu}w^{0}>0$ on $\partial\Omega$ by Lemma 4.11. We also note that $\underline{v}+w^{0}$ satisfies (4.39) and (4.40). Thus, $w^{0}\in\mathcal{O}$. By Lemma 4.11, (4.39) and (4.40), $\mathcal{M}_{t}(w)=0$ has no solution on $\partial\mathcal{O}$ for any $t\in[0,1]$. Since $\mathcal{M}_{t}$ is uniformly elliptic on $\mathcal{O}$ independent of $t$, the degree of $\mathcal{M}_{t}$ on $\mathcal{O}$ at $0$ can be defined independent of $t$. This degree is nonzero at $t=0$. 
In fact, $\mathcal{M}_{0}(w)=0$ has a unique solution $w^{0}\in\mathcal{O}$. The Fréchet derivative of $\mathcal{M}_{0}$ with respect to $w$ at $w^{0}$ is a linear elliptic operator from $C^{4,\alpha}_{0}(\overline{\Omega})$ to $C^{2,\alpha}(\overline{\Omega})$, $\mathcal{M}_{0,w}|_{w^{0}}(h)=G^{ij}[v^{0}]\nabla_{ij}h+G^{i}[v^{0}]\nabla_{i}h+G_{v}[v^{0}]h.$ By Remark 4.8, $G_{v}[v^{0}]<0$ in $\overline{\Omega}$. Thus, $\mathcal{M}_{0,w}|_{w^{0}}$ is invertible. The degree theory in [12] implies that the degree at $t=0$ is nonzero, which implies that (4.37) has at least one $(k-1)$-admissible solution $v\geq\underline{v}$ for any $t\in[0,1]$. ∎ ## 5\. Fully nonlinear Loewner-Nirenberg problem for general equations In this section, we discuss the fully nonlinear Loewner-Nirenberg problem related to the general equation (1.10). ### 5.1. The case of smooth compact boundary consisting of closed hypersurfaces We first give the definition of supersolutions and subsolutions to equation (1.10). ###### Definition 5.1. A function $0<\underline{u}\in C^{2}(\Omega)$ is a subsolution of (1.10) in $\Omega$ if $\displaystyle\lambda\big{(}W[\underline{u}]\big{)}\in\Gamma_{k-1}\text{ and }$ $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}\underline{u}^{-\frac{n+2}{n-2}}W[\underline{u}]\bigg{)}$ $\displaystyle+\alpha(x)\sigma_{k-1}\bigg{(}\frac{2}{n-2}\underline{u}^{-\frac{n+2}{n-2}}W[\underline{u}]\bigg{)}\geq\alpha_{0}(x)\quad\text{ in }\Omega.$ A function $0<\overline{u}\in C^{2}(\Omega)$ is a supersolution of (1.10) in $\Omega$ if $\displaystyle\text{either }\lambda\big{(}W[\overline{u}]\big{)}\notin\Gamma_{k-1}\text{ or }$ $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}\overline{u}^{-\frac{n+2}{n-2}}W[\overline{u}]\bigg{)}$ $\displaystyle+\alpha(x)\sigma_{k-1}\bigg{(}\frac{2}{n-2}\overline{u}^{-\frac{n+2}{n-2}}W[\overline{u}]\bigg{)}\leq\alpha_{0}(x)\quad\text{ in }\Omega.$ ###### Proposition 5.2.
If $\underline{u}$ is a subsolution of (1.10) satisfying (5.1) $\lambda\big{(}W[\underline{u}]\big{)}\in\overline{\Gamma}_{k}\quad\text{ in }\Omega,$ so does $c\underline{u}$ for any constant $0<c<1$. If $\overline{u}$ is a supersolution of (1.10) satisfying $\lambda\big{(}W[\overline{u}]\big{)}\notin\Gamma_{k-1}\quad\text{everywhere in }\Omega,$ then $c\overline{u}$ is a supersolution of (1.10) for any $c>0$. ###### Proof. When $\lambda\big{(}W[\underline{u}]\big{)}\in\overline{\Gamma}_{k}$, $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}\frac{2}{n-2}c^{-\frac{4}{n-2}}\underline{u}^{-\frac{n+2}{n-2}}W[\underline{u}]\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}\frac{2}{n-2}c^{-\frac{4}{n-2}}\underline{u}^{-\frac{n+2}{n-2}}W[\underline{u}]\bigg{)}}$ $\displaystyle\geq$ $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}\frac{2}{n-2}\underline{u}^{-\frac{n+2}{n-2}}W[\underline{u}]\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}\frac{2}{n-2}\underline{u}^{-\frac{n+2}{n-2}}W[\underline{u}]\bigg{)}}\geq-\alpha(x).$ Hence we proved the first statement. The second statement is straightforward by definition. ∎ Then we observe the following maximum principle of (1.10). ###### Theorem 5.3. Let $M$ be a smooth compact manifold with boundary. Suppose that $\underline{u}$ is a subsolution of $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\alpha(x)\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\alpha_{0}(x)$ satisfying $\lambda\big{(}W[\underline{u}]\big{)}\in\Gamma_{k}\quad\text{ in }\Omega,$ and $\overline{u}$ is a supersolution of $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\beta(x)\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\beta_{0}(x),$ with $\alpha(x)\leq\beta(x)$ and $\alpha_{0}(x)\geq\beta_{0}(x)$ on $M$. If $\underline{u}\leq\overline{u}$ on $\partial M$, then $\underline{u}\leq\overline{u}$ on $M$. ###### Proof. 
Suppose that $\underline{u}>\overline{u}$ somewhere in the interior of $M$. Let $C$ be the maximum of $\frac{\underline{u}}{\overline{u}}$ on $M$, which is attained at $x_{0}$ in the interior of $M$. Since $C>1$, by Proposition 5.2, we know that $\underline{u}_{1}:=\frac{\underline{u}}{C}$ satisfies $\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}\frac{2}{n-2}\underline{u}_{1}^{-\frac{n+2}{n-2}}W[\underline{u}_{1}]\bigg{)}-\frac{\alpha_{0}(x)}{\sigma_{k-1}\bigg{(}\frac{2}{n-2}\underline{u}_{1}^{-\frac{n+2}{n-2}}W[\underline{u}_{1}]\bigg{)}}>-\alpha(x).$ On the other hand, since $\underline{u}_{1}(x_{0})=\overline{u}(x_{0})$ while $\underline{u}_{1}\leq\overline{u}$ near $x_{0}$, thus at $x_{0}$ $\nabla\underline{u}_{1}(x_{0})=\nabla\overline{u}(x_{0}),\quad\nabla^{2}\underline{u}_{1}(x_{0})\leq\nabla^{2}\overline{u}(x_{0}).$ It follows that $W[\underline{u}_{1}](x_{0})\leq W[\overline{u}](x_{0})$, which further implies $\displaystyle-\alpha(x_{0})<\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}\frac{2}{n-2}\underline{u}_{1}^{-\frac{n+2}{n-2}}W[\underline{u}_{1}]\bigg{)}(x_{0})-\frac{\alpha_{0}(x_{0})}{\sigma_{k-1}\bigg{(}\frac{2}{n-2}\underline{u}_{1}^{-\frac{n+2}{n-2}}W[\underline{u}_{1}]\bigg{)}(x_{0})}$ $\displaystyle\leq$ $\displaystyle\frac{\sigma_{k}}{\sigma_{k-1}}\bigg{(}\frac{2}{n-2}\overline{u}^{-\frac{n+2}{n-2}}W[\overline{u}]\bigg{)}(x_{0})-\frac{\beta_{0}(x_{0})}{\sigma_{k-1}\bigg{(}\frac{2}{n-2}\overline{u}^{-\frac{n+2}{n-2}}W[\overline{u}]\bigg{)}(x_{0})}\leq-\beta(x_{0}),$ which is a contradiction. ∎ Similar to Proposition 2.4, we observe the following fact. ###### Proposition 5.4. 
For any real number $\alpha$, any positive constant $\alpha_{0}$, any fixed $s>0$, there exists a positive constant $c$ such that $u(x)=c\Big{(}s^{2}-{|x-x_{0}|}^{2}\Big{)}^{1-\frac{n}{2}}\quad\text{ in }B_{s}(x_{0}),$ and $v(x)=c\Big{(}{|x-x_{0}|}^{2}-s^{2}\Big{)}^{1-\frac{n}{2}}\quad\text{ in }\mathbb{R}^{n}\setminus\overline{B_{s}(x_{0})},$ are admissible solutions of $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\alpha\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\alpha_{0},$ which approach $\infty$ on $\partial B_{s}(x_{0})$. ###### Proof. Consider $u(x)=c\Big{(}s^{2}-{|x-x_{0}|}^{2}\Big{)}^{1-\frac{n}{2}}.$ We have $W_{ij}[u]=2(n-1)(n-2)c\Big{(}s^{2}-|x-x_{0}|^{2}\Big{)}^{-\frac{n}{2}-1}s^{2}\delta_{ij},$ and therefore $\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W_{ij}[u]=4(n-1)c^{-\frac{4}{n-2}}s^{2}\delta_{ij}.$ Hence $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\alpha\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}$ $\displaystyle=$ $\displaystyle\bigg{(}4(n-1)c^{-\frac{4}{n-2}}s^{2}\bigg{)}^{k}\sigma_{k}\big{(}I\big{)}+\alpha\bigg{(}4(n-1)c^{-\frac{4}{n-2}}s^{2}\bigg{)}^{k-1}\sigma_{k-1}\big{(}I\big{)}.$ Since $k\geq 2$, by the intermediate value theorem, there exists a positive constant $c$ such that the above quantity equals $\alpha_{0}$. ∎ ###### Remark 5.5. The relation between $c$ and $s$ in $u(x)$ and $v(x)$ in Proposition 5.4 can be obtained once $\alpha$ and $\alpha_{0}$ are fixed.
In fact, let $\tilde{\mu}$ be a positive root of $\mu^{k}\sigma_{k}\big{(}I\big{)}+\alpha\mu^{k-1}\sigma_{k-1}\big{(}I\big{)}=\alpha_{0}.$ Then choose $4(n-1)c^{-\frac{4}{n-2}}s^{2}=\tilde{\mu}$ to obtain $c=\bigg{(}\frac{4(n-1)}{\tilde{\mu}}\bigg{)}^{\frac{n-2}{4}}s^{\frac{n-2}{2}}.$ Proof of Theorem 1.4 If $\Omega$ is a bounded domain, for any positive integer $m$, we consider the Dirichlet problem (5.2) $\left\\{\begin{aligned} F\big{(}\mathcal{W}[v]\big{)}=&-\frac{\alpha(x)}{n-2}e^{2v}\quad&\text{ in }\Omega,\\\ v=&m\quad&\text{ on }\partial\Omega.\end{aligned}\right.$ By Theorem 4.12, there exists a smooth $(k-1)$-admissible solution $v_{m}$ to (5.2). Let $\mu_{0}$ be the smallest positive root of the equation $\mu^{k}+\overline{\alpha}\mu^{k-1}=\underline{\alpha}_{0},$ where $\overline{\alpha}=\max\Big{\\{}\sup\limits_{\Omega}\alpha(x),0.01\Big{\\}}>0,\quad\underline{\alpha}_{0}=\inf\limits_{\Omega}\alpha_{0}(x)>0.$ By Loewner and Nirenberg [13], there exists a positive solution $\overline{u}$ to (5.3) $\left\\{\begin{aligned} \sigma_{1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=&\mu_{0}\quad\text{ in }\Omega,\\\ u=&\infty\quad\text{ on }\partial\Omega.\end{aligned}\right.$ Since $u_{m}=e^{\frac{n-2}{2}v_{m}}$ satisfies $\displaystyle\underline{\alpha}_{0}\leq\alpha_{0}(x)=$ $\displaystyle\sigma_{k}\bigg{(}\frac{2}{n-2}u_{m}^{-\frac{n+2}{n-2}}W[u_{m}]\bigg{)}+\alpha(x)\sigma_{k-1}\bigg{(}\frac{2}{n-2}u_{m}^{-\frac{n+2}{n-2}}W[u_{m}]\bigg{)}$ $\displaystyle\leq$ $\displaystyle\sigma_{1}^{k}\bigg{(}\frac{2}{n-2}u_{m}^{-\frac{n+2}{n-2}}W[u_{m}]\bigg{)}+\overline{\alpha}\sigma_{1}^{k-1}\bigg{(}\frac{2}{n-2}u_{m}^{-\frac{n+2}{n-2}}W[u_{m}]\bigg{)},$ we have $\sigma_{1}\bigg{(}\frac{2}{n-2}u_{m}^{-\frac{n+2}{n-2}}W[u_{m}]\bigg{)}\geq\mu_{0}.$ By the maximum principle Theorem 2.3, we have $u_{m}\leq\overline{u}\quad\text{ in }\Omega\quad\text{ for any }m.$ Denote 
$\underline{\alpha}=\inf\limits_{\Omega}\alpha(x),\quad\overline{\alpha}_{0}=\sup\limits_{\Omega}\alpha_{0}(x).$ Let $t>0$ be sufficiently large such that $\overline{\Omega}\subset B_{t}(0)$. Consider the equation (5.4) $\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\underline{\alpha}\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\overline{\alpha}_{0}.$ For any fixed $s>t>0$, by Proposition 5.4 and Remark 5.5, there exists a smooth solution $\underline{u}=\bigg{(}\frac{4(n-1)}{\tilde{\mu}}\bigg{)}^{\frac{n-2}{4}}s^{\frac{n-2}{2}}\Big{(}s^{2}-|x|^{2}\Big{)}^{1-\frac{n}{2}}$ to equation (5.4) in $B_{s}(0)$, which is a subsolution of (1.10). We may choose $s$ sufficiently large so that $\displaystyle\underline{u}\leq\bigg{(}\frac{4(n-1)}{\tilde{\mu}}\bigg{)}^{\frac{n-2}{4}}s^{\frac{n-2}{2}}\Big{(}s^{2}-t^{2}\Big{)}^{1-\frac{n}{2}}\leq e^{\frac{n-2}{2}}\leq e^{\frac{(n-2)m}{2}}=u_{m}\quad\text{ on }\partial\Omega.$ By Theorem 5.3 we know that $u_{m}\geq\underline{u}\quad\text{ in }\Omega\quad\text{ for any }m.$ By the interior $C^{2}$ estimates established in the last section and Evans-Krylov theory [3, 11], we obtain a smooth positive $(k-1)$-admissible solution $u$ to equation (1.10) which tends to $\infty$ on $\partial\Omega$. When $\Omega$ is unbounded, we may assume without loss of generality that $0\notin\overline{\Omega}$. Let $B_{s}(0)$ be a fixed ball such that $\overline{\Omega}\subset\mathbb{R}^{n}\setminus\overline{B_{s}(0)}$. By Proposition 5.4, there exists a smooth solution $\underline{u}=c_{0}\Big{(}|x|^{2}-s^{2}\Big{)}^{1-\frac{n}{2}}\quad\text{ for some positive constant }c_{0}$ to equation (5.4) in $\mathbb{R}^{n}\setminus\overline{B_{s}(0)}$, which is a subsolution of (1.10).
For any $R>\max\limits_{\partial\Omega}\underline{u}$ large enough such that $\partial\Omega\subset B_{R}(0)$, by Theorem 4.12, there exists a smooth positive $(k-1)$-admissible solution $u_{R}$ to the Dirichlet problem $\left\\{\begin{aligned} &\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\alpha\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\alpha_{0}\quad\text{ in }B_{R}(0)\cap\Omega,\\\ &u=R\quad\text{ on }\partial\Omega,\\\ &u=\underline{u}\quad\text{ on }\partial B_{R}(0).\end{aligned}\right.$ By Theorem 5.3, we know that $u_{R}\geq\underline{u}\quad\text{ in }B_{R}(0)\cap\Omega.$ By Lemma 2.10, we are able to find a positive solution $\overline{u}\in C^{\infty}(\Omega)$ of (5.3) which decays to $0$ at $\infty$ with the decay rate $|x|^{n-2}\overline{u}(x)\rightarrow c\quad\text{ as }|x|\rightarrow\infty$ for some positive constant $c$. Comparing the decay rate as $|x|\rightarrow\infty$, we have on $\partial B_{R}(0)$ with $R$ sufficiently large, $u_{R}(x)=\underline{u}(x)\leq\frac{2c_{0}}{c}\overline{u}(x).$ Meanwhile, we have $u_{R}(x)=R<\infty=\frac{2c_{0}}{c}\overline{u}(x)\quad\text{ on }\partial\Omega.$ By Proposition 2.2 and Theorem 2.3, we obtain $u_{R}(x)\leq\max\Big{\\{}\frac{2c_{0}}{c},1\Big{\\}}\overline{u}(x)\quad\text{ in }B_{R}(0)\cap\Omega.$ Applying the interior estimates in section 4, Evans-Krylov interior estimates [3, 11] and a standard diagonal process, we obtain a smooth positive $(k-1)$-admissible solution $u$ of (1.10) in $\Omega$ which tends to $\infty$ on $\partial\Omega$. ∎ ### 5.2. Maximal solution on general domain in $\mathbb{R}^{n}$ when $\alpha\leq 0$ We first observe the fact that when $\alpha(x)\leq 0$, any $(k-1)$-admissible solution to equation (1.10) must be $k$-admissible. Then we are able to apply the maximum principle Theorem 5.3. As in section 3, we can define the maximal solution of (1.10). 
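The observation just made, that for $\alpha(x)\leq 0$ every $(k-1)$-admissible solution of (1.10) is automatically $k$-admissible, follows from a one-line computation. A sketch, using only the equation itself and the standard characterization $\Gamma_{k}=\{\lambda\in\Gamma_{k-1}\,|\,\sigma_{k}(\lambda)>0\}$:

```latex
% For a (k-1)-admissible solution u of (1.10), write
%   \Lambda := \tfrac{2}{n-2}\,u^{-\frac{n+2}{n-2}}W[u],
%   \qquad \lambda(\Lambda)\in\Gamma_{k-1}.
% Solving the equation for \sigma_k and using \alpha(x)\le 0 together with
% \sigma_{k-1}(\Lambda)>0 on \Gamma_{k-1}:
\sigma_{k}(\Lambda)
  \;=\; \alpha_{0}(x) - \alpha(x)\,\sigma_{k-1}(\Lambda)
  \;\ge\; \alpha_{0}(x) \;>\; 0 .
% Since \sigma_{j}(\Lambda)>0 for 1\le j\le k-1 already holds on \Gamma_{k-1},
% all of \sigma_{1},\dots,\sigma_{k} are positive, i.e. \lambda(\Lambda)\in\Gamma_{k}.
```

This is why the comparison principle of Theorem 5.3, which requires $k$-admissibility of the subsolution, becomes available throughout this subsection.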
Assume throughout this subsection that $\Omega\subset\mathbb{R}^{n}$ is a domain with smooth compact boundary $\partial\Omega$. Moreover, we assume that $\alpha(x)\leq 0\text{ in }\Omega,\quad\underline{\alpha}_{0}=\inf\limits_{\Omega}\alpha_{0}(x)>0,\quad\underline{\alpha}=\inf\limits_{\Omega}\alpha(x)>-\infty,\quad\overline{\alpha}_{0}=\sup\limits_{\Omega}\alpha_{0}(x)<\infty.$ ###### Definition 5.6. A smooth positive $k$-admissible solution $u_{\Omega}$ of (1.10) is said to be maximal in $\Omega$, if it is greater than or equal to any smooth positive $k$-admissible solution of (1.10) in $\Omega$. Let $\Omega_{(1)}\Subset\Omega_{(2)}\Subset\ldots$ be an increasing sequence of bounded subdomains of $\Omega$ with smooth compact boundaries $\partial\Omega_{(j)}$ which are closed hypersurfaces such that $\Omega=\cup\Omega_{(j)}$. By Theorem 1.4, we can find a smooth positive $k$-admissible solution $u_{(j)}$ of (1.10) in $\Omega_{(j)}$ which tends to $\infty$ on $\partial\Omega_{(j)}$. By Theorem 5.3, we see that $\\{u_{(j)}\\}$ is a monotone decreasing sequence of positive functions. It follows that $u_{(j)}$ converges to a nonnegative function $u_{\Omega}$ in $\Omega$. ###### Lemma 5.7. Either $u_{\Omega}>0$ in $\Omega$ or $u_{\Omega}\equiv 0$ in $\Omega$. ###### Proof. Suppose that $u_{\Omega}\not\equiv 0$. Then we must have $u_{\Omega}>c$ on some $B(x_{0},r_{0})\subset\Omega$ for some constant $c>0$. 
By Proposition 5.4 and Remark 5.5, we may choose $0<r_{1}<r_{0}$ such that $v=\bigg{(}\frac{4(n-1)}{\tilde{\mu}}\bigg{)}^{\frac{n-2}{4}}r_{1}^{\frac{n-2}{2}}\Big{(}{|x-x_{0}|}^{2}-r_{1}^{2}\Big{)}^{1-\frac{n}{2}}\quad\text{ in }\mathbb{R}^{n}\setminus\overline{B_{r_{1}}(x_{0})}$ is a solution of $\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}+\underline{\alpha}\sigma_{k-1}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\overline{\alpha}_{0}$ satisfying $\bigg{(}\frac{4(n-1)}{\tilde{\mu}}\bigg{)}^{\frac{n-2}{4}}r_{1}^{\frac{n-2}{2}}\Big{(}r_{0}^{2}-r_{1}^{2}\Big{)}^{1-\frac{n}{2}}=c.$ By Theorem 5.3, we know that $u_{(j)}\geq v$ in $\Omega_{(j)}\setminus B(x_{0},r_{0})$ for any $j$. Hence $u_{\Omega}\geq v>0$ in $\Omega\setminus B(x_{0},r_{0})$. We thus have $u_{\Omega}>0$ in $\Omega$. ∎ ###### Remark 5.8. (1) In view of Lemma 5.7, the interior regularity in section 4, and Evans-Krylov theory [3, 11], we know that $u_{\Omega}$ is smooth. (2) If there exists a positive $k$-admissible subsolution of (1.10) in $\Omega$ (for example, when $\mathbb{R}^{n}\setminus\overline{\Omega}\neq\emptyset$), then $u_{\Omega}>0$ in $\Omega$. (3) When $u_{\Omega}>0$ in $\Omega$, $u_{\Omega}$ is the maximal solution of equation (1.10) in $\Omega$. In fact, if $w$ is any smooth positive $k$-admissible solution of (1.10) in $\Omega$, then $u_{(j)}\geq w$ in $\Omega_{(j)}$ by Theorem 5.3. Hence $u_{\Omega}\geq w$ in $\Omega$. ###### Definition 5.9. We call a compact subset $\Gamma\subset\partial\Omega$ regular, if $u_{\Omega}(x)\rightarrow\infty\quad\text{ as }x\rightarrow\Gamma.$ Now we consider a portion $\Gamma\subset\partial\Omega$ which is a smooth compact non-self-intersecting surface of codimension $m$. Meanwhile, we assume that $\partial\Omega\setminus\Gamma$ is smooth and compact. ###### Theorem 5.10.
Let $\Omega$ be a domain in $\mathbb{R}^{n}$ and $\Gamma$ be a compact subset of $\partial\Omega$ such that $\partial\Omega\setminus\Gamma$ is also compact. Suppose that $u_{\Omega}\not\equiv 0$. Then $\Gamma$ is regular if there exists an open neighborhood $U$ of $\Gamma$ and a $C^{2}$ positive $k$-admissible subsolution $\phi(x)$ of (1.10) defined in $\Omega\cap U$ which tends to $\infty$ as $x\rightarrow\Gamma$. ###### Proof. Without loss of generality we may assume that $\overline{U}$ is compact, $\overline{U}\cap(\partial\Omega\setminus\Gamma)=\emptyset\text{ and }0<\phi\in C^{2}(\overline{U}\cap\Omega).$ For $j$ sufficiently large, we have $\partial(U\cap\Omega_{(j)})=\partial U\cup(U\cap\partial\Omega_{(j)}).$ Since $u_{\Omega}$ is positive in $\Omega$ and $\partial U\subset\Omega$ is compact, we have $\inf\limits_{\partial U}u_{\Omega}:=m>0$. Denote $M:=\sup\limits_{\partial U}\phi(x)>0$ and $A:=\max\\{\frac{M}{m},1\\}$. We note that $u_{(j)}\geq u_{\Omega}\geq m\geq\frac{\phi}{A}$ on $\partial U$, and $u_{(j)}=\infty>\frac{\phi}{A}$ on $U\cap\partial\Omega_{(j)}$. By Proposition 5.2, $\frac{\phi}{A}$ is again a positive $k$-admissible subsolution of (1.10). By Theorem 5.3, we arrive at $u_{(j)}\geq\frac{\phi}{A}$ in $U\cap\Omega_{(j)}$. Consequently, we obtain $u_{\Omega}\geq\frac{\phi}{A}$ in $U\cap\Omega$. ∎ In what follows we assume that $u_{\Omega}\not\equiv 0$. Similar to section 3, let $\rho(x)$ be the distance of $x$ to $\Gamma$, and $\Gamma_{\rho_{0}}=\big{\\{}x\in\Omega\,\big{|}\,\rho(x)<\rho_{0}\big{\\}}.$ For $\rho_{0}$ sufficiently small, define on $\Gamma_{\rho_{0}}\setminus\Gamma$ $\phi(x)=c\rho^{1-\frac{n}{2}}(x),$ where $c>0$ is a constant to be determined later. 
Denote $v_{m}=\big{(}\underbrace{n-m,\ldots,n-m}_{n-m+1},\underbrace{2-m,\ldots,2-m}_{m-1}\big{)}.$ As $\rho\rightarrow 0$, $\lambda\bigg{(}\frac{2}{n-2}\phi^{-\frac{n+2}{n-2}}W[\phi]\bigg{)}\rightarrow c^{-\frac{4}{n-2}}v_{m}.$ If we assume that $v_{m}\in\Gamma_{k}$, then there exists $c>0$ sufficiently small such that $\displaystyle\sigma_{k}\Big{(}c^{-\frac{4}{n-2}}v_{m}\Big{)}+\underline{\alpha}\sigma_{k-1}\Big{(}c^{-\frac{4}{n-2}}v_{m}\Big{)}>\overline{\alpha}_{0}.$ For $\rho_{0}>0$ sufficiently small, within $\\{0<\rho<\rho_{0}\\}$, $\sigma_{k}\bigg{(}\frac{2}{n-2}{\phi}^{-\frac{n+2}{n-2}}W[\phi]\bigg{)}+\underline{\alpha}\sigma_{k-1}\bigg{(}\frac{2}{n-2}{\phi}^{-\frac{n+2}{n-2}}W[\phi]\bigg{)}>\overline{\alpha}_{0}.$ Therefore, $\Gamma$ is regular by Theorem 5.10. If $v_{m}\in\mathbb{R}^{n}\setminus\overline{\Gamma}_{k}$, as in section 3, we consider $\psi=\psi_{c,d}=(c\rho^{-a}+d)^{b},$ where $a$, $b$, $c$ and $d$ are positive constants. Direct calculation shows that (5.5) $\frac{1}{abc}\rho^{a+2}(c\rho^{-a}+d)^{\frac{4b}{n-2}+1}\psi^{-\frac{n+2}{n-2}}W[\psi]=A_{m}+B_{ab}(\zeta)+\mathcal{O}(\rho),$ where $\zeta$, $A_{m}$ and $B_{ab}(\zeta)$ are as in (3.2).
Choose $a$ sufficiently small, and then choose an appropriate $b$ so that $ab$ is slightly larger than $\frac{n}{2}-1$ and $\lambda\big{(}A_{m}+B_{ab}(\zeta)\big{)}\notin\overline{\Gamma}_{k}\quad\text{for all }0<\zeta<1.$ Then further choose $0<\rho_{0}<1$ small such that within $0<\rho<\rho_{0}$ and for all $0<\zeta<1$, $\lambda\big{(}A_{m}+B_{ab}(\zeta)+\mathcal{O}(\rho)\big{)}\notin\overline{\Gamma}_{k},$ which implies that $\lambda\big{(}W[\psi]\big{)}\notin\overline{\Gamma}_{k}\text{ in }\\{0<\rho<\rho_{0}\\}\text{ for all }c>0\text{ and }d>0.$ Therefore, we have either $\lambda\big{(}W[\psi]\big{)}\notin\Gamma_{k-1}$, or, if $\lambda\big{(}W[\psi]\big{)}\in\Gamma_{k-1}$, then $\sigma_{k}\bigg{(}\frac{2}{n-2}\psi^{-\frac{n+2}{n-2}}W[\psi]\bigg{)}+\alpha\sigma_{k-1}\bigg{(}\frac{2}{n-2}\psi^{-\frac{n+2}{n-2}}W[\psi]\bigg{)}\leq 0<\alpha_{0}.$ That is, $\psi$ is a supersolution of (1.10). Now choose $d$ sufficiently large, depending on $\rho_{0}$, such that on $\rho=\rho_{0}$ we have $\psi\geq d^{b}\geq u_{\Omega}$. By Proposition 5.4, Remark 5.5 and Theorem 5.3, the solution of the equation $\sigma_{k}\bigg{(}\frac{2}{n-2}u^{-\frac{n+2}{n-2}}W[u]\bigg{)}=\underline{\alpha}_{0}$ in the ball $B_{\rho(x)}(x)$ can serve as an upper bound for $u_{\Omega}$. Thus, we can deduce that $u_{\Omega}\leq\bigg{(}\frac{4(n-1)}{\tilde{\mu}}\bigg{)}^{\frac{n-2}{4}}\rho^{1-\frac{n}{2}}\leq c^{b}\rho^{-ab}<\psi$ within $0<\rho\leq\delta$, where $\delta>0$ is a sufficiently small constant depending on $c$. By Theorem 5.3 we have $u_{\Omega}\leq\psi$ in $\delta<\rho<\rho_{0}$. Letting $\delta\rightarrow 0$, we arrive at $u_{\Omega}\leq\psi$ in $0<\rho<\rho_{0}$. Next letting $c\rightarrow 0$ we deduce that $u_{\Omega}\leq d^{b}$ in $0<\rho<\rho_{0}$. Hence $\Gamma$ is not regular according to Definition 5.9. ## References * [2] P. Aviles, A study of the singularities of solutions of a class of nonlinear elliptic partial differential equations, Comm.
Partial Differential Equations 7 (1982), 609–643. * [3] L. C. Evans, Classical solutions of fully nonlinear, convex, second order elliptic equations, Comm. Pure Appl. Math. 35 (1982), 333–363. * [4] B. Gidas, W. M. Ni and L. Nirenberg, Symmetry and related properties via the maximum principle, Comm. Math. Phys. 68 (1979), 209–243. * [5] M. González, Y. Li and L. Nguyen, Existence and uniqueness to a fully nonlinear version of the Loewner-Nirenberg problem, Commun. Math. Stat. 6 (2018), 269–288. * [6] B. Guan, Complete conformal metrics of negative Ricci curvature on compact manifolds with boundary, Int. Math. Res. Not. IMRN 2008, Art. ID rnn105, 25 pp. * [7] B. Guan, Addendum to: Complete conformal metrics of negative Ricci curvature on compact manifolds with boundary, Int. Math. Res. Not. IMRN 22 (2009), 4354–4355. * [8] P. Guan, The extremal function associated to intrinsic norms, Ann. of Math. 156 (2002), 197–211. * [9] P. Guan and X. Zhang, A class of curvature type equations, Pure Appl. Math. Q. 17 (2021), 865–907. * [10] M. Gursky, J. Streets and M. Warren, Existence of complete conformal metrics of negative Ricci curvature on manifolds with boundary, Calc. Var. Partial Differential Equations 41 (2011), 21–43. * [11] N. V. Krylov, Boundedly nonhomogeneous elliptic and parabolic equations in a domain, Izv. Akad. Nauk SSSR Ser. Mat. 47 (1983), 75–108. * [12] Y. Li, Degree theory for second order nonlinear elliptic operators and its applications, Comm. Partial Differential Equations 14 (1989), 1541–1578. * [13] C. Loewner and L. Nirenberg, Partial differential equations invariant under conformal or projective transformations, in Contributions to Analysis (a collection of papers dedicated to Lipman Bers), Academic Press, New York, 1974, 245–272. * [14] C. Su, Starshaped locally convex hypersurfaces with prescribed curvature and boundary, J. Geom. Anal. 26 (2016), 1730–1753. * [15] L.
Véron, Singularités éliminables d’équations elliptiques non linéaires, J. Differential Equations 41 (1981), 87–95.
Kinetic equations and hierarchies of evolution equations of quantum systems

V.I. Gerasimenko∗ (∗E-mail<EMAIL_ADDRESS>)

∗Institute of Mathematics of the NAS of Ukraine, 3, Tereshchenkivs’ka Str., 01601, Kyiv-4, Ukraine

Abstract. The article provides an overview of some advances in the mathematical understanding of the nature of the kinetic equations of quantum systems of many particles. The fundamental equations of modern mathematical physics are studied, in particular, the hierarchies of evolution equations of quantum systems and their asymptotic behavior described by nonlinear kinetic equations.

Key words: kinetic equation, von Neumann hierarchy, BBGKY hierarchy, density operator, correlation operator.

2010 Mathematics Subject Classification: 35Q20; 47J35.

###### Contents 1 A brief chronology of 150 years of the theory of kinetic equations 2 Hierarchies of evolution equations of quantum systems 2.1 Preliminaries: on evolution equations of quantum systems of many particles 2.2 Dynamics of correlations of particle states 2.3 The BBGKY hierarchies of evolution equations 3 On quantum kinetic equations 3.1 The origin of the kinetic evolution of a state 3.2 The generalized quantum kinetic equation 3.3 Scaling properties of state evolution 4 The kinetic evolution of observables 4.1 The hierarchy of kinetic equations for observables 4.2 The propagation of the initial chaos 4.3 Kinetic equations with initial correlations 5 Outlook

## 1 A brief chronology of 150 years of the theory of kinetic equations Mathematically, the collective properties of many-particle systems in kinetic theory are described by kinetic equations, namely, nonlinear equations that describe the evolution of the state of a system of many particles by means of the evolution of the state of a typical particle.
The generator of such evolution equations consists of a term that describes the free evolution (motion by inertia) of a typical particle of the system and a nonlinear term that models the self-interaction of a typical particle (the collision integral). The mean values of observables of the many-particle system are determined by a solution of the kinetic equations. A well-known historical example of a kinetic equation is the Boltzmann equation, which describes the process of collision of particles in rarefied gases. Quantum kinetic equations are the corresponding generalizations of the kinetic equations of classical particle systems. From a physical point of view, kinetic equations describe a certain stage of the process of transition (relaxation) from a non-equilibrium state to the state of thermodynamic equilibrium of a system of many particles. Indeed, in the general case, in the process of relaxation, an arbitrary non-equilibrium state of the particle system is attracted to a state that can be fully described in terms of a one-particle probability distribution function (for quantum systems, a one-particle statistical operator, whose kernel is known as the one-particle density matrix), which is governed by a suitable kinetic equation depending on the interaction potential of the particles. At the next stage of relaxation, such a state of the particle system tends to a state of local equilibrium, which is described by the Maxwell distribution, characterized by hydrodynamic parameters that depend on position in space. The evolution of the hydrodynamic parameters is in turn governed by the equations of a continuous medium (hydrodynamics or diffusion equations). The one-hundred-and-fifty-year history of kinetic equations derives from the works of J. C. Maxwell (1860, 1867) [54],[55] and L. E. Boltzmann (1872) [9].
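For orientation, the generator structure described above (free transport of a typical particle plus a nonlinear collision integral) can be written out explicitly for the classical Boltzmann equation. The following is the standard textbook form, given here as an illustration rather than a formula from this survey:

```latex
% Evolution of the one-particle distribution function f(t,x,v):
%   free transport on the left, collision integral Q(f,f) on the right.
\partial_{t} f(t,x,v) + v\cdot\nabla_{x} f(t,x,v) \;=\; Q(f,f)(t,x,v),
\qquad
Q(f,f) \;=\; \int_{\mathbb{R}^{3}}\!\int_{\mathbb{S}^{2}}
  B(v-v_{*},\omega)\,
  \bigl( f(v')\,f(v'_{*}) - f(v)\,f(v_{*}) \bigr)\, d\omega\, dv_{*} ,
% where (v',v'_{*}) are the post-collisional velocities determined by
% (v,v_{*},\omega), and the collision kernel B is fixed by the
% interaction potential of the particles.
```

The quantum kinetic equations discussed later in the article share this generator structure, with the collision integral replaced by the appropriate quantum analog.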
In 1872 Ludwig Boltzmann published the paper [9] in which the evolution equation for a one-particle distribution function was formulated and it was shown that the Maxwell distribution describes only the equilibrium state of a gas. He proved the so-called H-theorem (on the increase of entropy) about a property of the solution of this equation, which explained the irreversibility of macroscopic dynamics. Thus, from this time began a period of development of the theory of kinetic equations based on phenomenological models of kinetic phenomena. Later, in order to generalize the Boltzmann equation to dense gases and liquids, the Enskog kinetic equation was formulated for a system of many hard spheres (D. Enskog, 1922) [14]. For a Brownian particle in a system of many particles (an environment), the Fokker–Planck equation was introduced (A. D. Fokker, 1914, M. Planck, 1917) [17],[63], together with the Smoluchowski equation as a particular case (M. von Smoluchowski, 1906) [70]. In this period of development of kinetic theory the following equations were also formulated: the Leontovich equation for stochastic dynamics of a system of many particles (M. A. Leontovich, 1935) [53], the Landau equation (L. D. Landau, 1936) [50], the Vlasov equation (A. A. Vlasov, 1938) [68] and the Lenard–Balescu equation (A. Lenard, R. Balescu, R. L. Guernsey, 1960) [1],[45],[52] for systems of many charged particles (ionized gases, plasma). With the beginning of the development of quantum theory, the Uehling–Uhlenbeck kinetic equation was formulated as a quantum analog of the Boltzmann equation (L. W. Nordheim, 1928; E. A. Uehling, G. E. Uhlenbeck, 1933) [56],[67], as well as the quantum Bogolyubov equation (M. M. Bogolyubov, K. P. Gurov, 1947) [7]. In the mean-field approximation, for pure states the Hartree equation (D. R. Hartree, 1928) [46] and the Hartree–Fock equation for systems of fermions and bosons (V. A. Fock, 1930) [16] were derived, and for mixed states, the quantum Vlasov kinetic equation (A. A. Vlasov, 1947) [69].
For quantum many-particle systems in condensed states, the Bogolyubov kinetic equation was suggested (M. M. Bogolyubov, 1947) [5] and later the Gross–Pitaevskii equation (E. P. Gross, L. D. Pitaevskii, 1961) [44],[62]. From the second half of the 1940s, a new stage in the development of kinetic theory began, namely, the creation of a formalized theory of kinetic phenomena. In 1945 in Kyiv, at the Institute of Mathematics, in the famous monograph [8], M. M. Bogolyubov formulated a consistent approach to deriving kinetic equations from the dynamics of systems of many particles, namely, from the fundamental equations describing the evolution of all possible states of many-particle systems, now known as the BBGKY (Bogolyubov–Born–Green–Kirkwood–Yvon) hierarchy [8],[10],[49],[71]. Using the methods of perturbation theory, an approach was developed to construct a generalization of the Boltzmann equation, known as the Bogolyubov kinetic equation, as well as the Vlasov and Landau kinetic equations. Thanks to this work, the mechanism of irreversibility of the evolution of systems of many particles on the macroscopic scale, whose dynamics are described by time-reversible equations of motion on the microscopic scale, became clear. Initially, the mathematical theory of the BBGKY hierarchy was developed in the works [32],[57],[59]; see also [12],[23],[58]. Later H. Grad (1958) [43] formulated an approach to the derivation of kinetic equations as the evolution equations that describe the corresponding scaling asymptotics of a solution of the BBGKY hierarchy. In recent decades, this approach has been used as an accepted method for the rigorous derivation of kinetic equations of complex systems of various nature. In general, the problem of the rigorous derivation of kinetic equations from the dynamics of systems of many particles remains an open problem of kinetic theory.
At the present stage of development of kinetic theory, the most advanced is the mathematical theory of the nonlinear Boltzmann equation. It takes its origin from the works of H. Poincaré (1906) [64], who drew the attention of mathematicians to the need to substantiate kinetic theory, and of D. Hilbert (1912) [48], who established the connection between a solution of the Boltzmann equation and the hydrodynamic equations (D. Hilbert’s sixth problem was formulated at the International Congress of Mathematicians in 1900 [47]), as well as from the work of T. Carleman (1957) [11] on the mathematical analysis of the spatially homogeneous Boltzmann equation. The mathematical theory of nonlinear kinetic equations began to develop intensively in the early 1980s. One of the achievements of this period was the rigorous derivation of the Boltzmann equation from the dynamics of an infinite number of hard spheres in the Boltzmann–Grad limit [51],[60]. Rigorous results of the theory of kinetic equations and their justification at the end of the twentieth century were summarized in the monographs [12],[13],[66]. Over the last decade, mathematical results on the derivation of the Boltzmann kinetic equation for classical systems of particles with short-range interaction potentials have been summarized in the monograph [18], and new methods have been developed for deriving the Boltzmann equation [20] and the Enskog equation [30] from the collisional dynamics [31] (for details, see the references in the above works). In the last two decades, significant progress has also been observed in deriving quantum kinetic equations in the scaling limits of the BBGKY hierarchy solution constructed by methods of perturbation theory [3],[42],[58], in particular, the quantum Boltzmann equation [2] in a weak coupling limit, the nonlinear Schrödinger equation [4],[61] and the Gross–Pitaevskii equation [15] in the mean-field limit. 
Below, the fundamental equations that describe the nature of substance are studied, namely, the hierarchies of evolution equations of quantum systems of many particles and non-perturbative methods for constructing their solutions. On this basis, two new approaches to the description of the kinetic evolution of quantum systems [23] are considered. One of them consists in the description of the evolution of quantum systems in the mean-field scaling limit of the hierarchy of evolution equations for the observables [19]; the other is an approach based on a non-Markovian generalization of quantum kinetic equations [38]. ## 2 Hierarchies of evolution equations of quantum systems As is well known, quantum systems are described by concepts such as observables and states. The mean value functional of the observables (mathematical expectation) determines the duality of the observables and the state. Consequently, there are two approaches to describing the evolution of a quantum system of a finite number of particles: in terms of observables, whose evolution is governed by the Heisenberg equation, or in terms of a state, whose evolution is governed by the von Neumann equation (quantum Liouville equation) for the density operator (statistical operator), whose kernel is known as the density matrix. An equivalent approach to describing the evolution of systems of many particles, both of a finite and of an infinite number of particles, is to describe the evolution in terms of a sequence of reduced operators of observables governed by the dual BBGKY hierarchy (Bogolyubov–Born–Green–Kirkwood–Yvon) [23],[28], or in terms of the state described by a sequence of reduced density operators governed by the BBGKY hierarchy [8]. An alternative method for describing the evolution of the state of a quantum system of finitely many particles is to describe the state in terms of the operators that are determined by cluster expansions of the density operator. 
Such operators are interpreted as correlations of particle states, and their evolution is governed by the von Neumann hierarchy for a sequence of correlation operators [26],[34],[37]. ### 2.1 Preliminaries: on evolution equations of quantum systems of many particles For generality, we will consider below quantum systems in the space $\mathbb{R}^{\nu},\,\nu\geq 1$, of a non-fixed number of identical spinless particles, i.e. of an arbitrary but finite average number of particles, which obey the Maxwell–Boltzmann statistics. We will use units where $h={2\pi\hbar}=1$ is the Planck constant and $m=1$ is the particle mass. We denote the $n$-particle Hilbert space by $\mathcal{H}_{n}=\mathcal{H}^{\otimes n}$ and set $\mathcal{H}^{\otimes 0}=\mathbb{C}$. We denote the Fock space over the space $\mathcal{H}$ by $\mathcal{F}_{\mathcal{H}}={\bigoplus\limits}_{n=0}^{\infty}\mathcal{H}_{n}$. A self-adjoint operator $f_{n}$ defined on the space $\mathcal{H}_{n}=\mathcal{H}^{\otimes n}$ will further also be denoted by the symbol $f_{n}(1,\ldots,n)$. Let $\mathfrak{L}(\mathcal{H}_{n})$ be the space of bounded operators $g_{n}\equiv g_{n}(1,\ldots,n)\in\mathfrak{L}(\mathcal{H}_{n})$ with the operator norm $\|.\|_{\mathfrak{L}(\mathcal{H}_{n})}$. Accordingly, let $\mathfrak{L}^{1}(\mathcal{H}_{n})$ be the space of trace class operators $f_{n}\equiv f_{n}(1,\ldots,n)\in\mathfrak{L}^{1}(\mathcal{H}_{n})$ with the norm: $\|f_{n}\|_{\mathfrak{L}^{1}(\mathcal{H}_{n})}=\mathrm{Tr}_{1,\ldots,n}|f_{n}(1,\ldots,n)|,$ where the symbol $\mathrm{Tr}_{1,\ldots,n}$ denotes the partial traces of the operator $f_{n}$. The subspace of finite sequences of degenerate operators with infinitely differentiable kernels with compact supports is denoted by $\mathfrak{L}^{1}_{0}(\mathcal{F}_{\mathcal{H}})$. 
For a quantum system of a non-fixed number of particles, the observables are described by sequences $A=(A_{0},A_{1}(1),\ldots,A_{n}(1,\ldots,n),\ldots)$ of the self-adjoint operators $A_{n}\in\mathfrak{L}(\mathcal{H}_{n})$. The evolution of the observables $A(t)=(A_{0},A_{1}(t,1),\ldots,A_{n}(t,1,\ldots,n),\ldots)$, where $t\in\mathbb{R}$, is determined by the Cauchy problem for a sequence of the Heisenberg equations: $\displaystyle\frac{\partial}{\partial t}A(t)=\mathcal{N}A(t),$ (1) $\displaystyle A(t)|_{t=0}=A(0),$ (2) where $A(0)=(A_{0},A_{1}^{0}(1),\ldots,A_{n}^{0}(1,\ldots,n),\ldots)$ is the initial observable, the generator $\mathcal{N}=\oplus_{n=0}^{\infty}\mathcal{N}_{n}$ is defined by the formula $\displaystyle\mathcal{N}_{n}g_{n}\doteq-i\,(g_{n}H_{n}-H_{n}g_{n}),$ (3) and the self-adjoint operator $H_{n}={\sum\limits}_{j=1}^{n}K(j)+\epsilon{\sum\limits}_{j_{1}<j_{2}=1}^{n}\Phi(j_{1},j_{2})$ is the Hamiltonian of a system of $n$ particles; here the operator $K(j)$ is the kinetic energy operator of the $j$-th particle, $\Phi$ is the bounded operator of the pair interaction potential, and $\epsilon>0$ is the scaling parameter. If $A(0)\in\mathfrak{L}(\mathcal{F}_{\mathcal{H}})$, then for $t\in\mathbb{R}$ in the space $\mathfrak{L}(\mathcal{F}_{\mathcal{H}})$ the unique solution of the Cauchy problem (1),(2) is represented by a one-parameter family of mappings $\mathcal{G}(t)=\oplus_{n=0}^{\infty}\mathcal{G}_{n}(t)$: $\displaystyle A(t)=\mathcal{G}(t)A(0),$ where the mapping $\mathcal{G}_{n}(t)$ is defined by the formula $\displaystyle\mathbb{R}^{1}\ni t\mapsto\mathcal{G}_{n}(t)g_{n}\doteq e^{itH_{n}}g_{n}e^{-itH_{n}}.$ (4) In the space $\mathfrak{L}(\mathcal{H}_{n})$ the one-parameter mapping (4) forms a $\ast$-weakly continuous group of operators, whose infinitesimal generator coincides with operator (3) on the domain of its definition. 
The average value (mathematical expectation) of the observable $A(t)$ at time $t\in\mathbb{R}$ is determined by a continuous linear functional, which is represented by the following series expansion [23]: $\displaystyle\langle A\rangle(t)=(I,D(0))^{-1}\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{1,\ldots,n}\,A_{n}(t)\,D_{n}^{0},$ (5) where the sequence $D(0)=(I,D_{1}^{0},\ldots,D_{n}^{0},\ldots)$ of positive self-adjoint operators $D_{n}^{0}\in\mathfrak{L}^{1}(\mathcal{H}_{n})$ is the sequence of density operators that describe all possible states of the quantum system of a non-fixed number of particles at the initial instant, and the coefficient $(I,D(0))={\sum\limits}_{n=0}^{\infty}\frac{1}{n!}\mathrm{Tr}_{1,\ldots,n}D_{n}^{0}$ is the normalizing factor. Functional (5), which determines the duality of observables and a state, exists for $D_{n}^{0}\in\mathfrak{L}^{1}(\mathcal{H}_{n})$ and $A_{n}(t)\in\mathfrak{L}(\mathcal{H}_{n})$. For the normalizing factor, the following equality is valid: $\displaystyle(I,D(0))=(I,\mathcal{G}^{\ast}(t)D(0)).$ (6) On the space $\mathfrak{L}^{1}(\mathcal{F}_{\mathcal{H}})$ the one-parameter mapping $\mathcal{G}^{\ast}(t)=\oplus_{n=0}^{\infty}\mathcal{G}^{\ast}_{n}(t)$ adjoint to the group (4) is defined by $\displaystyle\mathbb{R}^{1}\ni t\mapsto\mathcal{G}^{\ast}_{n}(t)f_{n}\doteq e^{-itH_{n}}f_{n}e^{itH_{n}},$ (7) and forms a strongly continuous isometric group of operators, which preserves positivity and self-adjointness of operators. 
Due to equality (6), for functional (5) the following representation is valid: $\displaystyle\hskip-34.1433pt(A(t),D(0))=(I,D(0))^{-1}\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{1,\ldots,n}\,\mathcal{G}_{n}(t)A_{n}^{0}\,D_{n}^{0}=$ $\displaystyle(I,\mathcal{G}^{\ast}(t)D(0))^{-1}\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{1,\ldots,n}\,A_{n}^{0}\,\mathcal{G}^{\ast}_{n}(t)D_{n}^{0}=$ $\displaystyle(I,D(t))^{-1}(A(0),D(t)),$ that is, the evolution of quantum systems of many particles can equivalently be described as the evolution of the state. Indeed, the evolution of all possible states, i.e. the sequence $D(t)=(I,D_{1}(t),\ldots,D_{n}(t),\ldots)\in\mathfrak{L}^{1}(\mathcal{F}_{\mathcal{H}})$ of the density operators $D_{n}(t),\,n\geq 1$, is described by the Cauchy problem for a sequence of the von Neumann equations (quantum Liouville equations): $\displaystyle\frac{\partial}{\partial t}D(t)=\mathcal{N}^{\ast}D(t),$ (8) $\displaystyle D(t)|_{t=0}=D(0),$ (9) where the generator $\mathcal{N}^{\ast}=\oplus^{\infty}_{n=0}\mathcal{N}^{\ast}_{n}$ of the von Neumann equation (8) is the adjoint operator to generator (3) of the Heisenberg equation (1) and is defined by the formula $\displaystyle\mathcal{N}^{\ast}_{n}f_{n}\doteq-i\,(H_{n}f_{n}-f_{n}H_{n}).$ (10) Operator (10) has the following structure: $\mathcal{N}^{\ast}_{n}=\sum_{j=1}^{n}\mathcal{N}^{\ast}(j)+\epsilon\sum_{j_{1}<j_{2}=1}^{n}\mathcal{N}^{\ast}_{\mathrm{int}}(j_{1},j_{2})$, where the operator $\mathcal{N}^{\ast}(j)$ is the generator of the von Neumann equation of noninteracting particles and the operator $\mathcal{N}^{\ast}_{\mathrm{int}}$ is defined by the operator of the pair interaction potential: $\mathcal{N}^{\ast}_{\mathrm{int}}(j_{1},j_{2})f_{n}\doteq-i\,(\Phi(j_{1},j_{2})f_{n}-f_{n}\Phi(j_{1},j_{2}))$. 
Thus, the unique solution of the Cauchy problem (8),(9) is represented by the group of operators (7): $\displaystyle D(t)=\mathcal{G}^{\ast}(t)D(0).$ Note that the density operator is represented by a convex linear combination of rank-one projectors. A density operator which is a rank-one projector, $D_{n}(t)=P_{\psi_{n}}(t),\,\psi_{n}\in\mathcal{H}_{n},$ describes a pure state, and an arbitrary state is interpreted as a mixed state. As a consequence of the equality $P_{\psi_{n}}(t)=P_{\psi_{n}(t)}$, where $\psi_{n}(t)=e^{-itH_{n}}\psi_{n}$, the evolution of a pure state can also be described by the Cauchy problem for a sequence of the Schrödinger equations: $\displaystyle i\frac{\partial}{\partial t}\psi_{n}(t)=H_{n}\psi_{n}(t),$ $\displaystyle\psi_{n}(t)\big{|}_{t=0}=\psi_{n}^{0},\quad n\geq 1,$ where the operator $H_{n}$ is the Hamiltonian of the system of $n$ particles. ### 2.2 Dynamics of correlations of particle states An alternative approach to describing the state of a quantum system of a finite average number of particles is to describe the state using cumulants of density operators, which are interpreted as correlations of the states of clusters of particles [27],[34],[37]. 
We introduce the sequence $g(t)=(I,g_{1}(t,1),\ldots,g_{s}(t,1,\ldots,s),\ldots)$ of correlation operators by means of cluster expansions of the density operators $D(t)=(I,D_{1}(t,1),\ldots,D_{n}(t,1,\ldots,n),\ldots)$: $\displaystyle\hskip-34.1433ptD_{n}(t,1,\ldots,n)=g_{n}(t,1,\ldots,n)+\sum\limits_{\mbox{\scriptsize$\begin{array}[]{c}\mathrm{P}:(1,\ldots,n)=\bigcup_{i}X_{i},\\\ |\mathrm{P}|>1\end{array}$}}\prod_{X_{i}\subset\mathrm{P}}g_{|X_{i}|}(t,X_{i}),\quad n\geq 1,$ (13) where ${\sum\limits}_{\mathrm{P}:(1,\ldots,n)=\bigcup_{i}X_{i},\,|\mathrm{P}|>1}$ is the sum over all possible partitions $\mathrm{P}$ of the set of indices $(1,\ldots,n)$ into $|\mathrm{P}|>1$ non-empty subsets $X_{i}\subset(1,\ldots,n)$, which do not mutually intersect. The solutions of recurrent relations (13) are determined by the following expansions: $\displaystyle\hskip-42.67912ptg_{s}(t,1,\ldots,s)=D_{s}(t,1,\ldots,s)+$ (14) $\displaystyle\hskip-34.1433pt\sum\limits_{\mbox{\scriptsize$\begin{array}[]{c}\mathrm{P}:(1,\ldots,s)=\bigcup_{i}X_{i},\\\ |\mathrm{P}|>1\end{array}$}}(-1)^{|\mathrm{P}|-1}(|\mathrm{P}|-1)!\,\prod_{X_{i}\subset\mathrm{P}}D_{|X_{i}|}(t,X_{i}),\quad s\geq 1.$ (17) The structure of expansions (14) is such that the correlation operators can be interpreted as cumulants (semi-invariants) of the density operators (2.1). 
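To make definitions (13),(14) concrete, the lowest orders read (a direct consequence of the definitions above):

```latex
% lowest orders of cluster expansions (13):
D_1(t,1) = g_1(t,1), \qquad
D_2(t,1,2) = g_2(t,1,2) + g_1(t,1)\,g_1(t,2),
% and of their inversion (14), i.e. the cumulants of density operators:
g_2(t,1,2) = D_2(t,1,2) - D_1(t,1)\,D_1(t,2),
g_3(t,1,2,3) = D_3(t,1,2,3) - \sum_{i=1}^{3} D_1(t,i)\,D_2\bigl(t,(1,2,3)\setminus(i)\bigr)
 + 2\,D_1(t,1)\,D_1(t,2)\,D_1(t,3).
```

In particular, the correlation operator $g_2(t)$ vanishes exactly when the two-particle density operator factorizes into a product of one-particle density operators.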
If $g_{s}^{0}\in\mathfrak{L}^{1}(\mathcal{H}_{s}),\,s\geq 1$, then for $t\in\mathbb{R}$ the sequence of correlation operators (14) is the unique solution of the Cauchy problem of the von Neumann hierarchy [34],[37]: $\displaystyle\hskip-42.67912pt\frac{\partial}{\partial t}g_{s}(t,1,\ldots,s)=\mathcal{N}^{\ast}_{s}g_{s}(t,1,\ldots,s)+$ (18) $\displaystyle\hskip-34.1433pt\sum\limits_{\mathrm{P}:\,(1,\ldots,s)=X_{1}\bigcup X_{2}}\,\sum\limits_{i_{1}\in X_{1}}\sum\limits_{i_{2}\in X_{2}}\epsilon\,\mathcal{N}_{\mathrm{int}}^{\ast}(i_{1},i_{2})g_{|X_{1}|}(t,X_{1})g_{|X_{2}|}(t,X_{2}),$ $\displaystyle\hskip-42.67912ptg_{s}(t,1,\ldots,s)\big{|}_{t=0}=g_{s}^{0}(1,\ldots,s),\quad s\geq 1,$ (19) where the symbol ${\sum\limits}_{\mathrm{P}:\,(1,\ldots,s)=X_{1}\bigcup X_{2}}$ means the sum over all possible partitions $\mathrm{P}$ of the set $(1,\ldots,s)$ into two nonempty subsets $X_{1}$ and $X_{2}$, which do not mutually intersect, and the operators $\mathcal{N}^{\ast}_{s}$, $\mathcal{N}_{\mathrm{int}}^{\ast}(i_{1},i_{2})$ are defined by formulas (10). We emphasize that the von Neumann hierarchy (18) is a set of recurrent evolution equations. 
If the initial state is described by a sequence of correlation operators $g(0)=(I,g_{1}^{0}(1),\ldots,$ $g_{n}^{0}(1,\ldots,n),\ldots)\in\oplus_{n=0}^{\infty}\mathfrak{L}^{1}(\mathcal{H}_{n})$, then the evolution of all possible states of a quantum system of many particles, i.e. the sequence $g(t)=(I,g_{1}(t,1),\ldots,g_{s}(t,1,\ldots,s),\ldots)$ of correlation operators $g_{s}(t),\,s\geq 1$, is determined by the following group of nonlinear operators [34]: $\displaystyle\hskip-42.67912ptg_{s}(t,1,\ldots,s)=\mathcal{G}(t;1,\ldots,s\mid g(0))\doteq$ (20) $\displaystyle\hskip-28.45274pt\sum\limits_{\mathrm{P}:\,(1,\ldots,s)=\bigcup_{j}X_{j}}\mathfrak{A}_{|\mathrm{P}|}(t,\\{X_{1}\\},\ldots,\\{X_{|\mathrm{P}|}\\})\prod_{X_{j}\subset\mathrm{P}}g_{|X_{j}|}^{0}(X_{j}),\quad s\geq 1,$ where $\sum_{\mathrm{P}:\,(1,\ldots,s)=\bigcup_{j}X_{j}}$ is the sum over all possible partitions $\mathrm{P}$ of the set of indices $(1,\ldots,s)$ into $|\mathrm{P}|$ non-empty subsets $X_{j}$, which do not mutually intersect, and the set $(\\{X_{1}\\},\ldots,\\{X_{|\mathrm{P}|}\\})$ consists of elements that are subsets $X_{j}\subset(1,\ldots,s)$, i.e., $|(\\{X_{1}\\},\ldots,\\{X_{|\mathrm{P}|}\\})|=|\mathrm{P}|$. The generating operator $\mathfrak{A}_{|\mathrm{P}|}(t)$ of expansion (20) is the $|\mathrm{P}|$-th order cumulant of groups of operators (7), which is determined by the expansion $\displaystyle\hskip-42.67912pt\mathfrak{A}_{|\mathrm{P}|}(t,\\{X_{1}\\},\ldots,\\{X_{|\mathrm{P}|}\\})\doteq$ (21) $\displaystyle\hskip-14.22636pt\sum\limits_{\mathrm{P}^{{}^{\prime}}:\,(\\{X_{1}\\},\ldots,\\{X_{|\mathrm{P}|}\\})=\bigcup_{k}Z_{k}}(-1)^{|\mathrm{P}^{{}^{\prime}}|-1}({|\mathrm{P}^{{}^{\prime}}|-1})!\prod\limits_{Z_{k}\subset\mathrm{P}^{{}^{\prime}}}\mathcal{G}^{\ast}_{|\theta(Z_{k})|}(t,\theta(Z_{k})),$ where $\theta$ is the declusterization mapping: $\theta(\\{X_{1}\\},\ldots,\\{X_{|\mathrm{P}|}\\})\doteq(1,\ldots,s)$. 
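In the lowest orders, definition (21) gives, for example (using the declusterization mapping $\theta$):

```latex
% first-order cumulant of a single cluster:
\mathfrak{A}_{1}(t,\{1,\ldots,s\}) = \mathcal{G}^{\ast}_{s}(t,1,\ldots,s),
% second-order cumulant of two clusters X_1, X_2:
\mathfrak{A}_{2}(t,\{X_1\},\{X_2\}) =
 \mathcal{G}^{\ast}_{|\theta(\{X_1\},\{X_2\})|}\bigl(t,\theta(\{X_1\},\{X_2\})\bigr)
 - \mathcal{G}^{\ast}_{|X_1|}(t,X_1)\,\mathcal{G}^{\ast}_{|X_2|}(t,X_2).
```

Thus a first-order cumulant is simply the group (7) of the corresponding cluster, while higher-order cumulants measure the deviation of the joint dynamics of clusters from the product of their separate dynamics.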
In the absence of correlations between the particles at the initial instant (the initial state satisfies the chaos condition [3],[66]), that is, for the sequence of initial correlation operators $g^{(c)}(0)=(0,g_{1}^{0}(1),0,\ldots,0,\ldots)$ (in the case of the Maxwell–Boltzmann statistics, in terms of density operators this condition means that $D^{(c)}(0)=(I,D_{1}^{0}(1),\ldots,\prod^{n}_{i=1}D_{1}^{0}(i),\ldots)$), expansions (20) take the form: $\displaystyle g_{s}(t,1,\ldots,s)=\mathfrak{A}_{s}(t,1,\ldots,s)\,\prod\limits_{i=1}^{s}g_{1}^{0}(i),\quad s\geq 1,$ where the $s$-th order cumulant $\mathfrak{A}_{s}(t)$ of the groups of operators (7) is defined by the following expansion $\displaystyle\hskip-34.1433pt\mathfrak{A}_{s}(t,1,\ldots,s)=\sum\limits_{\mathrm{P}:\,(1,\ldots,s)=\bigcup_{i}X_{i}}(-1)^{|\mathrm{P}|-1}({|\mathrm{P}|-1})!\prod\limits_{X_{i}\subset\mathrm{P}}\mathcal{G}^{\ast}_{|X_{i}|}(t,X_{i}),$ (22) where the notations accepted in formula (20) are used. Thus, the cumulant origin of the correlation operators (14) induces the cumulant structure of the one-parameter mapping (20). We emphasize that the dynamics of correlations, i.e. the hierarchy of fundamental equations (18), which describes the evolution of state correlations, can be used as a basis for describing the evolution of states of both a system of a finite and of an infinite number of particles, instead of the von Neumann equations (8) for density operators. ### 2.3 The BBGKY hierarchies of evolution equations To describe the evolution of quantum systems of both finite and infinite numbers of particles, another approach to the description of observables and a state is used, which is equivalent to the approach formulated above in the case of systems of a finite average number of particles [6],[12]. 
Indeed, the mean value functional of observables (5) can also be represented in the following form $\displaystyle\hskip-34.1433pt\langle A\rangle(t)=(I,D(0))^{-1}\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{1,\ldots,n}\,A_{n}(t)\,D_{n}^{0}=$ (23) $\displaystyle\sum\limits_{s=0}^{\infty}\frac{1}{s!}\,\mathrm{Tr}_{1,\ldots,s}\,B_{s}(t,1,\ldots,s)\,F_{s}^{0}(1,\ldots,s),$ where, to describe the observables and the state, the sequences of reduced ($s$-particle) operators of observables $B(t)=(B_{0},B_{1}(t,1),\ldots,B_{s}(t,1,\ldots,s),\ldots)$ and of reduced ($s$-particle) density operators $F(0)=(I,F_{1}^{0}(1),\ldots,F_{s}^{0}(1,\ldots,s),\ldots)$ are introduced, respectively [6],[66]. The reduced observables are defined in terms of the observables by the expansions [28]: $\displaystyle\hskip-34.1433ptB_{s}(t,1,\ldots,s)=\sum_{n=0}^{s}\,\frac{(-1)^{n}}{n!}\sum_{j_{1}\neq\ldots\neq j_{n}=1}^{s}A_{s-n}(t,(1,\ldots,s)\setminus(j_{1},\ldots,j_{n})),\quad s\geq 1,$ (24) and the reduced density operators are defined in terms of the density operators as follows [12]: $\displaystyle\hskip-34.1433ptF_{s}^{0}(1,\ldots,s)=(I,D(0))^{-1}\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{s+1,\ldots,s+n}\,D_{s+n}^{0}(1,\ldots,s+n),\quad s\geq 1.$ (25) We emphasize that the possibility of describing observables and a state with the help of the corresponding reduced operators arises naturally as a result of dividing the series in expression (5) by the series of the normalizing factor, i.e. as a consequence of redefining the representation of the mean value functional (23). 
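In the lowest orders, expansions (24) give, for instance:

```latex
B_1(t,1) = A_1(t,1) - A_0,
B_2(t,1,2) = A_2(t,1,2) - A_1(t,1) - A_1(t,2) + A_0,
```

so the reduced observables are cumulant-type combinations of the observables, in complete analogy with the cumulants of density operators (14).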
If at the initial moment an observable is determined by the sequence of reduced observables $B(0)=(B_{0},B_{1}^{0}(1),\ldots,B_{s}^{0}(1,\ldots,s),\ldots)\in\mathfrak{L}(\mathcal{F}_{\mathcal{H}})$, then for arbitrary $t\in\mathbb{R}$ the sequence $B(t)=(B_{0},B_{1}(t,1),\ldots,B_{s}(t,1,\ldots,s),\ldots)$ of reduced observables (24) satisfies the Cauchy problem of the quantum dual BBGKY hierarchy [21],[23],[19]: $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}B_{s}(t,1,\ldots,s)=\big{(}\sum\limits_{j=1}^{s}\mathcal{N}(j)+\epsilon\hskip-5.69054pt\sum\limits_{j_{1}<j_{2}=1}^{s}\mathcal{N}_{\mathrm{int}}(j_{1},j_{2})\big{)}B_{s}(t,1,\ldots,s)+$ (26) $\displaystyle\hskip-14.22636pt+\epsilon\,\sum_{j_{1}\neq j_{2}=1}^{s}\mathcal{N}_{\mathrm{int}}(j_{1},j_{2})B_{s-1}(t,1,\ldots,j_{1}-1,j_{1}+1,\ldots,s),$ $\displaystyle\hskip-34.1433ptB_{s}(t,1,\ldots,s)_{\mid t=0}=B_{s}^{0}(1,\ldots,s),\quad s\geq 1,$ (27) where the notation accepted in formula (3) was used. We remark that the hierarchy of equations (26) has the structure of recurrent evolution equations, for example, $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}B_{1}(t,1)=\mathcal{N}(1)B_{1}(t,1),$ $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}B_{2}(t,1,2)=\big{(}\sum\limits_{j=1}^{2}\mathcal{N}(j)+\epsilon\,\mathcal{N}_{\mathrm{int}}(1,2)\big{)}B_{2}(t,1,2)+\epsilon\,\mathcal{N}_{\mathrm{int}}(1,2)\big{(}B_{1}(t,1)+B_{1}(t,2)\big{)}.$ The solution of the Cauchy problem (26),(27) is represented by the following expansions [28],[19]: $\displaystyle\hskip-34.1433ptB_{s}(t,1,\ldots,s)=\sum_{n=0}^{s}\,\frac{1}{n!}\sum_{j_{1}\neq\ldots\neq j_{n}=1}^{s}\mathfrak{A}_{1+n}\big{(}t,\\{(1,\ldots,s)\setminus(j_{1},\ldots,j_{n})\\},$ (28) $\displaystyle(j_{1},\ldots,j_{n})\big{)}\,B_{s-n}^{0}(1,\ldots,j_{1}-1,j_{1}+1,\ldots,j_{n}-1,j_{n}+1,\ldots,s),\quad s\geq 1,$ where the generating operator of this expansion is the $(1+n)$-th order cumulant of the groups of operators (4): 
$\displaystyle\hskip-34.1433pt\mathfrak{A}_{1+n}(t,\\{(1,\ldots,s)\setminus(j_{1},\ldots,j_{n})\\},(j_{1},\ldots,j_{n}))\doteq$ $\displaystyle\sum\limits_{\mathrm{P}:\,(\\{(1,\ldots,s)\setminus(j_{1},\ldots,j_{n})\\},(j_{1},\ldots,j_{n}))={\bigcup}_{i}X_{i}}(-1)^{\mathrm{|P|}-1}({\mathrm{|P|}-1})!\prod_{X_{i}\subset\mathrm{P}}\mathcal{G}_{|\theta(X_{i})|}(t,\theta(X_{i})),\quad n\geq 0,$ where notation similar to formula (21) is used. We note that expansion (28) for the solution can be represented as an iteration series (perturbation theory series) of the recurrent evolution equations (26) as a result of the application of analogs of the Duhamel equation to the generating operators, i.e. to the cumulants of groups of operators (4). According to definition (24), observables of additive type correspond to the one-component sequences $B^{(1)}(0)=(0,b_{1}^{0}(1),0,\ldots)$ of reduced observables, and the sequences $B^{(k)}(0)=(0,\ldots,$ $0,b_{k}^{0}(1,\ldots,k),0,\ldots)$ correspond to observables of $k$-ary (non-additive) type. Then, for each type of observables, expansion (28) takes the corresponding form $\displaystyle B_{s}^{(1)}(t,1,\ldots,s)=\mathfrak{A}_{s}(t,1,\ldots,s)\sum_{j=1}^{s}b_{1}^{0}(j),\quad s\geq 1,$ (29) where the generating operator $\mathfrak{A}_{s}(t)$ is the $s$-th order cumulant of the groups of operators (4), and, if $s\geq k$, $\displaystyle\hskip-34.1433ptB_{s}^{(k)}(t,1,\ldots,s)=\frac{1}{(s-k)!}\sum_{j_{1}\neq\ldots\neq j_{s-k}=1}^{s}\mathfrak{A}_{1+s-k}\big{(}t,\\{(1,\ldots,s)\setminus(j_{1},\ldots,j_{s-k})\\},j_{1},\ldots,$ (30) $\displaystyle j_{s-k}\big{)}\,b_{k}^{0}(1,\ldots,j_{1}-1,j_{1}+1,\ldots,j_{s-k}-1,j_{s-k}+1,\ldots,s),$ and, if $1\leq s<k$, we have: $B_{s}^{(k)}(t)=0$. Traditionally, the evolution of many-particle systems is described within the framework of the evolution of the state governed by the BBGKY hierarchy for reduced density operators [3],[6],[12],[66]. 
Indeed, for functional (23) the following representation holds $\displaystyle(B(t),F(0))=(B(0),F(t)),$ that is, the evolution of quantum systems of many particles can equivalently be described as the evolution of the state using reduced density operators (25). If $F(0)\in\mathfrak{L}^{1}(\mathcal{F}_{\mathcal{H}})$, then for arbitrary $t\in\mathbb{R}$ the sequence $F(t)=(I,F_{1}(t,1),\ldots,F_{s}(t,$ $1,\ldots,s),\ldots)$ of reduced density operators (25) satisfies the Cauchy problem of the quantum BBGKY hierarchy [6],[12]: $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}F_{s}(t,1,\ldots,s)=\mathcal{N}^{\ast}_{s}F_{s}(t,1,\ldots,s)+\epsilon\,\sum\limits_{j=1}^{s}\mathrm{Tr}_{s+1}\mathcal{N}^{\ast}_{\mathrm{int}}(j,s+1)F_{s+1}(t,1,\ldots,s,s+1),$ (31) $\displaystyle\hskip-34.1433ptF_{s}(t,1,\ldots,s)\mid_{t=0}=F_{s}^{0}(1,\ldots,s),\quad s\geq 1,$ (32) where the notation from formula (10) is used. The solution of the Cauchy problem (31),(32) is represented by the following series [35],[36]: $\displaystyle\hskip-34.1433ptF_{s}(t,1,\ldots,s)=$ (33) $\displaystyle\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{s+1,\ldots,{s+n}}\,\mathfrak{A}_{1+n}(t,\\{1,\ldots,s\\},s+1,\ldots,{s+n})F_{s+n}^{0}(1,\ldots,{s+n}),\quad s\geq 1,$ where the generating operator of this series $\displaystyle\hskip-42.67912pt\mathfrak{A}_{1+n}(t,\\{1,\ldots,s\\},s+1,\ldots,{s+n})=$ (34) $\displaystyle\sum\limits_{\mathrm{P}\,:(\\{1,\ldots,s\\},s+1,\ldots,{s+n})={\bigcup\limits}_{i}X_{i}}(-1)^{|\mathrm{P}|-1}(|\mathrm{P}|-1)!\prod_{X_{i}\subset\mathrm{P}}\mathcal{G}^{\ast}_{|\theta(X_{i})|}(t,\theta(X_{i}))$ is the $(1+n)$-th order cumulant of the groups of operators (7). 
In expansion (34), the symbol ${\sum\limits}_{\mathrm{P}}$ means the sum over all possible partitions $\mathrm{P}$ of the set $(\\{1,\ldots,s\\},s+1,\ldots,{s+n})$ into $|\mathrm{P}|$ nonempty subsets $X_{i}\subset(\\{1,\ldots,s\\},s+1,\ldots,{s+n})$, which do not mutually intersect, and the notations introduced in (20) are used. We remark that one of the methods for constructing solutions (33) and (28) is based on the application of cluster expansions [23],[36] to groups of operators (7) and (4), which are the generating operators of the series (25) for the reduced density operators and of the expansions (24) for the reduced observables, respectively. We note that a common form of representation of the solution of the Cauchy problem of the BBGKY hierarchy is its representation as a perturbation theory series (iteration series of the BBGKY hierarchy) [6],[57] (see also [3],[12] and references therein): $\displaystyle\hskip-34.1433ptF_{s}(t,1,\ldots,s)=$ (35) $\displaystyle\sum\limits_{n=0}^{\infty}\,\int\limits_{0}^{t}dt_{1}\ldots\int\limits_{0}^{t_{n-1}}dt_{n}\mathrm{Tr}_{s+1,\ldots,s+n}\mathcal{G}^{\ast}_{s}(t-t_{1})\sum\limits_{j_{1}=1}^{s}\mathcal{N}^{\ast}_{\mathrm{int}}(j_{1},s+1)\mathcal{G}^{\ast}_{s+1}(t_{1}-t_{2})\ldots$ $\displaystyle\mathcal{G}^{\ast}_{s+n-1}(t_{n-1}-t_{n})\sum\limits_{j_{n}=1}^{s+n-1}\mathcal{N}^{\ast}_{\mathrm{int}}(j_{n},s+n)\mathcal{G}^{\ast}_{s+n}(t_{n})F_{s+n}^{0}(1,\ldots,s+n),\quad s\geq 1,$ where notations from the expression (10) are used. The representation of the solution by the series (33) is equivalent to series (35) due to the validity, under the appropriate conditions on the initial data and the interaction potential of particles, of analogs of the Duhamel equation for the generating operators (34) of series (33), that is, for the cumulants of groups of operators (7). 
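To make the structure of series (33) transparent, its first two terms can be written out using the low-order cumulants (34):

```latex
F_s(t,1,\ldots,s) = \mathcal{G}^{\ast}_{s}(t,1,\ldots,s)\,F_s^{0}(1,\ldots,s)
 + \mathrm{Tr}_{s+1}\bigl(\mathcal{G}^{\ast}_{s+1}(t,1,\ldots,s+1)
 - \mathcal{G}^{\ast}_{s}(t,1,\ldots,s)\,\mathcal{G}^{\ast}_{1}(t,s+1)\bigr)
 F_{s+1}^{0}(1,\ldots,s+1) + \ldots
```

The leading term is the free streaming of the $s$-particle cluster, while the first correction accounts for the correlated dynamics with one additional particle.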
Thus, there are two approaches to describing the evolution of quantum systems of many particles, namely, in terms of observables whose evolution is governed by the dual BBGKY hierarchy (26), or in terms of a state whose evolution is governed by the BBGKY hierarchy (31). For systems of finitely many particles, these hierarchies of evolution equations are equivalent to the Heisenberg equation (1) and the von Neumann equation (8), respectively. In paper [23], these hierarchies of evolution equations are generalized for many-particle systems with multiparticle interaction potentials. An alternative approach, as noted above, to the description of the evolution of the state of quantum systems, consisting of both finitely and infinitely many particles, can be formulated by means of operators defined by cluster expansions of the reduced density operators, namely: $\displaystyle\hskip-42.67912ptG_{s}(t,1,\ldots,s)=\sum\limits_{\mbox{\scriptsize$\begin{array}[]{c}\mathrm{P}:(1,\ldots,s)=\bigcup_{i}X_{i}\end{array}$}}(-1)^{|\mathrm{P}|-1}(|\mathrm{P}|-1)!\,\prod_{X_{i}\subset\mathrm{P}}F_{|X_{i}|}(t,X_{i}),\quad s\geq 1,$ (37) where the notation from formula (14) was used. Such cumulants of reduced density operators are interpreted as reduced correlations of the state [6]. On a microscopic scale, the macroscopic characteristics of the fluctuations of the observables are directly determined by the reduced correlation operators; for example, the dispersion functional of additive-type observables, that is, of sequences $A^{(1)}=(0,a_{1}(1),\ldots,\sum_{i=1}^{n}a_{1}(i),\ldots)$, is represented by the following formula $\displaystyle\hskip-34.1433pt\langle(A^{(1)}-\langle A^{(1)}\rangle)^{2}\rangle(t)=\mathrm{Tr}_{1}\,(a_{1}^{2}(1)-\langle A^{(1)}\rangle^{2}(t))G_{1}(t,1)+\mathrm{Tr}_{1,2}\,a_{1}(1)a_{1}(2)G_{2}(t,1,2),$ where the mean value functional of observables of additive type is denoted by $\langle A^{(1)}\rangle(t)=\mathrm{Tr}_{1}\,a_{1}(1)G_{1}(t,1)$. 
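The lowest orders of cluster expansions (37) are:

```latex
G_1(t,1) = F_1(t,1), \qquad
G_2(t,1,2) = F_2(t,1,2) - F_1(t,1)\,F_1(t,2),
```

so the reduced correlation operator $G_2(t)$ measures the deviation of the two-particle reduced state from a product of one-particle reduced states, vanishing precisely for an uncorrelated state.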
If $G(0)\in\mathfrak{L}^{1}(\mathcal{F}_{\mathcal{H}})$, then for arbitrary $t\in\mathbb{R}$ the sequence of reduced correlation operators (37) satisfies the Cauchy problem for the hierarchy of nonlinear evolution equations (quantum nonlinear BBGKY hierarchy) [6]: $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}G_{s}(t,1,\ldots,s)=\mathcal{N}^{\ast}_{s}G_{s}(t,1,\ldots,s)+$ (38) $\displaystyle\sum\limits_{\mathrm{P}:\,(1,\ldots,s)=X_{1}\bigcup X_{2}}\,\sum\limits_{i_{1}\in X_{1}}\sum\limits_{i_{2}\in X_{2}}\epsilon\,\mathcal{N}_{\mathrm{int}}^{\ast}(i_{1},i_{2})G_{|X_{1}|}(t,X_{1})G_{|X_{2}|}(t,X_{2})+$ $\displaystyle\mathrm{Tr}_{s+1}\sum_{i\in(1,\ldots,s)}\epsilon\,\mathcal{N}^{\ast}_{\mathrm{int}}(i,s+1)\big{(}G_{s+1}(t,1,\ldots,s+1)+$ $\displaystyle\sum_{\mbox{\scriptsize$\begin{array}[]{c}\mathrm{P}:(1,\ldots,s+1)=X_{1}\bigcup X_{2},\\\ i\in X_{1};s+1\in X_{2}\end{array}$}}G_{|X_{1}|}(t,X_{1})G_{|X_{2}|}(t,X_{2})\big{)},$ (41) $\displaystyle\hskip-34.1433ptG_{s}(t,1,\ldots,s)\big{|}_{t=0}=G_{s}^{0}(1,\ldots,s),\quad s\geq 1,$ (42) where the notations for the hierarchy of equations (18) are used. In the case of an initial state without correlations between the particles, i.e. the state described by the sequence $G^{(c)}=(0,G_{1}^{0},0,\ldots,0,\ldots)$, the solution of the Cauchy problem (38),(42) is represented by the following series: $\displaystyle\hskip-42.67912ptG_{s}(t,1,\ldots,s)=\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{s+1,\ldots,s+n}\,\mathfrak{A}_{s+n}(t;1,\ldots,s+n)\prod_{i=1}^{s+n}G_{1}^{0}(i),\quad s\geq 1,$ (43) where the generating operator $\mathfrak{A}_{s+n}(t)$ is the $(s+n)$-th order cumulant (22) of groups of operators (7). We note that for the specified initial state, the expressions for the reduced density operators (33) and the reduced correlation operators (43) differ only in the order of the generating operator for the corresponding terms of the series representing these operators. 
For an arbitrary initial state (42), a solution of the Cauchy problem for the nonlinear BBGKY hierarchy (38) was constructed in the paper [25] (see also [27]). Note that the description of the evolution of the state of quantum systems of many particles by means of both reduced density operators and reduced correlation operators can be based on an approach founded on the dynamics of correlations, which is governed by the von Neumann hierarchy (18) for correlation operators. Within the framework of this approach, the reduced density operators are defined by the following series: $\displaystyle\hskip-34.1433ptF_{s}(t,1,\ldots,s)\doteq\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{s+1,\ldots,s+n}\,\,g_{1+n}(t,\\{1,\ldots,s\\},s+1,\ldots,s+n),\quad s\geq 1,$ (44) where the correlation operators $g_{1+n}(t),\,n\geq 0,$ of a cluster of particles and particles are represented by the following expansions: $\displaystyle\hskip-34.1433ptg_{1+n}(t,\\{1,\ldots,s\\},s+1,\ldots,s+n)=$ (45) $\displaystyle\sum\limits_{\mathrm{P}:\,(\\{1,\ldots,s\\},\,s+1,\ldots,s+n)=\bigcup_{i}X_{i}}\mathfrak{A}_{|\mathrm{P}|}\big{(}t,\\{\theta(X_{1})\\},\ldots,\\{\theta(X_{|\mathrm{P}|})\\}\big{)}\prod_{X_{i}\subset\mathrm{P}}g_{|X_{i}|}^{0}(X_{i}),\quad n\geq 0,$ in which the notations from formula (20) are used. The reduced correlation operators are defined by the corresponding series: $\displaystyle\hskip-22.76219ptG_{s}(t,1,\ldots,s)\doteq\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{s+1,\ldots,s+n}\,\,g_{s+n}(t,1,\ldots,s+n),\quad s\geq 1,$ (46) where the correlation operators $g_{s+n}(t),\,n\geq 0,$ are represented by expansions (20). Thus, as a result of definitions (44) and (46), we establish that the cumulant structure of the generating operators of expansions for correlation operators (20), or of the more general expansions (45), induces the cumulant structure of the series for reduced density operators (33) and reduced correlation operators (43), i.e. 
in fact, the evolution of a system of infinitely many particles is generated by the dynamics of correlations. Note also that in the article [29] the hierarchies of evolution equations (18),(26),(31),(38) are studied as evolution equations in functional derivatives. ## 3 On quantum kinetic equations This section describes an approach in which the evolution of the state of a quantum many-particle system is described by means of the state of a typical particle or, in other words, discusses the origin of the description of the evolution of a state by means of quantum kinetic equations [G09],[33],[38]. ### 3.1 The origin of the kinetic evolution of a state We consider a system of many particles obeying the Maxwell–Boltzmann statistics in the absence of correlations between particles at the initial time, i.e. a system whose initial state is determined by a one-particle density operator, namely, by the sequence of reduced density operators $F^{(c)}=(I,F_{1}^{0}(1),\ldots,\prod^{n}_{i=1}F_{1}^{0}(i),\ldots)$. We emphasize that the formulated assumption about the initial state is inherent in the kinetic theory of systems of many particles [3],[23],[43]. 
Due to the fact that the initial state is determined by a one-particle density operator, the following representation is valid for the mean value functional of observables (23): $\displaystyle\big{(}B(t),F^{(c)}\big{)}=\big{(}B(0),F(t\mid F_{1}(t))\big{)},$ that is, the evolution of all possible states is described by the sequence $F(t\mid F_{1}(t))=\big{(}I,F_{1}(t),F_{2}(t\mid F_{1}(t)),\ldots,F_{s}(t\mid F_{1}(t)),\ldots\big{)}$ of reduced functionals of the state $F_{s}(t,1,\ldots,s\mid F_{1}(t)),\,s\geq 2,$ which are represented by the series: $\displaystyle\hskip-34.1433ptF_{s}\bigl{(}t,1,\ldots,s\mid F_{1}(t)\bigr{)}=$ (47) $\displaystyle\sum_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{s+1,\ldots,s+n}\,\mathfrak{V}_{1+n}\bigl{(}t,\\{1,\ldots,s\\},s+1,\ldots,s+n\bigr{)}\prod_{i=1}^{s+n}F_{1}(t,i),\quad s\geq 2,$ with respect to a one-particle density operator $\displaystyle\hskip-34.1433ptF_{1}(t,1)=\sum\limits_{n=0}^{\infty}\frac{1}{n!}\,\mathrm{Tr}_{2,\ldots,{1+n}}\,\mathfrak{A}_{1+n}(t,1,\ldots,n+1)\prod_{i=1}^{n+1}F_{1}^{0}(i),$ (48) which describes the evolution of the state of a typical particle of a quantum system of many particles. The generating operators of the series (48) are the cumulants (34) of the corresponding order of the groups of operators (7). The reduced functionals of the state $F_{s}(t\mid F_{1}(t)),\,s\geq 2$, describe all possible correlations that are created in the process of evolution of the quantum system of many particles in terms of the state of a typical particle. 
The $(1+n)$-th order generating operator of the series (47) is determined by the following expansion [38] $\displaystyle\hskip-25.60747pt\mathfrak{V}_{1+n}\bigl{(}t,\\{1,\ldots,s\\},s+1,\ldots,s+n\bigr{)}=n!\,\sum_{k=0}^{n}\,(-1)^{k}\,\sum_{n_{1}=1}^{n}\ldots$ (49) $\displaystyle\hskip-19.91692pt\sum_{n_{k}=1}^{n-n_{1}-\ldots- n_{k-1}}\frac{1}{(n-n_{1}-\ldots- n_{k})!}\,\widehat{\mathfrak{A}}_{1+n-n_{1}-\ldots- n_{k}}(t,\\{1,\ldots,s\\},s+1,\ldots,s+n-$ $\displaystyle\hskip-19.91692ptn_{1}-\ldots- n_{k})\prod_{j=1}^{k}\,\sum\limits_{\mbox{\scriptsize$\begin{array}[]{c}\mathrm{D}_{j}:Z_{j}=\bigcup_{l_{j}}X_{l_{j}},\\\ |\mathrm{D}_{j}|\leq s+n-n_{1}-\dots- n_{j}\end{array}$}}\frac{1}{|\mathrm{D}_{j}|!}\sum_{i_{1}\neq\ldots\neq i_{|\mathrm{D}_{j}|}=1}^{s+n-n_{1}-\ldots- n_{j}}\,\prod_{X_{l_{j}}\subset\mathrm{D}_{j}}\,\frac{1}{|X_{l_{j}}|!}\,\widehat{\mathfrak{A}}_{1+|X_{l_{j}}|}(t,i_{l_{j}},X_{l_{j}}),$ (52) where the symbol $\sum_{\mathrm{D}_{j}:Z_{j}=\bigcup_{l_{j}}X_{l_{j}}}$ means the sum over all possible dissections of the linearly ordered set $Z_{j}\equiv(s+n-n_{1}-\ldots-n_{j}+1,\ldots,s+n-n_{1}-\ldots-n_{j-1})$ on no more than $s+n-n_{1}-\ldots-n_{j}$ linearly ordered subsets and the generating operator of this expansion $\widehat{\mathfrak{A}}_{1+n}(t)$ is the $(1+n)$-th order cumulant (34) of the groups of scattering operators $\displaystyle\widehat{\mathcal{G}}_{n}(t)\doteq\mathcal{G}_{n}^{\ast}(t,1,\ldots,n)\prod_{i=1}^{n}(\mathcal{G}_{1}^{\ast}(t,i))^{-1},\quad n\geq 1,$ We give the simplest examples of generating operators of expansion (49): $\displaystyle\hskip-22.76219pt\mathfrak{V}_{1}(t,\\{1,\ldots,s\\})=\widehat{\mathfrak{A}}_{1}(t,\\{1,\ldots,s\\}),$ $\displaystyle\hskip-22.76219pt\mathfrak{V}_{1+1}(t,\\{1,\ldots,s\\},s+1)=\widehat{\mathfrak{A}}_{2}(t,\\{1,\ldots,s\\},s+1)-\widehat{\mathfrak{A}}_{1}(t,\\{1,\ldots,s\\})\sum_{i=1}^{s}\widehat{\mathfrak{A}}_{2}(t,i,s+1).$ Thus, according to the definition of (44), the cumulant structure of generating 
operators of expansions for correlation operators (45) induces a generalized cumulant structure of generating operators for series of reduced functionals of the state (47). We note that for the initial state specified by the one-particle density operator $F^{(c)}$, the Cauchy problem (38),(42) is not well-defined, because the initial data are not independent for each unknown reduced density operator of the BBGKY hierarchy. As a consequence, such a Cauchy problem can be reformulated as a new Cauchy problem for a one-particle density operator with an independent initial condition, together with a sequence of explicitly defined functionals of the solution of the Cauchy problem of the evolution equation for a one-particle density operator (kinetic equation). In this case, the method of constructing the reduced functionals of the state (47) is based on the application of a special type of cluster expansions, the so-called kinetic cluster expansions [38], to the generating operators (34) of the series representing the reduced density operators (33). ### 3.2 The generalized quantum kinetic equation If $F_{1}^{0}\in\mathfrak{L}^{1}(\mathcal{H})$, then for an arbitrary $t\in\mathbb{R}$ the one-particle density operator (48) satisfies the Cauchy problem for the generalized quantum kinetic equation [38]: $\displaystyle\hskip-22.76219pt\frac{\partial}{\partial t}F_{1}(t,1)=\mathcal{N}^{\ast}(1)F_{1}(t,1)+\epsilon\,\mathrm{Tr}_{2}\,\mathcal{N}_{\mathrm{int}}^{\ast}(1,2)F_{2}\bigl{(}t,1,2\mid F_{1}(t)\bigr{)},$ (53) $\displaystyle\hskip-22.76219ptF_{1}(t,1)\big{|}_{t=0}=F_{1}^{0}(1),$ (54) where the collision integral is determined by the two-particle functional of the state (47) and notation (10) is used. In [38], the existence theorem for the Cauchy problem (53),(54) in the space of trace-class operators is proved. Thus, for the initial state in the absence of correlations between particles, i.e. 
of the state specified by the one-particle density operator, the evolution of all possible states of the quantum system of many particles can be described without any approximations by means of a one-particle density operator (53) and of a sequence of functionals (47) of this operator (48). ### 3.3 Scaling properties of state evolution The generally accepted philosophy of describing evolution by kinetic equations is as follows [8],[43]. If the initial state is determined by the state of a typical particle of the system, i.e. at the initial moment there are no correlations between particles (chaos condition), then in a certain scaling approximation [3],[15],[23],[42] the evolution of the state of a system of many particles can be effectively described in terms of the state of a typical particle, i.e. by a one-particle density operator, which is governed by the corresponding nonlinear kinetic equation. Further, the scaling asymptotic behavior of the reduced functionals of the state $F(t\mid F_{1}(t))$ is considered in the specific case of the mean-field limit [40]. Assume that the mean-field limit of the initial one-particle density operator exists in the following sense: $\displaystyle\lim\limits_{\epsilon\rightarrow 0}\big{\|}\epsilon\,F_{1}^{0}-f_{1}^{0}\big{\|}_{\mathfrak{L}^{1}(\mathcal{H})}=0,$ (55) where $\epsilon>0$ is a scaling parameter. Since on an arbitrary finite time interval for the first-order cumulant of asymptotically perturbed groups of operators (7), i.e. 
for a strongly continuous group (7), the following equality holds: $\displaystyle\lim\limits_{\epsilon\rightarrow 0}\Big{\|}\mathcal{G}^{\ast}_{s}(t,1,\ldots,s)f_{s}-\prod\limits_{j=1}^{s}\mathcal{G}^{\ast}_{1}(t,j)f_{s}\Big{\|}_{\mathfrak{L}^{1}(\mathcal{H}_{s})}=0,$ then for the $(1+n)$-th order cumulant of asymptotically perturbed groups of operators (7) the following equality is valid: $\displaystyle\hskip-34.1433pt\lim\limits_{\epsilon\rightarrow 0}\Big{\|}\frac{1}{\epsilon^{n}}\,\frac{1}{n!}\mathfrak{A}_{1+n}(t,\\{1,\ldots,s\\},s+1,\ldots,s+n)f_{s+n}-$ (56) $\displaystyle\hskip-5.69054pt\int\limits_{0}^{t}dt_{1}\ldots\int\limits_{0}^{t_{n-1}}dt_{n}\prod\limits_{j=1}^{s}\mathcal{G}_{1}^{\ast}(t-t_{1},j)\sum\limits_{i_{1}=1}^{s}\mathcal{N}^{\ast}_{\mathrm{int}}(i_{1},s+1)\prod\limits_{j_{1}=1}^{s+1}\mathcal{G}_{1}^{\ast}(t_{1}-t_{2},j_{1})\ldots$ $\displaystyle\hskip-5.69054pt\prod\limits_{j_{n-1}=1}^{s+n-1}\mathcal{G}^{\ast}_{1}(t_{n-1}-t_{n},j_{n-1})\sum\limits_{i_{n}=1}^{s+n-1}\mathcal{N}^{\ast}_{\mathrm{int}}(i_{n},s+n)\prod\limits_{j_{n}=1}^{s+n}\mathcal{G}^{\ast}_{1}(t_{n},j_{n})f_{s+n}\Big{\|}_{\mathfrak{L}^{1}(\mathcal{H}_{s+n})}=0,\quad n\geq 1.$ These equalities are a consequence of the validity, for bounded interaction potentials, of analogues of the Duhamel equation for the cumulants (34) of the groups of operators (7). According to equalities (56), the following limit theorem holds for the one-particle density operator (48). 
If condition (55) is satisfied, then for series (48) the following equality is valid: $\displaystyle\lim\limits_{\epsilon\rightarrow 0}\big{\|}\epsilon F_{1}(t)-f_{1}(t)\big{\|}_{\mathfrak{L}^{1}(\mathcal{H})}=0,$ where on an arbitrary finite time interval the limiting one-particle density operator $f_{1}(t)$ is determined by the series, convergent in the norm of the space $\mathfrak{L}^{1}(\mathcal{H})$, $\displaystyle\hskip-34.1433ptf_{1}(t,1)=\sum\limits_{n=0}^{\infty}\,\int\limits_{0}^{t}dt_{1}\ldots\int\limits_{0}^{t_{n-1}}dt_{n}\,\mathrm{Tr}_{2,\ldots,n+1}\mathcal{G}^{\ast}_{1}(t-t_{1},1)\mathcal{N}^{\ast}_{\mathrm{int}}(1,2)\prod\limits_{j_{1}=1}^{2}\mathcal{G}^{\ast}_{1}(t_{1}-t_{2},j_{1})\ldots$ (57) $\displaystyle\prod\limits_{i_{n}=1}^{n}\mathcal{G}^{\ast}_{1}(t_{n-1}-t_{n},i_{n})\sum\limits_{k_{n}=1}^{n}\mathcal{N}^{\ast}_{\mathrm{int}}(k_{n},n+1)\prod\limits_{j_{n}=1}^{n+1}\mathcal{G}^{\ast}_{1}(t_{n},j_{n})\prod\limits_{i=1}^{n+1}f_{1}^{0}(i).$ For bounded interaction potentials, series (57) converges in the norm of the space $\mathfrak{L}^{1}(\mathcal{H})$ provided that $t<t_{0}\equiv(2\,\|\Phi\|_{\mathfrak{L}(\mathcal{H}_{2})}\|f_{1}^{0}\|_{\mathfrak{L}^{1}(\mathcal{H})})^{-1}$. For the initial state $f_{1}^{0}\in\mathfrak{L}^{1}(\mathcal{H})$, the limiting operator (57) satisfies the Cauchy problem for the quantum Vlasov kinetic equation $\displaystyle\hskip-14.22636pt\frac{\partial}{\partial t}f_{1}(t,1)=\mathcal{N}^{\ast}_{1}(1)f_{1}(t,1)+\mathrm{Tr}_{2}\,\mathcal{N}^{\ast}_{\mathrm{int}}(1,2)f_{1}(t,1)f_{1}(t,2),$ (58) $\displaystyle\hskip-14.22636ptf_{1}(t)|_{t=0}=f_{1}^{0}.$ (59) We note that for pure states, i.e. 
$f_{1}(t)=|\psi(t)\rangle\langle\psi(t)|$ or in terms of the kernel of this operator $f_{1}(t,q,q^{\prime})=\psi(t,q)\psi^{\ast}(t,q^{\prime})$, the quantum Vlasov kinetic equation (58) reduces to the Hartree equation $\displaystyle i\frac{\partial}{\partial t}\psi(t,q)=-\frac{1}{2}\Delta_{q}\psi(t,q)+\int dq^{\prime}\Phi(q-q^{\prime})|\psi(t,q^{\prime})|^{2}\psi(t,q).$ In particular, if the kernel of the interaction potential $\Phi(q)=\delta(q)$ is a Dirac measure, then the Hartree kinetic equation turns into the nonlinear Schrödinger equation with cubic nonlinearity $\displaystyle i\frac{\partial}{\partial t}\psi(t,q)=-\frac{1}{2}\Delta_{q}\psi(t,q)+|\psi(t,q)|^{2}\psi(t,q).$ According to property (56) for cumulants of asymptotically perturbed groups of operators, the following equalities are true for the generating operators (49): $\displaystyle\hskip-14.22636pt\lim\limits_{\epsilon\rightarrow 0}\Big{\|}\big{(}\mathfrak{V}_{1}(t,\\{1,\ldots,s\\})-I\big{)}f_{s}\Big{\|}_{\mathfrak{L}^{1}(\mathcal{H}_{s})}=0,$ and for $n\geq 1$ $\displaystyle\hskip-14.22636pt\lim\limits_{\epsilon\rightarrow 0}\Big{\|}\frac{1}{\epsilon^{n}}\,\mathfrak{V}_{1+n}(t,\\{1,\ldots,s\\},s+1,\ldots,s+n)f_{s+n}\Big{\|}_{\mathfrak{L}^{1}(\mathcal{H}_{s+n})}=0,$ respectively. Due to these equalities and the convergence, in the mean-field limit, of the solution of the Cauchy problem for the generalized quantum kinetic equation (53),(54) to the solution of the Cauchy problem for the quantum Vlasov kinetic equation (58),(59), the following equalities are valid for the reduced functionals of the state (47): $\displaystyle\lim\limits_{\epsilon\rightarrow 0}\Big{\|}\epsilon^{s}F_{s}\big{(}t,1,\ldots,s\mid F_{1}(t)\big{)}-\prod\limits_{j=1}^{s}f_{1}(t,j)\Big{\|}_{\mathfrak{L}^{1}(\mathcal{H}_{s})}=0,\quad s\geq 2,$ where the limiting operator $f_{1}(t)$ is determined by series (57), which represents the solution of the quantum Vlasov equation (58). 
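The reduction to the cubic nonlinearity is a one-line computation: substituting $\Phi(q)=\delta(q)$ into the interaction term of the Hartree equation gives

```latex
\int dq^{\prime}\,\delta(q-q^{\prime})\,|\psi(t,q^{\prime})|^{2}\,\psi(t,q)
  =|\psi(t,q)|^{2}\,\psi(t,q),
```

which is exactly the cubic term of the nonlinear Schrödinger equation above.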
The last statement describes the process of propagation of the initial chaos in the mean-field limit, i.e. if at the initial moment there are no correlations in the system, then in the process of evolution no correlations are created in this approximation. Note that the traditional approach to the problem of the propagation of initial chaos is based on the construction of the asymptotic behavior of a solution of the quantum BBGKY hierarchy for reduced density operators within the perturbation theory [3],[42]. ## 4 The kinetic evolution of observables In this section, we consider the scaling asymptotic behavior of a solution of the Cauchy problem (28) for the dual BBGKY hierarchy (26),(27) in the case of the mean-field limit [19], or, in other words, we consider the foundations of the description of the kinetic evolution of quantum systems of many particles within the framework of observables. Note that one of the advantages of such an approach to the description of kinetic evolution is the possibility to describe the propagation of initial correlations in scaling limits. ### 4.1 The hierarchy of kinetic equations for observables Suppose that at the initial moment of time there exists the mean-field limit of the reduced observables (27) in the sense of $\ast$-weak convergence in the space of bounded operators $\mathfrak{L}(\mathcal{H}_{s})$ $\displaystyle\mathrm{w^{\ast}-}\lim\limits_{\epsilon\rightarrow 0}\big{(}\epsilon^{-s}B_{s}^{\epsilon,0}-b_{s}^{0}\big{)}=0,$ (60) where $\epsilon>0$ is a scaling parameter. Then the following limit theorem holds for the reduced observables (28), which are the solution of the dual BBGKY hierarchy (26). 
If the condition (60) is satisfied, then for an arbitrary finite time interval the mean-field limit of the sequence of reduced observables (28) exists in the same sense [24],[19] $\displaystyle\mathrm{w^{\ast}-}\lim\limits_{\epsilon\rightarrow 0}\big{(}\epsilon^{-s}B_{s}(t)-b_{s}(t)\big{)}=0,$ (61) where the reduced observables $b_{s}(t),\,s\geq 1$, are determined by the following expansions: $\displaystyle\hskip-34.1433ptb_{s}(t,1,\ldots,s)=\sum\limits_{n=0}^{s-1}\,\int\limits_{0}^{t}dt_{1}\ldots\int\limits_{0}^{t_{n-1}}dt_{n}\,\mathcal{G}_{s}^{0}(t-t_{1})\sum\limits_{i_{1}\neq j_{1}=1}^{s}\mathcal{N}_{\mathrm{int}}(i_{1},j_{1})\mathcal{G}_{s-1}^{0}(t_{1}-t_{2})\ldots$ (62) $\displaystyle\hskip-34.1433pt\mathcal{G}_{s-n+1}^{0}(t_{n-1}-t_{n})\sum\limits^{s}_{\mbox{\scriptsize$\begin{array}[]{c}i_{n}\neq j_{n}=1,\\\ i_{n},j_{n}\neq(j_{1},\ldots,j_{n-1})\end{array}$}}\mathcal{N}_{\mathrm{int}}(i_{n},j_{n})\mathcal{G}_{s-n}^{0}(t_{n})b_{s-n}^{0}((1,\ldots,s)\setminus({j_{1}},\ldots,{j_{n}})),$ (65) $\displaystyle\hskip-34.1433pts\geq 1,$ where for the group of operators of non-interacting particles the following notation is used: $\displaystyle\mathcal{G}_{s-n+1}^{0}(t_{n-1}-t_{n})\equiv\prod\limits_{j\in(1,\ldots,s)\setminus(j_{1},\ldots,j_{n-1})}\mathcal{G}_{1}(t_{n-1}-t_{n},j).$ For a certain type of observables, the structure of expansion (62) takes a particular form; for example, in the case of the $k$-ary type of observables we have: $\displaystyle\hskip-34.1433ptb_{s}^{(k)}(t,1,\ldots,s)=\int\limits_{0}^{t}dt_{1}\ldots\int\limits_{0}^{t_{s-k-1}}dt_{s-k}\,\mathcal{G}_{s}^{0}(t-t_{1})\sum\limits_{i_{1}\neq j_{1}=1}^{s}\mathcal{N}_{\mathrm{int}}(i_{1},j_{1})\mathcal{G}_{s-1}^{0}(t_{1}-t_{2})\ldots$ (66) $\displaystyle\mathcal{G}_{s-n+1}^{0}(t_{s-k-1}-t_{s-k})\sum\limits^{s}_{\mbox{\scriptsize$\begin{array}[]{c}i_{s-k}\neq j_{s-k}=1,\\\ i_{s-k},j_{s-k}\neq(j_{1},\ldots,j_{s-k-1})\end{array}$}}\mathcal{N}_{\mathrm{int}}(i_{s-k},j_{s-k})\mathcal{G}_{s-n}^{0}(t_{s-k})\times$ 
(69) $\displaystyle b_{k}^{0}((1,\ldots,s)\setminus({j_{1}},\ldots,{j_{s-k}})),\quad 1\leq s\leq k.$ If $b^{0}\in\mathfrak{L}(\mathcal{F}_{\mathcal{H}})$, then the sequence $b(t)=(b_{0},b_{1}(t),\ldots,b_{s}(t),\ldots)$ of limit observables (62) satisfies the Cauchy problem of the dual Vlasov hierarchy: $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}b_{s}(t,1,\ldots,s)=\sum\limits_{j=1}^{s}\mathcal{N}(j)\,b_{s}(t,1,\ldots,s)+\sum_{j_{1}\neq j_{2}=1}^{s}\mathcal{N}_{\mathrm{int}}(j_{1},j_{2})\,b_{s-1}(t,(1,\ldots,s)\setminus(j_{1})),$ (70) $\displaystyle\hskip-34.1433ptb_{s}(t,1,\ldots,s)\mid_{t=0}=b_{s}^{0}(1,\ldots,s),\quad s\geq 1.$ (71) We give some examples of the dual Vlasov hierarchy (70) in terms of kernels of the operators for the reduced observables: $\displaystyle\hskip-22.76219pti\,\frac{\partial}{\partial t}b_{1}(t,q_{1};q^{\prime}_{1})=-\frac{1}{2}(-\Delta_{q_{1}}+\Delta_{q^{\prime}_{1}})b_{1}(t,q_{1};q^{\prime}_{1}),$ $\displaystyle\hskip-22.76219pti\,\frac{\partial}{\partial t}b_{2}(t,q_{1},q_{2};q^{\prime}_{1},q^{\prime}_{2})=\big{(}-\frac{1}{2}\sum\limits_{i=1}^{2}(-\Delta_{q_{i}}+\Delta_{q^{\prime}_{i}})+$ $\displaystyle\hskip 28.45274pt(\Phi(q^{\prime}_{1}-q^{\prime}_{2})-\Phi(q_{1}-q_{2}))\big{)}b_{2}(t,q_{1},q_{2};q^{\prime}_{1},q^{\prime}_{2})+$ $\displaystyle\hskip 28.45274pt\big{(}\Phi(q^{\prime}_{1}-q^{\prime}_{2})-\Phi(q_{1}-q_{2})\big{)}\big{(}b_{1}(t,q_{1};q^{\prime}_{1})+b_{1}(t,q_{2};q^{\prime}_{2})\big{)}.$ We note that the sequence of evolution equations (70) has the structure of recursive equations. Thus, in the mean-field limit, the collective behavior (kinetic evolution) of quantum systems of many particles is described in terms of the sequence of limiting reduced observables (62), in particular, by sequence (66) whose evolution is governed by the Cauchy problem of the dual Vlasov hierarchy (70),(71). 
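The recursive structure of the dual Vlasov hierarchy is already visible in operator form for $s=1,2$ (a direct specialization of (70), not written out above): the equation for $b_{1}(t)$ is closed, and the equation for $b_{2}(t)$ is driven by the already known $b_{1}(t)$,

```latex
\frac{\partial}{\partial t}b_{1}(t,1)=\mathcal{N}(1)\,b_{1}(t,1),\qquad
\frac{\partial}{\partial t}b_{2}(t,1,2)
  =\sum_{j=1}^{2}\mathcal{N}(j)\,b_{2}(t,1,2)
  +\mathcal{N}_{\mathrm{int}}(1,2)\,b_{1}(t,2)
  +\mathcal{N}_{\mathrm{int}}(2,1)\,b_{1}(t,1),
```

so the hierarchy can be solved successively, starting from $b_{1}(t)$.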
### 4.2 The propagation of the initial chaos Let us consider the relationship between the collective behavior of a quantum system of many particles in the mean-field approximation, which is described by the dual Vlasov hierarchy (70) for the limiting reduced observables, and the Vlasov kinetic equation (58) for the state of a typical particle of the system. Let the initial state satisfy a chaos condition, i.e. at the initial instant there are no correlations between particles (statistically independent particles); namely, in the case of the Maxwell–Boltzmann statistics let it be specified by the following sequence of limiting reduced density operators: $\displaystyle f^{(c)}\equiv\big{(}I,f_{1}^{0}(1),\ldots,\prod_{i=1}^{s}f_{1}^{0}(i),\ldots\big{)}.$ (72) As mentioned above, this assumption regarding the initial state is characteristic of the kinetic description of a gas, because in this case the state is completely determined by a one-particle density operator. For the mean value functional (23) of the additive-type limiting observables (66) and the initial state (72), on a finite time interval the following equality holds: $\displaystyle\hskip-22.76219pt\big{(}b^{(1)}(t),f^{(c)}\big{)}=\sum\limits_{s=0}^{\infty}\frac{1}{s!}\,\mathrm{Tr}_{1,\ldots,s}\,b_{s}^{(1)}(t,1,\ldots,s)\prod\limits_{i=1}^{s}f_{1}^{0}(i)=$ $\displaystyle\hskip 19.91692pt\mathrm{Tr}_{1}\,b_{1}^{0}(1)f_{1}(t,1),$ where the one-particle density operator is represented by the series $\displaystyle\hskip-34.1433ptf_{1}(t,1)=\sum\limits_{n=0}^{\infty}\int\limits_{0}^{t}dt_{1}\ldots\int\limits_{0}^{t_{n-1}}dt_{n}\,\mathrm{Tr}_{2,\ldots,n+1}\prod\limits_{i_{1}=1}^{1}\mathcal{G}^{\ast}_{1}(t-t_{1},i_{1})\mathcal{N}^{\ast}_{\mathrm{int}}(1,2)\prod\limits_{j_{1}=1}^{2}\mathcal{G}^{\ast}_{1}(t_{1}-t_{2},j_{1})\ldots$ (73) 
$\displaystyle\prod\limits_{i_{n}=1}^{n}\mathcal{G}^{\ast}_{1}(t_{n-1}-t_{n},i_{n})\sum\limits_{k_{n}=1}^{n}\mathcal{N}^{\ast}_{\mathrm{int}}(k_{n},n+1)\prod\limits_{j_{n}=1}^{n+1}\mathcal{G}^{\ast}_{1}(t_{n},j_{n})\prod\limits_{i=1}^{n+1}f_{1}^{0}(i).$ The one-particle density operator (73) satisfies the Cauchy problem of the quantum Vlasov kinetic equation (58),(59). Thus, the hierarchy of evolution equations (70) for the limiting reduced observables of additive type and the initial state (72) describes the evolution of quantum many-particle systems in a way equivalent to the Vlasov equation (58). Accordingly, for the mean value functional of the limiting observables of nonadditive type (66) and the initial state (72), on a finite time interval the following equality is valid: $\displaystyle\hskip-19.91692pt\big{(}b^{(k)}(t),f^{(c)}\big{)}=\sum\limits_{s=0}^{\infty}\frac{1}{s!}\,\mathrm{Tr}_{1,\ldots,s}\,b_{s}^{(k)}(t,1,\ldots,s)\prod\limits_{i=1}^{s}f_{1}^{0}(i)=$ (74) $\displaystyle\frac{1}{k!}\mathrm{Tr}_{1,\ldots,k}\,b_{k}^{0}(1,\ldots,k)\prod\limits_{i=1}^{k}f_{1}(t,i),\quad k\geq 2,$ where the one-particle density operator is represented by series (73), i.e. it satisfies the Cauchy problem for the quantum Vlasov kinetic equation (58),(59). Equality (74) describes the process of the propagation of the initial chaos (72) in the mean-field limit by means of solution (73) of the quantum Vlasov kinetic equation (58); namely, for an arbitrary finite time interval the state is represented by the following sequence of limiting reduced operators: $\displaystyle f_{k}(t,1,\ldots,k)=\prod\limits_{i=1}^{k}f_{1}(t,i),\quad k\geq 2,$ (75) that is, if at the initial moment of time there are no correlations in the system of particles, then in this approximation correlations of particle states are not created in the process of evolution. 
### 4.3 Kinetic equations with initial correlations We note that the above approach to the derivation of kinetic equations allows us to formulate kinetic equations in the case of more general initial states, which describe not only gases of quantum particles (72) but also systems of many particles in condensed states. Further, we consider the initial states of quantum systems of many particles which are determined by a one-particle density operator and correlation operators (Maxwell–Boltzmann statistics) [24] $\displaystyle\hskip-34.1433ptf^{(cc)}=\big{(}I,f_{1}^{0}(1),g_{2}^{0}(1,2)\prod_{i=1}^{2}f_{1}^{0}(i),\ldots,g_{n}^{0}(1,\ldots,n)\prod_{i=1}^{n}f_{1}^{0}(i),\ldots\big{)},$ (76) where the correlations of the initial states of the particles are determined by the operators $g_{n}^{0}(1,\ldots,n)\equiv g_{n}^{0}\in\mathfrak{L}^{1}_{0}(\mathcal{H}_{n}),\,n\geq 2$. We emphasize that assumption (76) with respect to the initial state is typical for the kinetic description of systems of many particles in condensed states, which are characterized by correlations, for example, such as in particle flows [6],[65]. 
Then, using the method of derivation of kinetic equations based on the hierarchy of kinetic equations for the observables, for an arbitrary finite time interval we establish that the state is described by the sequence $f(t)=\big{(}I,f_{1}(t),\ldots,f_{n}(t,1,\ldots,n),\ldots\big{)}$ of limiting reduced density operators, where the one-particle density operator is represented by the series $\displaystyle\hskip-34.1433ptf_{1}(t,1)=\sum\limits_{n=0}^{\infty}\int\limits_{0}^{t}dt_{1}\ldots\int\limits_{0}^{t_{n-1}}dt_{n}\,\mathrm{Tr}_{2,\ldots,n+1}\prod\limits_{i_{1}=1}^{1}\mathcal{G}^{\ast}_{1}(t-t_{1},i_{1})\mathcal{N}^{\ast}_{\mathrm{int}}(1,2)\prod\limits_{j_{1}=1}^{2}\mathcal{G}^{\ast}_{1}(t_{1}-t_{2},j_{1})\ldots$ (77) $\displaystyle\prod\limits_{i_{n}=1}^{n}\mathcal{G}^{\ast}_{1}(t_{n-1}-t_{n},i_{n})\sum\limits_{k_{n}=1}^{n}\mathcal{N}^{\ast}_{\mathrm{int}}(k_{n},n+1)\prod\limits_{j_{n}=1}^{n+1}\mathcal{G}^{\ast}_{1}(t_{n},j_{n})g_{n+1}^{0}(1,\ldots,n+1)\prod\limits_{i=1}^{n+1}f_{1}^{0}(i),$ and the limiting density operators $f_{k}(t,1,\ldots,k),\,k\geq 2,$ are determined by the following expressions [41]: $\displaystyle\hskip-34.1433ptf_{k}(t,1,\ldots,k)=\prod_{i_{1}=1}^{k}\mathcal{G}^{\ast}_{1}(t,i_{1})g_{k}^{0}(1,\ldots,k)\prod_{i_{2}=1}^{k}(\mathcal{G}_{1}^{\ast})^{-1}(t,i_{2})\prod\limits_{j=1}^{k}f_{1}(t,j),\quad k\geq 2.$ (78) Indeed, for the mean value functionals of the limiting observables (66) in the case of the initial state (76), on a finite time interval the following equalities are valid: $\displaystyle\hskip-19.91692pt\big{(}b^{(k)}(t),f^{(cc)}\big{)}=\frac{1}{k!}\mathrm{Tr}_{1,\ldots,k}\,b_{k}^{0}(1,\ldots,k)f_{k}(t,1,\ldots,k),\quad k\geq 1,$ where the sequence $f(t)$ is determined by expansions (77) and (78). In [41], a similar result was obtained using a generalized quantum kinetic equation with initial correlations. 
In the case of initial states specified by a sequence of limiting correlation operators (76), the process of propagation of initial correlations is described by the following sequence of correlation operators (37): $\displaystyle\hskip-34.1433ptg_{n}(t,1,\ldots,n)=\prod_{i_{1}=1}^{n}\mathcal{G}_{1}^{\ast}(t,i_{1})\sum\limits_{\mbox{\scriptsize$\begin{array}[]{c}\mathrm{P}:(1,\ldots,n)=\bigcup_{i}X_{i}\end{array}$}}(-1)^{|\mathrm{P}|-1}(|\mathrm{P}|-1)!\,\prod_{X_{i}\subset\mathrm{P}}g_{|X_{i}|}^{0}(X_{i})\times$ $\displaystyle\prod_{i_{2}=1}^{n}(\mathcal{G}_{1}^{\ast})^{-1}(t,i_{2})\prod\limits_{j=1}^{n}f_{1}(t,j),\quad n\geq 2.$ The limiting one-particle density operator (77) is governed by the quantum Vlasov kinetic equation with initial correlations [41]: $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}f_{1}(t,1)=\mathcal{N}^{\ast}(1)f_{1}(t,1)+$ (80) $\displaystyle\mathrm{Tr}_{2}\,\mathcal{N}^{\ast}_{\mathrm{int}}(1,2)\prod_{i_{1}=1}^{2}\mathcal{G}^{\ast}_{1}(t,i_{1})g_{2}^{0}(1,2)\prod_{i_{2}=1}^{2}(\mathcal{G}^{\ast}_{1})^{-1}(t,i_{2})f_{1}(t,1)f_{1}(t,2),$ $\displaystyle\hskip-34.1433ptf_{1}(t)|_{t=0}=f_{1}^{0},$ (81) where notation (10) is used and the inverse group of operators to group (4) is denoted by $(\mathcal{G}^{\ast}_{1})^{-1}(t)$. We note that the kinetic equation (80) is a non-Markovian kinetic equation. For pure states, equation (80) reduces to the Hartree kinetic equation with initial correlations. For the initial states of a system of statistically independent particles, the kinetic equation (80) coincides with the quantum Vlasov equation (58), and the reduced density operators (78) describe the process of propagation of the initial chaos (75). 
We remark also that in the case of arbitrary initial states in the mean-field limit, a sequence of reduced density operators, as well as the sequence (77),(78), is a solution of the Cauchy problem of the Vlasov hierarchy: $\displaystyle\hskip-34.1433pt\frac{\partial}{\partial t}f_{s}(t,1,\ldots,s)=\sum\limits_{i=1}^{s}\mathcal{N}^{\ast}(i)f_{s}(t,1,\ldots,s)+\sum\limits_{i=1}^{s}\mathrm{Tr}_{s+1}\mathcal{N}^{\ast}_{\mathrm{int}}(i,s+1)f_{s+1}(t,1,\ldots,s+1),$ $\displaystyle\hskip-34.1433ptf_{s}(t)\big{|}_{t=0}=f_{s}^{0},\quad s\geq 1.$ Thus, in the case of initial states given by a one-particle density operator and correlation operators (76), the dual Vlasov hierarchy (70) for the additive-type reduced observables describes the evolution of quantum systems of many particles just as the non-Markovian quantum Vlasov kinetic equation with initial correlations (80) does. In the case of reduced observables of nonadditive type, the dual Vlasov hierarchy (70) describes, in the sense of equality (78), the process of propagation of initial correlations in terms of the reduced density operators (78) in an equivalent way. In other words, the alternative method of describing the evolution of states of quantum systems of many particles in the mean-field approximation is based on the non-Markovian Vlasov kinetic equation with initial correlations (80). ## 5 Outlook The possible approaches to the description of the evolution of quantum systems of many particles were considered above, namely, in terms of observables, whose evolution is described by the dual BBGKY hierarchy (26), or in terms of the state, whose evolution is described by the BBGKY hierarchy (31). For systems of a finite number of particles, these hierarchies of evolution equations are equivalent to the Heisenberg equation (1) and the von Neumann equation (8), respectively. 
In particular, possible methods of describing the evolution of the state by means of the hierarchy of fundamental equations (18), which describes the evolution of state correlations, were considered. It was established that the concept of cumulant (22) of groups of operators forms the basis of expansions for solutions of the fundamental evolution equations describing the evolution of quantum systems of many particles, namely: in the case of the groups of operators (4), for the dual BBGKY hierarchy for reduced observables; in the case of the groups of operators (7), for the von Neumann hierarchy (18) for correlation operators, for the BBGKY hierarchy (31) for reduced density operators and for the BBGKY hierarchy of nonlinear equations (38) for reduced correlation operators, respectively, as well as the basis of the kinetic description of systems of infinitely many particles (47),(53). We emphasize that the structure of expansions for correlation operators (45), in which the generating operators are cumulants (21) of the appropriate order of the groups of operators (7), induces the cumulant structure of series expansions for reduced density operators (33), reduced correlation operators (46) and reduced functionals of the state (47). Thus, the dynamics of systems of infinitely many particles is generated by the dynamics of correlations. This article also considered two new approaches to describing the kinetic evolution of quantum systems of many particles. One of them consists in describing the kinetic evolution of a quantum system of particles by means of the reduced observables in the mean-field scaling limit (70). Another approach is based on the non-Markovian generalization of quantum kinetic equations (53). 
One of the advantages of the considered approaches is connected with the possibility of constructing kinetic equations in scaling approximations taking into account correlations of particles at the initial instant, which characterize the condensed states of systems of many particles. Another advantage is related to the problem of the rigorous derivation of kinetic equations of the non-Markovian type based on the dynamics of quantum many-particle systems, which allows us to describe the effects of memory in nanostructures. We remark that the approach to the derivation of the quantum Vlasov kinetic equation from the dynamics of many-particle systems, which is based on the generalized quantum kinetic equation (53), also allows the construction of higher-order corrections to the mean-field approximation for quantum kinetic equations. The above results can be extended to systems of many bosons or fermions [34],[39]. ## References * [1] R. Balescu, _Irreversible processes in ionized gases_ , Phys. of Fluids, 3, (1), (1960), 52–63. * [2] D. Benedetto, F. Castella, R. Esposito and M. Pulvirenti, _A short review on the derivation of the nonlinear quantum Boltzmann equations_. Commun. Math. Sci., 5, (2007), 55–71. * [3] N. Benedikter, M. Porta and B. Schlein, _Effective Evolution Equations from Quantum Dynamics_ , SpringerBriefs in Mathematical Physics, 2016. * [4] Ch. Boccato, S. Cenatiempo and B. Schlein, _Quantum many-body fluctuations around nonlinear Schrödinger dynamics_. Ann. Henri Poincaré, 18, (1), 2017, 113–191. * [5] M. M. Bogolyubov, _On the theory of superfluidity_. Proc. Inst. Mathematics of Acad. Sci. USSR (in Ukrainian), No. 9, (1947), 89–103. * [6] M. M. Bogolyubov, _Lectures on Quantum Statistics. Problems of Statistical Mechanics of Quantum Systems_. K.: Rad. Shkola, 1949 (in Ukrainian). * [7] M. M. Bogolyubov and K. P. Gurov, _Kinetic equations in quantum mechanics_. JETP (in Russian), 17, (7), (1947), 614–628. * [8] M. M. 
Bogolyubov, _Problems of the dynamical theory in statistical physics_ , M.: Gostechizdat, 1946 (in Russian).
* [9] L. E. Boltzmann, _Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen_. Sitzungsber. Akad. Wiss., 66, (1872), 275–370.
* [10] M. Born and H. S. Green, _A general kinetic theory of liquids. I. The molecular distribution functions_. Proceedings of the Royal Soc. of London. A, 188, (1012), (1946), 10–18.
* [11] T. Carleman, _Problèmes mathématiques dans la théorie cinétique des gaz_. Publ. Sci. Inst. Mittag-Leffler, 2 Almqvist & Wiksells Boktryckeri Ab, Uppsala, 1957.
* [12] C. Cercignani, V.I. Gerasimenko and D.Ya. Petrina, _Many-Particle Dynamics and Kinetic Equations_. the Netherlands: Springer, 2012.
* [13] C. Cercignani, R. Illner and M. Pulvirenti, _The mathematical theory of dilute gases_. Springer-Verlag, New York, 1994.
* [14] D. Enskog, _Kinetische Theorie der Wärmeleitung, Reibung und Selbstdiffusion in gewissen verdichteten Gasen und Flüssigkeiten_. Kungl. Sv. Vetenskapsakademiens Handl., 63, (4), (1922), 3–44.
* [15] L. Erdős, B. Schlein, and H.-T. Yau, _Derivation of the Gross-Pitaevskii equation for the dynamics of Bose-Einstein condensate_. Ann. of Math., 172, (1), (2010), 291–370.
* [16] V. Fock, _Näherungsmethode zur Lösung des quantenmechanischen Mehrkörperproblems_. Zeitschrift für Physik, 61, (1930), 126–148.
* [17] A. D. Fokker, _Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld_ , Annalen der Physik, 348, (43), (1914), 810–820.
* [18] I. Gallagher, L. Saint-Raymond and B. Texier, _From Newton to Boltzmann: hard spheres and short-range potentials_ , EMS, Zürich, 2013.
* [19] V.I. Gerasimenko, _Heisenberg picture of quantum kinetic evolution in mean-field limit_. Kinet. Relat. Models, 4, (1), (2011), 385–399.
* [20] V. Gerasimenko and I. Gapyak, _Low-density asymptotic behavior of observables of hard sphere fluids_. Advances in Math. Phys., 2018, (2018), 1–11.
* [21] V. Gerasimenko, V. Shtyk and A.
Zagorodny, _Hydrodynamic equations for microscopic phase densities_ , Cent. Eur. J. Phys., 9, (1), (2011), 71–77.
* [22] V. I. Gerasimenko, _Approaches to derivation of quantum kinetic equations_. Ukrainian J. Phys., 54, (8-9), (2009), 834–846.
* [23] V. I. Gerasimenko, _Hierarchies of quantum evolution equations and dynamics of many-particle correlations_. Int. J. Evol. Equ., 7, (2), (2012), 109–163.
* [24] V. I. Gerasimenko, _New approach to derivation of quantum kinetic equations with initial correlations_. Carpathian Math. Publ., 7, (1), (2015), 38–48.
* [25] V. I. Gerasimenko, _Evolution of correlation operators of large particle quantum systems_. Methods Funct. Anal. Topology, 23, (2), (2017), 123–132.
* [26] V. I. Gerasimenko, _Description of evolution of states in terms of operators originating by density matrix_ , In: Understanding Density Matrices, N.Y.: Nova Science Publ., Inc., 2019, 229–250.
* [27] V. I. Gerasimenko, _Processes of Creation and Propagation of Correlations in Large Quantum Particle System_. In: Panorama of Contemporary Quantum Mechanics - Concepts and Applications, IntechOpen, 2019.
* [28] V. I. Gerasimenko and G. Borgioli, _Initial-value problem of the quantum dual BBGKY hierarchy_. Il Nuovo Cimento C, 33, (1), (2010), 71–78.
* [29] V. I. Gerasimenko and Yu. Yu. Fedchun, _Nonperturbative solution expansions of hierarchies of evolution equations in functional derivatives_. Proceedings Inst. Math. NASU, 9, (2), (2012), 347–375.
* [30] V. I. Gerasimenko and I. V. Gapyak, _Hard sphere dynamics and the Enskog equation_ , Kinetic & Related Models, 5, (3), (2012), 459–484.
* [31] V. I. Gerasimenko and I. V. Gapyak, _Boltzmann–Grad asymptotic behavior of collisional dynamics_. Reviews in Math. Phys., 33, (2), (2021), 2130001.
* [32] V. I. Gerasimenko and D. Ya. Petrina, _Evolution of states of infinite systems in classical statistical mechanics_. In: Sov. Sci. Rev., sect. C: Math. Phys. Rev., v.5, N.Y.: Harwood Acad. Publ., 1985, 1–52.
* [33] V.
I. Gerasimenko and D. Ya. Petrina, _The generalized kinetic equation generated by the BBGKY hierarchy_. Ukr. J. Phys., 43, (6/7), (1998), 697–702.
* [34] V. I. Gerasimenko and D. O. Polishchuk, _Dynamics of correlations of Bose and Fermi particles_. Math. Meth. Appl. Sci., 34, (1), (2011), 76–93.
* [35] V. I. Gerasimenko, T. V. Ryabukha and M. O. Stashenko, _On the structure of expansions for the BBGKY hierarchy solutions_. J. Phys. A: Math. and General, 37, (42), (2004), 9861–9872.
* [36] V.I. Gerasimenko and V. O. Shtyk, _Initial-value problem of the Bogolyubov hierarchy for quantum systems of particles_. Ukrain. Math. J., 58, (9), (2006), 1175–1191.
* [37] V. I. Gerasimenko and V. O. Shtyk, _Evolution of correlations of quantum many-particle systems_. J. Stat. Mech.: Theory and Experiment, 2008, (03), (2008), P03007.
* [38] V. I. Gerasimenko and Zh. A. Tsvir, _A description of the evolution of quantum states by means of the kinetic equation_ , J. Phys. A.: Math. and Theor., 43, (48), (2010), 485203.
* [39] V. I. Gerasimenko and Zh. A. Tsvir, _The generalized quantum kinetic equation for interacting particles with quantum statistics_. Math. Bulletin Sh. Sci. Soc., 7, (2010), 351–367.
* [40] V. I. Gerasimenko and Zh. A. Tsvir, _Mean field asymptotics of generalized quantum kinetic equation_ , Rep. Math. Phys., 70, (2), (2012), 135–147.
* [41] V. I. Gerasimenko and Zh. A. Tsvir, _On quantum kinetic equations of many-particle systems in condensed states_. Physica A: Stat. Mech. and its Appl., 391, (24), (2012), 6362–6366.
* [42] F. Golse, _On the dynamics of large particle systems in the mean field limit_ , In: Macroscopic and large scale phenomena: coarse graining, mean field limits and ergodicity, Lect. Notes Appl. Math. Mech., Springer, 2016, 3, 1–144.
* [43] H. Grad, _Principles of the kinetic theory of gases_. In: Handbuch der Physik (herausgegeben von S. Flügge), Bd. 12, Thermodynamik der Gase, Springer-Verlag, Berlin-Göttingen-Heidelberg, 1958, 205–294.
* [44] E. P. Gross, _Structure of a quantized vortex in boson systems_. Nuovo Cim., 20, (1961), 454–477.
* [45] R. L. Guernsey, _Kinetic equation for a completely ionized gas_. Phys. of Fluids, 5, (3), (1962), 322–328.
* [46] D. R. Hartree, _The Wave Mechanics of an Atom with a non-Coulomb Central Field. Part III. Term Values and Intensities in Series in Optical Spectra_. Math. Proc. of the Cambridge Phil. Soc., 24, (3), (1928), 426–437.
* [47] D. Hilbert, _Principles of the kinetic theory of gases_. In: Sur les problèmes futurs des mathématiques, Compte-Rendu du 2ème Congrès International de Mathématiques, tenu à Paris en 1900, Paris: Gauthier-Villars, 1902, 58–114.
* [48] D. Hilbert, _Begründung der kinetischen Gastheorie_. Math. Ann., 72, (1912), 562–577.
* [49] J. G. Kirkwood, _The statistical mechanical theory of transport processes II. Transport in gases_. J. Chem. Phys., 15, (1), (1947), 72–76.
* [50] L. D. Landau, _The Transport Equation in the Case of Coulomb Interactions_. Phys. Z. Sowjet., 10, (1936), 154–168.
* [51] O. E. Lanford, III, _Time evolution of large classical systems_. In: Dynamical systems, theory and applications, Lecture Notes in Phys., 38, 1975, 1–111.
* [52] A. Lenard, _On Bogoliubov’s kinetic equation for a spatially homogeneous plasma_. Ann. Phys. (USA), 10, (3), (1960), 390–400.
* [53] M. A. Leontovich, _Basic equations of the kinetic theory of gases from the point of view of random processes_. J. Exp. Theor. Phys., 5, (1935), 211–214.
* [54] J. C. Maxwell, _Illustrations of the Dynamical Theory of Gases_. Phil. Trans. of the Royal Soc. of London, 19, 20, (1860), 19–32; 21–37.
* [55] J. C. Maxwell, _On the dynamical theory of gases_. Phil. Trans. of the Royal Soc. of London, 157, (1867), 49–88.
* [56] L. W. Nordheim, _Transport Phenomena in Einstein-Bose and Fermi-Dirac Gases. I_ , R. Soc. Lond. A, 119, (1928), 689–698.
* [57] D. Ya. Petrina, _On solutions of the Bogolyubov kinetic equations. Quantum statistics_. Theor. Math.
Phys., 13, (3), (1972), 391–405.
* [58] D. Ya. Petrina, _Mathematical Foundations of Quantum Statistical Mechanics. Continuous Systems_. Kluwer, 1995.
* [59] D. Ya. Petrina and V. I. Gerasimenko, _A mathematical description of the evolution of the state of infinite systems of classical statistical mechanics_. Russian Math. Surveys, 38, (5), (1983), 1–61.
* [60] D. Ya. Petrina and V. I. Gerasimenko, _Mathematical problems of statistical mechanics of a system of elastic balls_. Russian Math. Surveys, 45, (3), (1990), 153–211.
* [61] F. Pezzotti and M. Pulvirenti, _Mean-field limit and semiclassical expansion of a quantum particle system_. Ann. Henri Poincaré, 10, (1), (2009), 145–187.
* [62] L. P. Pitaevskii, _Vortex Lines in an Imperfect Bose Gas_. J. Exp. Theor. Phys., 13, (2), (1961), 451–454.
* [63] M. Planck, _Über einen Satz der statistischen Dynamik und seine Erweiterung in der Quantentheorie_. Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin, 24, (1917), 324–341.
* [64] H. Poincaré, _The Milky Way and the theory of gases_ , Popular Astronomy, 14, (1906), 475–488.
* [65] L. Saint-Raymond, _Kinetic models for superfluids: a review of mathematical results_ , Comptes Rendus Physique, 5, (1), (2004), 65–75.
* [66] H. Spohn, _Large Scale Dynamics of Interacting Particles_. Springer, Berlin, 1991.
* [67] E. A. Uehling and G. E. Uhlenbeck, _Transport Phenomena in Einstein–Bose and Fermi–Dirac Gases. I_. Phys. Rev., 43, (7), (1933), 552–561.
* [68] A. A. Vlasov, _On Vibration Properties of Electron Gas_. J. Exp. Theor. Phys., 8, (3), (1938), 291–318.
* [69] A. A. Vlasov, _Many-particle theory and its application to plasma_. N. Y., Gordon and Breach, 1961.
* [70] M. von Smoluchowski, _Zur kinetischen Theorie der Brownschen Molekularbewegung und der Suspensionen_. Annalen der Physik, 326, (14), (1906), 756–780.
* [71] J. Yvon, _La théorie statistique des fluides et l’équation d’état_. Hermann, no 203, 1935.
# DIALOGUE STRATEGY ADAPTATION TO NEW ACTION SETS USING MULTI-DIMENSIONAL MODELLING

###### Abstract

A major bottleneck for building statistical spoken dialogue systems for new domains and applications is the need for large amounts of training data. To address this problem, we adopt the multi-dimensional approach to dialogue management and evaluate its potential for transfer learning. Specifically, we exploit pre-trained task-independent policies to speed up training for an extended task-specific action set, in which the single summary action for requesting a slot is replaced by multiple slot-specific request actions. Policy optimisation and evaluation experiments using an agenda-based user simulator show that with limited training data, much better performance levels can be achieved when using the proposed multi-dimensional adaptation method. We confirm this improvement in a crowd-sourced human user evaluation of our spoken dialogue system, comparing partially trained policies. The multi-dimensional system (with adaptation on limited training data in the target scenario) outperforms the one-dimensional baseline (without adaptation on the same amount of training data) by 7% in perceived success rate.

Index Terms— dialogue systems, policy optimisation, reinforcement learning, transfer learning

## 1 Introduction

One of the main challenges in spoken dialogue system development is their scalability to new domains and applications. A statistical spoken dialogue system can be built efficiently for a large slot-filling domain when sufficient annotated data in that domain is available. However, as domains are expanded with new slots, and new applications emerge that require new, task-specific system actions, in-domain annotated datasets are typically hard to come by. Therefore, numerous efforts have been made to use transfer learning techniques [1, 2] to efficiently develop dialogue systems for new domains with limited or no data in the target domain.
Most of the efforts in statistical dialogue management and action selection in particular focus on adaptation to newly introduced slots and values, with the underlying task unchanged [3, 4, 5]. The multi-dimensional approach to dialogue modelling [6] offers the potential to exploit its principled separation of domain- and application-independent aspects of dialogue to adapt to new domains as well as new applications. In [7], a multi-dimensional statistical dialogue manager was presented and it was demonstrated that policy optimisation for a target domain could be improved by re-using the policies for the application-independent dimensions (Social Obligations Management and Auto-feedback; see Section 2) that were pre-trained in a source domain. However, the addressed use-case was limited to adapting across very similar domains, where the set of task-specific actions was the same and only the set of slots and values changed. In this paper, we adopt the multi-dimensional approach to dialogue management [8, 9, 7], but rather than adapting to new slots and values, we focus on adapting to a different policy action set, which is typically required when developing a dialogue system for a new application. In addition, we present two extensions of the model: 1) whereas [7] restricted their model to allow only single dialogue acts being generated (despite the multi-agent design), we allow combinations of auto-feedback and task acts that support implicit confirmation, and 2) the agent that evaluates the generated dialogue act candidates from the different dimensions (see Section 2) is trained jointly with the other agents, rather than hand-coded. Using the improved multi-dimensional design, we present policy optimisation and evaluation experiments using our simulated user and error model, demonstrating significant improvements in performance on limited training data when using the proposed adaptation method (Section 3). 
Furthermore, a human user evaluation was carried out to confirm this result in more realistic conditions; the experimental design, spoken dialogue system implementation, and evaluation results are described in Section 4. Section 5 contains a more detailed discussion of related work in the area of transfer learning for dialogue management. The paper is wrapped up in Section 6 with conclusions and directions for future work.

Task | Auto-Feedback | SOM | All | Example utterance
---|---|---|---|---
offer | | | offer | How about the Rice Boat?
offer | impl-confirm | | offer+impl-confirm | The Rice Boat is a nice Indian restaurant
answer | | | answer | The address of the Rice Boat is …
request | | | request | What price range did you have in mind?
request | impl-confirm | | req+impl-confirm | Okay, Indian food. What price range did you have in mind?
| expl-confirm | | expl-confirm | You want Indian food, is that right?
| auto_negative | | auto_negative | I did not quite catch that, could you please rephrase?
| | accept_thanking | accept_thanking | You’re welcome
| | return_goodbye | return_goodbye | Have a nice day
none | none | none | |

Table 1: The dialogue act agent action sets. Note that impl-confirm is essentially an inform act that states, via a slot-value pair, a preference the system believes the user has, whereas expl-confirm is essentially a propositional question asking the user whether they have a specific preference, expressed by a slot-value pair.

## 2 Multi-dimensional dialogue management

In conventional reinforcement learning-based statistical dialogue systems [10], the dialogue policy selects an action from a single set of possible actions in each turn. In contrast to such ‘one-dimensional’ systems, multi-dimensional dialogue systems employ multiple dialogue act agents, each dedicated to a different aspect of the dialogue process, using a policy that selects actions from its own specialised action set.
We propose and evaluate a model with three dimensions and corresponding agents: Auto-feedback, Social Obligation Management (SOM), and Task. The Auto-feedback agent has actions for giving feedback to the user about the processing of their utterances, for example indicating non-understanding when the speech recogniser does not return any results (“I did not quite get that, could you please repeat?”) or providing articulate feedback in the form of an explicit/implicit confirmation when the system is unsure about the user’s input (e.g., “Expensive, you said?”). The SOM agent deals with greeting, thanking, apologising, and other social actions, and the Task agent focuses on the underlying task or activity (information navigation, tutoring, negotiation, etcetera). Additional agents to support other dimensions could be introduced as well (e.g. for turn-taking and time-management [11]), but the three agents currently included are considered to be the minimum requirement for a task-oriented multi-dimensional dialogue system. An additional Evaluation agent is used for determining which of the generated dialogue act candidates are forwarded as final dialogue acts to the natural language generation component [12]. Figure 1 gives an outline of the multi-agent action selection component.

Fig. 1: Multi-agent action selection component, showing the Task, Auto-feedback, and SOM agents generating candidate dialogue acts, of which two are selected by the Evaluation agent.

Besides reflecting the multi-dimensional nature of dialogue [6], this design opens up opportunities for efficient adaptation of dialogue managers to new tasks and domains. The Auto-feedback and SOM dimensions are considered to be domain and task independent, and therefore policies for these dimensions may be transferred, leaving only the Task policy to be trained from scratch in the target domain/application.
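The generate-then-evaluate structure of this component can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hand-written agent heuristics, state fields, and helper names are hypothetical stand-ins for the trained policies.

```python
# Sketch of multi-dimensional action selection: each dimension proposes one
# candidate dialogue act, and an evaluation step decides which (combination
# of) candidates is emitted. All heuristics below are illustrative only.

def task_policy(state):
    return "request" if state.get("missing_slots") else "offer"

def feedback_policy(state):
    if state.get("processing_problem"):
        return "auto_negative"
    return "impl-confirm" if state.get("unconfirmed_slots") else "none"

def som_policy(state):
    return "accept_thanking" if state.get("user_thanked") else "none"

def propose_candidates(state):
    """Each dimension-specific agent proposes one candidate act."""
    return {
        "task": task_policy(state),
        "auto_feedback": feedback_policy(state),
        "som": som_policy(state),
    }

def evaluate(candidates):
    """Toy Evaluation agent: impl-confirm may only be combined with a task
    act; otherwise a single act is emitted (priority: SOM > feedback > task)."""
    if candidates["som"] != "none":
        return [candidates["som"]]
    if candidates["auto_feedback"] == "auto_negative":
        return ["auto_negative"]
    acts = [candidates["task"]]
    if candidates["auto_feedback"] == "impl-confirm":
        acts.insert(0, "impl-confirm")
    return acts

state = {"missing_slots": ["area"], "unconfirmed_slots": ["food"]}
print(evaluate(propose_candidates(state)))  # ['impl-confirm', 'request']
```

In the paper the per-dimension policies and the Evaluation agent are all learned; only the overall control flow is captured here.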
We introduce a new design of the multi-agent action selection model, in which the Evaluation agent is designed to allow two possible combinations of two dialogue acts. Both of these combinations include an implicit confirmation from the Auto-feedback dimension, combined with either an offer (e.g., “Prezzo is a popular Italian restaurant.”), or a request (e.g., “You are looking for an Italian restaurant in which area?”) from the Task dimension. Table 1 lists all allowed action combinations, organised along the three supported dimensions, as well as in a single action set for a one-dimensional baseline system. In future versions, additional combinations can be considered, for example negative feedback combined with an apology (e.g., “I’m sorry, I did not quite get that”).

## 3 Policy optimisation and adaptation experiments

The potential for transfer learning in a multi-dimensional dialogue manager has been demonstrated previously for the use-case of adapting from a hotel search application to a restaurant search application [7]. Here, we introduce a use-case where we stay within the restaurant domain, but adapt to a new action set. Specifically, the action set of the Task agent is extended by replacing the summary action request for requesting a slot by separate request actions for each slot, e.g. request-pricerange. Whereas in the source scenario the policy may select the request summary action, after which heuristics determine which slot is requested, in the target scenario the policy may select a request action for a particular slot, for example the area. Therefore, in the target scenario with extended action set, the system learns automatically which slot to request, rather than relying on heuristics as in the source scenario. In the DSTC-2 restaurant search domain [13], this means that the request summary action is replaced by the three slot-specific request actions, corresponding to the slots food, area, and pricerange.
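The action-set extension can be sketched as follows; the helper name `extend_action_set` and the list encoding are illustrative, but the slot names and action counts follow the text.

```python
# Sketch of the action-set extension: the summary `request` action of the
# source scenario is replaced by one request action per requestable slot
# in the target scenario (slot names follow the DSTC-2 restaurant domain).

SLOTS = ["food", "area", "pricerange"]

SOURCE_TASK_ACTIONS = ["offer", "request", "answer", "none"]

def extend_action_set(actions, slots):
    """Replace the summary 'request' action by slot-specific variants."""
    extended = []
    for a in actions:
        if a == "request":
            extended.extend(f"request-{s}" for s in slots)
        else:
            extended.append(a)
    return extended

TARGET_TASK_ACTIONS = extend_action_set(SOURCE_TASK_ACTIONS, SLOTS)
print(TARGET_TASK_ACTIONS)
# ['offer', 'request-food', 'request-area', 'request-pricerange', 'answer', 'none']
print(len(SOURCE_TASK_ACTIONS), "->", len(TARGET_TASK_ACTIONS))  # 4 -> 6
```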
Going back to the original action sets outlined in Table 1, the action set of the Task agent therefore grows from 4 (offer, request, answer, none) to 6 actions (offer, request-food, request-area, request-pricerange, answer, none), whereas the action set of the one-dimensional baseline (column ‘All’) grows from 9 to 13 actions, since the extension applies to both request and req+impl-confirm. Hence, we first train a multi-dimensional system for the source scenario, in which the Task agent uses a summary action for requesting slots (mdim-src). This yields 4 trained policies, corresponding to the 3 conversational dimensions plus the evaluation agent. Out of these, the policies for Auto-feedback and SOM are subsequently re-used and adapted when we train a system for the target scenario, in which the Task agent uses the extended action set (mdim-ada). As baselines, we also train a multi-dimensional system for the target scenario without re-using the pre-trained Auto-feedback and SOM policies (multi-dim) and a one-dimensional system for the target scenario in which all action combinations from the multi-dimensional system are taken as single actions into one action set (one-dim).

### 3.1 Training details

All action selection models are trained in online interaction with an agenda-based user simulator [14]. In each case, all 4 policies are trained simultaneously using Monte Carlo Control reinforcement learning with linear value function approximation [15]. Each policy selects actions from its own action set, based on the approximated values given the current state. The values predict the long-term cumulative reward when taking an action in a given state and following the policy in subsequent turns, where the state is represented by a set of features extracted from the full dialogue state. During training, the policies use Boltzmann exploration, i.e., actions are sampled from a softmax distribution applied to the estimated values.
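A minimal sketch of Boltzmann exploration over a linear value function, as used during training (the weight vectors and state features are toy values, not learned parameters):

```python
import math
import random

# Sketch of Boltzmann exploration over a linear value function:
# Q(s, a) = w_a . phi(s), and actions are sampled from a softmax over the
# estimated values, scaled by a temperature.

def q_values(features, weights):
    """Linear value approximation: one weight vector per action."""
    return {a: sum(w * f for w, f in zip(wa, features))
            for a, wa in weights.items()}

def boltzmann_sample(values, temperature, rng=random):
    """Sample an action from softmax(values / temperature)."""
    mx = max(values.values())  # subtract the max for numerical stability
    exps = {a: math.exp((v - mx) / temperature) for a, v in values.items()}
    z = sum(exps.values())
    r, acc = rng.random() * z, 0.0
    for a, e in exps.items():
        acc += e
        if r <= acc:
            return a
    return a  # fallback for floating-point rounding

weights = {"offer": [1.0, 0.2], "request-area": [0.1, 2.0]}  # toy weights
phi = [0.5, 1.0]                                             # toy features
vals = q_values(phi, weights)  # {'offer': 0.7, 'request-area': 2.05}
# Low temperature -> near-greedy; high temperature -> near-uniform.
print(boltzmann_sample(vals, temperature=0.05))
```

Lowering the temperature over training, as described next in the text, gradually shifts this sampling towards the greedy action.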
The temperature hyper-parameter of the softmax is decayed linearly, gradually reducing the level of exploration until the policy only selects actions with the highest estimated value. The weights of the linear value function are updated after every dialogue/episode, based on a shared reward signal. The policies coordinate their actions only indirectly through the shared rewards, i.e., each policy operates independently without any direct communication with other policies. The rewards are calculated at each turn, combining rewards obtained from the simulated user with internal rewards. The user gives a reward of +100 upon task completion: in the restaurant search domain, this is when the system has recommended a restaurant matching the user’s preferences and has provided all requested information about this restaurant. Such ‘user goals’ are randomly initialised from the domain ontology at the start of each dialogue, and fed to the simulated user. In addition, the user issues a penalty when the system fails to respond to a thanking action or inserts a social act when it is not called for (-5 for each occurrence). This is to force the system to learn basic reactive social behaviour, though we are aware that this might be experienced as repetitive and unnatural. In future work, we will consider learning more sophisticated social patterns. Internally, a penalty of -1 is applied for each turn (to encourage shorter dialogues) and a penalty of -25 when a ‘processing problem’ is encountered and the system does not signal this to the user with a feedback act. A processing problem is recorded in the dialogue state when speech recognition or natural language understanding fails, i.e., returns no results. During training, such processing problems are simulated randomly in 5% of the user turns, discarding the original simulated user act. The Evaluation agent from [7] was implemented via a set of rules, based on definitions from the dialogue act annotation standard [11]. 
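The per-turn reward computation described above can be sketched as follows; the reward constants come from the text, while the state-field names are illustrative:

```python
# Sketch of the per-turn reward: +100 from the simulated user on task
# completion, -5 for mishandled social acts, -1 per turn, and -25 for a
# processing problem not signalled to the user. Field names are illustrative.

TASK_COMPLETED = 100
SOCIAL_PENALTY = -5
TURN_PENALTY = -1
UNSIGNALLED_PROBLEM = -25

def turn_reward(turn):
    reward = TURN_PENALTY  # encourages shorter dialogues
    if turn.get("task_completed"):
        reward += TASK_COMPLETED
    if turn.get("ignored_thanking") or turn.get("spurious_social_act"):
        reward += SOCIAL_PENALTY
    if turn.get("processing_problem") and not turn.get("feedback_signalled"):
        reward += UNSIGNALLED_PROBLEM
    return reward

print(turn_reward({}))                            # -1: ordinary turn
print(turn_reward({"processing_problem": True}))  # -26: unsignalled failure
print(turn_reward({"task_completed": True}))      # 99: completion turn
```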
Here, we implement the Evaluation agent as another reinforcement learning agent, which takes as input the candidate dialogue act for each dimension and selects which dialogue act combination will be passed on to the natural language generation component. Given that there are 3 supported dimensions, the actions correspond to the 8 possible combinations of dimensions that can be selected. Using an action masking mechanism, we ensure that the Evaluation agent only allows single dialogue acts or a combination of impl-confirm (Auto-feedback dimension) with offer or with any of the three request actions (Task dimension).

### 3.2 Results

The policy optimisation results in simulation for the target scenario are shown in Figure 2. The learning curves (here shown in terms of success rates) quite clearly show that the performance levels are much higher in the early stages of training when using our proposed multi-dimensional adaptation method (mdim-ada, in red) than when training a multi-dimensional action selection model from scratch (multi-dim, in green). In the first 20k training dialogues we also observe much higher success rates compared to the one-dimensional baseline (one-dim, in blue).

Fig. 2: Learning curves for the target scenario in terms of success rate for the three training methods. Note that the results are averaged over 10 training runs in each setting; in each training run, success rates over a sliding window of 100 dialogues are recorded.

After training all policies, we ran evaluations of the fully trained policies over 3000 simulated dialogues each, at 25% semantic error rate. The first three rows in Table 2 correspond to the systems with the extended action set (the target scenario) that were also shown in Figure 2. These systems all score similar average rewards, although the adapted system (mdim-ada) gets a slightly higher success rate, but with slightly longer dialogues.
Since the system with the original action set (the source scenario) covers the same range of full system dialogue acts (through its summary actions and heuristics for mapping them to complete dialogue acts), it has the same functionality as the extended system, and can therefore be included in the evaluation for comparison. As this system (mdim-src) achieves similar scores, automatically learning request actions for each slot did not result in improved performance levels in this particular scenario. Whether improvements can be achieved in this way, however, depends on the nature of the domain ontology and database, and on the behaviour of the users.

System | ASet | ADA | NLU | Succ | Len | Rew
---|---|---|---|---|---|---
one-dim | ext | no | sim | 96.0 | 9.28 | 86.06
multi-dim | ext | no | sim | 96.7 | 9.49 | 85.49
mdim-ada | ext | yes | sim | 98.3 | 9.71 | 87.92
mdim-src | sum | no | sim | 97.2 | 9.57 | 87.15
one-dim-asu | ext | no | asu | 99.5 | 7.69 | 91.83
mdim-ada-asu | ext | yes | asu | 88.9 | 7.51 | 81.41

Table 2: Evaluation of fully trained policies over 3000 simulated dialogues. ASet: Action Set (summary/extended), ADA: adapted (yes/no), NLU: Natural Language Understanding (asu/simulated), Succ: Success Rate, Len: Average Length, Rew: Average Reward.

System | Succ (no-expl) | Succ (expl) | Succ (expl+asu) | Len (no-expl) | Len (expl) | Len (expl+asu) | Rew (no-expl) | Rew (expl) | Rew (expl+asu) | Num. dial’s
---|---|---|---|---|---|---|---|---|---|---
one-dim | 96.0 | 96.0 | 99.5 | 9.28 | 9.28 | 7.69 | 86.06 | 86.06 | 91.83 | 100k
mdim-ada | 98.3 | 98.3 | 88.9 | 9.71 | 9.71 | 7.51 | 87.92 | 87.92 | 81.41 | 100k
one-dim | 59.1 | 61.7 | 64.9 | 8.02 | 11.64 | 10.15 | 49.05 | 45.78 | 52.13 | 18k
mdim-ada | 73.5 | 73.8 | 73.3 | 8.95 | 10.01 | 7.71 | 62.18 | 60.77 | 64.25 | 18k
one-dim | 48.9 | 57.5 | 60.3 | 7.55 | 11.98 | 10.63 | 37.31 | 38.05 | 45.82 | 17k
mdim-ada | 77.1 | 79.1 | 78.5 | 9.49 | 10.70 | 7.75 | 65.55 | 64.99 | 68.72 | 17k
one-dim | 45.3 | 48.5 | 54.1 | 7.30 | 11.60 | 10.16 | 34.07 | 30.04 | 39.77 | 16k
mdim-ada | 83.5 | 83.5 | 83.2 | 9.32 | 10.36 | 8.13 | 73.04 | 71.18 | 74.21 | 16k
one-dim | 30.6 | 33.4 | 37.5 | 5.61 | 11.94 | 10.70 | 19.86 | 13.95 | 21.48 | 15k
mdim-ada | 64.5 | 64.3 | 62.3 | 9.26 | 10.14 | 8.08 | 52.67 | 50.54 | 52.44 | 15k

Table 3: Evaluation of partially trained policies over 3000 simulated dialogues, reporting success rate (Succ), average dialogue length (Len), and average reward (Rew) in three settings (no-expl, expl, expl+asu). Num. dial’s is the number of training dialogues.

The results in the first 4 rows of Table 2 have been obtained using a rule-based state tracker that takes (simulated) user dialogue act hypotheses as input, as is the case during policy optimisation. For the human user evaluation, we use the Action State Update (ASU) dialogue state tracker [16] that takes user utterances as input (see also Section 4.1). We have therefore also evaluated the one-dim and mdim-ada policies with this tracker in the loop, and included a hybrid retrieval/template based natural language generation component that maps simulated user dialogue acts to natural language utterances that can be fed to the ASU component (training the policies together with the ASU tracker is currently not practical, because its BERT model is too slow for running the required volume of training dialogues).
For the one-dimensional system, this results in higher scores (one-dim-asu), whereas for the adapted multi-dimensional system, we get considerably lower scores (mdim-ada-asu). In the latter case, we note however that this is mainly due to one of the 10 policy combinations performing very badly: the policy repeatedly responds with negative feedback, after which the simulated user loses its patience and hangs up, resulting in 0% success rate and -5.02 average reward. In contrast, the other policies all achieve 98% success rate. In order to decide which partially trained policies to include in the human user evaluation, we have run simulated evaluations for the policies that were obtained after 15k, 16k, 17k, and 18k training dialogues. This pre-selection is based on the learning curves in Figure 2 and in particular the area where the success rate of the one-dimensional system is starting to build up, but is still strongly outperformed by the adapted multi-dimensional system. The results in Table 3 report success rates, average dialogue lengths (in terms of user turns), and average rewards over 3000 simulated dialogues at 25% error rate. Each evaluation is carried out in three different settings: 1) using the rule-based state tracker and no policy exploration, i.e. the policies always select the action with the highest estimated value (no-expl); 2) using the rule-based state tracker and policy exploration at the level determined by the temperature setting at the corresponding stage of training (expl); and 3) using the ASU state tracker and policy exploration (expl+asu). At a relatively early training stage, the policy is still exploring the state-action space, and has not experienced many successful dialogues yet. Therefore, the reported performance levels can differ between the no-expl and expl settings. Overall, the results in Table 3 show lower average rewards in the expl setting, but higher success rates.
Furthermore, the performance levels are much higher when evaluating with the ASU tracker in the loop. The likely reason for this is that no semantic error model was used in this setting, i.e., the true user dialogue acts from the simulator were used to generate the input user utterances for the tracker. Hence, the language understanding performance is probably much better in this setting, and therefore the action selection performance as well. Based on these results, we selected the one- and multi-dimensional models that were trained on 17k dialogues for the human evaluation, as well as the fully-trained multi-dimensional system. The three selected variants are highlighted in grey in Table 3 and also pointed out in Figure 2.

System | Num. Dialogues | Average Length | Q1: Found Venue [%] | Q2: DialSuccess [%] | Q3: Understand [1-6] | Q4: Recognise [1-6] | Q5: SysResponse [1-6] | Q6: Naturalness [1-6]
---|---|---|---|---|---|---|---|---
one-dim-17k | 203 | 7.69 (4.34) | 65.52 (3.34) | 63.05 (3.40) | 3.34 (1.68) | 3.46 (1.66) | 3.21 (1.72) | 3.01 (1.77)
mdim-ada-17k | 201 | 6.52 (3.24) | 73.00 (3.15) | 70.00 (3.25) | 3.43 (1.58) | 3.56 (1.55) | 3.48 (1.59) | 3.35 (1.67)
mdim-ada-100k | 199 | 5.88 (2.38) | 78.89 (2.90) | 79.90 (2.85) | 3.76 (1.56) | 3.86 (1.50) | 3.93 (1.46) | 3.71 (1.62)

Table 4: Human user evaluation results, where Num. Dialogues is the number of dialogues, Average Length is the average number of turns per dialogue, and Q1-6 are the scores obtained from the questionnaire. The scores for Q1-2 are percentages, and the standard deviation for each score is indicated in brackets.

## 4 Human User Evaluation

Using the agenda-based user simulator and error model, we have shown that significant performance gains with limited training data can be achieved with the proposed adaptation method. However, it remains to be seen to what extent these results are representative for the scenario of real users interacting with the system [17, 18].
Ideally, the policy optimisation experiments should be carried out in online interaction with human users, which has been attempted with some success [19, 20]. However, this requires moving to more sample-efficient optimisation methods and addressing many other technical challenges, which we will leave for future work. Instead, we have run a human user evaluation in which we compared three system variants, and in particular two sets of partially trained policies, corresponding to the adapted multi-dimensional system and the non-adapted one-dimensional baseline, both trained on 17k dialogues only. Users were recruited through the Amazon Mechanical Turk (AMT) crowd-sourcing platform, where they were given a task description (e.g., “You are looking for a moderately priced French restaurant. Make sure you get the phone number and address.”) and a link to the web-based interface that we have created. To enable spoken dialogue on the web interface, we use the Google Web Speech API for both ASR (for recognising the user’s speech) and TTS (for synthesising the system’s speech); our dialogue system server receives recognised user utterances and responds with system utterances, both in text form. After finishing their conversation with the system, the user can ‘hang up’ by pressing a button and receive a token which they can use to proceed on the AMT page, where they are given a questionnaire to fill out and submit, upon which they complete the task. In the questionnaire, the subject is asked to state their opinion on 6 statements about the conversation, in the form of either a binary Yes or No (Q1 and Q2), or on a 6 point Likert scale (Q3 to Q6), ranging from ‘Strongly disagree’ to ‘Strongly agree’.

* Q1: The system recommended a restaurant that matched my constraints. (Yes/No)
* Q2: I got all the information I was looking for. (Yes/No)
* Q3: The system understood what I was saying. (6 point Likert)
* Q4: The system recognised my speech well.
(6 point Likert)
* Q5: The system’s responses were appropriate. (6 point Likert)
* Q6: The conversation felt natural. (6 point Likert)

### 4.1 Spoken dialogue system implementation

Our dialogue system takes as input the user utterance text, in the form of the ASR top hypothesis obtained from the web interface. A domain-independent dialogue act tagger is used to recognise social acts like goodbye and thanks, after which the utterance is passed to the Action State Update model for dialogue state tracking [16]. The dialogue state contains beliefs about the user goal (in terms of slot-value pairs and their confidence score), which slots are believed to be requested, which database items are or have been discussed, whether any processing problems have occurred (i.e. speech recognition or language understanding failed), and the dialogue act tags of previous user utterances. Based on the dialogue state, one or more system dialogue acts are selected using one of the trained action selection models; the three models compared in the human evaluation are highlighted in Table 3. A set of templates is used to generate a natural language system response from the selected dialogue acts. The response is synthesised on the web interface. In order to obtain results that are representative for each system variant, all 10 policies that were trained for each variant have been evaluated and the average results are reported. For each dialogue, one policy is selected from the pool of 10 using a round-robin system.

### 4.2 Results

The evaluation results are shown in Table 4, including the (objective) average dialogue length and the (subjective) scores from the questionnaire. As expected, the fully trained system mdim-ada-100k gets the best scores. More importantly, across all metrics the partially trained mdim-ada-17k gets better scores than the partially trained baseline one-dim-17k.
Especially in terms of average dialogue length and the perceived partial and full task completion rates (Q1 and Q2, respectively), the difference between one-dim-17k and mdim-ada-17k is substantial, as was predicted by the experiments with the user simulator. The scores for perceived understanding (Q3) and speech recognition (Q4) could somewhat explain these differences (rather than attributing them to the used policies), but their variance is quite large. When assessing how appropriate the system responses appear to users (Q5), the difference between one-dimensional and multi-dimensional adaptation is larger. Compared to the simulated evaluation results in Table 3, the perceived success rates (Q2) are much lower in the case of the fully trained system (79.9% vs. 88.9%) and the multi-dimensional partially trained system (70% vs. 78.5%), whereas they are slightly higher in the case of the partially trained one-dimensional system (63.05% vs. 60.03%). In terms of average dialogue length, the human evaluation dialogues turn out to be shorter overall. This suggests that the human subjects might have given up more quickly when the system failed to recommend a restaurant that met their constraints, i.e. the unsuccessful dialogues were shorter. In contrast, the simulator keeps trying to complete the goal until the maximum number of user turns is reached (set to 30 turns in the configuration) or the system response act is repeated too many times (set to 3 times in the configuration).

## 5 Related work

In the area of transfer learning methods for dialogue management, most approaches have focused on cross-domain adaptation or multi-domain optimisation for slot-filling dialogue, where each domain is defined in terms of slots and their possible values. In [3], Domain-Independent Parameterisation (DIP) of dialogue state and action representations is introduced to enable transfer across domains.
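The round-robin policy selection described in § 4.1 can be sketched as follows; the `PolicyPool` class and the integer policy identifiers are illustrative stand-ins, not the authors' implementation:

```python
from itertools import cycle

class PolicyPool:
    """Round-robin selection over a fixed pool of trained policies.

    Each new dialogue is served by the next policy in the pool, so that
    evaluation results average evenly over all trained policies.
    """

    def __init__(self, policies):
        self._policies = list(policies)
        self._iter = cycle(self._policies)

    def next_policy(self):
        # Called once at the start of each dialogue.
        return next(self._iter)

# Toy usage: 10 policies identified by index, assigned to 25 dialogues.
pool = PolicyPool(range(10))
assignments = [pool.next_policy() for _ in range(25)]
# Every policy serves either 2 or 3 of the 25 dialogues.
```

With ~200 dialogues per system variant (Table 4), this scheme gives each of the 10 policies roughly 20 dialogues, so per-policy variance is averaged out in the reported scores.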
DIP seeks to train a dialogue policy that abstracts away from the specific slots and values in a particular domain, so it is applicable to the conversational search task in other domains as well. The effectiveness of their method was demonstrated using an agenda-based user simulator when adapting from restaurant search to laptop search. Furthermore, they carried out a human user evaluation, showing that their best transferred DIP policy performed at the same level as a non-transferred in-domain policy. A Bayesian approach to dialogue management is described in [5], which uses Gaussian Process (GP) reinforcement learning, exploiting model priors of a generic dialogue policy for fast domain adaptation. They also discuss a Bayesian committee machine approach, in which each domain is handled by a separate GP policy, but when only limited data is available for a specific domain, its policy may rely on the output of the other policies. This approach has been further extended into a multi-agent learning setting, improving performance levels during training with human users. The Multi-Agent Dialogue Policy (MADP) is proposed in [4], which consists of several slot-specific agents and a slot-independent agent. Adopting the Deep Q-Network (DQN) reinforcement learning framework [21], the parameters of the slot-independent agent and the shared parameters of the slot-specific agents that have been learned for the source domain can be transferred to a new target domain. So in this case, the multi-agent design is based on the slots in the domain definition. The benefit of this method has only been demonstrated in simulated experiments, and is focused on adapting to new slots within the same restaurant/tourist information task. A more recent incarnation of this approach is the AgentGraph framework [22], which employs Graph Neural Networks.
This method was evaluated using the PyDial benchmark, demonstrating successful transfer between restaurant search and laptop shopping tasks. Where the agents in [5] are associated with domains, and the agents in [4, 22] are associated with slots (except for the generic slot-independent agent), the agents in the multi-dimensional approach to dialogue management are associated with dimensions [23]. Although this approach is very different in nature, it is not necessarily incompatible with these other multi-agent approaches. For example, it could benefit from the MADP approach by dividing up the Auto-feedback agent into sub-agents for each slot in the domain definition. The multi-dimensional design presented in this paper is an extension of the previous models described in [9, 7]. First, the system supports the generation of multiple dialogue acts in a single response; second, the Evaluation agent that was extended for this purpose is trained jointly with the other agents. Furthermore, the adaptation use-case in previous work was limited to domain adaptation only, where the slots and values changed between source and target scenario, but not the action sets. Finally, the human evaluation described in [7] included fully trained policies only, showing that a multi-dimensional system could be trained to a performance level equivalent to a one-dimensional baseline. In this paper, we have presented a human user evaluation to confirm the performance boost of partially adapted policies seen on simulated data when using the multi-dimensional adaptation method.

## 6 Conclusion

We have presented a multi-dimensional approach to dialogue management, in which system response actions are selected through a combination of 4 dialogue policies, 2 of which are task-independent and are therefore suitable for re-use when moving to a new application.
In simulated policy optimisation and evaluation experiments, we have shown that by re-using and adapting these task-independent policies, significant performance gains can be achieved in the early stages of training. As a first task adaptation use-case, we have looked at extending the task-specific action set with multiple slot-specific request actions, replacing the original summary request action that relied on heuristics to determine the slot to be requested. To confirm the adaptation results obtained with an agenda-based user simulator, we have carried out a crowd-sourced human user evaluation. When trained on the same limited amount of training data, the proposed multi-dimensional adaptation strategy achieved significantly better results than the one-dimensional baseline (trained from scratch without any adaptation) on all subjective metrics, including task success and appropriateness of system responses, as well as on dialogue length. In future work, we will consider more challenging use cases for applying the multi-dimensional adaptation method. We will extend the role of the social agent to improve the naturalness of the interactions, explore methods for combining different types of agents (e.g., rule-based agents, or agents trained with supervised learning), and investigate the relation between multi-dimensional dialogue management and natural language generation. Finally, to enable online learning with human users and therefore evaluate the proposed adaptation method more directly, we will explore more efficient reinforcement learning algorithms.

## References

* [1] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.
* [2] Matthew E. Taylor and Peter Stone, “Transfer learning for reinforcement learning domains: A survey,” The Journal of Machine Learning Research, vol. 10, pp. 1633–1685, 2009.
* [3] Zhuoran Wang, Tsung-Hsien Wen, Pei-Hao Su, and Yannis Stylianou, “Learning domain-independent dialogue policies via ontology parameterisation,” in Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czech Republic, Sept. 2015, pp. 412–416, Association for Computational Linguistics.
* [4] Lu Chen, Cheng Chang, Zhi Chen, Bowen Tan, Milica Gašić, and Kai Yu, “Policy adaptation for deep reinforcement learning-based dialogue management,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 6074–6078.
* [5] Milica Gašić, Nikola Mrkšić, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young, “Dialogue manager domain adaptation using gaussian process reinforcement learning,” Computer Speech & Language, vol. 45, pp. 552–569, 2017.
* [6] Harry Bunt, “Multifunctionality in dialogue,” Computer Speech & Language, vol. 25, no. 2, pp. 222–245, 2011.
* [7] Simon Keizer, Ondřej Dušek, Xingkun Liu, and Verena Rieser, “User evaluation of a multi-dimensional statistical dialogue system,” in Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, Stockholm, Sweden, Sept. 2019, pp. 392–398, Association for Computational Linguistics.
* [8] Simon Keizer and Harry Bunt, “Multidimensional dialogue management,” in Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, Sydney, Australia, July 2006, pp. 37–45, Association for Computational Linguistics.
* [9] Simon Keizer and Verena Rieser, “Towards learning transferable conversational skills using multi-dimensional dialogue modelling,” in Proceedings of the 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial/SaarDial), Saarbruecken, Germany, 2017.
* [10] Steve Young, Milica Gašić, Blaise Thomson, and Jason D. Williams, “POMDP-based statistical spoken dialog systems: A review,” Proceedings of the IEEE, vol. 101, no. 5, pp. 1160–1179, 2013.
* [11] ISO, ISO 24617-2 Language Resource Management – Semantic annotation framework – Part 2: Dialogue acts, International Organisation for Standardization, Geneva, Switzerland, 2012.
* [12] Simon Keizer and Harry Bunt, “Evaluating combinations of dialogue acts for generation,” in Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue, Antwerp, Belgium, Sept. 2007, pp. 158–165, Association for Computational Linguistics.
* [13] Matthew Henderson, Blaise Thomson, and Jason D. Williams, “The second dialog state tracking challenge,” in Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Philadelphia, PA, U.S.A., June 2014, pp. 263–272, Association for Computational Linguistics.
* [14] Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young, “Agenda-based user simulation for bootstrapping a POMDP dialogue system,” in Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, Rochester, New York, Apr. 2007, pp. 149–152, Association for Computational Linguistics.
* [15] Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, The MIT Press, second edition, 2018.
* [16] Svetlana Stoyanchev, Simon Keizer, and Rama Doddipatla, “Action state update approach to dialogue management,” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021, pp. 7398–7402.
* [17] Jost Schatzmann and Steve Young, “The hidden agenda user simulation model,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 4, pp. 733–747, 2009.
* [18] Florian Kreyssig, Inigo Casanueva, Pawel Budzianowski, and Milica Gasic, “Neural user simulation for corpus-based policy optimisation for spoken dialogue systems,” in Proceedings of the 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Melbourne, Australia, 2018.
* [19] Milica Gašić, Filip Jurčíček, Blaise Thomson, Kai Yu, and Steve Young, “On-line policy optimisation of spoken dialogue systems via live interaction with human subjects,” in IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2011, pp. 312–317.
* [20] M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young, “On-line policy optimisation of bayesian spoken dialogue systems via human interaction,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013, pp. 8367–8371.
* [21] V. Mnih, K. Kavukcuoglu, D. Silver, et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–533, 2015.
* [22] Lu Chen, Zhi Chen, Bowen Tan, Sishan Long, Milica Gašić, and Kai Yu, “AgentGraph: Toward universal dialogue management with structured deep reinforcement learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 9, pp. 1378–1391, 2019.
* [23] Harry Bunt, “Dimensions in dialogue act annotation,” in Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy, May 2006, European Language Resources Association (ELRA).
# Anisotropic Magnetized Asteroseismic Waves

B. Tripathi, Department of Physics, University of Wisconsin–Madison, Madison, Wisconsin 53706, USA<EMAIL_ADDRESS>
Dhrubaditya Mitra, Nordita, KTH Royal Institute of Technology and Stockholm University, Hannes Alfvéns väg 12, 114 19 Stockholm, Sweden<EMAIL_ADDRESS>

###### Abstract

We solve for waves in a polytropic, stratified medium with a spatially varying background magnetic field that points along a horizontal $x$-direction, and with gravity that is directed along the vertical $z$-direction. Force balance determines the magnitude of the background magnetic field, $B_{\rm 0}^{2}\sim z^{n+1}$, where $n$ is the polytropic index. Using numerical and asymptotic methods, we deduce an accurate and explicit dispersion relation for fast pressure-driven waves: $\Omega^{2}\sim K\left(2m+n\right)\left[1+(1/\mbox{M}_{\rm A})^{2}(4-2\gamma+\cos^{2}\theta-3\cos^{4}\theta)/4\right]$. Here, $\Omega$ is the frequency, $K$ the wavenumber, $\theta$ the angle the wave-vector makes with the background magnetic field, $\mbox{M}_{\rm A}$ the Alfvénic Mach number, and $m$ an integer representing the eigenstate. Applications of our result are in magnetoseismology and nonlinear asteroseismology.

(journal: ApJ)

## 1 Introduction

The strengths of magnetic fields buried below the surface of stars are not known, though they are vital for an improved understanding of the magnetic behaviour of stars. This challenge has impeded progress in the study of stellar magnetism and of the evolution of magnetized stellar interiors. To estimate the magnetic field strengths, linear asteroseismology has stood as a promising candidate (Aerts et al., 2010), although nonlinear asteroseismology may also be able to provide insights. It is undisputed that asteroseismology, both linear and nonlinear, benefits from reliable and simple expressions for the anisotropic magnetic effect on linear waves.
Specifically, the development of nonlinear asteroseismology (Guo, 2020; Van Beeck et al., 2021, 2023) requires linear dispersion relations to evaluate mode resonances; linear asteroseismology directly employs dispersion relations to solve the inverse problem of detecting subsurface magnetic fields using observed stellar surface oscillations. Such magnetoseismology has witnessed fitful progress. For example, observational studies report travel-time perturbations of acoustic waves to be a critical signature of strong magnetic fields in the stellar interior (Schunker et al., 2005; Ilonidis et al., 2011). Numerical simulations of asteroseismic waves also suggest the possibility of detecting deep-seated fields before they emerge on the surface (Singh et al., 2014, 2015, 2016, 2020; Das et al., 2020; Das, 2022). To bolster such suggestive findings, a thorough understanding of the impact of magnetic fields on asteroseismic waves is essential (Nye & Thomas, 1976; Adam, 1977; Thomas, 1983; Campos, 1983; Cally, 2007; Campos & Marta, 2015; Tripathi & Mitra, 2022).

Waves in an unmagnetized polytropic atmosphere were solved exactly by Lamb (1911), who derived the relation

$\frac{\Omega^{2}}{2K}-\frac{(n+1)(n+1-\gamma n)K}{2\gamma^{2}\Omega^{2}}=m+\frac{n}{2},$ (1)

where $\Omega$ is the frequency, $K$ the wavenumber, $n$ the polytropic index, $\gamma$ the adiabatic index, and $m$ the eigenstate index, with $m=0,1,2,...\,$. This advance led to a series of significant insights into hydrodynamic waves. Under the fast-wave approximation, $\Omega^{2}/K\gg 1$, the leading-order dispersion relation becomes

$\Omega^{2}\sim K\left(2m+n\right).$ (2)

A similar closed-form expression for waves in a magnetized polytrope is, as of yet, unknown. Despite considerable progress (e.g., Gough & Thompson, 1990; Bogdan & Cally, 1997; Cally & Bogdan, 1997), the problem remains unsolved to this day.
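Lamb's relation (1) is a quadratic in $\Omega^{2}$, so its exact root can be checked numerically against the fast-wave limit (2). A minimal sketch, with illustrative parameter values:

```python
import math

def lamb_omega2(K, n, gamma, m):
    """Exact positive root of Lamb's relation (1), viewed as a quadratic
    in Omega^2:
        (Omega^2)^2 - (2m + n) K Omega^2 - A K^2 = 0,
    with A = (n + 1)(n + 1 - gamma * n) / gamma^2."""
    A = (n + 1) * (n + 1 - gamma * n) / gamma**2
    b = (2 * m + n) * K
    return 0.5 * (b + math.sqrt(b**2 + 4 * A * K**2))

# Illustrative values: K = 1, n = 2.5, gamma = 5/3, eigenstate m = 20.
exact = lamb_omega2(1.0, 2.5, 5.0 / 3.0, 20)
fast = 1.0 * (2 * 20 + 2.5)   # fast-wave limit, Eq. (2)
# For such high-order modes the two agree to better than 0.1%.
```

The quadratic form follows from multiplying (1) through by $2K\Omega^{2}$; the discrepancy between the exact root and Eq. (2) shrinks as $m$ grows, which is exactly the fast-wave regime $\Omega^{2}/K\gg 1$.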
Gough & Thompson (1990) treat a global problem (in spherical coordinates); there, the computation of eigenfrequencies requires numerical evaluation of complicated integral expressions [e.g., Eqs. (4.11)–(4.13) in Gough & Thompson (1990)]. A closed-form analytical formula is not available. The limitations of a purely numerical approach and the lack of a closed-form expression were succinctly expressed by Bogdan & Cally (1997):

> “Ideally, we would wish to proceed by writing down an equation analogous to Lamb’s formula for the magnetized polytrope. Unfortunately, this approach is not feasible and for the most part one must instead be content with a numerically derived visual comparison of how the allowed oscillation frequencies depend upon the choice of the horizontal wavenumber $k$.”

Analytic dispersion relations are also essential for developing wave turbulence theory in the presence of both gravity and magnetic fields. In wave turbulence theory, calculations of mode resonances require simple, explicit dispersion relations that accurately capture the magnetic effect on observed linear waves. We note that Lamb’s dispersion relation (2), $\Omega\sim\sqrt{K}$, is similar to that of surface gravity waves in oceans (Hasselmann, 1962). However, there is a critical difference: surface gravity waves do not couple via three-wave resonance, thus requiring a weaker four-wave coupling (Nazarenko & Lukaschuk, 2016). The Lamb waves, on the other hand, can couple via three-wave resonance, because there are infinitely many such waves (eigenstates) at a given wavenumber, unlike only one pair of surface gravity waves at a given wavenumber. Thus, the infinitely many Lamb waves at a given wavenumber have distinct wave frequencies, which allow the sum of three frequencies at three wavenumbers to become null.
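This three-wave argument can be made concrete with the fast-wave relation (2). Because many eigenstates $m$ coexist at each wavenumber, exact collinear triads $\Omega_{1}+\Omega_{2}=\Omega_{3}$ with $K_{3}=K_{1}+K_{2}$ exist. A toy brute-force search (parameter choices are illustrative, not from the text):

```python
import math

def omega(K, m, n):
    """Fast-wave Lamb frequency, Eq. (2): Omega = sqrt(K (2m + n))."""
    return math.sqrt(K * (2 * m + n))

# Search for collinear resonant triads with K1 = K2 = 1 and n = 2.
n = 2
triads = []
for m1 in range(5):
    for m2 in range(5):
        for m3 in range(15):
            K1 = K2 = 1.0
            K3 = K1 + K2
            if abs(omega(K1, m1, n) + omega(K2, m2, n) - omega(K3, m3, n)) < 1e-12:
                triads.append((m1, m2, m3))
# e.g. (m1, m2, m3) = (1, 1, 3): sqrt(4) + sqrt(4) = sqrt(2 * 8) = 4.
```

For equal wavenumbers the resonance condition reduces to $m_{3}=2m_{1}+n/2$, so every integer $n$ of suitable parity yields exact triads; no such freedom exists for surface gravity waves, which have a single branch per wavenumber.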
While a wave turbulence theory for surface gravity waves has been well established, it is yet to be developed for asteroseismic waves, whose dispersion relation in fully analytic form is a basic requirement for such a theory. The value that an analytic dispersion relation offers in resonant-coupling theory cannot be overstated when magnetic fields make the wave dispersion relation anisotropic and complicated. Motivated by these reasons, we seek here an accurate and simple formula for the effect of magnetic fields on the Lamb waves. Introducing magnetic fields, aligned orthogonal to a vertical gravity (Fig. 1), we find, as did Bogdan & Cally (1997), that the linearized magnetohydrodynamic (MHD) equations are too cumbersome to obtain a closed-form expression for the dispersion relation, even with the fast-wave approximation, $\Omega^{2}/K\gg 1$. Here, we overcome this difficulty by using both numerical simulations and extensive use of Mathematica, followed by a variant of the Jeffreys–Wentzel–Kramers–Brillouin (JWKB) approximation, to deduce

$\Omega^{2}\sim K\left(2m+n\right)\left[1+\frac{\epsilon^{2}(4-2\gamma+\cos^{2}\theta-3\cos^{4}\theta)}{4}\right],$ (3)

in the limit $\Omega^{2}/K\gg 1$, with $m=0,1,2,...\,$. The parameter $\epsilon$ is the inverse of the Alfvénic Mach number and $\theta$ is the angle the wave vector makes with the background magnetic field. Equation (3) is the principal result of this paper.

This paper is organized in the following way. In § 2, we describe our model and present the linearized compressible MHD equations. Such equations are then numerically solved in § 3. To obtain analytical understanding of the numerical results, the linearized equations are reduced to a wave equation in § 4. The normal-form wave equation is then perturbatively solved using a variant of the JWKB theory we devise; analytical understanding is gained in § 5.
We conclude in § 6 with astrophysical implications and the utility of our results.

## 2 System setup and linearized perturbation equations

Figure 1: An inhomogeneous magnetic field $\bm{B}_{0}(z)$, oriented orthogonal to a constant vertical gravity $\bm{g}$, is considered where the wave is allowed to propagate in an arbitrary direction, shown with $\bm{K}$, making an angle $\theta$ with $\bm{B}_{0}$. The gradient in the colormap of the box schematically represents increasing functions with depth $z$ of the magnetic field, fluid density, pressure, and sound speed.

To study waves in a magnetized, stratified medium, we consider the ideal compressible MHD equations (Chandrasekhar, 1961) $\displaystyle\partial_{t}\rho+{\bm{\nabla}}\cdot\left(\rho{\bm{U}}\right)$ $\displaystyle=0\/,$ (4a) $\displaystyle\rho\left[\partial_{t}{\bm{U}}+({\bm{U}}\cdot\bm{\nabla}){\bm{U}}\right]$ $\displaystyle=-\bm{\nabla}P+\rho\bm{g}+\bm{J}\times\bm{B}\/,$ (4b) $\displaystyle\partial_{t}\bm{B}$ $\displaystyle={\bm{\nabla}}\times\left({\bm{U}}\times\bm{B}\right),$ (4c) $\displaystyle\frac{Dp}{Dt}$ $\displaystyle=c^{2}\frac{D\rho}{Dt},$ (4d) where $\rho$, ${\bm{U}}$, $P$, and $\bm{B}$ represent the density, the velocity, the pressure, and the magnetic field, respectively. The current density is ${\bm{J}}={\bm{\nabla}}\times\bm{B}/\mu_{{\rm 0}}$, where $\mu_{{\rm 0}}$ represents the magnetic permeability of the vacuum. The magnetic field is additionally constrained to be divergence-less ${\bm{\nabla}}\cdot\bm{B}=0.$ (5) We consider a local Cartesian domain where the equilibrium density $\rho_{\rm 0}$ and pressure $P_{\rm 0}$ satisfy the polytropic relation $P_{\rm 0}\sim\rho_{\rm 0}^{1+1/n},$ (6) where $n$ is the polytropic index of the gaseous atmosphere. In an unmagnetized atmosphere, the force balance gives $\partial_{z}P_{\rm 0}(z)=\rho_{\rm 0}(z)g$, where $\bm{g}=g\hat{\bm{e}}_{z}$, with $g$ as the constant acceleration due to gravity along the vertical $z$-axis, as depicted in Fig. 1.
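The hydrostatic balance $\partial_{z}P_{\rm 0}=\rho_{\rm 0}g$ together with the polytropic relation (6) fixes the background scalings $\rho_{\rm 0}\sim z^{n}$, $P_{\rm 0}\sim z^{n+1}$. A quick numerical check, with illustrative values not taken from the text:

```python
import math

# Illustrative parameters: n = 2.5, g = 1, unit density normalisation.
n, g = 2.5, 1.0
a = g / (n + 1)                      # chosen so that dP0/dz = rho0 * g

def rho0(z):
    return z**n                      # implied scaling rho_0 ~ z^n

def P0(z):
    return a * z**(n + 1)            # implied scaling P_0 ~ z^(n+1)

# Hydrostatic balance dP0/dz = rho0 * g, checked by centered differences.
h = 1e-6
for z in (0.5, 1.0, 2.0):
    dP0 = (P0(z + h) - P0(z - h)) / (2 * h)
    assert abs(dP0 - rho0(z) * g) < 1e-6

# Polytropic relation (6): P0 / rho0^(1 + 1/n) is depth-independent.
c1 = P0(1.0) / rho0(1.0) ** (1 + 1 / n)
c2 = P0(2.0) / rho0(2.0) ** (1 + 1 / n)
assert math.isclose(c1, c2)
```

The exponent bookkeeping is the whole content: $P_{\rm 0}\sim\rho_{\rm 0}^{(n+1)/n}$ with $\rho_{\rm 0}\sim z^{n}$ gives $P_{\rm 0}\sim z^{n+1}$, whose derivative $\sim z^{n}$ matches $\rho_{\rm 0}g$.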
The force balance requires $\rho_{\rm 0}(z)\sim z^{n};\quad P_{\rm 0}(z)\sim z^{n+1}.$ (7)

### 2.1 Background magnetic field and sound speed

We introduce an inhomogeneous background magnetic field, $\bm{B}=B_{\rm 0}(z)\hat{\bm{e}}_{x}$. The force-balance relation with the Lorentz force, $({\bm{\nabla}}\times\bm{B}/\mu_{{\rm 0}})\times\bm{B}$, then becomes $\partial_{z}P_{\rm 0}=\rho_{\rm 0}g-\frac{1}{2\mu_{{\rm 0}}}\partial_{z}B_{\rm 0}^{2}.$ (8) The solution $B_{0}^{2}(z)\sim z^{n+1}\/$ (9) is consistent with the profiles in Eq. (7). The plasma $\beta$ is then a constant throughout the domain: $\beta\equiv\frac{P_{\mathrm{gas}}}{P_{\mathrm{magnetic}}}=\frac{2\mu_{{\rm 0}}P_{\rm 0}}{B_{\rm 0}^{2}}.$ (10) The sound speed ($c$) may now be deduced from Eq. (8), using $P_{\rm 0}=c^{2}\rho_{\rm 0}/\gamma$, where $\gamma$ is the adiabatic index of the gas: $\partial_{z}c^{2}=-\frac{nc^{2}}{z}+\frac{\gamma g}{1+\beta^{-1}}.$ (11) The solution to Eq. (11) is $c^{2}=c_{\rm 0}^{2}z$, where $c_{\rm 0}^{2}$ is a constant given by $c_{\rm 0}^{2}=\frac{\gamma g}{(n+1)(1+\beta^{-1})}\/.$ (12) It is useful to express $\beta^{-1}$ in terms of the Alfvénic Mach number $\mbox{M}_{\rm A}$, which is the ratio of the sound speed to the Alfvén speed, $\beta^{-1}=\frac{B_{\rm 0}^{2}}{\mu_{0}\rho_{0}}\cdot\frac{\rho_{0}}{\gamma P_{\rm 0}}\cdot\frac{\gamma}{2}=\frac{\gamma}{2\mbox{M}_{\rm A}^{2}}\/,$ (13) using which the sound speed becomes $c^{2}=\frac{\gamma gz}{(n+1)(1+\gamma\mbox{M}_{\rm A}^{-2}/2)}.$ (14)

### 2.2 Linearized perturbation equations

We now linearize the MHD equations around the background profiles $[\rho_{\rm 0},\bm{0},B_{\rm 0},P_{\rm 0}]$, introduced in Eqs. (7) and (9).
Such a linearization yields evolution equations for infinitesimal perturbations $[\widetilde{\rho},\widetilde{{\bm{u}}},\widetilde{{\bm{b}}},\widetilde{p}]$ as $\displaystyle\partial_{t}\widetilde{\rho}$ $\displaystyle=-(\widetilde{\bm{u}}\cdot\bm{\nabla})\rho_{\rm 0}-\rho_{\rm 0}(\bm{\nabla}\cdot\widetilde{\bm{u}})\/,$ (15a) $\displaystyle\partial_{t}\widetilde{u}_{x}$ $\displaystyle=-\frac{\partial_{x}\widetilde{p}}{\rho_{0}}+\frac{\widetilde{b}_{z}\partial_{z}B_{\rm 0}}{\mu_{{\rm 0}}\rho_{\rm 0}}\/,$ (15b) $\displaystyle\partial_{t}\widetilde{u}_{y}$ $\displaystyle=-\frac{\partial_{y}\widetilde{p}}{\rho_{0}}+\frac{B_{\rm 0}(\partial_{x}\widetilde{b}_{y}-\partial_{y}\widetilde{b}_{x})}{\mu_{{\rm 0}}\rho_{\rm 0}}\/,$ (15c) $\displaystyle\partial_{t}\widetilde{u}_{z}$ $\displaystyle=-\frac{\partial_{z}\widetilde{p}}{\rho_{\rm 0}}+\frac{\widetilde{\rho}g}{\rho_{\rm 0}}+\frac{B_{\rm 0}(\partial_{x}\widetilde{b}_{z}-\partial_{z}\widetilde{b}_{x})-\widetilde{b}_{x}\partial_{z}B_{\rm 0}}{\mu_{{\rm 0}}\rho_{\rm 0}}\/,$ (15d) $\displaystyle\partial_{t}\widetilde{b}_{x}$ $\displaystyle=-B_{\rm 0}(\partial_{y}\widetilde{u}_{y}+\partial_{z}\widetilde{u}_{z})-\widetilde{u}_{z}\partial_{z}B_{\rm 0}\/,$ (15e) $\displaystyle\partial_{t}\widetilde{b}_{y}$ $\displaystyle=-B_{\rm 0}\partial_{x}\widetilde{u}_{y}\/,$ (15f) $\displaystyle\partial_{t}\widetilde{b}_{z}$ $\displaystyle=-B_{\rm 0}\partial_{x}\widetilde{u}_{z}\/,$ (15g) $\displaystyle\partial_{t}\widetilde{p}$ $\displaystyle=-(\widetilde{\bm{u}}\cdot\bm{\nabla})P_{\rm 0}-c^{2}\rho_{0}(\bm{\nabla}\cdot\widetilde{\bm{u}})\/.$ (15h) We shall use $\widetilde{\chi}=\bm{\nabla}\cdot\widetilde{{\bm{u}}}$ (16) in the rest of the article, where helpful.
Equations (15a)–(15h) can be simplified to derive a set of fewer (closed) equations $\displaystyle\partial_{t}^{2}\widetilde{u}_{x}$ $\displaystyle=c^{2}\partial_{x}\widetilde{\chi}+g\partial_{x}\widetilde{u}_{z}\/,$ (17a) $\displaystyle\partial_{t}^{2}\widetilde{u}_{y}$ $\displaystyle=c^{2}\partial_{y}\widetilde{\chi}+g\partial_{y}\widetilde{u}_{z}+\frac{c^{2}}{\mbox{M}_{\rm A}^{2}}\left(\partial_{xx}\widetilde{u}_{y}+\partial_{y}\widetilde{\chi}-\partial_{xy}\widetilde{u}_{x}\right)\/,$ (17b) $\displaystyle\partial_{t}^{2}\widetilde{u}_{z}$ $\displaystyle=c^{2}\partial_{z}\widetilde{\chi}-g\partial_{x}\widetilde{u}_{x}\left[1+\frac{\gamma}{\mbox{M}_{\rm A}^{2}(1+\gamma\mbox{M}_{\rm A}^{-2}/2)}\right]-g\partial_{y}\widetilde{u}_{y}$ $\displaystyle\hskip 5.69046pt+\frac{\gamma(1+\mbox{M}_{\rm A}^{-2})g\widetilde{\chi}}{(1+\gamma\mbox{M}_{\rm A}^{-2}/2)}+\frac{c^{2}}{\mbox{M}_{\rm A}^{2}}\left(\partial_{xx}\widetilde{u}_{z}+\partial_{z}\widetilde{\chi}-\partial_{xz}\widetilde{u}_{x}\right)\/.$ (17c) We note that the appearance of $\mbox{M}_{\rm A}^{2}$ in certain terms of Eqs. (17a)–(17c) may seem non-trivial at first sight; however, upon inspection, we understand them as terms emerging from the effect of the Lorentz force on the background states, via terms like $\partial_{z}c^{2}(z)$ and $(\partial_{z}P_{\rm 0})/\rho_{\rm 0}$, while processing Eqs. (15a)–(15h). Admittedly, Eqs. (17a)–(17c) are somewhat unwieldy to work with directly. Thus, we now non-dimensionalize the equations to make them reasonably transparent.

### 2.3 Non-dimensionalized linear equations

We define $L$ as the characteristic length scale over which the sound speed varies (and, for that matter, pressure, density, and temperature also vary) appreciably. Then we find that the characteristic sound speed in an unmagnetized polytrope is, using Eq. (14), $c_{L}=\sqrt{\gamma gL/(n+1)}$.
Using $L$ and $L/c_{L}$ as the dimensional length and time units, we non-dimensionalize all variables henceforth, starting with the sound speed $C^{2}=\frac{c^{2}}{c_{L}^{2}}=\frac{Z}{1+\gamma\mbox{M}_{\rm A}^{-2}/2},$ (18) where the lowercase dimensional variables ($c$ and $z$) are cast as uppercase non-dimensional variables ($C$ and $Z$). Thus we replace all the dimensional variables in Eqs. (17a)–(17c) using $\displaystyle(x,y,z)$ $\displaystyle=(LX,LY,LZ),$ (19a) $\displaystyle c^{2}$ $\displaystyle=c_{L}^{2}\frac{Z}{1+\gamma\mbox{M}_{\rm A}^{-2}/2},$ (19b) $\displaystyle\partial_{t}$ $\displaystyle\equiv\frac{c_{L}}{L}\partial_{T},$ (19c) where the uppercase characters $X,Y,$ and $T$ represent non-dimensional variables. We analyze perturbations by Fourier-transforming in the $(x,y)$-plane, viz., $\widetilde{u}_{x}(Z)=\int dK_{X}dK_{Y}d\Omega\,\hat{u}_{x}\exp\left[i(K_{X}X+K_{Y}Y+\Omega T)\right],$ (20) where the uppercase characters represent non-dimensional quantities, e.g., $\bm{K}\equiv(K_{X},K_{Y})$ is the non-dimensional wavevector in the $(x,y)$-plane, and $\Omega$ is the nondimensional frequency. Fourier analyzing Eqs.
(17a)–(17c) and representing $\mbox{M}_{\rm A}^{-1}$ by $\epsilon$ henceforth, we write $\displaystyle\left[\frac{-\Omega^{2}(1+\gamma\epsilon^{2}/2)}{iK_{X}Z}\right]\hat{u}_{x}+\left[\frac{-(n+1)(1+\gamma\epsilon^{2}/2)}{\gamma Z}\right]\hat{u}_{z}=\hat{\chi}\/,$ (21a) $\displaystyle\left[-i\epsilon^{2}K_{X}\right]\hat{u}_{x}+\left[\frac{-\Omega^{2}(1+\gamma\epsilon^{2}/2)}{iK_{Y}Z}+\frac{\epsilon^{2}K_{X}^{2}}{iK_{Y}}\right]\hat{u}_{y}+\left[\frac{-(n+1)(1+\gamma\epsilon^{2}/2)}{\gamma Z}\right]\hat{u}_{z}=\hat{\chi}(1+\epsilon^{2})\/,$ (21b) $\displaystyle\left[iK_{X}\left\{\frac{1+\gamma\epsilon^{2}/2}{\gamma}+\epsilon^{2}\left(1+\frac{Z\partial_{Z}}{n+1}\right)\right\}\right]\hat{u}_{x}+\left[\frac{iK_{Y}(1+\gamma\epsilon^{2}/2)}{\gamma}\right]\hat{u}_{y}+\left[\frac{-\Omega^{2}(1+\gamma\epsilon^{2}/2)+\epsilon^{2}ZK_{X}^{2}}{n+1}\right]\hat{u}_{z}=\left(\hat{\chi}+\frac{Z\partial_{Z}\hat{\chi}}{n+1}\right)(1+\epsilon^{2})\/.$ (21c) ## 3 Exact numerical solution We obtain a fully converged numerical solution to Eqs. (21a)–(21c) by employing the spectral code “Dedalus” (Burns et al., 2020). Referring to the Dedalus methods paper (Burns et al., 2020) for more details, we briefly outline the numerical procedures employed in Dedalus for eigenvalue problems. At each horizontal wavenumber $(K_{X},K_{Y})$, the state variables—the three components of velocity—are expanded in Chebyshev polynomials along the inhomogeneous $z$-axis. 
Because of the background inhomogeneity, different Chebyshev coefficients couple, creating a dense linear operator. Sparsification is provided by a change of basis from the Chebyshev polynomials of the first kind to those of the second kind. To impose boundary conditions and keep the matrix sparse, Dirichlet preconditioning is applied. The resulting matrices are then solved efficiently by passing the matrices $\mathcal{L}$ and $\mathcal{M}$ of the eigenvalue ($\sigma$) problem, $\mathcal{L}\mathcal{X}=\sigma\mathcal{M}\mathcal{X}$, to the “scipy” linear algebra packages. For a given spectral resolution along the $z$-axis, we solve for all the eigenvalues of the matrices. Such a non-targeted, general solution produces eigenmodes of all the linear MHD waves, including the pressure-driven and gravity-driven modes. We specialize to the high-frequency modes to assess the effect of magnetic fields on such pressure-driven modes. We identify the pressure-driven modes by comparing their eigenfrequencies with those predicted by Lamb’s solution. Figure 2: $\Omega^{2}(K_{X},K_{Y})$ for hydrodynamic and magnetized polytropes. The two plotted surfaces are visually indistinguishable because the difference ($\Delta\Omega^{2}$) between them is much smaller than $\Omega^{2}$; see Fig. 3. The parameters used are $\epsilon=0.1$, $m=20$, $n=2.5$, and $\gamma=5/3$. For variations in these parameters, the surface plot of $\Omega^{2}(K_{X},K_{Y})$ remains qualitatively the same. For the boundary conditions, at the lower boundary $z=L_{z}$, we require $\widetilde{u}_{z}=0$. At the upper boundary, $z=0$, where the atmosphere ceases, we enforce, following Lamb (1911), $\displaystyle\frac{Dp}{Dt}$ $\displaystyle=0,$ (22) which implies $c^{2}\frac{D\rho}{Dt}=-c^{2}\rho_{\rm 0}\widetilde{\chi}=0=z^{n+1}\widetilde{\chi}.$ (23) We first validate that the solver successfully reproduces Lamb’s exact analytical dispersion relation in the absence of the magnetic field. 
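The generalized eigenvalue step described above can be sketched in isolation. The matrices below are toy placeholders, not the actual Dedalus-discretized operators; the sketch only illustrates the scipy call that solves $\mathcal{L}\mathcal{X}=\sigma\mathcal{M}\mathcal{X}$ for all eigenvalues at once:

```python
import numpy as np
from scipy import linalg

def solve_gevp(L, M):
    """Solve the generalized eigenvalue problem L x = sigma M x
    for all eigenvalues, as done for the discretized operators."""
    sigma, X = linalg.eig(L, M)
    order = np.argsort(sigma.real)   # sort modes by real part of sigma
    return sigma[order], X[:, order]

# Toy diagonal operators with known eigenvalues sigma = 2 and 3
L = np.diag([4.0, 9.0])
M = np.diag([2.0, 3.0])
sigma, X = solve_gevp(L, M)
```

In the actual computation the matrices come from the Chebyshev discretization at each $(K_{X},K_{Y})$; the non-targeted solve then returns the full mixed spectrum of pressure- and gravity-driven modes.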
We then compute the eigenvalues of the magnetized polytrope, which we present in Fig. 2. Although the two surface plots of $\Omega^{2}(K_{X},K_{Y})$ are displayed in Fig. 2—one for the hydrodynamic and the other for the magnetized polytrope—the plots are visually indistinguishable. The difference between the two surface plots is shown in Fig. 3. For $K_{Y}=0$, $\Delta\Omega^{2}$ is negative, and for $K_{X}=0$, $\Delta\Omega^{2}$ is positive and relatively large. We also note a minor decrease in the positive value of $\Delta\Omega^{2}$ in going from $K_{X}\approx 0$ to $K_{X}=0$. It turns out that $\Delta\Omega^{2}$ is related to the hydrodynamic squared frequency $\Omega_{\mathrm{hydro}}^{2}$, and $\Delta\Omega^{2}/\Omega_{\mathrm{hydro}}^{2}$ is almost entirely independent of the wavevector magnitude (Fig. 4); only an angular dependence is observed. The extremely low wavenumbers in Fig. 4 correspond to very large-scale waves that cannot be captured in a finite numerical box. To capture lower wavenumbers, we significantly extended the domain size along the vertical $z$-axis, which allowed us to obtain fully converged numerical results for the other wavenumbers. Figure 3: $\Delta\Omega^{2}=\Omega^{2}-\Omega_{\mathrm{hydro}}^{2}$ is plotted for a magnetized polytrope, with $\epsilon=0.1$, $m=20$, $n=2.5$, and $\gamma=5/3$. This surface plot shows the difference between the two surface plots in Fig. 2. Figure 4: The relative difference of the squared frequency depends on the wavevector propagation angle (spanned by the black circle), but is insensitive to the wavevector magnitude. The parameters chosen are $\epsilon=0.1$, $m=20$, $n=2.5$, and $\gamma=5/3$. ## 4 Reduction to wave equation To analytically determine the dispersion relation, we solve the set of equations (21a)–(21c) perturbatively in the limit of a weak magnetic field, i.e., $\epsilon\ll 1$. 
By setting $\epsilon=0$, we recover Lamb’s equations for the unmagnetized polytrope (Lamb, 1911). Lamb’s equations reduce to a second-order differential equation for $\hat{\chi}$. With the same goal, we proceed in the following manner. We rewrite Eqs. (21a)–(21c) as $\begin{bmatrix}\begin{array}[]{c c c}M_{11}&M_{12}&M_{13}\\ M_{21}&M_{22}&M_{23}\\ M_{31}&M_{32}&M_{33}\\ \end{array}\end{bmatrix}\begin{bmatrix}\begin{array}[]{c}\hat{u}_{x}\\ \hat{u}_{y}\\ \hat{u}_{z}\\ \end{array}\end{bmatrix}=\begin{bmatrix}\begin{array}[]{c}h_{x}(\hat{\chi})\\ h_{y}(\hat{\chi})\\ h_{z}(\hat{\chi},\partial_{Z}\hat{\chi})\\ \end{array}\end{bmatrix},$ (24) where the matrix elements $M_{ij}$ are independent of $Z$-derivatives, and are functions of $K_{X},K_{Y},\Omega,n,\gamma,\epsilon,$ and $Z$ only (for their complete expressions, see Appendix A). In arriving at Eq. (24), we have replaced $\partial_{Z}\hat{u}_{x}$ in the first term on the left-hand side of Eq. (21c) with its exact expression obtained by differentiating Eq. (21a) with respect to $Z$. We then eliminate the $\partial_{Z}$ operation in $\partial_{Z}\hat{u}_{z}$ by writing it as $\hat{\chi}-iK_{X}\hat{u}_{x}-iK_{Y}\hat{u}_{y}$. Such a process removes the $\partial_{Z}$ operation from the matrix $M$. The functions $h_{\nu}$ are linear in $\hat{\chi}$ and $\partial_{Z}\hat{\chi}$. Straightforward inversion of the matrix $M$ expresses all components of the velocity in terms of $\hat{\chi}$ and $\partial_{Z}\hat{\chi}$: $\hat{u}_{\nu}=f_{\nu}(Z)\hat{\chi}+g_{\nu}(Z)\partial_{Z}\hat{\chi},$ (25) where $\nu$ can be either $x,y,$ or $z$. We recognize that $f_{\nu}$ and $g_{\nu}$ are functions of $Z$, but do not involve $\partial_{Z}$. The three velocity components of Eq. 
(25) can now be combined into a single second-order differential equation for $\hat{\chi}$: $\partial_{Z}^{2}\hat{\chi}+P(Z,\epsilon)\partial_{Z}\hat{\chi}+R(Z,\epsilon)\hat{\chi}=0,$ (26) which can be recast into the normal form of a second-order differential equation by changing variables as $\hat{\chi}(Z)=\hat{\psi}(Z)\exp{\left[-\frac{1}{2}\int_{Z}dZ\,P(Z)\right]},$ (27) which reduces Eq. (26) to the wave equation $\partial_{Z}^{2}\hat{\psi}+\Gamma^{2}(Z,\epsilon)\hat{\psi}=0.$ (28) The procedure outlined above appears straightforward. However, the analytical manipulations in arriving at Eq. (28)—a magnetized version of Lamb’s equation—require laborious and careful calculations, as $\Gamma^{2}(Z,\epsilon)$ alone conceals an expression of exhaustive length—tens of pages of this article. In the absence of the magnetic field, the expression for $\Gamma^{2}(Z,\epsilon=0)=\Gamma_{\rm 0}^{2}(Z)$ is beautifully short, $\Gamma_{\rm 0}^{2}(Z)=K^{2}(Z-\alpha)(\beta-Z)/Z^{2}$, where $\alpha$ and $\beta$ are the two turning points—the two zeros of $\Gamma_{\rm 0}^{2}(Z)$. ## 5 Perturbative solution for anisotropic magnetic effect The presence of a weak magnetic field may be considered a perturbation to Lamb’s two-turning-point eigenvalue problem. Hence, the magnetic field changes both the locations of the turning points and the form of the potential $\Gamma(Z,\epsilon)$. 
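The reduction from Eq. (26) to Eq. (28) is the standard normal-form transformation, under which $\Gamma^{2}=R-P'/2-P^{2}/4$. A small sympy sketch (a generic check of the change of variables with symbolic $P$ and $R$, not the actual functions of the magnetized problem) verifies this:

```python
import sympy as sp

Z = sp.symbols('Z')
P, R, psi = (sp.Function(name)(Z) for name in ('P', 'R', 'psi'))

# Change of variables, Eq. (27): chi = psi * exp(-(1/2) * Integral of P)
S = -sp.Rational(1, 2)*sp.Integral(P, Z)
chi = psi*sp.exp(S)

# Left-hand side of Eq. (26) evaluated on this ansatz
lhs = sp.diff(chi, Z, 2) + P*sp.diff(chi, Z) + R*chi

# Stripping the exponential prefactor should leave Eq. (28),
# psi'' + Gamma^2 * psi, with Gamma^2 = R - P'/2 - P^2/4
Gamma2 = R - sp.diff(P, Z)/2 - P**2/4
residual = sp.simplify(sp.expand(lhs)*sp.exp(-S)
                       - (sp.diff(psi, Z, 2) + Gamma2*psi))
```

The residual vanishes identically, confirming that the first-derivative term has been absorbed into the prefactor.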
In general, we write the JWKB quantization condition (see e.g., Bender & Orszag, 1978; Tripathi, 2022) as $\frac{1}{\pi}\int_{Z_{1}(\epsilon)}^{Z_{2}(\epsilon)}\Gamma(Z,\epsilon)\,dZ\sim\left(m+\frac{1}{2}\right);\,\,\,m=0,1,2,...,$ (29) where $m$ refers to the eigen-state index, $\Gamma(Z,\epsilon)$ is the magnetically-modified wavenumber, and $Z_{1}(\epsilon)$ and $Z_{2}(\epsilon)$ are the magnetically-shifted turning points. ### 5.1 Perturbative calculations First, we expand the wavenumber $\Gamma(Z,\epsilon)$ as $\Gamma(Z,\epsilon)=\Gamma_{\rm 0}(Z)+\epsilon\Gamma_{1}(Z)+\epsilon^{2}\Gamma_{2}(Z)+\epsilon^{3}\Gamma_{3}(Z)+\mathcal{O}{(\epsilon^{4})}.$ (30) It may be noted that the leading-order effect of the Lorentz force on the wavenumber appears only at the second order $\mathcal{O}{(\epsilon^{2})}$ in the expansion. So, $\Gamma_{1}(Z)$ above is zero. The expressions for $\Gamma_{0}(Z)$ and $\Gamma_{2}(Z)$ are $\displaystyle\Gamma_{0}(Z)$ $\displaystyle=\frac{iK\sqrt{(Z-\alpha)(Z-\beta)}}{Z},$ (31a) $\displaystyle\Gamma_{2}(Z)$ $\displaystyle=\frac{i(b_{2}Z^{2}+b_{1}Z+b_{0})}{\sqrt{(Z-\alpha)(Z-\beta)}},$ (31b) where we have used $K_{X}=K\cos\theta$ and $K_{Y}=K\sin\theta$. The parameters $\alpha$ and $\beta$ satisfy the following properties $\displaystyle\alpha\beta$ $\displaystyle=\frac{n(n+2)}{4K^{2}},$ (32a) $\displaystyle\alpha+\beta$ $\displaystyle=\left[\frac{\Omega^{2}}{K}-\frac{(n+1)(n+1-n\gamma)}{\gamma^{2}}\frac{K}{\Omega^{2}}\right]\frac{1}{K}.$ (32b) The lengthy expressions of $b_{0}$, $b_{1}$, and $b_{2}$ are presented in Appendix B. 
We note that these parameters depend on $\Omega,K,\theta,n,$ and $\gamma$ only. Now we expand the turning points $Z_{1}(\epsilon)$ and $Z_{2}(\epsilon)$ around the turning points of the unmagnetized polytrope, $Z_{1}^{(0)}=\alpha$ and $Z_{2}^{(0)}=\beta$, as $Z_{j}(\epsilon)=Z_{j}^{(0)}+\epsilon Z_{j}^{(1)}+\epsilon^{2}Z_{j}^{(2)}+\mathcal{O}{(\epsilon^{3})},$ (33) where $j=1$ and $j=2$ refer to the left and the right turning points, respectively (i.e., $\alpha<\beta$). We note that, in Eq. (33), the correction term at the first order in $\epsilon$ is zero, i.e., $Z_{j}^{(1)}=0$. We find this result by substituting the expression for $Z_{j}(\epsilon)$ from Eq. (33) in $\Gamma(Z,\epsilon)=0$, and by employing Eq. (30). Solving the resulting equation order-by-order in $\epsilon$ produces $Z_{j}^{(1)}=0$. We note, however, that, at the second order in $\epsilon$ (which is where the effect of the Lorentz force comes into action), $Z_{j}^{(2)}$ becomes non-zero. The expressions for $Z_{j}^{(2)}$ are given in Appendix C. Because the term $Z_{j}^{(2)}$ appears at the second order in $\epsilon$, it may be tempting to assume that it contributes at second order in $\epsilon$ to the JWKB integral in Eq. (29). This, however, is not the case: the term contributes at third and higher orders in $\epsilon$, as we show next. Expanding Eq. 
(29), $\displaystyle\frac{1}{\pi}\int_{Z_{1}^{(0)}+\epsilon^{2}Z_{1}^{(2)}+\mathcal{O}{(\epsilon^{3})}}^{Z_{2}^{(0)}+\epsilon^{2}Z_{2}^{(2)}+\mathcal{O}{(\epsilon^{3})}}$ $\displaystyle\left[\Gamma_{0}(Z)+\epsilon^{2}\Gamma_{2}(Z)+\mathcal{O}{(\epsilon^{3})}\right]\,dZ$ (34) $\displaystyle\sim\left(m+\frac{1}{2}\right);\,\,\,m=0,1,2,...\/.$ We now integrate $\Gamma_{\rm 0}(Z)$ as $\displaystyle\int_{Z_{1}^{(0)}+\epsilon^{2}Z_{1}^{(2)}+\mathcal{O}{(\epsilon^{3})}}^{Z_{2}^{(0)}+\epsilon^{2}Z_{2}^{(2)}+\mathcal{O}{(\epsilon^{3})}}\Gamma_{\rm 0}(Z)\,dZ$ (35) $\displaystyle=\left(\int_{\alpha+\epsilon^{2}Z_{1}^{(2)}+\mathcal{O}{(\epsilon^{3})}}^{\alpha}+\int_{\alpha}^{\beta}+\int_{\beta}^{\beta+\epsilon^{2}Z_{2}^{(2)}+\mathcal{O}{(\epsilon^{3})}}\right)\Gamma_{\rm 0}(Z)\,dZ$ $\displaystyle=-\frac{2\epsilon^{3}K(\beta-\alpha)^{1/2}\left[Z_{1}^{(2)}\right]^{3/2}}{3\alpha}+\int_{\alpha}^{\beta}\Gamma_{\rm 0}(Z)\,dZ+\frac{2\epsilon^{3}K(\beta-\alpha)^{1/2}\left[-Z_{2}^{(2)}\right]^{3/2}}{3\beta}+\mathcal{O}{(\epsilon^{4})}\/,$ where we notice terms with $\epsilon^{3}$ arising from the second-order shifts in the turning points, $\epsilon^{2}Z_{1}^{(2)}$ and $\epsilon^{2}Z_{2}^{(2)}$. 
The additional power of $\epsilon$ emerges from the integrand $\Gamma_{\rm 0}(Z)$, which contains a factor $\sqrt{Z-Z_{j}^{(0)}}$ that, when expanded around $Z_{j}^{(0)}$ in powers of $\epsilon$, contributes an extra $\epsilon$ to the integral. In Eq. (35), the term $\int_{\alpha}^{\beta}\Gamma_{0}(Z)\,dZ$ is the integral that one finds in Lamb’s calculations: $\displaystyle\int_{\alpha}^{\beta}\Gamma_{0}(Z)\,dZ\equiv$ $\displaystyle I_{\mathrm{Lamb}}=\frac{(n+1)}{2}\left[\frac{\Omega^{2}}{K(n+1)}+\frac{(\gamma n-n-1)K}{\gamma^{2}\Omega^{2}}-1\right]+1.$ Thus, we replace the integral $\int_{\alpha}^{\beta}\Gamma_{0}(Z)\,dZ$, appearing in Eq. (35), with $I_{\mathrm{Lamb}}$ from Eq. (5.1) to obtain $\displaystyle I_{\mathrm{Lamb}}$ $\displaystyle+\frac{\epsilon^{2}}{\pi}\int_{\alpha}^{\beta}\Gamma_{2}(Z)dZ+\mathcal{O}{(\epsilon^{3})}$ (36) $\displaystyle\sim\left(m+\frac{1}{2}\right);\,\,\,m=0,1,2,...,$ which is accurate up to second order in $\epsilon$. We substitute $\Gamma_{2}(Z)$ from Eq. (31b) and perform the integral in Eq. (36) to arrive at $\displaystyle I_{\mathrm{Lamb}}-\epsilon^{2}$ $\displaystyle\left[b_{0}+b_{1}\left(\frac{\alpha+\beta}{2}\right)+b_{2}\left(\frac{3\alpha^{2}+2\alpha\beta+3\beta^{2}}{8}\right)\right]$ (37) $\displaystyle\sim\left(m+\frac{1}{2}\right);\,\,\,m=0,1,2,...\,.$ Employing the fast-wave approximation, we expand each term on the left-hand side of Eq. 
(37) in powers of $\delta=K/\Omega^{2}$ as $\displaystyle I_{\mathrm{Lamb}}$ $\displaystyle\approx\frac{1}{\delta}\left[\frac{1}{2}+\frac{\delta(1-n)}{2}+\mathcal{O}(\delta^{2})\right],$ (38a) $\displaystyle b_{0}$ $\displaystyle\approx\frac{1}{\delta}\left[\frac{2-\gamma}{4}+\mathcal{O}(\delta^{2})\right],$ (38b) $\displaystyle b_{1}\left(\frac{\alpha+\beta}{2}\right)$ $\displaystyle\approx\frac{1}{\delta}\left[\frac{\cos^{2}\theta}{4}+\mathcal{O}(\delta^{2})\right],$ (38c) $\displaystyle b_{2}\left(\frac{3\alpha^{2}+2\alpha\beta+3\beta^{2}}{8}\right)$ $\displaystyle\approx\frac{1}{\delta}\left[\frac{-3\cos^{4}\theta}{8}+\mathcal{O}(\delta^{2})\right].$ (38d) Thus we obtain a simplified dispersion relation from Eq. (37): $\displaystyle\frac{\Omega^{2}}{K}\left[\frac{1}{2}-\frac{\epsilon^{2}}{4}\left\{2-\gamma+\cos^{2}\theta-\frac{3\cos^{4}\theta}{2}\right\}\right]$ (39) $\displaystyle\sim\left(m+\frac{n}{2}\right);\,\,\,m=0,1,2,...\,.$ It turns out that Eq. (39) is not in excellent agreement with our numerical results, but replacing $\cos^{2}\theta$ with $(\cos^{2}\theta)/2$ gives excellent agreement. Informed in this way, we write the final dispersion relation $\boxed{\begin{aligned} &\frac{\Omega^{2}}{K}\left[\frac{1}{2}-\frac{\epsilon^{2}}{4}\left\{2-\gamma+\frac{\cos^{2}\theta}{2}-\frac{3\cos^{4}\theta}{2}\right\}\right]\\ &\sim\left(m+\frac{n}{2}\right);\,\,\,m=0,1,2,...\,.\end{aligned}}$ (40) ### 5.2 Comparison between theory and numerics Figure 5: Theory and numerics: Our theory predicts that $g(\theta)=\Delta\Omega^{2}/(\epsilon^{2}\Omega_{\rm hydro}^{2})+\gamma/2$ depends only on $\theta$, as given in Eq. (41). The function $g(\theta)$ is independent of the wavenumber $K$, the Alfvénic Mach number $\epsilon^{-1}$, the polytropic index $n$, the adiabatic index $\gamma$, and the eigen-state index $m$. 
Here, we plot $g(\theta)$ obtained from our numerical solutions for several values of $m$ (ranging from $10$ to $35$), several values of $K$ ($0.6\leq|K|\leq 2$), and two values each of $\epsilon$ and $\gamma$ (different symbols). All data points collapse onto the same universal curve. The function $g(\theta)$ from our asymptotic theory, Eq. (41), is also plotted here. The asymptotic curve is indistinguishable from the exact numerical solutions. Using Lamb’s relation, $\Omega_{\rm hydro}^{2}\sim K(2m+n)$, we rewrite Eq. (40) as $\displaystyle g(\theta)$ $\displaystyle\equiv\gamma/2+\Delta\Omega^{2}/(\epsilon^{2}\Omega_{\rm hydro}^{2})$ (41) $\displaystyle=1+\frac{1}{4}\left(\cos^{2}\theta-3\cos^{4}\theta\right),$ where $\Delta\Omega^{2}\equiv\Omega^{2}-\Omega_{\rm hydro}^{2}$. This is a remarkable result: the right-hand side of Eq. (41) is independent of every parameter other than $\theta$! In Fig. 5, we plot the function $g(\theta)$ from our numerical determination of the eigenfrequencies for several different values of $m$ and $K$, two different values of $\epsilon$, and two different values of $\gamma$. These numerical values are shown as different symbols. All the different curves fall on top of each other, creating a universal curve. We also plot our asymptotic expression, Eq. (40), which agrees very well with the numerical results. This demonstrates that our theory is in excellent agreement with numerics even for $\epsilon$ as large as $\epsilon=0.1$. It is surprising that, in our attempt to capture the anisotropy brought in by the magnetic field, despite a myriad of unwieldy expressions encountered on the way, an expression as simple as Eq. (40) is obtained. This simple expression is also highly accurate, as demonstrated in Fig. 5. 
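The collapse in Fig. 5 is easy to reproduce from Eq. (40) alone; a minimal numerical sketch (the parameter values below are arbitrary choices within the quoted ranges):

```python
import numpy as np

def omega2(K, m, n, gamma, eps, theta):
    """Squared eigenfrequency from the final dispersion relation, Eq. (40)."""
    c2 = np.cos(theta)**2
    bracket = 0.5 - (eps**2/4)*(2 - gamma + c2/2 - 3*c2**2/2)
    return K*(m + n/2)/bracket

def g_numeric(K, m, n, gamma, eps, theta):
    """g(theta) assembled as defined above, using Lamb's relation
    Omega_hydro^2 ~ K(2m + n)."""
    hydro = K*(2*m + n)
    return gamma/2 + (omega2(K, m, n, gamma, eps, theta) - hydro)/(eps**2*hydro)

def g_closed(theta):
    """Closed form of Eq. (41): g(theta) = 1 + (cos^2(theta) - 3 cos^4(theta))/4."""
    c2 = np.cos(theta)**2
    return 1 + (c2 - 3*c2**2)/4
```

For $\epsilon=0.1$ the two expressions agree to better than a percent for any $\theta$, $K$, $m$, $n$, and $\gamma$, mirroring the universal curve of Fig. 5.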
The success of our asymptotic theory is somewhat unexpected, given that the analytical solution is accurate only to leading order in $\epsilon$ and $K/\Omega^{2}$. More accurate solutions can be obtained by using the full expressions of $b_{0},b_{1},$ and $b_{2}$ from Appendix B in Eqs. (38b)–(38d), and keeping higher-order terms in $\epsilon$ in Eq. (34). Finally, a remark on the effect of magnetic fields on the slow gravity-driven waves is in order. For the slow waves, the first term on the left-hand side of Eq. (1) may be neglected, which implies that such waves in an unmagnetized medium become unstable when $\gamma<(1+1/n)$. This is consistent with the energy principle of Newcomb (1961). When a magnetic field is present, by applying the energy principle of Newcomb, we find, for $K_{X}\neq 0$, that the instability threshold on $\gamma$ for the gravity waves is lifted to $\gamma<(1+1/n)(1+1/\beta)$. (For $K_{X}=0$, the criterion for the gravity waves to become unstable is slightly modified: $\gamma<(1+1/n)(1+1/\beta)-2/\beta$.) When the magnetic field is very strong ($\beta\ll 1$), the regular perturbation series in powers of $\epsilon\,(\propto 1/\sqrt{\beta})$ is possibly inadequate for unstable gravity-driven waves. Such considerations are clearly beyond the scope of the present paper. ## 6 Conclusions Here we derive, for pressure-driven waves, an accurate and simple analytical formula that captures the effects of the magnetic field and five other parameters: the adiabatic index, the polytropic index, the eigenmode state index, the wavenumbers, and the angle between the wavevector and the magnetic field [$\theta=\cos^{-1}(\hat{\mathbf{K}}\cdot\hat{\bm{B}}_{0})$]. Such a six-parameter-dependent formula is distilled using a perturbative solution to the magnetized version of Lamb’s hydrodynamic polytropic waves. 
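The two instability thresholds quoted above can be packaged for quick comparison; a small sketch ($\beta$ here is the plasma beta, and the function merely encodes the criteria stated in the text):

```python
def gamma_threshold(n, beta, kx_zero=False):
    """Critical adiabatic index below which the gravity-driven waves
    become unstable, per the energy-principle results quoted above."""
    threshold = (1 + 1/n)*(1 + 1/beta)
    if kx_zero:                 # the K_X = 0 case carries an extra -2/beta
        threshold -= 2/beta
    return threshold
```

In the weak-field limit $\beta\to\infty$, both cases reduce to the hydrodynamic threshold $\gamma<1+1/n$.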
Our explicit analytical formula overcomes the limitation of previously attempted formulae for the magnetized polytrope that were presented in general integral form; such formulations, for instance those of Gough & Thompson [1990; e.g., Eq. (4.11)] and Bogdan & Cally (1997), require numerical evaluation of the eigenfrequencies, and thus leave out the critical step of obtaining an analytical understanding and expression. We achieve this here, guided by our numerical solutions, at the cost of extensive use of Mathematica for our perturbative analyses. We emphasize that the closed-form expression is not obtained solely by perturbative calculation; at the last step, we needed guidance from our numerical solutions. A possible explanation for the inadequacy of the perturbative calculation alone can be the ordering scheme in the perturbation series, which in this paper has two small parameters—one that corresponds to the WKB short-wavelength limit approximation, and the other that is the actual small parameter, the inverse of the Alfvénic Mach number. This implies that future work should develop a better perturbation theory to take this inadequacy into account and to generalize our methodology to other complex wave problems. The simplicity and accuracy of our formula encourage employing it to help solve the inverse problem of magnetoseismology. Our formula provides an explicit, analytical dependence of the observed surface oscillation frequency on the orientation and strength of the deep-seated and subsurface magnetic field. Such information can be crucial for predicting the surface-emergence location and strength of sunspots. Precursors of such emergence may be detected by analyzing the anisotropic magnetic effect in stellar ring diagrams—the rings of constant frequencies over the wavenumber plane. 
Nonlinear asteroseismology can also directly benefit from our analytical work, as the weak-turbulence theory of asteroseismic waves inevitably requires accurate and simple expressions for the linear wave frequencies in resonant triad interactions. The effect of the magnetic field on such interactions is completely unknown. However, observations now exist that suggest resonant mode interactions are possible and can be a critical element of strongly pulsating stars (Guo, 2020). Nonlinear mode coupling of linear eigenmodes (Tripathi et al., 2023a, b) may also need to be analyzed, in addition to mode resonances. Future planned research will directly take advantage of the formula derived here to assess the role of the magnetic field and other parameters in asteroseismic wave turbulence, a field that is poised to soon grow out of its infancy. ## Acknowledgments We are pleased to thank Professor Ellen G. Zweibel for helpful discussions and for her contribution in applying the energy principle of Newcomb (1961) to our system. The inception of this paper took place at the Nordic Institute for Theoretical Physics (NORDITA), Sweden, while B.T. was visiting on a research fellowship generously made available by NORDITA. ## Data Availability The Mathematica notebook—built to perform the lengthy analytical manipulations—and the Dedalus script—used for the numerical solutions in this paper—are available at https://github.com/BindeshTripathi/polytrope. ## Appendix A The matrix elements $M_{ij}$ and $h_{\nu}$ that appear in Eq. 
(24) are $\displaystyle M_{11}$ $\displaystyle=-\Omega^{2}(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2),$ (A1a) $\displaystyle M_{12}$ $\displaystyle=0,$ (A1b) $\displaystyle M_{13}$ $\displaystyle=\frac{-iK_{X}(n+1)(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2)}{\gamma},$ (A1c) $\displaystyle M_{21}$ $\displaystyle={\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}K_{X}K_{Y}Z,$ (A1d) $\displaystyle M_{22}$ $\displaystyle=-\Omega^{2}(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2)+{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}K_{X}^{2}Z,$ (A1e) $\displaystyle M_{23}$ $\displaystyle=\frac{-iK_{Y}(n+1)(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2)}{\gamma},$ (A1f) $\displaystyle M_{31}$ $\displaystyle=\frac{iK_{X}}{\gamma}\left[1+{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}\left(\frac{3\gamma}{2}+\frac{K_{X}^{2}Z}{\Omega^{2}}\right)\right],$ (A1g) $\displaystyle M_{32}$ $\displaystyle=\frac{iK_{Y}}{\gamma}\left[1+{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}\left(\frac{\gamma}{2}+\frac{K_{X}^{2}Z}{\Omega^{2}}\right)\right],$ (A1h) $\displaystyle M_{33}$ $\displaystyle=\frac{-\Omega^{2}(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2)+{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}K_{X}^{2}Z}{n+1},$ (A1i) $\displaystyle h_{x}$ $\displaystyle=iK_{X}Z\hat{\chi},$ (A1j) $\displaystyle h_{y}$ $\displaystyle=iK_{Y}Z\hat{\chi}(1+{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}),$ (A1k) $\displaystyle h_{z}$ $\displaystyle=\hat{\chi}+\frac{Z\partial_{Z}\hat{\chi}}{n+1}+{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}\hat{\chi}\left[1+\frac{K_{X}^{2}Z\left\\{\gamma-(n+1)(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2)\right\\}}{(n+1)\gamma\Omega^{2}(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2)}\right]+{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}\frac{Z\partial_{Z}\hat{\chi}}{n+1}\left[1+\frac{K_{X}^{2}Z}{\Omega^{2}(1+\gamma{\mathbb{\color[rgb]{0,0,0}\epsilon}}^{2}/2)}\right].$ (A1l) ## Appendix B The parameters introduced in Eq. 
(31b), while writing the expression for $\Gamma_{2}(Z)$, appear below: $\displaystyle b_{2}$ $\displaystyle=\frac{-K^{3}\cos^{4}\theta}{\Omega^{2}},$ (A2a) $\displaystyle b_{1}$ $\displaystyle=\frac{-K\left[-\Omega^{8}\gamma^{4}+\Omega^{4}\gamma^{2}K^{2}(n+1)\left\\{(n+1+\gamma-\gamma n)\cos(2\theta)-n-1-3\gamma\right\\}+2K^{4}(n+1)^{4}-K^{4}(n+1)^{3}(n+1-\gamma n)\cos(2\theta)\right]}{2\Omega^{4}\gamma^{2}\mathrm{sec}^{2}\theta\,\left[\Omega^{4}\gamma^{2}-K^{2}(n+1)^{2}\right]},$ (A2b) $\displaystyle b_{0}$ $\displaystyle=\frac{\Bigg{(}\begin{gathered}4\Omega^{8}\gamma^{4}(2-\gamma)-2\Omega^{4}\gamma^{2}K^{2}\left\\{2(n+1)^{2}(4-3\gamma)+n(2n+3)\gamma^{2}\right\\}+K^{4}(n+1)^{2}\left\\{n\gamma^{2}(2n+3)-8(n+1)^{2}(\gamma-1)\right\\}\\\ +2K^{2}\gamma\cos(2\theta)\left[K^{2}(n+1)^{2}\left\\{n(\gamma-4+2n(\gamma-1))-2\right\\}+\Omega^{4}\gamma^{2}\left\\{2(n+1)^{2}-\gamma n(2n+3)\right\\}\right]+n\gamma^{2}K^{4}(n+1)^{2}\cos(4\theta)\end{gathered}\Bigg{)}}{16\Omega^{2}\gamma^{2}K\left[\Omega^{4}\gamma^{2}-K^{2}(n+1)^{2}\right]}$ (A2e) ## Appendix C Due to the magnetic field, the locations of the turning points, $Z_{1}$ and $Z_{2}$, shift—which to the second order in ${\mathbb{\color[rgb]{0,0,0}\epsilon}}$ in Eq. (33) are given by $Z_{1}^{(2)}$ and $Z_{2}^{(2)}$: $\displaystyle Z_{1}^{(2)}$ $\displaystyle=\frac{\alpha(b_{2}\alpha^{2}+b_{1}\alpha+b_{0})}{K(\beta-\alpha)},$ (A3a) $\displaystyle Z_{2}^{(2)}$ $\displaystyle=\frac{-\beta(b_{2}\beta^{2}+b_{1}\beta+b_{0})}{K(\beta-\alpha)}.$ (A3b) ## References * Adam (1977) Adam, J. 1977, Solar Physics, 52, 293 * Aerts et al. (2010) Aerts, C., Christensen-Dalsgaard, J., & Kurtz, D. W. 2010, Asteroseismology (Springer Science & Business Media) * Bender & Orszag (1978) Bender, C. M., & Orszag, S. A. 1978, Advanced Mathematical Methods for Scientists and Engineers * Bogdan & Cally (1997) Bogdan, T. J., & Cally, P. S. 1997, Proceedings of the Royal Society of London Series A, 453, 943, doi: 10.1098/rspa.1997.0052 * Burns et al. 
(2020) Burns, K. J., Vasil, G. M., Oishi, J. S., Lecoanet, D., & Brown, B. P. 2020, Physical Review Research, 2, 023068, doi: 10.1103/PhysRevResearch.2.023068 * Cally (2007) Cally, P. S. 2007, Astronomische Nachrichten, 328, 286, doi: 10.1002/asna.200610731 * Cally & Bogdan (1997) Cally, P. S., & Bogdan, T. 1997, The Astrophysical Journal, 486, L67 * Campos (1983) Campos, L. 1983, Wave Motion, 5, 1 * Campos & Marta (2015) Campos, L., & Marta, A. 2015, Geophysical & Astrophysical Fluid Dynamics, 109, 168 * Chandrasekhar (1961) Chandrasekhar, S. 1961, Hydrodynamic and hydromagnetic stability, 652 pp., clarendon, Oxford, UK * Das (2022) Das, S. B. 2022, The Astrophysical Journal, 940, 92 * Das et al. (2020) Das, S. B., Chakraborty, T., Hanasoge, S. M., & Tromp, J. 2020, The Astrophysical Journal, 897, 38 * Gough & Thompson (1990) Gough, D., & Thompson, M. 1990, Monthly Notices of the Royal Astronomical Society, 242, 25 * Guo (2020) Guo, Z. 2020, ApJ, 896, 161, doi: 10.3847/1538-4357/ab911f * Hasselmann (1962) Hasselmann, K. 1962, Journal of Fluid Mechanics, 12, 481, doi: 10.1017/S0022112062000373 * Ilonidis et al. (2011) Ilonidis, S., Zhao, J., & Kosovichev, A. 2011, Science, 333, 993, doi: 10.1126/science.1206253 * Lamb (1911) Lamb, H. 1911, Proceedings of the Royal Society of London Series A, 84, 551, doi: 10.1098/rspa.1911.0008 * Nazarenko & Lukaschuk (2016) Nazarenko, S., & Lukaschuk, S. 2016, Annual Review of Condensed Matter Physics, 7, 61, doi: 10.1146/annurev-conmatphys-071715-102737 * Newcomb (1961) Newcomb, W. A. 1961, Physics of Fluids, 4, 391, doi: 10.1063/1.1706342 * Nye & Thomas (1976) Nye, A. H., & Thomas, J. H. 1976, ApJ, 204, 573, doi: 10.1086/154205 * Schunker et al. (2005) Schunker, H., Braun, D. C., Cally, P. S., & Lindsey, C. 2005, The Astrophysical Journal, 621, L149 * Singh et al. (2015) Singh, N. K., Brandenburg, A., Chitre, S., & Rheinhardt, M. 2015, Monthly Notices of the Royal Astronomical Society, 447, 3708 * Singh et al. (2014) Singh, N. 
K., Brandenburg, A., & Rheinhardt, M. 2014, The Astrophysical Journal Letters, 795, L8 * Singh et al. (2016) Singh, N. K., Raichur, H., & Brandenburg, A. 2016, The Astrophysical Journal, 832, 120 * Singh et al. (2020) Singh, N. K., Raichur, H., Käpylä, M. J., et al. 2020, Geophysical & Astrophysical Fluid Dynamics, 114, 196 * Thomas (1983) Thomas, J. H. 1983, Annual Review of Fluid Mechanics, 15, 321 * Tripathi (2022) Tripathi, B. 2022, Phys. Rev. D, 105, 036010, doi: 10.1103/PhysRevD.105.036010 * Tripathi et al. (2023a) Tripathi, B., Fraser, A. E., Terry, P. W., et al. 2023a, Physics of Plasmas, 30, 072107, doi: 10.1063/5.0156560 * Tripathi & Mitra (2022) Tripathi, B., & Mitra, D. 2022, ApJ, 934, 61, doi: 10.3847/1538-4357/ac79b1 * Tripathi et al. (2023b) Tripathi, B., Terry, P. W., Fraser, A. E., Zweibel, E. G., & Pueschel, M. J. 2023b, Physics of Fluids, 35, 105151, doi: 10.1063/5.0167092 * Van Beeck et al. (2021) Van Beeck, J., Bowman, D. M., Pedersen, M. G., et al. 2021, A&A, 655, A59, doi: 10.1051/0004-6361/202141572 * Van Beeck et al. (2023) Van Beeck, J., Van Hoolst, T., Aerts, C., & Fuller, J. 2023, arXiv e-prints, arXiv:2311.02972, doi: 10.48550/arXiv.2311.02972
# Conformal partial waves in momentum space Marc Gillioz (_SISSA, via Bonomea 265, 34136 Trieste, Italy_) ###### Abstract The decomposition of 4-point correlation functions into conformal partial waves is a central tool in the study of conformal field theory. We compute these partial waves for scalar operators in Minkowski momentum space, and find a closed-form result valid in arbitrary space-time dimension $d\geq 3$ (including non-integer $d$). Each conformal partial wave is expressed as a sum over ordinary spin partial waves, and the coefficients of this sum factorize into a product of vertex functions that only depend on the conformal data of the incoming, respectively outgoing operators. As a simple example, we apply this conformal partial wave decomposition to the scalar box integral in $d=4$ dimensions. ###### Contents 1. 1 Introduction 2. 2 Spin eigenstates and completeness relation 1. 2.1 Kinematics of the 4-point function 2. 2.2 Polarization tensors 3. 2.3 Spin eigenstates and normalization 4. 2.4 Completeness relation 3. 3 Vertex functions 1. 3.1 Conformal Ward identities 2. 3.2 Highest-spin functions in various orderings 3. 3.3 Lower spin functions 4. 3.4 Special kinematic limits 4. 4 Example: the scalar box integral in 4 dimensions 1. 4.1 CFT optical theorem 2. 4.2 IR divergences 3. 4.3 Conformal partial wave expansion 5. 5 Conclusions ## 1 Introduction Conformal field theory (CFT) can be defined non-perturbatively from its 2- and 3-point correlation functions, thanks to a convergent operator product expansion (OPE) that reduces the computation of higher-point functions to these elementary building blocks. In particular, the decomposition of 4-point functions into conformal partial waves is at the heart of the modern conformal bootstrap program. The bootstrap exploits the self-consistency of the OPE to put constraints on the _dynamics_ of a theory encoded in the CFT data [1, 2, 3, 4]. The problem of computing the conformal partial waves, i.e. 
determining the _kinematics_ underlying 4-point correlators, was solved long ago [5, 6]. Closed-form results exist for scalar operators in dimensions $d=2$ and $4$, and various techniques for an efficient computation of the conformal partial waves in arbitrary dimensions have been developed, both for scalars [7, 8, 9, 10, 11] and for operators that carry spin [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]. In this work, we address a different but related problem: we compute the conformal partial waves in the momentum-space representation of correlation functions in Minkowski space, as opposed to the ordinary Euclidean position-space approach. We focus on correlation functions involving 4 scalar primary operators $\phi_{i}$, each carrying momentum $p_{i}$, in $d\geq 3$ dimensions, of the form $\left\langle 0\right|[\phi_{1}(p_{1})\phi_{2}(p_{2})][\phi_{3}(p_{3})\phi_{4}(p_{4})]\left|0\right\rangle\equiv(2\pi)^{d}\delta^{d}(p_{1}+p_{2}+p_{3}+p_{4})\,G(p_{1},p_{2},p_{3}),$ (1.1) where the square brackets around pairs of operators on the left-hand side can mean any of the following: no particular ordering (Wightman function), a commutator (possibly retarded or advanced), a time-ordered product or its hermitian conjugate. The conformal partial wave expansion for $G$ takes the form $G(p_{1},p_{2},p_{3})=s^{(\Delta_{1}+\Delta_{2}+\Delta_{3}+\Delta_{4}-3d)/2}\sum_{\mathcal{O}}\lambda_{12\mathcal{O}}\lambda_{\mathcal{O}34}G_{\Delta,\ell}(p_{1},p_{2},p_{3})$ (1.2) where the sum is over primary operators $\mathcal{O}$ with scaling dimension $\Delta$ and spin $\ell$, $\lambda_{12\mathcal{O}}$ and $\lambda_{\mathcal{O}34}$ are OPE coefficients encoding the dynamical information about the theory, and $s$ is the squared center-of-mass energy $s=(p_{1}+p_{2})^{2}=(p_{3}+p_{4})^{2}>0,$ (1.3) which is strictly positive by the spectral condition on the correlation function (1.1). 
The conformal partial waves $G_{\Delta,\ell}$ are completely fixed by symmetry in terms of $\Delta$ and $\ell$, of the scaling dimensions $\Delta_{i}$ of the external operators (which may be distinct or not) and of the space-time dimension $d$ (possibly including non-integer dimensions, as our results are analytic in $d\geq 3$). We obtain $\boxed{G_{\Delta,\ell}(p_{1},p_{2},p_{3})=\sum_{m=0}^{\ell}C_{\Delta,\ell,m}\mathcal{C}_{m}^{(d-3)/2}(\cos\theta)V_{\Delta,\ell,m}^{[12]}\left(\frac{p_{1}^{2}}{s},\frac{p_{2}^{2}}{s}\right)V_{\Delta,\ell,m}^{[34]}\left(\frac{p_{3}^{2}}{s},\frac{p_{4}^{2}}{s}\right),}$ (1.4) where $\theta$ is the scattering angle, defined in terms of the momenta $p_{i}$ in eq. (2.10), and $\mathcal{C}^{(d-3)/2}_{m}(\cos\theta)$ a Gegenbauer polynomial of degree $m$ that appears in the ordinary spin partial wave expansion in $d$ dimensions. The numerical factor $C_{\Delta,\ell,m}\geq 0$ is given in eq. (2.35), and the “vertex functions” $V_{\Delta,\ell,m}^{[ab]}$ are defined in section 3, depending on which ordering is understood in the definition (1.1). This result admits a simple diagrammatic representation given in figure 1. One observes a factorization in the sense that the dependence on the external CFT data (the scaling dimensions $\Delta_{i}$) and on the “invariant masses” $p_{i}^{2}$ is entirely contained in the vertex functions $V_{\Delta,\ell,m}^{[ab]}$. Note that we have used the evident notation $p_{4}^{2}=(p_{1}+p_{2}+p_{3})^{2}$. Figure 1: Diagrammatic representation of the conformal partial wave (1.4), in which each line corresponds to a local primary operator of the CFT carrying a certain momentum. Note that the vertex functions $V_{\Delta,\ell,m}^{[ab]}$ only depend on the scaling dimensions, on the spin (the total spin $\ell$ and its projection $m$ onto a reference direction), and on the momenta of the lines attached to it. 
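The summation structure of eq. (1.4) is straightforward to put into code once the vertex functions are known. As an illustration only, the following sketch assembles a partial wave as a finite sum over $m$; the coefficient list `C` and the callables `V12`, `V34` are hypothetical placeholders standing in for the coefficients $C_{\Delta,\ell,m}$ of eq. (2.35) and for the vertex functions of section 3, not quantities taken from the paper.

```python
# Sketch of eq. (1.4): a conformal partial wave as a finite sum of spin components.
# C, V12, V34 are hypothetical placeholders; only the summation structure
# (Gegenbauer polynomial times two factorized vertex functions) is the paper's.
from scipy.special import eval_gegenbauer

def partial_wave(d, ell, cos_theta, C, V12, V34, w):
    w1, w2, w3, w4 = w  # rescaled invariant masses p_i^2 / s
    return sum(C[m]
               * eval_gegenbauer(m, (d - 3) / 2, cos_theta)  # C_m^{(d-3)/2}(cos θ)
               * V12[m](w1, w2) * V34[m](w3, w4)
               for m in range(ell + 1))
```

With trivial vertex functions $V\equiv 1$ and unit coefficients, the sum reduces to $\sum_{m=0}^{\ell}\mathcal{C}_m^{(d-3)/2}(\cos\theta)$, which is a quick way to sanity-check the Gegenbauer conventions.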
Sections 2 and 3 of this work are dedicated to the derivation of the momentum-space conformal partial waves (1.4). This derivation does not rely on solving a Casimir equation as in ordinary position space: in fact, since the conformal Ward identities are second-order differential equations with respect to the momenta $p_{i}$, one cannot make use of invariant cross-ratios in momentum space. Instead, the 4-point function depends on six Lorentz-invariant quantities (taken here to be all four $p_{i}^{2}$, $s$ and $\cos\theta$), and solving a differential equation in as many variables appears to be a formidable task. This disadvantage is however counterbalanced by two elements that are unique to the Minkowski momentum-space representation of correlation functions: * • The orthogonality of momentum eigenstates, which makes it possible to factorize the conformal partial wave into a product of 3-point functions, up to the contraction of Lorentz indices. * • The ability to use a reference frame, the center-of-mass frame, in which the 2-point function of the intermediate operator can be decomposed into polarizations that transform under irreducible representations of the rotation group $\text{SO}(d-1)$. These points have been put forward in a recent derivation of position-space conformal blocks [24], as well as in Mack’s classification of all the representations of the Lorentzian conformal group in $d=4$ [30]. The operator product expansion in general and the conformal partial wave expansion (1.2) in particular are nothing but the use of a Hilbert space completeness relation, which in a CFT can be made explicit thanks to the two properties listed above. In fact, the correlation function (1.1) computes the overlap between two states of the theory: one state obtained by acting on the vacuum with the product of operators $[\phi_{3}\phi_{4}]$, and the other state obtained by acting with $[\phi_{1}\phi_{2}]^{\dagger}$. 
The partial wave (1.4) corresponds to the projection of this overlap of states onto a conformal family consisting of a primary operator and of all of its descendants. A corollary of this observation is that each conformal partial wave is positive in the forward scattering limit $p_{1}\to-p_{4}$, $p_{2}\to-p_{3}$ provided that $\phi_{1}=\phi_{4}$ and $\phi_{2}=\phi_{3}$. In this case the vertex functions are complex conjugates of each other, $V_{\Delta,\ell,m}^{[12]}=\big{(}V_{\Delta,\ell,m}^{[34]}\big{)}^{*}$, and the Gegenbauer polynomial evaluated at zero scattering angle ($\cos\theta=1$) is positive, hence $G_{\Delta,\ell}\geq 0$. The positivity of this configuration and its decomposition into conformal partial waves have actually been used in the derivation of sum rules for anomaly coefficients [31, 32]. It should be emphasized that the conformal partial wave expansion described in this work relies heavily on the Hamiltonian evolution being unitary, i.e. given by $\exp(iHt)$ where $t$ is the Lorentzian time and $H$ a Hamiltonian operator bounded below. In a Euclidean theory, where this evolution is replaced by $\exp(-H\tau)$ with Euclidean “time” $\tau$, the OPE only makes sense for correlation functions that are time-ordered, a property that is lost upon Fourier transform. (If one uses radial quantization instead, this applies to any configuration which is equivalent to a radially-ordered one under a global conformal symmetry transformation, which in practice extends to most of the space of configurations. The conformal bootstrap works precisely because it uses configurations that can be expanded into two or more distinct OPEs by different choices of quantization surfaces.) 
For this reason, the conformal partial wave expansion (1.2) differs at a fundamental level from other decompositions of the momentum-space 4-point function, such as the decomposition into Witten exchange diagrams in the case of a holographic theory (usually formulated in Mellin space [33, 34, 35]), or the decomposition into symmetric Polyakov blocks expressed directly in Euclidean momentum space [36, 37]. (There is, however, an interesting connection between the conformal partial wave expansion described in this work and the Polyakov-Regge blocks recently introduced [38], through the generalization in momentum space of the concept of CFT dispersion relations [39]; we leave the exploration of this avenue for future work.) For the same reason, it is difficult to compare our conformal partial wave expansion with recent results on 4-point correlators in Euclidean momentum space [40, 41, 42, 43, 44]. On the other hand, eqs. (1.2) and (1.4) are immediately suitable for comparison with results obtained using ordinary Feynman diagram computations in theories that are both perturbative and conformal. While the goal of this work is not to study such situations systematically, we present in section 4 a simple example that involves the real part of the scalar box integral in four dimensions. This box integral appears in the 4-point function of the composite operator $\Phi^{2}$ in the theory of a free boson $\Phi$, or in that of the gauge-invariant operator $\operatorname{tr}(\Phi^{i}\Phi^{i})$ in $\mathcal{N}=4$ supersymmetric Yang-Mills. The general form of this loop integral with off-shell momenta is known thanks to its dual conformal [45] or Yangian invariance [46]. Besides perturbative CFT, the conformal partial waves (1.4) will certainly find applications in other contexts. 
One of their salient features is the presence of double zeroes at the scaling dimensions of double-trace operators, a property reminiscent of the “double-twist” functional methods for the conformal bootstrap [47, 48, 49, 50, 51, 52]. However, it should be emphasized that the correlator (1.1) is ill-suited for writing a bootstrap equation, as it does not have crossing symmetry built in. Microcausality in Minkowski momentum space implies certain analyticity properties of the correlation functions that might eventually be used to place bounds on the CFT data, but the relevant methods remain to be developed. Similarly, our conformal partial wave expansion could also find use in the cosmological bootstrap program [53, 54, 55, 56, 57, 58, 59, 60], even though the connection between correlators in Minkowski momentum space and observables in de Sitter remains to be established (see ref. [61] for hints in this direction). For now, we provide the decomposition (1.2) as a tool and encourage the community to find its own applications. ## 2 Spin eigenstates and completeness relation The core of the computation of the conformal partial waves is the formulation of a Hilbert space completeness relation. In this section, we assemble the different elements, construct a complete basis of states, and detail its dependence on the kinematic invariants of the 4-point function. The computation of the overlap of these states with the pairs of operators $[\phi_{1}\phi_{2}]$ and $[\phi_{3}\phi_{4}]$ is discussed instead in section 3. ### 2.1 Kinematics of the 4-point function We begin with a detailed description of the kinematics of the 4-point function, as it will play a central role in what follows. As seen in eq. (1.1), momentum-space correlation functions involving $n$ operators are always proportional to a delta function imposing momentum conservation, and hence depend non-trivially on $n-1$ momenta only. 
It is convenient to introduce the double-bracket notation $\left\langle 0\right|\phi_{1}(p_{1})\cdots\phi_{n}(p_{n})\left|0\right\rangle\equiv(2\pi)^{d}\delta^{d}(p_{1}+\ldots+p_{n})\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\phi_{1}(p_{1})\cdots\phi_{n}(p_{n})\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}.$ (2.1) Whenever this notation is used, it is understood that one can trade one of the momenta for the sum of the others, say $p_{n}=-(p_{1}+\ldots+p_{n-1})$. For the 4-point function, this means that there are 3 independent momenta, and hence 6 Lorentz invariants. (We are always working in $d\geq 3$ dimensions; in $d=2$, three vectors cannot be linearly independent and there is one fewer invariant.) In analogy with scattering amplitudes, one can use the “invariant masses” $p_{1}^{2},\quad p_{2}^{2},\quad p_{3}^{2},\quad p_{4}^{2}=(p_{1}+p_{2}+p_{3})^{2},$ (2.2) as well as the Mandelstam variables $s=(p_{1}+p_{2})^{2},\qquad t=(p_{1}+p_{3})^{2},\qquad u=(p_{2}+p_{3})^{2},$ (2.3) subject to the constraint $s+t+u=\sum p_{i}^{2}$. Unlike scattering amplitudes, however, we do not require the $p_{i}^{2}$ to take any special value. In fact, we even want to be agnostic about their sign: the momenta $p_{i}$ might be space-like, time-like, or even light-like. However, the specific ordering of the operators in eq. (1.1) does imply a constraint on the momentum flowing between the pairs $[\phi_{1}\phi_{2}]$ and $[\phi_{3}\phi_{4}]$: defining $p\equiv-p_{1}-p_{2}=p_{3}+p_{4},$ (2.4) the correlation function (1.1) vanishes by the spectral condition whenever $p$ lies outside the forward light cone, i.e. unless $p^{0}>0,\qquad\text{and}\qquad s=p^{2}>0.$ (2.5) We will assume henceforth that these two conditions are fulfilled. 
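The kinematic constraint $s+t+u=\sum p_{i}^{2}$ holds identically once $p_{4}=-(p_{1}+p_{2}+p_{3})$, and it is easy to confirm numerically. A minimal sketch, with arbitrary (unconstrained) real momenta in $d=4$ and the mostly-minus metric:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # mostly-minus Minkowski metric, d = 4
dot = lambda a, b: a @ eta @ b

p1, p2, p3 = rng.normal(size=(3, 4))        # three arbitrary real momenta
p4 = -(p1 + p2 + p3)                        # overall momentum conservation

s = dot(p1 + p2, p1 + p2)
t = dot(p1 + p3, p1 + p3)
u = dot(p2 + p3, p2 + p3)
# s + t + u equals the sum of the four invariant masses squared
assert abs(s + t + u - sum(dot(p, p) for p in (p1, p2, p3, p4))) < 1e-10
```

Note that generic random momenta need not satisfy the spectral condition (2.5); the identity above is purely algebraic and holds regardless.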
With $s$ strictly positive, one can rescale all dimensionful quantities by $s$, including the 4-point function (1.1) whose overall scaling dimension is fixed by its operator content and by the space-time dimension. Since we chose in the definition (1.2) to factor out precisely the appropriate power of $s$, $G_{\Delta,\ell}$ is dimensionless. It is a function of the rescaled invariant masses $w_{i}\equiv\frac{p_{i}^{2}}{s},$ (2.6) and of a fifth dimensionless variable that we define as follows: we introduce two new linear combinations of the momenta, $q_{12}^{\mu}\equiv\frac{(p\cdot p_{2})p_{1}^{\mu}-(p\cdot p_{1})p_{2}^{\mu}}{\sqrt{(p_{1}\cdot p_{2})^{2}-p_{1}^{2}p_{2}^{2}}},\qquad\qquad q_{34}^{\mu}\equiv\frac{(p\cdot p_{3})p_{4}^{\mu}-(p\cdot p_{4})p_{3}^{\mu}}{\sqrt{(p_{3}\cdot p_{4})^{2}-p_{3}^{2}p_{4}^{2}}},$ (2.7) which have the property of being both orthogonal to $p$, and normalized in units of $s$, $q_{12}\cdot p=q_{34}\cdot p=0,\qquad\qquad q_{12}^{2}=q_{34}^{2}=-s.$ (2.8) The definitions (2.7) are non-singular as long as $p_{1}$ and $p_{2}$ are not collinear, and likewise $p_{3}$ and $p_{4}$; the special configurations in which some of these momenta are collinear can eventually be reached as limits of the general case. We now define $\theta$ to be the angle between these two vectors, $\cos\theta\equiv-\frac{q_{12}\cdot q_{34}}{s},$ (2.9) or in terms of the Mandelstam invariants $\boxed{\cos\theta=\frac{s(u-t)-(p_{1}^{2}-p_{2}^{2})(p_{3}^{2}-p_{4}^{2})}{\sqrt{(s-p_{1}^{2}-p_{2}^{2})^{2}-4p_{1}^{2}p_{2}^{2}}\sqrt{(s-p_{3}^{2}-p_{4}^{2})^{2}-4p_{3}^{2}p_{4}^{2}}}.}$ (2.10) $\theta$ corresponds in fact to the scattering angle in a $2\to 2$ inelastic process in which the four “particles” have distinct masses (possibly including negative masses squared). 
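The properties (2.8) and the equivalence between the angle defined from $q_{12}\cdot q_{34}$ and the Mandelstam form (2.10) can be verified numerically. The sketch below (with arbitrary numerical values, not taken from the paper) builds a configuration with $p$ inside the forward light cone in $d=3$, constructs $q_{12}$ and $q_{34}$ from eq. (2.7), and compares the two expressions for $\cos\theta$:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0])            # mostly-minus metric, d = 3
dot = lambda a, b: a @ eta @ b

# configuration with p = -p1-p2 = p3+p4 inside the forward light cone
p  = np.array([2.0, 0.0, 0.0])              # p^0 > 0, s = p^2 = 4 > 0
p1 = rng.normal(size=3); p2 = -p - p1
p3 = rng.normal(size=3); p4 =  p - p3
s  = dot(p, p)

def q_of(pa, pb):                           # numerator/denominator of eq. (2.7)
    num = dot(p, pb) * pa - dot(p, pa) * pb
    return num / np.sqrt(dot(pa, pb)**2 - dot(pa, pa) * dot(pb, pb))

q12, q34 = q_of(p1, p2), q_of(p4, p3)       # note the index order in eq. (2.7)

# eq. (2.8): orthogonal to p, normalized in units of s
assert abs(dot(q12, p)) < 1e-8 and abs(dot(q34, p)) < 1e-8
assert abs(dot(q12, q12) + s) < 1e-8 and abs(dot(q34, q34) + s) < 1e-8

# angle from the q vectors versus the Mandelstam form of eq. (2.10)
cos_from_q = -dot(q12, q34) / s
t = dot(p1 + p3, p1 + p3); u = dot(p2 + p3, p2 + p3)
m2 = [dot(x, x) for x in (p1, p2, p3, p4)]
num = s * (u - t) - (m2[0] - m2[1]) * (m2[2] - m2[3])
den = (np.sqrt((s - m2[0] - m2[1])**2 - 4 * m2[0] * m2[1])
       * np.sqrt((s - m2[2] - m2[3])**2 - 4 * m2[2] * m2[3]))
assert abs(num / den - cos_from_q) < 1e-8
```

Since $q_{12}$ and $q_{34}$ are space-like and orthogonal to the time-like $p$, the ratio $-q_{12}\cdot q_{34}/s$ is automatically bounded by 1 in absolute value, as a cosine should be.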
In the center-of-mass frame, working for definiteness in $d=3$, one can always choose $\displaystyle p$ $\displaystyle=\sqrt{s}\left(1,0,0\right),$ $\displaystyle q_{12}$ $\displaystyle=\sqrt{s}\left(0,1,0\right),$ (2.11) $\displaystyle q_{34}$ $\displaystyle=\sqrt{s}\left(0,\cos\theta,\sin\theta\right),$ which in terms of momenta $p_{i}$ corresponds to $\displaystyle p_{1}$ $\displaystyle=-\frac{\sqrt{s}}{2}\left(1+w_{1}-w_{2},\Omega_{12},0\right),$ $\displaystyle p_{2}$ $\displaystyle=-\frac{\sqrt{s}}{2}\left(1-w_{1}+w_{2},-\Omega_{12},0\right),$ (2.12) $\displaystyle p_{3}$ $\displaystyle=\frac{\sqrt{s}}{2}\left(1+w_{3}-w_{4},-\Omega_{34}\cos\theta,-\Omega_{34}\sin\theta\right),$ where we have defined $\Omega_{12}=\sqrt{(1-w_{1}-w_{2})^{2}-4w_{1}w_{2}},\qquad\Omega_{34}=\sqrt{(1-w_{3}-w_{4})^{2}-4w_{3}w_{4}}.$ (2.13) The quantities under the square roots are non-negative for any configuration of momenta. Note that $\Omega_{12}$ and $\Omega_{34}$ will play an important role later. The center-of-mass configuration is illustrated in figure 2. Figure 2: A possible configuration of momenta in the center-of-mass frame (energy along the vertical axis). The scattering angle $\theta$ corresponds to the angle between the planes spanned by $(p_{1},p_{2})$ and $(p_{3},p_{4})$, or equivalently between the vectors $q_{12}$ and $q_{34}$. In this particular example $p_{1}$ is time-like (backward directed), while $p_{2}$, $p_{3}$ and $p_{4}$ are all space-like, as seen by their position relative to the light cone (dotted lines); the covariant definition (2.10) is valid independently of the sign of the $p_{i}^{2}$. ### 2.2 Polarization tensors The Hilbert space completeness relation relies on a basis of momentum eigenstates that can be constructed out of a single primary operator insertion acting on the vacuum, for all primaries of the theory. In the case of scalar external operators, only scalar and traceless symmetric tensor operators enter the OPE. 
In order to avoid dealing with the contraction of tensor indices, and more importantly to obtain the factorized result (1.4), it is necessary to introduce a complete orthogonal basis of polarization tensors $\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}$, after which one can write $\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}(p)\left|0\right\rangle\equiv\sum_{m}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}\mathcal{O}^{(\ell,m)}(p)\left|0\right\rangle.$ (2.14) The goal of this section is to provide the explicit construction of such a basis, using only the momenta at hand, so that its transformation under special conformal transformations can later be determined explicitly. Such a construction is most easily done using spinor variables [62], but then it depends explicitly on the space-time dimension $d$. For scalar external operators, we will show that a $d$-independent construction can be performed. The starting point is to realize that, thanks to the possibility of going to the center-of-mass frame (2.11), the polarization tensors $\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}$ can be decomposed into products of the vector $p$ and of invariant tensors of the group of spatial rotations $\text{SO}(d-1)$. An orthogonal basis of such polarization tensors is given by $\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}\equiv s^{-(\ell-m)/2}\left[\varepsilon_{\perp}^{(\mu_{1}\ldots\mu_{m}}p^{\mu_{m+1}}\cdots p^{\mu_{\ell})}-\text{traces}\right],$ (2.15) where the indices on the right-hand side are symmetrized, and the tensors $\varepsilon_{\perp}^{\mu_{1}\ldots\mu_{n}}$ form themselves a basis of traceless, symmetric tensors orthogonal to $p$, $p_{\mu_{1}}\varepsilon_{\perp}^{\mu_{1}\ldots\mu_{n}}=0.$ (2.16) This means that the tensors $\varepsilon_{\perp}^{\mu_{1}\ldots\mu_{n}}$ only have non-zero entries when all indices are spatial. In our case, one can even go further and construct explicitly the tensors $\varepsilon_{\perp}^{\mu_{1}\ldots\mu_{m}}$ out of the momenta at hand. 
The reason is that we are only interested in polarization tensors that live in a plane spanned by a pair of momenta, say $p_{1}$ and $p_{2}$, or equivalently $p$ and $q_{12}$. Any polarization overlapping with a state created by two scalar operators acting on the vacuum is of this form. Using the projector onto the directions transverse to $p$, $\eta_{\perp}^{\mu\nu}\equiv\eta^{\mu\nu}-\frac{p^{\mu}p^{\nu}}{p^{2}},$ (2.17) one can take $\varepsilon_{\perp}^{\mu_{1}\ldots\mu_{m}}(p,q)\equiv\sum_{n=0}^{\lfloor m/2\rfloor}\frac{m!}{n!(m-2n)!}\frac{1}{2^{2n}\left(\frac{d-3}{2}+m-n\right)_{n}}\frac{\eta_{\perp}^{(\mu_{1}\mu_{2}}\cdots\eta_{\perp}^{\mu_{2n-1}\mu_{2n}}q^{\mu_{2n+1}}\cdots q^{\mu_{m})}}{s^{m/2-n}}.$ (2.18) This is a covariant definition in terms of any pair of momenta $p$ and $q$ that satisfy $p\cdot q=0$, $p^{2}=-q^{2}=s$. Similarly, a covariant definition of the more general tensors $\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}$ is directly obtained plugging eq. (2.18) into (2.15). The final result is most easily expressed with the help of an auxiliary polarization vector $z^{\mu}$ satisfying $z^{2}=0$, in terms of which $z_{\mu_{1}}\cdots z_{\mu_{\ell}}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,q)=\frac{(z\cdot p)^{\ell-m}(z\cdot q)^{m}}{s^{\ell/2}}{}_{2}F_{1}\left(-\frac{m}{2},-\frac{m-1}{2};\frac{5-d}{2}-m;\frac{(z\cdot p)^{2}}{(z\cdot q)^{2}}\right).$ (2.19) Note that the hypergeometric series terminates both for even and odd $m$ (either one of the first two parameters is a negative integer), so that the right-hand side is in fact a homogeneous polynomial of degree $\ell$ in $z\cdot p$ and $z\cdot q$. The hypergeometric form is nevertheless convenient for its compactness, and it shows explicitly that the definition (2.19) is analytic in the space-time dimension $d$. 
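The terminating nature of the hypergeometric series in (2.19) is also easy to check numerically. Expanding the series by hand for low $m$ (these explicit polynomials are worked out here, not quoted from the paper) gives $1-x/(d-1)$ for $m=2$ and $1-3x/(d+1)$ for $m=3$, with $x=(z\cdot p)^{2}/(z\cdot q)^{2}$; a direct evaluation confirms this:

```python
from scipy.special import hyp2f1

d, x = 4.2, 0.3   # arbitrary test values; x stands for (z.p)^2 / (z.q)^2

# m = 2: 2F1(-1, -1/2; (5-d)/2 - 2; x) terminates after the linear term
assert abs(hyp2f1(-1.0, -0.5, (5 - d) / 2 - 2, x) - (1 - x / (d - 1))) < 1e-10

# m = 3: 2F1(-3/2, -1; (5-d)/2 - 3; x) likewise terminates (second parameter is -1)
assert abs(hyp2f1(-1.5, -1.0, (5 - d) / 2 - 3, x) - (1 - 3 * x / (d + 1))) < 1e-10
```

In each case one of the first two parameters is a non-positive integer, so the series truncates to a polynomial, consistent with the claim below eq. (2.19).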
It also gives a simple proof of the identities $\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,q)=(-1)^{m}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,-q)=(-1)^{\ell-m}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(-p,q)=(-1)^{\ell}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(-p,-q).$ (2.20) The orthogonality of the basis follows by construction. Its normalization is given by $\varepsilon_{m^{\prime}\mu_{1}\ldots\mu_{\ell}}(p,q)\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,q)=\delta_{m^{\prime}m}(-1)^{m}\mathcal{N}_{\ell,m},$ (2.21) where $\mathcal{N}_{\ell,m}=\frac{m!(\ell-m)!}{\ell!}\frac{(d-2+2m)_{\ell-m}}{2^{\ell-m}\left(\frac{d-2}{2}+m\right)_{\ell-m}}\frac{(d-3)_{m}}{2^{m}\left(\frac{d-3}{2}\right)_{m}}\geq 0.$ (2.22) Moreover, if one considers two distinct polarization tensors defined with respect to two reference vectors $q$ and $q^{\prime}$, both orthogonal to the same vector $p$, then one obtains the identity $\varepsilon_{m^{\prime}\mu_{1}\ldots\mu_{\ell}}(p,q)\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,q^{\prime})=\delta_{m^{\prime}m}(-1)^{m}\mathcal{N}_{\ell,m}\frac{m!}{(d-3)_{m}}\,\mathcal{C}_{m}^{(d-3)/2}(\cos\theta),$ (2.23) where $\theta$ is the angle between $q$ and $q^{\prime}$, and $\mathcal{C}_{m}^{(d-3)/2}$ is a Gegenbauer polynomial. This identity will play a central role in the computation of the conformal partial waves. ### 2.3 Spin eigenstates and normalization We show next how these polarization tensors naturally appear in the 2-point function of primary operators, and how this leads to the construction of an orthogonal basis of spin eigenstates. 
The 2-point function of traceless, symmetric tensor operators in momentum space satisfies [63] $\displaystyle\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}(-p)\mathcal{O}^{\nu_{1}\ldots\nu_{\ell}}(p)\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}$ $\displaystyle=s^{\Delta-d/2}\Theta(p^{0})\Theta(p^{2})B_{\Delta,\ell}\sum_{n=0}^{\ell}\frac{(-1)^{n}2^{n}\ell!}{n!(\ell-n)!}\frac{\left(\frac{d}{2}-\Delta\right)_{n}}{\left(2-\Delta-\ell\right)_{n}}$ $\displaystyle\quad\qquad\times\left[\frac{p^{(\mu_{1}}p^{\nu_{1}}\cdots p^{\mu_{n}}p^{\nu_{n}}\eta^{\mu_{n+1}\nu_{n+1}}\cdots\eta^{\mu_{\ell}\nu_{\ell})}}{s^{n}}-\text{traces}\right],$ (2.24) where $B_{\Delta,\ell}=\frac{(4\pi)^{(d+2)/2}\left(\Delta-1\right)_{\ell}}{2^{2\Delta+1}\Gamma\left(\Delta-\frac{d-2}{2}\right)\Gamma(\Delta+\ell)}.$ (2.25) This result is the unique solution (up to a normalization constant) to the conformal Ward identities for the 2-point function. Alternatively, it can be obtained from the direct Fourier transform of the position-space 2-point function in Lorentzian signature. The Heaviside $\Theta$-functions appearing in eq. (2.24) indicate that the 2-point function vanishes whenever $p$ lies outside the forward light cone. To relate eq. (2.24) with the polarization tensors described above, we consider again its projection onto the plane spanned by $p$ and a vector $q$ orthogonal to it. This amounts to replacing $\eta^{\mu\nu}\rightarrow\frac{p^{\mu}p^{\nu}-q^{\mu}q^{\nu}}{s}$ (2.26) in eq. (2.24). 
A bit of combinatorics then allows one to rewrite the 2-point function (2.24) as $\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}(-p)\mathcal{O}^{\nu_{1}\ldots\nu_{\ell}}(p)\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}=s^{\Delta-d/2}\sum_{m=0}^{\ell}\frac{B_{\Delta,\ell,m}}{\mathcal{N}_{\ell,m}}\,\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(-p,-q)\varepsilon_{m}^{\nu_{1}\ldots\nu_{\ell}}(p,q),$ (2.27) where $B_{\Delta,\ell,m}\equiv\frac{(\Delta-\ell-d+2)_{\ell-m}}{(\Delta+m-1)_{\ell-m}}B_{\Delta,\ell}\geq 0.$ (2.28) Requiring that the 2-point function defines a positive inner product, one actually recovers the well-known unitarity bounds $\Delta\geq(d-2)/2$ for scalars, and $\Delta-\ell\geq d-2$ for traceless, symmetric tensors of spin $\ell$. Note that in the generic case the 2-point function is a sum of $\ell+1$ polarizations, the exception being spinning operators that saturate the unitarity bound: when $\Delta-\ell=d-2$, all $B_{\Delta,\ell,m}$ with $m<\ell$ vanish, so conserved currents only have transverse polarizations $m=\ell$. This construction suggests defining the spin eigenstates $\left|\mathcal{O}^{(\ell,m)}(p{\,|\,}q)\right\rangle\equiv\mathcal{N}_{\ell,m}^{-1}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,q)\mathcal{O}_{\mu_{1}\ldots\mu_{\ell}}(p)\left|0\right\rangle.$ (2.29) The notation indicates that this is a state carrying momentum $p$, with total spin $\ell$, and component $m$ along a direction that is defined by a reference vector $q$. These states form an orthogonal basis, $\big{\langle}\mathcal{O}^{(\ell^{\prime},m^{\prime})}(-p^{\prime}{\,|\,}-q)\big{|}\mathcal{O}^{(\ell,m)}(p{\,|\,}q)\big{\rangle}=(2\pi)^{d}\delta^{d}(p^{\prime}-p)\delta_{\ell^{\prime}\ell}\delta_{m^{\prime}m}\mathcal{N}_{\ell,m}^{-1}B_{\Delta,\ell,m}s^{\Delta-d/2}.$ (2.30) They realize the decomposition (2.14) envisioned at the beginning of this section. 
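The positivity and vanishing patterns of $B_{\Delta,\ell,m}$ just described are straightforward to check numerically from eqs. (2.25) and (2.28). A minimal sketch, using SciPy's Pochhammer symbol and arbitrary test values of $\Delta$, $\ell$ and $d$:

```python
from math import pi
from scipy.special import gamma, poch

def B(Delta, ell, m, d):
    """B_{Δ,ℓ,m} of eqs. (2.25) and (2.28)."""
    B_ell = ((4 * pi)**((d + 2) / 2) * poch(Delta - 1, ell)
             / (2**(2 * Delta + 1) * gamma(Delta - (d - 2) / 2) * gamma(Delta + ell)))
    return poch(Delta - ell - d + 2, ell - m) / poch(Delta + m - 1, ell - m) * B_ell

d, ell = 4, 2
# above the unitarity bound (Δ - ℓ > d - 2), all ℓ+1 polarizations contribute:
assert all(B(4.5, ell, m, d) > 0 for m in range(ell + 1))
# at the bound (conserved current, Δ = ℓ + d - 2), only m = ℓ survives:
assert all(abs(B(ell + d - 2, ell, m, d)) < 1e-12 for m in range(ell))
assert B(ell + d - 2, ell, ell, d) > 0
```

The vanishing for $m<\ell$ comes from the Pochhammer factor $(\Delta-\ell-d+2)_{\ell-m}=(0)_{\ell-m}$ at the bound, which is zero for any $\ell-m\geq 1$.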
Note that an equivalent definition of the spin eigenstates can be obtained more generically by group-theoretical considerations, according to which $m$ labels the representation of the state under the little group that preserves the momentum $p$ [30, 24]. The advantage of our explicit construction in terms of the momentum $p$ and of the reference vector $q$ is that it is immediately suited to examine the transformation properties of the polarization tensors under special conformal transformations, as developed later in section 3.1. ### 2.4 Completeness relation We are now equipped with a complete basis of states that can appear in the OPE of two scalar operators acting on the vacuum. Writing down a completeness relation amounts to inverting the 2-point function (2.30), and this is now a trivial matter since the states are readily orthogonal. The null states corresponding to the longitudinal polarizations ($m<\ell$) of conserved currents can be safely excluded, as we shall see in section 3.3 that their overlap with two operators acting on the vacuum is zero. Similarly, it is sufficient to consider states that carry a momentum $p$ inside the forward light cone $\mathcal{V}_{+}$, as the states are otherwise null. We write therefore $\mathds{1}=\left|0\right\rangle\left\langle 0\right|+\sum_{\mathcal{O}}\sum_{m=0}^{\ell}B_{\Delta,\ell,m}^{-1}\mathcal{N}_{\ell,m}\int\limits_{\mathcal{V}_{+}}\frac{d^{d}k}{(2\pi)^{d}}(k^{2})^{d/2-\Delta}\left|\mathcal{O}^{(\ell,m)}(k{\,|\,}q)\right\rangle\left\langle\mathcal{O}^{(\ell,m)}(-k{\,|\,}-q)\right|,$ (2.31) where the sum is over all primary operators of the theory. Spin representations other than traceless symmetric tensors have been omitted, as they do not appear in the scalar 4-point function. One can also choose different reference vectors $q$ and $q^{\prime}$ for the two states, in which case eq. 
(2.23) leads to the more general completeness relation $\displaystyle\mathds{1}=\left|0\right\rangle\left\langle 0\right|+\sum_{\mathcal{O}}\sum_{m=0}^{\ell}$ $\displaystyle B_{\Delta,\ell,m}^{-1}\mathcal{N}_{\ell,m}\frac{m!}{(d-3)_{m}}\mathcal{C}_{m}^{(d-3)/2}(\cos\theta)$ $\displaystyle\times\int\limits_{\mathcal{V}_{+}}\frac{d^{d}k}{(2\pi)^{d}}(k^{2})^{d/2-\Delta}\left|\mathcal{O}^{(\ell,m)}(k{\,|\,}q)\right\rangle\left\langle\mathcal{O}^{(\ell,m)}(-k{\,|\,}-q^{\prime})\right|,$ (2.32) where $\theta$ is the angle between the vectors $q$ and $q^{\prime}$. It is important to remember that, in the derivation of these completeness relations, we made the constraining assumption that the Lorentz indices of the traceless, symmetric operators only live in the subspace of the $d$-dimensional Minkowski space spanned by the momenta $p$ and $q$ (respectively $q^{\prime}$). This is sufficient when working with the kinematics of the scalar 4-point functions, provided that we use $q_{12}$ and $q_{34}$ defined in eq. (2.7) in place of $q$ and $q^{\prime}$. Plugging the completeness relation (2.32) in the correlator (1.1), and resolving the trivial delta functions, one arrives therefore at $\displaystyle G(p_{1},p_{2},p_{3})$ $\displaystyle=\sum_{\mathcal{O}}s^{d/2-\Delta}\sum_{m=0}^{\ell}C_{\Delta,\ell,m}\mathcal{C}_{m}^{(d-3)/2}(\cos\theta)$ $\displaystyle\qquad\times\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}{}[\phi_{1}(p_{1})\phi_{2}(p_{2})]\mathcal{O}^{(\ell,m)}(p{\,|\,}q_{12})\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\mathcal{O}^{(\ell,m)}(-p{\,|\,}-q_{34})[\phi_{3}(p_{3})\phi_{4}(p_{4})]\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}.$ (2.33) Note that the vacuum state does not contribute for $s>0$. This is now precisely in the form of eqs. 
(1.2) and (1.4), where the vertex functions are given by the 3-point correlators, and we have defined $C_{\Delta,\ell,m}=B_{\Delta,\ell,m}^{-1}\mathcal{N}_{\ell,m}\frac{m!}{(d-3)_{m}},$ (2.34) or explicitly $\boxed{C_{\Delta,\ell,m}=\frac{2^{2\Delta-\ell+1}(m!)^{2}(\ell-m)!(d-2+2m)_{\ell-m}\Gamma\left(\Delta-\frac{d-2}{2}\right)\Gamma(\Delta+\ell)}{(4\pi)^{(d+2)/2}\ell!\left(\frac{d-2}{2}+m\right)_{\ell-m}\left(\frac{d-3}{2}\right)_{m}\left(\Delta-1\right)_{m}(\Delta-\ell-d+2)_{\ell-m}}.}$ (2.35) This coefficient diverges in $d=3$ for $m>0$, but the Gegenbauer polynomial $\mathcal{C}_{m}^{(d-3)/2}$ vanishes in the same case: the product of the two is in fact well-defined in the limit $d\to 3$, and it is proportional to the Chebyshev polynomial $T_{m}(\cos\theta)$. In summary, using polarization tensors defined in terms of the kinematic variables, we have been able to write down a decomposition of the 4-point function into conformal partial waves, and to reduce the computation of these partial waves to a pair of 3-point functions, which we are going to discuss next. ## 3 Vertex functions In this section we describe the computation of the 3-point functions appearing in eq. (2.33). This computation relies essentially on the results of ref. [64], but the projection onto spin eigenstates is new. At first this looks like a major complication, as the standard approach relying on solving conformal Ward identities requires determining the transformation of the polarization tensors under special conformal symmetry. We shall see nonetheless that the results take a particularly simple form for the highest-spin function in each conformal family ($m=\ell$), and that the lower spin functions can then be determined recursively. ### 3.1 Conformal Ward identities The starting point is to strip the 3-point correlation function of its dynamical information, the OPE coefficient, and of an overall power of $s$ fixed by scaling symmetry. 
We write $\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}{}[\phi_{1}(p_{1})\phi_{2}(p_{2})]\mathcal{O}^{(\ell,m)}(p{\,|\,}q_{12})\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}\equiv\lambda_{12\mathcal{O}}\,s^{(\Delta_{1}+\Delta_{2}+\Delta-2d)/2}V_{\Delta,\ell,m}^{[12]}\left(\frac{p_{1}^{2}}{s},\frac{p_{2}^{2}}{s}\right).$ (3.1) $V_{\Delta,\ell,m}^{[12]}$ on the right-hand side is a function of two dimensionless variables only. It is precisely the vertex function that enters in the conformal partial wave (1.4). All information about scale and Poincaré symmetry is already included in this equation and in the definition (2.19) of the polarization tensors. What remains to be determined is actually contained in the action of special conformal transformations, together with a boundary condition that can be traced back to Euclidean position space. Note that the vertex function $V_{\Delta,\ell,m}^{[34]}$ in eq. (1.4) is simply related to $V_{\Delta,\ell,m}^{[12]}$ by hermitian conjugation and a change in the labels $1\to 4$ and $2\to 3$. We shall therefore focus exclusively on $V_{\Delta,\ell,m}^{[12]}$ in this whole section. Working in Minkowski space, there is no choice but to use the infinitesimal form of special conformal transformations, as their exponentiated form does not preserve the causal structure of space-time.
At the level of correlation functions, this means solving a partial differential equation: the Ward identity for special conformal transformations in momentum space is a second-order differential equation that can be written as [65, 66] $\widehat{K}_{12}^{\alpha}\,\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}{}[\phi_{1}(p_{1})\phi_{2}(p_{2})]\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}(-p_{1}-p_{2})\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}=0$ (3.2) where $\widehat{K}_{12}^{\alpha}$ is the differential operator $\widehat{K}_{12}^{\alpha}=\sum_{i=1}^{2}\left[p_{i}^{\beta}\frac{\partial^{2}}{\partial p_{i\alpha}\partial p_{i}^{\beta}}-\frac{1}{2}p_{i}^{\alpha}\frac{\partial^{2}}{\partial p_{i\beta}\partial p_{i}^{\beta}}-(\Delta_{i}-d)\frac{\partial}{\partial p_{i\alpha}}\right].$ (3.3) Note that this differential equation is valid regardless of the ordering of the operators $\phi_{1}$ and $\phi_{2}$: the difference in orderings only means different boundary conditions, which will be discussed later in section 3.2. The Ward identity (3.2) does not apply directly to the vertex function $V_{\Delta,\ell,m}^{[12]}$, but rather to the correlator with the primary operator $\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}$ acting on the vacuum. In order to include the polarization tensors and the power of $s$ in this equation, we write, in accordance with eq. (2.14), $\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}{}[\phi_{1}(p_{1})\phi_{2}(p_{2})]\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}(-p_{1}-p_{2})\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}=\lambda_{12\mathcal{O}}\,s^{(\Delta_{1}+\Delta_{2}+\Delta-2d)/2}\sum_{m=0}^{\ell}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,q_{12})V_{\Delta,\ell,m}^{[12]}(w_{1},w_{2}).$ (3.4) We also introduce an external polarization vector $z^{\mu}$ as in eq.
(2.19), and use the shorthand notation $\varepsilon_{m}\equiv z_{\mu_{1}}\cdots z_{\mu_{\ell}}\varepsilon_{m}^{\mu_{1}\ldots\mu_{\ell}}(p,q_{12}).$ (3.5) It is straightforward (if tedious) to compute the action of $\widehat{K}_{12}^{\alpha}$ on the polarization tensors, as they are explicitly defined in terms of $p_{1}$ and $p_{2}$. In doing so, we encounter derivatives of the hypergeometric function (2.19), and we find it convenient to introduce $\varepsilon^{\prime}_{m}\equiv\frac{\partial\varepsilon_{m}}{\partial(z\cdot q_{12})},$ (3.6) which describes a traceless symmetric tensor with $\ell-1$ indices. The Ward identity (3.2) is now a simple vector equation, whose components can be resolved by taking its scalar product with various reference vectors. For instance, the Ward identity given by $p\cdot\widehat{K}_{12}$ becomes $\displaystyle\sum_{m=0}^{\ell}\varepsilon_{m}\bigg{\\{}\Omega_{12}^{m}\left[(1+w_{1}-w_{2})\widehat{D}_{1}+(1-w_{1}+w_{2})\widehat{D}_{2}\right]\Omega_{12}^{-m}$ $\displaystyle+\frac{(\ell-m)(d-2+\ell+m)}{2}$ $\displaystyle\bigg{\\}}V_{\Delta,\ell,m}^{[12]}(w_{1},w_{2})=0.$ (3.7) If one introduces an auxiliary vector $r$ orthogonal to both $p$ and $q_{12}$, the Ward identity $r\cdot\widehat{K}_{12}$ gives $\displaystyle\sum_{m=0}^{\ell}\bigg{\\{}\varepsilon^{\prime}_{m}\Omega_{12}^{-1}\widehat{D}_{0}+\varepsilon^{\prime}_{m-1}\frac{m(\Delta-m-d+2)(d-3+\ell+m)}{(d-3+2m)(d-5+2m)}$ $\displaystyle-\varepsilon^{\prime}_{m+1}\frac{(\ell-m)(\Delta+m-1)}{m+1}$ $\displaystyle\bigg{\\}}V_{\Delta,\ell,m}^{[12]}(w_{1},w_{2})=0.$ (3.8) Finally, the remaining content of the Ward identity can be obtained from the scalar product $q_{12}\cdot\widehat{K}_{12}$, which after subtraction of eq.
(3.8) leads to $\displaystyle\sum_{m=0}^{\ell}\bigg{\\{}$ $\displaystyle\varepsilon_{m}\Omega_{12}^{m}\left(\widehat{D}_{1}-\widehat{D}_{2}\right)\Omega_{12}^{-m}$ $\displaystyle-\varepsilon^{\prime}_{m}\frac{z\cdot p}{\Omega_{12}}(\Delta-\ell-d+2)-\varepsilon^{\prime}_{m-1}\frac{z\cdot p}{\Omega_{12}^{2}}\frac{m}{d-5+2m}\widehat{D}_{0}\bigg{\\}}V_{\Delta,\ell,m}^{[12]}(w_{1},w_{2})=0.$ (3.9) Several elements of eqs. (3.7), (3.8) and (3.9) must still be explained: the quantity $\Omega_{12}=\sqrt{(1-w_{1}-w_{2})^{2}-4w_{1}w_{2}}$ was originally defined in eq. (2.13); $\widehat{D}_{1}$ and $\widehat{D}_{2}$ are elliptic differential operators of the Appell $F_{4}$ type, $\displaystyle\widehat{D}_{1}$ $\displaystyle=w_{1}(1-w_{1})\frac{\partial^{2}}{\partial w_{1}^{2}}-2w_{1}w_{2}\frac{\partial^{2}}{\partial w_{1}\partial w_{2}}-w_{2}^{2}\frac{\partial^{2}}{\partial w_{2}^{2}}$ $\displaystyle\quad+\left[c_{1}-(a+b+1)w_{1}\right]\frac{\partial}{\partial w_{1}}-(a+b+1)w_{2}\frac{\partial}{\partial w_{2}}-ab,$ (3.10) $\displaystyle\widehat{D}_{2}$ $\displaystyle=w_{2}(1-w_{2})\frac{\partial^{2}}{\partial w_{2}^{2}}-2w_{1}w_{2}\frac{\partial^{2}}{\partial w_{1}\partial w_{2}}-w_{1}^{2}\frac{\partial^{2}}{\partial w_{1}^{2}}$ $\displaystyle\quad+\left[c_{2}-(a+b+1)w_{2}\right]\frac{\partial}{\partial w_{2}}-(a+b+1)w_{1}\frac{\partial}{\partial w_{1}}-ab,$ (3.11) with $\displaystyle a$ $\displaystyle=\frac{d+\Delta-\Delta_{1}-\Delta_{2}+m}{2},$ $\displaystyle c_{1}$ $\displaystyle=1+\frac{d}{2}-\Delta_{1},$ $\displaystyle b$ $\displaystyle=\frac{2d-\Delta-\Delta_{1}-\Delta_{2}+m}{2},$ $\displaystyle c_{2}$ $\displaystyle=1+\frac{d}{2}-\Delta_{2};$ (3.12) the remaining differential operator $\widehat{D}_{0}$ is given by $\displaystyle\widehat{D}_{0}$ $\displaystyle=(1-w_{1}+w_{2})\left[-2w_{1}\frac{\partial}{\partial w_{1}}+(\Delta_{1}-d+1)\right]$ $\displaystyle\quad-(1+w_{1}-w_{2})\left[-2w_{2}\frac{\partial}{\partial w_{2}}+(\Delta_{2}-d+1)\right].$ (3.13) The 
interpretation of this system of partial differential equations for $\ell+1$ functions of two variables $w_{1}$ and $w_{2}$ appears difficult at first. This is worsened by the fact that we have not expressed the conformal Ward identities in the complete basis $\\{\varepsilon_{m}\\}$ of traceless, symmetric tensors, but have instead appealed to the additional tensors $\varepsilon^{\prime}_{m}$. Nevertheless, the system contains a lot of redundancies, due in particular to the closure of the conformal algebra in momentum space, and we will see in the next sections how a simple solution emerges. ### 3.2 Highest-spin functions in various orderings The first key observation to be made is that the Ward identities (3.7) and (3.9) contain a closed system of equations for the function $V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})$, i.e. the highest-spin component $m=\ell$. The tensors $(z\cdot p)\varepsilon^{\prime}_{i}$ appearing in eq. (3.9) are orthogonal to $\varepsilon_{\ell}$, the latter being totally transverse while the former are not. When projecting against $\varepsilon_{\ell}$, these two equations therefore give the system $\displaystyle\left[(1+w_{1}-w_{2})\widehat{D}_{1}+(1-w_{1}+w_{2})\widehat{D}_{2}\right]\Omega_{12}^{-\ell}V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})$ $\displaystyle=0,$ (3.14) $\displaystyle\left(\widehat{D}_{1}-\widehat{D}_{2}\right)\Omega_{12}^{-\ell}V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})$ $\displaystyle=0,$ (3.15) which is equivalent to $\widehat{D}_{1}\left[\Omega_{12}^{-\ell}V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})\right]=\widehat{D}_{2}\left[\Omega_{12}^{-\ell}V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})\right]=0.$ (3.16) As already stated, $\widehat{D}_{1}$ and $\widehat{D}_{2}$ form a system of generalized hypergeometric differential equations of the Appell $F_{4}$ type, with a particular solution being $F_{4}(a,b;c_{1},c_{2};w_{1},w_{2})$.
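For numerical experiments with these solutions, Appell's $F_{4}$ can be evaluated by direct truncation of its defining double series, which converges for $\sqrt{|w_{1}|}+\sqrt{|w_{2}|}<1$. A minimal sketch in Python (the function names and the numerical parameter values are ours, purely for illustration):

```python
import math

def poch(a, n):
    """Rising factorial (Pochhammer symbol) (a)_n = a (a+1) ... (a+n-1)."""
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def appell_f4(a, b, c1, c2, x, y, nmax=40):
    """Appell F4 by brute-force truncation of its defining double series,
    F4 = sum_{m,n} (a)_{m+n} (b)_{m+n} / ((c1)_m (c2)_n m! n!) x^m y^n.

    Reliable only inside the convergence region sqrt|x| + sqrt|y| < 1;
    nmax terms are kept in each summation index.
    """
    total = 0.0
    for m in range(nmax):
        for n in range(nmax):
            total += (poch(a, m + n) * poch(b, m + n)
                      / (poch(c1, m) * poch(c2, n))
                      * x ** m * y ** n
                      / (math.factorial(m) * math.factorial(n)))
    return total

# At y = 0 the double series collapses to 2F1(a, b; c1; x);
# with a = b = 1 and c1 = 2 this is -log(1 - x)/x.
print(appell_f4(1.0, 1.0, 2.0, 5.5, 0.3, 0.0))  # ~ 1.18892 = -log(0.7)/0.3
```

The manifest symmetry of the series under $a\leftrightarrow b$ and under the simultaneous swap $(c_{1},x)\leftrightarrow(c_{2},y)$ is the mechanism behind the identities (3.19) below.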
General solutions to such systems can be constructed from linear combinations of the four functions [67] $\begin{array}[]{r@{\hspace{18mm}}r}w_{1}^{\Delta_{1}-d/2}w_{2}^{\Delta_{2}-d/2}F_{\Delta_{1}\Delta_{2};\Delta,\ell}(w_{1},w_{2}),\hskip 51.21495pt&w_{1}^{\Delta_{1}-d/2}F_{\widetilde{\Delta}_{1}\Delta_{2};\Delta,\ell}(w_{1},w_{2}),\\\ w_{2}^{\Delta_{2}-d/2}F_{\Delta_{1}\widetilde{\Delta}_{2};\Delta,\ell}(w_{1},w_{2}),\hskip 51.21495pt&F_{\widetilde{\Delta}_{1}\widetilde{\Delta}_{2};\Delta,\ell}(w_{1},w_{2}),\end{array}$ (3.17) where we have used $\widetilde{\Delta}_{i}=d-\Delta_{i}$ and introduced the following notation for the Appell $F_{4}$ function: $F_{\Delta_{a}\Delta_{b};\Delta,\ell}(w_{1},w_{2})\equiv F_{4}\left(\begin{array}[]{c}\frac{\Delta_{a}+\Delta_{b}-\Delta+\ell}{2},\leavevmode\nobreak\ \frac{\Delta_{a}+\Delta_{b}+\Delta+\ell-d}{2}\\\ \Delta_{a}-\frac{d}{2}+1,\leavevmode\nobreak\ \Delta_{b}-\frac{d}{2}+1\end{array};w_{1},w_{2}\right).$ (3.18) Note that the function $F$ satisfies the identities $F_{\Delta_{1}\Delta_{2};\Delta,\ell}(w_{1},w_{2})=F_{\Delta_{1}\Delta_{2};\widetilde{\Delta},\ell}(w_{1},w_{2})=F_{\Delta_{2}\Delta_{1};\Delta,\ell}(w_{2},w_{1}).$ (3.19) An equivalent basis of solutions is obtained after the change of variables $(w_{1},w_{2})\to(w_{1}/w_{2},1/w_{2})$, in terms of the four functions $\begin{array}[]{r@{\hspace{8mm}}r}w_{1}^{\Delta_{1}-d/2}w_{2}^{(\Delta_{2}-\Delta_{1}-\Delta-\ell)/2}F_{\Delta_{1}\Delta;\Delta_{2},\ell}\left(\frac{w_{1}}{w_{2}},\frac{1}{w_{2}}\right),\hskip 22.76219pt&w_{2}^{(\Delta_{1}+\Delta_{2}-\Delta-\ell-d)/2}F_{\widetilde{\Delta}_{1}\Delta;\Delta_{2},\ell}\left(\frac{w_{1}}{w_{2}},\frac{1}{w_{2}}\right),\\\ w_{1}^{\Delta_{1}-d/2}w_{2}^{(\Delta_{2}-\Delta_{1}+\Delta-\ell-d)/2}F_{\Delta_{1}\widetilde{\Delta};\Delta_{2},\ell}\left(\frac{w_{1}}{w_{2}},\frac{1}{w_{2}}\right),\hskip 
22.76219pt&w_{2}^{(\Delta_{1}+\Delta_{2}+\Delta-\ell-2d)/2}F_{\widetilde{\Delta}_{1}\widetilde{\Delta};\Delta_{2},\ell}\left(\frac{w_{1}}{w_{2}},\frac{1}{w_{2}}\right).\end{array}$ (3.20) The highest-spin vertex function $V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})$ is therefore a linear combination of the functions (3.17), or of the functions (3.20), multiplied by $\Omega_{12}^{\ell}$. Which linear combination arises depends on the ordering of the operators in the 3-point function (3.1). We examine two distinct cases below. #### Wightman function When $\phi_{1}$ and $\phi_{2}$ are out-of-time-order, the Wightman function (3.1) has been computed in ref. [64] (see also ref. [68]). The key observation is that the Euclidean OPE fixes the behavior of the Wightman 3-point function in a neighborhood of $p_{1}^{2}=p^{2}=0$, i.e. around $w_{1}=0$ and $w_{2}\to-\infty$. Among the solutions to the conformal Ward identities only the first function of the list (3.20) satisfies the constraint. The normalization of the 3-point function can also be determined from the results of ref. [64], which are given in terms of traceless, symmetric tensors built out of $p$ and $p_{1}$. The comparison can be made between the term of order $(z\cdot p_{1})^{\ell}$ in that tensor, and the same power coming from the term $(z\cdot q_{12})^{\ell}$ in $\varepsilon_{\ell}^{\mu_{1}\ldots\mu_{\ell}}(p,q_{12})$, using $q_{12}^{\mu}=-2\Omega_{12}^{-1}p_{1}^{\mu}+\ldots$ We finally obtain (note that we replace the generic label $[12]$ with $12$ for the Wightman function; similarly, we use $\operatorname{T}[12]$ for the time-ordered product given later in eq. (3.27))
$\boxed{\begin{aligned} V_{\Delta,\ell,\ell}^{12}(w_{1},w_{2})&=\frac{i^{\ell}(4\pi)^{d+2}2^{-(\Delta_{1}+\Delta_{2}+\Delta+\ell+2)}(\Delta-1)_{\ell}\left[(1-w_{1}-w_{2})^{2}-4w_{1}w_{2}\right]^{\ell/2}}{\Gamma\left(\Delta_{1}-\frac{d}{2}+1\right)\Gamma\left(\Delta-\frac{d}{2}+1\right)\Gamma\left(\frac{\Delta_{2}-\Delta_{1}+\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta_{1}+\Delta_{2}-\Delta+\ell}{2}\right)}\\\ &\quad\times w_{1}^{\Delta_{1}-d/2}(-w_{2})^{(\Delta_{2}-\Delta_{1}-\Delta-\ell)/2}F_{\Delta_{1}\Delta;\Delta_{2},\ell}\left(\frac{w_{1}}{w_{2}},\frac{1}{w_{2}}\right).\end{aligned}}$ (3.21) This expression is valid when $p_{1}$ is time-like and backward-directed (the Wightman function vanishes otherwise by the spectral condition), and when $p_{2}$ is space-like, implying $w_{1}>0$ and $w_{2}<0$. The case of time-like $p_{2}$ can be obtained in terms of the functions in the list (3.17) following the analysis of ref. [64]. Note that the normalization of OPE coefficients that we use here is such that the analytic continuation to Euclidean position space corresponds to the 3-point function $\displaystyle z_{\mu_{1}}\cdots z_{\mu_{\ell}}$ $\displaystyle\left\langle 0\right|\phi_{1}(x_{1})\phi_{2}(x_{2})\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}(0)\left|0\right\rangle$ $\displaystyle=\frac{\lambda_{12\mathcal{O}}\left[x_{2}^{2}(z\cdot x_{1})-x_{1}^{2}(z\cdot x_{2})\right]^{\ell}}{|x_{1}|^{\Delta_{1}-\Delta_{2}+\Delta+\ell}|x_{2}|^{\Delta_{2}-\Delta_{1}+\Delta+\ell}|x_{1}-x_{2}|^{\Delta_{1}+\Delta_{2}-\Delta+\ell}}.$ (3.22) #### Time-ordered product In the case where the 3-point function (3.1) contains a time-ordered product of the operators $\phi_{1}$ and $\phi_{2}$, the general structure is more complicated, as both $p_{1}$ and $p_{2}$ can be either space-like or time-like. We focus here on the case where they are both space-like, i.e. $w_{1},w_{2}<0$.
In this case, the Euclidean OPE condition only gives information about the behavior at $p^{2}=0$ ($w_{1},w_{2}\to-\infty$). It implies that the highest-spin vertex function is a linear combination of the first two functions in the list (3.20), $\displaystyle V_{\Delta,\ell,\ell}^{\operatorname{T}[12]}(w_{1},w_{2})$ $\displaystyle=(-w_{2})^{(\Delta_{1}+\Delta_{2}-\Delta-\ell-d)/2}\left[(1-w_{1}-w_{2})^{2}-4w_{1}w_{2}\right]^{\ell/2}$ $\displaystyle\quad\times\left[A\,\left(\frac{w_{1}}{w_{2}}\right)^{\Delta_{1}-d/2}F_{\Delta_{1}\Delta;\Delta_{2},\ell}\left(\frac{w_{1}}{w_{2}},\frac{1}{w_{2}}\right)+B\,F_{\widetilde{\Delta}_{1}\Delta;\Delta_{2},\ell}\left(\frac{w_{1}}{w_{2}},\frac{1}{w_{2}}\right)\right].$ (3.23) The coefficients $A$ and $B$ can be fixed by symmetry arguments and by the relation to the Wightman function. The singularities of the time-ordered product have been analyzed in ref. [69], where it was found that there is a branch-point singularity at $w_{1}=0$ when $\Delta_{1}<\frac{d}{2}$, and that the coefficient of this singularity is related to the Wightman function by a multiplicative factor, giving $\lim_{w_{1}\to 0}(-w_{1}-i\epsilon)^{d/2-\Delta_{1}}V_{\Delta,\ell,\ell}^{\operatorname{T}[12]}(w_{1},w_{2})=\frac{1}{2i\sin\left[\pi\left(\frac{d}{2}-\Delta_{1}\right)\right]}\lim_{w_{1}\to 0_{+}}w_{1}^{d/2-\Delta_{1}}V_{\Delta,\ell,\ell}^{12}(w_{1},w_{2}).$ (3.24) This implies $A=\frac{i^{\ell-1}(4\pi)^{d+1}\Gamma\left(\frac{d}{2}-\Delta_{1}\right)(\Delta-1)_{\ell}}{2^{\Delta_{1}+\Delta_{2}+\Delta+\ell+1}\Gamma\left(\Delta-\frac{d}{2}+1\right)\Gamma\left(\frac{\Delta_{2}-\Delta_{1}+\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta_{1}+\Delta_{2}-\Delta+\ell}{2}\right)}.$ (3.25) The coefficient $B$ can be fixed in turn by imposing the exchange symmetry between the operators $\phi_{1}$ and $\phi_{2}$, which is built into the time-ordered product but is not manifest in eq. (3.23).
This constraint can be resolved by going from the basis of Appell $F_{4}$ functions (3.20) to the symmetric basis (3.17). We see that the result is symmetric under the simultaneous exchange $w_{1}\leftrightarrow w_{2}$ and $\Delta_{1}\leftrightarrow\Delta_{2}$ if and only if $B=\frac{\Gamma\left(\Delta_{1}-\frac{d}{2}\right)\Gamma\left(\frac{\Delta_{2}-\Delta_{1}+\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta-\Delta_{1}-\Delta_{2}+\ell+d}{2}\right)}{\Gamma\left(\frac{d}{2}-\Delta_{1}\right)\Gamma\left(\frac{\Delta_{1}-\Delta_{2}+\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta_{1}+\Delta_{2}+\Delta+\ell-d}{2}\right)}\,A,$ (3.26) after which we arrive at $\boxed{\begin{aligned} V_{\Delta,\ell,\ell}^{\operatorname{T}[12]}(w_{1},w_{2})&=\frac{i^{\ell-1}(4\pi)^{d+1}2^{-(\Delta_{1}+\Delta_{2}+\Delta+\ell+1)}(\Delta-1)_{\ell}\left[(1-w_{1}-w_{2})^{2}-4w_{1}w_{2}\right]^{\ell/2}}{\Gamma\left(\frac{\Delta_{1}-\Delta_{2}+\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta_{2}-\Delta_{1}+\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta_{1}+\Delta_{2}-\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta_{1}+\Delta_{2}+\Delta+\ell-d}{2}\right)}\\\ &\quad\times\bigg{[}f_{\Delta_{1}\Delta_{2};\Delta,\ell}(-w_{1})^{\Delta_{1}-d/2}(-w_{2})^{\Delta_{2}-d/2}F_{\Delta_{1}\Delta_{2};\Delta,\ell}(w_{1},w_{2})\\\ &\quad\qquad+f_{\Delta_{1}\widetilde{\Delta}_{2};\Delta,\ell}(-w_{1})^{\Delta_{1}-d/2}F_{\Delta_{1}\widetilde{\Delta}_{2};\Delta,\ell}(w_{1},w_{2})\\\ &\quad\qquad+f_{\widetilde{\Delta}_{1}\Delta_{2};\Delta,\ell}(-w_{2})^{\Delta_{2}-d/2}F_{\widetilde{\Delta}_{1}\Delta_{2};\Delta,\ell}(w_{1},w_{2})\\\ &\quad\qquad+f_{\widetilde{\Delta}_{1}\widetilde{\Delta}_{2};\Delta,\ell}F_{\widetilde{\Delta}_{1}\widetilde{\Delta}_{2};\Delta,\ell}(w_{1},w_{2})\bigg{]},\end{aligned}}$ (3.27) where we have defined 
$f_{\Delta_{a}\Delta_{b};\Delta,\ell}=\frac{\Gamma\left(\frac{d}{2}-\Delta_{a}\right)\Gamma\left(\frac{d}{2}-\Delta_{b}\right)\Gamma\left(\frac{\Delta_{a}+\Delta_{b}+\Delta+\ell-d}{2}\right)}{\Gamma\left(1-\frac{\Delta_{a}+\Delta_{b}-\Delta+\ell}{2}\right)}.$ (3.28) This result is analytic in the scaling dimensions $\Delta_{1},\Delta_{2}$, with the exception of poles at $\Delta_{1},\Delta_{2}=\frac{d}{2}+n$ with integer $n$. This means that the result (3.27) is valid in all generality, and not only in the regime $\Delta_{1}<\frac{d}{2}$ used to fix the coefficient $A$. The apparent divergences when $\Delta_{1},\Delta_{2}=\frac{d}{2}+n$ are reminiscent of the anomalies discussed in refs. [70, 71]: they appear precisely when the powers of $w_{1}$ and $w_{2}$ are integer, in which case the terms of the different hypergeometric series can be mixed. Resolving these special cases by analytic continuation in $\Delta_{i}$ (or in $d$), one finds cancellations between the coefficients $f_{\Delta_{a}\Delta_{b};\Delta,\ell}$, so that the vertex functions remain in fact finite, albeit with the appearance of logarithms of $w_{1}$ and $w_{2}$. An example realizing the case $\Delta_{1}=\Delta_{2}=\frac{d}{2}$ is presented below in section 4.3. #### Other orderings Besides the Wightman function and the time-ordered product, other orderings of the operators can be envisioned, but they can all be related to the two results (3.21) and (3.27): ordinary commutators are nothing but the difference between two Wightman functions; causal commutators (i.e. advanced or retarded) are similarly related to the sum or difference between the time-ordered product and the Wightman function, and as such they coincide with the former when both momenta $p_{1}$ and $p_{2}$ are space-like; the anti-time-ordered product is the complex conjugate of the time-ordered one.
In order to obtain a complete picture of all the possible vertex functions, one needs of course to go beyond the kinematic assumptions made in deriving eqs. (3.21) and (3.27). Causal commutators would be a useful tool to do so, as they have well-understood domains of analyticity in terms of complex momenta. We leave this task for future work. One important property that is shared by all orderings is the vanishing of the highest-spin vertex function when the intermediate operator takes a double-trace dimension, i.e. when $\Delta=\Delta_{1}+\Delta_{2}+\ell+2n$ with integer $n$. This ensures the triviality of Gaussian theories in momentum space, and places on equal footing the contributions of single- and double-trace operators in large-$N$ theories. It also implies that the conformal partial wave expansion (1.2) shares the “dispersive” property of the double discontinuity in position space [38]. ### 3.3 Lower spin functions With the knowledge of the highest-spin vertex function, the computation of all remaining vertex functions $V_{\Delta,\ell,m}^{[12]}$ with $m<\ell$ does not require solving more partial differential equations, nor making assumptions about boundary conditions, as eq. (3.8) gives a simple two-step algebraic recursion relation: since the tensors $\varepsilon^{\prime}_{m}$ are orthogonal to each other, we get $\boxed{\begin{aligned} V_{\Delta,\ell,m}^{[12]}(w_{1},w_{2})&=\frac{m+1}{(\ell-m)(\Delta+m-1)}\\\ &\quad\times\bigg{[}\frac{1}{\sqrt{(1-w_{1}-w_{2})^{2}-4w_{1}w_{2}}}\widehat{D}_{0}V_{\Delta,\ell,m+1}^{[12]}(w_{1},w_{2})\\\ &\quad\qquad+\frac{(m+2)(\Delta-m-d)(d-1+\ell+m)}{(d-1+2m)(d+1+2m)}\,V_{\Delta,\ell,m+2}^{[12]}(w_{1},w_{2})\bigg{]},\end{aligned}}$ (3.29) using the first-order differential operator $\widehat{D}_{0}$ defined in eq. (3.13).
The last line must be dropped in the first step of the recursion, namely when $m=\ell-1$; equivalently, the recursion relation can be defined in terms of the functions $V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})$ computed above and of $V_{\Delta,\ell,\ell+1}^{[12]}(w_{1},w_{2})=0$. Note that the operator $\widehat{D}_{0}$ is odd under the permutation $1\leftrightarrow 2$, which is consistent with the fact that the Gegenbauer polynomial $\mathcal{C}_{m}^{(d-3)/2}(\cos\theta)$ is similarly odd under $\cos\theta\to-\cos\theta$ for odd $m$: in this way the exchange symmetry $1\leftrightarrow 2$ of the time-ordered product is preserved for all $m$. Derivatives of Appell $F_{4}$ functions can again be expressed in terms of linear combinations of $F_{4}$ functions, so one could in principle write down closed-form formulae for the vertex functions $V_{\Delta,\ell,m}^{[12]}(w_{1},w_{2})$ with the different operator orderings. This exercise is not particularly enlightening, though, and the recursion relation (3.29) seems simple enough to provide the most efficient implementation of the vertex functions. In fact, since $\widehat{D}_{0}$ is a first-order differential operator that only raises powers of $w_{1}$ and $w_{2}$, the vertex functions with $m<\ell$ inherit the analytic structure of the highest-spin function. The case in which the operator $\mathcal{O}^{\mu_{1}\ldots\mu_{\ell}}$ is a conserved current deserves special attention. It turns out that the differential operator $\widehat{D}_{0}$ annihilates $V_{\Delta,\ell,\ell}^{[12]}(w_{1},w_{2})$ whenever the unitarity bound is saturated ($\Delta=d-2+\ell$) and the two operators $\phi_{1}$ and $\phi_{2}$ have identical scaling dimensions ($\Delta_{1}=\Delta_{2}$), two conditions that must be met for $\mathcal{O}$ to be conserved. This implies that $V_{\Delta,\ell,\ell-1}^{[12]}(w_{1},w_{2})=0$. Moreover, the factor $(\Delta-m-d)$ in eq.
(3.29) ensures that the second step of the recursion is trivial as well, and so $V_{\Delta,\ell,\ell-2}^{[12]}(w_{1},w_{2})=0$. It follows immediately that all vertex functions with $m<\ell$ vanish in this case. This is not a surprise: conserved currents only admit transverse polarizations, while any longitudinal polarization defines a null state. This property is essential in the conformal partial wave expansion, as it implies that one does not need to sum over $m<\ell$ for conserved currents, and hence that the singularities in the definition (2.35) of the coefficient $C_{\Delta,\ell,m}$ never occur. ### 3.4 Special kinematic limits We will finally examine several special kinematic limits in which the vertex functions admit a simpler form, and compare the conformal partial waves in these limits with existing results in the literature. #### Light-like limit The first interesting case is the limit $w_{1},w_{2}\to 0$ of the vertex function with a time-ordered product. As the Appell $F_{4}$ function is analytic around the point $(w_{1},w_{2})=(0,0)$, one can see from eq. (3.27) that the limit exists if $\Delta_{1},\Delta_{2}>\frac{d}{2}$ (the opposite situation is discussed below).
In this case, the recursion relation (3.29) becomes an algebraic equation, $\displaystyle V_{\Delta,\ell,m}^{\operatorname{T}[12]}(0,0)$ $\displaystyle=\frac{m+1}{(\ell-m)(\Delta+m-1)}\bigg{[}(\Delta_{1}-\Delta_{2})V_{\Delta,\ell,m+1}^{[12]}(0,0)$ $\displaystyle\hskip 71.13188pt+\frac{(m+2)(\Delta- m-d)(d-1+\ell+m)}{(d-1+2m)(d+1+2m)}\,V_{\Delta,\ell,m+2}^{\operatorname{T}[12]}(0,0)\bigg{]},$ (3.30) which is solved by $\displaystyle V_{\Delta,\ell,m}^{\operatorname{T}[12]}(0,0)$ $\displaystyle=V_{\Delta,\ell,\ell}^{\operatorname{T}[12]}(0,0)\frac{(-2)^{\ell-m}\ell!}{m!(\ell-m)!}\frac{\left(\frac{\Delta_{2}-\Delta_{1}+\Delta-\ell-d+2}{2}\right)_{\ell-m}}{\left(\Delta-1+m\right)_{\ell-m}}$ $\displaystyle\quad\times{}_{3}F_{2}\left(\begin{array}[]{c}-(\ell-m),\leavevmode\nobreak\ d-\Delta-1+m,\leavevmode\nobreak\ \frac{d-2+2m}{2}\\\ \frac{\Delta_{1}-\Delta_{2}+d-\Delta-\ell+2m}{2},\leavevmode\nobreak\ d-2+2m\end{array};1\right).$ (3.33) Remarkably, the recursion relation only depends on the difference between the external scaling dimensions, $\Delta_{1}-\Delta_{2}$, and not on their sum. 
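The algebraic recursion above is straightforward to iterate in code. A minimal sketch (the function name, the normalization $V_{\Delta,\ell,\ell}(0,0)=1$, and the numerical parameter values are our illustrative choices, not from the paper):

```python
def vertex_lightlike(ell, Delta, Delta1, Delta2, d):
    """Iterate the algebraic recursion (3.30) for V_{Delta,ell,m}(0,0),
    normalized so that the highest-spin value V_{Delta,ell,ell}(0,0) = 1.

    Returns a dict mapping m -> V_m for m = 0..ell; V_{ell+1} = 0 seeds
    the recursion, which drops the V_{m+2} term at the first step.
    """
    V = {ell: 1.0, ell + 1: 0.0}
    for m in range(ell - 1, -1, -1):
        bracket = (Delta1 - Delta2) * V[m + 1]
        bracket += ((m + 2) * (Delta - m - d) * (d - 1 + ell + m)
                    / ((d - 1 + 2 * m) * (d + 1 + 2 * m))) * V[m + 2]
        V[m] = (m + 1) / ((ell - m) * (Delta + m - 1)) * bracket
    return V

# Equal external dimensions: every V with odd ell - m vanishes identically,
# since the (Delta1 - Delta2) term drops out of the bracket.
V = vertex_lightlike(ell=4, Delta=5.3, Delta1=2.2, Delta2=2.2, d=4.0)
print(V[3], V[1])  # both exactly 0.0
```

For $\ell=2$ and $\Delta_{1}=\Delta_{2}$ the recursion gives $V_{0}=(\Delta-d)/[(d-1)(\Delta-1)]\,V_{2}$, which matches the $n=1$ coefficient of eq. (3.34) below.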
In fact, when $\Delta_{1}=\Delta_{2}(\equiv\Delta_{\phi})$, all vertex functions with odd $\ell-m$ vanish identically, and one obtains $V_{\Delta,\ell,\ell-2n}^{\operatorname{T}[\phi\phi]}(0,0)=\frac{(-1)^{n}\ell!}{2^{2n}n!(\ell-2n)!}\frac{\left(\frac{\Delta-\ell-d+2}{2}\right)_{n}}{\left(\frac{d-1}{2}+\ell-2n\right)_{n}\left(\frac{3-\Delta-\ell}{2}\right)_{n}}\,V_{\Delta,\ell,\ell}^{\operatorname{T}[\phi\phi]}(0,0).$ (3.34) The conformal partial waves (1.4) can then be written as $G_{\Delta,\ell}(w_{i}=0,\cos\theta)=\frac{2^{2\Delta+1}\Gamma\left(\Delta-\frac{d}{2}+1\right)\Gamma(\Delta+\ell)}{(4\pi)^{(d+2)/2}(\Delta-1)_{\ell}}\left[V_{\Delta,\ell,\ell}^{[\phi\phi]}(0,0)\right]^{2}g_{\Delta,\ell}(\cos\theta),$ (3.35) where $g_{\Delta,\ell}$ is a polynomial of degree $\ell$ in $\cos\theta$, defined by $g_{\Delta,\ell}(\cos\theta)=\sum_{n=0}^{\ell/2}\frac{(-1)^{n}\ell!(2n)!}{2^{\ell+2n+1}(n!)^{2}}\frac{d-3+2\ell-4n}{\left(2-\frac{d}{2}-\ell\right)_{n}\left(\frac{d-3}{2}\right)_{\ell-n+1}}\frac{\big{(}\frac{2-\Delta-\ell}{2}\big{)}_{n}\big{(}\frac{2-\widetilde{\Delta}-\ell}{2}\big{)}_{n}}{\big{(}\frac{3-\Delta-\ell}{2}\big{)}_{n}\big{(}\frac{3-\widetilde{\Delta}-\ell}{2}\big{)}_{n}}\mathcal{C}_{\ell-2n}^{(d-3)/2}(\cos\theta).$ (3.36) These are precisely the polynomials defined in ref. [63], where the light-like limit of the conformal partial wave expansion was studied. The $g_{\Delta,\ell}$ have the remarkable property of interpolating continuously between the Gegenbauer polynomials $\mathcal{C}_{\ell}^{(d-3)/2}(\cos\theta)$ when $\Delta-\ell$ saturates the unitarity bound, and $\mathcal{C}_{\ell}^{(d-2)/2}(\cos\theta)$ when $\Delta-\ell\to\infty$. 
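The polynomials $g_{\Delta,\ell}$ are easy to tabulate by evaluating eq. (3.36) term by term with SciPy's Gegenbauer routines. The sketch below (the function names are ours) can be used, for instance, to confirm that at the unitarity bound $\Delta=\ell+d-2$ the shadow factor $((2-\widetilde{\Delta}-\ell)/2)_{n}=(0)_{n}$ kills every $n\geq 1$ term, leaving $g_{\Delta,\ell}\propto\mathcal{C}_{\ell}^{(d-3)/2}$:

```python
import math
from scipy.special import eval_gegenbauer

def poch(a, n):
    """Rising factorial (a)_n = a (a+1) ... (a+n-1)."""
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def g_poly(Delta, ell, d, x):
    """The polynomial g_{Delta,ell}(cos theta) of eq. (3.36), with x = cos theta."""
    Dt = d - Delta  # shadow dimension tilde-Delta
    total = 0.0
    for n in range(ell // 2 + 1):
        coeff = ((-1) ** n * math.factorial(ell) * math.factorial(2 * n)
                 / (2 ** (ell + 2 * n + 1) * math.factorial(n) ** 2)
                 * (d - 3 + 2 * ell - 4 * n)
                 / (poch(2 - d / 2 - ell, n) * poch((d - 3) / 2, ell - n + 1))
                 * poch((2 - Delta - ell) / 2, n) * poch((2 - Dt - ell) / 2, n)
                 / (poch((3 - Delta - ell) / 2, n) * poch((3 - Dt - ell) / 2, n)))
        total += coeff * eval_gegenbauer(ell - 2 * n, (d - 3) / 2, x)
    return total
```

Comparing the ratio $g_{\Delta,\ell}(x_{1})/g_{\Delta,\ell}(x_{2})$ to the corresponding Gegenbauer ratio then exhibits the interpolation numerically, without having to track the overall normalization.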
Using $V_{\Delta,\ell,\ell}^{\operatorname{T}[\phi\phi]}(0,0)=\frac{i^{\ell-1}(4\pi)^{d+1}2^{-(2\Delta_{\phi}+\Delta+\ell+1)}\Gamma\left(\Delta_{\phi}-\frac{d}{2}\right)^{2}\Gamma\left(\frac{\Delta+\ell+d}{2}-\Delta_{\phi}\right)(\Delta-1)_{\ell}}{\Gamma\left(\frac{\Delta+\ell}{2}\right)^{2}\Gamma\left(\Delta_{\phi}-\frac{\Delta-\ell}{2}\right)\Gamma\left(\Delta_{\phi}+\frac{\Delta+\ell-d}{2}\right)\Gamma\left(\Delta_{\phi}+\frac{\Delta-\ell}{2}-d+1\right)},$ (3.37) one can also verify that the multiplicative coefficient in eq. (3.35) also matches the definition of $G_{\Delta,\ell}(\cos\theta)$ in ref. [63] (note that eq. (1.14) of ref. [63] contains a typo: the numerator is missing a factor of $\Gamma(\Delta+\ell)$, corrected in the arXiv version). #### Scattering amplitude and form factor An alternative, maybe more interesting situation is the asymptotic limit $w_{i}\to 0$ in the case $\Delta_{i}<\frac{d}{2}$. The time-ordered vertex function (3.27) diverges in this case, but one obtains a well-defined limit by multiplying it with the appropriate powers of $w_{i}$. The peculiarity of this limit is that it allows one to define objects that have an interesting field-theoretical interpretation: consider $F(s,t,u)\equiv\left[\prod_{i=1}^{3}\lim_{p_{i}^{2}\to 0_{-}}(-p_{i}^{2})^{d/2-\Delta_{i}}\right]\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\operatorname{T}[\phi_{1}(p_{1})\phi_{2}(p_{2})\phi_{3}(p_{3})\phi_{4}(-p_{1}-p_{2}-p_{3})]\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}},$ (3.38) as well as the subsequent limit $p_{4}^{2}=s+t+u\to 0$, $A(s,t)\equiv\lim_{p_{4}^{2}\to 0_{-}}(-p_{4}^{2})^{d/2-\Delta_{4}}F(s,t,u).$ (3.39) $F$ and $A$ have been interpreted respectively as a “form factor” and as an “amplitude” [69]: on the one hand, they are related to the time-ordered correlation function in eq.
(3.38) by a procedure similar to the LSZ reduction in massive quantum field theory; on the other hand, from a careful analysis of the singularities in the Fourier transform from position space, one can show that they admit a conformal partial wave expansion that is reminiscent of the “unitarity cuts” method relating higher-point amplitudes to lower-point ones. This latter property follows from the fact that the form factor is related to the same limit of the partially-time-ordered function, $F(s,t,u)\sim\left[\prod_{i=1}^{3}\lim_{p_{i}^{2}\to 0_{-}}(-p_{i}^{2})^{d/2-\Delta_{\phi}}\right]\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\operatorname{T}[\phi_{1}(p_{1})\phi_{2}(p_{2})]\operatorname{T}[\phi_{3}(p_{3})\phi_{4}(-p_{1}-p_{2}-p_{3})]\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}},$ (3.40) provided that the limit is taken with $p_{1}$, $p_{2}$ backward-directed, and $p_{3}$ forward-directed. The equivalence is up to a phase involving the dilatation operator (i.e. it depends on the scaling dimension $\Delta$ of the intermediate operator in the conformal partial wave). We do not want to re-derive here the results of ref. [69], but we can directly make use of the right-hand side of the equivalence (3.40) to provide a simple computation of the form factor $F$ and of the amplitude $A$ using our conformal partial waves, up to the aforementioned phase. The computation involves two distinct vertex functions, $V_{\Delta,\ell,m}^{\operatorname{T}[12]}$ and the complex conjugate of $V_{\Delta,\ell,m}^{\bar{\operatorname{T}}[34]}$. $V_{\Delta,\ell,m}^{\operatorname{T}[12]}$ is evaluated with both $w_{1}$ and $w_{2}\to 0$. In this case the computation proceeds as in the light-like limit discussed above: the recursion relation (3.29) for the vertex function is again algebraic, though it differs from eq. (3.30) by the replacement $\Delta_{1,2}\to d-\Delta_{1,2}$.
One arrives at $\displaystyle\lim_{w_{1},w_{2}\to 0_{-}}(-w_{1})^{d/2-\Delta_{1}}(-w_{2})^{d/2-\Delta_{2}}V_{\Delta,\ell,m}^{\operatorname{T}[12]}(w_{1},w_{2})$ $\displaystyle\quad=\frac{i^{\ell-1}(4\pi)^{d}\Gamma\left(\frac{d}{2}-\Delta_{1}\right)\Gamma\left(\frac{d}{2}-\Delta_{2}\right)(\Delta-1)_{\ell}\left(\frac{\Delta_{1}-\Delta_{2}+\Delta-\ell-d+2}{2}\right)_{\ell-m}\sin\left[\pi\frac{\Delta_{1}+\Delta_{2}-\Delta+\ell}{2}\right]}{2^{\Delta_{1}+\Delta_{2}+\Delta+\ell-1}\Gamma\left(\frac{\Delta_{1}-\Delta_{2}+\Delta+\ell}{2}\right)\Gamma\left(\frac{\Delta_{2}-\Delta_{1}+\Delta+\ell}{2}\right)\left(\Delta-1+m\right)_{\ell-m}}$ $\displaystyle\quad\quad\times\frac{(-2)^{\ell-m}\ell!}{m!(\ell-m)!}{}_{3}F_{2}\left(\begin{array}[]{c}-(\ell-m),\leavevmode\nobreak\ d-\Delta-1+m,\leavevmode\nobreak\ \frac{d-2+2m}{2}\\\ \frac{\Delta_{2}-\Delta_{1}+d-\Delta-\ell+2m}{2},\leavevmode\nobreak\ d-2+2m\end{array};1\right).$ (3.43) For $V_{\Delta,\ell,m}^{\bar{\operatorname{T}}[34]}$, consider first the highest-spin vertex function as defined in eq. (3.23), which in the limit $w_{3}\to 0$ becomes $\displaystyle\lim_{w_{3}\to 0_{-}}$ $\displaystyle(-w_{3})^{d/2-\Delta_{3}}V_{\Delta,\ell,\ell}^{\bar{\operatorname{T}}[34]}(w_{3},w_{4})$ $\displaystyle=A^{*}(-w_{4})^{(\Delta_{3}+\Delta_{4}-\Delta+\ell-2)/2}(1-w_{4})^{1-\Delta_{3}}$ $\displaystyle\quad\times{}_{2}F_{1}\left(\tfrac{\Delta-\ell-\Delta_{3}-\Delta_{4}+2}{2},\tfrac{\Delta_{4}-\Delta_{3}+\Delta-\ell-d+2}{2};\Delta-\tfrac{d}{2}+1;\frac{1}{w_{4}}\right),$ (3.44) where $A^{*}$ is the complex conjugate of the coefficient $A$ given in eq. (3.25), upon replacement $1\to 3$ and $2\to 4$.
Applying the recursion relation (3.29) leads to $\lim_{w_{3}\to 0_{-}}(-w_{3})^{d/2-\Delta_{3}}V_{\Delta,\ell,m}^{\bar{\operatorname{T}}[34]}(w_{3},w_{4})=A^{*}(-w_{4})^{(\Delta_{3}+\Delta_{4}-\Delta+\ell-2)/2}(1-w_{4})^{1-\Delta_{3}}v_{\Delta,\ell,m}\left(\frac{1}{w_{4}}\right),$ (3.45) where we have defined $\displaystyle v_{\Delta,\ell,m}(z)=\sum_{j=0}^{\ell-m}$ $\displaystyle\frac{2^{\ell-m}\ell!}{m!j!(\ell-m-j)!}\frac{\left(\frac{d}{2}-1+m\right)_{\ell-m-j}\left(\Delta-\ell-d+2+j\right)_{\ell-m-j}}{\left(d-2+2m\right)_{\ell-m-j}}$ $\displaystyle\times\frac{\left(\frac{\Delta-\ell-\Delta_{3}-\Delta_{4}+2}{2}\right)_{j}\left(\frac{\Delta_{4}-\Delta_{3}+\Delta-\ell-d+2}{2}\right)_{j}}{\left(2-\Delta-\ell\right)_{\ell-m}\left(\Delta-\frac{d}{2}+1\right)_{j}}$ $\displaystyle\times z^{j}\,{}_{2}F_{1}\left(\tfrac{\Delta-\ell-\Delta_{3}-\Delta_{4}+2}{2}+j,\tfrac{\Delta_{4}-\Delta_{3}+\Delta-\ell-d+2}{2}+j;\Delta-\frac{d}{2}+1+j;z\right).$ (3.46) The conformal partial waves corresponding to the form factor follow simply from plugging the vertex functions (3.43) and (3.45) in eq. (1.4). This gives a closed-form expression for distinct $\Delta_{i}$ that was lacking in ref. [69], as well as a new representation of the result in the case of identical operators ($\Delta_{i}=\Delta_{\phi}$): the function $f_{\Delta,\ell}(z,\cos\theta)$ computed in appendix B of ref. [69] using a method based on the conformal Casimir equations can be written $\displaystyle f_{\Delta,\ell}(z,\cos\theta)$ $\displaystyle=z^{(\Delta-\ell-2\Delta_{\phi})/2}(1-z)^{1-\Delta_{\phi}}$ $\displaystyle\quad\times\sum_{n=0}^{\ell/2}\frac{(-1)^{n}\ell!}{2^{\ell}n!}\frac{\left(\frac{d-3}{2}+\ell-2n\right)\left(\frac{\Delta-\ell-d+2}{2}\right)_{n}}{\left(\frac{d-3}{2}\right)_{\ell-n+1}\left(\frac{3-\Delta-\ell}{2}\right)_{n}}\mathcal{C}_{\ell-2n}^{(d-3)/2}(\cos\theta)v_{\Delta,\ell,\ell-2n}(z)$ (3.47) in terms of $z=1/w_{4}=p_{4}^{2}/s$. 
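The finite sum (3.46) is straightforward to evaluate numerically once a ${}_{2}F_{1}$ routine is available. A sketch in Python (the truncated-series ${}_{2}F_{1}$ and all parameter values are our own, for illustration only; note that for $m=\ell$ the $j$-sum collapses to a single term and $v$ reduces to the ${}_{2}F_{1}$ of eq. (3.44)):

```python
import math

def poch(x, n):
    """Pochhammer symbol (x)_n."""
    r = 1.0
    for i in range(n):
        r *= x + i
    return r

def hyp2f1(a, b, c, z, terms=400):
    """2F1(a, b; c; z) by truncated power series, adequate for |z| < 1."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return s

def v(Delta, ell, m, d, D3, D4, z):
    """Evaluate v_{Delta,ell,m}(z) of eq. (3.46) for generic parameters."""
    a = (Delta - ell - D3 - D4 + 2) / 2
    b = (D4 - D3 + Delta - ell - d + 2) / 2
    c = Delta - d / 2 + 1
    total = 0.0
    for j in range(ell - m + 1):
        coeff = (2**(ell - m) * math.factorial(ell)
                 / (math.factorial(m) * math.factorial(j) * math.factorial(ell - m - j)))
        coeff *= (poch(d / 2 - 1 + m, ell - m - j) * poch(Delta - ell - d + 2 + j, ell - m - j)
                  / poch(d - 2 + 2 * m, ell - m - j))
        coeff *= poch(a, j) * poch(b, j) / (poch(2 - Delta - ell, ell - m) * poch(c, j))
        total += coeff * z**j * hyp2f1(a + j, b + j, c + j, z)
    return total
```

Evaluating at $m=\ell$ and comparing with a direct ${}_{2}F_{1}$ call checks that the $j=0$ coefficient machinery reduces to unity, as it must for consistency with the highest-spin function (3.44).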
Similarly, for the scattering amplitude (3.25) one recovers the polynomials $g_{\Delta,\ell}$ of eq. (3.36) in the case of identical external operators, as well as a new set of polynomials obtained by plugging the vertex function (3.43) twice in eq. (1.4) when the external operators are distinct. These examples of special kinematics are far from being exhaustive, but they illustrate how our results can be efficiently used in various specific configurations. This concludes the derivation of the vertex functions, and we move next to a complete physical example where the conformal partial wave expansion is put to good use. ## 4 Example: the scalar box integral in 4 dimensions We present in this section an example in which the conformal partial wave expansion can be explicitly compared with a known 4-point function. We take the theory of a free, complex scalar field $\Phi$, and consider the time-ordered 4-point function involving the composite operator $\Phi^{2}$ and its complex conjugate $\overline{\Phi}^{2}$, $\left\langle 0\right|\operatorname{T}[\Phi^{2}(p_{1})\overline{\Phi}^{2}(p_{2})\Phi^{2}(p_{3})\overline{\Phi}^{2}(p_{4})]\left|0\right\rangle.$ (4.1) This correlation function is given in terms of a single Feynman diagram shown in figure 3. In $d=4$ dimensions, the integral over the loop momentum can be performed in arbitrary kinematics, and the result is a Bloch-Wigner dilogarithm [72, 73, 74]. The simplicity of this result can be traced back to the presence of a dual conformal symmetry at the level of the integrand [45, 46]. Figure 3: The scalar box diagram corresponding to the 4-point function (4.1) in the theory of a free complex scalar field. The solid lines are propagators of the field $\Phi$, with arrows indicating the charge flow. The external, dashed lines correspond to the composite operators $\Phi^{2}$ and $\overline{\Phi}^{2}$. The dual graph in terms of the variables $x_{i}-x_{i+1}=p_{i}$ is shown in red, with dotted lines. 
### 4.1 CFT optical theorem The 4-point function (4.1) in itself does not admit a decomposition into conformal partial waves: the time-ordered product runs over all 4 operators, and as such it does not respect the structure of the correlator (1.1). However, the real part of that function does, by an identity that is sometimes referred to as the “CFT optical theorem” [31, 32] (see also ref. [75]): by a combinatoric identity involving the time-ordered product and its hermitian conjugate $\operatorname{\overline{T}}$, it can be shown that $\displaystyle\left\langle 0\right|\operatorname{T}[\Phi^{2}(p_{1})\overline{\Phi}^{2}(p_{2})\Phi^{2}(p_{3})\overline{\Phi}^{2}(p_{4})]\left|0\right\rangle$ $\displaystyle+\left\langle 0\right|\operatorname{\overline{T}}[\Phi^{2}(p_{1})\overline{\Phi}^{2}(p_{2})\Phi^{2}(p_{3})\overline{\Phi}^{2}(p_{4})]\left|0\right\rangle$ $\displaystyle=\left\langle 0\right|\operatorname{T}[\Phi^{2}(p_{1})\overline{\Phi}^{2}(p_{2})]\operatorname{\overline{T}}[\Phi^{2}(p_{3})\overline{\Phi}^{2}(p_{4})]\left|0\right\rangle,$ (4.2) provided that the following conditions on the momenta and Mandelstam invariants are fulfilled: $p_{i}^{2}<0,\qquad\qquad s>0,\qquad\qquad t,u<0.$ (4.3) The right-hand side of this optical theorem is precisely in the form of eq. (1.1), and we can therefore define $G(s,w_{i},\cos\theta)=\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\operatorname{T}[\Phi^{2}(p_{1})\overline{\Phi}^{2}(p_{2})]\operatorname{\overline{T}}[\Phi^{2}(p_{3})\overline{\Phi}^{2}(p_{4})]\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}.$ (4.4) At the same time, the left-hand side of eq. (4.2) corresponds to (twice) the real part of the scalar box integral. 
This can be compared with the known results that give [74, 45] (our real part of the box integral differs from the usual conventions by a multiplicative factor of $2^{10}\pi^{8}$ that has to do with the use of the standard CFT normalization of operators in position space: on the one hand, the free field propagator in 4 dimensions is $\mathopen{\hbox{${\langle}$}\kern-1.94444pt\leavevmode\hbox{${\langle}$}}\operatorname{T}[\Phi(p)\overline{\Phi}(-p)]\mathclose{\hbox{${\rangle}$}\kern-1.94444pt\leavevmode\hbox{${\rangle}$}}=4\pi^{2}i/p^{2}$, accounting for an overall factor of $(4\pi^{2})^{4}$; on the other hand, each vertex involving the properly-normalized CFT operator $\frac{1}{\sqrt{2}}\Phi^{2}$ gives rise to a factor of $\sqrt{2}$, contributing in total another factor of 4 to our box integral): $G(s,w_{i},\cos\theta)=\frac{(2\pi)^{7}}{su}\frac{\log\left(\frac{1-1/z}{1-1/\bar{z}}\right)}{z-\bar{z}}.$ (4.5) $z$ and $\bar{z}$ are related to the dual conformal cross ratios, given in terms of $p_{i}=x_{i}-x_{i+1}$ by the two equations $z\bar{z}=\frac{x_{12}^{2}x_{34}^{2}}{x_{13}^{2}x_{24}^{2}}=\frac{p_{1}^{2}p_{3}^{2}}{su},\qquad\qquad(1-z)(1-\bar{z})=\frac{x_{14}^{2}x_{23}^{2}}{x_{13}^{2}x_{24}^{2}}=\frac{p_{2}^{2}p_{4}^{2}}{su}.$ (4.6) ### 4.2 IR divergences In spite of the relative simplicity of (4.5) in terms of $z$ and $\bar{z}$, the dependence on $w_{i}$ and $\cos\theta$ is quite intricate: determining $z$ and $\bar{z}$ requires solving a quadratic equation involving the Mandelstam invariant $u$, which is itself given by $\displaystyle u=-\frac{s}{2}\Big{[}\,$ $\displaystyle 1-w_{1}-w_{2}-w_{3}-w_{4}-(w_{1}-w_{2})(w_{3}-w_{4})$ $\displaystyle-\cos\theta\sqrt{(1-w_{1}-w_{2})^{2}-4w_{1}w_{2}}\sqrt{(1-w_{3}-w_{4})^{2}-4w_{3}w_{4}}\Big{]}.$ (4.7) A comparison with the conformal partial wave expansion at generic kinematics is quite difficult to achieve. 
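These kinematic relations are easy to verify numerically. The snippet below (our own check, with illustrative values $s=1$, $w_{i}=w=-0.3$, $\cos\theta=0.4$) computes $u$ from eq. (4.7), solves the quadratic implied by eq. (4.6) in the equal-mass case, and confirms that the roots agree with the closed-form equal-mass expression of eq. (4.8) quoted in the next paragraph:

```python
import math

def u_mandelstam(s, w, ct):
    # eq. (4.7) with all w_i = w: both square roots equal
    # sqrt((1-2w)^2 - 4w^2) = sqrt(1-4w), so the bracket
    # factorizes as (1-4w)(1 - cos theta)
    return -s / 2 * (1 - 4 * w) * (1 - ct)

def z_zbar_from_cross_ratios(s, w, ct):
    # eq. (4.6) with equal masses: z*zbar = (1-z)(1-zbar) = w^2 s / u,
    # hence z + zbar = 1 and z, zbar are the roots of t^2 - t + w^2 s/u = 0
    p = w * w * s / u_mandelstam(s, w, ct)
    disc = math.sqrt(1 - 4 * p)
    return (1 + disc) / 2, (1 - disc) / 2

def z_zbar_equal_mass(w, ct):
    # eq. (4.8) directly
    disc = math.sqrt(1 + 8 * w * w / ((1 - 4 * w) * (1 - ct)))
    return (1 + disc) / 2, (1 - disc) / 2
```

The agreement of the two routes also makes explicit why the quadratic simplifies in the equal-mass case: the two products in eq. (4.6) coincide, forcing $z+\bar{z}=1$.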
Instead, we will focus on the IR divergences of this box integral when some or all of the $p_{i}^{2}$ approach zero. In the simplest case where all “masses” are identical ($w_{i}=w$), we have $z,\bar{z}=\frac{1}{2}\left(1\pm\sqrt{1+\frac{8w^{2}}{(1-4w)(1-\cos\theta)}}\right),$ (4.8) and, as $w\to 0$, $G(s,w_{i}=w,\cos\theta)=-\frac{512\pi^{7}}{s^{2}(1-\cos\theta)}\log\left(\frac{2w^{2}}{1-\cos\theta}\right)+\ldots$ (4.9) The real part of the box integral has a logarithmic divergence in $w$, and moreover a pole in the forward limit $\cos\theta\to 1$. To perform the comparison with the conformal partial waves, one would like to expand $G$ in spin partial waves first, and write $G(s,w_{i},\cos\theta)=s^{-2}\sum_{\ell=0}^{\infty}\left(\ell+\tfrac{1}{2}\right)a_{\ell}(w_{i})P_{\ell}(\cos\theta),$ (4.10) or equivalently, using the orthogonality of Legendre polynomials, $a_{\ell}(w_{i})=\int_{-1}^{1}d\cos\theta\,P_{\ell}(\cos\theta)\,s^{2}G(s,w_{i},\cos\theta).$ (4.11) Note that the asymptotic limit in eq. (4.9) does not commute with the partial wave expansion, since the pole in $\cos\theta$ is not integrable. One must work instead with generic $w$, and take the limit $w\to 0$ at the end. To do so, note that $G$ can be rewritten as $G(s,w_{i},\cos\theta)=-\frac{64\pi^{7}}{s^{2}\Omega_{12}\Omega_{34}}\frac{\partial}{\partial\cos\theta}\left[\log\left(\frac{1-1/z}{1-1/\bar{z}}\right)\right]^{2}.$ (4.12) The angular integral can now be performed using integration by parts after taking the IR limit in the logarithm. For identical $w_{i}=w$, the limit $w\to 0$ gives $G(s,w_{i}=w,\cos\theta)=-\frac{256\pi^{7}}{s^{2}}\frac{\partial}{\partial\cos\theta}\left[\log\left(\frac{2w^{2}}{1-\cos\theta}\right)+\ldots\right]^{2},$ (4.13) and thus $a_{\ell}(w_{i}=w)=1024\pi^{7}[\log(-w)+H_{\ell}]^{2}+\ldots$ (4.14) where $H_{\ell}$ is the $\ell$-th harmonic number, and the ellipsis indicates that we have neglected terms of order $w\log(-w)$ and higher. 
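The asymptotic formula (4.9) can be checked pointwise against the exact result (4.5): evaluating both at a small negative mass parameter, the ratio should approach 1. A sketch of such a check (our own; the values $s=1$, $w=-10^{-4}$ and the angles are purely illustrative):

```python
import math

def G_exact(s, w, ct):
    """Eq. (4.5) in the equal-mass case w_i = w, using eqs. (4.7) and (4.8)."""
    u = -s / 2 * (1 - 4 * w) * (1 - ct)            # eq. (4.7) at w_i = w
    disc = math.sqrt(1 + 8 * w * w / ((1 - 4 * w) * (1 - ct)))
    z, zbar = (1 + disc) / 2, (1 - disc) / 2       # eq. (4.8)
    prefactor = (2 * math.pi)**7 / (s * u)
    return prefactor * math.log((1 - 1 / z) / (1 - 1 / zbar)) / (z - zbar)

def G_ir(s, w, ct):
    """Leading IR behaviour of eq. (4.9)."""
    return (-512 * math.pi**7 / (s**2 * (1 - ct))
            * math.log(2 * w * w / (1 - ct)))
```

The residual deviation from 1 is of the expected size, dominated by the neglected $\mathcal{O}(w\log(-w))$ terms and by subleading pieces of the logarithm.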
For each individual partial wave the leading IR divergence is now a double logarithm in $w$. To study a slightly more general form, one can also consider the situation in which the masses are equal two-by-two, say $w_{4}=w_{1}$ and $w_{3}=w_{2}$, and take the limit $w_{1}\to 0$ while keeping $w_{2}$ generic. Using eq. (4.12), one arrives at $a_{\ell}(w_{i})=\frac{256\pi^{7}}{(1-w_{2})^{2}}\left[\log(-w_{1})+\log(-w_{2})-2\log(1-w_{2})+2H_{\ell}\right]^{2}+\ldots,$ (4.15) neglecting terms that vanish as $w_{1}\to 0$. We shall now see in the next section how the partial waves (4.14) and (4.15) are matched one-to-one by the conformal partial waves. ### 4.3 Conformal partial wave expansion To compute the conformal partial waves, one simply needs to apply the recipe given in sections 2 and 3 to the 4-point function (4.4). We work with $d=4$, and with scaling dimensions of the external operator that are all equal, $\Delta_{i}=2$. Since the theory is non-interacting, it is straightforward to determine the spectrum of primary operators that enter the OPE of $\Phi^{2}$ and $\overline{\Phi}^{2}$, as these must be composites of the free field. There are two families of such operators: * • Operators of the schematic form $[\overline{\Phi}^{2}\square^{n}\partial^{\ell}\Phi^{2}]$, that have spin $\ell$ and scaling dimensions $\Delta=4+\ell+2n$. As these are precisely double-trace dimensions, the vertex functions involving them vanish. These operators do not contribute to the conformal partial wave expansion in momentum space. * • Operators of the form $[\overline{\Phi}\partial^{\ell}\Phi]$, with spin $\ell$ and scaling dimensions $\Delta=2+\ell$. There is one such operator for every spin $\ell\geq 0$, and each of them is a conserved current saturating the unitarity bound, with the exception of the scalar operator $[\overline{\Phi}\Phi]$. 
Therefore, the conformal partial wave decomposition (1.2) is a sum over integers labeled by the spin $\ell$, with each term given by the single polarization $m=\ell$. Plugging the operator data into the coefficient $C_{\Delta,\ell,\ell}$ of eq. (2.35), one gets $G(s,w_{i},\cos\theta)=s^{-2}\sum_{\ell=0}^{\infty}\left(\ell+\tfrac{1}{2}\right)P_{\ell}(\cos\theta)a_{\ell}(w_{i}),$ (4.16) where now $a_{\ell}(w_{i})=\lambda_{\Phi^{2}\overline{\Phi}^{2}[\overline{\Phi}\partial^{\ell}\Phi]}^{2}\frac{2^{3\ell}(\ell!)^{4}}{\pi^{3}(2\ell)!}\,V_{2+\ell,\ell,\ell}^{\operatorname{T}[\phi\phi]}(w_{1},w_{2})V_{2+\ell,\ell,\ell}^{\operatorname{T}[\phi\phi]}(w_{3},w_{4})^{*}.$ (4.17) The computation of the vertex functions is slightly more complicated, as one cannot simply plug the relevant scaling dimensions into eq. (3.27): the coefficients $f_{\Delta_{1}\Delta_{2};\Delta,\ell}$ diverge precisely when $\Delta_{i}=\frac{d}{2}$. Fortunately, it is straightforward to regularize the computation of these vertex functions by analytic continuation in $d$: the free scalar theory can be defined in any $d\neq 4$, and we shall see that the limit $d\to 4$ turns out to be finite, due to cancellations among the different terms in eq. (3.27). In generic $d$, the scaling dimension of $\Phi^{2}$ and $\overline{\Phi}^{2}$ is $\Delta_{i}=d-2$, and the conserved currents entering the OPE have $\Delta=d-2+\ell$. The coefficients $f_{\Delta_{1}\Delta_{2};\Delta,\ell}$ appearing in eq. 
(3.27) take the values $f_{\Delta_{1}\Delta_{2};\Delta,\ell}=\Gamma(d-3+\ell)\Gamma\left(\tfrac{4-d}{2}\right),\quad f_{\widetilde{\Delta}_{1}\widetilde{\Delta}_{2};\Delta,\ell}=\ell!\Gamma\left(\tfrac{d-4}{2}\right),\quad f_{\Delta_{1}\widetilde{\Delta}_{2};\Delta,\ell}=f_{\widetilde{\Delta}_{1}\Delta_{2};\Delta,\ell}=0.$ (4.18) Since the divergent parts of $f_{\Delta_{1}\Delta_{2};\Delta,\ell}$ and $f_{\widetilde{\Delta}_{1}\widetilde{\Delta}_{2};\Delta,\ell}$ cancel against each other in the limit $d\to 4$, the combination remains finite and the powers $(-w_{i})^{\Delta_{i}-d/2}$ turn into logarithms. The general structure of the vertex function is still quite complicated, as it involves Appell $F_{4}$ functions and derivatives of them with respect to their parameters. But it is simple to obtain the limit in which one of their arguments $w_{i}$ is small. For instance, as $w_{1}\to 0$, we get $V_{2+\ell,\ell,\ell}^{\operatorname{T}[\phi\phi]}(w_{1},w_{2})=8\pi^{5}i^{\ell+1}\frac{(2\ell)!}{2^{2\ell}(\ell!)^{3}}\frac{1}{1-w_{2}}\left[\log(-w_{1})+\log\left(\frac{-w_{2}}{(1-w_{2})^{2}}\right)+2H_{\ell}+\ldots\right].$ (4.19) This function already bears resemblance to the partial wave decomposition obtained above. Assuming all identical $w_{i}=w$, one gets $a_{\ell}(w_{i}=w)=256\pi^{7}\frac{(2\ell)!}{2^{\ell}(\ell!)^{2}}\lambda_{\Phi^{2}\overline{\Phi}^{2}[\overline{\Phi}\partial^{\ell}\Phi]}^{2}\left[\log(-w)+H_{\ell}+\ldots\right]^{2},$ (4.20) while with $w_{4}=w_{1}$ and $w_{3}=w_{2}$ we have $a_{\ell}(w_{i})=\frac{64\pi^{7}}{(1-w_{2})^{2}}\frac{(2\ell)!}{2^{\ell}(\ell!)^{2}}\lambda_{\Phi^{2}\overline{\Phi}^{2}[\overline{\Phi}\partial^{\ell}\Phi]}^{2}\left[\log(-w_{1})+\log(-w_{2})-2\log(1-w_{2})+2H_{\ell}+\ldots\right]^{2}.$ (4.21) These coefficients match precisely eqs. 
(4.14) and (4.15) provided that $\lambda_{\Phi^{2}\overline{\Phi}^{2}[\overline{\Phi}\partial^{\ell}\Phi]}^{2}=\frac{2^{\ell+2}(\ell!)^{2}}{(2\ell)!}.$ (4.22) This is indeed the known expression for the OPE coefficients in the complex scalar field theory in $d=4$ dimensions. This example therefore gives a non-trivial verification of our general results in a specific conformal field theory. ## 5 Conclusions In this work, we have presented a computation of the conformal partial waves in momentum space, with an emphasis on their factorization properties: they are given as a finite sum over vertex functions and polynomials in the scattering angle. The terms of this sum are in one-to-one correspondence with the ordinary spin partial waves. The vertex functions are simple to obtain using a recursion relation descending from the highest-spin function, and the latter is expressed in terms of Appell $F_{4}$ double hypergeometric functions, which are straightforward to evaluate numerically. In some particular cases where the scaling dimensions of the external operators take (half-)integer values, the vertex function must be obtained by analytic continuation: we gave an example where this situation happens in section 4. The outcome of this work is a new mathematical tool for the study of conformal field theory, which we hope will be useful in the future. Some directions that are certainly worth investigating are the possibility of using partial wave unitarity to put constraints on the CFT data, in the spirit of the S-matrix bootstrap [76], as well as the prospect of writing CFT dispersion relations directly in momentum space [39, 38]. We also believe that our computation of the conformal partial waves illustrates the power of Minkowski momentum-space techniques in CFT. 
It is well known that the orthogonality of momentum eigenstates allows one to factorize the computation of correlation functions: after all, this is the reason why the majority of quantum field theory results are obtained in momentum space. What is certainly less appreciated is the structural simplicity of the operator product expansion in momentum space. While 3-point functions are admittedly complicated, the fact that they depend by momentum conservation on only two momenta is very powerful: they can always be thought to live on a 2-dimensional slice of Minkowski space, and their spin can be projected onto this subspace (as already mentioned, this also implies that the ideal framework to work with momentum-space 3-point functions is in terms of spinor variables [62]). For this reason, the dependence on the scattering angle can always be factorized in the operator product expansion in momentum space. In fact, it is a simple exercise to generalize the computation of conformal partial waves to correlation functions involving more than four points, at least as long as one is interested in the comb channel [77, 78, 79, 80, 81]. Similarly, we believe that the inclusion of spinning external operators in the computation could be potentially simpler in momentum space than it is for ordinary conformal blocks in position space (see ref. [82], as well as the connection with the Mellin-space representation in refs. [83, 84]). Finally, several properties of our conformal partial waves could (and should) be studied in the future. First and foremost, we have not touched upon the question of the convergence of the momentum space OPE. This convergence is actually guaranteed in a distributional sense, since CFT correlation functions in Minkowski space are tempered distributions and so are their Fourier transforms, and the position-space OPE can be shown to converge distributionally [85]. 
Nevertheless, the situation can be quite subtle, as illustrated by the solution to this problem in $d=2$ dimensions [86]: depending on the kinematics, the OPE might or might not converge in a point-like manner. Other interesting properties of the momentum-space conformal partial wave expansion have to do with the remarkable fact that our results are valid in any space-time dimension $d\geq 3$. This would allow one to study them in the limit $d\to\infty$ [87, 88], or to examine whether recursion relations between various space-time dimensions can be found [89, 90, 91]. The analyticity in $d$ also implies the ability to study momentum-space correlation functions in non-integer dimensions, where unitarity is known to be broken [92, 93]. Similarly, if one projects the OPE onto ordinary spin partial waves by working at fixed $m$, then the result is not only analytic in $d$ but also in the spin $\ell$ of the exchanged operator: in this case one can imagine formulating an OPE inversion formula in momentum space [94, 95]. It would be interesting in any case to understand the concept of light-ray operators in the same context [96, 97, 98]. We leave the study of these interesting properties for future work. ## Acknowledgments The author would like to thank the organizers of the workshop “Cosmology Meets CFT Correlators 2020” at the National Taiwan University for their hospitality, as well as the workshop’s participants for numerous stimulating discussions. ## References * [1] R. Rattazzi, V. S. Rychkov, E. Tonni, and A. Vichi, “Bounding scalar operator dimensions in 4D CFT,” JHEP 12 (2008) 031, arXiv:0807.0004 [hep-th]. * [2] S. Rychkov, EPFL Lectures on Conformal Field Theory in $D\geq 3$ Dimensions. SpringerBriefs in Physics. 2016. arXiv:1601.05000 [hep-th]. * [3] D. Simmons-Duffin, “The Conformal Bootstrap,” in Theoretical Advanced Study Institute in Elementary Particle Physics: New Frontiers in Fields and Strings, pp. 1–74, 2017. arXiv:1602.07982 [hep-th]. * [4] D. 
Poland, S. Rychkov, and A. Vichi, “The Conformal Bootstrap: Theory, Numerical Techniques, and Applications,” Rev. Mod. Phys. 91 (2019) 015002, arXiv:1805.04405 [hep-th]. * [5] F. Dolan and H. Osborn, “Conformal four point functions and the operator product expansion,” Nucl. Phys. B 599 (2001) 459–496, arXiv:hep-th/0011040. * [6] F. Dolan and H. Osborn, “Conformal partial waves and the operator product expansion,” Nucl. Phys. B 678 (2004) 491–507, arXiv:hep-th/0309180. * [7] S. El-Showk, M. F. Paulos, D. Poland, S. Rychkov, D. Simmons-Duffin, and A. Vichi, “Solving the 3D Ising Model with the Conformal Bootstrap,” Phys. Rev. D 86 (2012) 025022, arXiv:1203.6064 [hep-th]. * [8] D. Simmons-Duffin, “Projectors, Shadows, and Conformal Blocks,” JHEP 04 (2014) 146, arXiv:1204.3894 [hep-th]. * [9] M. Hogervorst and S. Rychkov, “Radial Coordinates for Conformal Blocks,” Phys. Rev. D 87 (2013) 106004, arXiv:1303.1111 [hep-th]. * [10] F. Kos, D. Poland, and D. Simmons-Duffin, “Bootstrapping the $O(N)$ vector models,” JHEP 06 (2014) 091, arXiv:1307.6856 [hep-th]. * [11] F. Kos, D. Poland, and D. Simmons-Duffin, “Bootstrapping Mixed Correlators in the 3D Ising Model,” JHEP 11 (2014) 109, arXiv:1406.4858 [hep-th]. * [12] M. S. Costa, J. Penedones, D. Poland, and S. Rychkov, “Spinning Conformal Blocks,” JHEP 11 (2011) 154, arXiv:1109.6321 [hep-th]. * [13] J. Penedones, E. Trevisani, and M. Yamazaki, “Recursion Relations for Conformal Blocks,” JHEP 09 (2016) 070, arXiv:1509.00428 [hep-th]. * [14] M. S. Costa, T. Hansen, J. Penedones, and E. Trevisani, “Radial expansion for spinning conformal blocks,” JHEP 07 (2016) 057, arXiv:1603.05552 [hep-th]. * [15] L. Iliesiu, F. Kos, D. Poland, S. S. Pufu, D. Simmons-Duffin, and R. Yacoby, “Bootstrapping 3D Fermions,” JHEP 03 (2016) 120, arXiv:1508.00012 [hep-th]. * [16] L. Iliesiu, F. Kos, D. Poland, S. S. Pufu, D. Simmons-Duffin, and R. Yacoby, “Fermion-Scalar Conformal Blocks,” JHEP 04 (2016) 074, arXiv:1511.01497 [hep-th]. * [17] A. 
Castedo Echeverri, E. Elkhidir, D. Karateev, and M. Serone, “Deconstructing Conformal Blocks in 4D CFT,” JHEP 08 (2015) 101, arXiv:1505.03750 [hep-th]. * [18] A. Castedo Echeverri, E. Elkhidir, D. Karateev, and M. Serone, “Seed Conformal Blocks in 4D CFT,” JHEP 02 (2016) 183, arXiv:1601.05325 [hep-th]. * [19] H. Isono, “On conformal correlators and blocks with spinors in general dimensions,” Phys. Rev. D 96 no. 6, (2017) 065011, arXiv:1706.02835 [hep-th]. * [20] P. Kravchuk, “Casimir recursion relations for general conformal blocks,” JHEP 02 (2018) 011, arXiv:1709.05347 [hep-th]. * [21] G. F. Cuomo, D. Karateev, and P. Kravchuk, “General Bootstrap Equations in 4D CFTs,” JHEP 01 (2018) 130, arXiv:1705.05401 [hep-th]. * [22] D. Karateev, P. Kravchuk, and D. Simmons-Duffin, “Harmonic Analysis and Mean Field Theory,” JHEP 10 (2019) 217, arXiv:1809.05111 [hep-th]. * [23] D. Karateev, P. Kravchuk, M. Serone, and A. Vichi, “Fermion Conformal Bootstrap in 4d,” JHEP 06 (2019) 088, arXiv:1902.05969 [hep-th]. * [24] R. S. Erramilli, L. V. Iliesiu, and P. Kravchuk, “Recursion relation for general 3d blocks,” JHEP 12 (2019) 116, arXiv:1907.11247 [hep-th]. * [25] V. Schomerus and E. Sobko, “From Spinning Conformal Blocks to Matrix Calogero-Sutherland Models,” JHEP 04 (2018) 052, arXiv:1711.02022 [hep-th]. * [26] M. Isachenkov and V. Schomerus, “Integrability of conformal blocks. Part I. Calogero-Sutherland scattering theory,” JHEP 07 (2018) 180, arXiv:1711.06609 [hep-th]. * [27] J.-F. Fortin and W. Skiba, “Conformal Bootstrap in Embedding Space,” Phys. Rev. D 93 no. 10, (2016) 105047, arXiv:1602.05794 [hep-th]. * [28] J.-F. Fortin and W. Skiba, “Conformal Differential Operator in Embedding Space and its Applications,” JHEP 07 (2019) 093, arXiv:1612.08672 [hep-th]. * [29] J.-F. Fortin, V. Prilepina, and W. Skiba, “Conformal Four-Point Correlation Functions from the Operator Product Expansion,” JHEP 08 (2020) 115, arXiv:1907.10506 [hep-th]. * [30] G. 
Mack, “All unitary ray representations of the conformal group SU(2,2) with positive energy,” Commun. Math. Phys. 55 (1977) 1. * [31] M. Gillioz, X. Lu, and M. A. Luty, “Scale Anomalies, States, and Rates in Conformal Field Theory,” JHEP 04 (2017) 171, arXiv:1612.07800 [hep-th]. * [32] M. Gillioz, X. Lu, and M. A. Luty, “Graviton Scattering and a Sum Rule for the c Anomaly in 4D CFT,” JHEP 09 (2018) 025, arXiv:1801.05807 [hep-th]. * [33] R. Gopakumar, A. Kaviraj, K. Sen, and A. Sinha, “Conformal Bootstrap in Mellin Space,” Phys. Rev. Lett. 118 no. 8, (2017) 081601, arXiv:1609.00572 [hep-th]. * [34] R. Gopakumar and A. Sinha, “On the Polyakov-Mellin bootstrap,” JHEP 12 (2018) 040, arXiv:1809.10975 [hep-th]. * [35] J. Penedones, J. A. Silva, and A. Zhiboedov, “Nonperturbative Mellin Amplitudes: Existence, Properties, Applications,” JHEP 08 (2020) 031, arXiv:1912.11100 [hep-th]. * [36] H. Isono, T. Noumi, and G. Shiu, “Momentum space approach to crossing symmetric CFT correlators,” JHEP 07 (2018) 136, arXiv:1805.11107 [hep-th]. * [37] H. Isono, T. Noumi, and G. Shiu, “Momentum space approach to crossing symmetric CFT correlators. Part II. General spacetime dimension,” JHEP 10 (2019) 183, arXiv:1908.04572 [hep-th]. * [38] S. Caron-Huot, D. Mazáč, L. Rastelli, and D. Simmons-Duffin, “Dispersive CFT Sum Rules,” arXiv:2008.04931 [hep-th]. * [39] D. Carmi and S. Caron-Huot, “A Conformal Dispersion Relation: Correlations from Absorption,” JHEP 09 (2020) 009, arXiv:1910.12123 [hep-th]. * [40] C. Corianò and M. M. Maglio, “On Some Hypergeometric Solutions of the Conformal Ward Identities of Scalar 4-point Functions in Momentum Space,” JHEP 09 (2019) 107, arXiv:1903.05047 [hep-th]. * [41] A. Bzowski, P. McFadden, and K. Skenderis, “Conformal $n$-point functions in momentum space,” Phys. Rev. Lett. 124 no. 13, (2020) 131602, arXiv:1910.10162 [hep-th]. * [42] C. Corianò and M. M. 
Maglio, “The Generalized Hypergeometric Structure of the Ward Identities of CFT’s in Momentum Space in $d>2$,” Axioms 9 no. 2, (2020) 54, arXiv:2001.09622 [hep-th]. * [43] C. Corianò and M. M. Maglio, “Conformal Field Theory in Momentum Space and Anomaly Actions in Gravity: The Analysis of 3- and 4-Point Functions,” arXiv:2005.06873 [hep-th]. * [44] A. Bzowski, P. McFadden, and K. Skenderis, “Conformal correlators as simplex integrals in momentum space,” arXiv:2008.07543 [hep-th]. * [45] L. Corcoran and M. Staudacher, “The Dual Conformal Box Integral in Minkowski Space,” arXiv:2006.11292 [hep-th]. * [46] L. Corcoran, F. Loebbert, J. Miczajka, and M. Staudacher, “Minkowski Box from Yangian Bootstrap,” arXiv:2012.07852 [hep-th]. * [47] D. Mazáč, “Analytic bounds and emergence of AdS2 physics from the conformal bootstrap,” JHEP 04 (2017) 146, arXiv:1611.10060 [hep-th]. * [48] D. Mazáč and M. F. Paulos, “The analytic functional bootstrap. Part I: 1D CFTs and 2D S-matrices,” JHEP 02 (2019) 162, arXiv:1803.10233 [hep-th]. * [49] D. Mazáč and M. F. Paulos, “The analytic functional bootstrap. Part II. Natural bases for the crossing equation,” JHEP 02 (2019) 163, arXiv:1811.10646 [hep-th]. * [50] D. Mazáč, “A Crossing-Symmetric OPE Inversion Formula,” JHEP 06 (2019) 082, arXiv:1812.02254 [hep-th]. * [51] M. F. Paulos, “Analytic functional bootstrap for CFTs in $d>1$,” JHEP 04 (2020) 093, arXiv:1910.08563 [hep-th]. * [52] D. Mazáč, L. Rastelli, and X. Zhou, “A Basis of Analytic Functionals for CFTs in General Dimension,” arXiv:1910.12855 [hep-th]. * [53] N. Arkani-Hamed and J. Maldacena, “Cosmological Collider Physics,” arXiv:1503.08043 [hep-th]. * [54] H. Lee, D. Baumann, and G. L. Pimentel, “Non-Gaussianity as a Particle Detector,” JHEP 12 (2016) 040, arXiv:1607.03735 [hep-th]. * [55] N. Arkani-Hamed, D. Baumann, H. Lee, and G. L. 
Pimentel, “The Cosmological Bootstrap: Inflationary Correlators from Symmetries and Singularities,” JHEP 04 (2020) 105, arXiv:1811.00024 [hep-th]. * [56] C. Sleight, “A Mellin Space Approach to Cosmological Correlators,” JHEP 01 (2020) 090, arXiv:1906.12302 [hep-th]. * [57] C. Sleight and M. Taronna, “Bootstrapping Inflationary Correlators in Mellin Space,” JHEP 02 (2020) 098, arXiv:1907.01143 [hep-th]. * [58] D. Baumann, C. Duaso Pueyo, A. Joyce, H. Lee, and G. L. Pimentel, “The Cosmological Bootstrap: Weight-Shifting Operators and Scalar Seeds,” arXiv:1910.14051 [hep-th]. * [59] D. Baumann, C. Duaso Pueyo, A. Joyce, H. Lee, and G. L. Pimentel, “The Cosmological Bootstrap: Spinning Correlators from Symmetries and Factorization,” arXiv:2005.04234 [hep-th]. * [60] H. Isono, H. M. Liu, and T. Noumi, “Wavefunctions in dS/CFT revisited: principal series and double-trace deformations,” arXiv:2011.09479 [hep-th]. * [61] C. Sleight and M. Taronna, “From AdS to dS Exchanges: Spectral Representation, Mellin Amplitudes and Crossing,” arXiv:2007.09993 [hep-th]. * [62] M. Gillioz, in preparation. * [63] M. Gillioz, “Momentum-space conformal blocks on the light cone,” JHEP 10 (2018) 125, arXiv:1807.07003 [hep-th]. * [64] M. Gillioz, “Conformal 3-point functions and the Lorentzian OPE in momentum space,” Commun. Math. Phys. 379 no. 1, (2020) 227–259, arXiv:1909.00878 [hep-th]. * [65] C. Coriano, L. Delle Rose, E. Mottola, and M. Serino, “Solving the Conformal Constraints for Scalar Operators in Momentum Space and the Evaluation of Feynman’s Master Integrals,” JHEP 07 (2013) 011, arXiv:1304.6944 [hep-th]. * [66] A. Bzowski, P. McFadden, and K. Skenderis, “Implications of conformal invariance in momentum space,” JHEP 03 (2014) 111, arXiv:1304.7760 [hep-th]. * [67] H. Exton, “On the system of partial differential equations associated with Appell’s function F4,” Phys. A: Math. Gen. 28 (1995) 631. * [68] T. Bautista and H. 
an algorithm that constructs $\mathsf{qGR}_{k}(X,Y)$ with a delay of at most $k^{2}$ characters in $\tilde{\mathcal{O}}(k^{3})$ amortised time per character and $\tilde{\mathcal{O}}(k^{2})$ space. The delay means that at the moment when the $y$-th character of $Y$ arrives, the algorithm knows $\mathsf{qGR}_{k}(X[\mathinner{.\,.\allowbreak}y^{\prime}],Y[\mathinner{.\,.\allowbreak}y^{\prime}])$, where $|y-y^{\prime}|\leq k^{2}$. ###### Proof. If $\ell\leq k^{2}$, construct $\mathsf{qGR}_{k}(X,Y)$ via 5.18 in $\tilde{\mathcal{O}}(k^{3})$ (total) time and $\tilde{\mathcal{O}}(k^{2})$ space. Otherwise, partition the strings into non-overlapping blocks $X=X_{1}\cdots X_{p}$ and $Y=Y_{1}\cdots Y_{p}$ so that $|X_{i}|=|Y_{i}|=k^{2}$ for all $i\in[1\mathinner{.\,.\allowbreak}p)$ and $|X_{p}|=|Y_{p}|=\ell\bmod k^{2}$. Suppose that we have computed $\mathsf{qGR}_{k}(X_{1}\cdots X_{i-1},Y_{1}\cdots Y_{i-1})$ for $i\in[1\mathinner{.\,.\allowbreak}p)$. Compute $\mathsf{qGR}_{k}(X_{1}\cdots X_{i},Y_{1}\cdots Y_{i})$ in the following manner: first, compute $\mathsf{qGR}_{k}(X_{i},Y_{i})$ in $\mathcal{O}(k^{5})$ time and $\mathcal{O}(k^{2})$ space via 5.18, and then compute $\mathsf{qGR}_{k}(X_{1}\cdots X_{i},Y_{1}\cdots Y_{i})$ in $\tilde{\mathcal{O}}(k^{5})$ time and $\tilde{\mathcal{O}}(k^{2})$ space via Lemma 5.21. ∎ ### 5.4 Products of Greedy Alignments See 3.12 See 3.13 ###### Proof. We proceed by induction on $|X|+|Y|+|Z|$. In the base case, when at least one of the strings $X,Y,Z$ is empty, we set $\mathcal{A}^{X,Y}$ and $\mathcal{A}^{Y,Z}$ to be any greedy optimal alignments of $X,Y$ and $Y,Z$, respectively, so that $\mathsf{cost}(\mathcal{A}^{X,Y})=\mathsf{ed}(X,Y)$ and $\mathsf{cost}(\mathcal{A}^{Y,Z})=\mathsf{ed}(Y,Z)\leq\mathsf{ed}(X,Y)+\mathsf{ed}(X,Z)\leq\mathsf{ed}(X,Y)+\mathsf{cost}(\mathcal{A}^{X,Z})$.
Moreover, it is easy to check that $\mathcal{A}^{X,Z}$ is a product of $\mathcal{A}^{X,Y}$ and $\mathcal{A}^{Y,Z}$ because two out of these three alignments simply delete all characters of the non-empty string. In the inductive step, we assume that all strings $X,Y,Z$ are non-empty, and we consider several cases. 1. 1. $\mathbf{X[1]=Z[1]=Y[1]}$. We recurse on $X^{\prime}=X[2\mathinner{.\,.\allowbreak}]$, $Y^{\prime}=Y[2\mathinner{.\,.\allowbreak}]$, $Z^{\prime}=Z[2\mathinner{.\,.\allowbreak}]$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[2\mathinner{.\,.\allowbreak}],[2\mathinner{.\,.\allowbreak}]}$, which is greedy due to $X[1]\simeq_{\mathcal{A}^{X,Z}}Z[1]$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost at most $d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})+\mathsf{ed}(X,Y)=d$. We extend them so that $X[1]\simeq_{\mathcal{A}^{X,Y}}Y[1]$ and $Y[1]\simeq_{\mathcal{A}^{Y,Z}}Z[1]$, obtaining alignments of cost up to $d$. 2. 2. $\mathbf{X[1]=Z[1]\neq Y[1]}$. In this case, we have $\mathsf{ed}(X,Y)>\min(\mathsf{ed}(X[2\mathinner{.\,.\allowbreak}],Y[2\mathinner{.\,.\allowbreak}]),\mathsf{ed}(X[2\mathinner{.\,.\allowbreak}],Y),\mathsf{ed}(X,Y[2\mathinner{.\,.\allowbreak}]))$. 1. (a) If $\mathsf{ed}(X,Y)>\mathsf{ed}(X[2\mathinner{.\,.\allowbreak}],Y[2\mathinner{.\,.\allowbreak}])$, we recurse on $X^{\prime}=X[2\mathinner{.\,.\allowbreak}]$, $Y^{\prime}=Y[2\mathinner{.\,.\allowbreak}]$, $Z^{\prime}=Z[2\mathinner{.\,.\allowbreak}]$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[2\mathinner{.\,.\allowbreak}],[2\mathinner{.\,.\allowbreak}]}$, which is greedy due to $X[1]\simeq_{\mathcal{A}^{X,Z}}Z[1]$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost at most $d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})+\mathsf{ed}(X,Y)-1=d-1$.
We extend them so that $X[1]\sim_{\mathcal{A}^{X,Y}}Y[1]$ and $Y[1]\sim_{\mathcal{A}^{Y,Z}}Z[1]$, obtaining alignments of cost up to $d$. 2. (b) If $\mathsf{ed}(X,Y)>\mathsf{ed}(X[2\mathinner{.\,.\allowbreak}],Y)$, we recurse on $X^{\prime}=X[2\mathinner{.\,.\allowbreak}]$, $Y^{\prime}=Y$, $Z^{\prime}=Z[2\mathinner{.\,.\allowbreak}]$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[2\mathinner{.\,.\allowbreak}],[2\mathinner{.\,.\allowbreak}]}$, which is greedy due to $X[1]\simeq_{\mathcal{A}^{X,Z}}Z[1]$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost at most $d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})+\mathsf{ed}(X,Y)-1=d-1$. We extend them so that $\mathcal{A}^{X,Y}$ deletes $X[1]$ and $\mathcal{A}^{Y,Z}$ deletes $Z[1]$, obtaining alignments of cost up to $d$. 3. (c) If $\mathsf{ed}(X,Y)>\mathsf{ed}(X,Y[2\mathinner{.\,.\allowbreak}])$, we recurse on $X^{\prime}=X$, $Y^{\prime}=Y[2\mathinner{.\,.\allowbreak}]$, $Z^{\prime}=Z$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost at most $d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})+\mathsf{ed}(X,Y)-1=d-1$. We extend them so that $\mathcal{A}^{X,Y}$ and $\mathcal{A}^{Y,Z}$ both delete $Y[1]$, obtaining alignments of cost up to $d$. 3. 3. $\mathbf{X[1]\neq Z[1]=Y[1]}$. 1. (a) If $\mathcal{A}^{X,Z}$ deletes $X[1]$, we recurse on $X^{\prime}=X[2\mathinner{.\,.\allowbreak}]$, $Y^{\prime}=Y$, $Z^{\prime}=Z$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[2\mathinner{.\,.\allowbreak}],[1\mathinner{.\,.\allowbreak}]}$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost $\leq d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})-2+\mathsf{ed}(X^{\prime},Y^{\prime})\leq d-1$.
We derive $\mathcal{A}^{Y,Z}=\mathcal{A}^{Y^{\prime},Z^{\prime}}$ and extend $\mathcal{A}^{X^{\prime},Y^{\prime}}$ so that $\mathcal{A}^{X,Y}$ deletes $X[1]$, obtaining an alignment of cost up to $d$. 2. (b) If $\mathcal{A}^{X,Z}$ deletes $Z[1]$, we recurse on $X^{\prime}=X$, $Y^{\prime}=Y[2\mathinner{.\,.\allowbreak}]$, $Z^{\prime}=Z[2\mathinner{.\,.\allowbreak}]$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[1\mathinner{.\,.\allowbreak}],[2\mathinner{.\,.\allowbreak}]}$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost $\leq d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})-2+\mathsf{ed}(X^{\prime},Y^{\prime})\leq d-1$. We extend them so that $\mathcal{A}^{X,Y}$ deletes $Y[1]$ and $Y[1]\simeq_{\mathcal{A}^{Y,Z}}Z[1]$, obtaining alignments of cost up to $d$. 3. (c) If $X[1]\sim_{\mathcal{A}^{X,Z}}Z[1]$, we recurse on $X^{\prime}=X[2\mathinner{.\,.\allowbreak}]$, $Y^{\prime}=Y[2\mathinner{.\,.\allowbreak}]$, $Z^{\prime}=Z[2\mathinner{.\,.\allowbreak}]$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[2\mathinner{.\,.\allowbreak}],[2\mathinner{.\,.\allowbreak}]}$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost $\leq d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})-2+\mathsf{ed}(X^{\prime},Y^{\prime})\leq d-1$. We extend them so that $X[1]\sim_{\mathcal{A}^{X,Y}}Y[1]$ and $Y[1]\simeq_{\mathcal{A}^{Y,Z}}Z[1]$, obtaining alignments of cost up to $d$. 4. 4. $\mathbf{X[1]\neq Z[1]\neq Y[1]}$. (Note that this case allows both $X[1]=Y[1]$ and $X[1]\neq Y[1]$.) 1. (a) If $\mathcal{A}^{X,Z}$ deletes $X[1]$, we recurse on $X^{\prime}=X[2\mathinner{.\,.\allowbreak}]$, $Y^{\prime}=Y[2\mathinner{.\,.\allowbreak}]$, $Z^{\prime}=Z$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[2\mathinner{.\,.\allowbreak}],[1\mathinner{.\,.\allowbreak}]}$.
This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost at most $d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})-2+\mathsf{ed}(X^{\prime},Y^{\prime})\leq d-2$. We extend them so that $X[1]\sim_{\mathcal{A}^{X,Y}}Y[1]$ and $\mathcal{A}^{Y,Z}$ deletes $Y[1]$, obtaining alignments of cost up to $d-1$. 2. (b) If $\mathcal{A}^{X,Z}$ deletes $Z[1]$, we recurse on $X^{\prime}=X$, $Y^{\prime}=Y$, $Z^{\prime}=Z[2\mathinner{.\,.\allowbreak}]$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[1\mathinner{.\,.\allowbreak}],[2\mathinner{.\,.\allowbreak}]}$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost $\leq d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})-2+\mathsf{ed}(X,Y)=d-2$. We derive $\mathcal{A}^{X,Y}=\mathcal{A}^{X^{\prime},Y^{\prime}}$ and extend $\mathcal{A}^{Y^{\prime},Z^{\prime}}$ so that $\mathcal{A}^{Y,Z}$ deletes $Z[1]$, obtaining an alignment of cost up to $d-1$. 3. (c) If $X[1]\sim_{\mathcal{A}^{X,Z}}Z[1]$, we recurse on $X^{\prime}=X[2\mathinner{.\,.\allowbreak}]$, $Y^{\prime}=Y[2\mathinner{.\,.\allowbreak}]$, $Z^{\prime}=Z[2\mathinner{.\,.\allowbreak}]$, and $\mathcal{A}^{X^{\prime},Z^{\prime}}=\mathcal{A}^{X,Z}_{[2\mathinner{.\,.\allowbreak}],[2\mathinner{.\,.\allowbreak}]}$. This yields greedy alignments $\mathcal{A}^{X^{\prime},Y^{\prime}}\\!\\!,\mathcal{A}^{Y^{\prime},Z^{\prime}}$ of cost $\leq d^{\prime}=2\mathsf{cost}(\mathcal{A}^{X,Z})-2+\mathsf{ed}(X^{\prime},Y^{\prime})\leq d-2$. We extend them so that $X[1]\sim_{\mathcal{A}^{X,Y}}Y[1]$ and $Y[1]\sim_{\mathcal{A}^{Y,Z}}Z[1]$, obtaining alignments of cost up to $d-1$. In all the cases above, $\mathcal{A}^{X,Y}$ is greedy because $\mathcal{A}^{X^{\prime},Y^{\prime}}$ is greedy and $\mathcal{A}^{X,Y}$ matches $X[1]$ with $Y[1]$ whenever $X[1]=Y[1]$.
Similarly, $\mathcal{A}^{Y,Z}$ is greedy because $\mathcal{A}^{Y^{\prime},Z^{\prime}}$ is greedy and $\mathcal{A}^{Y,Z}$ matches $Y[1]$ with $Z[1]$ whenever $Y[1]=Z[1]$. Moreover, $\mathcal{A}^{X,Z}$ is a product of $\mathcal{A}^{X,Y}$ and $\mathcal{A}^{Y,Z}$ because each alignment starts with $(1,1)$ and because $\mathcal{A}^{X^{\prime},Z^{\prime}}$ is a product of $\mathcal{A}^{X^{\prime},Y^{\prime}}$ and $\mathcal{A}^{Y^{\prime},Z^{\prime}}$. ∎ Assume that we are given three strings $X,Y,Z$. Let $d=\mathsf{ed}(X,Y)+2k$, and define $\mathcal{G}_{X}=\mathsf{GR}_{d}(X,Y)$ and $\mathcal{G}_{Z}=\mathsf{GR}_{d}(Y,Z)$. We show that given $\mathcal{G}_{X}$ and $\mathcal{G}_{Z}$, we can compute an optimal alignment between $X$ and $Z$ efficiently if its cost is at most $k$. Let $M=\\{(x,z):\exists y\text{ such that }(x,y)\in\mathcal{M}_{d}(X,Y)\text{ and }(y,z)\in\mathcal{M}_{d}(Y,Z)\\}$. ###### Lemma 5.23. If $\mathsf{ed}(X,Z)\leq k$, then $M\subseteq\mathcal{M}_{k}(X,Z)$. ###### Proof. Suppose that $(x,y)\in\mathcal{M}_{d}(X,Y)$ and $(y,z)\in\mathcal{M}_{d}(Y,Z)$. For a proof by contradiction, suppose that $X[x]\not\sim_{\mathcal{A}}Z[z]$ for some $\mathcal{A}\in\mathsf{GA}_{k}(X,Z)$. Then, there exists $(x^{\prime},z^{\prime})\in\mathcal{A}$ such that either $x^{\prime}\leq x$ and $z^{\prime}>z$, or $x^{\prime}>x$ and $z^{\prime}\leq z$. By symmetry, without loss of generality, we consider the first alternative. By Lemma 3.13, $\mathcal{A}$ is a product of $\mathcal{A}^{X,Y}\in\mathsf{GA}_{d}(X,Y)$ and $\mathcal{A}^{Y,Z}\in\mathsf{GA}_{d}(Y,Z)$. According to Definition 3.12, this means that there exists $y^{\prime}$ such that $(x^{\prime},y^{\prime})\in\mathcal{A}^{X,Y}$ and $(y^{\prime},z^{\prime})\in\mathcal{A}^{Y,Z}$. If $y^{\prime}\leq y$, then $(y,z),(y^{\prime},z^{\prime})\in\mathcal{A}^{Y,Z}$ implies that $\mathcal{A}^{Y,Z}$ deletes $Z[z]$, contradicting $(y,z)\in\mathcal{M}(\mathcal{A}^{Y,Z})$.
Similarly, if $y^{\prime}>y$, then $(x,y),(x^{\prime},y^{\prime})\in\mathcal{A}^{X,Y}$ implies that $\mathcal{A}^{X,Y}$ deletes $Y[y]$, contradicting $(x,y)\in\mathcal{M}(\mathcal{A}^{X,Y})$. This completes the proof that $X[x]\sim_{\mathcal{A}}Z[z]$ for every $\mathcal{A}\in\mathsf{GA}_{k}(X,Z)$. Due to $X[x]=Y[y]=Z[z]$, we also have $X[x]\simeq_{\mathcal{A}}Z[z]$ for every $\mathcal{A}\in\mathsf{GA}_{k}(X,Z)$, i.e., $(x,z)\in\mathcal{M}_{k}(X,Z)$ holds as claimed. ∎ ###### Lemma 5.24. If $\mathsf{ed}(X,Y)=\mathcal{O}(k)$ and $\mathcal{G}_{X},\mathcal{G}_{Z}\neq\bot$, then $\mathsf{E}^{M}(X,Z)$ can be computed in $\tilde{\mathcal{O}}(k^{3})$ time and $\tilde{\mathcal{O}}(k^{2})$ space. ###### Proof. Let us first explain how $X^{M}$ can be constructed. Consider a position $x\in[1\mathinner{.\,.\allowbreak}|X|]$ such that $(x,z)\notin M$ for every $z\in[1\mathinner{.\,.\allowbreak}|Z|]$. If $X^{\mathcal{M}_{d}(X,Y)}[x]\neq\\#$, then $X^{M}[x]=X[x]=X^{\mathcal{M}_{d}(X,Y)}[x]$. Otherwise, $(x,y)\in\mathcal{M}_{d}(X,Y)$ for some $y\in[1\mathinner{.\,.\allowbreak}|Y|]$. By Lemma 5.23, we have $(y,z)\notin\mathcal{M}_{d}(Y,Z)$ for every $z\in[1\mathinner{.\,.\allowbreak}|Z|]$. Hence, $Y^{\mathcal{M}_{d}(Y,Z)}[y]\neq\\#$, and we have $X[x]=Y[y]=Y^{\mathcal{M}_{d}(Y,Z)}[y]$. In words, non-dummy characters of $X^{M}$ can be retrieved from non-dummy characters of $X^{\mathcal{M}_{d}(X,Y)}$ and non-dummy characters of $Y^{\mathcal{M}_{d}(Y,Z)}$. Thus, for each dummy segment $[\ell\mathinner{.\,.\allowbreak}r)$ in $X^{\mathcal{M}_{d}(X,Y)}$ and its counterpart $[\ell^{\prime}\mathinner{.\,.\allowbreak}r^{\prime})$ in $Y^{\mathcal{M}_{d}(X,Y)}$, we need to set $X^{M}[\ell\mathinner{.\,.\allowbreak}r):=Y^{\mathcal{M}_{d}(Y,Z)}[\ell^{\prime}\mathinner{.\,.\allowbreak}r^{\prime})$. Such a copy-paste operation can be implemented using Proposition 4.2(g)(i), in $\mathcal{O}(k^{2}\log^{4}n)$ time per dummy segment.
We can also keep track of the dummy segments in $X^{M}$ within the same procedure. The algorithm for $Z^{M}$ is symmetric, and thus we can construct $\mathsf{D}(X^{M}Z^{M})$ along with the dummy segments in $\tilde{\mathcal{O}}(k^{3})$ time and $\tilde{\mathcal{O}}(k^{2})$ space. Finally, we build $\mathsf{RS}_{\\#}(X^{M})$ and $\mathsf{RS}_{\\#}(Z^{M})$ in $\mathcal{O}(k)$ time. ∎ ###### Corollary 5.25. Suppose that we are given $\mathcal{G}_{X}$ and $\mathcal{G}_{Z}$. If $\mathsf{ed}(X,Y)=\mathcal{O}(k)$, then we can compute $\min(k+1,\mathsf{ed}(X,Z))$ in $\tilde{\mathcal{O}}(k^{3})$ time and $\tilde{\mathcal{O}}(k^{2})$ space. ###### Proof. If $\mathcal{G}_{X}$ or $\mathcal{G}_{Z}$ equals $\bot$, then $\mathsf{ed}(X,Z)>k$ by Lemma 3.13. Otherwise, construct $\mathsf{E}^{M}(X,Z)$ using Lemma 5.24. Finally, we pass $k$ and $\mathsf{E}^{M}(X,Z)$ to the algorithm of Corollary 5.5, and we return the resulting value $d$. Note that $M$ is a non-crossing matching of $X,Z$, so $d=k+1$ is correctly returned if $\mathsf{ed}(X,Z)>k$. If $\mathsf{ed}(X,Z)\leq k$, then Lemma 5.23 implies $M\subseteq\mathcal{M}_{k}(X,Z)$, and thus $d=\mathsf{ed}(X,Z)$ holds as claimed. The overall runtime and space complexity are dominated by the procedure of Lemma 5.24. ∎ ###### Remark 5.26. Using Proposition 5.12 instead of Corollary 5.5, we could construct $\mathsf{GR}_{k}(X,Z)$. ###### Corollary 5.27. Consider three strings $X,Y,Z$. Let $d=\mathsf{ed}(X,Y)+2k$, and assume that we are given $\mathsf{qGR}_{d}(X,Y)$ and $\mathsf{qGR}_{d}(Y,Z)$. If $\mathsf{ed}(X,Y)=\mathcal{O}(k)$, then we can compute $\min(k+1,\mathsf{ed}(X,Z))$ in $\tilde{\mathcal{O}}(k^{3})$ time and $\tilde{\mathcal{O}}(k^{2})$ space. ###### Proof.
We compute $\mathcal{G}_{X}=\mathsf{GR}_{d+1}(\$_{1}X,\$_{2}Y)$ and $\mathcal{G}_{Z}=\mathsf{GR}_{d+1}(\$_{2}Y,\$_{3}Z)$ in $\tilde{\mathcal{O}}(k^{2})$ time and space via Corollary 5.17, and apply Corollary 5.25 to compute $\min(k+2,\mathsf{ed}(\$_{1}X,\$_{3}Z))-1=\min(k+1,\mathsf{ed}(X,Z))$ in $\tilde{\mathcal{O}}(k^{3})$ time and $\tilde{\mathcal{O}}(k^{2})$ space. ∎ ## 6 Edit Distance Sketches ### 6.1 CGK embedding In this section, we prove Proposition 3.16 based on the CGK embedding introduced in [10]. Recall that the Hamming distance between the embeddings of two strings $X,Y\in\Sigma^{\leq n}$ is bounded in terms of the edit distance $\mathsf{ed}(X,Y)$, which allows using Hamming distance sketches to approximate edit distance. ###### Definition 6.1 (CGK embedding [10]). Consider an alphabet $\Sigma$, a sentinel character $\bot\notin\Sigma$, and a 2-independent family $\mathcal{H}$ of hash functions $h:\Sigma\to\\{0,1\\}$. For an integer $n\in\mathbb{Z}_{+}$, a (uniformly random) sequence $R\in\mathcal{H}^{3n}$, and a string $S\in\Sigma^{\leq n}$, the _CGK walk_ $W_{\mathsf{CGK}}(S)=(s_{t})_{t=1}^{3n+1}$ and the _CGK embedding_ $\mathsf{CGK}(S)\in(\Sigma\cup\\{\bot\\})^{3n}$ are defined by the following algorithm:
Input: An integer $n\in\mathbb{Z}_{+}$, a string $S\in\Sigma^{\leq n}$.
Randomness: A sequence $R\in\mathcal{H}^{3n}$ of 2-independent hash functions $\Sigma\to\\{0,1\\}$.
Output: The CGK walk $W_{\mathsf{CGK}}(S)=(s_{t})_{t=1}^{3n+1}$ and the CGK embedding $\mathsf{CGK}(S)$.
$s_{1}:=1$;
for $t:=1$ to $3n$ do
  if $s_{t}\leq|S|$ then
    $\mathsf{CGK}(S)[t]:=S[s_{t}]$;
    $s_{t+1}:=s_{t}+R_{t}(S[s_{t}])$;
  else
    $\mathsf{CGK}(S)[t]:=\bot$;
    $s_{t+1}:=s_{t}$;
Algorithm 2: The CGK algorithm
Recall from Definition 3.14 that an $m$-step walk over $S$ is complete if $s_{m+1}=|S|+1$. ###### Fact 6.2 ([10, Theorem 4.1]). Each $S\in\Sigma^{\leq n}$ satisfies $\Pr_{R}[W_{\mathsf{CGK}}(S)\text{ is complete}]\geq 1-e^{-\Omega(n)}$.
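For concreteness, Algorithm 2 can be transcribed almost line-by-line into Python. This is an illustrative sketch: the hash functions are supplied as a plain list `R` of callables $\Sigma\to\\{0,1\\}$ (how `R` is drawn from a 2-independent family is outside its scope), and the sentinel $\bot$ is represented by `None`.

```python
def cgk(S, n, R):
    """CGK walk (s_1, ..., s_{3n+1}) and embedding CGK(S) of a string S.

    S has length at most n, and R[t] maps a character to 0 or 1.
    Positions are 1-indexed to match the text; the walk is complete
    iff its last entry equals len(S) + 1.
    """
    walk = [1]                    # s_1 := 1
    emb = []                      # CGK(S), built character by character
    s = 1
    for t in range(3 * n):
        if s <= len(S):
            emb.append(S[s - 1])  # CGK(S)[t] := S[s_t]
            s += R[t](S[s - 1])   # advance iff the hash bit is 1
        else:
            emb.append(None)      # CGK(S)[t] := sentinel, past the end of S
        walk.append(s)            # record s_{t+1}
    return walk, emb
```

With `R[t]` constantly equal to 1, the walk advances one position per step, so the embedding is simply `S` padded with sentinels; with random hash bits, the walk stalls on roughly half of the steps, which is why $3n$ steps complete the walk only with high probability (Fact 6.2).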
The following result summarizes the central property of the CGK embedding. ###### Fact 6.3 ([10, Theorem 4.3]). For every $X,Y\in\Sigma^{\leq n}$ and every constant $c>0$, the embeddings $\mathsf{CGK}(X)$, $\mathsf{CGK}(Y)$ satisfy $\Pr_{R\in\mathcal{H}^{3n}}[\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))>c\cdot\mathsf{ed}(X,Y)^{2}]<\frac{12}{\sqrt{c}}$. If $W_{\mathsf{CGK}}(X)$ and $W_{\mathsf{CGK}}(Y)$ are complete, Definition 3.15 yields an edit-distance alignment of $X,Y$, which we call the _CGK alignment_ of $X,Y$. ###### Fact 6.4 (see also [10, Theorem 4.2]). If $W_{\mathsf{CGK}}(X)$ and $W_{\mathsf{CGK}}(Y)$ are complete for some $X,Y\in\Sigma^{\leq n}$ and $R\in\mathcal{H}^{3n}$, then the CGK alignment of $X,Y$ belongs to $\mathsf{GA}_{\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))}(X,Y)$. ###### Proof. Let $\mathcal{A}$ be the CGK alignment of $X,Y$, i.e., the zip alignment of $W_{\mathsf{CGK}}(X)=(x_{t})_{t=1}^{3n+1}$ and $W_{\mathsf{CGK}}(Y)=(y_{t})_{t=1}^{3n+1}$. Consider $(x_{t},y_{t})\in\mathcal{B}(\mathcal{A})$, with $t\in[1\mathinner{.\,.\allowbreak}3n+1]$ chosen so that $t=3n+1$ or $(x_{t+1},y_{t+1})\neq(x_{t},y_{t})$. If $t=3n+1$, then $(x_{t},y_{t})=(|X|+1,|Y|+1)$ by the assumption that $W_{\mathsf{CGK}}(X)$ and $W_{\mathsf{CGK}}(Y)$ are complete. If $(x_{t+1},y_{t+1})=(x_{t}+1,y_{t}+1)$, then we have $\mathsf{CGK}(X)[t]=X[x_{t}]\neq Y[y_{t}]=\mathsf{CGK}(Y)[t]$. The remaining possibility $x_{t+1}-x_{t}\neq y_{t+1}-y_{t}$ holds only in the following three cases: if $x_{t}=|X|+1$ and $y_{t}\leq|Y|$ (when $\mathsf{CGK}(X)[t]=\bot\neq\mathsf{CGK}(Y)[t]=Y[y_{t}]$), if $x_{t}\leq|X|$ and $y_{t}=|Y|+1$ (when $X[x_{t}]=\mathsf{CGK}(X)[t]\neq\bot=\mathsf{CGK}(Y)[t]$), or if $x_{t}\leq|X|$, $y_{t}\leq|Y|$, and $R_{t}(X[x_{t}])\neq R_{t}(Y[y_{t}])$ (when $\mathsf{CGK}(X)[t]=X[x_{t}]\neq Y[y_{t}]=\mathsf{CGK}(Y)[t]$).
Overall, we have $X[x_{t}]\neq Y[y_{t}]$ whenever $(x_{t},y_{t})\in[1\mathinner{.\,.\allowbreak}|X|]\times[1\mathinner{.\,.\allowbreak}|Y|]$, and $\mathsf{CGK}(X)[t]\neq\mathsf{CGK}(Y)[t]$ whenever $t\in[1\mathinner{.\,.\allowbreak}3n]$. Consequently, $\mathcal{A}\in\mathsf{GA}_{\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))}(X,Y)$. ∎ The final property of the CGK alignment required in this work is that its width is within $\mathcal{O}(\mathsf{ed}(X,Y))$ with good probability. For this, we first prove a fact about random walks: ###### Fact 6.5. Let $(w_{i})_{i\geq 0}$ be an unbiased lazy random walk (that is, $w_{0}=0$, and, for every $i\geq 1$, we have $\Pr[w_{i+1}=w_{i}\mid w_{0},\ldots,w_{i}]=\frac{1}{2}$, $\Pr[w_{i+1}=w_{i}+1\mid w_{0},\ldots,w_{i}]=\Pr[w_{i+1}=w_{i}-1\mid w_{0},\ldots,w_{i}]=\frac{1}{4}$). Then, for every $m,\ell\in\mathbb{Z}_{+}$, we have $\Pr[\max_{i=0}^{m}|w_{i}|\geq\ell]\leq\frac{m}{\ell^{2}}$. ###### Proof. By the reflection principle [44, Exercise 2.10], we have $\Pr[\max_{i=0}^{m}|w_{i}|\geq\ell]\leq 2\Pr[|w_{m}|\geq\ell]=2\Pr[w_{m}^{2}\geq\ell^{2}]$. By Markov’s inequality, $2\Pr[w_{m}^{2}\geq\ell^{2}]\leq\frac{2\mathbb{E}[w_{m}^{2}]}{\ell^{2}}=\frac{2\mathbb{E}[(w_{1}-w_{0})^{2}+\cdots+(w_{m}-w_{m-1})^{2}]}{\ell^{2}}=\frac{m}{\ell^{2}}.\qed$ ###### Corollary 6.6. For every constant $\delta\in(0,1)$, there exists a constant $c$ such that for sufficiently large $n$, all strings $X,Y\in\Sigma^{\leq n}$, and their CGK alignment $\mathcal{A}$, the probability over $R\in\mathcal{H}^{3n}$ that $W_{\mathsf{CGK}}(X),W_{\mathsf{CGK}}(Y)\text{ are complete, }\mathcal{A}\in\mathsf{GA}_{c\cdot\mathsf{ed}(X,Y)^{2}}(X,Y)\text{, and }\mathsf{width}(\mathcal{A})\leq c\cdot\mathsf{ed}(X,Y)$ is at least $1-\delta$. ###### Proof. By Fact 6.2, $\Pr[W_{\mathsf{CGK}}(S)\text{ is incomplete}]\leq\frac{\delta}{4}$ holds for every $S\in\Sigma^{\leq n}$ and sufficiently large $n=\Omega(\log\frac{1}{\delta})$.
By Fact 6.3, there is a constant $c$ such that $\Pr[\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))>c\cdot\mathsf{ed}(X,Y)^{2}]\leq\frac{\delta}{4}$. We may take arbitrarily large $c$, so we shall also assume that $c\geq\frac{4}{\delta}$. Moreover, by Fact 6.4, if $W_{\mathsf{CGK}}(X)$ and $W_{\mathsf{CGK}}(Y)$ are complete and $\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))\leq c\cdot\mathsf{ed}(X,Y)^{2}$, then $\mathcal{A}\in\mathsf{GA}_{c\cdot\mathsf{ed}(X,Y)^{2}}(X,Y)$. To bound $\mathsf{width}(\mathcal{A})$, let us analyze the CGK walks $W_{\mathsf{CGK}}(X)=(x_{t})_{t=1}^{3n+1}$ and $W_{\mathsf{CGK}}(Y)=(y_{t})_{t=1}^{3n+1}$. Let $T$ be the first step such that $x_{T}=|X|+1$, $y_{T}=|Y|+1$, or $T=3n+1$. Note that it suffices to bound $\Pr[\max_{t=1}^{T}|x_{t}-y_{t}|\geq ck]$, where $k=\mathsf{ed}(X,Y)$. Let $A=\\{1\\}\cup\\{t+1:t\in[1\mathinner{.\,.\allowbreak}T)\text{ and }X[x_{t}]\neq Y[y_{t}]\\}$. Observe that $x_{t}-y_{t}=x_{t-1}-y_{t-1}$ holds for $t\in[1\mathinner{.\,.\allowbreak}T]\setminus A$, so we may focus on bounding $\Pr[\max_{t\in A}|x_{t}-y_{t}|\geq ck]$. Let $A=\\{t_{0},\ldots,t_{a}\\}$ with $t_{0}<\cdots<t_{a}$, and let $d_{i}=x_{t_{i}}-y_{t_{i}}$ for $i\in[0\mathinner{.\,.\allowbreak}a]$. Observe that $(d_{i})_{i=0}^{a}$ is an $a$-step unbiased lazy random walk and that $a\leq\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))$. Moreover, if we extend $(d_{i})_{i=0}^{a}$ to an infinite unbiased lazy random walk $(d_{i})_{i=0}^{\infty}$, then Fact 6.5 yields $\Pr[\max_{i=0}^{\lfloor ck^{2}\rfloor}|d_{i}|\geq ck]\leq\frac{ck^{2}}{c^{2}k^{2}}=\frac{1}{c}\leq\frac{\delta}{4}$.
Therefore, $\Pr\left[\max_{i=0}^{a}|d_{i}|\geq ck\right]\leq\Pr\left[\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))>ck^{2}\right]+\Pr\left[{\max_{i=0}^{\lfloor ck^{2}\rfloor}|d_{i}|\geq ck}\right].$ Consequently, if $W_{\mathsf{CGK}}(X)$ and $W_{\mathsf{CGK}}(Y)$ are complete, $\mathsf{hd}(\mathsf{CGK}(X),\mathsf{CGK}(Y))\leq ck^{2}$, and the random walk satisfies $\max_{i=0}^{\lfloor ck^{2}\rfloor}|d_{i}|<ck$, then the alignment $\mathcal{A}$ satisfies the claimed conditions. The total probability of the complementary events is at most $\delta$, so this completes the proof. ∎ To complete the proof of Proposition 3.16 (repeated below), it remains to reduce the number of random bits using Nisan’s pseudorandom generator [50]. See 3.16 ###### Proof. Let $c_{\ref{cor:CGK}}$ and $n_{\ref{cor:CGK}}$ be the constant and threshold (respectively) of Corollary 6.6 for $\delta_{\ref{cor:CGK}}=\frac{\delta}{2}$. If $n<\max(\frac{10}{\delta},n_{\ref{cor:CGK}})$, we set $\mathsf{W}(n,r,S)=(\min(t,|S|+1))_{t=1}^{3n+1}$ for all $S\in\Sigma^{\leq n}$ so that $\mathsf{W}(n,r,S)$ is trivially a $3n$-step complete walk over $S$. Now, consider strings $X,Y\in\Sigma^{\leq n}$ and the zip alignment $\mathcal{A}_{\mathsf{W}}$ of $\mathsf{W}(n,r,X)$ and $\mathsf{W}(n,r,Y)$. Observe that $\mathcal{A}_{\mathsf{W}}\in\mathsf{GA}_{0}(X,Y)$ if $X=Y$ and $\mathcal{A}_{\mathsf{W}}\in\mathsf{GA}_{n}(X,Y)$ otherwise. Moreover, $\mathsf{width}(\mathcal{A}_{\mathsf{W}})=\big{|}|X|-|Y|\big{|}\leq\mathsf{ed}(X,Y)$. Consequently, the claimed conditions are (deterministically) satisfied for $c\geq n_{\ref{cor:CGK}}$. The construction algorithm uses $\mathcal{O}(\log n)$ bits and $\mathcal{O}(n)$ time. If $n\geq\max(\frac{10}{\delta},n_{\ref{cor:CGK}})$, we first develop an algorithm $\mathsf{W}^{\prime}$ that uses $\Theta(n\log\sigma)$ random bits, interpreted as a sequence $R\in\mathcal{H}^{3n}$.
Specifically, each hash function $R_{t}$ is specified by an $\lceil\log\sigma\rceil$-bit integer $h_{t}$ so that $R_{t}(a)$ counts (modulo 2) the set bits in $a\textsf{ xor }h_{t}$. We set $\mathsf{W}^{\prime}(n,R,S)=(\max(t-3n+|S|,s_{t}))_{t=1}^{3n+1}$ based on the CGK walk $W_{\mathsf{CGK}}(S)=(s_{t})_{t=1}^{3n+1}$ for all $S\in\Sigma^{\leq n}$. Observe that this modification guarantees that $\mathsf{W}^{\prime}(n,R,S)$ is a complete walk over $S$ and that $\mathsf{W}^{\prime}(n,R,S)=W_{\mathsf{CGK}}(S)$ if $W_{\mathsf{CGK}}(S)$ is already complete. Consequently, by Corollary 6.6, the claimed conditions are satisfied with probability at least $1-\delta_{\ref{cor:CGK}}=1-\frac{\delta}{2}$ for $c\geq c_{\ref{cor:CGK}}$. The construction algorithm uses $\mathcal{O}(\log n)$ bits, costs $\mathcal{O}(n)$ time, reads the random bits $(h_{t})_{t=1}^{3n}$ from left to right, and outputs the elements of $\mathsf{W}^{\prime}(n,R,S)$ when required. In order to use Nisan’s pseudorandom generator [50], we need to argue that we only care about properties testable using $\mathcal{O}(\log n)$-bit algorithms with one-way access to the randomness $R$. As in [10], our testers are given two (read-only) strings $X,Y\in\Sigma^{\leq n}$ and a stream of random bits representing $R$ (the sequence $(h_{t})_{t=1}^{3n}$). Observe that the following properties of the zip alignment $\mathcal{A}^{\prime}$ of $\mathsf{W}^{\prime}(n,R,X)=(x_{t})_{t=1}^{3n+1}$ and $\mathsf{W}^{\prime}(n,R,Y)=(y_{t})_{t=1}^{3n+1}$ are testable in $\mathcal{O}(\log n)$ bits and $\mathcal{O}(n)$ time:
* • whether $\mathcal{A}^{\prime}$ is greedy;
* • whether $\mathsf{cost}(\mathcal{A}^{\prime})\leq k$ for a given integer $k\in[0\mathinner{.\,.\allowbreak}n]$;
* • whether $\mathsf{width}(\mathcal{A}^{\prime})\leq w$ for a given integer $w\in[0\mathinner{.\,.\allowbreak}n]$.
All these testers simply construct triples $(t,x_{t},y_{t})$ for subsequent $t\in[1\mathinner{.\,.\allowbreak}3n+1]$.
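For intuition, the quantities tracked by these testers can be computed in a single left-to-right pass over the two walks, keeping only the current pair and the running cost and width. The sketch below is our simplified reading of the zip alignment (one unit of cost per indel step and per mismatched diagonal step), not the paper's exact tester:

```python
def zip_alignment_stats(wx, wy, X, Y):
    """Cost and width of the alignment zipped from two complete walks.

    wx and wy are the walk sequences (x_t) and (y_t); positions are
    1-indexed.  Only O(1) words of state survive between iterations,
    mirroring the O(log n)-bit streaming regime described above.
    """
    cost = width = 0
    for t in range(len(wx) - 1):
        x, y = wx[t], wy[t]
        dx, dy = wx[t + 1] - x, wy[t + 1] - y
        width = max(width, abs(x - y))
        if dx == 1 and dy == 1:      # diagonal step: match or substitution
            if X[x - 1] != Y[y - 1]:
                cost += 1
        elif dx + dy == 1:           # one side advances: insertion/deletion
            cost += 1
    width = max(width, abs(wx[-1] - wy[-1]))
    return cost, width
```

For example, the walks `[1, 2, 3, 4]` and `[1, 1, 2, 3]` over `X = "abc"` and `Y = "bc"` describe an alignment that deletes `X[1]` and matches the rest, so the pass reports cost 1 and width 1.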
In this setting, Nisan’s pseudorandom generator [50], given a sequence $r$ of $\mathcal{O}(\log^{2}n)$ random bits, constructs a sequence $\mathsf{PRG}(r)$ of pseudorandom bits in $\mathcal{O}(1)$ amortized time per bit, using $\mathcal{O}(\log^{2}n)$ bits of working space. Moreover, for every tester, the probabilities of accepting a given input $\mathcal{I}$ with randomness $R$ and $\mathsf{PRG}(r)$ differ by at most $\frac{1}{n^{2}}$ [10, Theorem 5]. Consequently, setting $\mathsf{W}(n,r,S)=\mathsf{W}^{\prime}(n,\mathsf{PRG}(r),S)$, we can guarantee that the claimed condition is satisfied with probability at least $1-\frac{1}{2}\delta-\frac{(2n+3)}{n^{2}}\geq 1-\delta$. The streaming construction algorithm takes $\mathcal{O}(\log^{2}n)$ bits and $\mathcal{O}(n\log n)$ time, dominated by the generation of $\mathsf{PRG}(r)$. ∎ ### 6.2 Context encoding For a string $S\in\Sigma^{*}$, define $\overline{\mathsf{maxLZ}}(S)=\max_{[\ell\mathinner{.\,.\allowbreak}r)\subseteq[1\mathinner{.\,.\allowbreak}|S|]}|\mathsf{LZ}(\overline{S[\ell\mathinner{.\,.\allowbreak}r)})|$. ###### Observation 6.7. Consider a string $S\in\Sigma^{*}$ and an integer $k\in\mathbb{Z}_{+}$. If $\overline{\mathsf{maxLZ}}(S[\ell\mathinner{.\,.\allowbreak}r))\leq k$ holds for a non-empty fragment $S[\ell\mathinner{.\,.\allowbreak}r)$, then: 1. (a) $\overline{\mathsf{maxLZ}}(S[\ell\mathinner{.\,.\allowbreak}r])\leq k$ if and only if $|\mathsf{LZ}(\overline{S[\ell\mathinner{.\,.\allowbreak}r]})|\leq k$; 2. (b) $\overline{\mathsf{maxLZ}}(S[\ell+1\mathinner{.\,.\allowbreak}r))\leq k$; 3. (c) $\overline{\mathsf{maxLZ}}(S[\ell\mathinner{.\,.\allowbreak}r])\leq k+1$. ###### Proof. The first two parts follow from the fact that the size of the $\mathsf{LZ}$-factorisation of a prefix of a string is bounded by the size of the $\mathsf{LZ}$-factorisation of the string itself. The third claim follows from the optimality of the LZ77 parsing among all LZ-like parsings (4.1). ∎ ###### Definition 6.8 ((Double) Context).
Consider a string $S\in\Sigma^{*}$ and an integer $k\in\mathbb{Z}_{+}$. For a position $p\in[1\mathinner{.\,.\allowbreak}|S|]$, we define the _context_ $C_{k}(S,p)$ as the longest prefix $S[p\mathinner{.\,.\allowbreak}q)$ of $S[p\mathinner{.\,.\allowbreak}|S|]$ such that $\overline{\mathsf{maxLZ}}(S[p\mathinner{.\,.\allowbreak}q))\leq k$ and the _double context_ $C^{2}_{k}(S,p)$ as the longest prefix $S[p\mathinner{.\,.\allowbreak}q)$ of $S[p\mathinner{.\,.\allowbreak}|S|]$ such that $\overline{\mathsf{maxLZ}}(S[p\mathinner{.\,.\allowbreak}r))\leq k$ and $\overline{\mathsf{maxLZ}}(S[r\mathinner{.\,.\allowbreak}q))\leq k$ for some $r\in[p\mathinner{.\,.\allowbreak}q]$. For integers $t<v$, let $\mu(t,v)$ denote the integer $w\in[t\mathinner{.\,.\allowbreak}v)$ divisible by the largest power of 2.
Input: An integer $k\in\mathbb{Z}_{\geq 0}$, a string $S\in\Sigma^{*}$, and a complete walk $(s_{t})_{t=1}^{m+1}$ over $S$.
Output: The string $\mathsf{CE}_{k}(W)[1\mathinner{.\,.\allowbreak}m]$.
$\mathsf{CE}_{k}(W):=\bot^{m}$, where $\bot=(\mathsf{LZ}(\varepsilon),0)$;
$s_{0}:=0;\;\mu_{0}:=0;\;u:=0$;
for $t:=1$ to $m$ do
  if $s_{t}\leq|S|$ then
    $\mu_{t}:=\mu(t,\min\\{v\in[1\mathinner{.\,.\allowbreak}m+1]:s_{v}=s_{t}+|C_{k}(S,s_{t})|\\})$;
    if $\mu_{t}>\mu_{t-1}$ then
      $\mathsf{CE}_{k}(W)[t]:=\left(\mathsf{LZ}\left(\overline{C^{2}_{k}(S,s_{t})}\right),s_{t}-s_{u}\right)$;
      $u:=t$;
return $\mathsf{CE}_{k}(W)$;
Algorithm 3: Function $\mathsf{CE}$
We say that a position $S[s]$ is covered by a fragment $S[\ell\mathinner{.\,.\allowbreak}r)$ if $\ell\leq s<r$. ###### Lemma 6.9. Consider $\mathsf{CE}_{k}(W)$ constructed for an integer $k\in\mathbb{Z}_{\geq 0}$ and a complete $m$-step walk $W=(s_{t})_{t=1}^{m+1}$ over $S\in\Sigma^{*}$.
Each position $s\in[1\mathinner{.\,.\allowbreak}|S|]$ satisfies $1\leq|\\{t\in[1\mathinner{.\,.\allowbreak}m]:\mathsf{CE}_{k}(W)[t]\neq\bot\text{ and }C_{k}(S,s_{t})\text{ covers }S[s]\\}|\leq|\\{t\in[1\mathinner{.\,.\allowbreak}m]:\mathsf{CE}_{k}(W)[t]\neq\bot\text{ and }C^{2}_{k}(S,s_{t})\text{ covers }S[s]\\}|\leq\mathcal{O}(\log m)$ ###### Proof. Let us first bound the covering number from below. Consider an index $t\in[1\mathinner{.\,.\allowbreak}m]$ such that $s=s_{t}$, and let $t^{\prime}\in[1\mathinner{.\,.\allowbreak}m]$ be the smallest index such that $\mu_{t^{\prime}}=\mu_{t}$. Note that $s_{\mu_{t^{\prime}}}<s_{t^{\prime}}+|C_{k}(S,s_{t^{\prime}})|$ and $s_{\mu_{t}}\geq s_{t}$ by definition of $\mu$ and monotonicity of the walk $W$. Due to $\mu_{t^{\prime}}=\mu_{t}$, this implies $s_{t^{\prime}}\leq s_{t}<s_{t^{\prime}}+|C_{k}(S,s_{t^{\prime}})|$, i.e., that $S[s_{t}]$ is covered by $C_{k}(S,s_{t^{\prime}})$. Due to $\mu_{t^{\prime}-1}\neq\mu_{t}$, this guarantees $\mathsf{CE}_{k}(W)[t^{\prime}]\neq\bot$. As for the upper bound, note that the indexes $t\in[1\mathinner{.\,.\allowbreak}m]$ with $\mathsf{CE}_{k}(W)[t]\neq\bot$ have distinct values $\mu_{t}$. Let us further classify them into $\mathcal{O}(\log m)$ groups depending on the largest power of two dividing $\mu_{t}$. Consider two indexes $t<t^{\prime}$ in the same group. Since $\mu_{t}<\mu_{t^{\prime}}$ are divisible by the same largest power of two, there is a number $\nu\in(\mu_{t}\mathinner{.\,.\allowbreak}\mu_{t^{\prime}})$ divisible by a strictly larger power of two. By definition of $\mu_{t}$, we have $s_{\nu}\geq s_{t}+|C_{k}(S,s_{t})|$ and, by definition of $\mu_{t^{\prime}}$, we have $s_{\nu}<s_{t^{\prime}}$. Consequently, $s_{t^{\prime}}>s_{t}+|C_{k}(S,s_{t})|$, i.e., the contexts $C_{k}(S,s_{t})$ and $C_{k}(S,s_{t^{\prime}})$ are disjoint.
By monotonicity of $\overline{\mathsf{maxLZ}}$, this also means that $s_{t}+|C^{2}_{k}(S,s_{t})|\leq s_{t^{\prime}}+|C_{k}(S,s_{t^{\prime}})|$. In particular, each position $s\in[1\mathinner{.\,.\allowbreak}|S|]$ is covered by at most one context $C_{k}(S,s_{t})$ and at most two double contexts $C^{2}_{k}(S,s_{t})$ for $t\in[1\mathinner{.\,.\allowbreak}m]$ belonging to a single group. ∎

Let $\mathcal{A}$ be an alignment between $X,Y\in\Sigma^{*}$. We define $\mathcal{B}_{X}(\mathcal{A})=\\{x:(x,y)\in\mathcal{B}(\mathcal{A})\\}\cap[1\mathinner{.\,.\allowbreak}|X|]$. Moreover, for two alignments $\mathcal{A},\mathcal{A}^{\prime}$ between $X,Y\in\Sigma^{*}$, we define $\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})$ to contain $x\in[1\mathinner{.\,.\allowbreak}|X|]$ unless $(x,y)\in\mathcal{M}(\mathcal{A})\cap\mathcal{M}(\mathcal{A}^{\prime})$ for some $y\in[1\mathinner{.\,.\allowbreak}|Y|]$.

###### Lemma 6.10.

Let $b\in\mathbb{Z}_{+}$ and $\mathcal{A},\mathcal{A}^{\prime}$ be greedy alignments of strings $X$ and $Y$. The positions in $\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})$ can be covered by at most $\mathsf{cost}(\mathcal{A})+\frac{1}{b}\mathsf{cost}(\mathcal{A}^{\prime})$ contexts $C_{\mathsf{width}(\mathcal{A})+\mathsf{width}(\mathcal{A}^{\prime})+2b}(X,\cdot)$. Moreover, if $b>\mathsf{cost}(\mathcal{A}^{\prime})$, then the positions in $\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})$ can be covered by contexts $C_{\mathsf{width}(\mathcal{A})+\mathsf{width}(\mathcal{A}^{\prime})+2b}(X,x)$ with $x\in\mathcal{B}_{X}(\mathcal{A})$.

###### Proof.

Let $w=\mathsf{width}(\mathcal{A})$ and $w^{\prime}=\mathsf{width}(\mathcal{A}^{\prime})$. Without loss of generality, we may trim the longest common prefix of $X$ and $Y$; this is feasible because both $\mathcal{A}$ and $\mathcal{A}^{\prime}$ match the common prefix, so $\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})$ only contains positions following the prefix.
Moreover, we assume that $X\neq\varepsilon$; otherwise $\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})=\emptyset$ and the lemma is trivial. In the remaining case, we are guaranteed that $1\in\mathcal{B}_{X}(\mathcal{A})$. Let $\mathcal{B}_{X}(\mathcal{A})=\\{x_{1},\ldots,x_{m}\\}$, where $1=x_{1}<\cdots<x_{m}$, and let $x_{m+1}=|X|+1$. This yields a decomposition $X=X_{1}\cdots X_{m}$ into $m$ non-empty substrings $X_{i}:=X[x_{i}\mathinner{.\,.\allowbreak}x_{i+1})$. The alignment $\mathcal{A}^{\prime}$ further yields a decomposition $Y=Y_{1}\cdots Y_{m}$ into $m$ (possibly empty) substrings $Y_{i}:=Y[y_{i}\mathinner{.\,.\allowbreak}y_{i+1})$ so that $X_{i}\sim_{\mathcal{A}^{\prime}}Y_{i}$ and $|x_{i}-y_{i}|\leq w^{\prime}$ for $i\in[1\mathinner{.\,.\allowbreak}m]$. Moreover, the cost of $\mathcal{A}^{\prime}$ can be expressed as $\mathsf{cost}(\mathcal{A}^{\prime})=\sum_{i=1}^{m}c^{\prime}_{i}$, where $c^{\prime}_{i}$ denotes the cost of $\mathcal{A}^{\prime}$ restricted to $X_{i}$ and $Y_{i}$. ###### Claim 6.11. For every $i\in[1\mathinner{.\,.\allowbreak}m]$, the set $\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})\cap[x_{i}\mathinner{.\,.\allowbreak}x_{i+1})$ can be covered by at most $\lceil\frac{1}{b}(c^{\prime}_{i}+1)\rceil$ contexts $C_{w+w^{\prime}+2b}(X,\cdot)$ including $C_{w+w^{\prime}+2b}(X,x_{i})$. ###### Proof. Let $q_{i}=\max(\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})\cap[x_{i}\mathinner{.\,.\allowbreak}x_{i+1}))$. We may assume that $q_{i}\geq x_{i}+w+w^{\prime}$; otherwise, the claim holds trivially. We shall prove that $X[x_{i}+1\mathinner{.\,.\allowbreak}q_{i}-(w+w^{\prime})]$ can be decomposed into at most $2c^{\prime}_{i}+1$ phrases, each of which is a single character or has another occurrence at most $w+w^{\prime}$ positions to the right. Accounting for $X[x_{i}]$, this yields a decomposition of $X[x_{i}\mathinner{.\,.\allowbreak}q_{i}-(w+w^{\prime})]$ into at most $2c^{\prime}_{i}+2$ such phrases. 
The $\overline{\mathsf{maxLZ}}$ measure of the concatenation of every $2b$ subsequent phrases and $w+w^{\prime}$ following single characters does not exceed $w+w^{\prime}+2b$. Hence, $X[x_{i}\mathinner{.\,.\allowbreak}q_{i}]$ can be covered by $\lceil\frac{1}{b}(c^{\prime}_{i}+1)\rceil$ contexts $C_{w+w^{\prime}+2b}(X,\cdot)$ including $C_{w+w^{\prime}+2b}(X,x_{i})$. Thus, it remains to construct the decomposition of $X[x_{i}+1\mathinner{.\,.\allowbreak}q_{i}-(w+w^{\prime})]$ into phrases. By definition of $\mathcal{B}_{X}(\mathcal{A})$, we have ${X[x_{i}+1\mathinner{.\,.\allowbreak}x_{i+1})\simeq_{\mathcal{A}}Y[x_{i}+1+d\mathinner{.\,.\allowbreak}x_{i+1}+d)}$ for some shift $d\in[-w\mathinner{.\,.\allowbreak}w]$. We consider two cases. In the first case, we assume that $(x_{i}+1,\bar{y})\in\mathcal{A}^{\prime}$ holds for some $\bar{y}\geq x_{i}+1+d$. The greedy nature of $\mathcal{A}^{\prime}$ then guarantees that if $(x,y)\in\mathcal{A}^{\prime}$ with $x\in[x_{i}+1\mathinner{.\,.\allowbreak}q_{i})$, then $y>x+d$. We decompose $X[x_{i}+1\mathinner{.\,.\allowbreak}q_{i}-(w+w^{\prime})]$ (which is contained in $X_{i}$) into maximal phrases $X[x\mathinner{.\,.\allowbreak}x^{\prime}]$ satisfying $X[x\mathinner{.\,.\allowbreak}x^{\prime}]\simeq_{\mathcal{A}^{\prime}}Y[x+d^{\prime}\mathinner{.\,.\allowbreak}x^{\prime}+d^{\prime}]$ for some $d^{\prime}\in(d\mathinner{.\,.\allowbreak}w^{\prime}]$ and remaining single characters (deleted or substituted by $\mathcal{A}^{\prime}$). Observe that this yields at most $1+c^{\prime}_{i}$ phrases and at most $c^{\prime}_{i}$ single characters. Moreover, we have $X[x\mathinner{.\,.\allowbreak}x^{\prime}]\simeq_{\mathcal{A}^{\prime}}Y[x+d^{\prime}\mathinner{.\,.\allowbreak}x^{\prime}+d^{\prime}]\simeq_{\mathcal{A}}X[x+(d^{\prime}-d)\mathinner{.\,.\allowbreak}x^{\prime}+(d^{\prime}-d)]$ due to $x_{i}+1\leq x\leq x+(d^{\prime}-d)$ and $x^{\prime}+(d^{\prime}-d)\leq x^{\prime}+(w^{\prime}+w)\leq q_{i}$. 
Hence, each phrase $X[x\mathinner{.\,.\allowbreak}x^{\prime}]$ has another occurrence located $d^{\prime}-d\in[1\mathinner{.\,.\allowbreak}w+w^{\prime}]$ positions to the right.

In the second case, we assume that $(x_{i}+1,\bar{y})\in\mathcal{A}^{\prime}$ holds for some $\bar{y}\leq x_{i}+1+d$. The greedy nature of $\mathcal{A}^{\prime}$ then guarantees that if $(x,y)\in\mathcal{A}^{\prime}$ with $x\in[x_{i}+1\mathinner{.\,.\allowbreak}q_{i})$, then $y<x+d$. We decompose $Y[x_{i}+1+d\mathinner{.\,.\allowbreak}q_{i}-(w+w^{\prime})+d]$ (which is contained in $Y_{i}$) into maximal phrases $Y[y\mathinner{.\,.\allowbreak}y^{\prime}]$ satisfying $Y[y\mathinner{.\,.\allowbreak}y^{\prime}]\simeq_{\mathcal{A}^{\prime}}X[y-d^{\prime}\mathinner{.\,.\allowbreak}y^{\prime}-d^{\prime}]$ for some $d^{\prime}\in[-w\mathinner{.\,.\allowbreak}d)$ and remaining single characters (deleted or substituted by $\mathcal{A}^{\prime}$). Observe that this yields at most $1+c^{\prime}_{i}$ phrases and at most $c^{\prime}_{i}$ single characters. Moreover, we have $Y[y\mathinner{.\,.\allowbreak}y^{\prime}]\simeq_{\mathcal{A}^{\prime}}X[y-d^{\prime}\mathinner{.\,.\allowbreak}y^{\prime}-d^{\prime}]\simeq_{\mathcal{A}}Y[y+(d-d^{\prime})\mathinner{.\,.\allowbreak}y^{\prime}+(d-d^{\prime})]$ due to $x_{i}+1\leq x_{i}+1+d-d^{\prime}\leq y-d^{\prime}$ and $y^{\prime}-d^{\prime}\leq q_{i}-(w+w^{\prime})+d-d^{\prime}\leq q_{i}$. Hence, each phrase $Y[y\mathinner{.\,.\allowbreak}y^{\prime}]$ has another occurrence located $d-d^{\prime}\in[1\mathinner{.\,.\allowbreak}w+w^{\prime}]$ characters to the right. Since $Y[x_{i}+1+d\mathinner{.\,.\allowbreak}q_{i}+d]=X[x_{i}+1\mathinner{.\,.\allowbreak}q_{i}]$, this decomposition of $Y[x_{i}+1+d\mathinner{.\,.\allowbreak}q_{i}-(w+w^{\prime})+d]$ gives an analogous decomposition of $X[x_{i}+1\mathinner{.\,.\allowbreak}q_{i}-(w+w^{\prime})]$.
∎

The set $\Delta_{X}(\mathcal{A},\mathcal{A}^{\prime})$ can be covered by $\sum_{i=1}^{m}\lceil\frac{1}{b}(c^{\prime}_{i}+1)\rceil\leq m+\frac{\mathsf{cost}(\mathcal{A}^{\prime})}{b}\leq\mathsf{cost}(\mathcal{A})+\frac{\mathsf{cost}(\mathcal{A}^{\prime})}{b}$ contexts $C_{w+w^{\prime}+2b}(X,\cdot)$. If $b\geq\mathsf{cost}(\mathcal{A}^{\prime})+1$, then the contexts starting at positions in $\mathcal{B}_{X}(\mathcal{A})$ are sufficient. ∎

###### Lemma 6.12.

Let $\mathcal{A}_{W}$ be the zip alignment of $m$-complete walks $W_{X}=(x_{t})_{t=1}^{m+1}$, $W_{Y}=(y_{t})_{t=1}^{m+1}$ over strings $X,Y\in\Sigma^{*}$, and let $k^{\prime}\in\mathbb{Z}_{+}$. If $\mathcal{A}_{W}$ is greedy, then $\mathcal{B}_{X}(\mathcal{A}_{W})\subseteq P_{X}$, where $P_{X}$ is the set of positions $x\in[1\mathinner{.\,.\allowbreak}|X|]$ such that $X[x]$ is covered by $C^{2}_{k^{\prime}}(X,x_{t})$ for some $t\in[1\mathinner{.\,.\allowbreak}m]$ with $\bot\neq\mathsf{CE}_{k^{\prime}}(W_{X})[t]\neq\mathsf{CE}_{k^{\prime}}(W_{Y})[t]$. Moreover, for every positive integer $k\geq\mathsf{ed}(X,Y)$ such that $k^{\prime}\geq\mathsf{width}(\mathcal{A}_{W})+5k$:

* every $\mathcal{A}\in\mathsf{GA}_{k}(X,Y)$ satisfies $\Delta_{X}(\mathcal{A},\mathcal{A}_{W})\subseteq P_{X}$,
* $\mathsf{hd}(\mathsf{CE}_{k^{\prime}}(X),\mathsf{CE}_{k^{\prime}}(Y))\leq c(k+\frac{1}{k}\mathsf{cost}(\mathcal{A}_{W}))\log m$ holds for a sufficiently large constant $c$.

###### Proof.

Consider a position $x\in\mathcal{B}_{X}(\mathcal{A}_{W})$. By Lemma 6.9, there is an index $t\in[1\mathinner{.\,.\allowbreak}m]$ such that $\mathsf{CE}_{k^{\prime}}(W_{X})[t]\neq\bot$ and $C^{2}_{k^{\prime}}(X,x_{t})$ covers $X[x]$. To conclude that $x\in P_{X}$, it remains to prove $\mathsf{CE}_{k^{\prime}}(W_{X})[t]\neq\mathsf{CE}_{k^{\prime}}(W_{Y})[t]$. For a proof by contradiction, suppose that $\mathsf{CE}_{k^{\prime}}(W_{X})[t]=\mathsf{CE}_{k^{\prime}}(W_{Y})[t]$. This implies $C^{2}_{k^{\prime}}(X,x_{t})=C^{2}_{k^{\prime}}(Y,y_{t})$.
Since $\mathcal{A}_{W}$ is greedy, we derive $X[x_{t}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|)\simeq_{\mathcal{A}_{W}}Y[y_{t}\mathinner{.\,.\allowbreak}y_{t}+|C^{2}_{k^{\prime}}(Y,y_{t})|)$, which contradicts $x\in\mathcal{B}_{X}(\mathcal{A}_{W})$.

In the remainder of the proof, we assume that $k\geq\mathsf{ed}(X,Y)$ and $k^{\prime}\geq\mathsf{width}(\mathcal{A}_{W})+5k$. Next, consider a greedy alignment $\mathcal{A}$ with $\mathsf{cost}(\mathcal{A})\leq k$ and a position $x\in\Delta_{X}(\mathcal{A},\mathcal{A}_{W})$. Due to $k^{\prime}\geq\mathsf{width}(\mathcal{A}_{W})+\mathsf{width}(\mathcal{A})+4k$, Lemma 6.10 shows that there exists a position $r\in\mathcal{B}_{X}(\mathcal{A}_{W})$ such that $C_{k^{\prime}}(X,r)$ covers $X[x]$ and, by Lemma 6.9, there exists a position $t\in[1\mathinner{.\,.\allowbreak}m]$ such that $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot$ and $C_{k^{\prime}}(X,x_{t})$ covers $X[r-1]$. Since $\overline{\mathsf{maxLZ}}(X[x_{t}\mathinner{.\,.\allowbreak}r)),\overline{\mathsf{maxLZ}}(X[r\mathinner{.\,.\allowbreak}x])\leq k^{\prime}$, we conclude that both $r$ and $x$ are covered by $C^{2}_{k^{\prime}}(X,x_{t})$. We shall prove that $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\mathsf{CE}_{k^{\prime}}(Y)[t]$. For a proof by contradiction, suppose that $\mathsf{CE}_{k^{\prime}}(W_{X})[t]=\mathsf{CE}_{k^{\prime}}(W_{Y})[t]$, which implies $C^{2}_{k^{\prime}}(X,x_{t})=C^{2}_{k^{\prime}}(Y,y_{t})$. Let us fix the smallest $t^{\prime}\in[t\mathinner{.\,.\allowbreak}m+1]$ such that $(x_{t^{\prime}},y_{t^{\prime}})\in\mathcal{B}(\mathcal{A}_{W})$. Observe that $X[x_{t}\mathinner{.\,.\allowbreak}x_{t^{\prime}})\simeq_{\mathcal{A}_{W}}Y[y_{t}\mathinner{.\,.\allowbreak}y_{t^{\prime}})$ and, by the greedy nature of $\mathcal{A}_{W}$, $X[x_{t}\mathinner{.\,.\allowbreak}x_{t^{\prime}})=Y[y_{t}\mathinner{.\,.\allowbreak}y_{t^{\prime}})$ is the longest common prefix of $X[x_{t}\mathinner{.\,.\allowbreak}]$ and $Y[y_{t}\mathinner{.\,.\allowbreak}]$.
At the same time, due to $x_{t}<r\in\mathcal{B}_{X}(\mathcal{A}_{W})$, we have $x_{t^{\prime}}\leq r$, so $X[x_{t}\mathinner{.\,.\allowbreak}x_{t^{\prime}}]$ is a prefix of $C^{2}_{k^{\prime}}(X,x_{t})$. However, $X[x_{t}\mathinner{.\,.\allowbreak}x_{t^{\prime}}]$ is not a prefix of $Y[y_{t}\mathinner{.\,.\allowbreak}]$, so $C^{2}_{k^{\prime}}(X,x_{t})$ is not a prefix of $Y[y_{t}\mathinner{.\,.\allowbreak}]$ and $C^{2}_{k^{\prime}}(X,x_{t})\neq C^{2}_{k^{\prime}}(Y,y_{t})$. The contradiction completes the proof.

Finally, we bound $\mathsf{hd}(\mathsf{CE}_{k^{\prime}}(X),\mathsf{CE}_{k^{\prime}}(Y))$ using several claims. Let $M$ be the set of indices $t\in[1\mathinner{.\,.\allowbreak}m]$ such that $\mathsf{CE}_{k^{\prime}}(X)[t]$ and $\mathsf{CE}_{k^{\prime}}(Y)[t]$ differ on the first coordinate.

###### Claim 6.13.

If $t\in M$ satisfies $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot$, then $[x_{t-1}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|]\cap(\mathcal{B}_{X}(\mathcal{A}_{W})\cup\\{|X|+1\\})\neq\emptyset.$

###### Proof.

For a proof by contradiction, suppose that $[x_{t-1}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|]\cap(\mathcal{B}_{X}(\mathcal{A}_{W})\cup\\{|X|+1\\})=\emptyset$. Let $L$ be the length of the longest common prefix of $X[x_{t}\mathinner{.\,.\allowbreak}]$ and $Y[y_{t}\mathinner{.\,.\allowbreak}]$. Note that $(x_{t}+L,y_{t}+L)\in\mathcal{B}(\mathcal{A}_{W})$, so $x_{t}+L\in\mathcal{B}_{X}(\mathcal{A}_{W})\cup\\{|X|+1\\}$. Consequently, $L>|C^{2}_{k^{\prime}}(X,x_{t})|\geq|C_{k^{\prime}}(X,x_{t})|$, and therefore $X[x_{t}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|]=Y[y_{t}\mathinner{.\,.\allowbreak}y_{t}+|C^{2}_{k^{\prime}}(Y,y_{t})|]$ and $X[x_{t}\mathinner{.\,.\allowbreak}x_{t}+|C_{k^{\prime}}(X,x_{t})|]=Y[y_{t}\mathinner{.\,.\allowbreak}y_{t}+|C_{k^{\prime}}(Y,y_{t})|]$. In particular, $C^{2}_{k^{\prime}}(X,x_{t})=C^{2}_{k^{\prime}}(Y,y_{t})$.
If $t=1$, then we note that $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot\neq\mathsf{CE}_{k^{\prime}}(Y)[t]$, so $C^{2}_{k^{\prime}}(X,x_{t})=C^{2}_{k^{\prime}}(Y,y_{t})$ contradicts $t\in M$. In the following, we assume that $t\geq 2$. Due to $(x_{t-1},y_{t-1})\notin\mathcal{B}(\mathcal{A}_{W})$, either $(x_{t-1},y_{t-1})=(x_{t},y_{t})$, or $(x_{t-1},y_{t-1})=(x_{t}-1,y_{t}-1)$ and $X[x_{t}-1]=Y[y_{t}-1]$. In either case, we have $X[x_{t-1}\mathinner{.\,.\allowbreak}x_{t-1}+|C_{k^{\prime}}(X,x_{t-1})|]=Y[y_{t-1}\mathinner{.\,.\allowbreak}y_{t-1}+|C_{k^{\prime}}(Y,y_{t-1})|]$ due to $X[x_{t-1}\mathinner{.\,.\allowbreak}x_{t}+|C_{k^{\prime}}(X,x_{t})|]=Y[y_{t-1}\mathinner{.\,.\allowbreak}y_{t}+|C_{k^{\prime}}(Y,y_{t})|]$ and by 6.7. Consequently, $\min\\{u\in[1\mathinner{.\,.\allowbreak}m]:x_{u}=x_{t-1}+|C_{k^{\prime}}(X,x_{t-1})|\\}=\min\\{u\in[1\mathinner{.\,.\allowbreak}m]:y_{u}=y_{t-1}+|C_{k^{\prime}}(Y,y_{t-1})|\\}$ and, by a similar reasoning, $\min\\{u\in[1\mathinner{.\,.\allowbreak}m]:x_{u}=x_{t}+|C_{k^{\prime}}(X,x_{t})|\\}=\min\\{u\in[1\mathinner{.\,.\allowbreak}m]:y_{u}=y_{t}+|C_{k^{\prime}}(Y,y_{t})|\\}$. Hence, the assumption $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot$ implies $\mathsf{CE}_{k^{\prime}}(Y)[t]\neq\bot$. Therefore, $C^{2}_{k^{\prime}}(X,x_{t})=C^{2}_{k^{\prime}}(Y,y_{t})$ contradicts $t\in M$. ∎

###### Claim 6.14.

There are $\mathcal{O}((k+\frac{1}{k}\mathsf{cost}(\mathcal{A}_{W}))\log m)$ positions $t\in M$ such that $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot$.

###### Proof.

Let $\mathcal{O}$ be an optimum greedy alignment between $X$ and $Y$. Due to $k^{\prime}\geq\mathsf{width}(\mathcal{O})+\mathsf{width}(\mathcal{A}_{W})+4k$, it follows from Lemma 6.10 that $\Delta_{X}(\mathcal{O},\mathcal{A}_{W})$ can be covered by $\mathcal{O}(\mathsf{cost}(\mathcal{O})+\frac{1}{k}\mathsf{cost}(\mathcal{A}_{W}))=\mathcal{O}(k+\frac{1}{k}\mathsf{cost}(\mathcal{A}_{W}))$ contexts $C_{k^{\prime}}(X,\cdot)$.
Let us fix the smallest such family $\mathcal{C}$ of contexts covering $\mathcal{B}_{X}(\mathcal{A}_{W})\subseteq\Delta_{X}(\mathcal{O},\mathcal{A}_{W})$. Define a set $R_{X}=\\{|X|\\}\cup\bigcup_{X[q\mathinner{.\,.\allowbreak}r]\in\mathcal{C}}\\{q-1,r+1\\}$ and note that $|R_{X}|=\mathcal{O}(|\mathcal{C}|)$. We shall prove that, if $t\in M$ and $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot$, then $C^{2}_{k^{\prime}}(X,x_{t})$ covers at least one position in $R_{X}$. This is sufficient to derive the claim because, by Lemma 6.9, every position in $X$ is covered by at most $\mathcal{O}(\log m)$ double contexts $C^{2}_{k^{\prime}}(X,x_{t})$ with $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot$. By 6.13, we have $[x_{t-1}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|]\cap(\mathcal{B}_{X}(\mathcal{A}_{W})\cup\\{|X|+1\\})\neq\emptyset$. If $|X|+1\in[x_{t-1}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|]$, then $|X|+1=x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|$. Hence, $C^{2}_{k^{\prime}}(X,x_{t})$ covers the position $|X|\in R_{X}$. Thus, we may assume that $[x_{t-1}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|]$ contains a position in $\mathcal{B}_{X}(\mathcal{A}_{W})$. Note that the fragment $X[q\mathinner{.\,.\allowbreak}r]\in\mathcal{C}$ covering that position in $\mathcal{B}_{X}(\mathcal{A}_{W})$ satisfies $r\geq x_{t-1}$ and $q\leq x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|$. In particular, $r+1\geq x_{t}$ and $q-1<x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|$. Now, if $C^{2}_{k^{\prime}}(X,x_{t})$ covers $q-1$ or $r+1$, then we are done. Otherwise, $q-1<x_{t}$ and $r+1\geq x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|$, so $q\leq x_{t}$ and $r\geq x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|$, i.e., $C^{2}_{k^{\prime}}(X,x_{t})$ is contained in $X[q\mathinner{.\,.\allowbreak}r]$ and, since $\overline{\mathsf{maxLZ}}$ is monotone, $\overline{\mathsf{maxLZ}}(C^{2}_{k^{\prime}}(X,x_{t}))\leq k^{\prime}$.
Consequently, $C^{2}_{k^{\prime}}(X,x_{t})=C_{k^{\prime}}(X,x_{t})$ is a suffix of $X$, so $C^{2}_{k^{\prime}}(X,x_{t})$ covers the position $|X|\in R_{X}$. ∎ A symmetric argument shows that there are $\mathcal{O}((k+\frac{1}{k}\mathsf{cost}(\mathcal{A}_{W}))\log m)$ positions $t\in M$ such that $\mathsf{CE}_{k^{\prime}}(Y)[t]\neq\bot$. Consequently, $|M|=\mathcal{O}((k+\frac{1}{k}\mathsf{cost}(\mathcal{A}_{W}))\log m)$. We bound $\mathsf{hd}(\mathsf{CE}_{k^{\prime}}(X),\mathsf{CE}_{k^{\prime}}(Y))$ using the following claim. ###### Claim 6.15. $\mathsf{hd}(\mathsf{CE}_{k^{\prime}}(X),\mathsf{CE}_{k^{\prime}}(Y))\leq 2|M|$. ###### Proof. Let $M^{\prime}=\\{t\in[1\mathinner{.\,.\allowbreak}m]:\mathsf{CE}_{k^{\prime}}(X)[t]\neq\mathsf{CE}_{k^{\prime}}(Y)[t]\\}$. Note that $M\subseteq M^{\prime}$, so the claim is equivalent to $|M^{\prime}\setminus M|\leq|M|$. For every $t\in M^{\prime}\setminus M$, we shall prove that $t$ is not the leftmost position in $M^{\prime}$ and that the preceding position in $M^{\prime}$ belongs to $M$. The assumption $t\in M^{\prime}\setminus M$ implies $\mathsf{CE}_{k^{\prime}}(X)[t]\neq\bot\neq\mathsf{CE}_{k^{\prime}}(Y)[t]$. We define $t^{\prime}$ as the largest position in $[1\mathinner{.\,.\allowbreak}t)$ such that $\mathsf{CE}_{k^{\prime}}(X)[t^{\prime}]\neq\bot$ or $\mathsf{CE}_{k^{\prime}}(Y)[t^{\prime}]\neq\bot$. To see that $t^{\prime}$ is well defined, note that $t>1$ and $\mathsf{CE}_{k^{\prime}}(X)[1]\neq\bot$. Now, suppose that $t^{\prime}\notin M$. Consequently, we have $\mathsf{CE}_{k^{\prime}}(X)[t^{\prime}]\neq\bot\neq\mathsf{CE}_{k^{\prime}}(Y)[t^{\prime}]$. 
Due to $\mathsf{CE}_{k^{\prime}}(X)[t^{\prime}+1\mathinner{.\,.\allowbreak}t)=\bot^{t-t^{\prime}-1}$, Lemma 6.9 implies that $C_{k^{\prime}}(X,x_{t^{\prime}})$ covers $X[x_{t}-1]$, that is, $\overline{\mathsf{maxLZ}}(X[x_{t^{\prime}}\mathinner{.\,.\allowbreak}x_{t}))\leq k^{\prime}$, and hence $X[x_{t^{\prime}}\mathinner{.\,.\allowbreak}x_{t}]$ is a prefix of $C^{2}_{k^{\prime}}(X,x_{t^{\prime}})$. Moreover, $C^{2}_{k^{\prime}}(X,x_{t^{\prime}})=C^{2}_{k^{\prime}}(Y,y_{t^{\prime}})$ implies $X[x_{t^{\prime}}\mathinner{.\,.\allowbreak}x_{t}]\simeq_{\mathcal{A}_{W}}Y[y_{t^{\prime}}\mathinner{.\,.\allowbreak}y_{t^{\prime}}+x_{t}-x_{t^{\prime}}]$ by the greedy nature of $\mathcal{A}_{W}$. As $(x_{t},y_{t})\in\mathcal{A}_{W}$, we conclude that $y_{t}=y_{t^{\prime}}+x_{t}-x_{t^{\prime}}$. At the same time, since $\mathsf{CE}_{k^{\prime}}(X)[t^{\prime}+1\mathinner{.\,.\allowbreak}t)=\bot^{t-t^{\prime}-1}=\mathsf{CE}_{k^{\prime}}(Y)[t^{\prime}+1\mathinner{.\,.\allowbreak}t)$, we have $\mathsf{CE}_{k^{\prime}}(X)[t]=(\mathsf{LZ}(\overline{C^{2}_{k^{\prime}}(X,x_{t})}),x_{t}-x_{t^{\prime}})$ and $\mathsf{CE}_{k^{\prime}}(Y)[t]=(\mathsf{LZ}(\overline{C^{2}_{k^{\prime}}(Y,y_{t})}),y_{t}-y_{t^{\prime}})$. Thus, $\mathsf{CE}_{k^{\prime}}(X)[t]=\mathsf{CE}_{k^{\prime}}(Y)[t]$, which contradicts $t\in M^{\prime}$. Consequently, $t^{\prime}\in M$. In this case, $M^{\prime}\cap[t^{\prime}\mathinner{.\,.\allowbreak}t]=\\{t^{\prime},t\\}$, so $t^{\prime}$ is the position of $M^{\prime}$ preceding $t$. Our goal was to show that such a position exists and belongs to $M$, so this completes the proof of the claim. ∎ Overall, we conclude that $\mathsf{hd}(\mathsf{CE}_{k^{\prime}}(X),\mathsf{CE}_{k^{\prime}}(Y))\leq 2|M|=\mathcal{O}((k+\frac{1}{k}\mathsf{cost}(\mathcal{A}_{W}))\log m)$. ∎ ###### Lemma 6.16. 
Given integers $n\geq k\geq 1$, a seed $r$ of $\mathcal{O}(\log^{2}n)$ random bits, and streaming access to a string $S\in\Sigma^{\leq n}$, the string $\mathsf{CE}_{k}(\mathsf{W}(S,n,r))$ can be computed in $\tilde{\mathcal{O}}(k)$ space and $\tilde{\mathcal{O}}(nk)$ time. ###### Proof. Let us start with an auxiliary subroutine: ###### Claim 6.17. There is a streaming algorithm that computes $\mathsf{D}(C_{k}(S,p))$ for subsequent positions $p\in[1\mathinner{.\,.\allowbreak}|S|]$. The algorithm uses $\tilde{\mathcal{O}}(k)$ space and $\tilde{\mathcal{O}}(nk)$ time. Moreover, if $C_{k}(S,p)=S[p\mathinner{.\,.\allowbreak}q)$, then $\mathsf{D}(C_{k}(S,p))$ is reported while the algorithm processes $S[q]$ (or the end-of-string token if $q=|S|+1$). ###### Proof. The algorithm maintains a fragment $S[p\mathinner{.\,.\allowbreak}q)$ satisfying $\overline{\mathsf{maxLZ}}(S[p\mathinner{.\,.\allowbreak}q))\leq k$ and the encoding $\mathsf{D}(S[p\mathinner{.\,.\allowbreak}q))$. Initially, $S[p\mathinner{.\,.\allowbreak}q)=S[1\mathinner{.\,.\allowbreak}1)=\varepsilon$. In each iteration, we read $S[q]$, compute $\mathsf{D}(S[p\mathinner{.\,.\allowbreak}q])$ (using Proposition 4.2(i)), and check whether $|\mathsf{LZ}(\overline{S[p\mathinner{.\,.\allowbreak}q]})|\leq k$ (using Proposition 4.2(f)). By 6.7(a), this condition is equivalent to $\overline{\mathsf{maxLZ}}(S[p\mathinner{.\,.\allowbreak}q])\leq k$. If $\overline{\mathsf{maxLZ}}(S[p\mathinner{.\,.\allowbreak}q])\leq k$, we discard $\mathsf{D}(S[p\mathinner{.\,.\allowbreak}q))$ and increment $q$. Otherwise, we are guaranteed that $S[p\mathinner{.\,.\allowbreak}q)=C_{k}(S,p)$, so we output $\mathsf{D}(S[p\mathinner{.\,.\allowbreak}q))$, compute $\mathsf{D}(S[p+1\mathinner{.\,.\allowbreak}q))$ (using Proposition 4.2(g)), discard $\mathsf{D}(S[p\mathinner{.\,.\allowbreak}q))$ and $\mathsf{D}(S[p\mathinner{.\,.\allowbreak}q])$, and increment $p$. 
By 6.7(b), we are guaranteed that the invariant $\overline{\mathsf{maxLZ}}(S[p\mathinner{.\,.\allowbreak}q))\leq k$ remains satisfied. In the special case of $q=|S|+1$, we proceed as if $\overline{\mathsf{maxLZ}}(S[p\mathinner{.\,.\allowbreak}q])>k$. By 6.7(c), we store $\mathsf{D}(X)$ only for strings $X$ satisfying $\overline{\mathsf{maxLZ}}(X)\leq k+1$, so the space usage and the per-iteration running time is $\tilde{\mathcal{O}}(k)$. Each iteration increments either $p$ or $q$, so the algorithm reads $S$ in a streaming fashion and the amortized running time is $\tilde{\mathcal{O}}(k)$ per character. ∎

We maintain two instances of the algorithm of 6.17 and two instances of the algorithm of Proposition 3.16. We feed the first instance of 6.17 with the input stream $S$, obtaining $\mathsf{D}(C_{k}(S,q))$ for subsequent positions $q\in[1\mathinner{.\,.\allowbreak}|S|]$. Upon retrieving $\mathsf{D}(C_{k}(S,q))$, we extract $S[q]$ using Proposition 4.2(b) and forward $S[q]$ to the first instance of Proposition 3.16, which lists indices $v\in[1\mathinner{.\,.\allowbreak}3n]$ such that $s_{v}=q$. We also forward $S[q]$ to the second instance of 6.17, obtaining $\mathsf{D}(C_{k}(S,p))$ for all subsequent positions $p$ such that $C_{k}(S,p)=S[p\mathinner{.\,.\allowbreak}q)$. Upon retrieving $\mathsf{D}(C_{k}(S,p))$, we extract $S[p]$ using Proposition 4.2(b) and forward $S[p]$ to the second instance of Proposition 3.16, which lists indices $t\in[1\mathinner{.\,.\allowbreak}3n]$ such that $s_{t}=p$. For each such position $t$, we have $C^{2}_{k}(S,s_{t})=C_{k}(S,p)C_{k}(S,q)$. We compute $\mu_{t}=\mu(t,\min\\{v\in[1\mathinner{.\,.\allowbreak}3n]:s_{v}=q\\})$ based on the output of the first instance of Proposition 3.16. If $\mu_{t}>\mu_{t-1}$, we construct $\mathsf{LZ}\left(\overline{C^{2}_{k}(S,s_{t})}\right)$ using Proposition 4.2(i) and Proposition 4.2(f).
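The two-pointer context computation of 6.17, which both instances above run, can be illustrated with a naive Python sketch. It replaces the $\mathsf{D}(\cdot)$ encodings by brute-force recomputation of the greedy LZ factorization size of the reversed window, so it is quadratic per step and serves only to show the pointer movement; the names `lz_size` and `contexts`, and the 0-based indexing, are ours:

```python
def lz_size(s):
    """Number of phrases in the greedy LZ factorization of s, where each
    phrase is a single character or a longest prefix of the remainder
    that also occurs starting at an earlier position."""
    i, phrases = 0, 0
    while i < len(s):
        longest = 0
        for j in range(i):  # candidate earlier starting positions
            length = 0
            while i + length < len(s) and s[j + length] == s[i + length]:
                length += 1
            longest = max(longest, length)
        i += max(longest, 1)
        phrases += 1
    return phrases

def contexts(S, k):
    """Yield (p, q) with C_k(S, p) = S[p:q] for p = 0, 1, ... (0-based).
    The right pointer q never moves backwards, mirroring the invariant
    maintained by the streaming algorithm."""
    q = 0
    for p in range(len(S)):
        q = max(q, p)
        while q < len(S) and lz_size(S[p:q + 1][::-1]) <= k:
            q += 1
        yield p, q
```

Because each of the two pointers only moves forward, the streaming version pays an amortized cost per character rather than per window, which is where the $\tilde{\mathcal{O}}(k)$-per-character bound comes from.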
By Propositions 4.2, 3.16 and 6.17, the algorithm uses $\tilde{\mathcal{O}}(k)$ space and $\tilde{\mathcal{O}}(nk)$ time. ∎

### 6.3 Applications of Hamming distance sketches

Let us start by recalling the fingerprints (sketches) for testing string equality.

###### Fact 6.18 (see e.g. [38]).

There exists a fingerprint $\psi$ (parameterized by an integer $n\in\mathbb{Z}_{+}$, a threshold $\delta$ with $1\geq\delta\geq n^{-\mathcal{O}(1)}$, an alphabet $\Sigma=[0\mathinner{.\,.\allowbreak}n^{\mathcal{O}(1)})$, and a seed of $\mathcal{O}(\log n)$ random bits) such that:

1. The fingerprint $\psi(S)$ of a string $S\in\Sigma^{\leq n}$ takes $\mathcal{O}(\log\delta^{-1})$ bits. Given streaming access to $S$, it can be constructed in $\mathcal{O}(|S|)$ time using $\mathcal{O}(\log n)$ bits of space.
2. For all strings $X,Y\in\Sigma^{\leq n}$, we have $\Pr[\psi(X)=\psi(Y)]\leq\delta$ if $X\neq Y$ (and $\psi(X)=\psi(Y)$ otherwise).

For two equal-length strings $X,Y$, the set of _mismatch positions_ is defined as $\mathsf{MP}(X,Y)=\\{i\in[1\mathinner{.\,.\allowbreak}|X|]:X[i]\neq Y[i]\\}$ and the _mismatch information_ as $\mathsf{MI}(X,Y)=\\{(i,X[i],Y[i]):i\in\mathsf{MP}(X,Y)\\}$. Below, we adapt the Hamming sketches of [16] to large alphabets.

###### Theorem 6.19.

For every constant $\delta\in(0,1)$, there exists a sketch $\mathsf{sk}^{H}_{k}$ (parameterized by integers $n\geq k\geq 1$, an alphabet $\Sigma=[0\mathinner{.\,.\allowbreak}\sigma)$, and a seed of $\mathcal{O}(\log(n\log\sigma))$ random bits) such that:

1. The sketch $\mathsf{sk}^{H}_{k}(S)$ of a string $S\in\Sigma^{n}$ takes $\mathcal{O}(k\log(n\sigma))$ bits. Given streaming access to $S$, it can be constructed in $\mathcal{O}(n\log(n\sigma)\log(n\log\sigma))$ time using $\mathcal{O}(k\log(n\sigma))$ bits of space.
2.
There exists a decoding algorithm that, given $\mathsf{sk}^{H}_{k}(X)$ and $\mathsf{sk}^{H}_{k}(Y)$ for strings $X,Y\in\Sigma^{n}$, with probability at least $1-\delta$ either returns $\mathsf{MI}(X,Y)$ or certifies that $\mathsf{hd}(X,Y)>k$. The algorithm uses $\mathcal{O}(k\log(n\sigma)\log^{2}(n\log\sigma))$ time and $\mathcal{O}(k\log(n\sigma))$ bits of space.

###### Proof.

The construction of [16] satisfies the required conditions provided that $\sigma=n^{\mathcal{O}(1)}$. Henceforth, we assume without loss of generality that $\sigma$ is a power of two satisfying $\sigma\geq n\log\sigma$. We interpret each character in $\Sigma$ as a block of $b=\lceil\log_{n\log\sigma}\sigma\rceil$ characters in $[0\mathinner{.\,.\allowbreak}n\log\sigma)$. Moreover, we consider the fingerprints $\psi$ of Fact 6.18 with $\delta_{\ref{fact:kr}}=\frac{\delta}{2n}$ and $n_{\ref{fact:kr}}=\sigma_{\ref{fact:kr}}=n\log\sigma$. This construction uses $\mathcal{O}(\log n_{\ref{fact:kr}})=\mathcal{O}(\log(n\log\sigma))$ random bits and produces fingerprints of $\mathcal{O}(\log\delta^{-1}_{\ref{fact:kr}})=\mathcal{O}(\log n)$ bits. Given a string $S\in\Sigma^{n}$, we define a string $\bar{S}[1\mathinner{.\,.\allowbreak}nb]$ so that $\bar{S}[ib-j+1]=(S[i][j],\psi(S[i]))$ for $i\in[1\mathinner{.\,.\allowbreak}|S|]$ and $j\in[1\mathinner{.\,.\allowbreak}b]$. Consequently, we set $\mathsf{sk}^{H}_{k}(S):=\mathsf{sk}^{H}_{\bar{k}}(\bar{S})$ using the sketch of [16] with $\bar{\delta}=\frac{\delta}{2}$, $\bar{n}=nb$, $\bar{k}=kb$, and $\bar{\sigma}\leq n^{\mathcal{O}(1)}\log\sigma$. Note that this is feasible since $\bar{\sigma}\leq n^{\mathcal{O}(1)}\leq\bar{n}^{\mathcal{O}(1)}$ if $n\leq\log\sigma$ and $\bar{\sigma}\leq\log^{\mathcal{O}(1)}\sigma\leq(\log_{\log^{2}\sigma}\sigma)^{\mathcal{O}(1)}\leq b^{\mathcal{O}(1)}\leq\bar{n}^{\mathcal{O}(1)}$ otherwise.
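The character-to-block expansion described above can be sketched in a few lines of Python. The toy polynomial fingerprint stands in for $\psi$ of Fact 6.18, all names are ours, and no collision handling is attempted; the point is that two unequal characters expand into blocks disagreeing in *every* position, so Hamming distances scale exactly by the block length $b$ (barring fingerprint collisions):

```python
import random

PRIME = (1 << 61) - 1

def fingerprint(digits, seed):
    """Toy polynomial (Karp-Rabin-style) fingerprint of a digit tuple;
    a stand-in for the psi of Fact 6.18."""
    x = random.Random(seed).randrange(1, PRIME)
    h = 0
    for d in digits:
        h = (h * x + d + 1) % PRIME
    return h

def expand(S, b, base, seed=0):
    """Expand each character of a large alphabet [0, base**b) into b
    digits over [0, base), each paired with a fingerprint of the whole
    character.  Unequal characters then disagree in every position of
    their blocks, so hd(expand(X), expand(Y)) == b * hd(X, Y) unless
    the fingerprints collide."""
    out = []
    for a in S:
        digits = []
        for _ in range(b):
            digits.append(a % base)
            a //= base
        f = fingerprint(digits, seed)
        out.extend((d, f) for d in digits)
    return out
```

Attaching the whole-character fingerprint to every digit is what makes the scaling exact: without it, two characters sharing some digits would produce blocks with fewer than $b$ mismatches.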
This construction uses $\mathcal{O}(\log\bar{n})=\mathcal{O}(\log(n\log\sigma))$ further random bits and produces a sketch of $\mathcal{O}(\bar{k}\log\bar{n})=\mathcal{O}(kb\log(nb))=\mathcal{O}(k\log_{n\log\sigma}\sigma\log(n\log\sigma))=\mathcal{O}(k\log\sigma)$ bits. The auxiliary string $\bar{S}$ is constructed in $\mathcal{O}(\bar{n})=\mathcal{O}(n\log\sigma)$ time using $\mathcal{O}(\log(n\sigma))$ bits of space. The stream representing $\bar{S}$ is passed to the encoding algorithm of [16], which takes $\mathcal{O}(\bar{n}\log^{2}\bar{n})=\mathcal{O}(n\log_{n\log\sigma}\sigma\log^{2}(n\log\sigma))=\mathcal{O}(n\log(n\sigma)\log(n\log\sigma))$ time and $\mathcal{O}(\bar{k}\log\bar{n})=\mathcal{O}(k\log\sigma)$ bits of space.

The decoding algorithm, given $\mathsf{sk}^{H}_{\bar{k}}(\bar{X})$ and $\mathsf{sk}^{H}_{\bar{k}}(\bar{Y})$, runs the decoding algorithm of [16]. If the latter certifies $\mathsf{hd}(\bar{X},\bar{Y})>\bar{k}$, we certify that $\mathsf{hd}(X,Y)>k$. Otherwise, we interpret the output as $\mathsf{MI}(\bar{X},\bar{Y})$. For each position $i\in[1\mathinner{.\,.\allowbreak}|X|]$ such that $(ib-b\mathinner{.\,.\allowbreak}ib]\subseteq\mathsf{MP}(\bar{X},\bar{Y})$, we retrieve $X[i][j]$ and $Y[i][j]$ for each $j\in[1\mathinner{.\,.\allowbreak}b]$ from $(ib-j+1,\bar{X}[ib-j+1],\bar{Y}[ib-j+1])\in\mathsf{MI}(\bar{X},\bar{Y})$, and then we output $(i,X[i],Y[i])\in\mathsf{MI}(X,Y)$.

As for correctness, with at most $n\delta_{\ref{fact:kr}}+\bar{\delta}=\delta$ probability loss, we may assume that the decoder of [16] is successful and that, for all $i\in[1\mathinner{.\,.\allowbreak}|X|]$, we have $\psi(X[i])=\psi(Y[i])$ if and only if $X[i]=Y[i]$. The latter assumption yields $\mathsf{MP}(\bar{X},\bar{Y})=\bigcup_{i\in\mathsf{MP}(X,Y)}(ib-b\mathinner{.\,.\allowbreak}ib]$ and thus $\mathsf{hd}(\bar{X},\bar{Y})=b\cdot\mathsf{hd}(X,Y)$.
Hence, we correctly certify $\mathsf{hd}(X,Y)>k$ if $\mathsf{hd}(\bar{X},\bar{Y})>\bar{k}$, and we correctly reconstruct $\mathsf{MI}(X,Y)$ otherwise. The decoder uses $\mathcal{O}(\bar{k}\log^{3}\bar{n})=\mathcal{O}(k\log_{n\log\sigma}\sigma\log^{3}(n\log\sigma))=\mathcal{O}(k\log\sigma\log^{2}(n\log\sigma))$ time and $\mathcal{O}(\bar{k}\log\bar{n})=\mathcal{O}(k\log\sigma)$ bits of space. ∎

Next, consider an alphabet $\hat{\Sigma}:=\Sigma\times\mathbb{Z}_{\geq 0}$ and a function $\pi:\hat{\Sigma}\to\mathbb{Z}_{\geq 0}$ defined so that $\pi((a,v))=v$ for $(a,v)\in\hat{\Sigma}$ and $\pi(S)=\sum_{i=1}^{|S|}\pi(S[i])$ for $S\in\hat{\Sigma}^{*}$. For two strings $X,Y\in\hat{\Sigma}^{*}$ of the same length, we define the _prefix mismatch information_ $\mathsf{PMI}(X,Y)=\\{(i,\pi(X[1\mathinner{.\,.\allowbreak}i)),\pi(Y[1\mathinner{.\,.\allowbreak}i))):i\in\mathsf{MP}(X,Y)\\}.$

###### Proposition 6.20.

For every constant $\delta\in(0,1)$, there exists a sketch $\mathsf{sk}^{P}_{k}$ (parameterized by integers $n\geq k\geq 1$, an alphabet $\hat{\Sigma}=[1\mathinner{.\,.\allowbreak}\sigma]\times[0\mathinner{.\,.\allowbreak}n^{\mathcal{O}(1)}]$, and a seed of $\mathcal{O}(\log(n\log\sigma))$ random bits) such that:

1. The sketch $\mathsf{sk}^{P}_{k}(S)$ of a string $S\in\hat{\Sigma}^{n}$ takes $\mathcal{O}(k\log^{2}n)$ bits. Given streaming access to $S$, it can be constructed in $\mathcal{O}(n\log n\log(n\log\sigma)+n\log^{2}n)$ time using $\mathcal{O}(k\log^{2}n+\log n\log(n\log\sigma))$ bits of space.
2. There exists a decoding algorithm that, given the sketches $\mathsf{sk}^{P}_{k}(X)$ and $\mathsf{sk}^{P}_{k}(Y)$ of strings $X,Y\in\hat{\Sigma}^{n}$ satisfying $\pi(X),\pi(Y)<n$, with probability at least $1-\delta$ either returns $\mathsf{PMI}(X,Y)$ or certifies that $\mathsf{hd}(X,Y)>k$. The algorithm uses $\mathcal{O}(k\log^{4}n)$ time and $\mathcal{O}(k\log^{2}n)$ bits of space.

###### Proof.
Let us fix a complete binary tree $\mathcal{T}$ with $n$ leaves, numbered with $[1\mathinner{.\,.\allowbreak}n]$ in the left-to-right order, and let $v_{1},\ldots,v_{2n-1}$ denote the nodes of $\mathcal{T}$ in the post-order. For each node $v_{i}\in\mathcal{T}$, let $[p_{i}\mathinner{.\,.\allowbreak}q_{i})\subseteq[1\mathinner{.\,.\allowbreak}n]$ be the indices of the leaves in the subtree of $v_{i}$. Consider the fingerprints $\psi$ of Fact 6.18 parameterized by $\delta_{\ref{fact:kr}}=\frac{\delta}{4n-2}$, $n_{\ref{fact:kr}}=\mathcal{O}(n\log(n\sigma))$, and $\sigma_{\ref{fact:kr}}=2$: Given a string $S\in\hat{\Sigma}^{n}$, we define a string $\mathcal{T}(S)[1\mathinner{.\,.\allowbreak}2n)$ so that $\mathcal{T}(S)[i]=(\pi(S[p_{i}\mathinner{.\,.\allowbreak}q_{i})),\psi(S[p_{i}\mathinner{.\,.\allowbreak}q_{i})))$ for every $i\in[1\mathinner{.\,.\allowbreak}2n)$, where $\psi$ expands each character of $\hat{\Sigma}$ into a sequence of $\mathcal{O}(\log(n\sigma))$ bits. We set $\mathsf{sk}^{P}_{k}(S):=\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}(\mathcal{T}(S))$, where $\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}$ is the sketch of Theorem 6.19 with $\delta_{\ref{thm:skH}}=\frac{\delta}{2}$, $n_{\ref{thm:skH}}=2n-1$, $k_{\ref{thm:skH}}=\min(k\lceil{\log(2n)}\rceil,n_{\ref{thm:skH}})$, and $\sigma_{\ref{thm:skH}}\leq 2^{\mathcal{O}(\log(n/\delta_{\ref{fact:kr}}))}\leq n^{\mathcal{O}(1)}$. This construction uses $\mathcal{O}(\log n_{\ref{fact:kr}}+\log(n_{\ref{thm:skH}}\log\sigma_{\ref{thm:skH}}))=\mathcal{O}(\log(n\log\sigma))$ random bits and produces sketches of $\mathcal{O}(k_{\ref{thm:skH}}\log(n_{\ref{thm:skH}}\sigma_{\ref{thm:skH}}))=\mathcal{O}(k\log^{2}n)$ bits. The encoding algorithm transforms $S$ into $\mathcal{T}(S)$ and feeds $\mathcal{T}(S)$ into the encoding procedure of Theorem 6.19. Construction of $\mathcal{T}(S)$ is organized in $\mathcal{O}(\log n)$ layers, each responsible for nodes $v_{i}\in\mathcal{T}$ at a fixed level. 
The intervals $[p_{i}\mathinner{.\,.\allowbreak}q_{i})$ corresponding to these nodes are disjoint so, at any time, a layer produces a single character $\mathcal{T}(S)[i]$ and, by Fact 6.18, spends $\mathcal{O}(\log(n\log\sigma))$ amortized time and uses $\mathcal{O}(\log n)$ bits of space to process a single character $S[j]$. Since the tree $\mathcal{T}$ is linearized in the post-order fashion, all levels can read the string $S$ with a common left-to-right pass and outputting $\mathcal{T}(S)$ does not require any buffers. Overall, construction of $\mathcal{T}(S)$ takes $\mathcal{O}(n\log n\log(n\log\sigma))$ time and uses $\mathcal{O}(\log n\log(n\log\sigma))$ bits of space. The encoder of Theorem 6.19 takes $\mathcal{O}(n_{\ref{thm:skH}}\log(n_{\ref{thm:skH}}\sigma_{\ref{thm:skH}})\log(n_{\ref{thm:skH}}\log\sigma_{\ref{thm:skH}}))=\mathcal{O}(n\log^{2}n)$ time and uses $\mathcal{O}(k_{\ref{thm:skH}}\log(n_{\ref{thm:skH}}\sigma_{\ref{thm:skH}}))=\mathcal{O}(k\log^{2}n)$ bits of space. The decoding algorithm first retrieves $\mathsf{MI}(\mathcal{T}(X),\mathcal{T}(Y))$ from $\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}(\mathcal{T}(X))$ and $\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}(\mathcal{T}(Y))$ using the decoder of Theorem 6.19. If the output of that subroutine can be interpreted as $\mathsf{MI}(\mathcal{T}(X),\mathcal{T}(Y))$, then we use 6.21 below to retrieve $\mathsf{PMI}(X,Y)$. If the size of the obtained set does not exceed $k$, we return the set. In the remaining cases, we certify that $\mathsf{hd}(X,Y)>k$. ###### Claim 6.21. For every $X,Y\in\hat{\Sigma}^{n}$, the set $\mathsf{PMI}(X,Y)$ can be extracted from $\mathsf{MI}(\mathcal{T}(X),\mathcal{T}(Y))$ in time and space $\mathcal{O}(\mathsf{hd}(X,Y)\log n)$ with success probability at least $1-\frac{\delta}{2}$. ###### Proof. 
We assume that, for every $i\in[1\mathinner{.\,.\allowbreak}2n)$, the equality $\psi(X[p_{i}\mathinner{.\,.\allowbreak}q_{i}))=\psi(Y[p_{i}\mathinner{.\,.\allowbreak}q_{i}))$ implies $X[p_{i}\mathinner{.\,.\allowbreak}q_{i})=Y[p_{i}\mathinner{.\,.\allowbreak}q_{i})$. By Fact 6.18, each of the implications fails with probability at most $\delta_{\ref{fact:kr}}$, so the overall failure probability can be bounded by $\delta_{\ref{fact:kr}}\cdot(2n-1)=\frac{\delta}{2}$. Our assumption yields $\mathsf{MP}(\mathcal{T}(X),\mathcal{T}(Y))=\\{i\in[1\mathinner{.\,.\allowbreak}2n):[p_{i}\mathinner{.\,.\allowbreak}q_{i})\cap\mathsf{MP}(X,Y)\neq\emptyset\\}.$ In particular, $\mathsf{MP}(X,Y)=\\{p_{i}:v_{i}\text{ is a leaf and }i\in\mathsf{MP}(\mathcal{T}(X),\mathcal{T}(Y))\\}$. Thus, it remains to describe how to extract $\pi(X[1\mathinner{.\,.\allowbreak}p))$ and $\pi(Y[1\mathinner{.\,.\allowbreak}p))$ for each $p\in\mathsf{MP}(X,Y)$; by symmetry, we focus on $\pi(X[1\mathinner{.\,.\allowbreak}p))$. For this, we process subsequent nodes $v_{i}$ on the path from the root of $\mathcal{T}$ to the leaf $v_{j}$ such that $\\{p\\}=[p_{j}\mathinner{.\,.\allowbreak}q_{j})$ maintaining $\pi(X[1\mathinner{.\,.\allowbreak}p_{i}))$. Note that the values $\pi(X[p_{i}\mathinner{.\,.\allowbreak}q_{i}))$ can be extracted from $\mathsf{MI}(\mathcal{T}(X),\mathcal{T}(Y))$ due to $p\in[p_{i}\mathinner{.\,.\allowbreak}q_{i})$. If $v_{i}$ is the root, then $\pi(X[1\mathinner{.\,.\allowbreak}p_{i}))=\pi(\varepsilon)=0$. If $v_{i}$ is the left child of $v_{i^{\prime}}$, then $p_{i}=p_{i^{\prime}}$, so $\pi(X[1\mathinner{.\,.\allowbreak}p_{i}))=\pi(X[1\mathinner{.\,.\allowbreak}p_{i^{\prime}}))$ has already been computed. 
If $v_{i}$ is the right child of $v_{i^{\prime}}$, then $q_{i}=q_{i^{\prime}}$, so $\pi(X[1\mathinner{.\,.\allowbreak}p_{i}))+\pi(X[p_{i}\mathinner{.\,.\allowbreak}q_{i}))=\pi(X[1\mathinner{.\,.\allowbreak}q_{i}))=\pi(X[1\mathinner{.\,.\allowbreak}q_{i^{\prime}}))=\pi(X[1\mathinner{.\,.\allowbreak}p_{i^{\prime}}))+\pi(X[p_{i^{\prime}}\mathinner{.\,.\allowbreak}q_{i^{\prime}}))$. Consequently, $\pi(X[1\mathinner{.\,.\allowbreak}p_{i}))$ can be retrieved from $\pi(X[1\mathinner{.\,.\allowbreak}p_{i^{\prime}}))$, which has already been computed, as well as $\pi(X[p_{i}\mathinner{.\,.\allowbreak}q_{i}))$ and $\pi(X[p_{i^{\prime}}\mathinner{.\,.\allowbreak}q_{i^{\prime}}))$, which are available in $\mathsf{MI}(\mathcal{T}(X),\mathcal{T}(Y))$. When this process reaches $v_{j}$, it results in the sought value $\pi(X[1\mathinner{.\,.\allowbreak}p))=\pi(X[1\mathinner{.\,.\allowbreak}p_{j}))$. ∎ It remains to analyze correctness of the decoding algorithm. The decoder of Theorem 6.19 fails with probability at most $\delta_{\ref{thm:skH}}=\frac{\delta}{2}$ and the procedure of 6.21 fails with probability at most $\frac{\delta}{2}$. Thus, with at most $\delta$ probability loss, we may assume that both calls are successful. If $\mathsf{hd}(\mathcal{T}(X),\mathcal{T}(Y))\leq k_{\ref{thm:skH}}$, then the decoder of Theorem 6.19 retrieves $\mathsf{MI}(\mathcal{T}(X),\mathcal{T}(Y))$ and the procedure of 6.21 results in $\mathsf{PMI}(X,Y)$. Depending on whether $\mathsf{hd}(X,Y)=|\mathsf{PMI}(X,Y)|\leq k$ or not, our decoding algorithm thus correctly returns $\mathsf{PMI}(X,Y)$ or certifies that $\mathsf{hd}(X,Y)>k$. Otherwise, the algorithm of Theorem 6.19 certifies $\mathsf{hd}(\mathcal{T}(X),\mathcal{T}(Y))>k_{\ref{thm:skH}}$, and the whole decoding procedure certifies $\mathsf{hd}(X,Y)>k$. This is correct because $\mathsf{hd}(\mathcal{T}(X),\mathcal{T}(Y))\leq\mathsf{hd}(X,Y)\lceil{\log(2n)}\rceil$. 
To see this, observe that $[p_{i}\mathinner{.\,.\allowbreak}q_{i})\cap\mathsf{MP}(X,Y)=\emptyset$ implies $X[p_{i}\mathinner{.\,.\allowbreak}q_{i})=Y[p_{i}\mathinner{.\,.\allowbreak}q_{i})$ and $\mathcal{T}(X)[i]=\mathcal{T}(Y)[i]$. Moreover, every leaf of $\mathcal{T}$ has at most $\lceil{\log(2n)}\rceil$ ancestors (including itself). Consequently, for every $j\in\mathsf{MP}(X,Y)$, there are at most $\lceil{\log(2n)}\rceil$ nodes $v_{i}$ such that $j\in[p_{i}\mathinner{.\,.\allowbreak}q_{i})$. ∎ ### 6.4 Edit Distance Sketches We are now ready to show the main result of this section. See 3.17 ###### Proof. Let $c_{\ref{prp:alg}}$ be the constant of Proposition 3.16 for $\delta_{\ref{prp:alg}}=\frac{\delta}{3}$, and let $c_{\ref{lem:bcgk}}$ be the constant of Lemma 6.12. We shall use $\mathsf{CE}_{k^{\prime}}(W)$ with $k^{\prime}=(c_{\ref{prp:alg}}+5)k$ and $W=\mathsf{W}(n,r,S)$, where $r$ is a random seed of $\mathcal{O}(\log^{2}n)$ bits. Observe that the alphabet of $\mathsf{CE}_{k^{\prime}}(W)$ can be interpreted as $[0\mathinner{.\,.\allowbreak}n^{\mathcal{O}(k)})\times[0\mathinner{.\,.\allowbreak}n]$ because $|\mathsf{LZ}(\overline{C^{2}_{k^{\prime}}(S,s)})|=\mathcal{O}(k)$ for each $s\in[1\mathinner{.\,.\allowbreak}|S|]$. We shall use Theorem 6.19 and Proposition 6.20 with $n_{\ref{thm:skH}}=n_{\ref{prp:mpi}}=3n$, $\delta_{\ref{thm:skH}}=\delta_{\ref{prp:mpi}}=\tfrac{\delta}{3}$, $k_{\ref{thm:skH}}=k_{\ref{prp:mpi}}=\min(n_{\ref{thm:skH}},\lfloor c_{\ref{lem:bcgk}}(1+c_{\ref{prp:alg}})k\log(3n)\rfloor)$, and $\sigma_{\ref{thm:skH}}=\sigma_{\ref{prp:mpi}}=n^{\mathcal{O}(k)}.$ The sketch $\mathsf{sk}^{E}_{k}(S)$ of a string $S\in\Sigma^{\leq n}$ consists of $|S|$ as well as the sketches $\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}(\mathsf{CE}_{k^{\prime}}(W))$ and $\mathsf{sk}^{P}_{k_{\ref{prp:mpi}}}(\mathsf{CE}_{k^{\prime}}(W))$. 
This construction uses $\mathcal{O}(\log^{2}n+\log(n_{\ref{thm:skH}}\log\sigma_{\ref{thm:skH}})+\log(n_{\ref{prp:mpi}}\log\sigma_{\ref{prp:mpi}}))=\mathcal{O}(\log^{2}n+\log(nk\log n))=\mathcal{O}(\log^{2}n)$ random bits and produces sketches of bit-size $\mathcal{O}(k_{\ref{thm:skH}}\log(n_{\ref{thm:skH}}\sigma_{\ref{thm:skH}})+k_{\ref{prp:mpi}}\log^{2}n_{\ref{prp:mpi}})=\mathcal{O}(k\log n\log(n^{\mathcal{O}(k)})+k\log^{3}n)=\mathcal{O}(k^{2}\log^{3}n).$ The encoding algorithm uses Lemma 6.16 to transform the input stream representing $S$ to an auxiliary stream representing $\mathsf{CE}_{k^{\prime}}(W)$, which we forward to the encoders constructing $\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}(\mathsf{CE}_{k^{\prime}}(W))$ and $\mathsf{sk}^{P}_{k_{\ref{prp:mpi}}}(\mathsf{CE}_{k^{\prime}}(W))$. Thus, it takes $\tilde{\mathcal{O}}(nk^{\prime}+n_{\ref{thm:skH}}\log\sigma_{\ref{thm:skH}}+n_{\ref{prp:mpi}})=\tilde{\mathcal{O}}(nk)$ time and uses $\tilde{\mathcal{O}}(k^{\prime}+k_{\ref{thm:skH}}\log\sigma_{\ref{thm:skH}}+k_{\ref{prp:mpi}})=\tilde{\mathcal{O}}(k^{2})$ space. ##### Decoding Algorithm Let $\mathcal{A}_{\mathsf{W}}$ be the zip alignment of walks $W_{X}=(x_{t})_{t=1}^{3n+1}=\mathsf{W}(n,r,X)$ and $W_{Y}=(y_{t})_{t=1}^{3n+1}=\mathsf{W}(n,r,Y)$ over strings $X,Y\in\Sigma^{\leq n}$. Given sketches $\mathsf{sk}^{E}_{k}(X),\mathsf{sk}^{E}_{k}(Y)$, we run the decoders for $\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}(\mathsf{CE}_{k^{\prime}}(W_{X})),\mathsf{sk}^{H}_{k_{\ref{thm:skH}}}(\mathsf{CE}_{k^{\prime}}(W_{Y}))$ and $\mathsf{sk}^{P}_{k_{\ref{prp:mpi}}}(\mathsf{CE}_{k^{\prime}}(W_{X})),\mathsf{sk}^{P}_{k_{\ref{prp:mpi}}}(\mathsf{CE}_{k^{\prime}}(W_{Y}))$. We certify $\mathsf{ed}(X,Y)>k$ if either procedure certifies $\mathsf{hd}(\mathsf{CE}_{k^{\prime}}(W_{X}),\mathsf{CE}_{k^{\prime}}(W_{Y}))>k_{\ref{thm:skH}}=k_{\ref{prp:mpi}}$. 
Otherwise, the decoder interprets the outputs as $\mathsf{MI}(\mathsf{CE}_{k^{\prime}}(W_{X}),\mathsf{CE}_{k^{\prime}}(W_{Y}))$ and $\mathsf{PMI}(\mathsf{CE}_{k^{\prime}}(W_{X}),\mathsf{CE}_{k^{\prime}}(W_{Y}))$, respectively. In this case, we construct $\mathsf{E}^{M}(X,Y)$ for $M=\\{(x,y)\in\mathcal{M}_{X,Y}(\mathcal{A}_{\mathsf{W}}):x\in P_{X}\text{ or }y\in P_{Y}\\}$, where $P_{X}$ and $P_{Y}$ are defined in the statement of Lemma 6.12. Finally, we compute $\mathsf{GR}_{k}(X,Y)$ using Proposition 5.12 or $\min(\mathsf{ed}(X,Y),k+1)$ using Corollary 5.5. ###### Claim 6.22. Given $\mathsf{MI}(\mathsf{CE}_{k^{\prime}}(W_{X}),\mathsf{CE}_{k^{\prime}}(W_{Y}))$ and $\mathsf{PMI}(\mathsf{CE}_{k^{\prime}}(W_{X}),\mathsf{CE}_{k^{\prime}}(W_{Y}))$, the encoding $\mathsf{E}^{M}(X,Y)$ can be constructed in $\tilde{\mathcal{O}}(k^{3})$ time using $\tilde{\mathcal{O}}(k^{2})$ space. Moreover, $|\mathsf{LZ}(X^{M}Y^{M})|=\tilde{\mathcal{O}}(k^{2})$. ###### Proof. We proceed in two phases. In the first phase, we construct the compressed representation $\mathsf{D}(X^{\prime})$ of a string $X^{\prime}\in(\Sigma\cup\\{\\#\\})^{|X|}$ such that $X^{\prime}[x]=X[x]$ if $x\in P_{X}$ and $X^{\prime}[x]=\\#$ otherwise. We initialize $X^{\prime}:=\\#^{|X|}$ (via $\mathsf{LZ}(\\#^{|X|})$ using Proposition 4.2(h)). Next, we iterate over positions $t\in[1\mathinner{.\,.\allowbreak}3n]$ such that $\bot\neq\mathsf{CE}_{k^{\prime}}(W_{X})[t]\neq\mathsf{CE}_{k^{\prime}}(W_{Y})[t]$. We retrieve $x_{t}$ and $\mathsf{LZ}\big{(}\overline{C^{2}_{k^{\prime}}(X,x_{t})}\big{)}$ from the mismatch information, convert $\mathsf{LZ}\big{(}\overline{C^{2}_{k^{\prime}}(X,x_{t})}\big{)}$ to $\mathsf{D}(C^{2}_{k^{\prime}}(X,x_{t}))$ (Proposition 4.2(h)), and update $\mathsf{D}(X^{\prime})$, setting $X^{\prime}[x_{t}\mathinner{.\,.\allowbreak}x_{t}+|C^{2}_{k^{\prime}}(X,x_{t})|):=C^{2}_{k^{\prime}}(X,x_{t})$ (Proposition 4.2(g)(i)). 
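Stripped of the compressed representations $\mathsf{D}(\cdot)$ and the LZ conversions of Proposition 4.2, the first phase amounts to overwriting a dummy string with the retrieved contexts. A toy sketch over plain Python strings (the function name and the uncompressed interface are our own simplifications, not the paper's machinery):

```python
def splice_contexts(n, contexts):
    """Build X' over Sigma union {'#'}: start from #^n and write each
    retrieved context C(X, x) at its (0-based) position x.

    `contexts` maps positions to plain context strings; we assume every
    context fits inside the length-n string, as in the paper's setting.
    """
    xp = ['#'] * n                      # X' := #^|X|
    for x, c in sorted(contexts.items()):
        xp[x:x + len(c)] = list(c)      # X'[x .. x + |C|) := C
    return ''.join(xp)
```

Overlapping contexts simply overwrite one another consistently, since all of them are substrings of the same $X$; the compressed version performs the same updates via Proposition 4.2(g)(i).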
We also symmetrically construct the compressed representation $\mathsf{D}(Y^{\prime})$ of a string $Y^{\prime}\in(\Sigma\cup\\{\\#\\})^{|Y|}$ such that $Y^{\prime}[y]=Y[y]$ if $y\in P_{Y}$ and $Y^{\prime}[y]=\\#$ otherwise. In the second phase, we convert $\mathsf{D}(X^{\prime})$ to $\mathsf{D}(X^{M})$. Here, the goal is to make sure that $X^{M}[x]=\\#$ not only for $x\notin P_{X}$, but also when $X[x]\simeq_{\mathcal{A}_{\mathsf{W}}}Y[y]$ for some $y\notin P_{Y}$. For this, we iterate over dummy segments $Y^{\prime}[y\mathinner{.\,.\allowbreak}y^{\prime})$. By Lemma 6.12, $\mathcal{A}_{\mathsf{W}}$ matches $Y[y\mathinner{.\,.\allowbreak}y^{\prime})$ to a fragment of $X[x\mathinner{.\,.\allowbreak}x^{\prime})$. Hence, we shall identify the shift $x-y$ and set $X^{M}[x\mathinner{.\,.\allowbreak}x^{\prime}):=\\#^{x^{\prime}-x}$ (Proposition 4.2(h)(g)(i)). Let $\mathsf{MP}_{\mathsf{CE}}:=\mathsf{MP}(\mathsf{CE}_{k^{\prime}}(W_{X}),\mathsf{CE}_{k^{\prime}}(W_{Y}))$. Define $u,v\in[1\mathinner{.\,.\allowbreak}3n+1]$ so that $u=v=3n+1$ if $\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}t])\neq y^{\prime}$ for all $t\in\mathsf{MP}_{\mathsf{CE}}$; otherwise, $u$ is the smallest index in $\mathsf{MP}_{\mathsf{CE}}$ with $\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}u])=y^{\prime}$, whereas $v$ is the smallest index in $\mathsf{MP}_{\mathsf{CE}}$ with $\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}v))=\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}u))$. We claim that $x-y=\pi(\mathsf{CE}_{k^{\prime}}(W_{X})[1\mathinner{.\,.\allowbreak}v))-\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}v))$. If $y^{\prime}\leq|Y|$, then $y^{\prime}-1\notin P_{Y}$ and $y^{\prime}\in P_{Y}$. 
Hence, $Y[y^{\prime}]$ is covered by $C^{2}_{k^{\prime}}(Y,y_{t})$ for some $t\in\mathsf{MP}_{\mathsf{CE}}$ with $\mathsf{CE}_{k^{\prime}}(W_{X})[t]\neq\bot$, whereas $Y[y^{\prime}-1]$ is not covered by $C^{2}_{k^{\prime}}(Y,y_{t})$. Hence, there is $t\in\mathsf{MP}_{\mathsf{CE}}$ with $y_{t}=y^{\prime}$, and therefore $y_{u}=y^{\prime}$. Similarly, $y_{u}=y^{\prime}$ holds if $y^{\prime}=|Y|+1$. Let $w\in[1\mathinner{.\,.\allowbreak}3n]$ be the smallest index such that $\mathsf{CE}_{k^{\prime}}(W_{Y})[w]\neq\bot$ and $C^{2}_{k^{\prime}}(Y,y_{w})$ covers $Y[y^{\prime}-1]$ (such an index exists by Lemma 6.9). Due to $y^{\prime}-1\notin P_{Y}$, we have $\mathsf{CE}_{k^{\prime}}(W_{X})[w]=\mathsf{CE}_{k^{\prime}}(W_{Y})[w]$, and hence $x-y=x_{w}-y_{w}$. Definition of $w$ further yields $\mathsf{CE}_{k^{\prime}}(W_{Y})[w+1\mathinner{.\,.\allowbreak}v)=\bot^{u-w-1}$, and thus $\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}v))=y_{w}$. Moreover, since $\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}t))=y_{w}$ for $t\in[w+1\mathinner{.\,.\allowbreak}v]$, we have $\pi(\mathsf{CE}_{k^{\prime}}(W_{X})[1\mathinner{.\,.\allowbreak}v))=x_{w}$ by definition of $v$. This completes the proof that $x-y=\pi(\mathsf{CE}_{k^{\prime}}(W_{X})[1\mathinner{.\,.\allowbreak}v))-\pi(\mathsf{CE}_{k^{\prime}}(W_{Y})[1\mathinner{.\,.\allowbreak}v))$. To derive $\mathsf{E}^{M}(X,Y)$, it suffices to convert $\mathsf{D}(Y^{\prime})$ to $\mathsf{D}(Y^{M})$ (symmetrically), and to construct $\mathsf{D}(X^{M}Y^{M})$ using Proposition 4.2(i). Since $\overline{\mathsf{maxLZ}}(C^{2}_{k^{\prime}}(X,x_{t})),\overline{\mathsf{maxLZ}}(C^{2}_{k^{\prime}}(Y,y_{t}))=\mathcal{O}(k)$ holds for all $t\in[1\mathinner{.\,.\allowbreak}3n+1]$ and since $|\mathsf{MP}_{\mathsf{CE}}|=\mathcal{O}(k\log n)$, the $\overline{\mathsf{maxLZ}}(\cdot)$ measure of all intermediate strings is $\tilde{\mathcal{O}}(k^{2})$. 
Consequently, the $\tilde{\mathcal{O}}(k)$ applications of Proposition 4.2 cost $\tilde{\mathcal{O}}(k^{3})$ time and use $\tilde{\mathcal{O}}(k^{2})$ space. ∎ To complete the complexity analysis, observe that the decoding procedure of Theorem 6.19 uses $\mathcal{O}(k_{\ref{thm:skH}}\log(n_{\ref{thm:skH}}\sigma_{\ref{thm:skH}})\log^{2}(n_{\ref{thm:skH}}\log\sigma_{\ref{thm:skH}}))=\tilde{\mathcal{O}}(k^{2})$ time and $\mathcal{O}(k_{\ref{thm:skH}}\log(n_{\ref{thm:skH}}\sigma_{\ref{thm:skH}}))=\tilde{\mathcal{O}}(k^{2})$ bits of space. The procedure of Proposition 6.20 uses $\mathcal{O}(k_{\ref{prp:mpi}}\log^{2}n_{\ref{prp:mpi}})=\tilde{\mathcal{O}}(k)$ bits of space and costs $\mathcal{O}(k_{\ref{prp:mpi}}\log^{4}n_{\ref{prp:mpi}})=\tilde{\mathcal{O}}(k)$ time. Finally, due to $|\mathsf{LZ}(X^{M}Y^{M})|=\tilde{\mathcal{O}}(k^{2})$ and $|\mathcal{B}_{k}(X,Y)|=\mathcal{O}(k^{5})$ (Lemma 3.11), the algorithm of Proposition 5.12 uses $\tilde{\mathcal{O}}(k^{2})$ space and costs $\tilde{\mathcal{O}}(k^{5})$ time (dominating the overall decoding complexity). If we only aim to retrieve $\min(\mathsf{ed}(X,Y),k+1)$, the algorithm of Corollary 5.5 takes $\tilde{\mathcal{O}}(k^{2})$ time and space (in which case the overall decoding uses $\tilde{\mathcal{O}}(k^{2})$ space and costs $\tilde{\mathcal{O}}(k^{3})$ time). It remains to argue that the decoding algorithm is correct. With $\delta_{\ref{prp:alg}}=\frac{\delta}{3}$ probability loss, we may assume that $\mathcal{A}_{\mathsf{W}}\in\mathsf{GA}_{c_{\ref{prp:alg}}(\mathsf{ed}(X,Y))^{2}}(X,Y)$ and $\mathsf{width}(\mathcal{A}_{\mathsf{W}})\leq c_{\ref{prp:alg}}\mathsf{ed}(X,Y)$. With $\delta_{\ref{thm:skH}}+\delta_{\ref{prp:mpi}}=\frac{2\delta}{3}$ further probability loss, we may assume that the decoders of Theorem 6.19 and Proposition 6.20 are successful. If $\mathsf{ed}(X,Y)>k$, then the correctness follows from Proposition 5.12 and Corollary 5.5 because $M\subseteq\mathcal{M}_{X,Y}(\mathcal{A}_{\mathsf{W}})$ is a non-crossing matching. 
Otherwise, Proposition 3.16 guarantees $\mathsf{cost}(\mathcal{A}_{\mathsf{W}})\leq c_{\ref{prp:alg}}k^{2}$ and $\mathsf{width}(\mathcal{A}_{\mathsf{W}})\leq c_{\ref{prp:alg}}k$ so, in particular, $k^{\prime}\geq\mathsf{width}(\mathcal{A}_{\mathsf{W}})+5k$. By Lemma 6.12, we thus have $\mathsf{hd}(\mathsf{CE}_{k^{\prime}}(W_{X}),\mathsf{CE}_{k^{\prime}}(W_{Y}))\leq c_{\ref{lem:bcgk}}(1+c_{\ref{prp:alg}})k\log(3n)$. Hence, the decoders of Theorem 6.19 and Proposition 6.20 report the mismatch information. Lemma 6.10 further implies that $\Delta_{X}(\mathcal{A},\mathcal{A}_{\mathsf{W}})\subseteq P_{X}$ and $\Delta_{Y}(\mathcal{A},\mathcal{A}_{\mathsf{W}})\subseteq P_{Y}$ for all $\mathcal{A}\in\mathsf{GA}_{k}(X,Y)$. In particular, $\mathcal{M}_{X,Y}(\mathcal{A})\subseteq M$, and therefore $\mathcal{M}_{k}(X,Y)\subseteq M$. Consequently, the algorithms of Proposition 5.12 and Corollary 5.5 correctly compute $\mathsf{GR}_{k}(X,Y)$ and $\min(\mathsf{ed}(X,Y),k+1)$, respectively. ∎ Next, we boost the success probability and strengthen the sketches so that we can retrieve $\mathsf{qGR}_{k}(X,Y)$ instead of $\mathsf{GR}_{k}(X,Y)$. ###### Corollary 6.23. There is a sketch $\mathsf{sk}^{\mathsf{q}}_{k}$ (parametrized by $\delta\in(0,\frac{1}{2})$, integers $n\geq k\geq 1$, an alphabet $\Sigma=[0\mathinner{.\,.\allowbreak}n^{\mathcal{O}(1)})$, and a seed of $\mathcal{O}(\log^{2}n\log(1/\delta))$ random bits) such that: 1. (a) The sketch $\mathsf{sk}^{\mathsf{q}}_{k}(S)$ of a string $S\in\Sigma^{\leq n}$ takes $\mathcal{O}(k^{2}\log^{3}n\log(1/\delta))$ bits. Given streaming access to $S$, it can be constructed in $\tilde{\mathcal{O}}(nk\log(1/\delta))$ time using $\tilde{\mathcal{O}}(k^{2}\log(1/\delta))$ space. 2. (b) There exists a decoding algorithm that, given $\mathsf{sk}^{\mathsf{q}}_{k}(X),\mathsf{sk}^{\mathsf{q}}_{k}(Y)$ for $X,Y\in\Sigma^{\leq n}$, with probability at least $1-\delta$ computes $\mathsf{qGR}_{k}(X,Y)$. 
The algorithm takes $\tilde{\mathcal{O}}(k^{5}\log(1/\delta))$ time and uses $\tilde{\mathcal{O}}(k^{2}\log(1/\delta))$ space. ###### Proof. We shall use $\mu=\mathcal{O}(\log(1/\delta))$ sketches $\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}$ of 3.17 with $\delta_{\ref{thm:ske}}=\frac{1}{3}$, $n_{\ref{thm:ske}}=n+1$, $k_{\ref{thm:ske}}=k+1$, and independent seeds. For each of the $\mu$ sketches $\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}$, the sketch $\mathsf{sk}^{\mathsf{q}}_{k}(S)$ contains $\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}(\$_{1}S)$, $\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}(\$_{2}S)$, where $\$_{1},\$_{2}\notin\Sigma$ are two distinct symbols. This construction uses $\mathcal{O}(\mu\log^{2}n_{\ref{thm:ske}})=\mathcal{O}(\log^{2}n\log(1/\delta))$ random bits and produces sketches of $\mathcal{O}(\mu k^{2}_{\ref{thm:ske}}\log^{3}n_{\ref{thm:ske}})=\mathcal{O}(k^{2}\log^{3}n\log(1/\delta))$ bits. The encoding algorithm calls $2\mu$ instances of the encoding algorithm of 3.17. Hence, it uses $\tilde{\mathcal{O}}(\mu k^{2}_{\ref{thm:ske}})=\tilde{\mathcal{O}}(k^{2}\log(1/\delta))$ space and costs $\tilde{\mathcal{O}}(\mu n_{\ref{thm:ske}}k_{\ref{thm:ske}})=\tilde{\mathcal{O}}(nk\log(1/\delta))$ time. The decoding algorithm, given $\mathsf{sk}^{\mathsf{q}}_{k}(X),\mathsf{sk}^{\mathsf{q}}_{k}(Y)$ for $X,Y\in\Sigma^{\leq n}$, runs the decoder of 3.17 for $\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}(\$_{1}X),\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}(\$_{2}Y)$ for each of the $\mu$ sketches $\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}$. This yields $\mu$ candidates for $\mathsf{GR}_{k_{\ref{thm:ske}}}(\$_{1}X,\$_{2}Y)$, which we convert to candidates for $\mathsf{qGR}_{k}(X,Y)$ using Corollary 5.17. Finally, we determine the majority answer among the $\mu$ candidates. The equality test uses Proposition 4.2(i)(c) to compare two candidates for $\mathsf{D}(X^{M}Y^{M})$. 
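The amplification step here is the standard majority-vote trick over independent constant-error runs. A minimal self-contained illustration (the toy decoder and all names are ours, not the paper's):

```python
import random
from collections import Counter

def majority_boost(decode_once, mu):
    """Run mu independent copies of a decoder that is correct with
    probability > 1/2 and return the most frequent answer; by a Chernoff
    bound, the majority is wrong with probability exp(-Omega(mu))."""
    answers = [decode_once() for _ in range(mu)]
    return Counter(answers).most_common(1)[0][0]

# Toy decoder: returns the true answer 42 with probability 2/3
# (mirroring delta = 1/3 per run); wrong answers are scattered.
rng = random.Random(0)
def noisy_decoder():
    return 42 if rng.random() < 2 / 3 else rng.randrange(10**6)

result = majority_boost(noisy_decoder, mu=101)
```

Taking $\mu=\Theta(\log(1/\delta))$ with a sufficiently large constant brings the failure probability of the combined answer down to $\delta$, exactly as in the proof above.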
Recall that the decoding procedures of 3.17 use $\tilde{\mathcal{O}}(\mu k^{5}_{\ref{thm:ske}})=\tilde{\mathcal{O}}(k^{5}\log(1/\delta))$ time and $\tilde{\mathcal{O}}(\mu k^{2}_{\ref{thm:ske}})=\tilde{\mathcal{O}}(k^{2}\log(1/\delta))$ space. The applications of Corollary 5.17 and the equality tests take $\tilde{\mathcal{O}}(\mu k^{2}_{\ref{thm:ske}})=\tilde{\mathcal{O}}(k^{2}\log(1/\delta))$ time and space. The entire decoding algorithm uses $\tilde{\mathcal{O}}(k^{2}\log(1/\delta))$ space and $\tilde{\mathcal{O}}(\mu k^{5}_{\ref{thm:ske}})=\tilde{\mathcal{O}}(k^{5}\log(1/\delta))$ time. As for correctness, since the $\mu$ sketches $\mathsf{sk}^{E}_{k_{\ref{thm:ske}}}$ are independent, by the Chernoff bound, the majority answer is wrong with probability at most $\exp(-\Omega(\mu))$. Setting $\mu=\mathcal{O}(\log(1/\delta))$ (with a sufficiently large constant factor) guarantees a success probability of $1-\delta$. ∎ ## 7 Pattern Matching with $k$ Edits In this section, we present solutions for pattern matching with $k$ edits in the semi-streaming and streaming settings. ### 7.1 Periodicity under Edit Distance We start by recalling combinatorial properties of strings periodic under the edit distance. See 3.2 ###### Claim 7.1. Suppose that a string $X$ is a prefix of a string $Y$, where $|X|<|Y|\leq 2|X|$. If $X$ is $k$-periodic with $k$-period $Q$, $|Q|\leq|X|/128k$, then either $Y$ is not $k$-periodic, or $Y$ is $k$-periodic with $k$-period $Q$. ###### Proof. Suppose by contradiction that $Y$ is $k$-periodic with $k$-period $Q^{\prime}\neq Q$. Let $q=|Q|$ and $q^{\prime}=|Q^{\prime}|$. Assume first $q\leq q^{\prime}$. Fix an alignment of the smallest cost between $Y$ and a prefix of $(Q^{\prime})^{\infty}$. 
It induces an alignment $\mathcal{A}^{\prime}$ of cost at most $2k$ between $X$ and a prefix of $(Q^{\prime})^{\infty}$ and hence generates a partition $X=X_{1}X_{2}\ldots X_{z}$, where each $X_{i}$, $1\leq i\leq z-1$, is aligned with $Q^{\prime}$ and $X_{z}$ is aligned with a prefix of $Q^{\prime}$. From $|Q|\leq|X|/128k$ we obtain $|X|\geq 128k$ and therefore $z\geq(|X|-2k)/q^{\prime}\geq\frac{128-2}{128}|X|/(|Y|/128k)\gg 20k$. Consider fragments $X_{1}X_{2}X_{3}X_{4}$, $X_{5}X_{6}X_{7}X_{8}$, and so on. The total number of such fragments is at least $(20k-3)/4>4k$, and at least $2k+1$ of them are not edited under $\mathcal{A}^{\prime}$. Fix an optimal alignment $\mathcal{A}$ between $X$ and a prefix of $Q^{\infty}$. The cost of $\mathcal{A}$ is bounded from above by $2k$, and therefore there is at least one fragment $X_{4i+1}X_{4i+2}X_{4i+3}X_{4i+4}$ that is not edited under $\mathcal{A}$ either. Consider one such fragment $F$. On the one hand, $F=Q^{\prime}Q^{\prime}Q^{\prime}Q^{\prime}$. On the other hand, $F=\mathsf{suff}(Q)Q^{j}\mathsf{pref}(Q)$, where $\mathsf{suff}(Q)$ and $\mathsf{pref}(Q)$ are some suffix and some prefix of $Q$, respectively. Suppose first that $q^{\prime}$ is a multiple of $q$. In this case, $Q^{\prime}=\left(Q[\ell+1\mathinner{.\,.\allowbreak}q]Q[1\mathinner{.\,.\allowbreak}\ell]\right)^{r}$, where $\ell=q-|\mathsf{suff}(Q)|$ and $r$ is an integer, which contradicts the fact that $Q^{\prime}$ is primitive. Otherwise, consider the copy of $Q$ that contains $F[q^{\prime}]$. Consider also a substring $QQQ$ of $F$ formed by the copy of $Q$ that contains $F[2q^{\prime}]$, and the preceding and succeeding copies of $Q$. We then obtain that there is an occurrence of $Q$ in $QQQ$ that is not aligned with any copy of $Q$ (otherwise, $q^{\prime}$ is a multiple of $q$), and therefore $Q$ is not primitive, a contradiction. The case $q>q^{\prime}$ can be treated analogously. ∎ Note that 7.1 implies in particular that a string can have at most one $k$-period. 
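The notion of $k$-periodicity used throughout this section (a $k$-period $Q$ with $|Q|\leq|X|/128k$ such that some prefix of $Q^{\infty}$ is within edit distance $2k$ of $X$) can be checked by brute force on small examples. A quadratic-time sketch with our own function names, making no attempt at the paper's efficiency and omitting the primitivity check on $Q$:

```python
def min_prefix_ed(x: str, y: str) -> int:
    """min over prefixes y[:j] of ed(x, y[:j]): one Wagner-Fischer pass,
    then take the minimum of the final DP row."""
    prev = list(range(len(y) + 1))              # ed("", y[:j]) = j
    for i, cx in enumerate(x, 1):
        cur = [i]                               # ed(x[:i], "") = i
        for j, cy in enumerate(y, 1):
            cur.append(min(prev[j] + 1,                 # delete cx
                           cur[j - 1] + 1,              # insert cy
                           prev[j - 1] + (cx != cy)))   # match / substitute
        prev = cur
    return min(prev)

def is_k_periodic_with(x: str, q: str, k: int) -> bool:
    """Is x k-periodic with k-period q, i.e. |q| <= |x| / (128 k) and some
    prefix of q^infinity within edit distance 2k of x?"""
    if not q or len(q) * 128 * k > len(x):
        return False
    # A prefix of q^infinity of length |x| + 2k suffices for the optimum.
    power = q * (len(x) // len(q) + 2 * k + 2)
    return min_prefix_ed(x, power) <= 2 * k
```

For instance, a length-131 string that is two edits away from `"a" * 131` is 1-periodic with 1-period `"a"`, while `"ab" * 65` is not.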
### 7.2 Semi-streaming Algorithm We first present a deterministic algorithm for pattern matching with $k$ edits in the semi-streaming setting. #### 7.2.1 Preprocessing Stage Consider a set $\Pi$ of $\mathcal{O}(\log m)$ prefixes $P_{i}$ of $P$ initialised to contain $P$ itself as well as the prefixes of length $2^{\ell}$ for all $0\leq\ell\leq\lfloor{\log|P|}\rfloor$. Order the prefixes by lengths, and consider two consecutive prefixes $P^{\prime},P^{\prime\prime}$. If $P^{\prime}$ is $k$-periodic with $k$-period $Q^{\prime}$ while $P^{\prime\prime}$ is not $k$-periodic, we add two more prefixes to $\Pi$. Namely, letting $\ell$ be the maximum integer such that $P[1\mathinner{.\,.\allowbreak}\ell]$ is $k$-periodic with $k$-period $Q^{\prime}$, we add to $\Pi$ the prefixes $P[1\mathinner{.\,.\allowbreak}\ell]$ and $P[1\mathinner{.\,.\allowbreak}\ell+1]$. Note that $P[1\mathinner{.\,.\allowbreak}\ell+1]$ is not $k$-periodic by 7.1. Let $\Pi=\\{P_{1},P_{2},\ldots,P_{z}\\}$ be the resulting set of prefixes. We assume that the prefixes are ordered in the ascending order of their lengths. During the preprocessing step, for each $i$ such that $P_{i}$ is $k$-periodic, we compute its $k$-period $Q_{i}$. We use notation $\ell_{i}=|P_{i}|$ and $q_{i}=|Q_{i}|$ (if defined). Importantly, we do not have to store $Q_{i}$ explicitly: we can simply memorize its endpoints in $P_{i}$, which takes $\mathcal{O}(\log m)$ extra space in total. We also store, for each of the $\mathcal{O}(k)$ rotations $D$ of $Q_{i}$ that can be a difference of a chain of $k$-edit occurrences of $P_{i}$ (Corollary 3.5), the encodings $\mathsf{qGR}_{32k}(D,D)$ and $\mathsf{qGR}_{30k}(P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])$, where $\Delta_{i}=\ell_{i}-\ell_{i-1}+k$. #### 7.2.2 Main Stage The main stage of the algorithm starts after we have preprocessed the pattern. During the main stage, we receive the text as a stream, one character at a time. 
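The construction of the prefix family $\Pi$ can be sketched with the periodicity tests treated as black boxes. Below, `k_period(x)` returns the $k$-period of `x` (or `None` if `x` is not $k$-periodic) and `is_kp_with(x, q)` tests $k$-periodicity with a given period; both are assumed oracles (e.g. implemented by brute force), and all names are ours:

```python
def build_prefix_family(p: str, k: int, k_period, is_kp_with):
    """Lengths of the prefixes in Pi: P itself, the power-of-two prefixes,
    and the two boundary prefixes added between each k-periodic prefix and
    its non-k-periodic successor."""
    m = len(p)
    lengths = sorted({m} | {1 << e for e in range(m.bit_length()) if 1 << e <= m})
    extra = set()
    for a, b in zip(lengths, lengths[1:]):
        q = k_period(p[:a])
        if q is not None and k_period(p[:b]) is None:
            # l = maximum length such that p[:l] is k-periodic with k-period q
            l = max(t for t in range(a, m + 1) if is_kp_with(p[:t], q))
            extra.update({l, l + 1})  # l + 1 <= m since p[:b] is not k-periodic
    return sorted(set(lengths) | extra)
```

With stub oracles that declare exactly the all-`a` prefixes 1-periodic (ignoring the $128k$ length constraint for the sake of a tiny example), the pattern `"aaaaaaaaaabcdefg"` yields the lengths $1,2,4,8,10,11,16$, with $10$ and $11$ being the added boundary prefixes.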
We exploit the following result: ###### Fact 7.2 (cf. [59]). Given a read-only string $X$ of length $m$ and a streaming string $Y$, there is a dynamic programming algorithm that correctly identifies all prefixes $Y^{\prime}$ of $Y$ within edit distance at most $k\leq m$ from $X$, as well as $\mathsf{ed}(Y^{\prime},X)$ itself. The algorithm takes $\mathcal{O}(km)$ time and $\mathcal{O}(k)$ space besides the space required to store $X$. ##### Chains of $k$-edit occurrences. During the main stage of the algorithm, we store the following information. Let $r$ be the newly arrived position of the text $T$. For each $2\leq i\leq z$, consider all $k$-edit occurrences of $P_{i-1}$ in $T[r-\ell_{i}-k+1\mathinner{.\,.\allowbreak}r]$. We call such occurrences _active_. We denote the set of right endpoints of the active occurrences by $\mathsf{aOCC}_{k}^{E}(P_{i-1},T)$. By Corollaries 3.3 and 3.5, $\mathsf{aOCC}_{k}^{E}(P_{i-1},T)$ forms $\mathcal{O}(k^{3})$ chains. For each chain $\mathcal{C}$, we store the following information: 1. 1. The leftmost position $lp$ and the size $|\mathcal{C}|$ of $\mathcal{C}$; 2. 2. An integer $\mathsf{ed}(\mathcal{C})$ equal to the smallest edit distance from $P_{i-1}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ for every endpoint $r\in\mathcal{C}$; 3. 3. If $|\mathcal{C}|\geq 2$, we store the shift $\Delta$ of the difference $D=Q_{i-1}[1+\Delta\mathinner{.\,.\allowbreak}q_{i-1}]Q_{i-1}[1\mathinner{.\,.\allowbreak}\Delta]$ of $\mathcal{C}$. If $p^{\ast}$ is the first position added to $\mathcal{C}$ (it can be different from $lp$ as we will update chains to contain only active occurrences), then at the position $(p^{\ast}+1)$ we start running the dynamic programming algorithm for $T[p^{\ast}+1\mathinner{.\,.\allowbreak}]$ and $P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}]$ (Fact 7.2). Furthermore, consider the moment when we detect the second position in $\mathcal{C}$ (if it exists) and hence the difference $D$ of the chain. 
Starting from this moment, for every newly added position $p\in\mathcal{C}$, at the position $(p+1)$ we start computing the greedy encoding $\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}p+\Delta_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])$. We continue running the algorithm until either the computation is over or a new position in the chain is detected. In the end, we compute the encoding for the rightmost position in the chain. ##### Detecting new $k$-edit occurrences of $P_{i}$. We now explain how to detect new $k$-edit occurrences of the prefixes $P_{i}$. Let $r$ be the latest arrived position of $T$. If $i=1$, then since $k\geq 1$, $r\in\mathsf{aOCC}_{k}^{E}(P_{1},T)$. Below we consider three possible cases for $i\geq 2$: $P_{i-1}$ is $k$-periodic, $P_{i}$ is not $k$-periodic; $P_{i-1}$ is not $k$-periodic; $P_{i-1}$ and $P_{i}$ are $k$-periodic. ###### Case 1: $P_{i-1}$ is $k$-periodic, $P_{i}$ is not $k$-periodic. By construction, in this case $\ell_{i-1}+1=\ell_{i}$. The position $r\in\mathsf{aOCC}_{k}^{E}(P_{i},T)$ iff one of the following conditions is satisfied: 1. 1. The smallest edit distance from $P_{i-1}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ is at most $k-1$ (this corresponds to the case when the last character of $P_{i}$ is deleted in an optimal alignment of a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ and $P_{i}$); 2. 2. The smallest edit distance from $P_{i-1}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r-1]$ is at most $k-1$ and $P_{i}[\ell_{i}]\neq T[r]$ (this corresponds to the case when the last character of $P_{i}$ is substituted for $T[r]$ in an optimal alignment of a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ and $P_{i}$); 3. 3. 
The smallest edit distance from $P_{i-1}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r-1]$ is at most $k$ and $P_{i}[\ell_{i}]=T[r]$ (this corresponds to the case when the last character of $P_{i}$ is matched with $T[r]$ in an optimal alignment of a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ and $P_{i}$); 4. 4. The smallest edit distance from $P_{i}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r-1]$ is at most $k-1$ (this corresponds to the case when $T[r]$ is deleted in an optimal alignment of a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ and $P_{i}$). We can decide which of the conditions is satisfied, and therefore whether $r\in\mathsf{aOCC}_{k}^{E}(P_{i},T)$, in $\mathcal{O}(k^{3})$ time using $\mathsf{aOCC}_{k}^{E}(P_{i-1},T)$ and $\mathsf{aOCC}_{k}^{E}(P_{i},T)$. Moreover, we can compute the smallest edit distance from $P_{i}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ if it is bounded by $k$. For the next two cases, we will use the following simple observation that follows from Fact 2.3: ###### Observation 7.3. Let $\mathsf{ed}_{i-1}(r^{\prime})$ be the smallest edit distance from $P_{i-1}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r^{\prime}]$, and define $d=\min_{\begin{subarray}{c}r^{\prime}\in\mathsf{aOCC}_{k}^{E}(P_{i-1},T)\\\ r^{\prime}\in[r+1-\Delta_{i},r+1-\Delta_{i}+2k]\end{subarray}}\\{\mathsf{ed}_{i-1}(r^{\prime})+\mathsf{ed}(P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}],T[r^{\prime}+1\mathinner{.\,.\allowbreak}r])\\}$ (2) The smallest edit distance from $P_{i}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ is equal to $d$ if $d\leq k$ and is larger than $k$ otherwise. It follows that to decide whether $r\in\mathsf{aOCC}_{k}^{E}(P_{i},T)$, it suffices to compute the value $\min\\{d,k+1\\}$, where $d$ is as defined above. ###### Case 2: $P_{i-1}$ is not $k$-periodic. In this case, $\mathsf{aOCC}_{k}^{E}(P_{i-1},T)$ is stored as $\mathcal{O}(k^{3})$ chains of size one. Therefore, we can find the positions $r^{\prime}$ from Eq. 
2 in $\mathcal{O}(k^{3})$ time. Moreover, for each position $r^{\prime}$, we run the dynamic programming algorithm for $T[r^{\prime}+1\mathinner{.\,.\allowbreak}]$ and $P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}]$, which outputs the edit distance between $T[r^{\prime}+1\mathinner{.\,.\allowbreak}r]$ and $P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}]$ if it is at most $k$. As we also know the smallest edit distance between $P_{i-1}$ and a suffix of $T[1\mathinner{.\,.\allowbreak}r^{\prime}]$, we can compute $d$ in $\mathcal{O}(k^{3})$ time. ###### Case 3: $P_{i-1}$ and $P_{i}$ are $k$-periodic. We can identify all positions $r^{\prime}$ from Eq. 2 in $\mathcal{O}(k^{3})$ time. (It suffices to check each of the $\mathcal{O}(k^{3})$ chains that we store for $P_{i-1}$.) We must now test each of these positions. Consider a position $r^{\prime}$ and let $\mathcal{C}$ be the chain containing it. It suffices to compute the edit distance between $P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}]$ and $T[r^{\prime}+1\mathinner{.\,.\allowbreak}r]$ as we already know the smallest edit distance from $P_{i-1}$ to a suffix of $T[1\mathinner{.\,.\allowbreak}r^{\prime}]$. If $|\mathcal{C}|=1$, the distance has been computed by the dynamic programming algorithm. Otherwise, we use quasi-greedy encodings. On a high level, our goal is to compute the edit distance between $\pi=P[\ell_{i-1}\mathinner{.\,.\allowbreak}\ell_{i}]$ and $\tau=T[r^{\prime}+1\mathinner{.\,.\allowbreak}r]$ via a string $\mu=D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}]$, where $D$ is the difference of $\mathcal{C}$ and $\Delta_{i}=\ell_{i}-\ell_{i-1}+k$. ###### Lemma 7.4. We have $\mathsf{ed}(\pi,\mu)\leq 26k$. ###### Proof. 
As $P_{i-1}$ and $P_{i}$ are $k$-periodic, by 7.1 we obtain that $P_{i}=P[1\mathinner{.\,.\allowbreak}\ell_{i}]$ is $k$-periodic with $k$-period $Q_{i}=Q_{i-1}$, that is, there is a prefix of $Q_{i}^{\infty}$ such that the edit distance between it and $P_{i}$ is at most $2k$. By Fact 2.3, there is a substring $Q_{i}^{\infty}[r\mathinner{.\,.\allowbreak}t]$ such that $|r-\ell_{i-1}|\leq 2k$ and $|t-\ell_{i}|\leq 2k$ and $\mathsf{ed}(Q_{i}^{\infty}[r\mathinner{.\,.\allowbreak}t],\pi)\leq 2k$. By the triangle inequality, we obtain that $\mathsf{ed}(Q_{i}^{\infty}[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}],\pi)\leq 6k$. Let $a=\ell_{i-1}-7k\pmod{q_{i}}$ and $b=\ell_{i-1}+7k\pmod{q_{i}}$. By Corollary 3.5, $D$ is a rotation of $Q_{i-1}=Q_{i}$ with shift $\Delta$, where $\Delta\in[a-3k,b+3k]$ if $a\leq b$ and $\Delta\in[0,b+3k]\cup[a-3k,q_{i})$ if $a>b$. It follows that $D^{\infty}=Q_{i}^{\infty}[s\mathinner{.\,.\allowbreak}]$, where $|s-\ell_{i-1}|\leq 10k$. As $\mu=D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}]=Q_{i}^{\infty}[s\mathinner{.\,.\allowbreak}s+\Delta_{i}-1]$, by the triangle inequality we obtain $\mathsf{ed}(\mu,Q_{i}^{\infty}[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}])\leq 20k$. Applying the triangle inequality one more time, we obtain the claim. ∎ Let $\mathcal{G}_{P}=\mathsf{qGR}_{30k}(\pi,\mu)$ and $\mathcal{G}_{T}=\mathsf{qGR}_{30k}(\mu,\tau)$. By Corollary 5.27, knowing $\mathcal{G}_{P}$ and $\mathcal{G}_{T}$ is sufficient to compute the edit distance between $\pi$ and $\tau$. Note that we do not know $\mathcal{G}_{T}$ yet; we must compute it using the available information. Let $p$ be the rightmost position in $\mathcal{C}$. 1. 1. Recall that at the position $(p+1)$ we launched an algorithm that is computing $\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}p+\Delta_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])$ with a delay of $k$ characters (Corollary 5.22). We have $p+\Delta_{i}\geq r^{\prime}+\Delta_{i}\geq r$. 
Therefore, upon reaching $r$, we can use the memory of the algorithm to compute $\mathcal{G}_{T,1}=\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}r],D^{\infty}[1\mathinner{.\,.\allowbreak}r-p])$ in $\tilde{\mathcal{O}}(k^{5})$ time and $\tilde{\mathcal{O}}(k^{2})$ space (Lemma 5.21). 2. 2. By the definition of chains, $T[r^{\prime}+1\mathinner{.\,.\allowbreak}p]=D^{j}$ for some integer $j$. By Lemma 5.21, we can use $\mathsf{qGR}_{32k}(D,D)$ computed during the preprocessing step to compute $\mathcal{G}_{T,2}=\mathsf{qGR}_{32k}(T[r^{\prime}+1\mathinner{.\,.\allowbreak}p],D^{j})$ in $\tilde{\mathcal{O}}(k^{5})$ time and $\tilde{\mathcal{O}}(k^{2})$ space. By applying Lemma 5.21 again, we can compute $\mathsf{qGR}_{32k}(T[r^{\prime}+1\mathinner{.\,.\allowbreak}r],D^{j}D^{\infty}[1\mathinner{.\,.\allowbreak}r-p])$ in $\tilde{\mathcal{O}}(k^{5})$ time and $\tilde{\mathcal{O}}(k^{2})$ space. 3. 3. Finally, we compute via Corollary 5.22 the encoding $\mathsf{qGR}_{30k}(\varepsilon,D^{\infty}[r-p+1\mathinner{.\,.\allowbreak}\Delta_{i}])$, where $\varepsilon$ is the empty string and $\Delta_{i}-(r-p)\leq 2k$. We then apply 5.20 to compute $\mathsf{qGR}_{30k+(\Delta_{i}-(r-p))}(T[r^{\prime}+1\mathinner{.\,.\allowbreak}r],D^{j}D^{\infty}[1\mathinner{.\,.\allowbreak}r-p])$ and further Lemma 5.21 to compute $\mathsf{qGR}_{30k}(T[r^{\prime}+1\mathinner{.\,.\allowbreak}r],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])=\mathcal{G}_{T}$. ##### Updating the chains. When we detect a new $k$-edit occurrence of $P_{i}$, we must decide if it should be added to some existing chain or if we must create a new chain for this occurrence. To this end, for each $1\leq i\leq z$, we consider the $\mathcal{O}(k)$ rotations of $Q_{i}$ that can be the difference of a chain of $k$-edit occurrences of $P_{i}$ in $T$ (Corollary 3.5). For each rotation $R$, we run a constant-space and linear-time deterministic pattern matching algorithm [54]. 
The algorithm processes the text $T$ as a stream and, if there is an occurrence $T[\ell\mathinner{.\,.\allowbreak}r]$ of the rotation, reports it while reading $T[r]$. The algorithm uses $\mathcal{O}(1)$ space and $\mathcal{O}(1)$ amortised time per character of $T$. Suppose that we detect a new right endpoint $r$ of a $k$-edit occurrence $T[\ell\mathinner{.\,.\allowbreak}r]$ of $P_{i}$. We must decide whether $r$ belongs to an existing chain of $k$-edit occurrences of $P_{i}$ or starts a new one. In order to do this, we first find the chain $\mathcal{C}$ that contains $r-q_{i}+1$, if it exists, by checking each chain in turn. We then check that the smallest edit distance from a suffix of $T[1\mathinner{.\,.\allowbreak}r]$ to $P_{i}$ equals $\mathsf{ed}(\mathcal{C})$ and that $T[r-q_{i}+1\mathinner{.\,.\allowbreak}r]$ is equal to the difference of the chain. (Recall that we run the exact pattern matching algorithm for each rotation of $Q_{i}$ that can be the difference of a chain, so that both checks can be performed in $\mathcal{O}(1)$ time.) If these conditions are not satisfied, we create a new chain that contains $r$ only. Otherwise, we add $r$ to $\mathcal{C}$ (i.e., increment the size of $\mathcal{C}$). To finalize the update of the chains, we must delete all $k$-edit occurrences that become inactive: for each $i$ and for each chain of $k$-edit occurrences of $P_{i}$, we “delete” the first $k$-edit occurrence if it starts before $r-\ell_{i}-k+1$. To “delete” a $k$-edit occurrence, we simply update the endpoints of the first occurrence in the chain and the total number of occurrences in the chain. #### 7.2.3 Analysis We summarize the results of this section: ###### Theorem 7.5. Assume a read-only pattern $P$ of length $m$ and a streaming text $T$ of length $n$. There is a deterministic algorithm that finds the set $\mathsf{OCC}_{k}^{E}(P,T)$ using $\tilde{\mathcal{O}}(k^{5})$ space and $\tilde{\mathcal{O}}(k^{6})$ amortised time per character of the text $T$. 
###### Proof. Let us first upper bound the space complexity of the algorithm. For each $i=1,\ldots,z=\mathcal{O}(\log m)$, we store the set $\mathsf{aOCC}_{k}^{E}(P_{i},T)$ as $\mathcal{O}(k^{3})$ chains. For each chain, we launch the dynamic programming algorithm (Fact 7.2), which takes $\tilde{\mathcal{O}}(k^{2})$ space, and the algorithm of Corollary 5.22, which also takes $\tilde{\mathcal{O}}(k^{2})$ space. The pattern matching algorithms for the rotations of $Q_{i}$ take $\tilde{\mathcal{O}}(k)$ space in total. Finally, testing if a position of the text is the rightmost position of a $k$-edit occurrence of $P_{i}$ requires $\tilde{\mathcal{O}}(k^{2})$ space. We now show the time bound. Updating the chains takes $\tilde{\mathcal{O}}(1)$ time. At any time, we run $\tilde{\mathcal{O}}(k^{3})$ instances of Corollary 5.22, each taking $\mathcal{O}(k^{3})$ amortised time per character. To test each position $r$, we spend $\tilde{\mathcal{O}}(k\cdot k^{3})$ time. ∎ ### 7.3 Streaming Algorithm We now modify the algorithm for the semi-streaming model to develop a fully streaming algorithm. W.l.o.g., assume $k\leq m$ and take $\delta=1/n^{c}$ for $c$ large enough. #### 7.3.1 Preprocessing We define the prefixes $P_{i}$ and their periods $Q_{i}$ in exactly the same way as in Section 7.2. Recall that $\ell_{i}=|P_{i}|$ and $q_{i}=|Q_{i}|$. For every $i>1$, we store the following information, where all sketches $\mathsf{sk}^{\mathsf{q}}$ (6.23) are parametrized by probability $\delta$, maximal length $\Delta_{i}$, the alphabet of $P$ and $T$, and a seed of $\mathcal{O}(\log^{2}n\log(1/\delta))$ random bits: 1. 1. $P[\ell_{i}]$ and the sketch $\mathsf{sk}^{\mathsf{q}}_{k}(P[\ell_{i-1}\mathinner{.\,.\allowbreak}\ell_{i}])$; 2. 2. 
For each rotation $D$ of $Q_{i}$ that can be a difference of a chain of $k$-edit occurrences of $P_{i}$, the encoding $\mathsf{qGR}_{30k}(P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])$, where $\Delta_{i}=\ell_{i}-\ell_{i-1}+k$; 3. 3. For each rotation $D$ of $Q_{i}$ that can be a difference of a chain of $k$-edit occurrences of $P_{i}$, the sketches $\mathsf{sk}^{\mathsf{q}}_{32k}(D)$ and $\mathsf{sk}^{\mathsf{q}}_{32k}(D[1\mathinner{.\,.\allowbreak}\Delta_{i}\pmod{q_{i}}])$. #### 7.3.2 Main Stage As in Section 7.2, for each $i$, we store $\mathsf{aOCC}_{k}^{E}(P_{i-1},T)$ in $\mathcal{O}(k^{3})$ chains. For each chain $\mathcal{C}$, we store the leftmost position $lp$ in it, its size $|\mathcal{C}|$, and the smallest edit distance, $\mathsf{ed}(\mathcal{C})$, from a suffix of $T[1\mathinner{.\,.\allowbreak}lp]$ to $P_{i-1}$. If $|\mathcal{C}|\geq 2$, we also store its difference (defined by the shift of the rotation of $Q_{i-1}$). If $p^{\ast}$ is the first position added to $\mathcal{C}$, then at the position $(p^{\ast}+1)$, we start running the streaming algorithm of 6.23(a) for computing the sketch $\mathsf{sk}^{\mathsf{q}}_{k}(T[p^{\ast}+1\mathinner{.\,.\allowbreak}p^{\ast}+\Delta_{i}])$. Furthermore, consider the moment when we detect the second position in $\mathcal{C}$ (if it exists) and hence the difference $D$ of the chain. Starting from this moment, for every newly added position $p\in\mathcal{C}$, at the position $(p+1)$ we start computing the quasi-greedy encoding $\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}p+\Delta_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])$ as follows: Assume that we have computed $\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}p+\ell\cdot q_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\ell\cdot q_{i}])$. Suppose first that $(\ell+1)\cdot q_{i}\leq\Delta_{i}$. 
While reading $T[p+\ell\cdot q_{i}+1\mathinner{.\,.\allowbreak}p+(\ell+1)\cdot q_{i}]$, we compute $\mathsf{sk}^{\mathsf{q}}_{32k}(T[p+\ell\cdot q_{i}+1\mathinner{.\,.\allowbreak}p+(\ell+1)\cdot q_{i}])$ again via the algorithm of 6.23(a). We then use this sketch and $\mathsf{sk}^{\mathsf{q}}_{32k+1}(D)$ to compute $\mathsf{qGR}_{32k}(T[p+\ell\cdot q_{i}+1\mathinner{.\,.\allowbreak}p+(\ell+1)\cdot q_{i}],D)$ (6.23(b)) and then $\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}p+(\ell+1)\cdot q_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}(\ell+1)\cdot q_{i}])$ (Lemma 5.21). If $(\ell+1)\cdot q_{i}>\Delta_{i}$, we use the sketches $\mathsf{sk}^{\mathsf{q}}_{32k}(T[p+\ell\cdot q_{i}+1\mathinner{.\,.\allowbreak}p+\Delta_{i}])$ and $\mathsf{sk}^{\mathsf{q}}_{32k}(D[1\mathinner{.\,.\allowbreak}\Delta_{i}\pmod{q_{i}}])$; the rest is analogous. We continue running the algorithm until either the computation is completed or a new $k$-edit occurrence in the chain has been detected. In other words, in the end we compute the encoding for the rightmost position in the chain. ##### Detecting new $k$-edit occurrences of $P_{i}$. We now explain how we modify the algorithm for detecting new $k$-edit occurrences of the prefixes $P_{i}$. The algorithm for Case 1 does not change. Instead of the dynamic programming algorithm in Case 2, we use the sketch $\mathsf{sk}^{\mathsf{q}}_{k}$, applying 6.23(b) and then Corollary 5.27 to compute the edit distance. It remains to explain how we modify the algorithm for Case 3. We exploit Eq. 2 again. We can find all positions $r^{\prime}$ in $\mathcal{O}(k^{3})$ time. To test a position $r^{\prime}$, it suffices to compute the edit distance between $P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}]$ and $T[r^{\prime}+1\mathinner{.\,.\allowbreak}r]$. Let $\mathcal{C}$ be the chain with difference $D$ that contains $r^{\prime}$. If $|\mathcal{C}|=1$, we use the edit distance sketch, and otherwise the quasi-greedy encodings. 
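Both the semi-streaming and the streaming variant repeatedly reduce occurrence detection to one primitive: deciding whether the edit distance of two short strings is at most $k$, and returning it if so. As a point of reference, here is a minimal offline sketch of that primitive (illustrative only; the function name is ours, and the text's actual algorithms realize this either via the dynamic program of Fact 7.2 or via sketches and quasi-greedy encodings):

```python
def edit_distance_at_most_k(p, t, k):
    """Return ed(p, t) if it is at most k, otherwise None.

    Plain O(|p|*|t|) dynamic program with all table entries capped
    at k+1; capping keeps every value that matters exact while
    bounding the work per cell. The algorithms in the text instead
    fill only the (2k+1)-wide diagonal band, which lowers the cost
    to O((|p|+|t|)*k) without changing the answer.
    """
    if abs(len(p) - len(t)) > k:
        return None  # the length difference alone already exceeds k
    cap = k + 1
    prev = [min(j, cap) for j in range(len(t) + 1)]  # row for empty prefix of p
    for i, pc in enumerate(p, start=1):
        cur = [min(i, cap)]  # aligning p[1..i] against the empty prefix of t
        for j, tc in enumerate(t, start=1):
            best = min(prev[j] + 1,               # delete the character of p
                       cur[j - 1] + 1,            # insert the character of t
                       prev[j - 1] + (pc != tc))  # match / substitute
            cur.append(min(best, cap))
        prev = cur
    return prev[-1] if prev[-1] <= k else None
```

Cells farther than $k$ from the main diagonal cannot lie on an alignment of cost at most $k$, which is why the banded restriction is safe.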
By Lemma 7.4, $\mathsf{ed}(P[\ell_{i-1}\mathinner{.\,.\allowbreak}\ell_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])\leq 26k$ and hence by Corollary 5.27, the edit distance between $P[\ell_{i-1}+1\mathinner{.\,.\allowbreak}\ell_{i}]$ and $T[r^{\prime}+1\mathinner{.\,.\allowbreak}r]$ can be computed from the encodings $\mathcal{G}_{P}=\mathsf{qGR}_{30k}(\pi,\mu)$ and $\mathcal{G}_{T}=\mathsf{qGR}_{30k}(\mu,\tau)$ for $\pi=P[\ell_{i-1}\mathinner{.\,.\allowbreak}\ell_{i}]$, $\mu=D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}]$, and $\tau=T[r^{\prime}+1\mathinner{.\,.\allowbreak}r]$. $\mathcal{G}_{P}$ was computed during the preprocessing step and we store it explicitly. Hence, we only need to explain how to compute $\mathcal{G}_{T}$. Let $p$ be the rightmost position in the chain $\mathcal{C}$. 1. 1. Recall that $T[r^{\prime}+1\mathinner{.\,.\allowbreak}p]=D^{j}$ for $j=(p-r^{\prime})/q_{i}$. We first compute $\mathsf{qGR}_{32k}(D,D)$ from $\mathsf{sk}^{\mathsf{q}}_{32k}(D,D)$ via 6.23(b), and then $\mathcal{G}_{T,1}=\mathsf{qGR}_{32k}(T[r^{\prime}+1\mathinner{.\,.\allowbreak}p],D^{j})=\mathsf{qGR}_{32k}(D^{j},D^{j})$ in $\tilde{\mathcal{O}}(k^{5})$ time and $\tilde{\mathcal{O}}(k^{2})$ space as in Section 7.2. 2. 2. At the position $(p+1)$ we launched the streaming algorithm computing $\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}p+\Delta_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\Delta_{i}])$ with a delay of $q_{i}$ characters. We have $p+\Delta_{i}\geq r^{\prime}+\Delta_{i}\geq r$. Therefore, at a position $p+\ell\cdot q_{i}$, where $\ell=\lfloor(r-p+1)/q_{i}\rfloor$, the algorithm computes $\mathcal{G}_{T,2}=\mathsf{qGR}_{32k}(T[p+1\mathinner{.\,.\allowbreak}p+\ell\cdot q_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}\ell\cdot q_{i}])$. The algorithm then continues to compute the sketch $\mathsf{sk}^{\mathsf{q}}_{32k}(T[p+\ell\cdot q_{i}+1\mathinner{.\,.\allowbreak}r])$. 
We use this sketch and the sketch $\mathsf{sk}^{\mathsf{q}}_{32k}(D[1\mathinner{.\,.\allowbreak}\Delta_{i}\pmod{q_{i}}])$ to compute $\mathcal{G}_{T,3}=\mathsf{qGR}_{30k}(T[p+\ell\cdot q_{i}+1\mathinner{.\,.\allowbreak}r],D[1\mathinner{.\,.\allowbreak}\Delta_{i}\pmod{q_{i}}])$ via 6.23(b) and 5.20. 3. 3. We finally concatenate $\mathcal{G}_{T,1}$ and $\mathcal{G}_{T,2}$ to obtain $\mathsf{qGR}_{32k}(T[r^{\prime}+1\mathinner{.\,.\allowbreak}r^{\prime}+(\ell+j)\cdot q_{i}],D^{\infty}[1\mathinner{.\,.\allowbreak}(\ell+j)\cdot q_{i}])$, and then concatenate the result with $\mathcal{G}_{T,3}$ to obtain $\mathcal{G}_{T}$ via Lemma 5.21 (note that the difference of lengths of strings in $\mathcal{G}_{T,3}$ is bounded by $2k$). ##### Updating the chains. When we detect a new $k$-edit occurrence of $P_{i}$, we must decide if it should be added to some existing chain or if we must create a new chain for this occurrence. We use the algorithm of Section 7.2, but replace the constant-space pattern matching algorithm with the streaming pattern matching algorithm [52] that, for a rotation of $Q_{i}$, takes $\tilde{\mathcal{O}}(1)$ space and $\tilde{\mathcal{O}}(1)$ time per character and retrieves all its occurrences correctly with probability at least $1-\delta$. #### 7.3.3 Analysis We summarize the results of this section: ###### Theorem 7.6. Given a pattern $P$ of length $m$ and a text $T$ of length $n$, there is a streaming algorithm that finds the set $\mathsf{OCC}_{k}^{E}(P,T)$ using $\tilde{\mathcal{O}}(k^{5})$ space and $\tilde{\mathcal{O}}(k^{8})$ amortised time per character of the text $T$. The algorithm computes $\mathsf{OCC}_{k}^{E}(P,T)$ correctly with high probability. ###### Proof. By Corollary 3.5, for a fixed $i$ only $\mathcal{O}(k)$ rotations of $Q_{i}$ can be a difference of a chain of occurrences of $P_{i}$. 
The sketch $\mathsf{sk}^{\mathsf{q}}_{30k+1}(\cdot)$ takes $\tilde{\mathcal{O}}(k^{2})$ space (6.23(a)) and $\mathsf{qGR}_{30k}(\cdot,\cdot)$ takes $\tilde{\mathcal{O}}(k^{2})$ space as well (Corollary 5.17). Therefore, the information computed during the preprocessing stage occupies $\tilde{\mathcal{O}}(k^{3})$ space. During the main stage, we store $\tilde{\mathcal{O}}(k^{3})$ chains. For each chain, we run the algorithm of 6.23(a), which takes $\tilde{\mathcal{O}}(k^{2})$ space. The algorithm that computes the quasi-greedy encoding (6.23(b)) takes $\tilde{\mathcal{O}}(k^{2})$ space as well. In total, the information we store for the chains occupies $\tilde{\mathcal{O}}(k^{5})$ space. When checking for new occurrences, we apply Lemma 5.21 and Corollary 5.27, which require an overhead of $\tilde{\mathcal{O}}(k^{2})$ space. Finally, the streaming pattern matching algorithms for the rotations of $Q_{i}$ that can be differences of occurrences of $P_{i}$ take $\tilde{\mathcal{O}}(k)$ space in total. The space bound follows. At any time, we run $\tilde{\mathcal{O}}(k^{3})$ instances of the algorithm of 6.23(a), each taking $\tilde{\mathcal{O}}(k)$ amortised time per character. In addition, for every character we run $\tilde{\mathcal{O}}(k^{3})$ instances of the algorithms of Lemma 5.21 and Corollary 5.27, taking $\tilde{\mathcal{O}}(k^{8})$ time in total. The pattern matching algorithms for the rotations of $Q_{i}$ take $\tilde{\mathcal{O}}(k)$ time per character. Note that the only probabilistic procedures in the algorithm are streaming pattern matching [52] and that of 6.23(b), which computes quasi-greedy encodings. These procedures are called $\mathrm{poly}(n,k)=\mathrm{poly}(n)$ times. By choosing the constant $c$ in $\delta=1/n^{c}$ large enough, we can guarantee that the algorithm is correct with high probability by the union bound. ∎ ## References * [1] Amir Abboud, Thomas Dueholm Hansen, Virginia Vassilevska Williams, and Ryan Williams. 
Simulating branching programs with edit distance and friends: Or: A polylog shaved is a lower bound made. In STOC 2016, pages 375–388, 2016. * [2] Karl R. Abrahamson. Generalized string matching. SIAM J. Comput., 16(6):1039–1051, 1987. doi:10.1137/0216067. * [3] Amihood Amir, Moshe Lewenstein, and Ely Porat. Faster algorithms for string matching with $k$ mismatches. Journal of Algorithms, 50(2):257–275, 2004. doi:10.1016/S0196-6774(03)00097-X. * [4] Ajesh Babu, Nutan Limaye, Jaikumar Radhakrishnan, and Girish Varma. Streaming algorithms for language recognition problems. Theor. Comput. Sci., 494:13–23, 2013. * [5] Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). In STOC 2015, pages 51–58, 2015. doi:10.1145/2746539.2746612. * [6] Djamal Belazzougui and Qin Zhang. Edit distance: Sketching, streaming, and document exchange. In FOCS 2016, pages 51–60, 2016. doi:10.1109/FOCS.2016.15. * [7] Dany Breslauer and Zvi Galil. Real-time streaming string-matching. ACM Trans. Algorithms, 10(4):22:1–22:12, 2014. doi:10.1145/2635814. * [8] Karl Bringmann and Marvin Künnemann. Quadratic conditional lower bounds for string problems and dynamic time warping. In FOCS 2015, pages 79–97, 2015. doi:10.1109/FOCS.2015.15. * [9] Diptarka Chakraborty, Debarati Das, and Michal Koucký. Approximate online pattern matching in sublinear time. In FSTTCS 2019, volume 150 of LIPIcs, pages 10:1–10:15, 2019. doi:10.4230/LIPIcs.FSTTCS.2019.10. * [10] Diptarka Chakraborty, Elazar Goldenberg, and Michal Koucký. Streaming algorithms for embedding and computing edit distance in the low distance regime. In STOC 2016, pages 712–725, 2016. doi:10.1145/2897518.2897577. * [11] Timothy M. Chan, Shay Golan, Tomasz Kociumaka, Tsvi Kopelowitz, and Ely Porat. Approximating text-to-pattern Hamming distances. In STOC 2020, pages 643–656, 2020. 
doi:10.1145/3357713.3384266. * [12] Panagiotis Charalampopoulos, Tomasz Kociumaka, and Philip Wellnitz. Faster approximate pattern matching: A unified approach. In FOCS 2020, pages 978–989, 2020. doi:10.1109/FOCS46700.2020.00095. * [13] Raphaël Clifford, Allyx Fontaine, Ely Porat, Benjamin Sach, and Tatiana Starikovskaya. Dictionary matching in a stream. In ESA 2015, volume 9294 of LNCS, pages 361–372, 2015. doi:10.1007/978-3-662-48350-3_31. * [14] Raphaël Clifford, Allyx Fontaine, Ely Porat, Benjamin Sach, and Tatiana Starikovskaya. The $k$-mismatch problem revisited. In SODA 2016, pages 2039–2052, 2016. doi:10.1137/1.9781611974331.ch142. * [15] Raphaël Clifford, Markus Jalsenius, and Benjamin Sach. Cell-probe bounds for online edit distance and other pattern matching problems. In SODA 2015, pages 552–561, 2015. doi:10.1137/1.9781611973730.37. * [16] Raphaël Clifford, Tomasz Kociumaka, and Ely Porat. The streaming $k$-mismatch problem, 2019. doi:10.1137/1.9781611975482.68. * [17] Richard Cole and Ramesh Hariharan. Approximate string matching: A simpler faster algorithm. SIAM J. Comput., 31(6):1761–1782, 2002. doi:10.1137/S0097539700370527. * [18] Funda Ergün, Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou. Streaming periodicity with mismatches. In APPROX 2017, pages 42:1–42:21, 2017. * [19] Funda Ergün, Elena Grigorescu, Erfan Sadeqi Azer, and Samson Zhou. Periodicity in data streams with wildcards. In CSR 2018, pages 90–105, 2018. doi:10.1007/978-3-319-90530-3_9. * [20] Funda Ergün, Hossein Jowhari, and Mert Sağlam. Periodicity in streams. In APPROX 2010, pages 545–559, 2010. doi:10.1007/978-3-642-15369-3_41. * [21] N. François, F. Magniez, M. de Rougemont, and O. Serre. Streaming property testing of visibly pushdown languages. In ESA 2016, volume 57 of LIPIcs, pages 43:1–43:17, 2016. * [22] Moses Ganardi, Danny Hucke, Daniel König, Markus Lohrey, and Konstantinos Mamouras. 
Automata theory on sliding windows. In STACS 2018, volume 96 of LIPIcs, pages 31:1–31:14, 2018. doi:10.4230/LIPIcs.STACS.2018.31. * [23] Moses Ganardi, Danny Hucke, and Markus Lohrey. Querying regular languages over sliding windows. In FSTTCS 2016, volume 65 of LIPIcs, pages 18:1–18:14, 2016. doi:10.4230/LIPIcs.FSTTCS.2016.18. * [24] Moses Ganardi, Danny Hucke, and Markus Lohrey. Randomized sliding window algorithms for regular languages. In ICALP 2018, volume 107 of LIPIcs, pages 127:1–127:13, 2018. doi:10.4230/LIPIcs.ICALP.2018.127. * [25] Moses Ganardi, Danny Hucke, and Markus Lohrey. Sliding window algorithms for regular languages. In LATA 2018, volume 10792, pages 26–35, 2018. doi:10.1007/978-3-319-77313-1_2. * [26] Moses Ganardi, Danny Hucke, Markus Lohrey, and Tatiana Starikovskaya. Sliding window property testing for regular languages. In ISAAC 2019, volume 149 of LIPIcs, pages 6:1–6:13, 2019. doi:10.4230/LIPIcs.ISAAC.2019.6. * [27] Moses Ganardi, Artur Jeż, and Markus Lohrey. Sliding windows over context-free languages. In MFCS 2018, volume 117 of LIPIcs, pages 15:1–15:15, 2018. doi:10.4230/LIPIcs.MFCS.2018.15. * [28] Paweł Gawrychowski, Oleg Merkurev, Arseny M. Shur, and Przemyslaw Uznański. Tight tradeoffs for real-time approximation of longest palindromes in streams. Algorithmica, 81(9):3630–3654, 2019. doi:10.1007/s00453-019-00591-8. * [29] Paweł Gawrychowski, Jakub Radoszewski, and Tatiana Starikovskaya. Quasi-periodicity in streams. In CPM 2019, volume 128 of LIPIcs, pages 22:1–22:14, 2019. doi:10.4230/LIPIcs.CPM.2019.22. * [30] Pawel Gawrychowski and Tatiana Starikovskaya. Streaming dictionary matching with mismatches. In CPM 2019, volume 128 of LIPIcs, pages 21:1–21:15, 2019. doi:10.4230/LIPIcs.CPM.2019.21. * [31] Paweł Gawrychowski and Przemysław Uznański. Towards unified approximate pattern matching for Hamming and $L_1$ distance. In ICALP 2018, volume 107 of LIPIcs, pages 62:1–62:13, 2018. doi:10.4230/LIPIcs.ICALP.2018.62. 
* [32] Shay Golan, Tomasz Kociumaka, Tsvi Kopelowitz, and Ely Porat. The streaming $k$-mismatch problem: Tradeoffs between space and total time. In CPM 2020, volume 161 of LIPIcs, pages 15:1–15:15, 2020. doi:10.4230/LIPIcs.CPM.2020.15. * [33] Shay Golan, Tsvi Kopelowitz, and Ely Porat. Towards optimal approximate streaming pattern matching by matching multiple patterns in multiple streams. In ICALP 2018, volume 107 of LIPIcs, pages 65:1–65:16, 2018. doi:10.4230/LIPIcs.ICALP.2018.65. * [34] Shay Golan and Ely Porat. Real-time streaming multi-pattern search for constant alphabet. In ESA 2017, volume 107 of LIPIcs, pages 41:1–41:15, 2017. doi:10.4230/LIPIcs.ESA.2017.41. * [35] Tomohiro I. Longest common extensions with recompression. In CPM 2017, volume 78 of LIPIcs, pages 18:1–18:15, 2017. * [36] Russell Impagliazzo and Ramamohan Paturi. On the complexity of $k$-SAT. Journal of Computer and System Sciences, 62(2):367–375, 2001. doi:10.1006/jcss.2000.1727. * [37] Ce Jin, Jelani Nelson, and Kewen Wu. An improved sketching bound for edit distance, 2021. doi:10.4230/LIPIcs.STACS.2021.45. * [38] Richard M. Karp and Michael O. Rabin. Efficient randomized pattern-matching algorithms. IBM J. of R&D, 31(2):249–260, 1987. * [39] Dominik Kempa and Tomasz Kociumaka. Resolution of the Burrows-Wheeler transform conjecture, 2020. doi:10.1109/FOCS46700.2020.00097. * [40] S. Kosaraju. Efficient string matching. 1987. * [41] Gad M. Landau, Eugene W. Myers, and Jeanette P. Schmidt. Incremental string comparison. SIAM J. Comput., 27(2):557–582, April 1998. * [42] Gad M. Landau and Uzi Vishkin. Efficient string matching with $k$ mismatches. Theoretical Computer Science, 43:239–249, 1986. doi:10.1016/0304-3975(86)90178-7. * [43] Gad M. Landau and Uzi Vishkin. Fast parallel and serial approximate string matching. J. Algorithms, 10(2):157–169, 1989. doi:10.1016/0196-6774(89)90010-2. * [44] David Levin and Yuval Peres. Markov Chains and Mixing Times. American Mathematical Society, 2017. 
doi:10.1090/mbk/107. * [45] Frédéric Magniez, Claire Mathieu, and Ashwin Nayak. Recognizing well-parenthesized expressions in the streaming model. SIAM J. Comput., 43(6):1880–1905, 2014. doi:10.1137/130926122. * [46] William J. Masek and Michael S. Paterson. A faster algorithm computing string edit distances. Journal of Computer and System Sciences, 20(1):18–31, 1980. doi:10.1016/0022-0000(80)90002-1. * [47] O. Merkurev and A. M. Shur. Searching runs in streams. In SPIRE 2019, volume 11811 of LNCS, pages 203–220, 2019. * [48] Oleg Merkurev and Arseny M. Shur. Searching long repeats in streams. In CPM 2019, volume 128 of LIPIcs, pages 31:1–31:14, 2019. * [49] Gonzalo Navarro. A guided tour to approximate string matching. ACM Comput. Surv., 33(1):31–88, March 2001. doi:10.1145/375360.375365. * [50] Noam Nisan. Pseudorandom generators for space-bounded computation. Comb., 12(4):449–461, 1992. doi:10.1007/BF01305237. * [51] Takaaki Nishimoto, Tomohiro I, Shunsuke Inenaga, Hideo Bannai, and Masayuki Takeda. Dynamic index and LZ factorization in compressed space. Discrete Applied Mathematics, 274:116–129, 2020. Stringology Algorithms. doi:10.1016/j.dam.2019.01.014. * [52] Benny Porat and Ely Porat. Exact and approximate pattern matching in the streaming model. In FOCS 2009, pages 315–323, 2009. doi:10.1109/FOCS.2009.11. * [53] Jakub Radoszewski and Tatiana Starikovskaya. Streaming $k$-mismatch with error correcting and applications. Information and Computation, 271:104513, 2020. doi:10.1016/j.ic.2019.104513. * [54] Wojciech Rytter. On maximal suffixes and constant-space linear-time versions of KMP algorithm. Theoretical Computer Science, 299(1):763–774, 2003. doi:10.1016/S0304-3975(02)00590-X. * [55] Süleyman Cenk Sahinalp and Uzi Vishkin. Efficient approximate and dynamic matching of patterns using a labeling paradigm (extended abstract). In FOCS 1996, pages 320–328, 1996. doi:10.1109/SFCS.1996.548491. 
* [56] Peter H. Sellers. The theory and computation of evolutionary distances: Pattern recognition. Journal of Algorithms, 1(4):359–373, 1980. doi:10.1016/0196-6774(80)90016-4.
# Communication games, sequential equilibrium, and mediators Ivan Geffner (Cornell University) and Joseph Y. Halpern (Cornell University). Supported in part by NSF grants IIS-1703846 and IIS-0911036, ARO grant W911NF-17-1-0592, MURI grant W911NF-19-1-0217 from the ARO, and a grant from Open Philanthropy. Declarations of interest: none. We consider _$k$-resilient sequential equilibria_, strategy profiles where no player in a coalition of at most $k$ players believes that it can increase its utility by deviating, regardless of its local state. We prove that all $k$-resilient sequential equilibria that can be implemented with a trusted mediator can also be implemented without the mediator in a synchronous system of $n$ players if $n>3k$. In asynchronous systems, where there is no global notion of time and messages may take arbitrarily long to get to their recipient, we prove that a $k$-resilient sequential equilibrium with a mediator can be implemented without the mediator if $n>4k$. These results match the lower bounds given by Abraham, Dolev, and Halpern [2008b] and Geffner and Halpern [2023] for implementing a Nash equilibrium without a mediator (which are easily seen to apply to implementing a sequential equilibrium) and improve the results of Gerardi [2004], who showed that, in the case that $k=1$, a sequential equilibrium can be implemented in synchronous systems if $n\geq 5$. ## 1 Introduction A standard technique for dealing with complicated problems in distributed computing, cryptography, and game theory is to show that the problem can be solved under the assumption that there is a _trusted mediator_, a third party to whom all agents in the system can communicate, and then to show that we can _implement_ the mediator; that is, the agents just communicating among themselves (using what economists call _cheap talk_) can simulate what the mediator does. 
In the distributed computing and cryptography literature, the focus has been on _secure multiparty computation_ Goldreich et al. [1987]; Yao [1982]. Given $n$ agents in which each agent $i$ has input $x_{i}$, the goal is to compute some function $f(x_{1},\ldots,x_{n})$ without revealing anything about the agents’ inputs besides what can be deduced from the output of the function itself. With a mediator, this is trivial: each agent $i$ just tells the mediator $x_{i}$, and the mediator tells the agents $f(x_{1},\ldots,x_{n})$. It is known that, as long as no more than $k$ agents are malicious, this mediator can be implemented if $n>3k$. Game theory adds incentives to the mix. While in the work done in cryptography, it is simply assumed that malicious agents will do what they can to bring down the system, in the game-theoretic setting, it is assumed that agents will deviate from a strategy only if it is in their best interests to do so. To make this precise, in the game theory literature, three games were considered: an _underlying_ normal-form game $\Gamma$, a game $\Gamma_{d}$ with a mediator, where, after communicating with the mediator, players make a move in $\Gamma$ and get the same payoffs that they do in $\Gamma$, and a _cheap-talk game_ $\Gamma_{\mathit{CT}}$, where players just talk to each other and then make a move in the underlying game. Forges [1990] and Barany [1992] showed that a Nash equilibrium (NE) in the mediator game $\Gamma_{d}$ could be _implemented_ by a NE in the cheap-talk game if $n\geq 4$ in the sense that the distribution over action profiles in the NE in $\Gamma_{d}$ is the same as that in the NE in $\Gamma_{\mathit{CT}}$. As is well known, Nash equilibrium considers only deviations on the _equilibrium path_ (situations that arise with positive probability if the NE is played). In extensive-form games such as $\Gamma_{\mathit{CT}}$, Nash equilibrium does not always describe what intuitively would be reasonable play. 
For example, if $A$ tells $B$ to give her \$1 or she will destroy the world, $B$ giving \$1 to $A$ and $A$ not destroying the world is a NE. However, this equilibrium is based on a non-credible threat. Would $A$ really destroy the world if she doesn’t get \$1? For this reason, game theorists have considered solution concepts such as _sequential equilibrium_ Kreps and Wilson [1982], where players cannot increase their utility by deviating even off the equilibrium path (see Definition 13 in Appendix A for a formal definition). Not surprisingly, the question of whether a sequential equilibrium with a mediator can be implemented has been considered before in the game theory literature. Ben-Porath [2003] claimed that a sequential equilibrium could be implemented in $\Gamma_{\mathit{CT}}$ if $n>3$ provided that there is a _punishment strategy_ (a way for players to punish players who are caught cheating—see Section 3 for a formal definition); unfortunately, there is a serious error in Ben-Porath’s proof (see Abraham et al. [2008b]; Ben-Porath [2021]). Gerardi [2004] showed that, in the case that $k=1$, a sequential equilibrium can be implemented if $n\geq 5$. Nash and sequential equilibria incentivize individual participants to follow the given strategy since otherwise they would get a lower expected payoff. However, if players can form coalitions and deviate in a coordinated way, then a coalition of players may have an incentive to deviate in a Nash or sequential equilibrium. For instance, consider a normal-form game for $n$ players in which players can play either $0$ or $1$. If at least $n-1$ players play $0$, all players get a utility of $0$; but if two or more players play $1$, those players get a utility of $1$ and everyone else gets a utility of $-1$. Clearly, playing $0$ is a Nash equilibrium of this game. However, if two players can deviate together, they can increase their payoff by playing $1$ instead of $0$.
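The coalition example above can be checked mechanically. The sketch below (our own illustrative code, not from the paper) encodes the game's payoffs and brute-forces whether any coalition of at most $k$ players has a pure joint deviation that makes every coalition member strictly better off:

```python
from itertools import combinations, product

def utility(profile, i):
    """Payoffs of the example game: if two or more players play 1,
    those players get 1 and everyone else gets -1; otherwise at
    least n-1 players play 0 and everyone gets 0."""
    if sum(profile) >= 2:
        return 1 if profile[i] == 1 else -1
    return 0

def resists_coalitions(u, actions, profile, k):
    """True if no coalition of at most k players has a pure joint
    deviation making every coalition member strictly better off."""
    n = len(profile)
    for size in range(1, k + 1):
        for K in combinations(range(n), size):
            for dev in product(actions, repeat=size):
                new = list(profile)
                for slot, i in enumerate(K):
                    new[i] = dev[slot]
                new = tuple(new)
                if all(u(new, i) > u(profile, i) for i in K):
                    return False
    return True

all_zero = (0, 0, 0, 0)
assert resists_coalitions(utility, [0, 1], all_zero, 1)      # a Nash equilibrium
assert not resists_coalitions(utility, [0, 1], all_zero, 2)  # two players profit jointly
```

A lone deviator who switches to $1$ still gets $0$, so all-zero survives unilateral deviations; but two players switching together each get $1$, so the profile is not resilient to coalitions of size two.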
Designing mechanisms that incentivize not only individuals, but also coalitions to act as intended has become increasingly important, especially in Internet applications (e.g., blockchain), where the same person can maintain several identities. Not surprisingly, coalitions have received significant attention recently in the literature (see, e.g., Eyal and Sirer [2014]; Heller [2010]). Abraham, Dolev, Gonen and Halpern [2006] (ADGH from now on) introduced the notion of _$k$ -resilient Nash equilibrium_, which is a strategy profile in which no coalition of up to $k$ players can increase their payoff by deviating. They also generalized Forges and Barany’s results by showing that any $k$-resilient Nash equilibrium in $\Gamma_{d}$ can be implemented in $\Gamma_{\mathit{CT}}$ if the number of players $n$ satisfies $n>3k$, and proved that the $n>3k$ bound is tight Abraham et al. [2008b]. More precisely, if $n\leq 3k$, then there exists a game $\Gamma_{d}$ for $n$ players and a mediator and a $k$-resilient Nash equilibrium $\vec{\sigma}$ in $\Gamma_{d}$ that can’t be implemented in $\Gamma_{\mathit{CT}}$. Two natural questions follow from these results. First, whether we can generalize Gerardi’s result and give necessary and sufficient conditions on $n$ and $k$ to implement a $k$-resilient sequential equilibrium (which is defined analogously to $k$-resilient Nash equilibrium), and second, whether the $n\geq 5$ upper bound given by Gerardi is actually tight. It is easy to check that the $n>3k$ lower bound given in Abraham et al. [2008b] for Nash equilibrium applies without change to $k$-resilient sequential equilibria. In particular, if $k=1$, it shows that at least $4$ players are required to implement a sequential equilibrium, but it was not known whether $n\geq 5$ is necessary. Here, we show that we can always implement a $k$-resilient sequential equilibrium in a game with a mediator and $n$ players if $n>3k$, matching the lower bound given in Abraham et al. 
[2008b] and improving Gerardi’s result for $k=1$. Following Gerardi [2004] and Gerardi and Myerson [2007], we also relate our results to two other solution concepts in normal-form games: _correlated equilibrium_ Aumann [1987] and _communication equilibrium_ Forges [1986]; Myerson [1986]. Both of these concepts can be understood in terms of games with mediators. A correlated equilibrium is an equilibrium in a game with a mediator where the players do not talk to the mediator; the mediator simply tells each agent which strategy to play. A _communication equilibrium_ in a Bayesian game is an equilibrium in a game with a mediator where the players can tell the mediator their types, and the mediator then tells the players what strategy to play (see Appendix A for formal definitions). Our results give a characterization of the set $SE_{k}(\Gamma_{\mathit{CT}})$ of outcomes that can be implemented with $k$-resilient sequential equilibria in the cheap-talk game $\Gamma_{\mathit{CT}}$; if $\Gamma$ is a normal-form game, then $SE_{k}(\Gamma_{\mathit{CT}})$ is the set of $k$-resilient correlated equilibria of $\Gamma$, while if $\Gamma$ is a Bayesian game, $SE_{k}(\Gamma_{\mathit{CT}})$ is the set of $k$-resilient communication equilibria of $\Gamma$. Sequential equilibrium involves describing not only what the players do at each point in an extensive-form game, but also what their beliefs are, even off the equilibrium path. It must be shown that players are always best responding to their beliefs. The key difficulty in our proof involves finding appropriate beliefs that are consistent with the equilibrium strategy. The proof given by Gerardi involves an extensive case analysis for both consistency and sequential rationality, which makes it hard to generalize to other settings. Our proof introduces an interesting new technique.
We show that all strategies in $\Gamma_{\mathit{CT}}$ (even those that are not an equilibrium) admit a consistent _$k$ -paranoid belief system_, where all coalitions $K$ of at most $k$ players always believe that, if other players deviated, they did so by sending inappropriate messages only to players in $K$. That is, all coalitions of size at most $k$ believe that the remaining players are being truthful among themselves. We show that, given a $k$-resilient Nash equilibrium $\vec{\sigma}$, we can extend it to a $k$-resilient sequential equilibrium by constructing a $k$-paranoid belief system. The idea is that, given these $k$-paranoid beliefs, players in a coalition $K$ will not believe that there is anything that they can do to prevent the remaining players from playing their part of the equilibrium. We then apply these results to the $k$-resilient Nash equilibrium of Abraham et al. [2006] to get our result. All of the results presented so far assume a _synchronous_ setting: communication proceeds in atomic rounds, and all messages sent during round $r$ are received by round $r+1$. In an _asynchronous_ setting, there are no rounds and messages sent by the players may take arbitrarily long to get to their recipients. Asynchrony is a standard assumption in the distributed computing and cryptography literature, precisely because many systems that practitioners care about, such as markets or the internet, are asynchronous in practice. Considering asynchronous systems can have significant implications for how players will play a game. For instance, in an online second-price auction, the seller can benefit from inserting fake transactions whose value is between that of the highest and second-highest bid immediately after a new highest bid is received, thus increasing the second-highest price at no cost Roughgarden [2020]. 
This type of attack can be carried out only in asynchronous or _partially synchronous_ systems (where there is an upper bound on how long messages take to arrive), since in a synchronous system, all bids are received at the same time. (In a synchronous system, the seller would have to guess what the players will bid in order to benefit from a fake transaction.) This example shows that asynchrony and partial synchrony allow players to influence the communication pattern in ways that they can’t in a synchronous setting. Other examples include the fact that, in a fully asynchronous system, a player that didn’t send a message is indistinguishable from a player whose message is delayed (which means that players can safely deviate by not sending messages for a while, which they cannot do in a synchronous system), and that, in a partially synchronous system, players may read other players’ messages before sending their own and adapt their strategy accordingly. As a result, the number of honest players required to implement a given functionality is greater in an asynchronous setting. For instance, the upper bound for asynchronous secure computation is $n>4k$ Ben-Or et al. [1993], as opposed to the $n>3k$ bound in synchronous systems proved by Ben-Or, Goldwasser, and Wigderson [1988]. The simplicity of our approach allows us to generalize our main result to the asynchronous setting with very little work. Given a $k$-resilient sequential equilibrium $\vec{\sigma}$ in an asynchronous game with $n$ players and a mediator, we extend the implementation given by Abraham et al. [2019b] of asynchronous $k$-resilient Nash equilibria in $\Gamma_{\mathit{CT}}$ for $n>4k$ by constructing a consistent $k$-paranoid belief system. Reasoning analogous to that used in the synchronous case shows that the resulting strategy and belief system form a $k$-resilient sequential equilibrium.
In the asynchronous setting, this result is also optimal, since it matches the $n>4k$ lower bound given by Geffner and Halpern [2023]. We believe that the techniques presented in this paper can be applied to design $k$-resilient sequential equilibria in many other scenarios as well. The rest of the paper is organized as follows. In Section 2, we provide the concepts and definitions required to understand the main results and their proofs (some of the most basic definitions can be found in Appendix A). In Section 3, we state the main result and give a short outline of its proof. The details are given in Section 4. In Section 5, we provide our characterization of $SE_{k}(\Gamma_{CT})$. In Section 6, we carefully define asynchronous systems and extend the preceding results to the asynchronous case; these results are further extended in Section 7 assuming that players can send arbitrarily precise real numbers in their messages (rather than just rational numbers, which is what we assume up to this point).

## 2 $k$-resilient equilibrium

In this section, we extend the different types of equilibria that appear in the literature (which are discussed in Appendix A) to account for deviations of coalitions of at most $k$ players. For future reference, in normal-form games, we use $n$ to denote the number of players, $P$ to denote the set of players, $A$ to denote the set of possible action profiles, and $U$ to denote the tuple of utility functions. In Bayesian games, we use $T$ to denote the possible type profiles and $q$ to denote the prior distribution over $T$. Moreover, in extensive-form games, we use $G$ to denote the game tree, $M$ to denote the function that outputs which player moves at each node, and $R$ to denote the tuple of equivalence relations between the nodes, where two nodes $v$ and $w$ are equivalent according to the $i$th relation if $i$ cannot distinguish between $v$ and $w$ (i.e., if $v$ and $w$ are in the same partition).
The full definitions can be found in Appendix A. Intuitively, traditional notions of equilibria guarantee that no individual player can increase her own payoff by deviating from the proposed strategy. For each of these equilibria we also consider its $k$-resilient variant, which states that no coalition of at most $k$ players can jointly increase their payoff by deviating, even if they do so in a coordinated way. We also consider the notion of _strong $k$-resilience_, which guarantees that no individual player inside the coalition can increase its own payoff by deviating, even at the expense of other coalition members (as opposed to the original notion, which requires only that some member of the coalition be no better off than by following the proposed strategy).

### 2.1 $k$-resilient Nash, sequential, Bayesian Nash, and communication equilibrium

We begin by extending the definition of Nash equilibrium (Definition 11) to $k$-resilient Nash equilibrium:

###### Definition 1.

In a normal-form game $\Gamma$, a (mixed) strategy profile $\vec{\sigma}=(\sigma_{1},\ldots,\sigma_{n})$ is a _$k$-resilient Nash equilibrium_ (resp., _strongly $k$-resilient Nash equilibrium_) if, for all coalitions $K$ such that $|K|\leq k$ and all strategies $\vec{\sigma}^{\prime}_{K}\in\Delta(A_{K})$, $u_{i}(\vec{\sigma}_{K},\vec{\sigma}_{-K})\geq u_{i}(\vec{\sigma}^{\prime}_{K},\vec{\sigma}_{-K})$ for some (resp., for all) $i\in K$. We can similarly extend the notion of correlated equilibrium (see Definition 12 of Appendix A) to $k$-resilient correlated equilibrium:

###### Definition 2.
Given a normal-form game $\Gamma=(n,A,U)$, a distribution $p\in\Delta(A)$ is a _$k$-resilient correlated equilibrium_ (resp., _strongly $k$-resilient correlated equilibrium_) if, for all subsets $K\subseteq P$ such that $|K|\leq k$, and all action profiles $\vec{a}^{\prime}_{K},\vec{a}^{\prime\prime}_{K}$ for players in $K$ such that $p(\vec{a}^{\prime}_{K})>0$, $\sum_{\vec{a}\in A:\vec{a}_{K}=\vec{a}^{\prime}_{K}}u_{i}(\vec{a}^{\prime}_{K},\vec{a}_{-K})p(\vec{a}\mid\vec{a}_{K}=\vec{a}^{\prime}_{K})\geq\sum_{\vec{a}\in A:\vec{a}_{K}=\vec{a}^{\prime}_{K}}u_{i}(\vec{a}^{\prime\prime}_{K},\vec{a}_{-K})p(\vec{a}\mid\vec{a}_{K}=\vec{a}^{\prime}_{K})$ for some (resp., for all) $i\in K$. Simply put, a distribution $p\in\Delta(A)$ is a $k$-resilient correlated equilibrium if no subset of at most $k$ players would be better off by deviating if they knew that the action profile used was sampled according to $p$ (and they knew their components of the profile), even if they could coordinate their deviations. The definition of $k$-resilient Nash equilibrium in extensive-form games is analogous. However, the extension of sequential equilibrium (Definition 13) to $k$-resilient sequential equilibrium requires additional definitions Abraham et al. [2019a].

###### Definition 3.

Let $\Gamma=(P,G,M,U,R)$ be an extensive-form game and $K\subseteq P$ be a subset of players. We define the equivalence relation $\sim_{K}$ by $v\sim_{K}v^{\prime}$ iff $v\sim_{i}v^{\prime}$ for all $i\in K$. (The $\sim_{i}$ relation defines player $i$’s information sets in an extensive-form game; see Appendix A for the formal definition.) The equivalence classes of $\sim_{K}$ are called _$K$-information sets_. Intuitively, two nodes are related according to $\sim_{K}$ if they are indistinguishable for all players $i\in K$. With this, we can extend the notion of belief systems to account for coalitions of players that share information between themselves.

###### Definition 4.
A _$k$ -belief system_ $b$ in a game $\Gamma$ is a function that maps each $K$-information set $I$ such that $K\subseteq P$ and $|K|\leq k$ to a distribution over the nodes in $I$. We can define what it means for a $k$-belief system to be consistent with $\vec{\sigma}$ just as in the case of standard belief systems. We say that a $k$-belief system $b$ is consistent with strategy $\vec{\sigma}$ if there exists a sequence $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ of completely- mixed strategies that converges to $\vec{\sigma}$ such that the beliefs induced by Bayes’ rule converge to $b$. With these definitions in hand, we can find the desired generalization of sequential equilibrium. ###### Definition 5. A pair $(\vec{\sigma},b)$ consisting of a strategy profile $\vec{\sigma}$ and a $k$-belief system $b$ consistent with $\vec{\sigma}$ is a _$k$ -resilient sequential equilibrium_ (resp., _strongly $k$-resilient sequential equilibrium_) if, for all $K\subseteq P$ with $|K|\leq k$, all $K$-information sets $I$, and all strategies $\tau_{K}$ for players in $K$ (w.r.t. $\sim_{K}$), $u_{i}(\vec{\sigma},I,b)\geq u_{i}((\tau_{K},\vec{\sigma}_{-K}),I,b)$ for some (resp., for all) $i\in K$. Note that the notions of $1$-resilient correlated equilibrium and $1$-resilient sequential equilibrium are equivalent to the standard notions of correlated equilibrium and sequential equilibrium, respectively. Note that, in games with a mediator $d$, since the mediator never deviates from its strategy, a belief system $b$ is consistent with strategy $\vec{\sigma}+\sigma_{d}$ (where $\vec{\sigma}+\sigma_{d}$ means that players play strategy profile $\vec{\sigma}$ and the mediator plays strategy $\sigma_{d}$) if there exists a sequence $\vec{\sigma}^{1}+\sigma_{d},\vec{\sigma}^{2}+\sigma_{d},\ldots$ of completely-mixed strategies that converges to $\vec{\sigma}+\sigma_{d}$ such that the beliefs induced by Bayes’ rule converge to $b$. 
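To illustrate consistency numerically (a toy example of our own, not from the paper): suppose an information set $\{v_M, v_R\}$ is reached only when player 1 trembles away from her equilibrium move, with trembles $\Pr[M]=1/m$ and $\Pr[R]=1/m^2$ along the converging sequence of completely-mixed strategies. Bayes' rule along the sequence then pins down a limit belief on the off-path information set:

```python
from fractions import Fraction

def bayes_belief(eps_M, eps_R):
    """Belief over the information set {v_M, v_R}, which is reached
    only via the trembles eps_M and eps_R."""
    total = eps_M + eps_R
    return eps_M / total, eps_R / total

for m in (10, 100, 1000):
    b_M, b_R = bayes_belief(Fraction(1, m), Fraction(1, m * m))
    assert b_M == Fraction(m, m + 1)  # tends to 1 as the trembles vanish
    assert b_R == Fraction(1, m + 1)  # tends to 0
```

Even though both trembles vanish in the limit, the induced beliefs converge to placing probability $1$ on $v_M$; a consistent belief system must assign exactly this limit.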
The definition of $k$-resilient sequential equilibrium in mediator games is identical to the one given in Definition 5, using this definition of consistent beliefs. Intuitively, this means that the players believe that the mediator never deviates from its protocol. In a Bayesian game with type space $T$ (see Appendix A.2 for the formal definitions), a _correlated strategy profile_ is a map $\mu:T\rightarrow\Delta(A)$. Note that all distributions over strategy profiles can be viewed as correlated strategy profiles, but the converse is not true. For instance, in a game with two players in which $A_{i}=T_{i}=\\{0,1\\}$ for all $i$, a correlated strategy profile may consist of both players playing action $t_{1}+t_{2}\bmod 2$; this cannot be represented by a strategy profile, since players must independently choose their action given their type. The expected utility of player $i$ in a coalition $K$ when playing (a possibly correlated) strategy profile $\vec{\mu}$, where $T_{K}$ is the set of possible type profiles of the players in $K$, is $u_{i}^{K}(\vec{\mu})=\sum_{\vec{t}_{K}\in T_{K}}q(\vec{t}_{K})\sum_{\vec{t}\in T}q(\vec{t}\mid\vec{t}_{K})u_{i}(\vec{\mu}(\vec{t})),$ where $u_{i}(\vec{\mu}(\vec{t}))$ denotes the expected utility of player $i$ when an action profile is chosen according to $\mu(\vec{t})$. Intuitively, we are assuming that players in $K$ can share their types, which is why we condition on $\vec{t}_{K}$. With this, we can extend Bayesian Nash equilibrium (see Definition 14 in Appendix A) to $k$-resilient Bayesian Nash equilibrium: ###### Definition 6. 
In a Bayesian game $\Gamma=(P,T,q,A,U)$, a strategy profile $\vec{\mu}:=(\mu_{1},\ldots,\mu_{n})$ is a _$k$-resilient Bayesian Nash equilibrium_ (resp., _strongly $k$-resilient Bayesian Nash equilibrium_) if, for all coalitions $K$ with $|K|\leq k$ and all maps $\mu^{\prime}_{K}:T_{K}\rightarrow\Delta(A_{K})$, $u_{i}^{K}(\vec{\mu})\geq u_{i}^{K}(\vec{\mu}_{-K},\mu^{\prime}_{K})$ for some (resp., for all) $i\in K$. We can similarly generalize communication equilibrium (see Definition 15 of Appendix A) to $k$-resilient communication equilibrium:

###### Definition 7.

A correlated strategy profile $\mu:T\rightarrow\Delta(A)$ is a _$k$-resilient communication equilibrium_ (resp., _strongly $k$-resilient communication equilibrium_) of a Bayesian game $\Gamma=(P,T,q,A,U)$ if, for all $K\subseteq P$ such that $|K|\leq k$, all $\vec{t}_{K}\in T_{K}$, all $\psi:T_{K}\rightarrow T_{K}$, and all $\varphi:A_{K}\rightarrow A_{K}$, we have that $\sum_{\vec{t}_{-K}\in T_{-K}}\sum_{\vec{a}\in A}q(\vec{t}_{-K},\vec{t}_{K})\mu(\vec{a}\mid\vec{t}_{-K},\vec{t}_{K})u_{i}(\vec{t}_{-K},\vec{t}_{K},\vec{a})\geq$ $\sum_{\vec{t}_{-K}\in T_{-K}}\sum_{\vec{a}\in A}q(\vec{t}_{-K},\vec{t}_{K})\mu(\vec{a}\mid\vec{t}_{-K},\psi(\vec{t}_{K}))u_{i}(\vec{t}_{-K},\vec{t}_{K},\vec{a}_{-K},\varphi(\vec{a}_{K}))$ for some (resp., for all) $i\in K$. As in the case of the standard communication equilibrium, we can identify a correlated strategy profile $\mu$ with a canonical game with a mediator in which each player $i$ must send its type $t_{i}$ to the mediator, then the mediator samples an action profile $\vec{a}$ from $\mu(\vec{t})$ and sends $a_{i}$ to each player $i$, and then each player $i$ plays whatever it received from the mediator.
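The canonical mediator game can be sketched in a few lines (our own illustrative code; `mu` is encoded as a dict from type profiles to dicts of action-profile probabilities, and `psi`/`phi` correspond to the deviation maps of Definition 7):

```python
import random

def canonical_game(mu, true_types, psi=None, phi=None, rng=random):
    """Canonical mediator game for a correlated strategy profile mu:
    players report their types (possibly misreporting via psi), the
    mediator samples an action profile from mu(reported types), and
    players play the suggested actions (possibly deviating via phi)."""
    reported = psi(true_types) if psi else true_types
    profiles, probs = zip(*mu[reported].items())
    suggested = rng.choices(profiles, weights=probs)[0]
    return phi(suggested) if phi else suggested

# Toy mu for two players with types in {0,1}: with probability 1,
# both players are told to play the product of the reported types.
mu = {(t1, t2): {(t1 * t2, t1 * t2): 1} for t1 in (0, 1) for t2 in (0, 1)}
assert canonical_game(mu, (1, 1)) == (1, 1)
assert canonical_game(mu, (1, 0)) == (0, 0)
# A lie to the mediator changes the suggested profile:
assert canonical_game(mu, (1, 1), psi=lambda t: (0, t[1])) == (0, 0)
```

Here honest play corresponds to `psi` and `phi` being the identity; $\mu$ is a $k$-resilient communication equilibrium when no coalition's choice of `psi` and `phi` improves the payoff of all its members.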
In Definition 7, the functions $\psi$ and $\varphi$ represent possible deviations for players in $K$ in this canonical game; $\psi$ describes how they might lie to the mediator about their type (by sending type profile $\psi(\vec{t}_{K})$ instead of their actual type profile $\vec{t}_{K}$), while $\varphi$ represents how they might deviate in what they do (playing strategy profile $\varphi(\vec{a}_{K})$ instead of the strategy profile $\vec{a}_{K}$ suggested by the mediator). Definition 7 says that $\mu$ is a $k$-resilient (resp., strongly $k$-resilient) communication equilibrium if none of these deviations is profitable for all players (resp., for some player) in $K$.

### 2.2 Cheap-talk games

Given a normal-form game $\Gamma=(n,A,U)$ or a Bayesian game $\Gamma=(P,T,q,A,U)$, consider an extension $\Gamma_{\mathit{CT}}$ in which players can communicate among themselves before playing an action in $\Gamma$. We call this extension a _cheap-talk game_. More precisely, in a cheap-talk game $\Gamma_{\mathit{CT}}$, players are able to exchange messages and play an action in $\Gamma$ (although they may play an action in $\Gamma$ at most once). The payoff of each player is defined by the utilities $U$ of $\Gamma$, given the action profile played in $\Gamma$. In this paper we always assume that messages are _authenticated_, so that each recipient knows who sent the message, and messages are never corrupted or modified. We can view cheap-talk games as possibly infinite extensive-form games in which the nodes are the _global histories_ $\vec{h}=(h_{1},\ldots,h_{n})$ of the players, and the actions associated with an edge involve either performing an internal computation (e.g., tossing a random coin or updating the local state with the messages received), sending messages, or playing an action in the underlying game.
A global history $\vec{h}$ is just a tuple of local histories, where the local history $h_{i}$ of player $i$ contains all internal computations, all messages sent by $i$ and received by $i$, along with their time stamps (in the synchronous case) or the number of times that $i$ was scheduled since the beginning of the game (in the asynchronous case). Two global histories $\vec{h}=(h_{1},\ldots,h_{n})$ and $\vec{h}^{\prime}=(h_{1}^{\prime},\ldots,h_{n}^{\prime})$ are indistinguishable by player $i$ if $h_{i}=h^{\prime}_{i}$. Analogously, $\vec{h}$ and $\vec{h}^{\prime}$ are indistinguishable by a coalition $K$ of players if $\vec{h}_{K}=\vec{h}^{\prime}_{K}$. A strategy profile $\vec{\sigma}$ in a cheap-talk game $\Gamma_{\mathit{CT}}$ _induces_ a strategy profile $\vec{\tau}$ in $\Gamma$ if the outcome that results from playing $\vec{\sigma}$ in $\Gamma_{\mathit{CT}}$ is identical to the one that results from playing $\vec{\tau}$ in $\Gamma$. If $\Gamma$ is a normal-form game, the outcome that results from playing $\vec{\sigma}$ is the distribution over action profiles in $\Gamma$ that results when $\vec{\sigma}$ is played in $\Gamma_{\mathit{CT}}$. If $\Gamma$ is a Bayesian game, the outcome is a map from type profiles to distributions over action profiles (i.e., a correlated strategy profile). A strategy profile $\vec{\sigma}$ _implements_ a strategy profile $\vec{\sigma}^{\prime}$ if $\vec{\sigma}$ and $\vec{\sigma}^{\prime}$ induce the same strategy profile in $\Gamma$. Unless specified otherwise, we assume that all communication in cheap-talk games is _synchronous_, which means that it proceeds in rounds and that all messages sent in a given round by each player are guaranteed to be received by their recipient at the beginning of the next round. The analysis of _asynchronous_ communication is done in Section 6. These definitions are illustrated by the following example.

###### Example 1.

Consider a Bayesian game $\Gamma$ with two players.
Each player $i\in\\{1,2\\}$ has a type $t_{i}\in\\{0,1\\}$ and can play actions in $\\{0,1\\}$. If the action $a_{i}$ played by player $i$ satisfies $a_{i}=t_{1}t_{2}$, then that player gets a utility of $1$; otherwise, she gets $0$. Consider a strategy profile $\vec{\sigma}$ for the cheap-talk extension $\Gamma_{\mathit{CT}}$ of $\Gamma$ in which each player sends its type to the other player in the first round, then the players each locally compute $t_{1}t_{2}$ and play it in the underlying game. It is easy to check that $\vec{\sigma}$ is a Bayesian Nash equilibrium (both players get the maximum possible payoff), and that it implements the correlated strategy profile that maps a given type profile $(t_{1},t_{2})$ to the distribution that assigns probability $1$ to $(t_{1}t_{2},t_{1}t_{2})$ and $0$ to everything else.

## 3 The main result

Given a normal-form or Bayesian game $\Gamma$, let $\Gamma_{\mathit{CT}}$ be its cheap-talk extension and $\Gamma_{d}$ be its cheap-talk extension where players can also communicate with a trusted mediator $d$. Let $SE_{k}(\Gamma_{\mathit{CT}})$ (resp., $SE_{k}(\Gamma_{d})$) denote the set of possible strategies in $\Gamma$ induced by $k$-resilient sequential equilibria in $\Gamma_{\mathit{CT}}$ (resp., $\Gamma_{d}$). Note that if $\Gamma$ is a normal-form game, $SE_{k}(\Gamma_{\mathit{CT}})$ is a set of distributions over action profiles, while if $\Gamma$ is a Bayesian game, $SE_{k}(\Gamma_{\mathit{CT}})$ is a set of correlated strategy profiles. We define $SE_{k}^{S}$ analogously, but for strongly $k$-resilient sequential equilibria instead. In the rest of this paper, we consider only equilibria in which each action profile is sampled with rational probability. The reason for this restriction is that it is a standard assumption in distributed systems that messages have finite length and that players cannot perform arbitrarily large operations.
This means that agents cannot encode real numbers into messages and they cannot perform operations on real numbers, in general. If we relax these constraints and allow players to send messages and operate with arbitrary real numbers, our results can be extended to all equilibria inside the convex hull of rational equilibria, using techniques due to Gerardi [2004]. These techniques are described in Section 7. Our main result shows that we can implement a (strongly) $k$-resilient sequential equilibrium with a mediator and $n$ players using cheap talk if $n>3k$.

###### Theorem 1.

If $\Gamma=(P,T,q,A,U)$ is a Bayesian game for $n$ players and $n>3k$, then $SE_{k}(\Gamma_{\mathit{CT}})=SE_{k}(\Gamma_{d})$ and $SE_{k}^{S}(\Gamma_{\mathit{CT}})=SE_{k}^{S}(\Gamma_{d})$. As shown in Section 5, the sets $SE_{k}(\Gamma_{d})$ and $SE_{k}^{S}(\Gamma_{d})$ have a relatively simple characterization. For instance, it can be shown that if $\Gamma$ is a normal-form game, then $SE_{k}(\Gamma_{d})$ is the set of $k$-resilient correlated equilibria of $\Gamma$; and if $\Gamma$ is a Bayesian game, then $SE_{k}(\Gamma_{d})$ is the set of $k$-resilient communication equilibria of $\Gamma$. (Gerardi [2004] proved these results for $k=1$.)

### 3.1 Outline of the proof

The proof of Theorem 1 is divided into two parts. We first describe a relatively simple strategy profile $\vec{\sigma}$ in $\Gamma_{\mathit{CT}}$ that implements the desired strategy profile in $\Gamma_{d}$ and tolerates deviations of up to $k$ players. Moreover, with $\vec{\sigma}$, no subset of at most $k$ players can sabotage the joint computation or learn anything other than what they were originally supposed to learn. These security properties of $\vec{\sigma}$ guarantee that it is a $k$-resilient Nash equilibrium. The next step in the proof is extending $\vec{\sigma}$ to a $k$-resilient sequential equilibrium $\vec{\sigma}^{seq}$.
The key idea for doing this is showing that there exists a belief system $b$ such that all coalitions $K$ of at most $k$ players believe that all players that deviated from the protocol did so by sending inappropriate messages only to players in $K$; that is, each coalition $K$ of size at most $k$ believes that if $i,j\not\in K$, then $i$ and $j$ always send each other the messages that they are supposed to send according to $\vec{\sigma}$. If players have belief system $b$, then all coalitions $K$ with $|K|\leq k$ believe, regardless of their local history, that the remaining players will always compute their part of the sequential equilibrium correctly, since $\vec{\sigma}$ tolerates deviations by up to $k$ players and all the players not in $K$ get the messages that they are supposed to get according to $\vec{\sigma}$ from all players except possibly those in $K$ (and thus do not observe deviations by more than $k$ players). Moreover, in the construction of $\vec{\sigma}$, if fewer than $k$ players deviate, all players eventually receive enough information to compute their own action, even if they don’t take part in the computation. These two properties are enough to guarantee that players in $K$ cannot gain by deviating during the communication phase, since (they believe that) nothing they can do will affect what the remaining players play in the underlying game, and they can’t learn any additional information. Constructing a strategy profile that implements a given sequential equilibrium with a mediator is in general nontrivial, since players must not only jointly compute the action that the mediator sends to each player, but must also do so in such a way that each player $i$ does not learn anything from the communication phase other than its own action $a_{i}$. For if $i$ were able to get such information, $i$ might be tempted to deviate (since it might have more information about what the utility of deviating would be).
There are well-known techniques in distributed computing for constructing protocols with the appropriate security properties. The standard solution (which we also use) involves distributing the information about the necessary computations between the players in such a way that each player knows only a part or _share_ of that information. Players can thus jointly compute the mediator’s local history and simulate everything that the mediator would do, without leaking any information. More precisely, if a player $i$ would send its type $t_{i}$ to the mediator in the mediator game, in the cheap-talk game it instead shares $t_{i}$ among all players in such a way that no player (or coalition of players) can learn anything about it. When all the types have been distributed, players can then compute the actions that the mediator would have sent to each player in the game with the mediator. However, each player $i$ does not directly compute its own action $a_{i}$, but rather its share of every action $a_{j}$. Then each player $i$ sends the share of each action $a_{j}$ to player $j$. This guarantees that only $j$ learns $a_{j}$. There are two standard primitives that we use in this construction, _verifiable secret sharing_ (VSS) and _circuit computation_ (CC). Roughly speaking, VSS takes as input a value $v$, and distributes to each player $i$ a share $s^{i}_{v}$ of $v$. If we are interested in $k$-resilience, we can arrange for these shares to have the property that knowing a subset of at most $k$ shares gives no information about $v$, while knowing strictly more than $k$ suffices to reconstruct $v$. CC is used to distribute the shares of the output of a function for which each of the players has shares of all of its inputs. 
More precisely, given a function $f:D^{t}\rightarrow D$ and $t$ values $v_{1},\ldots,v_{t}$ such that each player $i$ knows only its own share $s_{v_{j}}^{i}$ of $v_{j}$, for $j=1,\ldots,t$, the CC procedure takes as inputs the shares $s_{v_{1}}^{i},\ldots,s_{v_{t}}^{i}$ of each player $i$, and allows player $i$ to compute its share $s^{i}$ of $f(v_{1},\ldots,v_{t})$ in such a way that no player learns any information about $v_{1},\ldots,v_{t}$. Therefore, VSS allows players to share private values in a secret way, and CC allows players to generate (shares of) new values without leaking any information about any values previously shared. We define VSS and CC formally in Appendix B, and show that, if $n>3k$, then it is possible to implement VSS and CC in a $k$-resilient way, so that they both tolerate deviations by up to $k$ players. In the next section, we prove Theorem 1 assuming that we can implement VSS and CC.

## 4 Proof of Theorem 1

In this section, we prove Theorem 1 for the case of $k$-resilient sequential equilibrium. The case of strongly $k$-resilient sequential equilibrium is similar.

### 4.1 Constructing a $k$-resilient Nash equilibrium

By definition, $SE_{k}(\Gamma_{\mathit{CT}})\subseteq SE_{k}(\Gamma_{d})$. It thus remains to show that a $k$-resilient sequential equilibrium with a mediator can be implemented by a $k$-resilient sequential equilibrium in the cheap-talk game. Let $Com_{k}(\Gamma)$ be the set of $k$-resilient communication equilibria of $\Gamma$. As we noted in Section 3, $SE_{k}(\Gamma_{d})=Com_{k}(\Gamma)$, and thus $SE_{k}(\Gamma_{\mathit{CT}})\supseteq SE_{k}(\Gamma_{d})$ reduces to showing that all $k$-resilient communication equilibria $\mu$ of $\Gamma$ can be implemented with cheap talk without a mediator. We begin by constructing a strategy profile $\vec{\sigma}$ in $\Gamma_{\mathit{CT}}$ that is a $k$-resilient Nash equilibrium and induces the same correlated strategy profile as $\mu$.
Given $\mu$, consider the canonical protocol $\vec{\sigma}_{can}$ for $n$ players and a mediator in which each player $i$ sends its type $t_{i}$ to the mediator, and the mediator then computes the type profile $\vec{t}$ of the players using the messages received, replacing the types of players that didn’t send theirs by a default type $\bot$. (Since we are working in synchronous systems for this part of the proof, the mediator can tell if a player did not send his type.) The mediator then samples $\vec{a}\in A$ according to the distribution $\mu(\vec{t})$ and sends $a_{i}$ to each player $i$. If $i$ sent its true type at the beginning of the game and receives an action $a_{i}$ from the mediator, it plays $a_{i}$. Otherwise, it plays the best response assuming that the remaining players are playing their part of $\mu(\vec{t})$ (note that the best response is well-defined since both $\mu$ and the distribution $q$ of type profiles are common knowledge). It is straightforward to check that $\vec{\sigma}_{can}$ induces the same correlated strategy profile as $\mu$, and that $\vec{\sigma}_{can}$ is a $k$-resilient sequential equilibrium. The details are left to the reader. The idea for constructing $\vec{\sigma}$ is that instead of having a mediator that receives the types of the players and then computes $\mu$, the players perform these tasks by themselves using VSS and CC to help them simulate the mediator: each player $i$, instead of sending its type $t_{i}$ to the mediator, shares it among all the players using a $k$-resilient VSS implementation. After the number of rounds needed to perform VSS, players compute $(a_{1},\ldots,a_{n}):=\mu(t_{1},\ldots,t_{n})$ with CC, using as input the shares of each of the types $t_{i}$. Each player $i$ then sends its share of $a_{j}$ to each player $j$. Thus, each player $i$ can compute $a_{i}$ without learning anything about the other players’ actions.
Note that, since $\vec{a}$ is sampled from $\mu(\vec{t})$, players are required to randomize in a coordinated way. The problem of jointly sampling an element from a distribution with rational probabilities over a finite set can be reduced to the problem of sampling an integer from $[N]:=\\{1,\ldots,N\\}$ for some $N$, as the following proposition shows. ###### Proposition 1. Given a correlated strategy profile $\mu$ with rational parameters, there exists an integer $N$ and a function $\mu^{*}:T\times[N]\rightarrow A$ such that, if $X_{\vec{t}}$ is the distribution over action profiles obtained from playing $\mu^{*}(\vec{t},r)$ when $r$ is uniformly sampled from $[N]$, then $X_{\vec{t}}=\mu(\vec{t})$ for all $\vec{t}\in T$. ###### Proof. Fix an integer $N$ such that, for all $\vec{t}$ and all action profiles $\vec{a}$, the probability $\mu(\vec{t})(\vec{a})$ is an integer multiple of $1/N$. For each $\vec{t}$, partition $[N]$ into subsets $C_{1}^{\vec{t}},C_{2}^{\vec{t}},\ldots,C_{m}^{\vec{t}}$ such that $|C_{i}^{\vec{t}}|$ is proportional to $\mu(\vec{t})(\vec{a}^{i})$ for some indexing $\vec{a}^{1},\ldots,\vec{a}^{m}$ of the action profiles. We take $\mu^{*}(\vec{t},r)$ to be $\vec{a}^{i}$ if $r\in C_{i}^{\vec{t}}$ for some $i$. It is straightforward to check that $\mu^{*}$ satisfies the desired properties. ∎ Players can thus compute the desired action profile using a deterministic function $\mu^{*}$, given that they are able to jointly sample an integer $r\in[N]$ uniformly at random. We next present a formal construction of the protocol $\vec{\sigma}$ described above. This protocol is an adaptation of the multiparty computation protocol of Ben-Or, Goldwasser, and Wigderson [1988] to a setting in which the output of the function must be sampled from a common distribution. 1. Phase 1: Each player $i$ generates a random number $r_{i}$ between $1$ and $N$ uniformly at random.
Then $i$ shares $r_{i}$ and its type $t_{i}$ using a $k$-resilient VSS procedure. If a player $i$ does not share $t_{i}$ or $r_{i}$, then $t_{i}$ and $r_{i}$ are taken to be a default type $\bot\in T_{i}$ and $0$, respectively. 2. Phase 2: For each player $j$, let $r_{j}^{i}$ be $i$’s share of $r_{j}$ and let $t_{j}^{i}$ be $i$’s share of $t_{j}$. Each player $i$ initiates a $k$-resilient CC of the functions $\mu^{*}_{1},\ldots,\mu^{*}_{n}$, where $\mu^{*}_{j}(\vec{t},\vec{r})$ is the $j$th component of $\mu^{*}(\vec{t},r)$ and $r:=r_{1}+\ldots+r_{n}\bmod N$. Player $i$’s inputs for each of these CC instances are the shares $r_{1}^{i},\ldots,r_{n}^{i}$ and $t_{1}^{i},\ldots,t_{n}^{i}$ computed in Phase 1. 3. Phase 3: For each player $i$, let $o_{j}^{i}$ be $i$’s output of the CC instance of $\mu^{*}_{j}$ (this output is $i$’s share of $\mu^{*}_{j}(\vec{t},\vec{r})$). For each player $j$, player $i$ sends $o_{j}^{i}$ to $j$. 4. Phase 4: If $i$ shared its true type in Phase 1 and $i$ receives at least $2k+1$ shares $o_{i}^{j}$ from different players such that every subset of those shares of size $k+1$ defines the same secret $a_{i}$, $i$ plays $a_{i}$ and terminates. Otherwise, $i$ plays a best response given its local history, assuming that all the remaining players shared their true type and play their part of the communication equilibrium. The optimal response depends on the value $t_{i}$ that $i$ shared in Phase 1 (note that $t_{i}$ is $\bot$ if $i$ didn’t successfully share a type), and on which shares of $o_{i}$ $i$ received from other players. Note that these values give a posterior distribution over the possible types of the other players and the actions that they played, since $i$ is assuming that they shared their true types $\vec{t}_{-i}$ and play their part of the communication equilibrium with input $\vec{t}$. Since the VSS and CC primitives are $k$-resilient, the protocol above is explicitly defined in all scenarios in which at most $k$ players deviate.
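To make the role of the joint randomness in Phases 1–2 concrete, here is a sketch with the secrecy machinery stripped out: it shows how $r = r_{1}+\ldots+r_{n}\bmod N$ (shifted into $\{1,\ldots,N\}$) combines with the deterministic function $\mu^{*}$ of Proposition 1 to sample an action profile, and why a single coalition member fixing its contribution cannot bias the outcome. The toy distribution and all names are illustrative assumptions, not part of the protocol.

```python
import random
from fractions import Fraction

# Toy joint distribution mu(t): for the single type profile below, play
# ('a', 'b') with probability 1/3 and ('b', 'a') with probability 2/3.
mu = {('t1', 't2'): [(('a', 'b'), Fraction(1, 3)),
                     (('b', 'a'), Fraction(2, 3))]}

N = 3  # common denominator: every probability above is a multiple of 1/N

def mu_star(t, r):
    """The deterministic mu* of Proposition 1: partition [N] = {1,...,N}
    into blocks whose sizes are proportional to the probabilities."""
    upper = Fraction(0)
    for profile, p in mu[t]:
        upper += p * N
        if r <= upper:
            return profile
    raise ValueError("r out of range")

def run_round(t, adversary_r=None):
    """Phases 1-2 with secrecy stripped out: each player contributes a
    uniform r_i; their sum mod N (shifted into {1,...,N}) selects the
    profile. Fixing one contribution cannot bias the selection."""
    rs = [random.randrange(N) for _ in t]
    if adversary_r is not None:
        rs[0] = adversary_r          # a coalition member's fixed value
    r = sum(rs) % N + 1
    return mu_star(t, r)

counts = {('a', 'b'): 0, ('b', 'a'): 0}
for _ in range(30000):
    counts[run_round(('t1', 't2'), adversary_r=0)] += 1
# Empirical frequencies stay near 1/3 and 2/3 despite the fixed value.
```

As long as one contribution is uniform and independent of the rest, the shifted sum is uniform on $\{1,\ldots,N\}$, which is the observation used in the proof of Proposition 2 below.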
However, there might be histories in which more than $k$ players deviate and other players are not able to continue because either they didn’t receive enough messages or the messages they received were inconsistent. (For example, players will not be able to jointly compute the functions $\mu^{*}_{j}$ in Phase 2 if insufficiently many players send messages. This means they won’t be able to start Phase 3.) We extend the protocol to cover all these cases using the following convention: if a player is not able to continue because the protocol does not explicitly state what to do in a certain situation, the player does not take part in subsequent phases. For instance, if a player $i$ is not able to compute the functions $\mu^{*}_{j}$ in Phase 2, then $i$ does not take part in Phase 3. Note, however, that the action played at the end of Phase 4 is always specified even if a player doesn’t complete some of the previous phases. ###### Proposition 2. If $n>3k$, $\vec{\sigma}$ is a $k$-resilient Nash equilibrium and induces the same correlated strategy profile as $\mu$. ###### Proof. Clearly $\vec{\sigma}$ induces the same correlated strategy profile as $\mu$ by construction. The fact that $\vec{\sigma}$ is a $k$-resilient NE follows directly from the properties of $k$-resilient VSS and $k$-resilient CC: no matter what a coalition of at most $k$ players does, the remaining $n-k$ players are guaranteed to terminate all their VSS and CC instances correctly. Since honest players choose $r_{i}$ uniformly at random, the number $r:=r_{1}+\ldots+r_{n}\bmod N$ is also uniformly distributed in $[N]$ regardless of what a coalition of at most $k$ players decides to share. This guarantees that the actions $a_{1},\ldots,a_{n}$ encoded by the shares $(o_{1}^{1},\ldots,o_{1}^{n}),\ldots,(o_{n}^{1},\ldots,o_{n}^{n})$ of honest players are distributed according to $\mu(\vec{t})$.
The correctness of the output of each player follows from the fact that if a player $i$ waits until receiving at least $2k+1$ shares that define the same secret $a_{i}$, at least $k+1$ of those shares were sent by honest players and thus define the correct value $a_{i}$ (note that if $n>3k$, all players are guaranteed to be able to reconstruct their action $a_{i}$ in Phase 4 even if a coalition of $k$ players deviates, since at least $2k+1$ players are honest). To complete the proof, note that the secrecy properties of the $k$-resilient VSS and CC guarantee that no coalition of at most $k$ players can learn anything besides their own actions, and thus that playing the action $a_{i}$ computed in Phase 4 is optimal. ∎ ### 4.2 Constructing a $k$-resilient sequential equilibrium Proposition 2 shows that, for all $\mu\in Com_{k}(\Gamma)$, there exists a strategy $\vec{\sigma}$ in $\Gamma_{\mathit{CT}}$ such that $\vec{\sigma}$ is a $k$-resilient Nash equilibrium that implements $\mu$. Our aim is to extend $\vec{\sigma}$ to a $k$-resilient sequential equilibrium. The difficulty in constructing such an extension is due to the fact that the $k$-resilience of VSS and CC guarantees that no coalition of at most $k$ players can prevent the remaining players from carrying out the VSS and CC computations if no other player deviates (which is all that is needed to show that the construction gives a $k$-resilient Nash equilibrium). However, it is not clear what the coalition can accomplish starting at a point off the equilibrium path where they detect that other players are deviating as well. For instance, consider a normal-form game $\Gamma$ for $n$ players such that the set of actions for each player is $[n]$. The payoffs are as follows: given an action profile $\vec{a}$, if at least $n-k$ coordinates of $\vec{a}$ are different, then all players get a payoff of $0$; otherwise, all players get a payoff of $1$.
Consider the $k$-resilient correlated equilibrium $p$ in which all possible permutations of $[n]$ are played with equal probability, giving all players a payoff of 0. Suppose that players play the protocol $\vec{\sigma}$ described in Section 4.1. If at least $n-k$ of the players are honest, it is guaranteed by construction that all of them will play different actions by the end of the protocol, and thus that all players will get a payoff of 0 regardless of what the remaining $k$ players do. However, consider the case that a coalition $K$ of at most $k$ players follows the protocol but at some point detects an inconsistency among their local histories. Since players in $K$ were following the protocol faithfully, it must be the case that one of the remaining $n-k$ players deviated. If this leads the players in $K$ to believe that there is a chance that at least two of the remaining players played the same action $a\in[n]$, it is worthwhile for players in $K$ to deviate and play $a$ at the end of the protocol. If this is the case, then we do not have a sequential equilibrium. Our approach to showing that $\mu$ can be implemented with a $k$-resilient sequential equilibrium is to show that there exists a belief system such that all coalitions $K$ of size at most $k$ believe, regardless of their local history, that * (a) the remaining players will successfully complete their part of $\vec{\sigma}$, no matter what the players in $K$ do; * (b) players in $K$ cannot infer anything about the action profile $\vec{a}$ being played other than their own part $\vec{a}_{K}$ (and $\mu$). 
It is important to stress that property (b) does in fact depend on the beliefs of players in $K$. Suppose that some player $i$ in $K$ receives a message $\bot$ from a player $j$ that was not supposed to be sent on the equilibrium path. If $i$ believes that $j$ sends that message only when $j$ has type $t_{j}$ (whether or not this is actually the case), then $i$ will believe that it has acquired important information when it receives $\bot$, and adjust its play accordingly. However, if $i$ believes that $j$ chose that message uniformly at random, then $i$ will not adjust its play. The following proposition shows that if properties (a) and (b) are satisfied, then $\vec{\sigma}$ is $k$-sequentially rational (i.e., it is never in the interest of coalitions of size at most $k$ to deviate from $\vec{\sigma}$). ###### Proposition 3. If $b$ is a belief system compatible with $\vec{\sigma}$ that satisfies properties (a) and (b), then $(\vec{\sigma},b)$ is a $k$-resilient sequential equilibrium. ###### Proof. If $b$ satisfies property (a), then at all nodes in the game tree (including nodes off the equilibrium path), no matter what the players in $K$ do, the remaining players will eventually terminate their part of $\vec{\sigma}$ and thus sample an action profile $\vec{a}\in\mu(\vec{t})$ and play their part $\vec{a}_{-K}$. Property (a) also guarantees that by the end of Phase 4 (and the end of the communication phase), players not in $K$ will send the shares of $\vec{a}_{K}$ to players in $K$, which means that players in $K$ believe that they will always learn $\vec{a}_{K}$. Property (b) guarantees that players in $K$ won’t learn anything besides their own actions. Properties (a) and (b) together imply that players in $K$ believe that they cannot prevent the remaining players from playing their part of an action profile $\vec{a}$ sampled according to $\mu$, and that players in $K$ will eventually learn $\vec{a}_{K}$ and nothing else.
Since $\mu$ is a $k$-resilient communication equilibrium, it is always a best response for the players in $K$ to share their own type in Phase 1, as long as they can continue the basic protocol. Similar arguments show that following the basic protocol if they can (i.e., they receive enough messages consistent with the protocol) is a best response in Phases 2 and 3. Moreover, it also follows from properties (a) and (b) that, since the players in $K$ cannot influence the outcome, doing nothing is a best response if they cannot continue the basic protocol (in all phases). It remains to show that the action played in Phase 4 is optimal. By construction, this reduces to showing that if a coalition $K$ of players with $|K|\leq k$ shared their true type in Phase 1 and were able to compute their actions $a_{i}$ in Phase 4, it is optimal for them to play $a_{i}$ (note that otherwise $i$ is already best responding given that the belief system $b$ satisfies properties (a) and (b)). This follows from the fact that, by properties (a) and (b), players in $K$ believe that all remaining players computed $\vec{a}$ and thus played $\vec{a}_{-K}$. Since $\mu$ is a $k$-resilient communication equilibrium, it is then always in the interest of players in $K$ to play $\vec{a}_{K}$. ∎ Proposition 3 shows that Theorem 1 reduces to finding a belief system $b$ compatible with $\vec{\sigma}$ that satisfies properties (a) and (b). We start by showing how such a belief system can be constructed for a fixed subset $K$ with $|K|\leq k$.
Consider the sequence $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ of strategy profiles in which player $i$ proceeds as follows with $\vec{\sigma}^{m}_{i}$: Given local history $h_{i}$, $i$ does exactly what it would have done with $\vec{\sigma}$, except that if it is supposed to send a message $msg$ to another player $j$, if $j\in K$, with probability $1/m$ it replaces $msg$ by a random message $msg^{\prime}$ sampled according to some fixed distribution that gives all messages positive probability, and sends $msg^{\prime}$ instead. If $j\not\in K$, $i$ replaces $msg$ with $msg^{\prime}$ with probability $m^{-m}$. That is, players play as in $\vec{\sigma}$, except that they may lie to players in $K$ with a low probability, and to players outside of $K$ with a much lower probability. By construction, each $\vec{\sigma}^{m}$ is completely mixed and the belief $b$ induced by the limits of the beliefs in $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ satisfies the properties (a) and (b) described above for the set $K$. To see this, note that property (a) is satisfied because players in $K$ believe that the remaining $n-k$ players are being truthful between themselves (note that the probability of anyone lying to a player not in $K$ is negligible) and thus, since the VSS and CC implementations are $k$-resilient, that they can carry out all the necessary computations without the help of players in $K$. Moreover, since players in $K$ believe that if they get a message incompatible with $\vec{\sigma}$ (i.e., a message that could not have been according to $\vec{\sigma}$, given their joint histories), then they are getting a message drawn from some fixed distribution, they learn no useful information about the action profile being played by getting this message. Property (b) follows. 
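A quick numeric check of why this works: with lie probability $1/m$ toward players in $K$ and $m^{-m}$ toward players outside $K$, the relative likelihood of a lie outside $K$ versus a lie to $K$ is $m^{1-m}$, which vanishes as $m$ grows, so in the limiting beliefs players in $K$ attribute every observed lie to a message sent to $K$. (The code is a sketch of our own, illustrating only the asymptotics.)

```python
# Lie probabilities in the perturbed profiles sigma^m described above:
# 1/m for a message to a player in K, m**(-m) for a message outside K.
def lie_probs(m):
    return 1.0 / m, float(m) ** (-m)

# Ratio P(lie outside K) / P(lie to K) = m**(1 - m), which vanishes
# as m grows, so the limiting beliefs blame all lies on messages to K.
ratios = [lie_probs(m)[1] / lie_probs(m)[0] for m in (2, 5, 10, 20)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))  # strictly decreasing
assert ratios[-1] < 1e-20
```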
This example shows how to construct a $k$-belief system compatible with $\vec{\sigma}$ that satisfies properties (a) and (b) for a particular subset $K$ with $|K|\leq k$, but it unfortunately does not satisfy these properties for all such sets simultaneously. Note that it is critical that players in $K$ believe that they are the only ones being lied to, and also that the lies sent to players in $K$ convey no information since they do not depend on the senders’ local histories (in the example, they are sampled from a fixed distribution). The next definitions generalize these ideas: ###### Definition 8. Given history $h_{i}$ of player $i$, a message $msg$ in $h_{i}$ from $i$ to $j$ is a _lie_ if it is incompatible with $\vec{\sigma}$. A subset $K$ of players is _truthful relative to $\vec{\sigma}$ in global history $\vec{h}$_ if, in $\vec{h}$, no player in $K$ ever _lies_ (i.e., sends a message that is a lie) to another player in $K$. Note that since the local history $h_{i}$ of player $i$ includes all of $i$’s randomization up to the point where it is supposed to send a message, given local history $h_{i}$, there is a unique message sent by $i$ to $j$ that is compatible with $\vec{\sigma}$, even if $\vec{\sigma}$ is a mixed strategy (intuitively, when $i$ sends a message to $j$, all of $i$’s coins have already been tossed and their outcomes appear in $i$’s history). A message from player $i$ to player $j$ in local history $h_{i}$ is _correct_ if it is not a lie (relative to the protocol $\vec{\sigma}$). ###### Definition 9 ($k$-paranoid belief system). Given a $k$-belief system $b$ and the local histories $\vec{h}_{K}$ of a subset $K$ of players, a global history $\vec{h}$ is _$b$-consistent_ with $\vec{h}_{K}$ if the probability according to $b$ that players have global history $\vec{h}$ conditional on players in $K$ having history $\vec{h}_{K}$ is non-zero.
Given a protocol $\vec{\sigma}$, a $k$-belief system $b$ is _$k$-paranoid_ if * (1) for all subsets $K\subseteq[n]$ with $|K|\leq k$ and all histories $\vec{h}_{K}$ of players in $K$, in all histories $\vec{h}$ that are $b$-consistent with $\vec{h}_{K}$, the set $\overline{K}$ of players is truthful relative to $\vec{\sigma}$ (where $\overline{K}$ is the complement of $K$); * (2) there exists a sequence $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ of protocols that induces $b$ such that the lies relative to $\vec{\sigma}$ sent in $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ do not depend on the players’ local histories. Intuitively, a $k$-belief system is $k$-paranoid if all subsets of at most $k$ players believe, regardless of their local history, that they are the only ones being lied to, and also that the lies received convey no information. Our earlier discussion provides a proof of the following proposition. ###### Proposition 4. Let $\vec{\sigma}$ be the protocol presented in Section 4.1. If $b$ is a $k$-paranoid belief system consistent with $\vec{\sigma}$, then $b$ satisfies properties (a) and (b). Thus, by Proposition 3, Theorem 1 reduces to finding a $k$-paranoid belief system $b$ that is consistent with $\vec{\sigma}$. The first candidate for generating such a belief system is the sequence $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ of strategy profiles mentioned earlier, except that with strategy $\sigma_{i}^{m}$, rather than player $i$ lying to player $j$ with probability $1/m$ if $j$ is in some fixed set $K$ and with probability $m^{-m}$ if $j$ is not in $K$, now $i$ lies to all players with probability $1/m$. We might hope that if players detect that they received inappropriate messages, they would assume that they are the only ones being lied to, since lies are highly unlikely. For instance, if a player $i$ receives four inconsistent messages, we would hope that $i$ would believe that these are the only lies.
Unfortunately, this is not necessarily the case. If an earlier lie by some other player could result in players sending these messages to $i$ according to $\vec{\sigma}$, then $i$ would instead believe that there was only one lie. To prevent this, we want players to believe that earlier lies are much less likely than later lies. It turns out that if we do this, then we can get the result that we want. ###### Proposition 5. For all strategies $\vec{\sigma}$ in $\Gamma_{\mathit{CT},syn}$, there exists a $k$-paranoid belief system $b$ that is consistent with $\vec{\sigma}$. ###### Proof. Consider the sequence $\vec{\sigma}^{1},\vec{\sigma}^{2},\vec{\sigma}^{3},\ldots$ of strategy profiles where in $\vec{\sigma}^{m}$ players act exactly as they do in $\vec{\sigma}$, except that whenever $i$ would send a message $msg$ to another player $j$, with probability $m^{-\left((2n)^{-r}\right)}$ (where $r$ is the current round of communication), $i$ replaces this message by a message $msg^{\prime}$ chosen according to some fixed distribution and sends $msg^{\prime}$ instead. Intuitively, these are protocols where each player lies to other players with a probability that (greatly) increases with the round number and that decreases with $m$. It is straightforward to check that $\vec{\sigma}^{1},\vec{\sigma}^{2},\vec{\sigma}^{3},\ldots$ converges to $\vec{\sigma}$. Let $b$ be the belief system induced by the sequence. We want to show that $b$ is $k$-paranoid. By construction, $b$ satisfies property (2) of Definition 9 since lies in $\vec{\sigma}^{1},\vec{\sigma}^{2},\vec{\sigma}^{3},\ldots$ are sampled from a fixed distribution. To show that it also satisfies property (1), we need some preliminary lemmas. Given a global history $\vec{h}$ of $\vec{\sigma}$, let $L(\vec{h})$ be the sequence $(\ell_{1},\ell_{2},\ldots,\ell_{R})$, where $R$ is the total number of communication rounds in $\vec{h}$ and $\ell_{r}$ is the number of lies sent at round $r$ in $\vec{h}$. ###### Lemma 1.
If $P^{m}$ is the probability on global histories induced by $\vec{\sigma}^{m}$, and $\vec{h}$ and $\vec{h}^{\prime}$ are global history profiles such that $L(\vec{h})<L(\vec{h}^{\prime})$ in lexicographical order (the smaller sequence is the one that has the smaller value in the first position in which they differ), then $\lim_{m\rightarrow\infty}\frac{P^{m}(\vec{h}^{\prime})}{P^{m}(\vec{h})}=0$. ###### Proof. If $L(\vec{h})=(\ell_{1},\ell_{2},\ldots,\ell_{R})$, then $\log(P^{m}(\vec{h}))=-\log(m)\left(\sum_{r=1}^{R}\ell_{r}(2n)^{-r}\right)+\Theta(1),$ where $\Theta(1)$ denotes an expression that is bounded by some constant. Thus, if $L(\vec{h})=(\ell_{1},\ell_{2},\ldots,\ell_{R})$ and $L(\vec{h}^{\prime})=(\ell^{\prime}_{1},\ldots,\ell^{\prime}_{R^{\prime}})$, we must show that if $L(\vec{h})<L(\vec{h}^{\prime})$, then $\sum_{r=1}^{\max(R,R^{\prime})}\ell_{r}(2n)^{-r}<\sum_{r=1}^{\max(R,R^{\prime})}\ell^{\prime}_{r}(2n)^{-r}$, which is equivalent to showing that $\sum_{r=1}^{\max(R,R^{\prime})}(\ell^{\prime}_{r}-\ell_{r})(2n)^{-r}>0$ (taking the elements beyond the end of a sequence to be 0). Since each player sends exactly one message to each other player, we have that $0\leq\ell_{r}\leq n$. Therefore, if $r^{*}$ is the first round such that $\ell^{\prime}_{r^{*}}\not=\ell_{r^{*}}$, we have that $\sum_{r=1}^{\max(R,R^{\prime})}(\ell^{\prime}_{r}-\ell_{r})(2n)^{-r}\geq(2n)^{-r^{*}}-n\sum_{r=r^{*}+1}^{\max(R,R^{\prime})}(2n)^{-r}=(2n)^{-r^{*}}\left(1-n\sum_{r=1}^{\max(R,R^{\prime})-r^{*}}(2n)^{-r}\right),$ and $1-n\sum_{r=1}^{\max(R,R^{\prime})-r^{*}}(2n)^{-r}>1-n\sum_{r>0}(2n)^{-r}>1-n\sum_{r>0}2^{-r}n^{-1}=0.$ ∎ ###### Lemma 2. If $|K|\leq k$ and $\vec{h}_{K}^{*}$ is a local history of the players in $K$, then for every global history profile $\vec{h}$ such that $\overline{K}$ is not truthful in $\vec{h}$, $\lim_{m\rightarrow\infty}P^{m}(\vec{h}\mid\vec{h}_{K}^{*})=0.$ ###### Proof.
Suppose that for some global history $\vec{h}$, $P^{m}(\vec{h}\mid\vec{h}_{K}^{*})>0$ and $\overline{K}$ is not truthful in $\vec{h}$. Since $P^{m}(\vec{h}\mid\vec{h}_{K}^{*})=\frac{P^{m}(\vec{h})}{\sum_{\vec{h}^{\prime}:\vec{h}^{\prime}_{K}=\vec{h}_{K}^{*}}P^{m}(\vec{h}^{\prime})},$ by Lemma 1, this result reduces to finding a global history $\vec{h}^{\prime}$ such that $\vec{h}^{\prime}_{K}=\vec{h}_{K}$ and $L(\vec{h}^{\prime})<L(\vec{h})$. This history can be constructed as follows: suppose that $r$ is the first round in $\vec{h}$ where a player $i\in\overline{K}$ lies to another player $j\in\overline{K}$. Consider a history $\vec{h}^{\prime}$ in which players in $K$ perform the same internal computations and send exactly the same messages as in $\vec{h}$, but players in $\overline{K}$ only do so until round $r-1$. In round $r$, players in $\overline{K}$ perform exactly the same internal computations as in $\vec{h}$ and send the same messages to players in $K$, but they send to other players in $\overline{K}$ the _correct_ messages (i.e., the messages players in $\overline{K}$ were supposed to send according to the protocol). Notice that since a player $i$’s local history includes the random bits that the player needs, there is a unique correct message, even if player $i$ uses a randomized protocol. From round $r+1$ on, players in $\overline{K}$ send players in $K$ exactly the same messages as in $\vec{h}$. Clearly, by construction, $\vec{h}^{\prime}_{K}=\vec{h}_{K}$. Let $L(\vec{h})=(\ell_{1},\ell_{2},\ldots,\ell_{R})$ and $L(\vec{h}^{\prime})=(\ell^{\prime}_{1},\ell^{\prime}_{2},\ldots,\ell^{\prime}_{R})$. Then $\ell_{i}=\ell^{\prime}_{i}$ for all $i<r$, since players have the same local histories in $\vec{h}$ and $\vec{h}^{\prime}$ for the first $r-1$ rounds of communication, and they also send exactly the same messages.
A similar argument shows that, for all $r^{\prime}\geq r$, the players in $K$ tell identical lies in round $r^{\prime}$ in both $\vec{h}$ and $\vec{h}^{\prime}$. Finally, note that strictly fewer lies are told by players in $\overline{K}$ in round $r$ of $\vec{h}^{\prime}$ than in round $r$ of $\vec{h}$, since the players in $\overline{K}$ have exactly the same local history in both $\vec{h}$ and $\vec{h}^{\prime}$, send exactly the same messages to players in $K$ in both, but in $\vec{h}^{\prime}$ they tell no lies to each other whereas, by assumption, there is at least one such lie told in $\vec{h}$. Thus, $\ell^{\prime}_{r}<\ell_{r}$, and $L(\vec{h}^{\prime})<L(\vec{h})$ as desired. ∎ This shows that $b$ satisfies property (1) of Definition 9, and thus that $b$ is a $k$-paranoid belief system consistent with $\vec{\sigma}$, completing the proof of Proposition 5. ∎ ## 5 Characterization of $SE_{k}(\Gamma_{d})$ and $SE_{k}^{S}(\Gamma_{d})$ In this section, we characterize the sets $SE_{k}(\Gamma_{d})$ and $SE_{k}^{S}(\Gamma_{d})$ (note that, by Theorem 1, the characterization of $SE_{k}(\Gamma_{\mathit{CT}})$ and $SE_{k}^{S}(\Gamma_{\mathit{CT}})$ is identical if $n>3k$). If $\Gamma$ is a normal-form game, let $CE_{k}(\Gamma)$ denote the set of possible distributions over action profiles induced by a $k$-resilient correlated equilibrium in $\Gamma$; if $\Gamma$ is a Bayesian game, let $Com_{k}(\Gamma)$ denote the set of possible maps from type profiles to distributions over action profiles induced by $k$-resilient communication equilibria in $\Gamma$. We also denote by $CE_{k}^{S}(\Gamma)$ the set of possible distributions over action profiles induced by a strongly $k$-resilient correlated equilibrium in $\Gamma$. $Com_{k}^{S}(\Gamma)$ and $SE_{k}^{S}(\Gamma)$ are defined analogously.
The first characterization shows that, if $\Gamma$ is a normal-form game, then $SE_{k}(\Gamma_{d})$ is the set of $k$-resilient correlated equilibria of $\Gamma$ and $SE_{k}^{S}(\Gamma_{d})$ is the set of strongly $k$-resilient correlated equilibria of $\Gamma$. ###### Proposition 6. If $\Gamma=(P,A,U)$ is a normal-form game for $n$ players, then $SE_{k}(\Gamma_{d})=CE_{k}(\Gamma)$ and $SE_{k}^{S}(\Gamma_{d})=CE_{k}^{S}(\Gamma)$ for all $k\leq n$. ###### Proof. Clearly, the distribution over actions induced by a $k$-resilient sequential equilibrium (resp., strongly $k$-resilient sequential equilibrium) $\vec{\sigma}$ in $\Gamma_{d}$ is also a $k$-resilient correlated equilibrium (resp., strongly $k$-resilient correlated equilibrium) of $\Gamma$, for otherwise there exists a coalition $K$ of at most $k$ players such that all (resp., some) members of the coalition can increase their utility by deviating from $\vec{\sigma}$ and playing a different action in the underlying game. For the opposite inclusion, observe that all $k$-resilient (resp., strongly $k$-resilient) correlated equilibria $p$ of $\Gamma$ can be easily implemented with a mediator: the mediator samples an action profile $\vec{a}$ following distribution $p$ and gives $a_{i}$ to each player $i$. Then each player $i$ plays whatever is sent by the mediator. ∎ Theorem 1 and Proposition 6 together imply the following corollary: ###### Corollary 1. If $\Gamma=(P,A,U)$ is a normal-form game for $n$ players and $n>3k$, then $SE_{k}(\Gamma_{\mathit{CT}})=CE_{k}(\Gamma)$ and $SE_{k}^{S}(\Gamma_{\mathit{CT}})=CE_{k}^{S}(\Gamma)$. For Bayesian games, we can show that the set of sequential equilibria in $\Gamma_{d}$ is equal to the set of communication equilibria of $\Gamma$. ###### Proposition 7. If $\Gamma=(P,T,q,A,U)$ is a Bayesian game for $n$ players and $n\geq k$, then $SE_{k}(\Gamma_{d})=Com_{k}(\Gamma)$ and $SE_{k}^{S}(\Gamma_{d})=Com_{k}^{S}(\Gamma)$. ###### Proof.
Suppose that $\vec{\sigma}$ is a $k$-resilient (resp., strongly $k$-resilient) sequential equilibrium of $\Gamma_{d}$. Then the correlated strategy profile $\mu$ induced by $\vec{\sigma}$ in $\Gamma$ must be a $k$-resilient (resp., strongly $k$-resilient) communication equilibrium. If $\mu$ is not a $k$-resilient (resp., strongly $k$-resilient) communication equilibrium, then there exists a coalition $K$ of players with $|K|\leq k$ and two functions $\psi:T_{K}\rightarrow T_{K}$ and $\varphi:A_{K}\rightarrow A_{K}$ such that the inequality of Definition 7 does not hold for some (resp., for all) $i\in K$. It follows that if the agents in $K$ play $\vec{\sigma}$ in $\Gamma_{d}$ as if they had type profile $\psi(\vec{t}_{K})$ instead of their true types, and then play action $\varphi(\vec{a}_{K})$ instead of the action profile $\vec{a}_{K}$, the utility of all (resp., some) agents would strictly increase, which contradicts the assumption that $\vec{\sigma}$ is a $k$-resilient (resp., strongly $k$-resilient) sequential equilibrium of $\Gamma_{d}$. For the opposite inclusion, recall from the discussion after Definition 7 that we can identify a correlated strategy profile $\mu$ in $\Gamma$ with a canonical strategy $\vec{\sigma}$ in $\Gamma_{d}$ in which players tell the mediator their type and the mediator computes which action profile they should play according to $\mu$. If $\mu$ is a $k$-resilient (resp., strongly $k$-resilient) communication equilibrium, then the canonical strategy $\vec{\sigma}$ is a $k$-resilient (resp., strongly $k$-resilient) sequential equilibrium of $\Gamma_{d}$ that induces $\mu$ (see the discussion after Definition 7 for details). ∎ Theorem 1 and Proposition 7 together imply the following: ###### Corollary 2. If $\Gamma=(P,T,q,A,U)$ is a Bayesian game for $n$ players and $n>3k$, then $SE_{k}(\Gamma_{\mathit{CT}})=Com_{k}(\Gamma)$ and $SE_{k}^{S}(\Gamma_{\mathit{CT}})=Com_{k}^{S}(\Gamma)$.
## 6 The asynchronous setting Up to this point, as in the game-theory literature, we assumed that the communication between players proceeded in synchronous rounds. That is, at each round $t$, all the players send messages to each other player (we identify not sending a message with sending a special message $\bot$) and these messages are received by their intended recipients before the beginning of round $t+1$. The assumption that we identify not sending a message with sending $\bot$ is made without loss of generality—player $j$ can tell if player $i$ has not sent her a message. Moreover, it is typically assumed when analyzing games with mediators and communication games that it is common knowledge when the communication phase ends and that, after it ends, all the players simultaneously move in the underlying game. We call this the _synchronous_ setting. In this section, we consider an _asynchronous_ setting that is quite standard in the distributed computing literature. In the asynchronous setting, there is no global notion of time, and messages may take arbitrary amounts of time to be delivered (although we do assume that all messages are eventually delivered). Thus, we can no longer identify not sending a message with sending $\bot$; if $j$ has not received a message from player $i$, $j$ is not sure if this is because $i$ did not send $j$ a message or if $i$ sent a message that has not yet been delivered. For ease of exposition, we assume that message delivery is under the control of a _scheduler_ , who also decides when each player gets to move, with the guarantee that all players eventually get to move if they want to move. More precisely, in an asynchronous setting, the players and the scheduler alternate turns. 
During a player’s turn, it receives all the messages delivered to it by the scheduler since its last turn; it may also perform additional computation and send messages to other players (note that these messages are not received until the scheduler delivers them to their recipients). When the player finishes all necessary operations and sends all the necessary messages, it ends its turn, at which point it becomes the scheduler’s turn. During its turn, the scheduler can deliver messages that have been sent but not yet delivered, or it may schedule a player. If the scheduler schedules player $i$, the scheduler’s turn ends and $i$’s turn begins. What internal computations a player $i$ performs, what messages $i$ sends, and to which players $i$ sends messages during its turn are dictated by $i$’s strategy, which is a function from $i$’s local history to operations and messages (i.e., a strategy tells a player what to do as a function of its local history). Note that $i$’s local history is simply an ordered list containing, for each time that $i$ was scheduled, the messages that $i$ sent and received and the internal computations that $i$ performed. (We can represent this as a game, in which case we would identify each information set of $i$ with a local history.) Note that $i$ does not in general know how many times another player $j$ has been scheduled since $i$’s last turn. The scheduler’s behavior is also encoded in its strategy, which describes which messages the scheduler delivers and which players are scheduled, as a function of the scheduler’s local history. The scheduler’s local history is again an ordered list, which includes which players have been scheduled up to that point and which messages have been delivered. Note that the scheduler does not know the contents of the messages sent by the players, but it can still identify the messages by, for instance, labelling them by the order in which they were sent.
For example, an entry in the scheduler’s local history could be “delivered player 3’s fourth message”. The only constraint on the scheduler’s strategy is that it must eventually deliver all messages that have been sent and, starting from any local history, it must eventually schedule all players that have not terminated. (We assume that “done” is a signal that a player can send the scheduler to indicate that it has terminated.) If $\Gamma_{\mathit{CT}}$ is asynchronous, the payoffs of the players may depend on the strategy $\sigma_{e}$ of the scheduler. This means that the definitions of implementation and of the solution concepts must take the scheduler into account. We extend the definitions of Nash equilibrium, correlated equilibrium, sequential equilibrium, and communication equilibrium by requiring that the relevant inequality holds for all choices of $\sigma_{e}$. For example, a strategy profile $\vec{\sigma}$ is a Nash equilibrium in an asynchronous setting if, for all $i\in P$, all strategies $\tau_{i}$ for player $i$, and all schedulers $\sigma_{e}$, $u_{i}(\vec{\sigma},\sigma_{e})\geq u_{i}(\tau_{i},\vec{\sigma}_{-i},\sigma_{e})$. Since the action profile played might depend on the scheduler, a strategy $\vec{\sigma}$ in $\Gamma_{\mathit{CT}}$ might induce more than one strategy in $\Gamma$ (see Example 2 below). A strategy profile $\vec{\sigma}$ for $\Gamma_{\mathit{CT}}$ _implements a set $S$ of strategy profiles in $\Gamma$_ in an asynchronous setting if (a) for every scheduler $\sigma_{e}$ in $\Gamma_{\mathit{CT}}$, the outcome obtained when playing $\vec{\sigma}$ with scheduler $\sigma_{e}$ is the same as that obtained when playing some strategy profile in $S$, and (b) for every strategy profile $\vec{\tau}$ in $S$, there exists a scheduler $\sigma_{e}$ such that the outcome obtained when playing $\vec{\sigma}$ with $\sigma_{e}$ is the same as that obtained when playing $\vec{\tau}$.
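The alternating turns just described can be made concrete with a small simulation. The following Python sketch is illustrative only (the representation of strategies and histories is ours, not the paper's): a player's strategy maps its delivered messages and current state to a new state plus outgoing messages, and the scheduler decides, at each turn, whether to deliver a pending message or to schedule a player.

```python
from collections import deque

def run(strategies, scheduler, max_turns=50):
    """Toy model of the alternating player/scheduler turns.
    strategies[i](delivered, state) -> (new_state, [(dest, msg), ...]);
    scheduler(pending, history) -> ('deliver', k) or ('schedule', i)."""
    n = len(strategies)
    pending = []                           # messages sent but not yet delivered
    inboxes = [deque() for _ in range(n)]  # delivered, not yet received
    states = [None] * n
    history = []                           # the scheduler's local history
    for _ in range(max_turns):
        act = scheduler(pending, history)
        history.append(act)
        if act[0] == 'deliver' and act[1] < len(pending):
            dest, msg = pending.pop(act[1])
            inboxes[dest].append(msg)
        elif act[0] == 'schedule':
            i = act[1]
            delivered = list(inboxes[i])
            inboxes[i].clear()
            states[i], outgoing = strategies[i](delivered, states[i])
            pending.extend(outgoing)
    return states

# Player 0 sends "hi" to player 1 on its first turn; player 1 records messages.
def p0(delivered, state):
    return ("sent", [(1, "hi")]) if state is None else (state, [])

def p1(delivered, state):
    return ((state or []) + delivered, [])

script = iter([('schedule', 0), ('deliver', 0), ('schedule', 1)])
states = run([p0, p1], lambda pending, history: next(script, ('schedule', 1)),
             max_turns=5)
assert states[1] == ["hi"]
```

Note that "hi" reaches player 1's local history only after the scheduler both delivers the message and schedules player 1, mirroring the two-step send/deliver semantics above.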
Intuitively, $\vec{\sigma}$ implements $S$ if the outcomes that result from playing $\vec{\sigma}$ with different choices of scheduler are precisely those that result from playing strategy profiles in $S$. As in the synchronous setting, if $\Gamma$ is a normal-form game, we take the strategy profiles in $S$ to consist of distributions over action profiles in $\Gamma$, while if $\Gamma$ is a Bayesian game, the strategy profiles in $S$ are correlated strategy profiles. In cheap-talk games in synchronous systems, we assumed (as is standard in the literature) that the game tree is finite and that, at the last move on a path, the players play an action in the underlying game; we take the utility of a leaf to be the utility of the corresponding action profile in the normal-form game. We cannot assume this in asynchronous systems. Indeed, in asynchronous systems, to allow for players who deviate from an equilibrium, we must consider game trees with infinite paths. For example, a player $i$ may wait forever for a message that never arrives, because the sender deviated and never sent it, so $i$ may never play an action in the underlying game. We must thus define what the payoffs are on infinite paths and at the leaf of a finite path where some players do not play an action in the underlying game. This amounts to treating players who have not played an action as, in fact, having played some action. The two main approaches for doing this are the _default-move approach_ Abraham et al. [2019b], where if player $i$ never plays an action in the underlying game, its action is sampled from a default distribution, and the _Aumann and Hart_ (AH) approach Aumann and Hart [2003], where the action played by $i$ is some function of $i$’s local history. The AH approach essentially assumes that players can leave a “will” specifying the action that they should play in the underlying game if they never actually play one while playing the cheap-talk game.
In the asynchronous setting (with both the AH and the default-move approach), we say that a strategy profile $\vec{\sigma}$ in $\Gamma_{\mathit{CT}}$ _implements_ a strategy profile $\vec{\tau}$ in $\Gamma$ if the set of strategy profiles implemented by $\vec{\sigma}$ (for different choices of scheduler) is $\\{\vec{\tau}\\}$ (thus, playing $\vec{\sigma}$ results in the same outcome no matter what the scheduler does). The following example illustrates some of the new subtleties that asynchrony introduces.

###### Example 2.

Let $\Gamma$ be a normal-form game for $n$ players where the set of actions is $\\{1,2,\ldots,n\\}$, and let $\Gamma_{d,asyn}$ be the extension of $\Gamma$ with a trusted mediator $d$ in the asynchronous setting, where the AH approach is used. Suppose that $\vec{\sigma}$ is the strategy profile that proceeds as follows: each player sends an empty message to the mediator when it is first scheduled. The mediator waits until it receives a message, and then sends a message to all players with the index of the player whose message was received first. When a player $j$ receives a message from the mediator with a number $i\in\\{1,2,\ldots,n\\}$, player $j$ plays action $i$. If player $j$ never receives a message from the mediator, then according to its will, $j$ plays action $j$. The player whose index appears most often in the resulting action profile receives a payoff of 1 (if there are ties, they all get 1), while the remaining players get 0. This game can be viewed as a race: the player whose message gets to the mediator first receives a payoff of 1, while the rest receive 0. Note that, regardless of the scheduler, all players play the same action in $\Gamma$. However, the scheduler decides which action is played. Thus, the strategy $\vec{\sigma}$ in the example implements the set of strategies $\\{(1,1,\ldots,1),(2,2,\ldots,2),\ldots,(n,n,\ldots,n)\\}$. This strategy is not a Nash equilibrium.
To see this, consider the scheduler $\sigma_{e}$ that schedules the players sequentially (first player 1, then player 2, and so on). If player $i$ sends two messages to the mediator and is the only player to do so, then the scheduler delivers $i$’s first message before it delivers any other player’s message. If more than one player sends two messages to the mediator, the scheduler chooses one of these players at random and delivers her message first. The remaining messages are delivered in some random order. With this scheduler, each player benefits by deviating and sending two messages to the mediator instead of just one. Thus, $\vec{\sigma}$ is not a Nash equilibrium. A similar argument can be used to show that, for all games $\Gamma$, if $\vec{\sigma}$ is a $k$-resilient Nash equilibrium in $\Gamma_{\mathit{CT}}$, then the payoffs of the players when playing $\vec{\sigma}$ cannot depend on the scheduler (see Abraham et al. [2019b]). This example shows that in asynchronous systems, the set of possible deviations is much larger than in synchronous systems. However, there are cases where controlling the scheduling of the messages and players does not give that much power to an adversary. For instance, if we consider an asynchronous cheap-talk extension of Example 1, the strategy profile in which both agents send their type to each other the first time they are scheduled and play action $t_{1}t_{2}$ is a Nash equilibrium (note that this is almost equivalent to the strategy proposed in Example 1; however, there is no notion of “rounds” in an asynchronous setting). Many of the results that hold in the synchronous case also hold in the asynchronous case, but may require a larger proportion of the players to be honest. For instance, asynchronous multiparty secure computation tolerates deviations by at most a quarter of the agents, while synchronous multiparty secure computation tolerates deviations by at most a third Ben-Or et al. [1988, 1993].
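The race in Example 2 and the deviation just described can be sketched in a few lines of Python (the encoding is ours; `race_outcome` is a hypothetical helper, not from the paper): under the deviant-favoring scheduler, the lone double-sender's first message is delivered first, so unilaterally sending two messages wins the race with certainty.

```python
import random

def race_outcome(msgs_sent, rng):
    """msgs_sent[i] = number of messages player i sends to the mediator.
    Returns the player whose message is delivered first under the
    deviant-favoring scheduler described above."""
    doubles = [i for i, m in enumerate(msgs_sent) if m >= 2]
    if len(doubles) == 1:
        return doubles[0]                 # the lone double-sender wins for sure
    if doubles:
        return rng.choice(doubles)        # ties among double-senders: random
    return rng.randrange(len(msgs_sent))  # everyone followed sigma: scheduler's choice

rng = random.Random(0)
# If everyone follows sigma (one message each), the winner is whatever the
# scheduler decides; if player 2 alone deviates and sends two, it always wins.
assert race_outcome([1, 1, 2, 1, 1], rng) == 2
assert race_outcome([1, 1, 1, 1, 1], rng) in range(5)
```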
Our results hold for similar bounds. (An intuitive explanation of why there is a difference between the thresholds for the synchronous and asynchronous settings can be found in Appendix B.)

### 6.1 Main results

Given a normal-form or Bayesian game $\Gamma$, let $\Gamma_{\mathit{CT},asyn}$ and $\Gamma_{d,asyn}$ be the asynchronous equivalents of $\Gamma_{\mathit{CT}}$ and $\Gamma_{d}$, respectively (i.e., the asynchronous cheap-talk and mediator extensions of $\Gamma$). As in Section 3, let $SE_{k}(\Gamma_{\mathit{CT},asyn})$ (resp., $SE_{k}^{S}(\Gamma_{\mathit{CT},asyn})$) denote the set of possible strategy profiles in $\Gamma$ induced by $k$-resilient sequential equilibria in $\Gamma_{\mathit{CT},asyn}$. However, since in the asynchronous setting the outcome may depend on the scheduler (see the discussion in Section 6), if $\Gamma$ is a normal-form game, then $SE_{k}(\Gamma_{\mathit{CT},asyn})$ is a set of sets of strategy profiles, and if $\Gamma$ is a Bayesian game, then $SE_{k}(\Gamma_{\mathit{CT},asyn})$ is a set of sets of correlated strategy profiles. In the asynchronous setting, we have results equivalent to those in the synchronous setting.

###### Theorem 2.

If $\Gamma=(P,T,q,A,U)$ is a Bayesian game for $n$ players and $n>4k$, then $SE_{k}(\Gamma_{\mathit{CT},asyn})=SE_{k}(\Gamma_{d,asyn})$ and $SE_{k}^{S}(\Gamma_{\mathit{CT},asyn})=SE_{k}^{S}(\Gamma_{d,asyn})$ with both the default-move and AH approaches.

Note that the only difference with respect to Theorem 1 is that in the asynchronous setting we require that $n>4k$ instead of $n>3k$. This difference comes down to the fact that, in the asynchronous setting, players cannot tell if a message has not been received because the player didn’t send it or because the message is being delayed by the scheduler.
In fact, this is the reason why most of the distributed computing primitives in asynchronous systems are resilient to deviations of up to a fourth of the players, as opposed to a third in the synchronous case (see Appendix B for more details). As in the synchronous case, $SE_{k}(\Gamma_{\mathit{CT},asyn})$ and $SE_{k}^{S}(\Gamma_{\mathit{CT},asyn})$ have a relatively simple characterization. This is described in Section 6.3. ### 6.2 Proof of Theorem 2 (outline) The proof of Theorem 2 is quite similar to the proof of Theorem 1. Given a strategy profile $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}\in SE_{k}(\Gamma_{d,asyn})$, we first construct a $k$-resilient Nash equilibrium $\vec{\sigma}$ that implements $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$ and guarantees that, if less than $k$ players deviate, all players will eventually be able to compute the action that they should play, and then we construct a $k$-paranoid belief system $b$ that is consistent with $\vec{\sigma}$ just as in the proof of Theorem 1. An argument similar to the one used in the synchronous case shows that $(\vec{\sigma},b)$ is indeed a $k$-resilient sequential equilibrium. The only major difference between the synchronous and the asynchronous case is that, in an asynchronous system, constructing a $k$-resilient Nash equilibrium $\vec{\sigma}$ that implements $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$ is far more complicated than in a synchronous system (the reason for this can be found below), and is beyond the scope of this work. In this paper, we only provide a very high-level overview of the construction used in Geffner and Halpern [2021] and Abraham et al. [2019b] that satisfies all the properties we need. As in the case of Theorem 1, we prove Theorem 2 for the case of $k$-resilient sequential equilibrium. The case of strongly $k$-resilient equilibrium is analogous. 
As discussed in Section 3, in general, there is no simple description of $SE_{k}(\Gamma_{d,asyn})$; thus, the proof of Theorem 2 consists of showing that arbitrary interactions $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$ with a trusted mediator in the asynchronous setting can be emulated by a strategy $\vec{\sigma}$ without the mediator that tolerates deviations of coalitions of size at most $k$. We proceed as in the proof of Theorem 1: we first construct a $k$-resilient Nash equilibrium $\vec{\sigma}$ that implements $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$, and then we construct a $k$-paranoid belief system to extend the Nash equilibrium to a $k$-resilient sequential equilibrium $\vec{\sigma}_{seq}$. The main problem is that the construction of a $k$-resilient Nash equilibrium is not as simple as in the synchronous setting. The communication between the players and the mediator can be arbitrarily convoluted (as opposed to the players just sending their types and receiving their suggested action), and it may depend on the scheduler. This means that, to be sure that all possible cases are considered, players in $\vec{\sigma}$ must somehow be able to simulate all possible scheduling orders for the players and the messages in $\vec{\sigma}^{\prime}+\sigma_{d}$. (In a given run, the particular order that they choose must depend on the actual scheduler in $\vec{\sigma}$.) The simulation is quite nontrivial, but the high-level idea is that each player $i$ in $\vec{\sigma}$ simulates its counterpart in $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$, except that whenever it would send a message $msg$ in $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$, $i$ shares $msg$ using VSS. 
Moreover, each player simultaneously plays its part simulating the mediator, which means using a _consensus protocol_ (see Appendix C) to guarantee that the nonfaulty players agree on when the mediator is scheduled (there exist $t$-resilient consensus protocols in asynchronous systems if $n>3t$ Abraham et al. [2008a]) and using CC to compute which messages it receives, which messages it sends, and what the contents of those messages are. With this procedure, each player is able to compute its share of every message sent by the mediator. If that message was supposed to be sent to player $j$, then each player $i$ sends its share of the message to $j$ in order for $j$ to be able to reconstruct it. The full construction can be found in Geffner and Halpern [2021]. It satisfies a property called _$k$ -bisimulation_, which states the following: * (a) For all coalitions $K$ of size at most $k$, all strategies $\vec{\tau}_{K}$ for players in $K$, and all schedulers $\sigma_{e}$, there exists a strategy $\vec{\tau}^{\prime}_{K}$ and a scheduler $\sigma_{e}^{\prime}$ such that $(\vec{\sigma}_{-K},\tau_{K},\sigma_{e})$ and $((\vec{\sigma}^{\prime}_{-K},\tau^{\prime}_{K})+\sigma^{\prime}_{d},\sigma^{\prime}_{e})$ are identically distributed. * (b) For all coalitions $K$ of size at most $k$, all strategies $\vec{\tau}^{\prime}_{K}$ for players in $K$, and all schedulers $\sigma^{\prime}_{e}$, there exists a strategy $\vec{\tau}_{K}$ and a scheduler $\sigma_{e}$ such that $(\vec{\sigma}_{-K},\tau_{K},\sigma_{e})$ and $((\vec{\sigma}^{\prime}_{-K},\tau^{\prime}_{K})+\sigma^{\prime}_{d},\sigma^{\prime}_{e})$ are identically distributed. Intuitively, if $\vec{\sigma}$ $k$-bisimulates $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$, then all possible distributions over action profiles in $\vec{\sigma}$ when a coalition of at most $k$ players deviates also occur in $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$ when the same coalition deviates (possibly in a different way). 
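The share-and-reconstruct step of the simulation can be illustrated with plain Shamir secret sharing over a prime field; this is only the arithmetic core (VSS additionally lets players verify that their shares are consistent), and the field size and parameters below are illustrative, not taken from the paper.

```python
import random

P = 2**31 - 1  # a prime modulus; a toy field for illustration

def share(secret, n, t, rng):
    """Pick a random degree-t polynomial f over GF(P) with f(0) = secret;
    player i's share is f(i). Any t+1 shares reconstruct the secret, while
    any t shares are consistent with every possible secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of f(0) over GF(P)."""
    total = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        total = (total + y_i * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(1)
shares = share(42, n=10, t=3, rng=rng)
assert reconstruct(shares[:4]) == 42   # any t+1 = 4 shares suffice
assert reconstruct(shares[-4:]) == 42
```

Here any $t+1$ shares determine the degree-$t$ polynomial, so honest players can jointly reconstruct a message that the mediator "sent" without any coalition of at most $t$ players learning it on their own.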
In particular, if a coalition $K$ of players with $|K|\leq k$ can increase their payoff in $\vec{\sigma}$ by deviating, they could already do so in $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$, which contradicts the assumption that $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$ is a $k$-resilient sequential equilibrium. This implies that $\vec{\sigma}$ is a $k$-resilient Nash equilibrium that implements $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$ (a result shown by Abraham, Dolev, Geffner, and Halpern [2019b]). In the proof of Theorem 2, we use a slightly modified variant of Geffner and Halpern’s construction. In this variant, each player $i$ terminates only if all messages sent in the past are correct according to $\vec{\sigma}$ and $i$’s local history. If $i$ ever lied to some player $j$, $i$ sends the correct message before terminating. The purpose of this is that even if a player $j$ is not able to proceed with $\vec{\sigma}$ (for instance, because it received incorrect messages), $j$ still believes that it will eventually be able to reconstruct the (simulated) mediator’s messages, as long as the remaining players are able to continue. Additionally, as in the synchronous case, if players cannot continue with the protocol (for instance, because they received incorrect messages), they do nothing. It is easy to check that this variant still $k$-bisimulates $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$ and, thus, $\vec{\sigma}$ is also a $k$-resilient Nash equilibrium that implements $\vec{\sigma}^{\prime}+\sigma^{\prime}_{d}$. The rest of the proof of Theorem 2 consists of finding an appropriate $k$-paranoid belief system consistent with $\vec{\sigma}$.
If these beliefs exist then, as in the synchronous case, no coalition $K$ of up to $k$ players would have an incentive to deviate from the proposed protocol since, according to their beliefs, the remaining players would still be able to compute their part of the action profile $\vec{a}$ (which is sampled from a communication equilibrium), and thus it is optimal for players in $K$ to play their part of $\vec{a}$ as well. Note that, since the protocol states that players should send the correct messages at the very end of the protocol, with any belief system consistent with $\vec{\sigma}$, players in $K$ always believe that they will be able to compute their part of $\vec{a}$ if they wait long enough, regardless of their local history. This means that proving Theorem 2 reduces to proving an analogue of Proposition 5 for asynchronous systems: ###### Proposition 8. For all strategies $\vec{\sigma}$ in $\Gamma_{\mathit{CT},asyn}$, there exists a $k$-paranoid belief system $b$ that is consistent with $\vec{\sigma}$. ###### Proof. Consider a sequence $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ of strategy profiles defined just as in the proof of Proposition 5, and consider a randomized scheduler $\sigma_{e}$ such that all valid _schedule prefixes_ (i.e., ordered list of players scheduled and messages delivered so far) occur with positive probability. For instance, $\sigma_{e}$ could be a scheduler that at each point in time either delivers a message or chooses a player to be scheduled uniformly at random. Consider the belief system $b$ induced by the previous sequence of strategies with scheduler $\sigma_{e}$. An argument similar to that given in Lemmas 1 and 2 shows that $b$ is a $k$-paranoid belief system, as desired. ∎ ### 6.3 Characterization of $SE_{k}(\Gamma_{d,asyn})$ and $SE_{k}^{S}(\Gamma_{d,asyn})$ In the asynchronous setting, the description of $SE_{k}(\Gamma_{d,asyn})$ is a bit more convoluted than those of Section 5. 
Reasoning similar to that used in the proof of Proposition 6 shows that, for a fixed scheduler, a $k$-resilient (resp., strongly $k$-resilient) sequential equilibrium in $SE_{k}(\Gamma_{d,asyn})$ induces a $k$-resilient (resp., strongly $k$-resilient) correlated equilibrium in $\Gamma$. Thus, $SE_{k}(\Gamma_{d,asyn})\subseteq\mathcal{P}(CE_{k}(\Gamma))$ and $SE_{k}^{S}(\Gamma_{d,asyn})\subseteq\mathcal{P}(CE_{k}^{S}(\Gamma))$, where $\mathcal{P}(S)$ denotes the power set of $S$ (i.e., the set of all subsets of $S$). The next proposition gives a precise description of $SE_{k}(\Gamma_{d,asyn})$ and $SE_{k}^{S}(\Gamma_{d,asyn})$.

###### Proposition 9.

Given a set $S$ of strategies, let $\mathcal{P}_{=}(S)$ be the set of nonempty subsets $S^{\prime}$ of $S$ such that all elements of $S^{\prime}$ give each player the same expected utility (i.e., $u_{i}(\vec{\tau})=u_{i}(\vec{\tau}^{\prime})$ for all players $i$ and all $\vec{\tau},\vec{\tau}^{\prime}\in S^{\prime}$). If $\Gamma=(P,A,U)$ is a normal-form game for $n$ players and $n\geq k$, then $SE_{k}(\Gamma_{d,asyn})=\mathcal{P}_{=}(CE_{k}(\Gamma))$ and $SE_{k}^{S}(\Gamma_{d,asyn})=\mathcal{P}_{=}(CE_{k}^{S}(\Gamma))$.

###### Proof.

In games with a mediator, we write $\sigma_{d}$ to denote a generic strategy for the mediator, and $\vec{\sigma}+\sigma_{d}$ to denote a strategy profile for the players and the mediator. To show that $SE_{k}(\Gamma_{d,asyn})\subseteq\mathcal{P}_{=}(CE_{k}(\Gamma))$ and $SE_{k}^{S}(\Gamma_{d,asyn})\subseteq\mathcal{P}_{=}(CE_{k}^{S}(\Gamma))$, suppose by way of contradiction that some strategy $\vec{\sigma}+\sigma_{d}$ in $SE_{k}(\Gamma_{d,asyn})$ (resp., $SE_{k}^{S}(\Gamma_{d,asyn})$) induces a set $S$ of strategies such that, for some player $i$, there exist two strategies $\vec{\tau},\vec{\tau}^{\prime}\in S$, induced by schedulers $\sigma_{e}$ and $\sigma_{e}^{\prime}$, respectively, such that $u_{i}(\vec{\tau})<u_{i}(\vec{\tau}^{\prime})$. Consider a scheduler $\sigma_{e}^{\prime\prime}$ that does the following: it first schedules player $i$.
If $i$ sends a message to itself when it is first scheduled, then $\sigma^{\prime\prime}_{e}$ acts like $\sigma^{\prime}_{e}$, and otherwise it acts like $\sigma_{e}$. By construction, when the scheduler is $\sigma^{\prime\prime}_{e}$, $i$ gains by deviating from $\sigma_{i}$ and sending a message to itself when it is first scheduled. (This construction will not work if $\sigma_{i}$ already requires $i$ to send a message to itself; in this case, we construct $\sigma^{\prime\prime}_{e}$ so that the signal from $i$ to the scheduler is encoded differently, for example, by $i$ sending two messages to itself.) This shows that $SE_{k}(\Gamma_{d,asyn})\subseteq\mathcal{P}_{=}(CE_{k}(\Gamma))$ and $SE_{k}^{S}(\Gamma_{d,asyn})\subseteq\mathcal{P}_{=}(CE_{k}^{S}(\Gamma))$. To prove the opposite inclusions, note that since we consider only equilibria with rational probabilities, the set $CE_{k}(\Gamma)$ (resp., $CE_{k}^{S}(\Gamma)$) is countable, as is any subset $S\subseteq CE_{k}(\Gamma)$ (resp., any subset $S\subseteq CE_{k}^{S}(\Gamma)$). Given $S=\\{\vec{\tau}^{1},\vec{\tau}^{2},\ldots\\}$ such that, for all $j$, $l$, and all players $i$, $u_{i}(\vec{\tau}^{j})=u_{i}(\vec{\tau}^{l})$, consider the following strategy $\vec{\sigma}+\sigma_{d}$. Each player sends an empty message to the mediator when it is first scheduled. Let $N$ be the number of times that the mediator has been scheduled before receiving a message from some player. The mediator samples an action profile $\vec{a}$ from $\vec{\tau}^{N}$ (or from $\vec{\tau}^{1}$ if $S$ is finite and has fewer than $N$ elements), and sends $a_{i}$ to each player $i$. Players play whatever they receive from the mediator. It is straightforward to check that for each strategy $\vec{\tau}$ in $S$, there is a scheduler $\sigma_{e}$ that induces $\vec{\tau}$ when the players and the mediator play $\vec{\sigma}+\sigma_{d}$.
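The mediator's indexing rule used in this construction can be sketched as follows (the list encoding of $S$ is a stand-in for the actual strategies, not notation from the paper):

```python
def mediator_choice(S, times_scheduled):
    """S = [tau^1, tau^2, ...]; times_scheduled = N, the number of times the
    scheduler activated the mediator before the first player message arrived.
    The scheduler thereby selects which element of S gets implemented."""
    N = times_scheduled
    return S[N - 1] if 1 <= N <= len(S) else S[0]  # fall back to tau^1

S = ["tau1", "tau2", "tau3"]
assert mediator_choice(S, 2) == "tau2"   # scheduled twice before first message
assert mediator_choice(S, 7) == "tau1"   # S has fewer than N elements
```

Since every element of $S$ gives each player the same expected utility, no player cares which $N$ the scheduler produces, which is exactly clause (b) in the argument that follows.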
To check that $\vec{\sigma}+\sigma_{d}$ is indeed a $k$-resilient (resp., strongly $k$-resilient) equilibrium of $\Gamma_{d,asyn}$, note that all strategies in $S$ are $k$-resilient (resp., strongly $k$-resilient) correlated equilibria of $\Gamma$ that give each player the same utility, and thus (a) after determining the number of times $t$ that the mediator is scheduled before receiving the first message, no coalition of $k$ or fewer players can benefit from playing an action profile different from the one suggested by the mediator, and (b) the expected utility of each player is independent of $t$. This shows that $\vec{\sigma}+\sigma_{d}$ is indeed a $k$-resilient (resp., strongly $k$-resilient) equilibrium. ∎

Theorem 2 and Proposition 9 together imply:

###### Corollary 3.

If $\Gamma=(P,A,U)$ is a normal-form game for $n$ players and $n>4k$, then $SE_{k}(\Gamma_{\mathit{CT},asyn})=\mathcal{P}_{=}(CE_{k}(\Gamma))$ and $SE_{k}^{S}(\Gamma_{\mathit{CT},asyn})=\mathcal{P}_{=}(CE_{k}^{S}(\Gamma))$ with both the default-move and AH approaches.

For Bayesian games, we do not believe that there is a simple description of $SE_{k}(\Gamma_{d,asyn})$. To understand why, note that a key step in the proofs of Theorems 1 and 2 is for players to send their types to the mediator. In asynchronous systems, the mediator cannot distinguish between players that don’t send their type to the mediator and players that send it but whose message is delayed by the scheduler. Nevertheless, if players can somehow punish those that never send their type to the mediator, we can guarantee that it is optimal for all players to send a type to the mediator, and thus we can extend the constructions used in the proofs of Propositions 7 and 9 to describe $SE_{k}(\Gamma_{d,asyn})$ and $SE_{k}^{S}(\Gamma_{d,asyn})$.

###### Definition 10 (Ben-Porath [2003]; Abraham et al. [2006]).
If $\Gamma=(P,T,q,A,U)$ is a Bayesian game for $n$ players and $\mu\in Com_{k}(\Gamma)$ or $\mu\in Com_{k}^{S}(\Gamma)$, then a _$k$-punishment equilibrium_ with respect to $\mu$ is a $k$-resilient Bayesian Nash equilibrium $\vec{\tau}$ such that $u_{i}(\mu)>u_{i}(\vec{\tau})$ for all players $i$. Intuitively, a $k$-punishment equilibrium with respect to $\mu$ is a $k$-resilient Bayesian Nash equilibrium which, if played by at least $n-k$ players, results in all players being worse off than they would be with $\mu$. If a $k$-punishment equilibrium exists, then we have the following.

###### Proposition 10.

If $\Gamma=(P,T,q,A,U)$ is a Bayesian game, $S$ is a subset of $k$-resilient (resp., strongly $k$-resilient) communication equilibria of $\Gamma$ such that, for all players $i$ and all types $t_{i}$ of $i$, the expected utility of $i$ given $t_{i}$ is the same for all $\mu$ in $S$ (i.e., $u_{i}(\mu\mid t_{i})=u_{i}(\mu^{\prime}\mid t_{i})$ for all $\mu,\mu^{\prime}\in S$), and there exists a $k$-punishment equilibrium $\vec{\tau}$ with respect to some $\mu$ in $S$, then $S\in SE_{k}(\Gamma_{d,asyn})$ (resp., $S\in SE_{k}^{S}(\Gamma_{d,asyn})$) with the AH approach, or with the default-move approach if, for each player $i$, $i$’s default move is $\tau_{i}$. Note that if $\vec{\tau}$ is a $k$-punishment equilibrium with respect to some $\mu$ in $S$, then it is a $k$-punishment equilibrium with respect to all $\mu\in S$, since all $\mu\in S$ give each player the same utility.

###### Proof.

This proof follows the same lines as the proof of Proposition 9, the only difference being that the players must tell the mediator their type at the beginning of the game.
The punishment strategy is needed to force players to report their true type; if a player does not report a type, the remaining players play their part of the punishment strategy (note that in Proposition 9, a punishment strategy is not needed, since players have no types and thus the mediator requires no inputs). Moreover, the reported type must be the true type, since all strategies in $S$ are $k$-resilient communication equilibria of $\Gamma$. Since we are considering only distributions with rational probabilities, the set $S$ is countable. Let $S=\\{\mu^{1},\mu^{2},\ldots\\}$. Consider a strategy $\vec{\sigma}+\sigma_{d}$ in which each player sends its type to the mediator when it is first scheduled and the mediator waits until it receives the type of each player. Let $N$ be the number of times that the mediator is scheduled before receiving all the types. The mediator samples an action profile $\vec{a}$ from $\mu^{N}$ (or from $\mu^{1}$ if $S$ is finite and has fewer than $N$ elements), and sends $a_{i}$ to each player $i$. If a player sent its true type to the mediator, it plays whatever it receives from the mediator. Otherwise, it plays the optimal action conditioned on what it sent and what it receives from the mediator. If player $i$ never receives an action, it plays $\tau_{i}$ (either because it is the default action or because $i$ leaves it in its will). As in the proof of Proposition 9, it is easy to check that $\vec{\sigma}$ induces $S$. We next show that $\vec{\sigma}$ is a $k$-resilient (resp., strongly $k$-resilient) sequential equilibrium for some belief system $b$. Consider a sequence of strategy profiles $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ such that, in $\vec{\sigma}^{N}$, each player $i$ acts as follows. The first time $i$ is scheduled, it sends its type $t_{i}$ to the mediator with probability $1-1/N$ (instead of sending its type to the mediator with probability 1, as in $\vec{\sigma}$).
Moreover, each time it is scheduled, $i$ sends a random message to each other player $j$ with probability $1/N$. Whenever a player $i$ receives an action $a_{i}$ from the mediator, it plays $a_{i}$. Otherwise, it plays $\tau_{i}(t_{i})$. By construction, the strategy profiles $\vec{\sigma}^{1}+\sigma_{d},\vec{\sigma}^{2}+\sigma_{d},\ldots$ are completely mixed and converge to $\vec{\sigma}$. Let $b$ be the belief system induced by $\vec{\sigma}^{1}+\sigma_{d},\vec{\sigma}^{2}+\sigma_{d},\ldots$. According to $b$, since all messages received from other players are randomly generated, they can be ignored. Thus, $i$’s utility depends only on the messages received from the mediator. If player $i$ receives a message from the mediator, then it must be that every player sent its type and therefore, if $i$ sent its true type, it is optimal for $i$ to play the action received. If $i$ did not report its true type, by construction $i$ also plays its optimal action. On the other hand, if $i$ never receives a message from the mediator, then according to its beliefs (as characterized by $b$), it must be that no one else got a message and thus everyone else is going to play their part of $\vec{\tau}$. Since $\vec{\tau}$ is a $k$-resilient Nash equilibrium, playing $\tau_{i}$ is optimal. This shows that $(\vec{\sigma},b)$ is a $k$-resilient (resp., strongly $k$-resilient) sequential equilibrium. ∎

Proposition 10 implies the following:

###### Corollary 4.

If $\Gamma=(P,T,q,A,U)$ is a Bayesian game and $\mu$ is a $k$-resilient (resp., strongly $k$-resilient) communication equilibrium of $\Gamma$ such that there exists a $k$-punishment equilibrium $\vec{\tau}$ with respect to $\mu$, then $\\{\mu\\}\in SE_{k}(\Gamma_{d,asyn})$ (resp., $\\{\mu\\}\in SE_{k}^{S}(\Gamma_{d,asyn})$) with the AH approach, or with the default-move approach if the default move is $\vec{\tau}$.
## 7 Extending the set of equilibria Up to now we have considered only distributions over action profiles that involve rational probabilities. We can extend our results, to some extent, to distributions described by real numbers. Specifically, suppose that players can send messages and operate with real numbers, and that an equilibrium $e=\sum_{i}\lambda_{i}e_{i}$ is a convex combination of countably many rational equilibria $e_{i}$. If players could jointly sample a real number $r\in[0,1)$ uniformly at random, they could easily implement $e$: after sampling $r\in[0,1)$ uniformly at random, they play their part of the rational equilibrium $e_{m}$ such that $\sum_{i=1}^{m-1}\lambda_{i}\leq r<\sum_{i=1}^{m}\lambda_{i}$. (Our earlier results show that $e_{m}$ can be implemented.) Note that this procedure guarantees that players implement $e_{m}$ with probability $\lambda_{m}$, as desired. The following is a $k$-resilient strategy $\vec{\tau}$ that allows players to jointly sample $r$ in the synchronous setting if $n>3k$: 1. Each player privately generates a random number $r_{i}\in[0,1)$ and broadcasts it using Bracha’s $k$-resilient broadcast protocol Bracha [1987]. If $i$ doesn’t broadcast a number, $r_{i}$ is assumed to be $0$. 2. Players take $r:=\mathrm{frac}(r_{1}+\ldots+r_{n})$, where $\mathrm{frac}(x)$ is the fractional part of $x$. In the asynchronous setting, this process is harder because players cannot distinguish players that didn’t broadcast a value from players that are being delayed by the scheduler. Moreover, players cannot agree on using only a subset $C$ of the values shared, since players might want to influence the computation of $C$ in such a way that the number $r$ chosen is beneficial for them. However, if $n>4k$, each player can generate $r_{i}$ as in the synchronous setting, and then compute $r$ using a $k$-resilient secure computation Ben-Or et al. [1993]. 
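The two arithmetic steps above, the $\mathrm{frac}$ trick and the cumulative-sum lookup of $e_{m}$, can be sketched directly (a toy transcription with names of our choosing; the broadcast protocol and the secure computation are elided):

```python
def frac(x):
    """Fractional part of x, as in the text: frac(r_1 + ... + r_n)."""
    return x - int(x)

def joint_sample(contributions):
    """r = frac(r_1 + ... + r_n).  If at least one honest contribution is
    uniform on [0,1), the sum's fractional part is uniform on [0,1)
    regardless of what the other (possibly adversarial) contributions are."""
    return frac(sum(contributions))

def select_equilibrium(lambdas, r):
    """Return the index m with sum_{i<m} lambda_i <= r < sum_{i<=m} lambda_i,
    so that e_m is implemented with probability lambda_m."""
    acc = 0.0
    for m, lam in enumerate(lambdas, start=1):
        acc += lam
        if r < acc:
            return m
    return len(lambdas)  # guard against floating-point round-off
```

For instance, with weights $(\lambda_{1},\lambda_{2},\lambda_{3})=(0.5,0.3,0.2)$ a draw of $r=0.6$ falls in the second interval, so the players would implement $e_{2}$.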
## 8 Conclusion We have shown that, for all Bayesian games $\Gamma$ with $n$ players, all $k$-resilient sequential equilibria in $\Gamma_{d}$ can also be implemented in $\Gamma_{\mathit{CT}}$ if $n>3k$ in the synchronous setting or $n>4k$ in the asynchronous setting. These results are optimal since it follows from Abraham et al. [2008b] and Geffner and Halpern [2023] that if $n\leq 3k$ in the synchronous case or $n\leq 4k$ in the asynchronous case, there are $k$-resilient sequential equilibria in $\Gamma_{d}$ that cannot be implemented even with a $k$-resilient Nash equilibrium. Our results also allow us to characterize the set $SE_{k}(\Gamma_{\mathit{CT}})$ of all possible strategies in $\Gamma$ induced by $k$-resilient sequential equilibria of $\Gamma_{\mathit{CT}}$, generalizing results of Gerardi [2004]. If players have perfect information (i.e., $\Gamma$ is a normal-form game), then $SE_{k}(\Gamma_{\mathit{CT}})$ is the set of $k$-resilient correlated equilibria of $\Gamma$; if $\Gamma$ is a Bayesian game, then $SE_{k}(\Gamma_{\mathit{CT}})$ is the set of $k$-resilient communication equilibria. In the asynchronous setting, we show that $SE_{k}(\Gamma_{\mathit{CT}})=SE_{k}(\Gamma_{d})$, but characterizing the set $SE_{k}(\Gamma_{d})$ of $k$-resilient sequential equilibria with a trusted mediator is still an open problem. However, if $\Gamma$ has a $k$-punishment equilibrium, then $SE_{k}(\Gamma_{d})$ is also the set of $k$-resilient communication equilibria of $\Gamma$. If $\Gamma$ has a $k$-punishment equilibrium, Abraham, Dolev, Gonen, and Halpern [2006] and Abraham, Dolev, Geffner, and Halpern [2019b] showed that all $k$-resilient Nash equilibria in $\Gamma_{d}$ could be implemented in $\Gamma_{\mathit{CT}}$ if $n>2k$ in the synchronous setting or $n>3k$ in the asynchronous setting, respectively. The question of whether these results can be extended to $k$-resilient sequential equilibria is still open. 
## Appendix A Game theory: basic definitions In this section, we review the basic concepts and definitions that are needed for this paper. We begin by presenting several types of games that are common in the literature, each of them with its own settings. For each of these games, we also introduce some of the _solution concepts_ that we are interested in. A solution concept describes what rational agents should do in a game, according to some theory of rationality. Typically, a solution concept determines for each game a set of _strategy profiles_ , where a strategy profile $\vec{s}=(s_{1},\ldots,s_{n})$ is a tuple consisting of one strategy for each agent, and a strategy is a description of how each player should move/act in all “situations” in the given game. (As we shall see, what counts as a situation depends on the type of game we consider.) ### A.1 Nash, Correlated, and Sequential Equilibrium A normal-form game $\Gamma$ is a tuple $(P,A,U)$, where $P=\\{1,\ldots,n\\}$ is the set of _players_ , $A=A_{1}\times\cdots\times A_{n}$, where $A_{i}$ is the set of possible actions for player $i\in P$, and $U=(u_{1},\ldots,u_{n})$ is an $n$-tuple of _utility_ functions $u_{i}:A\rightarrow\mathbb{R}$, one for each player $i\in P$. A _pure strategy profile_ $\vec{a}$ is an $n$-tuple of actions $(a_{1},\ldots,a_{n})$, with $a_{i}\in A_{i}$. A _mixed strategy_ for player $i$ is an element of $\Delta(A_{i})$, the set of probability distributions on $A_{i}$. We extend $u_{i}$ to mixed strategy profiles $\sigma=(\sigma_{1},\ldots,\sigma_{n})$ by defining $u_{i}(\sigma)=\sum_{(a_{1},\ldots,a_{n})\in A}\sigma_{1}(a_{1})\ldots\sigma_{n}(a_{n})u_{i}(a_{1},\ldots,a_{n})$: that is, the sum over all pure strategy profiles $\vec{a}$ of the probability of playing $\vec{a}$ according to $\sigma$ times the utility of $\vec{a}$ to $i$. ###### Definition 11. 
In a normal-form game $\Gamma$, a (mixed) strategy profile $\vec{\sigma}=(\sigma_{1},\ldots,\sigma_{n})$ is a _Nash equilibrium_ if, for all $i\in P$ and strategies $\sigma_{i}^{\prime}\in\Delta(A_{i})$, $u_{i}(\sigma_{i},\vec{\sigma}_{-i})\geq u_{i}(\sigma^{\prime}_{i},\vec{\sigma}_{-i}).$ ###### Definition 12. In a normal-form game $\Gamma=(P,A,U)$, a distribution $p\in\Delta(A)$ is a _correlated equilibrium_ (CE) if, for all $i\in P$ and all $a^{\prime}_{i},a^{\prime\prime}_{i}\in A_{i}$ such that $p(a^{\prime}_{i})>0$, $\sum_{\vec{a}\in A:a_{i}=a^{\prime}_{i}}u_{i}(a^{\prime}_{i},\vec{a}_{-i})p(\vec{a}\mid a_{i}=a^{\prime}_{i})\geq\sum_{\vec{a}\in A:a_{i}=a^{\prime}_{i}}u_{i}(a_{i}^{\prime\prime},\vec{a}_{-i})p(\vec{a}\mid a_{i}=a^{\prime}_{i}).$ Intuitively, for a distribution $p$ over action profiles to be a correlated equilibrium, it cannot be worthwhile for player $i$ to deviate from the recommended action $a^{\prime}_{i}$ to some $a^{\prime\prime}_{i}$ if $i$ knows only that $\vec{a}$ is sampled from distribution $p$ (and knows its own component $a^{\prime}_{i}$). Note that all Nash equilibria are correlated equilibria as well. An _extensive-form game_ $\Gamma$ is a tuple $(P,G,M,U,R)$, where * • $P=\\{1,\ldots,n\\}$ is the set of players. * • $G$ is a game tree in which each node represents a possible state of the game and each edge going out of a node represents an action that a player can perform in the state corresponding to that node. (In the sequel, we refer to the edges as actions.) * • $M$ is a function that associates with each non-leaf node of $G$ (we let $G^{\circ}$ denote the non-leaf nodes) a player in $P$ that describes which player moves at each of these nodes; if $M(v)=i$, then all the edges going out of node $v$ must correspond to actions that player $i$ can play. * • $U=(u_{1},\ldots,u_{n})$ is an $n$-tuple of utility functions from the leaves of $G$ to $\mathbb{R}$. * • $R=(\sim_{1},\ldots,\sim_{n})$ is an $n$-tuple of equivalence relations on the nodes of $G^{\circ}$, one for each player. 
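Returning to Definition 12, the correlated-equilibrium condition can be checked mechanically on a small finite game. The sketch below uses an encoding of our own choosing (profiles as tuples, payoffs as dictionaries) and compares unnormalized conditional sums, which is equivalent since both sides share the same normalizer $p(a^{\prime}_{i})$:

```python
def is_correlated_eq(p, utils, action_sets, tol=1e-9):
    """Check Definition 12 on a finite game (illustrative encoding).

    `p`: dict mapping action profiles (tuples) to probabilities.
    `utils[i]`: callable mapping a profile to player i's payoff.
    `action_sets[i]`: the full action set A_i (deviations range over it).
    """
    for i, A_i in enumerate(action_sets):
        for a1 in A_i:  # the recommended action a'_i
            cond = {a: q for a, q in p.items() if a[i] == a1 and q > 0}
            if not cond:
                continue  # p(a'_i) = 0: no constraint
            obey = sum(q * utils[i](a) for a, q in cond.items())
            for a2 in A_i:  # the deviation a''_i
                dev = sum(q * utils[i](a[:i] + (a2,) + a[i + 1:])
                          for a, q in cond.items())
                if dev > obey + tol:
                    return False
    return True
```

As a sanity check, the classic "chicken" distribution that mixes uniformly over $(D,C)$, $(C,D)$, and $(C,C)$ passes, while the pure profile $(D,D)$ does not.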
Intuitively, if $v\sim_{i}v^{\prime}$, then nodes $v$ and $v^{\prime}$ are indistinguishable to player $i$. Clearly, if $v\sim_{i}v^{\prime}$ for some $i\in[n]$, then the set of possible actions that can be performed at $v$ and $v^{\prime}$ must be identical and $M(v)=M(v^{\prime})$ (for otherwise, $i$ would have a basis for distinguishing $v$ and $v^{\prime}$). The equivalence relation $\sim_{i}$ induces a partition of $G^{\circ}$ into equivalence classes called _information sets_ (of player $i$). It is more standard in game theory to define $\sim_{i}$ as an equivalence relation only on the nodes where $i$ moves. We define it on all non-leaf nodes because if $K$ is a subset of players, we are then able to define $\sim_{K}$ as the intersection of $\sim_{i}$ for $i\in K$. Intuitively, $v\sim_{K}v^{\prime}$ if the agents in $K$ cannot distinguish $v$ from $v^{\prime}$, even if they pool all their information together. We use the equivalence relation $\sim_{K}$ when defining $k$-resilient sequential equilibrium. A _(pure) strategy_ $s_{i}$ for player $i$ in an extensive-form game $\Gamma$ is a function that maps each node $v$ where $i$ moves to an action that can be performed at $v$, with the constraint that if two nodes are indistinguishable to $i$, then $s_{i}$ must choose the same action on both. More precisely, if $v\sim_{i}v^{\prime}$ then $s_{i}(v)=s_{i}(v^{\prime})$. Each pure strategy profile $\vec{s}$ induces a unique path from the root of the tree to some leaf $\ell\in G\setminus G^{\circ}$ where, given node $v$ on the path, the next node in the path is the node reached from $v$ by the edge (action) $s_{M(v)}(v)$. The payoff of player $i$ given pure strategy profile $\vec{s}$ is $u_{i}(\ell)$, although we also write $u_{i}(\vec{s})$ for simplicity. A _behavioral strategy_ for player $i$ allows randomization. 
More precisely, a behavioral strategy is a function $s_{i}$ that maps each node $v$ such that $M(v)=i$ to a distribution over $i$’s possible actions at $v$ (again, with the requirement that $s_{i}(v)=s_{i}(v^{\prime})$ if $v\sim_{i}v^{\prime}$). We denote by $u_{i}(\vec{\sigma})$ the expected payoff of player $i$ if players play (behavioral) strategy profile $\vec{\sigma}$. Nash equilibrium is defined for extensive-form games just as it is in strategic-form games. Given a NE $\vec{\sigma}$, the _equilibrium path_ consists of the nodes of $G$ that can be reached with positive probability using $\vec{\sigma}$. It is well known that in extensive-form games, Nash equilibrium does not always describe what intuitively would be a reasonable play. For instance, consider the following game for two players in which $(p_{1},p_{2})$ describes the payoff of players 1 and 2, respectively: In this case, the strategy profile in which player 1 plays $B$ at node $a$ and player 2 plays $L$ at node $b$ and $R$ at $c$ is a Nash equilibrium. However, the reason that it is better for player 1 to play $B$ is that if she plays $A$, then player 2 would play $L$ and they would both get a utility of $-5$. This means that player 1 is being influenced by an irrational threat, since if 1 plays $A$, it is in player 2’s best interest to play $R$ instead of $L$. In order to avoid these situations, we can extend the notion of Nash equilibrium to _subgame-perfect Nash equilibrium_ , in which, roughly speaking, the strategy must be a Nash equilibrium, not only in $\Gamma$, but in all of the subgames of $\Gamma$ as well. In the example above, the only subgame-perfect equilibrium is the one given by $\sigma_{1}(a)=A$, $\sigma_{2}(b)=R$ and $\sigma_{2}(c)=R$. Unfortunately, subgame-perfect equilibrium may not be well-defined if players have nontrivial information sets. Consider the following game, where $b$ and $b^{\prime}$ are in the same information set for player 2, as are $c$ and $c^{\prime}$. 
At node $b^{\prime}$, player 2 would play $L$; at $b$, it would play $R$. For player 2 to decide its move, it must have a belief about the probability of being at each node of the information set, and this belief must be consistent with the strategy $\vec{\sigma}$ being played. For example, if $\sigma_{1}(a)=B$ and player 2 is in information set $\\{b,b^{\prime}\\}$, then 2 would be sure that the true node is $b^{\prime}$. We capture the players’ beliefs by using a _belief system_ $b$, which is a function from information sets $I$ to probability distributions over the nodes in $I$. Intuitively, if $I$ is an information set for player $i$, then $b(I)$ represents how likely it is for $i$ to be at each of the nodes of $I$. Note that if $\vec{\sigma}$ is _completely mixed_ , which means that all actions are taken with positive probability, then $b$ is uniquely determined by Bayes’ rule. More generally, we say that a belief system $b$ is _consistent_ with a strategy $\vec{\sigma}$ if there exists a sequence $\vec{\sigma}^{1},\vec{\sigma}^{2},\ldots$ of completely-mixed strategies that converges to $\vec{\sigma}$ such that the beliefs induced by Bayes’ rule converge to $b$. Given an extensive-form game $\Gamma=(P,G,M,U,R)$, a strategy profile $\vec{\sigma}$, a belief system $b$, and an information set $I$ for player $i$, let $u_{i}(\vec{\sigma},I,b)$ denote $i$’s expected utility conditional on being in information set $I$ and having belief system $b$, if $\vec{\sigma}$ is played. ###### Definition 13 (Kreps and Wilson [1982]). 
A pair $(\vec{\sigma},b)$ consisting of a strategy profile $\vec{\sigma}$ and a belief system $b$ consistent with $\vec{\sigma}$ is a _sequential equilibrium_ if, for all players $i$, all information sets $I$ of player $i$, and all strategies $\tau_{i}$ for player $i$, $u_{i}(\vec{\sigma},I,b)\geq u_{i}((\tau_{i},\vec{\sigma}_{-i}),I,b).$ Even though it is standard to define a sequential equilibrium as a pair $(\vec{\sigma},b)$ consisting of a strategy profile and a belief system, for convenience we also say that a strategy profile $\vec{\sigma}$ is a sequential equilibrium if there exists a belief system $b$ such that $(\vec{\sigma},b)$ is a sequential equilibrium. ### A.2 Bayesian games In all of the previous definitions, the utility of each player is assumed to be common knowledge (_perfect information_). However, this is not always the case. Bayesian games capture this by assuming that each player $i$ has a type $t_{i}\in T_{i}$ sampled from a distribution $q\in\Delta(T)$, where $T=T_{1}\times\cdots\times T_{n}$, and that the utility $u_{i}$ of $i$ is not only a function of the action profile being played, but also of its type $t_{i}$. Formally, a _Bayesian game_ is a tuple $(P,T,q,A,U)$, where, as in normal-form games, $P$, $A$, and $U$ are the set of players, their actions, and their utility functions, respectively; $T$ is the set of possible type profiles, and $q$ is a distribution in $\Delta(T)$. A _strategy_ in a Bayesian game for player $i$ is a map $\mu_{i}:T_{i}\rightarrow\Delta(A_{i})$. Intuitively, a strategy in a Bayesian game tells player $i$ how to choose its action given its type. 
Since the distribution $q$ is common knowledge, given a strategy profile $\vec{\mu}$ in $\Gamma$, the expected utility of a player $i$ is $u_{i}(\vec{\mu})=\sum_{t_{i}\in T_{i}}q(t_{i})\sum_{\vec{t}}q(\vec{t}\mid t_{i})u_{i}(\vec{\mu}(\vec{t})),$ (1) where $u_{i}(\vec{\mu}(\vec{t}))$ denotes the expected utility of player $i$ when an action profile is chosen according to $\vec{\mu}(\vec{t})$. This allows us to define Bayesian Nash equilibrium as follows: ###### Definition 14. In a Bayesian Game $\Gamma=(P,T,q,A,U)$, a strategy profile $\vec{\mu}:=(\mu_{1},\ldots,\mu_{n})$ is a _(Bayesian) Nash equilibrium_ if, for all players $i$ and all strategies $\mu^{\prime}_{i}$ for $i$, $u_{i}(\vec{\mu})\geq u_{i}(\vec{\mu}_{-i},\mu^{\prime}_{i}).$ We can generalize correlated equilibrium to Bayesian games as follows. We can view a correlated equilibrium as a distribution $p\in\Delta(A)$ such that if a trusted mediator samples an action profile $\vec{a}$ from $p$ and sends action $a_{i}$ to each player $i$, it is always better for $i$ to play $a_{i}$ rather than something else. In a Bayesian game, the mediator instead samples the action profile from a distribution that depends on the type profile. More precisely, suppose that players send their types to a trusted mediator, the mediator samples an action profile $\vec{a}$ from a distribution $\mu(\vec{t})$ that depends on the type profile $\vec{t}$ received, and then sends action $a_{i}$ to each player $i$. We say that $\mu$ is a _communication equilibrium_ if it is optimal for the players to (a) tell their true type to the mediator, and (b) play the action sent by the mediator. The following definitions make this precise. ###### Definition 15. 
(Forges [1986]; Myerson [1986]) A correlated strategy profile $\mu:T\rightarrow\Delta(A)$ is a _communication equilibrium_ of Bayesian game $\Gamma=(P,T,q,A,U)$ if, for all $i\in P$, all $t_{i}\in T_{i}$, all $\psi:T_{i}\rightarrow T_{i}$ and all $\varphi:A_{i}\rightarrow A_{i}$, we have that $\sum_{\vec{t}_{-i}\in T_{-i}}\sum_{\vec{a}\in A}q(\vec{t}_{-i},t_{i})\mu(\vec{a}\mid\vec{t}_{-i},t_{i})u_{i}(\vec{t}_{-i},t_{i},\vec{a})\geq$ $\sum_{\vec{t}_{-i}\in T_{-i}}\sum_{\vec{a}\in A}q(\vec{t}_{-i},t_{i})\mu(\vec{a}\mid\vec{t}_{-i},\psi(t_{i}))u_{i}(\vec{t}_{-i},t_{i},\vec{a}_{-i},\varphi(a_{i})).$ We can combine the notions of extensive-form game and Bayesian game in the obvious way to get _extensive-form Bayesian games_ : we start with an extensive-form game, add a type space $T$ and a commonly known distribution $q$ on $T$, and then have the utility function depend on the type profile as well as the leaf reached. We leave formal details to the reader. ## Appendix B VSS and CC: a review In this section, we present the two main primitives used to construct the strategies required for the proof of Theorem 1. The purpose of these primitives is to allow agents to distribute information among themselves (verifiable secret sharing), and to compute any relevant data from the information shared without learning anything besides the data itself (circuit computation). Intuitively, verifiable secret sharing is designed to distribute among the players a given piece of information $y$ in some finite field $\mathbb{F}_{q}$ in such a way that each agent $i$ knows only a part $y_{i}$ of the secret that does not reveal any information about the secret by itself. In fact, with $k$-resilient verifiable secret sharing, even if $k$ agents collude and put their information together, they still cannot deduce anything about the shared secret, but any subset of $k+1$ agents can compute the secret with no error given their pieces of information. 
To achieve these properties, the idea is that the agent $i$ that knows the secret $y$ shares it using Shamir’s secret-sharing scheme Shamir [1979]: $i$ gives each agent $j$ a value $y_{j}$ such that there exists a polynomial $p$ of degree $k$ such that $p(j)=y_{j}$ for each $j$ and $p(0)=y$. It is easy to check that, if $p$ is chosen uniformly at random, then the values $y_{i}$ received by any subset of $k$ agents are uniformly distributed in $\mathbb{F}_{q}^{k}$, and thus convey no information about $y$. However, any subset of $k+1$ agents can reconstruct the secret $y$ by interpolating their $k+1$ points. Note that, since up to $k$ agents may misreport their values, players cannot simply reconstruct the secret using the first $k+1$ points they receive. In fact, they must wait until receiving at least $2k+1$ points that lie on the same polynomial $p$ of degree $k$. In that case, even if $k$ players misreport their values, at least $k+1$ of the points on $p$ are guaranteed to be reported by honest players, which ensures that $p$ defines the correct secret. At a high level, this is why this primitive requires that the number of players $n$ satisfies $n>3k$: even if $k$ players misreport their values, at least $2k+1$ of those values are shared by honest players, and thus all players are guaranteed to eventually find $2k+1$ points that lie on the same polynomial of degree $k$, which allows them to reconstruct the secret correctly. In order to share a secret using Shamir’s secret-sharing scheme, it is unfortunately not enough to have the sender choose a polynomial $p$ uniformly at random such that $p(0)=y$ and send each agent $i$ the value of $p(i)$. This protocol is vulnerable when the sender is malicious: it could generate $n$ points that do not lie on the same polynomial, and agents could never reconstruct the secret. 
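Plain Shamir sharing itself (without any of the verification machinery that makes it *verifiable*) is easy to sketch. The toy code below, with names and a fixed prime field of our choosing, shows sharing via a random degree-$k$ polynomial and reconstruction via Lagrange interpolation at $x=0$:

```python
import random

P = 2**31 - 1  # a prime, so arithmetic below is over the field F_P

def share(secret, n, k, rng=None):
    """Shamir shares: pick a random polynomial p of degree k with
    p(0) = secret, and give player j the value p(j).  Any k shares are
    jointly uniform and reveal nothing; any k+1 determine the secret."""
    rng = rng or random.Random(0)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k)]
    return [sum(c * pow(j, e, P) for e, c in enumerate(coeffs)) % P
            for j in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from k+1 points (j, y_j) over F_P."""
    secret = 0
    for j, y in points:
        num, den = 1, 1
        for m, _ in points:
            if m != j:
                num = num * (-m) % P
                den = den * (j - m) % P
        # pow(den, P-2, P) is den's inverse mod P (Fermat's little theorem)
        secret = (secret + y * num * pow(den, P - 2, P)) % P
    return secret
```

Any $k+1$ of the $n$ shares reconstruct the same secret, which is exactly the redundancy that the $2k+1$-matching-points rule in the text exploits.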
In order to guarantee that the shared points define some secret, players have to follow a non-trivial protocol that checks the validity of the points without leaking information about the underlying secret. In the synchronous setting, Ben-Or, Goldwasser, and Wigderson [1988] provide a $k$-resilient VSS protocol if $n>3k$; in the asynchronous setting, Ben-Or, Canetti, and Goldreich [1993] provide a $k$-resilient VSS protocol if $n>4k$. Intuitively, in asynchronous systems, $n>4k$ is necessary since $k$ of the $4k+1$ players might be delayed arbitrarily (and are indistinguishable from deviating players that didn’t send a message), and $k$ might send the wrong values, which leaves $2k+1$ points to interpolate without error. A more rigorous description of the properties of VSS can be found in Appendix B.1. In circuit computation, players generate the shares of either the sum or product of two shared secrets without leaking information about the secrets themselves. More precisely, suppose that secrets $y,z\in\mathbb{F}_{q}$ are shared among the players. Then $k$-resilient circuit computation allows each player to compute its share of $y+z$ or $yz$ without leaking any information about $y$ or $z$ to any coalition of at most $k$ players. By successively using this primitive, players can compute the shares of any secret that is the output of a circuit with addition and multiplication gates given the inputs. This, in turn, allows agents to compute the shares of the output of any function whose domain and range are finite, since, without loss of generality, we can then take the domain and range to be $\mathbb{F}_{q}$ for some $q$ sufficiently large, and any function $f:\mathbb{F}_{q}\rightarrow\mathbb{F}_{q}$ can be viewed as a polynomial on $\mathbb{F}_{q}$, and so involves only addition and multiplication. 
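The addition gate is the easy case and can be sketched directly: each player adds its two shares locally, and the results lie on the sum of the two sharing polynomials, whose value at $0$ is $y+z$. (Multiplication is genuinely interactive, since the product polynomial has degree $2k$ and must be re-shared at degree $k$; we do not sketch it.) The encoding below is illustrative, over a fixed prime field:

```python
P = 2**31 - 1  # prime modulus; shares are points on degree-k polynomials over F_P

def poly_shares(coeffs, n):
    """Values p(1), ..., p(n) of the polynomial with the given coefficients;
    p(0) = coeffs[0] is the secret."""
    return [sum(c * pow(j, e, P) for e, c in enumerate(coeffs)) % P
            for j in range(1, n + 1)]

def add_shares(y_shares, z_shares):
    """Addition gate: each player locally adds its shares of y and z.
    The sums are exactly the shares of y+z under the sum polynomial,
    so no communication (and hence no leakage) is involved."""
    return [(a + b) % P for a, b in zip(y_shares, z_shares)]
```

Since coefficients add term by term, the shares of $y+z$ produced this way coincide with a direct sharing under the coefficient-wise sum polynomial.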
An implementation of a $k$-resilient circuit computation protocol is given by Ben-Or, Goldwasser, and Wigderson [1988] in the case of synchronous systems, and by Ben-Or, Canetti, and Goldreich [1993] in the case of asynchronous systems. A more detailed description of the properties of CC can be found in Appendix B.2. ### B.1 Verifiable secret sharing Verifiable secret sharing allows a player (the sender) to securely distribute shares of a private value $v$ among all players. A protocol $\vec{\sigma}$ is a _$k$-resilient implementation of VSS in synchronous systems_ if, for all senders $s\in[n]$, all coalitions $K\subseteq[n]$ of at most $k$ players, and all strategies $\vec{\tau}_{K}$ for players in $K$, the following holds of $(\vec{\sigma}_{-K},\vec{\tau}_{K})$: * (a) All players not in $K$ terminate. * (b) If $s^{i}$ is $i$’s output, then there exists a polynomial $r$ of degree $k$ such that $r(i)=s^{i}$ for all $i\not\in K$. Moreover, if the sender $s$ is not in $K$, $r(0)$ is its input $v$. * (c) If the sender $s$ is not in $K$, the distribution over histories $h_{K}^{v}$ of players in $K$ when $s$ shares $v$ is the same for all $v\in\mathbb{F}_{q}$. Properties (a) and (b) guarantee that, regardless of what a coalition of at most $k$ players does, all honest players terminate and their outputs are consistent even when the sender is deviating from the protocol, which means that even if the sender decides not to share an input or not to send messages at all, all honest players eventually agree on some shares. Property (c) guarantees that no coalition of $k$ players can learn anything about the secret being shared if the sender is not in the coalition. In asynchronous systems, properties (a) and (b) cannot be guaranteed simultaneously, since a player that sends no messages is indistinguishable from a player that is being delayed by the scheduler. In this case we have that: * (a1) If the sender is not in $K$, all players not in $K$ terminate. 
* (a2) If a player $i\not\in K$ terminates, all players not in $K$ terminate. * (b) If $S$ is the subset of players not in $K$ that terminate and $s^{i}$ is the output of player $i\in S$, then there exists a polynomial $r$ of degree $k$ such that $r(i)=s^{i}$ for all $i\in S$. Moreover, if the sender $s$ is not in $K$, $r(0)$ is its input $v$. * (c) If the sender $s$ is not in $K$, then for all schedulers, the distribution over histories $h_{K}^{v}$ of players in $K$ when $s$ shares $v$ is the same for all $v\in\mathbb{F}_{q}$. Note that, in the asynchronous setting, a player might not terminate if the sender is in $K$ and withholds its messages, since such a sender is indistinguishable from one that is merely being delayed by the scheduler.
# Language Guided Adversarial Purification ###### Abstract Adversarial purification using generative models demonstrates strong adversarial defense performance. These methods are classifier- and attack-agnostic, making them versatile but often computationally intensive. Recent strides in diffusion and score networks have improved image generation and, by extension, adversarial purification. Another highly efficient class of adversarial defense methods known as adversarial training requires specific knowledge of attack vectors, forcing models to be trained extensively on adversarial examples. To overcome these limitations, we introduce a new framework, namely Language Guided Adversarial Purification (LGAP), utilizing pre-trained diffusion models and caption generators to defend against adversarial attacks. Given an input image, our method first generates a caption, which is then used to guide the adversarial purification process through a diffusion network. Our approach has been evaluated against strong adversarial attacks, proving its effectiveness in enhancing adversarial robustness. Our results indicate that LGAP outperforms most existing adversarial defense techniques without requiring specialized network training. This underscores the generalizability of models trained on large datasets, highlighting a promising direction for further research. Index Terms— Adversarial purification, Language guidance, Diffusion ## 1 Introduction The use of deep neural networks, especially within the realm of computer vision, has ushered in transformative advancements in various applications. Despite these strides, a consistent vulnerability is the susceptibility of such models to adversarial perturbations [1]. These perturbations, often imperceptible, can fool even the most sophisticated neural networks, causing them to misclassify inputs. 
Addressing this alarming vulnerability has become a research imperative, leading to a rapidly growing body of literature dedicated to understanding and defending against these adversarial threats [2, 3]. Fig. 1: Illustration of LGAP. A pre-trained image-captioning model (BLIP) generates captions for input images, providing a textual representation of the visual content. Leveraging the generated captions, purified images are created via the diffusion model. The red dashed lines represent the adversarial image input, while the green dotted lines indicate the resulting purified image. Historically, adversarial training, introduced by Goodfellow et al. [1], has been posited as an effective defense strategy. This approach, which integrates adversarial examples into the training phase, aims to strengthen models against specific adversarial attacks. However, its efficacy is often limited to the spectrum of attacks encountered during training, thereby leaving models vulnerable to novel adversarial strategies. This constraint underscores the necessity for alternative defensive paradigms. Given their inherent capability to generate or transform data, generative models have recently been explored as potential tools for adversarial purification [4, 5, 6]. Within this domain, diffusion models have emerged as particularly promising candidates. Recent studies, as exemplified by Nie et al. [7] and Carlini et al. [8], have harnessed the potential of score-based and diffusion models towards purification of adversarial samples. Primarily, adversarial purification techniques have focused only on the image modality, despite the promising performance of diffusion models in multi-modal tasks such as text-to-image generation [9]. Thus, in our work, we investigate the impact of language on the robustness of vision models. Our research focuses on a defensive strategy based on vision and language models trained on large datasets. 
By leveraging the capabilities of such models trained jointly on language and vision tasks, we propose a novel framework of Language Guided Adversarial Purification (LGAP), as illustrated in Figure 1. This novel framework, which seamlessly integrates a caption generator and a pre-trained diffusion model with a classifier, leverages the inherent generalizability of these models to purify an adversarial input. To the best of our knowledge, language-based adversarial purification has not been addressed in the literature. We conduct extensive empirical evaluations across benchmark datasets, including ImageNet [10], CIFAR-10 [11] and CIFAR-100 [11]. The results of evaluation against $L_{\infty}$-norm attacks corroborate the robustness of our framework. Notably, on ImageNet, our method achieves better performance than previous techniques. ## 2 Related Works Diffusion models in image generation: The landscape of image generation has been revolutionized by diffusion models. Rooted in the foundational works of Sohl-Dickstein et al. [12] and later extended by Song et al. [13] and Ho et al. [14], these models have exhibited unparalleled prowess in generating high-quality image samples. Song et al. [15] further advanced this domain by combining generative learning mechanisms with stochastic differential equations, thereby broadening the horizon of diffusion models. Language-image pretraining: A significant milestone in deep learning, language-image pretraining bridges the gap between textual and visual data. Pioneering models such as CLIP [16] and BLIP [17] have leveraged vast amounts of text and image data to jointly train vision and language models, demonstrating tremendous progress in multi-modal tasks. Adversarial training: The foundational work of Madry et al. [2] established adversarial training as a robust method for safeguarding neural networks from known adversarial attacks. 
While the effectiveness of the method is well-recognized, its scalability and adaptability have been enhanced through inspirations from metric learning [18] and self-supervised paradigms [19]. However, the computational demands of adversarial training have spurred research into more efficient training methods [20, 21]. Adversarial purification: Generative models have emerged as pioneers in adversarial purification. Initial endeavors, such as those by Samangouei et al. [4], harnessed GANs for purification. Subsequent innovations leaned on energy-based models (EBMs) to refine the purification process using Langevin dynamics [22]. Notably, the intersection of score networks and diffusion models with adversarial purification has been explored recently, with promising results against benchmark adversarial attacks [6, 7]. ## 3 Proposed Method We propose a novel defense strategy against adversarial attacks on classification models by leveraging language guidance in diffusion models for adversarial purification. For a clean sample $\mathbf{x}$ with label $y$, and a target neural network $f_{\boldsymbol{\theta}}$, the adversary aims to produce $\mathbf{x}_{\text{adv}}$ by introducing adversarial perturbations. This results in a prediction $f_{\boldsymbol{\theta}}(\mathbf{x}_{\text{adv}})$ that differs from the original prediction $f_{\boldsymbol{\theta}}(\mathbf{x})=y$. The underlying premise of the proposed method is to preprocess the input $\mathbf{x}$ through a diffusion model conditioned on a caption to remove any adversarial perturbations before feeding it to $f_{\boldsymbol{\theta}}$. We first discuss the caption generation, followed by purification using the diffusion model. ### 3.1 Image captioning For image captioning, we use a caption generator from BLIP [17]. 
BLIP has a multi-modal encoder-decoder architecture consisting of three major components: a unimodal encoder for generating image and text embeddings, an image-grounded text encoder that computes cross-attention and self-attention between the two encodings to give a multimodal representation of the image-text pair, and an image-grounded text decoder that uses causal self-attention to produce the text caption. We use the unimodal encoder and the image-grounded text decoder to generate the captions. Given an input $\mathbf{x}$, the captions are generated as $\text{{Caption}}_{\text{BLIP}}=\text{{BLIP}}(\mathbf{x}).$ We show some sample captions in Figure 2. We can see that the captions for the clean samples (top row) contain the true label. In the second row, adversarial samples are given and the classifier's prediction is incorrect: here, a truck is classified as a ship. However, the BLIP caption still contains the true label truck, even though the caption is not identical to that of the clean sample. Thus, these captions can condition the diffusion model with the true semantics, which can enhance purification of the adversarial images. Next, we discuss the diffusion-based purification.

### 3.2 Diffusion purification process

Latent diffusion process: In a standard diffusion model [14], the diffusion process can be defined as: $\mathbf{x}_{t}=\sqrt{1-\beta_{t}}\cdot\mathbf{x}_{t-1}+\sqrt{\beta_{t}}\cdot\boldsymbol{\epsilon}_{t}$ where $\beta_{t}\in(0,1)$ is the variance schedule, $\mathbf{x}_{t}$ is the noisy sample, and $\boldsymbol{\epsilon}_{t}$ is the noise at time step $t$. In Latent Diffusion Models [9], this process is applied in latent space: $\displaystyle\mathbf{z}_{0}=\mathcal{E}(\mathbf{x})$, $\displaystyle\mathbf{z}_{t}=\sqrt{1-\beta_{t}}\cdot\mathbf{z}_{t-1}+\sqrt{\beta_{t}}\cdot\boldsymbol{\epsilon}_{t}$ where $\mathbf{z}_{0}$ is the latent vector obtained from the encoder $\mathcal{E}$ and $\mathbf{z}_{t}$ is the noisy latent vector at time step $t$.
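The forward recursion above is simple enough to sketch directly. The function below is a plain-Python illustration, not the actual latent-diffusion implementation; the `noise` callable stands in for draws $\boldsymbol{\epsilon}_{t}\sim\mathcal{N}(0,I)$.

```python
import math

def forward_diffusion(z0, betas, noise):
    """Iterate z_t = sqrt(1 - beta_t) * z_{t-1} + sqrt(beta_t) * eps_t
    over the variance schedule `betas`, starting from the latent z0.
    `noise(t, i)` should return the noise draw for step t, component i."""
    z = list(z0)
    for t, beta in enumerate(betas):
        z = [math.sqrt(1.0 - beta) * zi + math.sqrt(beta) * noise(t, i)
             for i, zi in enumerate(z)]
    return z
```

In practice `noise` would be e.g. `lambda t, i: random.gauss(0.0, 1.0)`; note that this parameterisation is variance-preserving, since each step maps unit variance to $(1-\beta_t)\cdot 1+\beta_t=1$.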
Reverse process in latent space: In the reverse process, the aim is to recover $\mathbf{z}_{0}$ from $\mathbf{z}_{T}$ given a sequence of noise terms $\boldsymbol{\epsilon}_{t}$. Mathematically, this is defined as: ${\mathbf{z}_{t}}=g_{\theta}({\mathbf{z}_{t+1}},t,{\boldsymbol{\epsilon}_{t}})$ where ${g_{\theta}}$ is a parameterized model. Additionally, ${g_{\theta}}$ is conditioned on text by augmenting the ${g_{\theta}}$ architecture with cross-attention layers. Since our goal is to leverage the BLIP-generated captions, we condition the diffusion model as: ${\mathbf{z}_{t}}=g_{\theta}({\mathbf{z}_{t+1}},t,{\boldsymbol{\epsilon}_{t}},\mathbf{C})$ where $\mathbf{C}={\tau_{\theta}}(\text{{Caption}}_{\text{BLIP}})$, and $\tau_{\theta}$ is the text encoder. Since BLIP is a powerful model, the likelihood that it correctly identifies the image is high. This gives better guidance to the diffusion model compared to the image-only case.

Final image reconstruction and training: Finally, the reconstructed image $\hat{\mathbf{x}}$ can be obtained from the reconstructed latent representation ${\mathbf{z}}_{0}$ as $\hat{\mathbf{x}}=\mathcal{D}({\mathbf{z}}_{0})$, where $\mathcal{D}$ is the decoder. Given a model $f_{\boldsymbol{\theta}}$, clean images $\mathbf{x}$, their corresponding pre-processed samples $\hat{\mathbf{x}}$ and labels $y$, we optimize $\arg\min_{\boldsymbol{\theta}}\frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_{CE}(f_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{i}),{y}_{i})$, where $\mathcal{L}_{CE}$ is the cross-entropy loss and $n$ is the number of samples. In contrast to adversarial training, which runs for several epochs with adversarial samples, we only need a few epochs of fine-tuning with pre-processed clean samples. Further, compared to score- or diffusion-based purification, which trains these models extensively, we only need minimal training of the classifier.
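The pipeline described in this section can be summarised in a short sketch. Every component below (`captioner`, `text_encoder`, `encoder`, `decoder`, `denoise_step`) is a placeholder callable standing in for the pre-trained BLIP model, the text encoder $\tau_{\theta}$, the latent encoder/decoder $\mathcal{E},\mathcal{D}$ and the conditioned denoiser $g_{\theta}$; the names are illustrative, not actual library APIs.

```python
import math

def lgap_purify(x, captioner, text_encoder, encoder, decoder,
                denoise_step, betas, noise):
    """Sketch of LGAP: caption the input, encode it to latent space,
    diffuse forward for len(betas) steps, then run the caption-conditioned
    reverse process z_t = g_theta(z_{t+1}, t, eps_t, C) and decode."""
    cond = text_encoder(captioner(x))      # C = tau_theta(Caption_BLIP)
    z = encoder(x)                         # z_0 = E(x)
    for t, beta in enumerate(betas):       # forward noising in latent space
        z = [math.sqrt(1.0 - beta) * zi + math.sqrt(beta) * noise(t, i)
             for i, zi in enumerate(z)]
    for t in reversed(range(len(betas))):  # conditioned reverse process
        z = denoise_step(z, t, cond)
    return decoder(z)                      # x_hat = D(z_0)
```

The purified output `decoder(z)` would then be fed to the classifier $f_{\boldsymbol{\theta}}$.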
## 4 Experiments and Results

### 4.1 Experimental settings

Datasets and network architectures: Our experimental evaluation involves three datasets, namely CIFAR-10 [11], CIFAR-100 [11] and ImageNet [10]. We utilize the base models from the RobustBench [23] model zoo for CIFAR-10 and ImageNet. For CIFAR-100 we train the model following Yoon et al. [6]. We compare our approach against other adversarial purification strategies on CIFAR-10, adhering to their experimental configurations. We also evaluate our method against preprocessor-blind attacks on ImageNet. Regarding classifier architectures, we opt for two prevalent models: ResNet-50 [24] for ImageNet and WideResNet-28-10 [25] for CIFAR-10 and CIFAR-100. We fine-tune the WideResNet on images generated from the diffusion network for 15 epochs, using the Adam optimizer with a $10^{-3}$ learning rate. For generating captions, we use pre-trained BLIP [17] with default hyperparameters, and for the diffusion process, we use a pre-trained latent diffusion model from [9] with default parameters except for the noise parameter $t$, which we set to 0.5 for CIFAR-10 and CIFAR-100 and to 0.1 for ImageNet. We will be releasing the code soon.

Fig. 2: Purified samples given by LGAP. The first, second, and third rows contain clean, adversarial, and purified samples. The BLIP-generated captions are given on the right, and the predicted label is on top of each image.

Methods | Natural (%) | Robust (%) | Architecture
---|---|---|---
Raw WideResNet | 95.80 | 0.00 | WRN-28-10
Adv. purification methods | | |
LGAP | 90.03 | 71.68 | WRN-28-10
Yoon et al. [6]* ($\sigma$ = 0.1) | 93.09 | 85.45 | WRN-28-10
Yoon et al. [6]* ($\sigma$ = 0.25) | 86.14 | 80.24 | WRN-28-10
Hill et al. [26]* | 84.12 | 78.91 | WRN-28-10
Shi et al. [5]* | 96.93 | 63.10 | WRN-28-10
Du et al. [27]* | 48.7 | 37.5 | WRN-28-10
Grathwohl et al. [22]* | 75.5 | 23.8 | WRN-28-10
Song et al.
[3]* | | |
Natural + PixelCNN | 82 | 61 | ResNet-62
AT + PixelCNN | 90 | 70 | ResNet-62
Adv training methods | | |
Madry et al. [2]* | 87.3 | 70.2 | ResNet-56
Dong et al. [28]* | 84.98 | 51.29 | ResNet-18

Table 1: Results for the preprocessor-blind PGD attack on CIFAR-10, within an $L_{\infty}$ $\epsilon$-ball with $\epsilon$ = 8/255. Data sourced from existing literature is indicated by an asterisk (*).

Adversarial attacks: We test our algorithm against preprocessor-blind PGD attacks, in which the adversary has complete visibility into the classifier but is uninformed about the purification model. We also evaluate our algorithm against strong adaptive attacks, which involve more complex scenarios: because our purification algorithm passes the input iteratively through neural networks, it can potentially lead to obfuscated gradients. To rigorously test our defense mechanism, we use potent adaptive attacks such as Backward Pass Differentiable Approximation (BPDA) [29] and its variations. We experiment with the basic form of BPDA, where the purification function is approximated by the identity function. We further validate robustness using Expectation Over Transformation (EOT) attacks [29].

Method | Natural | Robust
---|---|---
LGAP | 58.71 | 39.82
Yoon et al. [6]* | 77.83 | 43.21
Adversarial training methods | |
Madry et al. [2]* | 59.58 | 25.47
Li et al. [30]* | 61.01 | 28.88

Table 2: Preprocessor-blind PGD attack for CIFAR-100, $\epsilon$ = 8/255. Data sourced from existing literature is indicated by an asterisk (*).

Methods / Attacks | Natural (%) | Robust (%) | Architecture
---|---|---|---
Adv. purification methods | | |
LGAP | 90.30 | |
BPDA 40+EOT | | 44.96 | WRN-28-10
BPDA | | 62.63 | WRN-28-10
Yoon et al. [6] ($\sigma$ = 0.25)* | | |
BPDA 40+EOT | 86.14 | 70.01 | WRN-28-10
Yoon et al. [6] ($\sigma$ = 0.0)* | | |
BPDA | 90.60 | 76.87 | WRN-28-10
Hill et al. [26]* | | |
BPDA 50+EOT | 84.12 | 54.9 | WRN-28-10
Song et al. [3]* | | |
BPDA | 95.00 | 9 | ResNet-62
Yang et al.
[31]* | | |
BPDA 1000 | 94.8 | 40.8 | ResNet-18
(+AT, p = 0.4 -> 0.6) | 88.7 | 55.1 | WRN-28-10
(+AT, p = 0.6 -> 0.8) | 91 | 52.9 | WRN-28-10
Approx. Input | 89.4 | 41.5 | ResNet-18
Shi et al. [5]* | | |
Classifier PGD 20 | 91.89 | 53.58 | WRN-28-10
Adv. training methods | | |
Madry et al. [2]* | 87.3 | 45.8 | ResNet-18
Carmon et al. [32]* | 89.67 | 63.1 | WRN-28-10

Table 3: Adaptive attacks for CIFAR-10, $\epsilon$ = 8/255.

### 4.2 Comparison with state of the art

The results for the preprocessor-blind setup on CIFAR-10, shown in Table 1, show that our method gives better robust performance than most previous methods, in particular adversarial training methods, while maintaining comparable performance on natural images. Our method achieves a robust accuracy of 71.68%, which clearly outperforms seven out of ten methods, including two adversarial defense methods and five adversarial purification methods. A snapshot of adversarial samples and their corresponding purified images is given in Figure 2. We further extend our evaluation to the CIFAR-100 dataset, with the robust performance comparisons listed in Table 2. Unlike other methods, such as the one by Yoon et al., which demand training a score network and tuning the noise parameter, our method, LGAP, delivers competitive results with substantially lower computational overhead. Table 3 shows the robust accuracy of our method against BPDA attacks for CIFAR-10. Our method outperforms most previous adversarial purification and adversarial training techniques. A gap in accuracy between our method and some recent techniques remains because those methods train the purification model on CIFAR-10: Yoon et al. and Hill et al., which show better robust performance, train diffusion and EBM networks on CIFAR-10 for 200,000 iterations [6, 26], whereas our method requires no such training.
Method / Attacks | Natural | Robust | Architecture
---|---|---|---
Undefended | 76.76 | 0 | ResNet-50
LGAP | 69.09 | | ResNet-50
AA | | 57.12 |
BPDA-40 | | 45.31 |
PGD-10 | | 52.73 |
Adv training methods | | |
Salman et al. [33]* | 64.02 | | ResNet-50
AA | | 34.96 |
Wong et al. [21]* | 55.62 | | ResNet-50
AA | | 26.24 |

Table 4: Preprocessor-blind attacks for ImageNet, $\epsilon$ = 4/255.

Table 4 shows the robust performance of our method on ImageNet. Due to the high computational cost of some attacks, we evaluate on a fixed set of 2048 samples, as robust accuracy does not change much on the sampled subset compared to the whole set [7]. We can see that even against a strong adaptive attack such as BPDA-40, LGAP attains an accuracy of 45.31%, demonstrating the efficacy of the proposed method. The enhanced performance of the method can be attributed to the diffusion model trained on ImageNet. Similarly, a diffusion model trained on CIFAR-10 is expected to yield improved results when applied to CIFAR-10 classification.

## 5 Conclusion

Our method addressed key limitations in adversarial defense by introducing a language-guided purification approach. Unlike traditional methods, which require extensive computational resources and specific attack knowledge, our method leverages pre-trained diffusion models and caption generators. This reduces computational overhead and enhances scalability. Empirical tests show our approach is robust, outperforming conventional methods in several metrics, despite trailing some diffusion-based methods. Notably, this performance is achieved with minimal training and does not require adversarial samples or training the score or diffusion networks, thus broadening the method's applicability and setting a new efficiency standard. Our method underscores the generalizability of deep learning models trained on large datasets, pointing to avenues for future research, especially in model generalizability.

## References

* [1] Ian J.
Goodfellow, Jonathon Shlens, and Christian Szegedy, “Explaining and harnessing adversarial examples,” in ICLR, 2015. * [2] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, “Towards deep learning models resistant to adversarial attacks,” in ICLR, 2018. * [3] Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman, “Pixeldefend: Leveraging generative models to understand and defend against adversarial examples,” in ICLR, 2018. * [4] Pouya Samangouei, Maya Kabkab, and Rama Chellappa, “Defense-gan: Protecting classifiers against adversarial attacks using generative models,” in ICLR, 2018. * [5] Changhao Shi, Chester Holtz, and Gal Mishne, “Online adversarial purification based on self-supervised learning,” in ICLR, 2020. * [6] Jongmin Yoon, Sung Ju Hwang, and Juho Lee, “Adversarial purification with score-based generative models,” in ICML, 2021. * [7] Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Animashree Anandkumar, “Diffusion models for adversarial purification,” in ICML, 2022. * [8] Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, Mingjie Sun, and J Zico Kolter, “(certified!!) adversarial robustness for free!,” in ICLR, 2022. * [9] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer, “High-resolution image synthesis with latent diffusion models,” in CVPR, 2022. * [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in CVPR, 2009. * [11] Alex Krizhevsky, Geoffrey Hinton, et al., “Learning multiple layers of features from tiny images,” 2009. * [12] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli, “Deep unsupervised learning using nonequilibrium thermodynamics,” in ICML, 2015, pp. 2256–2265. * [13] Yang Song and Stefano Ermon, “Generative modeling by estimating gradients of the data distribution,” NeurIPS, vol. 32, 2019.
* [14] Jonathan Ho, Ajay Jain, and Pieter Abbeel, “Denoising diffusion probabilistic models,” NeurIPS, vol. 33, pp. 6840–6851, 2020. * [15] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole, “Score-based generative modeling through stochastic differential equations,” in ICLR, 2020. * [16] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al., “Learning transferable visual models from natural language supervision,” in ICML, 2021. * [17] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi, “Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation,” in ICML, 2022. * [18] Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray, “Metric learning for adversarial robustness,” NeurIPS, vol. 32, 2019. * [19] Kejiang Chen, Yuefeng Chen, Hang Zhou, Xiaofeng Mao, Yuhong Li, Yuan He, Hui Xue, Weiming Zhang, and Nenghai Yu, “Self-supervised adversarial training,” in ICASSP, 2020. * [20] Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein, “Adversarial training for free!,” NeurIPS, vol. 32, 2019. * [21] Eric Wong, Leslie Rice, and J Zico Kolter, “Fast is better than free: Revisiting adversarial training,” in ICLR, 2019. * [22] Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky, “Your classifier is secretly an energy based model and you should treat it like one,” in ICLR, 2019. * [23] Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein, “Robustbench: a standardized adversarial robustness benchmark,” arXiv preprint arXiv:2010.09670, 2020. 
* [24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in CVPR, 2016. * [25] Sergey Zagoruyko and Nikos Komodakis, “Wide residual networks,” arXiv preprint arXiv:1605.07146, 2016. * [26] Mitch Hill, Jonathan Mitchell, and Song-Chun Zhu, “Stochastic security: Adversarial defense using long-run dynamics of energy-based models,” in ICLR, 2021. * [27] Yilun Du and Igor Mordatch, “Implicit generation and modeling with energy based models,” in NeurIPS, 2019. * [28] Junhao Dong, Seyed-Mohsen Moosavi-Dezfooli, Jianhuang Lai, and Xiaohua Xie, “The enemy of my enemy is my friend: Exploring inverse adversaries for improving adversarial training,” in CVPR, 2023. * [29] Anish Athalye, Nicholas Carlini, and David Wagner, “Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples,” in ICML, 2018. * [30] Xiang Li and Shihao Ji, “Defense-VAE: A fast and accurate defense against adversarial attacks,” in Machine Learning and Knowledge Discovery in Databases, Peggy Cellier and Kurt Driessens, Eds. pp. 191–207, Springer International Publishing. * [31] Yuzhe Yang, Guo Zhang, Dina Katabi, and Zhi Xu, “Me-net: Towards effective adversarial robustness with matrix estimation,” in ICML, 2019. * [32] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang, “Unlabeled data improves adversarial robustness,” NeurIPS, 2019. * [33] Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry, “Do adversarially robust imagenet models transfer better?,” NeurIPS, 2020.
P. Sopasakis: KU Leuven, Department of Electrical Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, Kasteelpark Arenberg 10, 3001 Leuven, Belgium. Tel.: +32-486-928034, email: <EMAIL_ADDRESS>

H. Sarimveis: School of Chemical Engineering, National Technical University of Athens, 9 Heroon Polytechneiou Street, 15780 Zografou Campus, Athens, Greece. Tel.: +30-210-7723237, email: <EMAIL_ADDRESS>

P. Macheras: Department of Pharmacy, University of Athens, Panepistimiopolis Zografou, 15784 Athens, Greece. Tel.: +30-210-7274026, email: <EMAIL_ADDRESS>

A. Dokoumetzidis: Department of Pharmacy, University of Athens, Panepistimiopolis Zografou, 15784 Athens, Greece. Tel.: +30-210-7274122, email: <EMAIL_ADDRESS>

# Fractional Calculus in Pharmacokinetics

Pantelis Sopasakis, Haralambos Sarimveis, Panos Macheras, Aristides Dokoumetzidis

###### Abstract

We are witnessing the birth of a new variety of pharmacokinetics where non-integer-order differential equations are employed to study the time course of drugs in the body: this is dubbed “fractional pharmacokinetics”. The presence of fractional kinetics has important clinical implications, such as the lack of a half-life, observed, for example, with the drug amiodarone, and the associated irregular accumulation patterns following constant and multiple-dose administration. Building models that accurately reflect this behaviour is essential for the design of less toxic and more effective drug administration protocols and devices. This article introduces the reader to the theory of fractional pharmacokinetics and the research challenges that arise. After a short introduction to the concepts of fractional calculus and the main applications that have appeared in the literature to date, we address two important aspects.
First, numerical methods that allow us to simulate fractional-order systems accurately, and second, optimal control methodologies that can be used to design dosing regimens for individuals and populations.

###### Keywords: Fractional pharmacokinetics; Numerical methods; Drug Administration; Control; Drug Dosing

Journal: J. Pharmacokinet. Pharmacodyn.

## Introduction

### Background

Diffusion is one of the main mechanisms of various transport processes in living species and plays an important role in the distribution of drugs in the body. Processes such as membrane permeation, dissolution of solids and dispersion in cellular matrices are considered to take place by diffusion. Diffusion is typically described by Fick’s law, which, in terms of the pharmacokinetics of drugs, gives rise to exponential washout curves that have a characteristic time scale, usually expressed as a half-life. However, in the last few decades, strong experimental evidence has suggested that this is not always true, and diffusion processes that deviate from this law have been observed. These are either faster (super-diffusion) or slower (sub-diffusion) modes of diffusion compared to the regular case West and Deering (1994); Ionescu et al (2017). Such types of diffusion give rise to kinetics that are referred to as anomalous, to indicate the fact that they stray from the standard diffusion dynamics West and Deering (1994). Moreover, anomalous kinetics can also result from reaction-limited processes and long-time trapping. Anomalous kinetics introduces memory effects into the distribution process that need to be accounted for to correctly describe it. A distinctive feature of anomalous power-law kinetics is that it lacks a characteristic time scale, contrary to exponential kinetics.
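The "lack of a characteristic time scale" can be made concrete with a short numerical check (the rate constants below are illustrative, not drug data): under exponential washout every successive halving of the curve takes the same time, while under a power-law washout the halving time keeps growing.

```python
import math

def next_halving_time(f, t):
    """Smallest t2 > t with f(t2) = f(t)/2, found by bracket doubling
    followed by bisection (assumes f is positive and strictly decreasing)."""
    target = f(t) / 2.0
    hi = t + 1.0
    while f(hi) > target:          # grow the bracket until it contains the root
        hi = t + 2.0 * (hi - t)
    lo = t
    for _ in range(80):            # bisect down to double precision
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exp_decay(t):
    """First-order (exponential) washout: constant half-life ln(2)/k."""
    return math.exp(-0.7 * t)

def power_law(t):
    """Power-law (anomalous) washout: the halving time grows with t."""
    return (1.0 + t) ** -0.5
```

For `exp_decay` the successive halving intervals are identical (ln 2 / 0.7), whereas for `power_law` each halving takes longer than the one before, so no single "half-life" describes the curve.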
A mathematical formulation that describes such anomalous kinetics is known as fractal kinetics Kopelman (1988); Macheras (1996); Pereira (2010), where explicit power functions of time, in the form of time-dependent coefficients, are used to account for the memory effects, replacing the original rate constants. In pharmacokinetics, several datasets have been characterised by power laws empirically Wise (1985); Tucker et al (1984), while the first article that utilised fractal kinetics in pharmacokinetics was Macheras (1996). Molecules whose kinetics present power-law behaviour include those distributed in deeper tissues, such as amiodarone Weiss (1999), and bone-seeking elements, such as calcium, lead, strontium and plutonium Macheras (1996); Phan et al (2006). An alternative theory to describe anomalous kinetics employs fractional calculus Sokolov et al (2002); Podlubny (1999), which introduces derivatives and integrals of fractional order, such as one half or three quarters. Although fractional calculus was introduced by Leibniz more than 300 years ago, it is only within the last couple of decades that real-life applications have been explored Magin (2004a, b, c). It has been shown that differential equations with fractional derivatives (FDEs) describe experimental data of anomalous diffusion more accurately. Although fractional-order derivatives were first introduced as a novel mathematical concept with unclear physical meaning, nowadays a clear connection between diffusion over fractal spaces (such as networks of capillaries) and fractional-order dynamics has been established Butera and Paola (2014); Chen et al (2010); Metzler et al (1994); Copot et al (2014). In particular, in anomalous diffusion the standard assumption that the mean square displacement $\langle x^{2}\rangle$ is proportional to time does not hold.
Instead, it is $\langle x^{2}\rangle\sim t^{\nicefrac{{2}}{{d}}}$, where $d\neq 2$ is the associated fractal dimension Gmachowski (2015); Klafter and Sokolov (2011); Eirich (1990). Fractional kinetics Dokoumetzidis and Macheras (2011) was introduced in the pharmaceutical literature in Dokoumetzidis and Macheras (2009), and the first example of fractional pharmacokinetics which appeared in that article was amiodarone, a drug known for its power-law kinetics Tucker et al (1984). Since then, other applications of fractional pharmacokinetics have appeared in the literature. Kytariolos et al (2010) presented an application of fractional kinetics for the development of nonlinear in vitro-in vivo correlations. Popović et al. have presented several applications of fractional pharmacokinetics to model drugs, namely for diclofenac Popović et al (2010), valproic acid Popović et al (2011), bumetanide Popović et al (2013) and methotrexate Popović et al (2015). Copot et al. have further used a fractional pharmacokinetics model for propofol Copot et al (2013). In most of these cases the fractional model has been compared with an equivalent ordinary PK model and was found superior. FDEs have been proposed to describe drug response as well, apart from drug kinetics. Verotta has proposed several alternative fractional PKPD models that are capable of describing pharmacodynamic time series with favourable properties Verotta (2010). Although these models are empirical, i.e., they have no mechanistic basis, they are attractive since the memory effects of FDEs can smoothly link the concentration to the response with a variable degree of influence, while the shape of the responses generated by fractional PKPD models can be very flexible and parsimonious (modelled using few parameters). Applications of FDEs in pharmacokinetics fall within the scope of the newly-coined discipline of mathematical pharmacology Van der Graaf et al (2015) to which this issue is dedicated.
Mathematical pharmacology utilises applied mathematics, beyond the standard tools used commonly in pharmacometrics, to describe drug processes in the body and assist in controlling them. This paper introduces the basic principles of fractional calculus and FDEs, reviews recent applications of fractional calculus in pharmacokinetics and discusses their clinical implications. Moreover, some aspects of drug dosage regimen optimisation based on control theory are presented for the case of pharmacokinetic models that follow fractional kinetics. Indeed, in clinical pharmacokinetics and therapeutic drug monitoring, dose optimisation is carried out usually by utilising Bayesian methodology. Optimal control theory is a powerful approach that can be used to optimise drug administration, which can handle complicated constraints and has not been used extensively for this task. The case of fractional pharmacokinetic models is of particular interest for control theory and poses new challenges. A problem that has hindered the applications of fractional calculus is the lack of efficient general-purpose numerical solvers, as opposed to ordinary differential equations. However, in the past few years significant progress has been achieved in this area. This important issue is discussed in the paper and some techniques for numerical solution of FDEs are presented. ### Fractional derivatives Derivatives of integer order $n\in\mathbb{N}$, of functions $f:\mathbb{R}\to\mathbb{R}$ are well defined and their properties have been extensively studied in real analysis. 
The basis for the extension of such derivatives to real orders $\alpha\in\mathbb{R}$ commences with the definition of an integral of order $\alpha$ which hinges on Cauchy’s iterated ($n$-th order) integral formula and gives rise to the celebrated Riemann-Liouville integral ${{}_{0}^{\mathrm{rl}}\mathrm{I}}^{\alpha}_{t}$ Hennion and Hanert (2013): $\displaystyle{{}_{0}^{\mathrm{rl}}\mathrm{I}}^{\alpha}_{t}f(t)={}_{0}\mathrm{D}_{t}^{-\alpha}f(t)=\tfrac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}f(\tau)\mathrm{d}\tau,$ (1) where $\Gamma$ is the Euler gamma function. In Eq. (1) we assume that $f$ is such that the involved integral is well-defined and finite. This is the case if $f$ is continuous everywhere except at finitely many points and left-bounded, i.e., $\{f(x);0\leq x\leq t\}$ is bounded for every $t\geq 0$. Note that for $\alpha\in\mathbb{N}$, ${{}_{0}^{\mathrm{rl}}\mathrm{I}}^{\alpha}_{t}f$ is equivalent to the ordinary $\alpha$-th order integral. Hereafter, we focus on derivatives of order $\alpha\in[0,1]$ as only these are of interest to date in the study of fractional pharmacokinetics. The left-side subscript $0$ of the ${}_{0}\mathrm{D}_{t}^{\alpha}$ and ${{}_{0}^{\mathrm{rl}}\mathrm{I}}_{t}^{\alpha}$ operators denotes the lower end of the integration limits, which in this case has been assumed to be zero. However, alternative lower bounds can be considered, leading to different definitions of the fractional derivative with slightly different properties. When the lower bound is $-\infty$, the entire history of the studied function is accounted for, which is considered preferable in some applications and is referred to as the Weyl definition Magin (2004a).
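Eq. (1) can be evaluated numerically with a simple product-integration rule: hold $f$ at subinterval midpoints and integrate the weakly singular kernel $(t-\tau)^{\alpha-1}$ exactly on each subinterval, which handles the integrable singularity at $\tau=t$. The sketch below is illustrative, not a method prescribed in the text; for a constant $f=\lambda$ it reproduces $\lambda t^{\alpha}/\Gamma(\alpha+1)$.

```python
import math

def rl_integral(f, t, alpha, n=2000):
    """Riemann-Liouville fractional integral I^alpha f(t), 0 < alpha < 1.
    On each subinterval [a, b], f is frozen at the midpoint and the kernel
    (t - tau)^(alpha - 1) is integrated exactly:
        int_a^b (t-tau)^(alpha-1) dtau = ((t-a)^alpha - (t-b)^alpha) / alpha."""
    h = t / n
    total = 0.0
    for j in range(n):
        a, b = j * h, (j + 1) * h
        w = ((t - a) ** alpha - (t - b) ** alpha) / alpha  # exact kernel weight
        total += f(a + 0.5 * h) * w
    return total / math.gamma(alpha)
```

Because the kernel weights telescope, the rule is exact for constant integrands, which makes it easy to sanity-check against the closed form above.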
It is intuitive to define a fractional-order derivative of order $\alpha$ via ${}_{0}\mathrm{D}_{t}^{\alpha}=\mathrm{D}^{1}{\,}_{0}\mathrm{D}_{t}^{\alpha-1}=\mathrm{D}^{1}\,{{}_{0}^{\mathrm{rl}}\mathrm{I}}^{1-\alpha}$ (where $\mathrm{D}^{1}$ is the ordinary derivative of order $1$), or equivalently $\displaystyle{}_{0}\mathrm{D}_{t}^{\alpha}f(t)=\tfrac{\mathrm{d}}{\mathrm{d}t}\left[\tfrac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{f(\tau)}{(t-\tau)^{\alpha}}\mathrm{d}\tau\right]$ (2) This is the Riemann-Liouville definition — one of the most popular constructs in fractional calculus. One may observe that fractional integration is basically a convolution between the function $f$ and a power law of time, i.e., ${}_{0}\mathrm{D}_{t}^{-\alpha}f(t)=\tfrac{t^{\alpha-1}}{\Gamma(\alpha)}\ast f(t)$, where $\ast$ denotes the convolution of two functions. This explicitly demonstrates the memory effects of the studied process. Fractional derivatives possess properties that are not straightforward or intuitive; for example, the half derivative of a constant $f(t)=\lambda$ with respect to $t$ does not vanish and instead is ${}_{0}\mathrm{D}_{t}^{\nicefrac{{1}}{{2}}}\lambda=\nicefrac{{\lambda}}{{\sqrt{\pi t}}}$. Perhaps the most notable shortcoming of the Riemann-Liouville definition with the $0$ lower bound is that, when used in differential equations, it gives rise to initial conditions that involve the fractional integral of the function and are difficult to interpret physically. This is one of the reasons the Weyl definition was introduced, but this definition may not be very practical for most applications either, as it involves an initial condition at time $-\infty$ Magin (2004a); Samko et al (1993). A third definition of the fractional derivative, which is referred to as the Caputo definition, is preferable for most physical processes, as it involves explicitly the initial condition at time zero.
The definition is: $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\alpha}f(t)=\tfrac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{\tfrac{\mathrm{d}f(\tau)}{\mathrm{d}\tau}}{(t-\tau)^{\alpha}}\mathrm{d}\tau$ (3) where the upper-left superscript $\mathrm{c}$ stands for Caputo. The Caputo derivative is interpreted as ${}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\alpha}={{}_{0}^{\mathrm{rl}}\mathrm{I}}^{1-\alpha}\mathrm{D}^{1}$, that is, $\mathrm{D}^{1}$ and ${{}_{0}^{\mathrm{rl}}\mathrm{I}}^{1-\alpha}$ are composed in the opposite way compared to the Riemann-Liouville definition. The Caputo derivative gives rise to initial value problems of the form $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\alpha}x(t)$ $\displaystyle=F(x(t)),$ (4a) $\displaystyle x(0)$ $\displaystyle=x_{0}.$ (4b) This definition for the fractional derivative, apart from the more familiar initial conditions, comes with some properties similar to those of integer-order derivatives, one of them being that the Caputo derivative of a constant is in fact zero. Well-posedness conditions, for the existence and uniqueness of solutions, for such fractional-order initial value problems are akin to those of integer-order problems Deng and Deng (2014); (Podlubny, 1999, Chap. 3). The various definitions of the fractional derivative give different results, but these are not contradictory since they apply under different conditions, and it is a matter of choosing the appropriate one for each specific application. All definitions collapse to the usual derivative for integer values of the order of differentiation. A fourth fractional derivative definition, which is of particular interest, is the Grünwald-Letnikov definition, which allows the approximate discretisation of fractional differential equations. The Grünwald-Letnikov derivative, similar to integer-order derivatives, is defined via a limit of a fractional-order difference.
Let us first define the Grünwald-Letnikov difference of order $\alpha$ and step-size $h$ of a left-bounded function $f$ as $\displaystyle{{}^{\mathrm{gl}}\Delta}^{\alpha}_{h}f(t)=\tfrac{1}{h^{\alpha}}\sum_{j=0}^{\left\lfloor\nicefrac{{t}}{{h}}\right\rfloor}c_{j}^{\alpha}f(t-jh),$ (5) with $c_{0}^{\alpha}=1$ and for $j\in\mathbb{N}_{>0}$ $\displaystyle c_{j}^{\alpha}=(-1)^{j}\prod_{i=0}^{j-1}\frac{\alpha-i}{i+1}.$ (6) The Grünwald-Letnikov difference operator leads to the definition of the Grünwald-Letnikov derivative of order $\alpha$, which is defined as (Samko et al, 1993, Section 20) $\displaystyle{{}^{\mathrm{gl}}\mathrm{D}}_{t}^{\alpha}f(t)=\lim_{h\to 0^{+}}{{}^{\mathrm{gl}}\Delta}^{\alpha}_{h}f(t),$ (7) insofar as the limit exists. This definition is of great importance in practice, as it allows us to replace ${{}^{\mathrm{gl}}\mathrm{D}}_{t}^{\alpha}$ with ${{}^{\mathrm{gl}}\Delta}_{h}^{\alpha}$ provided that $h$ is adequately small; therefore, it serves as an Euler-type discretisation of fractional-order continuous-time dynamics. By doing so, we produce a discrete-time yet infinite-dimensional approximation of the system, since ${{}^{\mathrm{gl}}\Delta}_{h}^{\alpha}f(t)$ depends on $f(t-jh)$ for all $j\in\mathbb{N}_{>0}$. However, we may truncate this series up to a finite history $\nu$, defining $\displaystyle{{}^{\mathrm{gl}}\Delta}_{h,\nu}^{\alpha}f(t)=\tfrac{1}{h^{\alpha}}\sum_{j=0}^{\min\{\nu,\left\lfloor\nicefrac{{t}}{{h}}\right\rfloor\}}c_{j}^{\alpha}f(t-jh).$ (8) This leads to discrete-time finite-memory approximations which are particularly useful, as we shall discuss in what follows.

## Fractional Kinetics

The most common types of kinetics encountered in the pharmaceutical literature are the so-called “zero order” and “first order”. Here the “order” refers to the order of linearity and is not to be confused with the order of differentiation, i.e., a zero-order process refers to a constant rate and a first-order process to a proportional rate.
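The Grünwald-Letnikov difference of Eqs. (5)-(6) is straightforward to implement, since the product in Eq. (6) telescopes into the recursion $c_{j}^{\alpha}=c_{j-1}^{\alpha}\,(j-1-\alpha)/j$. A minimal sketch (parameter values purely illustrative):

```python
import math

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients c_j^alpha, j = 0..n, via the
    recursion c_0 = 1, c_j = c_{j-1} * (j - 1 - alpha) / j."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (j - 1 - alpha) / j)
    return c

def gl_derivative(f, t, alpha, h):
    """GL difference (1/h^alpha) * sum_j c_j^alpha f(t - j h), which
    approximates the fractional derivative for small h. f must be
    defined on [0, t] (and tolerate arguments a rounding error below 0)."""
    n = round(t / h)
    c = gl_weights(alpha, n)
    return sum(cj * f(t - j * h) for j, cj in enumerate(c)) / h ** alpha
```

As a sanity check, for a constant function the GL (and Riemann-Liouville) half-derivative at $t$ tends to $1/\sqrt{\pi t}$, matching the property quoted earlier for ${}_{0}\mathrm{D}_{t}^{\nicefrac{{1}}{{2}}}\lambda$.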
The fractional versions of these types of kinetics are presented below and take the form of fractional-order ordinary differential equations. Throughout this presentation the Caputo version of the fractional derivative is considered for reasons already explained. ### Zero-order kinetics The simplest kinetic model is the zero-order model where it is assumed that the rate of change of the quantity $q$, expressed in [mass] units, is constant and equal to $k_{0}$, expressed in [mass]/[time] units. Zero-order systems are governed by differential equations of the form $\displaystyle\frac{\mathrm{d}q}{\mathrm{d}t}=k_{0}.$ (9) The solution of this equation with initial condition $q(0)=0$ is $\displaystyle q(t)=k_{0}t.$ (10) The fractional-order counterpart of such zero-order kinetics can be obtained by replacing the derivative of order $1$ by a derivative of fractional order $\alpha$, that is $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\alpha}q(t)=k_{0,f},$ (11) where $k_{0,f}$ is a constant with units [mass]/[time]$^{\alpha}$.
The solution of this equation for initial condition $q(0)=0$ is a power law for $t>0$ Podlubny (1999) $\displaystyle q(t)=\frac{k_{0,f}}{\Gamma(\alpha+1)}t^{\alpha}.$ (12) ### First-order kinetics The first-order differential equation, where the rate of change of the quantity $q$ is proportional to its current value, is described by the ordinary differential equation $\displaystyle\frac{\mathrm{d}q}{\mathrm{d}t}=-k_{1}q(t).$ (13) Its solution with initial condition $q(0)=q_{0}$ is given by the classical equation of exponential decay $\displaystyle q(t)=q_{0}\exp(-k_{1}t).$ (14) Likewise, the fractional-order analogue of such first-order kinetics is derived by replacing $\nicefrac{{\mathrm{d}}}{{\mathrm{d}t}}$ by the fractional-order derivative ${}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\alpha}$, yielding the following fractional differential equation $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\alpha}q(t)=-k_{1,f}q(t),$ (15) where $k_{1,f}$ is a constant with units [time]$^{-\alpha}$. The solution of this equation can be found in most books or papers of the fast-growing literature on fractional calculus Podlubny (1999), and for initial condition $q(0)=q_{0}$ it is $\displaystyle q(t)=q_{0}\mathcal{E}_{\alpha}(-k_{1,f}t^{\alpha}),$ (16) where $\mathcal{E}_{\alpha}$ is the Mittag-Leffler function Podlubny (1999), which is defined as $\displaystyle\mathcal{E}_{\alpha}(t)=\sum_{k=0}^{\infty}\frac{t^{k}}{\Gamma(\alpha k+1)}.$ (17) The function $\mathcal{E}_{\alpha}(t)$ is a generalisation of the exponential function and collapses to the exponential when $\alpha=1$, i.e., $\mathcal{E}_{1}(t)=\exp(t)$. Alternatively, Eq. (16) can be reparametrised by introducing a time-scale parameter with regular time units, $\tau_{f}=k_{1,f}^{-1/\alpha}$, and then becomes $q(t)=q_{0}\mathcal{E}_{\alpha}(-(t/\tau_{f})^{\alpha})$. The solution of Eq.
(15) basically means that the fractional derivative of order $\alpha$ of the function $\mathcal{E}_{\alpha}(t^{\alpha})$ is itself a function of the same form, exactly as the classic derivative of an exponential is again an exponential. It also makes sense to restrict $\alpha$ to values $0\leq\alpha\leq 1$, since for values of $\alpha$ larger than $1$ the solution of Eq. (15) is not monotonic and negative values for $q$ may appear; it is therefore not applicable in pharmacokinetics and pharmacodynamics. Based on these elementary equations the basic relations for the time evolution in drug disposition can be formulated, under the assumption that diffusion of drug molecules takes place in heterogeneous space. The simplest fractional pharmacokinetic model is the one-compartment model with i.v. bolus administration, for which the concentration, $c$, can be expressed by Eq. (16) divided by a volume of distribution, as $\displaystyle c(t)=c_{0}\mathcal{E}_{\alpha}(-k_{1,f}t^{\alpha}),$ (18) where $\alpha\leq 1$ and $c_{0}$ is the ratio [dose]/[apparent volume of distribution]. For times $t\ll 1$ this equation behaves as a stretched exponential, i.e., as ${\sim}\exp(-k_{1,f}t^{\alpha}/\Gamma(1+\alpha))$, while for large values of time it behaves as a power-law, ${\sim}t^{-\alpha}/\Gamma(1-\alpha)$ (see Figure 1) Mainardi (2014). This kinetics is, therefore, a good candidate to describe various datasets exhibiting power-law-like kinetics due to the slow diffusion of the drug in deeper tissues. Moreover, the resemblance of the stretched exponential (Weibull) function to the Mittag-Leffler function probably explains the successful application of the former function in describing drug release in heterogeneous media Papadopoulou et al (2006). Eq. (18) is a relationship for the simplest case of fractional pharmacokinetics. It accounts for the anomalous diffusion process, which may be considered to be the limiting step of the entire kinetics.
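The behaviour of Eqs. (15)–(18) is easy to reproduce numerically: the Mittag-Leffler series (17) can be summed directly for moderate arguments, and the FDE (15) can be integrated with the Grünwald-Letnikov difference of Eqs. (5)–(8). The sketch below is our own minimal illustration (the implicit stepping of the deviation $q-q_{0}$ is a choice we make so that Caputo and GL derivatives coincide; it is not taken from the cited works):

```python
import math

def gl_coeffs(beta, n):
    # Coefficients c_j^beta of Eq. (6) via the recurrence
    # c_0 = 1,  c_j = c_{j-1} * (1 - (beta + 1)/j).
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (beta + 1.0) / j))
    return c

def mittag_leffler(alpha, z, terms=120):
    # Direct summation of the series (17); adequate for moderate |z|.
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def relax_gl(alpha, k1f, q0, t_end, h):
    """Implicit Grunwald-Letnikov scheme for Eq. (15),
    cD^alpha q = -k1f q, q(0) = q0. The deviation y = q - q0 is stepped
    (y(0) = 0, so its Caputo, RL and GL derivatives coincide):
      (1/h^alpha) * sum_j c_j y_{k-j} = -k1f * (y_k + q0)."""
    n = int(round(t_end / h))
    c = gl_coeffs(alpha, n)
    ha = h ** alpha
    y = [0.0]
    for k in range(1, n + 1):
        hist = sum(c[j] * y[k - j] for j in range(1, k + 1))
        y.append(-(k1f * q0 * ha + hist) / (1.0 + k1f * ha))
    return [v + q0 for v in y]

# alpha = 1: the scheme collapses to backward Euler for dq/dt = -q,
# so q(1) should be close to exp(-1).
print(relax_gl(1.0, 1.0, 1.0, 1.0, 0.001)[-1], math.exp(-1.0))

# alpha = 1/2: compare against the Mittag-Leffler solution of Eq. (16);
# E_{1/2}(-1) also equals e*erfc(1), an independent closed form.
print(relax_gl(0.5, 1.0, 1.0, 1.0, 0.001)[-1],
      mittag_leffler(0.5, -1.0), math.e * math.erfc(1.0))
```

For $\alpha=1$ the scheme reproduces exponential decay, while for $\alpha=\nicefrac{{1}}{{2}}$ it approaches the Mittag-Leffler solution; the identity $\mathcal{E}_{\nicefrac{{1}}{{2}}}(-1)=e\operatorname{erfc}(1)$ provides an independent check of the series.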
Classic clearance may be considered not to be the limiting process here and is absent from the equation. Figure 1: Amount-time profile according to Eq. (16) for $\alpha=0.5$ (solid curve) along with its approximation at values of $t$ close to $0$ by a stretched exponential function $\exp(-t^{\nicefrac{{1}}{{2}}}/\Gamma(1.5))$ (bottom dashed curve) and an approximation at high values of $t$ by a power function $t^{-\nicefrac{{1}}{{2}}}/\Gamma(0.5)$ (top dashed-dotted curve). Note that the time axis is logarithmic. The evolution of the amount of drug starts as a stretched exponential and eventually shapes as a power function. ### The Laplace transform for FDEs Fractional differential equations (FDEs) can be easily written in the Laplace domain since each of the fractional derivatives can be transformed similarly to the ordinary derivatives, as follows, for order $\alpha\leq 1$: $\displaystyle\mathcal{L}\\{{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\alpha}f(t)\\}=s^{\alpha}F(s)-s^{\alpha-1}f(0^{+}),$ (19) where $F(s)$ is the Laplace transform of $f(t)$ Podlubny (1999). For $\alpha=1$, Eq. (19) reduces to the classic well-known expression for ordinary derivatives, that is, $\mathcal{L}\\{\dot{f}(t)\\}=sF(s)-f(0^{+})$. Let us take as an example the following simple FDE $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1/2}q(t)=-q(t),$ (20) with initial value $q(0)=1$ which can be written in the Laplace domain, by virtue of Eq. (19), as follows $\displaystyle s^{\nicefrac{{1}}{{2}}}Q(s)-s^{-\nicefrac{{1}}{{2}}}q(0)=-Q(s),$ (21) where $Q(s)=\mathcal{L}\\{q(t)\\}(s)$ is the Laplace transform of $q$. After simple algebraic manipulations, we obtain $\displaystyle Q(s)=\frac{1}{s+\sqrt{s}}.$ (22) By applying the inverse Laplace transform to Eq. (22) the closed-form analytical solution of this FDE can be obtained; this involves a Mittag- Leffler function of order one half. 
In particular, $\displaystyle q(t)=\mathcal{E}_{\nicefrac{{1}}{{2}}}(-t^{\nicefrac{{1}}{{2}}})=e^{t}(1+\operatorname{erf}(-\sqrt{t})).$ (23) Although it is usually easy to transform an FDE into the Laplace domain, it is more difficult to apply the inverse Laplace transform and solve explicitly for the system variables, so as to obtain an analytical solution in the time domain. However, it is possible to perform that step numerically using a numerical inverse Laplace transform (NILT) algorithm De Hoog et al (1982) as described above. ## Fractional Pharmacokinetics ### Multi-compartmental models A one-compartment pharmacokinetic model with i.v. bolus administration can be easily fractionalised as in Eq. (15) by changing the derivative on the left-hand side of the single ODE to a fractional order. However, in pharmacokinetics and other fields where compartmental models are used, two or more ODEs are often necessary, and it is not as straightforward to fractionalise systems of differential equations, especially when certain properties such as mass balance need to be preserved. When a compartmental model with two or more compartments is built, typically an outgoing mass flux becomes an incoming flux to the next compartment. Thus, an outgoing mass flux that is defined as a rate of fractional order cannot appear as an incoming flux into another compartment with a different fractional order, without violating mass balance Dokoumetzidis et al (2010a). It is therefore, in general, impossible to fractionalise multi-compartmental systems simply by changing the order of the derivatives on the left hand side of the ODEs. One approach to the fractionalisation of multi-compartment models is to consider a common fractional order that defines the mass transfer from a compartment $i$ to a compartment $j$: the outflow of one compartment becomes an inflow to the other. Such a common fractional order is known as a commensurate order.
In the general case of non-commensurate-order systems, a different approach for fractionalising systems of ODEs needs to be applied. In what follows, a general form of a fractional two-compartment system is considered and then generalised to a system of an arbitrary number of compartments, which first appeared in Dokoumetzidis et al (2010b). A general ordinary linear two-compartment model is defined by the following system of linear ODEs, $\displaystyle\frac{\mathrm{d}q_{1}(t)}{\mathrm{d}t}$ $\displaystyle=-k_{12}q_{1}(t)+k_{21}q_{2}(t)-k_{10}q_{1}(t)+u_{1}(t),$ (24a) $\displaystyle\frac{\mathrm{d}q_{2}(t)}{\mathrm{d}t}$ $\displaystyle=k_{12}q_{1}(t)-k_{21}q_{2}(t)-k_{20}q_{2}(t)+u_{2}(t),$ (24b) where $q_{1}(t)$ and $q_{2}(t)$ are the mass or molar amounts of material in the respective compartments and the $k_{ij}$ constants control the mass transfer between the two compartments and elimination from each of them. The notation convention used for the indices of the rate constants is that the first corresponds to the source compartment and the second to the target one, e.g., $k_{12}$ corresponds to the transfer from compartment 1 to 2, $k_{10}$ corresponds to the elimination from compartment 1, etc. The units of all the $k_{ij}$ rate constants are 1/[time]. $u_{i}(t)$ are input rates in each compartment, which may be zero, constant or time dependent. Initial values for $q_{1}$ and $q_{2}$, namely $q_{1}(0)$ and $q_{2}(0)$, also have to be specified. In order to fractionalise this system, first the ordinary system is integrated, obtaining a system of integral equations; then the integrals are fractionalised as shown in Dokoumetzidis et al (2010b); finally, the fractional integral equations are differentiated in an ordinary way.
The resulting fractional system contains ordinary derivatives on the left hand side and fractional Riemann-Liouville derivatives on the right hand side: $\displaystyle\frac{\mathrm{d}q_{1}(t)}{\mathrm{d}t}$ $\displaystyle=-k_{12,f}{\,}_{0}\mathrm{D}_{t}^{1-\alpha_{12}}q_{1}(t)+k_{21,f}{\,}_{0}\mathrm{D}_{t}^{1-\alpha_{21}}q_{2}(t)-k_{10,f}{\,}_{0}\mathrm{D}_{t}^{1-\alpha_{10}}q_{1}(t)+u_{1}(t),$ (25a) $\displaystyle\frac{\mathrm{d}q_{2}(t)}{\mathrm{d}t}$ $\displaystyle=k_{12,f}{\,}_{0}\mathrm{D}_{t}^{1-\alpha_{12}}q_{1}(t)-k_{21,f}{\,}_{0}\mathrm{D}_{t}^{1-\alpha_{21}}q_{2}(t)-k_{20,f}{\,}_{0}\mathrm{D}_{t}^{1-\alpha_{20}}q_{2}(t)+u_{2}(t),$ (25b) where $0<\alpha_{ij}\leq 1$ is a constant representing the order of the specific process. Different values for the orders of different processes may be considered, but the order of the corresponding terms of a specific process is kept the same when these appear in different equations, e.g., there can be an order $\alpha_{12}$ for the transfer from compartment 1 to 2 and a different order $\alpha_{21}$ for the transfer from compartment 2 to 1, but the order for the corresponding terms of the transfer, from compartment 1 to 2, $\alpha_{12}$, is the same in both equations. Also the index $f$ in the rate constants was added to emphasise that these are different from the ones of Eq. (24) and carry units [time]$^{-\alpha}$. It is convenient to rewrite the above system Eq. (25) with Caputo derivatives. An FDE with Caputo derivatives accepts the usual type of initial conditions involving the variable itself, as opposed to RL derivatives, which involve an initial condition on a fractional derivative of the variable, which is not practical. When the initial value of $q_{1}$ or $q_{2}$ is zero, the respective RL and Caputo derivatives are the same. This is convenient since a zero initial value is very common in compartmental analysis.
When the initial value is not zero, converting to a Caputo derivative is possible for the particular term with a non-zero initial value. The conversion from a RL to a Caputo derivative of the form that appears in Eq. (25) is done with the following expression: $\displaystyle{}_{0}\mathrm{D}_{t}^{1-\alpha_{ij}}q_{i}(t)={}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha_{ij}}q_{i}(t)+\frac{q_{i}(0)t^{\alpha_{ij}-1}}{\Gamma(\alpha_{ij})}.$ (26) Summarising the above remarks about initial conditions, we may identify three cases: (i) The initial condition is zero, and then the derivative becomes a Caputo by definition. (ii) The initial condition is non-zero, but it is involved in a term with an ordinary derivative, so it is treated as usual. (iii) The initial condition is non-zero and is involved in a fractional derivative, which means that in order to obtain a Caputo derivative, an additional term involving the initial value appears, by substituting Eq. (26). Alternatively, a zero initial value for that variable can be assumed, with a Dirac delta input accounting for the initial quantity of that variable. So, a general fractional model with two compartments, Eq. (25), was formulated, in which the fractional derivatives can always be written as Caputo derivatives. It is easy to generalise the above approach to a system with an arbitrary number $n$ of compartments as follows $\displaystyle\frac{\mathrm{d}q_{i}(t)}{\mathrm{d}t}=-k_{i0}{\,}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha_{i0}}q_{i}(t)-\sum_{j\neq i}k_{ij}{\,}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha_{ij}}q_{i}(t)+\sum_{j\neq i}k_{ji}{\,}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha_{ji}}q_{j}(t)+u_{i}(t),$ (27) for $i=1,\ldots,n$, where Caputo derivatives have been considered throughout since, as explained above, this is feasible. This system of Eqs. (27) is too general for most purposes, as it allows every compartment to be connected with every other.
Typically the connection matrix would be much sparser than that, with most compartments being connected to just one neighbouring compartment, while only a few “hub” compartments would have more than one connection. The advantage of the described approach of fractionalisation is that each transport process is fractionalised separately, rather than fractionalising each compartment or each equation. Thus, processes of different fractional orders can co-exist, since they have consistent orders when the corresponding terms appear in different equations. Also, it is important to note that dynamical systems of the type (27) do not suffer from pathologies such as violation of mass balance or inconsistencies with the units of the rate constants. As mentioned, FDEs can be easily written in the Laplace domain. In the case of FDEs of the form of Eq. (27), where the fractional orders are $1-\alpha_{ij}$, the Laplace transform of the Caputo derivative becomes $\displaystyle\mathcal{L}\\{{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha_{ij}}q_{i}(t)\\}=s^{1-\alpha_{ij}}Q_{i}(s)-s^{-\alpha_{ij}}q_{i}(0).$ (28) An alternative approach for the fractionalisation of non-commensurate fractional pharmacokinetic systems has been proposed in Popović et al (2013), where the conditions that the pharmacokinetic parameters need to satisfy for mass balance to be preserved have been defined. ### A one-compartment model with constant rate input After the simplest fractional pharmacokinetic model with one compartment and i.v. bolus of Eq. (18), the same model with fractional elimination but with a constant-rate input is considered Dokoumetzidis et al (2010b). Even in this simple one-compartment model, it is necessary to employ the approach of fractionalising each process separately, described above, since the constant rate of infusion is not of fractional order.
That would have been difficult if one had followed the approach of changing the order of the derivative on the left hand side of the ODE; here, however, it is straightforward. The system can be described by the following equation $\displaystyle\frac{\mathrm{d}q(t)}{\mathrm{d}t}=k_{01}-k_{10,f}{\,}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha}q(t),$ (29) with $q(0)=0$ and where $k_{01}$ is a zero-order input rate constant, with units [mass]/[time], $k_{10,f}$ is a rate constant with units [time]$^{-\alpha}$ and $\alpha$ is a fractional order less than $1$. Eq. (29) can be written in the Laplace domain as $\displaystyle sQ(s)-q(0)=\frac{k_{01}}{s}-k_{10,f}(s^{1-\alpha}Q(s)-s^{-\alpha}q(0)).$ (30) Since $q(0)=0$, Eq. (30) can be solved to obtain $\displaystyle Q(s)=\frac{k_{01}s^{\alpha-2}}{s^{\alpha}+k_{10,f}}.$ (31) We apply the following inverse Laplace transform formula (Equation (1.80) in Podlubny (1999), page 21): $\displaystyle\mathcal{L}^{-1}\left\\{\frac{s^{\mu-\nu}}{s^{\mu}+k}\right\\}=t^{\nu-1}\mathcal{E}_{\mu,\nu}(-kt^{\mu}),$ (32) where $\mathcal{E}_{\mu,\nu}$ is the Mittag-Leffler function with two parameters. For $\mu=\alpha$ and $\nu=2$, we obtain $\displaystyle q(t)=k_{01}t\mathcal{E}_{\alpha,2}(-k_{10,f}t^{\alpha}).$ (33) In Theorem 1.4 of Podlubny (1999), the following expansion for the Mittag-Leffler function is proven to hold asymptotically as $|z|\to\infty$: $\displaystyle\mathcal{E}_{\mu,\nu}(z)=-\sum_{k=1}^{p}\frac{z^{-k}}{\Gamma(\nu-\mu k)}+\mathcal{O}(|z|^{-1-p}).$ (34) Applying this formula to Eq. (33) and keeping only the first term of the sum, since the rest are of higher order, it can be shown that the limit of Eq. (33) as $t$ goes to infinity is $\lim_{t\to\infty}q(t)=\infty$ for all $0<\alpha<1$ Dokoumetzidis et al (2010b). The fact that the limit of $q(t)$ in Eq.
(33) diverges as $t$ goes to infinity for $\alpha<1$ means that, unlike the classic case $\alpha=1$, where (33) approaches the steady state $\nicefrac{{k_{01}}}{{k_{10,f}}}$ exponentially, for $\alpha<1$ there is infinite accumulation. In Figure 2 a plot of (33) is shown for $\alpha=0.5$, demonstrating that in the fractional case the amount grows unbounded. In the inset of Figure 2 the same profiles are shown over a 100-fold larger time span, demonstrating the effect of continuous accumulation. The lack of a steady state under constant-rate administration, which results in infinite accumulation, is one of the most important clinical implications of fractional pharmacokinetics. It is clear that this implication extends to repeated doses as well as constant infusion, which is the most common dosing regimen, and can be important in chronic treatments. In order to avoid accumulation, the constant-rate administration must be adjusted to a rate which decreases with time. Indeed, in Eq. (29), if the constant rate of infusion $k_{01}$ is replaced by the decreasing term $f(t)=k_{01}t^{\alpha-1}$ Hennion and Hanert (2013), the solution of the resulting FDE is, instead of Eq. (33), the following $\displaystyle q(t)=k_{01}\Gamma(\alpha)t^{\alpha}\mathcal{E}_{\alpha,\alpha+1}(-k_{10,f}t^{\alpha}).$ (35) The drug quantity $q(t)$ in Eq. (35) converges to the steady state $\Gamma(\alpha)\nicefrac{{k_{01}}}{{k_{10,f}}}$ as time goes to infinity, while for the special case of $\alpha=1$ the steady state is the usual $\nicefrac{{k_{01}}}{{k_{10,f}}}$. Similarly, for the case of repeated doses, if a steady state is intended to be achieved in the presence of fractional elimination of order $\alpha$, then the usual constant rate of administration, e.g., a constant daily dose, needs to be replaced by an appropriately decreasing rate of administration.
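The contrast between constant-rate and decreasing power-law infusion can be sketched numerically with a Grünwald-Letnikov discretisation of Eq. (29). This is our own minimal illustration of Eqs. (33) and (35); the explicit Euler scheme, step size and horizon are arbitrary choices, not code from the cited works:

```python
import math

def infusion_gl(u, alpha=0.5, k10f=1.0, t_end=50.0, h=0.05):
    """Explicit Euler scheme for Eq. (29) with a general input rate u(t).
    The fractional term k10f * D^{1-alpha} q is evaluated with a
    Grunwald-Letnikov sum; q(0) = 0, so GL, RL and Caputo coincide."""
    n = int(round(t_end / h))
    beta = 1.0 - alpha
    c = [1.0]  # GL coefficients of order 1 - alpha, Eq. (6)
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (beta + 1.0) / j))
    q = [0.0]
    for k in range(n):
        gl = sum(c[j] * q[k - j] for j in range(k + 1)) / h ** beta
        q.append(q[k] + h * (u(k * h) - k10f * gl))
    return q

alpha, k01 = 0.5, 1.0
# Constant-rate input, Eq. (33): no steady state, unbounded accumulation.
q_const = infusion_gl(lambda t: k01)
# Decreasing power-law input ~ t^(alpha-1), Eq. (35): approaches the
# steady state Gamma(alpha) * k01 / k10f, here Gamma(1/2) = sqrt(pi).
q_power = infusion_gl(lambda t: k01 * t ** (alpha - 1.0) if t > 0 else 0.0)

print(q_const[len(q_const) // 2], q_const[-1])   # still growing at t = 50
print(q_power[-1], math.gamma(alpha) * k01)      # levelling off
```

In this sketch the constant-rate amount keeps growing, roughly like $t^{1-\alpha}$, while the power-law-rate amount levels off near its steady state.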
As shown in Hennion and Hanert (2013), this can be either the same dose, but given at increasing inter-dose intervals, i.e., $T_{i}=(T_{i-1}^{\alpha}+\alpha\Delta\tau^{\alpha})^{\nicefrac{{1}}{{\alpha}}}$, where $T_{i}$ is the time of the $i$-th dose and $\Delta\tau$ is the inter-dose interval of the corresponding kinetics of order $\alpha=1$; or a decreasing dose given at constant intervals, i.e., $q_{0,i}=\nicefrac{{q_{0}}}{{\alpha}}((i+1)^{\alpha}-i^{\alpha})$. In this way, an ever-decreasing administration rate is implemented, which compensates for the decreasing elimination rate due to the fractional kinetics. Figure 2: Amount-time profiles for $\alpha=0.5$: (solid line) amount of drug versus time according to Eq. (33) with constant infusion, where there is unbounded accumulation of drug; (dashed line) time-profile according to Eq. (35), with power-law infusion, where the amount of drug approaches a steady state. (Inset) The same profiles for a $100$ times longer time span. ### A two-compartment i.v. model Based on the generalised approach for the fractionalisation of compartmental models developed above, which allows mixing different fractional orders, a two-compartment fractional pharmacokinetic model is considered, shown schematically in Figure 3. Compartment 1 (central) represents general circulation and well-perfused tissues, while compartment 2 (peripheral) represents deeper tissues. Three transfer processes (fluxes) are considered: elimination from the central compartment and a mass flux from the central to the peripheral compartment, which are both assumed to follow classic kinetics (order 1), while a flux from the peripheral to the central compartment is assumed to follow slower fractional kinetics, accounting for tissue trapping Petráš and Magin (2011); Dokoumetzidis and Macheras (2009).
The system is formulated mathematically as follows: $\displaystyle\frac{\mathrm{d}q_{1}(t)}{\mathrm{d}t}$ $\displaystyle=-(k_{12}+k_{10})q_{1}(t)+k_{21,f}{\,}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha}q_{2}(t),$ (36a) $\displaystyle\frac{\mathrm{d}q_{2}(t)}{\mathrm{d}t}$ $\displaystyle=k_{12}q_{1}(t)-k_{21,f}{\,}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha}q_{2}(t),$ (36b) where $\alpha<1$ and initial conditions are $q_{1}(0)=q_{1,0}$, $q_{2}(0)=0$, which account for a bolus dose injection and no initial amount in the peripheral compartment. Note that it is allowed to use Caputo derivatives here, since the fractional derivatives involve only terms with $q_{2}$, for which there is no initial amount, which means that Caputo and RL derivatives are identical. Figure 3: A fractional two-compartment PK model with an i.v. bolus. Elimination from the central compartment and a mass flux from the central to the peripheral compartment are both assumed to follow classic kinetics (order 1), while a flux from the peripheral to the central compartment is assumed to follow slower fractional kinetics, accounting for tissue trapping (dashed arrow). Applying the Laplace transform to the above system, the following algebraic equations are obtained: $\displaystyle sQ_{1}(s)-q_{1}(0)$ $\displaystyle=-(k_{12}+k_{10})Q_{1}(s)+k_{21,f}(s^{1-\alpha}Q_{2}(s)-s^{-\alpha}q_{2}(0)),$ (37a) $\displaystyle sQ_{2}(s)-q_{2}(0)$ $\displaystyle=k_{12}Q_{1}(s)-k_{21,f}(s^{1-\alpha}Q_{2}(s)-s^{-\alpha}q_{2}(0)).$ (37b) Solving for $Q_{1}(s)$ and $Q_{2}(s)$ and substituting the initial conditions, $\displaystyle Q_{1}(s)$ $\displaystyle=\frac{q_{1,0}(s^{\alpha}+k_{21,f})}{(s+k_{12}+k_{10})(s^{\alpha}+k_{21,f})-k_{12}k_{21,f}}$ (38a) $\displaystyle Q_{2}(s)$ $\displaystyle=\frac{q_{1,0}s^{\alpha-1}k_{12}}{(s+k_{12}+k_{10})(s^{\alpha}+k_{21,f})-k_{12}k_{21,f}}$ (38b) Using the above expressions for $Q_{1}$ and $Q_{2}$, Eq. (38a) and Eq.
(38b), respectively, can be used to simulate values of $q_{1}(t)$ and $q_{2}(t)$ in the time domain by an NILT method. Note that $q_{1}(t)$ is primarily of interest, since in practice we only have data from this compartment. The output for $q_{1}(t)$ from this numerical solution may be combined with the following equation $\displaystyle c(t)=\frac{q_{1}(t)}{V},$ (39) where $c$ is the drug concentration in the blood and $V$ is the apparent volume of distribution. Eq. (39) can be fitted to pharmacokinetic data in order to estimate the parameters $V$, $k_{10}$, $k_{12}$, $k_{21,f}$ and $\alpha$. The closed-form analytical solution of Eq. (36) can be expressed in terms of an infinite series of generalised Wright functions, as demonstrated in the book by Kilbas et al (2006), but these solutions are hard to implement and apply in practice. Moloni (2015) derived analytically the inverse Laplace transform of $Q_{1}(s)$ as: $\displaystyle q_{1}(t)$ $\displaystyle=q_{1,0}\sum_{n=0}^{\infty}(-1)^{n}k_{21,f}^{n}\sum_{l=0}^{n}\frac{k_{10}^{l}n!}{(n-l)!l!}\Big{(}t^{l+\alpha n}\mathcal{E}_{1,l+\alpha n+1}^{n+1}(-(k_{10}+k_{12})t)$ $\displaystyle+t^{l+\alpha(n+1)}\mathcal{E}_{1,l+\alpha(n+1)+1}^{n+1}(-(k_{10}+k_{12})t)\Big{)},$ (40a) while for $Q_{2}$ the inverse Laplace transform works out to be Moloni (2015): $\displaystyle q_{2}(t)=q_{1,0}k_{12}\sum_{n=0}^{\infty}(-1)^{n}k_{21,f}^{n}\sum_{l=0}^{n}\frac{k_{10}^{l}n!}{(n-l)!l!}t^{l+\alpha n}\mathcal{E}_{1,l+\alpha n+2}^{n+1}(-(k_{10}+k_{12})t).$ (40b) Figure 4: Concentration-time profile of amiodarone in the central compartment. The solid line corresponds to the fractional-order pharmacokinetic model of Eq. (36) with the parameter estimates: $\alpha=0.587$, $k_{10}=1.49\,\mathrm{days}^{-1}$, $k_{12}=2.95\,\mathrm{days}^{-1}$, $k_{21,f}=0.48\,\mathrm{days}^{-\alpha}$ and $\nicefrac{{q_{0}}}{{V}}=4.72\,\nicefrac{\mathrm{ng}}{\mathrm{ml}}$. The circles correspond to experimental measurements.
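Because the series (40) is cumbersome, a direct time-domain simulation of Eq. (36) is often easier. The sketch below is our own explicit Euler / Grünwald-Letnikov discretisation (not the NILT procedure used in the cited analysis), with the amiodarone parameter values quoted in the Figure 4 caption; step size and horizon are arbitrary choices:

```python
import math

def simulate_two_compartment(alpha=0.587, k10=1.49, k12=2.95, k21f=0.48,
                             q10=4.72, t_end=5.0, h=0.005):
    """Euler/GL scheme for the fractional two-compartment model, Eq. (36).
    The GL sum approximates D^{1-alpha} q2; q2 starts at zero, so its
    Caputo, RL and GL derivatives coincide. Times are in days."""
    n = int(round(t_end / h))
    beta = 1.0 - alpha
    c = [1.0]  # GL coefficients of order 1 - alpha, Eq. (6)
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (beta + 1.0) / j))
    q1, q2 = [q10], [0.0]
    for k in range(n):
        gl2 = sum(c[j] * q2[k - j] for j in range(k + 1)) / h ** beta
        q1.append(q1[k] + h * (-(k12 + k10) * q1[k] + k21f * gl2))
        q2.append(q2[k] + h * (k12 * q1[k] - k21f * gl2))
    return q1, q2

q1, q2 = simulate_two_compartment()
# Drug leaves the central compartment quickly at first, while the
# peripheral compartment fills up and then releases slowly.
print(q1[0], q1[-1], max(q2))
```

Note that the scheme conserves mass up to elimination by construction: adding the two update rules, each step removes exactly $h\,k_{10}\,q_{1}$ from the total amount. The history sum makes the cost quadratic in the number of steps; the truncated memory of Eq. (8) can cap it for long horizons.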
An application of the fractional two-compartment model, the system of Eq. (36), to amiodarone PK data was presented in Dokoumetzidis et al (2010b). Amiodarone is an antiarrhythmic drug known for its anomalous, non-exponential pharmacokinetics, which has important clinical implications due to the accumulation pattern of the drug in long-term administration. The fractional two-compartment model of the previous section was used to analyse an amiodarone i.v. data-set which first appeared in Holt et al (1983), and estimates of the model parameters were obtained. Analysis was carried out in MATLAB, while the values for $q_{1}(t)$ of Eq. (40a) were simulated using a NILT algorithm De Hoog et al (1982) from the expression of $Q_{1}(s)$ in the Laplace domain. In Figure 4 the model-based predictions are plotted together with the data, demonstrating good agreement over the 60-day period of this study. The estimated fractional order was $\alpha=0.587$ and the non-exponential character of the curve is evident, while the model follows the data well both for long and for short times, unlike empirical power-laws, which explode at $t=0$. ## Numerical methods for fractional-order systems For simple fractional-order models, as we discussed previously, there may exist closed-form analytical solutions which involve the one-parameter or two-parameter Mittag-Leffler function, or are given by more intricate analytical expressions such as Eq. (40). Interestingly, even for simple analytical solutions such as Eq. (16), the Mittag-Leffler function itself is evaluated by solving an FDE numerically Kaczorek (2011). This fact, without discrediting the value of analytical solutions, necessitates the availability of reliable numerical methods that allow us to simulate and study fractional-order systems.
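For small-to-moderate arguments, the two-parameter Mittag-Leffler function appearing in solutions such as Eqs. (33) and (40) can simply be summed term by term; the dedicated algorithms cited below are needed for large arguments, where the series converges slowly and suffers cancellation. A hedged sketch (our own helper, with the closed form $\mathcal{E}_{1,2}(z)=(e^{z}-1)/z$ as a sanity check):

```python
import math

def ml2(alpha, beta, z, terms=150):
    # Two-parameter Mittag-Leffler function by direct summation:
    # E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity check against a closed form: E_{1,2}(z) = (exp(z) - 1)/z.
z = -0.7
print(ml2(1.0, 2.0, z), (math.exp(z) - 1.0) / z)

# One point of the constant-infusion solution, Eq. (33),
# q(t) = k01 * t * E_{alpha,2}(-k10f * t^alpha), with alpha = 0.5
# and k01 = k10f = 1 (illustrative values):
t = 2.0
q = t * ml2(0.5, 2.0, -t ** 0.5)
print(q)
```

Direct summation like this is only a stopgap; for production use, the evaluation schemes cited in the next paragraph are preferable.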
The availability of accurate discrete-time approximations of the trajectories of such systems is important not only for simulation, but also for the design of open-loop or closed-loop administration strategies based on control theory Sopasakis et al (2015). Time-domain approximations are less parsimonious than $s$-domain ones, but are more suitable for control applications, as we discuss in the next section. Four types of solutions of fractional-order differential equations can be identified: (i) closed-form analytical solutions, (ii) approximations in the Laplace $s$-domain using integer-order rational transfer functions, (iii) numerical approximation schemes in the discrete time domain and (iv) numerical inverse Laplace transforms. Each of these comes with certain advantages as well as limitations; for example, closed-form analytical solutions are often not available, while the inverse Laplace transform requires an explicit closed form for $u_{i}(t)$, so it cannot be used for administration rates that are defined implicitly or are arbitrary signals. Approximations in the $s$-domain are powerful modelling tools, but they fail to provide error bounds on the concentration of the administered drug in the time domain, which are necessary in clinical practice. As regards closed-form analytical solutions, when available they involve special functions such as the Mittag-Leffler function $\mathcal{E}_{\alpha,\beta}(t)=\sum_{k=0}^{\infty}{t^{k}}/{\Gamma(\alpha k+\beta)}$, whose evaluation calls, in turn, for some numerical approximation scheme. Analytical closed-form solutions of fractional differential equations are available for pharmacokinetic systems Verotta (2010).
Typically for the evaluation of this function we resort to solving an FDE numerically Garrappa (2015); Seybold and Hilfer (2009); Gorenflo et al (2002), because the series in the definition of $\mathcal{E}_{\alpha,\beta}(t)$ converges rather slowly and no exact error bounds are available so as to establish meaningful termination criteria. ### Transfer functions and integer-order approximations Fractional-order systems, like their integer-order counterparts, can be modelled in the Laplace $s$-domain via transfer functions, that is, $\displaystyle F_{ij}(s)=\frac{Q_{j}(s)}{U_{i}(s)},$ (41) where $Q_{j}(s)$ and $U_{i}(s)$ are the Laplace transforms of the drug quantity $q_{j}(t)$ and the administration rate $u_{i}(t)$. If the pharmacokinetics is described by linear fractional-order models such as the ones discussed above, $F_{ij}$ will be a fractional-order rational function (a quotient of polynomials with real exponents). Rational approximations aim at approximating such transfer functions, which involve terms of the form $s^{\alpha}$, $\alpha\in\mathbb{R}$, by ordinary transfer functions of the form $\displaystyle\tilde{F}_{ij}(s)=\frac{P_{ij}(s)}{S_{ij}(s)},$ (42) where $P_{ij}$ and $S_{ij}$ are polynomials and the degree of $P_{ij}$ is no larger than the degree of $S_{ij}$. For convenience with notation, we henceforth drop the subscripts $ij$. Padé Approximation: The Padé approximation of order $[m/n]$, $m,n\in\mathbb{N}$, at a point $s_{0}$ is rather popular and leads to rational functions with $\deg P=m$ and $\deg S=n$ Silva et al (2006). Matsuda-Fujii continued fractions approximation: This method consists in interpolating a function $F(s)$, which is treated as a black box, across a set of logarithmically spaced points Matsuda and Fujii (1993).
By letting the selected points be $s_{k}$, $k=0,1,2,\dots$, the approximation is written as the continued fractions expansion $F(s)=\alpha_{0}+\frac{s-s_{0}}{\alpha_{1}+\frac{s-s_{1}}{\alpha_{2}+\frac{s-s_{2}}{\alpha_{3}+\dots}}}$ (43) where $\alpha_{i}=\upsilon_{i}(s_{i})$, $\upsilon_{0}(s)=F(s)$ and $\upsilon_{i+1}(s)=\frac{s-s_{i}}{\upsilon_{i}(s)-\alpha_{i}}$. Oustaloup’s method: Oustaloup’s method is based on the approximation of a function of the form $H(s)=s^{\alpha},$ (44) with $\alpha>0$, by a rational function $\widehat{H}(s)=c_{0}\prod_{k=-N}^{N}\frac{s+\omega_{k}}{s+\omega^{\prime}_{k}},$ (45) within a range of frequencies from $\omega_{b}$ to $\omega_{h}$ Oustaloup et al (2000). The Oustaloup method offers an approximation at frequencies which are geometrically distributed about the characteristic frequency $\omega_{u}=\sqrt{\omega_{b}\omega_{h}}$, the geometric mean of $\omega_{b}$ and $\omega_{h}$. The parameters $\omega_{k}$ and $\omega_{k}^{\prime}$ are determined via the design formulas Petráš (2011) $\displaystyle\omega_{k}^{\prime}=\omega_{b}\left(\frac{\omega_{h}}{\omega_{b}}\right)^{\frac{k+N+0.5(1+\alpha)}{2N+1}},$ (46a) $\displaystyle\omega_{k}=\omega_{b}\left(\frac{\omega_{h}}{\omega_{b}}\right)^{\frac{k+N+0.5(1-\alpha)}{2N+1}},$ (46b) $\displaystyle c_{0}=\left(\frac{\omega_{h}}{\omega_{b}}\right)^{-\frac{\alpha}{2}}\prod_{k=-N}^{N}\frac{\omega_{k}}{\omega_{k}^{\prime}}.$ (46c) Parameters $\omega_{b}$, $\omega_{h}$ and $N$ are design parameters of the Oustaloup method. A few more methods have been proposed in the literature to approximate fractional-order systems by rational transfer functions, such as Charef et al (1992); Carlson and Halijak (1964), as well as data-driven system identification techniques Gao and Liao (2012). ### Time domain approximations Several methods have been proposed which attempt to approximate the solution of a fractional-order initial value problem in the time domain.
Grünwald-Letnikov: This is the method of choice in the discrete time domain where ${{}^{\mathrm{gl}}\mathrm{D}}_{t}^{\alpha}x(t)$ is approximated by its discrete-time finite-history variant $({{{}^{\mathrm{gl}}\Delta}}^{\alpha}_{h,\nu}x)_{k}$ which is proven to have bounded error with respect to $({{{}^{\mathrm{gl}}\Delta}}_{h}^{\alpha}x)_{k}$ Sopasakis and Sarimveis (2017). The boundedness of this approximation error is a distinctive characteristic of this approximation method and makes it suitable for applications in drug administration where it is necessary to guarantee that the drug concentration in the body will remain within certain limits Sopasakis and Sarimveis (2017). As an example, system (15) is approximated (with sampling time $h$) by the discrete-time linear system $\displaystyle\tfrac{1}{h^{\alpha}}\sum_{j=0}^{\min\\{\nu,\left\lfloor\nicefrac{{t}}{{h}}\right\rfloor\\}}c_{j}^{\alpha}q_{k+1-j}=-k_{1,f}q_{k},$ (47) where $q_{k}=q(kh)$. The discretisation of the Grünwald-Letnikov derivative suffers from the fact that the required memory grows without bound with the simulation time. The truncation of the Grünwald-Letnikov series up to some finite memory gives rise to viable solution algorithms which can be elegantly described using triangular strip matrices as described in Podlubny (2000) and are available as a MATLAB toolbox. Approximation by parametrisation: Approximate time-domain solutions can be obtained by assuming a particular parametric form for the solution. Such a method was proposed by Hennion and Hanert Hennion and Hanert (2013) where $q(t)$ is approximated by finite-length expansions of the form $\sum_{j=0}^{N}A_{j}\phi_{j}(t)$, where $\phi_{j}$ are Chebyshev polynomials and $A_{j}$ are constant coefficients. By virtue of the computability of fractional derivatives of $\phi_{j}$, the parametric approximation is plugged into the fractional differential equation which, along with the initial conditions, yields a linear system. 
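The truncated recursion of Eq. (47) for a one-compartment system can be sketched in a few lines; names are illustrative, and the coefficients $c_{j}^{\alpha}$ are generated by the standard recurrence $c_{0}^{\alpha}=1$, $c_{j}^{\alpha}=c_{j-1}^{\alpha}\bigl(1-\tfrac{\alpha+1}{j}\bigr)$:

```python
import numpy as np

def gl_coeffs(alpha, nu):
    """Coefficients c_j^alpha = (-1)^j binom(alpha, j), for j = 0..nu."""
    c = np.empty(nu + 1)
    c[0] = 1.0
    for j in range(1, nu + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def simulate_gl(alpha, k1f, q0, h, steps, nu):
    """March Eq. (47) forward: solve for q_{k+1} at each step, with the
    memory truncated to the last nu samples."""
    c = gl_coeffs(alpha, nu)
    q = np.empty(steps + 1)
    q[0] = q0
    for k in range(steps):
        m = min(nu, k + 1)
        hist = sum(c[j] * q[k + 1 - j] for j in range(1, m + 1))
        # c_0 q_{k+1} + hist = -h^alpha k1f q_k, with c_0 = 1
        q[k + 1] = -(hist + h**alpha * k1f * q[k])
    return q
```

For $\alpha=1$ the recursion collapses to the forward Euler scheme $q_{k+1}=(1-hk_{1,f})q_{k}$, which is a convenient correctness check.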
What is notable in this method is that expansions as short as $N=10$ lead to very low approximation errors. Likewise, other parametric forms can be used. For example, Zainal and Kılıçman (2014) used a Fourier-type expansion and Kumar and Agrawal (2006) used piecewise quadratic functions. Numerical integration methods: Fractional-order initial value problems can be solved with various numerical methods such as the Adams-Bashforth-Moulton predictor-corrector (ABMPC) method Zayernouri and Matzavinos (2016) and fractional linear multi-step methods (FLMMs) Lubich (1986); Garrappa (2015). These methods are only suitable for systems of FDEs in the form $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\gamma}x(t)$ $\displaystyle=f(t,x(t)),$ (48a) $\displaystyle x^{(k)}(0)$ $\displaystyle=x_{0,k},\,k=0,\dots,m-1$ (48b) where $\gamma$ is a rational number and $m=\lceil\gamma\rceil$. Typically in pharmacokinetics we encounter cases with $0<\gamma\leq 1$, therefore, we will have $m=1$. Let us give an example of how this applies to the two-compartment model we presented above. In order to bring Eq. (36) into the above form, we need to find a rational approximation of the two derivative orders, $1-\alpha$ and $1$. If we can find a satisfactory rational approximation $1-\alpha\approxeq p/q$, then the first-order derivative follows trivially. Now, Eq. 
(36) can be written as $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\gamma}x_{0}$ $\displaystyle=x_{1}$ (49a) $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\gamma}x_{1}$ $\displaystyle=x_{2}$ (49b) $\displaystyle\ \,\vdots$ $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\gamma}x_{q-1}$ $\displaystyle=-(k_{12}+k_{10})x_{0}{+}k_{21,f}x_{q+p}{+}u$ (49c) $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\gamma}x_{q}$ $\displaystyle=x_{q+1}$ (49d) $\displaystyle\ \,\vdots$ $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\gamma}x_{2q-1}$ $\displaystyle=k_{12}x_{0}-k_{21,f}x_{q+p}$ (49e) with initial conditions $x_{0}(0)=q_{1}(0)$, $x_{q}(0)=q_{2}(0)$ and $x_{i}(0)=0$ for $i\notin\\{0,q\\}$, and $\gamma=1/q$. This system is in fact a linear fractional-order system for which closed-form analytical solutions are available (Kaczorek, 2011, Thm. 2.5). In particular, Eq. (49) can be written concisely in the form $\displaystyle{}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{\gamma}\mathbf{x}(t)$ $\displaystyle=A\mathbf{x}(t)+Bu(t)$ (50a) $\displaystyle\mathbf{x}(0)$ $\displaystyle=\mathbf{x}_{0},$ (50b) where $\mathbf{x}=(x_{0},x_{1},\ldots,x_{2q-1})$ and $\mathbf{x}_{0}=(x_{0}(0),\ldots,x_{2q-1}(0))$ and matrices $A$ and $B$ can be readily constructed from the above dynamical equations. This fractional-order initial value problem has the closed-form analytical solution (Kaczorek, 2011, Thm. 2.5) $\displaystyle\mathbf{x}(t)=\mathcal{E}_{\gamma}(At^{\gamma})\mathbf{x}_{0}+\int_{0}^{t}\sum_{k=0}^{\infty}\frac{A^{k}(t-\tau)^{k\gamma}}{\Gamma((k+1)\gamma)}Bu(\tau)\mathrm{d}\tau.$ (51) Evidently, the inherent complexity of this closed-form analytical solution — which would require the evaluation of slowly-converging series — motivates and necessitates the use of numerical methods. The number of states of system (49) is $2q$, therefore, the rational approximation should aim at a small $q$. 
Yet another reason to choose small $q$ is that small values of $\gamma$ render the system hard to simulate numerically because they increase the effect of and dependence on the memory. Adams-Bashforth-Moulton predictor-corrector (ABMPC): Methods of the ABMPC type have been generalised to solve fractional-order systems. The key idea is to evaluate $({{{}_{0}^{\mathrm{rl}}\mathrm{I}}}^{\gamma}f)(t,x(t))$ by approximating $f$ with appropriately selected polynomials. Solutions of Eq. (48) satisfy the following integral representation $\displaystyle x(t)=\sum_{k=0}^{m-1}x_{0,k}\frac{t^{k}}{k!}+({{{}_{0}^{\mathrm{rl}}\mathrm{I}}}^{\gamma}f)(t,x(t)),$ (52) where the first term on the right hand side will be denoted by $T_{m-1}(t)$. The integral on the right hand side of the previous equation can be approximated, using a uniformly spaced grid $t_{n}=nh$, by $\frac{h^{\gamma}}{\Gamma(\gamma+2)}\sum_{j=0}^{n+1}a_{j,n+1}f(t_{j})$ for suitable coefficients $a_{j,n+1}$ Diethelm et al (2002). The numerical approximation of the solution of Eq. (48) is $\displaystyle x(t_{n+1})$ $\displaystyle=T_{m-1}(t_{n+1})+\tfrac{h^{\gamma}}{\Gamma(\gamma+2)}f(t_{n+1},x_{P}(t_{n+1}))+\tfrac{h^{\gamma}}{\Gamma(\gamma+2)}\sum_{j=0}^{n}a_{j,n+1}f(t_{j},x(t_{j})).$ (53a) The equation above is usually referred to as the corrector formula and $x_{P}(t_{n+1})$ is given by the predictor formula $\displaystyle x_{P}(t_{n+1})=T_{m-1}(t_{n+1})$ $\displaystyle+\tfrac{1}{\Gamma(\gamma)}\sum_{j=0}^{n}b_{j,n+1}f(t_{j},x(t_{j})).$ (53b) Unfortunately, the convergence error of ABMPC when $0<\gamma<1$ is $\mathcal{O}(h^{1+\gamma})$, therefore, a rather small step size $h$ is required to attain a reasonable approximation error. A modification of the basic predictor-corrector method with more favourable computational cost is provided in Garrappa (2010) for which the MATLAB implementation fde12 is available. 
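A compact scalar implementation of the ABMPC (PECE) scheme along the lines of Diethelm et al (2002) is sketched below for $0<\gamma\leq 1$ (so $T_{m-1}(t)=x_{0}$); the weight formulas are the standard ones from that reference and the function name is illustrative:

```python
import math
import numpy as np

def abmpc(f, gamma, x0, h, n_steps):
    """Predictor-corrector (PECE) solver for D^gamma x = f(t, x), x(0) = x0,
    with 0 < gamma <= 1, on the uniform grid t_n = n h."""
    g1, g2 = math.gamma(gamma + 1), math.gamma(gamma + 2)
    t = h * np.arange(n_steps + 1)
    x = np.empty(n_steps + 1)
    fk = np.empty(n_steps + 1)  # history of f(t_j, x_j)
    x[0] = x0
    fk[0] = f(t[0], x0)
    for n in range(n_steps):
        j = np.arange(n + 1)
        # predictor: rectangle-rule weights b_{j,n+1}
        b = (n + 1 - j) ** gamma - (n - j) ** gamma
        xP = x0 + h**gamma / g1 * np.dot(b, fk[: n + 1])
        # corrector: product-trapezoid weights a_{j,n+1}
        a = ((n - j + 2) ** (gamma + 1) + (n - j) ** (gamma + 1)
             - 2.0 * (n - j + 1) ** (gamma + 1))
        a[0] = n ** (gamma + 1) - (n - gamma) * (n + 1) ** gamma
        x[n + 1] = x0 + h**gamma / g2 * (f(t[n + 1], xP) + np.dot(a, fk[: n + 1]))
        fk[n + 1] = f(t[n + 1], x[n + 1])
    return t, x
```

For $\gamma=1$ the scheme reduces to the classical Euler-predictor/trapezoid-corrector pair, so the exponential decay of $x^{\prime}=-x$ is reproduced to $\mathcal{O}(h^{2})$, which makes for an easy test.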
Lubich’s method: Fractional linear multi-step methods (FLMM) Lubich (1986) are a generalisation of linear multi-step methods (LMM) for ordinary differential equations. The idea is to approximate the Riemann-Liouville fractional-order integral operator (1) with a discrete convolution, known as a convolution quadrature, as $\displaystyle({{}_{0}^{\mathrm{rl}}\mathrm{I}}_{h}^{\gamma}f)(t)\approxeq h^{\gamma}\sum_{j=0}^{n}\omega_{n-j}f(t_{j})+h^{\gamma}\sum_{j=0}^{s}w_{n,j}f(t_{j}),$ (54) for $t_{j}=jh$, where ($w_{n,j}$) and ($\omega_{n}$) are independent of $h$. Surprisingly, the latter can be constructed from any linear multistep method for arbitrary fractional order $\gamma$ Lubich (1986). FLMMs constructed this way inherit the convergence rate and at least the stability properties of the original LMM Lubich (1985). A MATLAB implementation of Lubich’s method, namely flmm2, which is based on Garrappa (2015), is available. However, these methods do not perform well for small $\gamma$. According to Herceg et al (2017), for the case of amiodarone, values of $\gamma$ smaller than $0.1$ give poor results and often fail to converge, while with the cruder approximation $\gamma=1/5$, flmm2 was shown to outperform fde12 in terms of accuracy and stability at larger step sizes $h$. ### Numerical inverse Laplace The inverse Laplace transform of a transfer function $F(s)$ — on the Laplace $s$-domain — is given by the complex integral $\displaystyle f(t)=\mathcal{L}^{-1}\\{F\\}(t)=\lim_{T\rightarrow\infty}\frac{1}{2\pi i}\int_{\sigma-iT}^{\sigma+iT}e^{st}F(s)\mathrm{d}s,$ (55) where $\sigma$ is any real number larger than the real parts of the poles of $F$. The numerical inverse Laplace (NILT) approach aims at approximating the above integral numerically. Such numerical methods also apply to cases where $F$ is not rational, as is the case with fractional-order systems. 
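The simplest convolution quadrature of the form (54), generated by the backward Euler method, can be sketched as follows. The starting-correction weights $w_{n,j}$ are omitted for brevity (so this is only first-order accurate), and the names are illustrative:

```python
import numpy as np

def cq_weights(gamma, n):
    """Convolution weights omega_j: the Taylor coefficients of (1 - z)^(-gamma),
    via the recurrence omega_0 = 1, omega_j = omega_{j-1} (j - 1 + gamma) / j."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (j - 1 + gamma) / j
    return w

def frac_integral(f_vals, gamma, h):
    """Approximate the Riemann-Liouville integral (I^gamma f)(t_n) on the grid
    t_n = n h by the discrete convolution h^gamma sum_j omega_{n-j} f(t_j)."""
    n = len(f_vals) - 1
    w = cq_weights(gamma, n)
    return h**gamma * np.array([np.dot(w[: k + 1][::-1], f_vals[: k + 1])
                                for k in range(n + 1)])
```

As a check, for $f\equiv 1$ the exact value is $t^{\gamma}/\Gamma(\gamma+1)$, which the quadrature reproduces to a fraction of a percent on a fine grid.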
In the approach of De Hoog et al., the integral which appears in Eq. (55) is cast as a Fourier transform which can then be approximated by a Fourier series followed by standard numerical integration (e.g., using the trapezoid rule) De Hoog et al (1982). Though quite accurate for a broad class of functions, these methods are typically computationally demanding. An implementation of the above method is available online Hollenbeck (1998). In special cases, analytical inversion can be carried out by means of the Mittag-Leffler function Lin and Lu (2013); Kexue and Jigen (2011). A somewhat different approach is taken by Valsa and Brančik (1998), where the authors approximate $e^{st}$, the kernel of the inverse Laplace transformation, by a function of the form $\tfrac{e^{st}}{1+e^{-2a}e^{2st}}$ and choose $a$ appropriately so as to achieve an accurate inversion. In general, numerical inversion methods can achieve high precision, but they are not suitable for control design purposes, especially for optimal control problems. Moreover, there is not one single method that gives the most accurate inversion for all types of functions. An overview of the most popular inversion methods used in engineering practice is given in Hassanzadeh and Pooladi-Darvish (2007). ## Drug administration for fractional pharmacokinetics Approaches for drug administration scheduling can be classified according to the desired objective into methods where (i) we aim at stabilising the concentration in certain organs or tissues towards certain desired values (set points) Sopasakis et al (2014) and (ii) the drug concentration must remain within certain bounds which define a therapeutic window Sopasakis et al (2015). 
Another level of classification concerns the mode of administration, where we identify (i) continuous administration by, for instance, intravenous continuous infusion, (ii) bolus administration, where the medicine is administered at discrete time instants Sopasakis et al (2015); Rivadeneira et al (2015) and (iii) oral administration where the drug is administered both at discrete times and from a discrete set of dosages (e.g., tablets) Sopasakis and Sarimveis (2012). Drug administration can also be classified according to the way in which decisions are made regarding how often, at what rate and/or in what amount the drug needs to be administered to the patient. We can identify (i) open-loop administration policies where the patient follows a prescribed dosing scheme without adjusting the dosage and (ii) closed-loop administration where the dosage is adjusted according to the progress of the therapy Sopasakis et al (2014). The latter is suitable mainly for hospitalised patients who are under monitoring and where drug concentration measurements are available or the effect of the drug can be quantified. Such is the case of controlled anaesthesia Krieger and Pistikopoulos (2014). However, applications of closed-loop policies extend beyond hospitals, such as in the case of glucose control in diabetes Favero et al (2014). Despite the fact that optimisation-based methods are well-established in numerous scientific disciplines along with their demonstrated advantages over other approaches, to date, empirical approaches remain popular Kannikeswaran et al (2016); Fukudo et al (2017); De Ocenda et al (2016); Savic et al (2017). In the following sections we propose decision-making approaches for optimal drug administration, using as a benchmark a fractional two-compartment model with, arbitrarily, the parameter values of amiodarone. We focus on the methodological framework rather than devising an administration scheme for a particular compound. 
We discuss three important topics in the optimal administration of compounds governed by fractional pharmacokinetics. First, we formulate the drug administration problem as an optimal control problem through which we prescribe optimal therapeutic courses to individuals with known pharmacokinetic parameters. Then, we design optimal administration strategies for populations of patients whose pharmacokinetic parameters are unknown or inexactly known. Lastly, we present an advanced closed-loop controlled administration methodology based on model predictive control. ### Individualised administration scheduling In this section we present an optimal drug administration scheme based on the two-compartment fractional model of Eq. (36), assuming that the drug is administered continuously into the central compartment. We will show that the same model and the same approach can be modified to form the basis for bolus administration. In order to state the optimal control problem for optimal drug administration we first need to discretise the two-compartment model (36) with a (small) sampling time $t_{c}$ $\displaystyle t_{c}^{-1}(x_{k+1}-x_{k})=Ax_{k}+F\,{{}^{\mathrm{gl}}\Delta}^{1-\alpha}_{t_{c},\nu}x_{k}+Bu_{k}$ (56) where $x_{k}=\left[q_{1}(kt_{c})\,\,q_{2}(kt_{c})\right]^{\prime}$ and $u_{k}$ is the administration rate at the discrete time $k$. The left hand side of (56) corresponds to the forward Euler approximation of the first-order derivative, and we shall refer to $t_{c}=10^{-2}\,\mathrm{days}$ as the control sampling time. On the right hand side of (56), ${{}^{\mathrm{gl}}\Delta}^{1-\alpha}_{t_{c},\nu}$ has replaced the fractional-order operator ${}_{0}^{\mathrm{c}}\mathrm{D}_{t}^{1-\alpha}$. 
Matrices $A,F$ and $B$ are $\displaystyle A=\begin{bmatrix}-(k_{12}+k_{10})&0\\\ k_{21,f}&0\\\ \end{bmatrix},F=\begin{bmatrix}0&k_{21,f}\\\ 0&-k_{21,f}\\\ \end{bmatrix},B=\begin{bmatrix}1\\\ 0\\\ \end{bmatrix}.$ (57) The discrete-time dynamic equations of the system can now be stated as $\displaystyle x_{k+1}=x_{k}+t_{c}\bigg{(}Ax_{k}+\frac{F}{t_{c}^{1-\alpha}}\sum_{j=0}^{\nu}c_{j}^{1-\alpha}x_{k-j}+Bu_{k}\bigg{)},$ (58) where $c_{j}^{\alpha}=(-1)^{j}\binom{\alpha}{j}$. By augmenting the system with past values as $\mathbf{x}_{k}=(x_{k},x_{k-1},\dots,x_{k-\nu})$ we can rewrite Eq. (58) as a finite-dimensional linear system $\displaystyle\mathbf{x}_{k+1}=\hat{A}\mathbf{x}_{k}+\hat{B}u_{k}.$ (59) Matrices $\hat{A}$ and $\hat{B}$ are straightforward to derive and are given in Sopasakis and Sarimveis (2017). The therapeutic session will last for $N_{d}=Nt_{c}=7\,\mathrm{days}$ in total, where $N$ is called the prediction horizon. Since it is not realistic to administer the drug to the patient too frequently, we assume that the patient is to receive their treatment every $t_{d}=0.5\,\mathrm{days}$. The administration schedule must ensure that the concentration of drug in all compartments never exceeds the minimum toxic concentration limits while tracking the prescribed reference value as closely as possible. To this aim we formulate the constrained optimal control problem Bertsekas (2017) $\displaystyle\min_{u_{0},\ldots,u_{N_{d}-1}}J$ $\displaystyle=\sum_{k=0}^{\nicefrac{{N_{d}}}{{t_{c}}}+1}(x_{\mathrm{ref},k}-x_{k})^{\prime}Q(x_{\mathrm{ref},k}-x_{k})$ (60a) subject to the constraints $\displaystyle\mathbf{x}_{k+1}$ $\displaystyle=\hat{A}\mathbf{x}_{k}+\hat{B}u_{j},\text{ for }\ kt_{c}=jt_{d}$ (60b) $\displaystyle\mathbf{x}_{k+1}$ $\displaystyle=\hat{A}\mathbf{x}_{k},\text{ otherwise}$ (60c) $\displaystyle 0$ $\displaystyle\leq x_{k}\leq 0.5$ (60d) $\displaystyle 0$ $\displaystyle\leq u_{j}\leq 0.5$ (60e) for $k=0,\ldots,N$; $j=0,\ldots,N_{d}-1$. 
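The assembly of $\hat{A}$ and $\hat{B}$ from the recursion (58) can be sketched as follows, stacking the state as $(x_{k},x_{k-1},\dots,x_{k-\nu})$. This follows the construction referenced to Sopasakis and Sarimveis (2017) only in spirit; all names are illustrative:

```python
import numpy as np

def gl_coeffs(order, nu):
    """Grünwald-Letnikov coefficients c_j = (-1)^j binom(order, j), j = 0..nu."""
    c = np.empty(nu + 1)
    c[0] = 1.0
    for j in range(1, nu + 1):
        c[j] = c[j - 1] * (1.0 - (order + 1.0) / j)
    return c

def augment(A, F, B, alpha, tc, nu):
    """Assemble A_hat, B_hat of Eq. (59) from Eq. (58), with augmented state
    (x_k, x_{k-1}, ..., x_{k-nu}); a structural sketch."""
    n = A.shape[0]
    m = nu + 1                                   # number of stacked blocks
    c = gl_coeffs(1.0 - alpha, nu)
    Ah = np.zeros((n * m, n * m))
    Ah[:n, :n] = np.eye(n) + tc * A + tc**alpha * c[0] * F
    for j in range(1, m):                        # memory terms tc^alpha c_j F x_{k-j}
        Ah[:n, j * n:(j + 1) * n] = tc**alpha * c[j] * F
    Ah[n:, :-n] = np.eye(n * (m - 1))            # shift the history down one slot
    Bh = np.zeros((n * m, 1))
    Bh[:n, 0] = tc * B.ravel()
    return Ah, Bh
```

Propagating the augmented system reproduces the direct evaluation of the recursion (58) step for step, which is a useful consistency check on the block layout.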
In the above formulation $x_{\mathrm{ref},k}$ is the desired drug concentration at time $k$ and operator ′ denotes vector transposition. Any deviation from the set point is penalised by a weight matrix $Q$, where we chose $Q=\mathrm{diag}([0\;1])$. Note that we are tracking only the second state. Our underlying GL model has a truncated memory of $t_{c}\nu=5\,\mathrm{days}$. The optimal administration rates are denoted by $u^{\star}_{k}$, for $k=0,\dots,N_{d}-1$, and correspond to dosages administered intravenously at times $kt_{d}$. In the optimal control formulation we have implicitly assumed that $t_{d}$ is an integer multiple of $t_{c}$, which is not restrictive since $t_{c}$ can be chosen arbitrarily. In Figure 5 we present the pharmacokinetic profile of a patient following the prescribed optimal administration course. Finally, the problem in Eq. (60) is a standard quadratic programming problem with simple equality and inequality constraints that can be solved at low computational cost. Such problems can be easily formulated in YALMIP Löfberg (2004) or ForBES Stella et al (2017) for MATLAB or CVXPY Diamond and Boyd (2016) for Python. Figure 5: Drug administration scheduling via optimal control. The behaviour of the controlled system was simulated using the ABMPC method (dashed red line). The predicted pharmacokinetic profile using the Grünwald-Letnikov approximation (solid blue line) and fde12 are indistinguishable. The set-point on $q_{2}$ (dashed orange line) and the maximum allowed concentration (black solid lines) are shown in the figure. The optimal control framework offers great flexibility in using arbitrary system dynamics, constraints on the administration rate and the drug concentration in the various compartments, and cost functions which encode the administration objectives. The quadratic function which we proposed in Eq. (60) is certainly not the only admissible choice. For instance, other possible choices are 1. 1. 
$J=\sum_{k=0}^{\nicefrac{{N_{d}}}{{t_{c}}}+1}(x_{\mathrm{ref},k}-x_{k})^{\prime}Q(x_{\mathrm{ref},k}-x_{k})+\beta u_{k}^{2}$, where we also penalise the total amount of drug that is administered throughout the prediction horizon, 2. 2. $J=\sum_{k=0}^{\nicefrac{{N_{d}}}{{t_{c}}}+1}\mathrm{dist}^{2}(x_{k},[\underline{x},\bar{x}])$, where $[\underline{x},\bar{x}]$ is the therapeutic window and $\mathrm{dist}^{2}(\cdot,[\underline{x},\bar{x}])$ is the squared distance function defined as $\mathrm{dist}^{2}(x_{k},[\underline{x},\bar{x}])=\min_{z_{k}}\\{\|x_{k}-z_{k}\|^{2};\ \underline{x}\leq z_{k}\leq\bar{x}\\}$. ### Administration scheduling for populations In the previous section, we assumed that the pharmacokinetic parameters are known, aiming at an individualised dosing regimen. When designing an administration schedule for a population of patients — without the ability to monitor the distribution of the drug or the progress of the therapy — $J$ becomes a function of the pharmacokinetic parameters (in our case $k_{10}$, $k_{12}$, $k_{21,f}$ and $\alpha$). As a result, $J$ is a random quantity which follows an — often unknown or inexactly known — probability distribution. In order to formulate an optimal control problem we now need to extract a characteristic value out of the random quantity $J$. There are two popular ways to do so, leading to different problem statements. First, we may consider the maximum value of $J$, $\max J$, over all possible values of $k_{10}$, $k_{12}$, $k_{21,f}$ and $\alpha$. This leads to robust or minimax problem formulations which consist in solving Bertsekas (2017) $\displaystyle\min_{u_{0},\ldots,u_{N_{d}-1}}\max_{k_{10},k_{12},k_{21,f},\alpha}J(u_{0},\ldots,u_{N_{d}-1},k_{10},k_{12},k_{21,f},\alpha),$ (61) subject to the system dynamics and constraints. In Eq. (61), the maximum is taken with respect to the worst-case values of $k_{10}$, $k_{12}$, $k_{21,f}$ and $\alpha$ within their ranges between minimum and maximum values. 
Evidently, the minimax approach does not make use of any probabilistic or statistical information which is typically available for the model parameters. As a result, it is likely to be overly conservative and lead to poor performance. Figure 6: Drug administration to a population of $100$ (randomly generated) individuals by stochastic optimal control. The thin yellow lines correspond to the predicted individual responses to the administration and the black lines represent the average response. On the other hand, in the stochastic approach we minimise the expected cost $\mathbb{E}J$ Bertsekas and Shreve (1996) $\displaystyle\min_{u_{0},\ldots,u_{N_{d}-1}}\mathbb{E}_{k_{10},k_{12},k_{21,f},\alpha}\ J(u_{0},\ldots,u_{N_{d}-1},k_{10},k_{12},k_{21,f},\alpha),$ (62) subject to the system dynamics and constraints. Open-loop stochastic optimal control methodologies have been proposed for the optimal design of regimens under uncertainty for classical integer-order pharmacokinetics Schumitzky et al (1983); Lago (1992); Bayard et al (1994), yet, to the best of our knowledge, no studies have been conducted on the effectiveness of stochastic optimal control for fractional pharmacodynamics. The expectation in Eq. (62) can be evaluated empirically given a data-set of estimated pharmacokinetic parameters $(k_{10}^{(i)},k_{12}^{(i)},k_{21,f}^{(i)},\alpha^{(i)})_{i=1}^{L}$ by minimising the sample average of $J$, that is $\displaystyle\min_{u_{0},\ldots,u_{N_{d}-1}}\tfrac{1}{L}\sum_{i=1}^{L}J(u_{0},\ldots,u_{N_{d}-1},k_{10}^{(i)},k_{12}^{(i)},k_{21,f}^{(i)},\alpha^{(i)}).$ (63) The effect of $L$ on the accuracy of the derived optimal decisions is addressed in Campi et al (2009) where probabilistic bounds are provided. Simulation results with the stochastic optimal control methodology on a population of $100$ patients using $L=20$ are shown in Figure 6. 
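The sample-average approximation (63) is straightforward to evaluate numerically. A toy sketch: the one-parameter cost below is merely a stand-in for $J$ (it is not the amiodarone model), and the grid search only illustrates choosing one common dose for a sampled population:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_average_cost(J, u, params):
    """Sample-average approximation of E[J], as in Eq. (63):
    (1/L) sum_i J(u, theta_i) over L sampled parameter sets theta_i."""
    return sum(J(u, p) for p in params) / len(params)

def J_toy(u, k):
    """Illustrative stand-in cost: squared steady-state tracking error for a
    one-parameter model with clearance k (target concentration 1)."""
    return (u / k - 1.0) ** 2

samples = rng.uniform(0.5, 1.5, size=20)    # L = 20 draws of the clearance k
grid = np.linspace(0.0, 2.0, 201)           # candidate common dosing rates
costs = [sample_average_cost(J_toy, u, samples) for u in grid]
u_star = grid[int(np.argmin(costs))]        # SAA-optimal common dose
```

The minimiser lands between the individual optima of the sampled patients, illustrating how a single dosing policy trades off performance across the population.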
Note that in both the minimax and the stochastic approach, a common therapeutic course $u_{0},\ldots,u_{N_{d}-1}$ is sought for the whole population of patients. In stochastic optimal control it is customary to recast the state constraints in a probabilistic context, requiring that they be satisfied with a certain probability, that is $\displaystyle\mathrm{P}[x_{k}\leq x_{\max}]\geq 1-\beta,$ (64) where $\beta\in[0,1]$ is a desired level of confidence. Such constraints are known as chance constraints or probabilistic constraints. Such problems, unless restrictive assumptions are imposed on the distribution of $x_{k}$, lead to nonconvex optimisation problems and solutions can be approximated by Monte Carlo sampling methods (Shapiro et al, 2009, Sec. 5.7.1) or by means of convex relaxations known as ambiguous chance constraints where (64) is replaced by $\mathrm{AV@R}_{\beta}[x_{k}]\leq x_{\max}$, where $\mathrm{AV@R}_{\beta}[x_{k}]$ is the average value-at-risk of $x_{k}$ at level $\beta$ (Shapiro et al, 2009, Sec. 6.3.4). The minimax and the expectation operator are the two “extreme” choices with the former relating to complete lack of probabilistic information and the latter coming with the assumption of exact knowledge of the underlying distribution of pharmacokinetic parameters. Other operators can be chosen to account for imperfect knowledge of that distribution, thereby bridging the gap between minimax and stochastic optimal control. Suitable operators for optimal control purposes are the coherent risk measures, which give rise to risk-averse optimal control, the state of the art in optimisation under uncertainty Herceg et al (2017). ### Model Predictive Control Model predictive control fuses an optimal control open-loop decision making process with closed-loop control when feedback is available. 
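The chance constraint (64) and its AV@R relaxation mentioned above can be checked empirically on sampled concentrations. A sketch, where the empirical AV@R is taken as the mean of the worst $\beta$-fraction of the samples (names illustrative):

```python
import numpy as np

def empirical_violation_ok(xs, x_max, beta):
    """Empirical check of the chance constraint (64): the fraction of sampled
    states exceeding x_max must not exceed beta."""
    return np.mean(np.asarray(xs) > x_max) <= beta

def avar(xs, beta):
    """Empirical average value-at-risk at level beta: the mean of the largest
    beta-fraction of the samples. Since AV@R upper-bounds the (1-beta)-quantile,
    enforcing avar(xs, beta) <= x_max implies the chance constraint."""
    xs = np.sort(np.asarray(xs))
    m = max(1, int(np.ceil(beta * len(xs))))
    return float(np.mean(xs[-m:]))
```

The conservatism of the relaxation is visible directly: the AV@R of a sample always dominates the corresponding empirical quantile.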
Following its notable success and wide adoption in industry, MPC has been proposed for several cases of drug administration for drugs that follow integer-order kinetics such as erythropoietin Gaweda et al (2008), lithium ions Sopasakis et al (2015), propofol for anaesthesia Ionescu et al (2008) and, most predominantly, insulin administration to diabetic patients Schaller et al (2016); Hovorka et al (2004); Toffanin et al (2013); Parker et al (1999). In MPC, at every time instant, we solve an optimal control problem which produces a plan of future drug dosages over a finite prediction horizon by minimising a performance index $J$. The future distribution of the drug in the organism is predicted with a dynamical model such as the ones based on the Grünwald-Letnikov discretisation presented in the previous sections. Out of the planned sequence of dosages, the first one is actually administered to the patient. At the next time instant, the drug concentration is measured and the same procedure is repeated in a fashion known as receding horizon control Rawlings and Mayne (2009). A fractional variant of the classical PID controller, namely a PIλDμ fractional-order controller, has been proposed in Sopasakis and Sarimveis (2014) for controlled drug administration. However, the comparative advantage of MPC is that it can inherently account for administration rate, drug amount and drug concentration constraints which are of capital importance. In Sopasakis et al (2015); Sopasakis and Sarimveis (2017), the truncated Grünwald-Letnikov approximation serves as the basis for MPC formulations with guaranteed asymptotic stability and satisfaction of the constraints at all time instants. We also single out the impulsive MPC proposed in Sopasakis et al (2015), which is particularly well-suited to applications of bolus administration, where the medication is injected (e.g., intravenously), thus abruptly increasing the drug concentration. 
In such cases, it is not possible to achieve equilibrium, but instead the objective becomes to keep the concentration within certain therapeutic limits. Model predictive control can further be combined with stochastic Patrinos et al (2014); Sopasakis et al (2017) and risk-averse Herceg et al (2017) optimal control aiming at a highly robust administration that is resilient to the inexact knowledge of the pharmacokinetic parameters of the patient as well as potential time-varying phenomena (e.g., change in the PK/PD parameter values due to illness, drug-drug interactions and alterations due to other external influences). Furthermore, in feedback control scenarios, state information is often only partially available. For example, when a multi-compartment model is used, concentration measurement from only a single compartment is available. In particular, in the case of multi-compartment physiologically based models, we would normally only have information from the blood compartment. In such cases, a state observer can be used to produce estimates of concentrations or amounts of drug in compartments where we do not have access. A state observer is a dynamical system which, using observations of (i) those concentrations that can be measured and (ii) the amount or rate of administered drug at every time instant, produces estimates $\hat{q}_{i}$ of those amounts/concentrations of drug to which we do not have access. State observers are designed so that $\hat{q}_{i}(t)-q_{i}(t)\to 0$ as $t\to\infty$, that is, although at the beginning (at $t=0$) we only have an estimate $\hat{q}_{i}(0)$ of $q_{i}(0)$, we shall eventually obtain better estimates which converge to the actual concentrations. State observers, such as the Kalman filter, have been successfully employed in the administration of compounds following integer-order dynamics Sopasakis et al (2014) including applications of the artificial pancreas Patek et al (2007); Wang et al (2014). 
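A minimal Luenberger observer along the lines described above can be sketched as follows; the discrete-time model, the measured compartment and the gain $L$ are all illustrative, and $L$ must render $A-LC$ stable for the estimation error to vanish:

```python
import numpy as np

def simulate_observer(A, B, C, L, x0, xhat0, u_seq):
    """Run the plant x_{k+1} = A x_k + B u_k alongside the observer
    xhat_{k+1} = A xhat_k + B u_k + L (y_k - C xhat_k), where y_k = C x_k is
    the single measured compartment; returns the estimation-error norms."""
    x = np.asarray(x0, dtype=float)
    xh = np.asarray(xhat0, dtype=float)
    errs = [np.linalg.norm(x - xh)]
    for u in u_seq:
        y = C @ x                              # only this output is measured
        xh = A @ xh + B * u + L * (y - C @ xh) # correct with the innovation
        x = A @ x + B * u
        errs.append(np.linalg.norm(x - xh))
    return np.array(errs)
```

The estimation error obeys $e_{k+1}=(A-LC)e_{k}$ regardless of the input, so whenever $A-LC$ is stable the error norms decay geometrically to zero.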
Observers are, furthermore, employed to filter out measurement errors and modelling errors. When the system model is inexact, the system dynamics in Eq. (59) becomes $\displaystyle\mathbf{x}_{k+1}=\hat{A}\mathbf{x}_{k}+\hat{B}u_{k}+d_{k},$ (65) where $d_{k}$ serves as a model mismatch term. We may then assume that $d_{k}$ itself follows some dynamics, often as trivial as $d_{k+1}=d_{k}$. We can then formulate the following linear dynamical system with state $(\mathbf{x}_{k},d_{k})$ $\displaystyle\begin{bmatrix}\mathbf{x}_{k+1}\\\ d_{k+1}\end{bmatrix}=\begin{bmatrix}\hat{A}&I\\\ 0&I\end{bmatrix}\begin{bmatrix}\mathbf{x}_{k}\\\ d_{k}\end{bmatrix}+\begin{bmatrix}\hat{B}\\\ 0\end{bmatrix}u_{k},$ (66) and build a state observer for $(\mathbf{x}_{k},d_{k})$ jointly. The estimates $(\hat{\mathbf{x}}_{k},\hat{d}_{k})$ are then fed to the MPC. This leads to an MPC formulation known as offset-free MPC Sopasakis et al (2014) that can control systems with biased estimates of the pharmacokinetic parameters. Model predictive control is a highly appealing approach because it can account for imperfect knowledge of the pharmacokinetic parameters of the patient, measurement noise, partial availability of measurements and constraints, while it decides the amount of administered drug by optimising a cost function, thus leading to excellent performance. Moreover, the tuning of MPC is more straightforward than in other control approaches such as PID or fractional PID. In MPC the main tuning knob is the cost function $J$ of the involved optimal control problem, which directly encodes the control objectives. ## Conclusions Fractional kinetics offers an elegant description of anomalous kinetics, i.e., non-exponential terminal phases, the presence of which has been extensively acknowledged in the pharmaceutical literature. The approach offers simplicity and a valid scientific basis, since it has been applied in problems of diffusion in physics and biology. 
It introduces the Mittag-Leffler function, which describes power-law-behaved data well at all time scales, unlike empirical power-laws, which describe the data only at large times. Despite the mathematical difficulties, fractional pharmacokinetics undoubtedly offers a powerful and indispensable approach for the toolbox of the pharmaceutical scientist. Solutions of fractional-order systems involve Mittag-Leffler-type functions, or other special functions whose numerical evaluation is nontrivial. As outlined above, several approximation techniques have been proposed for fractional systems, with different levels of accuracy, parsimony and suitability for optimal administration scheduling or control. In particular, those based on discrete time-domain approximations of the system dynamics with bounded approximation error, such as the truncated Grünwald-Letnikov approximation, are most suitable for optimal control applications. Software packages now exist that implement most of the algorithms available in the literature and facilitate their practical use. Further research effort needs to be dedicated to deriving error bounds in the time domain for available approximation methods that will allow their adoption in optimal control formulations. The optimal control framework is suitable for the design of administration courses both to individuals as well as to populations where the inter-patient variability of pharmacokinetic parameters needs to be taken into account. In fact, stochastic optimal control offers a data-driven decision making solution that enables us to go from sample data to administration schemes for populations. Optimal control offers great flexibility in formulating optimal drug dosing problems and different problem structures arise naturally for different modes of administration (continuous, bolus intravenous, oral and more). 
At the same time, further research is necessary to make realistic assumptions and translate them into meaningful optimisation problems. MPC methodologies are becoming popular for controlled drug administration. Yet, their potential for fractional-order pharmacokinetics and their related properties, especially in regard to stochastic systems and the characterisation of invariant sets, need to be investigated. Overall, numerous research questions spring at the intersection of fractional systems theory, pharmacokinetics, numerical analysis, optimal control and model predictive control, and these are addressed in the context of mathematical pharmacology. ## References * West and Deering (1994) West BJ, Deering W (1994) Fractal physiology for physicists: Lévy statistics. Physics Reports 246(1):1–100 * Ionescu et al (2017) Ionescu C, Lopes A, Copot D, Machado J, Bates J (2017) The role of fractional calculus in modeling biological phenomena: A review. Communications in Nonlinear Science and Numerical Simulation 51:141 – 159, DOI http://dx.doi.org/10.1016/j.cnsns.2017.04.001 * Kopelman (1988) Kopelman R (1988) Fractal reaction kinetics. Science 241(4873):1620 – 1626 * Macheras (1996) Macheras P (1996) A fractal approach to heterogeneous drug distribution: calcium pharmacokinetics. Pharmaceutical research 13(5):663–670 * Pereira (2010) Pereira L (2010) Fractal pharmacokinetics. Computational and Mathematical Methods in Medicine 11:161–184 * Wise (1985) Wise ME (1985) Negative power functions of time in pharmacokinetics and their implications. Journal of pharmacokinetics and biopharmaceutics 13(3):309–346 * Tucker et al (1984) Tucker G, Jackson P, Storey G, Holt D (1984) Amiodarone disposition: polyexponential, power and gamma functions. European journal of clinical pharmacology 26(5):655–656 * Weiss (1999) Weiss M (1999) The anomalous pharmacokinetics of amiodarone explained by nonexponential tissue trapping. 
Journal of pharmacokinetics and biopharmaceutics 27(4):383–396 * Phan et al (2006) Phan G, Le Gall B, Deverre JR, Fattal E, Bénech H (2006) Predicting plutonium decorporation efficacy after intravenous administration of DTPA formulations: study of pharmacokinetic-pharmacodynamic relationships in rats. Pharmaceutical research 23(9):2030–2035 * Sokolov et al (2002) Sokolov IM, Klafter J, Blumen A (2002) Fractional kinetics. Physics Today 55(11):48–54 * Podlubny (1999) Podlubny I (1999) Fractional differential equations, Mathematics in Science and Engineering, vol 198. Academic Publisher, San Diego, California * Magin (2004a) Magin RL (2004a) Fractional calculus in bioengineering, part 1. Critical Reviews™ in Biomedical Engineering 32(1):1–104 * Magin (2004b) Magin RL (2004b) Fractional calculus in bioengineering, part3. Critical Reviews™ in Biomedical Engineering 32(3 – 4):195–377 * Magin (2004c) Magin RL (2004c) Fractional calculus in bioengineering, part 2. Critical Reviews™ in Biomedical Engineering 32(2):105–194 * Butera and Paola (2014) Butera S, Paola MD (2014) A physically based connection between fractional calculus and fractal geometry. Annals of Physics 350:146 – 158, DOI http://dx.doi.org/10.1016/j.aop.2014.07.008 * Chen et al (2010) Chen W, Sun H, Zhang X, Korošak D (2010) Anomalous diffusion modeling by fractal and fractional derivatives. Computers & Mathematics with Applications 59(5):1754 – 1758, DOI http://dx.doi.org/10.1016/j.camwa.2009.08.020 * Metzler et al (1994) Metzler R, Glöckle WG, Nonnenmacher TF (1994) Fractional model equation for anomalous diffusion. Physica A: Statistical Mechanics and its Applications 211(1):13 – 24, DOI http://dx.doi.org/10.1016/0378-4371(94)90064-7 * Copot et al (2014) Copot D, Ionescu CM, Keyser RD (2014) Relation between fractional order models and diffusion in the body. 
In: IFAC World Congress, Cape Town, South Africa, vol 47, pp 9277–9282, DOI http://dx.doi.org/10.3182/20140824-6-ZA-1003.02138 * Gmachowski (2015) Gmachowski L (2015) Fractal model of anomalous diffusion. European Biophysics Journal 44(8):613–621, DOI 10.1007/s00249-015-1054-5 * Klafter and Sokolov (2011) Klafter J, Sokolov IM (2011) First Steps in Random Walks: From Tools to Applications. Oxford University Press * Eirich (1990) Eirich FR (1990) The fractal approach to heterogeneous chemistry, surfaces, colloids, polymers, vol 28. John Wiley & Sons, New York, DOI https://doi.org/10.1002/pol.1990.140280608 * Dokoumetzidis and Macheras (2011) Dokoumetzidis A, Macheras P (2011) The changing face of the rate concept in biopharmaceutical sciences: from classical to fractal and finally to fractional. Pharmaceutical Research 28(5):1229–1232 * Dokoumetzidis and Macheras (2009) Dokoumetzidis A, Macheras P (2009) Fractional kinetics in drug absorption and disposition processes. Journal of pharmacokinetics and pharmacodynamics 36(2):165–178 * Kytariolos et al (2010) Kytariolos J, Dokoumetzidis A, Macheras P (2010) Power law IVIVC: An application of fractional kinetics for drug release and absorption. European Journal of Pharmaceutical Sciences 41(2):299–304 * Popović et al (2010) Popović JK, Atanacković MT, Pilipović AS, Rapaić MR, Pilipović S, Atanacković TM (2010) A new approach to the compartmental analysis in pharmacokinetics: fractional time evolution of diclofenac. Journal of pharmacokinetics and pharmacodynamics 37(2):119–134 * Popović et al (2011) Popović JK, Dolićanin D, Rapaić MR, Popović SL, Pilipović S, Atanacković TM (2011) A nonlinear two compartmental fractional derivative model. European journal of drug metabolism and pharmacokinetics 36(4):189–196 * Popović et al (2013) Popović JK, Poša M, Popović KJ, Popović DJ, Milošević N, Tepavčević V (2013) Individualization of a pharmacokinetic model by fractional and nonlinear fit improvement. 
European journal of drug metabolism and pharmacokinetics 38(1):69–76 * Popović et al (2015) Popović JK, Spasić DT, Tošić J, Kolarović JL, Malti R, Mitić IM, Pilipović S, Atanacković TM (2015) Fractional model for pharmacokinetics of high dose methotrexate in children with acute lymphoblastic leukaemia. Communications in Nonlinear Science and Numerical Simulation 22(1):451–471 * Copot et al (2013) Copot D, Chevalier A, Ionescu CM, De Keyser R (2013) A two-compartment fractional derivative model for propofol diffusion in anesthesia. In: IEEE International Conference on Control Applications, pp 264–269, DOI https://doi.org/10.1109/CCA.2013.6662769 * Verotta (2010) Verotta D (2010) Fractional dynamics pharmacokinetics-pharmacodynamic models. Journal of pharmacokinetics and pharmacodynamics 37(3):257–276 * Van der Graaf et al (2015) Van der Graaf PH, Benson N, Peletier LA (2015) Topics in mathematical pharmacology. Journal of Dynamics and Differential Equations 28(3-4):1337–1356, DOI 10.1007/s10884-015-9468-4 * Hennion and Hanert (2013) Hennion M, Hanert E (2013) How to avoid unbounded drug accumulation with fractional pharmacokinetics. Journal of Pharmacokinetics and Pharmacodynamics 40(6):691–700, DOI 10.1007/s10928-013-9340-2 * Samko et al (1993) Samko S, Kilbas A, Marichev O (1993) Fractional integral and derivatives. Gordon & Breach Science Publishers * Deng and Deng (2014) Deng J, Deng Z (2014) Existence of solutions of initial value problems for nonlinear fractional differential equations. Applied Mathematics Letters 32:6 – 12, DOI http://dx.doi.org/10.1016/j.aml.2014.02.001 * Mainardi (2014) Mainardi F (2014) On some properties of the Mittag-Leffler function $\mathcal{E}_{\alpha}(-t^{\alpha})$, completely monotone for $t>0$ with $0<\alpha<1$. 
Discrete and Continuous Dynamical Systems – Series B 19(7):2267–2278, DOI 10.3934/dcdsb.2014.19.2267 * Papadopoulou et al (2006) Papadopoulou V, Kosmidis K, Vlachou M, Macheras P (2006) On the use of the weibull function for the discernment of drug release mechanisms. International Journal of Pharmaceutics 309(1):44 – 50, DOI https://doi.org/10.1016/j.ijpharm.2005.10.044 * De Hoog et al (1982) De Hoog FR, Knight J, Stokes A (1982) An improved method for numerical inversion of Laplace transforms. SIAM Journal on Scientific and Statistical Computing 3(3):357–366 * Dokoumetzidis et al (2010a) Dokoumetzidis A, Magin R, Macheras P (2010a) A commentary on fractionalization of multi-compartmental models. Journal of pharmacokinetics and pharmacodynamics 37(2):203–207 * Dokoumetzidis et al (2010b) Dokoumetzidis A, Magin R, Macheras P (2010b) Fractional kinetics in multi-compartmental systems. Journal of Pharmacokinetics and Pharmacodynamics 37(5):507–524 * Popović et al (2013) Popović JK, Pilipović S, Atanacković TM (2013) Two compartmental fractional derivative model with fractional derivatives of different order. Communications in Nonlinear Science and Numerical Simulation 18(9):2507 – 2514, DOI http://dx.doi.org/10.1016/j.cnsns.2013.01.004 * Petráš and Magin (2011) Petráš I, Magin RL (2011) Simulation of drug uptake in a two compartmental fractional model for a biological system. Communications in Nonlinear Science and Numerical Simulation 16(12):4588 – 4595, DOI https://doi.org/10.1016/j.cnsns.2011.02.012 * Kilbas et al (2006) Kilbas AA, Srivastava HM, Trujillo JJ (2006) Theory and applications of fractional differential equations, vol 204. Elsevier Science Limited * Moloni (2015) Moloni S (2015) Applications of fractional calculus to pharmacokinetics. Master’s thesis, University of Patras, Department of Mathematics, Patras, Greece * Holt et al (1983) Holt DW, Tucker GT, Jackson PR, Storey GC (1983) Amiodarone pharmacokinetics. 
American Heart Journal 106(4):840–847, DOI http://dx.doi.org/10.1016/0002-8703(83)90006-6 * Kaczorek (2011) Kaczorek T (2011) Selected Problems of Fractional Systems Theory. Springer Berlin Heidelberg * Sopasakis et al (2015) Sopasakis P, Ntouskas S, Sarimveis H (2015) Robust model predictive control for discrete-time fractional-order systems. In: IEEE Mediterranean Conference on Control and Automation, pp 384–389 * Verotta (2010) Verotta D (2010) Fractional compartmental models and multi-term Mittag–Leffler response functions. Journal of Pharmacokinetics and Pharmacodynamics 37(2):209–215, DOI 10.1007/s10928-010-9155-3 * Garrappa (2015) Garrappa R (2015) Numerical evaluation of two and three parameter Mittag-Leffler functions. SIAM Journal of Numerical Analysis 53:1350–1369, DOI http://doi.org/10.1137/140971191 * Seybold and Hilfer (2009) Seybold H, Hilfer R (2009) Numerical algorithm for calculating the generalized mittag-leffler function. SIAM Journal on Numerical Analysis 47(1):69–88, DOI 10.1137/070700280, URL https://doi.org/10.1137/070700280 * Gorenflo et al (2002) Gorenflo R, Loutchko J, Luchko Y (2002) Computation of the mittag-leffler function and its derivatives. Fractional Calculus & Applied Analysis 5:1–26 * Silva et al (2006) Silva M, Machado J, Barbosa R (2006) Comparison of different orders Padé fractional order PD0.5 control algorithm implementations. IFAC Proceedings Volumes 39(11):373–378 * Matsuda and Fujii (1993) Matsuda K, Fujii H (1993) H(infinity) optimized wave-absorbing control - analytical and experimental results. Journal of Guidance, Control, and Dynamics 16(6):1146–1153 * Oustaloup et al (2000) Oustaloup A, Levron F, Mathieu B, Nanot FM (2000) Frequency-band complex noninteger differentiator: characterization and synthesis. 
IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 47(1):25–39 * Petráš (2011) Petráš I (2011) Fractional derivatives, fractional integrals, and fractional differential equations in matlab. In: Assi A (ed) Engineering Education and Research Using MATLAB, InTech, DOI http://doi.org/10.5772/19412 * Charef et al (1992) Charef A, Sun HH, Tsao YY, Onaral B (1992) Fractal system as represented by singularity function. IEEE Transactions on Automatic Control 37(9):1465–1470 * Carlson and Halijak (1964) Carlson G, Halijak C (1964) Approximation of fractional capacitors $1/s^{1/n}$ by a regular Newton process. IEEE Transactions on Circuits Theory 11(2):210–213, DOI https://doi.org/10.1109/TCT.1964.1082270 * Gao and Liao (2012) Gao Z, Liao X (2012) Rational approximation for fractional-order system by particle swarm optimization. Nonlinear Dynamic 67(2):1387–1395, DOI https://doi.org/10.1007/s11071-011-0075-6 * Sopasakis and Sarimveis (2017) Sopasakis P, Sarimveis H (2017) Stabilising model predictive control for discrete-time fractional-order systems. Automatica 75:24–31 * Podlubny (2000) Podlubny I (2000) Matrix approach to discrete fractional calculus. Fractional Calculus & Applied Analysis 3:359–386 * Zainal and Kılıçman (2014) Zainal NH, Kılıçman A (2014) Solving fractional partial differential equations with corrected fourier series method. Abstract and Applied Analysis 2014:1–5, DOI 10.1155/2014/958931 * Kumar and Agrawal (2006) Kumar P, Agrawal OP (2006) An approximate method for numerical solution of fractional differential equations. Signal Processing 86(10):2602–2610, DOI http://dx.doi.org/10.1016/j.sigpro.2006.02.007 * Zayernouri and Matzavinos (2016) Zayernouri M, Matzavinos A (2016) Fractional Adams-Bashforth/Moulton methods: An application to the fractional Keller-Segel chemotaxis system. Journal of Computational Physics 317:1–14 * Lubich (1986) Lubich C (1986) Discretized fractional calculus. 
SIAM Journal on Mathematical Analysis 17(3):704–719 * Garrappa (2015) Garrappa R (2015) Trapezoidal methods for fractional differential equations: Theoretical and computational aspects. Mathematics and Computers in Simulation 110:96–112 * Diethelm et al (2002) Diethelm K, Ford NJ, Freed AD (2002) A predictor-corrector approach for the numerical solution of fractional differential equations. Nonlinear Dynamic 29(1):3–22, DOI https://doi.org/10.1023/A:1016592219341 * Garrappa (2010) Garrappa R (2010) On linear stability of predictor-corrector algorithms for fractional differential equations. International Journal of Computer Mathematics 87(10):2281–2290, DOI http://doi.org/10.1080/00207160802624331 * Lubich (1985) Lubich C (1985) Fractional linear multistep methods for Abel-Volterra integral equations of the second kind. Mathematics of Computation 45(172):463–469 * Garrappa (2015) Garrappa R (2015) Software for fractional differential equations. https://www.dm.uniba.it/Members/garrappa/Software, accessed: * Herceg et al (2017) Herceg D, Ntouskas S, Sopasakis P, Dokoumetzidis A, Macheras P, Sarimveis H, Patrinos P (2017) Modeling and administration scheduling of fractional-order pharmacokinetic systems. In: IFAC World Congress, Toulouse, France * Hollenbeck (1998) Hollenbeck KJ (1998) INVLAP.M: A MATLAB function for numerical inversion of Laplace transforms by the de Hoog algorithm. http://www.mathworks.com/matlabcentral/answers/uploaded_files/1034/invlap.m, accessed: * Lin and Lu (2013) Lin SD, Lu CH (2013) Laplace transform for solving some families of fractional differential equations and its applications. Advances in Difference Equations 2013(1):137 * Kexue and Jigen (2011) Kexue L, Jigen P (2011) Laplace transform and fractional differential equations. Applied Mathematics Letters 24(12):2019 – 2023 * Valsa and Brančik (1998) Valsa J, Brančik L (1998) Approximate formulae for numerical inversion of Laplace transforms. 
International Journal of Numerical Modelling: Electronic Networks, Devices and Fields 11(3):153–166 * Hassanzadeh and Pooladi-Darvish (2007) Hassanzadeh H, Pooladi-Darvish M (2007) Comparison of different numerical Laplace inversion methods for engineering applications. Applied Mathematics and Computation 189(2):1966 – 1981 * Sopasakis et al (2014) Sopasakis P, Patrinos P, Sarimveis H (2014) Robust model predictive control for optimal continuous drug administration. Computer Methods and Programs in Biomedicine 116(3):193–204, DOI http://dx.doi.org/10.1016/j.cmpb.2014.06.003 * Sopasakis et al (2015) Sopasakis P, Patrinos P, Sarimveis H, Bemporad A (2015) Model predictive control for linear impulsive systems. IEEE Transactions on Automatic Control 60:2277–2282, DOI http://dx.doi.org/10.1109/TAC.2014.2380672 * Rivadeneira et al (2015) Rivadeneira PS, Ferramosca A, González AH (2015) MPC with state window target control in linear impulsive systems. In: 5th IFAC Conference on Nonlinear Model Predictive Control NMPC 2015, vol 48, pp 507–512, DOI http://dx.doi.org/10.1016/j.ifacol.2015.11.329 * Sopasakis and Sarimveis (2012) Sopasakis P, Sarimveis H (2012) An integer programming approach for optimal drug dose computation. Computer Methods and Programs in Biomedicine 108(3):1022–1035, DOI http://dx.doi.org/10.1016/j.cmpb.2012.06.008 * Krieger and Pistikopoulos (2014) Krieger A, Pistikopoulos EN (2014) Model predictive control of anesthesia under uncertainty. Computers & Chemical Engineering 71:699 – 707, DOI https://doi.org/10.1016/j.compchemeng.2014.07.025 * Favero et al (2014) Favero SD, Bruttomesso D, Palma FD, Lanzola G, Visentin R, Filippi A, Scotton R, Toffanin C, Messori M, Scarpellini S, Keith-Hynes P, Kovatchev BP, DeVries JH, Renard E, Magni L, Avogaro A, and CC (2014) First use of model predictive control in outpatient wearable artificial pancreas. 
Diabetes Care 37(5):1212–1215, DOI 10.2337/dc13-1631 * Kannikeswaran et al (2016) Kannikeswaran N, Lieh-Lai M, Malian M, Wang B, Farooqi A, Roback MG (2016) Optimal dosing of intravenous ketamine for procedural sedation in children in the ED — a randomized controlled trial. The American Journal of Emergency Medicine 34(8):1347 – 1353, DOI https://doi.org/10.1016/j.ajem.2016.03.064 * Fukudo et al (2017) Fukudo S, Matsueda K, Haruma K, Ida M, Hayase H, Akiho H, Nakashima Y, Hongo M (2017) Optimal dose of ramosetron in female patients with irritable bowel syndrome with diarrhea: A randomized, placebo-controlled phase II study. Neurogastroenterology & Motility 29(6):e13,023, DOI 10.1111/nmo.13023 * De Ocenda et al (2016) De Ocenda VR, Almeida-Prieto S, Luzardo-Álvarez A, Barja J, Otero-Espinar F, Blanco-Méndez J (2016) Pharmacokinetic model of florfenicol in turbot (scophthalmus maximus): establishment of optimal dosage and administration in medicated feed. Journal of Fish Diseases 40(3):411–424, DOI 10.1111/jfd.12525 * Savic et al (2017) Savic R, Weiner M, MacKenzie W, Engle M, WC W, Johnson J, Nsubuga P, Nahid P, Nguyen N, Peloquin C, Dooley K, Dorman S (2017) Defining the optimal dose of rifapentine for pulmonary tuberculosis: Exposure-response relations from two phase II clinical trials. Clinical Pharmacology & Therapeutics 102(2):321–331, DOI 10.1002/cpt.634 * Bertsekas (2017) Bertsekas DP (2017) Dynamic Programming and Optimal Control, 4th edn. Athena Scientific, Nashua, USA * Löfberg (2004) Löfberg J (2004) YALMIP: A toolbox for modeling and optimization in MATLAB. In: IEEE International Symposium on Computer Aided Control Systems Design, New Orleans, LA, USA, pp 284–289, DOI http://doi.org/10.1109/CACSD.2004.1393890 * Stella et al (2017) Stella L, Themelis A, Patrinos P (2017) Forward–backward quasi-newton methods for nonsmooth optimization problems. 
Computational Optimization and Applications 67(3):443–487, DOI 10.1007/s10589-017-9912-y, forBES Software available at https://github.com/kul-forbes/ForBES * Diamond and Boyd (2016) Diamond S, Boyd S (2016) CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research 17(83):1–5 * Bertsekas and Shreve (1996) Bertsekas DP, Shreve SE (1996) Stochastic Optimal Control: The Discrete-Time Case. Athena Scientific, Nashua, USA * Schumitzky et al (1983) Schumitzky A, Milman M, Katz D, D’Argenio DZ, Jelliffe RW (1983) Stochastic control of pharmacokinetic systems. In: The Seventh Annual Symposium on Computer Applications in Medical Care, 1983. Proceedings., pp 222–225, DOI 10.1109/SCAMC.1983.764595 * Lago (1992) Lago P (1992) Open-loop stochastic control of pharmacokinetic systems: A new method for design of dosing regimens. Computers and Biomedical Research 25(1):85–100, DOI 10.1016/0010-4809(92)90037-b * Bayard et al (1994) Bayard D, Milman M, Schumitzky A (1994) Design of dosage regimens: A multiple model stochastic control approach. International Journal of Bio-Medical Computing 36(1):103 – 115, DOI http://dx.doi.org/10.1016/0020-7101(94)90100-7 * Campi et al (2009) Campi MC, Garatti S, Prandini M (2009) The scenario approach for systems and control design. Annual Reviews in Control 33(2):149 – 157, DOI http://dx.doi.org/10.1016/j.arcontrol.2009.07.001 * Shapiro et al (2009) Shapiro A, Dentcheva D, Ruszczyński (2009) Lectures on stochastic programming: modeling and theory. SIAM * Herceg et al (2017) Herceg D, Sopasakis P, Bemporad A, Patrinos P (2017) Risk-averse model predictive control. https://arxiv.org/abs/1704.00342 * Gaweda et al (2008) Gaweda AE, Jacobs AA, Aronoff GR, Brier ME (2008) Model predictive control of erythropoietin administration in the anemia of ESRD. 
American Journal of Kidney Diseases 51(1):71–79, DOI 10.1053/j.ajkd.2007.10.003 * Ionescu et al (2008) Ionescu CM, Keyser RD, Torrico BC, Smet TD, Struys MM, Normey-Rico JE (2008) Robust predictive control strategy applied for propofol dosing using BIS as a controlled variable during anesthesia. IEEE Transactions on Biomedical Engineering 55(9):2161–2170, DOI 10.1109/TBME.2008.923142 * Schaller et al (2016) Schaller S, Lippert J, Schaupp L, Pieber TR, Schuppert A, Eissing T (2016) Robust PBPK/PD-based model predictive control of blood glucose. IEEE Transactions on Biomedical Engineering 63(7):1492–1504, DOI 10.1109/TBME.2015.2497273 * Hovorka et al (2004) Hovorka R, Canonico V, Chassin L, Haueter U, Massi-Benedetti M, Federici M, Pieber T, Schaller H, Schaupp L, Vering T, Wilinska M (2004) Nonlinear model predictive control of glucose concentration in subjects with type 1 diabetes. Physiological Measurement 25(4):905–920, DOI 10.1088/0967-3334/25/4/010 * Toffanin et al (2013) Toffanin C, Messori M, Palma FD, Nicolao GD, Cobelli C, Magni L (2013) Artificial pancreas: Model predictive control design from clinical experience. Journal of Diabetes Science and Technology 7(6):1470–1483, DOI 10.1177/193229681300700607 * Parker et al (1999) Parker RS, Doyle FJ, Peppas NA (1999) A model-based algorithm for blood glucose control in type I diabetic patients. IEEE Transactions on Biomedical Engineering 46(2):148–157, DOI 10.1109/10.740877 * Rawlings and Mayne (2009) Rawlings J, Mayne D (2009) Model Predictive Control: Theory and Design. Nob Hill Publishing * Sopasakis and Sarimveis (2014) Sopasakis P, Sarimveis H (2014) Controlled drug administration by a fractional PID. In: IFAC World Congress, Cape Town, South Africa, pp 8421–8426 * Patrinos et al (2014) Patrinos P, Sopasakis P, Sarmiveis H, Bemporad A (2014) Stochastic model predictive control for constrained discrete-time Markovian switching systems. 
Automatica 50(10):2504–2514, DOI http://dx.doi.org/10.1016/j.automatica.2014.08.031 * Sopasakis et al (2017) Sopasakis P, Herceg D, Patrinos P, Bemporad A (2017) Stochastic economic model predictive control for Markovian switching systems. In: IFAC World Congress * Patek et al (2007) Patek SD, Breton MD, Chen Y, Solomon C, Kovatchev B (2007) Linear quadratic gaussian-based closed-loop control of type 1 diabetes. Journal of Diabetes Science and Technology 1(6):834–841 * Wang et al (2014) Wang Q, Molenaar P, Harsh S, Freeman K, Xie J, Gold C, Rovine M, Ulbrecht J (2014) Personalized state-space modeling of glucose dynamics for type 1 diabetes using continuously monitored glucose, insulin dose, and meal intake. Journal of Diabetes Science and Technology 8(2):331–345, DOI 10.1177/1932296814524080
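As a toy illustration of the sample-based viewpoint, the sketch below picks the smallest repeated intravenous bolus dose whose steady-state trough and peak satisfy a therapeutic window for a prescribed fraction of sampled patients (a scenario approximation of a chance constraint). It deliberately uses an ordinary, integer-order one-compartment model so that steady-state levels are available in closed form; the function names, the window and all numbers are illustrative assumptions.

```python
import math


def steady_state_levels(dose, k, volume, tau):
    """Steady-state peak/trough of repeated IV boluses (superposition of
    exponential decays, elimination rate k, dosing interval tau)."""
    r = math.exp(-k * tau)
    peak = (dose / volume) / (1.0 - r)
    return peak, peak * r  # (peak, trough)


def scenario_dose(k_samples, volume, tau, c_min, c_max, level, dose_grid):
    """Smallest grid dose keeping trough >= c_min and peak <= c_max for at
    least a fraction `level` of the sampled elimination rates."""
    for dose in sorted(dose_grid):
        ok = 0
        for k in k_samples:
            peak, trough = steady_state_levels(dose, k, volume, tau)
            if trough >= c_min and peak <= c_max:
                ok += 1
        if ok >= level * len(k_samples):
            return dose
    return None  # no admissible dose on the grid
```

With elimination rates sampled from, say, a log-normal distribution fitted to population data, the same routine turns raw samples directly into a dosing recommendation for the population.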
d\eta+2\beta^{\prime}\int_{0}^{B}e^{2\beta^{\prime}\eta}4h^{3}G_{B}^{\varepsilon}\partial_{\eta}G_{B}^{\varepsilon}d\eta$ $\displaystyle\quad=-\beta^{\prime}\int_{0}^{B}\int_{-\pi}^{\pi}\left(e^{2\beta^{\prime}\eta}\sin\phi(\varphi_{B}^{\varepsilon})^{2}-2e^{2\beta^{\prime}\eta}4h^{3}G_{B}^{\varepsilon}\sin\phi\varphi_{B}^{\varepsilon}\right)d\phi d\eta$ $\displaystyle\quad=-\beta^{\prime}\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}\sin\phi(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta.$ The right-hand side of equation (4.3) can be estimated by $\displaystyle\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}S_{2}(\varphi^{\varepsilon}_{B}-4h^{3}G^{\varepsilon}_{B})d\phi d\eta\leq\frac{1}{4}\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta+\|e^{\beta^{\prime}\eta}S_{2}\|_{L^{2}([0,B]\times[-\pi,\pi))}^{2}.$ Taking the above inequalities into (4.3), we obtain $\displaystyle\frac{3}{4}\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta+\frac{1}{2}\int_{0}^{B}\int_{-\pi}^{\pi}(F(\varepsilon;\eta)-2\beta^{\prime})\sin\phi e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta$ $\displaystyle\quad+\frac{1}{2}\int_{\sin\phi<0}|\sin\phi|(\varphi_{B}^{\varepsilon})^{2}(0,\cdot)d\phi\leq\frac{1}{2}\int_{\sin\phi>0}\sin\phi\varphi_{b}^{2}d\phi+\|e^{\beta^{\prime}\eta}S_{2}\|_{L^{2}([0,B]\times[-\pi,\pi))}^{2}.$ For $\varepsilon$ sufficiently small, for example $\varepsilon\leq(4-3\delta)/16$, inequality (3.11) implies for any $\eta\in\mathbb{R}_{+}$, $\displaystyle|F(\varepsilon;\eta)-2\beta^{\prime}|\leq 2\beta^{\prime}+\frac{4}{4-3\delta}\varepsilon\leq 1+\frac{1}{4}=\frac{5}{4}.$ (4.32) With this, the previous inequality implies (16). 
Moreover, we can get from equation (4.8) that $\displaystyle\int_{0}^{B}e^{2\beta^{\prime}\eta}|\partial_{\eta}^{2}G_{B}^{\varepsilon}|^{2}d\eta$ $\displaystyle\quad\leq\int_{0}^{B}e^{2\beta^{\prime}\eta}|F(\varepsilon;\eta)\partial_{\eta}G_{B}^{\varepsilon}|^{2}d\eta+\int_{0}^{B}e^{2\beta^{\prime}\eta}|\langle\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon}\rangle|^{2}d\eta+\int_{0}^{B}e^{2\beta^{\prime}\eta}|S_{1}|^{2}d\eta$ $\displaystyle\quad\leq\int_{0}^{B}|F(\varepsilon;\eta)|^{2}e^{2\beta^{\prime}\eta}|\langle\sin\phi(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})\rangle|^{2}d\eta+\int_{0}^{B}\left(\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi\cdot\int_{-\pi}^{\pi}d\phi\right)d\eta$ $\displaystyle\qquad+\int_{0}^{B}e^{2\beta^{\prime}\eta}|S_{1}|^{2}d\eta$ $\displaystyle\quad\leq\|F\|_{L^{\infty}(\mathbb{R}_{+})}^{2}2\pi\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta+2\pi\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta$ $\displaystyle\qquad+\int_{0}^{B}e^{2\beta^{\prime}\eta}|S_{1}|^{2}d\eta$ $\displaystyle\quad\leq 2\pi\left(\frac{4}{4-3\delta}\varepsilon+1\right)\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta+\int_{0}^{B}e^{2\beta^{\prime}\eta}|S_{1}|^{2}d\eta.$ For $\varepsilon$ sufficiently small, for example $\varepsilon\leq(4-3\delta)/16$, the above inequality becomes $\displaystyle\int_{0}^{B}e^{2\beta^{\prime}\eta}|\partial_{\eta}^{2}G_{B}^{\varepsilon}|^{2}d\eta$ $\displaystyle\quad\leq\frac{5}{2}\pi\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta+\int_{0}^{B}e^{2\beta^{\prime}\eta}|S_{1}|^{2}d\eta$ $\displaystyle\quad\leq
10\pi\int_{\sin\phi>0}\sin\phi\varphi_{b}^{2}d\phi+20\pi\|e^{\beta^{\prime}\eta}S_{2}\|_{L^{2}([0,B]\times[-\pi,\pi))}^{2}+\|e^{\beta^{\prime}\eta}S_{1}\|_{L^{2}([0,B])}^{2},$ and hence is uniformly bounded in $B$. Let $C_{1}:=10\pi\int_{\sin\phi>0}\sin\phi\varphi_{b}^{2}d\phi+20\pi\|e^{\beta^{\prime}\eta}S_{2}\|_{L^{2}([0,B]\times[-\pi,\pi))}^{2}+\|e^{\beta^{\prime}\eta}S_{1}\|_{L^{2}([0,B])}^{2}$. Using the boundary condition $\partial_{\eta}G_{B}^{\varepsilon}(B)=0$, we have for any $\eta\in[0,B]$, $\displaystyle|\partial_{\eta}G_{B}^{\varepsilon}(\eta)|$ $\displaystyle=\left|\int_{\eta}^{B}\partial_{s}^{2}G_{B}^{\varepsilon}(s)ds\right|=\left|\int_{\eta}^{B}e^{-\beta^{\prime}s}e^{\beta^{\prime}s}\partial_{s}^{2}G_{B}^{\varepsilon}(s)ds\right|$ $\displaystyle\leq\left(\int_{\eta}^{B}e^{-2\beta^{\prime}s}ds\right)^{\frac{1}{2}}\left(\int_{0}^{B}e^{2\beta^{\prime}s}|\partial_{s}^{2}G_{B}^{\varepsilon}(s)|^{2}ds\right)^{\frac{1}{2}}$ $\displaystyle\leq C_{1}^{\frac{1}{2}}\frac{\sqrt{e^{-2\beta^{\prime}\eta}-e^{-2\beta^{\prime}B}}}{\sqrt{2\beta^{\prime}}}\leq C_{1}^{\frac{1}{2}}(2\beta^{\prime})^{-\frac{1}{2}}e^{-\beta^{\prime}\eta}.$ Using the boundary condition $G_{B}^{\varepsilon}(0)=0$, we have $\displaystyle|G_{B}^{\varepsilon}(\eta)|$ $\displaystyle=\left|\int_{0}^{\eta}\partial_{s}G_{B}^{\varepsilon}(s)ds\right|\leq\int_{0}^{\eta}|\partial_{s}G_{B}^{\varepsilon}(s)|ds\leq C_{1}^{\frac{1}{2}}(2\beta^{\prime})^{-\frac{1}{2}}\int_{0}^{\eta}e^{-\beta^{\prime}s}ds$ $\displaystyle\leq\sqrt{2}C_{1}^{\frac{1}{2}}(\beta^{\prime})^{-\frac{3}{2}}(1-e^{-\beta^{\prime}\eta})\leq\sqrt{2}C_{1}^{\frac{1}{2}}(\beta^{\prime})^{-\frac{3}{2}}.$

### 4.4. Existence on the half line

Next we pass to the limit $B\to\infty$ in (4.8)-(4.9) to show the existence for system (4.1)-(4.2) on the half line.

###### Proof of Theorem 13.

By the uniform estimate (4.29), $G_{B}^{\varepsilon}\in L^{\infty}_{\rm loc}([0,\infty))$.
Hence there exists a subsequence such that $\displaystyle G_{B}^{\varepsilon}\rightharpoonup^{*}G^{\varepsilon},\quad\text{weak-star in }L_{\rm loc}^{\infty}([0,\infty)),$ (4.33) $\displaystyle G_{B}^{\varepsilon}\rightharpoonup G^{\varepsilon},\quad\text{weakly in }L_{\rm loc}^{2}([0,\infty)).$ Moreover, by the estimate (16), $\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon}$ is uniformly bounded in $L^{2}_{\rm loc}([0,\infty);L^{2}([-\pi,\pi)))$, hence $\displaystyle\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon}\rightharpoonup\varphi^{\varepsilon}-4h^{3}G^{\varepsilon},\quad\text{weakly in }L^{2}_{\rm loc}([0,\infty);L^{2}([-\pi,\pi))),$ and so $\displaystyle\varphi_{B}^{\varepsilon}\rightharpoonup\varphi^{\varepsilon},\quad\text{weakly in }L^{2}_{\rm loc}([0,\infty);L^{2}([-\pi,\pi))).$ (4.34) With (4.33) and (4.34), we can pass to the limit in (4.8)-(4.9) with boundary conditions (4.10)-(4.11) and obtain that $(G^{\varepsilon},\varphi^{\varepsilon})$ solves system (4.1)-(4.2) in the sense of distributions and satisfies the boundary conditions (4.3)-(4.4). Moreover, we can pass to the limit $B\to\infty$ in (4.30) and obtain $\displaystyle\partial_{\eta}G^{\varepsilon}=\langle\sin\phi\varphi^{\varepsilon}(\eta,\cdot)\rangle$ (4.35) for almost every $\eta\in\mathbb{R}_{+}$. By the lower semi-continuity of the $L^{2}$ norm, we can take the $\liminf$ in (16), use $\displaystyle\liminf_{B\to\infty}\int_{0}^{B}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi_{B}^{\varepsilon}-4h^{3}G_{B}^{\varepsilon})^{2}d\phi d\eta\geq\int_{0}^{\infty}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi^{\varepsilon}-4h^{3}G^{\varepsilon})^{2}d\phi d\eta,$ and obtain (13). ∎

### 4.5. Decay properties of the solutions

We can use the uniform estimates derived above to show that solutions to system (4.1)-(4.2) decay to constants as $\eta\to\infty$.
We set $\displaystyle C_{2}:=4\int_{\sin\phi>0}\sin\phi\varphi_{b}^{2}d\phi+8\|e^{\beta^{\prime}\eta}S_{2}\|_{L^{2}([0,\infty)\times[-\pi,\pi))}^{2}.$ Then $\displaystyle\int_{0}^{\infty}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi^{\varepsilon}-4h^{3}G^{\varepsilon})^{2}d\phi d\eta\leq C_{2}.$ (4.36) By the relation (4.35), we get $\displaystyle\int_{0}^{\infty}e^{2\beta^{\prime}\eta}|\partial_{\eta}G^{\varepsilon}|^{2}d\eta$ $\displaystyle=\int_{0}^{\infty}e^{2\beta^{\prime}\eta}\left|\int_{-\pi}^{\pi}\sin\phi(\varphi^{\varepsilon})d\phi\right|^{2}d\eta=\int_{0}^{\infty}e^{2\beta^{\prime}\eta}\left|\int_{-\pi}^{\pi}\sin\phi(\varphi^{\varepsilon}-4h^{3}G^{\varepsilon})d\phi\right|^{2}d\eta$ $\displaystyle\leq\int_{0}^{\infty}e^{2\beta^{\prime}\eta}\left(\int_{-\pi}^{\pi}\sin^{2}\phi d\phi\right)\left(\int_{-\pi}^{\pi}(\varphi^{\varepsilon}-4h^{3}G^{\varepsilon})^{2}d\phi\right)d\eta$ $\displaystyle=\pi\int_{0}^{\infty}\int_{-\pi}^{\pi}e^{2\beta^{\prime}\eta}(\varphi^{\varepsilon}-4h^{3}G^{\varepsilon})^{2}d\phi d\eta\leq\pi C_{2}.$ We now show that $\lim_{\eta\to\infty}G^{\varepsilon}(\eta)$ exists and is finite. Take $M_{2}>M_{1}>0$ to be large numbers; then $\displaystyle|G^{\varepsilon}(M_{1})-G^{\varepsilon}(M_{2})|$ $\displaystyle=\left|\int_{M_{1}}^{M_{2}}\partial_{\eta}G^{\varepsilon}d\eta\right|\leq\int_{M_{1}}^{M_{2}}e^{-\beta^{\prime}\eta}e^{\beta^{\prime}\eta}|\partial_{\eta}G^{\varepsilon}|d\eta$ $\displaystyle\leq\left(\int_{M_{1}}^{M_{2}}e^{-2\beta^{\prime}\eta}d\eta\right)^{\frac{1}{2}}\left(\int_{M_{1}}^{M_{2}}e^{2\beta^{\prime}\eta}|\partial_{\eta}G^{\varepsilon}|^{2}d\eta\right)^{\frac{1}{2}}$ $\displaystyle\leq\sqrt{e^{-2\beta^{\prime}M_{1}}-e^{-2\beta^{\prime}M_{2}}}\,\pi\sqrt{C_{2}/(2\beta^{\prime})}.$ Since $e^{-2\beta^{\prime}M_{1}}-e^{-2\beta^{\prime}M_{2}}$ can be made arbitrarily small for $M_{1},M_{2}$ sufficiently large, the limit $\lim_{\eta\to\infty}G^{\varepsilon}(\eta)$ exists.
Let $G^{\varepsilon}_{\infty}=\lim_{\eta\to\infty}G^{\varepsilon}(\eta)$. Then we take $M_{1}=\eta$, $M_{2}=\infty$ in the above inequality and obtain $\displaystyle|G^{\varepsilon}(\eta)-G^{\varepsilon}_{\infty}|\leq\pi\sqrt{C_{2}/(2\beta^{\prime})}e^{-\beta^{\prime}\eta}.$ Moreover, taking $M_{1}=0$, $M_{2}=\infty$ gives $\displaystyle|G^{\varepsilon}_{\infty}|\leq\pi\sqrt{C_{2}/(2\beta^{\prime})}.$ Due to the exponential convergence of $h$, we have $\displaystyle|4h^{3}(\eta)G^{\varepsilon}(\eta)-4h_{\infty}^{3}G^{\varepsilon}_{\infty}|$ $\displaystyle\leq 4h^{3}(\eta)|G^{\varepsilon}(\eta)-G^{\varepsilon}_{\infty}|+4|h^{3}(\eta)-h_{\infty}^{3}||G^{\varepsilon}_{\infty}|\leq Ce^{-\alpha\eta}+Ce^{-\beta^{\prime}\eta}$ $\displaystyle\leq 2Ce^{-\beta^{\prime}\eta},$ (4.37) where we take $1>\alpha>\beta^{\prime}$. To show the exponential convergence of $\varphi^{\varepsilon}$, we use the formulas (A.4), (A.5) and (20). In the case $\sin\phi>0$, we have $\displaystyle|\varphi^{\varepsilon}(\eta,\phi)-4h_{\infty}^{3}G_{\infty}^{\varepsilon}|$ $\displaystyle=\left|\int_{0}^{\eta}\frac{4h^{3}(\xi)G^{\varepsilon}(\xi)+S_{2}(\xi,\phi^{\prime}(\phi,\eta,\xi))}{\sin\phi^{\prime}(\phi,\eta,\xi)}e^{-\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}d\xi-4h^{3}_{\infty}G_{\infty}^{\varepsilon}\right|$ $\displaystyle\leq\int_{0}^{\eta}\frac{|4h^{3}(\xi)G^{\varepsilon}(\xi)-4h_{\infty}^{3}G^{\varepsilon}_{\infty}|}{\sin\phi^{\prime}(\phi,\eta,\xi)}e^{-\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}d\xi$ $\displaystyle\quad+\int_{0}^{\eta}\frac{|S_{2}(\xi,\phi^{\prime}(\phi,\eta,\xi))|}{\sin\phi^{\prime}(\phi,\eta,\xi)}e^{-\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}d\xi$ $\displaystyle\quad+\left|1-\int_{0}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\xi)}e^{-\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}d\xi\right||4h_{\infty}^{3}G_{\infty}^{\varepsilon}|$ $\displaystyle\quad\leq
2C\int_{0}^{\eta}\frac{e^{-\beta^{\prime}\xi}}{\sin\phi^{\prime}(\phi,\eta,\xi)}e^{-\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}d\xi+Ce^{-\int_{0}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}$ $\displaystyle\quad\leq 2C\int_{0}^{\eta}\frac{1}{1-\beta^{\prime}\sin\phi^{\prime}(\phi,\eta,\xi)}de^{-\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho-\frac{1}{2}\beta^{\prime}\xi}+Ce^{-\int_{0}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}$ $\displaystyle\quad\leq\frac{2C}{1-\beta^{\prime}}(e^{-\beta^{\prime}\eta}-e^{-\int_{0}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho})+Ce^{-\int_{0}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}$ $\displaystyle\quad\leq\frac{3C}{1-\beta^{\prime}}e^{-\beta^{\prime}\eta},$ where we use $\beta^{\prime}<1/2$ and $\sin\phi^{\prime}\leq 1$ in the above inequality. One can also check that for $\sin\phi<0$ there exists a positive constant $C>0$ such that $|\varphi^{\varepsilon}(\eta,\phi)-4h_{\infty}^{3}G_{\infty}^{\varepsilon}|\leq Ce^{-\beta^{\prime}\eta};$ this follows from a similar estimate as in the case $\sin\phi>0$ above, with details similar to the proof of (3.47). ### 4.6. Uniqueness To show uniqueness, let $(G_{1}^{\varepsilon},\varphi_{1}^{\varepsilon})$ and $(G_{2}^{\varepsilon},\varphi_{2}^{\varepsilon})$ be two solutions of the problem (4.1)-(4.4).
Let $g:=G_{1}^{\varepsilon}-G_{2}^{\varepsilon}$ and $\psi:=\varphi_{1}^{\varepsilon}-\varphi_{2}^{\varepsilon}$; then they satisfy $\displaystyle\partial_{\eta}^{2}g+F(\varepsilon;\eta)\partial_{\eta}g+\langle\psi-4h^{3}g\rangle=0,$ (4.38) $\displaystyle\sin\phi\partial_{\eta}\psi+F(\varepsilon;\eta)\cos\phi\partial_{\phi}\psi+\psi-4h^{3}g=0,$ (4.39) with boundary conditions $\displaystyle g(0)=0,\quad\psi(0,\phi)=0,\quad\text{for any }\sin\phi>0.$ Moreover, as $\eta\to\infty$, $G_{1}^{\varepsilon}\to G_{1,\infty}$, $G_{2}^{\varepsilon}\to G_{2,\infty}$ and $\psi\to 4h_{\infty}^{3}(G_{1,\infty}-G_{2,\infty})$, where $G_{1,\infty}$, $G_{2,\infty}$ are constants. We multiply the first equation by $4h^{3}g$ and the second by $\psi$, and integrate over $[0,\infty)\times[-\pi,\pi)$ to get $\displaystyle\int_{0}^{\infty}4h^{3}|\partial_{\eta}g|^{2}d\eta+\int_{0}^{\infty}\partial_{\eta}(4h^{3})g\partial_{\eta}gd\eta-\int_{0}^{\infty}F(\varepsilon;\eta)4h^{3}g\partial_{\eta}gd\eta+\frac{1}{2}\int_{0}^{\infty}\int_{-\pi}^{\pi}F(\varepsilon;\eta)\cos\phi\partial_{\phi}\psi^{2}d\phi d\eta$ $\displaystyle\quad+\int_{-\pi}^{0}|\sin\phi|\psi^{2}(0,\cdot)d\phi+\int_{0}^{\infty}\int_{-\pi}^{\pi}(\psi-4h^{3}g)^{2}d\phi d\eta=0.$ (4.40) From the relation (4.35), we have $\partial_{\eta}g=\langle\sin\phi\psi\rangle$.
Thus $\displaystyle-\int_{0}^{\infty}F(\varepsilon;\eta)4h^{3}g\partial_{\eta}gd\eta+\frac{1}{2}\int_{0}^{\infty}\int_{-\pi}^{\pi}F(\varepsilon;\eta)\cos\phi\partial_{\phi}\psi^{2}d\phi d\eta$ $\displaystyle\quad=-\int_{0}^{\infty}\int_{-\pi}^{\pi}F(\varepsilon;\eta)4h^{3}g\sin\phi\psi d\phi d\eta+\frac{1}{2}\int_{0}^{\infty}\int_{-\pi}^{\pi}F(\varepsilon;\eta)\sin\phi\psi^{2}d\phi d\eta$ $\displaystyle\quad=\frac{1}{2}\int_{0}^{\infty}\int_{-\pi}^{\pi}F(\varepsilon;\eta)\sin\phi(\psi-4h^{3}g)^{2}d\phi d\eta.$ The spectral assumption implies $\displaystyle\int_{0}^{\infty}4h^{3}|\partial_{\eta}g|^{2}d\eta+\int_{0}^{\infty}\partial_{\eta}(4h^{3})g\partial_{\eta}gd\eta\geq 0.$ Therefore, we get from (4.40) that $\displaystyle\int_{-\pi}^{0}|\sin\phi|\psi^{2}(0,\cdot)d\phi+\int_{0}^{\infty}\int_{-\pi}^{\pi}\left(1+\frac{1}{2}F(\varepsilon;\eta)\sin\phi\right)(\psi-4h^{3}g)^{2}d\phi d\eta\leq 0.$ For $\varepsilon$ sufficiently small, $1+\tfrac{1}{2}F(\varepsilon;\eta)\sin\phi>0$ for any $\eta\in\mathbb{R}_{+}$ and $\phi\in[-\pi,\pi)$, hence the above inequality implies $\psi=4h^{3}g$ almost everywhere on $\mathbb{R}_{+}\times[-\pi,\pi)$ and $\psi(0,\phi)=0$ for almost every $\phi\in[-\pi,\pi)$. Hence $\partial_{\eta}g=\langle\sin\phi\psi\rangle=\langle\sin\phi 4h^{3}g\rangle=0$, which together with $g(0)=0$ implies $g=0$ on $\mathbb{R}_{+}$. We also have $\psi=4h^{3}g=0$. Hence the system (4.38)-(4.39) has only the trivial solution, and the solution to the problem (4.1)-(4.4) is unique. ∎ ###### Remark 17. Compared to the flat boundary case without geometric corrections [6], the effect of the geometric corrections can be seen from (4.32): for $\varepsilon$ not sufficiently small, one cannot get an $L^{2}$ estimate on $\|e^{\beta^{\prime}\eta}(\varphi^{\varepsilon}-4h^{3}G^{\varepsilon})\|_{L^{2}(\mathbb{R}_{+}\times[-\pi,\pi])}$. However, for $\varepsilon$ sufficiently small, the geometric correction contributes only a small error to this $L^{2}$ norm and hence does not affect its boundedness, which is crucial for proving Theorem 13. ## 5\.
Convergence of the diffusive limit ### 5.1. Errors of the approximation Next we calculate the approximation errors. Using systems (2.71)-(2.79) and (2.56)-(2.67), we can calculate $\displaystyle\mathcal{R}_{1}\left(\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon}),\sum_{k=0}^{N}\varepsilon^{k}(\psi_{k}^{\varepsilon}+\bar{\psi}_{k}^{\varepsilon})\right)$ $\displaystyle\quad=\varepsilon^{2}\Delta\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon})+\left\langle\sum_{k=0}^{N}\varepsilon^{k}(\psi_{k}^{\varepsilon}+\bar{\psi}_{k}^{\varepsilon})-\left(\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon})\right)^{4}\right\rangle$ $\displaystyle\quad=\varepsilon^{2}\Delta\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon}+\sum_{k=0}^{N}\varepsilon^{k}\langle\psi_{k}^{\varepsilon}\rangle+\partial_{\eta}^{2}\sum_{k=0}^{N}\varepsilon^{k}\bar{T}_{k}^{\varepsilon}+F(\varepsilon;\eta)\partial_{\eta}\sum_{k=0}^{N}\varepsilon^{k}\bar{T}_{k}^{\varepsilon}+\frac{\chi_{0}\varepsilon^{2}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\sum_{k=0}^{N}\varepsilon^{k}\bar{T}_{k}^{\varepsilon}$ $\displaystyle\qquad+\left\langle\sum_{k=0}^{N}\varepsilon^{k}\bar{\psi}_{k}^{\varepsilon}-\left(\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon})\right)^{4}\right\rangle$ $\displaystyle\quad=\mathcal{R}_{1}\left(\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon},\sum_{k=0}^{N}\varepsilon^{k}\psi_{k}^{\varepsilon}\right)+\left\langle\left(\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon}\right)^{4}\right\rangle$ $\displaystyle\qquad+\sum_{k=0}^{N}\varepsilon^{k}(\partial_{\eta}^{2}\bar{T}_{k}^{\varepsilon}-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\bar{T}_{k}^{\varepsilon}+\langle\bar{\psi}_{k}^{\varepsilon}-\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)\rangle+\frac{1}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}^{\varepsilon})$
$\displaystyle\qquad+\frac{\varepsilon^{N+1}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N-1}^{\varepsilon}+\frac{\varepsilon^{N+2}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N}^{\varepsilon}-\sum_{k=N+1}^{4N}\varepsilon^{k}\langle\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)\rangle$ $\displaystyle\quad=\mathcal{R}_{1}\left(\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon},\sum_{k=0}^{N}\varepsilon^{k}\psi_{k}^{\varepsilon}\right)-\sum_{k=0}^{N}\varepsilon^{k}\langle\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)-\mathcal{C}(T^{\varepsilon},k)\rangle$ $\displaystyle\qquad-\sum_{k=N+1}^{4N}\varepsilon^{k}\langle\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)-\mathcal{C}(T^{\varepsilon},k)\rangle$ $\displaystyle\qquad+\frac{\varepsilon^{N+1}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N-1}^{\varepsilon}+\frac{\varepsilon^{N+2}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N}^{\varepsilon}+\sum_{k=0}^{N}\varepsilon^{k}\langle\mathcal{C}(\bar{T}^{\varepsilon}+P^{\varepsilon},k)-\mathcal{C}(P^{\varepsilon},k)\rangle$ $\displaystyle\qquad+\sum_{k=0}^{N}\varepsilon^{k}(\partial_{\eta}^{2}\bar{T}_{k}^{\varepsilon}-\frac{\varepsilon}{(1-\varepsilon\eta)}\partial_{\eta}\bar{T}_{k}^{\varepsilon}+\frac{1}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}^{\varepsilon}+\langle\bar{\psi}_{k}^{\varepsilon}-\mathcal{C}(\bar{T}^{\varepsilon}+P^{\varepsilon},k)+\mathcal{C}(P^{\varepsilon},k)\rangle),$ (5.1) and $\displaystyle\mathcal{R}_{2}\left(\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon}),\sum_{k=0}^{N}\varepsilon^{k}(\psi_{k}^{\varepsilon}+\bar{\psi}_{k}^{\varepsilon})\right)$ $\displaystyle\quad=\varepsilon\beta\cdot\nabla\sum_{k=0}^{N}\varepsilon^{k}(\psi_{k}^{\varepsilon}+\bar{\psi}_{k}^{\varepsilon})+\sum_{k=0}^{N}\varepsilon^{k}(\psi_{k}^{\varepsilon}+\bar{\psi}_{k}^{\varepsilon})-\left(\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon})\right)^{4}$
$\displaystyle\quad=\varepsilon\beta\cdot\nabla\sum_{k=0}^{N}\varepsilon^{k}\psi_{k}^{\varepsilon}+\sum_{k=0}^{N}\varepsilon^{k}\psi_{k}^{\varepsilon}+\sin\phi\partial_{\eta}\sum_{k=0}^{N}\varepsilon^{k}\bar{\psi}_{k}^{\varepsilon}-\frac{\varepsilon}{1-\varepsilon\eta}\cos\phi\partial_{\phi}\sum_{k=0}^{N}\varepsilon^{k}\bar{\psi}_{k}^{\varepsilon}$ $\displaystyle\qquad-\frac{\varepsilon}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\sum_{k=0}^{N}\varepsilon^{k}\bar{\psi}_{k}^{\varepsilon}+\sum_{k=0}^{N}\varepsilon^{k}\bar{\psi}_{k}^{\varepsilon}-\left(\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon})\right)^{4}$ $\displaystyle\quad=\mathcal{R}_{2}\left(\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon},\sum_{k=0}^{N}\varepsilon^{k}\psi_{k}^{\varepsilon}\right)+\left(\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon}\right)^{4}$ $\displaystyle\qquad+\sum_{k=0}^{N}\varepsilon^{k}(\sin\phi\partial_{\eta}\bar{\psi}_{k}^{\varepsilon}-\frac{\varepsilon}{1-\varepsilon\eta}\cos\phi\partial_{\phi}\bar{\psi}_{k}^{\varepsilon}+\bar{\psi}_{k}^{\varepsilon}-\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)-\frac{1}{(1-\varepsilon\eta)}\cos\phi\partial_{\theta}\bar{\psi}_{k-1})$ $\displaystyle\qquad-\frac{\varepsilon^{N+1}}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{N}^{\varepsilon}-\sum_{k=N+1}^{4N}\varepsilon^{k}\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)$ $\displaystyle\quad=\mathcal{R}_{2}\left(\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon},\sum_{k=0}^{N}\varepsilon^{k}\psi_{k}^{\varepsilon}\right)-\sum_{k=0}^{N}\varepsilon^{k}(\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)-\mathcal{C}(T^{\varepsilon},k))-\sum_{k=N+1}^{4N}\varepsilon^{k}(\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)-\mathcal{C}(T^{\varepsilon},k))$ 
$\displaystyle\qquad-\frac{\varepsilon^{N+1}}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{N}^{\varepsilon}+\sum_{k=0}^{N}\varepsilon^{k}(\mathcal{C}(\bar{T}^{\varepsilon}+P^{\varepsilon},k)-\mathcal{C}(P^{\varepsilon},k))$ $\displaystyle\qquad+\sum_{k=0}^{N}\varepsilon^{k}(\sin\phi\partial_{\eta}\bar{\psi}_{k}^{\varepsilon}-\frac{\varepsilon}{1-\varepsilon\eta}\cos\phi\partial_{\phi}\bar{\psi}_{k}^{\varepsilon}-\frac{1}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}^{\varepsilon}_{k-1}+\bar{\psi}_{k}^{\varepsilon}-\mathcal{C}(\bar{T}^{\varepsilon}+P^{\varepsilon},k)+\mathcal{C}(P^{\varepsilon},k)).$ (5.2) By the definition (2.56) of $(\bar{T}_{0}^{\varepsilon},\bar{\psi}_{0}^{\varepsilon})$, we get (we drop the superscripts $\varepsilon$ in the calculations below) $\displaystyle E_{0}^{0}:=$ $\displaystyle\partial_{\eta}^{2}\bar{T}_{0}-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\bar{T}_{0}+\langle\bar{\psi}_{0}-(\bar{T}_{0}+T_{0}(0))^{4}+T_{0}^{4}(0)\rangle$ $\displaystyle=$ $\displaystyle\partial_{\eta}^{2}(\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty}))-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}(\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty}))+\langle\chi(\tilde{\psi}_{0}-\tilde{\psi}_{0,\infty})-(\bar{T}_{0}+T_{0}(0))^{4}+T_{0}^{4}(0)\rangle$ $\displaystyle=$ $\displaystyle(\tilde{T}_{0}-\tilde{T}_{0,\infty})\partial_{\eta}^{2}\chi+2\partial_{\eta}\chi\partial_{\eta}\tilde{T}_{0}+\chi\partial_{\eta}^{2}\tilde{T}_{0}-\frac{\varepsilon}{(1-\varepsilon\eta)}\partial_{\eta}\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty})$ $\displaystyle-\frac{\varepsilon}{(1-\varepsilon\eta)}\chi\chi_{0}\partial_{\eta}\tilde{T}_{0}+\langle\chi(\tilde{\psi}_{0}-\tilde{\psi}_{0,\infty})-(\bar{T}_{0}+T_{0}(0))^{4}+T_{0}^{4}(0)\rangle$ $\displaystyle=$
$\displaystyle(\tilde{T}_{0}-\tilde{T}_{0,\infty})\partial_{\eta}^{2}\chi+2\partial_{\eta}\chi\partial_{\eta}\tilde{T}_{0}-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty})+\chi(\partial_{\eta}^{2}\tilde{T}_{0}+F(\varepsilon;\eta)\partial_{\eta}\tilde{T}_{0}+\langle\tilde{\psi}_{0}-\tilde{\psi}_{0,\infty}\rangle)$ $\displaystyle-\langle(\bar{T}_{0}+T_{0}(0))^{4}-T_{0}^{4}(0)\rangle$ $\displaystyle=$ $\displaystyle(\tilde{T}_{0}-\tilde{T}_{0,\infty})\partial_{\eta}^{2}\chi+2\partial_{\eta}\chi\partial_{\eta}\tilde{T}_{0}-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty})$ $\displaystyle+\chi\langle\tilde{T}_{0}^{4}-\tilde{T}_{0,\infty}^{4}\rangle-\langle(\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty})+\tilde{T}_{0,\infty})^{4}-\tilde{T}_{0,\infty}^{4}\rangle,$ (5.3) where we use the fact that $\partial_{\eta}^{2}\chi=0$ and $\chi\chi_{0}=\chi$. We can also calculate $\displaystyle E_{0}^{1}:=$ $\displaystyle\sin\phi\partial_{\eta}\bar{\psi}_{0}-\frac{\varepsilon}{1-\varepsilon\eta}\cos\phi\partial_{\phi}\bar{\psi}_{0}+\bar{\psi}_{0}-(\bar{T}_{0}+T_{0}(0))^{4}+T_{0}^{4}(0)$ $\displaystyle=$ $\displaystyle\sin\phi(\tilde{\psi}_{0}-\tilde{\psi}_{0,\infty})\partial_{\eta}\chi+\chi(\sin\phi\partial_{\eta}\tilde{\psi}_{0}+F(\varepsilon;\eta)\cos\phi\partial_{\phi}\tilde{\psi}_{0}+\tilde{\psi}_{0}-\tilde{T}_{0}^{4})$ $\displaystyle+\chi(\tilde{T}_{0}^{4}-\tilde{T}_{0,\infty}^{4})-((\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty})+\tilde{T}_{0,\infty})^{4}-\tilde{T}_{0,\infty}^{4})$ $\displaystyle=$ $\displaystyle\sin\phi(\tilde{\psi}_{0}-\tilde{\psi}_{0,\infty})\partial_{\eta}\chi+\chi(\tilde{T}_{0}^{4}-\tilde{T}_{0,\infty}^{4})-((\chi(\tilde{T}_{0}-\tilde{T}_{0,\infty})+\tilde{T}_{0,\infty})^{4}-\tilde{T}_{0,\infty}^{4}).$ (5.4) Similarly, by (2.67), we get $\displaystyle E_{k}^{0}:=$ 
$\displaystyle\partial_{\eta}^{2}\bar{T}_{k}-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\bar{T}_{k}+\frac{1}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}+\langle\bar{\psi}_{k}-\mathcal{C}(\bar{T}+P,k)+\mathcal{C}(P,k)\rangle$ (5.5) $\displaystyle=$ $\displaystyle\partial_{\eta}^{2}(\chi(\tilde{T}_{k}-\tilde{T}_{k,\infty}))-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}(\chi(\tilde{T}_{k}-\tilde{T}_{k,\infty}))+\frac{1}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}$ $\displaystyle+\langle\chi(\tilde{\psi}_{k}-\tilde{\psi}_{k,\infty})-4(P_{0}+\bar{T}_{0})^{3}(P_{k}+\bar{T}_{k})+4P_{0}^{3}P_{k}\rangle-\langle\mathcal{E}(P+\bar{T},k-1)-\mathcal{E}(P,k-1)\rangle$ $\displaystyle=$ $\displaystyle(\tilde{T}_{k}-\tilde{T}_{k,\infty})\partial_{\eta}^{2}\chi+2\partial_{\eta}\chi\partial_{\eta}\tilde{T}_{k}-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\chi(\tilde{T}_{k}-\tilde{T}_{k,\infty})$ $\displaystyle+\chi\bigg{\langle}\partial_{\eta}^{2}\tilde{T}_{k}+F(\varepsilon;\eta)\partial_{\eta}\tilde{T}_{k}+\tilde{\psi}_{k}-4\tilde{T}_{0}^{3}\tilde{T}_{k}-4(\tilde{T}_{0}^{3}-P_{0}^{3})(P_{k}-P_{k}(0))-\mathcal{E}(\bar{T}+P,k-1)$ $\displaystyle+\mathcal{E}(P,k-1)+\frac{\chi_{0}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}\bigg{\rangle}+\chi\langle 4\tilde{T}_{0}^{3}\tilde{T}_{k}-4\tilde{T}_{0,\infty}^{3}\tilde{T}_{k,\infty}\rangle$ $\displaystyle-\langle 4(P_{0}+\bar{T}_{0})^{3}(P_{k}+\bar{T}_{k})-4P_{0}^{3}P_{k}-\chi(4(\tilde{T}_{0}^{3}-P_{0}^{3})(P_{k}-P_{k}(0)))\rangle$ $\displaystyle-(1-\chi)\langle\mathcal{E}(\bar{T}+P,k-1)-\mathcal{E}(P,k-1)\rangle+\frac{1-\chi}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}$ (5.6) $\displaystyle=$ $\displaystyle(\tilde{T}_{k}-\tilde{T}_{k,\infty})\partial_{\eta}^{2}\chi+2\partial_{\eta}\chi\partial_{\eta}\tilde{T}_{k}-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\chi(\tilde{T}_{k}-\tilde{T}_{k,\infty})+\chi\langle 
4\tilde{T}_{0}^{3}\tilde{T}_{k}-4\tilde{T}_{0,\infty}^{3}\tilde{T}_{k,\infty}\rangle$ $\displaystyle-\langle 4(P_{0}+\bar{T}_{0})^{3}(P_{k}+\bar{T}_{k})-4P_{0}^{3}P_{k}-\chi(4(\tilde{T}_{0}^{3}-P_{0}^{3})(P_{k}-P_{k}(0)))\rangle$ $\displaystyle-(1-\chi)\langle\mathcal{E}(\bar{T}+P,k-1)-\mathcal{E}(P,k-1)\rangle+\frac{1-\chi}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}$ (5.7) and $\displaystyle E_{k}^{1}:=$ $\displaystyle\sin\phi\partial_{\eta}\bar{\psi}_{k}-\frac{\varepsilon}{1-\varepsilon\eta}\cos\phi\partial_{\phi}\bar{\psi}_{k}-\frac{1}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{k-1}+\bar{\psi}_{k}-\mathcal{C}(\bar{T}+P,k)+\mathcal{C}(P,k)$ $\displaystyle=$ $\displaystyle\sin\phi\partial_{\eta}\chi(\tilde{\psi}_{k}-\tilde{\psi}_{k,\infty})+\chi\bigg{(}\sin\phi\partial_{\eta}\tilde{\psi}_{k}+F(\varepsilon;\eta)\cos\phi\partial_{\phi}\tilde{\psi}_{k}-\frac{\chi_{0}}{(1-\varepsilon\eta)}\cos\phi\partial_{\theta}\bar{\psi}_{k-1}$ $\displaystyle+\tilde{\psi}_{k}-4\tilde{T}_{0}^{3}\tilde{T}_{k}-4(\tilde{T}_{0}^{3}-P_{0}^{3})(P_{k}-P_{k}(0))-\mathcal{E}(\bar{T}+P,k-1)+\mathcal{E}(P,k-1)\bigg{)}$ $\displaystyle+\chi(4\tilde{T}_{0}^{3}\tilde{T}_{k}-4\tilde{T}_{0,\infty}^{3}\tilde{T}_{k,\infty})-(4(P_{0}+\bar{T}_{0})^{3}(P_{k}+\bar{T}_{k})-4P_{0}^{3}P_{k}-\chi(4(\tilde{T}_{0}^{3}-P_{0}^{3})(P_{k}-P_{k}(0))))$ $\displaystyle-(1-\chi)(\mathcal{E}(\bar{T}+P,k-1)-\mathcal{E}(P,k-1))-\frac{1-\chi}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{k-1}$ $\displaystyle=$ $\displaystyle\sin\phi\partial_{\eta}\chi(\tilde{\psi}_{k}-\tilde{\psi}_{k,\infty})+\chi(4\tilde{T}_{0}^{3}\tilde{T}_{k}-4\tilde{T}_{0,\infty}^{3}\tilde{T}_{k,\infty})$ $\displaystyle-(4(P_{0}+\bar{T}_{0})^{3}(P_{k}+\bar{T}_{k})-4P_{0}^{3}P_{k}-\chi(4(\tilde{T}_{0}^{3}-P_{0}^{3})(P_{k}-P_{k}(0))))$ $\displaystyle-(1-\chi)(\mathcal{E}(\bar{T}+P,k-1)-\mathcal{E}(P,k-1))-\frac{1-\chi}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{k-1}.$ (5.8) Using the formulas (5.1) and (5.2), we
get from the above equations $\displaystyle\mathcal{R}_{1}(T^{a,\varepsilon},\psi^{a,\varepsilon})$ $\displaystyle\quad=\varepsilon^{N+1}\Delta T_{N-1}^{\varepsilon}+\varepsilon^{N+2}\Delta T_{N}^{\varepsilon}-\sum_{k=0}^{N}\varepsilon^{k}\langle\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)-\mathcal{C}(T^{\varepsilon},k)-\mathcal{C}(\bar{T}^{\varepsilon}+P^{\varepsilon},k)+\mathcal{C}(P^{\varepsilon},k)\rangle$ $\displaystyle\qquad-\sum_{k=N+1}^{4N}\varepsilon^{k}\langle\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)\rangle+\frac{\varepsilon^{N+1}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N-1}^{\varepsilon}+\frac{\varepsilon^{N+2}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N}^{\varepsilon}+\sum_{k=0}^{N}\varepsilon^{k}E_{k}^{0},$ (5.9) and $\displaystyle\mathcal{R}_{2}(T^{a,\varepsilon},\psi^{a,\varepsilon})$ $\displaystyle=\varepsilon^{N+1}\beta\cdot\nabla\psi_{N}^{\varepsilon}-\sum_{k=0}^{N}\varepsilon^{k}(\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k)-\mathcal{C}(T^{\varepsilon},k)-\mathcal{C}(\bar{T}^{\varepsilon}+P^{\varepsilon},k)+\mathcal{C}(P^{\varepsilon},k))$ $\displaystyle\quad-\sum_{k=N+1}^{4N}\varepsilon^{k}(\mathcal{C}(T^{\varepsilon}+\bar{T}^{\varepsilon},k))-\frac{\varepsilon^{N+1}}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{N}^{\varepsilon}+\sum_{k=0}^{N}\varepsilon^{k}E_{k}^{1}.$ (5.10) ### 5.2. Estimation on the approximation errors The approximation errors calculated above are expected to be small for large $N$. Here we estimate the terms in (5.9)-(5.10) and prove the following lemma. ###### Lemma 18. Assume the boundary data in (1.3)-(1.4) satisfy $T_{b}\in C^{2}(\Omega)$, $\psi_{b}\in C^{1}(\Gamma_{-})$. Let $(T^{a,\varepsilon},\psi^{a,\varepsilon})$ be the approximate solution constructed in Section 2. Assume the solution $\tilde{T}_{0}^{\varepsilon}$ to (2.56) satisfies the spectral assumption (A) and $\tilde{T}_{0}^{\varepsilon}\geq a$ for some constant $a>0$.
Then for sufficiently large $N$, $\displaystyle\|\mathcal{R}_{1}(T^{a},\psi^{a})\|_{L^{2}(\Omega)},~{}\|\mathcal{R}_{1}(T^{a},\psi^{a})\|_{L^{\infty}(\Omega)}\leq C\varepsilon^{N+1}+(N+2)Ce^{-\frac{\lambda\delta}{4\varepsilon}},$ (5.11) $\displaystyle\|\mathcal{R}_{2}(T^{a},\psi^{a})\|_{L^{2}(\Omega\times\mathbb{S}^{2})},~{}\|\mathcal{R}_{2}(T^{a},\psi^{a})\|_{L^{\infty}(\Omega\times\mathbb{S}^{2})}\leq C\varepsilon^{N+1}+(N+2)Ce^{-\frac{\lambda\delta}{4\varepsilon}},$ (5.12) for some constant $0\leq\lambda<1/2$ and a constant $C>0$ independent of $\varepsilon$. ###### Proof. The proof of the lemma follows from the decay properties of solutions to the nonlinear and linear Milne problems (Theorem 4 and Theorem 13). Below we give the estimates on the terms related to the geometric corrections; the other terms also appear in the flat case, and their estimates can be found in [6]. Since the solutions to the Milne problems with geometric corrections decay exponentially to constant solutions, just as for the Milne problems without geometric corrections in [6], these terms can be estimated in the same way.
First by Theorem 4 and Theorem 13, there exists a constant $0\leq\lambda<1/2$ such that $\displaystyle|\tilde{T}_{k}^{\varepsilon}(\eta)-\tilde{T}_{k,\infty}^{\varepsilon}|\leq Ce^{-\lambda\eta},\quad|\tilde{\psi}_{k}^{\varepsilon}(\eta)-\tilde{\psi}_{k,\infty}^{\varepsilon}|\leq Ce^{-\lambda\eta}.$ By the definition of $\bar{T}_{k}^{\varepsilon},\bar{\psi}_{k}^{\varepsilon}$, the terms involving $\partial_{\theta}$ and $\partial_{\theta}^{2}$ in (5.9)-(5.10) can be estimated by $\displaystyle\left|\frac{\varepsilon^{N+1}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N-1}^{\varepsilon}\right|+\left|\frac{\varepsilon^{N+2}}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{N}^{\varepsilon}\right|+\left|\frac{\varepsilon^{N+1}}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{N}^{\varepsilon}\right|\leq C\varepsilon^{N+1},$ since $\bar{T}_{k}^{\varepsilon},\bar{\psi}_{k}^{\varepsilon}$ are bounded functions. For the term $E_{0}^{0}$ given in (5.3), the term related to the geometric correction can be estimated by $\displaystyle\left|\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\chi(\tilde{T}_{0}^{\varepsilon}-\tilde{T}_{0,\infty}^{\varepsilon})\right|\leq\frac{C\varepsilon}{1-\varepsilon\eta}1_{\tfrac{1}{4}\delta\leq\varepsilon\eta\leq\tfrac{3}{8}\delta}e^{-\lambda\eta}\leq\frac{C}{1-\tfrac{3}{8}\delta}e^{-\frac{\lambda\delta}{4\varepsilon}}.$ The term $E_{0}^{1}$ is the same as in the case without geometric correction [6].
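These indicator-supported bounds are elementary sup estimates over the support of the cutoff derivative. As an illustrative sanity check (not part of the proof; the values of $\varepsilon$, $\delta$, $\lambda$ below are arbitrary samples with $\varepsilon\leq 1$, $\delta<1$, and the constant $C$ is normalized to $1$), one can verify the first such bound numerically:

```python
import math

# Illustrative check of the cutoff-term bound
#   (eps/(1-eps*eta)) * e^{-lam*eta} <= (1/(1 - 3*delta/8)) * e^{-lam*delta/(4*eps)}
# on the support delta/(4*eps) <= eta <= 3*delta/(8*eps) of the cutoff derivative.

def layer_term(eta, eps):
    # the prefactor eps/(1 - eps*eta)
    return eps / (1.0 - eps * eta)

def bound(eps, delta, lam):
    # the right-hand side of the claimed estimate
    return math.exp(-lam * delta / (4.0 * eps)) / (1.0 - 3.0 * delta / 8.0)

def check(eps, delta, lam, n=1000):
    # verify the inequality on a uniform grid over the indicator's support
    lo, hi = delta / (4.0 * eps), 3.0 * delta / (8.0 * eps)
    b = bound(eps, delta, lam)
    return all(
        layer_term(lo + (hi - lo) * i / n, eps)
        * math.exp(-lam * (lo + (hi - lo) * i / n)) <= b
        for i in range(n + 1)
    )

assert check(0.05, 0.4, 0.3)
assert check(0.02, 0.3, 0.4)
```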
For $E_{k}^{0}$, the related terms are $\displaystyle\left|-\frac{\varepsilon}{1-\varepsilon\eta}\partial_{\eta}\chi(\tilde{T}_{k}^{\varepsilon}-\tilde{T}_{k,\infty}^{\varepsilon})\right|+\left|\frac{1-\chi}{(1-\varepsilon\eta)^{2}}\partial_{\theta}^{2}\bar{T}_{k-2}^{\varepsilon}\right|$ $\displaystyle\quad\leq\frac{C\varepsilon}{1-\varepsilon\eta}1_{\tfrac{1}{4}\delta\leq\varepsilon\eta\leq\tfrac{3}{8}\delta}e^{-\lambda\eta}+\frac{1}{(1-\varepsilon\eta)^{2}}1_{\tfrac{1}{4}\delta\leq\varepsilon\eta\leq\tfrac{3}{8}\delta}e^{-\lambda\eta}$ $\displaystyle\quad\leq\frac{C}{1-\tfrac{3}{8}\delta}e^{-\frac{\lambda\delta}{4\varepsilon}}+\frac{C}{(1-\tfrac{3}{8}\delta)^{2}}e^{-\frac{\lambda\delta}{4\varepsilon}}.$ The geometric correction related term in $E_{k}^{1}$ can be estimated similarly by $\displaystyle\left|-\frac{1-\chi}{1-\varepsilon\eta}\cos\phi\partial_{\theta}\bar{\psi}_{k-1}\right|\leq\frac{C}{1-\tfrac{3}{8}\delta}e^{-\frac{\lambda\delta}{4\varepsilon}}.$ Combining the above estimates with the corresponding flat-case estimates in [6], we obtain (5.11) and (5.12). ∎ ### 5.3. Proof of Theorem 1 The proof of Theorem 1 is based on the Banach fixed point theorem, by showing that system (1.1)-(1.2) has a unique solution around $(T^{a,\varepsilon},\psi^{a,\varepsilon})$; it is given in [6, section 4]. A crucial step in the proof is the following lemma. ###### Lemma 19. Let $(T^{a,\varepsilon},\psi^{a,\varepsilon})$ be the approximate solution constructed in Section 2. Assume the spectral assumption (A) holds for the solution $\tilde{T}_{0}^{\varepsilon}$ of the nonlinear Milne problem (2.56), that $\tilde{T}_{0}^{\varepsilon}\geq a$ for some constant $a>0$, and that $T^{a,\varepsilon}$ and $\tilde{T}^{\varepsilon}_{0}$ are positive.
Then the following inequality holds: $\displaystyle-\int_{\Omega}4(T^{a,\varepsilon})^{3}g\Delta gdx=\int_{\Omega}4(T^{a,\varepsilon})^{3}|\nabla g|^{2}dx-\int_{\Omega}\nabla(4(T^{a,\varepsilon})^{3})\cdot g\nabla gdx\geq\kappa\int_{\Omega}|\nabla g|^{2}dx-C\|g\|_{L^{2}(\Omega)}^{2},$ (5.13) for any function $g$ satisfying $g(0)=0$, and for some constants $\kappa>0$ and $C>0$ depending on $M$, where $M<1$ is the constant in the spectral assumption. ###### Proof. Note that $T^{a,\varepsilon}=\sum_{k=0}^{N}\varepsilon^{k}(T_{k}^{\varepsilon}+\bar{T}_{k}^{\varepsilon})$, where $\bar{T}_{k}^{\varepsilon}=\chi(1-r)(\tilde{T}_{k}^{\varepsilon}-\tilde{T}_{k,\infty}^{\varepsilon})$. In the domain $1-r>\tfrac{3}{8}\delta$, $\chi(1-r)=0$ and $T^{a,\varepsilon}=\sum_{k=0}^{N}\varepsilon^{k}T_{k}^{\varepsilon}$, which only contains the interior approximations. Since $\|T_{k}^{\varepsilon}\|_{C^{s}(\Omega)}$ is bounded for any $s>0$ and $k=0,\ldots,N$, $\displaystyle\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}4(T^{a,\varepsilon})^{3}|\nabla g|^{2}dx-\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}\nabla(4(T^{a,\varepsilon})^{3})\cdot g\nabla gdx$ $\displaystyle\quad=\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}4(T^{a,\varepsilon})^{3}|\nabla g|^{2}dx-\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}2(T^{a,\varepsilon})^{3/2}\nabla g\cdot 6(T^{a,\varepsilon})^{1/2}\nabla T^{a,\varepsilon}gdx$ $\displaystyle\quad\geq\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}4(T^{a,\varepsilon})^{3}|\nabla g|^{2}dx-\frac{1}{2}\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}4(T^{a,\varepsilon})^{3}|\nabla g|^{2}dx-\frac{1}{2}\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}36T^{a,\varepsilon}|\nabla T^{a,\varepsilon}|^{2}g^{2}dx$ $\displaystyle\quad\geq\int_{\Omega\cap\\{1-r>\tfrac{3}{8}\delta\\}}2(T^{a,\varepsilon})^{3}|\nabla g|^{2}dx-C\|g\|_{L^{2}(\Omega)}^{2}.$ (5.14) In the domain $1-r\leq\frac{3}{8}\delta$, boundary layer effects play a role.
First we split the integral into tangential and normal parts: $\displaystyle\int_{\Omega\cap\\{1-r\leq\tfrac{3}{8}\delta\\}}4(T^{a,\varepsilon})^{3}|\nabla g|^{2}dx-\int_{\Omega\cap\\{1-r\leq\tfrac{3}{8}\delta\\}}\nabla(4(T^{a,\varepsilon})^{3})\cdot g\nabla gdx=:I_{1}+I_{2},$ where $I_{1}$ collects the tangential derivatives $\nabla_{x^{\prime}}$ and $I_{2}$ the normal derivatives $\partial_{\eta}$. Since $\|\nabla_{x^{\prime}}(T^{a})\|_{L^{2}(\Omega)}$ is bounded, we can estimate $I_{1}$ in the same way as (5.14): $\displaystyle I_{1}\geq\int_{\Omega\cap\\{x_{1}\leq\tfrac{3}{8}\delta\\}}2(T^{a})^{3}|\nabla_{x^{\prime}}g|^{2}dx-C\|g\|_{L^{2}(\Omega)}^{2}.$ (5.15) To estimate $I_{2}$, we use the spectral assumption (A). The composite approximate solution $(T^{a},\psi^{a})$ is close to the solution $(\tilde{T}_{0},\tilde{\psi}_{0})$ of the nonlinear Milne problem (2.56). Using the expansion $\displaystyle(T^{a})^{3}=\left(\sum_{k=0}^{N}\varepsilon^{k}(T_{k}+\bar{T}_{k})\right)^{3}=(T_{0}+\bar{T}_{0})^{3}+\varepsilon G,$ where $G=3(T_{0}+\bar{T}_{0})^{2}\sum_{k=1}^{N}\varepsilon^{k-1}(T_{k}+\bar{T}_{k})+3\varepsilon(T_{0}+\bar{T}_{0})\left(\sum_{k=1}^{N}\varepsilon^{k-1}(T_{k}+\bar{T}_{k})\right)^{2}+\varepsilon^{2}\left(\sum_{k=1}^{N}\varepsilon^{k-1}(T_{k}+\bar{T}_{k})\right)^{3}$, we can rewrite $I_{2}$ as $\displaystyle I_{2}$ $\displaystyle=\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}\int_{0}^{\frac{3\delta}{8\varepsilon}}(4(T_{0}+\bar{T}_{0})^{3}+4\varepsilon G)|\partial_{\eta}g|^{2}d\eta dx^{\prime}-\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}\int_{0}^{\frac{3\delta}{8\varepsilon}}\partial_{\eta}(4(T_{0}+\bar{T}_{0})^{3}+4\varepsilon G)g\partial_{\eta}gd\eta dx^{\prime}$
$\displaystyle=\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}-\partial_{\eta}(4\tilde{T}_{0}^{3})g\partial_{\eta}g)d\eta+\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(4(T_{0}+\bar{T}_{0})^{3}-4\tilde{T}_{0}^{3})|\partial_{\eta}g|^{2}d\eta$ $\displaystyle\quad-\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}\partial_{\eta}(4(T_{0}+\bar{T}_{0})^{3}-4\tilde{T}_{0}^{3})g\partial_{\eta}gd\eta+\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}4\varepsilon(G|\partial_{\eta}g|^{2}-\partial_{\eta}Gg\partial_{\eta}g)d\eta$ $\displaystyle=:I_{21}+I_{22}+I_{23}+I_{24}.$ The spectral assumption (A) implies $\displaystyle I_{21}$ $\displaystyle=\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}-\partial_{\eta}(4\tilde{T}_{0}^{3})g\partial_{\eta}g)d\eta$ $\displaystyle\geq\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}-\frac{1}{2}(4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}+36\tilde{T}_{0}|\partial_{\eta}\tilde{T}_{0}|^{2}g^{2}))d\eta$ $\displaystyle\geq\frac{1}{2\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}-36\tilde{T}_{0}|\partial_{\eta}\tilde{T}_{0}|^{2}g^{2})d\eta$ $\displaystyle\geq\frac{1-M}{2\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}d\eta.$ For $I_{22}$, since $T_{0}+\bar{T}_{0}=T_{0}+\chi(\varepsilon\eta)(\tilde{T}_{0}-T_{0}(0))$, it holds that $\displaystyle(T_{0}+\bar{T}_{0})^{3}-\tilde{T}_{0}^{3}$
$\displaystyle=(T_{0}+\chi(\varepsilon\eta)(\tilde{T}_{0}-T_{0}(0))-\tilde{T}_{0})(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2})$ $\displaystyle=((T_{0}-T_{0}(0))-(1-\chi(\varepsilon\eta))(\tilde{T}_{0}-T_{0}(0)))(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2})$ $\displaystyle=(\partial_{x_{1}}T_{0}(\xi)\varepsilon\eta-(1-\chi(\varepsilon\eta))(\tilde{T}_{0}-T_{0}(0)))(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2}).$ Since we are considering the integration over $x_{1}=\varepsilon\eta\in[0,\tfrac{3}{8}\delta]$ and $(1-\chi(\varepsilon\eta))$ is supported on $[\tfrac{1}{4}\delta,\tfrac{3}{8}\delta]$, $\displaystyle I_{22}=\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(4(T_{0}+\bar{T}_{0})^{3}-4\tilde{T}_{0}^{3})|\partial_{\eta}g|^{2}d\eta\leq\frac{3\delta C}{8\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}|\partial_{\eta}g|^{2}d\eta.$ For $I_{23}$, due to $\displaystyle\partial_{\eta}((T_{0}+\bar{T}_{0})^{3}-\tilde{T}_{0}^{3})$ $\displaystyle=\partial_{\eta}(T_{0}+\chi(\varepsilon\eta)(\tilde{T}_{0}-T_{0}(0))-\tilde{T}_{0})(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2})$ $\displaystyle\quad+(T_{0}+\chi(\varepsilon\eta)(\tilde{T}_{0}-T_{0}(0))-\tilde{T}_{0})\partial_{\eta}(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2})$ $\displaystyle=(\varepsilon\partial_{x_{1}}T_{0}(\varepsilon\eta)+\varepsilon\chi^{\prime}(\varepsilon\eta)(\tilde{T}_{0}-T_{0}(0)))(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2})$ $\displaystyle\quad+(\chi(\varepsilon\eta)-1)\partial_{\eta}\tilde{T}_{0}(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2})$
$\displaystyle\quad+(\partial_{x_{1}}T_{0}(\xi)\varepsilon\eta-(1-\chi(\varepsilon\eta))(\tilde{T}_{0}-T_{0}(0)))\partial_{\eta}(\tilde{T}_{0}^{2}+\tilde{T}_{0}(T_{0}+\bar{T}_{0})+(T_{0}+\bar{T}_{0})^{2}),$ with consideration of $\varepsilon\eta\in[0,\tfrac{3}{8}\delta]$ and $(1-\chi(\varepsilon\eta))$ being supported on $[\tfrac{1}{4}\delta,\tfrac{3}{8}\delta]$, it holds that $\displaystyle I_{23}$ $\displaystyle=-\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}\partial_{\eta}(4(T_{0}+\bar{T}_{0})^{3}-4\tilde{T}_{0}^{3})g\partial_{\eta}gd\eta\leq\frac{C}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(\varepsilon|g||\partial_{\eta}g|+\frac{3\delta}{8}|g||\partial_{\eta}g|)d\eta$ $\displaystyle\leq\frac{3\delta C}{8\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(g^{2}+|\partial_{\eta}g|^{2})d\eta.$ For $I_{24}$, we have $\displaystyle I_{24}=\frac{1}{\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}4\varepsilon(G|\partial_{\eta}g|^{2}-\partial_{\eta}Gg\partial_{\eta}g)d\eta\leq\frac{C}{\varepsilon}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(|g|^{2}+|\partial_{\eta}g|^{2})d\eta.$ Combining the above estimates gives $\displaystyle I_{2}$ $\displaystyle\geq\frac{1-M}{2\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}d\eta-\frac{3\delta C}{8\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(|g|^{2}+|\partial_{\eta}g|^{2})d\eta$ $\displaystyle\quad-\frac{C}{\varepsilon}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}(|g|^{2}+|\partial_{\eta}g|^{2})d\eta.$ By the assumption of the lemma, $\tilde{T}_{0}\geq a$ for some constant $a>0$, hence $4\tilde{T}_{0}^{3}\geq 4a^{3}$.
We can take sufficiently small $\varepsilon$ and $\delta$ such that $\varepsilon<(1-M)a^{3}/C$ and $3\delta C/8\leq(1-M)/8$, and we get from the above inequality $\displaystyle I_{2}\geq\frac{1-M}{4\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}d\eta.$ Combining this with (5.14) and (5.15) implies $\displaystyle-\int_{\Omega}4(T^{a})^{3}g\Delta gdx$ $\displaystyle=\int_{\Omega}4(T^{a})^{3}|\nabla g|^{2}dx-\int_{\Omega}\nabla(4(T^{a})^{3})\cdot g\nabla gdx$ $\displaystyle\geq\int_{\Omega\cap\\{x_{1}>\tfrac{3}{8}\delta\\}}2(T^{a})^{3}|\nabla g|^{2}dx+\int_{\Omega\cap\\{x_{1}\leq\tfrac{3}{8}\delta\\}}2(T^{a})^{3}|\nabla_{x^{\prime}}g|^{2}dx$ $\displaystyle\quad+\frac{1-M}{4\varepsilon^{2}}\int_{\mathbb{T}^{2}}dx^{\prime}\int_{0}^{\frac{3\delta}{8\varepsilon}}4\tilde{T}_{0}^{3}|\partial_{\eta}g|^{2}d\eta-C\|g\|_{L^{2}(\Omega)}^{2}$ $\displaystyle\geq\kappa\|\nabla g\|_{L^{2}(\Omega)}^{2}-C\|g\|_{L^{2}(\Omega)}^{2},$ where $\kappa=\min\\{2a^{3},(1-M)a^{3}\\}$, which finishes the proof. ∎ With the above lemma, Theorem 1 can be proved in the same way as [6, Theorem 1]. ## Appendix A Transport equation with geometric correction in half-space Consider the equation in $(\eta,\phi)\in\mathbb{R}_{+}\times[-\pi,\pi)$: $\displaystyle\sin\phi\partial_{\eta}f+F(\eta)\cos\phi\partial_{\phi}f=H(\eta,\phi),$ (A.1) $\displaystyle f(0,\phi)=h(\phi),\quad\text{for }\sin\phi>0.$ (A.2) The following lemma was proved in [10, Pages 1512-1514]. ###### Lemma 20. 
There exists a solution to the above problem given by $\displaystyle f(\eta,\phi)=\mathcal{A}h(\phi)+\mathcal{T}H(\eta,\phi),$ (A.3) where $\mathcal{A}$ and $\mathcal{T}$ are defined as follows: for $\sin\phi>0$, $\displaystyle\mathcal{A}h(\phi)=h(\phi^{\prime}(\phi,\eta,0))e^{-\int_{0}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\xi)}d\xi},\quad\mathcal{T}H(\eta,\phi)=\int_{0}^{\eta}\frac{H(\xi,\phi^{\prime}(\phi,\eta,\xi))}{\sin\phi^{\prime}(\phi,\eta,\xi)}e^{-\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(\phi,\eta,\rho)}d\rho}d\xi,$ (A.4) for $\sin\phi<0$ and $|E(\phi,\eta)|\leq e^{-V(\varepsilon;\infty)}$, $\displaystyle\mathcal{A}h(\phi)$ $\displaystyle=0,\quad\mathcal{T}H(\eta,\phi)=\int_{\eta}^{\infty}\frac{H(\xi,-\phi^{\prime}(-\phi,\eta,\xi))}{\sin\phi^{\prime}(-\phi,\eta,\xi)}e^{\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(-\phi,\eta,\rho)}d\rho}d\xi,$ (A.5) and for $\sin\phi<0$ and $|E(\phi,\eta)|\geq e^{-V(\varepsilon;\infty)}$, $\displaystyle\mathcal{A}h(\phi)=h(\phi^{\prime}(-\phi,\eta,0))e^{-\int_{0}^{\eta_{+}}+\int_{\eta}^{\eta_{+}}\frac{1}{\sin\phi^{\prime}(-\phi,\eta,\xi)}d\xi},$ $\displaystyle\mathcal{T}H(\eta,\phi)=\int_{0}^{\eta_{+}}\frac{H(\xi,\phi^{\prime}(-\phi,\eta,\xi))}{\sin\phi^{\prime}(-\phi,\eta,\xi)}e^{-\int_{\xi}^{\eta_{+}}+\int_{\eta}^{\eta_{+}}\frac{1}{\sin\phi^{\prime}(-\phi,\eta,\rho)}d\rho}d\xi$ $\displaystyle\qquad\qquad\quad+\int_{\eta}^{\eta_{+}}\frac{H(\xi,-\phi^{\prime}(-\phi,\eta,\xi))}{\sin\phi^{\prime}(-\phi,\eta,\xi)}e^{\int_{\xi}^{\eta}\frac{1}{\sin\phi^{\prime}(-\phi,\eta,\rho)}d\rho}d\xi.$ (A.6) Moreover, $\mathcal{A}$ and $\mathcal{T}$ satisfy $\displaystyle\|e^{\beta\eta}\mathcal{A}h\|_{L^{\infty}(\mathbb{R}_{+}\times[-\pi,\pi))}\leq\|h\|_{L^{\infty}((0,\pi))},\quad\text{for any }0\leq\beta\leq 1,$ (A.7) $\displaystyle\|e^{\beta\eta}\mathcal{T}H\|_{L^{\infty}(\mathbb{R}_{+}\times[-\pi,\pi))}\leq C\|e^{\beta\eta}H\|_{L^{\infty}(\mathbb{R}_{+}\times[-\pi,\pi))},\quad\text{for any }0\leq\beta\leq\tfrac{1}{2}.$ (A.8) 
## Appendix B Existence for the linearized Milne problem without geometric correction Consider the following linear Milne problem on the half-line: $\displaystyle\partial_{\eta}G+\langle\varphi-4h^{3}G\rangle=S_{1},$ (B.1) $\displaystyle\sin\phi\partial_{\eta}\varphi+\varphi-4h^{3}G=S_{2},$ (B.2) with boundary conditions $\displaystyle G(0)=G_{b},\quad\varphi(0,\phi)=\varphi_{b}(\phi),\quad\text{for }\sin\phi>0.$ (B.4) The following theorem was proved in [6, Theorem 2]. ###### Theorem 21. Assume $h$ satisfies the spectral assumption (A). Assume $S_{1}=S_{1}(\eta)$, $S_{2}=S_{2}(\eta,\phi)$ satisfy $|S_{1}(\eta)|,|S_{2}(\eta,\phi)|\leq C_{S}e^{-\beta\eta}$ for any $\eta\in\mathbb{R}_{+}$, $\phi\in[-\pi,\pi)$. Then there exists a unique bounded solution $(G,\varphi)\in L^{2}_{\rm loc}(\mathbb{R}_{+})\times L^{2}(\mathbb{R}_{+}\times[-\pi,\pi))$ to system (B.1)-(B.2) with boundary conditions (B.4). Moreover, there exists a constant $G_{\infty}\in\mathbb{R}$ such that $\displaystyle|G(\eta)-G_{\infty}|\leq Ce^{-\beta\eta},\quad|\varphi(\eta,\phi)-4h^{3}_{\infty}G_{\infty}|\leq Ce^{-\beta\eta},\quad\text{for any }\eta\in\mathbb{R}_{+},~{}\phi\in[-\pi,\pi),$ (B.5) where $C>0$ is a positive constant depending on $\beta$ and $\int_{\sin\phi>0}\sin\phi\varphi_{b}^{2}d\phi$. The theorem was proved in [6] by first showing the existence on a bounded interval and then using uniform estimates to extend the solution to the half-line. Consider system (B.1)-(B.2) on the interval $\eta\in[0,B]$ with the following boundary conditions at $\eta=B$, $\displaystyle\partial_{\eta}G(B)=0,\quad\varphi(B,\phi)=\varphi(B,-\phi),\quad\text{for }\sin\phi>0.$ (B.6) The following theorem holds. ###### Theorem 22. Let the assumptions of Theorem 21 hold. Then there exists a unique solution $(G,\varphi)\in C^{2}([0,B])\times C^{1}([0,B]\times[-\pi,\pi))$ to system (B.1)-(B.2) with boundary conditions (B.4) and (B.6). ## References * [1] A. Bensoussan, J. L. Lions, and G. C. 
Papanicolaou, Boundary layers and homogenization of transport processes, Publications of the Research Institute for Mathematical Sciences, 15 (1979), pp. 53–157. * [2] J. Clouët and R. Sentis, Milne problem for non-grey radiative transfer, Kinetic & Related Models, 2 (2009), p. 345. * [3] L. C. Evans, Partial differential equations and Monge-Kantorovich mass transfer, Current developments in mathematics, 1997 (1997), pp. 65–126. * [4] M. Ghattassi, X. Huo, and N. Masmoudi, On the diffusive limits of radiative heat transfer system I: Well-prepared initial and boundary conditions, SIAM Journal on Mathematical Analysis, 54 (2022), pp. 5335–5387. * [5] M. Ghattassi, X. Huo, and N. Masmoudi, Diffusive limits of the steady state radiative heat transfer system: Boundary layers, accepted for publication in Journal de Mathématiques Pures et Appliquées, (2023). * [6] M. Ghattassi, X. Huo, and N. Masmoudi, Stability of the nonlinear Milne problem for radiative heat transfer system, Archive for Rational Mechanics and Analysis. arXiv preprint arXiv:2207.10769, (2023). * [7] G.-M. Gie, M. Hamouda, and R. Temam, Boundary layers in smooth curvilinear domains: parabolic problems, Discrete & Continuous Dynamical Systems, 26 (2010), p. 1213. * [8] G.-M. Gie, C.-Y. Jung, and R. Temam, Recent progresses in boundary layer theory, Discrete & Continuous Dynamical Systems, 36 (2016), p. 2521. * [9] S. Ukai, Solutions of the Boltzmann equation, in Studies in Mathematics and its Applications, vol. 18, Elsevier, 1986, pp. 37–96. * [10] L. Wu and Y. Guo, Geometric correction for diffusive expansion of steady neutron transport equation, Communications in Mathematical Physics, 336 (2015), pp. 1473–1553.
# Non-Standard Interactions of Supernova Neutrinos and Mass Ordering Ambiguity at DUNE Sudip Jana<EMAIL_ADDRESS>Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany Yago Porto <EMAIL_ADDRESS>Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-170, Santo André, SP, Brazil ###### Abstract We show that non-standard neutrino interactions (NSI) can notably modify the pattern of resonant flavor conversion of neutrinos within supernovae and significantly impact the neutronization burst signal in forthcoming experiments such as the Deep Underground Neutrino Experiment (DUNE). The presence of NSI can invert the energy levels of neutrino matter eigenstates and even induce a new resonance in the inner parts close to the proto-neutron star. We demonstrate how DUNE can use these new configurations of energy levels to have sensitivity to NSIs down to $\mathcal{O}(0.1)$. We also elucidate how the effect may result in a puzzling confusion of normal and inverted mass orderings by highlighting the emergence or vanishing of the neutronization peak, which distinguishes between the two mass orderings. Potential implications are analyzed thoroughly. _Introduction_.– In recent decades, extensive efforts and data from solar, atmospheric, reactor, and accelerator neutrino experiments have provided robust evidence for neutrino oscillations, indicating the presence of neutrino masses and mixing. However, the origin of these phenomena remains unestablished, leaving room for potential new physics, particularly in the form of non-standard neutrino interactions (NSIs). First introduced by Wolfenstein in 1978 Wolfenstein:1977ue , NSIs have been the subject of intense scrutiny since then, providing an avenue for exploring new aspects of neutrino physics. 
These interactions involve higher-dimensional operators with neutrinos and matter, as represented by the equations: $\displaystyle\mathcal{L}_{\mathrm{NC}}=-2\sqrt{2}G_{F}\sum_{f,P,\alpha,\beta}\varepsilon_{\alpha\beta}^{f,P}\left(\bar{\nu}_{\alpha}\gamma^{\mu}P_{L}\nu_{\beta}\right)\left(\bar{f}\gamma_{\mu}Pf\right)$ (1) $\displaystyle\mathcal{L}_{\mathrm{CC}}=-2\sqrt{2}G_{F}\sum_{f,P,\alpha,\beta}\varepsilon_{\alpha\beta}^{f,P}\left(\bar{\nu}_{\alpha}\gamma^{\mu}P_{L}\ell_{\beta}\right)\left(\bar{f}\gamma_{\mu}Pf^{\prime}\right)$ where $\varepsilon$ represents the strength of NSI relative to the weak scale, $P\in\{P_{L},P_{R}\}$ indicates chirality projection operators, and the sum runs over matter fermions $f,f^{\prime}\in\{e,u,d\}$. Such NSIs modify the matter potential experienced by neutrinos, adding substantial intricacy to the determination of neutrino oscillation parameters. For instance, the existence of NSI introduces ambiguity in the determination of $\theta_{12}$ from the solar neutrino data Miranda:2004nb . Furthermore, NSI effects have been shown to alleviate the tension between solar and KamLAND data, as they flatten the solar neutrino spectrum at high energies ($>3$ MeV) and generate a larger day-night asymmetry Maltoni:2015kca . In this study, we advocate for the utilization of supernova neutrinos to investigate the influence of these NSIs on the precise determination of neutrino oscillation parameters. Figure 1: A simplified picture of flavor conversions of supernova neutrinos in the presence of NSI. During a supernova (SN) explosion, as the progenitor’s core collapses to form a neutron star, about $10^{53}$ erg of gravitational binding energy is released in the form of neutrinos Colgate:1966ax ; Arnett:1966 ; Bethe:1985sox ; Wilson:1985 ; Janka:2017vlw . 
Originating in the SN core with energies in the tens of MeV, these neutrinos travel through the stellar mantle and envelope, undergoing modifications in mixing and oscillations through neutrino-matter interactions Wolfenstein:1977ue ; Mikheyev:1985zog ; Mikheev:1986wj ; Mikheev:1986if (matter effects can also influence the evolution of high-energy astrophysical neutrinos with energies greater than 100 TeV Dev:2023znd ). The consequences of neutrino propagation in the SN-dense environment offer significant potential for probing new physics, and these effects are observable by flavor-sensitive Earth-based detectors Valle:1987gv ; Nunokawa:1997ct ; Nunokawa:1998vh ; Esteban-Pretel:2007zkv ; deGouvea:2019goq ; Tang:2020pkp ; Jana:2022tsa ; Jana:2023ufy ; dosSantos:2023skk ; Bendahman:2023hjj . A galactic SN, a rare event in the Milky Way occurring once or twice per century, was last observed in 1987 from the Large Magellanic Cloud, 51 Kpc away Tammann:1994ev ; Kamiokande-II:1987idp ; Bionta:1987qt ; Alekseev:1988gp . Neutrino detectors recorded about 25 $\bar{\nu}_{e}$ events during that event, significantly advancing our understanding of core-collapse processes and neutrino emission Olsen:2021uvt ; Li:2023ulf ; DedinNeto:2023hhp ; Fiorillo:2023frv . Future detectors, especially the Deep Underground Neutrino Experiment (DUNE), are anticipated to capture hundreds of thousands of events in various channels during a galactic supernova IceCube:2011cwc ; DarkSide20k:2020ymr ; Hyper-Kamiokande:2021frf ; KM3NeT:2021moe ; JUNO:2023dnp ; DUNE:2020zfm , with DUNE’s unique capability to detect a clean $\nu_{e}$ signal in the critical first 40 ms after core bounce, known as the neutronization burst phase DUNE:2020zfm ; Cuesta:2023nnt ; Zhu:2018rwc . 
In this article, we investigate the features of neutrino emission in the neutronization burst phase, elucidate the anticipated resonant flavor conversion within the stellar envelope triggered by NSI [cf. Fig. 1], and discuss the crucial role of DUNE in exploring new physics impacting supernova neutrinos. _Dynamics of Resonant Flavor Conversions_.– Figure 2: Configuration of energy levels for neutrinos and antineutrinos for NO (left) and IO (right). For each panel, neutrino lines are shown for positive $V_{e}$, while antineutrinos are plotted with negative $V_{e}$. The location of the MSW resonances is indicated by the letters $L$ and $H$. Solid lines represent the effective matter eigenstates, while the dashed lines follow the track of flavor states. Matter eigenstates mix with different flavors at different locations; this is shown by the flavor tags in red, blue, and green colors. Understanding the flavor composition of SN neutrinos reaching the Earth during the neutronization burst phase requires monitoring how the initial fluxes generated in the core evolve while traveling outward within the star (see Appendix for details). In the dense environment where neutrinos propagate inside the star, vacuum oscillations are suppressed. However, the density variation on the way out of the SN instigates a flavor evolution dictated by a time- (or space-)dependent Hamiltonian that can trigger Mikheyev-Smirnov-Wolfenstein (MSW) resonant flavor conversion in specific layers of the matter profile Wolfenstein:1977ue ; Mikheyev:1985zog ; Mikheev:1986wj ; Mikheev:1986if ; Mikheev:1987jp . 
Assuming that neutrinos propagate radially outwards, the evolution equation for the flavor state $\nu=(\nu_{e},\nu_{\mu},\nu_{\tau})^{T}$ is $i\frac{d}{dr}\nu=\mathcal{H}\nu.$ (2) $\mathcal{H}$ is the Hamiltonian in the flavor basis given by $\mathcal{H}=\frac{1}{2E}U\left(\begin{array}[]{ccc}0&0&0\\\ 0&\Delta m_{21}^{2}&0\\\ 0&0&\Delta m_{31}^{2}\end{array}\right)U^{\dagger}+V_{e}\left(\begin{array}[]{ccc}1&0&0\\\ 0&0&0\\\ 0&0&0\end{array}\right),$ (3) where $E$ and $U$ denote the neutrino energy and the Pontecorvo-Maki-Nakagawa- Sakata (PMNS) matrix. The matter potential due to charged current interactions can be expressed as $V_{e}=\sqrt{2}G_{F}\frac{\rho}{m_{N}}Y_{e},$ (4) where $Y_{e}$ is the electron number fraction and $\rho$ is the matter density (see Fig. 7 in the Appendix for the profile of $\rho$ and $Y_{e}$). The antineutrino Hamiltonian is similar to $\mathcal{H}$, with the only difference being that the matter potential inverts sign $\bar{V}_{e}=-V_{e}$. In dense matter, $\rho>10^{5}$ $\text{g/cm}^{3}$, $V_{e}$ is much bigger than all other matrix elements in Eq. 3, suppressing mixing and making $\nu_{e}$ the heaviest effective eigenstate in matter, $\nu_{e}\approx\nu_{3}^{m}$ for normal ordering (NO) and $\nu_{e}\approx\nu_{2}^{m}$ for inverted ordering (IO), the superscript $m$ denotes an effective eigenstate in matter. In a vacuum, however, $\nu_{e}$ is mostly associated with $\nu_{1}$. Therefore, as matter density decreases, so does $V_{e}$, and $\nu_{e}$ will transit from $\nu_{3}^{m}$($\nu_{2}^{m})$ to $\nu_{1}^{m}$. In doing so, it has to cross energy levels twice for NO ($\nu_{3}^{m}\rightarrow\nu_{2}^{m}\rightarrow\nu_{1}^{m}$) and once for IO ($\nu_{2}^{m}\rightarrow\nu_{1}^{m}$). The level crossing diagram is shown in Fig. 2. The red dashed line follows the evolution of the electron flavor, $\nu_{e}$ on the right of $V_{e}=0$ and $\bar{\nu}_{e}$ on the left. 
Note that $\bar{\nu}_{e}\approx\bar{\nu}_{1}^{m}$ in the inner regions, and there is no level crossing for antineutrinos in NO, while there is one level crossing in IO: $\bar{\nu}_{3}^{m}\rightarrow\bar{\nu}_{1}^{m}$. While $\nu_{e}$ and $\bar{\nu}_{e}$ are effective eigenstates in the SN environment, $\nu_{\mu}$, $\nu_{\tau}$, and their antiparticles are not effective eigenstates due to near maximal vacuum mixing. In this situation, it is convenient to diagonalize the $\nu_{\mu}-\nu_{\tau}$ subspace and work with the effective eigenstates $\nu^{\prime}_{\mu}$, $\nu^{\prime}_{\tau}$, and their antiparticles. The primed states have constant energy levels as a function of $V_{e}$, as shown in Fig. 2. The crossing $\nu_{e}\leftrightarrow\nu^{\prime}_{\tau}$ is called $H-$resonance and occurs when $V_{e}(\rho_{H})\approx\frac{\Delta m_{31}^{2}}{2E}\cos\theta_{13},$ (5) which corresponds to higher densities compared to the $L-$resonance, $\nu_{e}\leftrightarrow\nu^{\prime}_{\mu}$, that happens when $V_{e}(\rho_{L})\approx\frac{\Delta m_{21}^{2}}{2E}\cos\theta_{12}.$ (6) Note that Eq. 5 is satisfied for neutrinos in NO and for antineutrinos in IO ($\Delta m^{2}_{31}=-|\Delta m^{2}_{31}|$ and $V_{e}$ is negative) while Eq. 6 is satisfied only for neutrinos in both orderings. Using the profile in Fig. 7, and oscillation parameters from Esteban:2018ppq , the $H-$resonance happens at $r\sim 10^{4}$ Km ($\rho_{H}$ in the range $10^{3}-10^{4}$ $\text{g/cm}^{3}$) and the $L-$resonance at $r\sim 10^{5}$ Km ($\rho_{L}\sim 1-10$ $\text{g/cm}^{3}$). Efficient resonant conversion only happens if neutrinos cross resonance layers adiabatically Dighe:1999bi . We will assume perfect adiabaticity of the $H-$ and $L-$resonances so that transitions between distinct energy eigenstates are negligible. Moreover, the produced flux during the neutronization burst phase is mostly $\nu_{e}\approx\nu_{3}^{m}(\nu_{2}^{m})$ for NO (IO). 
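As a quick numerical cross-check of the resonance conditions, Eqs. 5 and 6 can be inverted for the resonance density. The sketch below is ours, not the authors' code; the helper name `resonance_density`, the rounded oscillation parameters, and the unit conversions are illustrative assumptions (natural units, $Y_e=0.5$):

```python
import math

# Constants in natural units (hbar = c = 1), energies in eV.
G_F = 1.166e-23       # Fermi constant [eV^-2]
HBARC = 1.9733e-5     # eV*cm, converts cm^-1 <-> eV
M_N_G = 1.6726e-24    # nucleon mass [g]

def resonance_density(dm2_ev2, cos_theta, energy_ev, y_e=0.5):
    """Matter density [g/cm^3] where V_e = dm2 * cos(theta) / (2E).

    Inverts Eqs. (5)-(6) using V_e = sqrt(2) G_F (rho / m_N) Y_e.
    """
    v_e = dm2_ev2 * cos_theta / (2.0 * energy_ev)   # required potential [eV]
    n_e = v_e / (math.sqrt(2.0) * G_F)              # electron density [eV^3]
    n_e_cm3 = n_e / HBARC**3                        # convert to cm^-3
    return n_e_cm3 * M_N_G / y_e

E = 10e6  # a 10 MeV supernova neutrino
rho_H = resonance_density(2.5e-3, math.cos(0.15), E)  # Delta m^2_31, theta_13
rho_L = resonance_density(7.4e-5, math.cos(0.59), E)  # Delta m^2_21, theta_12
print(f"rho_H ~ {rho_H:.0f} g/cm^3, rho_L ~ {rho_L:.0f} g/cm^3")
```

For these inputs the $H$-resonance density lands in the $10^{3}$-$10^{4}$ g/cm³ range quoted in the text, with the $L$-resonance at much lower density.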
Due to adiabaticity, the initial eigenstates are preserved and reach vacuum as $\nu_{3}(\nu_{2})$. Consequently, for NO only $|U_{e3}|^{2}\approx 0.02$ of the initial $\nu_{e}$ flux, $F_{e}^{i}$, survives and reaches Earth, and some amount of $\nu_{e}$, ($1-|U_{e3}|^{2})F_{x}^{i}\approx 0.98F_{x}^{i}$, comes from the conversion of non-electron flavors, $F_{\mu}^{i}=F_{\tau}^{i}=F_{x}^{i}$: $F_{e}^{NO}=|U_{e3}|^{2}F_{e}^{i}+(1-|U_{e3}|^{2})F_{x}^{i}\approx 0.02F_{e}^{i}+0.98F_{x}^{i}.$ (7) Similarly for IO, $F_{e}^{IO}=|U_{e2}|^{2}F_{e}^{i}+(1-|U_{e2}|^{2})F_{x}^{i}\approx 0.3F_{e}^{i}+0.7F_{x}^{i}.$ (8) Due to $F_{e}^{i}$ being approximately ten times $F_{x}^{i}$ at the neutronization peak, the $\nu_{e}$ flux is greater in the Inverted Ordering (IO) than in the Normal Ordering (NO) during the initial 20 milliseconds of neutrino emission. This disparity enhances the visibility of the peak in neutrino detectors for the IO scenario. In the following sections, we will analyze signals derived from flux equations (7) and (8) in DUNE and discuss the presence or absence of the peak as a discriminative criterion between the two mass orderings. Figure 3: Configuration of energy levels for neutrinos and antineutrinos for NO (left) and IO (right) with the inclusion of $\varepsilon_{\tau\tau}^{d}>0.33$. See text for details. _Effect of NSI on the Energy Levels_.– The introduction of NSIs has a pronounced impact on the configuration of energy levels and the flavor composition of SN neutrinos detected on Earth. To demonstrate this effect, an additional energy term involving NSI parameters must be incorporated into the Hamiltonian mentioned in Eq. 3. Our focus here is on NSI specifically influencing $\nu_{\tau}$, potentially originating from interactions with electrons, $u$-quarks, and $d$-quarks. Recent systematic investigations indicate the feasibility of obtaining substantial values for $\varepsilon_{\tau\tau}$ in various radiative neutrino mass models Babu:2019mfe . 
It is important to note that the approach to $\nu_{\mu}$ NSI is analogous and yields numerically equivalent results. The effective strength of the NSI parameter $\varepsilon^{\text{eff}}_{\tau\tau}$ can be expressed in relation to interactions involving electrons, $u$-quarks, and $d$-quarks as follows: $\varepsilon^{\text{eff}}_{\tau\tau}=\varepsilon_{\tau\tau}^{e}+(2\varepsilon_{\tau\tau}^{u}+\varepsilon_{\tau\tau}^{d})\frac{n_{p}}{n_{e}}+(\varepsilon_{\tau\tau}^{u}+2\varepsilon_{\tau\tau}^{d})\frac{n_{n}}{n_{e}}.$ (9) In various models of neutrino mass, new interactions emerge, giving rise to diverse NSI scenarios where $\varepsilon_{\tau\tau}^{u}$, $\varepsilon_{\tau\tau}^{d}$, or $\varepsilon_{\tau\tau}^{e}$ may be exclusively generated, or in certain instances, combinations of two NSIs or all three NSIs. For instance, a scalar leptoquark with Standard Model gauge quantum numbers ($3,2,1/6$), denoted as $\tilde{R_{2}}$, can induce only $\varepsilon_{\tau\tau}^{d}$, while the leptoquark ${R_{2}}$ with charges ($3,2,7/6$) exclusively leads to $\varepsilon_{\tau\tau}^{u}$. On the other hand, the leptoquark ${S_{3}}$ with charges ($\bar{3},3,1/3$) can give rise to both $\varepsilon_{\tau\tau}^{d}$ and $\varepsilon_{\tau\tau}^{u}$. Further elaboration on these scenarios can be found in the detailed discussion provided in reference Babu:2019mfe . To optimize the signal, we exclusively consider quark NSIs here. Although leptonic NSIs may have some impact, they are expected to be less significant compared to quark NSIs, and therefore, they do not affect the phenomenology we are concentrating on. 
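Under charge neutrality ($n_p=n_e$) and with $n_n/n_e=(1-Y_e)/Y_e$, Eq. 9 collapses to simple functions of $Y_e$. A minimal sketch (the helper name `eps_eff` is our own) verifying that the general formula reproduces the single-coupling limits of Eqs. 10 and 11:

```python
def eps_eff(eps_e, eps_u, eps_d, y_e):
    """Eq. (9) with n_p/n_e = 1 (charge neutrality), n_n/n_e = (1-Y_e)/Y_e."""
    r_n = (1.0 - y_e) / y_e
    return eps_e + (2.0 * eps_u + eps_d) + (eps_u + 2.0 * eps_d) * r_n

y_e = 0.5
# d-quark only: reduces to eps_d (2 - Y_e)/Y_e  [Eq. (10)]
assert abs(eps_eff(0, 0, 0.2, y_e) - 0.2 * (2 - y_e) / y_e) < 1e-12
# u-quark only: reduces to eps_u (1 + Y_e)/Y_e  [Eq. (11)]
assert abs(eps_eff(0, 0.2, 0, y_e) - 0.2 * (1 + y_e) / y_e) < 1e-12
# at Y_e = 0.5 the nu_tau potential overtakes V_e once eps_{u,d} > 1/3
assert eps_eff(0, 0, 0.34, 0.5) > 1.0
```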
If only $\varepsilon_{\tau\tau}^{d}$ exhibits a non-zero value, and under the condition of charge neutrality ($n_{p}=n_{e}$), we obtain: $\varepsilon^{\text{eff}}_{\tau\tau}=\varepsilon_{\tau\tau}^{d}\left(\frac{2-Y_{e}}{Y_{e}}\right).$ (10) Considering non-zero values of $\varepsilon_{\tau\tau}^{u}$, we obtain: $\varepsilon^{\text{eff}}_{\tau\tau}=\varepsilon_{\tau\tau}^{u}\left(\frac{1+Y_{e}}{Y_{e}}\right).$ (11) Now, we focus on the region of the SN matter profile where $H-$ and $L-$ resonance occur, specifically for $r>10^{4}$ Km. In this particular zone, $Y_{e}=0.5$ (see Fig. 7 in the Appendix). The existence of either $\varepsilon_{\tau\tau}^{d}$ or $\varepsilon_{\tau\tau}^{u}>0.33$ indicates that the effective matter potential for $\nu_{\tau}$ surpasses that of $\nu_{e}$, thereby modifying the dynamics of flavor conversion. The Hamiltonian in the presence of NSI is expressed as follows: $\mathcal{H}^{NSI}=\frac{1}{2E}U\left(\begin{array}[]{ccc}0&0&0\\\ 0&\Delta m_{21}^{2}&0\\\ 0&0&\Delta m_{31}^{2}\end{array}\right)U^{\dagger}+V_{e}\left(\begin{array}[]{ccc}1&0&0\\\ 0&0&0\\\ 0&0&3\varepsilon^{u,d}_{\tau\tau}\end{array}\right).$ (12) For $\varepsilon^{u,d}_{\tau\tau}>0.33$, $\nu_{\tau}$ is the heaviest eigenstate in matter, $\nu_{\tau}\approx\nu_{3}^{m}(\nu_{2}^{m})$ for NO (IO). Hence, the main difference from the standard case is that $\nu_{e}$ starts as the second heaviest eigenstate, $\nu_{e}\approx\nu_{2}^{m}(\nu_{1}^{m})$ for NO (IO), see Fig. 3. For NO, $\nu_{e}$ crosses level once through a $L-$resonance, $\nu_{2}^{m}\rightarrow\nu_{1}^{m}$, at densities given by Eq. 6. While for IO, $\nu_{e}$ is already produced as $\nu_{1}^{m}$ and reaches vacuum mostly as $\nu_{1}$ without crossing levels. For antineutrinos, $\bar{\nu}_{e}\approx\bar{\nu}_{2}^{m}(\bar{\nu}_{1}^{m})$, and crosses levels only for NO, via $H-$resonance. 
In this case, however, the potential of $\nu_{\tau}$, $V_{\tau}=\varepsilon_{\tau\tau}V_{e}$, is higher than $V_{e}$, so the difference $V_{e}-V_{\tau}$ is negative. The resonance condition is found by modifying Eq. 5 with $V_{e}\rightarrow(1-\varepsilon_{\tau\tau})V_{e}$. Neutrinos fail to satisfy this new condition in NO. However, antineutrinos in NO can fulfill this condition since their potentials exhibit an inverted sign, denoted as $\bar{V_{e}}=-V_{e}$. Furthermore, the eigenstates are not represented in the primed basis, as the potential for $\nu_{\tau}$ significantly surpasses the mixing terms, distinctly separating $\nu_{\mu}$ and $\nu_{\tau}$. Assuming perfect adiabaticity of $H-$ and $L-$ resonances, we find the final flux for both orderings: $F_{e}^{NO}(\varepsilon_{\tau\tau}^{u,d}>0.33)=|U_{e2}|^{2}F_{e}^{i}+(1-|U_{e2}|^{2})F_{x}^{i}\approx 0.3F_{e}^{i}+0.7F_{x}^{i},$ (13) and, $F_{e}^{IO}(\varepsilon_{\tau\tau}^{u,d}>0.33)=|U_{e1}|^{2}F_{e}^{i}+(1-|U_{e1}|^{2})F_{x}^{i}\approx 0.7F_{e}^{i}+0.3F_{x}^{i}.$ (14) It is noteworthy that Eq.13 is equivalent to Eq.8 since, in both instances, $\nu_{e}$ is generated as $\nu_{2}^{m}$. Consequently, the neutronization burst phase in the NO scenario with NSI and the standard IO scenario without NSI are indistinguishable. In the next sections, we analyze the signals from Eq. 13 and Eq. 14 at DUNE, emphasizing potential confusion in discerning between the two mass orderings in the presence of NSI. _New Resonances triggered by NSI_.– Figure 4: Configuration of energy levels for neutrinos and antineutrinos for NO (left) and IO (right) with the inclusion of $0.1<\varepsilon^{d}_{\tau\tau}<0.33$ and $\varepsilon^{d}_{e\tau}>10^{-5}$.The MSW resonance locations are marked by the letters $L$ and $H$, while the NSI-triggered resonance location is marked by the letter $C$. 
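The adiabatic flux mappings discussed above (Eqs. 7, 8, 13, 14) and the claimed degeneracy between NO with NSI and the standard IO case can be checked in a few lines. This is our own illustrative sketch; the PMNS moduli are rounded global-fit values, not the exact numbers used in the paper's simulations:

```python
# PMNS moduli-squared (illustrative, rounded global-fit values)
Ue1_sq, Ue2_sq, Ue3_sq = 0.68, 0.30, 0.02

def earth_flux_nue(p, F_e_i, F_x_i):
    """nu_e flux at Earth when nu_e leaves the star as the eigenstate
    with electron content p = |U_ei|^2 (adiabatic propagation)."""
    return p * F_e_i + (1.0 - p) * F_x_i

F_e_i, F_x_i = 10.0, 1.0                            # F_e^i ~ 10 F_x^i at the peak
F_NO = earth_flux_nue(Ue3_sq, F_e_i, F_x_i)         # Eq. (7)
F_IO = earth_flux_nue(Ue2_sq, F_e_i, F_x_i)         # Eq. (8)
F_NO_nsi = earth_flux_nue(Ue2_sq, F_e_i, F_x_i)     # Eq. (13)
F_IO_nsi = earth_flux_nue(Ue1_sq, F_e_i, F_x_i)     # Eq. (14)

assert F_NO_nsi == F_IO        # NO + NSI exactly mimics standard IO
assert F_NO < F_IO < F_IO_nsi  # neutronization peak grows: NO < IO < IO + NSI
```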
We will now explore scenarios in which NSI can induce an additional resonance, designated as the $C$-resonance, alongside the existing $L-$ and $H-$ resonances. While examining NSI within the range $0.1<\varepsilon_{\tau\tau}^{u,d}<0.33$, we find that it influences the flavor evolution of neutrinos during the neutronization burst phase. Despite such values of $\varepsilon_{\tau\tau}^{u,d}$ not modifying the energy level pattern in the region $r>10^{3}$ Km where $L-$ and $H-$resonances occur ($Y_{e}\approx 0.5$), it does alter the energy levels inside the iron core ($r<10^{3}$ Km) where neutrinos are produced ($0.2<Y_{e}<0.5$). In this configuration, the $C-$ resonance emerges inside the iron core, as illustrated in Fig. 4. Figure 5: Expected number of $\nu_{e}$ events per bin of $5$ ms at DUNE in the time interval from $-5$ to $40$ ms corresponding to the neutronization burst phase for NO (left) and IO (right). $0$ ms represents the time of core bounce. Blue lines are computed from fluxes in Eqs. 7 and 8, assuming the distance between the SN and Earth to be $10$ Kpc. Green lines are computed from fluxes modified by the presence of $\varepsilon_{\tau\tau}^{d}>0.33$ in Eqs. 13 and 14. The similarity between the green line on the left panel and the blue line on the right panel displays the degeneracy between the NO$+$NSI and standard IO cases. Red lines are computed from fluxes modified by the presence of $0.1<\varepsilon_{\tau\tau}<0.33$ and $10^{-6}<\varepsilon_{e\tau}<10^{-5}$. For $\varepsilon_{e\tau}$ in such interval, counting rates are intermediate between the NSI case with $\varepsilon_{\tau\tau}>0.33$ and the standard scenario. For the totally adiabatic $C-$resonance ($\varepsilon_{e\tau}>10^{-5}$), results are degenerate with the ones for $\varepsilon_{\tau\tau}>0.33$. In order to examine the emergence of the $C-$resonance, we rewrite Eq. 12 by incorporating the expression for $\varepsilon_{\tau\tau}$ from Eq. 10 for $d$-quark NSI. 
Here, we concentrate exclusively on the matter potential term, which dominates over the vacuum term in regions of high core densities: $\mathcal{H}^{NSI}\approx V_{e}\left(\begin{array}[]{ccc}1&0&0\\\ 0&0&0\\\ 0&0&\varepsilon_{\tau\tau}^{d}\left(\frac{2-Y_{e}}{Y_{e}}\right)\end{array}\right).$ (15) Note that our focus here is on $d$-quark NSI. Nevertheless, the qualitative outcomes remain similar for $u$-quark NSI due to the resemblance between Eq. 10 and Eq. 11. In the subsequent section, we analyze the consequential effects by concurrently considering NSIs involving both $d$-quark and $u$-quark. Using the $Y_{e}$ values from Fig. 7 (see Appendix for details), a level crossing occurs within the range of $50$ to $1000$ Km for $0.1<\varepsilon^{d}_{\tau\tau}<0.33$ as $Y_{e}$ transitions from $0.2$ to $0.5$. Prior to the crossing point, $\varepsilon_{\tau\tau}^{d}(2-Y_{e})/Y_{e}>1$, rendering $\nu_{\tau}$ as the heaviest state, and $\nu_{e}$ as the second heaviest: $\nu_{e}\approx\nu_{2}^{m}(\nu_{1}^{m})$ for NO (IO). Subsequent to the crossing point, the scenario inverts with $\varepsilon_{\tau\tau}^{d}(2-Y_{e})/Y_{e}<1$, causing the energy levels of $\nu_{e}$ and $\nu_{\tau}$ to intersect, as depicted in Fig. 4. This crossing can result in resonant conversion if we introduce the off-diagonal NSI (significant flavor off-diagonal NSI can also resolve discrepancies in the determination of the standard CP-phase $\delta_{CP}$ between the NO$\nu$A and T2K long-baseline accelerator experiments Chatterjee:2020kkm ; Denton:2020uda )
$\varepsilon_{e\tau}$ and couple $\nu_{e}$ and $\nu_{\tau}$ states: $\mathcal{H}^{NSI}\approx V_{e}\left(\begin{array}[]{ccc}1&0&\varepsilon_{e\tau}^{d}\left(\frac{2-Y_{e}}{Y_{e}}\right)\\\ 0&0&0\\\ \varepsilon_{e\tau}^{d}\left(\frac{2-Y_{e}}{Y_{e}}\right)&0&\varepsilon_{\tau\tau}^{d}\left(\frac{2-Y_{e}}{Y_{e}}\right)\end{array}\right).$ (16) In such a scenario, the resonance condition can be expressed as $\varepsilon_{\tau\tau}^{d}\left(\frac{2-Y_{e}}{Y_{e}}\right)=1,$ (17) and can be satisfied by both neutrinos and antineutrinos simultaneously. Furthermore, Eq. 17 is independent of matter density, energy, and mass orderings. The adiabaticity condition is now formulated and can be expressed as $\gamma_{C}=\left|\frac{4\left(\mathcal{H}_{e\tau}^{NSI}\right)^{2}}{\dot{\mathcal{H}}_{\tau\tau}^{NSI}-\dot{\mathcal{H}}_{ee}^{NSI}}\right|\approx\left|\frac{16V_{e}(\varepsilon_{e\tau}^{d})^{2}}{Y_{e}\dot{Y_{e}}(1+\varepsilon_{\tau\tau}^{d})^{3}}\right|>1.$ (18) Eq. 18 is satisfied for very small values of off-diagonal NSI: $\varepsilon_{e\tau}>10^{-5}$. The resonance is partially adiabatic for $\varepsilon_{e\tau}$ values within the range of $10^{-6}$ to $10^{-5}$. In the limit of total adiabaticity, the produced $\nu_{e}$ reaches vacuum as $\nu_{2}(\nu_{1})$ for NO (IO). The scenario is analogous to the one stated in the last section by Eq. 13 and Eq. 14. When adiabaticity is totally violated, the results are identical to the standard case in Eq. 7 and Eq. 8. Partial adiabaticity provides fluxes, which are intermediary between the standard scenario and the $\varepsilon^{d}_{\tau\tau}>0.33$ scenario. In the next section, we analyze these signals in DUNE. Figure 6: Projected sensitivities for NSI $\varepsilon_{\tau\tau}$ at DUNE. For the totally adiabatic $C-$resonance, we consider $\varepsilon_{e\tau}>10^{-5}$. Global-fit bound on NSI, as shown by the hatched region, is adopted from Ref. Coloma:2023ixt . 
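Since Eq. 17 is independent of density and energy, the location of the $C$-resonance is fixed by $Y_e$ alone. A small sketch (our own helper name) inverting Eq. 17 and checking that the crossing falls inside the iron core for the quoted NSI range:

```python
def y_e_at_c_resonance(eps_d):
    """Invert Eq. (17): eps_d (2 - Y_e)/Y_e = 1  =>  Y_e = 2 eps_d / (1 + eps_d)."""
    return 2.0 * eps_d / (1.0 + eps_d)

# For 0.1 < eps_d < 0.33 the crossing sits where Y_e runs ~0.2-0.5,
# i.e. inside the iron core of the profile described in the text.
for eps in (0.11, 0.20, 0.33):
    y = y_e_at_c_resonance(eps)
    assert 0.19 < y <= 0.50
```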
We want to stress here that the search for $\varepsilon_{\tau\tau}^{u,d}$ during the neutronization burst phase is limited to values of $\mathcal{O}(0.1)$. Conversely, in later phases of SN neutrino emission, when $Y_{e}$ can attain values below $\mathcal{O}(0.1)$, smaller values of $\varepsilon_{\tau\tau}^{u,d}$ could potentially alter observables. Note that for $Y_{e}\ll 1$, Eq. 10 and Eq. 11 transform as: $\varepsilon_{\tau\tau}^{eff}\approx 2\frac{\varepsilon_{\tau\tau}^{d}}{Y_{e}}\hskip 14.22636pt\text{and}\hskip 14.22636pt\varepsilon_{\tau\tau}^{eff}\approx\frac{\varepsilon_{\tau\tau}^{u}}{Y_{e}}.$ (19) Consequently, $\varepsilon_{\tau\tau}^{u,d}$ can be as small as $Y_{e}\sim\mathcal{O}(0.01)$, while maintaining the $\varepsilon^{eff}_{\tau\tau}$ value of order one necessary to satisfy the resonance criterion in Eq. 17. Therefore, sensitivity to $\varepsilon^{u,d}_{\tau\tau}$ can be improved to $\mathcal{O}(0.01)$ by further investigating flavor conversion that occurs during other phases of neutrino emission. _Signal analysis at DUNE_.– Here, we investigate the expected SN neutrino signal spectrum at DUNE under both the standard scenario and the consideration of NSI effects. The graphical representation of the analysis, depicted in Fig. 5, illustrates the projected number of $\nu_{e}$ events per 5 ms interval within the time range of -5 to 40 ms (with 0 ms denoting the moment of core bounce). The scenarios for Normal Ordering (NO) and Inverted Ordering (IO) are presented on the left and right sides, respectively, assuming a distance of 10 Kpc between the supernova (SN) and Earth. For detailed information regarding the computation of DUNE event spectra, see the Appendix. The initial neutrino fluxes ($F_{e}^{i}$ and $F_{x}^{i}$) are taken from a simulation of a 15 solar mass progenitor garching , as outlined in deGouvea:2019goq ; Jana:2022tsa . The standard case, represented by the blue lines in Fig. 5 [cf. Eq. 7 and Eq. 
8], highlights a notable feature: the visibility of the neutronization peak, which distinguishes between the two mass orderings. The green lines are computed from Eq. 13 and Eq. 14, which include NSI ($\varepsilon_{\tau\tau}^{d}>0.33$). Crucially, as evident from Eq. 8 and Eq. 13, along with Fig. 5, the introduction of NSI leads to a scenario where perfect confusion arises between the two mass orderings (the green lines on the left panel and the blue lines on the right panel are identical). DUNE can distinguish between the standard case and signals characterized by $\varepsilon^{d}_{\tau\tau}>0.33$, as depicted in each panel of Fig. 5. We quantify this using the $\chi^{2}$ estimator, $\chi^{2}=\min_{\xi}\left(\sum_{i=1}^{n}2\left[(1+\xi)F_{i}-D_{i}+D_{i}\ln\left(\frac{D_{i}}{(1+\xi)F_{i}}\right)\right]\right).$ (20) $F_{i}$ and $D_{i}$ are the number of $\nu_{e}$ events in the $i$-th time bin for $\varepsilon_{\tau\tau}^{d}>0.33$ and $\varepsilon_{\tau\tau}^{d}=0$, respectively. The parameter $\xi$ is allowed to vary in the range $-1<\xi\leq 1000$ with no penalty, and we select the minimum to make our derived sensitivity as independent as possible of the overall normalization. We find for NO, $\chi^{2}(NO)\approx 27,$ (21) and, for IO, $\chi^{2}(IO)\approx 11.$ (22) Notably, $\chi^{2}(NO)$ surpasses $\chi^{2}(IO)$ by a considerable margin. This discrepancy is primarily attributed to the more pronounced differentiation between the green and blue lines in Fig. 5 for NO, arising from the presence (green) or absence (blue) of the peak around $\sim 5$ ms. Conversely, for IO, both lines incorporate the peak, and the blue line could mimic the green one even without NSI, contingent on different spectral parameters of the supernova. We want to stress that the green lines in Fig. 5 also represent the case of NSI in the range $0.1<\varepsilon_{\tau\tau}^{d}<0.33$ with adiabatic $C$-resonance ($\varepsilon_{e\tau}^{d}>10^{-5}$).
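The minimization over $\xi$ in Eq. 20 can be carried out in closed form: setting the derivative with respect to $\xi$ to zero gives $(1+\xi)=\sum_{i}D_{i}/\sum_{i}F_{i}$, which is then clamped to the allowed range. A minimal sketch of the estimator (the event lists below are placeholders, not the DUNE spectra of Fig. 5):

```python
import math

def chi2_poisson(F, D):
    """Poissonian chi^2 of Eq. 20 with a free normalization xi.
    The unconstrained minimizer satisfies (1 + xi) = sum(D) / sum(F);
    we clamp it to the range -1 < xi <= 1000 used in the text."""
    a = sum(D) / sum(F)              # best-fit value of (1 + xi)
    a = min(max(a, 1e-12), 1001.0)   # enforce -1 < xi <= 1000
    chi2 = 0.0
    for f, d in zip(F, D):
        term = a * f - d
        if d > 0:
            term += d * math.log(d / (a * f))
        chi2 += 2.0 * term
    return chi2

# Identical spectra give chi^2 = 0, and a pure rescaling is absorbed by xi:
print(chi2_poisson([10, 20, 30], [10, 20, 30]))  # 0.0
print(chi2_poisson([10, 20, 30], [20, 40, 60]))  # 0.0 (xi = 1)
# A genuine shape difference is penalized even after refitting xi:
print(chi2_poisson([10, 20, 30], [30, 20, 10]))  # > 0
```

This makes explicit why the estimator is insensitive to the overall normalization: only shape differences between the spectra, such as the presence or absence of the neutronization peak, contribute to $\chi^{2}$.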
Partial adiabaticity of the $C$-resonance ($10^{-6}<\varepsilon_{e\tau}^{d}<10^{-5}$) provides counting rates between the standard scenario and the $\varepsilon^{d}_{\tau\tau}>0.33$ scenario; see the red lines in Fig. 5. We exclusively analyze scenarios with $d$-quark NSI. However, for completeness, Fig. 6 displays projected sensitivities considering the presence of both $d$-quark and $u$-quark NSIs. NSI affecting the flavor evolution of SN neutrinos can also be studied in the electron antineutrino ($\bar{\nu}_{e}$) channel using other future experiments such as Hyper-Kamiokande Hyper-Kamiokande:2021frf and JUNO JUNO:2023dnp . Although these experiments will certainly provide more statistics than DUNE, their observations will lack some specific features of the time signal that are present in the $\nu_{e}$ channel (such as the peak at $5$ ms) that can be crucial in diagnosing the presence of NSI. The quark NSI we discussed, which is capable of inducing resonant flavor conversion of supernova neutrinos, may also be subject to complementary tests. For instance, it could produce a Glashow-like resonance feature detectable by neutrino telescopes Babu:2022fje ; Babu:2019vff ; Huang:2021mki or be examined in future collider experiments, potentially resolving existing degeneracies Babu:2020nna . Conclusions.– We have shown that NSIs have a profound impact on supernova neutrino flavor conversion, significantly altering the neutronization burst signals. The presence of NSI can invert the energy levels of neutrino matter eigenstates and even induce a new resonance in the inner parts close to the proto-neutron star. We show how forthcoming experiments, such as DUNE, can exploit these altered energy level configurations to achieve sensitivity to NSIs of $\mathcal{O}(0.1)$. Furthermore, we elaborate on how this phenomenon may lead to a puzzling ambiguity between the normal and inverted mass orderings, and we thoroughly analyze its potential implications.
Acknowledgments.– We are grateful to André de Gouvêa for an illuminating discussion and email correspondence. We also thank Pedro A. N. Machado and Manibrata Sen for useful discussions. We wish to acknowledge the Center for Theoretical Underground Physics and Related Areas (CETUP*) and the Institute for Underground Science at SURF for hospitality and for providing a stimulating environment. The work of YP was supported by the São Paulo Research Foundation (FAPESP) Grants No. 2023/10734-3 and 2023/01467-1 and by the National Council for Scientific and Technological Development (CNPq) Grant No. 151168/2023-7. S.J. thanks Centro de Ciências Naturais e Humanas at Universidade Federal do ABC and the ICTP South American Institute for Fundamental Research for their warm hospitality during the completion of this work. ## Appendices ### .1 Neutronization burst phase Figure 7: Status of the matter density $\rho$ and electron number fraction $Y_{e}$ at $4.37$ ms after core bounce Serpico:2011ir ; Tang:2020pkp . The SN neutronization burst phase encompasses the period of $\sim 40$ ms after core bounce when the $\nu_{e}$ luminosity dominates over all other flavors, $\bar{\nu}_{e}$, $\nu_{x}$ and $\bar{\nu}_{x}$, with $x=\mu,\tau$ (the $\nu_{e}$ luminosity can be $10-100$ times higher than the luminosity in other flavors; see deGouvea:2019goq ; Jana:2022tsa ). During the core collapse, the inner core is compressed and reaches nuclear densities, while the matter falling above it bounces back and launches a shock wave, which dissociates nuclei into their component nucleons as it travels outwards. The capture of ambient electrons by the dissociated protons, $e^{-}+p\rightarrow n+\nu_{e}$, is responsible for generating the large $\nu_{e}$ burst during the neutronization burst phase.
Theoretical uncertainties in the calculation of the neutronization fluxes are believed to be as small as $10\%$ Serpico:2011ir ; Wallace:2015xma ; OConnor:2018sti ; Kachelriess:2004ds , and uncertainties related to neutrino-neutrino refraction Mirizzi:2015eza ; Capanema:2024hdm can also be avoided. Therefore, the burst phase presents a great opportunity to search for new physics with SN neutrinos. Neutrinos are produced in the region $r<100$ km, which is opaque due to very high densities ($\rho>10^{11}$ $\text{g/cm}^{3}$), and, at $r\sim 100$ km, they start free streaming. The efficient electron capture in the production region reduces its electron number fraction, $Y_{e}=n_{e}/(n_{p}+n_{n})$, where $n_{e}$, $n_{p}$ and $n_{n}$ are the electron, proton, and neutron number densities, respectively, to levels below the one found in the envelope ($Y_{e}=0.5$). It is precisely this difference between $Y_{e}$ in the inner and outer layers that might produce observable consequences for non-zero NSI. Further details can be seen in Fig. 7, where we plot $\rho$ and $Y_{e}$ at $\sim 5$ ms Fischer:2009af ; Tang:2020pkp . This instant is representative of the matter profile during the neutronization peak, the most distinct feature of the burst phase. ### .2 DUNE: technical details The Deep Underground Neutrino Experiment (DUNE) will be composed of four time projection chambers, each with $10$ kton of liquid argon, placed underground in the Long-Baseline Neutrino Facility (LBNF) in South Dakota, United States DUNE:2020zfm ; Cuesta:2023nnt . DUNE will primarily detect neutrinos with energies of a GeV and higher coming from a beam produced at Fermilab.
Nevertheless, DUNE is also sensitive to SN neutrinos in the range from about $5$ MeV to tens of MeV via the charged-current interaction $\nu_{e}+{}^{40}\mathrm{Ar}\rightarrow{}^{40}\mathrm{K}^{*}+e^{-},$ (23) which separates the electron flavor and has a much higher cross-section than the elastic scattering, $\nu_{e,\mu,\tau}+e^{-}\rightarrow\nu_{e,\mu,\tau}+e^{-}$, which is the current main detection channel for $\nu_{e}$ Scholberg:2012id ; Zhu:2018rwc . We compute the event spectrum for $\nu_{e}$ at DUNE as Capozzi:2018rzl $\frac{dN_{\nu_{e}}}{dE_{r}}=\frac{N_{\mathrm{Ar}}}{4\pi R^{2}}\int dE_{\nu_{e}}F_{\nu_{e}}\left(E_{\nu_{e}}\right)\sigma_{\nu_{e}+\mathrm{Ar}}\left(E_{\nu_{e}}\right)W\left(E_{r},E_{\nu_{e}}\right)$ (24) where $N_{\mathrm{Ar}}$ is the number of target ${}^{40}\mathrm{Ar}$ nuclei in the chambers, $R$ is the distance between the SN and the Earth, $F_{\nu_{e}}$ is the $\nu_{e}$ flux leaving the SN with energy $E_{\nu_{e}}$, $\sigma_{\nu_{e}+\mathrm{Ar}}$ is the cross section for the interaction in Eq. 23 generated by MARLEY Gardiner:2018zfg , $W$ is the Gaussian energy resolution with $\sigma_{E}/\text{MeV}=0.11\sqrt{E_{r}/\text{MeV}}+0.02E_{r}/\text{MeV}$, and $E_{r}$ is the reconstructed electron energy. ## References * (1) L. Wolfenstein, “Neutrino Oscillations in Matter,” Phys. Rev. D 17 (1978) 2369–2374. * (2) O. G. Miranda, M. A. Tortola, and J. W. F. Valle, “Are solar neutrino oscillations robust?,” JHEP 10 (2006) 008, arXiv:hep-ph/0406280. * (3) M. Maltoni and A. Y. Smirnov, “Solar neutrinos and neutrino physics,” Eur. Phys. J. A 52 (2016) no. 4, 87, arXiv:1507.05287. * (4) S. A. Colgate and R. H. White, “The Hydrodynamic Behavior of Supernovae Explosions,” Astrophys. J. 143 (1966) 626. * (5) W. D. Arnett, “Gravitational collapse and weak interactions,” Can. J. Phys. 44 (1966) 2553–2594. * (6) H. A. Bethe and J. R. Wilson, “Revival of a stalled supernova shock by neutrino heating,” Astrophys. J. 295 (1985) 14–23. * (7) J. R.
Wilson, “Supernovae and Post-Collapse Behavior,” Numerical Astrophysics (1985) 422. * (8) H. T. Janka, “Neutrino Emission from Supernovae,” arXiv:1702.08713. * (9) P. S. B. Dev, S. Jana, and Y. Porto, “Flavor Matters, but Matter Flavors: Matter Effects on Flavor Composition of Astrophysical Neutrinos,” arXiv:2312.17315. * (10) S. P. Mikheyev and A. Y. Smirnov, “Resonance Amplification of Oscillations in Matter and Spectroscopy of Solar Neutrinos,” Sov. J. Nucl. Phys. 42 (1985) 913–917. * (11) S. P. Mikheev and A. Y. Smirnov, “Resonant amplification of neutrino oscillations in matter and solar neutrino spectroscopy,” Nuovo Cim. C 9 (1986) 17–26. * (12) S. P. Mikheev and A. Y. Smirnov, “Neutrino Oscillations in a Variable Density Medium and Neutrino Bursts Due to the Gravitational Collapse of Stars,” Sov. Phys. JETP 64 (1986) 4–7, arXiv:0706.0454. * (13) J. W. F. Valle, “Resonant Oscillations of Massless Neutrinos in Matter,” Phys. Lett. B 199 (1987) 432–436. * (14) H. Nunokawa, J. T. Peltoniemi, A. Rossi, and J. W. F. Valle, “Supernova bounds on resonant active sterile neutrino conversions,” Phys. Rev. D 56 (1997) 1704–1713, arXiv:hep-ph/9702372. * (15) H. Nunokawa, R. Tomas, and J. W. F. Valle, “Type II supernovae and neutrino magnetic moments,” Astropart. Phys. 11 (1999) 317–325, arXiv:astro-ph/9811181. * (16) A. Esteban-Pretel, R. Tomas, and J. W. F. Valle, “Probing non-standard neutrino interactions with supernova neutrinos,” Phys. Rev. D 76 (2007) 053001, arXiv:0704.0032. * (17) A. de Gouvêa, I. Martinez-Soler, and M. Sen, “Impact of neutrino decays on the supernova neutronization-burst flux,” Phys. Rev. D 101 (2020) no. 4, 043013, arXiv:1910.01127. * (18) J. Tang, T. Wang, and M.-R. Wu, “Constraining sterile neutrinos by core-collapse supernovae with multiple detectors,” JCAP 10 (2020) 038, arXiv:2005.09168. * (19) S. Jana, Y. P. Porto-Silva, and M. 
Sen, “Exploiting a future galactic supernova to probe neutrino magnetic moments,” JCAP 09 (2022) 079, arXiv:2203.01950. * (20) S. Jana and Y. Porto, “Resonances of Supernova Neutrinos in Twisting Magnetic Fields,” Phys. Rev. Lett. 132 (2024) no. 10, 101005, arXiv:2303.13572. * (21) M. V. dos Santos, P. C. de Holanda, P. Dedin Neto, and E. Kemp, “Effects of quantum decoherence in a future supernova neutrino detection,” Phys. Rev. D 108 (2023) no. 10, 103032, arXiv:2306.17591. * (22) M. Bendahman et al., “Prospects for realtime characterization of core-collapse supernova and neutrino properties,” arXiv:2311.06216. * (23) G. A. Tammann, W. Loeffler, and A. Schroder, “The Galactic supernova rate,” Astrophys. J. Suppl. 92 (1994) 487–493. * (24) Kamiokande-II, K. Hirata et al., “Observation of a Neutrino Burst from the Supernova SN 1987a,” Phys. Rev. Lett. 58 (1987) 1490–1493. * (25) R. M. Bionta et al., “Observation of a Neutrino Burst in Coincidence with Supernova SN 1987a in the Large Magellanic Cloud,” Phys. Rev. Lett. 58 (1987) 1494. * (26) E. N. Alekseev, L. N. Alekseeva, I. V. Krivosheina, and V. I. Volchenko, “Detection of the Neutrino Signal From SN1987A in the LMC Using the Inr Baksan Underground Scintillation Telescope,” Phys. Lett. B 205 (1988) 209–214. * (27) J. Olsen and Y.-Z. Qian, “Comparison of simulated neutrino emission models with data on Supernova 1987A,” Phys. Rev. D 104 (2021) no. 12, 123020, arXiv:2108.08463. [Erratum: Phys.Rev.D 106, 109904 (2022)]. * (28) S. W. Li, J. F. Beacom, L. F. Roberts, and F. Capozzi, “Old Data, New Forensics: The First Second of SN 1987A Neutrino Emission,” arXiv:2306.08024. * (29) P. Dedin Neto, M. V. d. Santos, P. C. de Holanda, and E. Kemp, “SN1987A neutrino burst: limits on flavor conversion,” Eur. Phys. J. C 83 (2023) no. 6, 459, arXiv:2301.11407. * (30) D. F. G. Fiorillo, M. Heinlein, H.-T. Janka, G. Raffelt, E. Vitagliano, and R. Bollig, “Supernova simulations confront SN 1987A neutrinos,” Phys. Rev. 
D 108 (2023) no. 8, 083040, arXiv:2308.01403. * (31) IceCube, R. Abbasi et al., “IceCube Sensitivity for Low-Energy Neutrinos from Nearby Supernovae,” Astron. Astrophys. 535 (2011) A109, arXiv:1108.0171. [Erratum: Astron.Astrophys. 563, C1 (2014)]. * (32) DarkSide 20k, P. Agnes et al., “Sensitivity of future liquid argon dark matter search experiments to core-collapse supernova neutrinos,” JCAP 03 (2021) 043, arXiv:2011.07819. * (33) Hyper-Kamiokande, K. Abe et al., “Supernova Model Discrimination with Hyper-Kamiokande,” Astrophys. J. 916 (2021) no. 1, 15, arXiv:2101.05269. * (34) KM3NeT, S. Aiello et al., “The KM3NeT potential for the next core-collapse supernova observation with neutrinos,” Eur. Phys. J. C 81 (2021) no. 5, 445, arXiv:2102.05977. * (35) JUNO, A. Abusleme et al., “Real-time Monitoring for the Next Core-Collapse Supernova in JUNO,” arXiv:2309.07109. * (36) DUNE, B. Abi et al., “Supernova neutrino burst detection with the Deep Underground Neutrino Experiment,” Eur. Phys. J. C 81 (2021) no. 5, 423, arXiv:2008.06647. * (37) DUNE, C. Cuesta, “Supernova and solar neutrino searches at DUNE,” in 18th International Conference on Topics in Astroparticle and Underground Physics. 11, 2023. arXiv:2311.06134. * (38) G. Zhu, S. W. Li, and J. F. Beacom, “Developing the MeV potential of DUNE: Detailed considerations of muon-induced spallation and other backgrounds,” Phys. Rev. C 99 (2019) no. 5, 055810, arXiv:1811.07912. * (39) S. P. Mikheev and A. Y. Smirnov, “Neutrino Oscillations in an Inhomogeneous Medium: Adiabatic Regime,” Sov. Phys. JETP 65 (1987) 230–236. * (40) I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, I. Martinez-Soler, and J. Salvado, “Updated constraints on non-standard interactions from global analysis of oscillation data,” JHEP 08 (2018) 180, arXiv:1805.04530. [Addendum: JHEP 12, 152 (2020)]. * (41) A. S. Dighe and A. Y. Smirnov, “Identifying the neutrino mass spectrum from the neutrino burst from a supernova,” Phys. Rev. 
D 62 (2000) 033007, arXiv:hep-ph/9907423. * (42) K. S. Babu, P. S. B. Dev, S. Jana, and A. Thapa, “Non-Standard Interactions in Radiative Neutrino Mass Models,” JHEP 03 (2020) 006, arXiv:1907.09498. * (43) S. S. Chatterjee and A. Palazzo, “Nonstandard Neutrino Interactions as a Solution to the $NO\nu A$ and T2K Discrepancy,” Phys. Rev. Lett. 126 (2021) no. 5, 051802, arXiv:2008.04161. * (44) P. B. Denton, J. Gehrlein, and R. Pestes, “$CP$ -Violating Neutrino Nonstandard Interactions in Long-Baseline-Accelerator Data,” Phys. Rev. Lett. 126 (2021) no. 5, 051801, arXiv:2008.01110. * (45) P. Coloma, M. C. Gonzalez-Garcia, M. Maltoni, J. a. P. Pinheiro, and S. Urrea, “Global constraints on non-standard neutrino interactions with quarks and electrons,” JHEP 08 (2023) 032, arXiv:2305.07698. * (46) “Garching Core-Collapse Supernova Archive,”. https://wwwmpa.mpa-garching.mpg.de/ccsnarchive/. * (47) K. S. Babu, P. S. B. Dev, and S. Jana, “Probing neutrino mass models through resonances at neutrino telescopes,” Int. J. Mod. Phys. A 37 (2022) no. 11n12, 2230003, arXiv:2202.06975. * (48) K. S. Babu, P. S. Dev, S. Jana, and Y. Sui, “Zee-Burst: A New Probe of Neutrino Nonstandard Interactions at IceCube,” Phys. Rev. Lett. 124 (2020) no. 4, 041805, arXiv:1908.02779. * (49) G.-y. Huang, S. Jana, M. Lindner, and W. Rodejohann, “Probing new physics at future tau neutrino telescopes,” JCAP 02 (2022) no. 02, 038, arXiv:2112.09476. * (50) K. S. Babu, D. Gonçalves, S. Jana, and P. A. N. Machado, “Neutrino Non-Standard Interactions: Complementarity Between LHC and Oscillation Experiments,” Phys. Lett. B 815 (2021) 136131, arXiv:2003.03383. * (51) P. D. Serpico, S. Chakraborty, T. Fischer, L. Hudepohl, H.-T. Janka, and A. Mirizzi, “Probing the neutrino mass hierarchy with the rise time of a supernova burst,” Phys. Rev. D 85 (2012) 085031, arXiv:1111.4483. * (52) J. Wallace, A. Burrows, and J. C. Dolence, “Detecting the Supernova Breakout Burst in Terrestrial Neutrino Detectors,” Astrophys. 
J. 817 (2016) no. 2, 182, arXiv:1510.01338. * (53) E. O’Connor et al., “Global Comparison of Core-Collapse Supernova Simulations in Spherical Symmetry,” J. Phys. G 45 (2018) no. 10, 104001, arXiv:1806.04175. * (54) M. Kachelriess, R. Tomas, R. Buras, H. T. Janka, A. Marek, and M. Rampp, “Exploiting the neutronization burst of a galactic supernova,” Phys. Rev. D 71 (2005) 063003, arXiv:astro-ph/0412082. * (55) A. Mirizzi, I. Tamborra, H.-T. Janka, N. Saviano, K. Scholberg, R. Bollig, L. Hudepohl, and S. Chakraborty, “Supernova Neutrinos: Production, Oscillations and Detection,” Riv. Nuovo Cim. 39 (2016) no. 1-2, 1–112, arXiv:1508.00785. * (56) A. Capanema, Y. Porto, and M. M. Saez, “The Flavor Composition of Supernova Neutrinos,” arXiv:2403.14762. * (57) T. Fischer, S. C. Whitehouse, A. Mezzacappa, F. K. Thielemann, and M. Liebendorfer, “Protoneutron star evolution and the neutrino driven wind in general relativistic neutrino radiation hydrodynamics simulations,” Astron. Astrophys. 517 (2010) A80, arXiv:0908.1871. * (58) K. Scholberg, “Supernova Neutrino Detection,” Ann. Rev. Nucl. Part. Sci. 62 (2012) 81–103, arXiv:1205.6003. * (59) F. Capozzi, B. Dasgupta, and A. Mirizzi, “Model-independent diagnostic of self-induced spectral equalization versus ordinary matter effects in supernova neutrinos,” Phys. Rev. D 98 (2018) no. 6, 063013, arXiv:1807.00840. * (60) S. J. Gardiner, Nuclear Effects in Neutrino Detection. PhD thesis, UC, Davis, 2018.
# Error terms for the motives of discriminant complements and a Cayley-Bacharach theorem Ishan Banerjee ###### Abstract In this paper we answer, under some simplifying hypotheses, questions of Picoco and Levinson-Ullery on Cayley-Bacharach sets. Our results imply that, under suitable hypotheses, Cayley-Bacharach sets lie on curves of low degree. We then use these results to estimate error terms for the normalized motive of the space of smooth degree $d$ hypersurfaces in $\mathbb{P}^{n}$ as $d$ grows to infinity. The error term can be expressed in terms of a certain ‘sum over points’ on plane cubic curves, and the associated Hodge structure can be expressed in terms of the cohomology of the moduli space of elliptic curves. We also prove convergence of the motive of degree $d$ hypersurfaces in $\mathbb{P}^{n}$ as $n$ grows to infinity, as well as other results on discriminant complements of high dimensional varieties. ## 1 Introduction Let $X$ be a smooth projective variety over a field $\mathbb{F}$. Let $\mathcal{L}$ be an ample line bundle on $X$. Let $\Sigma(X,\mathcal{L}):=\\{f\in H^{0}(X,\mathcal{L})|\exists p\in X,f(p)=0,df(p)=0\\}.$ We call $\Sigma(X,\mathcal{L})$ the discriminant variety associated to $X$ and $\mathcal{L}$; it consists of those sections of $\mathcal{L}$ which define a singular hypersurface in $X$. Let $U(X,\mathcal{L})=H^{0}(X,\mathcal{L})\setminus\Sigma(X,\mathcal{L}).$ Theorems of Vakil-Wood and Poonen imply that after a suitable normalization, the motive of $U(X,\mathcal{L}^{j})$ converges (in an appropriate sense) as $j\to\infty$ (see [16] and [14]). In this paper we study the _rate of convergence_ of the motive of $U(X,\mathcal{L}^{j})$. Our main contributions in this regard are: 1. 1. An improvement of known rates of convergence for all $X$ and $\mathcal{L}$ (Theorem 1.6).
Interestingly, our results suggest that the rate of convergence improves as the dimension of $X$ grows, in that the upper bounds we establish on the ‘error term’ decrease as $\dim X\to\infty$. 2. 2. In the special case of $\mathbb{P}^{n}$, we obtain a leading error term (see Theorem 1.9) for the convergence of $U(\mathbb{P}^{n},\mathcal{O}(d))$ as $d\to\infty$. 3. 3. We establish upper bounds on the difference between the motive of $U(X,\mathcal{O}(d))$ and a special value of the motivic zeta function of $X$ for all smooth projective varieties $X$ (see Theorem 1.7). Our bound depends only on the dimension of $X$, and improves as the dimension of $X$ increases. As a consequence we establish (for $d\geq 3$) that after a suitable normalisation, the motive of $U(\mathbb{P}^{n},\mathcal{O}(d))$ converges to a certain limit of motivic zeta functions as $n\to\infty$ (see Theorem 1.7). It is important to note here that we do not assume that $d\to\infty$ or take some kind of limit in which the line bundle $\mathcal{L}$ becomes more and more ample. In order to establish the above results, we prove several theorems about Cayley-Bacharach sets in $\mathbb{P}^{n}$ which are of independent interest (the definition of Cayley-Bacharach sets will be recalled in Section 2). Our results regarding Cayley-Bacharach sets are as follows: 1. 1. We show that under certain hypotheses, Cayley-Bacharach subsets of $\mathbb{P}^{n}$ are forced to lie on curves of low degree (see Theorem 1.13). This is partial progress on a question of Picoco (see Question 1.1 of [13]). 2. 2. We establish Conjecture 1.2 of [11] under certain additional hypotheses (Theorem 1.15). The conjecture states that a subset $Z$ of $\mathbb{P}^{n}$ that is Cayley-Bacharach for $\mathcal{O}(r)$ and satisfies $|Z|\leq(d+1)r+1$ lies on a plane configuration of dimension $d$. ### 1.1 Grothendieck ring of varieties In this section, we briefly discuss the Grothendieck ring of varieties.
We largely follow section 1 of [16], where the Grothendieck ring of varieties is discussed in more detail. The Grothendieck ring of varieties $\mathcal{M}$ over the field $\mathbb{F}$ (also denoted $K_{0}(Var_{\mathbb{F}})$) is defined as follows, in terms of generators and relations. 1. 1. Given a finite type $\mathbb{F}$ scheme $X$, we have an element $[X]\in\mathcal{M}$. Isomorphic schemes define the same element of $\mathcal{M}$. 2. 2. The collection of all $[X]$ generate $\mathcal{M}$ as an abelian group. 3. 3. Given any finite type scheme $X$ and a closed subscheme $Y$ with complement $U=X\setminus Y$, we have the relation $[X]=[U]+[Y].$ 4. 4. Given $[X],[Y]\in\mathcal{M}$ we define $[X][Y]=[X\times Y]$. This gives $\mathcal{M}$ the structure of a ring. Let $\mathbb{L}=[\mathbb{A}^{1}].$ Let $\mathcal{M}_{\mathbb{L}}=\mathcal{M}[\mathbb{L}^{-1}]$. There is an increasing filtration on $\mathcal{M}$, with varieties of dimension $\leq\ell$ generating the $\ell$th piece of the filtration. We will call this filtration the dimensional filtration on $\mathcal{M}$. The filtration extends to a filtration on $\mathcal{M}_{\mathbb{L}}$. We denote the completion of $\mathcal{M}_{\mathbb{L}}$ with respect to this filtration by $\hat{\mathcal{M}_{\mathbb{L}}}$. ###### Remark 1. In this paper we will often have formulae where an element $x\in\mathcal{M}$ is multiplied by a vector space over the base field $\mathbb{F}$. These formulae are to be understood in the following sense: we identify a vector space $V$ with $\mathbb{L}^{\dim V}$. Any expression of the form $xV$ where $x\in\mathcal{M}$ is simply shorthand for $x\mathbb{L}^{\dim V}$. Similarly an expression of the form $xV^{-1}$ is understood to mean $x\mathbb{L}^{-\dim V}$. Let us now assume $\mathbb{F}=\mathbb{C}$. Let $\mathcal{M}^{Hdg}$ denote the Grothendieck group of mixed Hodge structures. Let $\mathbb{L}^{Hdg}=[H^{2}_{c}(\mathbb{A}^{1},\mathbb{Q})]$.
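As a worked instance of the relations defining $\mathcal{M}$ above (a standard computation, included for orientation): decomposing projective space as $\mathbb{P}^{n}=\mathbb{A}^{n}\sqcup\mathbb{P}^{n-1}$ and applying relation 3 inductively gives

```latex
[\mathbb{P}^{n}] = [\mathbb{A}^{n}] + [\mathbb{P}^{n-1}]
                 = \mathbb{L}^{n} + \mathbb{L}^{n-1} + \cdots + \mathbb{L} + 1 .
```

Its image in $\mathcal{M}^{Hdg}$ is correspondingly $\sum_{k=0}^{n}(\mathbb{L}^{Hdg})^{k}$, since the compactly supported cohomology of $\mathbb{P}^{n}$ is concentrated in even degrees, with $H^{2k}_{c}(\mathbb{P}^{n},\mathbb{Q})$ pure of type $(k,k)$.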
We define $\mathcal{M}_{\mathbb{L}}^{Hdg}=\mathcal{M}^{Hdg}[(\mathbb{L}^{Hdg})^{-1}].$ There is an increasing filtration on $\mathcal{M}^{Hdg}$ where the $\ell$th piece of the filtration is generated by Hodge structures of weight $\leq\ell$. The filtration extends to a filtration on $\mathcal{M}_{\mathbb{L}}^{Hdg}$. We denote the completion of $\mathcal{M}_{\mathbb{L}}^{Hdg}$ with respect to this filtration by $\hat{\mathcal{M}_{\mathbb{L}}}^{Hdg}$. There is a specialisation map $\phi:\mathcal{M}\to\mathcal{M}^{Hdg}$ defined on smooth proper generators by $[X]\mapsto\sum_{k}(-1)^{k}H^{k}_{c}(X,\mathbb{Q}).$ The map extends to a map $\mathcal{M}_{\mathbb{L}}\to\mathcal{M}_{\mathbb{L}}^{Hdg}$ and a map $\hat{\mathcal{M}_{\mathbb{L}}}\to\hat{\mathcal{M}_{\mathbb{L}}}^{Hdg}.$ There is a further specialisation of the map $\hat{\mathcal{M}_{\mathbb{L}}}\to\hat{\mathcal{M}_{\mathbb{L}}}^{Hdg}$ to a map $e:\hat{\mathcal{M}_{\mathbb{L}}}\to\mathbb{Z}[x,y][[x^{-1},y^{-1}]]$ defined on generators by $e(X)=\sum_{(p,q)\in\mathbb{N}^{2}}\sum_{k\in\mathbb{N}}(-1)^{k}h^{p,q}(H^{k}_{c}(X,\mathbb{Q}))x^{p}y^{q}.$ Here $h^{p,q}$ denotes the $(p,q)^{th}$ Hodge number of the mixed Hodge structure $H^{k}_{c}(X,\mathbb{Q})$. We will call the polynomial $e(X)$ the Serre polynomial of $X$. #### Motivic zeta function Let $X$ be a quasi-projective variety over $\mathbb{F}$. For $n>0$ we define $\textrm{Sym}^{n}X$ to be the $n^{th}$ symmetric power of $X.$ For $n=0$, we set $\textrm{Sym}^{0}X$ to be $\textrm{Spec}\,\mathbb{F}$. We define the Kapranov motivic zeta function of $X$, $Z_{X}(t)\in\mathcal{M}_{\mathbb{L}}[[t]]$, by $Z_{X}(t)=\sum_{k=0}^{\infty}[\textrm{Sym}^{k}X]t^{k}.$ We define $\zeta_{X}(s)=Z_{X}(\mathbb{L}^{-s}).$ For $s>\dim X$, $\zeta_{X}(s)\in\hat{\mathcal{M}}_{\mathbb{L}}.$ #### 1.1.1 A note on brackets In this paper we will deal with many large formulae in the Grothendieck ring.
As a result we have decided to omit the square brackets in our formulae when referring to the class of a variety, i.e. we will refer to $[X]$ as $X$. This is because keeping the brackets in the notation would lead to a huge number of brackets and be quite difficult to parse. ### 1.2 Meta-problems for convergence In this subsection we describe several meta-problems about convergence of a sequence in the completion of a filtered ring. Theorems 1.6, 1.9, and 1.7 will be instances of these meta-problems. This subsection is, strictly speaking, not essential to the rest of the paper, and the main results of the paper will still be completely understandable without it. However we include this subsection for motivation and to place some of our results in a wider context. Let $R$ be a ring with an increasing bi-infinite filtration $F^{\ell}$. Assume that $\cap_{\ell\in\mathbb{Z}}F^{\ell}R=0,\cup_{\ell\in\mathbb{Z}}F^{\ell}R=R$. Let $\hat{R}$ be the completion of $R$ with respect to the filtration. Now we define a list of meta-problems regarding convergence in $\hat{R}$, in increasing order of difficulty. ###### Problem 1.1. Let $(x_{n})$ be a sequence in $\hat{R}$. Show that $\lim_{n\to\infty}x_{n}$ exists. ###### Problem 1.2. Let $(x_{n})$ be a sequence in $\hat{R}$. Show that $\lim_{n\to\infty}x_{n}$ exists. Give an explicit description of the limit. ###### Problem 1.3. Let $(x_{n})$ be a sequence in $\hat{R}$. Show that $\lim_{n\to\infty}x_{n}=L$. Find $\ell_{n}\in\mathbb{Z},\ell_{n}\to\infty$ such that $x_{n}-L\in F^{-\ell_{n}}\hat{R}$. ###### Problem 1.4. Let $(x_{n})$ be a sequence in $\hat{R}$. Show that $\lim_{n\to\infty}x_{n}=L$. Find $\ell_{n}\in\mathbb{Z},\ell_{n}\to\infty$ such that $x_{n}-L\in F^{-\ell_{n}}\hat{R}$ and for $n\gg 0$ $x_{n}-L\not\in F^{-\ell_{n}+1}\hat{R}$. One could just as well propose a variant of Problem 1.4 where we require $x_{n}-L\not\in F^{-\ell_{n}+C}\hat{R}$ where $C\in\mathbb{R}$ is some constant not depending on $n$.
###### Problem 1.5. Let $(x_{n})$ be a sequence in $\hat{R}$ such that $\lim_{n\to\infty}x_{n}=L$. Construct a sequence $y_{n}$ such that $x_{n}-L=y_{n}+z_{n}$, where $z_{n}$ is much smaller than $y_{n}$. Of course Problem 1.5 as stated is quite imprecise, and one would need to decide what it means for $z_{n}$ to be much smaller than $y_{n}$. For example, if the $y_{n}$ we construct are all units, one could ask that $\lim_{n\to\infty}\frac{z_{n}}{y_{n}}=0$. ### 1.3 Statement of results on convergence We will adopt the following notation: 1. 1. Let $X$ be a scheme over a field $\mathbb{F}$. For a closed subscheme $Y\subseteq X$ we denote the ideal sheaf of $Y$ by $I_{Y}$. 2. 2. Let $\mathcal{F}$ be a coherent sheaf on $X$. We define $h^{i}(X,\mathcal{F}):=\dim_{\mathbb{F}}H^{i}(X,\mathcal{F}).$ 3. 3. We define $N_{X}^{*}$ to be the conormal sheaf of $Y$ in $X$; it is the pullback of the sheaf $I_{Y}$ to $Y$. 4. 4. Given $X$ and $Y$ as above we define $N_{X}(Y)$ to be the subscheme of $X$ defined by the sheaf $I^{2}_{Y}$, i.e. the first infinitesimal neighbourhood of $Y$. Given $x,y\in\hat{\mathcal{M}}_{\mathbb{L}}$, we will say $x=y$ up to dimension $r$ if $x-y$ lies in $F^{r}\hat{\mathcal{M}}_{\mathbb{L}}$, where $F^{r}$ refers to the dimension filtration on $\hat{\mathcal{M}}_{\mathbb{L}}$. Let $X$ be a smooth variety over a field $\mathbb{F}$ and $\mathcal{L}$ an ample line bundle on $X$. Then Proposition 3.7 of [16] states that $\lim_{d\to\infty}\frac{U(X,\mathcal{L}^{d})}{H^{0}(X,\mathcal{L}^{d})}=\zeta_{X}^{-1}(\dim X+1).$ This can be seen as an instance of Problem 1.1. However the proof of Proposition 3.7 in [16] actually gives more. Let $m(d)$ be the largest integer such that for all zero dimensional reduced subschemes $Z\subseteq X$ of length $\leq m(d)$, the map $H^{0}(X,\mathcal{L}^{d})\to H^{0}(N_{X}(Z),\mathcal{L}^{d})$ is surjective.
Then the proof of Proposition 3.7 in [16] actually implies that $\frac{U(X,\mathcal{L}^{d})}{H^{0}(X,\mathcal{L}^{d})}-\zeta_{X}^{-1}(\dim X+1)$ lies in $F^{-m(d)}\hat{\mathcal{M}}_{\mathbb{L}}$. We improve this result to the following: ###### Theorem 1.6. Let $X,\mathcal{L},m(d)$ be as above. Then there exists $C>0$ such that for all $d\gg 0$, $\frac{U(X,\mathcal{L}^{d})}{H^{0}(X,\mathcal{L}^{d})}=\zeta_{X}^{-1}(\dim X+1)$ up to dimension $-m(d)(\dim X)+C$. ###### Remark 2. The constant $C$ in the above theorem depends on $X$ and $\mathcal{L}$; it is related to the dimensions of the spaces of curves in $X$ on which the line bundle $\mathcal{L}$ has low degree. However, it is fairly straightforward to adapt the proof to obtain a constant $C^{\prime}$ with $C<C^{\prime}$ that depends only on the dimension of $X$. We note that the improvement is greater for varieties of high dimension. This motivates us to study whether there are bounds on the difference $\frac{U(X,\mathcal{L}^{d})}{H^{0}(X,\mathcal{L}^{d})}-\zeta_{X}^{-1}(\dim X+1)$ that improve with the dimension of $X$ and are valid for small values of $d$. ###### Theorem 1.7. Let $X$ be a smooth projective variety, and let $\mathcal{L}$ be a _very_ ample line bundle. Then for any $d\geq 3$, $\frac{U(X,\mathcal{L}^{d})}{H^{0}(X,\mathcal{L}^{d})}=\zeta_{X}^{-1}(\dim X+1),$ up to dimension $\sqrt{\dim X}/3$. As a corollary we have: ###### Corollary 1.8. Let $X_{n}$ be a sequence of smooth projective varieties such that $\dim X_{n}\to\infty$ and $\lim_{n\to\infty}\zeta_{X_{n}}^{-1}(\dim X_{n}+1)$ exists. Let $\mathcal{L}_{n}$ be a very ample line bundle on $X_{n}$. Let $d_{n}\geq 3$ be a sequence of positive integers.
Then $\lim_{n\to\infty}\frac{U(X_{n},\mathcal{L}_{n}^{d_{n}})}{H^{0}(X_{n},\mathcal{L}_{n}^{d_{n}})}=\lim_{n\to\infty}\zeta_{X_{n}}^{-1}(\dim X_{n}+1).$ In particular $\lim_{n\to\infty}\frac{U(\mathbb{P}^{n},\mathcal{O}(d_{n}))}{H^{0}(\mathbb{P}^{n},\mathcal{O}(d_{n}))}=\prod_{k=1}^{\infty}(1-\mathbb{L}^{-k}).$ Finally we identify the leading error term for the sequence $\frac{U(\mathbb{P}^{n},\mathcal{O}(d))}{H^{0}(\mathbb{P}^{n},\mathcal{O}(d))}$, where $n$ is fixed and $d$ goes to infinity, providing a complete answer to Problem 1.5 in this instance. ###### Theorem 1.9. Let $n\geq 2$. Then for $d\gg n$ there exists $\epsilon>0$ such that $\frac{U(\mathbb{P}^{n},\mathcal{O}(d))}{H^{0}(\mathbb{P}^{n},\mathcal{O}(d))}-Z_{\mathbb{P}^{n}}^{-1}(n+1)=Y(d)+Z(d),$ where: 1. 1. $\dim Z(d)\leq-(\frac{3n}{2}+\epsilon)d$. 2. 2. For $d\equiv 1\mod{4}$, the virtual Hodge structure of $Y(d)$ has highest weight term $E_{k}(\mathbb{L}^{Hdg})^{n^{2}-k(n+1)+1}$ in $K_{0}(MHS),$ where $E_{k}$ is the highest weight term of the virtual Hodge structure corresponding to $H^{1}(\mathcal{M}_{1,1},H_{k})$. Here $\mathcal{M}_{1,1}$ is the moduli space of elliptic curves, $H_{k}$ is the Hodge bundle corresponding to the $k^{th}$ tensor power of the relative canonical bundle, and $k=\frac{3d+1}{2}$. $Y(d)$ is explicitly defined in Section 6. ###### Remark 3. $Y(d)$ has an explicit description in terms of configurations of points on smooth plane cubic curves; a precise description is given in Section 6 (just before Proposition 6.14), and its virtual Hodge structure may be readily computed from that description using the results of [7]. ### 1.4 Motivation for our results on convergence Our motivation for the results proven above comes from two different sources, homological stability and number theory. We will first discuss some motivation related to homological stability.
#### 1.4.1 Homological stability It was established by [15] that the spaces $U(\mathbb{P}^{n},\mathcal{O}(d))$ satisfy homological stability. Later on, Aumonier in [1] and Das-Howe in [3] established homological stability for the sequence of spaces $U(X,\mathcal{L}^{d})$. However, it was known (and this is even mentioned in [15]) that the range of homological stability given in [15] is not optimal. It was therefore of interest to see where and how homological stability fails and to explain the first class in $H^{*}(U(\mathbb{P}^{n},\mathcal{O}(d)),\mathbb{Q})$ that is not stable. Related to the above is the desire to establish ‘secondary homological stability’-like results for the family of spaces $U(\mathbb{P}^{n},\mathcal{O}(d))$; for instance, the paper [5] establishes secondary stability results of this kind. It would be interesting to establish some analogue of these results in the case of $U(\mathbb{P}^{n},\mathcal{O}(d))$; however, there are several difficulties in trying to do so. First of all, there are no maps between the spaces $U(\mathbb{P}^{n},\mathcal{O}(d))$ for different values of $d$. Secondly, the nature of the spaces $U(\mathbb{P}^{n},\mathcal{O}(d))$ is very different from the kind of spaces for which ‘secondary stability’ has been established: secondary stability results are known in the setting of configuration spaces on manifolds and other more topological, less algebro-geometric spaces than discriminant complements, and the techniques used to establish such results are much more topological than the ones in this paper. In the end we were unable to establish such a ‘secondary stability’ theorem for the homology of the spaces $U(\mathbb{P}^{n},\mathcal{O}(d))$, and this paper does not really deal with results at the level of homology. One may in fact combine the techniques of this paper with those of [15] to conclude that $H^{k}(U(\mathbb{P}^{n},\mathcal{O}(d)),\mathbb{Q})\cong H^{k}(GL_{n+1}(\mathbb{C}),\mathbb{Q}),$ for $k\leq\frac{nd}{2}$. 
However we do not include this result, as we do not believe it is optimal either, and we do not know how to obtain the optimal answer. In some sense our failure to get such a result is a consequence of some of the differences between cohomology and the Grothendieck ring: in the Grothendieck ring setting, if a variety $X$ admits a stratification with understandable pieces, we understand the class $[X]$ in the Grothendieck ring. However, if a variety $X$ has a stratification with understandable pieces, it is still not clear what the cohomology of $X$ is; the best we can do is obtain a spectral sequence whose $E_{1}$ page is related to the cohomology of the strata, and it is often not clear in these situations what the differentials in this spectral sequence are. As a result we were unable to adapt some of the arguments involved in proving Theorem 1.9 to a cohomological setting. It still may very well be possible to establish a cohomological analogue of Theorem 1.9, but it would require somewhat different techniques than the ones we have used or are aware of. However, it is possible to establish that for fixed $d\geq 3$, $H^{k}(U(\mathbb{P}^{n},\mathcal{O}(d)))\cong H^{k}(GL_{n+1}(\mathbb{C}))$ for $k\leq f(n)$, where $f(n)$ is some function of $n$ that grows to $\infty$. This is a consequence of combining the techniques involved in proving Theorem 1.7 with those of [15]. We believe it should be possible to get a cohomological analogue of Theorem 1.7 as well. While we have not established any secondary stability results for the sequence of spaces $U(\mathbb{P}^{n},\mathcal{O}(d))$, we have established an analogue of secondary stability in the setting of $K_{0}(Var_{\mathbb{F}})$: by the results of [16], we have $\lim_{d\to\infty}\frac{U(\mathbb{P}^{n},\mathcal{O}(d))}{H^{0}(\mathbb{P}^{n},\mathcal{O}(d))}=\frac{1}{Z_{\mathbb{P}^{n}}(n+1)}.$ Theorem 1.9 refines this result to give an estimate of the difference between the limit and the elements of the sequence. 
Furthermore the proof of Theorem 1.9 identifies this leading error term as arising from the failure of the proof of [16] to give an equality between $\frac{1}{Z_{\mathbb{P}^{n}}(n+1)}$ and $\frac{U(\mathbb{P}^{n},\mathcal{O}(d))}{H^{0}(\mathbb{P}^{n},\mathcal{O}(d))}$; the proof of Theorem 1.13 in [16] crucially involves the fact that if $Z$ is a collection of points in $X$ whose length is small compared to $d$, then the vanishing of $1$-jets at the points of $Z$ imposes linearly independent conditions. This fails when the length of $Z$ is comparable to $d$, and our error term $Y(d)$ is related to (and in fact defined in terms of) the variety parametrizing collections of points where this linear independence just begins to fail. #### 1.4.2 Arithmetic statistics A second source of motivation comes from number theory, in particular from arithmetic statistics. In many situations in arithmetic statistics, one is interested in establishing asymptotic expressions for quantities of arithmetic interest. For instance, as a prototype, one has Roberts’ conjecture (established in [2]), which says that the number of degree $3$ extensions of $\mathbb{Q}$ of discriminant $\leq X$ is $aX+bX^{\frac{5}{6}}+o(X^{\frac{5}{6}})$. We view Theorem 1.9 as a theorem of this kind: we have essentially established that the class of the discriminant complement $U(\mathbb{P}^{n},\mathcal{O}(d))$ is $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))Z_{\mathbb{P}^{n}}^{-1}(n+1)+H^{0}(\mathbb{P}^{n},\mathcal{O}(d))Y(d)+\textrm{ lower order terms},$ where lower order is to be understood in terms of the dimension filtration. A natural question to ask here is whether Theorems 1.9 and 1.7 have consequences with regard to point counting over finite fields. More precisely, one may ask if one has a result of the following form: Let $\mathbb{F}=\mathbb{F}_{q}$. Let $X,\mathcal{L},d$ be as in Theorem 1.7. 
Is there a bound on $\frac{\\#U(X,\mathcal{L}^{d})}{\\#H^{0}(X,\mathcal{L}^{d})}-\\#Z_{X}^{-1}(-n-1)$ analogous to Theorem 1.9? Does $\lim_{n\to\infty}\frac{\\#U(\mathbb{P}^{n},\mathcal{O}(d))}{\\#H^{0}(\mathbb{P}^{n},\mathcal{O}(d))}$ exist and equal $\prod_{i=1}^{\infty}(1-\frac{1}{q^{i}})?$ Does one have $\frac{\\#U(\mathbb{P}^{n},\mathcal{O}(d))}{\\#H^{0}(\mathbb{P}^{n},\mathcal{O}(d))}=\\#Z_{\mathbb{P}^{n}}^{-1}(-n-1)+\\#Y(d)+\textrm{ lower order terms}?$ We believe that the answer to all the above questions is yes, and we are currently working on them. The reason why point counting formulae like the above do not immediately follow from Theorems 1.7 and 1.9 is that the functor $\\#$ is not continuous with respect to the dimension filtration: varieties of small dimension can have an arbitrary number of points. In addition, some of the methods we use are not well adapted to the setting of point counts; e.g. the sums over ordered partitions that we use involve a very large number of terms, which may cause problems when added up. However we believe that we may combine our methods with those of [14] to obtain the above results. ### 1.5 Cayley-Bacharach sets In this section, we will define Cayley-Bacharach sets and prove a few basic propositions about them. This may appear to be quite unrelated to the material of the previous section; however, it is not, as the results of the previous section rely on our results on Cayley-Bacharach sets. In this section, we assume $\mathbb{F}$ to be an algebraically closed field. Let $X$ be a projective variety over $\mathbb{F}$. Let $\mathcal{L}$ be a line bundle on $X$. ###### Definition 1.10. A zero dimensional reduced subscheme $Z\subseteq X$ is said to be Cayley-Bacharach for $\mathcal{L}$ if for every $f\in H^{0}(X,\mathcal{L})$, whenever $f$ vanishes on all but one point of $Z$, it vanishes on all of $Z$. We will now reformulate the above definition. 
Given any $Z^{\prime}\subseteq Z\subseteq X$ we have restriction maps $H^{0}(X,\mathcal{L})\to H^{0}(Z,\mathcal{L})\to H^{0}(Z^{\prime},\mathcal{L})$. We hence have a map $\textrm{Im}(H^{0}(X,\mathcal{L})\to H^{0}(Z,\mathcal{L}))\to\textrm{Im}(H^{0}(X,\mathcal{L})\to H^{0}(Z^{\prime},\mathcal{L})).$ Given a zero dimensional scheme $Z$, we will use $|Z|$ to denote the length of $Z$. ###### Proposition 1.11. A zero dimensional reduced subscheme $Z\subseteq X$ is Cayley-Bacharach for $\mathcal{L}$ iff for all $Z^{\prime}\subseteq Z$ with $|Z\setminus Z^{\prime}|=1$, the restriction map $\textrm{Im}(H^{0}(X,\mathcal{L})\to H^{0}(Z,\mathcal{L}))\to\textrm{Im}(H^{0}(X,\mathcal{L})\to H^{0}(Z^{\prime},\mathcal{L}))$ (1) is an isomorphism. ###### Proof. If $Z^{\prime}\subseteq Z$, then the homomorphism in (1) is always surjective. For it to be an isomorphism is therefore equivalent to it having zero kernel. But if $f$ is in the kernel of (1), it is an element of $H^{0}(X,\mathcal{L})$ that restricts to $0$ on $Z^{\prime}$. If $Z$ is Cayley-Bacharach, this implies that $f$ restricts to $0$ on $Z$, and hence the kernel of (1) is trivial. Conversely, if the kernel of (1) is trivial, then any $f$ restricting to zero on $Z^{\prime}$ restricts to $0$ on $Z$; hence if the kernel of (1) is trivial for all choices of $Z^{\prime}$, then $Z$ is Cayley-Bacharach for $\mathcal{L}$. ∎ ### 1.6 Results on Cayley-Bacharach sets In [13], Picoco asks the following question. ###### Question 1.12. Let $e,d>0$. Suppose $Z\subseteq\mathbb{P}^{n}$ is a reduced set of points and that $|Z|<e(d-e+3)-1$. Then is it true that if $Z$ is Cayley-Bacharach for $\mathcal{O}(d)$, it lies on a curve of degree $<e$? While we do not know if Picoco’s question has an affirmative answer, we have obtained the following partial result. ###### Theorem 1.13. Let $d\gg e\geq 1$. Let $Z$ be a Cayley-Bacharach subset of $\mathbb{P}^{n}$ for $\mathcal{O}(d)$. 
Then there exists a function $f$, depending on $n$ but not on $d$, such that if $|Z|<ed-f(e),$ then $Z$ lies on a curve of degree $\leq e$. In [11] (Conjecture 1.2 in that paper), Levinson and Ullery made the following conjecture. ###### Conjecture 1.14. Let $d\geq 1$. Suppose $Z\subseteq\mathbb{P}^{n}$ is a Cayley-Bacharach set for $\mathcal{O}(d)$. Suppose $|Z|\leq(e+1)d+1$. Then $Z$ is contained in a union of positive dimensional linear subspaces $L_{1},\dots,L_{k}$ of $\mathbb{P}^{n}$ such that $\sum_{i=1}^{k}\dim L_{i}\leq e$. ###### Theorem 1.15. Let $d\gg e$. Then Conjecture 1.14 is true. One can replace $d\gg e$ in Theorem 1.15 with $d$ greater than a certain quartic polynomial in $e$ by going through the proof. The proof of Theorem 1.15 (assuming Theorem 1.13) is fairly straightforward and is essentially a consequence of the fact that an integral curve of degree $e$ lies in an $e$ dimensional linear subspace of projective space. ### 1.7 Acknowledgements I would like to thank Eduard Looijenga and Madhav Nori for helpful and clarifying discussions. I’d like to thank Jake Levinson, Brooke Ullery and David Stapleton for discussing some of their ideas with me. I’d like to thank Ronno Das and Aaron Landesmann for suggestions on how to improve the paper. I’d like to thank Jordan Ellenberg for a question that led to Theorem 1.7. Finally, I would like to thank Joshua Mundinger for extensive comments on several drafts of this paper. ## 2 Cayley-Bacharach subsets lie on curves In this section we will prove Theorems 1.13 and 1.15. Our strategy for proving Theorem 1.13 is to induct on the dimension of the ambient space. In the case of $\mathbb{P}^{2}$ (the base case of our induction), we have the following theorem of Lopez and Pirola (this is a subcase of Lemma 2.5 of [12]). ###### Proposition 2.1. Let $Z\subseteq\mathbb{P}^{2}$ be a Cayley-Bacharach set for $\mathcal{O}(d)$. Let $e\geq 1$. If $|Z|\leq e(d-e+3)-1$, then $Z$ lies on a curve of degree $\leq e-1$. 
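As a simple illustration of these statements (our own example, not taken from [12] or [13]): a set of $d+2$ distinct points on a line $L\subseteq\mathbb{P}^{2}$ is Cayley-Bacharach for $\mathcal{O}(d)$, and Proposition 2.1 detects the line.

```latex
% Z = d+2 distinct points on a line L in P^2. If f in H^0(P^2, O(d)) vanishes
% at all but one point of Z, then f restricted to L is a degree d form on
% L = P^1 with d+1 zeros, so f|_L = 0 and f vanishes at the remaining point
% as well: Z is Cayley-Bacharach for O(d). Applying Proposition 2.1 with e = 2:
|Z| \;=\; d+2 \;\leq\; 2(d-2+3)-1 \;=\; 2d+1 \qquad (d\geq 1),
```

so $Z$ lies on a curve of degree $\leq 1$, i.e. on a line, recovering $L$.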
Our induction strategy will involve linearly projecting onto smaller dimensional projective spaces. We will need the following propositions, which deal with projections and Cayley-Bacharach sets. ###### Proposition 2.2. Let $n\geq 2$. Let $Z\subseteq\mathbb{P}^{n}$ be a Cayley-Bacharach set for $\mathcal{O}(d)$. Let $P\in\mathbb{P}^{n}\setminus Z$. Let $H\cong\mathbb{P}^{n-1}$ be a hyperplane not containing $P$. Let $\pi:\mathbb{P}^{n}\setminus\\{P\\}\to H$ be the projection map. Suppose that the scheme theoretic image $\pi(Z)$ of $Z$ is reduced. Then $\pi(Z)$ is a Cayley-Bacharach set for $\mathcal{O}(d)$. ###### Proof. We note that given a linear projection $\pi$, defined by projecting from a point $P$, and a zero dimensional reduced subscheme $Z$, the conditions that $P$ does not lie on a line joining any two of the points in $Z$ and that $\pi(Z)$ is reduced are equivalent. Without loss of generality, we may assume $P=[0:0:\dots:0:1]$ and that $H$ is the hyperplane defined by $X_{n+1}=0.$ Then the projection map $\pi$ is defined by $[x_{1}:x_{2}:\dots:x_{n}:x_{n+1}]\mapsto[x_{1}:x_{2}:\dots:x_{n}].$ Let $Z=\\{P_{1},P_{2},\dots,P_{m}\\}$ and $\pi(Z)=\\{\pi(P_{1}),\pi(P_{2}),\dots,\pi(P_{m})\\}$. Let $f\in H^{0}(\mathbb{P}^{n-1},\mathcal{O}(d))$ be a section vanishing at $\\{\pi(P_{1}),\dots,\pi(P_{m-1})\\}.$ Such an $f$ is given by a degree $d$ homogeneous polynomial $p$ in the variables $X_{1},\dots,X_{n}$. However, $p$ can be thought of as a polynomial in the variables $X_{1},\dots,X_{n},X_{n+1}$ as well, with no dependence on $X_{n+1}$. We observe that for any $Q\in\mathbb{P}^{n}\setminus\\{[0:\dots:0:1]\\}$, $p(Q)=0$ iff $p(\pi(Q))=0$. Hence $p(P_{i})=0$ for $i\in\\{1,\dots,m-1\\}.$ But since $Z$ is Cayley-Bacharach for $\mathcal{O}(d)$, this implies that $p$ vanishes on $P_{m}$ as well, and hence on $\pi(P_{m})$. Hence $\pi(Z)$ is Cayley-Bacharach for $\mathcal{O}(d)$. ∎ ###### Proposition 2.3. Let $n\geq 3$. 
Let $P_{1},P_{2}$ be two distinct points in $\mathbb{P}^{n}$. Let $H$ be a hyperplane in $\mathbb{P}^{n}$ such that $P_{1},P_{2}\not\in H$. Let $C_{1},C_{2}$ be two curves in $H$ such that $\textrm{deg}(C_{i})\leq e_{i}$. Let $S_{i}$ be the cone over $C_{i}$ with apex $P_{i}$. Then $S_{1}\cap S_{2}=C\cup\Gamma$, where $C$ is a curve and $\Gamma$ is zero dimensional. Furthermore $\deg(C)+|\Gamma|\leq e_{1}e_{2}$. ###### Proof. We begin by noting that a cone over an irreducible curve is irreducible and that cones with different apexes are distinct subvarieties. This immediately implies that $S_{1}$ and $S_{2}$ have no irreducible components in common and $S_{1}\cap S_{2}$ is a lower dimensional subvariety of $\mathbb{P}^{n}$ and hence is of the form $C\cup\Gamma$, where $C$ is a curve and $\Gamma$ is zero dimensional. We note that $\deg(S_{i})=\deg(C_{i})$. We now apply the generalised Bezout’s theorem of [4] (bottom of page 223), which states that the total degree of $S_{1}\cap S_{2}$ (i.e. $\deg C+|\Gamma|$) is less than or equal to $\deg(S_{1})\deg(S_{2})\leq e_{1}e_{2}$. ∎ ###### Proposition 2.4. Let $d\gg e>0$. Let $Z\subseteq\mathbb{P}^{n}$ be a Cayley-Bacharach set for $\mathcal{O}(d)$. Let $C\subseteq\mathbb{P}^{n}$ be a reduced curve. Let $\Gamma\subseteq\mathbb{P}^{n}$ be a finite set of points disjoint from $C$. Suppose $\deg(C)+|\Gamma|\leq e^{2}$. If $Z\subseteq C\cup\Gamma$, then $Z\subseteq C$. ###### Proof. For a fixed value of $\deg(C)+|\Gamma|$, there exists $d_{0}$ such that for $d>d_{0}$, the map $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(C\cup\Gamma,\mathcal{O}(d))$ is surjective. Suppose $P\in Z\cap\Gamma$, and that $P\not\in C$. Then there exists a section $f\in H^{0}(C\cup\Gamma,\mathcal{O}(d))$ that is nonzero at $P$ and vanishes on $C\cup\Gamma\setminus\\{P\\}$. 
Since the map $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(C\cup\Gamma,\mathcal{O}(d))$ is surjective, we can find a polynomial $p\in H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$ that restricts to $f$. Such a $p$ vanishes on $Z\setminus\\{P\\}$ and doesn’t vanish at $P$. This contradicts the Cayley-Bacharach property of $Z$. Hence no such point $P$ exists, i.e. $Z$ is contained in $C$. ∎ ###### Proposition 2.5. Let $d\gg e>0$. Let $C\subseteq\mathbb{P}^{n}$ be a reduced curve of degree $e$. Let $Z\subseteq C$ be a Cayley-Bacharach set for $\mathcal{O}(d)$. We assume that $Z$ is not contained in any curve of degree $<e$. Then there is a positive real valued increasing function $f$, not depending on $Z$ or $d$, such that $|Z|\geq ed-f(e)$. ###### Proof. The proof of this proposition is essentially Riemann-Roch for curves. However, we first need to do some work to convert the given curve $C$ into a smooth curve to which we can apply the Riemann-Roch theorem. We first note that a Cayley-Bacharach set must contain at least $d$ points (see for instance [11]). Furthermore there is a function $\phi_{1}$ such that any curve $C_{0}$ of degree $e$ satisfies $|\textrm{Sing}(C_{0})|\leq\phi_{1}(e)$. Hence for $d\gg e$, $Z$ is not contained in $\textrm{Sing}(C)$. Let $C_{1},\dots,C_{k}$ be the irreducible components of $C$. We note that $k\leq e$. We can similarly conclude that for any irreducible component $C_{i}$ of $C$, the intersection $Z\cap(C_{i}\setminus\textrm{Sing}(C))$ is nonempty. If this were not the case, $Z$ would be a Cayley-Bacharach set contained in the union of the curve $C^{\prime}=\cup_{j\neq i}C_{j}$ and the finite set $\textrm{Sing}(C)$. But for $d\gg e$, we may apply the argument of Proposition 2.4 to conclude $Z\subseteq C^{\prime}$, which contradicts the fact that $Z$ is not contained in a curve of degree $<e$. Let $\pi:\tilde{C}\to C$ be a resolution of singularities. Let $\tilde{Z}=\pi^{-1}(Z)\cup\pi^{-1}(\textrm{Sing}(C))$. 
Note that there is some function $\phi_{2}$ such that $|\tilde{Z}|-|Z|<\phi_{2}(e)$. Let $P\in Z\setminus\textrm{Sing}(C)$. Let $\tilde{P}=\pi^{-1}(P)$. Suppose there exists $\tilde{f}\in H^{0}(\tilde{C},\mathcal{O}(d))$ vanishing at $\tilde{Z}\setminus\\{\tilde{P}\\}$ such that $\tilde{f}(\tilde{P})\neq 0$. Then $\tilde{f}$ is the pullback of a section $f\in H^{0}(C,\mathcal{O}(d))$ vanishing at $Z\setminus\\{P\\}$ and not vanishing at $P$. For $d\gg e$, $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$ surjects onto $H^{0}(C,\mathcal{O}(d))$, and hence $f$ is the restriction of some element of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$. This contradicts the Cayley-Bacharach property of $Z$. Hence no such $f$, and consequently no such $\tilde{f}$, can exist. We may now apply the Riemann-Roch theorem to the curve $\tilde{C}$. Let $\tilde{C}_{1},\dots,\tilde{C}_{k}$ be the corresponding components of $\tilde{C}$. Let $\tilde{P}\in\pi^{-1}((Z\cap C_{i})\setminus\textrm{Sing}(C))$. Let $\pi(\tilde{P})=P$. We note that any $\tilde{f}\in H^{0}(\tilde{C}_{i},\mathcal{O}(d))$ that vanishes on $\pi^{-1}(\textrm{Sing}(C))\cup\pi^{-1}(Z)\setminus\\{\tilde{P}\\}$ is automatically the pullback of a section $f\in H^{0}(C,\mathcal{O}(d))$ vanishing at $Z\setminus\\{P\\}$, and hence $\tilde{f}$ is forced to vanish at $\tilde{P}$. Let $Y_{i}=\pi^{-1}(\textrm{Sing}(C))\cup\pi^{-1}(Z)$. We have established that the restriction map $H^{0}(\tilde{C}_{i},\mathcal{O}(d))\to H^{0}(Y_{i},\mathcal{O}(d))$ is not surjective. Let $|Y_{i}|$ denote the line bundle on $\tilde{C}_{i}$ associated with the divisor $Y_{i}$. 
Thus, $H^{1}(\tilde{C}_{i},\mathcal{O}(d)\otimes|Y_{i}|^{\vee})\neq 0.$ By Riemann-Roch this implies that $d\deg C_{i}-|\pi^{-1}(Z\cap C_{i})|-|\pi^{-1}(\textrm{Sing}(C)\cap C_{i})|\leq\textrm{genus}(C_{i}).$ (2) If we sum (2) over all $i$ we obtain $d\deg(C)-\sum_{i}(|\pi^{-1}(Z\cap C_{i})|+|\pi^{-1}(\textrm{Sing}(C)\cap C_{i})|)\leq\sum_{i}\textrm{genus}(C_{i}).$ Hence $\sum_{i}|\pi^{-1}(Z\cap C_{i})|\geq d\deg(C)-\sum_{i}\textrm{genus}(C_{i})-\sum_{i}|\pi^{-1}(\textrm{Sing}(C)\cap C_{i})|.$ (3) We note that $|\pi^{-1}(Z\cap C_{i})|-|Z\cap C_{i}|\leq\phi_{2}(e)$, with $\phi_{2}$ as defined above. Hence, $|Z\cap C_{i}|\geq d(\deg C_{i})-\phi_{2}(e)-\phi_{1}(e)$. Furthermore $\sum_{i}|Z\cap C_{i}|-|Z|\leq k|\textrm{Sing}(C)|\leq e\phi_{1}(e).$ Combining this with (3) we have $|Z|\geq d\deg(C)-\sum_{i}\textrm{genus}(C_{i})-\sum_{i}|\pi^{-1}(\textrm{Sing}(C)\cap C_{i})|-e\phi_{2}(e)\geq d\deg(C)-\sum_{i}\textrm{genus}(C_{i})-2e\phi_{2}(e).$ Now we note that the genus of each individual $C_{i}$ is bounded in terms of $e$ and $n$ by the Castelnuovo bound. Let us call this bound $\phi_{3}(e)$. This gives us $|Z|\geq d\deg(C)-e(\phi_{3}(e)+2\phi_{2}(e)).$ We let $f(e)=\max_{j\leq e}(1,j(\phi_{3}(j)+2\phi_{2}(j)))$. This establishes the result. ∎ We now prove Theorem 1.13. ###### Proof of Theorem 1.13. We will prove the theorem by induction on $n$. For $n=2$, the theorem follows from Proposition 2.1. Let us assume the theorem for Cayley-Bacharach sets in $\mathbb{P}^{n}$ for some $n\geq 2$. Let $Z\subseteq{\mathbb{P}^{n+1}}$ be a Cayley-Bacharach set for $\mathcal{O}(d)$ with $|Z|<ed-f(e)$. Let $P_{1},P_{2}$ be two distinct points not in $Z$ such that no line passing through two of the points of $Z$ contains $P_{1}$ or $P_{2}$. Let $H$ be a hyperplane in $\mathbb{P}^{n+1}$ not containing $P_{1}$ or $P_{2}$. We project $Z$ linearly from $P_{i}$ onto $H$ to obtain a set $Z_{i}$. The set $Z_{i}$ is Cayley-Bacharach by Proposition 2.2. 
By the induction hypothesis $Z_{i}$ lies on a curve $C_{i}$ of degree $e_{i}\leq e$. Therefore $Z$ lies on the intersection of $S_{1}$ and $S_{2}$, where $S_{i}$ is the cone over $C_{i}$ with apex $P_{i}$. By Proposition 2.3, $S_{1}\cap S_{2}=C\cup\Gamma$ where $\deg(C)+|\Gamma|\leq e_{1}e_{2}$. By Proposition 2.4, $Z\subseteq C$. Let us replace $C$ by the curve of minimal degree containing $Z$. By Proposition 2.5, $|Z|\geq\deg(C)d-f(\deg(C))$. But our assumption implies that $|Z|<ed-f(e)$. Since $d\gg e$, this implies $\deg(C)\leq e$. ∎ We now move on to the proof of Theorem 1.15. ###### Proof of Theorem 1.15. Let $Z$ be the given Cayley-Bacharach set. By Theorem 1.13, for $d\gg e$ we have $(e+1)d+1<(e+2)d-f(e+2)$, so $Z$ is contained in a curve $C$ of degree at most $e+1$. We remind the reader that an integral curve of degree $e$ is contained in an $e$ dimensional linear subspace, and that an integral curve of degree $e$ that is not contained in an $e-1$ dimensional subspace is a rational normal curve. As a result the only curves of degree $e+1$ that are not contained in a union of linear subspaces whose dimensions sum to $\leq e$ are unions of rational normal curves. Hence we may assume $Z$ is a Cayley-Bacharach set for $\mathcal{O}(d)$ that is contained in a union of $k$ rational normal curves $C_{i}$ with $\deg(C_{i})=r_{i}$ and $\sum r_{i}=e+1$, as the result follows in all other cases. Let $P_{i}$ denote the linear span of $C_{i}$. Suppose two rational normal curves $C_{i}$ and $C_{j}$ intersect at $m$ points. Then by the properties of rational normal curves one may conclude that their union $C_{i}\cup C_{j}$ is contained in a subspace of dimension $\deg C_{i}+\deg C_{j}+1-m$. So if $m>1$ for some pair $(i,j)$, we may replace $P_{i},P_{j}$ by the span of $P_{i}$ and $P_{j}$ to obtain a new sequence of linear subspaces whose dimensions sum to $\leq e$. Hence we may assume $|C_{i}\cap C_{j}|\leq 1$, as the result follows in all other cases. 
Furthermore we claim that the intersection graph of the $C_{i}$ is a forest. This is established as follows: if $|C_{i}\cap C_{j}|=|C_{j}\cap C_{k}|=|C_{i}\cap C_{k}|=1$, then $\dim\textrm{span}(P_{i}\cup P_{j}\cup P_{k})<\dim P_{i}+\dim P_{j}+\dim P_{k}$ unless $C_{i}\cap C_{j}\cap C_{k}\neq\varnothing$. A similar argument implies that there are no cycles in the intersection graph, and hence it is a forest. Let $d(C_{i})$ denote the degree of the vertex $C_{i}$ in the intersection graph. Let $S$ be the set of points in $C_{i}$ that are also in some $C_{j}$ for $j\neq i$, and let $C_{i}^{\circ}=C_{i}\setminus S$, i.e. the interior of $C_{i}$ in $C$. We claim that $|Z\cap C_{i}^{\circ}|+d(C_{i})\geq r_{i}d+2$. For suppose that $|Z\cap C_{i}^{\circ}|+d(C_{i})<r_{i}d+2$. Let $p\in Z\cap C_{i}^{\circ}$. We note that $|Z\cap C_{i}^{\circ}\setminus\\{p\\}|+|S|\leq|Z\cap C_{i}^{\circ}|-1+d(C_{i})\leq r_{i}d$, by assumption. However, given any set of $\leq r_{i}d$ points of $C_{i}$ not containing $p$, there is a section $f\in H^{0}(C_{i},\mathcal{O}(d))$ vanishing at those points and not at $p$ (here we use the fact that $C_{i}$ is isomorphic to $\mathbb{P}^{1}$). We may extend $f$ from $C_{i}$ to $C$ by defining it to be zero outside $C_{i}$. Then $f$ vanishes on $Z\setminus\\{p\\}$ but not at $p$. This contradicts the Cayley-Bacharach property of $Z$. Hence $|Z\cap C_{i}^{\circ}|+d(C_{i})\geq r_{i}d+2$. By summing this inequality over $i$, we obtain $|Z|+\sum_{i}d(C_{i})\geq\sum_{i}|Z\cap C_{i}^{\circ}|+\sum_{i}d(C_{i})\geq\sum_{i}r_{i}d+2k=(e+1)d+2k.$ By the above inequality, to establish that $|Z|\geq d(e+1)+2$ it suffices to establish that $2k-\sum_{i}d(C_{i})\geq 2$. We note that $\sum_{i}d(C_{i})$ is twice the number of edges in the intersection graph; as a result, $2k-\sum_{i}d(C_{i})$ is twice the Euler characteristic of the intersection graph. Since the intersection graph is a nonempty forest, its Euler characteristic is always $\geq 1$. 
∎ ## 3 Preparatory lemmas The purpose of this section is to state and prove Lemma 3.1, which will be used later on in the paper to prove our theorems on convergence of the motives of discriminant complements. It is essentially a corollary of Theorem 1.13. ###### Lemma 3.1. Let $d\gg e>0$. Let $f$ be as in Theorem 1.13. Let $Z\subseteq\mathbb{P}^{n}$ be a reduced zero dimensional subscheme. Suppose $2|Z|\leq ed-f(e)$ and $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))\neq 0$. Then there is a subset $Z^{\prime}\subseteq Z$ and a curve $C$ of degree $\leq e$ such that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z^{\prime}}(d))=h^{1}(N_{\mathbb{P}^{n}}(C),I^{2}_{Z^{\prime}}(d))$. Furthermore the curve of minimal degree satisfying the above conditions is unique. Before embarking on the proof of the lemma, we will need a few propositions. ###### Proposition 3.2. Assume that $\mathbb{F}$ is algebraically closed. Let $Z\subseteq\mathbb{P}^{n}$ be a reduced zero dimensional subscheme. Suppose that the restriction map $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(Z,\mathcal{O}(d))$ is not surjective. Then $Z$ has a subset $Z^{\prime}$ that is Cayley-Bacharach for $\mathcal{O}(d)$. ###### Proof. Let $Z^{\prime}\subseteq Z$ be such that $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(Z^{\prime},\mathcal{O}(d))$ is not surjective while for any proper subset $Z^{\prime\prime}\subsetneq Z^{\prime}$, the restriction map $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(Z^{\prime\prime},\mathcal{O}(d))$ is surjective. Such a $Z^{\prime}$ exists: take a minimal subset on which the restriction map fails to be surjective, noting that the restriction map is surjective for singleton sets. Let us now take any subset $Z^{\prime\prime}\subseteq Z^{\prime}$ such that $|Z^{\prime}\setminus Z^{\prime\prime}|=1$. 
Since $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(Z^{\prime},\mathcal{O}(d))$ is not surjective, $\dim\textrm{Im}(H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(Z^{\prime},\mathcal{O}(d)))<\dim H^{0}(Z^{\prime},\mathcal{O}(d))=|Z^{\prime}|.$ However $\textrm{Im}(H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(Z^{\prime},\mathcal{O}(d)))$ surjects onto $\textrm{Im}(H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(Z^{\prime\prime},\mathcal{O}(d)))=H^{0}(Z^{\prime\prime},\mathcal{O}(d)),$ which is of dimension $|Z^{\prime\prime}|=|Z^{\prime}|-1$. Since the dimension of the source is at most $|Z^{\prime}|-1$, the restriction map is forced to be an isomorphism, which establishes that $Z^{\prime}$ is Cayley-Bacharach by Proposition 1.11. ∎ ###### Proposition 3.3. Assume that $\mathbb{F}$ is algebraically closed. Let $Z\subseteq\mathbb{P}^{n}$ be a reduced zero dimensional subscheme. Suppose $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))\neq 0$. Then $Z$ has a subset $Z^{\prime}$ that is Cayley-Bacharach for $\mathcal{O}(\lceil d/2\rceil)$. ###### Proof. We observe that Theorem 1.1 of [6] implies that if $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))\neq 0$, then $h^{1}(\mathbb{P}^{n},I_{Z}(\lceil d/2\rceil))\neq 0$. Associated to the short exact sequence $0\to I_{Z}(\lceil d/2\rceil)\to\mathcal{O}(\lceil d/2\rceil)\to\mathcal{O}_{Z}(\lceil d/2\rceil)\to 0$ we have a long exact sequence in cohomology, part of which is as follows: $H^{0}(\mathbb{P}^{n},\mathcal{O}(\lceil d/2\rceil))\to H^{0}(Z,\mathcal{O}(\lceil d/2\rceil))\to H^{1}(\mathbb{P}^{n},I_{Z}(\lceil d/2\rceil))\to H^{1}(\mathbb{P}^{n},\mathcal{O}(\lceil d/2\rceil)).$ But $H^{1}(\mathbb{P}^{n},\mathcal{O}(\lceil d/2\rceil))=0$. As a result, the fact that $h^{1}(\mathbb{P}^{n},I_{Z}(\lceil d/2\rceil))\neq 0$ implies that the map $H^{0}(\mathbb{P}^{n},\mathcal{O}(\lceil d/2\rceil))\to H^{0}(Z,\mathcal{O}(\lceil d/2\rceil))$ cannot be surjective. 
We then apply Proposition 3.2 to conclude that $Z$ has a subset $Z^{\prime}$ that is Cayley-Bacharach for $\mathcal{O}(\lceil d/2\rceil)$. ∎ ###### Proposition 3.4. Let $e,n>0$. Let $Z\subseteq\mathbb{P}^{n}$ be a reduced set of points. Let $C$ be a curve of degree $e$. Let $d\gg e,n$. Suppose $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))\neq 0$. Let $Z^{\prime}=Z\cap C$. Suppose $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))>h^{1}(\mathbb{P}^{n},I^{2}_{Z^{\prime}}(d)).$ Let $Z^{\prime\prime}=Z\setminus Z^{\prime}$. Then $H^{0}(\mathbb{P}^{n},\mathcal{O}(d-2e))\to H^{0}(N(Z^{\prime\prime}),\mathcal{O}(d-2e))$ is not surjective. ###### Proof. Let $D\subseteq\mathbb{P}^{n}$ be a divisor containing $C$ and not containing any point of $Z^{\prime\prime}$. Let $|D|$ denote the degree of $D$. Suppose for the sake of contradiction that $H^{0}(\mathbb{P}^{n},\mathcal{O}(d-2|D|))$ surjects onto $H^{0}(N(Z^{\prime\prime}),\mathcal{O}(d-2|D|))$. Let $f\in H^{0}(\mathbb{P}^{n},\mathcal{O}(|D|))$ be a section whose zero locus is $D$. We have a subspace $f^{2}\otimes H^{0}(\mathbb{P}^{n},\mathcal{O}(d-2|D|))\subseteq H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$ that surjects onto $H^{0}(N(Z^{\prime\prime}),\mathcal{O}(d))$ and maps to $0$ in $H^{0}(N(Z^{\prime}),\mathcal{O}(d))$. We note that $H^{0}(N(Z),\mathcal{O}(d))\cong H^{0}(N(Z^{\prime}),\mathcal{O}(d))\oplus H^{0}(N(Z^{\prime\prime}),\mathcal{O}(d)).$ Some elementary linear algebra then implies that the codimension of the image of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$ in $H^{0}(N(Z),\mathcal{O}(d))$ is equal to the codimension of the image of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$ in $H^{0}(N(Z^{\prime}),\mathcal{O}(d))$, since the map surjects onto the other factor. This contradicts the assumption that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))>h^{1}(\mathbb{P}^{n},I^{2}_{Z^{\prime}}(d)).$ ∎ We now proceed to prove Lemma 3.1. ###### Proof of Lemma 3.1. Let us first prove the lemma under the assumption that our base field $\mathbb{F}$ is algebraically closed. 
We begin by applying Proposition 3.3 to conclude that $Z$ contains a subset $Z_{0}$ that is Cayley-Bacharach for $\mathcal{O}(\lceil d/2\rceil)$. We then apply Theorem 1.13 to conclude that there is a curve $C$ of degree $\leq e$ that contains $Z_{0}$. Let $Z^{\prime}=Z\cap C$. If $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z^{\prime}}(d))$ we are done and $C$ is our required curve. If not, we proceed as follows: 1. 1. By Proposition 3.4, $Z^{\prime\prime}=Z\setminus Z^{\prime}$ must satisfy $h^{1}(\mathbb{P}^{n},I^{2}_{Z^{\prime\prime}}(d-2e))>0$. Hence it must contain a Cayley-Bacharach subset $Z^{\prime\prime\prime}$ for $\mathcal{O}(\lceil d/2\rceil-e)$, which is contained in a curve $C^{\prime}$. 2. 2. We replace our original curve $C$ by $C\cup C^{\prime}$ and replace $Z^{\prime}$ by $Z\cap(C\cup C^{\prime})$. 3. 3. We repeat steps 1 and 2 until $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z^{\prime}}(d))$. Since $|Z^{\prime}|$ increases every time we perform step $2$ and $|Z^{\prime}|\leq|Z|$, this process terminates. We then have our required curve $C$ and subset $Z^{\prime}$. If our base field $\mathbb{F}$ is not algebraically closed, we may pass to the algebraic closure $\bar{\mathbb{F}}$ and find a curve $C$ of degree $\leq e$ defined over $\bar{\mathbb{F}}$ as above. We will prove that $C$ is defined over $\mathbb{F}$, which will complete the proof of the lemma. Let $g\in Gal(\bar{\mathbb{F}}/\mathbb{F})$. We claim that $g(C)=C$. If $g(C)\neq C$, there is some irreducible component $C_{i}\subseteq C$ such that $g(C_{i})\neq C_{i}$. But then $|Z\cap C_{i}|\leq|g(C_{i})\cap C_{i}|\leq\deg(C_{i})^{2}\leq e^{2}.$ However, $|Z\cap C_{i}|\geq e^{2}$ (see the proof of Proposition 2.4). Hence for $d\gg e$, $g(C_{i})=C_{i}$, and hence $C$ is defined over $\mathbb{F}$. This completes the proof of existence. Let us now turn to proving uniqueness. It suffices to prove uniqueness over an algebraically closed field. 
Suppose that $C_{1},C_{2}$ are curves of the same degree, with $\deg(C_{i})\leq e\ll d$, such that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z_{i}}(d))=l$ where $Z_{i}=Z\cap C_{i}$ and no curve of strictly smaller degree satisfies the conditions of the lemma. Let $W=Z_{1}\cap Z_{2}$. Suppose $h^{1}(\mathbb{P}^{n},I^{2}_{W}(d))<h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=l$. We will show that this implies that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))>h^{1}(\mathbb{P}^{n},I^{2}_{Z_{i}}(d))$, which contradicts our assumption. Let $Z_{i}^{\prime}=Z_{i}\setminus W$. Let $U\subseteq H^{0}(Z_{1}^{\prime}\cup Z_{2}^{\prime}\cup W,\mathcal{O}(d))$ be the image of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$, which is of codimension $l$. Let $\pi_{i}:H^{0}(Z,\mathcal{O}(d))\to H^{0}(Z_{i},\mathcal{O}(d))$ be the restriction map. Let $\pi:H^{0}(Z,\mathcal{O}(d))\to H^{0}(W,\mathcal{O}(d))$ be the restriction map. By our assumption, $\pi_{i}(U)$ is of codimension $l$ in $H^{0}(Z_{i},\mathcal{O}(d))$. Therefore $U\subseteq\pi_{i}^{-1}(\pi_{i}(U))$ and both are of codimension $l$ in $H^{0}(Z,\mathcal{O}(d))$. Hence we have $U=\pi_{1}^{-1}(\pi_{1}(U))=\pi_{2}^{-1}(\pi_{2}(U)).$ The only way the above equation can hold is if $U=\pi^{-1}(\pi(U))$, and as a result $\pi(U)$ is also of codimension $l$, which contradicts $h^{1}(\mathbb{P}^{n},I^{2}_{W}(d))<h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=l$. Hence $h^{1}(\mathbb{P}^{n},I^{2}_{W}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=l>0$. However, in this case we note that $W\subseteq C_{1}\cap C_{2}$. Let $C_{1}\cap C_{2}=C\cup\Gamma$ where $C$ is a reduced curve and $\Gamma$ is a finite set of points. Either $h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$, in which case $C$ is a curve of smaller degree than $C_{1}$ or $C_{2}$ satisfying the conditions of the lemma, contradicting our minimality assumption.
Otherwise $h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))<h^{1}(\mathbb{P}^{n},I^{2}_{W}(d))$. However, this implies that $W\setminus(Z\cap C)\subseteq\Gamma$ contains a Cayley-Bacharach set for $\mathcal{O}(d-2\deg C)$ by the combination of Proposition 3.4 and Proposition 3.3. But $|\Gamma|\leq e^{2}$ and a Cayley-Bacharach set for $\mathcal{O}(d-2\deg C)$ must contain at least $d-2e$ points. Since $d\gg e$, we are done. ∎ ## 4 Proof of Theorem 1.6 In this section we prove Theorem 1.6. We will begin by reviewing some material in Section 2 of [16]. ###### Definition 4.1. Let $S$ be a nonempty finite set. An ordered partition $\lambda$ of the set $S$ is a tuple $(A_{1},\dots,A_{m})$, where the $A_{i}$ are nonempty disjoint subsets of $S$ such that $A_{1}\cup\dots\cup A_{m}=S$. The empty set $\varnothing$ has a single ordered partition called the empty partition and also denoted $\varnothing$. We will denote the ordered partition of $\\{1,\dots,n\\}$ with only one set by $[n]$. We can associate two numbers to an ordered partition $\lambda$. If $\lambda=(A_{1},A_{2},\dots,A_{m})$ is an ordered partition of a nonempty finite set $S$, we define $|\lambda|=|S|$ and $||\lambda||=m$. For the empty partition $\varnothing$, we define $||\varnothing||=|\varnothing|=0$. In what follows we will have to consider sums over ordered partitions of a given size. By $\sum_{|\lambda|=k}f(\lambda)$ we mean that we sum the values of $f(\lambda)$ as $\lambda$ ranges over all ordered partitions of $\\{1,\dots,k\\}$. ###### Definition 4.2. Let $X$ be a quasiprojective variety over $\mathbb{F}$. Let $S$ be a nonempty finite set. Let $\lambda=(A_{1},A_{2},\dots,A_{m})$ be an ordered partition of $S$. We define $\textrm{Sym}^{\lambda}(X):=\prod_{i=1}^{m}\textrm{Sym}^{|A_{i}|}(X)$.
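For concreteness, here is a small worked instance of these definitions (an illustration added here, not taken from [16]):

```latex
% Take S = \{1,2,3\} and \lambda = (\{1,3\}, \{2\}). Then
%   |\lambda| = 3,   ||\lambda|| = 2,
%   Sym^{\lambda}(X) = Sym^{2}(X) \times Sym^{1}(X).
% The set \{1,2\} has exactly three ordered partitions,
%   (\{1,2\}),  (\{1\},\{2\}),  (\{2\},\{1\}),
% with ||\cdot|| equal to 1, 2, 2 respectively, so for example
%   \sum_{|\lambda|=2} (-1)^{||\lambda||} f(\lambda)
%     = -f\big((\{1,2\})\big) + f\big((\{1\},\{2\})\big) + f\big((\{2\},\{1\})\big).
```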
There is a quotient map $\pi:X^{|S|}\to\textrm{Sym}^{\lambda}(X)$. We define the $\lambda$ configuration space of $X$, $w_{\lambda}(X)\subseteq\textrm{Sym}^{\lambda}(X)$, to be $\pi(X^{|S|}\setminus\Delta)$, where $\Delta$ is the big diagonal. We define $\textrm{Sym}^{\varnothing}(X)=w_{\varnothing}(X)=\mathrm{Spec}\,\mathbb{F}$. If $\lambda$ is an ordered partition and $X$ is a quasiprojective variety, any $Z\in w_{\lambda}(X)$ defines a reduced zero dimensional subscheme of $X$ which we will denote $\textrm{supp}(Z)$. For any subvariety $Y\subseteq X$, we say $Z\subseteq Y$ if $\textrm{supp}(Z)\subseteq Y$. There is a forgetful covering map $w_{\lambda}(X)\to w_{[|\lambda|]}(X)$ defined by $Z\mapsto\textrm{supp}(Z).$ ###### Definition 4.3. Let $X$ be a quasiprojective variety over $\mathbb{F}$. Let $\mathcal{L}$ be a line bundle on $X$. We define $W_{\geq\lambda}(X)=\\{(f,Z)\in H^{0}(X,\mathcal{L})\times w_{\lambda}(X)|Z\subseteq\textrm{Sing}(f)\\}.$ For $N\geq 0$, we define $W_{\lambda,\geq N}(X):=\\{(f,Z)\in W_{\geq\lambda}(X)||\textrm{Sing}(f)|\geq N\\}.$ We define $W_{\lambda}(X):=\\{(f,Z)\in W_{\geq\lambda}(X)||\lambda|=|\textrm{Sing}(f)|\\}.$ Our notation does not involve the line bundle $\mathcal{L}$. We do not believe this will cause confusion as the line bundle $\mathcal{L}$ will be fixed throughout. We note that $W_{\varnothing}(X)=U(X,\mathcal{L})$, the discriminant complement. ###### Definition 4.4. Let $S_{1}\subseteq S_{2}$ be two nonempty finite sets. Let $\lambda_{i}=(A_{1}^{i},A_{2}^{i},\dots,A^{i}_{m_{i}})$ be ordered partitions of $S_{i}$. We say $\lambda_{1}$ is a subpartition of $\lambda_{2}$, denoted $\lambda_{1}\subseteq\lambda_{2}$, if there is an increasing function $f:\\{1,\dots,m_{1}\\}\to\\{1,\dots,m_{2}\\}$ such that $A_{j}^{1}\subseteq A_{f(j)}^{2}$ for all $j\in\\{1,\dots,m_{1}\\}$. If this is the case we may form an ordered partition of $S_{3}=S_{2}\setminus S_{1}$ as follows.
We define $B_{i}=A_{i}^{2}\setminus A_{j}^{1}$ if $i=f(j)$ for some $j\in\\{1,\dots,m_{1}\\}$; otherwise we define $B_{i}=A_{i}^{2}$. We define $\lambda_{3}$ to be the partition corresponding to the tuple $(B_{1},\dots,B_{m_{2}})$ after deleting empty sets. In what follows we will need to sum over all subpartitions of a given partition $\lambda$. By $\sum_{\alpha\subseteq\lambda}f(\alpha)$ we mean the sum of $f(\alpha)$ where $\alpha$ ranges over all subpartitions of $\lambda$. ###### Definition 4.5. We assume $X$ is a quasiprojective variety and $\mathcal{L}$ is a very ample line bundle on it. We define the _forced positive dimensional singularities_ subset of $w_{\lambda}(X)$, denoted $w_{\lambda}^{p}(X)$, by $w_{\lambda}^{p}(X)=\\{Z\in w_{\lambda}(X)|\textrm{if }f\in H^{0}(X,\mathcal{L}),Z\subseteq\textrm{Sing}(f),\textrm{ then }\dim\textrm{Sing}(f)\geq 1\\}.$ Let $w_{\lambda}^{n}(X)=w_{\lambda}(X)\setminus w_{\lambda}^{p}(X).$ Let $W_{\geq\lambda}^{p}(X)=\\{(f,Z)\in W_{\geq\lambda}(X)|Z\in w_{\lambda}^{p}(X)\\}.$ Let $W_{\geq\lambda}^{n}(X)=W_{\geq\lambda}(X)\setminus W_{\geq\lambda}^{p}(X).$ Let $W_{\lambda,\geq N}^{p}(X)=\\{(f,Z)\in W_{\lambda,\geq N}(X)|Z\in w_{\lambda}^{p}(X)\\}.$ Let $W_{\lambda,\geq N}^{n}(X)=W_{\lambda,\geq N}(X)\setminus W_{\lambda,\geq N}^{p}(X).$ ###### Proposition 4.6. Let $X,\lambda$ be as above. The set $w_{\lambda}^{p}(X)$ is a constructible subset of $w_{\lambda}(X).$ ###### Proof. Let $j$ be a positive integer. Let $w_{\lambda,j}(X)=\\{(Z,f_{1},\dots,f_{j})|Z\in w_{\lambda}(X),f_{i}\in H^{0}(X,\mathcal{L}),Z\subseteq\textrm{Sing}(f_{i})\\}.$ There is a natural forgetful map $w_{\lambda,j}(X)\to w_{\lambda}(X)$. Let $w_{\lambda,j}^{p}(X)=\\{(Z,f_{1},\dots,f_{j})|f_{i}\textrm{ form a basis for }H^{0}(X,I^{2}_{Z}\otimes\mathcal{L})\\}.$ Clearly $w_{\lambda}^{p}(X)$ is the union of the images of the $w_{\lambda,j}^{p}(X)$. Hence it suffices to establish that the $w_{\lambda,j}^{p}(X)$ are constructible.
Let $E_{\lambda,j}(X)=\\{(P,Z,f_{1},\dots,f_{j})\in X\times w_{\lambda,j}(X)|P\in\cap_{i}\textrm{Sing}(f_{i})\\}.$ There is an obvious forgetful map $\pi:E_{\lambda,j}(X)\to w_{\lambda,j}(X)$. Consider a flattening stratification $S_{1},\dots,S_{k}$ of $w_{\lambda,j}(X)$ with respect to $\pi.$ We note that $(Z,f_{1},\dots,f_{j})\in w_{\lambda,j}^{p}(X)$ iff the $f_{i}$ form a basis of $H^{0}(X,I^{2}_{Z}\otimes\mathcal{L})$ and $\pi^{-1}((Z,f_{1},\dots,f_{j}))$ is positive dimensional. However, the dimension of $\pi^{-1}((Z,f_{1},\dots,f_{j}))$ is constant on each stratum. Hence $w_{\lambda,j}^{p}(X)$ is just the union of a collection of strata, intersected with the set of $(Z,f_{1},\dots,f_{j})$ where the $f_{i}$ form a basis of $H^{0}(X,I^{2}_{Z}\otimes\mathcal{L})$. However, the latter is clearly a constructible set, as is each stratum. Hence $w_{\lambda,j}^{p}(X)$ is a union of intersections of constructible sets and is hence constructible. ∎ ###### Remark 4. The p and n in the notation $w_{\lambda}^{p},w_{\lambda}^{n}$ are intended to mean forced positive dimensional singularities and no forced positive dimensional singularities, respectively. These variants of the $w_{\lambda}$ are largely used in the final section of this paper and are necessary because we will often want to disregard elements of $w_{\lambda}^{p}$ for various technical reasons. The necessity of using the $w_{\lambda}^{n}$ is almost exclusive to Proposition 6.16 and the reader is advised to not pay too much attention to it at a first reading. ###### Proposition 4.7. Let $N$ be a positive integer. The following equations are true, where both sides are treated as elements of $\mathcal{M}_{\mathbb{L}}$: $W_{\varnothing}=\sum_{|\lambda|\leq N}(-1)^{||\lambda||}W_{\geq\lambda}-\sum_{|\lambda|\leq N}(-1)^{||\lambda||}W_{\lambda,\geq N+1}.$ $W_{\varnothing}=\sum_{|\lambda|\leq N}(-1)^{||\lambda||}W^{n}_{\geq\lambda}-\sum_{|\lambda|\leq N}(-1)^{||\lambda||}W^{n}_{\lambda,\geq N+1}.$ ###### Proof.
The first equation follows from the proof of Proposition 3.7 in [16]. The second follows immediately from the first and the fact that $W_{\geq\lambda}-W_{\lambda,\geq N+1}=W^{n}_{\geq\lambda}-W^{n}_{\lambda,\geq N+1},$ which holds because any $(f,Z)\in W^{p}_{\geq\lambda}$ has $\dim\textrm{Sing}(f)\geq 1$ and hence also lies in $W^{p}_{\lambda,\geq N+1}$. ∎ ###### Proposition 4.8. Let $Z$ be a constructible subset of $w_{[n]}(X)$. Let $\pi_{\lambda}:w_{\lambda}(X)\to w_{[|\lambda|]}(X)$ be the covering map. Let $\phi:\mathcal{M}\to\mathcal{M}_{Hdg}$ denote the specialisation homomorphism. Then $\phi(\sum_{|\lambda|=n}(-1)^{||\lambda||}(\pi_{\lambda}^{-1}(Z)))=H^{*}_{c}(Z,\pm\mathbb{Q})$, where $\pm\mathbb{Q}$ is the restriction of the alternating sheaf on $w_{[n]}(X)$. Also, for $k>3$, $H^{*}_{c}(w_{k}(\mathbb{P}^{1}),\pm\mathbb{Q})=H^{*}_{c}(w_{k}(\mathbb{A}^{1}),\pm\mathbb{Q})=0.$ ###### Proof. Let $S_{\lambda}$ be the local system $(\pi_{\lambda})_{*}\mathbb{Q}$. Then by [3], $\sum_{|\lambda|=n}(-1)^{||\lambda||}S_{\lambda}=\pm\mathbb{Q}$ as a virtual representation. Thus, $\sum_{|\lambda|=n}(-1)^{||\lambda||}H^{*}_{c}(Z,S_{\lambda})=H^{*}_{c}(Z,\pm\mathbb{Q})$. However $H^{*}_{c}(Z,S_{\lambda})=H^{*}_{c}(\pi_{\lambda}^{-1}(Z),\mathbb{Q})$. This completes the proof. The equality $H^{*}_{c}(w_{k}(\mathbb{P}^{1}),\pm\mathbb{Q})=H^{*}_{c}(w_{k}(\mathbb{A}^{1}),\pm\mathbb{Q})=0$ is established in Lemma 2 of [17]. ∎ We define $w_{\lambda}^{l}(X)=\\{Z\in w_{\lambda}(X)|h^{0}(X,I^{2}_{Z}(d))+h^{0}(N(Z),\mathcal{O}(d))=l+h^{0}(X,\mathcal{O}(d))\\}.$ We note that the $w_{\lambda}^{l}(X)$ are constructible; this follows immediately from the fact that $\\{Z\in w_{\lambda}(X)|h^{0}(X,I^{2}_{Z}(d))+h^{0}(N(Z),\mathcal{O}(d))\geq l+h^{0}(X,\mathcal{O}(d))\\}$ is closed, which is a consequence of semicontinuity of cohomology. ###### Proposition 4.9. Let $X$ be a smooth projective variety of dimension $n$. Let $\lambda$ be an ordered partition. Then, $[W_{\geq\lambda}(X)]=H^{0}(X,\mathcal{L})\sum_{l=0}^{\infty}[w_{\lambda}^{l}(X)]\mathbb{L}^{l-|\lambda|(n+1)}.$ ###### Proof.
Let $\pi:W_{\geq\lambda}(X)\to w_{\lambda}(X)$ be the projection map. We note that $w_{\lambda}(X)$ is the disjoint union of the $w_{\lambda}^{l}(X)$. Therefore $W_{\geq\lambda}(X)$ is the disjoint union of the $\pi^{-1}(w_{\lambda}^{l}(X)),$ and hence $[W_{\geq\lambda}(X)]=\sum_{l=0}^{\infty}[\pi^{-1}(w_{\lambda}^{l}(X))].$ The map $\pi:\pi^{-1}(w_{\lambda}^{l}(X))\to w_{\lambda}^{l}(X)$ is a vector bundle of rank $h^{0}(X,\mathcal{L})-|\lambda|(n+1)+l$ by Grauert's theorem (see, for instance, [9, Cor. 12.9, p. 288]). Hence $[\pi^{-1}(w_{\lambda}^{l}(X))]=H^{0}(X,\mathcal{L})[w_{\lambda}^{l}(X)]\mathbb{L}^{l-|\lambda|(n+1)}.$ ∎ ###### Proposition 4.10. Let $X$ be a smooth projective variety of dimension $n$. Let $C\subseteq X$ be a reduced curve of degree $e$. Let $Z\subseteq C$ be a reduced set of points. Let $d\gg\deg C$. Then $h^{1}(X,I^{2}_{Z}(d))\leq\max(2|Z|-ed,0)+(n-1)\max(|Z|-ed,0)-k.$ ###### Proof. This follows immediately from Corollary 1.1.25 in [10]. ∎ Let $\psi_{e,d}(l)$ denote the minimum $m$ such that $\max(2m-ed,0)+(n-1)\max(m-ed,0)-k\geq l.$ ###### Proposition 4.11. Let $X$ be a smooth projective variety of dimension $n\geq 2$. Let $l\geq 1$. Let $d\gg e>0$. Let $k$ be as in Proposition 4.10. Let $\lambda$ be an ordered partition such that $e\geq|\lambda|/d$. Then $\dim w_{\lambda}^{l}(X)\leq\max_{e>0}(|\lambda|(n)-(n-1)\psi_{e,d}(l))+K$ where $K=\max_{r\leq e}\dim Chow_{r}(X)$, where $Chow_{r}(X)$ is the Chow variety of reduced curves of degree $r$ in $X$. Let $r_{0}$ denote the minimal degree of any curve in $X$. Let $(nr_{0}+1)d>N>r_{0}nd$. Then $\dim W_{\lambda,\geq N}^{l}(X)\leq h^{0}(X,\mathcal{O}(d))-r_{0}nd+K.$ ###### Proof. By Lemma 3.1 any $Z\in w_{\lambda}^{l}(X)$ must be of the form $Z^{\prime}\cup Z^{\prime\prime}$ where $Z^{\prime}=Z\cap C$ for some curve $C$ of degree $e^{\prime}\leq e$. Furthermore $l=h^{1}(X,I^{2}_{Z}(d))=h^{1}(X,I^{2}_{Z^{\prime}}(d))$.
By Proposition 4.10, this implies that $|Z^{\prime}|\geq\psi_{e,d}(l)$. As a result, $\dim w_{\lambda}^{l}(X)\leq\max_{|\lambda|/d>e>0}(|\lambda|(n)-(n-1)\psi_{e,d}(l))+\max_{|\lambda|/d>e>0}\dim Chow(e).$ ∎ We now prove Theorem 1.6. The essential idea is as follows: we use the techniques of [16] to argue that the difference $\frac{U(X,\mathcal{L})}{H^{0}(X,\mathcal{L})}-Z_{X}^{-1}(\dim X+1)$ is a certain sum of contributions corresponding to configurations of points contained in $w_{\lambda}^{l}(X)$ where $l\geq 1$. We then use Proposition 4.11 to bound this contribution. ###### Proof of Theorem 1.6. By Proposition 4.7, $U(X,\mathcal{L})=W_{\varnothing}(X)=\sum_{|\lambda|\leq N}(-1)^{||\lambda||}(W_{\geq\lambda}(X)-W_{\lambda,\geq N+1}).$ By Proposition 4.11 the terms $W_{\lambda,\geq N+1}=0$ up to dimension $-m(d)(\dim X)+C$. Also, for $||\lambda||\leq N,l\geq 1$, $w_{\lambda}^{l}$ lies in $F$. Hence we may apply Proposition 4.9 and conclude that $\frac{W_{\geq\lambda}}{H^{0}(X,\mathcal{L})}=w_{\lambda}(X)\mathbb{L}^{-|\lambda|(\dim X+1)}$ up to dimension $-m(d)(\dim X)+C$. This implies that $\frac{W_{\varnothing}(X)}{H^{0}(X,\mathcal{L})}=\sum_{\lambda}(-1)^{||\lambda||}w_{\lambda}(X)\mathbb{L}^{-|\lambda|(\dim X+1)}$, up to dimension $-m(d)(\dim X)+C$. But by Proposition 3.7 of [16] the right hand side of this last equation equals $Z_{X}^{-1}(\dim X+1)$. ∎ ## 5 Proof of Theorem 1.7 In this section we will prove Theorem 1.7. The proof of Theorem 1.7 is similar to that of Theorem 1.6 in that we use Proposition 4.7 and provide bounds on the dimensions of $w^{l}_{\lambda}(X)$. However our assumptions are different. This section is, strictly speaking, independent of Sections 2 and 3, and we do not use any results on Cayley-Bacharach sets. We define the projective hull of a subscheme $Y\subseteq\mathbb{P}^{n}$, denoted $phull(Y)$, to be the intersection of all projective subspaces of $\mathbb{P}^{n}$ containing $Y$. ###### Proposition 5.1. Let $n,m\geq 0$. Let $Z\subseteq\mathbb{P}^{n}$ be a reduced zero dimensional subscheme. Suppose $|Z|\geq m+2$.
Suppose that $phull(Z)$ in $\mathbb{P}^{n}$ is $m$ dimensional. Let $d\geq 3$. Then the kernel of the restriction map $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(N(Z),\mathcal{O}(d))$ is of codimension $\geq(m+1)(n+1)+n-1$. ###### Proof. We claim that it suffices to prove the Proposition in the special case when $Z=\\{P_{0},\dots,P_{m},P\\}$ where the projective hull of the $P_{i}$ is $m$ dimensional, and $P$ is in $phull(\\{P_{0},\dots,P_{m}\\}).$ To see this, first note that if $Z^{\prime}\subseteq Z$ the kernel of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(N(Z),\mathcal{O}(d))$ is contained in the kernel of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(N(Z^{\prime}),\mathcal{O}(d)).$ As a result, to establish the Proposition for $Z$, it suffices to establish it for any subset $Z^{\prime}\subseteq Z$. We then note that given any $Z$ whose projective hull is $m$ dimensional, we can find a $Z_{ind}\subseteq Z$ such that $|Z_{ind}|=m+1$ and $phull(Z_{ind})=phull(Z)$. We now consider any point $P\in Z\setminus Z_{ind}$ and note that $Z^{\prime}=Z_{ind}\cup\\{P\\}$ is a subset of $Z$ satisfying the conditions of the claim and that to prove the Proposition for $Z$ it suffices to prove it for $Z^{\prime}$. In light of the above paragraph, we now assume $Z=\\{P_{0},\dots,P_{m},P\\},$ where $phull(P_{0},\dots,P_{m})$ is $m$ dimensional and contains $P$. We may further assume that $P_{i}=[0:\dots:0:1:0:\dots:0],$ where the $1$ is in the $(i+1)$st position, and $P=[x_{0}:\dots:x_{m}:0:\dots:0].$ This is because there is an element $g\in PGL_{n+1}(\mathbb{F})$ satisfying $g(P_{i})=[0:\dots:0:1:0:\dots:0].$ We now claim that it suffices to prove the proposition in the case where $P=[1:x:0:\dots:0]$. To see this, note that we may assume without loss of generality that if $P=[x_{0}:x_{1}:\dots:x_{m}:0:\dots:0]$, then $x_{0}\neq 0$ and $x_{1}\neq 0$.
For $t\in\mathbb{A}^{1}$, let $Z_{t}$ be the reduced zero dimensional subscheme $\\{P_{0},\dots,P_{m},P_{t}\\}$, where $P_{t}=[x_{0}:x_{1}:tx_{2}:\dots:tx_{m}:0:\dots:0].$ The $Z_{t}$ form a family of subschemes over $\mathbb{A}^{1},$ with the original subscheme $Z$ being equal to $Z_{1}$. Furthermore, given $t\neq 0$, there is a $g\in PGL_{n+1}(\mathbb{F})$ such that $g(Z_{t})=Z_{1}$. This has the consequence that the dimension of the kernel of the restriction map $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(N(Z_{t}),\mathcal{O}(d))$ is independent of $t$ as long as $t\neq 0$. The kernel of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(N(Z_{t}),\mathcal{O}(d))$ is isomorphic to $H^{0}(\mathbb{P}^{n},I^{2}_{Z_{t}}(d)).$ Semicontinuity now implies that $h^{0}(\mathbb{P}^{n},I^{2}_{Z_{0}}(d))\geq h^{0}(\mathbb{P}^{n},I^{2}_{Z_{1}}(d)).$ As a result it suffices to prove the proposition for $Z_{0}$. However $Z_{0}$ is of the required form. To complete the proof, we directly compute the bound on the codimension of the kernel when $Z$ is of the form $\\{P_{0},\dots,P_{m},P\\}$ where $P_{i}=[0:\dots:0:1:0:\dots:0],$ and $P=[1:x:0:\dots:0]$. Let $f$ be a section of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$; we may think of $f$ as a homogeneous polynomial. Then requiring $f$ to vanish to second order at $P_{i}$ implies that the coefficients of $X_{i}^{d}$ and $X_{j}X_{i}^{d-1}$ (for $j\neq i$) are zero. Let $a_{k,r}$ be the coefficient of $X_{k}X_{1}^{r}X_{0}^{d-1-r}$. Requiring that the partial derivatives of $f$ vanish at $P$ is equivalent to requiring that for $k\in\\{2,\dots,n\\}$, $\sum_{r=0}^{d-1}x^{r}a_{k,r}=0.$ We note that the above linear relations are linearly independent and hence the kernel of $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(N(Z),\mathcal{O}(d))$ is of codimension at least $(m+1)(n+1)+n-1$. ∎ The above proof gives us a little more information than the statement of Proposition 5.1. We record this information in the following Proposition. ###### Proposition 5.2.
Let $m,n\geq 1$. Let $d\geq 3$. Let $P_{i}=[0:\dots:0:1:0:\dots:0]\in\mathbb{P}^{n}$, where the $1$ is in the $(i+1)$st position. Let $P\in phull\\{P_{0},\dots,P_{m}\\}$. Let $V_{i}^{j}$ be any basis of $T_{P_{i}}(\mathbb{P}^{n})$. Let $V^{j}$ be a basis of $T_{P}\mathbb{P}^{n}$. Then requiring a polynomial $f\in H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$ to satisfy $f(P_{i})=0,\frac{\partial}{\partial V_{i}^{j}}(f)(P_{i})=0,\frac{\partial}{\partial V^{j}}(f)(P)=0,$ imposes $(m+1)(n+1)+n-1$ linearly independent conditions on $f$. ###### Proof. We omit the proof as this is already present in the proof of Proposition 5.1. ∎ ###### Proposition 5.3. Let $n,m\geq 0$. Let $Z\subseteq X\subseteq\mathbb{P}^{n}$ be a reduced set of points, $m+1<|Z|<\dim X$. Let $d\geq 3$. Let $\dim phull(Z)=m$. Then the kernel of the restriction map $H^{0}(X,\mathcal{O}(d))\to H^{0}(N(Z),\mathcal{O}(d))$ is of codimension $\geq(m+1)(\dim X+1)+\dim X-1$. ###### Proof. This proof is similar to that of Proposition 5.1. It suffices to prove the proposition in the case where $Z=\\{P_{0},\dots,P_{m},P\\}$ where $P_{i}=[0:\dots:0:1:\dots:0]$, where there is a $1$ in the $(i+1)$st position. This is by an argument identical to that in Proposition 5.1. Let $V_{i}^{j}$ be a basis of $T_{P_{i}}X$. Let $v^{j}$ be a basis of $T_{P}X$. It suffices to prove that requiring a polynomial $f$ to satisfy $f(P_{i})=0$ for all $i$, $f(P)=0$, $\partial_{V_{i}^{j}}(f)(P_{i})=0$, and $\partial_{v^{j}}(f)(P)=0$ imposes linearly independent conditions on $f$. However, this follows from Proposition 5.2, since in that proof we show that a larger set of linear equations imposes linearly independent conditions on $f$. ∎ ###### Proposition 5.4. Let $n\geq 0$. Let $X$ be a smooth projective variety of dimension $n$. Let $\lambda$ be an ordered partition, $|\lambda|\leq\sqrt{n}/3$. Let $l\geq 1$.
Then $\dim w_{\lambda}^{l}(X)+l-(n+1)|\lambda|\leq-\sqrt{n}/3$ and $\dim W_{\lambda,\geq N}^{l}(X)\leq h^{0}(X,\mathcal{O}(d))-\sqrt{n}/3.$ In addition we have $\dim W_{\lambda,\geq N}^{0}(X)\leq h^{0}(X,\mathcal{O}(d))-\sqrt{n}/3.$ ###### Proof. Let $l\geq 1$. Let $Z\in w_{\lambda}^{l}(X)$. Then by definition, $h^{0}(X,I^{2}_{Z}(d))=h^{0}(X,\mathcal{O}(d))-|Z|(\dim X+1)+l$. Let $\dim phull(Z)=m$. Then by Proposition 5.3, $|Z|(\dim X+1)-l\geq(m+1)(\dim X+1)+\dim X-1$. But a simple dimension count shows that the space of $|\lambda|$ points in $X$ contained in an $m$ dimensional linear subspace is of dimension $\leq(m+1)\dim X+(|\lambda|-m-1)m$. This implies that $\dim w_{\lambda}^{l}(X)\leq\max_{m}\left((m+1)\dim X+(|\lambda|-m-1)m\right).$ We will now bound the quantity $(m+1)\dim X+(|\lambda|-m-1)m+l-(\dim X+1)|\lambda|$. By the above we have $(m+1)\dim X+(|\lambda|-m-1)m+l-(\dim X+1)|\lambda|$ $\leq(\dim X+1)|\lambda|-l-\dim X+1+(|\lambda|-m-1)m+l-(\dim X+1)|\lambda|$ $=-\dim X+1+(|\lambda|-m-1)m\leq-\dim X+1+|\lambda|^{2}\leq-\dim X+1+\frac{\dim X}{9}$ $\leq-\frac{8}{9}\dim X+1\leq-\frac{\sqrt{\dim X}}{3}.$ As a result $\dim w_{\lambda}^{l}(X)+l-(n+1)|\lambda|\leq-\sqrt{n}/3$. The inequality $\dim W_{\lambda,\geq N+1}^{l}(X)\leq h^{0}(X,\mathcal{O}(d))-\sqrt{n}/3$ follows similarly to the above for $l\geq 1$ and is immediate if $l=0$. This implies the result. ∎ We note here that the proof of the above Proposition gives us a better bound on the dimension of $w_{\lambda}^{l}(X)$ than we claimed in the result. However we are unable to utilise this improved bound. The sticking point is that we only have the bound $\dim W_{\lambda,\geq N}^{0}(X)\leq h^{0}(X,\mathcal{O}(d))-N$, which we believe to be tight. ###### Proof of Theorem 1.7. Let $N=\sqrt{n}/3$.
By Proposition 4.7, $\frac{W_{\varnothing}(X)}{H^{0}(X,\mathcal{O}(d))}=\sum_{|\lambda|\leq N}(-1)^{||\lambda||}\left(\frac{W_{\geq\lambda}(X)}{H^{0}(X,\mathcal{O}(d))}-\frac{W_{\lambda,\geq N+1}(X)}{H^{0}(X,\mathcal{O}(d))}\right).$ By Proposition 4.9 the right hand side equals $\sum_{|\lambda|\leq N}(-1)^{||\lambda||}w_{\lambda}^{0}(X)\mathbb{L}^{-|\lambda|(\dim X+1)}+\sum_{|\lambda|\leq N}\sum_{l\geq 1}(-1)^{||\lambda||}w_{\lambda}^{l}(X)\mathbb{L}^{-|\lambda|(\dim X+1)+l}-\sum_{|\lambda|\leq N}(-1)^{||\lambda||}\frac{W_{\lambda,\geq N+1}(X)}{H^{0}(X,\mathcal{O}(d))}.$ By Proposition 3.7 of [16] the first term equals $Z_{X}^{-1}(\dim X+1)-\sum_{|\lambda|\leq N}\sum_{l\geq 1}(-1)^{||\lambda||}w_{\lambda}^{l}(X)\mathbb{L}^{-|\lambda|(\dim X+1)}-\sum_{|\lambda|>N}(-1)^{||\lambda||}w_{\lambda}(X)\mathbb{L}^{-|\lambda|(\dim X+1)}=Z_{X}^{-1}(\dim X+1)$ up to dimension $-N$. By Proposition 5.4 the latter two terms are of dimension $\leq-N$, which implies the result. ∎ ## 6 Proof of Theorem 1.9 In this section we prove Theorem 1.9. The proof of Theorem 1.9 is similar to that of Theorems 1.6 and 1.7, albeit a bit more involved. In this section we will exclusively work with $X=\mathbb{P}^{n}$. As a result we will drop the $\mathbb{P}^{n}$ from our notation, i.e. we will refer to $w_{\lambda}(\mathbb{P}^{n})$ as $w_{\lambda}$, to $W_{\geq\lambda}(\mathbb{P}^{n})$ as $W_{\geq\lambda}$, etc. However, if the argument of $w_{\lambda}$ is not $\mathbb{P}^{n}$ we will explicitly write down what the space is. The following basic proposition will be used repeatedly in this section. ###### Proposition 6.1. Let $C$ be a smooth curve in $\mathbb{P}^{n}$. Then for $d\gg\deg C$, $h^{0}(N(C),\mathcal{O}(d))=h^{0}(C,\mathcal{O}(d))+h^{0}(C,N_{C}^{*}(d)).$ ###### Proof. Let $I$ denote the ideal sheaf of $C$. Consider the following chain of inclusions of sheaves on $\mathbb{P}^{n}$: $0\subseteq I^{2}(d)\subseteq I(d)\subseteq\mathcal{O}(d).$ We will break the above chain into short exact sequences.
We have a short exact sequence $0\to I^{2}(d)\to I(d)\to I/I^{2}(d)\to 0.$ Under the assumption that $d\gg\deg(C)$, Serre vanishing implies that $H^{1}(\mathbb{P}^{n},I^{2}(d))=0$. As a result, $h^{0}(\mathbb{P}^{n},I(d))=h^{0}(\mathbb{P}^{n},I^{2}(d))+h^{0}(\mathbb{P}^{n},I/I^{2}(d)).$ We also have the short exact sequence $0\to I(d)\to\mathcal{O}(d)\to\mathcal{O}/I(d)\to 0.$ Since $d\gg\deg(C)$, $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))=h^{0}(\mathbb{P}^{n},I(d))+h^{0}(\mathbb{P}^{n},\mathcal{O}/I(d)).$ Finally we have the short exact sequence $0\to I^{2}(d)\to\mathcal{O}(d)\to\mathcal{O}/I^{2}(d)\to 0.$ Since $d\gg\deg(C)$, $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))=h^{0}(\mathbb{P}^{n},I^{2}(d))+h^{0}(\mathbb{P}^{n},\mathcal{O}/I^{2}(d)).$ As a result, we have $h^{0}(\mathbb{P}^{n},\mathcal{O}(d)/I^{2}(d))=h^{0}(\mathbb{P}^{n},\mathcal{O}/I(d))+h^{0}(\mathbb{P}^{n},I/I^{2}(d)).$ But we now note that $\mathcal{O}/I^{2}$ is the pushforward of the structure sheaf of $N(C)$ and $\mathcal{O}/I$ is the pushforward of the structure sheaf of $C$. As a result $h^{0}(\mathbb{P}^{n},\mathcal{O}(d)/I^{2}(d))=h^{0}(N(C),\mathcal{O}(d))$ and $h^{0}(\mathbb{P}^{n},\mathcal{O}(d)/I(d))=h^{0}(C,\mathcal{O}(d)).$ Finally, we note that $I/I^{2}$ is the pushforward of the conormal sheaf of $C$ and as a result $h^{0}(\mathbb{P}^{n},I/I^{2}(d))=h^{0}(C,N_{C}^{*}(d))$. This gives us the required result. ∎ ###### Definition 6.2. Let $\textrm{Chow}(r)$ be the Chow variety of degree $r$ curves in $\mathbb{P}^{n}$. Let $M$ be a constructible subset of $\textrm{Chow}(r)$. Let $\lambda$ be an ordered partition.
We define $w_{\lambda}^{M,l}\subseteq w_{\lambda}^{l}$ as consisting of all $Z\in w_{\lambda}^{l}$ such that there exists $C\in M$ satisfying $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))$ and furthermore there is no curve $C^{\prime}$ of degree less than $r$ such that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C^{\prime}}(d)).$ ###### Proposition 6.3. Let $d,r,l\geq 0$. Let $\lambda$ be an ordered partition. Let $M$ be a constructible subset of $Chow(r)$. Assume $d\gg r.$ The subset $w_{\lambda}^{M,l}$ is constructible and hence defines an element in $\mathcal{M}$. ###### Proof. Let $\tilde{w}_{\lambda}^{M,l}:=\\{(Z,C)\in w_{\lambda}^{M,l}\times M|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))\\}.$ We note that since $d\gg r$, by Lemma 3.1, for any $Z\in w_{\lambda}^{M,l}$ there exists a unique $(Z,C)\in\tilde{w}_{\lambda}^{M,l}$ mapping onto it under the projection; thus it suffices to establish that $\tilde{w}_{\lambda}^{M,l}$ is constructible. Consider $X=w_{\lambda}\times M\times\mathbb{P}^{n}$. The variety $X$ has two ideal sheaves on it, $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$, corresponding to the subvarieties $E_{1}=\\{(Z,C,P)\in w_{\lambda}\times M\times\mathbb{P}^{n}|P\in Z\\}$ and $E_{2}=\\{(Z,C,P)\in w_{\lambda}\times M\times\mathbb{P}^{n}|P\in Z\cap C\\}$. Let $\pi:w_{\lambda}\times M\times\mathbb{P}^{n}\to w_{\lambda}\times M$ denote the projection. Now we may consider a stratification $S_{i}$ of $w_{\lambda}\times M$ such that the sheaves $\pi_{*}\mathcal{F}_{i}(d)$ are flat on each stratum. The semicontinuity theorem then implies that one may further refine the stratification $S_{i}$ to a stratification $S_{i}^{\prime}$ on which $h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))$ and $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ are constant.
We then note that $\tilde{w}_{\lambda}^{M,l}$ is precisely the subset of $w_{\lambda}\times M$ where $h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))$ and $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ are equal to $l$. Thus $\tilde{w}_{\lambda}^{M,l}$ is a union of strata and is thus constructible. ∎ Let $\pi:W_{\geq\lambda}\to w_{\lambda}$ denote the natural projection. Let $W_{\geq\lambda}^{M,l}=\pi^{-1}(w_{\lambda}^{M,l})$. Let $W_{\lambda}^{M,l}=W_{\geq\lambda}^{M,l}\cap W_{\lambda}$. We define $\bar{w}_{\lambda}^{M,l}=\\{Z\in w_{\lambda}|\exists C\in M,h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))=l\\}.$ We note that $w_{\lambda}^{M,l}\subseteq\bar{w}_{\lambda}^{M,l}.$ We start by decomposing the varieties $w_{\lambda}^{l}$ into pieces. ###### Proposition 6.4. Let $l\geq 1$. $w_{\lambda}^{l}=\sum_{r=1}^{\infty}w_{\lambda}^{\textrm{Chow}(r),l}.$ ###### Proof. We note that the sum is finite: for $r$ sufficiently large $w_{\lambda}^{\textrm{Chow}(r),l}$ is empty. It then suffices to prove that the collection $w_{\lambda}^{\textrm{Chow}(r),l}$ partitions $w_{\lambda}^{l}$. This follows from Lemma 3.1. ∎ ###### Proposition 6.5. Let $e\geq 1$. Let $\mathcal{A}$ be a finite collection of constructible subsets defining a partition of $\textrm{Chow}(e).$ Then $w_{\lambda}^{\textrm{Chow}(e),l}=\sum_{M\in\mathcal{A}}w_{\lambda}^{M,l}.$ ###### Proof. This follows from the fact that a partition of $\textrm{Chow}(e)$ induces a partition of $w_{\lambda}^{\textrm{Chow}(e),l}$. ∎ We now prove a dimension bound on several of the $w_{\lambda}^{M,l}$. ###### Proposition 6.6. Let $e\geq 4$. Let $M\subseteq\textrm{Chow}(e)$ be a constructible set. Let $n\geq 2,l\geq 1$. Then there exists $\epsilon>0$ such that $\dim(W_{\geq\lambda}^{M,l}-W^{M,l}_{\lambda,\geq N+1})\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d.$ Furthermore $\dim w_{\lambda}^{M,l}\leq|\lambda|(n)-l-(\frac{3n}{2}+\epsilon)d$. ###### Proof. Let $M\subseteq\textrm{Chow}(e)$.
We will first prove that $\dim w_{\lambda}^{M,l}\leq|\lambda|(n)-l-(\frac{3n}{2}+\epsilon)d$. We note that $\dim w_{\lambda}^{M,l}\leq\dim\tilde{w}_{\lambda}^{M,l}$. By Proposition 4.10 any $(Z,C)\in\tilde{w}_{\lambda}^{M,l}$ must have $|Z\cap C|\geq\psi_{e,d}(l)$. Thus $\dim w_{\lambda}^{M,l}\leq\dim\tilde{w}_{\lambda}^{M,l}\leq\dim M+(|\lambda|-\psi_{e,d}(l))(n)+\psi_{e,d}(l).$ An inspection of the function $\psi_{e,d}$ then implies that for $e\geq 4$ and $d\gg 0$, $\dim w_{\lambda}^{M,l}\leq|\lambda|(n)-l-(\frac{3n}{2}+\epsilon)d.$ Since $W_{\geq\lambda}^{M,l}=w_{\lambda}^{M,l}H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\mathbb{L}^{-|\lambda|(n+1)+l}$, the result follows. ∎ ###### Proposition 6.7. Let $d_{0}\geq 1$. Let $M\subseteq Chow(d_{0})$ be a constructible set. Then $w_{\lambda}^{M,l}\subseteq\bar{w}_{\lambda}^{M,l}$. Furthermore $\bar{w}_{\lambda}^{M,l}\setminus w_{\lambda}^{M,l}=\cup_{d>d_{0},l^{\prime}>l}w_{\lambda}^{M^{\prime}(d),l^{\prime}},$ where $M^{\prime}(d)$ is the subset of $Chow(d)$ consisting of reducible curves that contain a curve in $M$. ###### Proof. This follows from Lemma 3.1. ∎ ###### Proposition 6.8. Let $X$ and $Y$ be algebraic varieties over $\mathbb{F}$. Then $w_{\lambda}(X\amalg Y)=\sum_{\alpha\subseteq\lambda}w_{\alpha}(X)w_{\lambda\setminus\alpha}(Y).$ ###### Proof. This follows immediately from the fact that $w_{\lambda}(X\amalg Y)$ is partitioned into subsets isomorphic to $w_{\alpha}(X)\times w_{\lambda\setminus\alpha}(Y)$ as $\alpha$ ranges over all subpartitions of $\lambda$. ∎ ###### Proposition 6.9. Let $X$ be an algebraic variety. Let $Y$ be a subvariety of $X$. Suppose $[X],[Y]$ are polynomials in $\mathbb{L}$. Then for any $\lambda$, $w_{\lambda}(X),w_{\lambda}(Y)$ and $w_{\lambda}(X\setminus Y)$ are polynomials in $\mathbb{L}$. ###### Proof. For $X=\mathbb{A}^{n}$, $[\textrm{Sym}^{\lambda}(\mathbb{A}^{n})]=\mathbb{L}^{|\lambda|n}$.
Let $X$ and $Y$ be such that $Y\subseteq X$ and $w_{\lambda}(X)$, $w_{\lambda}(Y)$ can always be expressed as a polynomial in $\mathbb{L}$. We claim that if $U=X\setminus Y$ then $w_{\lambda}(U)$ can also be expressed as a polynomial in $\mathbb{L}$ for all $\lambda$. Suppose for the sake of contradiction that $\lambda_{0}$ is an ordered partition such that $w_{\lambda_{0}}(U)$ is not a polynomial in $\mathbb{L}$ and furthermore for all $\lambda^{\prime}\subseteq\lambda_{0}$, $w_{\lambda^{\prime}}(U)$ is a polynomial in $\mathbb{L}$. Then by Proposition 6.8, $w_{\lambda_{0}}(U)=w_{\lambda_{0}}(X)-\sum_{\lambda^{\prime}\subsetneq\lambda_{0}}w_{\lambda^{\prime}}(U)w_{\lambda_{0}\setminus\lambda^{\prime}}(Y)$ which is a polynomial in $\mathbb{L}$. Hence the class of varieties such that $w_{\lambda}(X)$ is always a polynomial contains $\mathbb{A}^{n}$ and is closed under differences and unions. This completes the proof of the proposition. ∎ In the next proposition we describe the possible reduced $0$-dimensional subschemes $Z$ contained in a curve $C$ of low degree ($\leq 3$) such that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=l>0$. ###### Proposition 6.10. 1. 1. Let $T\subseteq\mathbb{P}^{n}$ be a smooth rational normal curve of degree $r\leq 3$. Let $Z\subseteq T$ be a reduced zero dimensional subscheme. Let $d\gg r$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ depends only on $|Z|$. Furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-\max(2|Z|-rd,0)|+(n-1)\max(|Z|-d,0)\leq K$ for some constant $K$. 2. 2. Let $d\gg e$. Let $C_{1},C_{2}$ be disjoint reduced curves of degree $\leq e$. Let $Z=Z_{1}\amalg Z_{2}$, with $Z_{i}\subseteq C_{i}$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z_{1}}(d))+h^{1}(\mathbb{P}^{n},I^{2}_{Z_{2}}(d))$. 3. 3. Let $L_{1},L_{2}$ be two coplanar lines. Let $Z=Z_{1}\amalg Z_{2}$ such that $Z_{i}\subseteq L_{i}$ and $Z\cap L_{1}\cap L_{2}=\varnothing$.
Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ only depends on $|Z_{1}|$ and $|Z_{2}|$ and furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L_{1}),I^{2}_{Z_{1}}(d))-h^{1}(N(L_{2}),I^{2}_{Z_{2}}(d))|\leq 2(n+1)$. 4. 4. Let $L_{1},L_{2}$ be two coplanar lines. Let $\\{p\\}=L_{1}\cap L_{2}$. Let $Z=Z_{1}\amalg Z_{2}\amalg\\{p\\}$ such that $Z_{i}\subseteq L_{i}$ and $Z_{i}\cap L_{1}\cap L_{2}=\varnothing$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ only depends on $|Z_{1}|$ and $|Z_{2}|$ and furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L_{1}),I^{2}_{Z_{1}}(d))-h^{1}(N(L_{2}),I^{2}_{Z_{2}}(d))|\leq 4(n+1).$ 5. 5. Let $L,C$ be a line and a conic intersecting at one point, transversally. Let $Z=Z_{1}\amalg Z_{2}$ such that $Z_{1}\subseteq L,Z_{2}\subseteq C$ and $Z\cap L\cap C=\varnothing$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ only depends on $|Z_{1}|$ and $|Z_{2}|$ and furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L),I^{2}_{Z_{1}}(d))-h^{1}(N(C),I^{2}_{Z_{2}}(d))|\leq 2(n+1).$ 6. 6. Let $L,C$ be a line and a conic intersecting at one point, transversally. Let $Z=Z_{1}\amalg Z_{2}\amalg\\{p\\}$ such that $Z_{1}\subseteq L,Z_{2}\subseteq C$ and $Z_{i}\cap L\cap C=\varnothing$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ only depends on $|Z_{1}|$ and $|Z_{2}|$ and furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L),I^{2}_{Z_{1}}(d))-h^{1}(N(C),I^{2}_{Z_{2}}(d))|\leq 4(n+1)$. 7. 7. Let $L,C$ be a line and a conic intersecting at one point, nontransversally. Let $Z=Z_{1}\amalg Z_{2}$ such that $Z_{1}\subseteq L,Z_{2}\subseteq C$ and $Z\cap L\cap C=\varnothing$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ only depends on $|Z_{1}|$ and $|Z_{2}|$ and furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L),I^{2}_{Z_{1}}(d))-h^{1}(N(C),I^{2}_{Z_{2}}(d))|\leq 4(n+1).$ 8. 8.
Let $L,C$ be a line and a conic intersecting at one point, nontransversally. Let $Z=Z_{1}\amalg Z_{2}\amalg\\{p\\}$ such that $Z_{1}\subseteq L,Z_{2}\subseteq C$ and $Z_{i}\cap L\cap C=\varnothing$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ only depends on $|Z_{1}|$ and $|Z_{2}|$ and furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L),I^{2}_{Z_{1}}(d))-h^{1}(N(C),I^{2}_{Z_{2}}(d))|\leq 8(n+1)$. 9. 9. Let $L,C$ be a line and a conic intersecting at two points. Let $Z_{1}=Z\cap L,Z_{2}=Z\cap C$. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ only depends on $|Z_{1}|,|Z_{2}|,|Z_{1}\cap Z_{2}|$ and furthermore, $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L),I^{2}_{Z_{1}}(d))-h^{1}(N(C),I^{2}_{Z_{2}}(d))|\leq 8(n+1)$. 10. 10. Let $L_{1},L_{2},L_{3}$ be three lines in any configuration. Let $Z\subseteq L_{1}\cup L_{2}\cup L_{3}$. Let $Z_{i}=Z\cap L_{i}$. Then $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-\sum h^{1}(N(L_{i}),I^{2}_{Z_{i}}(d))|\leq 12(n+1).$ Furthermore, $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ depends only on the configuration of the lines as well as $|Z_{i}|,|Z_{i}\cap Z_{j}|,|Z_{i}\cap Z_{j}\cap Z_{k}|$. 11. 11. Let $E$ be a plane cubic in $\mathbb{P}^{n}$. Let $Z\subseteq E$ be a reduced zero dimensional subscheme. Then $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(E,\mathcal{O}(d)-2Z)+h^{1}(E,N^{*}\otimes(\mathcal{O}(d)-Z))$. Since $E$ is a complete intersection, $N^{*}$ splits and the term $h^{1}(E,N^{*}\otimes(\mathcal{O}(d)-Z))=0$ if $|Z|<3d-5$. The first term $h^{1}(\mathcal{O}(d)-2Z)=\begin{cases}0&\text{if }2|Z|<3d\text{, or }2|Z|=3d\text{ and }2Z\neq\mathcal{O}(d)\\\ 1,&\text{if }2Z=\mathcal{O}(d)\\\ 2|Z|-3d,&\text{otherwise}\end{cases}$ (4) ###### Proof. We begin with the proof of (1). By Proposition 6.1, $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(N(T),I^{2}_{Z}(d))=h^{1}(T,\mathcal{O}(d)-2|Z|)+h^{1}(T,N^{*}\otimes(\mathcal{O}(d)-|Z|))$ Here $N^{*}$ is the conormal bundle of $T$ in $\mathbb{P}^{n}$, and $|Z|$ is the line bundle on $T$ corresponding to $Z$.
Since $T$ is isomorphic to $\mathbb{P}^{1}$, the above two $h^{1}$ terms only depend on $|Z|$, and it is also easy to see that $|h^{1}(T,\mathcal{O}(d)-2|Z|)+h^{1}(T,N^{*}\otimes(\mathcal{O}(d)-|Z|))-\max(2|Z|-rd,0)|+(n-1)\max(|Z|-d,0)\leq K$ for some constant $K$. We move on to the proof of (2). We note that since $d\gg e$, the restriction map $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))\to H^{0}(N(C_{1}\cup C_{2}),\mathcal{O}(d))$ is surjective (by Serre vanishing). Since $C_{1}$ and $C_{2}$ are disjoint, $N(C_{1}\cup C_{2})=N(C_{1})\cup N(C_{2})$ and also $H^{0}(N(C_{1}\cup C_{2}),\mathcal{O}(d))\cong H^{0}(N(C_{1}),\mathcal{O}(d))\oplus H^{0}(N(C_{2}),\mathcal{O}(d)).$ The map $H^{0}(N(C_{1}\cup C_{2}),\mathcal{O}(d))\to H^{0}(N(Z),\mathcal{O}(d))$ is then the direct sum of the maps $H^{0}(N(C_{1}),\mathcal{O}(d))\to H^{0}(N(Z_{1}),\mathcal{O}(d))$ and $H^{0}(N(C_{2}),\mathcal{O}(d))\to H^{0}(N(Z_{2}),\mathcal{O}(d)).$ We note that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))$ is the codimension of the image of the first map and $h^{1}(\mathbb{P}^{n},I^{2}_{Z_{1}}(d))$ (resp. $h^{1}(\mathbb{P}^{n},I^{2}_{Z_{2}}(d))$) is the codimension of the image of the second (resp. third) map. Since codimension is additive over direct sums, we have the required equality. We now move on to the proofs of (3) and (4). Since $d\gg 0$, $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(N(L_{1}\cup L_{2}),I^{2}_{Z}(d))$. We note that we have a map $\pi:N(L_{1})\amalg N(L_{2})\to N(L_{1}\cup L_{2})$. There is a short exact sequence of sheaves on $N(L_{1}\cup L_{2})$, $0\to I^{2}_{Z}(d)\to\pi_{*}\pi^{*}I^{2}_{Z}(d)\to\mathcal{F}\to 0,$ where the quotient sheaf $\mathcal{F}$ is supported on $N(p)$, where $p$ is the intersection point of the two lines.
Associated to the above short exact sequence of sheaves we have a long exact sequence in cohomology, a part of which is as follows: $H^{0}(N(L_{1}\cup L_{2}),\mathcal{F})\to H^{1}(N(L_{1}\cup L_{2}),I^{2}_{Z}(d))\to H^{1}(N(L_{1}\cup L_{2}),\pi_{*}\pi^{*}I^{2}_{Z}(d))\to 0.$ The fact that the above long exact sequence is exact at the last term is due to the fact that $H^{1}(N(L_{1}\cup L_{2}),\mathcal{F})=0$ as its support is $0$ dimensional. Furthermore, $H^{0}(N(L_{1}\cup L_{2}),\mathcal{F})\cong H^{0}(N(p),I^{2}_{Z\cap p}(d))$. Hence $h^{0}(N(p),I^{2}_{Z\cap p}(d))\leq n+1$ (it is either $0$ or $n+1$ depending on whether $p\in Z$). We note that $H^{1}(N(L_{1}\cup L_{2}),\pi_{*}\pi^{*}I^{2}_{Z}(d))\cong H^{1}(N(L_{1}),I^{2}_{Z\cap L_{1}}(d))\oplus H^{1}(N(L_{2}),I^{2}_{Z\cap L_{2}}(d)).$ Let $M=Im(H^{0}(N(L_{1}\cup L_{2}),\mathcal{F})\to H^{1}(N(L_{1}\cup L_{2}),I^{2}_{Z}(d))).$ Then by the long exact sequence we have that $h^{1}(N(L_{1}\cup L_{2}),I^{2}_{Z}(d))=h^{1}(N(L_{1}),I^{2}_{Z\cap L_{1}}(d))+h^{1}(N(L_{2}),I^{2}_{Z\cap L_{2}}(d))+\dim_{\mathbb{F}}M.$ We note that $\dim_{\mathbb{F}}M\leq h^{0}(\mathcal{F})\leq n+1$. We further note that $0\leq h^{1}(N(L_{i}),I^{2}_{Z\cap L_{i}}(d))-h^{1}(N(L_{i}),I^{2}_{Z_{i}}(d))\leq n+1.$ Hence $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L_{1}),I^{2}_{Z_{1}}(d))-h^{1}(N(L_{2}),I^{2}_{Z_{2}}(d))|\leq(n+1)$ in the case when $p\not\in Z$ (the situation of case (3)) and $|h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))-h^{1}(N(L_{1}),I^{2}_{Z_{1}}(d))-h^{1}(N(L_{2}),I^{2}_{Z_{2}}(d))|\leq 3(n+1)$ in the case when $p\in Z$ (the situation of case (4)). To finish the proof of (3) and (4) we must show that $\dim M$ only depends on $|Z_{1}|$ and $|Z_{2}|$.
Since $\dim M$ is also equal to $h^{0}(N(L_{1}\cup L_{2}),\mathcal{F})-\dim\ker(H^{0}(N(L_{1}\cup L_{2}),\mathcal{F})\to H^{1}(N(L_{1}\cup L_{2}),I^{2}_{Z}(d)))$ $=h^{0}(N(L_{1}\cup L_{2}),\mathcal{F})-\dim Im(H^{0}(N(L_{1}\cup L_{2}),\pi_{*}\pi^{*}I^{2}_{Z}(d))\to H^{0}(N(L_{1}\cup L_{2}),\mathcal{F})),$ it suffices to show that $\dim Im(H^{0}(N(L_{1}\cup L_{2}),\pi_{*}\pi^{*}I^{2}_{Z}(d))\to H^{0}(N(L_{1}\cup L_{2}),\mathcal{F}))$ depends only on $|Z_{1}|$ and $|Z_{2}|$. In case $p\in Z$, $H^{0}(N(L_{1}\cup L_{2}),\mathcal{F})=0$, so this is immediate. This establishes (4). We will now complete the proof of (3). The map $H^{0}(N(L_{1}\cup L_{2}),\pi_{*}\pi^{*}I^{2}_{Z}(d))\to H^{0}(N(L_{1}\cup L_{2}),\mathcal{F})$ is isomorphic to the map $H^{0}(N(L_{1}),I^{2}_{Z_{1}}(d))\oplus H^{0}(N(L_{2}),I^{2}_{Z_{2}}(d))\to H^{0}(N(p),\mathcal{O}(d))$ given by restriction. It suffices to show that the image of $H^{0}(N(L_{1}),I^{2}_{Z_{1}}(d))\to H^{0}(N(p),\mathcal{O}(d))$ is dependent on $|Z_{1}|$ only. This happens in all but one case. We record what happens below without proof since it is elementary and just a computation. If $2|Z_{1}|+1\leq d$, the map $H^{0}(N(L_{1}),I^{2}_{Z_{1}}(d))\to H^{0}(N(p),\mathcal{O}(d))$ is surjective. We note that if $2|Z_{1}|\geq d+1$ and $|Z_{1}|<d$ the image consists of all vectors vanishing at $p$ and whose first derivative along the tangent direction to $L_{1}$ also vanishes at $p$. If $|Z_{1}|\geq d$, $H^{0}(N(L_{1}),I^{2}_{Z_{1}}(d))=0$, so the image is also zero. The remaining case is when $2|Z_{1}|=d$. In this situation the image is constrained by the fact that any $f\in H^{0}(N(L_{1}),I^{2}_{Z_{1}}(d))$ is such that $(f(p),\partial_{v}f(p))$ is forced to lie in some fixed one dimensional linear subspace.
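The borderline case $2|Z_{1}|=d$ can be made concrete in coordinates. The following sketch records only the component of a section along $L_{1}$ itself; the affine coordinate $x$ and the polynomial $q$ are introduced purely for illustration, and the normal components of a section are unconstrained at $p$.

```latex
% Choose an affine coordinate x on L_1 with Z_1 = {z_1, ..., z_{d/2}} and p
% contained in the affine chart.  A degree-d section vanishing to order 2 on
% Z_1 restricts on L_1 to a polynomial of degree at most d divisible by q^2, where
q(x)=\prod_{i=1}^{d/2}(x-z_{i}),\qquad \deg q^{2}=d,
% so the restriction is c q(x)^2 for a scalar c.  Its value and tangential
% derivative at p therefore sweep out the fixed one-dimensional subspace
\bigl(f(p),\,\partial_{v}f(p)\bigr)\in\operatorname{span}\bigl\{\bigl(q(p)^{2},\;2\,q(p)\,q'(p)\bigr)\bigr\}\subseteq\mathbb{F}^{2}.
```

Note that this subspace depends on the position of $Z_{1}$ relative to $p$, not only on $|Z_{1}|$, which is why the dimension of the combined image requires the separate case analysis that follows.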
Now the only way that the image of $H^{0}(N(L_{1}),I^{2}_{Z_{1}}(d))\oplus H^{0}(N(L_{2}),I^{2}_{Z_{2}}(d))\to H^{0}(N(p),\mathcal{O}(d))$ can have different dimensions for the same value of $|Z_{1}|,|Z_{2}|$ is if either $2|Z_{1}|=d$ or $2|Z_{2}|=d$. Assume that $2|Z_{1}|=d$ and $2|Z_{2}|+1\leq d$. Then in this case the map is surjective, so the dimension of the image is fixed. Similarly, if $2|Z_{1}|=d$ and $|Z_{2}|\geq d$, the image is of dimension $n$ and dependent only on the sizes of $Z_{1},Z_{2}$. If $2|Z_{1}|=d$ and $2|Z_{2}|\geq d+1$ one gets that the image is still surjective. In the case when $2|Z_{1}|=2|Z_{2}|=d$ the map is still surjective and in particular only depends on the sizes of $|Z_{1}|$ and $|Z_{2}|$. The proofs of (5)-(10) are almost identical to those of (3) and (4), so we omit them. Case (11) is a standard application of Riemann-Roch for an elliptic curve. ∎ ###### Proposition 6.11. Let $M$ be one of the following: 1. 1. The space of twisted cubic curves in $\textrm{Chow}(3)$. 2. 2. The space of skew triples of lines in $\textrm{Chow}(3)$. 3. 3. The space of pairs of a line and a disjoint conic in $\textrm{Chow}(3)$. 4. 4. The space of pairs of a line intersecting a conic at a point in $\textrm{Chow}(3)$. 5. 5. The space of pairs of a line intersecting a conic at two points in $\textrm{Chow}(3)$. 6. 6. The space of smooth planar conics in $\textrm{Chow}(2)$. 7. 7. The space of skew lines in $\textrm{Chow}(2)$. 8. 8. The space of coplanar lines in $\textrm{Chow}(2)$. 9. 9. $\textrm{Chow}(1)$. Then there exists $\epsilon>0$ such that $\sum_{|\lambda|\leq N}(-1)^{||\lambda||}W^{M}_{\geq\lambda}=0,$ up to dimension $-(\frac{3n}{2}+\epsilon)d+h^{0}(\mathbb{P}^{n},\mathcal{O}(d))$. Furthermore for $k\leq N$, $\sum_{|\lambda|=k}(-1)^{||\lambda||}w^{M,l}_{\lambda}=0$ up to dimension $-(\frac{3n}{2}+\epsilon)d-l+|\lambda|n$.
Furthermore for $k\leq N$, $\sum_{|\lambda|=k}(-1)^{||\lambda||}w^{M,l,n}_{\lambda}=0,$ up to dimension $-(\frac{3n}{2}+\epsilon)d-l+|\lambda|n$. ###### Proof. The proofs of Proposition 6.11 in these nine cases are almost identical, with some minor differences. We will prove Proposition 6.11 in detail in the cases numbered (1), (2) and (4). We will indicate how the proof is to be adjusted in all remaining cases. Let us begin with the proof in the case when $M$ is the moduli space of twisted cubic curves, i.e. case (1). Let $T$ be a fixed twisted cubic curve. Then $\bar{w}_{\lambda}^{M,l}=M\bar{w}_{\lambda}^{T,l}$ since the family of twisted cubics over $M$ is Zariski locally-trivial. Note that $\bar{w}_{\lambda}^{T,l}=\pi_{\lambda}^{-1}(\bar{w}_{|\lambda|}^{T,l}).$ By Proposition 6.10 $\bar{w}_{|\lambda|}^{T,l}$ consists of pairs $(Z,T)$ with $|Z\cap T|=k_{1}$ (where $k_{1}$ is some number depending on $l$, necessarily bigger than $3$ for $d$ big enough). Hence $\bar{w}_{\lambda}^{T,l}=\sum_{\alpha\subseteq\lambda,|\alpha|=k_{1}}w_{\alpha}(T)w_{\lambda-\alpha}(\mathbb{P}^{n}\setminus T)$ By Proposition 6.9 $w_{\alpha}(T)w_{\lambda-\alpha}(\mathbb{P}^{n}\setminus T)$ is a polynomial in $\mathbb{L}$. Thus it suffices to establish that the image of the above class in $K_{0}(MHS)$ is $0$. By Proposition 4.8 the image of $\sum_{|\lambda|=k_{1}+k_{2}}(-1)^{||\lambda||}\sum_{\alpha\subseteq\lambda,|\alpha|=k_{1}}w_{\alpha}(T)w_{\lambda-\alpha}(\mathbb{P}^{n}\setminus T)$ in $K_{0}(MHS)$ is the class of $H_{*}^{c}(w_{k_{1}}(T)\times w_{k_{2}}(\mathbb{P}^{n}\setminus T),\pm\mathbb{Q})$.
By Lemma 2 in [17] this class is $0$, which establishes that $\sum_{|\lambda|=k_{1}+k_{2}}(-1)^{||\lambda||}\bar{w}_{\lambda}^{T,l}(\mathbb{P}^{n})=0.$ By Proposition 6.7 $\bar{w}_{\lambda}^{M,l}(\mathbb{P}^{n})\setminus w_{\lambda}^{M,l}(\mathbb{P}^{n})\subseteq\cup_{d=1}^{\infty}w_{\lambda}^{M^{\prime}(d),l}(\mathbb{P}^{n}),$ where $M^{\prime}(d)$ is the moduli space of degree $d$ curves strictly containing a twisted cubic. By Proposition 6.6 this is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d-(n+1)|\lambda|-l.$ Hence $\sum_{|\lambda|=k_{1}+k_{2}}(-1)^{||\lambda||}w_{\lambda}^{T,l}(\mathbb{P}^{n})=0,$ up to dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d-(n+1)|\lambda|-l.$ Note that in case (1), for a fixed value of $l$, $w_{\lambda}^{M,l,n}$ is either equal to $w_{\lambda}^{M,l}$ or is $0$. Hence $\sum_{|\lambda|=k_{1}+k_{2}}(-1)^{||\lambda||}w_{\lambda}^{T,l,n}(\mathbb{P}^{n})=0,$ up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d-(n+1)|\lambda|-l.$ Since $W_{\geq\lambda}^{M}=\sum_{l=0}^{\infty}W_{\geq\lambda}^{M,l}$ and $W_{\geq\lambda}^{M,l}=w_{\lambda}^{M,l}\mathbb{L}^{h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-|\lambda|(n+1)+l}$ case (1) follows. We will now deal with the case when $M$ is the space of skew triples of lines. We first set up some notation. Let $L_{1},L_{2},L_{3}$ be a fixed skew triple of lines. For $Z\subseteq L_{1}$ a subset of size $a$, let $f(a)=h^{1}(N(L_{1}),I^{2}_{Z}(d))$ (one can check that this quantity depends only on $a=|Z|$). Let $S_{l}=\\{(a,b,c)|f(a)+f(b)+f(c)=l\\}$. One can check that $S_{l}$ is a finite set.
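The finiteness of $S_{l}$ can be seen from the genus-zero estimate: for $d\gg 0$ (so that $h^{1}(\mathbb{P}^{n},-)$ and $h^{1}(N(L_{1}),-)$ agree), Proposition 6.10(1) with $r=1$ controls the value of $f$ on a subset of size $a$; the constant $K$ below is the one from that proposition.

```latex
% By Proposition 6.10(1) with r = 1 there is a constant K with
\bigl|\,f(a)-\max(2a-d,\,0)\,\bigr|\;\leq\;K,
% so f(a) >= 2a - d - K grows without bound as a grows.  Consequently
\{\,a \mid f(a)\leq l\,\}\;\subseteq\;\Bigl\{\,a \;\Bigm|\; a\leq\tfrac{d+l+K}{2}\,\Bigr\},
% and only finitely many triples (a, b, c) can satisfy f(a) + f(b) + f(c) = l.
```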
Firstly, since any family of skew triples of lines is Zariski locally trivial, $\sum_{|\lambda|=k}(-1)^{||\lambda||}\bar{w}_{\lambda}^{M,l}(\mathbb{P}^{n})=M\sum_{|\lambda|=k}(-1)^{||\lambda||}\bar{w}_{\lambda}^{L_{1}\cup L_{2}\cup L_{3},l}(\mathbb{P}^{n}).$ By Proposition 6.10, this equals $\sum_{|\lambda|=k}(-1)^{||\lambda||}\sum_{(k_{1},k_{2},k_{3})\in S_{l}}\sum_{\begin{subarray}{c}\alpha_{3}\subseteq\alpha_{2}\subseteq\alpha_{1}\subseteq\lambda\\\ (|\alpha_{1}|,|\alpha_{2}|,|\alpha_{3}|)=(k_{1},k_{2},k_{3})\end{subarray}}\bar{w}_{\alpha_{3}}(L_{3})\bar{w}_{\alpha_{2}-\alpha_{3}}(L_{2})\bar{w}_{\alpha_{1}-\alpha_{2}}(L_{1})\bar{w}_{\lambda-\alpha_{1}}(\mathbb{P}^{n}-L_{1}\cup L_{2}\cup L_{3}).$ By Proposition 6.9 the right hand side of the above equation is a polynomial in $\mathbb{L}$. By Proposition 4.8, the image of the right hand side in $K_{0}(MHS)$ is a sum of terms of the form $H_{*}^{c}(w_{k_{1}}(L_{1})\times w_{k_{2}}(L_{2})\times w_{k_{3}}(L_{3}))$ which by Proposition in [17] is zero. This implies $\sum_{|\lambda|=k}(-1)^{||\lambda||}\bar{w}_{\lambda}^{M,l}(\mathbb{P}^{n})=0$. As in case (1), $\sum_{|\lambda|=k}(-1)^{||\lambda||}(\bar{w}_{\lambda}^{M,l}(\mathbb{P}^{n})-w_{\lambda}^{M,l}(\mathbb{P}^{n}))$ is of dimension less than or equal to that of $\cup_{d=1}^{\infty}w_{\lambda}^{M^{\prime}(d),l}(\mathbb{P}^{n}),$ where $M^{\prime}(d)$ is the moduli space of degree $d$ curves strictly containing a skew triple of lines. By Proposition 6.6 this is $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d-(n+1)|\lambda|-l.$ Case (2) follows. The proofs of the other cases are nearly identical. In each case we prove that $\bar{w}_{\lambda}^{M,l}=M\bar{w}_{\lambda}^{\\{C\\},l}$ where $C\in M$, since the family of curves over $M$ is Zariski locally trivial in all of the cases. We then use Proposition 6.10 to explicitly describe $\bar{w}_{|\lambda|}^{\\{C\\},l}$. In each of our cases $H^{*}(\bar{w}_{|\lambda|}^{\\{C\\},l},\pm\mathbb{Q})=0$.
As a result by Proposition 4.8 $\sum_{|\lambda|=k}(-1)^{||\lambda||}\bar{w}_{\lambda}^{\\{C\\},l}$ is mapped to $0$ in $K_{0}(MHS).$ We then note that $\bar{w}_{\lambda}^{\\{C\\},l}=\pi_{\lambda}^{-1}\bar{w}_{|\lambda|}^{\\{C\\},l}$ is seen to be a polynomial in $\mathbb{L}$. Hence $\sum_{|\lambda|=k}(-1)^{||\lambda||}\bar{w}_{\lambda}^{\\{C\\},l}=0\in\mathcal{M}$. We then note that $\sum_{|\lambda|=k}(-1)^{||\lambda||}(\bar{w}_{\lambda}^{M,l}-w_{\lambda}^{M,l})=\sum_{l^{\prime}}\sum_{M^{\prime},|\lambda|=k}(-1)^{||\lambda||}w_{\lambda}^{M^{\prime},l^{\prime}},$ where $M^{\prime}$ ranges over the moduli spaces of curves strictly containing a curve in $M$. However either $M^{\prime}$ consists of curves of degree $\geq 4$ and hence $w_{\lambda}^{M^{\prime},l^{\prime}}$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d-(n+1)|\lambda|-l$ by Proposition 6.6, or $M^{\prime}$ is a case that has already been considered on this list and hence has dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d-(n+1)|\lambda|-l$. ∎ ###### Proposition 6.12. Let $M\subseteq\textrm{Chow}(3)$ be contained in the subset parametrizing three lines. Then there exists $\epsilon>0$, such that for $N=\lceil(\frac{3n}{2}+\epsilon)d\rceil$, and $l>0$, $\sum_{|\lambda|<N}(-1)^{||\lambda||}w_{\lambda}^{M,l}$ is of dimension $\leq-l+|\lambda|n-(\frac{3n}{2}+\epsilon)d$. ###### Proof. We note that we may express $M$ as a disjoint union of constructible sets $M=\amalg_{i=1}^{k}M_{i}$ such that for each $i$, and for each triple of lines $\\{L_{1},L_{2},L_{3}\\}$ in $M_{i}$ the triple has the same configuration, i.e. the number of points of intersection of the lines remains constant in $M_{i}$. It suffices to prove the Proposition for each $M_{i}$ separately. Thus in what follows we assume that each triple of lines in $M$ has the same configuration and $M$ is irreducible. We note that under this assumption the family of triples of lines over $M$ is Zariski locally trivial.
By Proposition 6.7 $\bar{w}_{\lambda}^{M,l}-w_{\lambda}^{M,l}\subseteq\cup_{d^{\prime}\geq 4,l^{\prime}>l}w_{\lambda}^{\textrm{Chow}(d^{\prime}),l^{\prime}}$ and by Proposition 6.6 the right hand side of the above expression is of dimension $\leq-l+|\lambda|n-(\frac{3n}{2}+\epsilon)d$. Therefore it suffices to prove that $\sum_{|\lambda|<N}(-1)^{||\lambda||}\bar{w}_{\lambda}^{M,l}$ is of dimension $\leq-l+|\lambda|n-(\frac{3n}{2}+\epsilon)d$. Let $A=\mathbb{Z}^{7}/S_{3}$ where $S_{3}$ denotes the symmetric group on three letters and the action of $S_{3}$ on $\mathbb{Z}^{7}$ is induced from the natural action of $S_{3}$ on the power set of $\\{1,2,3\\}\setminus\varnothing$. A configuration of points $Z\subseteq\mathbb{P}^{n}$ and a set of 3 lines $\\{L_{1},L_{2},L_{3}\\}$ naturally gives us $|Z\cap\\{L_{1},L_{2},L_{3}\\}|:=(|Z\cap L_{1}|,|Z\cap L_{2}|,|Z\cap L_{3}|,|Z\cap L_{1}\cap L_{2}|,\dots|Z\cap L_{1}\cap L_{2}\cap L_{3}|)$ which is a well-defined element of $A$. Given $a\in A$, let $\bar{w}_{\lambda}^{M}(a)=\\{(Z,\\{L_{1},L_{2},L_{3}\\})\in w_{\lambda}^{M}||Z\cap\\{L_{1},L_{2},L_{3}\\}|=a\\}.$ We may now further stratify each $\bar{w}_{\lambda}^{M,l}$ as the disjoint union of subspaces $\bar{w}_{\lambda}^{M,l}\cap\bar{w}_{\lambda}^{M}(a)$ where $a\in A$. However by Proposition 6.10, $\bar{w}_{\lambda}^{M,l}(a)$ is either all of $\bar{w}_{\lambda}^{M}(a)$ or it is empty. We note that since the family of triples of lines over $M$ is Zariski locally trivial, $\bar{w}_{\lambda}^{M}(a)$ is also a locally trivial family over $M$. Thus since Zariski locally trivial families split into products in $\mathcal{M}$, we have the following equality: $[\bar{w}_{\lambda}^{M}(a)]=[M][w_{\lambda}^{c}(a)],$ where $c$ is a point in $M$. Thus it suffices to establish that for a fixed $c\in M$ and $a\in A$, $\sum_{|\lambda|<N}(-1)^{||\lambda||}w_{\lambda}^{c}(a)=0$. Since the above expression is polynomial in $\mathbb{L}$, it suffices to prove that its image in $K_{0}(MHS)$ is $0$.
By Proposition 4.8 $\sum_{|\lambda|=r}(-1)^{||\lambda||}w_{\lambda}^{c}(a)$ is mapped to $H_{*}^{c}(w_{[r]}^{c}(a))$. The topological space $w_{[r]}^{c}(a)$ has a factor of the form $w_{[r]}(\mathbb{A}^{1})$. However by Proposition 4.8 $H_{*}^{c}(w_{[r]}(\mathbb{A}^{1}),\pm\mathbb{Q})=0$ and thus $H_{*}^{c}(w_{[r]}^{c}(a))=0$. This concludes the proof. ∎ ###### Definition 6.13. Let $\epsilon>0$. Let $M$ denote the moduli space of smooth plane cubics in $\mathbb{P}^{n}$. Let $N=(\frac{3n}{2}+\epsilon)d$. Let $Y(d):=\sum_{l>0}\sum_{|\lambda|<N}((-1)^{||\lambda||})w_{\lambda}^{M,l}\mathbb{L}^{n|\lambda|}(\mathbb{L}^{l}-1).$ We note that a priori this definition of $Y(d)$ depends on our choice of $\epsilon>0$. However if $\epsilon$ is sufficiently small, different values of $\epsilon$ will only change the value of $Y(d)$ in very high codimension, and this is an ambiguity that is acceptable for our purposes. ###### Proposition 6.14. There exists $\epsilon>0$ such that: Up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d$, $Y(d)=\sum_{l>0}\sum_{\lambda}((-1)^{||\lambda||})\bar{w}_{\lambda}^{M,l}\mathbb{L}^{n|\lambda|}(\mathbb{L}^{l}-1).$ Let us now assume $d$ is odd. Let $k_{1}(l)=\frac{3d+l}{2}$. Then up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d$, $\sum_{l>0}\sum_{\lambda}((-1)^{||\lambda||})\bar{w}_{\lambda}^{M,l}\mathbb{L}^{n|\lambda|}(\mathbb{L}^{l}-1)$ $=\sum_{l>0,l\textrm{ odd}}\sum_{|\lambda|<N}\sum_{\alpha\subseteq\lambda,|\alpha|=k_{1}(l)}((-1)^{||\lambda||})\\{(Z_{1},Z_{2},C)|C\in M,Z_{1}\in w_{\alpha}(C),Z_{2}\in w_{\lambda\setminus\alpha}(\mathbb{P}^{n}\setminus C)\\}\mathbb{L}^{-(n+1)|\lambda|}(\mathbb{L}^{l}-1).$ ###### Proof. Let $M$ denote the space of plane cubics in $\mathbb{P}^{n}$.
We note that $Y(d)-\sum_{l>0}\sum_{\lambda}((-1)^{||\lambda||})\bar{w}_{\lambda}^{M,l}\mathbb{L}^{n|\lambda|}(\mathbb{L}^{l}-1)$ equals $\sum_{l>0}\sum_{\lambda}((-1)^{||\lambda||})(w_{\lambda}^{M,l}-\bar{w}_{\lambda}^{M,l})\mathbb{L}^{n|\lambda|}(\mathbb{L}^{l}-1).$ By Proposition 6.6 and Proposition 6.7 each difference $(w_{\lambda}^{M,l}-\bar{w}_{\lambda}^{M,l})\mathbb{L}^{n|\lambda|}(\mathbb{L}^{l}-1)$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d$. Thus the entire sum is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d.$ If $d$ is odd and $l<\frac{3d}{2}$ we have $\bar{w}_{\lambda}^{M,l}=\sum_{\alpha\subseteq\lambda,|\alpha|=k_{1}(l)}\\{(Z_{1},Z_{2},C)|C\in M,Z_{1}\in w_{\alpha}(C),Z_{2}\in w_{\lambda\setminus\alpha}(\mathbb{P}^{n}\setminus C)\\},$ by Proposition 6.10 (11). To see this note that under the assumption $l<\frac{3d}{2}$, it must be the case that $h^{1}(N(C),I^{2}_{Z\cap C}(d))=h^{1}(C,\mathcal{O}(d)-2|Z\cap C|)$. Since $C$ is of genus $1$, $h^{1}(C,\mathcal{O}(d)-2|Z\cap C|)=h^{0}(C,2|Z\cap C|-\mathcal{O}(d))=2|Z\cap C|-3d$ (by our parity assumption $3d\neq 2|Z\cap C|$ and the case when $2|Z\cap C|-\mathcal{O}(d)$ is the trivial line bundle does not arise). We note that we may disregard the terms when $l\geq\frac{3d}{2}$ because they contribute in too high a codimension. Thus we have our desired equality. ∎ ###### Proposition 6.15. Let $M$ be the space of plane cubics in $\mathbb{P}^{n}$. Let $d\equiv 1\mod{4}$. Let $k_{0}=\frac{3d+1}{2}$. Then for $\epsilon$ sufficiently small (the definition of $Y(d)$ implicitly depends on an $\epsilon$), $Y(d)$ has the same highest weight term as $H^{*}(\mathcal{M}_{1,1},H_{k_{0}})H^{*}(PGL_{n+1}(\mathbb{C}))\mathbb{L}^{-k_{0}(n+1)+1}$ in $K_{0}(MHS)$. The weight of this highest weight term is $2(-k_{0}(n+1)+1)+k_{0}+1+((n+1)^{2}-1)$. ###### Proof.
By Proposition 6.14, up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d$, $Y(d)$ $=\sum_{l>0,l\textrm{ odd}}\sum_{|\lambda|<N}\sum_{\alpha\subseteq\lambda,|\alpha|=k_{1}(l)}((-1)^{||\lambda||})\\{(Z_{1},Z_{2},C)|C\in M,Z_{1}\in w_{\alpha}(C),Z_{2}\in w_{\lambda\setminus\alpha}(\mathbb{P}^{n}\setminus C)\\}\mathbb{L}^{-(n+1)|\lambda|}(\mathbb{L}^{l}-1).$ To finish the proof of the Proposition it suffices to compute the image of the above expression in $K_{0}(MHS)$. We note that the image equals $\sum_{l,k}H^{*}(w_{[k]}^{M,l},\pm\mathbb{Q})\mathbb{L}^{-nk+l}$. For $l=1$, $\sum_{k}H^{*}(w_{[k]}^{M,1},\pm\mathbb{Q})\mathbb{L}^{-nk+1}$ has highest weight term equal to that of $H^{*}(w_{k_{0}}^{M,1},\pm\mathbb{Q})$, where $k_{0}=\frac{3d+1}{2}$; there are other terms corresponding to higher values of $k$, but we will establish (after first dealing with the leading term) that these have lower weight. This highest weight term corresponds to the situation where we have $k_{0}$ points on a plane cubic $C$; in this situation being singular on such a collection of points imposes $k_{0}(n+1)-1$ linear conditions. Now $w_{k_{0}}^{M,1}=\\{(Z,C)\in w_{k_{0}}\times M|Z\subseteq C\\}$ (by Proposition 6.10) and hence $w_{k_{0}}^{M,1}/PGL_{n+1}(\mathbb{C})=\mathcal{M}_{1,k_{0}}$. So we have $H^{*}(w_{k_{0}}^{M,1},\pm\mathbb{Q})=H^{*}(PGL_{n+1}(\mathbb{C}),\mathbb{Q})H^{*}(\mathcal{M}_{1,k_{0}},\pm\mathbb{Q})$. By the results of Section 5 of [7] (also see Corollary 2.8 in [8]) the highest weight term of this equals that of $H^{*}(PGL_{n+1}(\mathbb{C}),\mathbb{Q})H^{*}(\mathcal{M}_{1,1},H_{k_{0}})$; we note that this then implies that $H^{*}(w_{[k_{0}]}^{M,1},\pm\mathbb{Q})\mathbb{L}^{-nk_{0}+1}$ has the required highest weight, see [7] for instance. We will now establish that for $k>k_{0}$, the term $H^{*}(w_{[k]}^{M,1},\pm\mathbb{Q})\mathbb{L}^{-(n+1)k+1}$ has lower weight than for $k=k_{0}$.
To see this let $\bar{w}_{[k]}^{M,1}=\\{(Z,C)\in w_{[k]}\times M|h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))=1\\}$. We note that $\bar{w}_{[k]}^{M,1}\setminus w_{[k]}^{M,1}$ is of small dimension by Proposition 6.6. Thus it suffices to establish that $H^{*}(\bar{w}_{[k]}^{M,1},\pm\mathbb{Q})\mathbb{L}^{-nk+1}$ has lower weight. But we note that by Proposition 6.10 $\bar{w}_{[k]}^{M,1}=\\{(Z,C)\in w_{[k]}\times M||Z\cap C|=k_{0}\\}.$ Now we have a fibration $w_{[k]}^{M,1}\to M$ with fiber isomorphic to $w_{[k_{0}]}(C)\times w_{k-k_{0}}(\mathbb{P}^{n}\setminus C)$. We have a further fibration (in the orbifold sense) $M\to M/PGL_{n+1}(\mathbb{C}).$ The fiber of the composite map $w_{[k]}^{M,1}\to M/PGL_{n+1}(\mathbb{C})$ is seen to be a copy of $PGL_{n+1}(\mathbb{C})\times w_{[k_{0}]}(C)\times w_{k-k_{0}}(\mathbb{P}^{n}\setminus C).$ We may thus use the Leray spectral sequence to bound the weights of $H_{*}^{c}(w_{[k]}^{M,1},\pm\mathbb{Q})$ in terms of the dimension of $M/PGL_{n+1}(\mathbb{C})$ (which is complex one-dimensional and noncompact) and the weights of $H_{*}^{c}(PGL_{n+1}(\mathbb{C})\times w_{[k_{0}]}(C)\times w_{k-k_{0}}(\mathbb{P}^{n}\setminus C))$. The highest weight term of the latter is $((n+1)^{2}-1)+k_{0}+1+(2n-1)(k-k_{0})+1$ (this uses a computation of the Borel-Moore homology of configuration spaces; see Corollary 7.2.6 of [3] for instance). Thus the highest weight term of $H_{*}^{c}(w_{[k]}^{M,1},\pm\mathbb{Q})$ is at most $1+((n+1)^{2}-1)+k_{0}+1+(2n-1)(k-k_{0})$. Thus the highest weight of $H_{*}^{c}(w_{[k]}^{M,1},\pm\mathbb{Q})\mathbb{L}^{-k(n+1)+1}$ is at most $1+((n+1)^{2}-1)+k_{0}+1+(2n-1)(k-k_{0})+2-2k(n+1)$. If we subtract this quantity from $2(-k_{0}(n+1)+1)+k_{0}+1+((n+1)^{2}-1)$ we obtain $3(k-k_{0})-1$, which is always positive for $k>k_{0}$. Thus the possible weights are lower than the highest term.
We note that for $l>1$ a similar computation shows that all the corresponding terms $H^{*}(w_{[k]}^{M,l})\mathbb{L}^{-nk+l}$ are of smaller weight and we may ignore them for the purpose of finding the highest weight term. This establishes the result. ∎ ###### Proposition 6.16. Let $d\gg n$. Let $L$ be a fixed line in $\mathbb{P}^{n}$. Then there exists $\epsilon>0$ such that for $N=(\frac{3n}{2}+\epsilon)d$, we have the following inequalities/equalities. 1. 1. $\\{(f,Z)\in W_{\lambda,\geq N}^{l,n}|\dim\textrm{Sing}(f)=0\\}$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d$ 2. 2. $W_{\lambda,\geq N}^{0,n}$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d$ 3. 3. $\\{(f,Z)\in W_{\lambda,\geq N}^{l,n}|\textrm{Sing}(f)\textrm{ contains a curve of degree }\geq 3\\}$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d$. 4. 4. $W_{\lambda,\geq N}^{l,n}=W_{\lambda,\geq N}^{l,n}\cap\\{(f,Z)|C\subseteq\textrm{Sing}(f),C\textrm{ is a curve of degree }\leq 2\\}$ $=\\{(f,L,Z)|(f,Z)\in W_{\lambda,\geq N}^{l,n},L\subseteq\textrm{Sing}(f),L\textrm{ is a line}\\}$ $+\\{(f,C,Z)|(f,Z)\in W_{\lambda,\geq N}^{l,n},C\subseteq\textrm{Sing}(f),C\textrm{ is an irreducible conic}\\}$ $-\\{(f,L_{1},L_{2},Z)|(f,Z)\in W_{\lambda,\geq N}^{l,n},L_{i}\subseteq\textrm{Sing}(f),L_{i}\textrm{ are distinct lines}\\},$ up to dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d.$ 5. 5. $\sum_{|\lambda|<N}(-1)^{||\lambda||}\\{(f,L_{1},L_{2},Z)|(f,Z)\in W_{\lambda,\geq N}^{l,n},L_{i}\subseteq\textrm{Sing}(f)\\}=0$ up to dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d$. 6. 6. $\sum_{|\lambda|<N}(-1)^{||\lambda||}\\{(f,C,Z)|(f,Z)\in W_{\lambda,\geq N}^{l,n},C\subseteq\textrm{Sing}(f)\\}=0$ up to dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d$. 7. 7.
$\sum_{|\lambda|<N}(-1)^{||\lambda||}\\{(f,L,Z)|(f,Z)\in W_{\lambda,\geq N}^{l,n},L\subseteq\textrm{Sing}(f),L\textrm{ is a line}\\}=0$ up to dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d$. ###### Proof. We begin by proving (1). It suffices to prove (1) in the case when $|\lambda|=N$. By Proposition 3.1 if $l>0$, then associated to any $Z\in W_{\geq\lambda}^{l,n}$ is a unique reduced curve $C$ such that $l=h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap C}(d))$. Furthermore, if the components of $C$ are $C_{1},\dots C_{k}$ then $\frac{\deg(C_{k})d-g(C_{k})}{2}\leq|Z\cap C_{k}|$ as otherwise $2(Z\cap C_{k})$ imposes linearly independent conditions, which is not possible (see Proposition 3.1). Also, there is some constant $K$ depending on $\deg(C_{k})$ such that if $|Z\cap C_{k}|>\deg(C_{k})d+K,$ a section singular at $Z$ is singular on all of $C_{k}$. If that happens $Z\not\in W_{\geq\lambda}^{l,n}$. Thus we must have $|Z\cap C_{k}|<\deg(C_{k})d+K$. Furthermore there is some number $K^{\prime}$ such that $|l-(n-1)\sum_{k}(|Z\cap C_{k}|)|\leq K^{\prime}$ (see Proposition 4.10). We can therefore bound both the dimension of the space of such $Z$ and $l$, which gives us (after a computation) $\dim(\\{(f,Z)|\dim\textrm{Sing}(f)=0\\}\cap W_{\lambda,\geq N+1}^{l,n})\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+(-\frac{3n}{2}-\epsilon)d.$ Part (2) is immediate, since $\dim W_{\lambda,\geq N+1}^{0,n}\leq\dim W_{\geq[N]}^{0,n}\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-N.$ Let us now proceed to prove part (3).
We begin with a claim. Claim: For any fixed curve $C$ of degree $\geq 3$ and a fixed curve $C^{\prime}$ of degree $r^{\prime}$ (possibly empty), there exists $\epsilon^{\prime}>0$ such that $\dim\\{(f,Z)\in W_{\geq\lambda}^{l,n}|C\subseteq\textrm{Sing}(f),h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap(C\cup C^{\prime})}(d))\\}\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon^{\prime})d$, where we require $\epsilon^{\prime}$ to be independent of $C^{\prime},r^{\prime}$. Let $A=\\{(f,Z)|\textrm{Sing}(f)\textrm{ contains a curve of degree }\geq 3\\}\cap W_{\lambda,\geq N}^{l,n}$ and $\tilde{A}=\\{(f,Z,C)|(f,Z)\in A,C\textrm{ is a curve of degree }\geq 3,C\subseteq\textrm{Sing}(f)\\}.$ We note that $\dim\tilde{A}\geq\dim A$. Then $\dim\tilde{A}\leq\sup_{r\geq 3,C\textrm{ is a curve of degree }r}(\dim Chow(r)+\dim\\{(f,Z)|C\subseteq\textrm{Sing}(f)\\}\cap W_{\lambda,\geq N}^{l,n}).$ Let us first argue that the claim implies (3). We will establish the claim afterwards. Let $B_{r^{\prime}}(C)$ be the subset of $W_{\geq\lambda}^{l,n}\times Chow(r^{\prime})$ consisting of all $(f,Z,C^{\prime})$ such that $C\subseteq\textrm{Sing}(f)$ and $C^{\prime}$ is the curve of minimal degree such that $h^{1}(\mathbb{P}^{n},I^{2}_{Z}(d))=h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap(C\cup C^{\prime})}(d)).$ Now we use Proposition 3.1 to argue that $\dim\\{(f,Z)|C\subseteq\textrm{Sing}(f)\\}\cap W_{\lambda,\geq N+1}^{l,n}\leq\sup_{r^{\prime}}(\dim Chow(r^{\prime})+\dim B_{r^{\prime}}(C)).$ Note that in the above two expressions the supremums are actually maximums: for a given $|\lambda|$, there are only finitely many $r,r^{\prime}$ for which the sets $B_{r^{\prime}}(C)$ can be nonempty. Further, we note that for $d\gg 0$, $\epsilon d/10\gg\dim Chow(r)$ for $r$ in the relevant range. Thus if we establish that $\dim B_{r^{\prime}}(C)\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d$ we would be done.
This concludes the proof of (3) assuming the claim, so we will now proceed to establish the claim. For this we note the following: for $d\gg\deg(C)$ the space $\\{f\in H^{0}(\mathbb{P}^{n},\mathcal{O}(d))|C\subseteq\textrm{Sing}(f)\\}$ is of codimension $h^{0}(N(C),\mathcal{O}(d))$ in $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$. We know that $h^{0}(N(C),\mathcal{O}(d))\approx n\deg(C)d$ by Proposition 4.10. For $(f,Z)\in W_{\geq\lambda}^{C\cup C^{\prime},l,n}$, let $Z_{1}=Z\cap C$, $Z_{2}=(Z\cap C^{\prime})\setminus Z_{1}$ and $Z_{3}=Z\setminus(Z_{1}\cup Z_{2})$. We now have the following inequalities: 1. 1. $\deg(C)d\geq|Z_{1}|$. 2. 2. We note that $|2|Z_{2}|-\deg(C^{\prime})d-h^{0}(\mathbb{P}^{n},I^{2}_{Z_{2}}(d))|$ is bounded by a constant independent of $d$. 3. 3. $\\{f|Z\cup C\subseteq\textrm{Sing}(f)\\}$ is of dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-h^{0}(N(C),\mathcal{O}(d))+h^{1}(\mathbb{P}^{n},I^{2}_{Z_{2}}(d))-(n+1)|Z_{3}|-(n+1)|Z_{2}|$ in $H^{0}(\mathbb{P}^{n},\mathcal{O}(d))$. 4. 4. But the dimension of the space of $Z$ such that $(f,Z)\in W_{\geq\lambda}^{C\cup C^{\prime},l,n}$ is less than $|Z_{1}|+|Z_{2}|+n|Z_{3}|$. 5. 5.
Hence the dimension of $\\{(f,Z)\in W_{\geq\lambda}^{C\cup C^{\prime},l,n}|C\subseteq\textrm{Sing}(f)\\}$ $\leq|Z_{1}|+|Z_{2}|+n|Z_{3}|+h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-h^{0}(N(C),\mathcal{O}(d))+h^{1}(\mathbb{P}^{n},I^{2}_{Z_{2}}(d))-(n+1)|Z_{3}|-(n+1)|Z_{2}|$ $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))+\deg(C)d-n|Z_{2}|-h^{0}(N(C),\mathcal{O}(d))+h^{1}(\mathbb{P}^{n},I^{2}_{Z_{2}}(d))$ $\approx h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(n-1)\deg(C)d-n|Z_{2}|+h^{1}(\mathbb{P}^{n},I^{2}_{Z_{2}}(d))$ $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(n-1)\deg(C)d$ $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d.$ For part (4), we note that by (1) and (3), $W_{\lambda,\geq N+1}^{l,n}=X$ up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d,$ where $X:=\\{(f,Z)\in W_{\lambda,\geq N+1}^{l,n}|\textrm{Sing}(f)\textrm{ contains a curve of degree }\leq 2\textrm{ and does not contain a curve of degree }\geq 3\\}.$ Standard inclusion-exclusion techniques then imply that the motive of $X$ equals $\\{(f,L,Z)|(f,Z)\in W_{\lambda,\geq N+1}^{l,n},L\subseteq\textrm{Sing}(f)\\}+\\{(f,C,Z)|(f,Z)\in W_{\lambda,\geq N+1}^{l,n},C\subseteq\textrm{Sing}(f)\\}$ $-\\{(f,L_{1},L_{2},Z)|(f,Z)\in W_{\lambda,\geq N+1}^{l,n},L_{i}\subseteq\textrm{Sing}(f)\\}$ up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d$. We will omit the proofs of parts (5) and (6) because they are similar to that of part (7) (which we will not omit) and in fact a bit simpler. For part (7), we note that the family of lines over $Chow(1)$ is Zariski locally trivial. Hence $\\{(f,L,Z)|(f,Z)\in W_{\lambda,\geq N+1}^{l,n},L\subseteq\textrm{Sing}(f)\\}$ defines a locally trivial family over $Chow(1)$ and thus $\\{(f,L,Z)|(f,Z)\in W_{\lambda,\geq N+1}^{l,n},L\subseteq\textrm{Sing}(f)\\}=Chow(1)\\{(f,Z)|(f,Z)\in W_{\lambda,\geq N+1}^{l,n},L\subseteq\textrm{Sing}(f)\\}$ in $\mathcal{M}$ (Zariski locally trivial families split into products in $\mathcal{M}$).
We now make the following claim. Claim: $\sum_{|\lambda|=k}(-1)^{||\lambda||}\\{(f,Z)|(f,Z)\in W_{\lambda,\geq N+1}^{l,n},L\subseteq\textrm{Sing}(f)\\}$ $=\sum_{|\lambda|=k}(-1)^{||\lambda||}\sum_{\alpha\subseteq\lambda}w_{\alpha}(L)w_{\lambda\setminus\alpha}(\mathbb{P}^{n}\setminus L)\mathbb{L}^{-(n+1)(|\lambda|-|\alpha|)-h^{0}(N(L),\mathcal{O}(d))}$ up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d$, i.e. the sum is well approximated by the corresponding sum when we ignore the fact that points in $\mathbb{P}^{n}\setminus L$ need not impose linearly independent conditions on $H^{0}(\mathbb{P}^{n},I^{2}_{L}(d))$. The proof of the claim is essentially a dimension computation similar to those we’ve already seen: if a collection of points $Z$ fails to impose linearly independent conditions on $H^{0}(\mathbb{P}^{n},I^{2}_{L}(d))$, then a large number of points of $Z$ lie on a curve; either the curve is of degree $\geq 2$ and the dimension of the resulting error is small (see Proposition 6.6), or the curve is a line and the alternating sum vanishes for the same reason as in Proposition 6.11. We will first establish (7) assuming the claim. To do so we must simply establish that $\sum_{|\lambda|<N}(-1)^{||\lambda||}\sum_{\alpha\subseteq\lambda}w_{\alpha}(L)w_{\lambda\setminus\alpha}(\mathbb{P}^{n}\setminus L)\mathbb{L}^{-(n+1)(|\lambda|-|\alpha|)-h^{0}(N(L),\mathcal{O}(d))}=0$ up to dimension $h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-(\frac{3n}{2}+\epsilon)d.$ It is clear that the sum is a polynomial in $\mathbb{L}$ and thus it suffices to establish that its image in $K_{0}(MHS)$ is of sufficiently low weight.
Furthermore, by Proposition 4.8, the image of the above expression in $K_{0}(MHS)$ is $\mathbb{L}^{-h^{0}(N(L),\mathcal{O}(d))}\sum_{k<N}\sum_{k^{\prime}\leq k}H_{*}^{c}(w_{k^{\prime}}(\mathbb{P}^{1})\times w_{k-k^{\prime}}(\mathbb{P}^{n}\setminus\mathbb{P}^{1}),\pm\mathbb{Q})\mathbb{L}^{-(k-k^{\prime})(n+1)}.$ To proceed with the proof of (7) we will need the following identity: $\sum_{k=0}^{\infty}H_{*}^{c}(w_{k}(\mathbb{P}^{1}),\pm\mathbb{Q})=0.$ To establish this we note that $\sum_{k=0}^{\infty}H_{*}^{c}(w_{k}(\mathbb{P}^{1}),\pm\mathbb{Q})$ is the image of $\sum_{\lambda}(-1)^{||\lambda||}w_{\lambda}(\mathbb{P}^{1})=Z_{\mathbb{P}^{1}}^{-1}(0)=0$. Thus we have established the previous identity. We also note that in the sum $\sum_{k=0}^{\infty}H_{*}^{c}(w_{k}(\mathbb{P}^{1}),\pm\mathbb{Q})$ the only relevant terms are for $k=0,1,2$, as all other terms vanish by Lemma 1.2 of [17]. We now note that $\mathbb{L}^{-h^{0}(N(L),\mathcal{O}(d))}\sum_{k<N}\sum_{k^{\prime}\leq k}H_{*}^{c}(w_{k^{\prime}}(\mathbb{P}^{1})\times w_{k-k^{\prime}}(\mathbb{P}^{n}\setminus\mathbb{P}^{1}),\pm\mathbb{Q})\mathbb{L}^{-(k-k^{\prime})(n+1)}$ $=\mathbb{L}^{-h^{0}(N(L),\mathcal{O}(d))}\sum_{k^{\prime}=0}^{2}\sum_{k=k^{\prime}}^{N-1}H_{*}^{c}(w_{k^{\prime}}(\mathbb{P}^{1})\times w_{k-k^{\prime}}(\mathbb{P}^{n}\setminus\mathbb{P}^{1}),\pm\mathbb{Q})\mathbb{L}^{-(k-k^{\prime})(n+1)}$ $=\mathbb{L}^{-h^{0}(N(L),\mathcal{O}(d))}\sum_{k^{\prime}=0}^{2}H_{*}^{c}(w_{k^{\prime}}(\mathbb{P}^{1}),\pm\mathbb{Q})\sum_{k^{\prime\prime}=0}^{N-k^{\prime}-1}H_{*}^{c}(w_{k^{\prime\prime}}(\mathbb{P}^{n}\setminus\mathbb{P}^{1}),\pm\mathbb{Q})\mathbb{L}^{-(k^{\prime\prime})(n+1)}$ $=\mathbb{L}^{-h^{0}(N(L),\mathcal{O}(d))}\sum_{k^{\prime}=0}^{2}H_{*}^{c}(w_{k^{\prime}}(\mathbb{P}^{1}),\pm\mathbb{Q})\sum_{k^{\prime\prime}=0}^{N-1}H_{*}^{c}(w_{k^{\prime\prime}}(\mathbb{P}^{n}\setminus\mathbb{P}^{1}),\pm\mathbb{Q})\mathbb{L}^{-(k^{\prime\prime})(n+1)}$
$-\mathbb{L}^{-h^{0}(N(L),\mathcal{O}(d))}\sum_{k^{\prime}=0}^{2}H_{*}^{c}(w_{k^{\prime}}(\mathbb{P}^{1}),\pm\mathbb{Q})\sum_{k^{\prime\prime}=N-k^{\prime}}^{N-1}H_{*}^{c}(w_{k^{\prime\prime}}(\mathbb{P}^{n}\setminus\mathbb{P}^{1}),\pm\mathbb{Q})\mathbb{L}^{-(k^{\prime\prime})(n+1)}.$ By our previous computation the first term is $0$. The second term is easily seen to be of dimension $\leq-(\frac{3n}{2}+\epsilon)d$. This establishes the proof of (7) assuming the claim. We will now prove the claim. Let $X_{\lambda,j,l}=\\{(f,Z)|(f,Z)\in W_{\lambda,\geq N}^{l,n},L\subseteq\textrm{Sing}(f),|Z\cap L|=j\\}$ and let $Y_{\lambda,j,l}=\sum_{|\alpha|=j,\alpha\subseteq\lambda}w_{\alpha}(L)w_{\lambda\setminus\alpha}(\mathbb{P}^{n}\setminus L)\mathbb{L}^{-(n+1)(|\lambda|-|\alpha|)-h^{0}(N(L),\mathcal{O}(d))}.$ We will establish that $\sum_{|\lambda|<N}(-1)^{||\lambda||}(X_{\lambda,j,l}-Y_{\lambda,j,l})$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-N$, which will immediately give us the claim. We may further stratify $X_{\lambda,j,l}$ into the pieces $X_{\lambda,j,l,k}=\\{(f,Z)\in X_{\lambda,j,l}|h^{1}(\mathbb{P}^{n},I^{2}_{Z\cup L}(d))=h^{0}(N(L),\mathcal{O}(d))+(n+1)(|\lambda|-j)+k\\},$ i.e. $X_{\lambda,j,l,k}$ consists of those $Z$ such that vanishing to order 2 on $Z\cup L$ imposes $k$ fewer linear conditions than expected. Let $Y_{\lambda,j,l,k}=X_{\lambda,j,l,k}\mathbb{L}^{-k}$. We note that $\sum_{k}Y_{\lambda,j,l,k}=Y_{\lambda,j,l}$. We now note that it suffices to establish that $\sum_{|\lambda|<N}(-1)^{||\lambda||}(X_{\lambda,j,l,k}-Y_{\lambda,j,l,k})$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-N$. For $k=0$, the above is exactly $0$. Let us assume $k>0$.
We then note that, by an argument similar to that of Proposition 3.1, there must exist a curve $C^{\prime}$ such that $h^{1}(\mathbb{P}^{n},I^{2}_{Z\cup L}(d))=h^{1}(N(C^{\prime}\cup L),I^{2}_{(Z\cap C^{\prime})\cup L}(d)).$ Furthermore, the above curve $C^{\prime}$ is uniquely determined by $Z$ assuming it is of minimal degree. Let $X_{\lambda,j,l,k}^{r}\subseteq X_{\lambda,j,l,k}\times Chow(r)$ consist of all $(f,Z,C)$ such that $h^{1}(\mathbb{P}^{n},I^{2}_{Z\cup L}(d))=h^{1}(N(C\cup L),I^{2}_{(Z\cap C)\cup L}(d)),$ and $C$ is of minimal degree. Thus we have that $X_{\lambda,j,l,k}=\sum_{r}X_{\lambda,j,l,k}^{r}.$ Let $Y_{\lambda,j,l,k}^{r}=X_{\lambda,j,l,k}^{r}\mathbb{L}^{-k}$. Note that $\sum_{k,r}Y_{\lambda,j,l,k}^{r}=Y_{\lambda,j,l}.$ We now consider the difference $X_{\lambda,j,l,k}^{r}-Y_{\lambda,j,l,k}^{r}$. It suffices to establish, for all fixed values of $j,l,k,r$, that $\sum_{|\lambda|<N}(-1)^{||\lambda||}(X_{\lambda,j,l,k}^{r}-Y_{\lambda,j,l,k}^{r})$ is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-N$. It is an easy computation to note that for $r\geq 2$ the above difference is of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-N$. Also, for $r=0$ the above term always vanishes. Thus the only case that we must consider is when $r=1$. Let $M_{1}$ denote the space of all lines in $\mathbb{P}^{n}$ intersecting $L$ at a single point. Let $M_{2}$ denote the space of all lines in $\mathbb{P}^{n}$ not intersecting $L$.
Let $X_{\lambda,j,l,k}^{M_{i}}=\\{(f,Z,L^{\prime})\in X_{\lambda,j,l,k}^{1}|L^{\prime}\in M_{i}\\}$ and $Y_{\lambda,j,l,k}^{M_{i}}=X_{\lambda,j,l,k}^{M_{i}}\mathbb{L}^{-k}.$ Let $\bar{X}_{\lambda,j,l,k}^{M_{i}}=\\{(f,Z,L^{\prime})\in X_{\lambda,j,l}\times M_{i}|h^{1}(L^{\prime}\cup L,I^{2}_{(Z\cap L^{\prime})\cup L}(d))=k\\}$ and $\bar{Y}_{\lambda,j,l,k}^{M_{i}}=\bar{X}_{\lambda,j,l,k}^{M_{i}}\mathbb{L}^{-k}.$ We claim that $\bar{X}_{\lambda,j,l,k}^{M_{i}}-X_{\lambda,j,l,k}^{M_{i}}$ and $\bar{Y}_{\lambda,j,l,k}^{M_{i}}-Y_{\lambda,j,l,k}^{M_{i}}$ are of dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-N.$ This follows from the fact that these differences are contained in the union of the $X_{\lambda,j,l,k}^{r}$ for $r\geq 2$, and we have already remarked that these have dimension $\leq h^{0}(\mathbb{P}^{n},\mathcal{O}(d))-N.$ Thus it suffices to establish that $\sum(-1)^{||\lambda||}\bar{X}_{\lambda,j,l,k}^{M_{i}}=\sum(-1)^{||\lambda||}\bar{Y}_{\lambda,j,l,k}^{M_{i}}=0.$ To do so we will explicitly describe the sets $\bar{X}_{\lambda,j,l,k}^{M_{1}}$ and $\bar{X}_{\lambda,j,l,k}^{M_{2}}$. We note that we have a Zariski locally trivial bundle $\bar{X}_{\lambda,j,l,k}^{M_{2}}\to Chow(1)$ defined by $(f,Z,L^{\prime})\mapsto L^{\prime}$. Let us denote the fibre of this map by $F_{\lambda,j,l,k}^{M_{2}}$; it is clear that $\bar{X}_{\lambda,j,l,k}^{M_{2}}=Chow(1)F_{\lambda,j,l,k}^{M_{2}}.$ The fibre over $L^{\prime}$ consists of those $Z$ such that $|Z\cap L|=j$ and $h^{1}(\mathbb{P}^{n},I^{2}_{(Z\cap L^{\prime})\cup L}(d))=k+h^{1}(\mathbb{P}^{n},I^{2}_{L}(d))$, or equivalently $h^{1}(\mathbb{P}^{n},I^{2}_{Z\cap L^{\prime}}(d))=k$. But as is established in Proposition 6.10, this just implies that $|Z\cap L^{\prime}|=f(k)$ for some number $f(k)$ depending on $k$. We may then apply Proposition 6.9 to conclude that $F_{\lambda,j,l,k}^{M_{2}}$ is a polynomial in $\mathbb{L}$ and it suffices to establish that the image of $\sum_{|\lambda|<N}(-1)^{||\lambda||}F_{\lambda,j,l,k}^{M_{2}}$ is zero in $K_{0}(MHS)$.
But by Proposition 4.8 the image of $\sum(-1)^{||\lambda||}F_{\lambda,j,l,k}^{M_{i}}$ in $K_{0}(MHS)$ is $\sum_{i=0}^{N}H_{*}^{c}(w_{j}(L)\times w_{f(k)}(L^{\prime})\times w_{i-j-f(k)}(\mathbb{P}^{n}\setminus L\setminus L^{\prime}),\pm\mathbb{Q}),$ where in all relevant cases $f(k)\geq\frac{d}{2}$, which, since $d$ is assumed to be large, is greater than $2$. Thus by Lemma 1.2 in [17] the above cohomology group vanishes and we have the desired equality. Similarly $\sum_{|\lambda|<N}(-1)^{||\lambda||}\bar{X}_{\lambda,j,l,k}^{M_{1}}=0.$ and
Beam Instabilities G. Rumolo CERN, Geneva, Switzerland When a beam propagates in an accelerator, it interacts with both the external fields and the self-generated electromagnetic fields. If the latter are strong enough, the interplay between them and a perturbation in the beam distribution function can lead to an enhancement of the initial perturbation, resulting in what we call a beam instability. This unstable motion can be controlled with a feedback system, if available, or it grows, causing beam degradation and loss. Beam instabilities in particle accelerators have been studied and analysed in detail since the late 1950s. The subject owes its relevance to the fact that the onset of instabilities usually determines the performance of an accelerator. Understanding and suppressing the underlying sources and mechanisms is therefore the key to overcoming intensity limitations, thereby pushing forward the performance reach of a machine. § INTRODUCTION The motion of charged particles forming a beam in an accelerator can be studied either individually or taking into account the electromagnetic interaction between them. In the former case, the beam is regarded as a collection of non-interacting particles and the forces acting on them, i.e. the driving terms in each particle's equations of motion, are fully prescribed by the accelerator design. The study of the single-particle dynamics is then complicated by all non-linear components of the applied electromagnetic fields. In practice, this description is sufficient as long as additional electromagnetic fields caused by the presence of the whole beam of particles are not strong enough to perturb significantly the motion imparted by the external fields. 
In many applications, however, beams carrying a high charge (high intensity) and densely packed in a tiny phase space (high brightness) are required, for which the electromagnetic fields created by the interaction of the beam with the external environment need to be included when solving the particles' motion. Under unfavourable conditions, these electromagnetic fields act back on the beam distribution itself in a closed loop, so as to enhance an initial perturbation, however small. This situation eventually leads to an instability. The most general example of an instability loop is schematically illustrated in Fig. <ref>. Schematic of the closed loop through which a beam can become unstable under the effect of self-generated electromagnetic fields. The block labelled `Interaction between beam and external environment' has been deliberately left vague, as any further specification depends on the type of problem being modelled. In the most frequent case, which will also be the subject of this article, the interaction between the beam and the external environment is purely electromagnetic, so that it can be expressed in terms of Maxwell's equations, with the beam as source term and boundary conditions given by the accelerator devices through which the beam propagates. Another case that is frequently the object of study is when the beam generates an electron or ion cloud that acts back on the beam itself and potentially destabilizes it. In this case, the interaction of the beam with the environment needs to be described with all of the physical processes leading to the cloud formation. The additional electric field from the cloud can then be evaluated through Poisson's equation and used as a driving term in the equations of motion of the beam particles. In practice, a beam becomes unstable when, as a result of the loop described above, at least one moment of its six-dimensional (6D) phase space distribution, $\psi(x,y,z,x',y',\delta)$, exhibits an exponential growth (e.g.
typically the mean positions $\langle x\rangle$, $\langle y\rangle$, $\langle z\rangle$ or the standard deviations $\sigma_x$, $\sigma_y$, $\sigma_z$), resulting in beam loss or emittance growth. Assuming an arbitrary observation point $s_0$ along the trajectory of a beam inside an accelerator, described through the coordinate $s$, the full 6D phase space can usually be decomposed into transverse and longitudinal phase spaces. The 4D transverse space is described by the two pairs of conjugate variables $(x, x',y, y')$, i.e. the offsets from the nominal orbit in the horizontal and vertical directions (horizontal is the direction in which the beam is bent), and the relative divergences from the nominal orbit, $x'={\rm d}x/{\rm d}s$ and $y'={\rm d}y/{\rm d}s$. The longitudinal plane is described by the conjugate pair $(z,\delta)$, i.e. a space coordinate proportional to the delay in the arrival time at the selected location with respect to the synchronous particle, $z=-c\tau$ (the minus sign is chosen such that particles arriving before the synchronous particle have a positive $z$), and the relative longitudinal momentum deviation from the nominal momentum, $\delta=\Delta p/p_0$. As an example of instability detection, the onset of a transverse instability can be easily revealed by the signal captured from a beam position monitor (BPM). A phase of exponential growth can be observed, usually followed by saturation and decay, either due to non-linearities or because of beam loss. Figure <ref> shows an example of horizontal BPM signals from two different bunches during the store of a train of 72 bunches with 25 ns spacing in the CERN Proton Synchrotron (PS). The signal in Fig. <ref>(a) is basically BPM noise and represents a stable bunch, while that in Fig. <ref>(b) is from an unstable one. This also highlights how the unstable beam oscillation is eventually associated with a certain amount of beam loss and is damped after the loss occurs.
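The growth rate of such an instability can be estimated directly from turn-by-turn BPM data by fitting the oscillation envelope. The following minimal sketch uses synthetic data with purely illustrative parameter values (tune, growth rate, window length); it is not tied to any specific machine or control-room tool:

```python
import numpy as np

# Synthetic turn-by-turn BPM signal: a betatron oscillation whose amplitude
# grows exponentially (all parameter values here are illustrative).
turns = np.arange(5000)
tune, growth_rate = 0.27, 1.5e-3            # fractional tune, growth rate [1/turn]
signal = 0.01 * np.exp(growth_rate * turns) * np.cos(2 * np.pi * tune * turns)

# Envelope estimate: peak of |signal| in windows of 50 turns (each window
# spans many betatron periods, so the window peak tracks the amplitude).
win = 50
centers = np.arange(0, len(turns), win) + win / 2
env = np.array([np.abs(signal[i:i + win]).max() for i in range(0, len(turns), win)])

# The instability rise time is the inverse slope of log(envelope) vs. turn number.
slope, _ = np.polyfit(centers, np.log(env), 1)
print(slope)   # close to the injected growth rate of 1.5e-3
```

On real data the same log-linear fit would be restricted to the interval between instability onset and saturation, since the saturation and decay phases described above are not exponential.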
Examples of stable (a) and unstable (b) signals from a BPM. The signal from a beam current transformer (BCT) is also sketched, showing how the stable beam does not suffer from any intensity loss, while a sharp intensity decrease is associated with the rise of the instability. The interest in studying coherent beam instabilities arises from the fact that the onset of a beam instability usually determines the maximum beam intensity that a machine can store/accelerate (i.e. its performance limitation). Understanding the type of instability limiting the performance, and its underlying mechanism, is essential because it allows one to identify the source and possible measures to mitigate/suppress the effect, or to provide the specifications of an active feedback system to prevent the instability. Beam instabilities occur in both linear and circular machines and can equally affect the longitudinal plane or the transverse plane. Coherent instabilities can affect the beam on different scales. For example, a typical multibunch instability exhibits an excitation pattern extending over the different bunches of a train and depends on a long-range coupling agent. Nevertheless, in some cases the unstable motion of subsequent bunches does not appear to be coupled, because the instability can be just the consequence of a mechanism that builds up along the bunch train but visibly affects only its last bunches (e.g. an electron cloud). In a pure single-bunch instability, the coupling usually happens between the head and the tail of the same bunch. In this case, the mechanism that drives the instability only needs to act on the short range.
In the following sections, we will first set up the mathematical framework to address the problem of beam instabilities driven by self-generated electromagnetic fields (wake functions and impedances), and we will then apply these concepts to reduced (one- or two-particle) models to explain the physics of some of the most frequent instability mechanisms in particle accelerators. The reference followed throughout this article is [1]. § THE LONGITUDINAL PLANE Let us consider two ultra-relativistic charged particles ($q_1$ and $q_2$, travelling at $v\approx c$, or equivalently $\gamma\gg 1$) going through an accelerator structure, separated by a distance $|z|$ ($z=-c\tau$, with $\tau$ expressing the delay between the arrival times of the two particles at an arbitrary location). The leading particle will be our source and the trailing particle will be the witness. Since both particles are travelling basically at the speed of light, causality imposes that the leading particle cannot be affected by the trailing one. As long as the source and the witness move in a perfectly conducting chamber, the witness does not feel any force from the source. However, when the source encounters a discontinuity, the electromagnetic field produced to satisfy the boundary conditions (wake field) can effectively reach the witness particle and affect its motion. In this process, the source loses energy, while the witness feels a net force all along an effective length, $L$, of the discontinuity/structure/device that caused the wake. Figure <ref> shows a simple sketch of how the situation could look after a source has gone through a cavity-like object and modes remain trapped after its passage. Wake field from a source particle potentially affecting a witness travelling at distance $z$ behind the source. In fact, geometric discontinuities are not the only possible origin of wake fields.
For example, in a chamber with finite conductivity the induced current from a source particle is delayed and can significantly act back on witness particles within a certain distance range. Generally, electromagnetic boundary conditions other than a perfect electrical conductor (PEC) can generate wake fields. The longitudinal wake function associated with a certain accelerator object (able to create a wake field) is defined as the integrated longitudinal force ($q_2E_s(s,z)$) acting on the witness particle along the effective length $L$ of the object (i.e. its energy change, $\Delta E_2$), normalized by the source and witness charges: \begin{equation} W_{||}(z)=-\frac{\int_0^L E_s(s,z) \, {\rm d}s}{q_1} = -\frac{\Delta E_2}{q_1q_2}. \label{longwake} \end{equation} The minus sign is also introduced in the definition, so that $W(0)=-\Delta E_1/q_1^2$ is defined positive (the source particle can only lose energy, $\Delta E_1<0$). The beam loading theorem also proves that the wake function is discontinuous in $z=0$, with $W_{||}(0^-)=2\cdot W_{||}(0)$. Intuitively, this theorem states that a particle travelling at the speed of light can only see half of its own wake. Besides, causality imposes that $W_{||}(0^+)=0$, and actually $W_{||}(z)=0$ for $z>0$. In a global energy balance, the energy lost by the source, $\Delta E_1$, splits into * Electromagnetic energy of the modes that may remain trapped in the object. This is then partly dissipated on the lossy walls or into purposely designed inserts or higher order mode (HOM) absorbers. Partly, it can be potentially transferred to the trailing particles (or the same particle over successive turns), possibly feeding into an instability. * Electromagnetic energy of modes that propagate down the beam chamber (above cut-off), which will be eventually lost on surrounding lossy materials. 
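The causality condition $W_{||}(z)=0$ for $z>0$ and the beam loading relation $W_{||}(0^-)=2W_{||}(0)$ are conveniently illustrated with the widely used resonator model of a wake (closed form as given, e.g., in Chao's textbook treatment). The parameter values below ($R_{\rm s}$, $Q$, $\omega_r$) are purely illustrative, and sign conventions for the wake vary between references, so treat this as a sketch:

```python
import numpy as np

c = 299_792_458.0                      # speed of light [m/s]
R_s, Q = 1.0e4, 5.0                    # shunt impedance [Ohm], quality factor (illustrative)
omega_r = 2 * np.pi * 1.0e9            # resonant angular frequency [rad/s]
alpha = omega_r / (2 * Q)              # damping rate of the oscillating wake
omega_bar = np.sqrt(omega_r**2 - alpha**2)

def wake(z):
    """Longitudinal resonator wake W_||(z) [V/C]; z < 0 is behind the source."""
    if z > 0:                          # causality: no field ahead of the source
        return 0.0
    if z == 0:                         # beam loading: the source sees half its own wake
        return omega_r * R_s / (2 * Q)
    return (omega_r * R_s / Q) * np.exp(alpha * z / c) * (
        np.cos(omega_bar * z / c) + (alpha / omega_bar) * np.sin(omega_bar * z / c))

print(wake(-1e-9) / wake(0.0))   # ~2: the wake just behind the source is twice W(0)
print(wake(0.5))                 # 0.0: nothing in front of the source
```

The envelope of this wake decays as $\exp[\omega_r z/(2Qc)]$ for $z<0$, which is the origin of the narrowband/broadband distinction discussed below: the larger $Q$, the longer the wake survives behind the source.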
The energy loss of a beam is very important, because the fraction lost on the beam environment causes equipment heating (with consequent outgassing and possible damage), while the part associated with long-lived wake fields can feed into both longitudinal and transverse instabilities. The calculation of the energy loss will be the subject of the next subsection. The wake function of an accelerator object is basically its Green function in the time domain (i.e. the electromagnetic response of the object to a pulse excitation). Therefore, it is very useful for macroparticle models and simulations, because it can be used to describe the driving terms in the single-particle equations of motion, as we will see in one of the next subsections. However, we can also describe this response as a transfer function in the frequency domain. This gives the definition of the longitudinal beam coupling impedance of the object under study: \begin{equation} Z_{||}(\omega)=\int_{-\infty}^{\infty} W_{||}(z) \exp\left(-\frac{{\rm i}\omega z}{c}\right)\frac{{\rm d}z}{c}. \label{longimped} \end{equation} Typical longitudinal wake/impedance pairs are described as resonators and are displayed in Fig. <ref>. The wake function is a damped oscillation with a discontinuity in $z=0$, while the beam coupling impedance spectrum exhibits a peak at the specific oscillation frequency. The width of the peak relates to the lifetime of the oscillation in the time domain before it becomes fully damped, distinguishing between a narrowband and a broadband resonator, as shown in the top and bottom of Fig. <ref>, respectively. In more complex cases, several modes can be excited in the object and the beam coupling impedance is a combination of several peaks like those shown in the single-resonance examples depicted in Fig. <ref>. For example, a pill-box cavity with walls of finite conductivity, attached to a vacuum chamber on the left and right (Fig.
<ref>(a)) can resonate on all its characteristic modes determined by its geometry. The excited peaks will be narrower for the modes below the cut-off frequency of the chamber (as the decay is purely determined by the resistive losses), while they will be broader for the peaks above cut-off, as additional losses come from the propagation of these modes into the chamber. This is visible in Fig. <ref> (simulations done with CST® Particle Studio Suite). Wake functions (left) and beam coupling impedances (right) for narrowband (top) and broadband (bottom) resonator objects. Pill-box cavity: a 3D longitudinal cut of the simulated cavity (a) and the obtained longitudinal beam coupling impedance (b). The cut-off frequency of the beam chamber is shown with a vertical dashed line. Courtesy of C. Zannini. In beam physics, broadband impedances, such that the associated wake functions decay over the length of one particle bunch, are only responsible for intrabunch (head–tail) coupling, potentially leading to single-bunch instabilities. Conversely, narrowband impedances, associated with long-lived wake functions decaying over the length of a bunch train or several turns, cause bunch-to-bunch or multiturn coupling, leading to multibunch or multiturn instabilities. §.§ Energy loss By using the concepts so far introduced, we can easily derive an analytical expression for the energy lost by a bunch with line density $\lambda(z)$ (see Fig. <ref>) when it goes through a structure characterized by a wake function $W_{||}(z)$ or beam coupling impedance $Z_{||}(\omega)$. The energy change $\Delta E(z)$ of the witness slice $e\lambda(z) \, {\rm d}z$ can be expressed as the integral of the contributions from the wake functions generated by all the preceding source slices, $e\lambda(z') \, {\rm d}z'$.
Integrating $\Delta E(z)$ over the whole bunch provides the total energy loss of the bunch: \begin{equation} \Delta E = \int_{-\hat{z}}^{\hat{z}}\Delta E(z) \, {\rm d}z = - \int_{-\hat{z}}^{\hat{z}} e\lambda(z) \int_z^{\hat{z}}e\lambda(z')W_{||}(z-z') \, {\rm d}z' \, {\rm d}z. \label{bunchenloss1} \end{equation} By using the Parseval identity and the convolution theorem, the energy loss can be easily written in terms of the bunch spectrum $\Lambda(\omega)$ and the beam coupling impedance: \begin{equation} \Delta E = -\frac{e^2}{2\pi}\int_{-\infty}^{\infty}\Lambda^*(\omega)\left[\Lambda(\omega)Z_{||}(\omega)\right] \, {\rm d}\omega=-\frac{e^2}{2\pi}\int_{-\infty}^{\infty}|\Lambda(\omega)|^2\mathrm{Re}\left[Z_{||}(\omega)\right] \, {\rm d}\omega. \label{bunchenloss2} \end{equation} Sketch of the bunch and line density. Source and witness slices are also highlighted. In the last expression, we have also taken into account that, since $W_{||}(z)$ is a real function, $\mathrm{Re}[Z_{||}(\omega)]$ and $\mathrm{Im}[Z_{||}(\omega)]$ are even and odd functions of $\omega$, respectively. Since Eq. (<ref>) represents the total energy lost by the bunch over a single pass through the object with beam coupling impedance $Z_{||}(\omega)$, it can also be interpreted as the bunch energy loss per turn in a circular accelerator (again due to a single object with beam coupling impedance $Z_{||}(\omega)$, or the total energy loss per turn if $Z_{||}(\omega)$ represents instead the total longitudinal beam coupling impedance modelling the whole ring). However, this is rigorously true only as long as the wake function is sufficiently short-lived to be fully damped after one turn, so that subsequent passages of the bunch are not coupled through the wake. In fact, defining $C$ as the circumference of the ring, Eqs.
(<ref>) and (<ref>) can be generalized to the case of a bunch going through a structure that keeps memory of previous passages, assuming that its longitudinal distribution does not change in time: \begin{equation} \Delta E = \int_{-\hat{z}}^{\hat{z}}\Delta E(z) \, {\rm d}z = - \int_{-\hat{z}}^{\hat{z}} e\lambda(z) \int_z^{\hat{z}}e\lambda(z')\sum_{k=-\infty}^{\infty}W_{||}(kC+z-z') \, {\rm d}z' \, {\rm d}z. \label{bunchenloss1multi} \end{equation} Applying the identity \begin{equation} \sum_{k=-\infty}^{\infty}W_{||}(kC+z-z')=\frac{\omega_0}{2\pi}\sum_{p=-\infty}^{\infty}Z_{||}(p\omega_0)\exp\left[-\frac{{\rm i}p\omega_0(z-z')}{c}\right], \end{equation} in which $\omega_0=2\pi c/C$ is the angular revolution frequency, we can easily recast Eq. (<ref>) in the following form: \begin{equation} \Delta E = -\frac{e^2\omega_0}{2\pi}\sum_{p=-\infty}^{\infty}|\Lambda(p\omega_0)|^2\mathrm{Re}\left[Z_{||}(p\omega_0)\right]. \label{bunchenloss2multi} \end{equation} Equation (<ref>) is very powerful, because it can be applied to the full beam circulating in an accelerator ring and can be used for calculating the total beam energy loss per turn. In this case, we would simply need to replace $\Lambda(\omega)$, the Fourier transform of the single-bunch distribution, with the Fourier transform of the full beam signal, $\Lambda_B(\omega)$. For example, we could assume the beam to be a train of $M$ bunches covering only a fraction of the full circumference ($M<h$, $h$ being the harmonic number of the accelerator) with spacing between bunches $\tau_b=2\pi/(h\omega_0)$: \begin{equation} \lambda_B(z)=\sum_{n=0}^{M-1}\lambda(z-nc\tau_b) \;\; \stackrel{\displaystyle\mathcal{F}}{\iff} \;\; \Lambda_B(\omega)=\Lambda(\omega)\sum_{n=0}^{M-1}\exp\left(-{\rm i}n\omega\tau_b\right).
\end{equation} Summing up the terms in the expression of the beam spectrum, we obtain \begin{equation} \Lambda_B(\omega)=\Lambda(\omega)\exp\left[-\frac{{\rm i}\omega\tau_b(M-1)}{2}\right]\cdot\frac{\displaystyle\sin\left(\frac{M\omega\tau_b}{2}\right)}{\displaystyle\sin\left(\frac{\omega\tau_b}{2}\right)}, \end{equation} which can be finally inserted into Eq. (<ref>), yielding \begin{equation} \Delta E = -\frac{e^2\omega_0}{2\pi}\sum_{p=-\infty}^{\infty}|\Lambda(p\omega_0)|^2\mathrm{Re}\left[Z_{||}(p\omega_0)\right] \cdot \left[\frac{1-\displaystyle\cos\left(\frac{2\pi Mp}{h}\right)}{1-\displaystyle\cos\left(\frac{2\pi p}{h}\right)}\right]. \end{equation} The terms in the summation above are maximum for $p=k\cdot h$, as the ratio in brackets becomes equal to $M^2$. This means that narrowband impedances peaked around multiples of the harmonic number of the accelerator are the most efficient to drain energy from the beam, and consequently the associated objects suffer from beam-induced heating. However, such impedances, usually associated with the RF systems and their HOMs, need to be avoided in accelerator design by either detuning them or including HOM absorbers. In fact, they not only cause equipment heating, but potentially lead to important instabilities (e.g. the Robinson instability, see the next subsection, or transverse coupled bunch instabilities). The total energy loss per turn associated with the global accelerator impedance needs to be compensated for by the RF system, so that the average stable phase shifts by an amount $\langle\Delta \Phi_s\rangle$ given by \begin{equation} \sin\langle\Delta\Phi_s\rangle = \frac{\Delta E}{MN_b e V_m}, \label{phaseshift} \end{equation} where $N_b$ is the number of particles per bunch and $V_m$ is the applied RF voltage. §.§ The Robinson instability To study instabilities, the effect of wake fields (or impedances) must be formally introduced in the equation of motion of the beam particles.
Resorting to the concepts introduced at the beginning of this section, we can write the equation of motion of any single particle in the witness slice $\lambda(z) \, {\rm d}z$ under the effect of the force from the RF system and that associated with the wake, which can extend to several turns: \begin{equation} \frac{{\rm d}^2z}{{\rm d}t^2} + \frac{\eta e V_{\mathrm{RF}}(z)}{m_0\gamma C} = \frac{\eta e^2}{m_0\gamma C}\int_{z}^{\infty}\sum_{k=0}^{\infty}\lambda(z'+kC, t)W_{||}(z-z'-kC) \, {\rm d}z'. \label{motion} \end{equation} Equation (<ref>) is very general and can be used in macroparticle tracking programs, which solve it for each macroparticle, determining self consistently the full beam evolution $\lambda(z,t)$. It is to be noted that both the integral and the summation in the above equation can be formally extended from $-\infty$, as the wake function vanishes for positive values of $z$ due to causality. In the following, to illustrate the most basic mechanism of longitudinal instability, i.e. the Robinson instability, we will make use of these simplifications: * The bunch is assumed to be point-like (carrying the full bunch charge $N_be$) and feels an external linear focusing force (i.e. in absence of the wake forces, it would execute linear synchrotron oscillations with synchrotron frequency $\omega_s$). * The bunch additionally feels the effect of the multiturn wake from an impedance source distributed over the ring circumference $C$ (the analysis would not change if the impedance source had been lumped at one ring location, and in reality both the external voltage and the impedance source should be localized, making Eq. (<ref>) de facto time discrete). In this case, the equation of motion (<ref>) reduces to \begin{equation} \frac{{\rm d}^2z}{{\rm d}t^2} +\omega_s^2z= \frac{N_b\eta e^2}{m_0\gamma C}\sum_{k=0}^{\infty}W_{||}\left[z(t)-z(t-kT_0)-kC\right]. 
\label{Rob-motion} \end{equation} First of all, we assume that the wake function can be linearized on the scale of the synchrotron oscillation (i.e. the wake function does not exhibit abrupt changes over a half-bucket length): \begin{equation} W_{||}\left[z(t)-z(t-kT_0)-kC\right]\approx W_{||}(kC)+W'_{||}(kC)\cdot\left[z(t)-z(t-kT_0)-kC\right]. \end{equation} We can use the above expansion in Eq. (<ref>). The term $\sum_k W_{||}(kC)$ only contributes to a constant term in the solution of the equation of motion, shifting the centre of the synchrotron oscillation from the bucket centre to a certain $z_0$. This term represents the stable phase shift that compensates for the energy loss introduced by the wake and will be neglected in the following. The dynamic term proportional to $z(t)-z(t-kT_0) \approx kT_0 \, {\rm d}z/{\rm d}t$, instead, is a friction-like term in the equation of the harmonic oscillator and, under certain conditions, can lead to an instability. Going to the frequency domain then yields \begin{equation} \omega^2 - \omega_s^2 = -\frac{{\rm i}N_b\eta e^2}{m_0\gamma C^2}\sum_{p=-\infty}^{\infty}\left[p\omega_0Z_{||}(p\omega_0)-(p\omega_0+\omega)Z_{||}(p\omega_0+\omega)\right]. 
\end{equation} At this point, assuming that the wake only introduces a small deviation from the nominal synchrotron frequency, we can easily write the complex frequency shift, which results in a real part (synchrotron frequency shift) and an imaginary part (growth/damping rate): \begin{equation} \left\{\begin{array}{l} \Delta\omega_s=\mathrm{Re}(\omega-\omega_s)=\displaystyle\left(\frac{e^2}{m_0c^2}\right)\frac{N_b\eta}{2\gamma T_0^2\omega_s}\sum_{p=-\infty}^{\infty}\left[p\omega_0\mathrm{Im}Z_{||}(p\omega_0)-(p\omega_0+\omega_s)\mathrm{Im}Z_{||}(p\omega_0+\omega_s)\right],\\[0.7cm] \tau^{-1}=\mathrm{Im}(\omega-\omega_s)=\displaystyle\left(\frac{e^2}{m_0c^2}\right)\frac{N_b\eta}{2\gamma T_0^2\omega_s}\sum_{p=-\infty}^{\infty}(p\omega_0+\omega_s)\mathrm{Re}Z_{||}(p\omega_0+\omega_s). \end{array} \right. \label{shifts} \end{equation} The possibility of having an instability is related to a positive value of the growth rate $\tau^{-1}$ in the second of the equations in (<ref>). This is determined by the sign of $\eta$ and that of the weighted summation on $\mathrm{Re}Z_{||}$, which are the only two terms that can admit both signs. A relevant situation that can be studied in further detail is when the impedance has a spectrum peaked at a frequency $\omega_r$ close to the RF frequency $h\omega_0$, or to a multiple of it (i.e. associated with the cavity fundamental mode or with a HOM). In this case, out of the infinite summation only two terms will dominate the right-hand side of the equation for the growth/damping rate: \begin{equation} \tau^{-1}=\mathrm{Im}(\omega-\omega_s)\approx\displaystyle\left(\frac{e^2}{m_0c^2}\right)\frac{N_b\eta h\omega_0}{2\gamma T_0^2\omega_s}\left[\mathrm{Re}Z_{||}(h\omega_0+\omega_s)-\mathrm{Re}Z_{||}(h\omega_0-\omega_s)\right]. \end{equation} Stability requires that $\eta$ and the variation of $\mathrm{Re}Z_{||}(\omega)$ around $h\omega_0$ have different signs.
Figure <ref> shows that, assuming $\omega_s$ to be small with respect to the width of the resonance peak, this can be achieved differently according to whether $\omega_r$ is below or above $n\omega_0$. In particular, when $h\omega_0<\omega_r$ (Fig. <ref>(a)), the term $\left[\mathrm{Re}Z_{||}(h\omega_0+\omega_s)-\mathrm{Re}Z_{||}(h\omega_0-\omega_s)\right]$ is positive and therefore $\eta$ needs to be negative for stability (i.e. the machine should be operating below transition). Otherwise, for $h\omega_0>\omega_r$ (Fig. <ref>(b)), stability is guaranteed only above transition. (a) (b) Sketch of the two possible situations for the Robinson instability Other types of impedances can also cause instabilities through the Robinson mechanism, following the general equations (<ref>). However, a smooth broadband impedance with no narrow structures on the $\omega_0$ scale cannot give rise to an instability, because \begin{equation} \sum_{p=-\infty}^{\infty}(p\omega_0+\omega_s)\mathrm{Re}Z_{||}(p\omega_0+\omega_s) \rightarrow \frac{1}{\omega_0}\int_{-\infty}^{\infty}\omega\mathrm{Re}Z_{||}(\omega) \, {\rm d}\omega \rightarrow 0. \end{equation} Physically, this could be expected, because the absence of structure on $\omega_0$ scale in the spectrum implies that the wake has fully decayed over one turn and, therefore, the driving term in the equation of motion (<ref>) also vanishes. To summarize, the Robinson instability affects a single bunch under the action of a multiturn wake field. It is characterized by a term of coherent synchrotron tune shift (the first of the equations (<ref>)) and an unstable rigid bunch dipole oscillation (growth rate given by the second of the equations (<ref>) under the conditions explained above). It does not involve higher order moments of the bunch longitudinal phase space distribution. 
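The Robinson sign criterion derived above is easy to verify numerically. The sketch below evaluates the bracket $\mathrm{Re}Z_{||}(h\omega_0+\omega_s)-\mathrm{Re}Z_{||}(h\omega_0-\omega_s)$ for a resonance sitting slightly above or below $h\omega_0$; the resonator form, quality factor and frequencies are purely illustrative assumptions, not machine parameters.

```python
# Illustrative check of the Robinson stability criterion: the sign of
# Re Z(h*w0 + ws) - Re Z(h*w0 - ws) flips according to whether the resonance
# frequency wr sits above or below h*w0. All numbers are arbitrary.

def re_z_resonator(w, wr, R=1.0, Q=1000.0):
    """Real part of a resonator impedance (standard parallel-RLC form)."""
    return R / (1.0 + Q**2 * (w / wr - wr / w) ** 2)

h_w0 = 1.0e8   # RF angular frequency h*omega_0 [rad/s], illustrative
ws = 1.0e4     # synchrotron frequency, much smaller than the peak width wr/Q

def bracket(wr):
    return re_z_resonator(h_w0 + ws, wr) - re_z_resonator(h_w0 - ws, wr)

# Resonance slightly above h*w0: bracket > 0, so stability requires eta < 0
# (operation below transition).
assert bracket(1.001 * h_w0) > 0
# Resonance slightly below h*w0: bracket < 0, stability requires eta > 0.
assert bracket(0.999 * h_w0) < 0
```

Reading off the two assertions reproduces the two panels of the figure: the sampled difference of $\mathrm{Re}Z_{||}$ across the resonance peak changes sign with the position of $\omega_r$ relative to $h\omega_0$, and the sign of $\eta$ must be chosen accordingly.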
Other important collective effects can affect a bunch in a beam, for instance: * Potential well distortion, resulting in synchronous phase shift, bunch lengthening or shortening, synchrotron tune shift/spread. * Coupled bunch instabilities. * Higher order mode and mode-coupling single-bunch instabilities (e.g. microwave instability). * Coasting beam instabilities (e.g. negative-mass instability). To be able to study these effects, more refined models of the beam are needed (e.g. the kinetic model described by the Vlasov equation or macroparticle simulations), but this is beyond the scope of this introductory article. § THE TRANSVERSE PLANE We can start from the same system we have used in the previous section to introduce the concept of longitudinal wake function. We consider two ultra-relativistic charged particles, $q_1$ and $q_2$, going through an accelerator structure, with the trailing (witness) particle at a distance $z$ from the leading (source) one. In an axisymmetric structure (or simply with a top–bottom and left–right symmetry) a source particle travelling on axis cannot induce net transverse forces on a witness particle also following on axis. A symmetry breaking has to be introduced to drive transverse effects, and to first order there are two options, i.e. offset the source or the witness (see Fig. <ref>). The transverse (horizontal or vertical) dipolar wake function associated with a certain accelerator object (able to create a wake field) is defined as the integrated transverse force from an offset source ($q_2\cdot [\vec{E}(s,z) + \vec{v}\times \vec{B}(s,z)]_{x,y}$) acting on the witness particle along the effective length of the object, normalized by the source and witness charges and by the offset of the source charge, $\Delta x_1$ or $\Delta y_1$ (see Fig.
<ref>, top): \begin{equation} \begin{array}{l} \displaystyle W_{Dx}(z)=-\frac{\int_0^L \left[\vec{E}(s,z) + \vec{v}\times\vec{B}(s,z)\right]_x \, {\rm d}s}{q_1\Delta x_1} = -\left(\frac{E_0}{q_1q_2}\right)\frac{\Delta x'_2}{\Delta x_1},\\[5mm] \displaystyle W_{Dy}(z)=-\frac{\int_0^L \left[\vec{E}(s,z) + \vec{v}\times\vec{B}(s,z)\right]_y \, {\rm d}s}{q_1\Delta y_1} = -\left(\frac{E_0}{q_1q_2}\right)\frac{\Delta y'_2}{\Delta y_1}. \end{array} \label{longwake2} \end{equation} Wake field from a source particle potentially affecting a witness travelling at distance $z$ behind the source: dipolar (top) and quadrupolar (bottom). The transverse (horizontal or vertical) quadrupolar wake function associated with a certain accelerator object (able to create a wake field) is defined as the integrated transverse force from an on-axis source ($q_2\cdot [\vec{E}(s,z) + \vec{v}\times \vec{B}(s,z)]_{x,y}$) acting on an offset witness particle along the effective length of the object, normalized by the source and witness charges and by the offset of the witness charge, $\Delta x_2$ or $\Delta y_2$ (see Fig. <ref>, bottom): \begin{equation} \begin{array}{l} \displaystyle W_{Qx}(z)=-\frac{\int_0^L \left[\vec{E}(s,z) + \vec{v}\times\vec{B}(s,z)\right]_x \, {\rm d}s}{q_1\Delta x_2} = -\left(\frac{E_0}{q_1q_2}\right)\frac{\Delta x'_2}{\Delta x_2},\\[5mm] \displaystyle W_{Qy}(z)=-\frac{\int_0^L \left[\vec{E}(s,z) + \vec{v}\times\vec{B}(s,z)\right]_y \, {\rm d}s}{q_1\Delta y_2} = -\left(\frac{E_0}{q_1q_2}\right)\frac{\Delta y'_2}{\Delta y_2}. \end{array} \label{longwake3} \end{equation} For most objects of interest, it can be seen that the wake functions so defined do not depend on the source or witness offsets, provided the offsets are much smaller than the transverse size of the object. For larger offsets, coupling and/or higher order non-linear terms can become important and may need to be taken into account to describe correctly the particle dynamics. 
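Read the other way around, the definitions above give the angular kick received by the witness particle once the wake functions are known: the dipolar wake acts in proportion to the source offset, the quadrupolar wake in proportion to the witness's own offset. A minimal sketch, in which all wake amplitudes, charges and energies are illustrative assumptions:

```python
# Sketch of applying transverse wake kicks to a witness particle, inverting
# the defining relations of the dipolar and quadrupolar wake functions.
# All numerical values are illustrative assumptions.

def transverse_kick(z, dx_source, dx_witness, w_dip, w_quad, q1, q2, e0):
    """Angle change of the witness particle at distance z behind the source."""
    return -(q1 * q2 / e0) * (w_dip(z) * dx_source + w_quad(z) * dx_witness)

# Toy wake functions, causal: zero for z > 0 (ahead of the source).
w_dip = lambda z: -2.0e15 * (z < 0)   # V/(C m): dipolar, negative behind the source
w_quad = lambda z: 0.5e15 * (z < 0)   # V/(C m): quadrupolar, sign depends on geometry

q = 1.602e-19     # elementary charge [C]
e0 = 7e12 * q     # particle energy E_0 [J] (7 TeV, illustrative)

# On-axis witness behind an offset source: only the dipolar term contributes,
# and the kick has the same sign as the source offset (deflection toward it).
kick = transverse_kick(-0.01, 1e-3, 0.0, w_dip, w_quad, q, q, e0)
assert kick > 0
```

With both offsets non-zero the dipolar and quadrupolar contributions simply add, and causality makes the kick vanish for a witness travelling ahead of the source ($z>0$).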
Both the dipolar and quadrupolar wake functions in $z=0$, $W_{Dx,Dy}(0)$ and $W_{Qx,Qy}(0)$, must vanish because for $z=0$ source and witness particles are travelling together and they can only mutually interact through space charge, which is not included in this framework. Besides, $W_{Dx,Dy}(0^-)$ is generally negative, because trailing particles tend to be deflected toward the source particle (offset and kick have the same sign). The sign of the quadrupolar wake functions in $0^-$ depends on the geometry and properties of the surrounding environment. As we also discussed for the longitudinal wake function, the condition that transverse wake functions must vanish for $z>0$ due to causality also holds. The transverse wake function of an accelerator object is very useful for macroparticle models and simulations, because it relates source or witness perturbations to the associated kicks on trailing particles: \begin{equation} \left\{\begin{array}{l} \displaystyle \Delta x'_2(z)=-\left(\frac{q_1q_2}{E_0}\right)\left[ W_{Dx}(z)\Delta x_1 + W_{Qx}(z)\Delta x_2\right], \\[5mm] \displaystyle \Delta y'_2(z)=-\left(\frac{q_1q_2}{E_0}\right)\left[ W_{Dy}(z)\Delta y_1 + W_{Qy}(z)\Delta y_2\right]. \end{array} \right. \end{equation} We can also describe the interaction as a transfer function in the frequency domain, which defines the transverse beam coupling impedance (dipolar and quadrupolar) of the object under study: \begin{equation} \left\{\begin{array}{l} \displaystyle Z_{Dx,Dy}(\omega)={\rm i}\int_{-\infty}^{\infty}W_{Dx,Dy}(z)\exp\left(\frac{{\rm i}\omega z}{c}\right)\frac{{\rm d}z}{c},\\[5mm] \displaystyle Z_{Qx,Qy}(\omega)={\rm i}\int_{-\infty}^{\infty}W_{Qx,Qy}(z)\exp\left(\frac{{\rm i}\omega z}{c}\right)\frac{{\rm d}z}{c}. \end{array} \right. 
\label{transv-imp} \end{equation} Similarly to the longitudinal plane, in the transverse plane typical wake/impedance pairs are also represented by resonators, which have a peaked structure in the frequency domain and are damped oscillations in the time domain. Another important example of transverse impedance is the wall impedance. Figure <ref> depicts the dipolar impedance spectrum for a simple cylindrical chamber with a wall of finite thickness $t$, finite conductivity $\sigma$ and radius $b$. This impedance extends over a very wide range of frequencies and exhibits different behaviours that can be intuitively understood as described in the following. At low frequencies, such that the penetration depth of the electromagnetic fields into the chamber is much larger than the wall thickness, i.e. $\delta(\omega)=\sqrt{2/(\mu_0\sigma\omega)}\gg t$, the beam can only see the induced charges on the inner surface of the chamber, associated with a constant imaginary part of the impedance (betatron tune shift by extra defocusing) and basically zero real part (no losses). At intermediate frequencies, the electromagnetic interaction between the beam and the conducting pipe happens through a decreasing $\delta(\omega)$ and the impedance becomes roughly proportional to the penetration depth, decaying like $1/\sqrt{\omega}$. At high frequency, a point is reached at which the electromagnetic fields can become trapped within the penetration depth, generating a resonant peak. Wall dipolar impedance for a cylindrical pipe of radius $b$ and thickness $t$, as illustrated. Courtesy of N. Mounet. Corresponding to the different frequency ranges, the wake function also exhibits different behaviours on different distance ranges from the source charge.
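The boundary between the low-frequency and intermediate regimes described above is set by the penetration depth $\delta(\omega)=\sqrt{2/(\mu_0\sigma\omega)}$ crossing the wall thickness $t$. A small sketch locating this crossover for a copper wall; all parameter values are illustrative assumptions:

```python
import math

# Locate the frequency at which the skin depth delta(w) = sqrt(2/(mu0*sigma*w))
# equals the wall thickness t, separating the wall-impedance regimes discussed
# in the text. Wall material and thickness are illustrative assumptions.

MU0 = 4e-7 * math.pi          # vacuum permeability [H/m]

def skin_depth(omega, sigma):
    """Penetration depth of the EM fields into a conductor [m]."""
    return math.sqrt(2.0 / (MU0 * sigma * omega))

sigma_cu = 5.9e7              # copper conductivity [S/m]
t = 2e-3                      # wall thickness: 2 mm, illustrative

# Crossover: delta(w) == t  =>  w = 2/(mu0*sigma*t^2)
omega_c = 2.0 / (MU0 * sigma_cu * t**2)
assert abs(skin_depth(omega_c, sigma_cu) - t) / t < 1e-12

# Well below omega_c the fields leak through the wall (delta >> t): the beam
# sees only the induced surface charges (imaginary impedance, no losses).
assert skin_depth(omega_c / 1e4, sigma_cu) > 10 * t
# Well above omega_c the interaction happens within a shrinking skin depth,
# and the impedance decays like 1/sqrt(omega).
assert skin_depth(omega_c * 1e4, sigma_cu) < t / 10
```

Since $\delta\propto 1/\sqrt{\omega}$, each factor $10^4$ in frequency changes the skin depth by a factor $10^2$, which is what the two bracketing assertions exploit.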
In the medium–long range (usually coupled bunch and/or multiturn), the wake function is characterized by a monotonic decay slowly converging to zero, while in the short range (typically responsible for single-bunch effects) the high-frequency resonance peak dominates and a damped oscillatory behaviour is found. The wall wake function is displayed in Fig. <ref>, in which the short-range part has also been zoomed in order to highlight the different behaviour. The switch between the two regimes obviously depends on the frequency at which the peak is actually located. It can be demonstrated that the transverse impedance of a resistive wall is inversely proportional to $b^3$, while its longitudinal counterpart is inversely proportional to $b$. That is why the transverse effects due to a resistive wall are in general more severe than the corresponding longitudinal ones. In particular, the transverse resistive wall impedance is responsible for coupled bunch instabilities and determines which damping time a feedback system needs in order to be able to efficiently counteract this effect. Furthermore, resistive wall effects become especially important in machines with low-emittance beams, which have chambers with very small radii and might require coatings with low-conductivity materials to avoid other effects. §.§ The rigid bunch instability Repeating the same procedure used for the longitudinal plane, to study instabilities in the transverse plane, the effect of wake fields must be formally introduced in the equations of transverse motion of the beam particles. 
Using the concepts introduced at the beginning of this section, we can write the equations of the transverse motion of any single particle in the witness slice $\lambda(z) \, {\rm d}z$ under the effect of the focusing force from the external magnets and that associated with a distributed wake, which can extend to several turns: \begin{equation} \left\{\begin{array}{l} \displaystyle \frac{{\rm d}^2x}{{\rm d}s^2} + K_x(s)x = -\frac{e^2}{m_0c^2\gamma C}\cdot\\[3mm] \displaystyle\;\;\;\;\;\;\cdot\sum_{k=0}^{\infty}\int_{z}^{\infty}\lambda(z'+kC,s)\left[\langle x \rangle(z'+kC,s)W_{Dx}(z-z'-kC) + x W_{Qx}(z-z'-kC) \right] \, {\rm d}z',\\[8mm] \displaystyle\frac{{\rm d}^2y}{{\rm d}s^2} + K_y(s)y = -\frac{e^2}{m_0c^2\gamma C}\cdot\\[3mm] \displaystyle\;\;\;\;\;\;\cdot\sum_{k=0}^{\infty}\int_{z}^{\infty}\lambda(z'+kC,s)\left[\langle y \rangle(z'+kC,s)W_{Dy}(z-z'-kC) + y W_{Qy}(z-z'-kC) \right] \, {\rm d}z'. \end{array} \right. \label{transv-motion} \end{equation} Wall dipolar wake function for a cylindrical pipe of radius $b$ and thickness $t$. The short range has been zoomed in to highlight the oscillatory behaviour of the function at short distances from the source. Courtesy of N. Mounet. Here $K_{x,y}(s)$ are the external focusing coefficients as from Hill's equation and $\langle x, y\rangle(z,s)$ represent the average $x$ and $y$ across the ${\rm d}N_b(z)=\lambda(z,s) \, {\rm d}z$ particles contained in the thin slice ${\rm d}z$. Equation (<ref>) is very general and can be used in macroparticle tracking programs, which solve it for each macroparticle, determining self consistently the evolution of $\lambda(z,s)$ as well as of $\langle x, y\rangle(z,s)$. As in the longitudinal case, all integrals and summations in the above equations can be formally extended from $-\infty$, as the wake functions vanish for positive values of $z$ due to causality. In the following, to illustrate the most basic mechanism of transverse instability, i.e. 
the rigid bunch instability, we will make use of a set of simplifying assumptions: * The bunch is point-like, carries a charge $N_be$ and feels an external linear force (i.e. it would execute linear betatron oscillations in absence of the wake forces). * Longitudinal motion is neglected. * Smooth approximation (constant focusing). * Distributed wake, only dipolar and only in the $y$ direction. The equation of motion of this one-particle beam can then be written \begin{equation} \displaystyle\frac{{\rm d}^2y}{{\rm d}s^2} + \left(\frac{\omega_{\beta}}{c}\right)^2y=-\left(\frac{N_be^2}{m_0c^2\gamma C}\right)\sum_{k=0}^{\infty}y(s-kC)W_{Dy}(-kC). \end{equation} Transforming into the frequency domain and applying the Poisson sum formula, we obtain \[ \displaystyle\omega^2 - \omega_{\beta}^2 = \frac{N_be^2}{m_0\gamma C}\sum_{k=-\infty}^{\infty}\exp({\rm i}k\omega T_0)W_{Dy}(kC)= \] \begin{equation} \displaystyle\;\;\;=-\frac{{\rm i}N_be^2}{m_0\gamma CT_0}\sum_{p=-\infty}^{\infty}Z_{Dy}(p\omega_0+\omega) \;\; . \end{equation} Assuming that the effect of the wake results in a small deviation from the betatron tune, we can derive from the above equation a simple estimate of the real frequency shift (tune shift) and the imaginary frequency shift (growth/damping rate): \begin{equation} \begin{array}{l} \displaystyle \frac{\mathrm{Re}(\omega-\omega_{\beta})}{\omega_0}=\Delta Q_y \approx \frac{N_be^2\langle\beta_y\rangle}{4\pi m_0\gamma c C}\sum_{p=-\infty}^{\infty}\mathrm{Im}\left[Z_{Dy}(p\omega_0+\omega_{\beta})\right], \\[6mm] \displaystyle \mathrm{Im}(\omega-\omega_{\beta})=\tau^{-1}\approx -\frac{N_be^2\langle\beta_y\rangle}{2m_0\gamma C^2}\sum_{p=-\infty}^{\infty}\mathrm{Re}\left[Z_{Dy}(p\omega_0+\omega_{\beta})\right]. \end{array} \label{rigid-eqs} \end{equation} With the given definition of the transverse impedance (i.e. including the imaginary unit in Eqs.
(<ref>)), the tune shift is found to depend only on the imaginary part of the impedance, while the growth/damping rate depends only on its real part. It is interesting to note that the tune shift can also be expressed in the following well-known compact form: \begin{equation} \Delta Q_y = \frac{1}{4\pi}\left[\langle\beta_y\rangle \frac{eI_b\mathrm{Im}(Z_{Dy}^{\mathrm{eff}})}{E} \right]\rightarrow\frac{1}{4\pi}\oint\beta_y(s)\Delta k_y(s) \, {\rm d}s, \end{equation} where $\Delta k_y(s)$ is the distributed quadrupolar error due to the wake, which can be written as the relative energy kick from the wake $\Delta E_y/E$ divided by the circumference $C$. Furthermore, besides the tune shift, the presence of the wake introduces an imaginary part of the betatron frequency shift, which, if positive, can result in a beam instability. In particular, the summation in the second of the equations (<ref>) can be positive or negative, because $\mathrm{Re}[Z_{Dy}(\omega)]$ is an odd function. Unlike the case of the Robinson instability in the longitudinal plane, here the sign of the imaginary frequency shift is solely determined by the sign of this summation. In a first noteworthy case, similarly to what we discussed in the longitudinal case, we can assume the transverse impedance to be peaked at a frequency $\omega_r$ close to $h\omega_0$ (e.g. RF cavity fundamental mode or a HOM). If we define the tune $Q_y=n_y + \Delta_{\beta y}$ with $-0.5<\Delta_{\beta y}<0.5$ and we make use of the property of the real part of the impedance to be an odd function of $\omega$, we can easily reduce the summation at the right-hand side of the second of the equations (<ref>) to the sum of its two leading terms alone ($p$ such that $p+n_y=h$): \begin{equation} \tau^{-1}\approx -\frac{N_be^2\langle\beta_y\rangle}{2m_0\gamma C^2}\left(\mathrm{Re}[Z_{Dy}(h\omega_0+\Delta_{\beta y}\omega_0)] - \mathrm{Re}[Z_{Dy}(h\omega_0-\Delta_{\beta y}\omega_0)] \right). 
\label{stab-res} \end{equation} Figure <ref> illustrates the two possible situations that can occur, corresponding to the case of positive $\Delta_{\beta y}$, i.e. tune below the half integer. If $\omega_r>h\omega_0$ (Fig. <ref>(a)), the right-hand side of Eq. (<ref>) is negative and the bunch will be stable. Conversely, if $\omega_r<h\omega_0$ (Fig. <ref>(b)), the right-hand side of Eq. (<ref>) is positive, entailing an instability. Obviously, the situation is reversed when the tune is above the half integer ($\Delta_{\beta y}<0$). (a) (b) Sketch of the two possible situations for the rigid bunch instability with resonator impedance Another interesting case is when the impedance is of resistive wall type, i.e. strongly peaked in the very low frequency range (diverging for $\omega\rightarrow 0$ in the approximation of thick wall). Then we can distinguish the two situations depicted in Fig. <ref>, i.e. fractional tune below or above the half integer. In the former case, the two leading terms of the summation can be expressed as \begin{equation} \tau^{-1}\approx -\frac{N_be^2\langle\beta_y\rangle}{2m_0\gamma C^2}\left(\mathrm{Re}[Z_{Dy}(\Delta_{\beta y}\omega_0)] - \mathrm{Re}[Z_{Dy}((1-\Delta_{\beta y})\omega_0)] \right), \label{stab-resb} \end{equation} which is negative (see Fig. <ref>, top plot), ensuring beam stability. In the latter case, i.e. for fractional tune above the half integer, we obtain \begin{equation} \tau^{-1}\approx -\frac{N_be^2\langle\beta_y\rangle}{2m_0\gamma C^2}\left(\mathrm{Re}[Z_{Dy}((1+\Delta_{\beta y})\omega_0)] -\mathrm{Re}[Z_{Dy}(-\Delta_{\beta y}\omega_0)] \right). \label{stab-res2} \end{equation} As is visible from Fig. <ref>, bottom plot, this is positive, leading in any case to an instability. 
This is the reason why most of the running machines are usually operated with a fractional part of the tunes below 0.5, although, in practice, tunes above the half integer can be used, if the resistive wall instability is Landau damped or efficiently suppressed with a feedback system. Sketch of the two possible situations for the rigid bunch instability with resistive wall impedance §.§ Strong head–tail instability and transverse-mode coupling Making now one step further in the description of the mechanisms of transverse instability, the case of the strong head–tail instability, also called transverse mode coupling instability (TMCI), can be illustrated by means of a simple two-particle model. It is assumed that the beam consists of two macroparticles, each having a charge of $\Np_be/2$. They perform synchrotron oscillations of the same frequency and amplitude, but with opposite phase. During half a synchrotron period $T_s=2\pi/\omega_s$, particle with index 1 is leading and thus performing free betatron oscillations, while the trailing particle indexed 2 feels the wake field generated by particle 1. Thus, for $0 < s < \pi c/\omega_s$, assuming zero chromaticity, and therefore no dependence of the frequency of the transverse oscillation on the longitudinal parameters, the equations of motion for the two macroparticles are simply written as \begin{align} y_1''+\left(\frac{\omega_\beta}{c}\right)^2 y_1 &= 0 \label{EQ:2ParticleEquationOfMotionLeading}, \\ y_2''+\left(\frac{\omega_\beta}{c}\right)^2 y_2 &= \left(\frac{e^2}{m_0c^2}\right)\frac{\Np_b \wake_0}{2 \gammarel C}\,y_1(s), \label{EQ:2ParticleEquationOfMotionTrailing} \end{align} where $y_1$ denotes the vertical position of particle 1 and $y_2$ the position of particle 2, and the focusing term in Hill's equation has been written as $K_y=\left({\omega_\beta}/{c}\right)^2$ with $\omega_\beta$ denoting the (vertical) betatron frequency. 
Note that it is assumed here that the wake field $\wake_0$ (integrated over the machine circumference $C$) is constant but vanishes before the beam passage in the consecutive turn. This corresponds practically to the case of a broadband impedance. The stability of the two-particle system is analysed in the following. The solution for the free betatron oscillation of Eq. (<ref>) can be written as \begin{equation} \tilde y_1(s)=\tilde y_1 (0) \exp{\left(\frac{-{\rm i} \omega_\beta s}{c}\right)}, \end{equation} where the complex phasor $\tilde y_{1}(s)$ \begin{equation} \tilde y_{1}(s)=y_{1}(s)+{\rm i}\frac{c}{\omega_\beta}y'_{1}(s) \end{equation} has been introduced. Inserting the solution for $\tilde y_1(s)$ into Eq. (<ref>) leads to the solution for $\tilde y_2(s)$ \begin{equation} \tilde y_2(s)={\tilde y_2 (0) \exp\!{\left(\!-\frac{{\rm i} \omega_\beta s}{c}\right)}} +{{\rm i} \frac{\Np_b e^2 \wake_0}{4 m_0 \gammarel c C \omega_\beta} \left[\frac{c}{\omega_\beta}\tilde y_1^*(0) \sin{\left( \frac{\omega_\beta s}{c}\right)\!+ \tilde y_1 (0)\,s\exp\!{\left(\!-\frac{{\rm i} \omega_\beta s}{c}\right)}} \right]}, \label{EQ:PositionTrailingParticle} \end{equation} which consists of the free betatron oscillation term and a driven oscillation term. For the further analysis, the position of the two particles is evaluated at $s=\pi c/\omega_s$, i.e. after half the synchrotron period. Since the betatron frequency is typically much larger than the synchrotron frequency, i.e. $\omega_\beta\gg\omega_s$, the second term on the right-hand side of Eq. (<ref>) is small compared to the last term. 
Thus, the solutions of the equations of motion can be written in matrix form: \begin{equation} \left(\begin{array}{c}\tilde y_1 \\\tilde y_2 \end{array}\right)_{s=\pi c/\omega_s}=\exp{\left(\!-\frac{{\rm i} \pi \omega_\beta}{\omega_s}\right)}\cdot\left(\begin{array}{cc}1 & 0 \\ {\rm i} \Upsilon & 1 \end{array}\right)\cdot\left(\begin{array}{c}\tilde y_1 \\\tilde y_2 \end{array}\right)_{s=0}, \end{equation} where the positive dimensionless parameter $\Upsilon$ has been defined as \begin{equation} \Upsilon=\frac{\pi \Np_b e^2 \wake_0}{4 m_0 \gammarel C \omega_\beta \omega_s}. \end{equation} During the second half of the synchrotron period, i.e. $\pi c/\omega_s < s < 2\pi c/\omega_s$, the two particles exchange their roles and now particle 2 is leading, while particle 1 is feeling the wake. Thus, the equations of motion have to be exchanged and, by combining the transformations over the two half synchrotron periods, the transformation matrix for the full synchrotron period can be finally obtained as \begin{align} \left(\begin{array}{c}\tilde y_1 \\\tilde y_2 \end{array}\right)_{s=2\pi c/\omega_s}&=\exp{\left(\!-\frac{{\rm i} 2\pi \omega_\beta}{\omega_s}\right)}\cdot\left(\begin{array}{cc}1 & {\rm i} \Upsilon \\ 0 & 1 \end{array}\right)\cdot\left(\begin{array}{cc}1 & 0 \\ {\rm i} \Upsilon & 1 \end{array}\right)\cdot\left(\begin{array}{c}\tilde y_1 \\\tilde y_2 \end{array}\right)_{s=0}\\ &=\exp{\left(\!-\frac{{\rm i} 2\pi \omega_\beta}{\omega_s}\right)}\cdot\left(\begin{array}{cc}1 - \Upsilon^2 & {\rm i}\Upsilon \\ {\rm i}\Upsilon & 1 \end{array}\right)\cdot\left(\begin{array}{c}\tilde y_1 \\\tilde y_2 \end{array}\right)_{s=0}. \end{align} The stability of the system is determined by the eigenvalues of the transformation matrix. The characteristic equation for the two eigenvalues $\lambda_{\pm}$ yields \begin{equation} \lambda_{\pm}=\left(1-\Upsilon^2/2\right)\pm\sqrt{\left(1-\Upsilon^2/2\right)^2-1}. 
\label{EQ:2ParticleModelEigenvalueEquation} Since the product of the two eigenvalues is equal to 1, the condition for stability requires that they should be purely imaginary exponentials, i.e. \begin{equation} \lambda_{+}\cdot\lambda_{-}=1 ~ \Rightarrow ~ \lambda_{\pm}=\exp{(\pm {\rm i}\upsilon)}. \end{equation} Inserting this back into Eq. (<ref>) yields finally \begin{equation} \lambda_{+}+\lambda_{-}=2-\Upsilon^2 ~ \Rightarrow ~ \sin{\left(\frac{\upsilon}{2} \right)}=\frac{\Upsilon}{2}. \end{equation} Stability requires that $\upsilon$ should be real, which in turn is satisfied only if $\Upsilon\leq 2$. Therefore, the condition for stability written in terms of wake, machine and beam parameters reads \begin{equation} \Upsilon = \frac{\pi \Np_b e^2 \wake_0}{4m_0\gammarel C \omega_\beta\omega_s}\leq2. \end{equation} The threshold intensity for the onset of the strong head–tail instability in the two-particle model is thus obtained as \begin{equation} \Np_{b,\text{thr}}=\frac{8}{\pi e^2}\frac{p_0 \omega_s}{\beta_y}\left(\frac{C}{\wake_0}\right). \label{EQ:TMCIthreshold2ParticleModel} \end{equation} From Eq. (<ref>), we can deduce the following main features of this instability. The intensity threshold: * is proportional to the momentum $p_0$, i.e. bunches with higher energy are more stable; * scales proportionally with the synchrotron frequency $\omega_s$, i.e. faster synchrotron motion helps to increase the stability range; * is inversely proportional to the beta function at the location of the impedance source, which is expected because the strength of a kick is always proportional to the beta function at the kick location; * is inversely proportional to the integrated wake field around the ring per unit length $\wake_0/C$, which means that a larger wake will decrease the intensity threshold.
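The stability condition $\Upsilon\leq 2$ can also be checked directly on the one-synchrotron-period matrix derived above, dropping the overall betatron phase factor, which has unit modulus. A short numerical sketch:

```python
import cmath

# Eigenvalues of the one-synchrotron-period matrix [[1 - Y^2, iY], [iY, 1]]:
# for Y <= 2 both eigenvalues lie on the unit circle (stable oscillation),
# while for Y > 2 one of them acquires modulus > 1 and the motion grows.

def eigenvalue_moduli(upsilon):
    half_trace = 1.0 - upsilon**2 / 2.0
    disc = cmath.sqrt(half_trace**2 - 1.0)
    return abs(half_trace + disc), abs(half_trace - disc)

for y in (0.5, 1.0, 1.99):          # below threshold: stable
    lp, lm = eigenvalue_moduli(y)
    assert abs(lp - 1.0) < 1e-9 and abs(lm - 1.0) < 1e-9

lp, lm = eigenvalue_moduli(2.5)     # above threshold: one growing eigenvalue
assert max(lp, lm) > 1.0 and abs(lp * lm - 1.0) < 1e-9
```

The product of the moduli stays equal to 1 in both regimes, as required by the unit determinant of the matrix; above threshold the pair splits into one growing and one damped solution.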
The evolution of the centre of charge of the beam in the two-particle model is obtained from the sum $\tilde y_1 + \tilde y_2$, which is found as \begin{equation} \begin{split} \left(\tilde y_1 + \tilde y_2\right)(s) &= \exp\left[-{\rm i}\left(\omega_\beta \mp \frac{\upsilon \omega_s}{2 \pi} \right)\frac{s}{c}\right] \sum_{m=-\infty}^{\infty}C_m \exp{\left(-\frac{{\rm i} m \omega_s s}{c} \right)},\\ C_m &= 2 {\rm i} \Upsilon \frac{1\pm (-1)^m}{(2\pi m \mp \upsilon)^2}\left( 1\mp e^{\pm {\rm i} \upsilon /2}\right) \end{split} \end{equation} with the amplitude coefficients $C_m$ for the oscillation modes with the mode number $m$. The oscillation frequencies of these modes are given by \begin{equation} \begin{cases} \Omega_+ = \omega_\beta+m\omega_s+\upsilon\omega_s/2\pi,\quad m~\text{even},\\ \Omega_- = \omega_\beta+m\omega_s-\upsilon\omega_s/2\pi,\quad m~\text{odd}. \end{cases} \end{equation} Thus, as a function of the beam intensity, the modes shift in frequency through the dependence on $\upsilon$. Figure <ref> shows the frequencies of these modes for $m=0$ and $m=-1$ as a function of $\Upsilon$. The two modes merge at $\Upsilon=2$, beyond which the oscillation frequency acquires an imaginary part, i.e. the beam becomes unstable and exhibits exponential growth. This is illustrated by also plotting the imaginary part of the oscillation frequencies. The strong head–tail instability is therefore also called TMCI. Figure: Frequency spectrum of the centre of charge motion as a function of the parameter $\Upsilon$ as predicted by the two-particle model. Beyond the two-particle model, several analytical formalisms have been developed for describing the TMCI.
Good agreement between the different approaches is obtained when assuming a broadband resonator $\BBimpedanceVertical$ as driving impedance, \begin{equation} \BBimpedanceVertical (\omega)=\frac{\ResonatorFrequency}{\omega}\frac{\shuntImpedance}{\displaystyle 1+{\rm i}\QualityFactor \left(\frac{\omega}{\ResonatorFrequency} - \frac{\ResonatorFrequency}{\omega} \right)}, \end{equation} where $\ResonatorFrequency$ is the resonance angular frequency, $\QualityFactor$ is the resonator quality factor and $\shuntImpedance$ is the resonator shunt impedance (in $\Omega$/m). In the long bunch regime, i.e. $\blength>\pi/\ResonatorFrequency$, the TMCI threshold can be obtained for example from the quasi-coasting beam approach using the peak values of bunch current and momentum spread, which yields \begin{equation} N_{b,\mathrm{thr}}^{\scriptscriptstyle \text{TMC}}=\frac{16\sqrt{2}}{3\pi} \frac{C |\eta| \emitlong}{ \langle\beta_y\rangle e c}\frac{\ResonatorFrequency}{|\BBimpedanceVertical|}\left(1+\frac{ Q'_y\,\omegarev }{\eta\,\ResonatorFrequency} \right), \label{EQ:TMCIthreshold} \end{equation} where $C$ is the machine circumference, $|\BBimpedanceVertical|$ is the peak value of the broadband resonator impedance and $\omegarev$ is the angular revolution frequency. Note that, in comparison to the instability threshold obtained with the two-particle model in Eq. (<ref>), the TMCI intensity threshold here depends not only on the synchrotron tune (through the slip factor $\eta$) but also on the longitudinal emittance $\emitlong$. Furthermore, the threshold can be raised by operating the machine with positive (negative) chromaticity above (below) transition. For a real bunch under the effect of a generic impedance, modes usually exhibit a more complicated shift pattern, which can be calculated via the Vlasov equation or can be found through macroparticle simulations.
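The chromaticity term in the long-bunch threshold can be explored in isolation. The helper below is a hypothetical illustration (the function name and the numbers are our own, not from the text); it evaluates only the factor $(1+Q'_y\,\omegarev/(\eta\,\ResonatorFrequency))$ multiplying the threshold:

```python
def chromaticity_factor(Qp_y, eta, omega_rev, omega_res):
    """Factor (1 + Qp_y * omega_rev / (eta * omega_res)) multiplying
    the long-bunch TMCI intensity threshold."""
    return 1.0 + Qp_y * omega_rev / (eta * omega_res)

# Illustrative (made-up) numbers: a ~43 kHz revolution frequency and a
# ~1 GHz broadband resonator, i.e. omega_rev ~ 2.7e5 rad/s, omega_res ~ 6.3e9 rad/s.
factor_above_transition = chromaticity_factor(1.0, 1e-3, 2.7e5, 6.3e9)
```

Above transition ($\eta>0$) a positive chromaticity gives a factor larger than 1 and raises the threshold, while the same sign of chromaticity below transition ($\eta<0$) lowers it, in line with the operational rules stated in the text.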
Examples of more complicated mode shift pictures for a short bunch (a) and a long Super Proton Synchrotron (SPS) bunch (b) under the effect of a broadband impedance are shown in Fig. <ref>. Figure: Examples of mode shifts for a short bunch ((a), courtesy of A. Chao) and for a long bunch ((b), courtesy of B. Salvant). The left plot contains both the results of an analytical calculation (solid red lines) and those from macroparticle simulations (white lines). Apart from the TMCI, it can be demonstrated that, when a non-zero chromaticity is included in the previous analysis, the individual coherent modes of a single bunch are intrinsically unstable (head–tail instabilities), even below the threshold for which they couple and give rise to TMCI. In particular, the main mode (mode $m=0$, corresponding to the betatron frequency) is naturally unstable below transition with positive chromaticity or above transition with negative chromaticity. Correspondingly, all higher order modes, $m\geq 1$, are naturally unstable below transition with negative chromaticity or above transition with positive chromaticity. Since, in practice, the most dangerous mode for a bunch is mode $m=0$, which is associated with the fastest rise time, accelerators are always operated with settings such as to keep this mode stable, while the other modes, slowly unstable, are damped through other mechanisms. That is the reason why low-energy accelerators operating below transition energy do not need chromaticity correction and can operate with their natural chromaticity (usually negative) without having problems of beam stability. On the other hand, high-energy accelerators operating above transition energy need sextupoles to correct chromaticity and stabilize the otherwise unstable mode $m=0$. Accelerators crossing transition need to make a chromaticity jump upon transition crossing, such as to ensure that the conditions for stabilizing mode $m=0$ are fulfilled at all times during the cycle.
§ FINAL REMARKS Although all the mechanisms for beam instability reviewed in this article might seem to define sharp instability boundaries and narrow parameter ranges for the operation of accelerators, in real life beam stability is eased by some other mechanisms not included in the simple models analysed in this article:

* Spreads of the beam characteristic frequencies and the possible associated non-linearities have a natural stabilizing action through Landau damping. Examples are momentum spread and synchrotron frequency spread in the longitudinal plane, or chromaticity and amplitude detuning in the transverse plane.
* Active feedback systems are routinely employed to control/suppress instabilities. The principle is that the onset of a beam coherent motion is detected through a pick-up, which sends a signal to a kicker that acts back on the beam to damp the motion before it can cause any degradation. Most running accelerators rely on this type of device, which is especially efficient against coupled-bunch instabilities. For single-bunch effects, especially in machines operating with short bunches, bandwidth and power requirements can be very stringent, potentially putting a technological limit to the feasibility of the system.

Furthermore, nowadays there is also a constant effort to identify, monitor and control impedance sources in present or future machines. In particular, impedance localization and reduction techniques are applied to running accelerators as well as for the design of new accelerators to extend their performance reach or ensure a smooth operation with the desired (target) parameter sets. § ACKNOWLEDGEMENTS The author would like to thank H. Bartosik, G. Iadarola, K. Li, E. Métral, N. Mounet, B. Salvant, R. Tomás and C. Zannini for their invaluable help and the input/material kindly provided for the preparation of this article. [1] A. W.
Chao, Physics of collective beam instabilities in high energy accelerators (Wiley Series in Beam Physics and Accelerator Technology) (Wiley & Sons Inc., New York, 1993).
Tallinn University of Technology # The Structure of Concurrent Process Histories Chad Nester This research was supported by the ESF funded Estonian IT Academy research measure (project 2014-2020.4.05.19-0001). ###### Abstract We identify the algebraic structure of the material histories generated by concurrent processes. Specifically, we extend existing categorical theories of resource convertibility to capture concurrent interaction. Our formalism admits an intuitive graphical presentation via string diagrams for proarrow equipments. ## 1 Introduction Concurrent systems are abundant in computing, and indeed in the world at large. Despite the large amount of attention paid to the modelling of concurrency in recent decades (e.g., [12, 19, 22, 20, 1]), a canonical mathematical account has yet to emerge, and the basic structure of concurrent systems remains elusive. In this paper we present a basic structure that captures what we will call the _material_ aspect of concurrent systems: As a process unfolds in time it leaves behind a material history of effects on the world, like the way a slug moving through space leaves a trail of slime. This slime is captured in a natural way by _resource theories_ in the sense of [5], in which morphisms of symmetric monoidal categories – conveniently expressed as string diagrams – are understood as transformations of resources. [Figure: a process trailing its material history, pictured beside a slug leaving a trail of slime] From the resource theoretic perspective, objects of a symmetric monoidal category are understood as collections of resources, with the unit object denoting the empty collection and the tensor product of two collections consisting of their combined contents.
Morphisms are understood as ways to transform one collection of resources into another, which may be combined sequentially via composition, and in parallel via the tensor product. For example, the process of baking bread might generate the following material history: meaning that the baking process involved kneading dough and baking it in an oven to obtain bread (and also the oven). This approach to expressing the material history of a process has many advantages: It is general, in that it assumes minimal structure; canonical, in that monoidal categories are well-studied as mathematical objects; and relatively friendly, as it admits an intuitive graphical calculus (string diagrams). However, it is unable to capture the interaction between components of a concurrent process. For example, consider our hypothetical baking process and suppose that the kneading and baking of the dough are handled by separate subsystems, with control of the dough being handed to the baking subsystem once the kneading is complete. Such interaction of parts is a fundamental aspect of concurrency, but is not expressible in this framework – we can only describe the effects of the system as a whole. We remedy this by extending a given resource theory to allow the decomposition of material histories into concurrent components. Specifically, we augment the string diagrams for symmetric monoidal categories with _corners_ , through which resources may flow between different components of a transformation. [Figure: component material histories with corners, composing into the material history of the whole] Returning to our baking example, we might express the material history of the kneading and baking subsystems _separately_ with the following diagrams, which may be composed horizontally to obtain the material history of the baking process as a whole.
These augmented diagrams denote cells of a single object double category constructed from the original resource theory. The corners make this double category into a proarrow equipment, which turns out to be all the additional structure we need in order to express concurrent interaction. From only this structure, we obtain a theory of exchanges – a sort of minimal system of behavioural types – that conforms remarkably well to our intuition about how such things ought to work. For example, if we begin with a compact closed resource theory, then duals may be understood as _debits_ , and the corresponding theory of exchanges allows us to work with debits as accountants do, treating our material histories as a sort of ledger [21]. Our approach to these concurrent material histories retains the aforementioned advantages of the resource-theoretic perspective: We lose no generality, since our construction applies to any resource theory; It is canonical, with proarrow equipments being a fundamental structure in formal category theory – although not usually seen in such concrete circumstances; Finally, it remains relatively friendly, since the string diagrams for monoidal categories extend in a natural way to string diagrams for proarrow equipments [13]. ### 1.1 Contributions and Related Work _Related Work_. Monoidal categories are ubiquitous – if often implicit – in theoretical computer science. An example from the theory of concurrency is [18], in which monoidal categories serve a purpose similar to their purpose here. String diagrams for monoidal categories seem to have been invented independently a number of times, but until recently were uncommon in printed material due to technical limitations. The usual reference is [15]. We credit the resource-theoretic interpretation of monoidal categories and their string diagrams to [5]. Double categories first appear in [7]. Free double categories are considered in [6] and again in [9].
The idea of a proarrow equipment first appears in [26], albeit in a rather different form. Proarrow equipments have subsequently appeared under many names in formal category theory (see e.g., [24, 11]). String diagrams for double categories and proarrow equipments are treated precisely in [13]. We have been inspired by work on message passing and behavioural types, in particular [3], from which we have adopted our notation for exchanges. Finally, our observations concerning accounting build on the work of [21]. _Contributions_. Our main contribution is the resource-theoretic interpretation of certain proarrow equipments, which we call _cornerings_ , and the observation that they capture exactly the structure of concurrent process histories. Our mathematical contributions are minor, most significantly the identification of crossing cells in the free cornering of a resource theory and the corresponding Lemma 2, which we believe to be novel. We do not claim the other lemmas of the paper as significant mathematical contributions. Instead, they serve to flesh out the structure of the free cornering, serving as evidence that our interpretation is a good one. ### 1.2 Organization and Prerequisites _Prerequisites_. This paper is largely self-contained, but we assume some familiarity with category theory, in particular with monoidal categories and their string diagrams. Some good references are [17, 23, 10]. _Organization_. In Section 2 we review the resource-theoretic interpretation of symmetric monoidal categories. We continue by reviewing the theory of double categories in Section 3, specialized to the single object case. In Section 4 we introduce cornerings of a resource theory, in particular the free such cornering, and exhibit the existence of crossing cells in the free cornering. In Section 5 we show how the free cornering of a resource theory inherits its resource-theoretic interpretation while enabling the concurrent decomposition of resource transformations.
In Section 6 we consider the case in which the original resource theory is given by a compact closed category, as in the theory of double-entry bookkeeping. In Section 7 we conclude and consider directions for future work. ## 2 Monoidal Categories as Resource Theories Symmetric strict monoidal categories can be understood as theories of resource transformation. Objects are interpreted as collections of resources, with $A\otimes B$ the collection consisting of both $A$ and $B$, and $I$ the empty collection. Arrows $f:A\to B$ are understood as ways to transform the resources of $A$ into those of $B$. We call symmetric strict monoidal categories _resource theories_ when we have this sort of interpretation in mind. For example, let $\mathfrak{B}$ be the free symmetric strict monoidal category with generating objects $\\{\texttt{bread},\texttt{dough},\texttt{water},\texttt{flour},\texttt{oven}\\}$ and with generating arrows

mix : water ⊗ flour → dough
knead : dough → dough
bake : dough ⊗ oven → bread ⊗ oven

subject to no equations. $\mathfrak{B}$ can be understood as a resource theory of baking bread. The arrow mix represents the process of combining water and flour to form a bread dough, knead represents kneading dough, and bake represents baking dough in an oven to obtain bread (and an oven). The structure of symmetric strict monoidal categories provides natural algebraic scaffolding for composite transformations. For example, consider the following arrow of $\mathfrak{B}$: $(\texttt{bake}\otimes 1_{\texttt{dough}});(1_{\texttt{bread}}\otimes\sigma_{\texttt{oven},\texttt{dough}};\texttt{bake})$ of type $\texttt{dough}\otimes\texttt{oven}\otimes\texttt{dough}\to\texttt{bread}\otimes\texttt{bread}\otimes\texttt{oven}$ where $\sigma_{A,B}:A\otimes B\stackrel{{\scriptstyle\sim}}{{\to}}B\otimes A$ is the braiding. This arrow describes the transformation of two units of dough into loaves of bread by baking them one after the other in an oven.
It is often more intuitive to write composite arrows like this as string diagrams: Objects are depicted as wires, and arrows as boxes with inputs and outputs. Composition is represented by connecting output wires to input wires, and we represent the tensor product of two morphisms by placing them beside one another. Finally, the braiding is represented by crossing the wires involved. For the morphism discussed above, the corresponding string diagram is: Notice how the topology of the diagram captures the logical flow of resources. Given a pair of parallel arrows $f,g:A\to B$ in some resource theory, both $f$ and $g$ are ways to obtain $B$ from $A$, but they may not have the same effect on the resources involved. We explain by example: Consider the parallel arrows $1_{\texttt{dough}},\texttt{knead}:\texttt{dough}\to\texttt{dough}$ of $\mathfrak{B}$. Clearly these should not be understood to have the same effect on the dough in question, and this is reflected in $\mathfrak{B}$ by the fact that they are not made equal by its axioms. Similarly, knead and $\texttt{knead}\circ\texttt{knead}$ are not equal in $\mathfrak{B}$, which we understand to mean that kneading dough twice does not have the same effect as kneading it once, and that in turn any bread produced from twice-kneaded dough will be different from once-kneaded bread in our model. Consider a hypothetical resource theory constructed from $\mathfrak{B}$ by imposing the equation $\texttt{knead}\circ\texttt{knead}=\texttt{knead}$. In this new setting we understand kneading dough once to have the same effect as kneading it twice, three times, and so on, because the corresponding arrows are all equal. Of course, the sequence of events described by knead is not the one described by $\texttt{knead}\circ\texttt{knead}$: In the former the dough has been kneaded only once, while in the latter it has been kneaded twice. 
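The type discipline of $\mathfrak{B}$ can be sketched in code. The Python model below is our own illustration (not from the paper): it tracks only the multiset of resources consumed and produced by each generating arrow, so it type-checks composites correctly while deliberately forgetting the distinction just emphasized between, e.g., knead and the identity:

```python
from collections import Counter

def generator(name, dom, cod):
    """A generating arrow as a typed rewrite on a multiset of resources."""
    dom, cod = Counter(dom), Counter(cod)
    def apply(resources):
        resources = Counter(resources)
        missing = dom - resources
        assert not missing, f"{name}: missing inputs {dict(missing)}"
        return resources - dom + cod
    return apply

mix = generator("mix", ["water", "flour"], ["dough"])
knead = generator("knead", ["dough"], ["dough"])
bake = generator("bake", ["dough", "oven"], ["bread", "oven"])

# Baking two units of dough one after the other in a single oven:
state = bake(bake(Counter(["dough", "oven", "dough"])))
```

Running the composite yields the multiset {bread, bread, oven}, matching the codomain of the arrow discussed above; the braiding is implicit because multisets are unordered.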
The equality of the two arrows indicates that these two different processes would have the same effect on the dough involved. We adopt as a general principle in our design and understanding of resource theories that transformations should be equal if and only if they have the same effect on the resources involved. For the sake of further illustration, observe that by naturality of the braiding maps the following two resource transformations are equal in $\mathfrak{B}$: Each transformation gives a method of baking two loaves of bread. On the left, two batches of dough are mixed and kneaded before being baked one after the other. On the right, first one batch of dough is mixed, kneaded and baked and only then is the second batch mixed, kneaded, and baked. Their equality tells us that, according to $\mathfrak{B}$, the two procedures will have the same effect, resulting in the same bread when applied to the same ingredients with the same oven. ## 3 Single Object Double Categories In this section we set up the rest of our development by presenting the theory of _single object double categories_ , being those double categories $\mathbb{D}$ with exactly one object. In this case $\mathbb{D}$ consists of a _horizontal edge monoid_ $\mathbb{D}_{H}=(\mathbb{D}_{H},\otimes,I)$, a _vertical edge monoid_ $\mathbb{D}_{V}=(\mathbb{D}_{V},\otimes,I)$, and a collection of _cells_ where $A,B\in\mathbb{D}_{H}$ and $X,Y\in\mathbb{D}_{V}$. Given cells $\alpha,\beta$ where the right boundary of $\alpha$ matches the left boundary of $\beta$ we may form a cell $\alpha|\beta$ – their _horizontal composite_ – and similarly if the bottom boundary of $\alpha$ matches the top boundary of $\beta$ we may form $\frac{\alpha}{\beta}$ – their _vertical composite_ – with the boundaries of the composite cell formed from those of the component cells using $\otimes$. 
We depict horizontal and vertical composition, respectively, as in: and Horizontal and vertical composition of cells are required to be associative and unital. We omit wires of sort $I$ in our depictions of cells, allowing us to draw horizontal and vertical identity cells, respectively, as in: and Finally, the horizontal and vertical identity cells of type $I$ must coincide – we write this cell as $\square_{I}$ and depict it as empty space, see below on the left – and vertical and horizontal composition must satisfy the interchange law. That is, $\frac{\alpha}{\beta}|\frac{\gamma}{\delta}=\frac{\alpha|\gamma}{\beta|\delta}$, allowing us to unambiguously interpret the diagram below on the right: Every single object double category $\mathbb{D}$ defines strict monoidal categories $\mathbf{V}\mathbb{D}$ and $\mathbf{H}\mathbb{D}$, consisting of the cells for which the $\mathbb{D}_{H}$ and $\mathbb{D}_{V}$ valued boundaries respectively are all $I$, as in: and That is, the collection of objects of $\mathbf{V}\mathbb{D}$ is $\mathbb{D}_{H}$, composition in $\mathbf{V}\mathbb{D}$ is vertical composition of cells, and the tensor product in $\mathbf{V}\mathbb{D}$ is given by horizontal composition: In this way, $\mathbf{V}\mathbb{D}$ forms a strict monoidal category, which we call the category of _vertical cells_ of $\mathbb{D}$. Similarly, $\mathbf{H}\mathbb{D}$ is also a strict monoidal category (with collection of objects $\mathbb{D}_{V}$) which we call the _horizontal cells_ of $\mathbb{D}$.
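The boundary bookkeeping above can be sketched directly in code. In this illustrative Python model (our own, not from the paper), a cell carries only its four boundaries, the tensor on the edge monoids is tuple concatenation, and the interchange law then holds at the level of boundaries by construction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    """A double-category cell: top/bottom edges in D_H, left/right in D_V."""
    top: tuple
    bottom: tuple
    left: tuple
    right: tuple

def hcomp(a, b):
    """Horizontal composite a|b: the right edge of a meets the left edge of b."""
    assert a.right == b.left
    return Cell(a.top + b.top, a.bottom + b.bottom, a.left, b.right)

def vcomp(a, b):
    """Vertical composite a over b: the bottom edge of a meets the top edge of b."""
    assert a.bottom == b.top
    return Cell(a.top, b.bottom, a.left + b.left, a.right + b.right)
```

Only boundaries are tracked here, so distinct cells with equal boundaries are conflated; the point is that the two sides of the interchange law always produce the same composite boundary.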
Then a _cornering of $\mathbb{A}$_ is a single object double category $\mathbb{D}$ such that: 1. (i) The vertical cells of $\mathbb{D}$ are $\mathbb{A}$. That is, there is an isomorphism of categories $\mathbf{V}\mathbb{D}\cong\mathbb{A}$. 2. (ii) For each $A$ in $\mathbb{A}_{0}\cong\mathbb{D}_{H}$, there are distinguished elements $A^{\circ}$ and $A^{\bullet}$ of $\mathbb{D}_{V}$ along with distinguished cells of $\mathbb{D}$ called _$\circ$ -corners_ and _$\bullet$ -corners_ respectively, which must satisfy the _yanking equations_ : Intuitively, $A^{\circ}$ denotes an instance of $A$ moving from left to right, and $A^{\bullet}$ denotes an instance of $A$ moving from right to left (see Section 5). Of particular interest is the free cornering of a resource theory: ###### Definition 2 Let $\mathbb{A}$ be a resource theory. Then the _free cornering of $\mathbb{A}$_, written ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$, is the free single object double category defined as follows: * • The horizontal edge monoid ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}_{H}=(\mathbb{A}_{0},\otimes,I)$ is given by the objects of $\mathbb{A}$. * • The vertical edge monoid ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}_{V}=(\mathbb{A}_{0}\times\\{\circ,\bullet\\})^{*}$ is the free monoid on the set $\mathbb{A}_{0}\times\\{\circ,\bullet\\}$ of polarized objects of $\mathbb{A}$ – whose elements we write $A^{\circ}$ and $A^{\bullet}$. * • The generating cells consist of corners for each object $A$ of $\mathbb{A}$ as above, subject to the yanking equations, along with a vertical cell ${\text{}^{\ulcorner}_{\llcorner}\\!{f}\\!_{\lrcorner}^{\urcorner}}$ for each morphism $f:A\to B$ of $\mathbb{A}$ subject to equations as in: For a precise development of free double categories see [9]. 
In brief: cells are formed from the generating cells by horizontal and vertical composition, subject to the axioms of a double category in addition to any generating equations. The free cornering is free both in the sense that it is freely generated, and in the sense that for any cornering $\mathbb{D}$ of $\mathbb{A}$ there is exactly one double functor ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}\to\mathbb{D}$ that sends corner cells to corner cells and restricts to the identity on $\mathbb{A}\cong\mathbf{V}\mathbb{D}$. That is, diagrams in ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ have a canonical interpretation in any cornering of $\mathbb{A}$. ###### Proposition 1 ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ is a cornering of $\mathbb{A}$. ###### Proof Intuitively $\mathbf{V}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}\cong\mathbb{A}$ because in a composite vertical cell every wire bent by a corner must eventually be un-bent by the matching corner, which by yanking is the identity. The only other generators are the cells ${\text{}^{\ulcorner}_{\llcorner}\\!{f}\\!_{\lrcorner}^{\urcorner}}$, and so any vertical cell in ${\text{}^{\ulcorner}_{\llcorner}\\!{A}\\!_{\lrcorner}^{\urcorner}}$ can be written as ${\text{}^{\ulcorner}_{\llcorner}\\!{g}\\!_{\lrcorner}^{\urcorner}}$ for some morphism $g$ of $\mathbb{A}$. A more rigorous treatment of corner cells can be found in [13], to the same effect. ∎ Before we properly explain our interest in ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ we develop a convenient bit of structure: _crossing cells_. 
For each $B$ of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}_{H}$ and each $X$ of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}_{V}$ we define a cell of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ inductively as follows: In the case where $X$ is $A^{\circ}$ or $A^{\bullet}$, define the crossing cell as in the diagrams below on the left and right, respectively: in the case where $X$ is $I$, define the crossing cell as in the diagram below on the left, and in the composite case define the crossing cell as in the diagram below on the right: We prove a technical lemma: ###### Lemma 1 For any cell $\alpha$ of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ we have ###### Proof By structural induction on cells of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$. For the $\circ$-corners we have: and for the $\bullet$-corners, similarly: The final base cases are the ${\text{}^{\ulcorner}_{\llcorner}\\!{f}\\!_{\lrcorner}^{\urcorner}}$ maps: There are two inductive cases. For vertical composition, we have: Horizontal composition is similarly straightforward, and the claim follows by induction.
∎ From this we obtain a “non-interaction” property of our crossing cells, similar to the naturality of braiding in symmetric monoidal categories: ###### Corollary 1 For cells $\alpha$ of $\mathbf{V}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ and $\beta$ of $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$, the following equation holds in ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$: These crossing cells greatly aid in the legibility of diagrams corresponding to cells in ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$, but also tell us something about the categorical structure of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$, namely that it is a monoidal double category in the sense of [25]: ###### Lemma 2 If $\mathbb{A}$ is a symmetric strict monoidal category then ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ is a monoidal double category. That is, ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ is a pseudo-monoid object in the strict 2-category $\mathbf{V}\mathsf{DblCat}$ of double categories, lax double functors, and vertical transformations. ###### Proof We give the action of the tensor functor on cells: This defines a pseudofunctor, with the component of the required vertical transformation given by exchanging the two middle wires as in: Notice that $\otimes$ is strictly associative and unital, in spite of being only pseudo-functorial. ∎ ## 5 Concurrency through Cornering We next proceed to extend the resource-theoretic interpretation of some symmetric strict monoidal category $\mathbb{A}$ to its free cornering ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$. We interpret elements of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}_{V}$ as _$\mathbb{A}$ -valued exchanges_.
Each exchange $X_{1}\otimes\cdots\otimes X_{n}$ involves a left participant and a right participant giving each other resources in sequence, with $A^{\circ}$ indicating that the left participant should give the right participant an instance of $A$, and $A^{\bullet}$ indicating the opposite. For example, say the left participant is Alice and the right participant is Bob. Then we can picture the exchange $A^{\circ}\otimes B^{\bullet}\otimes C^{\bullet}$ as: [Figure: Alice and Bob carrying out the exchange $A^{\circ}\otimes B^{\bullet}\otimes C^{\bullet}$] Think of these exchanges as happening _in order_. For example the exchange pictured above demands that first Alice gives Bob an instance of $A$, then Bob gives Alice an instance of $B$, and then finally Bob gives Alice an instance of $C$. We interpret cells of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ as _concurrent transformations_. Each cell describes a way to transform the collection of resources given by the top boundary into that given by the bottom boundary, via participating in $\mathbb{A}$-valued exchanges along the left and right boundaries.
For example, consider the following cells of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathfrak{B}}\\!_{\lrcorner}^{\urcorner}}$: From left to right, these describe: A procedure for transforming water into nothing by mixing it with flour obtained by exchange along the right boundary, then sending the resulting dough away along the right boundary; A procedure for transforming an oven into an oven, receiving flour along the right boundary and sending it out the left boundary, then receiving dough along the left boundary, which is baked in the oven, with the resulting bread sent out along the right boundary; Finally, a procedure for turning flour into bread by giving it away and then receiving bread along the left boundary. When we compose these concurrent transformations horizontally in the evident way, they give a transformation of resources in the usual sense, i.e., a morphism of $\mathbb{A}\cong\mathbf{V}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$: We understand equality of cells in ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ much as we understand equality of morphisms in a resource theory: two cells should be equal in case the transformations they describe would have the same effect on the resources involved. In this way, cells of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ allow us to break a transformation into many concurrent parts. Note that with the crossing cells, it is possible to exchange resources “across” cells. Consider the category $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ of horizontal cells. If the vertical cells $\mathbf{V}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ are concerned entirely with the transformation of resources, then our interpretation tells us that the horizontal cells are concerned entirely with exchange.
Just as isomorphic objects in $\mathbf{V}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}\cong\mathbb{A}$ can be thought of as equivalent collections of resources – being freely transformable into each other – we understand isomorphic objects in $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ as _equivalent exchanges_. For example, there are many ways for Alice to give Bob an $A$ and a $B$: simultaneously, as $A\otimes B$; one after the other, as $A$ and then $B$; or in the other order, as $B$ and then $A$. While these are different sequences of events, they achieve the same thing, and are thus equivalent. Similarly, for Alice to give Bob an instance of $I$ is equivalent to nobody doing anything. Formally, we have: ###### Lemma 3 In $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ we have for any $A,B$ of $\mathbb{A}$: 1. (i) $I^{\circ}\cong I\cong I^{\bullet}$. 2. (ii) $A^{\circ}\otimes B^{\circ}\cong B^{\circ}\otimes A^{\circ}$ and $A^{\bullet}\otimes B^{\bullet}\cong B^{\bullet}\otimes A^{\bullet}$. 3. (iii) $(A\otimes B)^{\circ}\cong A^{\circ}\otimes B^{\circ}$ and $(A\otimes B)^{\bullet}\cong A^{\bullet}\otimes B^{\bullet}$. ###### Proof 1. (i) For $I\cong I^{\circ}$, consider the $\circ$-corners corresponding to $I$: we know that these satisfy the yanking equations, which exhibit an isomorphism $I\cong I^{\circ}$. Similarly, $I\cong I^{\bullet}$. Thus, we see formally that exchanging nothing is the same as doing nothing. 2. (ii) The $\circ$-corner case is the interesting one: Define the components of our isomorphism to be: and then for both of the required composites we have: and so $A^{\circ}\otimes B^{\circ}\cong B^{\circ}\otimes A^{\circ}$. Similarly $A^{\bullet}\otimes B^{\bullet}\cong B^{\bullet}\otimes A^{\bullet}$. 
This captures formally the fact that if Alice is going to give Bob an $A$ and a $B$, it doesn’t really matter which order she does it in. 3. (iii) Here it is convenient to switch between depicting a single wire of sort $A\otimes B$ and two wires of sort $A$ and $B$ respectively in our string diagrams. To this end, we allow ourselves to depict the identity on $A\otimes B$ in multiple ways, using the notation of [4]: Then the components of our isomorphism $(A\otimes B)^{\circ}\cong A^{\circ}\otimes B^{\circ}$ are: and and, much as in (ii), it is easy to see that the two possible composites are both identity maps. Similarly, $(A\otimes B)^{\bullet}\cong(A^{\bullet}\otimes B^{\bullet})$. This captures formally the fact that giving away a collection is the same thing as giving away its components. ∎ For example, we should be able to compose the cells on the left and right below horizontally, since their right and left boundaries, respectively, indicate equivalent exchanges: Our lemma tells us that there will always be a canonical isomorphism, as above in the middle, making composition possible. It is worth noting that we _do not_ have $A^{\circ}\otimes B^{\bullet}\cong B^{\bullet}\otimes A^{\circ}$: ###### Observation 1 There is a morphism $d^{\circ}_{\bullet}:A^{\circ}\otimes B^{\bullet}\to B^{\bullet}\otimes A^{\circ}$ in one direction, defined using the crossing cells, but there need not be a morphism in the other direction, and $d^{\circ}_{\bullet}$ is not in general invertible. In particular, $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ is monoidal, but need not be symmetric. This observation reflects formally the intuition that if I receive some resources before I am required to send any, then I can send some of the resources that I receive. However, if I must send the resources first, this is not the case. In this way, $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ contains a sort of causal structure. 
## 6 Compact Closure and Accounting In this section we consider the case where our resource theory $\mathbb{A}$ is compact closed. From the standpoint of accounting, string diagrams in resource theories are like _ledgers_ , recording the material history of the resources they concern [21]. The _double-entry_ method of accounting insists that every change to a ledger consists of a matching credit (positive change) and debit (negative change) so that the system of accounts represented by the ledger remains _balanced_. This serves as a kind of integrity check: given a ledger we may attempt to _balance_ it by matching credits with debits and cancelling them out, with our ledger being balanced if and only if all entries may be cancelled in this way. While the credits and debits of double-entry accounting are usually positive and negative integers, the technique makes sense in the context of any compact closed resource theory. The unit arrows $\eta_{A}:I\to A\otimes A^{*}$ create matching credits and debits, and the cancellative process of balancing is performed via the counit arrows $\varepsilon_{A}:A^{*}\otimes A\to I$. Following the usual convention, we depict the unit $\eta_{A}$ as a cap and the counit $\varepsilon_{A}$ as a cup, and their defining equations are the usual yanking (snake) identities. For example, there is a well-known compact closed category $\mathbb{Z}$ whose set of objects is the group of differences construction of the integers[14], in which there is a morphism between two such objects if and only if the corresponding integers are equal. This category captures integer-valued double-entry accounting in the sense discussed above – extending its treatment in [8]. $\mathbb{A}$ being compact closed has consequences for the structure of ${\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$. To begin, we extend Lemma 3 with a formal version of the idea that Alice giving Bob $A^{*}$ ought to be equivalent to Bob giving Alice $A$. 
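To make the accounting reading concrete, the double-entry category $\mathbb{Z}$ can be sketched explicitly. The display below is our own illustration, assuming (as in the group-of-differences construction) that objects are identified with integers and the tensor is addition; it is not a construction taken from the cited references:

```latex
\[
  I = 0, \qquad n \otimes m = n + m, \qquad n^{*} = -n,
\]
\[
  \eta_{n} : 0 \to n \otimes (-n)
  \quad\text{(record a matching credit and debit)},
\]
\[
  \varepsilon_{n} : (-n) \otimes n \to 0
  \quad\text{(balance a debit against a credit)}.
\]
The snake equations
\[
  (1_{n} \otimes \varepsilon_{n}) \circ (\eta_{n} \otimes 1_{n}) = 1_{n},
  \qquad
  (\varepsilon_{n} \otimes 1_{-n}) \circ (1_{-n} \otimes \eta_{n}) = 1_{-n}
\]
then say precisely that recording a credit/debit pair and immediately cancelling it leaves the ledger unchanged.
```

Since all parallel morphisms in $\mathbb{Z}$ are equal, balancing a ledger in this category amounts to checking that the total of credits and debits is zero.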
###### Lemma 4 If $\mathbb{A}$ is compact closed, then in $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ we have $A^{\circ}\cong(A^{*})^{\bullet}$ and $A^{\bullet}\cong(A^{*})^{\circ}$. ###### Proof For $A^{\circ}\cong(A^{*})^{\bullet}$, the two halves of the isomorphism are given by: and then we have: and as required. $A^{\bullet}\cong(A^{*})^{\circ}$ similarly. ∎ Next, consider the way that debits allow us to violate causality in everyday life: by incurring a debit I can trade something away _before I have it_. This is also reflected in our development. For example, let us augment our resource theory $\mathfrak{B}$ of baking with an object $\texttt{\$1}$, which we will use to represent currency. We also require a dual object $\texttt{\$1}^{*}$ along with a cap $\eta_{\texttt{\$1}}$ and cup $\varepsilon_{\texttt{\$1}}$ satisfying the equations above. Consider the following cell of the free cornering of this augmented resource theory: Here our process must bake bread from flour and water, but has no flour, and must purchase it for $\texttt{\$1}$ along the left boundary. However it also has no money, so it incurs a debit (via the cap) in order to purchase the flour. This done, the bread is baked, and then sold along the right boundary in exchange for two instances of $\texttt{\$1}$, one of which is used to balance the debit (via the cup), with the other kept as profit. If we compose our cell with a suitable flour seller and bread buyer on the left and right respectively, for example, then we obtain the following material history: While in Observation 1 we saw that $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ is not symmetric in general, the presence of dual objects $A^{*}$ in $\mathbb{A}$ makes it so. This is enough to make $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ itself compact closed. 
###### Lemma 5 If $\mathbb{A}$ is compact closed, so is $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$. ###### Proof We show that $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ is both rigid and symmetric, beginning with the latter. The inverse to $d^{\circ}_{\bullet}$ of Observation 1 is: making $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ symmetric. Now, define the dual $X^{*}$ of an object $X$ inductively: $I^{*}=I$, $(A^{\circ})^{*}=A^{\bullet}$, $(A^{\bullet})^{*}=A^{\circ}$, and $(X\otimes Y)^{*}=X^{*}\otimes Y^{*}$. It suffices to give the unit and counit for the $A^{\circ}$ and $A^{\bullet}$ cases. In one of them we have: and which are easily seen to satisfy the required equations: and For the other, the unit and counit are given by and which again satisfy the required equations as in: and For $I$ both unit and counit are given by $1_{I}$, and the inductive case is as in: and ∎ In fact, if $\mathbb{A}$ is compact closed then $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ and $\mathbb{A}$ are equivalent as categories. We omit the proof of this, which is both rather technical and, we feel, not particularly enlightening. The main idea is that using Lemma 4 we can obtain from any arrow of $\mathbf{H}\,{\text{}^{\ulcorner}_{\llcorner}\\!{\mathbb{A}}\\!_{\lrcorner}^{\urcorner}}$ an equivalent one in which all exchanges flow from left to right ($\circ$ polarity), which is just a morphism of $\mathbb{A}$. ## 7 Conclusions and Future Work We have shown how to decompose the material history of a process into concurrent components by working in the free cornering of an appropriate resource theory. We have explored the structure of the free cornering in light of this interpretation and found that it captures formally many of our intuitions – in particular those concerning accounting and debt. 
We do not claim to have solved all problems in the modelling of concurrency, but we feel that our formalism captures the material aspect of concurrent systems very well. Thus, we have identified the algebraic structure of a fundamental aspect of concurrent systems. There is much future work, and we list a few directions that we feel are particularly promising. Our work is inspired by the message passing logic of [3], which has its categorical semantics in _linear actegories_. Any cornering defines an actegory – although not quite a _linear_ actegory – and we speculate that cornerings are equivalent to some class of actegories, which would connect our work to the literature on behavioural types. Another direction for future work is to connect our material histories to a theory of concurrent processes – the slugs to our slime – with the goal of a formalism accounting for both. The category of spans of reflexive graphs, interpreted as open transition systems, seems especially promising here[16]. Finally, our observations about accounting are in part motivated by an interest in smart contracts, which would benefit from a rigorous formal treatment[2]. ## References * [1] Samson Abramsky. What are the fundamental structures of concurrency? we still don’t know! CoRR, abs/1401.4973, 2014. * [2] N. Atzei, M. Bartolietti, and T. Cimoli. A survey of attacks on ethereum smart contracts. In POST 2017, volume 10204 of LNCS, pages 164–186, 2017. * [3] J.R.B. Cockett and C. Pastro. The logic of message-passing. Science of Computer Programming, 74:498–533, 2009. * [4] J.R.B. Cockett and R.A.G. Seely. Proof theory of the cut rule. In E. Landry, editor, Categories for the Working Philosopher, pages 223–261. Oxford University Press, 2017. * [5] B. Coecke, T. Fritz, and R.W. Spekkens. A mathematical theory of resources. Information and Computation, 250:59–86, 2016. * [6] Robert Dawson and Robert Paré. What is a free double category like? 
Journal of Pure and Applied Algebra, 168(1):19–34, 2002. * [7] Charles Ehresmann. Catégories structurées. Annales scientifiques de l’École Normale Supérieure, 80(4):349–426, 1963. * [8] D. Ellerman. The mathematics of double entry bookkeeping. 58:226–233. * [9] M. Fiore, S. Paoli, and D. Pronk. Model structures on the category of small double categories. Algebraic and Geometric Topology, 8(4):1855–1959, 2008. * [10] Brendan Fong and David I Spivak. Seven Sketches in Compositionality: An Invitation to Applied Category Theory. 2018. * [11] Marco Grandis and Robert Pare. Adjoint for double categories. Cahiers de Topologie et Géométrie Différentielle Catégoriques, 45(3):193–240, 2004. * [12] C. A. R. Hoare. Communicating sequential processes. Commun. ACM, 21(8):666–677, 1978. * [13] David Jaz Myers. String Diagrams For Double Categories and Equipments. arXiv e-prints, 2016. * [14] A. Joyal, R. Street, and D. Verity. Traced monoidal categories. Mathematical Proceedings of the Cambridge Philosophical Society, 119:447–468, 1996. * [15] André Joyal and Ross Street. The geometry of tensor calculus, I. Advances in Mathematics, 88(1):55–112, 1991. * [16] P. Katis, N. Sabadini, and R.F.C Walters. Span(graph): A categorical algebra of transition systems. In International Conference on Algebraic Methodology and Software Technology, pages 307–321. Springer, Berlin, Heidelberg, 1997. * [17] S. Mac Lane. Categories for the Working Mathematician. Springer, 1971. * [18] José Meseguer and Ugo Montanari. Petri nets are monoids. Information and Computation, 88(2):105–155, 1990. * [19] R. Milner. A calculus of communicating systems. volume 92 of Lecture Notes in Computer Science. Springer-Verlag, 1980. * [20] Robin Milner. Communicating and Mobile Systems: The Pi-Calculus. Cambridge University Press, USA, 1999. * [21] Chad Nester. A foundation for ledger systems (to appear). In International Conference on Blockchain Economics, Security and Protocols (Tokenomics 2020). 
Schloss Dagstuhl, 2020. * [22] C. A. Petri. Communication with automata. 1966. * [23] Peter Selinger. A survey of graphical languages for monoidal categories. In New Structures for Physics, pages 289–355. Springer, 2010. * [24] Michael Shulman. Framed bicategories and monoidal fibrations. Theory and Applications of Categories, 20(18):650–738, 2008. * [25] Michael A. Shulman. Constructing symmetric monoidal bicategories. arXiv e-prints, 2010. * [26] R. J. Wood. Abstract pro arrows I. Cahiers de Topologie et Géométrie Différentielle Catégoriques, 23(3):279–290, 1982.
# Spin-Phonon Interaction in Quasi-2D $Cr_{2}Te_{3}$ Gurupada Ghorai School of Physical Sciences, National Institute of Science Education and Research (NISER) Bhubaneswar, An OCC of Homi Bhabha National Institute, Jatni-752050, Odisha, India. Kalyan Ghosh School of Physical Sciences, National Institute of Science Education and Research (NISER) Bhubaneswar, An OCC of Homi Bhabha National Institute, Jatni-752050, Odisha, India. Abhilash Patra School of Physical Sciences, National Institute of Science Education and Research (NISER) Bhubaneswar, An OCC of Homi Bhabha National Institute, Jatni-752050, Odisha, India. Prasanjit Samal School of Physical Sciences, National Institute of Science Education and Research (NISER) Bhubaneswar, An OCC of Homi Bhabha National Institute, Jatni-752050, Odisha, India. Kartik Senapati School of Physical Sciences, National Institute of Science Education and Research, An OCC of Homi Bhabha National Institute, Jatni, Odisha - 752050, India. Center for Interdisciplinary Sciences (CIS), NISER Bhubaneswar, HBNI, Jatni-752050, Odisha, India. Pratap K. Sahoo<EMAIL_ADDRESS>School of Physical Sciences, National Institute of Science Education and Research, An OCC of Homi Bhabha National Institute, Jatni, Odisha - 752050, India. Center for Interdisciplinary Sciences (CIS), NISER Bhubaneswar, HBNI, Jatni-752050, Odisha, India. (August 28, 2024) ###### Abstract Spin-phonon interaction plays an important role in 2D magnetic materials and motivates the development of next-generation spin- and charge-dependent microelectronic devices. Understanding the spin-phonon interaction by tuning the growth parameters of single-crystal $Cr_{2}Te_{3}$, a robust quasi-2D room-temperature magnetic material, is crucial for spintronic devices. We synthesize single-crystal 2D $Cr_{2}Te_{3}$ flakes on a Si substrate from a co-deposited thin film by plasma annealing. 
Temperature-dependent and polarization-resolved Raman spectroscopy, supported by a density-functional-theory classification of the lattice symmetry operations, was used to identify the phonon modes and to investigate the spin/electron-phonon interactions in $Cr_{2}Te_{3}$. A mean-field model for single-crystal $Cr_{2}Te_{3}$ is employed to quantify the spin-phonon interaction and correlate it with the in-plane and out-of-plane magnetic behavior. The observed positive correlation between phonon-mode frequency and spin-phonon interaction strength makes single-crystal $Cr_{2}Te_{3}$ a potential candidate for spintronic applications. Keywords: 2D-$Cr_{2}Te_{3}$, TEM, SQUID, Raman spectroscopy, DFT. ## I INTRODUCTION The thermal transport and spin properties of 2D magnetic materials are crucial for spintronic devices[1]. Spin-phonon, electron-phonon, and phonon-phonon interactions are all relevant to high-performance spin-based thermal-management devices[2, 3], which motivates establishing the influence of the spin-phonon interaction on the physical phenomena of 2D magnets. Strong spin-phonon interaction in 2D $CrSiTe_{3}$ was inferred from temperature-dependent infrared spectroscopy, with the superexchange mechanism influencing the Si-Te stretching and Te displacement in temperature-dependent frequency studies[4, 5]. An unconventional spin-phonon interaction via the Dzyaloshinskii–Moriya interaction (DMI) in $Y_{2}Ir_{2}O_{7}$ crystals was also reported from temperature-dependent infrared spectroscopy[6]. The spin-phonon interaction in bulk crystals of yttrium iron garnet (YIG), with enhanced interaction strength at higher phonon frequency, was investigated in [7]. Understanding spin-phonon coupling is a prerequisite for determining how spin affects phonon thermal transport. To address these coupling issues, we explore the spin-phonon interaction of $Cr_{2}Te_{3}$. 2D magnetic order was first discovered in chromium trihalide materials[8]. 
2D magnetic materials are promising for spintronics, utilizing both charge and spin for applications ranging from molecular quantum devices and sensing to ultrathin high-density data storage devices[9, 10]. In a similar line, the intrinsic layer-dependent 2D ferromagnetic properties of CrGe2 were also investigated [11]. The structural properties from scanning tunneling microscopy, and the bandgap and electronic properties from DFT calculations, of $CrGeTe_{3}$ were reported in ref[12]. Theoretically, it has been suggested that layered materials like $CrXTe_{3}(X=Si,Ge,Sn)$ show 2D ferromagnetic or antiferromagnetic behavior depending on their exchange or superexchange interaction strength[13, 14]. It has been proposed that $Cr_{2}Te_{3}$ has a Curie temperature tunable up to room temperature, making it suitable for magnetic memory device applications[15]. So, for magnetism-based applications of $Cr_{2}Te_{3}$, there should be clear evidence of the magnetic properties of the material, which are influenced by defect spins in the lattice. It has been demonstrated that 2D $Cr_{2}Te_{3}$ shows ferrimagnetic or ferromagnetic properties depending on the strength of the nearest-neighbor exchange interaction[16]. The (001)-oriented crystal grown at a temperature of 400∘C at a rate of 1.7 nm/min on Al${}_{2}O_{3}$(0001) using molecular beam epitaxy (MBE) reveals the local electronic and magnetic structure of the Cr ions, as demonstrated at the Cr L2,3 edges using X-ray magnetic circular dichroism (XMCD) and soft X-ray absorption spectroscopy[17]. However, the magnetic structure of $Cr_{2}Te_{3}$ is not yet fully understood because of the complicated coupling of the spins at three distinct Cr sites. The Curie temperature and magnetic properties vary with the various defect sites in 2D magnetic materials. 
To understand the spin-phonon interaction, we have synthesized single-crystal $Cr_{2}Te_{3}$, a quasi-2D magnetic material, from a co-deposited Cr and Te thin film followed by thermal annealing. The $A_{g}$ and three $E_{g}$ phonon modes were analyzed from the temperature-dependent Raman study. The different vibrational modes and frequencies were obtained using DFT calculations and compared with the experimental results. The anharmonicity of the Raman modes can be well fitted with Balkanski’s model above the critical temperature. The ferromagnetic transition temperature matched well with the onset of Raman anharmonicity, which can be explained in terms of spin-phonon interaction. The results can be understood in terms of magnon-phonon coupling. ## II Experimental method Commercially available high-purity (99.999$\%$, Sigma Aldrich) metal pieces of chromium (Cr) and tellurium (Te) were co-evaporated, using e-beam and thermal evaporation techniques respectively, at a base vacuum of 2$\times 10^{-7}$ mbar to produce thin films of Cr${}_{x}$Te${}_{y}$ on clean Si substrates. The deposition rates of Cr and Te were 0.1 Å/s and 0.4 Å/s, respectively, and the film thickness of 40 nm was measured by a quartz-crystal thickness monitor attached to the deposition chamber. The co-deposited samples were annealed at 500∘C inside the quartz tube of a plasma-enhanced chemical vapor deposition (PECVD) system at 120 W plasma power in a forming-gas ambient for two hours. The surface micrographs were collected using a field-emission scanning electron microscopy (FESEM) system (SIGMA ZEISS), and the elemental mapping and composition percentages were estimated using an Apex energy-dispersive spectroscopy (EDS) unit attached to the FESEM. 
The XRD patterns of the $Cr_{2}Te_{3}$ crystalline phases were obtained using a Rigaku diffractometer with $CuK_{\alpha}$ radiation ($\lambda=1.54056$ Å) at room temperature in a grazing-incidence X-ray diffraction (GIXRD) geometry at a grazing angle of 1.0 degree with a step size of 0.02∘. The local crystalline structure of the 2D-$Cr_{2}Te_{3}$ was verified using a high-resolution transmission electron microscope (HRTEM) with a 200 kV electron source. High-resolution confocal Raman scattering with a spatial resolution of 2-4 microns was used to study the vibrational modes of $Cr_{2}Te_{3}$. The temperature dependence of the Raman spectrum of the $Cr_{2}Te_{3}$ flakes was measured using a Horiba Raman microscope equipped with a Linkam cooling stage. The specimens were mounted on a non-background sample holder fixed to a cold head in vacuum, with a constant liquid-nitrogen flow to cool the chamber down to 80 K. A half-wave plate in the incident light path was used to study the polarization-dependent Raman spectra. The Raman peak positions and modes of vibration were also obtained theoretically using density functional theory (DFT). A Quantum Design superconducting quantum interference device (SQUID) magnetometer was used to investigate the magnetic properties of $Cr_{2}Te_{3}$ in the temperature range of 5-250 K. ## III Results and discussions The GIXRD measurement at a grazing angle of 1∘ was used to obtain the crystalline properties and growth orientation of the $Cr_{2}Te_{3}$. The GIXRD spectra of the pristine sample and of the 120 W plasma, 500∘C annealed sample are shown in figure 1(a). 
The pristine sample shows a weak XRD peak at 27.56∘, corresponding to the (101) lattice plane of Te (JCPDS # 36-1452)[18, 19]. The 120 W plasma-annealed sample shows three XRD peaks at 2$\theta$ values of 14.54∘, 29.68∘, and 44.52∘, corresponding to the (002), (004), and (006) lattice planes, respectively, of crystalline $Cr_{2}Te_{3}$. These lattice planes correspond to the trigonal family of $Cr_{2}Te_{3}$, with space group 163 $(P\bar{3}1c)$ and lattice parameters a=6.78 Å, b=6.78 Å, c=12.06 Å, $\alpha$=90∘, $\beta$=90∘ and $\gamma$=120∘ (PDF#71-2245), which match well with earlier reports[20, 17]. Figures 1(b) and (c) show the FESEM surface morphology of the pristine and 120 W annealed samples. The pristine sample shows a very smooth surface. After annealing the pristine film at 500∘C in 120 W plasma for 2 hours, the CrTe thin film grew into individual flakes with clear grain boundaries, as shown in figure 1(c), which are 2D $Cr_{2}Te_{3}$. To further clarify the crystalline growth of quasi-2D $Cr_{2}Te_{3}$, we prepared a lamella from a single flake for the TEM study. The lamella was prepared in a focused ion beam (FIB) system using a 30 keV Ga beam, followed by in situ polishing with low-energy Ar ions. The lamella provides the cross-sectional view of the 40 nm thin flake shown in figure 1(d), which is essentially along the c-axis of the 2D-$Cr_{2}Te_{3}$ crystalline plane. The high-resolution TEM image with lattice arrangements and the corresponding selected-area electron diffraction (SAED) pattern of 2D-$Cr_{2}Te_{3}$ are shown in figure 1(e). The yellow parallel lines mark the (001) planes of the $Cr_{2}Te_{3}$ with an interplanar separation d001 of 3.03 Å. The schematic image along the (010) zone axis and the lattice spacing d001 are shown in figure 1(f). The XRD and HRTEM images strongly suggest crystalline growth of quasi-2D $Cr_{2}Te_{3}$ along the (001) plane. 
Figure 1: (a) GIXRD data of the pristine and 120 W plasma power samples. (b) Schematic illustration of the crystal structure of $Cr_{2}Te_{3}$. (c) FESEM image of the co-deposited films. (d) Surface morphology image of the $Cr_{2}Te_{3}$ sample after annealing at $500^{\circ}$C in the presence of 120 W plasma power. ### III.1 Raman spectra of $Cr_{2}Te_{3}$ from density functional theory Raman spectroscopy is a versatile tool for understanding electron-phonon or spin-phonon interactions in 2D materials. Depending on the crystal orientation, vibrational modes can be suppressed or enhanced, which may affect the spin-phonon interaction. The observation of Raman modes in $Cr_{2}Te_{3}$ strongly depends on the crystalline quality and synthesis process [xx-xx]. In order to identify the different Raman modes of $Cr_{2}Te_{3}$, we used density functional theory[21, 22] with the Vienna Ab-initio Simulation Package (VASP)[23, 24, 25, 26]. We used the recommended Perdew-Burke-Ernzerhof[27] (PBE) PAW pseudopotentials in all calculations, with a plane-wave basis cutoff of 500 eV and EDIFF = $1.0\times 10^{-8}$ eV as the energy convergence criterion. For the Brillouin-zone integration, a $\Gamma$-centered ($12\times 12\times 6$) k-point sampling was used. The Hubbard-correction method within semilocal DFT[28], i.e., GGA(PBE)+U, was used for both the structure optimization and the study of the Raman spectra. A value of U=3.0 eV[29] for the Cr atoms was set to take into account the orbital localization. The unit cell of $Cr_{2}Te_{3}$ with the $P\bar{3}1c$ space group was optimized with the above setup, yielding lattice constants a=b=6.99 Å and c=12.65 Å. To get the Raman-active modes, the dielectric matrix was calculated for each mode using density functional perturbation theory (DFPT) as implemented in the VASP package. The intensity of each mode was calculated following Ref. [30], using the raman-sc[31] python package. 
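The DFPT step above yields a discrete set of mode frequencies and intensities; turning these into a continuous spectrum is then a matter of Lorentzian broadening. The sketch below is our own minimal illustration of that broadening step (it is not the raman-sc pipeline), and the equal mode intensities are placeholder assumptions, not the DFPT-derived values:

```python
import numpy as np

def lorentzian_spectrum(freqs_cm, intensities, gamma=5.0, grid=None):
    """Broaden discrete Raman modes into a continuous spectrum by summing
    Lorentzians of half-width `gamma` (cm^-1) centered at each mode."""
    if grid is None:
        grid = np.linspace(0.0, 250.0, 2501)  # wavenumber axis, cm^-1
    spec = np.zeros_like(grid)
    for w0, I0 in zip(freqs_cm, intensities):
        spec += I0 * (gamma / np.pi) / ((grid - w0) ** 2 + gamma ** 2)
    return grid, spec

# DFT mode frequencies quoted in the text; equal weights are placeholders.
modes = [40.0, 73.0, 102.0, 123.0, 144.0, 190.0]
grid, spec = lorentzian_spectrum(modes, [1.0] * len(modes))
# The broadened spectrum is maximal at a mode position:
print(grid[np.argmax(spec)])
```

With a broadening width of 5 cm$^{-1}$, well below the smallest mode spacing here, the six peaks remain clearly resolved, as in the calculated spectrum.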
The calculated Raman spectrum of $Cr_{2}Te_{3}$ is shown in figure 2(a). A Lorentzian broadening width of 5 cm$^{-1}$ is used to match the experimental spectra. Six dominant peaks are found at 40, 73, 102, 123, 144, and 190 cm$^{-1}$. For comparison, the experimental Raman peaks are shown in figure 2(b). The peaks at 102, 123, and 144 cm$^{-1}$ are close to the experimental peaks at 106, 128, and 146 cm$^{-1}$. Two modes, at 40 and 190 cm$^{-1}$, are not observed in the experimental Raman spectra. The Raman modes obtained from the experiment and from the DFT calculation are tabulated in Table 1. Figure 2: (a) Polarization-dependent Raman spectra at a temperature of 150 K, where the blue curve is the unpolarized (XX) condition and the red curve the polarized (YX) Raman spectrum. (b) The theoretical DFT calculation of the Raman spectrum of the $Cr_{2}Te_{3}$ single crystal. The atomic displacements of the Raman-active modes (the color code of the atoms is the same as in Fig 2(a)): the out-of-plane $A_{g}$ vibrational modes are depicted in the left panel (top view and side view), while the in-plane $E_{g}$ vibrational modes are on the right (top view only). It should be noted that $Cr_{2}Te_{3}$ belongs to the point group D3d(-3m). Based on the character table of this point group, the deduced Raman-active modes for $Cr_{2}Te_{3}$ are 4A1g+9Eg. The Raman-active mode intensity is related to the Raman tensor and the directions of the incident and scattered light as: $I_{m}^{R}=\frac{d\sigma}{d\Omega}\propto\sum_{m}|\vec{e}_{s}\cdot Rt_{m}\cdot\vec{e}_{i}|^{2}$, where $Rt_{m}$ is the Raman tensor of the $m$-th active mode and $\vec{e}_{s}$ ($\vec{e}_{i}$) is the polarization vector of the scattered (incoming) light[32, 33]. 
The Raman tensors corresponding to the active Raman modes A1g and Eg are as follows, A${}_{1g}(D{{}_{3d}}(-3m)):$ $\begin{pmatrix}a&0&0\\\ 0&a&0\\\ 0&0&b\end{pmatrix}$ E${}_{g}(D{{}_{3d}}(-3m)):\begin{pmatrix}c&0&0\\\ 0&-c&d\\\ 0&d&0\end{pmatrix},\begin{pmatrix}0&-c&-d\\\ -c&0&0\\\ -d&0&0\end{pmatrix}.$ We use the notation XX and YX to represent the unpolarized and polarized scattering configurations, respectively. In the unpolarized configuration XX, both the incoming and scattered polarization vectors are $\vec{e_{i}}$=(1, 0, 0)=$\vec{e_{s}}$, and for YX, $\vec{e_{i}}$=(1, 0, 0), $\vec{e_{s}}=(0,1,0)$. So, under the unpolarized configuration XX, I(A1g) = $a^{2}$ and I(Eg) = $c^{2}$. Under the polarized condition YX, I(A1g) = 0 and I(Eg) = $c^{2}$. So, in the cross-polarization condition, the intensity of the A1g mode vanishes while the Eg peaks remain unchanged. Table 1: The experimental and theoretical frequencies (in cm$^{-1}$) and vibrational modes of strain-free $Cr_{2}Te_{3}$. The ‘$||$’ and ‘$\perp$’ represent the parallel and cross polarization conditions, respectively. The ‘$-$’ represents peaks that were not clearly observed. mode | $A_{g}$ | $E_{g}$ | $A_{g}$ | $E_{g}$ | $E_{g}$ | $A_{g}$ ---|---|---|---|---|---|--- EXPT ‘$||$’ | - | 96 | 106 | 128 | 146 | - EXPT ‘$\perp$’ | - | 96 | - | 128 | 146 | - Theory | 40 | 73 | 102 | 123 | 144 | 190 The Raman-active modes of $Cr_{2}Te_{3}$ belong to the trigonal lattice structure with space group 163[34]. For this space group, two types of active Raman vibrational modes are observed, i.e., the non-degenerate $A_{g}$ and the doubly degenerate $E_{g}$[34]. The $A_{g}$ mode is an out-of-plane lattice vibration and $E_{g}$ is an in-plane atomic vibration of the lattice. We studied the polarization-resolved Raman spectra at low temperature (80 K) to best match the theoretical Raman modes. 
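The selection-rule computation above is easy to check numerically. The following sketch (our own illustration, with placeholder values for the tensor constants $a$, $b$, $c$, $d$) evaluates $I\propto|\vec{e}_{s}\cdot Rt_{m}\cdot\vec{e}_{i}|^{2}$ for both configurations and reproduces the vanishing of the A1g intensity under YX:

```python
import numpy as np

# D3d Raman tensors from the text, with placeholder constants.
a, b, c, d = 1.0, 2.0, 0.7, 0.3
A1g = np.array([[a, 0, 0], [0, a, 0], [0, 0, b]])
Eg1 = np.array([[c, 0, 0], [0, -c, d], [0, d, 0]])
Eg2 = np.array([[0, -c, -d], [-c, 0, 0], [-d, 0, 0]])

def intensity(R, e_s, e_i):
    """I proportional to |e_s . R . e_i|^2 for one Raman tensor."""
    return abs(e_s @ R @ e_i) ** 2

x, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

# Unpolarized (XX): e_i = e_s = x
print(intensity(A1g, x, x))                          # a^2
print(intensity(Eg1, x, x) + intensity(Eg2, x, x))   # c^2
# Cross polarized (YX): e_i = x, e_s = y -> A1g vanishes, Eg survives
print(intensity(A1g, y, x))                          # 0
print(intensity(Eg1, y, x) + intensity(Eg2, y, x))   # c^2
```

Whatever the values of the constants, the A1g intensity is exactly zero in the YX geometry while the summed Eg intensity stays at $c^{2}$, which is the signature used in the text to separate the mode symmetries.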
Compared with the theoretical unpolarized configuration XX, where I(A1g) = $a^{2}$ and I(Eg) = $c^{2}$, the experimental spectra show four Raman peaks at 96 cm$^{-1}$, 106 cm$^{-1}$, 128 cm$^{-1}$ and 146 cm$^{-1}$. For the theoretical polarized condition YX, where I(A1g) = 0 and I(Eg) = $c^{2}$, the experimental spectra show three Raman peaks at 96 cm$^{-1}$, 128 cm$^{-1}$ and 146 cm$^{-1}$. This observation clearly confirms that the $A_{g}$ peak is suppressed under cross polarization and that the remaining peaks correspond to $E_{g}$ Raman modes. This procedure identifies the modes of vibration in the quasi-2D $Cr_{2}Te_{3}$ material. The schematic configurations of the atomic vibrations in the lattice of $Cr_{2}Te_{3}$ are shown in figure 2(c, d, e, f, g, h). The directions of vibration are shown by red arrows, and the Cr and Te atoms are shown in magenta and yellow, respectively. Figure 2(c, d, e) shows the three configurations A1g, A2g, A3g for out-of-plane vibration along the c-axis, in which the light interacts ‘$||$’ to the lattice plane. Figure 2(f, g, h) represents the three configurations E1g, E2g, E3g for in-plane vibration in the XY plane, in which the polarized light interacts ‘$\perp$’ to the lattice plane. ### III.2 Temperature-dependent Raman study The interaction of phonons with spins and magnons can be understood from temperature-dependent Raman spectroscopy. The Raman spectra of soft modes in a ferromagnetic material can be interpreted from their over-damped line shape and their temperature dependence near the transition temperature ($T_{N}$). Raman spectroscopy has been shown to be a unique tool to probe spin-phonon coupling, which is strongly influenced by elementary excitations in multiferroic materials [7]. For a better understanding of the phonon modes in $Cr_{2}Te_{3}$, systematic Raman spectra were collected in the temperature range of 80-300 K and their evolution was analyzed using multiple Lorentzian functions. 
The temperature-dependent Raman spectra of Cr2Te3 (sample synthesized at 120 W) are shown in Fig. 3(a). Two distinct peaks at 125 cm-1 and 144 cm-1, corresponding to $E_{2g}$ and $E_{3g}$, respectively, are observed with peak intensities varying with temperature. The 2D contour plot in Fig. 3(b) shows the distribution of Raman peak intensities within the frequency range of 80-165 cm-1. One can clearly perceive that both $E_{g}$ mode intensities are maximum within the temperature range of 100-165 K and are reduced at higher temperatures. The evolution of parameters such as peak position, full width at half maximum (FWHM), and integrated intensity as a function of temperature was evaluated to understand the strength of the spin-phonon interaction and the phonon dynamics of the system.

Figure 3: (a) Temperature-dependent Raman spectrum of the 120 W plasma power sample in the temperature range of 82 K to 273 K. (b) 2D contour in the range of 80 to 270 cm-1.

Figure 4: Magnetization measurements. (a) M-H loop measured parallel to the ab plane of the 120 W plasma sample. (b) M-H loop parallel to the c axis of the 120 W plasma sample. (c) M-T measurement with an applied 300 Oe field parallel and perpendicular to the ab plane of the 120 W plasma sample. (d) Exchange bias field for the above configurations.

### III.3 Magnetization study:

In a magnetically ordered system, both the Raman line shift and the magnetic moments deviate at similar temperatures due to strong spin-phonon interaction. To correlate these, the magnetic properties of the material (hysteresis loop, zero field cooled (ZFC), and field cooled (FC) measurements) were measured with the external magnetic field applied both perpendicular and parallel to the sample plane. In Figure 4(a), the magnetic moments (ZFC and FC) are plotted as a function of temperature for the 120 W plasma power sample.
The brown curve represents the case of a 300 Oe field applied perpendicular to the ab plane, while the pink curve represents the case of a field applied parallel to the ab plane. The inset of Figure 4 shows the first-order derivative of M-T, from which the TC is determined to be $\approx 162~K$. However, the ZFC curve for the parallel external magnetic field of the 120 W plasma power samples shows an increase in the magnetic moment at the transition temperature. This suggests the presence of a well-defined paramagnetic to AFM phase transition. In the case of $Cr_{2}Te_{3}$, the magnetic behavior is complex due to the coexistence of strong FM and weak AFM interactions within the system. The system exhibits spin frustration at lower temperatures, leading to the emergence of a re-entrant spin glass phase. Previous studies by Roy et al. and Hashimoto et al. have reported the presence of both FM and AFM interactions in $Cr_{2}Te_{3}$, with the stronger FM interactions dominating above the TC (165 K)[35]. The existence of a more complex magnetic order below the TC has also been predicted. The magnetic field-dependent magnetization (M-H) measurement of the 120 W plasma power sample was performed to confirm the FM behavior and perpendicular anisotropy. The M-H loops are shown in Figure 4(b) and (c). Figure 4(d) shows the exchange bias field extracted from the M-H loops as a function of temperature. The exchange field decreases with increasing temperature, indicating the presence of FM and AFM coupling in the sample. The observation of exchange bias in $Cr_{2}Te_{3}$ coupled with the ferromagnet CdTe suggests the presence of an AFM component in $Cr_{2}Te_{3}$, likely arising from c-axis Cr atom defects in the lattice[36]. The existence of either ferromagnetic or ferrimagnetic properties in $Cr_{2}Te_{3}$ depends on the nearest-neighbor spin coupling energy of the Cr sites [16]. It has been reported that the ferromagnetic order enhances the perpendicular magnetic anisotropy.
Depending on the spin orientations induced by the applied magnetic field, the net magnetization in the two directions is different. So, the magnetization will depend on the direction of the applied magnetic field with respect to the c-axis of $Cr_{2}Te_{3}$. The unit cell of $Cr_{2}Te_{3}$ contains alternating layers of Cr and Te atoms and has CrI vacancies in every second metal layer. The CrI atoms in the partially occupied layers are sandwiched between the CrIII layers and have no nearby neighbors in the a-b plane, whereas the CrII and CrIII atoms make up the fully occupied metal layers. The crystal growth resembles a hexagonal lattice in the (001) plane, indicating high crystalline quality, as observed from the XRD measurements.

### III.4 Spin-phonon interaction in $Cr_{2}Te_{3}$

The Raman shifts of the most prominent peaks, $E_{2g}$ (125 cm-1) and $E_{3g}$ (144 cm-1), as a function of temperature are shown in Figure 5(a) and (b). The Balkanski model fits the data well, indicating the presence of strong electron-phonon interactions. The FWHM and integrated intensity of the $E_{2g}$ (125 cm-1) and $E_{3g}$ (144 cm-1) peaks were studied as a function of temperature, as depicted in Figure 5(c) and (d). Anomalous behavior near the transition temperature of 166 K indicates that all Raman peaks correspond to the quasi-2D $Cr_{2}Te_{3}$ film. Magnetization measurements of $Cr_{2}Te_{3}$ confirm the presence of a strong ferromagnetic component with a weak antiferromagnetic component. The temperature-dependent behavior of the magnetic moment suggests the induction of a magnetic moment below the transition temperature, owing to the influence of nearest-neighbor spin-spin interactions.
The excellent agreement between the temperature-dependent Raman spectroscopy and magnetization measurements, and the theoretical calculations, provides a comprehensive understanding of the spin-phonon interaction in $Cr_{2}Te_{3}$, which is crucial for potential applications in spintronics and contributes to the understanding of the material's behavior and properties.

Figure 5: Temperature-dependent Raman spectra. (a) The Raman peak position of the E2g (125 cm-1) and (b) E3g (145 cm-1) modes of vibration with temperature, fitted with equation (1) to show the anharmonicity. (c) FWHM as a function of temperature for the two most prominent peaks, the E2g and E3g modes, respectively. (d) Integrated intensity of the Raman peaks as a function of temperature for the same two peaks. The extra Raman peak shift below the transition temperature for both peaks, (e) E2g and (f) E3g, fitted with equation (4).

In the absence of spin order above the transition point, the temperature-induced phonon position shift follows a three-phonon scattering model that takes into account the anharmonicity contributions leading to softening or hardening of some Raman modes. The occupation probabilities of the phonons are responsible for the temperature dependence of the Raman shift and the linewidth broadening. In the phonon decay process, the optical phonon at the $\Gamma$ point with energy $\hbar\omega_{c2}(0)$ decays into two acoustic phonons from the same branch, keeping both energy and momentum conserved. So each decay phonon has energy $\hbar\omega_{c2}(0)/2$ with equal and opposite momentum.
This process can be expressed through the temperature-dependent Raman shift of the optical phonon frequency $\omega_{c2}(T)$ as follows (Klemens's model): $\omega_{c2}(T)=\omega_{0}-C\left(1+\frac{2}{\exp\left(\frac{\hbar\omega_{0}}{2k_{B}T}\right)-1}\right)$ (1) Here, $\omega_{0}$ represents the phonon frequency at 0 K, $\hbar$ is the reduced Planck constant, $k_{B}$ is the Boltzmann constant, T is the temperature, and C is the three-phonon interaction constant. The first term represents the initial phonon frequency, while the second term accounts for anharmonic contributions due to phonon decay; these can be quantified through the fitting parameters C and $\omega_{0}$. The temperature-dependent Raman shift is fitted well by Klemens's model down to 162 K. The anomaly below 162 K does not follow the three-phonon decay process. According to Klemens's model, the decayed acoustic phonons should have a constant velocity below the critical temperature, depending on the thermal conductivity of the material. However, the upturn behavior indicates an increase in the phonon velocity. On further decreasing the temperature below 162 K, the Raman shift deviates from the expected anharmonic tendency, indicating that an additional scattering mechanism is involved. Such a mechanism can be attributed to the coupling of the lattice to spin fluctuations, i.e., spin-phonon interaction. Revisiting the M-T data of $Cr_{2}Te_{3}$, the paramagnetic to FM transition occurs at 162 K, indicating the role of spins below the TC of 162 K. Fennie et al. suggested that in an FM material the spins couple with optical phonons below the transition temperature [37].
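The high-temperature anharmonic fit of equation (1) can be sketched with a standard least-squares routine. The data below are synthetic placeholders, not the measured peak positions; the decay channel assumed is the one described in the text, two acoustic phonons each carrying $\hbar\omega_{0}/2$:

```python
import numpy as np
from scipy.optimize import curve_fit

HC_OVER_KB = 1.43878  # hc/k_B in cm*K, so (omega in cm^-1)*HC_OVER_KB/T is dimensionless

def klemens(T, omega0, C):
    """Eq. (1): three-phonon (Klemens) anharmonic shift; the optical phonon
    decays into two acoustic phonons, each with energy hbar*omega0/2."""
    x = HC_OVER_KB * omega0 / (2.0 * T)
    return omega0 - C * (1.0 + 2.0 / (np.exp(x) - 1.0))

# Synthetic "measured" shifts above TC; a real analysis would restrict the
# fit to T > ~162 K, where the spin-phonon anomaly is absent.
rng = np.random.default_rng(0)
T = np.linspace(170.0, 300.0, 14)
omega_obs = klemens(T, 126.0, 0.8) + rng.normal(0.0, 0.02, T.size)

popt, pcov = curve_fit(klemens, T, omega_obs, p0=(125.0, 1.0))
omega0_fit, C_fit = popt
print(omega0_fit, C_fit)  # recovers roughly 126 cm^-1 and 0.8 cm^-1
```

In the actual analysis, the fitted curve is then extrapolated below TC, and the residual between the data and the extrapolation gives the spin-phonon deviation discussed next.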
The observed optical phonon frequency $\omega^{\prime}$ can be considered as the sum of the anharmonic contribution and an extra term due to the spin-phonon interaction ($\lambda_{sp}$), which can be written as: $\omega^{\prime}=\omega_{c2}(T)+\Delta{\omega_{sp}}$ (2) where $\omega_{c2}(T)$ is the Raman frequency shift from the Klemens model and $\Delta\omega_{sp}$ is the deviation from the Klemens model fitting due to the spin-phonon interaction. Finally, the spin-phonon coupling parameter $\lambda_{sp}$ for $Cr_{2}Te_{3}$ can be calculated using this method. This effect is similar to that observed in other magnetic materials and is associated with phonon renormalization induced by magnetic ordering, due to the coupling between the magnetic order and the crystal lattice. Considering a nearest-neighbor (N-N) Heisenberg spin system, the magnetic energy of the FM system can be correlated with the frequency shift of the lattice vibrations by introducing a proportionality constant, the spin-phonon coupling parameter $\lambda_{sp}$, which depends on the derivatives of the exchange constants with respect to the coordinates of the magnetic ions. In the absence of magnetostriction effects and electronic state renormalization, the spin-phonon contribution to the $k^{th}$ phonon is approximately given by[7, 38]: $(\Delta{\omega_{k}})_{sp}=-\sum_{i,j}\lambda_{sp}\langle S_{i}\cdot S_{j}\rangle$ (3) where $\langle S_{i}\cdot S_{j}\rangle$ is the spin correlation function.
This contribution, considering only first-neighbor interactions and a molecular field approximation, can be written for the lattice vibration as $(\Delta{\omega_{p}})=-\lambda_{sp}S^{2}\phi(T)$ (4) where $\Delta\omega_{p}$ is the deviation of the Raman frequency, $\lambda_{sp}$ is the spin-phonon coupling coefficient, $\langle S_{i}\cdot S_{i+1}\rangle$ denotes the average over adjacent spins (where S can be taken as 3/2 for the Cr spins), and $\phi(T)$ is the order parameter given by $\phi(T)=1-(T/T_{\theta})^{\gamma}$, where $T_{\theta}$ is the transition temperature. The temperature-dependent function $\phi(T)$ is the normalized order parameter, which is proportional to the square of the average magnetization M(T).

Figure 6: (a) Above the TC, the normal lattice vibrations are observed under laser illumination. (b) Below the transition temperature, the lattice vibrations perturb the exchange interaction.

The quasi-2D ferromagnetism in $Cr_{2}Te_{3}$ is intriguing due to its high spin-phonon interaction, especially above its paramagnetic region. Klemens's model fits the temperature-dependent Raman frequency well down to TC $\approx$ 162 K, with anomalies below that, as shown in Figure 5(a) and (b), indicating anharmonicity and spin-phonon interactions. Figure 5(e) and (f) are fitted with equations (3) and (4) in the temperature range of 80 – 160 K. The spin-phonon interaction parameter ($\lambda_{sp}$) calculated from the fitting is found to be 0.47$\pm$0.03 cm-1. This value confirms the presence of a strong spin-phonon interaction, which is responsible for the large phonon dispersion. Below the transition temperature, the order parameter is well fitted, and the $\gamma$ value is determined to be 2.61$\pm$0.04. The value of $\lambda_{sp}$ is comparable with the literature, and the collinear spin arrangement describes the magnetostructural transition below TC.
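The low-temperature fit of equation (4) can be sketched the same way. The deviation data here are synthetic stand-ins for the values plotted in Figure 5(e, f), the sign of the deviation is absorbed into the fitted coefficient, and $T_{\theta}$ is fixed at the observed transition temperature:

```python
import numpy as np
from scipy.optimize import curve_fit

S = 1.5  # Cr spin, S = 3/2 as stated in the text

def delta_omega_sp(T, lam, gamma, T_theta=162.0):
    """Eq. (4) with phi(T) = 1 - (T/T_theta)^gamma; T_theta is held fixed
    so only lam and gamma are fitted (sign absorbed into lam)."""
    phi = 1.0 - (T / T_theta) ** gamma
    return lam * S**2 * phi

# Synthetic deviation data in the fitted range 80-160 K (placeholders).
rng = np.random.default_rng(1)
T = np.linspace(80.0, 160.0, 17)
dev_obs = delta_omega_sp(T, 0.47, 2.61) + rng.normal(0.0, 0.01, T.size)

popt, _ = curve_fit(delta_omega_sp, T, dev_obs, p0=(0.5, 2.0))
lam_fit, gamma_fit = popt
print(lam_fit, gamma_fit)  # recovers roughly 0.47 cm^-1 and 2.6
```

Because curve_fit is given a two-element p0, the default T_theta=162.0 is kept fixed during the fit; gamma is constrained by the curvature of the deviation, and lam by its overall scale.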
The temperature-dependent Raman results show that the anomalies below TC are the signature of spin-phonon interaction arising from the paramagnetic to FM transition. The larger spin-phonon interaction implies that atoms with stronger bonds have a greater influence on the magnetic ordering with collinear spin states. We attribute this to the structural fluctuation below TC rearranging the spins and tending to frustrate spin states with larger magnetization. A schematic model in Figure 6(a) shows the normal lattice vibration, and Figure 6(b) illustrates the lattice vibration with spin-spin interaction and the distortion of the exchange coupling by the spin-phonon interaction under laser illumination. Both the Raman modes and the magnetic moment as a function of temperature strongly suggest the interaction of the spin momentum with lattice phonons below TC, due to nearest-neighbor spin-spin interactions.

## IV Conclusions

The quasi-2D FM $Cr_{2}Te_{3}$ flakes were synthesized and optimized from co-evaporated thin films and subsequent plasma treatment. The FESEM and EDX analyses confirm the surface flake-like structures. The XRD and HRTEM confirm the (001)-oriented single crystal $Cr_{2}Te_{3}$ flakes with a c-axis van der Waals gap of 2.95 Å. The magnetic behavior of the 2D $Cr_{2}Te_{3}$ has been investigated using SQUID measurements with the magnetic field applied both perpendicular and parallel to the c-axis of the flakes. A mixed magnetic phase with strong FM and weak AFM components has been confirmed from the M-H and M-T measurements. The polarization-dependent Raman analysis of the thin flakes provides insights into the modes of lattice vibration. Additionally, the DFT calculations support the experimental frequencies of the Raman modes. This strengthens the identification of the Raman modes and supports the reliability of the experimental observations.
The observed Raman frequency, FWHM, and integrated intensity from the temperature-dependent Raman measurements indicate the transition temperature. The anomalies below the TC suggest the strong spin-phonon interaction parameter ($\lambda_{sp}$) of 0.47$\pm$0.03 cm-1. The combination of experimental techniques, along with theoretical calculations, has provided a comprehensive understanding of the structural, magnetic, and vibrational properties of the 2D $Cr_{2}Te_{3}$ flakes and strengthened the understanding of strong spin-phonon interaction mechanisms. The spin-phonon phenomena of 2D ferromagnetic $Cr_{2}Te_{3}$ thin flakes can be suitable for 2D magnetic spintronic device applications. ###### Acknowledgements. The authors acknowledge the National Institute of Science Education and Research Bhubaneswar, DAE, India for supporting this work through the project RIN-4001. ## References * Elahi _et al._ [2022] E. Elahi, G. Dastgeer, G. Nazir, S. Nisar, M. Bashir, H. Akhter Qureshi, D. kee Kim, J. Aziz, M. Aslam, K. Hussain, M. A. Assiri, and M. Imran, A review on two-dimensional (2d) magnetic materials and their potential applications in spintronics and spin-caloritronic, Computational Materials Science 213, 111670 (2022). * Ahn [2020] E. C. Ahn, 2d materials for spintronic devices, npj 2D Materials and Applications 4, 17 (2020). * Lou _et al._ [2018] P. C. Lou, L. de Sousa Oliveira, C. Tang, A. Greaney, and S. Kumar, Spin phonon interactions and magneto-thermal transport behavior in p-si, Solid State Communications 283, 37 (2018). * Milosavljević _et al._ [2018] A. Milosavljević, A. Solajic, J. Pesic, Y. Liu, C. Petrovic, N. Lazarević, and Z. Popovic, Evidence of spin-phonon coupling in crsite 3, Physical Review B 98 (2018). * Casto _et al._ [2015] L. D. Casto, A. J. Clune, M. O. Yokosuk, J. L. Musfeldt, T. J. Williams, H. L. Zhuang, M.-W. Lin, K. Xiao, R. G. Hennig, B. C. Sales, J.-Q. Yan, and D. 
Mandrus, Strong spin-lattice coupling in crsite3, APL Materials 3, 041515 (2015), https://doi.org/10.1063/1.4914134 . * Son _et al._ [2019] J. Son, B. C. Park, C. H. Kim, H. Cho, S. Y. Kim, L. J. Sandilands, C. Sohn, J.-G. Park, S. J. Moon, and T. W. Noh, Unconventional spin-phonon coupling via the dzyaloshinskii-moriya interaction, npj Quantum Materials 4, 17 (2019). * Olsson _et al._ [2021] K. S. Olsson, J. Choe, M. Rodriguez-Vega, G. Khalsa, N. A. Benedek, J. He, B. Fang, J. Zhou, G. A. Fiete, and X. Li, Spin-phonon interaction in yttrium iron garnet, Phys. Rev. B 104, L020401 (2021). * Seyler _et al._ [2018] K. L. Seyler, D. Zhong, D. R. Klein, S. Gao, X. Zhang, B. Huang, E. Navarro-Moratalla, L. Yang, D. H. Cobden, M. A. McGuire, W. Yao, D. Xiao, P. Jarillo-Herrero, and X. Xu, Ligand-field helical luminescence in a 2d ferromagnetic insulator, Nature Physics 14, 277 (2018). * Pulizzi [2008] F. Pulizzi, The rise of semiconductor spintronics, Nature Physics 4, S20 (2008). * Wolf _et al._ [2001] S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnár, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, Spintronics: A spin-based electronics vision for the future, Science 294, 1488 (2001). * Gong _et al._ [2017] C. Gong, L. Li, Z. Li, H. Ji, A. Stern, Y. Xia, T. Cao, W. Bao, C. Wang, Y. Wang, Z. Q. Qiu, R. J. Cava, S. G. Louie, J. Xia, and X. Zhang, Discovery of intrinsic ferromagnetism in two-dimensional van der waals crystals, Nature 546, 265 (2017). * Hao _et al._ [2018] Z. Hao, H. Li, S. Zhang, X. Li, G. Lin, X. Luo, Y. Sun, Z. Liu, and Y. Wang, Atomic scale electronic structure of the ferromagnetic semiconductor cr2ge2te6, Science Bulletin 63, 825–830 (2018). * Huang _et al._ [2018] C. Huang, J. Feng, F. Wu, D. Ahmed, B. Huang, H. Xiang, K. Deng, and E. Kan, Toward intrinsic room-temperature ferromagnetism in two-dimensional semiconductors, J. Am. Chem. Soc. 140, 11519 (2018). * Lin _et al._ [2016] M.-W. Lin, H. L. Zhuang, J. Yan, T. Z. 
Ward, A. A. Puretzky, C. M. Rouleau, Z. Gai, L. Liang, V. Meunier, B. G. Sumpter, P. Ganesh, P. R. C. Kent, D. B. Geohegan, D. G. Mandrus, and K. Xiao, Ultrathin nanosheets of crsite3: a semiconducting two-dimensional ferromagnetic material, J. Mater. Chem. C 4, 315 (2016). * Wen _et al._ [2020a] Y. Wen, Z. Liu, Y. Zhang, C. Xia, B. Zhai, X. Zhang, G. Zhai, C. Shen, P. He, R. Cheng, L. Yin, Y. Yao, M. Getaye Sendeku, Z. Wang, X. Ye, C. Liu, C. Jiang, C. Shan, Y. Long, and J. He, Tunable room-temperature ferromagnetism in two-dimensional cr2te3, Nano Lett. 20, 3130 (2020a). * Youn _et al._ [2007] S. J. Youn, S. K. Kwon, and B. I. Min, Correlation effect and magnetic moments in cr2te3, Journal of Applied Physics 101, 09G522 (2007), https://doi.org/10.1063/1.2713699 . * Burn _et al._ [2019] D. M. Burn, L. B. Duffy, R. Fujita, S. L. Zhang, A. I. Figueroa, J. Herrero-Martin, G. van der Laan, and T. Hesjedal, Cr2te3 thin films for integration in magnetic topological insulator heterostructures, Scientific Reports 9, 10793 (2019). * Tang _et al._ [2015] G. Tang, Q. Qian, X. Wen, G. Zhou, X. Chen, M. Sun, D. Chen, and Z. Yang, Phosphate glass-clad tellurium semiconductor core optical fibers, Journal of Alloys and Compounds 633, 1 (2015). * Manikandan _et al._ [2020] M. Manikandan, K. Subramani, M. Sathish, and S. Dhanuskodi, Hydrothermal synthesis of cobalt telluride nanorods for a high performance hybrid asymmetric supercapacitor, RSC Adv. 10, 13632 (2020). * Wen _et al._ [2020b] Y. Wen, Z. Liu, Y. Zhang, C. Xia, B. Zhai, X. Zhang, G. Zhai, C. Shen, P. He, R. Cheng, L. Yin, Y. Yao, M. Getaye Sendeku, Z. Wang, X. Ye, C. Liu, C. Jiang, C. Shan, Y. Long, and J. He, Tunable room-temperature ferromagnetism in two-dimensional cr2te3, Nano Letters 20, 3130 (2020b), pMID: 32338924, https://doi.org/10.1021/acs.nanolett.9b05128 . * Hohenberg and Kohn [1964] P. Hohenberg and W. Kohn, Inhomogeneous electron gas, Phys. Rev. 136, B864 (1964). * Kohn and Sham [1965] W. Kohn and L. J. 
Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. 140, A1133 (1965). * Kresse and Hafner [1993] G. Kresse and J. Hafner, Ab initio molecular dynamics for liquid metals, Phys. Rev. B 47, 558 (1993). * Kresse and Furthmüller [1996] G. Kresse and J. Furthmüller, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mater. Sci. 6, 15 (1996). * Kresse and Hafner [1994] G. Kresse and J. Hafner, Norm-conserving and ultrasoft pseudopotentials for first-row and transition elements, J. Phys. Condens.Matter 6, 8245 (1994). * Kresse and Joubert [1999] G. Kresse and D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59, 1758 (1999). * Perdew _et al._ [1996] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996). * Cococcioni and de Gironcoli [2005] M. Cococcioni and S. de Gironcoli, Linear response approach to the calculation of the effective interaction parameters in the $\mathrm{LDA}+\mathrm{U}$ method, Phys. Rev. B 71, 035105 (2005). * Lee _et al._ [2021] I. H. Lee, B. K. Choi, H. J. Kim, M. J. Kim, H. Y. Jeong, J. H. Lee, S.-Y. Park, Y. Jo, C. Lee, J. W. Choi, S. W. Cho, S. Lee, Y. Kim, B. H. Kim, K. J. Lee, J. E. Heo, S. H. Chang, F. Li, B. L. Chittari, J. Jung, and Y. J. Chang, Modulating curie temperature and magnetic anisotropy in nanoscale-layered cr2te3 films: Implications for room-temperature spintronics, ACS Appl. Nano Mater. 4, 4810 (2021). * Porezag and Pederson [1996] D. Porezag and M. R. Pederson, Infrared intensities and raman-scattering activities within density-functional theory, Phys. Rev. B 54, 7830 (1996). * Fonari and Stauffer [2013] A. Fonari and S. Stauffer, _vasp-raman-.py_ (https://github.com/raman-sc/VASP/, 2013). * Zhang _et al._ [2016] X. Zhang, Q.-H. Tan, J.-B. Wu, W. Shi, and P.-H. 
Tan, Review on the raman spectroscopy of different types of layered materials, Nanoscale 8, 6435 (2016). * Niranjan [2021] M. K. Niranjan, Significance of coulomb interaction in interlayer coupling, polarized raman intensities, and infrared activities in the layered van der waals semiconductor gase, Phys. Rev. B 103, 195437 (2021). * Zhou _et al._ [2019a] S. Zhou, R. Wang, J. Han, D. Wang, H. Li, L. Gan, and T. Zhai, Ultrathin non-van der waals magnetic rhombohedral cr2s3: Space-confined chemical vapor deposition synthesis and raman scattering investigation, Advanced Functional Materials 29, 1805880 (2019a). * Roy _et al._ [2015] A. Roy, S. Guchhait, R. Dey, T. Pramanik, C.-C. Hsieh, A. Rai, and S. K. Banerjee, Perpendicular magnetic anisotropy and spin glass-like behavior in molecular beam epitaxy grown chromium telluride thin films, ACS Nano 9, 3772 (2015). * Hui _et al._ [2012] L. Hui, S. T. Lim, J. F. Bi, and K. L. Teo, Investigation on the antiferromagnetic component in the intrinsic exchange bias in structurally single phase cr2te3 thin film, Journal of Applied Physics 111, 07D719 (2012), https://doi.org/10.1063/1.3677883 . * Fennie and Rabe [2006] C. J. Fennie and K. M. Rabe, Magnetically induced phonon anisotropy in ${\mathrm{zncr}}_{2}{\mathrm{o}}_{4}$ from first principles, Phys. Rev. Lett. 96, 205505 (2006). * Thomas _et al._ [2022] A. Thomas, P. Telang, K. Mishra, M. Cesnek, J. Bednarcik, D. V. S. Muthu, S. Singh, and A. K. Sood, Role of spin-phonon and electron-phonon interactions in the phonon renormalization of ${({\mathrm{Eu}}_{1-x}{\mathrm{Bi}}_{x})}_{2}{\mathrm{ir}}_{2}{\mathrm{o}}_{7}$ across the metal-insulator phase transition: Temperature-dependent raman and x-ray studies, Phys. Rev. B 105, 075145 (2022). * Zhou _et al._ [2019b] S. Zhou, R. Wang, J. Han, D. Wang, H. Li, L. Gan, and T. 
Zhai, Ultrathin non-van der waals magnetic rhombohedral cr2s3: Space-confined chemical vapor deposition synthesis and raman scattering investigation, Advanced Functional Materials 29 (2019b).
# Jovian sodium nebula and Io plasma torus S+ brightnesses 2017 – 2023: insights into volcanic vs. sublimation supply

###### Abstract

We present first results derived from the largest collection of contemporaneously recorded Jovian sodium nebula and Io plasma torus (IPT) in [S II] 6731 Å images assembled to date. The data were recorded by the Planetary Science Institute's Io Input/Output observatory (IoIO) and provide important context to Io geologic and atmospheric studies as well as the Juno mission and supporting observations. Enhancements in the observed emission are common, typically lasting 1 – 3 months, such that the average flux of material from Io is determined by the enhancements, not any quiescent state. The enhancements are not seen at periodicities associated with modulation in solar insolation of Io's surface, thus physical process(es) other than insolation-driven sublimation must ultimately drive the bulk of Io's atmospheric escape. We suggest that geologic activity, likely involving volcanic plumes, drives escape.

JGR: Space Physics

Planetary Science Institute, 1700 East Fort Lowell, Suite 106, Tucson, AZ 85719-2395, USA

Center for Space Physics, Boston University, Boston, MA 02155, USA

University of Colorado, Boulder, Boulder, CO 80309, USA

Jeffrey P<EMAIL_ADDRESS>

A large set of Jovian sodium nebula and Io plasma torus S+ images provides context for Io and Jovian magnetospheric studies

Enhancements in Na and S+ emission last 1 – 3 months, ruling out insolation-driven sublimation as their driver

Volcanic plumes likely play a key role in atmospheric escape

## Plain Language Summary

The Planetary Science Institute's Io Input/Output observatory (IoIO) is composed almost entirely of off-the-shelf parts popular with amateur astronomers. IoIO uses special filters to isolate emission from two gases found around Jupiter: neutral sodium and ionized sulfur. The sodium is thrown out from Io in a vast cloud called the Jovian sodium nebula.
The ionized sulfur collects into the Io plasma torus (IPT), a ring-shaped structure centered around Jupiter that wobbles around Io's orbital path. These gases ultimately come from Jupiter's highly volcanic moon, Io. We see the Na nebula and IPT brighten frequently. This demonstrates that the majority of the material leaving Io comes from whatever drives the frequent brightening events, with volcanic plumes likely playing a key role. Our results challenge a widely held belief in the scientific community that the majority of the material in the Na nebula and IPT comes from Io's tenuous global atmosphere, which is fed by the sublimation of surface frosts and is relatively stable in time. Our dataset also provides important context for NASA's Juno mission and supporting observations that focus on Io volcanism, the material's likely source, and Jupiter's magnetosphere, the material's ultimate destination.

## 1 Introduction

One of the first hints that Io was somehow releasing material into the Jovian magnetospheric environment in large amounts compared to the other Galilean satellites came from spectroscopic observations of sodium D1 (5896 Å) and D2 (5890 Å) emissions that were time variable, broad, and in a ratio suggesting optically thick gas [R. A. Brown and Chaffee (1974)]. Kupo et al. (1976) conducted spectroscopic studies of S+ in the [S II] 6717 Å and 6731 Å doublet in the orbital plane of Io and detected extended emission corotating with Jupiter. As Voyager 1 approached Jupiter, extreme ultraviolet (EUV) emission from S III, S IV, and O III was resolved into a torus-like structure encircling Jupiter, dubbed the Io plasma torus [IPT; Broadfoot et al. (1979)]. The potential source of the material escaping Io was first hinted at when Io itself was seen to be intermittently bright at a particular orbital phase angle in the 3 – 5 $\mu$m region of the infrared spectrum, with volcanic activity being one of several possible explanations offered [Witteborn et al. (1979)].
Volcanic activity on Io was subsequently unambiguously confirmed by Voyager 1 images of plumes and volcanic surface features [Morabito et al. (1979), Smith et al. (1979)]. This volcanism gives rise, either directly or indirectly, to an SO2-dominated atmosphere [Pearl et al. (1979), Kumar (1979), de Pater et al. (2021)] with minor constituents, including NaCl [Lellouch et al. (2003), McGrath et al. (2004), Moullet et al. (2010), Redwing et al. (2022)]. Io's atmosphere undergoes several reactions with material in the IPT, resulting in detectable effects. For the majority species S and O, charge exchange, sputtering, and electron impact ionization are important processes for removing atmospheric material [e.g., mcgrath87, smyth88a, smyth88, thomas04, schneider07, dols08, dols12, smith22], resulting in a roughly torus-shaped neutral cloud confined to Io's orbital plane and mapped in the EUV at O I 1304 Å [Koga et al. (2018)]. IPT electron impact ionization of this neutral cloud is the primary process by which the IPT receives new material, with direct ionization in Io's atmosphere providing only a minor component [Dols et al. (2008), Dols et al. (2012)]. The canonical value of $\sim$1 ton s-1 of material flowing into the IPT from Io has been estimated using the IPT's total EUV power output and a simple geometric model of the EUV emission region [e.g., broadfoot79, schneider07]. The path that sodium-bearing material takes as it escapes Io's atmosphere is different, thanks to the low ionization potential of any sodium-containing molecule, NaX. For these molecules, impact ionization and charge exchange processes are very efficient [e.g., schneider07]. Pickup NaX ions generated in Io's exosphere that promptly neutralize and dissociate create a directional feature, the "jet," that points radially outward from Io and flaps up and down in synchrony with the IPT [Pilcher et al. (1984), Wilson and Schneider (1999), Burger et al. (1999), see also animations accompanying Figure 1].
A more extended structure, dubbed the "stream," has a similar radial morphology and behavior relative to IPT modulation, but extends for several hours in Jovian local time downstream of Io's position. It comes from NaX+ ions created in Io's exosphere by IPT plasma and swept downstream in the plasma flow before they neutralize [Schneider et al. (1991), Wilson and Schneider (1994)]. IPT NaX+ ions that dissociate produce neutral fragments that are ejected from the IPT at an average of $\sim$70 km s-1, the Jovian corotational velocity at the IPT. This velocity is above Jupiter's escape velocity; thus, neutral Na is detected at distances $>$500 Jovian radii (Rj) from Jupiter [Mendillo et al. (1990)]. All these neutral sodium emission features are known collectively as the Jovian sodium nebula and are well described by Monte Carlo modeling techniques [Wilson et al. (2002)]. The Jovian sodium nebula has been the subject of several long-term studies using ground-based coronagraphic techniques [Mendillo et al. (2004), Yoneda et al. (2009), Yoneda et al. (2015), Roth et al. (2020), Morgenthaler et al. (2019), and this work]. Except for the study presented here, long-term observations of the IPT from ground-based observatories have been limited. The longest continuous study to date covered a full Jovian opposition using a spectroscopic technique [M. E. Brown and Bouchez (1997)]. Ground-based IPT imaging campaigns using coronagraphic techniques have typically lasted a few weeks per opposition, though some have extended over several oppositions [Schneider and Trauger (1995), Woodward et al. (2000), Nozawa et al. (2004), Kagitani et al. (2020)]. These ground-based observations concentrate on the bright [S II] emissions of the 6717 Å, 6731 Å doublet, which are excited by IPT thermal electrons. A significant amount of structure is seen in high-resolution IPT images: a dense "ribbon" near Io's orbit is separated by a gap from the more disk-like "cold torus" closer to Jupiter [e.g., schneider95].
There is evidence that diffusion proceeds inward from the ribbon to the cold torus [Herbert et al. (2008)]. Long-term spectro-imaging observations of EUV emission of the IPT have been conducted from Voyager, Cassini, and Hisaki [Broadfoot et al. (1979), Steffl et al. (2004), Yoshioka et al. (2014)]. This emission, known as the "warm torus," is more extended radially and vertically than the ribbon. The EUV emissions are excited by suprathermal electrons and have been used to study radial transport in the IPT, providing evidence that the total residence time of material in the IPT is 20 – 80 days [Bagenal and Delamere (2011), Hess et al. (2011), Copper et al. (2016), Tsuchiya et al. (2018)]. Once material has left the IPT, it rapidly spirals outward in a few days [Bagenal and Delamere (2011)]. Thus, long-term monitoring of the IPT, such as that presented here, provides critical context to any study of Jupiter's broader magnetosphere, such as that conducted by NASA's Juno mission [Bolton et al. (2017)] and supporting observations [Orton et al. (2020), Orton et al. (2022)]. In this work, we present the first results of a combined Jovian sodium nebula and IPT monitoring campaign, conducted since March 2017 by the Planetary Science Institute's Io Input/Output observatory (IoIO). The coronagraphic observations are described in §2, and §3 provides the methodology used to reduce the data. Section 4 presents the primary results of our study, which is a time history of the surface brightnesses of the Na nebula and IPT (Figure 3.3). This time history shows 1 – 2 brightness enhancements per 7-month observing season, each lasting 1 – 3 months, such that the emissions are seldom found in a quiescent state. In §5 we compare our results to previous studies, noting that although none of the previous workers reported such frequent activity in the Na nebula and IPT, all are consistent with it.
In §6, we use the IoIO data to rule out solar insolation-driven sublimation of Io’s surface frosts as the primary driver of material from Io’s atmosphere, showing instead that geologic processes must be involved. We then review the existing evidence that connects enhancements in material escape from Io’s atmosphere with volcanic plume activity and discuss implications for the transport of material. A summary and concluding remarks are provided in §7. In §8, we suggest additional uses for the IoIO dataset, including providing support for current and planned missions to Jupiter.

## 2 Observations

All observations presented here were conducted with the Planetary Science Institute’s Io Input/Output observatory (IoIO). IoIO consists of a 35 cm Celestron telescope feeding a custom-built coronagraph, described by Morgenthaler et al. (2019). Since the publication of that work, both the observatory hardware and control software have been upgraded, enabling fully robotic acquisition of Jovian sodium nebula and IPT [S II] on- and off-band images, regular photometric observations of Burnashev (1985) spectrophotometric standard stars in all filters, and observations of telluric sodium foreground emission. Bias, dark, and sky flat images are also periodically recorded. Since 2017-03-09, IoIO has contemporaneously recorded Na 5890 Å nebula and IPT [S II] 6731 Å observations on over 550 nights, with over 2300 Na images and over 8300 [S II] images collected. The observatory has been operated on another $\sim$500 nights in support of other Planetary Science Institute projects [e.g., adams23DPS] and pilot studies, increasing the number of spectrophotometric calibrations and the time coverage of telluric Na emission, which provides a time-variable foreground emission [e.g., plane18].

## 3 Data Reduction

### 3.1 General Considerations

All IoIO data are reduced pipeline-style using the software enumerated in the Open Research section.
The Burnashev (1985) spectrophotometric observations show that each filter in our filter library provides stable zero points and extinction coefficients over the length of our study, modulo random nightly variations due to variation in atmospheric transparency, which we ignore in our current analyses, using instead the biweight location [Tukey (1977)] of all the measurements in each filter. The biweight location, $\zeta_{biloc}$, is defined as:

$\zeta_{biloc}=M+\frac{\sum_{|u_{i}|<1}(x_{i}-M)(1-u_{i}^{2})^{2}}{\sum_{|u_{i}|<1}(1-u_{i}^{2})^{2}}$ (1)

where $x_{i}$ are the data and $M$ is the median of the data. The quantity $u_{i}$ is:

$u_{i}=\frac{x_{i}-M}{c\cdot MAD}$ (2)

where $c$ is a tuning constant, set to 9 in our case, and $MAD$ is the median absolute deviation:

$MAD=\mathrm{median}(|x_{i}-M|)$ (3)

The biweight location is a more robust statistical measure of the central location of a distribution than the median, particularly for data not distributed as a Gaussian [Beers et al. (1990)]. Surface brightnesses are expressed in rayleighs (R), where:

$1~R=\frac{10^{6}}{4\pi}~\mathrm{photons~cm^{-2}~sec^{-1}~sr^{-1}}$ (4)

Astrometric solutions of our images, together with high-quality JPL HORIZONS ephemerides [Giorgini et al. (1996)], enable high-precision alignment of on- and off-band images before subtraction of the off-band images. Subtraction of the off-band images effectively removes Jupiter’s scattered continuum light from the on-band images. When astrometric solutions using field stars fail, the position of Jupiter on the coronagraph neutral density (ND) filter is used to establish the astrometric center of the image, with the clock angle determined by the previous successfully solved image. As expected from our stellar calibrations, we found that the ratio between our on- and off-band sky flats gave stable results over the lifetime of the project. Thus, we used the biweight location of all the ratios to scale the off-band images before subtraction from the on-band images.
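The biweight location of Equations 1 – 3 is straightforward to compute. Below is a minimal NumPy sketch; the function name and the handling of the degenerate MAD = 0 case are our own choices, not the pipeline's:

```python
import numpy as np

def biweight_location(x, c=9.0):
    """Biweight location (Equations 1-3) with tuning constant c = 9."""
    x = np.asarray(x, dtype=float)
    M = np.median(x)                   # M is the median of the data
    MAD = np.median(np.abs(x - M))     # Equation 3
    if MAD == 0:                       # degenerate case: no spread
        return M
    u = (x - M) / (c * MAD)            # Equation 2
    keep = np.abs(u) < 1               # only points with |u_i| < 1 contribute
    w = (1.0 - u[keep] ** 2) ** 2
    return M + np.sum((x[keep] - M) * w) / np.sum(w)  # Equation 1
```

For a sample like [1, 2, 3, 100] the outlier is rejected entirely and the result stays near 2, illustrating the robustness noted by Beers et al. (1990).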
Sample reduced images are shown in Figure 1.

Figure 1: Sample IoIO Na nebula (left column) and IPT [S II] 6731 Å (right column) images. The top row shows images recorded just before the large 2018 enhancements in Na nebula and IPT emission (see Figure 3.3), the middle row shows images recorded near the 2018 Na nebula and IPT peaks in emission, and the bottom row shows images recorded near the 2022 peaks. Animations are provided in the on-line journal.

We note that our calibration procedure is a significant improvement over the technique used by Morgenthaler et al. (2019), which relied on the image of Jupiter through the ND filter for surface brightness calibration. As discovered after the installation of a larger filter wheel in 2019, Jupiter’s detected brightness is subject to an unexpected Fabry-Pérot effect between the narrow-band and ND filters, with each narrow-band filter providing a different magnitude of effect. Our current procedure avoids this issue by using the stellar and flat-field calibrations described above. In order to establish a time sequence of the Na nebula and IPT brightnesses, we first rotate the images reduced by the procedure above into the plane of the IPT centrifugal equator using the relation:

$\alpha=-A\cos(\lambda_{\mathrm{III}}-P)$ (5)

where $\alpha$ is the angle between the Jovian rotational axis and the perpendicular to the IPT centrifugal equator, $\lambda_{\mathrm{III}}$ is the sub-observer System III longitude, $A$ is the amplitude of the oscillation of the centrifugal equator, and $P$ is the $\lambda_{\mathrm{III}}$ longitude of the intersection of the magnetic and equatorial planes. For this work, we used $A$ = 6.8∘ [Moirano et al. (2021)] and $P$ = 290.8∘ [Connerney et al. (1998)]. Values of $A$ = 6.3∘ [Phipps & Bagenal (2021)] and $P$ = 286.61∘ [Connerney et al. (2018)] could also be used, and would result in trivial differences in our extracted surface brightnesses.
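Equation 5 and the adopted constants reduce to a one-line function; the names below are illustrative:

```python
import numpy as np

A_DEG = 6.8    # amplitude of the centrifugal-equator oscillation [Moirano et al. (2021)]
P_DEG = 290.8  # System III longitude of the node [Connerney et al. (1998)]

def centrifugal_tilt(lambda_III_deg, A=A_DEG, P=P_DEG):
    """Equation 5: angle alpha (degrees) by which an image is rotated
    into the plane of the IPT centrifugal equator."""
    return -A * np.cos(np.radians(lambda_III_deg - P))
```

The tilt swings between -6.8∘ when the sub-observer longitude sits at the node and +6.8∘ half a Jovian rotation later.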
### 3.2 Na Nebula

As shown by previous work (§1) and the Na nebula animations accompanying Figure 1 in the online journal, the bulk of the bright jet and stream emission follows Io in its 42 hour orbit and flaps up and down with each 9.925 hr Jovian rotation. To minimize the effects of this high variability when extracting surface brightnesses from individual Na nebula images, we rotate each image by $\alpha$, as described above, and divide the resulting image into horizontal apertures distributed vertically from the IPT centrifugal plane, as shown in Figure 2. The ND filter and the area beyond the edge of the narrow-band filters are masked, as are pixels with values above the non-linear point of the CCD, with a larger mask area applied around the Galilean satellites. The average surface brightness in each aperture is calculated by totaling the individual surface brightnesses of the unmasked pixels and dividing by the total number of unmasked pixels. The final surface brightness for a given distance from the IPT centrifugal plane is the average of the surface brightnesses of the pair of apertures located at that distance above and below the plane.

Figure 2: Sample Na nebula image illustrating the reduction steps described in §3.2. The blue lines indicate boundaries between apertures used to extract surface brightness values as a function of vertical distance from the IPT centrifugal plane. The boundaries between apertures are defined by the following vertical distances from the IPT centrifugal plane: 4 Rj – 8 Rj, 8 Rj – 16 Rj, and 16 Rj – 32 Rj, with the average distances from the plane of each pair of apertures used for subsequent identification (e.g., see legend of Figure 3.3, top). Masked areas are shown in white.

#### 3.2.1 Removal of Telluric Sodium Contamination

Telluric sodium emission provides a time-varying and, at times, substantial field-filling component to our Na nebula images.
We attempted to remove this emission using an empirical model constructed from our multi-year dataset of telluric sodium emission observations. The model accounted for airmass effects, solar scattering angle, and seasonal effects. However, after subtraction of the model, the time sequence of Na nebula surface brightnesses was still quite noisy. Thus, we instead subtract the average surface brightness of emission $>$32 Rj above and below the centrifugal plane from the extracted surface brightnesses of each image. As a result, the variation induced by telluric emission was greatly diminished. A final step in the Na nebula reduction is to compute the biweight location of all of the measurements at each distance on each night. The results are plotted together with a 21-point moving median filter in Figure 3.3 (top). A byproduct of our telluric sodium removal technique is that it induces an intensity-dependent error of order $\sim$5 R – $\sim$25 R in our quoted Na nebula surface brightnesses. This is because our telluric removal procedure effectively assumes that the brightness of the Na nebula is zero at the edge of the IoIO FOV. However, as shown by larger-field images [Mendillo et al. (2004), Yoneda et al. (2009), Yoneda et al. (2015)], the emission $>$30 Rj above and below Jupiter’s equatorial plane varies from $<$5 R to $\sim$25 R, depending on whether or not the nebula is enhanced. This effect could be corrected using a model of the Na nebula emission; however, doing so would not affect the results of our current study.

### 3.3 IPT

As shown by comparing Figure 1 (lower right) to the IPT image in Figure 3, rotating by Equation 5 provides a natural coordinate system for extracting brightness values of the ansas (edges; Latin: “handles”) of the IPT. As described in §1, the [S II] 6731 Å ansas primarily capture the IPT ribbon emission. We extract the average surface brightness from each ribbon feature using the two-step process shown graphically in Figure 3.
Specifically, starting from the rotated image, we define ansa extraction regions that extend radially 4.75 Rj to 6.75 Rj from Jupiter and $\pm$1.25 Rj above and below the IPT centrifugal equator (white boxes). Radial profiles of the emission in the white boxes are shown in the bottom row. These profiles are generally well-fit by a Gaussian plus a sloping continuum of the form:

$f(x)=A\,e^{-\frac{(x-x_{0})^{2}}{2\sigma^{2}}}+P_{0}+P_{1}x$ (6)

where $A$ is the peak surface brightness of the Gaussian component of the radial profile, $x_{0}$ is the ribbon radial distance from Jupiter, $\sigma$ is the width of the ribbon, and $P_{0}$ and $P_{1}$ are the coefficients of the linear background. The $\pm$1-$\sigma$ limits of these Gaussians are used to define the radial limits of the region used to extract vertical profiles, shown in the outer plots of the top row. Equation 6, with $P_{1}$ = 0, is used to fit the vertical profiles. The Gaussian component of this function is integrated to arrive at an average ribbon intensity of $A\sigma\sqrt{2\pi}$. This is converted to an average surface brightness by dividing by $\sigma$. Occasionally, the data are of high enough quality and the torus configured such that the cold torus is resolved. This is the case for the dawn (left) ansa and results in a small peak inside of the ribbon. We ignore the effect of this feature on our fits, since the simple sloping continuum plus Gaussian provides an adequate foundation for determining the region over which to extract vertical profiles. As shown in the Figure, the vertical profiles are well-described by Equation 6, with $P_{1}$ = 0. The total area of the Gaussian component of each fit is then used to establish the average surface brightness of each ribbon. If an extraction area contains saturated pixels from any nearby Galilean satellite, it is excluded from the analysis.
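The radial-profile fit of Equation 6 can be sketched with a standard nonlinear least-squares routine. The profile below is synthetic and noiseless, with hypothetical parameter values chosen only to fall inside the 4.75 Rj – 6.75 Rj extraction box; the actual pipeline's fitter may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def ribbon_profile(x, A, x0, sigma, P0, P1):
    """Equation 6: Gaussian ribbon plus sloping linear continuum."""
    return A * np.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2)) + P0 + P1 * x

# Synthetic radial profile over the ansa extraction box (distances in Rj)
x = np.linspace(4.75, 6.75, 200)
y = ribbon_profile(x, A=300.0, x0=5.7, sigma=0.25, P0=40.0, P1=-5.0)

popt, _ = curve_fit(ribbon_profile, x, y, p0=[200.0, 5.9, 0.3, 0.0, 0.0])
A, x0, sigma, P0, P1 = popt

# The +/- 1-sigma limits bound the band used for the vertical profiles;
# integrating the Gaussian (A*sigma*sqrt(2*pi)) and dividing by sigma
# gives the average surface brightness described in the text.
radial_band = (x0 - abs(sigma), x0 + abs(sigma))
avg_surface_brightness = abs(A) * np.sqrt(2.0 * np.pi)
```

The same function with $P_{1}$ held at 0 serves for the vertical profiles.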
Fits that result in ribbon peak positions outside of the range 5.0 Rj – 6.5 Rj or peak widths outside the range 0.1 Rj – 0.5 Rj are discarded. In this way, our extractions are able to adjust for varying observing conditions and the intrinsic variability in the IPT ansa morphology [e.g., Schneider & Trauger (1995)] and reliably discard pathological cases. The time history of the average ribbon surface brightnesses, together with a 21-point running median filter of the dusk ribbon points, is plotted in Figure 3.3 (bottom). On timescales of weeks to months, all other parameters of the fits roughly scale with ribbon surface brightness, except for the radial peak positions of the ribbon. This behavior is expected because all of the parameters of the fits, except the radial peak positions, are sensitive to the total amount of material in the IPT, whereas the radial positions of the ribbon are determined by physical effects outside of the IPT [e.g., barbosa83, ip83; see also §8].

Figure 3: Graphical depiction of the [S II] ribbon surface brightness extraction process described in §3.3. An IPT image rotated into the reference frame of the IPT centrifugal equator is shown in the top, middle panel. Radial profiles of the ansas are shown in the bottom row, fit by Equation 6. The 1-$\sigma$ limits indicated on these plots define the edges of regions between which the vertical profiles are computed (top row, outer plots). The average surface brightness of each ribbon is the integral of the Gaussian component of the fit of the corresponding vertical profile.

We note that, as derived, the absolute values of the dawn ribbon surface brightness values shown in Figure 3.3 (bottom) are artificially low because, at the blueshift of the dawn ribbon, IPT emission falls outside of the central bandpass of the [S II] 6731 Å filter as measured in a collimated beam.
This effect can be corrected using a velocity-dependent IPT map, the [S II] filter transmission curve, and consideration of the effects of the telescope’s F/11 light paths on the filter’s total transmission. However, making these corrections would not affect the results of our current study.

Figure 3.3: Top: Time sequence of surface brightnesses in the Jovian sodium nebula at three distances from the plasma torus equatorial plane. Bottom: Time sequence of IPT ribbon average surface brightnesses. A running median filter, 21 points wide, is applied to each Na aperture and to the dusk ribbon brightnesses.

## 4 Results

We anticipate the IoIO data will be very useful for correlative studies with observations focusing on Io and the effect that material escaping Io has on Jupiter’s magnetosphere, such as those afforded by NASA’s Juno mission [Bolton et al. (2017)] and supporting observations [Orton et al. (2020), Orton et al. (2022)]. To that end, the surface brightness points shown in Figure 3.3 have been archived at Zenodo [Morgenthaler et al. (2023)]. In this paper, we focus on what the data themselves can say about the physical processes that drive material escape from Io’s atmosphere. Our 6-year time sequence of Jovian Na nebula and IPT [S II] 6731 Å ribbon brightnesses (Figure 3.3) shows considerable modulation in each emission line as a function of time. During each $\sim$7-month observing window, at least 1 – 2 enhancements, each lasting 1 – 3 months, are seen. Very little time is spent in a quiescent state. Visual inspection of Figure 3.3 reveals that the average values of the Na nebula and IPT surface brightnesses are determined by the enhancements, rather than by any quiescent value. To quantify this finding we compute the Tukey (1977) biweight location (Equation 1) of the measurements presented in Figure 3.3. Recall from §3.1 that the biweight location is a more robust statistical measure of the central location of a distribution than the median or average.
The biweight location values of the Na nebula points are 80 R, 50 R, and 15 R in the 6 Rj, 12 Rj, and 24 Rj apertures, respectively. Compare this to low values of approximately 30 R, 20 R, and 5 R. Similarly, the biweight locations of the ribbon brightnesses are 50 R and 90 R for dawn and dusk, respectively, with minima of 15 R and 30 R. Visual inspection of Figure 3.3 also shows a quasi-contemporaneous relationship between the Na nebula and IPT enhancements. For instance, the relative timing between the peaks of the 2018 Na nebula and IPT enhancements is significantly different from that seen in 2020. And the fall 2022 Na nebula enhancement is particularly bright compared to other Na nebula enhancements, yet the IPT enhancement during that time period is particularly weak compared to other years. This type of behavior has not been reported before. We discuss the implications of our results in §6. But first, we compare our results to those of previous studies, which provide valuable context to our discussions.

## 5 Comparison to previous studies

IoIO occupies a unique niche in sensitivity, which ideally suits it to the study of the modulation of material flow from Io into the broader Jovian magnetosphere. The 35 cm telescope aperture of IoIO was chosen to be comparable to the smallest apertures that have successfully imaged the IPT [Nozawa et al. (2004)]. This has allowed us to reliably capture, at modest cost, a 6-year history of the modulation in the IPT [S II] 6731 Å ribbon brightnesses (presented here) and positions (to be presented in a subsequent work). Our equipment choice limited the FOV of the instrument to 0.4∘, which is much smaller than the 2.5∘ – 7∘ FOV of previous long-term coronagraphic Na nebula studies [Mendillo et al. (2004), Yoneda et al. (2009), Yoneda et al. (2015), Roth et al. (2020)].
However, the narrower FOV of IoIO affords it much greater sensitivity to emission close to Io, as is evident from comparing the left column of our Figure 1 to Figure 1 of Mendillo et al. (2004) and Figure 2 of Yoneda et al. (2009, 2015). This feature of the IoIO Na nebula observations will allow us to conduct detailed morphological studies of the jet and stream in future work.

### 5.1 Sodium-related studies

The outer portions of the IoIO FOV overlap with the inner portions of the images recorded by other wide-field Na nebula studies [Mendillo et al. (2004), Yoneda et al. (2009), Yoneda et al. (2015), Roth et al. (2020)], allowing direct comparison. For instance, the peak intensity of the fall 2022 Na nebula enhancement detected by IoIO roughly compares to the peak intensities of the 2007 and 2015 enhancements captured by Yoneda et al. (2009, 2015). The Roth et al. (2020) study is useful, since it provides a time history of Na nebula brightnesses measured with the same coronagraph used in the Yoneda et al. (2015) work for a 4-month interval in 2017 during which no enhancement was reported. Nevertheless, modulation at the $\sim$10 R level in daily values ($\sim$5 R in the half-month averages) is seen. This is comparable to the variation seen in the IoIO dataset during the 2018 enhancement. The greater sensitivity of the IoIO coronagraph to emission closer to Jupiter makes variation of this magnitude much easier to detect. This implies that periods formerly identified as quiescent in the Yoneda et al. (2015) dataset may, in fact, contain enhancements. By that interpretation, the period highlighted in Roth et al. (2020) appears to capture the low point between two enhancements. Spectroscopic observations conducted at the Lick observatory over the entire 1995 Jovian opposition [M.E. Brown (1994), M.E. Brown & Bouchez (1997)] are also useful for comparison. This work captured an enhancement in both the Na 5890 Å doublet and [S II] 6731 Å (see also Sections 5.2 – 5.3).
The 10′′ spectrograph slit was aligned along the centrifugal equator, with peak emission averaged along the slit reaching levels of 400 R – 800 R. In order to compare to our data, we extend the aperture extraction procedure outlined in §3.2 using apertures progressively closer to the centrifugal plane, following the same geometric sequence. We stopped decreasing the aperture size when the emission brightness increased by $<$10%. The resulting aperture extended 0.5 Rj above and below the centrifugal plane and resulted in peak brightnesses of 200 R – 300 R during the years 2017 – 2021 and 630 R in 2022. This suggests that the 1995 Na enhancement captured by M.E. Brown & Bouchez (1997) was comparable in size to the 2022 enhancement shown in Figure 3.3 (upper panel). Also important to mention are the Galileo dust detector measurements acquired 1996 – 2003 [Krüger, Geissler et al. (2003)]. There is evidence that the dust comes from Io [Graps et al. (2000), Krüger, Horányi & Grün (2003)], is composed almost entirely of NaCl [Postberg et al. (2006)], and has its origin in Io volcanic plumes [Krüger, Geissler et al. (2003)]. Further evidence shows that NaCl+ is an important pathway for Na escape from Io’s atmosphere [Grava et al. (2014), Schmidt et al. (2023)]. This suggests that variation seen in our Na nebula dataset and others should be echoed in the Galileo dust detector data. Krüger, Geissler et al. (2003) used a simple geometric model of dust emission from Io to translate dust detector count rates into the flux of dust from Io (their Figure 2). As discussed by Krüger, Geissler et al. (2003), the Galileo orbit precluded continuous measurements of the dust streams before mid-2000. However, beginning after this time, there was a large, well-covered enhancement that lasted $\sim$6 months. Subsequent enhancements in the calculated Io dust flux lasted $\sim$1 – $\sim$3 months and had smaller amplitudes than the 2000 enhancement. The magnitudes of the enhancements are 1 – 4 orders of magnitude, which is much larger than those seen in the sodium nebula data.
Full treatment of the reasons for the difference in magnitude seen between the different measurement methods is beyond the scope of this work. Rather, we point out that the durations of the enhancements in the derived dust flux from Io are comparable to those observed in the Jovian sodium nebula [M.E. Brown & Bouchez (1997), Yoneda et al. (2009), Yoneda et al. (2015), and this work].

### 5.2 IPT studies

Previous studies of IPT [S II] 6731 Å emission show peak ribbon brightness values in the $\sim$100 R – $\sim$1000 R range in individual measurements [e.g., morgan85a, oliversen91, jockers92, woodward94, schneider95, thomas01, nozawa04_no_SII, yoneda09, schmidt18]. As described in §3.3, the values shown in Figure 3.3 are the average surface brightness of each ribbon derived using a two-step Gaussian fitting procedure, with one Gaussian used to isolate emission in the radial direction and one to compute the average surface brightness in the vertical. To convert from averages over the Gaussian functions to peak values, we multiply by $2\pi$: one factor of $\sqrt{2\pi}$ for the integral over the vertical Gaussian and another factor of $\sim\sqrt{2\pi}$ to account for the summation between the $\pm 1\sigma$ limits of the radial Gaussian. Applied to our 21-point moving medians, this yields peak values of $\sim$200 R – $\sim$900 R, with individual points ranging from $\sim$50 R to $\sim$1200 R. We take this to be good agreement with previous studies and therefore independent validation of our stellar calibration procedure. The only other published study of IPT [S II] 6731 Å emission lasting more than a few weeks during a single Jovian opposition is the companion of the spectroscopic Na nebula observations collected in 1995 at the Lick observatory, discussed in §5.1 [M.E. Brown (1994), M.E. Brown & Bouchez (1997)]. That study captured an IPT enhancement that lasted $\sim$2.5 months. The emission pre- and post-enhancement was $\sim$200 R, and the emission was $\sim$400 R at its peak.
The factor of $\sim$2 difference between the pre/post and peak values is comparable to the broad enhancement partially captured at the beginning of the 2019 Jovian opposition; in other words, it is within the range observed over our 6-year study. Also useful for comparison are the two long-term studies of the warm torus that have been conducted in the EUV, one by Cassini and one by Hisaki [e.g., Steffl et al. (2004), Yoshikawa et al. (2017)]. Comparison of the surface brightnesses seen in the EUV warm torus observations to the surface brightness of the [S II] 6731 Å observations of the ribbon region would require detailed IPT modeling that is beyond the scope of this work. Thus, we limit our discussion to the duration of the enhancements. During its observations, Cassini captured two enhancements in emissions from ionization states of S and O lasting of order 1 – 3 months, one in late 2000, the other in early 2001. During its multi-year observing campaign, Hisaki saw one large enhancement that lasted $\sim$3 months in 2015. Smaller amplitude modulations in the Hisaki data have also been noted [Roth et al. (2020)]. These enhancement durations are comparable to those seen in the IoIO data.

### 5.3 Contemporaneous Na nebula and IPT studies

Two previous studies reported contemporaneous Na nebula and IPT enhancements [M.E. Brown & Bouchez (1997), Tsuchiya et al. (2018)]. These studies, and related work, concentrated on the detailed behavior of the observed emission during the enhancements and the implications for physical processes occurring within the IPT and broader Jovian magnetosphere [e.g., yoshikawa17, kimura18, hikida18, yoshioka18, tsuchiya18, hikida20, roth20, tao16a, tao16b, tao18, tao21]. Such in-depth study of individual enhancements is beyond the scope of our current work.
Rather, we note for comparison to our data that, for both the 1995 and 2015 enhancements, there was a delay of $\sim$4 weeks between the peaks in the Na nebula and IPT S+ emissions, even though in the 1995 case the S+ emissions were from the ribbon region and detected via the [S II] 6731 Å line, while in the 2015 case the emissions were from the warm torus and detected via the S II 765 Å line. Because different regions of the torus were studied in the two cases, comparison of the relative strengths of the IPT enhancements requires modeling that is beyond the scope of this effort. Thus, we are not currently able to use these studies to corroborate our observation that the relative strengths of the Na nebula and S+ enhancements can vary significantly with time.

## 6 Discussion

Our study has a unique combination of sensitivity, cadence, and duration that has enabled it to determine that the Na nebula and IPT are frequently in states of enhancement and that the enhancements in the two species have a quasi-contemporaneous relationship. When interpreted within the context of previous studies, the former result allows us to rule out solar insolation-driven sublimation as the primary mechanism driving Io atmospheric escape; the latter provides insights into the most likely mechanism, volcanism, and the subsequent path sodium- and sulfur-containing materials take through and out of Io’s atmosphere. Our discussion begins with atmospheric escape.

### 6.1 Response of sodium nebula and IPT to Io atmospheric escape

As reviewed in §1, Io’s atmosphere is removed by interaction with the IPT via charge exchange, sputtering, and electron impact ionization to fill the neutral clouds on timescales of hours [e.g., smyth88, dols08, dols12, smith22]. The apertures used to extract Na nebula surface brightnesses (Figure 2) are primarily filled by sodium traveling near the IPT’s $\sim$70 km s-1 corotation velocity. The residence time of this material in the IoIO FOV is $\sim$11 hours.
Furthermore, we have chosen apertures that integrate over the effects of Jupiter’s $\sim$10 hr rotation period and Io’s $\sim$40 hr orbit. Thus, to an accuracy of $\sim$1 day, the Na nebula surface brightnesses shown in Figure 3.3 (top panel) provide a good indicator of the modulations in the escape rate of sodium-bearing material from Io’s atmosphere. The response of the IPT to Io atmospheric escape is somewhat more complicated than that of the Na nebula. The neutral clouds described in the previous paragraph are shaped by interaction with the IPT through the processes of impact ionization and charge exchange [e.g., Smyth & Marconi (2003)]. Impact ionization results in the addition of new material and proceeds on timescales of $\sim$1 day [Smyth & Marconi (2003)]. The residence time of plasma in the IPT is 20 – 80 days, with the shorter residence times corresponding to times of higher total plasma density [Bagenal & Delamere (2011), Hess et al. (2011), Copper et al. (2016), Tsuchiya et al. (2018)]. Thus, when there is an enhancement in the escape of sulfur-bearing material from Io’s atmosphere, the peak in the IPT S+ 6731 Å ribbon will lag by an amount dependent on the IPT plasma density. A model is being developed that could, in principle, calculate the precise IPT ribbon response [D. Coffin et al. (2020), D.A. Coffin et al. (2022), D.A. Coffin & Withers (2023), Nerney & Bagenal (2020), Nerney et al. (2022), Nerney et al. (2023)], but its completion and application to the IoIO dataset are beyond the scope of our current project. Thus, we take the $\sim$4 week delay between the peaks in Na nebula and IPT S+ emission in the previous two studies that captured contemporaneous enhancements [§5.3; M.E. Brown & Bouchez (1997), Tsuchiya et al. (2018)] as indicative of the plasma transport time during a typical IPT enhancement. Four weeks is also similar to the transport time deduced from the larger IPT enhancement captured by Cassini [Steffl et al. (2004), Copper et al. (2016)].
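The reasoning above, that the IPT ribbon peak lags the Na nebula peak by an amount set by the plasma residence time, can be illustrated with a deliberately simple linear-response toy. This is our own sketch, not the physical model cited above: the IPT response is taken to be the escape rate convolved with a causal exponential kernel whose e-folding time stands in for the 20 – 80 day residence time.

```python
import numpy as np

def ipt_response(source, tau, dt=1.0):
    """Convolve a source-rate time series (1-day sampling) with a
    normalized causal exponential of e-folding time tau (days)."""
    t = np.arange(0.0, 10.0 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(source, kernel)[: len(source)]

# A ~1-month Gaussian enhancement in escape from Io's atmosphere
days = np.arange(0.0, 200.0)
source = np.exp(-((days - 60.0) ** 2) / (2.0 * 15.0 ** 2))

# Shorter residence time (denser plasma) -> shorter lag of the IPT peak
lag_dense = np.argmax(ipt_response(source, tau=20.0)) - np.argmax(source)
lag_tenuous = np.argmax(ipt_response(source, tau=80.0)) - np.argmax(source)
```

With these illustrative numbers the lag of the convolved peak grows by a factor of a few as tau increases, qualitatively consistent with the density-dependent delays discussed above.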
### 6.2 Interpretation of quasi-contemporaneity of Na nebula and IPT enhancements

Within the context of the discussion in Sections 5.3 and 6.1, we can now offer an interpretation of the quasi-contemporaneous nature of the Na nebula and IPT enhancements seen in Figure 3.3. In all cases except 2020, each major Na nebula enhancement has a companion enhancement seen in the IPT that is delayed by $\sim$4 weeks. This $\sim$4 week delay is consistent with that seen in previous studies and is indicative of simultaneous release of sodium- and sulfur-bearing material from Io’s atmosphere. In 2020, the delay between the Na nebula and IPT enhancement peaks is almost twice as long; however, the profiles of both enhancements are more complicated than the enhancements in other years: the Na nebula enhancement has a shoulder on its trailing edge, and the IPT enhancement appears to consist of a broad, main peak, followed by a small, sharp peak. Thus, we offer the suggestion that in mid-2020 there are two overlapping sets of Na nebula and IPT enhancements, with the earlier set being larger than the later one. A similar relationship may exist between other, smaller enhancements, such as the shoulder on the early 2020 Na nebula peak and the small IPT peak in mid-2019. We discuss the implications of the variation in the relative Na nebula and IPT peak _sizes_ seen in each contemporaneous pair (e.g., fall 2022 being the most extreme) in §6.6.

### 6.3 Ruling out solar insolation-driven sublimation

The current paradigm holds that the bulk of the escaping material from Io’s atmosphere is supplied by Io’s global sublimation atmosphere [e.g., schneider07, dols08, dols12]. This paradigm suggests that, in the absence of some other perturbing effect on Io’s atmosphere, the variations seen in the Na nebula and IPT should be dominated by variations in solar insolation, that is, by Io’s 42 hour orbit or Jupiter’s 12 year orbit [e.g., depater20_post_eclipse, tsang12].
Enhancement in the escape of material from Io’s atmosphere may also be modulated by Jupiter’s magnetic rotational period (9.925 hr) due to Io’s apparent motion within the IPT [e.g., smyth11]. Even when considering the timescales of the responses of the Na nebula and IPT to material escape from Io’s atmosphere, discussed in Sections 6.1 – 6.2, solar insolation and magnetic periodicities are not compatible with the behavior of the enhancements seen in Figure 3.3. Thus, we argue that one or more other physical mechanisms are driving atmospheric escape during enhancements. Since enhancements dominate the average supply of material from Io’s atmosphere (Sections 4, 6.1), the process(es) driving enhancements provide the bulk of the material in the Jovian sodium nebula and IPT.

### 6.4 The case for volcanism

The initial claim of a link between Io volcanism and material release from Io’s atmosphere was made using Jovian sodium nebula images recorded with a cadence of weeks to months and disk-integrated infrared observations [Mendillo et al. (2004)]. A subsequent study, using Jovian sodium nebula observations recorded with a near-nightly cadence and a much more extensive set of disk-resolved Io infrared observations, failed to validate that initial claim [Roth et al. (2020)]. In this work, we take a different approach and use the time behavior of material release from Io’s atmosphere to suggest that the most likely driver of atmospheric escape is volcanic plumes. Recall from §1 that, based on the EUV brightness of the IPT, $\sim$1 ton s-1 of material must be flowing into it from Io’s atmosphere. We can now attribute this amount of flux to the mechanism(s) that cause enhancements. Io’s atmosphere is itself tenuous and, without resupply from the surface or subsurface, cannot act as a reservoir for supplying enhancements in escape that last weeks to months.
We ruled out, in §6.3, variation due to solar insolation-induced sublimation and magnetospherically enhanced escape of Io's global atmosphere. Thus, geologic processes of some sort must ultimately be involved in the dominant process(es) of atmospheric escape from Io. In §6.2, we showed that enhancements in the escape of sodium- and sulfur-bearing material from Io's atmosphere occur simultaneously. This simultaneity, together with the geologic nature of the processes driving escape, implies that there is a single geographic location responsible for driving atmospheric escape in each pair of enhancements, with that location not necessarily being the same for each pair. Finally, we note that the behavior of the Na nebula and IPT surface brightnesses appears to be stochastic, with large and small enhancements interleaved.

The picture that emerges is that geologic activity at discrete sites on Io results in enhancements in the escape of sodium- and sulfur-bearing materials that last 1 – 3 months, with 1 – 2 enhancements seen during each 7-month Jovian opposition observing window and with the relative amount of material escape varying between individual events. Of the known geologic processes on Io that match the above criteria, volcanism is the most likely to result in the stochastic perturbation of Io's atmosphere, via processes involving plumes.

Observational support for Io volcanic plume-driven atmospheric escape comes from the correlation between plumes observed by Galileo (Geissler et al., 2004; Geissler & McMillan, 2008) and enhancements in the Jovian dust streams (Krüger et al., 2003). As reviewed in §5.1, the dust streams are composed almost entirely of NaCl and come from Io (Graps et al., 2000; Krüger, Horányi & Grün, 2003; Postberg et al., 2006). Furthermore, NaCl+ has been shown to be an important pathway for Na escape from Io (Grava et al., 2014; Schmidt et al., 2023).
Based on this evidence, it is plausible to suggest that volcanic plumes play a key role in the supply of the Jovian sodium nebula. We use the very large Jovian dust stream enhancement that peaked in early September 2000 and the IPT enhancement observed by Cassini that peaked about a month later as observational evidence of the connection between IPT enhancements and volcanic plumes (Krüger et al., 2003; Steffl et al., 2004; see also §5.2).

The difficulty with suggesting that volcanic plumes themselves are responsible for launching material out of Io's atmosphere is that plume vent velocities are far below Io's gravitational escape velocity, even when considering local atmospheric heating by plume dynamics (e.g., Schneider & Bagenal, 2007; McDoniel et al., 2017; de Pater et al., 2020; Redwing et al., 2022). Thus, if plumes are implicated in material escape from Io, the mechanism must be indirect. Plume models show that shocks in the plume canopy impede upward flow of material, redirecting it outward and toward the surface (Zhang et al., 2003, 2004). Zhang et al. (2003) suggest that the material that is redirected forcibly toward the surface enhances sublimation of SO2 frost over a large area. Sublimation of SO2 frosts by hot surface/subsurface lavas would provide a similar localized enhancement in Io's SO2 atmosphere. This SO2 would then be available to interact with the IPT via the pathways of sputtering, electron impact ionization, and charge exchange. The difficulty with this scenario is that it does not provide a comparable mechanism for enhancing the escape of NaCl, since NaCl has a much higher sublimation temperature than SO2. However, the amount of NaCl provided by the plume itself and/or NaCl lofted from Io's surface by sublimation may be sufficient to drive escape, when considering interaction with the IPT over a large area (see also §6.5).
Perhaps a more plausible path of escape for both SO2 and NaX is to consider the ability of plumes to loft material to sufficient altitude to enable enhanced interaction between IPT plasma and the tops of plumes. An extension of the Zhang et al. (2003, 2004) model of Pele's plume shows that, by including interaction between the top of the plume and the IPT, better agreement is found with the distribution of material seen on the surface (McDoniel et al., 2015, 2017, 2019). The McDoniel et al. (2019) work provides theoretical validation of the ability of plume tops and IPT plasma to interact, but stops short of a full quantitative calculation of the amount of material that could be removed by this interaction. Thus, although there is observational evidence that connects volcanic plumes to enhancements in the escape of sodium- and sulfur-bearing material from Io's atmosphere, current state-of-the-art theoretical calculations have not been able to determine the exact pathway taken by the material to Io's exosphere.

### 6.5 Implications for the transport of sodium- and sulfur-bearing material through Io's atmosphere

Atacama Large Millimeter/submillimeter Array (ALMA) observations of Io's atmosphere reveal collections of hot NaCl, KCl, and SO2 gases that are interpreted as plumes (Redwing et al., 2022). Redwing et al. note that the highest column density collections of alkali and SO2 gases are consistently _not_ found to be coincident with each other (Figure 9 of that work). As discussed by Redwing et al., these results are difficult to explain given that SO2, the primary volatile on Io, is expected to be associated with all plume activity. These results are also in apparent conflict with our result that sodium- and sulfur-bearing material are consistently seen to escape at the same time, implying a common geographic source (Sections 6.2 – 6.4). In this Section, we discuss some mechanisms that might contribute to this effect, including those not considered by Redwing et al.
Redwing et al. (2022) provide two potential reasons for the lack of spatial coincidence between alkali and SO2 plumes in Io's atmosphere, both of which rely on the difference in vaporization pressures of these materials. (1) SO2 gas is produced primarily by hot lava vaporizing frost deposits, with these deposits found primarily at low to mid latitudes. Alkalis sublime at a much higher temperature and therefore will not be released into Io's atmosphere by this effect. (2) The alkalis observed by ALMA are released in the plumes of high-temperature volcanoes. Redwing et al. note that these high-temperature plumes should also produce SO2, but that these alkali-producing volcanoes are consistently located at high latitudes, where atmospheric temperatures may be low enough to freeze SO2 within the plumes. In this way, (1) explains why SO2 plumes appear in the absence of alkali plumes (low latitudes) and (2) suggests that SO2 is always collocated with the high-latitude alkali plumes; however, this high-latitude SO2 is largely invisible to ALMA because it is in solid form.

Evidence of solid-phase transport of SO2 through Io's atmosphere comes from the Galileo detection of very high mass-to-charge ratio ions, interpreted as clusters of SO2 molecules, or "snowflakes," when it flew over Io's north pole (Frank & Paterson, 2002). This explanation requires that the SO2 gas in the plumes at low latitudes not contribute significantly to SO2 escape, and requires NaCl to be primarily sourced from the polar plumes, possibly by the mechanism discussed in the next paragraph. Plume models have yet to be constructed that would test this "snowflakes" hypothesis. Another way to "hide" SO2 and/or NaCl from ALMA is to ionize them.
The behavior of Io's auroral Na, O, SO2, and SO emission while Io transitions to and from eclipse behind Jupiter provides evidence that (a) photoionization is the primary mechanism for producing SO${}_{2}^{+}$ and (b) SO${}_{2}^{+}$ plays an important role in the pathway of NaCl escape from Io's atmosphere via charge exchange (Schmidt et al., 2023). Io's polar atmosphere is exposed to the sun for longer periods of time than that above the rest of Io, thus increasing the average rate of photoionization at the poles. Furthermore, Io's collisional atmosphere is thinner in these regions (e.g., Walker et al., 2010), providing more access of plume material to the exosphere, where IPT-driven escape processes are the most efficient. This suggests that plumes at the poles will have enhanced escape (see also §6.6). Like the SO2 "snowflake" atmospheric transport mechanism suggested in the previous paragraphs, this mechanism favors the NaCl-rich high-latitude plumes detected by ALMA as the source of the sodium- and sulfur-bearing material contributing to the Na nebula and IPT enhancements.

For logical completeness, we also consider the suggestion that NaCl-containing volcanic dust and ash ("dust bunnies") may transport NaCl through Io's atmosphere in a form not visible to ALMA. To simultaneously explain the ALMA and IoIO data, this would imply that these large particles are driven through the atmosphere by the SO2 plumes at low latitudes and that those particles are quickly charged by interaction with the IPT and removed from ALMA's FOV by Jupiter's magnetic field. Importantly, under this interpretation, the high-latitude NaCl plumes seen by ALMA would not contribute to escape, and the SO2 from these plumes must still be invisible to ALMA, necessitating one or more of the mechanisms in the previous paragraphs. In support of the "dust bunny" hypothesis, we recall the observational evidence we used to connect plumes to the Jovian sodium nebula (§6.4; Krüger et al., 2003; Grava et al., 2014; Schmidt et al., 2023).
Krüger et al. (2003) found that dust stream enhancements were likely associated with the plumes of volcanoes such as Pele, Tvashtar, a region near the north pole, and a region south of Karei (now known as Grian; Geissler & McMillan, 2008). The plume deposits of these eruptions are primarily SO2-rich, with minor contributions from silicate ash (Geissler et al., 1999, 2004). The association between atmospheric escape of sodium- and sulfur-bearing material and these large, SO2-rich plumes suggests that the large atmospheric disturbances caused by these plumes may be the key to driving escape. The lack of association between SO2-dominated plumes and NaCl emission in the ALMA data then becomes support for transport of NaCl through the atmosphere in these plumes in a form such as dust or ash.

Finally, we suggest that there may be no need to "hide" SO2 from ALMA in the regions where bright NaCl emission is seen. In the ALMA observation of NaCl and SO2 that has the highest resolution, relatively faint concentrations of SO2 gas are seen in the vicinity of the brightest NaCl emission (see 2016-07-26 in Figure 9 of Redwing et al., 2022). More detailed analysis of the ALMA data would be needed to determine if the NaCl and SO2 seen in this observation are consistent with a single geographic source. Plume models could be used to determine if the amount of SO2 is, in fact, less than what is expected for the range of volcanic activity that has been observed on Io (see, e.g., the review by de Pater et al., 2021). Additional high-resolution ALMA images, ideally recorded contemporaneously with IoIO data, would also be useful. If the NaCl to SO2 ratio were found to be reasonable in NaCl plumes, this could imply that atmospheric escape is primarily tied to NaCl-rich and/or high-latitude plumes and that the collections of high column density SO2 gas seen at lower latitudes in the ALMA data contribute at most a minor, steady amount to the IPT (e.g., the baseline in Figure 3.3, bottom panel).
### 6.6 Processes that modulate Na nebula and IPT enhancement sizes

Having established that volcanic plumes are the likely precipitating agent for material escape from Io's atmosphere and discussed possible pathways of material transport through Io's atmosphere, we return to our discussion of the quasi-contemporaneous nature of the Na nebula and IPT enhancements, begun in §6.2, to offer possible causes for the modulation seen in the _sizes_ of Na nebula and IPT enhancements. These suggested causes divide into four general categories: (1) variation in the content of sodium- and sulfur-bearing material in volcanic plumes, such as produced by different magmas (e.g., see Redwing et al., 2022); (2) processes involving the interaction of the plumes with the atmosphere, such as shocks (§6.4; Zhang et al., 2003, 2004); (3) modulation in material transport through the atmosphere (§6.5; Schmidt et al., 2023); and (4) variation in the efficiency of escape. In this Section, we concentrate on category (4), because understanding effects in this category greatly enhances the ability to use Jovian sodium nebula and IPT data to make progress understanding physical effects in categories (1) – (3).

One of the scenarios discussed in §6.5 suggested that enhanced SO2 ionization in Io's polar regions may play a role in enhancing NaCl escape (Schmidt et al., 2023). This would favor the Tvashtar plume (63°N, 124°W) over that of Pele (19°S, 255°W) or Surt (45°N, 336°W) as the source of the very large Jovian dust stream enhancement observed by Galileo in late 2000, even though Surt, active at the time, had a much larger infrared output and Pele was the most active large plume during the Galileo era (Marchis et al., 2002; Porco et al., 2003; Krüger et al., 2003; Geissler et al., 2004). Because SO2 ionization and subsequent rapid dissociation (Huddleston et al., 1998) may be enhanced in the polar regions (Schmidt et al., 2023), the efficiency of S and O production, and thus escape, may be enhanced at the poles as well.
Because NaCl+ has an important role in the pathway of Na escape from Io's atmosphere (Sections 5.1, 6.4, 6.5; Schmidt et al., 2023), sodium-bearing material in Io's anti-Jovian equatorial exosphere may have an exaggerated escape efficiency over that of sulfur-bearing material. The sodium "jet" was seen to be rooted in the anti-Jovian equatorial region of Io's exosphere the one time it has been imaged in sufficient spatial detail for detection near Io (Burger et al., 1999). Burger et al. offered two hypotheses to explain this behavior: (i) material was being injected into the exosphere from below at this location, e.g., by volcanism; (ii) ionization is enhanced in this region due to Io equatorial auroral activity (e.g., Roesler et al., 1999; Geissler et al., 1999; Roth et al., 2014). Case (ii) would provide a mechanism for enhancing material flow into the jet, over material flow to the IPT, since the jet is formed by prompt neutralization of pickup ions within Io's exosphere (Schneider et al., 1991; Wilson & Schneider, 1994), whereas direct ionization of material in Io's exosphere has been shown to be a minor contributor to the influx of plasma to the IPT (§1; Dols et al., 2008, 2012). The prompt neutralization of the sodium forming the jet also explains why it is not expected to be seen rooted at the sub-Jovian auroral spot: the initial gyration of the ions directs them into Io's surface. We note that hypotheses (i) and (ii) are not necessarily mutually exclusive: the "jet" may always be rooted in the region of the anti-Jovian equatorial auroral spot, but its response to atmospheric escape may be exaggerated if that atmospheric escape is located in that region. Finally, dust, which acquires a negative charge (Zook et al., 1996), enhances the initial escape of Na-bearing material on Io's sub-Jovian hemisphere (Grava et al., 2021). As the dust particles are destroyed and the constituent molecules and atoms are released, they acquire a positive charge and join the "jet" feature.
A positive identification between a Na nebula enhancement and a plume on the sub-Jovian hemisphere could thus potentially be used in support of the "dust bunnies" hypothesis discussed in §6.5.

### 6.7 Toward a connection with Io infrared observations

Since the early 1980s, synoptic Io infrared observations have been used to help understand Io volcanic processes and their geologic implications (e.g., see the review by de Pater et al., 2021). As noted in §6.4, these infrared observations have been compared to sodium nebula observations in an attempt to establish a correlation (Mendillo et al., 2004) which has not stood the test of time (Roth et al., 2020). The initial attempt to connect infrared indicators of Io volcanic activity to enhancements in the sodium nebula made the implicit assumption that the brightest infrared events should be correlated with the brightest nebula images. Here we suggest that, instead, the dimmer infrared events may be more likely to be correlated with the brighter sodium nebula enhancements.

One of the fundamental results of our study is that the Jovian sodium nebula and IPT show contemporaneous enhancements of varying relative amplitudes that last 1 – 3 months (Figure 3.3; Sections 4, 6.1 – 6.2). A long-term study of the time variability of Io's hotspots found that they divided into two groups: those with persistent activity and those that exhibited sudden brightening followed by a steady decay (de Kleer & de Pater, 2016). For those hotspots that exhibited sudden brightening events, the brighter the event, the shorter the decay (see, e.g., Figure 11 of de Kleer & de Pater, 2016). The hotspots with decay times of order 1 month or longer were dimmer, by a factor of 5 or more, than the brightest outbursts.
If we make the very simplistic assumption that, in a volcano where plume activity is found, plume activity will persist over roughly the same time period as infrared activity, we can support the argument that infrared hotspots exhibiting eruption phases lasting 1 – 3 months are the more likely to be correlated with atmospheric release events. Thus, dim infrared outbursts may be the most likely to be correlated with enhancements in material release from Io's atmosphere.

## 7 Summary and Conclusions

We have used IoIO, an observatory composed almost entirely of off-the-shelf equipment (§2; Morgenthaler et al., 2019), to collect the largest set of contemporaneously recorded images of the Jovian sodium nebula and Io plasma torus (IPT) in [S II] 6731 Å assembled to date (see examples in Figure 1 and accompanying animations). Using simple image analysis techniques (§3), we construct a time history of the brightnesses of the Na nebula and IPT [S II] emission (Figure 3.3). Qualitative inspection of this Figure shows 1 – 2 enhancements in the Na nebula and IPT [S II] emission per $\sim$7-month observing window, such that a quiescent state of emission is rare (§4). The minimum and maximum surface brightness values seen in the IoIO Na nebula and IPT images compare favorably with previous studies (Sections 5.1 – 5.2). Most large IPT enhancements peak $\sim$4 weeks after the corresponding enhancement in the Na nebula, as seen in previous studies (§5.3) and as expected from plasma transport within the IPT (§6.1). The exception to this, seen in mid 2020, is likely caused by the overlap of multiple enhancements (§6.2).

We rule out sublimation as the primary driver of material escape from Io's atmosphere in §6.3. This is our most definitive result. Having ruled out sublimation as the primary driver of atmospheric escape from Io, we show that geologic activity in some form, likely volcanic plumes, drives escape, either directly or indirectly (§6.4).
In light of other published results, this has implications for the transport of material through Io's atmosphere (§6.5). In §6.6, we review the processes that can modulate the relative sizes of contemporaneous Na nebula and IPT enhancements, focusing on processes that might modulate the efficiency of Io atmospheric escape as a function of geographic location. Finally, in §6.7, we note that Io's dimmer infrared outbursts have durations and time profiles similar to Na nebula and IPT enhancements, suggesting that these 1 – 3 month-long infrared outbursts may be the more likely to show correlation with the release of material from Io's atmosphere.

In conclusion, our work shows that off-the-shelf equipment with minimal customization, together with simple analysis techniques, can be used to collect data that provides valuable insights into the processes which produce material on Io's surface, transport it through its atmosphere, and release it into Jupiter's broader magnetospheric environment.

## 8 Future Plans

We have pointed out the existing observational evidence that links plume activity on Io to atmospheric escape in §6.4. Further confirmation of this link may be accomplished by accumulating additional contemporaneous IoIO observations of the Na nebula and IPT together with ALMA observations of Io's atmosphere – currently there is only overlap during March 2018 (Redwing et al., 2022). Disk-integrated observations conducted by the NOrthern Extended Millimetre Array (NOEMA) interferometer of the Institut de Radioastronomie Millimétrique (IRAM), such as those conducted by Roth et al. (2020), may also be useful. Continued theoretical work on the effect that plumes have on Io's atmosphere and exosphere, as well as the interaction between the exosphere and the IPT, is also needed (e.g., Blöcker et al., 2018; McDoniel et al., 2019; Dols et al., 2008, 2012; Dols & Johnson, 2023; Adeloye et al., 2023).
These observational and theoretical studies can also be useful to help differentiate between the hypotheses offered in §6.5 concerning material transport through Io's atmosphere. Continued disk-resolved observations of Io IR activity, such as those carried out at the Keck and Gemini telescopes (de Kleer et al., 2019), will also be interesting, as they might lead to validation of the correlation that was initially claimed between Na nebula, IPT, and IR brightnesses (Mendillo et al., 2004; Yoshikawa et al., 2017; Yoshioka et al., 2018; Tao et al., 2018; Koga et al., 2018) but has subsequently proven elusive (de Kleer et al., 2016; Roth et al., 2020).

We are also planning to conduct more detailed analysis of the IoIO images. For instance, the IoIO images of the Na nebula contain three distinct features – the "banana," "jet," and "stream" – that can be used to estimate the neutral sodium source rate from Io (Wilson et al., 2002). The IPT ribbon positions, which are detectable with IoIO (Figure 3, lower panels), are related to the dawn-dusk electric field, which is modulated by a combination of material flow toward the magnetotail and solar wind pressure (Barbosa & Kivelson, 1983; Ip & Goertz, 1983). When combined with the analysis presented here, the IPT ribbon positions retrieved from the IoIO data will provide a significant amount of information regarding the production of material on Io and its subsequent flow through and out of Jupiter's magnetosphere.

Finally, the unique sensitivity of IoIO to Na nebula and IPT [S II] 6731 Å enhancements, together with reliable robotic operation and $<$24 hour turnaround for pipeline reduction, ideally suits it to provide real-time alerts of enhancements in the departure of material from Io's atmosphere. These can inform planned observations of Io from both ground- and space-based platforms.
In particular, nearly all of the plasma found in Jupiter's magnetosphere comes from Io and makes its way through the IPT in 20 – 80 days before rapidly spiraling out through the rest of the magnetosphere (Bagenal & Delamere, 2011; Hess et al., 2011; Copper et al., 2016; Tsuchiya et al., 2018). The modulations seen in the lower panel of Figure 3.3 therefore precede modulations in plasma density throughout the Jovian magnetosphere, a feature that can be used to enhance the science operations of the Juno mission (Bolton et al., 2017) and supporting observations (Orton et al., 2020, 2022). NASA's Europa Clipper (Howell & Pappalardo, 2020) and ESA's JUICE (Grasset et al., 2013) missions will benefit from planned IoIO observations because of the record of exogenic material impinging on Europa, Ganymede, and Callisto during those missions. Also, because enhancements in the Jovian dust streams can induce detector fatigue in Europa Clipper's SUrface Dust Analyzer (SUDA; Goode et al., 2023), IoIO observations can be used to inform SUDA operations while Europa Clipper is sampling the broader Jovian magnetospheric environment and thus optimize detector performance for that mission's primary target.

## Open Research Section

The following software was used in this project: Astrometry.net (Lang et al., 2010), AstroPy (Astropy Collaboration et al., 2022), Astroquery (Ginsburg et al., 2019), BigMultiPipe (Morgenthaler, 2022), Burnashev (Morgenthaler, 2023b), CCDMultiPipe (Morgenthaler, 2023c), ccdproc (Craig et al., 2017), IoIO control software (Morgenthaler, 2023a), matplotlib (Hunter, 2007), moviepy (Zulko et al., 2021), NumPy (Oliphant, 2006; Harris et al., 2020), photutils (Bradley et al., 2022), precisionguide (Morgenthaler, 2023d), Python 3 (Van Rossum & Drake, 2009), reproject (Robitaille et al., 2020), SciPy (Virtanen et al., 2020), and specutils (Earl et al., 2022). The reduction products used to create Figure 3.3 are archived with Zenodo (Morgenthaler et al., 2023).

###### Acknowledgements.
IoIO is hosted at the San Pedro Valley Observatory, near Benson, Arizona, and has benefited greatly from the services provided by observatory manager Dean Salman. Gavin Nelson, Andy Sorenson, Elisabeth Adams, and the staff of Starizona have also provided invaluable assistance in observatory maintenance. The authors are grateful for the insightful comments received during the manuscript review process. This work has been supported by NSF grants 1616928 and 2109219 and NASA grant 80NSSC22K0317 to the Planetary Science Institute and NASA grant 80NSSC20K0559 to Boston University.

## References

* Adams, E., Jackson, B., Sickafoose, A., Morgenthaler, J., Stubbers, H., Carson, D., & Worters, H. (2023). Finding Doomed Worlds: searching for ultra-hot Jupiters with decaying orbits. Bull. Am. Astron. Soc., 55, 403.05.
* Adeloye, A., Trafton, L., Goldstein, D., Varghese, P., & Mahieux, A. (2023). Investigating Io's Tvashtar Plume: DSMC Simulations, Mie Theory Analysis, and Implications for Entrained Particulate Properties. Bull. Am. Astron. Soc., 55, 111.03.
* Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., Earl, N., Starkman, N., Bradley, L., … Astropy Project Contributors (2022). The Astropy Project: Sustaining and Growing a Community-oriented Open-source Project and the Latest Major Release (v5.0) of the Core Package. Astrophys. J., 935(2), 167. 10.3847/1538-4357/ac7c74
* Bagenal, F., & Delamere, P. A. (2011).
Flow of mass and energy in the magnetospheres of Jupiter and Saturn. JGR: Space Physics, 116, A05209. 10.1029/2010JA016294
* Barbosa, D. D., & Kivelson, M. G. (1983). Dawn-dusk electric field asymmetry of the Io plasma torus. Geophys. Res. Lett., 10(3), 210-213. 10.1029/GL010i003p00210
* Beers, T. C., Flynn, K., & Gebhardt, K. (1990). Measures of Location and Scale for Velocities in Clusters of Galaxies—A Robust Approach. AJ, 100, 32. 10.1086/115487
* Blöcker, A., Saur, J., Roth, L., & Strobel, D. F. (2018). MHD Modeling of the Plasma Interaction With Io's Asymmetric Atmosphere. JGR: Space Physics, 123(11), 9286-9311. 10.1029/2018JA025747
* Bolton, S. J., Lunine, J., Stevenson, D., Connerney, J. E. P., Levin, S., Owen, T. C., … Thorpe, R. (2017). The Juno Mission. Space Sci. Rev., 213(1-4), 5-37. 10.1007/s11214-017-0429-6
* Bradley, L., Sipöcz, B., Robitaille, T., Tollerud, E., Vinícius, Z., Deil, C., … Souchereau, H. (2022). astropy/photutils: 1.5.0 [Computer Software]. Zenodo. 10.5281/zenodo.6825092
* Broadfoot, A. L., Belton, M. J., Takacs, P. Z., Sandel, B. R., Shemansky, D. E., Holberg, J. B., … McElroy, M. B. (1979). Extreme ultraviolet observations from Voyager 1 encounter with Jupiter. Sci, 204, 979-982. 10.1126/science.204.4396.979
* Brown, M. E. (1994). The Structure and Variability of the Io Plasma Torus. Thesis, University of California, Berkeley.
* Brown, M. E., & Bouchez, A. H.
(1997). The response of Jupiter's magnetosphere to an outburst on Io. Sci, 278, 268-271. 10.1126/science.278.5336.268
* Brown, R. A., & Chaffee, F. H., Jr. (1974). High-Resolution Spectra of Sodium Emission from Io. Astrophys. J., Lett., 187, L125. 10.1086/181413
* Burger, M. H., Schneider, N. M., & Wilson, J. K. (1999). Galileo's close-up view of the Io sodium jet. Geophys. Res. Lett., 26(22), 3333-3336. 10.1029/1999GL003654
* Burnashev, V. I. (1985). Catalogue of data on energy distribution in spectra of stars in a uniform spectrophotometric system. Abastumanskaia Astrofizicheskaia Observatoriia Byulleten, 59, 83-89.
* Coffin, D., Delamere, P., & Damiano, P. (2020). Implications for Magnetosphere-Ionosphere Coupling From Jupiter's System IV Quasi-Period. JGR: Space Physics, 125(5), e27347. 10.1029/2019JA027347
* Coffin, D. A., Delamere, P. A., Bagenal, F., & Nerney, E. G. (2022). Examining the Europa plasma environment via a multi-dimensional physical chemistry model. AGU Fall Meeting Abstracts, 2022, SM55B-1457.
* Coffin, D. A., & Withers, P. (2023). The Consequences of Alfvénic Inefficiency in Coupling Jupiter with the Io Plasma Torus. AGU Fall Meeting Abstracts, 2023, SM23D-2845.
* Connerney, J. E. P., Acuña, M. H., Ness, N. F., & Satoh, T. (1998).
New models of Jupiter's magnetic field constrained by the Io flux tube footprint. JGR: Space Physics, 103(A6), 11929-11940. 10.1029/97JA03726
* Connerney, J. E. P., Kotsiaros, S., Oliversen, R. J., Espley, J. R., Jørgensen, J. L., Jørgensen, P. S., … Levin, S. M. (2018). A New Model of Jupiter's Magnetic Field From Juno's First Nine Orbits. Geophys. Res. Lett., 45(6), 2590-2596. 10.1002/2018GL077312
* Copper, M., Delamere, P. A., & Overcast-Howe, K. (2016). Modeling physical chemistry of the Io plasma torus in two dimensions. JGR: Space Physics, 121(7), 6602-6619. 10.1002/2016JA022767
* Craig, M., Crawford, S., Seifert, M., Robitaille, T., Sipőcz, B., Walawender, J., … Streicher, O. (2017). astropy/ccdproc: v1.3.0.post1 [Computer Software]. Zenodo. 10.5281/zenodo.1069648
* de Kleer, K., & de Pater, I. (2016). Time variability of Io's volcanic activity from near-IR adaptive optics observations on 100 nights in 2013-2015. Icarus, 280, 378-404. 10.1016/j.icarus.2016.06.019
* de Kleer, K., de Pater, I., Molter, E. M., Banks, E., Davies, A. G., Alvarez, C., … Tollefson, J. (2019). Io's Volcanic Activity from Time Domain Adaptive Optics Observations: 2013-2018. AJ, 158(1), 29. 10.3847/1538-3881/ab2380
* de Kleer, K., de Pater, I., & Yoneda, M. (2016). Io's Volcanic Activity in 2013-2016 and Comparison with Extended Sodium Cloud Variability.
AGU Fall Meeting AbstractsP21E-04. presented at 2016 Fall Meeting, AGU, San Francisco, Calif., 12-16 Dec. * de Pater . (2021) depater21de Pater, I., Keane, JT., de Kleer, K. Davies, AG. 202105\. A 2020 Observational Perspective of Io A 2020 Observational Perspective of Io. Annual Review of Earth and Planetary Sciences49. 10.1146/annurev-earth-082420-095244 * de Pater . (2020) depater20_post_eclipsede Pater, I., Luszcz-Cook, S., Rojo, P., Redwing, E., de Kleer, K. Moullet, A. 202012\. ALMA Observations of Io Going into and Coming out of Eclipse ALMA Observations of Io Going into and Coming out of Eclipse. Planetary Sci. J.1360. 10.3847/PSJ/abb93d * Dols . (2008) dols08Dols, V., Delamere, PA. Bagenal, F. 200809\. A multispecies chemistry model of Io’s local interaction with the Plasma Torus A multispecies chemistry model of Io’s local interaction with the Plasma Torus. Journal of Geophysical Research (Space Physics)113A9A09208. 10.1029/2007JA012805 * Dols . (2012) dols12Dols, V., Delamere, PA., Bagenal, F., Kurth, WS. Paterson, WR. 201210\. Asymmetry of Io’s outer atmosphere: Constraints from five Galileo flybys Asymmetry of Io’s outer atmosphere: Constraints from five Galileo flybys. JGR: Planets117E16E10010. 10.1029/2012JE004076 * Dols Johnson (2023) dols23Dols, V. Johnson, RE. 202303\. Ion-molecule charge exchange in Io’s extended atmosphere: Velocity dependence Ion-molecule charge exchange in Io’s extended atmosphere: Velocity dependence. Icarus392115365. 10.1016/j.icarus.2022.115365 * Earl . (2022) specutils22Earl, N., Tollerud, E., Jones, C., O’Steen, R., Kerzendorf, W., Busko, I.Ferguson, H. 202202\. astropy/specutils astropy/specutils [Computer Software]. Zenodo. https://doi.org/10.5281/zenodo.1421356 10.5281/zenodo.1421356 * Frank Paterson (2002) frank02Frank, LA. Paterson, WR. 200208\. 
Plasmas observed with the Galileo spacecraft during its flyby over Io’s northern polar region Plasmas observed with the Galileo spacecraft during its flyby over Io’s northern polar region. Journal of Geophysical Research (Space Physics)107A81220. 10.1029/2002JA009240 * P. Geissler . (2004) geissler04_surfGeissler, P., McEwen, A., Phillips, C., Keszthelyi, L. Spencer, J. 200405\. Surface changes on Io during the Galileo mission Surface changes on Io during the Galileo mission. Icarus169129-64. 10.1016/j.icarus.2003.09.024 * PE. Geissler . (1999) geissler99Geissler, PE., McEwen, AS., Ip, W., Belton, MJS., Johnson, TV., Smyth, WH. Ingersoll, AP. 199908\. Galileo Imaging of Atmospheric Emissions from Io Galileo Imaging of Atmospheric Emissions from Io. Sci285870-874. 10.1126/science.285.5429.870 * PE. Geissler McMillan (2008) geissler08Geissler, PE. McMillan, MT. 2008Oct. Galileo observations of volcanic plumes on Io Galileo observations of volcanic plumes on Io. Icarus1972505-518. 10.1016/j.icarus.2008.05.005 * Ginsburg . (2019) ginsburg19Ginsburg, A., Sipőcz, BM., Brasseur, CE., Cowperthwaite, PS., Craig, MW., Deil, C.a subset of the astropy collaboration 201903\. astroquery: An Astronomical Web-querying Package in Python astroquery: An Astronomical Web-querying Package in Python. AJ15798. 10.3847/1538-3881/aafc33 * Giorgini . (1996) giorgini96Giorgini, JD., Yeomans, DK., Chamberlin, AB., Chodas, PW., Jacobson, RA., Keesey, MS.Wimberly, RN. 199609\. JPL’s On-Line Solar System Data Service JPL’s On-Line Solar System Data Service. Bull. Am. Astron. Soc.2825.04. * Goode . (2023) goode23Goode, W., Kempf, S. Schmidt, J. 202303\. Mapping the surface composition of Europa with SUDA Mapping the surface composition of Europa with SUDA. Planet. Space Sci.227105633. 10.1016/j.pss.2023.105633 * Graps . (2000) graps00Graps, AL., Grün, E., Svedhem, H., Krüger, H., Horányi, M., Heck, A. Lammers, S. 200005\. 
Io as a source of the jovian dust streams Io as a source of the jovian dust streams. Nature405678248-50. 10.1038/35011008 * Grasset . (2013) grasset13Grasset, O., Dougherty, MK., Coustenis, A., Bunce, EJ., Erd, C., Titov, D.Van Hoolst, T. 201304\. JUpiter ICy moons Explorer (JUICE): An ESA mission to orbit Ganymede and to characterise the Jupiter system JUpiter ICy moons Explorer (JUICE): An ESA mission to orbit Ganymede and to characterise the Jupiter system. Planet. Space Sci.781-21. 10.1016/j.pss.2012.12.002 * Grava . (2021) grava21Grava, C., Cassidy, TA., Schneider, NM., Hsu, HW., Morgenthaler, JP., Leblanc, F.Barbieri, C. 202111\. A Possible Dust Origin for an Unusual Feature in Io’s Sodium Neutral Clouds A Possible Dust Origin for an Unusual Feature in Io’s Sodium Neutral Clouds. AJ1625190. 10.3847/1538-3881/ac1ff8 * Grava . (2014) grava14Grava, C., Schneider, NM., Leblanc, F., Morgenthaler, JP., Mangano, V. Barbieri, C. 201403\. Solar control of sodium escape from Io Solar control of sodium escape from Io. Journal of Geophysical Research (Planets)119404-415. 10.1002/2013JE004504 * Harris . (2020) numpy20Harris, CR., Millman, KJ., van der Walt, SJ., Gommers, R., Virtanen, P., Cournapeau, D.Oliphant, TE. 202009\. Array programming with NumPy Array programming with NumPy. Nature5857825357–362. 10.1038/s41586-020-2649-2 * Herbert . (2008) herbert08Herbert, F., Schneider, NM. Dessler, AJ. 2008Jan. New description of Io’s cold plasma torus New description of Io’s cold plasma torus. JGR: Space Physics113A1A01208. 10.1029/2007JA012555 * Hess . (2011) hess11Hess, SLG., Delamere, PA., Bagenal, F., Schneider, N. Steffl, AJ. 201111\. Longitudinal modulation of hot electrons in the Io plasma torus Longitudinal modulation of hot electrons in the Io plasma torus. JGR: Space Physics116A1511215. 10.1029/2011JA016918 * Hikida . (2018) hikida18Hikida, R., Yoshioka, K., Murakami, G., Kimura, T., Tsuchiya, F., Yamazaki, A.Iwagami, N. 201807\. 
Identification of Extreme Ultraviolet Emission Lines of the Io Plasma Torus Observed by Hisaki/EXCEED Identification of Extreme Ultraviolet Emission Lines of the Io Plasma Torus Observed by Hisaki/EXCEED. Journal of Geophysical Research (Planets)12371723-1731. 10.1029/2018JE005629 * Hikida . (2020) hikida20Hikida, R., Yoshioka, K., Tsuchiya, F., Kagitani, M., Kimura, T., Bagenal, F.Yoshikawa, I. 202003\. Spatially Asymmetric Increase in Hot Electron Fraction in the Io Plasma Torus During Volcanically Active Period Revealed by Observations by Hisaki/EXCEED From November 2014 to May 2015 Spatially Asymmetric Increase in Hot Electron Fraction in the Io Plasma Torus During Volcanically Active Period Revealed by Observations by Hisaki/EXCEED From November 2014 to May 2015. JGR: Space Physics1253e27100. 10.1029/2019JA027100 * Howell Pappalardo (2020) howell20Howell, SM. Pappalardo, RT. 202003\. NASA’s Europa Clipper—a mission to a potentially habitable ocean world NASA’s Europa Clipper—a mission to a potentially habitable ocean world. Nature Communications111311. 10.1038/s41467-020-15160-9 * Huddleston . (1998) huddleston98Huddleston, DE., Strangeway, RJ., Warnecke, J., Russell, CT. Kivelson, MG. 199809\. Ion cyclotron waves in the Io torus: Wave dispersion, free energy analysis, and SO2+ source rate estimates Ion cyclotron waves in the Io torus: Wave dispersion, free energy analysis, and SO2+ source rate estimates. J. Geophys. Res.103E919887-19900. 10.1029/97JE03557 * Hunter (2007) matplotlibHunter, JD. 2007\. Matplotlib: A 2D graphics environment Matplotlib: A 2d graphics environment. Computing In Science & Engineering9390–95. 10.1109/MCSE.2007.55 * Ip Goertz (1983) ip83Ip, WH. Goertz, CK. 198303\. An interpretation of the dawn-dusk asymmetry of UV emission from the Io plasma torus An interpretation of the dawn-dusk asymmetry of UV emission from the Io plasma torus. Nature302232. 10.1038/302232a0 * Jockers . 
(1992) jockers92Jockers, K., Thomas, N., Bonev, T., Ivanova, V. Shkodrov, V. 199208\. Observations of Io’s sodium cloud and torus Observations of Io’s sodium cloud and torus. Advances in Space Research128347-351. 10.1016/0273-1177(92)90409-Q * Kagitani . (2020) kagitani20Kagitani, M., Sakanoi, T., Kasaba, Y. Okano, S. 202012\. A coronagraph using a digital micromirror device as an adaptive occultation mask: design and observational result A coronagraph using a digital micromirror device as an adaptive occultation mask: design and observational result. Proceedings of SPIE11447114479Y. 10.1117/12.2561906 * Kimura . (2018) kimura18Kimura, T., Hiraki, Y., Tao, C., Tsuchiya, F., Delamere, PA., Yoshioka, K.Fujimoto, M. 201803\. Response of Jupiter’s Aurora to Plasma Mass Loading Rate Monitored by the Hisaki Satellite During Volcanic Eruptions at Io Response of Jupiter’s Aurora to Plasma Mass Loading Rate Monitored by the Hisaki Satellite During Volcanic Eruptions at Io. JGR: Space Physics1231885-1899. 10.1002/2017JA025029 * Koga . (20181) koga18_spaceKoga, R., Tsuchiya, F., Kagitani, M., Sakanoi, T., Yoneda, M., Yoshioka, K.Bagenal, F. 2018105\. Spatial Distribution of Io’s Neutral Oxygen Cloud Observed by Hisaki Spatial Distribution of Io’s Neutral Oxygen Cloud Observed by Hisaki. JGR: Space Physics1233764-3776. 10.1029/2018JA025328 * Koga . (20182) koga18_timeKoga, R., Tsuchiya, F., Kagitani, M., Sakanoi, T., Yoneda, M., Yoshioka, K.Smith, HT. 2018201\. The time variation of atomic oxygen emission around Io during a volcanic event observed with Hisaki/EXCEED The time variation of atomic oxygen emission around Io during a volcanic event observed with Hisaki/EXCEED. Icarus299300-307. 10.1016/j.icarus.2017.07.024 * Krüger, Geissler . (2003) krueger03Krüger, H., Geissler, P., Horányi, M., Graps, AL., Kempf, S., Srama, R.Grün, E. 200311\. Jovian dust streams: A monitor of Io’s volcanic plume activity Jovian dust streams: A monitor of Io’s volcanic plume activity. Geophys. 
Res. Lett.3021210000-1. 10.1029/2003GL017827 * Krüger, Horányi Grün (2003) krueger03_graps00Krüger, H., Horányi, M. Grün, E. 200301\. Jovian dust streams: Probes of the Io plasma torus Jovian dust streams: Probes of the Io plasma torus. Geophys. Res. Lett.3021058. 10.1029/2002GL015920 * Kumar (1979) kumar79Kumar, S. 197908\. The stability of an SO2 atmosphere on Io The stability of an SO2 atmosphere on Io. Nature2805725758-760. 10.1038/280758a0 * Kupo . (1976) kupo76Kupo, I., Mekler, Y. Eviatar, A. 197604\. Detection of ionized sulfur in the Jovian magnetosphere Detection of ionized sulfur in the Jovian magnetosphere. Astrophys. J., Lett.205L51-L53. 10.1086/182088 * Lang . (2010) lang10Lang, D., Hogg, DW., Mierle, K., Blanton, M. Roweis, S. 201005\. Astrometry.net: Blind Astrometric Calibration of Arbitrary Astronomical Images Astrometry.net: Blind Astrometric Calibration of Arbitrary Astronomical Images. AJ13951782-1800. 10.1088/0004-6256/139/5/178210.48550/arXiv.0910.2233 * Lellouch . (2003) lellouch03Lellouch, E., Paubert, G., Moses, JI., Schneider, NM. Strobel, DF. 200301\. Volcanically emitted sodium chloride as a source for Io’s neutral clouds and plasma torus Volcanically emitted sodium chloride as a source for io’s neutral clouds and plasma torus. Nature42145-47. 10.1038/nature01292 * Marchis . (2002) marchis02Marchis, F., de Pater, I., Davies, AG., Roe, HG., Fusco, T., Le Mignant, D.Prangé, R. 200211\. High-Resolution Keck Adaptive Optics Imaging of Violent Volcanic Activity on Io High-Resolution Keck Adaptive Optics Imaging of Violent Volcanic Activity on Io. Icarus1601124-131. 10.1006/icar.2002.6955 * McDoniel . (2015) mcdoniel15McDoniel, WJ., Goldstein, DB., Varghese, PL. Trafton, LM. 201509\. Three-dimensional simulation of gas and dust in Io’s Pele plume Three-dimensional simulation of gas and dust in Io’s Pele plume. Icarus257251-274. 10.1016/j.icarus.2015.03.019 * McDoniel . (2017) mcdoniel17McDoniel, WJ., Goldstein, DB., Varghese, PL. Trafton, LM. 
201709\. The interaction of Io’s plumes and sublimation atmosphere The interaction of Io’s plumes and sublimation atmosphere. Icarus29481-97. 10.1016/j.icarus.2017.04.021 * McDoniel . (2019) mcdoniel19McDoniel, WJ., Goldstein, DB., Varghese, PL. Trafton, LM. 201907\. Simulation of Io’s plumes and Jupiter’s plasma torus Simulation of Io’s plumes and Jupiter’s plasma torus. Physics of Fluids317077103. 10.1063/1.5097961 * McGrath Johnson (1987) mcgrath87McGrath, MA. Johnson, RE. 198703\. Magnetospheric plasma sputtering of Io’s atmosphere Magnetospheric plasma sputtering of Io’s atmosphere. Icarus693519-531. 10.1016/0019-1035(87)90021-2 * McGrath . (2004) mcgrath04McGrath, MA., Lellouch, E., Strobel, DF., Feldman, PD. Johnson, RE. 2004\. Satellite atmospheres Satellite atmospheres. F. Bagenal, TE. Dowling WB. McKinnon (), Jupiter. The Planet, Satellites and Magnetosphere Jupiter. the planet, satellites and magnetosphere ( 457-483). CambridgeCambridge University Press. * Mendillo . (1990) mendillo90Mendillo, M., Baumgardner, J., Flynn, B. Hughes, WJ. 199011\. The extended sodium nebula of Jupiter The extended sodium nebula of Jupiter. Nature348312-314. 10.1038/348312a0 * Mendillo . (2004) mendillo04Mendillo, M., Wilson, J., Spencer, J. Stansberry, J. 200408\. Io’s volcanic control of Jupiter’s extended neutral clouds Io’s volcanic control of Jupiter’s extended neutral clouds. Icarus170430-442. 10.1016/j.icarus.2004.03.009 * Moirano . (2021) moirano21IPTMoirano, A., Gomez Casajus, L., Zannoni, M., Durante, D. Tortora, P. 202110\. Morphology of the Io Plasma Torus From Juno Radio Occultations Morphology of the Io Plasma Torus From Juno Radio Occultations. Journal of Geophysical Research (Space Physics)12610e29190. 10.1029/2021JA029190 * Morabito . (1979) morabito79Morabito, LA., Synnott, SP., Kupferman, PN. Collins, SA. 197906\. Discovery of currently active extraterrestrial volcanism Discovery of currently active extraterrestrial volcanism. Sci204972. 
10.1126/science.204.4396.972 * Morgan (1985) morgan85aMorgan, JS. 198506\. Temporal and spatial variations in the Io torus Temporal and spatial variations in the Io torus. Icarus62389-414. 10.1016/0019-1035(85)90183-6 * Morgenthaler (2022) morgenthaler22bmpMorgenthaler, JP. 202212\. jpmorgen/BigMultiPipe jpmorgen/bigmultipipe [Computer Software]. Zenodo. 10.5281/zenodo.7485043 * Morgenthaler (20231) morgenthaler23IoIO_codeMorgenthaler, JP. 2023101\. Io Input/Output facility (IoIO) control, reduction, and analysis software Io Input/Output facility (IoIO) control, reduction, and analysis software [Computer Software]. Zenodo. 10.5281/zenodo.834711710.5281/zenodo.7703222 * Morgenthaler (20232) morgenthaler23burnashevMorgenthaler, JP. 2023201\. jpmorgen/burnashev jpmorgen/burnashev [Computer Software]. Zenodo. 10.5281/zenodo.7507994 * Morgenthaler (20233) morgenthaler23ccdmultipipeMorgenthaler, JP. 2023301\. jpmorgen/ccdmultipipe jpmorgen/ccdmultipipe [Computer Software]. Zenodo. 10.5281/zenodo.7507738 * Morgenthaler (20234) morgenthaler23precisionguideMorgenthaler, JP. 2023401\. jpmorgen/precisionguide jpmorgen/precisionguide [Computer Software]. Zenodo. 10.5281/zenodo.7507157 * Morgenthaler . (2019) morgenthaler19Morgenthaler, JP., Rathbun, JA., Schmidt, CA., Baumgardner, J. Schneider, NM. 2019jan. Large Volcanic Event on Io Inferred from Jovian Sodium Nebula Brightening Large volcanic event on io inferred from jovian sodium nebula brightening. Astrophys. J., Lett.8712L23. 10.3847/2041-8213/aafdb7 * Morgenthaler . (2023) morgenthaler23_IoIO_dataMorgenthaler, JP., Schmidt, CA., Vogt, M., Schneider, NM. Marconi, M. 202309\. Io Input/Output observatory (IoIO) reduction products. Io Input/Output observatory (IoIO) reduction products. Zenodo. https://doi.org/10.5281/zenodo.8342758 10.5281/zenodo.8342758 * Moullet . (2010) moullet10Moullet, A., Gurwell, MA., Lellouch, E. Moreno, R. 201007\. 
Simultaneous mapping of SO2, SO, NaCl in Io’s atmosphere with the Submillimeter Array Simultaneous mapping of SO2, SO, NaCl in Io’s atmosphere with the Submillimeter Array. Icarus2081353-365. 10.1016/j.icarus.2010.02.009 * Nerney Bagenal (2020) nerney20Nerney, EG. Bagenal, F. 202004\. Combining UV Spectra and Physical Chemistry to Constrain the Hot Electron Fraction in the Io Plasma Torus Combining UV Spectra and Physical Chemistry to Constrain the Hot Electron Fraction in the Io Plasma Torus. JGR: Space Physics1254e27458. 10.1029/2019JA027458 * Nerney . (2022) nerney22AGUNerney, EG., Bagenal, F., Coffin, DA. Delamere, PA. 202212\. 3D Physical Chemistry and Emission Simulations of the Io Plasma Torus 3D Physical Chemistry and Emission Simulations of the Io Plasma Torus. AGU Fall Meeting Abstracts Agu fall meeting abstracts ( 2022, SM55B-1456). * Nerney . (2023) nerney23DPSNerney, EG., Bagenal, F. Schmidt, C. 202310\. A 3D Model of the Io Plasma torus and Model Comparisons with Observations A 3D Model of the Io Plasma torus and Model Comparisons with Observations. Bull. Am. Astron. Soc.55111.05. * Nozawa . (2004) nozawa04_no_SIINozawa, H., Misawa, H., Takahashi, S., Morioka, A., Okano, S. Sood, R. 200407\. Long-term variability of [S II] emissions from the Io plasma torus between 1997 and 2000 Long-term variability of [S II] emissions from the Io plasma torus between 1997 and 2000. JGR: Space Physics1097209. 10.1029/2003JA010241 * Oliphant (2006) oliphant06Oliphant, TE. 2006\. Guide to NumPy Guide to NumPy. ProvoTrelgol Publishing. * Oliversen . (1991) oliversen91Oliversen, RJ., Scherb, F. Roesler, FL. 199109\. The Io Sulfur Torus in 1981 The io sulfur torus in 1981\. Icarus9353–62. * Orton . (2022) orton22Orton, G., Momary, T., Brueshaber, S., Hansen, C., Bolton, S. Rogers, J. 202209\. The Juno Extended Mission: A Call for Continued Support from Amateur Observers The Juno Extended Mission: A Call for Continued Support from Amateur Observers. 
European Planetary Science Congress European planetary science congress ( EPSC2022-769). 10.5194/epsc2022-769 * Orton . (2020) orton_planets2020Orton, G., Tabataba-Vakili, F. Momary, T. 202003\. The Earth-Based Observational Program for Juno Mission Support The Earth-Based Observational Program for Juno Mission Support. Planets 2020, Ground and Space Observatories: a Joint Venture to Planetary Science Planets 2020, ground and space observatories: a joint venture to planetary science ( 17). 10.5281/zenodo.4435595 * Pearl . (1979) pearl79Pearl, J., Hanel, R., Kunde, V., Maguire, W., Fox, K., Gupta, S.Raulin, F. 197908\. Identification of gaseous SO2 and new upper limits for other gases on Io Identification of gaseous SO2 and new upper limits for other gases on Io. Nature2805725755-758. 10.1038/280755a0 * Phipps Bagenal (2021) phipps21_equatorPhipps, PH. Bagenal, F. 202101\. Centrifugal Equator in Jupiter’s Plasma Sheet Centrifugal Equator in Jupiter’s Plasma Sheet. Journal of Geophysical Research (Space Physics)1261e28713. 10.1029/2020JA028713 * Pilcher . (1984) pilcher84Pilcher, CB., Fertel, JH., Smyth, WH. Combi, MR. 198412\. Io’s sodium directional features - Evidence for a magnetospheric-wind-driven gas escape mechanism Io’s sodium directional features - Evidence for a magnetospheric-wind-driven gas escape mechanism. Astrophys. J.287427-444. 10.1086/162702 * Plane . (2018) plane18Plane, JMC., Flynn, GJ., Määttänen, A., Moores, JE., Poppe, AR., Carrillo-Sanchez, JD. Listowski, C. 201802\. Impacts of Cosmic Dust on Planetary Atmospheres and Surfaces Impacts of Cosmic Dust on Planetary Atmospheres and Surfaces. Space Sci. Rev.214123. 10.1007/s11214-017-0458-1 * Porco . (2003) porco03Porco, CC., West, RA., McEwen, A., Del Genio, AD., Ingersoll, AP., Thomas, P.Vasavada, AR. 200303\. Cassini Imaging of Jupiter’s Atmosphere, Satellites, and Rings Cassini imaging of jupiter’s atmosphere, satellites, and rings. Sci2991541-1547. * Postberg . 
(2006) postberg06Postberg, F., Kempf, S., Srama, R., Green, SF., Hillier, JK., McBride, N. Grün, E. 200607\. Composition of jovian dust stream particles Composition of jovian dust stream particles. Icarus1831122-134. 10.1016/j.icarus.2006.02.001 * Redwing . (2022) redwing22Redwing, E., de Pater, I., Luszcz-Cook, S., de Kleer, K., Moullet, A. Rojo, PM. 202210\. NaCl and KCl in Io’s Atmosphere NaCl and KCl in Io’s Atmosphere. Planetary Sci. J.310238. 10.3847/PSJ/ac9784 * Robitaille . (2020) robitaille20Robitaille, T., Deil, C. Ginsburg, A. 202011\. reproject: Python-based astronomical image reprojection. reproject: Python-based astronomical image reprojection. Astrophysics Source Code Library, record ascl:2011.023. * Roesler . (1999) roesler99Roesler, FL., Moos, HW., Oliversen, RJ., Woodward, J., R. C., Retherford, KD., Scherb, F.Strobel, DF. 199901\. Far-Ultraviolet Imaging Spectroscopy of Io’s Atmosphere with HST/STIS Far-Ultraviolet Imaging Spectroscopy of Io’s Atmosphere with HST/STIS. Sci283353. 10.1126/science.283.5400.353 * Roth . (2020) roth20Roth, L., Boissier, J., Moullet, A., Sánchez-Monge, Á., de Kleer, K., Yoneda, M.Thorwirth, S. 202011\. An attempt to detect transient changes in Io’s SO2 and NaCl atmosphere An attempt to detect transient changes in Io’s SO2 and NaCl atmosphere. Icarus350113925. 10.1016/j.icarus.2020.113925 * Roth . (2014) roth14Roth, L., Saur, J., Retherford, KD., Feldman, PD. Strobel, DF. 201401\. A phenomenological model of Io’s UV aurora based on HST/STIS observations A phenomenological model of Io’s UV aurora based on HST/STIS observations. Icarus228386-406. 10.1016/j.icarus.2013.10.009 * Schmidt . (2018) schmidt18Schmidt, C., Schneider, N., Leblanc, F., Gray, C., Morgenthaler, J., Turner, J. Grava, C. 201807\. A Survey of Visible S+ Emission in Io’s Plasma Torus During the Hisaki Epoch A Survey of Visible S+ Emission in Io’s Plasma Torus During the Hisaki Epoch. JGR: Space Physics1235610-5624. 10.1029/2018JA025296 * Schmidt . 
(2023) schmidt23Schmidt, C., Sharov, M., de Kleer, K., Schneider, N., de Pater, I., Phipps, PH.Brown, M. 202302\. Io’s Optical Aurorae in Jupiter’s Shadow Io’s Optical Aurorae in Jupiter’s Shadow. Planetary Sci. J.4236. 10.3847/PSJ/ac85b0 * Schneider Bagenal (2007) schneider07Schneider, NM. Bagenal, F. 2007\. Io’s neutral clouds, plasma torus, and magnetospheric interaction Io’s neutral clouds, plasma torus, and magnetospheric interaction. Io After Galileo: A New View of Jupiter’s Volcanic Moon Io After Galileo: A New View of Jupiter’s Volcanic Moon ( 265-286). Berlin, HeidelbergSpringer Praxis Books / Geophysical Sciences. 10.1007/978-3-540-48841-5_11 * Schneider . (1991) schneider91CoronaSchneider, NM., Hunten, DM., Wells, WK., Schultz, AB. Fink, U. 199102\. The structure of Io’s corona The structure of Io’s corona. Astrophys. J.368298-315. 10.1086/169694 * Schneider Trauger (1995) schneider95Schneider, NM. Trauger, JT. 1995Sep. The Structure of the Io Torus The Structure of the Io Torus. Astrophys. J.450450. 10.1086/176155 * Smith . (1979) smith79Smith, BA., Soderblom, LA., Beebe, R., Boyce, J., Briggs, G., Carr, M.Veverka, J. 197911\. The Galilean Satellites and Jupiter: Voyager 2 Imaging Science Results The Galilean Satellites and Jupiter: Voyager 2 Imaging Science Results. Science2064421927-950. 10.1126/science.206.4421.927 * Smith . (2022) smith22Smith, HT., Koga, R., Tsuchiya, F. Dols, VJ. 202208\. Insight Into Io Enabled by Characterization of Its Neutral Oxygen Torus Insight Into Io Enabled by Characterization of Its Neutral Oxygen Torus. Journal of Geophysical Research (Space Physics)1278e30581. 10.1029/2022JA030581 * Smyth Combi (19881) smyth88Smyth, WH. Combi, MR. 1988105\. A General Model for Io’s Neutral Gas Clouds. II - Application to the Sodium Cloud A general model for io’s neutral gas clouds. ii - application to the sodium cloud. Astrophys. J.328888–918. 10.1086/166346 * Smyth Combi (19882) smyth88aSmyth, WH. Combi, MR. 1988204\. 
A general model for Io’s neutral gas clouds. I - Mathematical description A general model for Io’s neutral gas clouds. I - Mathematical description. Astrophys. J., Suppl. Ser.66397-411. 10.1086/191264 * Smyth Marconi (2003) smyth03Smyth, WH. Marconi, ML. 200311\. Nature of the iogenic plasma source in Jupiter’s magnetosphere I. Circumplanetary distribution Nature of the iogenic plasma source in Jupiter’s magnetosphere I. Circumplanetary distribution. Icarus16685-106. 10.1016/S0019-1035(03)00176-3 * Smyth . (2011) smyth11Smyth, WH., Peterson, CA. Marconi, ML. 201107\. A consistent understanding of the ribbon structure for the Io plasma torus at the Voyager 1, 1991 ground-based, and Galileo J0 epochs A consistent understanding of the ribbon structure for the Io plasma torus at the Voyager 1, 1991 ground-based, and Galileo J0 epochs. JGR: Space Physics116A157205. 10.1029/2010JA016094 * Steffl . (2004) steffl04aSteffl, AJ., Stewart, AIF. Bagenal, F. 200411\. Cassini UVIS observations of the Io plasma torus. I. Initial results Cassini UVIS observations of the Io plasma torus. I. Initial results. Icarus17278-90. 10.1016/j.icarus.2003.12.027 * Tao, Kimura, Badman, André . (2016) tao16bTao, C., Kimura, T., Badman, SV., André, N., Tsuchiya, F., Murakami, G.Fujimoto, M. 201605\. Variation of Jupiter’s aurora observed by Hisaki/EXCEED: 2. Estimations of auroral parameters and magnetospheric dynamics Variation of Jupiter’s aurora observed by Hisaki/EXCEED: 2. Estimations of auroral parameters and magnetospheric dynamics. JGR: Space Physics1214055-4071. 10.1002/2015JA021272 * Tao, Kimura, Badman, Murakami . (2016) tao16aTao, C., Kimura, T., Badman, SV., Murakami, G., Yoshioka, K., Tsuchiya, F.Fujimoto, M. 201605\. Variation of Jupiter’s aurora observed by Hisaki/EXCEED: 1. Observed characteristics of the auroral electron energies compared with observations performed using HST/STIS Variation of Jupiter’s aurora observed by Hisaki/EXCEED: 1. 
Observed characteristics of the auroral electron energies compared with observations performed using HST/STIS. JGR: Space Physics1214041-4054. 10.1002/2015JA021271 * Tao . (2021) tao21Tao, C., Kimura, T., Kronberg, EA., Tsuchiya, F., Murakami, G., Yamazaki, A.Okamoto, S. 202102\. Variation of Jupiter’s Aurora Observed by Hisaki/EXCEED: 4. Quasi Periodic Variation Variation of Jupiter’s Aurora Observed by Hisaki/EXCEED: 4. Quasi Periodic Variation. Journal of Geophysical Research (Space Physics)1262e28575. 10.1029/2020JA02857510.1002/essoar.10504013.1 * Tao . (2018) tao18Tao, C., Kimura, T., Tsuchiya, F., Muirakami, G., Yoshioka, K., Yamazaki, A.Fujimoto, M. 201801\. Variation of Jupiter’s Aurora Observed by Hisaki/EXCEED: 3. Volcanic Control of Jupiter’s Aurora Variation of Jupiter’s Aurora Observed by Hisaki/EXCEED: 3. Volcanic Control of Jupiter’s Aurora. Geophys. Res. Lett.4571-79. 10.1002/2017GL075814 * Thomas . (2004) thomas04Thomas, N., Bagenal, F., Hill, T. Wilson, J. 2004\. The Io neutral clouds and plasma torus The Io neutral clouds and plasma torus. F. Bagenal, TE. Dowling WB. McKinnon (), Jupiter. The Planet, Satellites and Magnetosphere Jupiter. the planet, satellites and magnetosphere ( 561-591). Cambridge University Press. * Thomas . (2001) thomas01Thomas, N., Lichtenberg, G. Scotto, M. 200111\. High-resolution spectroscopy of the Io plasma torus during the Galileo mission High-resolution spectroscopy of the Io plasma torus during the Galileo mission. JGR: Space Physics10626277-26292. 10.1029/2000JA002504 * Tsang . (2012) tsang12Tsang, CCC., Spencer, JR., Lellouch, E., López-Valverde, MA., Richter, MJ. Greathouse, TK. 2012Jan. Io’s atmosphere: Constraints on sublimation support from density variations on seasonal timescales using NASA IRTF/TEXES observations from 2001 to 2010 Io’s atmosphere: Constraints on sublimation support from density variations on seasonal timescales using NASA IRTF/TEXES observations from 2001 to 2010. Icarus2171277-296. 
10.1016/j.icarus.2011.11.005 * Tsuchiya . (2018) tsuchiya18Tsuchiya, F., Yoshioka, K., Kimura, T., Koga, R., Murakami, G., Yamazaki, A.Sakanoi, T. 201808\. Enhancement of the Jovian Magnetospheric Plasma Circulation Caused by the Change in Plasma Supply From the Satellite Io Enhancement of the Jovian Magnetospheric Plasma Circulation Caused by the Change in Plasma Supply From the Satellite Io. Journal of Geophysical Research (Space Physics)12386514-6532. 10.1029/2018JA025316 * Tukey (1977) tukey77Tukey, JW. 1977\. Exploratory data analysis Exploratory data analysis. Addison-Wesley. * Van Rossum Drake (2009) python3Van Rossum, G. Drake, FL. 2009\. Python 3 Reference Manual Python 3 reference manual. Scotts Valley, CACreateSpace. * Virtanen . (2020) SciPy20Virtanen, P., Gommers, R., Oliphant, TE., Haberland, M., Reddy, T., Cournapeau, D.SciPy 1.0 Contributors 2020\. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods17261–272. 10.1038/s41592-019-0686-2 * Walker . (2010) walker10Walker, AC., Gratiy, SL., Goldstein, DB., Moore, CH., Varghese, PL., Trafton, LM.Stewart, B. 2010May. A comprehensive numerical simulation of Io’s sublimation-driven atmosphere A comprehensive numerical simulation of Io’s sublimation-driven atmosphere. Icarus2071409-432. 10.1016/j.icarus.2010.01.012 * Wilson . (2002) wilson02Wilson, JK., Mendillo, M., Baumgardner, J., Schneider, NM., Trauger, JT. Flynn, B. 200206\. The Dual Sources of Io’s Sodium Clouds The Dual Sources of Io’s Sodium Clouds. Icarus157476-489. 10.1006/icar.2002.6821 * Wilson Schneider (1994) wilson94Wilson, JK. Schneider, NM. 199409\. Io’s fast sodium: Implications for molecular and atomic atmospheric escape Io’s fast sodium: Implications for molecular and atomic atmospheric escape. Icarus11131-44. 10.1006/icar.1994.1131 * Wilson Schneider (1999) wilson99Wilson, JK. Schneider, NM. 199907\. 
Io’s sodium directional feature: Evidence for ionospheric escape Io’s sodium directional feature: Evidence for ionospheric escape. JGR: Planets10416567-16584. 10.1029/1999JE900017 * Witteborn . (1979) witteborn79Witteborn, FE., Bregman, JD. Pollack, JB. 197902\. Io - an intense brightening near 5 micrometers Io - an intense brightening near 5 micrometers. Sci203643-646. 10.1126/science.203.4381.643 * Woodward . (2000) woodward00_no_SIIWoodward, RC., Jr., Oliversen, RJ., Scherb, F. Roesler, FL. 200010\. Synoptic Imaging of the Io Plasma Torus in [S II]: Long-term Variability Synoptic Imaging of the Io Plasma Torus in [S II]: Long-term Variability. Bull. Am. Astron. Soc.323. AAS/Division of Planetary Sciences Meeting 32, poster #35.12 * Woodward . (1994) woodward94Woodward, RC., Jr., Scherb, F. Roesler, FL. 199409\. Periodic intensity variations in sulfur emissions from the Io plasma torus Periodic intensity variations in sulfur emissions from the io plasma torus. Icarus11145–64. 10.1006/icar.1994.1132 * Yoneda . (2009) yoneda09Yoneda, M., Kagitani, M. Okano, S. 200912\. Short-term variability of Jupiter’s extended sodium nebula Short-term variability of Jupiter’s extended sodium nebula. Icarus204589-596. 10.1016/j.icarus.2009.07.023 * Yoneda . (2015) yoneda15Yoneda, M., Kagitani, M., Tsuchiya, F., Sakanoi, T. Okano, S. 201511\. Brightening event seen in observations of Jupiter’s extended sodium nebula Brightening event seen in observations of Jupiter’s extended sodium nebula. Icarus26131-33. 10.1016/j.icarus.2015.07.037 * Yoshikawa . (2017) yoshikawa17Yoshikawa, I., Suzuki, F., Hikida, R., Yoshioka, K., Murakami, G., Tsuchiya, F.Fujimoto, M. 201708\. Volcanic activity on Io and its influence on the dynamics of the Jovian magnetosphere observed by EXCEED/Hisaki in 2015 Volcanic activity on Io and its influence on the dynamics of the Jovian magnetosphere observed by EXCEED/Hisaki in 2015. Earth, Planets and Space691110. 10.1186/s40623-017-0700-9 * Yoshioka . 
(2014) yoshioka14Yoshioka, K., Murakami, G., Yamazaki, A., Tsuchiya, F., Kimura, T., Kagitani, M.Fujimoto, M. 201409\. Evidence for global electron transportation into the jovian inner magnetosphere Evidence for global electron transportation into the jovian inner magnetosphere. Sci3451581-1584. 10.1126/science.1256259 * Yoshioka . (2018) yoshioka18Yoshioka, K., Tsuchiya, F., Kagitani, M., Kimura, T., Murakami, G., Fukuyama, D.Fujimoto, M. 2018Oct. The Influence of Io’s 2015 Volcanic Activity on Jupiter’s Magnetospheric Dynamics The Influence of Io’s 2015 Volcanic Activity on Jupiter’s Magnetospheric Dynamics. Geophys. Res. Lett.451910,193-10,199. 10.1029/2018GL079264 * Zhang . (2003) zhang03Zhang, J., Goldstein, DB., Varghese, PL., Gimelshein, NE., Gimelshein, SF. Levin, DA. 2003May. Simulation of gas dynamics and radiation in volcanic plumes on Io Simulation of gas dynamics and radiation in volcanic plumes on Io. Icarus1631182-197. 10.1016/S0019-1035(03)00050-2 * Zhang . (2004) zhang04Zhang, J., Goldstein, DB., Varghese, PL., Trafton, L., Moore, C. Miki, K. 200412\. Numerical modeling of ionian volcanic plumes with entrained particulates Numerical modeling of ionian volcanic plumes with entrained particulates. Icarus1722479-502. 10.1016/j.icarus.2004.06.016 * Zook . (1996) zook96Zook, HA., Grun, E., Baguhl, M., Hamilton, DP., Linkert, G., Liou, JC.Phillips, JL. 199611\. Solar Wind Magnetic Field Bending of Jovian Dust Trajectories Solar Wind Magnetic Field Bending of Jovian Dust Trajectories. Sci27452921501-1503. 10.1126/science.274.5292.1501 * Zulko . (2021) moviepy21Zulko, Burrows, T., Earney, B., Mondéjar, Á., kerstin, Gaitán, M.Ørland, K. 202105\. johncooper199/moviepy johncooper199/moviepy [Computer Software]. Zenodo. 10.5281/zenodo.4781125
# On polynomial solutions of PDE Anna R. Gharibyan, Hakop A. Hakopian ###### Abstract In this paper we prove that the PDE $p(D)f=q,$ where $p$ and $q$ are multivariate polynomials, has a solution in the space of polynomials of total degree not exceeding ${n+s},$ where $n$ is the degree of $q$ and $s$ is the zero order of $O=(0,\ldots,0)$ for $p.$ Key words: PDE with constant coefficients, bivariate polynomial, $s$-fold zero of a polynomial. Mathematics Subject Classification (2010): 35E20. ## 1 Introduction Let us start with the bivariate case. Denote by $\Pi_{n}$ the space of bivariate polynomials of total degree at most $n:$ $\Pi_{n}=\\{\sum_{i+j\leq n}a_{ij}x^{i}y^{j}\\},\quad\dim\Pi_{n}=1/2(n+1)(n+2).$ Thus $\Pi_{0}=\\{c:c=const.\\},\dim\Pi_{0}=1$ and $\Pi_{-1}=\\{0\\},\dim\Pi_{-1}=0.$ We set $\Pi=\cup_{n\geq 0}\Pi_{n}.$ Denote also by $\overline{\Pi}_{n}$ the space of homogeneous polynomials of total degree $n:$ $\overline{\Pi}_{n}=\\{\sum_{i+j=n}a_{ij}x^{i}y^{j}\\},\quad\dim\overline{\Pi}_{n}=n+1.$ For a polynomial $p$ denote by $p(D)$ the respective differentiation operator: $p(D)=p(\frac{\partial}{\partial x},\frac{\partial}{\partial y}).$ ###### Definition 1.1. Suppose that $p\in\Pi_{n},\ p(x,y)=\sum_{i+j\leq n}a_{ij}x^{i}y^{j}$. Then we denote the $k$th homogeneous layer of $p$ by $p_{(k)}=\sum_{i+j=k}a_{ij}x^{i}y^{j}.$ Denote also by $p^{\uparrow}$ the upper nonzero homogeneous layer of $p$ and by $p_{\downarrow}$ the lower nonzero homogeneous layer of $p.$ ###### Definition 1.2. The point $O=(0,0)$ is called an $m$-fold zero for $p$ if the lower nonzero homogeneous layer of $p(x)$ is the $m$th one. A point $a\in\mathbb{C}^{2}$ is called an $m$-fold zero for $p$ if the lower nonzero homogeneous layer of $p(x+a)$ is the $m$th one.
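The objects just defined are easy to experiment with. The following sketch (our own illustration, not part of the paper; the helper names `apply_pD`, `zero_order`, `solve_pde` are ours) implements $p(D)$ and the zero order of $O$ in sympy, and also searches for a solution of $p(D)f=q$ of total degree at most $m+s$, anticipating the degree bound of the main theorem proved below.

```python
import sympy as sp

x, y = sp.symbols('x y')

def apply_pD(p, f):
    """Apply the operator p(D) = p(d/dx, d/dy) to f, monomial by monomial."""
    out = 0
    for (i, j), a in sp.Poly(p, x, y).terms():
        d = f
        for _ in range(i):
            d = sp.diff(d, x)
        for _ in range(j):
            d = sp.diff(d, y)
        out += a * d
    return sp.expand(out)

def zero_order(p):
    """Zero order s of O=(0,0): index of the lowest nonzero homogeneous layer."""
    return min(i + j for (i, j), _ in sp.Poly(p, x, y).terms())

def solve_pde(p, q):
    """Look for f of total degree <= m + s with p(D)f = q (the theorem's bound)."""
    deg = sp.Poly(q, x, y).total_degree() + zero_order(p)
    monoms = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    cs = sp.symbols(f'c0:{len(monoms)}')
    f = sum(c * mono for c, mono in zip(cs, monoms))
    # Equating coefficients of p(D)f - q to zero gives a linear system in cs.
    eqs = sp.Poly(apply_pD(p, f) - q, x, y).coeffs()
    sol = sp.solve(eqs, cs, dict=True)
    return f.subs(sol[0]) if sol else None

p = x*y + x**3        # lowest nonzero layer is the 2nd one, so s = 2
q = 1 + x + y**2      # degree m = 2, so a solution of degree <= 4 must exist
assert zero_order(p) == 2
f = solve_pde(p, q)
assert f is not None and sp.expand(apply_pD(p, f) - q) == 0
```

For this choice one explicit solution is $f=xy+\tfrac12 x^{2}y+\tfrac13 xy^{3}$, of degree $4=m+s$, matching the bound.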
Let $V$ and $W$ be finite dimensional linear spaces and $\mathcal{L}$ be a linear operator $\mathcal{L}:V\rightarrow W.$ In the sequel we will use the following formula $\dim(Im\mathcal{L})=\dim V-\dim(ker\mathcal{L}).$ (1.1) Let us recall the following result: ###### Theorem 1.3 (Th. 5, [1]; Th. 6, [2]). Assume that a polynomial $f_{0}$ is a solution of the PDE $p(D)f=0.$ Then the partial derivatives of $f_{0}$ are solutions too. ###### Corollary 1.4. Suppose that $p(x,y)=\sum_{i+j\leq n}a_{ij}x^{i}y^{j}.$ Then the homogeneous PDE $p(D)f=0$ has a nonzero polynomial solution if and only if $f=1$ is its solution, i.e., $p(D)1=a_{00}=0.$ In the sequel we will use the following ###### Theorem 1.5 (Th. 2.1, [3]). Suppose that the origin $O=(0,0)$ is an $s$-fold zero of $p\in\Pi_{n}.$ Then the homogeneous PDE $p(D)f=0$ has exactly $D_{k}$ linearly independent solutions in $\Pi_{k},$ where $D_{k}$ is the $k$th partial sum of the following number series $\sum_{i=0}^{\infty}d_{i}=1+2+\cdots+s+\cdots+s+\cdots.$ Note that in the case $s=0,$ i.e., $p(0,0)\neq 0,$ this result states that the PDE $p(D)f=0$ has no solution except $f=0.$ Of course this statement coincides with Corollary 1.4. ## 2 Main result We are going to prove the following ###### Theorem 2.1. Assume that $q\in\Pi_{m}$ and the origin $O=(0,0)$ is an $s$-fold zero of $p\in\Pi.$ Then we have that the PDE $p(D)f=q$ has a polynomial solution from $\Pi_{m+s}.$ In view of Theorem 1.5 we get ###### Corollary 2.2.
Assume that the origin $O=(0,0)$ is an $s$-fold zero of $p\in\Pi$ and $q\in\Pi_{m}.$ Then we have that the solutions of PDE $p(D)f=q,$ (2.1) belonging to $\Pi_{k},\ k\geq m+s,$ form an affine space of dimension $\sigma=\frac{s(2k-s+1)}{2}.$ Thus the solutions of PDE (2.1) can be represented as $f=f_{0}+\sum_{i=1}^{\sigma}\lambda_{i}f_{i},$ where $f_{0}\in\Pi_{m+s}$ is a solution of PDE (2.1) and $f_{i},\ i=1,...,\sigma,$ are the linearly independent solutions of the homogeneous PDE $p(D)f=0$ in $\Pi_{k}.$ Let us start the consideration with the case when both $p$ and $q$ are homogeneous. Note that the following is a particular case $s=n$ of Theorem 2.1. ###### Proposition 2.3. Assume that $p\in\overline{\Pi}_{n}$ and $q\in\overline{\Pi}_{m}.$ Then the PDE $p(D)f=q$ has a solution from $\overline{\Pi}_{n+m}.$ ###### Proof. Let us verify that in this case the number of linearly independent solutions of PDE $p(D)f=0$ in $\overline{\Pi}_{n+m}$ is $n.$ Indeed, according to Theorem 1.5, the number of linearly independent solutions of PDE $p(D)f=0$ in $\Pi_{n+m}$ is: $1+2+...+n+mn,$ while in $\Pi_{n+m-1}$ it is: $1+2+...+n+(m-1)n.$ Then consider the linear operator $\mathcal{L}:\overline{\Pi}_{n+m}\rightarrow\overline{\Pi}_{m}$, given by $\mathcal{L}f=p(D)f.$ What we verified above means that $dim(ker\mathcal{L})=n.$ Now, by using the formula (1.1), we get $dim(Im\mathcal{L})=dim\overline{\Pi}_{n+m}-dim(ker\mathcal{L})=m+n+1-n=m+1=dim\overline{\Pi}_{m}.$ Therefore the operator $\mathcal{L}$ is onto $\overline{\Pi}_{m}$ and the equation (2.1) has a solution. ∎ ###### Second proof of Proposition 2.3. By using Theorem 1.5, we proved Proposition 2.3, which states the polynomial solvability of the PDE $p(D)f=q$ in the case when $p$ and $q$ are homogeneous polynomials. Now, let us establish the same result in another way, which will be important in the proof of the result in the case of polynomials of more variables.
Assume that $p\in\overline{\Pi}_{n}$ and $q\in\overline{\Pi}_{m}:$ $p(x,y)=a_{0}x^{n}+a_{1}x^{n-1}y+...+a_{n}y^{n},$ $q(x,y)=b_{0}x^{m}+b_{1}x^{m-1}y+...+b_{m}y^{m}.$ Let us find a solution $f\in\overline{\Pi}_{n+m}$ of the PDE $p(D)f=q.$ (2.2) Suppose that $f(x,y)=\gamma_{0}x^{n+m}+\gamma_{1}x^{n+m-1}y+\cdots+\gamma_{n+m}y^{n+m}.$ Now, the PDE (2.2) looks like $(a_{0}x^{n}+a_{1}x^{n-1}y+...+a_{n}y^{n})(D)(\gamma_{0}x^{n+m}+\gamma_{1}x^{n+m-1}y+...+\gamma_{n+m}y^{n+m})$ $=b_{0}x^{m}+b_{1}x^{m-1}y+...+b_{m}y^{m}$ By equating the coefficients of $x^{m},x^{m-1}y,\ldots,y^{m}$ in the left and right hand sides of the equation we get $\frac{(n+m)!0!}{m!0!}a_{0}\gamma_{0}+\frac{(n+m-1)!1!}{m!0!}a_{1}\gamma_{1}+...+\frac{m!n!}{m!0!}a_{n}\gamma_{n}=b_{0},$ $\frac{(n+m-1)!1!}{(m-1)!1!}a_{0}\gamma_{1}+\frac{(n+m-2)!2!}{(m-1)!1!}a_{1}\gamma_{2}+...+\frac{(m-1)!(n+1)!}{(m-1)!1!}a_{n}\gamma_{n+1}=b_{1},$ $\vdots$ $\frac{n!m!}{0!m!}a_{0}\gamma_{m}+\frac{(n-1)!(m+1)!}{0!m!}a_{1}\gamma_{m+1}+...+\frac{0!(n+m)!}{0!m!}a_{n}\gamma_{n+m}=b_{m},$ respectively. If $a_{0}\neq 0$, then $\gamma_{m+1},\gamma_{m+2},...,\gamma_{m+n}$ are $n$ free variables in the above linear system. Thus the solutions of the system form an affine space of dimension $n.$ If $a_{0}=0$ and $a_{1}\neq 0$, then it is easily seen that $\gamma_{0}$ becomes a free variable instead of $\gamma_{m+1}$ and again we have $n$ free variables. In the general case, when $a_{0}=a_{1}=\cdots=a_{k-1}=0$ and $a_{k}\neq 0,$ the $n$ free variables are $\gamma_{0},\ldots,\gamma_{k-1}$ and $\gamma_{m+k+1},\ldots,\gamma_{m+n}.$ ∎ Thus in the above considered case, when $p\in\overline{\Pi}_{n}$ and $q\in\overline{\Pi}_{m},$ we get another proof of the fact that the solutions of the PDE (2.2) form an affine space of dimension $n.$ Now consider the particular case $s=0$ of Theorem 2.1 and Corollary 2.2: ###### Proposition 2.4.
Suppose that $p\in\Pi_{n},\ q\in\Pi_{m}$ and the free term of $p$ is not zero: $a_{00}\neq 0.$ Then we have that the PDE $p(D)f=q$ (2.3) has a unique solution $f_{0}\in\Pi_{m}.$ The polynomial $f_{0}$ is the only solution of the PDE (2.3) also in each $\Pi_{k},\ k\geq m.$ ###### Proof. Since $p(D)1=a_{00}\neq 0$, we get from Corollary 1.4 that the PDE $p(D)f=0$ has a unique polynomial solution $f=0.$ Then consider the linear operator $\mathcal{L}:\Pi_{m}\rightarrow\Pi_{m}$ given by $\mathcal{L}f=p(D)f.$ Above we verified that $ker\mathcal{L}=\\{0\\}.$ Now, by using the formula (1.1), we get that $dim(Im\mathcal{L})=dim\Pi_{m}-dim(ker\mathcal{L})=dim\Pi_{m}.$ Thus the equation (2.3) has a unique solution in $\Pi_{m},$ denoted by $f_{0}.$ Now, concerning the other polynomial spaces, assume by way of contradiction that the equation (2.3) has another solution, denoted by $f_{1},\ f_{1}\neq f_{0},$ in $\Pi_{k},$ where $k>m.$ Then we get that $f_{0}-f_{1}$ is a nonzero polynomial solution of the PDE $p(D)f=0,$ which contradicts Corollary 1.4. ∎ Now we are in a position to present ###### Proof of Theorem 2.1. Let us use induction on $m.$ As the first step of induction consider the case $m=-1,\ q\in\Pi_{-1},$ i.e., $q=0$ (the next step is $m=0,\ q\in\Pi_{0},$ i.e., $q=const.\neq 0).$ In this case the PDE $p(D)f=0$ has a polynomial solution $0\in\Pi_{m+s},\ \forall s\geq 0.$ Now assume that the PDE $p(D)f=q$ (2.4) has a solution in $\Pi_{m+s-1}$ if $q\in\Pi_{m-1}.$ Let us prove that it has a solution in $\Pi_{m+s}$ assuming that $q\in\Pi_{m}.$ Now assume that the lower homogeneous layer of $p$ is the $s$th layer: $p_{\downarrow}=p_{(s)}.$ Next assume that $f\in\Pi_{m+s}$ and $f^{\uparrow}=f_{(m+s)}$ is the upper homogeneous layer of $f.$ Note that the case $f_{(m+s)}=0$ follows from the induction hypothesis.
Consider the following PDE with homogeneous polynomials $p_{(s)}$ and $q_{(m)}:$ $p_{(s)}(D)f=q_{(m)}.$ According to Proposition 2.3 this equation has a solution $\hat{f}\in\overline{\Pi}_{m+s}:$ $p_{(s)}(D)\hat{f}=q_{(m)}.$ (2.5) Next let us seek for a solution of the PDE (2.4) in the form $f=g+\hat{f}:$ $p(D)(g+\hat{f})=q.$ (2.6) It is easily seen that, in view of (2.5), this PDE is equivalent to $p(D)(g)=r,$ (2.7) where $r=q-p(D)(\hat{f}).$ Note that $p_{\downarrow}=p_{(s)}$ readily implies that $p(D)(\hat{f})\in\Pi_{m}.$ Since also $q\in\Pi_{m},$ we obtain that $r\in\Pi_{m}$ too. Next let us verify that $r\in\Pi_{m-1}.$ It is enough to show that $r_{(m)}=0.$ To this purpose, by using (2.5), we obtain that $r_{(m)}=q_{(m)}-[p(D)(\hat{f})]_{(m)}=q_{(m)}-p_{(s)}(D)(\hat{f})=q_{(m)}-q_{(m)}=0.$ Now, in view of $r\in\Pi_{m-1},$ let us use the induction hypothesis. Hence, we get that the PDE (2.7) has a solution denoted by $g_{0},$ where $g_{0}\in\Pi_{m+s-1}.$ Therefore $f=g_{0}+\hat{f}$ is a solution of the PDE (2.4). Note also that this implies that $\hat{f}=f_{(m+s)}.$ ∎ ## 3 The case of more than two variables Let us use standard multivariate notation. For $x=(x_{1},\ldots,x_{k})\in\mathbb{C}^{k}$ and $\alpha=(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{Z}_{+}^{k}$ denote $x^{\alpha}=x_{1}^{\alpha_{1}}\cdots x_{k}^{\alpha_{k}},\quad|\alpha|=\alpha_{1}+\cdots+\alpha_{k},\quad\alpha!=\alpha_{1}!\cdots\alpha_{k}!.$ Denote by $\Pi_{n}^{(k)}$ the space of polynomials of $k$ variables of total degree at most $n,$ $\Pi_{n}^{(k)}=\\{\sum_{|\alpha|\leq n}a_{\alpha}x^{\alpha}\\},\quad\dim\Pi_{n}^{(k)}=\binom{n+k}{k}.$ We set $\Pi^{(k)}=\cup_{n\geq 0}\Pi_{n}^{(k)}.$ Denote also by $\overline{\Pi}_{n}^{(k)}$ the space of homogeneous polynomials of total degree $n,$ for which we have that $\overline{\Pi}_{n}^{(k)}=\\{\sum_{|\alpha|=n}a_{\alpha}x^{\alpha}\\},\quad\dim\overline{\Pi}_{n}^{(k)}=\binom{n+k-1}{k-1}.$ The following result holds for polynomials of $k$ variables: ###### Theorem 3.1.
Assume that $q\in\Pi_{m}^{(k)}$ and the origin $O=(0,\ldots,0)$ is an $s$-fold zero of $p\in\Pi^{(k)}.$ Then we have that the PDE $p(D)f=q$ has a polynomial solution from $\Pi_{m+s}^{(k)}.$ It is clear from the previous section that all we need is to generalize Proposition 2.3 to the case of $k$ variables: ###### Proposition 3.2. Assume that $p\in\overline{\Pi}_{n}^{(k)}$ and $q\in\overline{\Pi}_{m}^{(k)}.$ Then the PDE $p(D)f=q$ (3.1) has a solution from $\overline{\Pi}_{n+m}^{(k)}.$ ###### Proof. Assume that $p\in\overline{\Pi}_{n}$ and $q\in\overline{\Pi}_{m}:$ $p(x)=\sum_{|\alpha|=n}a_{\alpha}x^{\alpha},$ $q(x)=\sum_{|\alpha|=m}b_{\alpha}x^{\alpha}.$ Let us find a solution $f\in\overline{\Pi}_{n+m}$ of the PDE $p(D)f=q.$ (3.2) Suppose that $f(x)=\sum_{|\alpha|=m+n}\gamma_{\alpha}x^{\alpha}.$ Now, the PDE (3.2) looks like $\sum_{|\alpha|=n}a_{\alpha}x^{\alpha}(D)\sum_{|\alpha|=m+n}\gamma_{\alpha}x^{\alpha}=\sum_{|\alpha|=m}b_{\alpha}x^{\alpha}.$ By equating the terms with $x^{\alpha_{0}},\ |\alpha_{0}|=m,$ in the left and right hand sides of the equation, we get that $\sum_{|\alpha|=n}a_{\alpha}x^{\alpha}(D)\gamma_{\alpha+\alpha_{0}}x^{\alpha+\alpha_{0}}=b_{\alpha_{0}}x^{\alpha_{0}}.$ For the respective coefficients we have that $\sum_{|\alpha|=n}(\alpha+\alpha_{0})!a_{\alpha}\gamma_{\alpha+\alpha_{0}}=\alpha_{0}!b_{\alpha_{0}}.$ (3.3) By the change of unknowns $\gamma^{\prime}_{\alpha}=\alpha!\gamma_{\alpha}$ we get the linear system $\sum_{|\alpha|=n}a_{\alpha}\gamma_{\alpha+\alpha_{0}}^{\prime}=\alpha_{0}!b_{\alpha_{0}}.$ Let us verify that the main matrix of this system has full rank.
Note that the row of the matrix corresponding to $\alpha_{0}$ contains the coefficients of the polynomial $x^{\alpha_{0}}p(x)$ (in particular, for $\alpha_{0}=(0,\ldots,0)$ these are the coefficients of $p(x)$ itself). Thus if the rows of the main matrix are linearly dependent then so are the polynomials $x^{\alpha_{0}}p(x),\ |\alpha_{0}|=m,$ which evidently takes place only if $p(x)\equiv 0.$ Therefore we have that the linear system (3.3) is also a full-rank linear system, where we have $\binom{m+k-1}{k-1}$ equations and $\binom{n+m+k-1}{k-1}$ unknowns. Thus we have that the system is consistent. Moreover, in the solution of the linear system we have exactly $\sigma(n,m,k):=\binom{n+m+k-1}{k-1}-\binom{m+k-1}{k-1}$ free variables. ∎ ## REFERENCES * [1] H. Hakopian, A multivariate analog of the fundamental theorem of algebra and Hermite interpolation, Constructive Theory of Functions, (B. Bojanov, ed.), Proceedings of the international conference, Varna, 2002, Darba, Sofia, 2003, 1–18. * [2] H. Hakopian and M. Tonoyan, On a multivariate theory, Approximation Theory, A Volume Dedicated to Blagovest Sendov, (B. Bojanov, ed.), Darba, Sofia, 2002, pp. 212–230. * [3] N. Vardanyan, On constant coefficient PDE systems and intersection multiplicities, Proceedings of YSU, Physical and Mathematical Sciences, 54 no. 2 (2020), 108–114.
# Notes on conformal anomaly, nonlocal effective action and the metamorphosis of the running scale A. O. Barvinsky<EMAIL_ADDRESS>Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, Moscow 119991, Russia Institute for Theoretical and Mathematical Physics, Moscow State University, Leninskie Gory, GSP-1, Moscow, 119991, Russia W. Wachowski<EMAIL_ADDRESS>Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, Moscow 119991, Russia ###### Abstract We discuss the structure of nonlocal effective action generating the conformal anomaly in classically Weyl invariant theories in curved spacetime. By the procedure of conformal gauge fixing, selecting the metric representative on a conformal group orbit, we split the renormalized effective action into anomalous and Weyl invariant parts. A wide family of thus obtained anomalous actions is shown to include two special cases of Riegert–Fradkin–Tseytlin and Fradkin–Vilkovisky actions. Both actions are shown to be contained in the first three orders of the curvature expansion for a generic one-loop effective action obtained by covariant perturbation theory. The complementary Weyl invariant part of the action is given by the “conformization” of the full effective action—restricting its argument to the conformally invariant representative of the orbit of the conformal group. This is likely to resolve a long-standing debate between the proponents of the Riegert action and adherents of the perturbation expansion for the effective action with typical nonlocal logarithmic form factors. We derive the relation between quantum stress tensors on conformally related metric backgrounds, which generalizes the known Brown-Cassidy equation to the case of nonzero Weyl tensor, and discuss applications of this relation in the cosmological model driven by conformal field theory. 
We also discuss the issue of renormalization group running for the cosmological and gravitational coupling constants and show that it exhibits a kind of metamorphosis to the nonlocal form factors of the so-called partners of the cosmological and Einstein terms—nonlocal curvature squared terms of the effective action.

###### Contents

1. 1 Introduction
2. 2 Conformal gauge fixing
   1. 2.1 Riegert–Fradkin–Tseytlin gauge
   2. 2.2 Fradkin–Vilkovisky gauge
3. 3 Conformal anomaly and covariant curvature expansion
   1. 3.1 Cubic order
   2. 3.2 Conformal resummation: Fradkin–Vilkovisky anomaly action
   3. 3.3 The problem of double poles and global conformal transformations
4. 4 Stress tensor in conformally related spacetimes
   1. 4.1 Conformal anomaly from the divergent part of the effective action
   2. 4.2 Minimal form of Wess-Zumino action and a-theorem
   3. 4.3 Renormalized stress tensors
5. 5 Conformally flat spacetime
   1. 5.1 Anomaly driven cosmology
6. 6 Renormalization group and the metamorphosis of the running scale
7. 7 Conclusions

To the memory of Stanley Deser

## 1 Introduction The status of local Weyl anomalies is widely considered to be fully settled in the current literature. However, the issue of their relevance to concrete physical effects, as opposed to a mere criterion of consistency at the quantum level of classically Weyl invariant theories, often remains a subject of debate. The manifestation of the conformal anomaly in physical applications usually occurs within the effective action formalism, and a debate extending over many years on the structure of this action has taken place between the pioneers of the conformal anomaly and adherents of perturbation theory. The nature of this debate consists in a seemingly contradictory difference between the known expression for the anomaly action and the form of the nonlocal effective action obtained by the Feynman diagrammatic technique.
As is well known, the one-loop conformal anomaly for a classically Weyl invariant 4-dimensional theory having in Euclidean curved spacetime the covariantly renormalized effective action $\varGamma[\,g_{\mu\nu}]$ reads as [1, 2, 3, 4, 5, 6, 7] $\displaystyle\langle\,T^{\mu}_{\mu}\,\rangle\equiv\frac{2\,g_{\mu\nu}}{\sqrt{g}}\frac{\delta\varGamma}{\delta g_{\mu\nu}}=\frac{1}{16\pi^{2}}\big{(}\alpha C^{2}+\beta E+\gamma\Box R\big{)},$ (1.1) $\displaystyle\;E=R_{\mu\nu\alpha\gamma}R^{\mu\nu\alpha\gamma}-4R_{\mu\nu}R^{\mu\nu}+R^{2},$ (1.2) where $\sqrt{g}E$ denotes the Gauss–Bonnet density, $C_{\mu\nu\alpha\beta}$ is the Weyl tensor, $C^{2}=C_{\mu\nu\alpha\beta}C^{\mu\nu\alpha\beta}$, and $\alpha$, $\beta$ and $\gamma$ are the numerical coefficients depending on the spin of the quantum field.111We work in Euclidean signature spacetime, and our notations are $R^{\alpha}{}_{\beta\mu\nu}=\partial_{\mu}\varGamma^{\alpha}_{\nu\beta}-\cdots$, $R_{\mu\nu}=R^{\alpha}{}_{\mu\alpha\nu}$, $\Box=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}$. For simplicity we do not include in the anomaly the contribution $F_{\mu\nu}^{2}$ of the vector gauge field and the $\varphi^{4}$-contribution of the self-interacting conformal scalar field. The anomalous action $\varGamma_{A}[\,g_{\mu\nu}]$ generating this anomaly was first derived in the nonlocal form by Riegert [8] and by Fradkin and Tseytlin [9] in the local form of the conformal Wess-Zumino action involving an auxiliary scalar field—the dilaton responsible for intertwining two conformally related metrics.
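As a quick sanity check of the combination (1.2) (a standard exercise, not taken from the paper), one can evaluate $E$ on the round sphere $S^{4}$ of radius $a$, where the curvature is maximally symmetric:

```latex
R_{\mu\nu\alpha\beta}=\frac{R}{12}\big(g_{\mu\alpha}g_{\nu\beta}-g_{\mu\beta}g_{\nu\alpha}\big),\qquad
R=\frac{12}{a^{2}},\qquad
R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}=\frac{R^{2}}{6},\qquad
R_{\mu\nu}R^{\mu\nu}=\frac{R^{2}}{4},
```

so that $E=\frac{R^{2}}{6}-R^{2}+R^{2}=\frac{R^{2}}{6}=\frac{24}{a^{4}}$, and with $\mathrm{Vol}(S^{4})=\frac{8\pi^{2}a^{4}}{3}$ one finds $\frac{1}{32\pi^{2}}\int d^{4}x\,\sqrt{g}\,E=2=\chi(S^{4})$, confirming that $\sqrt{g}E$ is indeed the Gauss–Bonnet (Euler) density.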
The nonlocal form of the Riegert–Fradkin–Tseytlin (RFT) action reads as $\displaystyle\varGamma_{A}[\,g\,]$ $\displaystyle=\frac{1}{64\pi^{2}}\int d^{4}x\,\sqrt{g}\,\Big{(}\alpha\,C^{2}+\frac{\beta}{2}\mathcal{E}_{4}\Big{)}\frac{1}{\Delta_{4}}\mathcal{E}_{4}$ $\displaystyle-\frac{1}{32\pi^{2}}\Big{(}\frac{\gamma}{6}+\frac{\beta}{9}\Big{)}\,\int d^{4}x\,\sqrt{g}R^{2},$ (1.3) where ${\cal E}_{4}\equiv E-\tfrac{2}{3}\Box R,$ (1.4) $\Delta_{4}$ denotes the so-called Paneitz operator [10] $\Delta_{4}=\Box^{2}+2R^{\mu\nu}\nabla_{\mu}\nabla_{\nu}-\frac{2}{3}R\,\Box+\frac{1}{3}(\nabla^{\mu}R)\,\nabla_{\mu}$ (1.5) and $1/\Delta_{4}$ implies its inverse—the notation for the operation of acting by its Green’s function ${\cal G}(x,y)$ on a generic test function $\psi(y)$, $\Delta_{4}{\cal G}(x,y)=\delta(x,y)$, $\tfrac{1}{\Delta_{4}}\psi(x)=\int d^{4}y\,{\cal G}(x,y)\,\psi(y)$. Some time after the invention of the RFT action, attention was drawn to it by Antoniadis, Mazur and Mottola due to several applications in gravity theory [11, 12], but this drew serious criticism [13] of the expression (1.3) in view of its drastic structural difference from the renormalized effective action built within perturbation theory in powers of spacetime curvature. This expansion begins with [3] $\displaystyle\varGamma_{\rm ren}$ $\displaystyle=\frac{1}{32\pi^{2}}\int dx\,\sqrt{g}\Big{[}-\alpha\,C_{\mu\nu\alpha\beta}\ln\Big{(}\\!-\frac{\Box}{\mu^{2}}\Big{)}C^{\mu\nu\alpha\beta}$ $\displaystyle-\frac{\gamma}{6}R^{2}\,\Big{]}+O(\Re^{3}),$ (1.6) $\Re$ collectively denoting here the Riemann, Ricci and scalar curvature, and does not at all resemble the form of (1.3). This criticism was maintained by objections against the short distance behavior of stress tensor correlation functions generated by the RFT action, which were shown to contradict the conformal Ward identities for these correlators [14].
Another criticism was associated with the objections against the double pole structure of the Green’s function of the operator (1.5), $\sim 1/\Box^{2}$ [15]. Although these objections were refuted in [16] by explicit calculations of $\langle TTT\rangle$-correlators, the question might still be hovering unsettled in the literature [17]. The goal of this paper will be to discuss the status of the effective action responsible for the generation of the Weyl anomaly. To begin with we will focus on a wide variety of nonlocal anomalous actions by including the RFT action in their functional family. The idea of this construction is similar to gauge fixing applied to the ambiguity of the conformal split of the metric argument of the action functional, which was suggested rather long ago in [18]. The resulting class of anomaly actions will be parameterized by the conformal gauge selecting the representative on the orbit of the local conformal group. We will explicitly demonstrate that the difference between the members of this class is a Weyl invariant functional—a point of departure between various suggestions for the anomalous action. Two particular gauges will be considered, one of them exactly corresponding to the RFT action (1.3) and another associated with the Weyl invariant nonlocal rescaling of the metric field suggested by Fradkin and Vilkovisky. This rescaling, which is directly applicable in asymptotically flat spacetimes, was designed as a remedy against the trace anomaly [19]—the analogue of the Yamabe problem of a local Weyl transformation to the metric with a vanishing scalar curvature. Then we show how the Fradkin–Vilkovisky version of the anomaly action arises in the first three orders of the covariant curvature expansion for a generic one-loop effective action. We discuss the associated mechanism of partial summation of scalar curvature terms of this expansion [20] along with the double pole problem for the Green function of the Paneitz operator (1.5).
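The double-pole behavior is easy to probe in the simplest setting (our own flat-space sketch, not from the paper): in flat 4D space the Paneitz operator (1.5) degenerates to $\Box^{2}$, whose Green's function is logarithmic, $G(x)\propto\ln(\mu^{2}x^{2})$. A radial sympy check confirms that $\Box^{2}$ annihilates this kernel away from the origin:

```python
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)

def radial_box(f, dim=4):
    """Flat Laplacian acting on a radial function in `dim` Euclidean dimensions."""
    return sp.diff(f, r, 2) + (dim - 1) / r * sp.diff(f, r)

G = sp.log(mu**2 * r**2)               # candidate kernel of 1/Box^2 (up to normalization)
once = sp.simplify(radial_box(G))      # Box G = 4/r^2
twice = sp.simplify(radial_box(once))  # Box^2 G = 0 for r > 0

assert sp.simplify(once - 4/r**2) == 0
assert twice == 0
```

The logarithm is exactly the infrared-sensitive behavior behind the $\sim 1/\Box^{2}$ objections mentioned above; the delta-function source at $r=0$ is of course not captured by this pointwise check.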
Lack of uniqueness of the anomaly action defined only up to a Weyl invariant functional raises, of course, the question of its incompleteness in concrete applications. This also poses the question of whether the RFT action or its modifications within the above class provides an optimal description of the physical problem in question. For example, it is well known that in two dimensions the stress tensor trace anomaly and the associated nonlocal Polyakov action are fully responsible for the Hawking radiation of the two dimensional black holes [21]. On the contrary, in higher dimensions the anomaly action is insufficient to describe this phenomenon. Still there is a strong belief [11, 12, 16] that at distances of the horizon scale gravity theory is essentially modified due to large infrared effects of the conformal mode described by the action (1.3). These effects might dominate macroscopic physics at such scales, like for instance the near black hole horizon behavior of quantum stress tensor [22], the contribution to the scalar sector of gravitational waves [23] or dynamical vacuum energy in effective theory of gravity [24]. Though it is not entirely clear how complete is the setup in these problems, there are physical situations when the conformal mode really runs the whole show, and we consider as a direct application of (1.3) two examples of such a situation. These are the calculation of the metric stress tensor in a generic conformally flat spacetime [25] and the Friedmann metric cosmology driven by the trace anomaly of conformal invariant fields [26], the latter playing important role in the model of initial conditions for inflationary cosmology [27, 28]. A related issue in the problem of nonlocal effective action is the question of renormalization group (RG) running of the cosmological and gravitational constants. 
Though the issue of running scale and its relation to the cosmological constant problem has already become a byword in current literature, it becomes increasingly clear that this running should not be interpreted in the usual sense of RG theory [29, 30]. The notion of “scale” is so ambiguous in physics that its running nature actually loses universality when addressing various physical setups, like for example associating cosmological inflation with RG running [31]. Serious arguments against the running nature of the cosmological and gravitational couplings in [29, 30] have led to the notion of cosmological constant partners [32] interpreted in [33] in terms of separation of scales or decoupling of heavy modes [34, 35]. Still, it is customary to have nontrivial solutions of RG equations in renormalizable gravity models [36, 37] with running scale dependent $\varLambda$ and $G$. Therefore a natural question arises as to how these solutions have to be interpreted when the tadpole structure of the covariant cosmological and Einstein terms precludes them from their actual dependence on the momentum [30]. So one of the goals of this paper is an attempt to clarify this issue within a special version of the notion of the “scale”. Looking forward to the final conclusion, we might formulate the suggestion for the notions of running $\varLambda$ and $G$ couplings as their conversion or metamorphosis into their nonlocal partners similar to those introduced by J. Donoghue in [32]. Within the perturbation scheme the cosmological and Einstein terms start manifesting themselves as nonlocal curvature squared terms very different from their original form. The paper is organized as follows. In Sect. 2 we decompose the quantum effective action into anomalous and Weyl invariant parts by imposing the conformal gauge for the choice of the representative on the orbit of the conformal group.
This allows one to build the whole class of nonlocal anomalous actions, functionally parameterized by the choice of this gauge and including the RFT action (1.3) and the Fradkin–Vilkovisky action suggested in [20]. Sect. 3 contains the discussion of the covariant curvature expansion of [38, 39, 40] and the way it contains the anomalous action in the lowest orders of this expansion. In particular, it is shown that the Fradkin–Vilkovisky version of this action performs a resummation of the covariant curvature series in powers of the Ricci scalar [20]. In Sect. 4 we give a direct and, apparently, not very well known derivation from the RFT action of the vacuum stress-tensor behavior at the orbit of the conformal group—a good example of direct applicability of (1.3). Here we also comment on the application of the anomalous conformal Wess-Zumino action to the a-theorem [41, 42] and present the generalization of the Brown–Cassidy formula [25] for the stress tensor to the case of a nonzero Weyl tensor, see Eq. (4.34). Applications of the anomaly action in conformally flat spacetime are presented in Sect. 5. It is shown how this action underlies the construction of the inflation scenario starting from the cosmological initial state in the form of the microcanonical density matrix [27, 28, 43, 44, 45], recently reviewed in [46]. An important feature of this application is the value of the Casimir vacuum energy which is also determined by the coefficients of the anomalous trace (1.1) [47, 48, 49, 50]. In Sect. 6 we discuss the problem of scale dependence of the gravitational and cosmological constants related to the ideas of [29, 30, 32] and [51, 52]. Here we show that in the UV regime the RG analysis of the cosmological and Einstein terms strongly points to the conversion of their scale dependence into the nonlocal form factors of their UV partners represented by curvature squared terms with dimensionless nonlocal coefficients.
We call this phenomenon a metamorphosis of the running scale, which we derive by using a special scaling operator. In the IR domain the same analysis leads to the low energy partners depending on the mass scale of the theory. These nonlocal partners were suggested in [32] by J. Donoghue for the cosmological constant term and blueprinted for the Einstein term in [51, 52] in the form of the long distance modification of Einstein gravity. In the concluding section we briefly recapitulate the above observations and dwell on related potential problems and applications. We start by discussing the role of the Weyl anomaly in the problem of cosmological initial conditions for the inflation scenario driven by a conformal field theory [46]. This scenario motivates the introduction of numerous conformal higher spin (CHS) fields whose Weyl anomaly is generated only in the one-loop approximation and, thus, acquires a kind of nonperturbative status. Then we discuss the uniqueness of the nonlocal scaling operator used for the derivation of the above metamorphosis phenomenon. In particular, we show that in the curvature squared terms of the action it is nearly uniquely determined due to general covariance of the theory, though in Lorentz symmetry violating models like Hořava gravity [53] it may be rather ambiguous.
## 2 Conformal gauge fixing The splitting of the renormalized effective action of a classically conformally invariant theory into the anomaly part $\varGamma_{A}$ generating the trace anomaly (1.1) and the Weyl invariant part $\varGamma^{\rm conf}$, $g_{\mu\nu}\delta\varGamma^{\rm conf}/\delta g_{\mu\nu}=0$, $\varGamma_{\rm ren}=\varGamma_{A}+\varGamma^{\rm conf},$ (2.1) is obviously not unique and admits the freedom $\varGamma_{A}\to\varGamma_{A}+W^{\rm conf},\qquad\varGamma^{\rm conf}\to\varGamma^{\rm conf}-W^{\rm conf},$ (2.2) with an arbitrary conformally invariant functional $W^{\rm conf}$, $g_{\mu\nu}\frac{\delta W^{\rm conf}}{\delta g_{\mu\nu}}=0.$ (2.3) The freedom in the choice of $W^{\rm conf}[\,g_{\mu\nu}]$ arises as a functional integration constant for the first order variational equation that can be written down for $\varGamma_{A}[\,g_{\mu\nu}]$ or for the renormalized effective action $\varGamma[\,g_{\mu\nu}]\equiv\varGamma_{\rm ren}[\,g_{\mu\nu}]$. At the orbit of the conformal group passing through the metric $g_{\mu\nu}$—the argument of the effective action—and parameterized by the local conformal parameter $\sigma=\sigma(x)$, $g_{\mu\nu}=e^{2\sigma}\bar{g}_{\mu\nu},$ (2.4) the renormalized action $\varGamma_{\rm ren}[\,e^{2\sigma}\bar{g}\,]$ satisfies the equation $\frac{\delta\varGamma_{\rm ren}[e^{2\sigma}\bar{g}]}{\delta\sigma}=\frac{\sqrt{g}}{16\pi^{2}}\big{(}\alpha C^{2}+\beta E+\gamma\Box R\big{)}\Big{|}_{g_{\mu\nu}=e^{2\sigma}\bar{g}_{\mu\nu}},$ (2.5) which can be integrated to give the conformal Wess-Zumino action [9] $\displaystyle\Delta\varGamma[\,\bar{g},\sigma\,]\equiv\varGamma_{\rm ren}[\,g\,]-\varGamma_{\rm ren}[\,\bar{g}\,]$ $\displaystyle\qquad=\frac{1}{16\pi^{2}}\int d^{4}x\,\sqrt{\bar{g}}\left\\{\Big{[}\alpha\bar{C}^{2}+\beta\bar{\mathcal{E}}_{4}\Big{]}\sigma+2\beta\sigma\bar{\Delta}_{4}\sigma\right\\}$ $\displaystyle\qquad-\frac{1}{32\pi^{2}}\Big{(}\frac{\gamma}{6}+\frac{\beta}{9}\Big{)}\,\int
d^{4}x\,\Big{(}\sqrt{g}R^{2}-\sqrt{\bar{g}}\bar{R}^{2}\Big{)},$ (2.6) where the two metrics $g_{\mu\nu}$ and $\bar{g}_{\mu\nu}$ are related by the equation (2.4), all barred quantities are built in terms of $\bar{g}_{\mu\nu}$ and $\bar{\Delta}_{4}$ is the barred version of the fourth-order Paneitz operator (1.5). This expression $\varGamma_{\rm ren}-\bar{\varGamma}_{\rm ren}=\varGamma_{A}-\bar{\varGamma}_{A}$ can also be rewritten in another form $\displaystyle\varGamma_{A}[\,g\,]-\varGamma_{A}[\,\bar{g}\,]$ $\displaystyle\qquad=\frac{1}{16\pi^{2}}\int d^{4}x\sqrt{g}\,\left\\{\,\Big{[}\,\alpha\,C^{2}+\beta\mathcal{E}_{4}\Big{]}\,\sigma-2\beta\,\sigma\Delta_{4}\sigma\,\right\\}$ $\displaystyle\qquad-\frac{1}{32\pi^{2}}\Big{(}\frac{\gamma}{6}+\frac{\beta}{9}\Big{)}\,\int d^{4}x\,\Big{(}\sqrt{g}R^{2}-\sqrt{\bar{g}}\bar{R}^{2}\Big{)},$ (2.7) if one takes into account two important properties of the Paneitz operator—Weyl invariance of its densitized form, $\sqrt{\bar{g}}\,\bar{\Delta}_{4}=\sqrt{g}\,\Delta_{4},$ (2.8) and the finite conformal transformation of ${\cal E}_{4}$—the Gauss-Bonnet density modified by the $\sqrt{g}\,\Box R$ term (1.4), $\sqrt{g}\,\mathcal{E}_{4}=\sqrt{\bar{g}}\,\bar{\mathcal{E}}_{4}+4\sqrt{\bar{g}}\,\bar{\Delta}_{4}\sigma.$ (2.9) These two properties are consistent with each other because the last equation should obviously remain valid under the interchange of $g_{\mu\nu}$ and $\bar{g}_{\mu\nu}$ accompanied by flipping the sign of $\sigma$. There is also a third form of the Wess-Zumino action, which will be given below in Eq. (4.24). It exists for a special renormalization converting to zero the coefficient $\gamma$ of the $\Box R$ term in (1.1), and underlies the proof of the so-called $a$-theorem for the monotonic RG flow of the coefficient $a=\beta/16\pi^{2}$ of the topological term in the trace anomaly [41, 42].
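A useful special case of (2.9) (our own remark, a sketch of the mechanism rather than a statement from the text): take the fiducial metric flat, $\bar g_{\mu\nu}=\delta_{\mu\nu}$. Then all barred curvatures vanish, so $\bar{\mathcal E}_{4}=0$ and the barred Paneitz operator collapses to $\bar\Delta_{4}=\bar\Box^{2}$, giving

```latex
\sqrt{g}\,\mathcal{E}_{4}\Big|_{g_{\mu\nu}=e^{2\sigma}\delta_{\mu\nu}}
 = 4\,\bar\Box^{2}\sigma
\qquad\Longrightarrow\qquad
\sigma=\frac{1}{4}\,\frac{1}{\bar\Box^{2}}\Big(\sqrt{g}\,\mathcal{E}_{4}\Big),
```

so on conformally flat backgrounds the conformal parameter is recovered from the local geometry by inverting a flat quartic operator—presumably the mechanism exploited in the curvature resummation discussed in Sect. 3.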
Modulo a nonvanishing conformal anomaly all points on the orbit of the conformal group (2.4) are physically equivalent, and this typical situation of a broken local gauge invariance can be managed by introducing the gauge condition which uniquely selects $\bar{g}_{\mu\nu}$ as the representative of the equivalence class of metrics (2.4). If we denote this gauge condition as $\chi[\bar{g}]=0$ then this representative should be uniquely selected by the solution of the equation for the conformal parameter $\sigma$, $\chi[\,\bar{g}\,]=\chi[\,ge^{-2\sigma}]=0,$ (2.10) this solution being a functional of the metric $\varSigma_{\chi}[\,g\,]$, labelled by the gauge symbol $\chi$, $\sigma=\varSigma_{\chi}[\,g\,].$ (2.11) The representative of the conformal orbit $\bar{g}_{\mu\nu}[g]$ as a functional of a given metric $g_{\mu\nu}$ (through which the orbit is passing) becomes Weyl invariant, $\bar{g}_{\mu\nu}[\,g\,]\equiv g_{\mu\nu}\,e^{-2\varSigma_{\chi}[\,g\,]},\quad g_{\alpha\beta}\frac{\delta\bar{g}_{\mu\nu}[\,g\,]}{\delta g_{\alpha\beta}}=0,$ (2.12) because under any local Weyl rescaling $g_{\mu\nu}\to e^{2\sigma}g_{\mu\nu}$ the conformal parameter transforms as $\varSigma_{\chi}[g]\to\varSigma_{\chi}[g]+\sigma$ in view of the identity $\chi\big{[}ge^{-2\varSigma_{\chi}[g]}\big{]}\equiv 0$, so that $\delta_{\sigma}\varSigma_{\chi}[\,g\,]=\sigma,$ (2.13) where $\delta_{\sigma}$ is the operator of the conformal variation $\delta_{\sigma}\equiv 2\int d^{4}x\;\sigma(x)\,g_{\mu\nu}(x)\frac{\delta}{\delta g_{\mu\nu}(x)}.$ (2.14) For the uniqueness of such a conformal gauge fixing procedure (in spacetime and at least in some finite domain of the space of metrics) the Faddeev–Popov operator $Q_{\chi}=Q_{\chi}(x,y)$, corresponding to the gauge $\chi[g]$, $\delta_{\omega}\chi(x)=\int d^{4}y\,Q_{\chi}(x,y)\,\omega(y)$, should be nondegenerate.
Thus, the terms of (2.7) collected in $\displaystyle W^{\rm conf}[\,g\,]=\varGamma_{A}[\,\bar{g}\,]+\frac{1}{32\pi^{2}}\Big{(}\frac{\gamma}{6}+\frac{\beta}{9}\Big{)}\\!\int d^{4}x\sqrt{\bar{g}}\,\bar{R}^{2}$ (2.15) taken at $\bar{g}_{\mu\nu}[\,g\,]$ can be considered an irrelevant Weyl invariant integration “constant”, while the rest of the terms can be identified with the anomaly action after the substitution $\sigma=\varSigma_{\chi}[\,g\,]$. This set of anomaly actions $\varGamma_{A}[\,g\,]\equiv\varGamma_{\chi}[\,g\,]$ parameterized and labelled by the conformal gauge conditions $\chi$ reads $\displaystyle\varGamma_{\chi}[\,g\,]=\frac{1}{16\pi^{2}}\int d^{4}x\,\sqrt{g}\,\Big{\\{}\big{(}\alpha C^{2}+\beta\mathcal{E}_{4}\big{)}\varSigma_{\chi}$ $\displaystyle\;\;\;\;-2\beta\varSigma_{\chi}\Delta_{4}\varSigma_{\chi}\Big{\\}}-\frac{1}{32\pi^{2}}\Big{(}\frac{\gamma}{6}+\frac{\beta}{9}\Big{)}\\!\int d^{4}x\,\sqrt{g}R^{2}.$ (2.16) The difference between various members of this set is, of course, a Weyl invariant functional. For two arbitrary conformal gauges one has $\displaystyle\varGamma_{\chi_{1}}-\varGamma_{\chi_{2}}$ $\displaystyle=\frac{1}{16\pi^{2}}\int d^{4}x\,\sqrt{g}\,\big{(}\varSigma_{\chi_{1}}-\varSigma_{\chi_{2}}\big{)}$ $\displaystyle\times\Big{[}\alpha\,C^{2}+\beta\mathcal{E}_{4}-2\beta\Delta_{4}\big{(}\varSigma_{\chi_{1}}+\varSigma_{\chi_{2}}\big{)}\Big{]}.$ (2.17) The conformal variation of this expression vanishes because of the transformation law (2.13) for $\varSigma_{\chi_{1,2}}$, the Weyl invariance of the density $\sqrt{g}\,C^{2}$ and the relation (2.9), which in infinitesimal form reads $\delta_{\sigma}\Big{[}\sqrt{g}\,\mathcal{E}_{4}\Big{]}=4\sqrt{g}\,\Delta_{4}\sigma,$ (2.18) so that, using all the above properties, $\delta_{\sigma}\big{(}\varGamma_{\chi_{1}}-\varGamma_{\chi_{2}}\big{)}=0$.
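The vanishing of $\delta_{\sigma}\big{(}\varGamma_{\chi_{1}}-\varGamma_{\chi_{2}}\big{)}$ can be spelled out term by term (our worked check of the statement above):

```latex
% By (2.13), the prefactor is conformally inert:
\delta_{\sigma}\big(\varSigma_{\chi_{1}}-\varSigma_{\chi_{2}}\big)
   =\sigma-\sigma=0;
% in the bracket, sqrt(g) C^2 is Weyl invariant, while by (2.18)
% and the Weyl invariance (2.8) of the densitized Paneitz operator:
\delta_{\sigma}\Big[\sqrt{g}\,\beta\,\mathcal{E}_{4}
   -2\beta\,\sqrt{g}\,\Delta_{4}\big(\varSigma_{\chi_{1}}+\varSigma_{\chi_{2}}\big)\Big]
   =4\beta\sqrt{g}\,\Delta_{4}\sigma-2\beta\sqrt{g}\,\Delta_{4}(2\sigma)=0 .
```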
Note that with our definition of the anomaly action (2.16) the way it enters the full quantum action can be represented as $\varGamma[\,g\,]=\varGamma_{\chi}[\,g\,]+\varGamma[\,\bar{g}\,]+\frac{1}{32\pi^{2}}\Big{(}\frac{\gamma}{6}+\frac{\beta}{9}\Big{)}\int d^{4}x\sqrt{\bar{g}}\,\bar{R}^{2},$ (2.19) where $\bar{g}_{\mu\nu}[\,g\,]=e^{-2\varSigma_{\chi}[\,g\,]}g_{\mu\nu}$. ### 2.1 Riegert–Fradkin–Tseytlin gauge An obvious choice of the conformal gauge associated with the Gauss–Bonnet density and the Branson curvature is the Riegert–Fradkin–Tseytlin gauge $\chi_{{\vphantom{L}}_{\rm RFT}}[\,\bar{g}\,]\equiv\bar{\mathcal{E}}_{4}=0.$ (2.20) It can be imposed for topologically simple spacetime manifolds with a vanishing bulk part of the Euler characteristics (see Eq. (2.38) and footnote 3 below); in particular, this includes the asymptotically flat spacetime mainly considered throughout this paper. The advantage of this gauge is that it is exactly solvable due to the transformation law for the Branson curvature (2.9). Applying this gauge and using Eq. (2.8) we obtain a linear equation for $\varSigma_{{\vphantom{L}}_{\rm RFT}}$, which has a solution in terms of the inverse Paneitz operator $\varSigma_{{\vphantom{L}}_{\rm RFT}}=\frac{1}{4}\,\frac{1}{\Delta_{4}}\,\mathcal{E}_{4}.$ (2.21) Formally substituting this expression into (2.16) we obtain exactly the RFT action (1.3). This RFT action and the inverse Paneitz operator are well defined and exist in asymptotically flat spacetime under Dirichlet boundary conditions at infinity when treated within perturbation theory in powers of the curvatures whose collection is denoted below as $\Re$. Indeed, in this case $\frac{1}{\Delta_{4}}=\frac{1}{\Box^{2}}+O(\Re),$ (2.22) and this operator works well when it is applied to functions of the Branson curvature type $\sim\mathcal{E}_{4}$.
Because of the double-pole nature of the operator $1/\Box^{2}$ its action on generic functions may be badly defined due to infrared divergences, but when the function has a total derivative structure it generates, when acted upon by $1/\Box^{2}$, a well defined multipole expansion valid in four dimensions at spacetime infinity [39].222 As discussed in [39], the operator $1/\Box^{n}$ in $D$-dimensional space with $D<2n$ is ill defined unless the functions it acts upon are of the form $\partial_{\alpha_{1}}...\partial_{\alpha_{m}}j(x)$, $m=2n-D+1$ with the function $j(x)$ having an asymptotic behavior $j(x)=O(1/|x|^{D})$, $|x|\to\infty$. This property can be explained by the fact that in the multipole expansion of $\tfrac{1}{\Box}\partial_{\alpha_{1}}...\partial_{\alpha_{m}}j(x)$ the first few multipoles vanish, which improves the fall-off properties of the result at infinity and makes possible a repeated action by $1/\Box$. But the Gauss–Bonnet density and $\sqrt{g}\,\Box R$ are both locally total derivatives, which makes $1/\Delta_{4}$ well defined in the expression (2.21) for $\varSigma_{{\vphantom{L}}_{\rm RFT}}$. This in fact implies the invertibility of the Faddeev–Popov operator in this gauge, which up to a coefficient coincides with the Paneitz operator, $Q_{{\vphantom{L}}_{\rm RFT}}=4\Delta_{4}$, and thus guarantees the local uniqueness of the conformal gauge fixing procedure. Moreover, the above observation repudiates the harmful role of double poles in the RFT action claimed in [15]. The absence of infrared dangerous double poles is explicit in the lowest order of the curvature expansion for $\varSigma_{{\vphantom{L}}_{\rm RFT}}$, which reads $\varSigma_{{\vphantom{L}}_{\rm RFT}}=-\frac{1}{6\,\Box}R+O(\Re^{2}),$ (2.23) in view of the fact that the Gauss–Bonnet density is quadratic in the curvature, $\sqrt{g}E=O(\Re^{2})$. Higher orders of this expansion are also safe because of the total derivative nature of $\sqrt{g}\,E$.
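The leading term (2.23) follows in one line from the definitions collected above, with $\mathcal{E}_{4}=E-\tfrac{2}{3}\Box R$ as in (1.4), $\sqrt{g}\,E=O(\Re^{2})$ and the expansion (2.22) of the inverse Paneitz operator:

```latex
\varSigma_{{\vphantom{L}}_{\rm RFT}}
   =\frac{1}{4}\,\frac{1}{\Delta_{4}}\Big(E-\frac{2}{3}\,\Box R\Big)
   =-\frac{1}{6}\,\frac{1}{\Box^{2}}\,\Box R+O(\Re^{2})
   =-\frac{1}{6\,\Box}\,R+O(\Re^{2}).
```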
In the lowest order, quadratic in the curvature, with the above approximation for $\varSigma_{{\vphantom{L}}_{\rm RFT}}$ the action equals $\varGamma_{{\vphantom{L}}_{\rm RFT}}[\,g\,]=-\frac{\gamma}{192\pi^{2}}\int d^{4}x\sqrt{g}\,R^{2}+O(\Re^{3}),$ (2.24) because all the terms depending on the parameter $\beta$ completely cancel out, and what remains coincides with the last quadratic term of (1.6). This coincidence fully matches the linear in curvature part of the trace anomaly (1.1) (its $\gamma$-term) generated by the quadratic action (1.6). Indeed, the conformal transformation of its nonlocal Weyl term contributes only to the $O(\Re^{2})$-part of the anomaly due to the fact that only its form factor $\ln(-\Box/\mu^{2})$ is not Weyl invariant, and the whole $\gamma$-term of the anomaly comes entirely from the $R^{2}$-part of (1.6). ### 2.2 Fradkin-Vilkovisky gauge Another conformal gauge arises in the context of the conformal off-shell extension of Einstein gravity suggested in [19] and corresponds to the 4-dimensional version of the Yamabe problem. The representative of the conformal group orbit is chosen to be the metric with a vanishing scalar curvature $\chi_{{\vphantom{L}}_{\rm FV}}[\,\bar{g}\,]=\bar{R},$ (2.25) which implies a nonlinear but still explicitly solvable equation for $\varSigma_{{\vphantom{L}}_{\rm FV}}$, $R[e^{-2\varSigma_{{\vphantom{L}}_{\rm FV}}}g_{\mu\nu}]=e^{3\varSigma_{{\vphantom{L}}_{\rm FV}}}\big{(}R-6\,\Box\big{)}\,e^{-\varSigma_{{\vphantom{L}}_{\rm FV}}}=0.$ (2.26) This solution reads $\displaystyle\varSigma_{{\vphantom{L}}_{\rm FV}}=-\ln\Big{(}1+\frac{1}{6}\frac{1}{\Box-R/6}R\Big{)},$ (2.27) $\displaystyle\lim\limits_{|x|\to\infty}e^{-\varSigma_{{\vphantom{L}}_{\rm FV}}}=1$ (2.28) in terms of the inverse of the conformal second order operator $\Box-\tfrac{1}{6}R$ subject to zero boundary conditions at infinity.
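The transformation law (2.26) can be verified symbolically on a conformally flat example. The following sketch (ours; it assumes the curvature conventions in which the unit 2-sphere has $R=+2$) computes the scalar curvature of $\bar{g}_{\mu\nu}=e^{-2\varSigma}\delta_{\mu\nu}$ directly from the Christoffel symbols and compares it with $e^{3\varSigma}(R-6\,\Box)\,e^{-\varSigma}$ at $R[g]=0$:

```python
import sympy as sp

# Check of Eq. (2.26) for flat g: the scalar curvature of
# gbar = exp(-2 Sigma) delta must equal exp(3 Sigma)*(-6 Box) exp(-Sigma),
# with Box the flat d'Alembertian; Sigma depends on x0 only for speed.
xs = sp.symbols('x0 x1 x2 x3')
n = 4
Sigma = sp.Function('Sigma')(xs[0])

gbar = sp.diag(*[sp.exp(-2*Sigma)]*n)
ginv = gbar.inv()

# Christoffel symbols Gamma^a_{bc} of gbar
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(gbar[d, b], xs[c])
                                     + sp.diff(gbar[d, c], xs[b])
                                     - sp.diff(gbar[b, c], xs[d]))/2
                         for d in range(n)))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = R^a_{bac} and the scalar curvature
def ricci(b, c):
    return sum(sp.diff(Gam[a][b][c], xs[a]) - sp.diff(Gam[a][a][b], xs[c])
               + sum(Gam[a][a][l]*Gam[l][b][c] - Gam[a][c][l]*Gam[l][a][b]
                     for l in range(n))
               for a in range(n))

R_scalar = sp.simplify(sum(ginv[b, c]*ricci(b, c)
                           for b in range(n) for c in range(n)))

# Right-hand side of (2.26) at R[g] = 0
rhs = sp.simplify(-6*sp.exp(3*Sigma)*sp.diff(sp.exp(-Sigma), xs[0], 2))
```

The two expressions agree identically in $\varSigma$, confirming the nonlinear transformation law on this restricted class of metrics.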
This inverse operator also admits a covariant curvature expansion and in the lowest order yields a function $\varSigma_{{\vphantom{L}}_{\rm FV}}$ coinciding with that of the RFT gauge (2.23), $\varSigma_{{\vphantom{L}}_{\rm FV}}=\varSigma_{{\vphantom{L}}_{\rm RFT}}+O(\Re^{2}),$ (2.29) and, therefore, generates in the quadratic order the same expression for the anomaly action $\varGamma_{{\vphantom{L}}_{\rm FV}}=\varGamma_{{\vphantom{L}}_{\rm RFT}}+O(\Re^{3}).$ (2.30) Using Eqs. (2.17) and (2.21) it is easy to see that the difference between the RFT and FV actions is given by the exact expression $\displaystyle\varGamma_{{\vphantom{L}}_{\rm RFT}}-\varGamma_{{\vphantom{L}}_{\rm FV}}$ $\displaystyle=\frac{1}{16\pi^{2}}\int d^{4}x\,\sqrt{g}\big{(}\varSigma_{{\vphantom{L}}_{\rm RFT}}-\varSigma_{{\vphantom{L}}_{\rm FV}}\big{)}$ $\displaystyle\times\Big{[}\,\alpha\,C^{2}+2\beta\,\Delta_{4}\big{(}\varSigma_{{\vphantom{L}}_{\rm RFT}}-\varSigma_{{\vphantom{L}}_{\rm FV}}\big{)}\Big{]},$ (2.31) bilinear in the local Weyl squared term and the conformally invariant nonlocal functional $\displaystyle\varSigma_{{\vphantom{L}}_{\rm RFT}}-\varSigma_{{\vphantom{L}}_{\rm FV}}$ $\displaystyle=\frac{1}{4}\,\frac{1}{\Delta_{4}}\,\mathcal{E}_{4}$ $\displaystyle+\ln\Big{(}1+\frac{1}{6}\frac{1}{\Box-R/6}R\Big{)}=O(\Re^{2}).$ (2.32) Therefore, within perturbation theory these two actions still coincide in the cubic order and become different only starting from the fourth order in the curvature. Perturbatively both terms of (2.32) produce similar nonlocal structures of tree-like nature, that is, terms characteristic of the tree-level approximation in field theory. Such terms are composed of the powers of inverse d'Alembertians acting on the curvature tensor structures or on the products of similar nonlocal tensor structures built according to the same pattern. However, taken separately as exact entities they have essentially different types of nonlocality.
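The leading-order coincidence (2.29) can also be checked symbolically. In the following sketch (ours) the operators $\Box$ and $R$ are replaced by commuting symbols, which is legitimate for the term linear in $R$:

```python
import sympy as sp

# Expansion of the FV solution (2.27) to first order in the scalar
# curvature, with Box and R treated as commuting c-numbers; the result
# must reproduce Sigma_RFT = -R/(6 Box) of Eq. (2.23).
R, Box = sp.symbols('R Box', positive=True)

Sigma_FV = -sp.log(1 + sp.Rational(1, 6)*R/(Box - R/6))
leading = sp.series(Sigma_FV, R, 0, 2).removeO()   # -> -R/(6*Box)
```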
The RFT action formalism involves the Green’s function of the fourth order Paneitz operator, whereas the FV version of the action is based on the Green’s function of the second order operator $\Box-\tfrac{1}{6}R$. Both operators are conformally covariant, but the Weyl transformation of $\Box-\tfrac{1}{6}R$ is different from (2.8), $\Box-\tfrac{1}{6}R=e^{-3\sigma}\,\big{(}\bar{\Box}-\tfrac{1}{6}\bar{R}\big{)}\,e^{\sigma},\qquad g_{\mu\nu}=e^{2\sigma}\bar{g}_{\mu\nu}.$ (2.33) Moreover, the FV action formalism involves a special logarithmic nonlinearity absent in the RFT gauge fixing. The action of the Paneitz operator derivatives in (2.16) can destroy this logarithmic structure, but the $\varSigma_{{\vphantom{L}}_{\rm FV}}C^{2}$-term in $\varGamma_{{\vphantom{L}}_{\rm FV}}$ still contains it intact. A further comparison of the RFT and FV actions can be done along the lines of their “naturalness”. The RFT gauge (2.20) is based on structures organically belonging to the conformal anomaly formalism in the sense that it involves the same fundamental objects—the Branson curvature $\mathcal{E}_{4}$ and the relevant Paneitz operator $\Delta_{4}$ which are immanently present in the flow of the anomalous action along the conformal group orbit (2.6). One could even interpret this gauge as the one providing the extremum of $\beta$-terms in this expression with respect to the variation of the orbit parameter $\sigma$. This interpretation is, however, erroneous because $g_{\mu\nu}$, $\bar{g}_{\mu\nu}$ and $\sigma$ cannot be treated as independent variables in Eq. (2.6). On the contrary, the FV gauge (2.25) uses a somewhat extraneous entity—the scalar curvature—which is singled out only by the fact that it turns out to be the bearer of the metric conformal mode. As a result, the advantage of the FV gauge is that it does not involve higher than second order derivatives and does not produce double pole nonlocalities.
Another advantage is that the equation (2.19) disentangling the FV anomaly action from the full effective action becomes much simpler in view of $\bar{R}=0$, $\varGamma[\,g\,]=\varGamma_{{\vphantom{L}}_{\rm FV}}[\,g\,]+\varGamma[\,\bar{g}\,]\,\big{|}_{\bar{g}_{\mu\nu}[\,g\,]},$ (2.34) where $\bar{g}_{\mu\nu}[\,g\,]=e^{-2\varSigma_{{\vphantom{L}}_{\rm FV}}[\,g\,]}g_{\mu\nu}$, which is obviously consistent with the fact that $\varGamma_{{\vphantom{L}}_{\rm FV}}[\,\bar{g}\,]=0$ because $\varSigma_{{\vphantom{L}}_{\rm FV}}[\,\bar{g}\,]\equiv 0$. As compared to the FV version, a technical disadvantage of the RFT gauge and action is the presence of the fourth order derivatives of the Paneitz operator. Due to this the RFT version turns out to be vulnerable from the viewpoint of possible generalizations. For example, a modification of the gauge (2.20) by the additional Weyl squared term, $\chi_{{\vphantom{L}}_{\rm RFT}}\to\chi_{{\vphantom{L}}_{\rm RFT}}+aC^{2}$, would not work, because the relevant modification $\varSigma_{{\vphantom{L}}_{\rm RFT}}\to\varSigma_{{\vphantom{L}}_{\rm RFT}}+a\,(2\Delta_{4})^{-1}C^{2}$ is badly defined for the reasons described in footnote 2 above—the additional term should have a total derivative structure. The generalization to spacetimes with nontrivial topology is also not straightforward, because the condition (2.20) should not contradict a nonvanishing Euler number of the manifold, which for compact manifolds without a boundary reads $e_{E}=\tfrac{1}{32\pi^{2}}\int d^{4}x\sqrt{g}\,E(x)$.
For example, for a compact manifold of finite volume $V=\int d^{4}x\,\sqrt{g}$ the gauge (2.20) can be chosen to be $\chi(\bar{g})=\sqrt{\bar{g}}\,\Big{(}\bar{E}-\frac{2}{3}\bar{\Box}\bar{R}-32\pi^{2}\frac{e_{E}}{\bar{V}}\Big{)},$ (2.35) but this leads to a nonlinear integro-differential equation for the relevant $\varSigma$ $\displaystyle 4\sqrt{g}\Delta_{4}\varSigma$ $\displaystyle=\sqrt{g}\bigg{(}E-\frac{2}{3}\Box R-32\pi^{2}\frac{e^{-4\varSigma}}{\langle\,e^{-4\varSigma}\,\rangle}\frac{e_{E}}{V}\bigg{)},$ (2.36) $\displaystyle\langle\,e^{-4\varSigma}\,\rangle$ $\displaystyle\equiv\frac{1}{V}\int d^{4}x\,\sqrt{g}e^{-4\varSigma},$ (2.37) which apparently can be solved analytically only by perturbations in $e_{E}/V$. Unless stated otherwise, below we consider asymptotically flat spacetime with a trivial topology, whose Euler characteristics should be modified by the boundary term. For generic 4-dimensional manifolds with a smooth boundary it reads $e_{E}=\tfrac{1}{32\pi^{2}}\Big{(}\int_{\cal M}d^{4}x\sqrt{g}\,E(x)+\int_{\cal\partial M}d^{3}x\sqrt{\gamma}\varOmega(x)\Big{)},$ (2.38) where $\gamma=\det\gamma_{ab}$ and $\gamma_{ab}$ is the induced metric on $\partial{\cal M}$. In the asymptotically flat case, due to the contribution of $\partial{\cal M}$ at infinity, $|\,x\,|\to\infty$, it equals $1$, so that everywhere in what follows the bulk part of the Euler characteristics is $\tfrac{1}{32\pi^{2}}\int d^{4}x\sqrt{g}\,E(x)\equiv e^{\prime}_{E}=e_{E}-1=0$.333I am grateful to M. Duff for this observation. An explicit and simple expression for the boundary term of the Euler characteristics in the 4-dimensional case can be found in [36], $\varOmega=\tfrac{1}{4}R_{a\perp b\perp}K^{ab}+16\det K^{a}_{b}$, where $K_{ab}=\nabla_{a}n_{b}$ is the extrinsic curvature of the boundary, and $\perp$ denotes the projection on the outward pointing normal vector $n^{\mu}$.
The last term in $\varOmega$ exactly reproduces the value of the Euler number $e_{E}=1$ for flat and asymptotically flat spaces [54]. ## 3 Conformal anomaly and covariant curvature expansion Despite the diversity of nonlocal structures of the RFT and FV versions of the anomaly action, neither of them seems to appear in the conventional perturbation theory for the quantum effective action. The covariant form of this perturbation theory in curved spacetime (1.6) was pioneered in [3], but its logarithmic nonlocal formfactor did not resemble the nonlocal operators of the RFT action (1.3). Here we show how, in spite of these discrepancies, the anomaly action originates from the covariant perturbation theory of [38, 39, 40]. This perturbation theory arose as a concrete implementation of the ideas of [3] as an expansion in powers of covariant tensors of spacetime and fibre bundle curvatures and other covariant background field objects. This expansion is completely equivalent to the standard Feynman diagrammatic technique and represents its resummation, converting the original perturbation series in noncovariant objects, like matter and metric field perturbations on top of the flat and empty spacetime background, into a series in powers of covariant field strengths, denoted collectively below by $\Re$ and including the spacetime and fibre bundle curvatures.
To be more specific, consider the theory with the inverse propagator on top of the nontrivial field background $\hat{F}(\nabla)=F^{A}_{B}(\nabla)$, hat denoting the matrix structure of the operator acting in the space of fields $\varphi=\varphi^{A}(x)$ with a generic spin-tensor index $A$ and $\nabla=\nabla_{\mu}$ denoting the covariant derivative with respect to the corresponding fibre bundle connection, $\hat{F}(\nabla)=\Box+\hat{P}-\frac{\hat{1}}{6}R,\qquad\Box=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}.$ (3.1) This operator is characterized by the “curvatures”—metric Riemann tensor with its Ricci contractions, fibre bundle curvature $\hat{\cal R}_{\mu\nu}$ determining the commutator of covariant derivatives, $[\nabla_{\mu},\nabla_{\nu}]\,\varphi=\hat{\cal R}_{\mu\nu}\,\varphi$, and the potential term $\hat{P}$ (the term $-\tfrac{\hat{1}}{6}R$ is disentangled from the operator potential for reasons of convenience), $\Re=(R^{\mu}{}_{\nu\alpha\beta},R_{\mu\nu},R,\hat{\cal R}_{\mu\nu},\hat{P}).$ (3.2) In covariant perturbation theory the one-loop effective action gets expanded in powers of these curvatures $\varGamma=\frac{1}{2}{\rm Tr}\ln F(\nabla)=\\!\\!\\!\\!\stackrel{{\scriptstyle\text{local power div}}}{{\overbrace{\varGamma_{0}+\varGamma_{1}}}}\\!\\!\\!\\!\\!\\!+\,\varGamma_{2}+\varGamma_{3}+O(\Re^{4}),$ (3.3) where $\varGamma_{n}\sim\Re^{n}$. 
Within dimensional regularization of $2\omega$-dimensional spacetime, $\omega\to 2$, the zeroth and first order terms of the expansion represent pure power divergences (note that we consider the case of a massless theory, or the theory where the mass matrix is included in the potential term $\hat{P}$ and treated by perturbations), so that these two terms are annihilated by the regularization, while the second order term is given by the expression [38] $\displaystyle\varGamma^{(2)}_{\rm dim\;reg}$ $\displaystyle=-\frac{\Gamma(2-\omega)\Gamma(\omega+1)\Gamma(\omega-1)}{2(4\pi)^{\omega}\Gamma(2\omega+2)}\,\mu^{4-2\omega}$ $\displaystyle\times\int dx\,\sqrt{g}\,{\rm tr}\,\Big{\\{}R_{\mu\nu}(-\Box)^{\omega-2}R^{\mu\nu}\hat{1}$ $\displaystyle-\frac{1}{18}(4-\omega)(\omega+1)R(-\Box)^{\omega-2}R\hat{1}$ $\displaystyle-\frac{2}{3}(2-\omega)(2\omega+1)\,\hat{P}(-\Box)^{\omega-2}R$ $\displaystyle+2(4\omega^{2}-1)\,\hat{P}(-\Box)^{\omega-2}\hat{P}$ $\displaystyle+(2\omega+1)\hat{\cal R}_{\mu\nu}(-\Box)^{\omega-2}\hat{\cal R}^{\mu\nu}\Big{\\}},$ (3.4) where $\omega=\tfrac{d}{2}\to 2$. Here ${\rm tr}$ denotes the matrix trace and the concrete coefficients implement the originally conjectured structure of dimensionally regularized effective action Lagrangian, $\Re(-\Box)^{\omega-2}\Re$, that was blueprinted in [3]. What is important and should be especially emphasized is that $\Box=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}$ means here the full covariant d’Alembertian acting on a respective scalar $R$, tensor $R_{\mu\nu}$ or spintensor $\hat{\cal R}_{\mu\nu}$ and $\hat{P}$ objects. 
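The pole content of the prefactor in (3.4) can be extracted exactly. A small symbolic check (our illustration), using $(2-\omega)\,\Gamma(2-\omega)=\Gamma(3-\omega)$ so that the coefficient of the $1/(2-\omega)$ pole follows from a regular substitution $\omega=2$:

```python
import sympy as sp

# Coefficient of the 1/(2-w) pole of the prefactor
# -Gamma(2-w) Gamma(w+1) Gamma(w-1) / (2 (4 pi)^w Gamma(2w+2))
# in Eq. (3.4); (2-w)*Gamma(2-w) = Gamma(3-w) removes the singularity.
w = sp.Symbol('w')
pole_coeff = sp.simplify(
    (-sp.gamma(3 - w)*sp.gamma(w + 1)*sp.gamma(w - 1)
     / (2*(4*sp.pi)**w*sp.gamma(2*w + 2))).subs(w, 2))
print(pole_coeff)   # -1/(1920*pi**2)
```

The magnitude $1/1920\pi^{2}=\frac{1}{32\pi^{2}}\cdot\frac{1}{60}$ is exactly the coefficient of the $R_{\mu\nu}\gamma(-\Box)R^{\mu\nu}$ term in the renormalized action (3.6) below, as it should be, since the $\ln(-\Box/\mu^{2})$ formfactor accompanies the subtracted pole.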
For brevity we will consider the case of a single conformal scalar field with $\hat{1}=1$, $\hat{P}=0$, $\hat{\cal R}_{\mu\nu}=0$ and the following values of the trace anomaly coefficients444The coefficients have the opposite sign to those of $b=-\alpha/16\pi^{2}$ and $b^{\prime}=-\beta/16\pi^{2}$ in [4], because in our case the stress tensor is defined with respect to the Euclidean effective action $\varGamma=-i\varGamma_{L}$ in contrast to the definition of $T^{\mu\nu}=2g^{-1/2}\delta\varGamma_{L}/\delta g_{\mu\nu}$ in the Lorentzian signature spacetime of [4]. Comparison with [5] should also take into account another sign of the stress tensor defined by the variation with respect to the contravariant metric. $\alpha=-\frac{1}{120},\quad\beta=\frac{1}{360},\quad\gamma=-\frac{1}{180},$ (3.5) for which the action (3.4) takes the form—a particular case of (1.6), $\displaystyle\varGamma^{(2)}_{\rm ren}$ $\displaystyle=\frac{1}{32\pi^{2}}\int dx\,\sqrt{g}\,\Big{\\{}\frac{1}{60}\Big{[}R_{\mu\nu}\gamma(-\Box)R^{\mu\nu}$ $\displaystyle-\frac{1}{3}R\gamma(-\Box)R\Big{]}+\frac{R^{2}}{1080}\Big{\\}}$ $\displaystyle=\frac{1}{32\pi^{2}}\int dx\,\sqrt{g}\,\Big{\\{}\frac{1}{120}C_{\mu\nu\alpha\beta}\gamma(-\Box)C^{\mu\nu\alpha\beta}$ $\displaystyle+\frac{R^{2}}{1080}\Big{\\}}+O(\Re^{3}).$ (3.6) Here $\gamma(-\Box)$ is the nonlocal formfactor (in minimal subtraction scheme with $\ln(4\pi)$ and Euler constants absorbed in $\mu$) $\gamma(-\Box)=\ln\Big{(}\\!-\frac{\Box}{\mu^{2}}\Big{)}-\frac{16}{15},$ (3.7) and the transition to the last line is valid up to the higher order terms in curvature and based on the nonlocal generalization of the identity $\int d^{4}x\,\sqrt{g}\,C^{2}=2\int d^{4}x\,\sqrt{g}\,(R_{\mu\nu}R^{\mu\nu}-\tfrac{1}{3}R^{2})$ (3.8) derived in [39, 40] by integration by parts and use of the nonlocal representation of the Riemann tensor in terms of the Ricci one (see footnote 5 below). 
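The local core of the identity (3.8) can be recalled explicitly (a reminder added by us, not a new result) from the standard algebra of 4-dimensional quadratic invariants:

```latex
% In four dimensions:
C^{2}=R_{\mu\nu\alpha\beta}^{2}-2R_{\mu\nu}^{2}+\tfrac{1}{3}R^{2},
\qquad
E=R_{\mu\nu\alpha\beta}^{2}-4R_{\mu\nu}^{2}+R^{2},
% hence
C^{2}-E=2\big(R_{\mu\nu}^{2}-\tfrac{1}{3}R^{2}\big),
```

so that for trivial topology, where the bulk integral $\int d^{4}x\sqrt{g}\,E$ vanishes (cf. footnote 3), the integrated identity (3.8) follows immediately; the nonlocal generalization of [39, 40] promotes it to formfactor-weighted quadratic forms valid up to $O(\Re^{3})$.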
The first term of this action is obviously conformally invariant in the quadratic order, so that the linear in curvature part of the anomaly originates from the last term, which is the RFT (or FV) action (2.24) in the quadratic approximation with $\gamma=-1/180$. Thus, the RFT or FV action is fully recovered in this approximation from perturbation theory and, as expected, turns out to be local. ### 3.1 Cubic order The quadratic order of the covariant curvature expansion is, in fact, a trivial generalization of the flat space expressions for self-energy operators of the Feynman diagrammatic technique, because $\ln(-\Box/\mu^{2})$ is just a straightforward replacement of the typical momentum space formfactor $\ln(p^{2}/\mu^{2})$ by its position space version. At higher orders the situation becomes much more complicated and is usually represented in terms of correlators of the stress tensor and other observables, written down in the momentum space representation; see [55, 56, 57] for the treatment of generic conformal field theories. These correlators are, of course, contained in the effective action expanded in curvatures which, for reasons of general covariance, we prefer to consider in the coordinate representation. In this representation the effective action becomes, for each order $N$ in the curvature, a sum of nonlocal monomials $\int d^{4}x_{1}\cdots d^{4}x_{N}\,F(x_{1},\ldots,x_{N})\nabla...\nabla\Re(x_{1})...\Re(x_{N})$ (3.9) with nonlocal multiple-point coefficients and covariant derivatives somehow acting on the product of curvatures at their various points. The absence of a convenient and generally covariant momentum space representation forces us to work in the coordinate representation and to invent a special language which simplifies the formalism and makes it manageable [38, 39, 40].
This language is based on the operator representation of nonlocal formfactors, $\displaystyle F(x_{1},\ldots,x_{N})$ $\displaystyle=\varGamma(\nabla_{1},\ldots,\nabla_{N})$ $\displaystyle\times\delta(x_{1},x_{2})\,\delta(x_{1},x_{3})\cdots\delta(x_{1},x_{N}),$ (3.10) where $\varGamma(\nabla_{1},\ldots,\nabla_{N})$ is the operator valued function of $N$ independent covariant derivatives such that each $\nabla_{i}$ is acting on its own $x_{i}$. This allows one to write the orders of perturbation theory as $\displaystyle\varGamma^{(N)}$ $\displaystyle=\frac{1}{2(4\pi)^{2}}\int\\!d^{4}x\,\sqrt{g}\sum\limits_{M}\varGamma_{M}(\nabla_{1},\ldots,\nabla_{N})$ $\displaystyle\times I_{M}(x_{1},\ldots,x_{N})\,\Big{|}_{\\{x\\}=x},$ (3.11) where summation runs over all invariant monomials in curvatures of a given $N$-th order $I_{M}(x_{1},\ldots,x_{N})\sim\nabla\cdots\nabla\Re(x_{1})\cdots\Re(x_{N})$ (3.12) and after the action of all independent derivatives on their arguments all these arguments $\\{x\\}=(x_{1},\ldots x_{N})$ have to be identified. In the cubic order for the full set of curvatures (3.2) there are 29 such invariant structures built of these curvatures and their covariant derivatives with all indices fully contracted with each other. Moreover, in view of the scalar (no free indices) nature of the formfactors and the formal identity $\nabla_{1}+\nabla_{2}+\nabla_{3}=0$ (reflecting the possibility of integration by parts without surface terms, which is the counterpart of momentum conservation in Feynman diagrams) the formfactors of $\varGamma^{(3)}$ can be written down as functions of three d’Alembertians $\Box_{1}$, $\Box_{2}$ and $\Box_{3}$ independently acting on three arguments of $I_{M}(x_{1},x_{2},x_{3})$.
Thus, the cubic order reads $\displaystyle\varGamma^{(3)}$ $\displaystyle=\frac{1}{2(4\pi)^{2}}\int\\!dx\,\sqrt{g}\sum\limits^{29}_{M=1}\varGamma_{M}(\Box_{1},\Box_{2},\Box_{3})$ $\displaystyle\times I_{M}(x_{1},x_{2},x_{3})\,\Big{|}_{\\{x\\}=x}.$ (3.13) The list of cubic invariants and their formfactors is presented in [40, 18, 20]. It is very long and, as its details are not necessary for our purposes, we will not present it here in full. We only give the general structure of the nonlocal formfactors of these invariants. It reads as a sum of three different groups of terms $\displaystyle\varGamma_{M}(\Box_{1},\Box_{2},\Box_{3})$ $\displaystyle=A_{M}\,\varGamma(\Box_{1},\Box_{2},\Box_{3})$ $\displaystyle+\sum_{1\leq i<k}^{3}\frac{D^{ik}_{M}}{(\Box_{i}-\Box_{k})}\ln\frac{\Box_{i}}{\Box_{k}}+B_{M}.$ (3.14) Here $\varGamma(\Box_{1},\Box_{2},\Box_{3})$ is the fundamental cubic formfactor corresponding to the triangular Feynman graph of a massless theory with unit vertices [58], $\displaystyle\varGamma(\Box_{1},\Box_{2},\Box_{3})$ $\displaystyle=\int\limits_{\alpha\geq 0}\frac{d^{3}\alpha\,\delta(1-\alpha_{1}-\alpha_{2}-\alpha_{3})}{\alpha_{1}\alpha_{2}(-\Box_{3})+\alpha_{1}\alpha_{3}(-\Box_{2})+\alpha_{2}\alpha_{3}(-\Box_{1})},$ (3.15) which cannot be reduced to an elementary function.
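Although not reducible to elementary functions, the fundamental formfactor (3.15) is easily evaluated numerically. A minimal sketch (ours), valid at Euclidean points $s_{i}=-\Box_{i}>0$ where the Feynman parameter integral over the 2-simplex converges:

```python
from scipy import integrate

def triangle_formfactor(s1, s2, s3):
    """Gamma(Box1, Box2, Box3) of Eq. (3.15) at s_i = -Box_i > 0,
    as the Feynman parameter integral over the 2-simplex
    alpha1 + alpha2 + alpha3 = 1 (alpha3 eliminated by the delta)."""
    def integrand(a2, a1):
        a3 = 1.0 - a1 - a2
        return 1.0/(a1*a2*s3 + a1*a3*s2 + a2*a3*s1)
    val, _err = integrate.dblquad(integrand, 0.0, 1.0,
                                  0.0, lambda a1: 1.0 - a1)
    return val
```

The numerics confirm two exact properties visible from (3.15): total symmetry in the three arguments and homogeneity of degree $-1$, $\varGamma(\lambda\Box_{1},\lambda\Box_{2},\lambda\Box_{3})=\lambda^{-1}\varGamma(\Box_{1},\Box_{2},\Box_{3})$.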
The operator-valued coefficients $A_{M}$, $B_{M}$ and $D_{M}^{ik}$ are rational functions of the three $\Box$-arguments with a polynomial numerator $P(\Box_{1},\Box_{2},\Box_{3})$ and a denominator containing, together with the product $\Box_{1}\Box_{2}\Box_{3}$, also powers of a special quadratic form $D$ of these arguments, $\displaystyle A_{M},\;D^{ik}_{M},\;B_{M}\sim\frac{P(\Box_{1},\Box_{2},\Box_{3})}{\Box_{1}\Box_{2}\Box_{3}\,D^{L}},\;L\leq 6,$ (3.16) $\displaystyle D={\Box_{1}}^{2}\\!+{\Box_{2}}^{2}\\!+{\Box_{3}}^{2}\\!-\\!2\Box_{1}\Box_{2}\\!-\\!2\Box_{1}\Box_{3}\\!-\\!2\Box_{2}\Box_{3}.$ (3.17) In this cubic order of the curvature expansion the conformal anomaly (1.1), which is quadratic in curvatures, was explicitly derived by direct variation of the metric in [59]. Though this derivation demonstrated a nontrivial localization of the nonlocal terms under a straightforward tracing of the metric variational derivative, it remained rather technical and not very illuminating because it did not reveal the anomalous part of the action. It turns out, however, that the transition to another basis of curvature invariants, suggested in [20, 18], explicitly disentangles this part. ### 3.2 Conformal resummation: Fradkin–Vilkovisky anomaly action The recovery of the anomaly part of the action and of its conformally invariant part is based on the simple idea that the latter should consist of a series of Weyl invariant structures. The construction of Weyl invariants can be done by the gauge fixing procedure of the above type—choosing the representative metric on the group orbit by imposing the conformal gauge. Obviously, the set of invariants surviving after imposing this gauge will be minimal if the gauge explicitly annihilates the maximum number of invariants in their original full set.
For this reason the FV gauge (2.25) is much easier to use for the separation of the total set of invariants into the Weyl type ones and those which vanish when the gauge is enforced. As $R$ is one of the curvatures in the set $\Re$, the FV gauge is more useful for such a separation than the RFT gauge (2.20), which nonlinearly intertwines all the curvatures. Intuitively this is also clear because $R$, in contrast to $C^{\alpha}{}_{\beta\mu\nu}$, is the bearer of the conformal mode. In the purely metric sector such a separation is attained by the transition to the new curvature basis [20], $\Re=\big{(}\,R^{\mu}_{\;\nu\alpha\beta},R_{\mu\nu},R\,\big{)}\to\tilde{\Re}=\big{(}\,C^{\alpha}_{\;\;\beta\mu\nu},R\,\big{)},$ (3.18) via expressing the Ricci tensor in terms of the Weyl tensor and the Ricci scalar555In fact, the original basis and the curvature expansion of [38, 39, 40] consisted of $R_{\mu\nu}$ and $R$ because in asymptotically flat Euclidean spacetime the Riemann tensor can be expressed as a nonlocal power series in the Ricci tensor, $R_{\alpha\beta\mu\nu}=\frac{1}{\Box}\big{(}\nabla_{\mu}\nabla_{\alpha}R_{\nu\beta}-\nabla_{\nu}\nabla_{\alpha}R_{\mu\beta}\big{)}-(\alpha\leftrightarrow\beta)+O(\Re^{2})$—a corollary of the contracted Bianchi identity.
This expression follows from the contracted Bianchi identity, which for the Weyl tensor reads $\nabla^{\beta}\nabla^{\alpha}C_{\alpha\mu\beta\nu}=\frac{1}{2}\Box R_{\mu\nu}-\frac{1}{6}\nabla_{\mu}\nabla_{\nu}R-\frac{g_{\mu\nu}}{12}\Box R+O(\Re^{2}).$ (3.19) This equation can be solved by iterations for the Ricci tensor in terms of a nonlocal series in powers of two objects—the Ricci scalar $R$ and the new traceless (and up to quadratic order transverse) tensor $C_{\mu\nu}$, which is itself a nonlocal derivative of the Weyl tensor, $C_{\mu\nu}=\frac{2}{\Box}\nabla^{\beta}\nabla_{\alpha}C^{\alpha}_{\;\;\mu\beta\nu}.$ (3.20) The resulting series begins with $R_{\mu\nu}=C_{\mu\nu}+\frac{1}{3}\nabla_{\mu}\nabla_{\nu}\frac{1}{\Box}R+\frac{1}{6}g_{\mu\nu}R+O(\tilde{\Re}^{2}).$ (3.21) The effective action reexpansion implies the transition from $I_{M}(x_{1},\ldots,x_{n})$ to a new basis of invariants $\tilde{I}_{M}(x_{1},...x_{n})\sim\nabla...\nabla\tilde{\Re}(x_{1})...\tilde{\Re}(x_{n}),$ (3.22) which can be separated into the set of monomials $I_{C}(x_{1},\ldots,x_{n})$ involving only $C_{\mu\nu}$ and the set of monomials $I_{R}(x_{1},\ldots,x_{n})$ containing at least one scalar curvature factor, $\displaystyle I_{C}(x_{1},...x_{n})\sim$ $\displaystyle\nabla...\nabla C(x_{1})...C(x_{n}),$ (3.23) $\displaystyle I_{R}(x_{1},...x_{n})\sim$ $\displaystyle\nabla...\nabla R(x_{1})C(x_{2})...C(x_{n}),$ $\displaystyle\nabla...\nabla R(x_{1})R(x_{2})C(x_{3})...C(x_{n}),\;...$ (3.24) Expansion in the new basis of invariants implies, of course, the transition to a new set of their relevant formfactors $\varGamma_{M}(\nabla_{1},...\nabla_{n})\to\varGamma_{C}(\nabla_{1},...\nabla_{n}),\varGamma_{R}(\nabla_{1},...\nabla_{n}),$ (3.25) and the new expansion takes the form $\varGamma=W+\varGamma_{R},$ (3.26) where $W$ is the Weyl part and $\varGamma_{R}$ the mixed Weyl–Ricci scalar part of the whole expansion, which we write in abbreviated form (omitting multiple spacetime arguments and the operation of equating
them) $\displaystyle W$ $\displaystyle=\frac{1}{32\pi^{2}}\int\\!d^{4}x\,\sqrt{g}\sum\limits_{n,C}\varGamma_{C}^{(n)}I_{C}^{(n)},$ (3.27) $\displaystyle\varGamma_{R}$ $\displaystyle=\frac{1}{32\pi^{2}}\int\\!d^{4}x\,\sqrt{g}\sum\limits_{n,R}\varGamma_{R}^{(n)}I_{R}^{(n)}.$ (3.28) Note that $W$ and its Weyl basis invariants are not Weyl invariant, because apart from Weyl tensors they contain covariant derivatives and nontrivial formfactors which do not possess conformal invariance properties. The main statement on the conformal decomposition of the effective action of [20] is that $\varGamma[\,g\,]=\varGamma_{{\vphantom{L}}_{\rm FV}}[\,g\,]+W[\,\bar{g}\,]\,\Big{|}_{\,\bar{g}_{\mu\nu}=e^{-2\varSigma_{{\vphantom{L}}_{\rm FV}}[\,g\,]}g_{\mu\nu}},$ (3.29) where $\varGamma_{{\vphantom{L}}_{\rm FV}}[\,g\,]$ is exactly the FV anomaly action introduced above.^6 [Footnote 6: One can check that the last four lines of Eq. (24) in [20] form the exact expression for $\varGamma_{{\vphantom{L}}_{\rm FV}}[\,g\,]$ by taking into account that the function $Z$ in this equation coincides with $-\varSigma_{{\vphantom{L}}_{\rm FV}}$ and satisfies the equation $\Box Z+\tfrac{1}{2}(\nabla Z)^{2}=\tfrac{1}{3}R$.] The conformally invariant part is obtained by the “conformization” of $W$, while the rest of the effective action is exhausted by the Fradkin-Vilkovisky anomaly action. The invariant meaning of this representation is that the Ricci part of the full action is not independent, but fully determined by the anomaly and Weyl parts of the action. This representation looks like the realization of Eq. (2.34) within perturbation theory in curvatures. This result is likely to resolve a long-standing debate between the proponents of the Riegert action and adherents of the flat space perturbation expansion for the effective action with typical nonlocal logarithmic form factors of the form (3.7).
Note that these form factors do not contribute to the anomaly even though their coefficients are directly related to its expression (1.1). Rather they become Weyl invariant under the substitution of $\bar{g}_{\mu\nu}$ as their functional argument. The validity of the representation (3.29) was checked in the cubic order approximation for the effective action in [20]. The transition to the new basis of invariants in the second order leads to (see the second line of Eq. (3.6)), $\displaystyle W^{(2)}[\,g\,]$ $\displaystyle=\frac{1}{32\pi^{2}}\int dx\,\sqrt{g}\frac{1}{120}C_{\mu\nu\alpha\beta}\,\gamma(-\Box)C^{\mu\nu\alpha\beta},$ (3.30) $\displaystyle\varGamma^{(2)}_{R}[\,g\,]$ $\displaystyle=\frac{1}{32\pi^{2}}\int dx\,\sqrt{g}\frac{1}{1080}R^{2},$ (3.31) whereas in the third order it results in a great simplification of the “Ricci scalar” formfactors $\varGamma_{R}^{(3)}$ as compared to the original ones—in their expressions of the form (3.14) the coefficients $A,D^{ik}_{M},B_{M}$ of (3.16) completely lose powers of the function $D$ in the denominator. Thus, modulo the contributions of $\ln(\Box_{i}/\Box_{k})/(\Box_{i}-\Box_{k})$ the formfactors $\varGamma_{R}^{(3)}$ acquire the tree-level structure.
The terms with these factors get, however, completely absorbed with accuracy $O(\Re^{4})$ by the replacement $W^{(2)}[\,g_{\mu\nu}\,]\to W^{(2)}[\,\bar{g}_{\mu\nu}\,]$ in view of the following relation [40] $\displaystyle W^{(2)}[\,g\,]-W^{(2)}[\,\bar{g}\,]$ $\displaystyle\sim\int dx\,\sqrt{g}\,C_{\mu\nu\alpha\beta}\Big{[}\ln(-\Box)-\ln(-\bar{\Box})\Big{]}C^{\mu\nu\alpha\beta}$ $\displaystyle=\int dx\,\sqrt{g}\frac{\ln(\Box_{1}/\Box_{2})}{\Box_{1}-\Box_{2}}[\Box_{2}-\bar{\Box}_{2}]C_{1\,\mu\nu\alpha\beta}C^{\mu\nu\alpha\beta}_{2}+O(\Re^{4}),$ $\displaystyle\Box_{2}-\bar{\Box}_{2}\sim\Re_{3}+O(\Re^{2}),$ (3.32) where the right hand side is the set of relevant cubic order terms with the above factor acting on two Weyl tensors out of three curvatures in $RCC$-type invariants. What remains in the sector of cubic $I^{(3)}_{R}$-invariants is the set of tree-like nonlocal form factors which comprise the curvature expansion of the FV action up to $\tilde{\Re}^{3}$ order inclusive. This observation, made in [20], can be formalized as the following sequence of identical transformations $\displaystyle\varGamma[\,g\,]=W^{(2+3)}[\,g\,]+\varGamma^{(2+3)}_{R}[\,g\,]+O(\Re^{4})=W^{(2+3)}[\,\bar{g}\,]$ $\displaystyle\;\;\;\;+\mathop{\underbrace{\varGamma^{(2+3)}_{R}[\,g\,]+\\!\big{(}W^{(2)}[\,g\,]-W^{(2)}[\,\bar{g}\,]\big{)}}}_{{\varGamma^{(2+3)}_{{\vphantom{L}}_{\rm FV}}+O(\Re^{4})}}+O(\Re^{4}),$ (3.33) where the group of the last three terms forms the Fradkin–Vilkovisky anomaly action expanded with $\tilde{\Re}^{3}$-accuracy.
Explicitly the cubic part of $\varGamma_{{\vphantom{L}}_{\rm FV}}$ for the model of a single conformal scalar field with (3.5) reads [20] $\displaystyle\varGamma^{(3)}_{{\vphantom{L}}_{\rm FV}}$ $\displaystyle=-\frac{1}{32\pi^{2}}\int dx\,\sqrt{g}\bigg{\\{}\frac{1}{19440}\Big{(}\frac{2}{\Box_{3}}-\frac{\Box_{1}}{\Box_{2}\Box_{3}}\Big{)}R_{1}R_{2}R_{3}$ $\displaystyle+\frac{1}{1620\,\Box_{2}\Box_{3}}C_{1}^{\alpha\beta}\nabla_{\alpha}R_{2}\nabla_{\beta}R_{3}$ $\displaystyle+\frac{1}{540}\Big{(}\frac{4}{\Box_{2}}-\frac{1}{\Box_{3}}-\frac{2\,\Box_{1}}{\Box_{2}\Box_{3}}-\frac{\Box_{3}}{\Box_{1}\Box_{2}}\Big{)}C_{1}^{\mu\nu}C_{2\,\mu\nu}R_{3}$ $\displaystyle+\frac{1}{135}\Big{(}\frac{1}{\Box_{1}\Box_{2}}-\frac{2}{\Box_{2}\Box_{3}}\Big{)}\nabla^{\mu}C_{1}^{\nu\alpha}\nabla_{\nu}C_{2\,\mu\alpha}R_{3}$ $\displaystyle-\frac{1}{135\,\Box_{1}\,\Box_{2}\Box_{3}}\nabla_{\alpha}\nabla_{\beta}C_{1}^{\mu\nu}\nabla_{\mu}\nabla_{\nu}C_{2}^{\alpha\beta}R_{3}\bigg{\\}}\bigg{|}_{\,\\{x\\}=x},$ (3.34) where $C_{\mu\nu}$ is the “Weyl” part (3.20) of the Ricci tensor (3.21). ### 3.3 The problem of double poles and global conformal transformations The expression (3.34) shows that in the cubic order the anomalous effective action is free from double-pole nonlocal terms. For the FV action this is obviously true to all orders of the curvature expansion, since all its tree-type nonlocalities originate from the Green’s function of the conformal scalar operator $\Box-\tfrac{1}{6}R$.
However, for the RFT action double poles formally appear starting from the fourth order in the curvature, because the metric variation of $\varSigma_{\chi}=\varSigma_{\rm RFT}$ in (2.16) leads to the action of the inverse Paneitz operator upon the square of the Weyl tensor $C^{2}=C_{\mu\nu\alpha\beta}C^{\mu\nu\alpha\beta}$ due to a formal variational rule $\int d^{4}x\,\sqrt{g}\,C^{2}\delta\varSigma_{\rm RFT}=\int d^{4}x\,\sqrt{g}\,(\Delta_{4}^{-1}C^{2})\delta(\ldots).$ (3.35) This operation is not well defined, because $C^{2}$ is not a total derivative and the repeated action of $1/\Box$ upon generic test functions in four dimensions leads to IR divergent integrals—see footnote 2. In the cubic order of $\varGamma_{\rm RFT}$ this problem does not arise because of the extra $\Box$ factor in $\Box R$, as was checked in [16] by explicit calculations of $\langle\,TTT\,\rangle$ correlators, but one is not guaranteed to be free from this difficulty for higher order correlators. In fact this is a typical situation of IR divergences in two dimensions, where the kernel of $1/\Box$ has a logarithmic dependence at infinity, and the correlators of undifferentiated conformal fields $\phi$ are IR divergent, while the correlators $\langle\partial\phi(x)\partial\phi(y)\cdots\rangle$ stay well defined. Apparently, the same property in four dimensions also underlies the absence of unitarity in dipole theories with $1/\Box^{2}$-type propagators recently discussed in [60]. The mechanism of transition from operators to their derivatives in shift symmetric theories actually helps to justify the RFT action as a source of well defined stress tensor correlators and extend the validity of the results in [16] to all higher orders.
This follows from the observation that the Paneitz operator reads $\sqrt{g}\Delta_{4}=\partial_{\mu}\Big{[}\sqrt{g}\big{(}\nabla^{\mu}\nabla_{\nu}+2R^{\mu\nu}-\frac{2}{3}Rg^{\mu\nu}\big{)}\Big{]}\partial_{\nu}$ (3.36) and, therefore, perturbatively on the flat space background can be represented as $\sqrt{g}\Delta_{4}=\tilde{\Box}^{2}+V,\;\tilde{\Box}=\delta^{\mu\nu}\partial_{\mu}\partial_{\nu},\;V=\overrightarrow{\partial_{\mu}V^{\mu\nu}\partial_{\nu}},$ (3.37) where the perturbation $V=O(\Re)$ has a special form—another differential operator $V^{\mu\nu}$ sandwiched between two derivatives with all derivatives acting to the right (which is indicated by the arrow). Within perturbation theory in powers of $V$ the action of the inverse operator on a generic test function $\psi$—scalar density—could have been understood as the expansion $\displaystyle\phi=\frac{1}{\sqrt{g}\Delta_{4}}\psi$ $\displaystyle=\sum\limits_{n=0}^{\infty}\frac{(-1)^{n}}{\tilde{\Box}^{2}}\left(V\frac{1}{\tilde{\Box}^{2}}\right)^{n}\psi$ $\displaystyle=\sum\limits_{n=0}^{\infty}\frac{(-1)^{n}}{\tilde{\Box}^{2}}\left(\overrightarrow{\partial_{\mu}V^{\mu\nu}}\frac{1}{\tilde{\Box}^{2}}\partial_{\nu}\right)^{n}\psi,$ (3.38) where we deliberately permuted the factors of $\partial_{\nu}$ and $1/\tilde{\Box}^{2}$ using their formal commutativity in order to provide the action of $1/\tilde{\Box}^{2}$ on the total derivative function. Thus all terms of this expansion except the first one become infrared finite. The first term $(1/\tilde{\Box}^{2})\psi$, however, makes this function $\phi$ ill defined. 
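The expansion (3.38) is nothing but the formal operator Neumann (geometric) series for the inverse of $\tilde{\Box}^{2}+V$; as a reminder of its origin, one can check order by order in $V$ that

```latex
\big(\tilde{\Box}^{2}+V\big)
\sum_{n=0}^{\infty}\frac{(-1)^{n}}{\tilde{\Box}^{2}}
\Big(V\,\frac{1}{\tilde{\Box}^{2}}\Big)^{n}
= \sum_{n=0}^{\infty}(-1)^{n}
  \Big(V\,\frac{1}{\tilde{\Box}^{2}}\Big)^{n}
+ \sum_{n=0}^{\infty}(-1)^{n}
  \Big(V\,\frac{1}{\tilde{\Box}^{2}}\Big)^{n+1}
= 1,
```

since all terms with $n\geq 1$ cancel pairwise between the two sums.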
On the contrary, its derivative $\partial_{\alpha}\phi$ becomes consistent if one understands the first term of the expansion as $(1/\tilde{\Box}^{2})\partial_{\alpha}\psi$, so that the prescription for the operation of $\partial_{\alpha}(1/\sqrt{g}\Delta_{4})$ on a generic non-derivative-type test function reads as $\partial_{\alpha}\frac{1}{\sqrt{g}\Delta_{4}}\psi=\sum\limits_{n=0}^{\infty}\frac{(-1)^{n}}{\tilde{\Box}^{2}}\partial_{\alpha}\left(\overrightarrow{\partial_{\mu}V^{\mu\nu}}\frac{1}{\tilde{\Box}^{2}}\partial_{\nu}\right)^{n}\psi.$ (3.39) With this prescription the term $C^{2}\varSigma_{\rm RFT}$ in the RFT action becomes perturbatively well defined to all orders of expansion. Indeed, this term with $\varSigma_{\rm RFT}$ given by (2.21) and on account of the total derivative structure $\sqrt{g}(E-\tfrac{2}{3}\,\Box R\big{)}=\partial_{\alpha}E^{\alpha}$ can be rewritten by integration by parts as $4\\!\int d^{4}x\sqrt{g}\,C^{2}\varSigma_{\rm RFT}=-\int d^{4}x\,\sqrt{g}E^{\alpha}\,\partial_{\alpha}\frac{1}{\sqrt{g}\Delta_{4}}\big{(}\sqrt{g}\,C^{2}\big{)}$ (3.40) with the above prescription (3.39). This confirms the well-defined nature of all multiple-point correlators of the stress tensor generated by the RFT action. Finally, it is worth discussing the effective action behavior under global conformal transformations with $\sigma_{0}={\rm const}$. Higher order curvature terms of the effective action scale as negative powers of $e^{\sigma_{0}}$ and are therefore irrelevant in the IR limit. In [61] this was a main argument in favor of a dominant role of the Wess–Zumino action (2.6) in this limit, because $\Delta\varGamma[g,\sigma]$ behaves linearly in $\sigma_{0}$ (or logarithmically in the distance).
Indeed, $\displaystyle\Delta\varGamma[g,\sigma+\sigma_{0}]$ $\displaystyle=\Delta\varGamma[g,\sigma]$ $\displaystyle+\sigma_{0}\Big{(}\frac{\gamma}{32\pi^{2}}\int d^{4}x\,\sqrt{g}\,C^{2}+\beta\,e^{\prime}_{E}\Big{)},$ (3.41) where $e^{\prime}_{E}$ is the Euler characteristic of the manifold modulo its boundary contribution (see footnote 3). Note, however, that this behavior cannot be captured within the nonlocal RFT form of the anomaly action (1.3), because the latter is valid only under Dirichlet boundary conditions for the Green’s function of $\Delta_{4}$ (which would be violated by the $\sigma_{0}$-shift). In other words, the expression (1.3) lacks the contribution of the zero mode of the Paneitz operator, which, on the contrary, explicitly features in (3.41). For compact manifolds with possibly nontrivial topology global Weyl transformations would not contradict boundary conditions, and these transformations will obviously show up in the generalized RFT gauge (2.35) as an ambiguity of the solution for Eq. (2.37), $\varSigma\to\varSigma+\sigma_{0}$. ## 4 Stress tensor in conformally related spacetimes Equations (2.19) and (2.34) show that the anomalous action makes sense as an object specifying the difference of effective actions on conformally related metrics and other fields. Outside of this context this action, being subject to a shift by an arbitrary conformal invariant functional $W^{\rm conf}[\,g\,]$, as in Eq. (2.2), is not very instructive, because such a shift can include essential physical information on conformally invariant degrees of freedom. The anomaly action $\varGamma_{\chi}$, or better to say, the Wess–Zumino type action (2.6)—the generating functional of $\varGamma_{\chi}$—is really useful in situations when the physics of a conformally related spacetime with the metric $\bar{g}_{\mu\nu}$ is fully known. Then the effective action at $g_{\mu\nu}$ can be completely recovered from the knowledge of the Weyl anomaly.
The simplest situation belongs to the class of conformally flat spacetimes, when $\bar{g}_{\mu\nu}$ can be associated with the flat metric for which all the metric field invariants vanish and $\varGamma[\,\bar{g}\,]$ is either exactly zero or calculable for quantum matter fields in flat spacetime. In particular, the fundamental observable which can then be obtained is the UV renormalized expectation value of the stress tensor of classically conformally invariant fields, $\sqrt{g}\,\big{\langle}\,T^{\alpha\beta}\big{\rangle}=2\frac{\delta\varGamma_{\rm ren}}{\delta g_{\alpha\beta}}$ (4.1) provided $\langle\,\bar{T}^{\alpha\beta}\rangle=0$ or known from flat space physics. Here we derive from (2.6) the expression for the difference of (densitized) stress tensors $\sqrt{g}\,\langle\,T^{\alpha}_{\beta}\rangle-\sqrt{\bar{g}}\,\langle\,\bar{T}^{\alpha}_{\beta}\rangle$, which for a conformally flat spacetime coincides with the well-known Brown–Cassidy expression [25] and generalizes it to the case of a nonvanishing Weyl tensor. ### 4.1 Conformal anomaly from the divergent part of the effective action To derive the behavior of the renormalized stress tensor on the conformal group orbit we first have to trace the origin of the conformal anomaly as the result of subtracting UV divergences from the covariantly regularized effective action, $\varGamma_{\rm ren}=\varGamma_{\rm reg}-\varGamma_{\infty}$.
In dimensional regularization, $\varGamma_{\rm reg}={}^{(d)}\\!\varGamma$, these divergences are given by $\displaystyle\varGamma_{\infty}$ $\displaystyle=-\frac{1}{16\pi^{2}\epsilon}\int d^{d}x\,\sqrt{g}\,a_{2}$ $\displaystyle=\frac{1}{16\pi^{2}\epsilon}\int d^{d}x\,\sqrt{g}\,\big{(}\alpha\,{}^{(4)}\\!C^{2}+\beta\,{}^{(4)}\\!E\,\big{)},$ (4.2) where $\epsilon=4-d$, ${}^{(4)}\\!C^{2}$ and ${}^{(4)}\\!E$ are the four-dimensional invariants formally continued to $d$ dimensions and $a_{2}$ is the relevant second Schwinger–DeWitt coefficient of the corresponding heat kernel expansion for the inverse propagator of the theory [62, 63, 64], $\displaystyle\;a_{2}=-\big{(}\alpha\,{}^{(4)}\\!C^{2}+\beta\,{}^{(4)}E+\gamma\,\Box R\big{)},$ (4.3) $\displaystyle{}^{(4)}C^{2}=R_{\mu\nu\alpha\beta}^{2}-2R_{\mu\nu}^{2}+\frac{1}{3}R^{2},$ (4.4) $\displaystyle{}^{(4)}\\!E=R_{\mu\nu\alpha\beta}^{2}-4R_{\mu\nu}^{2}+R^{2}.$ (4.5) This structure of $a_{2}$ follows from the local conformal invariance of the pole residue of $\varGamma_{\infty}$ at $d=4$ and is associated with the integrability (or conformal Wess–Zumino) condition for a conformal anomaly. It includes the topological Gauss–Bonnet density $\sqrt{g}E$, the Weyl tensor squared and the total derivative $\Box R$ terms. The conformal anomaly arises as a contribution of the conformal transformation of the one-loop counterterm (4.2) subtracted from the regularized effective action $\sqrt{g}\,\big{\langle}\,T^{\alpha}_{\alpha}\big{\rangle}=-2g_{\alpha\beta}\frac{\delta\varGamma_{\infty}}{\delta g_{\alpha\beta}},$ (4.6) because the regularized (but not yet renormalized by counterterm subtraction) action $\varGamma_{\rm reg}$ is assumed to be conformally invariant.^7 [Footnote 7: Or the Weyl invariance violation of dimensionally regularized $\varGamma_{\rm reg}$ is proportional to $(d-4)^{2}$, as happens for the spin-one case [4], so that it does not contribute to the residue of the simple pole in dimensionality.]
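The algebra among the quadratic curvature invariants (4.4)-(4.5) is elementary but worth checking; here is a minimal sketch in sympy (the symbols Riem2, Ric2, R2 stand for the scalars $R_{\mu\nu\alpha\beta}^{2}$, $R_{\mu\nu}^{2}$, $R^{2}$ and are an illustrative abstraction, not notation from the paper), verifying the identity $C^{2}-E=2\big(R_{\mu\nu}^{2}-\tfrac{1}{3}R^{2}\big)$ that underlies the Weyl-invariant counterterm combination used in Sec. 4.2:

```python
import sympy as sp

# Treat the independent quadratic curvature invariants as symbols:
# Riem2 = R_{mu nu alpha beta}^2, Ric2 = R_{mu nu}^2, R2 = R^2.
Riem2, Ric2, R2 = sp.symbols('Riem2 Ric2 R2')

C2 = Riem2 - 2*Ric2 + sp.Rational(1, 3)*R2  # Weyl squared, Eq. (4.4)
E  = Riem2 - 4*Ric2 + R2                    # Gauss-Bonnet density, Eq. (4.5)

# C^2 - E = 2*(Ric2 - R2/3): the Riemann-squared terms cancel.
print(sp.simplify(C2 - E - 2*(Ric2 - sp.Rational(1, 3)*R2)))  # 0
```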
The $\Box R$ term does not contribute to the divergences but it appears in the conformal anomaly in view of the conformal transformation of the Weyl squared term continued to $d$ dimensions. Moreover, within the above subtraction scheme its coefficient $\gamma$ in the anomaly turns out to be determined by the coefficient $\alpha$ of the Weyl term [4]. Indeed, introduce conformally covariant Weyl tensor in $d$ dimensions $\displaystyle{}^{(d)}\\!C_{\mu\nu\alpha\beta}$ $\displaystyle=R_{\mu\nu\alpha\beta}+2P_{\beta[\mu}g_{\nu]\alpha}-2P_{\alpha[\mu}g_{\nu]\beta},$ (4.7) $\displaystyle{}^{(d)}C^{\mu}{}_{\nu\alpha\beta}$ $\displaystyle={}^{(d)}\\!\bar{C}^{\mu}{}_{\nu\alpha\beta},$ (4.8) which is written down in terms of the Schouten tensor $P_{\mu\nu}\equiv\frac{1}{d-2}\Big{(}R_{\mu\nu}-\frac{Rg_{\mu\nu}}{2(d-1)}\Big{)}.$ (4.9) In view of the relation between the square of Weyl tensors ${}^{(d)}C^{2}\equiv{}^{(d)}C_{\mu\nu\alpha\beta}^{2}$ and $C^{2}\equiv{}^{(4)}\\!C^{2}_{\mu\nu\alpha\beta}$ (both formally continued to $d$ dimensions) [25] ${}^{(4)}\\!C^{2}={}^{(d)}\\!C^{2}-\frac{\epsilon}{2}\big{(}E-C^{2}-\tfrac{1}{9}R^{2}\big{)}+O(\epsilon^{2})$ (4.10) one has $\displaystyle\frac{\delta}{\delta g_{\mu\nu}}\int d^{d}x\,\sqrt{g}\,C^{2}=\frac{\delta}{\delta g_{\mu\nu}}\int d^{d}x\,\sqrt{g}\,{}^{(d)}\\!C^{2}$ $\displaystyle\;\;\;\;+\frac{\epsilon}{2}\frac{\delta}{\delta g_{\mu\nu}}\int d^{4}x\,\sqrt{g}\,\big{(}C^{2}+\tfrac{1}{9}R^{2}\big{)}+O(\epsilon^{2}).$ (4.11) Then, since the tensor ${}^{(d)}C_{\mu\nu\alpha\beta}$ is conformally covariant in any dimension, $g_{\mu\nu}(\delta/\delta g_{\mu\nu})\int d^{d}x\,\sqrt{g}\,^{(d)}C^{2}=-\tfrac{\epsilon}{2}\sqrt{g}\,{}^{(d)}C^{2}$, we have $\frac{1}{\epsilon}g_{\mu\nu}\frac{\delta}{\delta g_{\mu\nu}}\int d^{d}x\,\sqrt{g}C^{2}=-\frac{1}{2}\sqrt{g}\Big{(}C^{2}+\frac{2}{3}\Box R\Big{)}+O(\epsilon).$ (4.12) Using this in (4.6) one recovers the $C^{2}$ and the $\Box R$ terms in the expression for the anomaly 
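The statement $g_{\mu\nu}(\delta/\delta g_{\mu\nu})\int d^{d}x\,\sqrt{g}\,^{(d)}C^{2}=-\tfrac{\epsilon}{2}\sqrt{g}\,{}^{(d)}C^{2}$ used here is pure conformal weight counting; spelling it out, under $g_{\mu\nu}\to e^{2\sigma}g_{\mu\nu}$ the tensor ${}^{(d)}C^{\mu}{}_{\nu\alpha\beta}$ is invariant by (4.8), so that

```latex
\sqrt{g}\to e^{d\sigma}\sqrt{g},
\qquad
{}^{(d)}C^{2}
 = {}^{(d)}C^{\mu}{}_{\nu\alpha\beta}\,{}^{(d)}C_{\mu}{}^{\nu\alpha\beta}
 \to e^{-4\sigma}\,{}^{(d)}C^{2},
\qquad
\sqrt{g}\,{}^{(d)}C^{2}\to e^{(d-4)\sigma}\sqrt{g}\,{}^{(d)}C^{2},
```

and linearizing in $\delta\sigma$ with $\delta g_{\mu\nu}=2g_{\mu\nu}\delta\sigma$ gives $2g_{\mu\nu}(\delta/\delta g_{\mu\nu})\int d^{d}x\,\sqrt{g}\,{}^{(d)}C^{2}=(d-4)\,\sqrt{g}\,{}^{(d)}C^{2}=-\epsilon\,\sqrt{g}\,{}^{(d)}C^{2}$.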
$\sqrt{g}\,\big{\langle}T^{\alpha}_{\alpha}\big{\rangle}=-\frac{1}{16\pi^{2}}\sqrt{g}\,a_{2},$ (4.13) with the parameter $\gamma$ related to the coefficient $\alpha$ of the Weyl squared term [4] $\gamma=\frac{2}{3}\alpha.$ (4.14) This simple expression for the trace anomaly in terms of the second Schwinger–DeWitt coefficient also follows from the zeta-function regularization [65]. The Gauss–Bonnet part of the anomaly follows from the conformal variation of the ${}^{(4)}\\!E$-term in the divergent part of the action. Just like $\Box R$, the integral of $\sqrt{g}{}^{(4)}\\!E$, as the residue of the pole in $\varGamma_{\infty}$, at least naively does not contribute to the stress tensor, because in four dimensions this integral is the constant Euler characteristic of the manifold. But in a covariant renormalization procedure the coefficient of $1/\epsilon$ in $\varGamma_{\infty}$ cannot be treated other than as a $d$-dimensional object, so that $\int d^{d}x\sqrt{g}{}^{(4)}\\!E$ is no longer a topological invariant, and its metric variation is nontrivial.
Therefore, rewriting, similarly to (4.10), the dimensionally continued Gauss–Bonnet density in terms of ${}^{(d)}\\!C^{2}$, $\displaystyle{}^{(4)}\\!E$ $\displaystyle=R_{\mu\nu\alpha\beta}^{2}-4R_{\mu\nu}^{2}+R^{2}$ $\displaystyle={}^{(d)}\\!C^{2}-(2-3\epsilon)\big{(}R_{\mu\nu}^{2}-\tfrac{1}{3}R^{2}\big{)}+O(\epsilon^{2}),$ (4.15) one has $\displaystyle\frac{1}{\epsilon}\frac{\delta}{\delta g_{\alpha\beta}}\int d^{d}x\,\sqrt{g}\,{}^{(4)}\\!E=$ $\displaystyle-\sqrt{g}\big{(}\,\tfrac{1}{2}W^{\alpha\beta}+{}^{(3)}\\!H^{\alpha\beta}$ $\displaystyle+2R_{\mu\nu}C^{\mu\alpha\nu\beta}\big{)}+O(\epsilon),$ (4.16) where two new tensors arise, $\displaystyle\\!\\!\\!{}^{(3)}\\!H^{\alpha\beta}=R^{\alpha\mu}R^{\beta}_{\mu}-\frac{2}{3}RR^{\alpha\beta}-\frac{1}{2}g^{\alpha\beta}R_{\mu\nu}^{2}+\frac{1}{4}g^{\alpha\beta}R^{2},$ (4.17) $\displaystyle W^{\alpha\beta}=\lim\limits_{\epsilon\to 0}\frac{1}{\epsilon}\left(4\,{}^{(d)}\\!C^{\alpha}{}_{\mu\nu\lambda}{}^{(d)}\\!C^{\beta\mu\nu\lambda}-g^{\alpha\beta}\,{}^{(d)}\\!C^{2}\right).$ (4.18) The limit to $d=4$ for the tensor $W^{\alpha\beta}$ is regular here because at $d=4$ there is the important identity $4\,{}^{(4)}\\!C^{\alpha}{}_{\mu\nu\lambda}{}^{(4)}\\!C^{\beta\mu\nu\lambda}=g^{\alpha\beta}{}^{(4)}\\!C^{2}$ (4.19) —it can be proven by antisymmetrization over five indices in the four-dimensional spacetime [66]. Tensors ${}^{(3)}H^{\alpha\beta}$ and $W^{\alpha\beta}$ have the following traces ${}^{(3)}\\!H^{\alpha}_{\alpha}=\tfrac{1}{3}R^{2}-R_{\mu\nu}^{2}=\tfrac{1}{2}(E-C^{2}),\quad W_{\alpha}^{\alpha}=C^{2}.$ (4.20) Thus from (4.16) and (4.20) we have the relation $\frac{2}{\epsilon}g_{\alpha\beta}\frac{\delta}{\delta g_{\alpha\beta}}\int d^{d}x\,\sqrt{g}\,{}^{(4)}\\!E=-\sqrt{g}{}^{(4)}\\!E+O(\epsilon),$ (4.21) which recovers the contribution of the $E$-term in the conformal anomaly (4.13) with the expression (4.3) for $a_{2}$.
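The first of the traces (4.20) follows directly from the definition (4.17):

```latex
{}^{(3)}\!H^{\alpha}_{\alpha}
 = R^{\alpha\mu}R_{\alpha\mu} - \frac{2}{3}R^{2}
 - 2R_{\mu\nu}^{2} + R^{2}
 = \frac{1}{3}R^{2} - R_{\mu\nu}^{2},
```

while the combination $\tfrac{1}{2}(E-C^{2})=\tfrac{1}{2}\big({-2}R_{\mu\nu}^{2}+\tfrac{2}{3}R^{2}\big)$, read off from (4.4) and (4.5), gives the same result.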
### 4.2 Minimal form of Wess-Zumino action and a-theorem Of course there is a big ambiguity in the above analytic continuation of the coefficients relating 4-dimensional objects to their $d$-dimensional counterparts. This ambiguity reduces to the renormalization by finite 4-dimensional counterterms $\int d^{4}x\sqrt{g}\,R_{\mu\nu\alpha\beta}^{2}$, $\int d^{4}x\sqrt{g}\,R_{\mu\nu}^{2}$ and $\int d^{4}x\sqrt{g}\,R^{2}$ among which in view of the total-derivative nature of the Gauss-Bonnet density only one counterterm can additionally break Weyl invariance and change the coefficient $\gamma$ of the $\Box R$ term in the conformal anomaly. This is because the combination $\int d^{4}x\,\sqrt{g}(C^{2}-E)=2\int d^{4}x\,\sqrt{g}(R_{\mu\nu}^{2}-\tfrac{1}{3}R^{2})$ is Weyl invariant, and such a counterterm can be chosen as the square of the curvature scalar, satisfying $g_{\mu\nu}\frac{\delta}{\delta g_{\mu\nu}}\int d^{4}x\sqrt{g}\,R^{2}=-6\sqrt{g}\,\Box R.$ (4.22) Therefore this finite local counterterm can be used to alter the coefficient $\gamma$ and, in particular, put it to zero by a special finite renormalization which we will denote by a subscript $\rm Ren$, $\varGamma_{\rm ren}[\,g\,]\to\varGamma_{\rm Ren}[\,g\,]\equiv\varGamma_{\rm ren}[\,g\,]+\frac{\gamma}{192\pi^{2}}\int d^{4}x\sqrt{g}\,R^{2}.$ (4.23) Regularization and subtraction scheme dependence of $\gamma$-coefficient manifests itself in the violation of the relation (4.14) for the dimensionally regularized electromagnetic vector field [5], but ultimately does not change the physics of the theory because of the locality of the covariant counterterm $\int d^{4}x\sqrt{g}\,R^{2}$, whose subtraction point should be determined from the comparison with the observable value of its coupling constant. In the cosmological example considered below the above renormalization (4.23) corresponds to fixing the coupling constant in the Starobinsky $R^{2}$-model [43]. 
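The variational identity (4.22) can be verified in one line from the linearized conformal transformation of the Ricci scalar in four dimensions: for $\delta g_{\mu\nu}=2g_{\mu\nu}\delta\sigma$ one has $\delta R=-2\,\delta\sigma\,R-6\,\Box\delta\sigma$ and $\delta\sqrt{g}=4\,\delta\sigma\,\sqrt{g}$, so that

```latex
\delta\int d^{4}x\,\sqrt{g}\,R^{2}
 = \int d^{4}x\,\sqrt{g}\,
   \big(4\,\delta\sigma\,R^{2}-4\,\delta\sigma\,R^{2}
   -12\,R\,\Box\delta\sigma\big)
 = -12\int d^{4}x\,\sqrt{g}\,\delta\sigma\,\Box R,
```

after integration by parts, and comparison with $\delta S=\int d^{4}x\,(\delta S/\delta g_{\mu\nu})\,2g_{\mu\nu}\,\delta\sigma$ reproduces $g_{\mu\nu}\,\delta/\delta g_{\mu\nu}\int d^{4}x\,\sqrt{g}\,R^{2}=-6\sqrt{g}\,\Box R$.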
The renormalization (4.23) has an important consequence – with $\gamma=0$ the terms with quartic derivatives of $\sigma$, contained in the combination $\tfrac{\beta}{16\pi^{2}}\int d^{4}x\,\big{(}4\sqrt{\bar{g}}\,\sigma\bar{\Delta}_{4}\sigma-\frac{1}{9}\sqrt{g}\,R^{2}\big{)}$ of (2.6), completely cancel out, and the resulting minimal Wess-Zumino action does not acquire extra higher-derivative degrees of freedom, $\displaystyle\varGamma_{\rm Ren}[\,g\,]-\varGamma_{\rm Ren}[\,\bar{g}\,]=\frac{\alpha}{16\pi^{2}}\int d^{4}x\,\sqrt{\bar{g}}\,\bar{C}_{\mu\nu\alpha\beta}^{2}\sigma$ $\displaystyle\quad+\frac{\beta}{16\pi^{2}}\int d^{4}x\,\sqrt{\bar{g}}\,\Big{\\{}\bar{E}\,\sigma-4\,\big{(}\bar{R}^{\mu\nu}-\tfrac{1}{2}\bar{g}^{\mu\nu}\bar{R}\,\big{)}\,\partial_{\mu}\sigma\,\partial_{\nu}\sigma$ $\displaystyle\quad-4\,\bar{\Box}\sigma\,(\bar{\nabla}^{\mu}\sigma\,\bar{\nabla}_{\mu}\sigma)-2\,(\bar{\nabla}^{\mu}\sigma\,\bar{\nabla}_{\mu}\sigma)^{2}\Big{\\}}.$ (4.24) This minimal version of the action for the dilaton field $\sigma$ was discussed in [41] and used in the derivation of the $a$-theorem in [42, 67] – the monotonically decreasing coefficient $a=\beta/16\pi^{2}$ in the RG flow of the theory from UV to IR domains. This theorem is based on the sign of the last quartic interaction term for this field, related to the cross section of the forward $2\to 2$ dilaton scattering, which should be positive in a unitary theory, its unitarity being related to the absence of higher-derivative ghosts in (4.24).
### 4.3 Renormalized stress tensors The behavior of the stress tensor on the orbit of the conformal group can be obtained by using the commutativity of the following functional variations $\left[\,g_{\mu\nu}(y)\frac{\delta}{\delta g_{\mu\nu}(y)},g_{\beta\gamma}(x)\frac{\delta}{\delta g_{\alpha\gamma}(x)}\right]=0,$ (4.25) which allows one to write $\displaystyle\frac{\delta}{\delta\sigma(y)}\sqrt{g}\left\langle T^{\alpha}_{\beta}(x)\right\rangle=2g_{\beta\gamma}(x)\frac{\delta}{\delta g_{\alpha\gamma}(x)}\frac{\delta\varGamma_{\rm ren}}{\delta\sigma(y)}\bigg{|}_{g_{\mu\nu}=e^{2\sigma}\bar{g}_{\mu\nu}}$ $\displaystyle\qquad=g_{\beta\gamma}(x)\frac{\delta}{\delta g_{\alpha\gamma}(x)}\sqrt{g}(y)\left\langle T^{\mu}_{\mu}(y)\right\rangle\Big{|}_{g_{\mu\nu}=e^{2\sigma}\bar{g}_{\mu\nu}}.$ (4.26) Bearing in mind that $g_{\beta\gamma}\delta/\delta g_{\alpha\gamma}=\bar{g}_{\beta\gamma}\delta/\delta\bar{g}_{\alpha\gamma}$ at fixed $\sigma$ and functionally integrating this relation over $\sigma$ one has $\sqrt{g}\,\big{\langle}T^{\alpha}_{\beta}\big{\rangle}-\sqrt{\bar{g}}\,\big{\langle}\bar{T}^{\alpha}_{\beta}\big{\rangle}=2\bar{g}_{\beta\gamma}\frac{\delta}{\delta\bar{g}_{\alpha\gamma}}\Delta\varGamma[\,\bar{g},\sigma\,],$ (4.27) where $\Delta\varGamma[\,\bar{g},\sigma\,]=\varGamma_{\rm ren}-\bar{\varGamma}_{\rm ren}$ is given by (2.6). Before calculating this difference by the metric variation of $\Delta\varGamma[\bar{g},\sigma]$ it is instructive to obtain it directly from the divergent part of the action as it was done in [25]. Note that $\varGamma_{\rm ren}-\bar{\varGamma}_{\rm ren}=-(\varGamma_{\infty}-\bar{\varGamma}_{\infty})$ because $\varGamma_{\rm reg}$ does not contribute to the anomaly (see footnote 7). 
Therefore, $\sqrt{g}\,\big{\langle}\,T^{\alpha}_{\beta}\big{\rangle}\,\Big{|}_{\,\bar{g}}^{\,g}=-2\,g_{\beta\gamma}\frac{\delta\varGamma_{\infty}}{\delta g_{\alpha\gamma}}\,\Big{|}_{\,\bar{g}}^{\,g}.$ (4.28) To calculate the contribution of the ${}^{(4)}C^{2}$-term in $\varGamma_{\infty}$ we rewrite it in terms of ${}^{(d)}C^{2}$ and use Eq. (4.11). This leads to the contribution of the first term of this equation $\displaystyle\frac{\delta}{\delta g_{\mu\nu}}\int d^{d}x\,\sqrt{g}\,{}^{(d)}C^{2}=-\frac{\epsilon}{2}\sqrt{g}\,W^{\mu\nu}-4\sqrt{g}\,{}^{(d)}B^{\mu\nu},$ (4.29) $\displaystyle{}^{(d)}B^{\mu\nu}=\Big{(}\frac{1}{d-2}R_{\alpha\beta}+\nabla_{(\alpha}\nabla_{\beta)}\Big{)}C^{\mu\alpha\nu\beta},$ (4.30) where the tensor $W^{\mu\nu}$ is defined by Eq. (4.18) and ${}^{(d)}\\!B^{\mu\nu}$ is the $d$-dimensional Bach tensor. Assembling this with the second term of Eq. (4.11) we get on the orbit of the conformal group $\frac{1}{\epsilon}g_{\beta\gamma}\frac{\delta}{\delta g_{\alpha\gamma}}\int d^{d}x\,\sqrt{g}\,^{(4)}\\!C^{2}\,\Big{|}_{\,\bar{g}}^{\,g}\\\ =-\sqrt{g}\Big{[}\,\frac{4}{\epsilon}{}^{(d)}\\!B^{\alpha}_{\beta}+\frac{1}{18}{}^{(1)}\\!H^{\alpha}_{\beta}\,\Big{]}_{\,\bar{g}}^{\,g}+O(\epsilon),$ (4.31) where the tensor ${}^{(1)}\\!H^{\alpha}_{\beta}$ is given by the equation $\displaystyle{}^{(1)}\\!H^{\alpha}_{\beta}=\frac{1}{\sqrt{g}}g^{\alpha\gamma}\frac{\delta}{\delta g^{\beta\gamma}}\int d^{4}x\,\sqrt{g}R^{2}$ $\displaystyle=-\frac{1}{2}\delta^{\alpha}_{\beta}R^{2}+2RR^{\alpha}_{\beta}+2\delta^{\alpha}_{\beta}\Box R-2\nabla^{\alpha}\nabla_{\beta}R,$ (4.32) and we took into account that both tensor densities $\sqrt{g}\,W^{\alpha}_{\beta}$ and $\sqrt{g}\,B^{\alpha}_{\beta}$ in four dimensions are invariant on the conformal orbit.
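From the explicit form (4.32) one immediately finds the trace of ${}^{(1)}\!H^{\alpha}_{\beta}$, which enters the consistency check (4.35):

```latex
{}^{(1)}\!H^{\alpha}_{\alpha}
 = -\frac{1}{2}\cdot 4\,R^{2} + 2R^{2}
 + 2\cdot 4\,\Box R - 2\,\Box R
 = 6\,\Box R,
```

consistent with (4.22) up to the overall sign, which flips because (4.32) is a variation with respect to the inverse metric $g^{\beta\gamma}$ rather than $g_{\beta\gamma}$.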
Outside of four dimensions the Bach tensor density transforms on this orbit as (here, as above, $g_{\mu\nu}=e^{2\sigma}\bar{g}_{\mu\nu}$) $\sqrt{g}\,{}^{(d)}\\!B^{\alpha}_{\beta}\,\Big{|}_{\bar{g}}^{g}=-\frac{\epsilon}{2}\sqrt{\bar{g}}\big{(}\bar{R}^{\mu\nu}+2\bar{\nabla}^{(\mu}\bar{\nabla}^{\nu)}\big{)}\big{(}\sigma\bar{C}^{\alpha}{}_{\mu\beta\nu}\big{)}+O(\epsilon^{2}),$ (4.33) which makes the first term on the right hand side of (4.31) well defined at $d\to 4$. Note that the expression $\sqrt{\bar{g}}(\bar{R}^{\mu\nu}+2\bar{\nabla}^{(\mu}\bar{\nabla}^{\nu)}\big{)}\big{(}\sigma\bar{C}^{\alpha}{}_{\mu\beta\nu})$ treated as a functional of independent $\bar{g}_{\mu\nu}$ and $\sigma$ is Weyl invariant under local conformal transformations of the barred metric. This can be easily inferred from the invariance of Eq. (4.33) under the interchange $g_{\mu\nu}\leftrightarrow\bar{g}_{\mu\nu}$, $\sigma\to-\sigma$, or by directly checking the conformal transformation of $\bar{g}_{\mu\nu}$ (with a fixed scalar $\sigma$). The contribution of the Gauss–Bonnet term to the stress tensor behavior on the conformal orbit is obtained by using (4.16)–(4.18). Collecting this contribution with the contribution (4.31) of the Weyl tensor squared part we finally have $\displaystyle\sqrt{g}\,\big{\langle}T^{\alpha}_{\beta}\big{\rangle}\Big{|}_{\,\bar{g}}^{\,g}=-\frac{\alpha}{4\pi^{2}}\sqrt{\bar{g}}\,\big{(}\bar{R}^{\mu\nu}+2\bar{\nabla}^{(\mu}\bar{\nabla}^{\nu)}\big{)}\big{(}\sigma\bar{C}^{\alpha}{}_{\mu\beta\nu}\big{)}+\frac{1}{8\pi^{2}}\sqrt{g}\,\Big{[}\,\beta\,^{(3)}\\!H^{\alpha}_{\beta}+\frac{\alpha}{18}\,^{(1)}\\!H^{\alpha}_{\beta}+2\beta R^{\mu\nu}C^{\alpha}{}_{\mu\beta\nu}\,\Big{]}_{\,\bar{g}}^{\,g}.$ (4.34) This is a generalization of the Brown–Cassidy formula to the case of a nonzero Weyl tensor. The first term of this expression is Weyl invariant in view of the above remark and can be represented by its unbarred version.
The check of consistency of this formula with the original expression for the conformal anomaly is trivial in view of ${}^{(3)}\\!H^{\alpha}_{\alpha}=(E-C^{2})/2$, ${}^{(1)}\\!H^{\alpha}_{\alpha}=6\Box R$ and tracelessness of the Weyl tensor, $\sqrt{g}\,\big{\langle}T^{\alpha}_{\alpha}\big{\rangle}\Big{|}_{\,\bar{g}}^{\,g}=\frac{\sqrt{g}}{16\pi^{2}}\\!\Big{[}\beta E-\beta C^{2}+\tfrac{2\alpha}{3}\Box R\Big{]}_{\bar{g}}^{g}=-\frac{\sqrt{g}\,a_{2}}{16\pi^{2}}\Big{|}_{\,\bar{g}}^{\,g},$ (4.35) where the last equality follows from the conformal invariance of the density $\sqrt{g}\,C^{2}$ and from the relation (4.14) between the coefficients $\gamma$ and $\alpha$, $\alpha=\tfrac{3}{2}\gamma$. The recovery of (4.34) from the direct variation of the Wess–Zumino action (4.27) goes as follows. We use metric variational formulae $\displaystyle\frac{\delta}{\delta g_{\alpha\beta}}\\!\int d^{4}x\,\sqrt{g}\,C^{2}\sigma=-2\sqrt{g}\big{(}R_{\mu\nu}\\!+2\nabla_{(\mu}\\!\nabla_{\nu)}\big{)}\big{(}\sigma C^{\alpha\mu\beta\nu}\big{)},$ (4.36) $\displaystyle\frac{\delta}{\delta g_{\alpha\beta}}\int d^{4}x\,\sqrt{g}\,\mathcal{E}_{4}\sigma=\sqrt{g}\,\Delta^{\alpha\beta}\sigma,$ (4.37) $\displaystyle\frac{\delta}{\delta g_{\alpha\beta}}\int d^{4}x\,\sqrt{g}\,\varphi\Delta_{4}\sigma=-\frac{\sqrt{g}}{2}D^{\alpha\beta}[\varphi,\sigma],$ (4.38) which hold for generic scalar test functions $\sigma$ and $\varphi$ with the differential operator $\Delta^{\alpha\beta}$ acting on $\sigma$, $\displaystyle\Delta_{\alpha\beta}$ $\displaystyle=\frac{1}{3}(g_{\alpha\beta}\Box-\nabla_{\alpha}\nabla_{\beta})\,\Box$ $\displaystyle+\left[\,2(g_{\alpha\beta}P_{\mu\nu}-g_{\alpha\mu}P_{\beta\nu}-g_{\alpha\nu}P_{\beta\mu})+\frac{8}{3}g_{\mu\nu}P_{\alpha\beta}\right.$ $\displaystyle\left.+2Pg_{\alpha\mu}g_{\beta\nu}-\frac{5}{3}Pg_{\alpha\beta}g_{\mu\nu}-2W_{\alpha\mu\beta\nu}\right]\nabla^{\mu}\nabla^{\nu}$ 
$\displaystyle+\big{(}\,g_{\alpha\beta}g_{\mu\nu}-g_{\alpha\mu}g_{\beta\nu}-g_{\alpha\nu}g_{\beta\mu}\big{)}(\nabla^{\mu}P)\nabla^{\nu},$ (4.39) and the bilinear form $D^{\alpha\beta}(\varphi,\sigma)$, $\displaystyle D_{\alpha\beta}[\varphi,\sigma]$ $\displaystyle=-\frac{1}{2}g_{\alpha\beta}\Box\varphi\,\Box\sigma-2\sigma_{\alpha\beta}\Box\varphi$ $\displaystyle+2\sigma_{\alpha}\Box\varphi_{\beta}-\frac{1}{3}g_{\alpha\beta}\sigma_{\mu}\Box\varphi^{\mu}-\frac{2}{3}\varphi_{\mu(\alpha\beta)}\sigma^{\mu}$ $\displaystyle+\left[\,2W_{\alpha\mu\beta\nu}+\frac{1}{3}(g_{\mu\nu}R_{\alpha\beta}-g_{\alpha\mu}g_{\beta\nu}R)\right]\varphi^{(\mu}\sigma^{\nu)}$ $\displaystyle+\frac{1}{3}\big{(}\,4\varphi_{\alpha\mu}\sigma^{\mu}_{\beta}-g_{\alpha\beta}\varphi_{\mu\nu}\sigma^{\mu\nu}\big{)}+\big{(}\,\varphi\Leftrightarrow\sigma\,\big{)},$ (4.40) where $\varphi_{\alpha}\equiv\nabla_{\alpha}\varphi$, $\sigma_{\alpha\beta}\equiv\nabla_{\beta}\nabla_{\alpha}\sigma$, $\varphi_{\alpha\beta\gamma}\equiv\nabla_{\gamma}\nabla_{\beta}\nabla_{\alpha}\varphi$, etc. Note that the trace of $\Delta^{\alpha\beta}$ coincides with the Paneitz operator, $g_{\alpha\beta}\Delta^{\alpha\beta}=\Delta_{4}$, which matches with the conformal variation (2.9), and the bilinear form $D^{\alpha\beta}(\varphi,\sigma)$ is traceless in view of conformal invariance of $\sqrt{g}\Delta_{4}$. 
Using these relations, we get from (4.27) and (2.6) $\displaystyle\sqrt{g}\,\big{\langle}T^{\alpha}_{\beta}\big{\rangle}\,\Big{|}_{\,\bar{g}}^{\,g}=-\frac{\alpha}{4\pi^{2}}\sqrt{g}\big{(}R^{\mu\nu}+2\nabla^{(\mu}\nabla^{\nu)}\big{)}\big{(}\sigma C^{\alpha}{}_{\mu\beta\nu}\big{)}$ $\displaystyle+\frac{\sqrt{g}}{8\pi^{2}}\\!\Big{(}2\beta\Delta^{\alpha}_{\beta}\sigma+\beta D^{\alpha}_{\beta}[\sigma,\sigma]\Big{)}+\sqrt{g}\left(\tfrac{\gamma}{12}+\tfrac{\beta}{18}\right){}^{(1)}\\!H^{\alpha}_{\beta}\,\Big{|}_{\,\bar{g}}^{\,g}.$ (4.41) The term in the first line here coincides with its barred version in (4.34); this easily follows from the relation (4.36), where the integrand can be identically replaced by the barred one. The $\frac{\gamma}{12}{}^{(1)}\\!H^{\alpha}_{\beta}$ term here matches with the $\frac{\alpha}{18}{}^{(1)}\\!H^{\alpha}_{\beta}$ term of (4.34) in view of the relation $\alpha=\tfrac{3}{2}\gamma$. Finally, the following identity holds, $\displaystyle\sqrt{g}\,\Big{[}{}^{(3)}\\!H^{\alpha}_{\beta}+\tfrac{1}{18}{}^{(1)}\\!H^{\alpha}_{\beta}$ $\displaystyle+2R^{\mu\nu}C^{\alpha}{}_{\mu\beta\nu}\Big{]}_{\,\bar{g}}^{\,g}$ $\displaystyle=\sqrt{g}\Big{(}2\Delta^{\alpha}_{\beta}\sigma+D^{\alpha}_{\beta}[\sigma,\sigma]\Big{)},$ (4.42) which completely reconciles the two expressions (4.34) and (4.41) for the stress tensor behavior on the orbit of the conformal group. ## 5 Conformally flat spacetime The generalization (4.34) of the Brown–Cassidy formula to the case of a nonvanishing Weyl tensor might not be very useful, because in the general case not much can be said about $\langle\,T^{\alpha}_{\beta}\rangle\,|_{\bar{g}}$. Therefore we will restrict ourselves to the case of conformally flat spacetime, for which the conformal transformation of the metric can lead to the metric $\bar{g}_{\mu\nu}$ of flat spacetime, where $\langle\,\bar{T}^{\alpha}_{\beta}\rangle$ is either zero or can be obtained from flat space physics.
Interestingly, in this case the parameter of the conformal transformation $\sigma$ making this transition satisfies the equation $\Delta_{4}\,\sigma=\frac{1}{4}\mathcal{E}_{4}$ (5.1) and in the asymptotically flat case with Dirichlet boundary conditions has a unique solution (2.21), $\sigma=\varSigma_{\rm RFT}$. This apparently not very well known fact can be proven by using the equation for the conformal transformation of the four-dimensional Schouten tensor (4.8) ($g_{\mu\nu}=e^{2\sigma}\bar{g}_{\mu\nu}$) $P_{\mu\nu}-\bar{P}_{\mu\nu}=-\sigma_{\mu\nu}-\sigma_{\mu}\sigma_{\nu}+\frac{1}{2}\sigma_{\alpha}\sigma^{\alpha}g_{\mu\nu},$ (5.2) where $\sigma_{\mu}\equiv\nabla_{\mu}\sigma$ and $\sigma_{\mu\nu}\equiv\nabla_{\nu}\nabla_{\mu}\sigma$. Assuming that $\bar{g}_{\mu\nu}$ is the flat space metric with $\bar{P}_{\mu\nu}=0$, differentiating twice and again using this relation to express $P_{\mu\nu}$ in terms of the derivatives of $\sigma$, one has $\nabla^{\mu}\nabla^{\nu}\left(P_{\mu\nu}+\sigma_{\mu\nu}+\sigma_{\mu}\sigma_{\nu}-\frac{1}{2}\sigma_{\alpha}\sigma^{\alpha}g_{\mu\nu}\right)\\\ =\Delta_{4}\sigma-\frac{1}{4}\mathcal{E}_{4}=0,$ (5.3) whence it follows that the conformal invariant metric (2.12) in the RFT gauge (2.20) is actually the flat space one when the Weyl tensor is zero $\bar{R}^{\alpha}_{\;\;\beta\mu\nu}=0,\quad\bar{g}_{\mu\nu}=e^{-2\varSigma_{\rm RFT}[\,g\,]}g_{\mu\nu}\big{|}_{\,C_{\alpha\beta\mu\nu}=0}.$ (5.4) Note that $\bar{g}_{\mu\nu}$ here is not automatically the diagonal unit matrix $\delta_{\mu\nu}$, because (5.4) is an invariant statement valid in any coordinate system. ### 5.1 Anomaly driven cosmology Applications of the conformal anomaly in the cosmological context have a long history, see for example [26, 68, 9, 69, 70, 71, 72].
In particular, cosmology with the Friedmann–Robertson–Walker (FRW) metric represents the situation when the anomalous action $\Delta\varGamma[\bar{g},\sigma]$ entirely determines the physics of the field model and via effective equations of motion produces a nontrivial back reaction of quantum matter on the dynamical metric background. The most interesting example is, perhaps, the case when $\varGamma[\,\bar{g}\,]$ in (2.6) nontrivially contributes to this back reaction effect rather than just serves as an inert flat space background. This is the spatially closed cosmology driven by a conformal field theory (CFT) from the initial state in the form of a special microcanonical density matrix, which was originally suggested in [27] and recently reviewed in [46]. With the density matrix defined as the projector on the space of solutions of the Wheeler–DeWitt equations [28, 73], the statistical sum in this model has a representation of the Euclidean quantum gravity (EQG) path integral $Z=\int D[\,g_{\mu\nu},\phi\,]\;e^{-S[\,g_{\mu\nu},\phi\,]},$ (5.5) where integration runs over the metric $g_{\mu\nu}$ and matter fields $\phi$ which are periodic on the Euclidean spacetime of topology $S^{1}\times S^{3}$ with the time $\tau$ compactified to a circle $S^{1}$. When the classical action $S[\,g_{\mu\nu},\phi\,]$ is dominated by numerous CFT fields $\varPhi$ with their action $S_{C\\!F\\!T}[\,g_{\mu\nu},\varPhi\,]$, the statistical sum can be approximated by the contribution of the saddle point of this integral.
This is the extremum of the total action including the tree-level gravitational Einstein–Hilbert action $S_{EH}[\,g_{\mu\nu}]$ and the effective action $\varGamma[\,g_{\mu\nu}]$ of these CFT fields888Disregarding the graviton loops can be justified by the domination of conformal fields outnumbering the metric, and retaining the Einstein–Hilbert term obviously follows from the fact that this term with renormalized gravitational and cosmological constants is anyway induced from the quantum conformal sector., $\displaystyle\varGamma_{\rm tot}[\,g_{\mu\nu}]$ $\displaystyle=S_{EH}[\,g_{\mu\nu}]+\varGamma[\,g_{\mu\nu}],$ (5.6) $\displaystyle e^{-\varGamma[\,g_{\mu\nu}]}$ $\displaystyle=\int D\varPhi\,e^{-S_{C\\!F\\!T}[\,g_{\mu\nu},\varPhi\,]}.$ (5.7) Choosing as $g_{\mu\nu}$ the FRW metric with the scale factor $a(\tau)$ and the lapse function $N$ ($\varOmega^{2}_{(3)}$ is the metric of the 3-dimensional sphere of a unit radius), $ds^{2}=N^{2}d\tau^{2}+a^{2}d\varOmega^{2}_{(3)}=a^{2}(\tau)\big{(}d\eta^{2}+d\varOmega^{2}_{(3)}\big{)},$ (5.8) one immediately finds that in terms of the conformal time variable $\eta$, related to the Euclidean time $\tau$ by the relation $d\eta=d\tau/a(\tau)$, this metric is conformally equivalent to the metric $\bar{g}_{\mu\nu}\equiv g_{\mu\nu}^{EU}$ of the Einstein static universe with spatial sections—the 3-dimensional spheres of some constant radius $a_{0}$, $\displaystyle d\bar{s}^{2}=a_{0}^{2}\big{(}d\eta^{2}+d\varOmega^{2}_{(3)}\big{)}\equiv g_{\mu\nu}^{EU}dx^{\mu}dx^{\nu},$ (5.9) $\displaystyle ds^{2}=e^{2\sigma}d\bar{s}^{2},\quad g_{\mu\nu}=e^{2\sigma}g_{\mu\nu}^{EU},\quad\sigma=\ln\frac{a}{a_{0}}.$ (5.10) Therefore the CFT effective action expresses in terms of the same action on a static Einstein universe $\varGamma[\,g_{\mu\nu}^{EU}]\equiv\varGamma_{EU}$ and Wess–Zumino action (2.6) with the above conformal parameter $\sigma$ $\varGamma[\,g_{\mu\nu}]=\Delta\varGamma[\,g_{\mu\nu}^{EU},\sigma\,]+\varGamma_{EU}.$ (5.11) The 
calculation of $\varGamma_{EU}$ is strongly facilitated by the static nature of the background, but it still yields a nontrivial result in view of the compactification of time on $S^{1}$. To begin with, note that although $g_{\mu\nu}^{EU}$ explicitly depends on the size $a_{0}$ of $S^{3}$, the value of $\varGamma_{EU}$ is $a_{0}$-independent for a fixed period of the conformal time $\eta=\oint d\eta$. This follows from the invariance of the effective action under global conformal transformations (3.41) for conformally flat spacetimes with zero bulk part of the Euler characteristic (which is the case of $S^{1}\times S^{3}$). This can also be confirmed by using scaling properties of the conformal fields. Indeed, the energies of conformal quanta on a static spacetime scale as $1/a_{0}$ and their Hamiltonian reads $\hat{H}=\sum_{\omega}\frac{\omega}{a_{0}}\Big{(}\hat{a}^{\dagger}_{\omega}\hat{a}_{\omega}\pm\frac{1}{2}\Big{)},$ (5.12) where summation runs over all quantum numbers (and spins) of the energies $\omega/a_{0}$ of all field oscillator modes on a static 3-dimensional sphere of the radius $a_{0}$, and $\hat{a}^{\dagger}_{\omega}$ and $\hat{a}_{\omega}$ are the relevant creation-annihilation operators ($\pm$ signs correspond to bosons and fermions, respectively).
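The factorization of the statistical sum into Casimir and free energy parts, stated in (5.13)–(5.15) below, can be illustrated on a single bosonic mode of energy $\omega$ (in units of $1/a_{0}$) with conformal period $\eta$; a minimal numerical sketch with the unrenormalized single-mode Casimir part $\omega/2$:

```python
import math

# One bosonic oscillator: H = (omega/a0)(n + 1/2), period T = eta*a0, so the
# Boltzmann exponents depend only on eta*omega. Direct trace over n = 0,1,2,...
#   Tr e^{-T H} = sum_n e^{-eta*omega*(n + 1/2)}
# should factorize as exp(-eta*E_vac - F) with the (unrenormalized) single-mode
# Casimir energy E_vac = omega/2 and bosonic free energy F = ln(1 - e^{-eta*omega}).
eta, omega = 0.7, 1.3
trace = sum(math.exp(-eta*omega*(n + 0.5)) for n in range(2000))
closed = math.exp(-eta*omega/2) / (1.0 - math.exp(-eta*omega))
assert abs(trace - closed) < 1e-9
print("single-mode trace matches exp(-eta*E_vac - F)")
```

The full sum over modes in (5.14)–(5.15) is a product of such factors, one per oscillator.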
The path integral over (anti)periodic conformal (fermion) boson fields with a period ${\cal T}=\oint d\tau N$ on a static metric background is exactly calculable and equals the equilibrium statistical sum at the temperature $1/{\cal T}$ which expresses as a function of the conformal time period $\eta={\cal T}/a_{0}$ $e^{-\varGamma_{EU}}=\int D\varPhi\,e^{-S_{C\\!F\\!T}[\,g_{\mu\nu}^{EU},\varPhi\,]}\\\ ={\rm Tr}\,e^{-{\cal T}\hat{H}}=\exp\big{(}-\eta E_{\rm vac}-F(\eta)\big{)}.$ (5.13) Here $F(\eta)$ is the free energy of the gas of conformal particles and $E_{\rm vac}$ is a UV divergent Casimir energy which should be covariantly renormalized $\displaystyle F(\eta)$ $\displaystyle=\sum_{\omega}\Big{[}\pm\,\ln\big{(}1\mp e^{-\omega\eta}\big{)}\,\Big{]},$ (5.14) $\displaystyle E_{\rm vac}$ $\displaystyle=\Big{(}\sum_{\omega}\frac{\pm\,\omega}{2}\Big{)}_{\rm ren}.$ (5.15) Thus, the dependence on $a_{0}$ is absorbed into the dependence on $\eta$ which should be fixed under the rescaling of $a_{0}$. Note that it is $\eta$ that should be kept fixed under the global conformal transformation which simultaneously rescales the lapse function $N$ and $a_{0}$ in the definition of the conformally invariant $\eta=\oint d\tau\,N/a_{0}$. Remarkably, the covariant renormalization of the vacuum Casimir energy $E_{\rm vac}$ also follows from the behavior of the effective action on the orbit of the conformal group. The Einstein universe extending from $-\infty$ to $+\infty$ in $\eta$ is mapped to flat space by the transition to the radial coordinate $\rho$ $\eta\mapsto\rho=a_{0}e^{\eta},\quad-\infty<\eta<+\infty,\quad 0\leq\rho<\infty,$ (5.16) with the conformal relation between the two metrics $\displaystyle ds^{2}_{EU}=e^{2\sigma}ds_{\rm flat}^{2},\quad\sigma=-\eta=\ln\frac{a_{0}}{\rho},$ (5.17) $\displaystyle ds_{\rm flat}^{2}=d\rho^{2}+\rho^{2}d\varOmega^{2}_{(3)}.$ (5.18) For the vacuum state (the limit $\eta\to\infty$ and $F(\eta)\to 0$ in Eq. 
(5.13)) $\varGamma_{EU}\to E_{\rm vac}\eta$. On the other hand, from Eq. (2.6) with the above expression for $\sigma$ $\displaystyle\Delta\varGamma[\,g_{\rm flat},\sigma\,]=\frac{\beta}{8\pi^{2}}\int d^{4}x\sqrt{g_{\rm flat}}\big{(}\Box_{\rm flat}\sigma\big{)}^{2}$ $\displaystyle\qquad\qquad\quad-\frac{1}{32\pi^{2}}\Big{(}\frac{\gamma}{6}+\frac{\beta}{9}\Big{)}\,\int d^{4}x\sqrt{g_{EU}}R^{2}_{EU}.$ (5.19) Bearing in mind that $\Box_{\rm flat}\sigma=-2/\rho^{2}$, $\int d^{4}x\sqrt{g_{\rm flat}}\\!\mapsto 2\pi^{2}\int d\rho\,\rho^{3}$, $R_{EU}=6/a_{0}^{2}$ and $\int d^{4}x\sqrt{g_{EU}}\mapsto 2\pi^{2}a_{0}^{4}\int d\eta$, one has $\displaystyle\varGamma_{EU}-\varGamma_{\rm flat}=\Delta\varGamma[\,g_{\rm flat},\sigma\,]$ $\displaystyle=\beta\\!\int\frac{d\rho}{\rho}-\Big{(}\frac{3}{8}\gamma+\frac{\beta}{4}\Big{)}\\!\int d\eta$ $\displaystyle=\frac{3}{4}\,\Big{(}\beta-\frac{\gamma}{2}\Big{)}\int d\eta.$ (5.20) Therefore, under an obvious assumption that $\varGamma_{\rm flat}=0$ one has $E_{\rm vac}=\frac{3}{4}\,\Big{(}\beta-\frac{\gamma}{2}\Big{)}.$ (5.21) In other words, after covariant renormalization by covariant counterterms the Casimir energy gets the value compatible with the behavior of the renormalized effective action on the conformal group orbit (or with the Brown–Cassidy formula for the vacuum stress tensor). This compatibility was indeed checked by direct renormalization of the UV divergent sum over field modes in (5.15) [47, 48, 49, 50]. Let us now turn to the contribution of the conformal transformation from the generic FRW metric to that of the static Einstein universe in (5.11). To begin with we use the freedom of finite renormalization (4.23) which reduces the theory to the case of anomaly (1.1) with $\gamma=0$ and, in particular, renders $E_{\rm vac}=\tfrac{3}{4}\beta$. 
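The coefficient $\tfrac{3}{4}\big{(}\beta-\tfrac{\gamma}{2}\big{)}$ in (5.21) follows from elementary integrals; a sympy sketch of the bookkeeping behind (5.19)–(5.20), using $\Box_{\rm flat}\sigma=-2/\rho^{2}$ for $\sigma=\ln(a_{0}/\rho)$ and the volume elements quoted in the text:

```python
import sympy as sp

beta, gamma, rho, a0 = sp.symbols('beta gamma rho a_0', positive=True)

# sigma = ln(a0/rho); flat 4d Laplacian of a radial function is f'' + (3/rho) f':
sigma = sp.log(a0/rho)
box_sigma = sp.diff(sigma, rho, 2) + (3/rho)*sp.diff(sigma, rho)
assert sp.simplify(box_sigma + 2/rho**2) == 0        # Box_flat sigma = -2/rho^2

# Integrands of (5.19) per unit of conformal time eta (d rho = rho d eta,
# sqrt(g_flat) d^4x -> 2 pi^2 rho^3 d rho, sqrt(g_EU) d^4x -> 2 pi^2 a0^4 d eta,
# R_EU = 6/a0^2):
first = beta/(8*sp.pi**2) * 2*sp.pi**2 * rho**3 * box_sigma**2 * rho
second = -(sp.Rational(1, 32)/sp.pi**2) * (gamma/6 + beta/9) * 2*sp.pi**2 * a0**4 * (6/a0**2)**2
E_vac = sp.simplify(first + second)
assert sp.simplify(E_vac - sp.Rational(3, 4)*(beta - gamma/2)) == 0   # Eq. (5.21)
```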
In the cosmological context this freedom corresponds to the adjustment of the coupling constant of the Starobinsky $R^{2}$-action [68] which plays an important role in inflation theory and the dark energy model. Then, with $\gamma=0$ and $\sigma$ given by (5.17) the Wess–Zumino term in (5.11) takes the form [27] $\varGamma_{\rm Ren}[\,g\,]-\varGamma_{\rm Ren}[\,g_{EU}]=\frac{3\beta}{2}\oint d\tau\,N\left(\frac{a^{\prime 2}}{a}-\frac{a^{\prime 4}}{6\,a}\right),$ (5.22) when written down in terms of the original FRW coordinates with the notation for the invariant time derivative $a^{\prime}=da/Nd\tau$. Note that the result is again independent of the constant $a_{0}$ because it contains only differentiated $\sigma$ and, moreover, it does not involve higher order derivatives of $a(\tau)$. The last property is entirely due to the fact of $\gamma$ being renormalized to zero and due to the cancellation of higher derivative terms in the minimal form of Wess-Zumino action (4.24). Now we assemble together the Einstein-Hilbert action (with the reduced Planck mass $M_{\rm P}=1/\sqrt{8\pi G}$ and the cosmological constant $\varLambda$), the action on the Einstein universe space (5.13) and (5.22). This leads to the total effective action on the generic Euclidean FRW background periodic in Euclidean time with the period $\eta$ measured in units of the conformal time $\displaystyle\varGamma_{\rm tot}[\,a,N\,]=6\pi^{2}M_{P}^{2}\oint d\tau\,N\bigg{\\{}-aa^{\prime 2}-a+\frac{\varLambda}{3}a^{3}$ $\displaystyle+\frac{\beta}{4\pi^{2}M_{P}^{2}}\left(\frac{a^{\prime 2}}{a}-\frac{a^{\prime 4}}{6a}+\frac{1}{2a}\right)\bigg{\\}}+F(\eta),$ (5.23) $\displaystyle\eta=\oint\frac{d\tau N}{a}.$ (5.24) Here the contribution of the conformal anomaly and Casimir energy (5.21) (with $\gamma=0$) are both weighted by the parameter $\beta$ of the topological term in the conformal anomaly. 
The free energy of the gas of conformal particles $F(\eta)$ is a function of the effective (“comoving”) temperature of this gas – the inverse of the circumference $\eta$ of the cosmological instanton (5.24). Despite the essentially non-stationary metric background, this gas stays in an equilibrium state because of the scaling properties of its particles and produces a back reaction on the Friedmann metric background. Applications of the action (5.23) have been considered in a number of papers [27, 43, 44] and were recently reviewed in [46]. The physics of the CFT driven cosmology is entirely determined by this effective action and the effective (Euclidean) Friedmann equation. The latter follows from the action by varying the lapse $N(\tau)$ and expressing the Hubble factor in terms of the energy density. In the cosmic type gauge $N=1$, $\dot{a}=da/d\tau$, it reads $\displaystyle\frac{1}{a^{2}}-\frac{\dot{a}^{2}}{a^{2}}=\frac{\varepsilon}{3M_{\pm}^{2}\big{(}\varepsilon\big{)}},$ (5.25) $\displaystyle\varepsilon=M_{P}^{2}\varLambda+\frac{1}{2\pi^{2}a^{4}}\sum_{\omega}\frac{\omega}{e^{\eta\omega}-1},$ (5.26) $\displaystyle M_{\pm}^{2}(\varepsilon)=\frac{M_{P}^{2}}{2}\Big{(}1\pm\sqrt{1-\beta\varepsilon/6\pi^{2}M_{P}^{4}}\,\Big{)},$ (5.27) where the total energy density $\varepsilon$ includes the cosmological constant contribution and the radiation density of conformal field modes distributed over the Planckian spectrum with the comoving temperature $1/\eta$. The nonlinear effect of the Weyl anomaly manifests itself in the effective Planck mass squared, which explicitly depends on $\varepsilon$ and takes two possible values $M_{\pm}^{2}(\varepsilon)$.999To avoid mixup of the signs in $M_{\pm}^{2}$ and sign factors associated with the statistics of conformal $\omega$-modes we present here the radiation spectrum only for the bosonic case. These equations should be amended by the expression for the conformal time period that interpolates between the turning points of the solution with $\dot{a}(\tau)=0$.
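Writing $w\equiv 1/a^{2}-\dot{a}^{2}/a^{2}$, Eqs. (5.25)–(5.27) say that $w$ takes one of the two values $\varepsilon/3M_{\pm}^{2}$; these are precisely the roots of the quadratic equation $(\beta/8\pi^{2}M_{P}^{2})\,w^{2}-w+\varepsilon/3M_{P}^{2}=0$ obtained by eliminating $\dot{a}$ from the variation of (5.23) (the explicit quadratic form is our reconstruction, not written out in the text). A sympy check, which also exhibits the reality bound $\beta\varepsilon\leq 6\pi^{2}M_{P}^{4}$:

```python
import sympy as sp

MP, beta, eps = sp.symbols('M_P beta varepsilon', positive=True)
s = sp.sqrt(1 - beta*eps/(6*sp.pi**2*MP**4))   # real only for beta*eps <= 6 pi^2 M_P^4

for M2 in (MP**2/2*(1 + s), MP**2/2*(1 - s)):  # M_+^2 and M_-^2 of (5.27)
    w = eps/(3*M2)                             # right-hand side of (5.25)
    quad = beta/(8*sp.pi**2*MP**2)*w**2 - w + eps/(3*MP**2)
    assert sp.simplify(quad) == 0

# Vieta identities equivalent to (5.27):
Mp2, Mm2 = MP**2/2*(1 + s), MP**2/2*(1 - s)
assert sp.simplify(Mp2 + Mm2 - MP**2) == 0
assert sp.simplify(Mp2*Mm2 - beta*eps/(24*sp.pi**2)) == 0
```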
Note that the right hand side of the Friedmann equation does not contain Casimir energy density – it turns out to be fully screened due to the dynamical effect of the Weyl anomaly. This is the result of the finite renormalization (4.23) leading to a particular value of the anomaly coefficient of $\Box R$, $\gamma=0$. For the choice of $+$ sign in $M_{\pm}^{2}$ the solutions of this quantum Friedmann equation turn out to be the so-called garlands – the cosmological instantons of $S^{1}\times S^{3}$ topology, which have the periodic scale factor $a(\tau)$ oscillating on $S^{1}$ between maximal and minimal values $a_{\pm}$ [27]. These instantons serve as initial conditions for the cosmological evolution in the physical Lorentzian spacetime. This evolution follows from $a(\tau)$ by the analytic continuation $a_{L}(t)=a(\tau_{+}+it)$, $(da_{L}/dt)^{2}=-\dot{a}^{2}$, to the complex plane of the Euclidean time at the turning point with the maximal scale factor $a_{+}=a(\tau_{+})$. It can incorporate a finite inflationary stage if the model is generalized to the case when a primordial cosmological constant is replaced by the potential of the inflaton field $\phi$, $\varLambda\to V(\phi)/M_{P}^{2}$, staying in the slow-roll regime during the inflationary stage101010Alternatively, the role of inflaton can be played by Ricci curvature in the Starobinsky $R^{2}$-model, the coupling of the $R^{2}$ term being subject to the renormalization respecting the zero value of $\alpha$ in the total Weyl anomaly [43]. and decaying in the end of inflation by a usual exit scenario [43, 44]. 
The energy scale of inflation, i.e. its Hubble parameter $H\sim\sqrt{\varLambda/3}$, turns out to be bounded from above by $\sqrt{2}\pi M_{P}/\sqrt{\beta}$, so that to solve the problem of hierarchy between the Planck and inflation scales one needs $\beta\gg 1$, which matches with the previously adopted assumption that numerous conformal fields drastically outnumber all other fields and dominate over their loop corrections. For the negative sign in $M_{\pm}^{2}$ the solutions represent vacuum $S^{4}$-instantons of the no-boundary type with the vanishing minimal value of the scale factor $a_{-}=0$. They correspond to the diverging $\eta\sim\int_{0}^{a_{+}}da/a\dot{a}\to\infty$ or zero temperature. These solutions, however, do not contribute to the statistical sum because of their infinitely positive action $\varGamma_{\rm tot}\to+\infty$: the quantum effect of the trace anomaly which flips the sign of the negative tree-level action of the Hartle-Hawking instantons [74] and sends it to $+\infty$ [27]. Thus the CFT cosmology scenario is free from the infrared catastrophe of the no-boundary quantum state which would imply that the origin of an infinitely big Universe is infinitely more probable than that of a finite one. ## 6 Renormalization group and the metamorphosis of the running scale This section is essentially of a discussion nature and is associated with the covariant perturbation theory of the above type. One of the motivations for this discussion is that, in spite of a widespread concept of running cosmological and gravitational constants, which is especially popular within the asymptotic safety approach, there is a very profound and persuasive criticism of this concept [30]. It is based on numerous arguments concerning the tadpole structure of the cosmological and Einstein terms, on concrete results for graviton scattering amplitudes [29] which cannot be interpreted in terms of a universal scaling of $\varLambda$ and $G$, etc.
At the same time, in renormalizable gravity models with multiple couplings, the solution of the full set of RG equations includes running cosmological and gravitational constants [36]. So the question arises of how to interpret their running scale. Here is an attempt to do this in terms of the covariant curvature expansion developed in [38, 39, 40]. We start with the classical action which is the sum of local curvature invariants of growing dimensionality $(4+m)$ in units of mass $S[\,g_{\mu\nu}]=\sum\limits_{m,N}\varLambda^{(m)}_{N}\int d^{4}x\,\sqrt{g}\,\Re^{(4+m)}_{N}(x).$ (6.1) They are monomials of $N$-th order in curvature tensors which are acted upon by covariant derivatives $\displaystyle\Re^{(m)}_{N}(x)=\mathop{\underbrace{\nabla...\nabla}_{m-2N}}\mathop{\overbrace{\Re(x)...\Re(x)}^{N}},$ (6.2) $\displaystyle{\rm dim}\;\Re^{(m)}_{N}(x)\equiv\big{[}\,\Re^{(m)}_{N}(x)\,\big{]}=m.$ (6.3) The curvature monomials enter the action with coupling constants $\varLambda^{(m)}_{N}$ of the decreasing (with growing $m$) dimensionality $[\,\varLambda^{(m)}_{N}\,]=d-m,\quad m=0,1,\ldots.$ (6.4) Summation in (6.1) can run over a finite set of terms, providing the renormalizability of the theory, or be formally extended to an infinite set in the framework of generalized RG theory with an infinite set of couplings $\\{\varLambda\\}=\varLambda^{(m)}_{N}$.
Within covariant perturbation theory the full metric is decomposed as a sum of the flat spacetime metric $\tilde{g}_{\mu\nu}$ and the perturbation $h_{\mu\nu}$ $g_{\mu\nu}=\tilde{g}_{\mu\nu}+h_{\mu\nu},$ (6.5) so that each curvature invariant becomes expanded as an infinite series in powers of $h_{\mu\nu}$ forming a new set of $h$-monomials on the flat space background $\displaystyle\int d^{4}x\,\sqrt{g}\,\Re^{(m)}_{N}=\sum\limits_{M=N}^{\infty}\int d^{4}x\,\sqrt{\tilde{g}}\,I_{M}^{(m)}(h),$ $\displaystyle I_{M}^{(m)}(h)\propto\mathop{\underbrace{\tilde{\nabla}...\tilde{\nabla}}_{m}}\mathop{\overbrace{h(x)...h(x)}^{M}}.$ (6.6) Then in the notations of the covariant perturbation theory the calculation of the renormalized effective action leads to the same sequence of monomials acted upon by the operator form factors $\varGamma_{n}^{(i)}\big{(}\\{\varLambda\\},\tilde{\nabla}_{1},...\tilde{\nabla}_{1}\big{)}$ which make them nonlocal, $\\{\varLambda\\}$ denoting the full set of couplings (6.4). 
Within dimensional regularization these renormalized coupling constants get rescaled by the normalization parameter $\mu$ and expressed in terms of their dimensionless analogues $\lambda^{(m)}_{N}(\mu)$ $\varLambda^{(m)}_{N}=\mu^{d-m}\lambda^{(m)}_{N}(\mu),$ (6.7) and the perturbation theory form factors also express as the functions of dimensionless arguments $\varGamma_{M}^{(m)}\big{(}\\{\varLambda\\},\tilde{\nabla}_{1},...\tilde{\nabla}_{M}\big{)}=\mu^{d-m}\gamma_{M}^{(m)}\Big{(}\\{\lambda(\mu)\\},\tfrac{\tilde{\nabla}_{1}}{\mu},...\tfrac{\tilde{\nabla}_{M}}{\mu}\Big{)}$ (6.8) Correspondingly the effective action becomes $\displaystyle\varGamma[\,g_{\mu\nu}]=\sum\limits_{(m)}\mu^{d-m}\sum\limits_{M=0}^{\infty}\int d^{d}x\,\sqrt{\tilde{g}}$ $\displaystyle\times\gamma_{M}^{(m)}\big{(}\\{\lambda(\mu)\\},\tfrac{\tilde{\nabla}_{1}}{\mu},...\tfrac{\tilde{\nabla}_{M}}{\mu}\big{)}I_{M}^{(m)}(h_{1},h_{2},...h_{M})\,\big{|}_{\,\\{x\\}=x},$ (6.9) where $I_{M}^{(m)}(h_{1},h_{2},...h_{M})$ is the analogue of the invariant (6.6) with split spacetime arguments. A typical assumption of the RG theory that the renormalized action is independent of the running scale then leads to the set of equations for $\lambda^{(m)}_{N}(\mu)$ with the beta functions following from the residues of spacetime dimension poles in the formfactors $\varGamma_{M}^{(m)}\big{(}\\{\lambda(\mu)\\},\\{\tilde{\nabla}/\mu\\}\big{)}$, $\mu\frac{d}{d\mu}\varGamma[\,g_{\mu\nu}]=0\to\mu\frac{d}{d\mu}\lambda^{(m)}_{N}(\mu)=\beta^{(m)}_{N}(\mu)\big{(}\\{\lambda(\mu)\\}\big{)}.$ (6.10) A critical step now consists in the choice of the running scale which could probe the high energy limit of the theory and embrace a simultaneous scaling of all formfactors and invariant monomials of (6.9). Then the replacement of the parameter $\mu$ by this scale will identically bring the effective action to the form explicitly revealing its UV limit. 
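The RG equations (6.10) for the dimensionless couplings are of the standard form. As a purely illustrative example (the beta function below is a generic one-loop ansatz, not one of the actual $\beta^{(m)}_{N}$ of the text), a single coupling obeying $\mu\,d\lambda/d\mu=b\lambda^{2}$ integrates to the familiar logarithmic running, which can be verified symbolically:

```python
import sympy as sp

mu, mu0, lam0, b = sp.symbols('mu mu_0 lambda_0 b', positive=True)

# Illustrative single-coupling, one-loop running with lam(mu0) = lam0
# (a generic ansatz, not one of the actual beta^{(m)}_N of the text):
lam = lam0/(1 - b*lam0*sp.log(mu/mu0))

assert sp.simplify(mu*sp.diff(lam, mu) - b*lam**2) == 0   # solves mu dlam/dmu = b lam^2
assert lam.subs(mu, mu0) == lam0                          # initial condition
```

Solutions of this kind, functions of $\log\mu$, are what gets substituted for $\lambda^{(m)}_{N}(\mu)$ before the scale $\mu$ is promoted to an operator below.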
The choice of this scaling object can be very different depending on the concrete physical setup. If the theory has a dimensional scalar field $\phi$ with a nonvanishing and slowly varying mean value, it would be natural to identify the RG normalization $\mu$ with $\phi$. This would lead to a nontrivial “running” in $\phi$ of the cosmological and Einstein terms, $\varLambda\to\varLambda(\phi)$ and $G\to G(\phi)$ (amended, of course, by a gradient expansion in derivatives of $\phi$), but then these terms acquire the interpretation of a Coleman–Weinberg type potential and a nonminimal coupling of $\phi$ to the scalar curvature. We, however, are interested in the UV scaling of all derivatives, $\tilde{\nabla}\to\infty$, which in the momentum space representation of scattering amplitudes is conventionally represented by the high energy Mandelstam invariants or some other combinations of external momenta. In the coordinate representation of the covariant perturbation theory of [38, 39, 40], the role of this scale should be played by some operator.
So we suggest as a candidate for this object the following nonlocal operator $\tilde{D}$ which also formally tends to infinity in the limit of $\tilde{\nabla}\to\infty$ and in fact embraces a simultaneous scaling of all invariant monomials in (6.9), $\tilde{D}\equiv\Big{(}-\sum_{N=1}^{\infty}\tilde{\Box}_{N}\Big{)}^{1/2},\quad\tilde{\Box}_{N}\equiv\tilde{g}^{\mu\nu}\tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}.$ (6.11) Though very formal, this operator is well defined in each $N$-th monomial order because it becomes truncated to a finite sum when acting on a monomial of $N$ perturbations $h_{1},...h_{N}$, and for $N=0$ it is just zero because it acts on an $x$-independent constant, $\tilde{D}_{N}\equiv\bigg{(}-\sum_{M=1}^{N}\tilde{\Box}_{M}\bigg{)}^{1/2},\quad\tilde{D}_{0}=0.$ (6.12) In the UV domain $\tilde{\nabla}_{n}\to\infty$, when $\tilde{\nabla}_{n}/\tilde{D}_{N}=O(1)$, $n\leq N$, the formfactors in each $N$-th order become, after the replacement $\mu\to\tilde{D}$, functions of a single operator variable $\tilde{D}_{N}$, $\displaystyle\mu^{4-m}\gamma_{N}^{(m)}\big{(}\lambda(\mu)\,\big{|}\,\tfrac{\tilde{\nabla}_{1}}{\mu},...\tfrac{\tilde{\nabla}_{N}}{\mu}\big{)}\,\Big{|}_{\,\mu\to\tilde{D}_{N}}\;\to$ $\displaystyle\;(\tilde{D}_{N})^{4-m}\gamma_{N}^{(m)}\big{(}\lambda(\tilde{D}_{N})\,\big{|}\,O(1)\big{)}\equiv(\tilde{D}_{N})^{4-m}\lambda_{N}^{(m)}\big{(}\tilde{D}_{N}\big{)},$ (6.13) and the expansion of the action, which is formally independent of $\mu$, takes the form $\displaystyle\varGamma[\,g_{\mu\nu}]\,\Big{|}_{\,\mu\to\tilde{D}}\to\,\sum\limits_{m}\sum\limits_{N=0}^{\infty}\int d^{4}x\,\sqrt{\tilde{g}}$ $\displaystyle\times\,(\tilde{D}_{N})^{4-m}\lambda_{N}^{(m)}(\tilde{D}_{N})\,I_{N}^{(m)}(h_{1},h_{2},...h_{N})\,\Big{|}_{\,\\{x\\}=x}.$ (6.14) The next step consists in the recovery of the covariant form of the expansion in terms of the original spacetime curvature.
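The truncation of (6.11) to (6.12) works because each $\tilde{\Box}_{n}$ differentiates only the factor carrying the argument $x_{n}$; on a monomial of $N$ perturbations, $\tilde{D}$ therefore acts as a finite sum of momenta squared. A one-dimensional plane-wave sketch of this bookkeeping:

```python
import sympy as sp

x1, x2, k1, k2 = sp.symbols('x1 x2 k1 k2', real=True)

# One-dimensional stand-ins for two perturbations h_1(x_1), h_2(x_2):
h1, h2 = sp.exp(sp.I*k1*x1), sp.exp(sp.I*k2*x2)
box1 = lambda f: sp.diff(f, x1, 2)   # Box_1 acts only on the x1 dependence
box2 = lambda f: sp.diff(f, x2, 2)   # Box_2 acts only on the x2 dependence

# D_2^2 = -(Box_1 + Box_2) pulls down k1^2 + k2^2 on the product h1*h2,
# so on this monomial D_2 acts as the c-number sqrt(k1^2 + k2^2):
D2_squared = -(box1(h1*h2) + box2(h1*h2))
assert sp.simplify(D2_squared - (k1**2 + k2**2)*h1*h2) == 0
```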
Curiously, despite the fact that the covariant perturbation theory of [38, 39, 40] is rather often referred to in the literature, subtle details of this step are usually disregarded, which leads to confusing statements on the ambiguity of this procedure, the dependence on the gauge by which the metric perturbation $h_{\mu\nu}$ is related to the curvature [33], etc. At the same time, this procedure is unique, provided that one does not treat $\tilde{g}_{\mu\nu}$ and $\tilde{\nabla}_{\mu}$ as Cartesian $\delta_{\mu\nu}$ and $\partial_{\mu}$, but rather proceeds in a generic coordinate system and uses only the invariant statement that the curvature of the tilded metric vanishes, $\tilde{R}^{\alpha}_{\;\;\beta\mu\nu}=0$. This is the covariant equation for $\tilde{g}_{\mu\nu}$ in terms of the curved metric $g_{\mu\nu}$ and its curvature $R^{\alpha}_{\;\;\beta\mu\nu}$, whose solution exists as a perturbation expansion in $R^{\alpha}_{\;\;\beta\mu\nu}$ and also requires imposing a gauge [38, 39]. But the result of substituting this solution back into the manifestly noncovariant (double field) series (6.6) is gauge independent because of the implicit invariance of the left hand side of (6.6).
In the convenient DeWitt type gauge $\tilde{\nabla}^{\nu}h_{\mu\nu}-\tfrac{1}{2}\tilde{\nabla}_{\mu}h=O[\,h^{2}]$, $h\equiv\tilde{g}^{\alpha\beta}h_{\alpha\beta}$, the solution for $h_{\mu\nu}$ and $\tilde{\nabla}_{\mu}$ in terms of $g_{\mu\nu}$ and $\nabla_{\mu}$ reads in the lowest order as [38, 39] $h_{\mu\nu}=-\frac{2}{\Box}R_{\mu\nu}+O[\,\Re^{2}],\quad\tilde{\nabla}_{\mu}=\nabla_{\mu}+O[\,\Re\,].$ (6.15) Using this in (6.14), we get the replacement of $h$-monomials by the covariant curvature monomials along with the replacement of $\tilde{D}_{N}$ by $D_{N}$, $\displaystyle I_{N}^{(m)}(h_{1},h_{2},...h_{N})\to$ $\displaystyle\qquad\quad\frac{1}{\Box_{1}...\Box_{N}}\Re^{(m+2N)}_{N}(x_{1},...x_{N})+O[\,\Re^{N+1}],$ (6.16) $\displaystyle\tilde{D}_{N}\to D_{N}+O[\,\Re\,],$ (6.17) where $D_{N}$ is obviously defined by (6.12) in terms of full-fledged covariant d’Alembertians $\Box=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}$, and we reabsorb the coefficient $(-2)^{N}$ into the symbolic definition of the $N$-th order covariant monomial – the analogue of the local $\Re^{(m)}_{N}(x)$, see Eq. (6.2), with $N$ split spacetime arguments $\Re^{(m)}_{N}(x_{1},...x_{N})=\mathop{\underbrace{\nabla...\nabla}_{m-2N}}\Re(x_{1})...\Re(x_{N}),\,N\geq 1.$ (6.18) For $N=0$ this monomial can be defined as an irrelevant constant bringing no contribution in the UV limit. Thus the UV limit of the effective action takes the form $\displaystyle\varGamma[\,g_{\mu\nu}]\to\int d^{4}x\,\sqrt{g}\sum\limits_{m,N\geq 0}^{\infty}\\!\frac{\lambda_{N}^{(m)}(D_{N})(D_{N})^{4-m}}{\Box_{1}...\Box_{N}}$ $\displaystyle\qquad\qquad\qquad\times\,\Re^{(m+2N)}_{N}(x_{1},\cdots x_{N})\,\Big{|}_{\,\\{x\\}=x},$ (6.19) where we recall that the dimensionless formfactors $\lambda_{N}^{(m)}(D_{N})$ follow from the running RG couplings of the theory $\lambda_{N}^{(m)}(\mu)$ by the replacement of $\mu$ with the operator $D_{N}$.
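The leading term of (6.15) can be checked at the linearized level: for a plane-wave perturbation on a flat Euclidean background, the gauge condition reduces the linearized Ricci tensor to $-\tfrac{1}{2}\Box h_{\mu\nu}$, which inverts to $h_{\mu\nu}=-(2/\Box)R_{\mu\nu}+O[\,\Re^{2}]$. A sympy sketch of this standard linearized-gravity fact:

```python
import sympy as sp

# Plane-wave perturbation h_{mn} = eps_{mn} e^{i k.x} on a flat Euclidean
# background: derivatives act as i*k, so the linearized Ricci tensor
#   R_{mn} = (1/2)(d_a d_m h^a_n + d_a d_n h^a_m - Box h_{mn} - d_m d_n h)
# becomes algebraic in k and the symmetric polarization eps.
k = sp.symbols('k0:4')
eps = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'e{min(i, j)}{max(i, j)}'))
tr = sum(eps[i, i] for i in range(4))
k2 = sum(ki**2 for ki in k)

# DeWitt/harmonic gauge  k^a eps_{am} = (1/2) k_m eps, solved for eps_{0m}:
gauge = [sp.Eq(sum(k[a]*eps[a, m] for a in range(4)), k[m]*tr/2) for m in range(4)]
sol = sp.solve(gauge, [eps[0, m] for m in range(4)], dict=True)[0]

residuals = []
for m in range(4):
    for n in range(4):
        Rmn = (-sum(k[a]*(k[m]*eps[a, n] + k[n]*eps[a, m]) for a in range(4))
               + k2*eps[m, n] + k[m]*k[n]*tr) / 2
        # In this gauge R_{mn} = (1/2) k^2 eps_{mn}, i.e. R_{mn} = -(1/2) Box h_{mn}:
        residuals.append(sp.simplify((Rmn - k2*eps[m, n]/2).subs(sol)))

assert all(r == 0 for r in residuals)
```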
Let us consider the application of this result to the cosmological constant sector involving the metric invariants of dimensionality $m=0$ and $\varLambda^{(4)}_{0}=\varLambda/16\pi G$. This classical cosmological term gives rise to the infinite set of zero dimension invariants $\displaystyle\int d^{4}x\,\sqrt{g}=\sum\limits_{n=0}^{\infty}\int d^{4}x\,\sqrt{\tilde{g}}\,I_{n}^{(0)}(\tilde{g},h),$ (6.20) $\displaystyle\;\;I_{0}^{(0)}(\tilde{g},h)=1,\;\;I_{1}^{(0)}(\tilde{g},h)=-\tfrac{1}{2}h,$ $\displaystyle\;\;I_{2}^{(0)}(\tilde{g},h)=\tfrac{1}{4}h^{2}-\tfrac{1}{2}h_{\mu\nu}^{2},\ldots$ (6.21) (indices are contracted by the flat metric and $h=\tilde{g}^{\mu\nu}h_{\mu\nu}$), whereas at the quantum level they generate the sequence of high energy $m=0$ structures of (6.19) $\int\\!d^{4}x\sqrt{g}\sum\limits_{N=2}^{\infty}\lambda_{N}^{(0)}(D_{N})\frac{(D_{N})^{4}}{\Box_{1}...\Box_{N}}\,\Re^{(2N)}_{N}(x_{1},...\,x_{N})\Big{|}_{\\{x\\}=x},$ (6.22) where the zeroth order term is zero in view of $D_{0}=0$ (see Eq. (6.12)), and the first order term is also absent due to its tadpole (total derivative) nature – remember that $D_{1}=(-\Box_{1})^{1/2}$ and $D_{1}^{4}/\Box_{1}=\Box_{1}$ is acting on $\Re^{(2)}_{1}(x_{1})$.111111An important caveat is necessary here concerning the annihilation of the total derivative terms. The surface terms at infinity should vanish, which is equivalent to a good IR behavior of the nonlocal form factor $\lambda_{1}^{(0)}(D_{1})$ at $\Box\to 0$. We will assume this property based on the maximum logarithmic singularity of $\lambda_{1}^{(0)}(D_{1})$ which is a function of $\log(-\Box)$ solving the RG equation. The same also applies to integrations by parts considered in what follows. Otherwise, the procedure of subtracting the boundary terms, like the Gibbons-Hawking surface action at asymptotically flat infinity, will be needed, which we briefly discuss below.
The expansion starts at $N=2$ with the term which has the following structure $\displaystyle 4\sum\int d^{4}x\,\sqrt{g}\,\Re^{(2)}(x)\,\lambda_{2}^{(0)}\big{(}\sqrt{-2\Box}\big{)}\,\Re^{(2)}(x)$ $\displaystyle\;\;=\int d^{4}x\sqrt{g}\Big{(}R_{\mu\nu}F_{1}(\Box)R^{\mu\nu}+RF_{2}(\Box)R\Big{)}+O[\,\Re^{3}].$ (6.23) Here we took into account that the set of invariants $\Re^{(4)}_{2}(x_{1},x_{2})$ can be represented as a sum of terms factored out into the products of Ricci tensors and Ricci scalars with some coefficients $a$ and $b$, [Footnote 12: Terms bilinear in the Riemann curvature under the integration sign also reduce to bilinear combinations of $R_{\mu\nu}$ and $R$ by using the expression for the Riemann tensor in terms of the Ricci one [38, 39], see footnote 5.] $\Re^{(4)}_{2}(x_{1},x_{2})=aR_{\mu\nu}(x_{1})\,R^{\mu\nu}(x_{2})+bR(x_{1})R(x_{2}),$ (6.24) and also used an obvious corollary of integration by parts $\displaystyle\int d^{4}x\sqrt{g}\,F(\Box_{1},\Box_{2})\Re(x_{1})\Re(x_{2})\,\Big{|}_{\,\{x\}=x}$ $\displaystyle\qquad\qquad=\int d^{4}x\sqrt{g}\,\Re(x)F(\Box,\Box)\Re(x).$ (6.25) A remarkable feature of the expression (6.23) is that the power-law operator factors in $(D_{N})^{4}/\Box_{1}...\Box_{N}$ at $N=2$ completely cancelled out to give the dimensionless form factors $F_{1}(\Box)$ and $F_{2}(\Box)$, which originate as linear combinations of the relevant running $\lambda_{2}^{(0)}\big{(}\sqrt{-2\Box}\big{)}$ obtained by solving the RG equation. Even more remarkable is the fact that this is a nonlocal term which is quadratic in the curvature even though it has originated from the sector of the cosmological term expanded in the series of zero dimension invariants. This is what can be called the metamorphosis into high-energy partners of the cosmological constant suggested by J. Donoghue in [32]. Their structure is a direct corollary of the dimensionality arguments within the RG approach. 
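For the $N=2$, $m=0$ term this cancellation can be made explicit. Assuming $D_{N}^{2}=-(\Box_{1}+\cdots+\Box_{N})$, consistent with $D_{1}=(-\Box_{1})^{1/2}$ quoted above, the coincidence limit implemented by (6.25) sends $\Box_{1},\Box_{2}\to\Box$, so that

```latex
\frac{\lambda_{2}^{(0)}(D_{2})\,(D_{2})^{4}}{\Box_{1}\Box_{2}}
\;\longrightarrow\;
\lambda_{2}^{(0)}\big(\sqrt{-2\Box}\big)\,\frac{(-2\Box)^{2}}{\Box^{2}}
\;=\; 4\,\lambda_{2}^{(0)}\big(\sqrt{-2\Box}\big),
```

which is precisely the origin of the overall factor $4$ and of the argument $\sqrt{-2\Box}$ appearing in (6.23).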
The arising form factors of the curvature squared terms are the descendants of the RG running couplings of the zero dimension invariants which participate in the decomposition of the cosmological constant term. In fact, the same structure (6.23) gets reproduced for the contribution of any dimension $m$ in the expansion (6.19). For even dimensionality, $m\to 2m$, this can be easily demonstrated by decomposing any $(2m+4)$-dimensional quadratic invariant as this was done above [Footnote 13: For the set of 2-dimensional curvatures $\Re$ only even dimensions $m$ enter the expansion (6.19), but this can always be generalized to the case of odd-dimensional “curvatures”, like for example the extrinsic curvature in Hořava gravity models.] $\Re^{(2m+4)}_{2}(x_{1},x_{2})=\sum_{m_{1}+m_{2}=2m}\Re^{(m_{1}+2)}_{1}(x_{1})\Re^{(m_{2}+2)}_{1}(x_{2}).$ (6.26) Using this in (6.19) one has complete cancellation of the dimensional factor $(D_{2})^{4-2m}/\Box^{2}\sim\Box^{-m}$ in the expression $\displaystyle\int d^{4}x\,\sqrt{g}\sum\limits_{m_{1}+m_{2}=2m}\Re^{(m_{1}+2)}_{1}(x)$ $\displaystyle\qquad\times\frac{\lambda_{2}^{(2m)}(D_{2})(D_{2})^{4-m_{1}-m_{2}}}{\Box^{2}}\,\Re^{(m_{2}+2)}_{1}(x)$ $\displaystyle=\int d^{4}x\sqrt{g}\Big{(}R_{\mu\nu}F_{1}(\Box)R^{\mu\nu}+RF_{2}(\Box)R\Big{)}+O[\,\Re^{3}].$ (6.27) With $\Re^{(m+2)}_{1}=\mathop{\overbrace{\nabla\cdots\nabla}^{m}}\Re^{(2)}_{1}$, this follows from integration by parts and the use of various corollaries of the contracted Bianchi identity ($\nabla^{\nu}R_{\mu\nu}=\tfrac{1}{2}\nabla_{\mu}R$, etc.), $\displaystyle\int d^{4}x\,\sqrt{g}\sum\limits_{m_{1}+m_{2}=2m}\mathop{\underbrace{\nabla\cdots\nabla}_{m_{1}}}\Re(x)F(\Box)\mathop{\underbrace{\nabla\cdots\nabla}_{m_{2}}}\Re(x)$ $\displaystyle=\int d^{4}x\,\sqrt{g}\Big{(}R_{\mu\nu}\,\Box^{m}F_{1}(\Box)R^{\mu\nu}+R\,\Box^{m}F_{2}(\Box)R\Big{)}$ $\displaystyle\qquad+O[\,\Re^{3}].$ (6.28) Here the operators $F_{1}(\Box)$ and $F_{2}(\Box)$ have the same 
dimension as $F(\Box)$ and originate from $F(\Box)$ by the algebra of contracting the indices of covariant derivatives. Using this relation in the left hand side of (6.27) one gets the right hand side with completely cancelled powers of $\Box$. Thus, Eq. (6.27) with $m=2$ implies the conversion of the gravitational coupling constant into the dimensionless form factors of the Einstein term partners. These partners have the same structure as the cosmological term partners quadratic in curvatures. This is again the metamorphosis of RG running, of the form $1/16\pi G(\mu)=\mu^{2}\lambda^{(2)}_{2}(\mu)\to F_{1,2}(\Box)$. Note that all this takes place in the UV limit where all curvatures in their monomials are rapidly varying in spacetime with their derivatives $\nabla\to\infty$. At intermediate energies, when the mass scale $M$ surfaces up, the scaling (6.13) ceases to make sense and should roughly be replaced by $D\sim M$, and instead of (6.23) one gets exactly the cosmological constant partners of Donoghue [32] which have the structure of $M^{4}\int d^{4}x\,\sqrt{g}\Big{(}R_{\mu\nu}\frac{F_{1}^{\rm part}(\Box)}{\Box^{2}}R^{\mu\nu}+R\,\frac{F_{2}^{\rm part}(\Box)}{\Box^{2}}R\Big{)}.$ (6.29) The dimensionless form factors $F^{\rm part}_{1,2}(\Box)$ here are accumulating loop corrections with nonlocal logarithmic structures of the form $F^{\rm part}(\Box)\sim\ln\frac{M^{2}-\Box}{M^{2}}.$ (6.30) Note that these partners are still in the high-energy domain $-\Box\geq M^{2}$, but they are subdominant as compared to the leading contribution (6.23) with dimensionless form factors which incorporate the logarithmically running solutions of RG equations. This is because the partners (6.29) are suppressed by the power law factors $M^{4}/\Box^{2}$. The exact form of these form factors at intermediate scales was derived at one-loop order in [33] for a rather generic theory of massive fields by using the heat kernel technique of [38, 39]. 
In the IR domain $-\Box\ll M^{2}$ they are of course expandable in a local gradient series reflecting the decoupling phenomenon [34, 35, 33]. Similarly, the gravitational constant partner in the IR reads as $M^{2}\int d^{4}x\,\sqrt{g}\Big{(}R_{\mu\nu}\frac{F_{1}(\Box)}{\Box}R^{\mu\nu}+R\,\frac{F_{2}(\Box)}{\Box}R\Big{)},$ (6.31) which is reminiscent of the construction of the nonlocal action for long-distance modifications of gravity theory in [51, 52]. This differs from the cosmological constant partner by different powers of $M$ and the power of $\Box$ in the denominator. One should be more careful at this point: while the case of (6.31) is well defined in asymptotically flat spacetime, the cosmological constant partner (6.29) is IR divergent for the reasons discussed above. The action of $\tfrac{1}{\Box^{2}}$ is not well defined in four dimensions (or, equivalently, $\int d^{4}x\sqrt{g}(\tfrac{1}{\Box}\Re)^{2}$ is IR divergent), so that the perturbation expansion in the dimension zero sector should be critically reconsidered. To trace the origin of this difficulty, note that the first three terms of the cosmological term expansion (6.20) are divergent, whereas a similar expansion for the Einstein term becomes well defined only after the subtraction of the Gibbons-Hawking surface term $\int_{\infty}d^{3}\sigma^{\mu}\big{(}\partial_{\mu}h-\partial^{\nu}h_{\mu\nu}\big{)}$ at the infinity of asymptotically flat spacetime. 
Due to this subtraction we can write for the integral of the invariant $\Re^{(2)}_{1}(x)=-R(x)$, weighted in the Einstein action by $\varLambda^{(2)}_{1}=1/16\pi G$, a legitimate expansion (6.6) starting with the quadratic order in $h_{\mu\nu}$, $\displaystyle\int d^{4}x\sqrt{g}(-R)-\int_{\infty}d^{3}\sigma^{\mu}\big{(}\partial_{\mu}h-\partial^{\nu}h_{\mu\nu}\big{)}$ $\displaystyle\qquad\qquad\quad\quad\;=\sum\limits_{M=2}^{\infty}\int d^{4}x\sqrt{\tilde{g}}\,I_{M}^{(2)}(\tilde{g},h),$ (6.32) $\displaystyle I_{2}^{(2)}(\tilde{g},h)=-\tfrac{1}{4}h_{\mu\nu}\tilde{\Box}h^{\mu\nu}+\tfrac{1}{8}h\tilde{\Box}h-\tfrac{1}{2}\big{(}\tilde{\nabla}^{\nu}h_{\mu\nu}-\tfrac{1}{2}\tilde{\nabla}_{\mu}h\big{)}^{2}.$ (6.33) Then the above calculational strategy leads to the effective action (6.31) whose tree level IR limit should match low energy physics with the Planck mass cutoff $M^{2}$ and the form factors $F_{1}(0)=1$ and $F_{2}(0)=1/2$. This tree level answer, up to $\Re^{3}$-corrections, directly corresponds to the above expression for $I_{2}^{(2)}(\tilde{g},h)$ with $h_{\mu\nu}$ given by Eq. (6.15) in terms of the curved space metric $g_{\mu\nu}$ [51, 52]. To the best of our knowledge, no such subtraction is known for the cosmological term expansion (6.20), so that its rigorous treatment is still to be done. It would be interesting to see whether new structures can be generated by the regularization of this IR behavior. Apparently, this should be based on the analogue of the Graham-Fefferman construction for asymptotically AdS spaces [75, 76] and deserves further studies. In any case, the UV behavior of both cosmological and gravitational constant partners, which should not be sensitive to IR problems, is determined by the curvature squared terms (6.23) with running dimensionless “couplings”. Their
# Vortex Fermi Liquid and Strongly Correlated Quantum Bad Metal Nayan Myerson-Jain Department of Physics, University of California, Santa Barbara, CA 93106 Chao-Ming Jian Department of Physics, Cornell University, Ithaca, New York 14853, USA Cenke Xu Department of Physics, University of California, Santa Barbara, CA 93106 ###### Abstract The semiclassical description of two-dimensional ($2d$) metals based on the quasiparticle picture suggests that there is a universal threshold of the resistivity: the resistivity of a $2d$ metal is bounded by the so-called Mott-Ioffe-Regal (MIR) limit, which is of the order of $h/e^{2}$. If a system remains metallic while its resistivity is beyond the MIR limit, it is referred to as a “bad metal”, which challenges our theoretical understanding, as the very notion of quasiparticles is invalidated. The description of the system becomes even more challenging when there is also strong correlation between the electrons. Partly motivated by the recent experiment on transition metal dichalcogenide moiré heterostructures, we seek an understanding of strongly correlated bad metals whose resistivity far exceeds the MIR limit. For some strongly correlated bad metals, though a microscopic description based on electron quasiparticles fails, a tractable dual description based on the “vortex of charge” is still possible. We construct a concrete example of such strongly correlated bad metals where the vortices are fermions with a Fermi surface, and we demonstrate that its resistivity can be exceptionally large at zero temperature. And when extra charge $\delta n_{e}$ is doped into the system away from half-filling, a small Drude weight proportional to $(\delta n_{e})^{2}$ will emerge in the optical conductivity. 
## I Introduction The most rudimentary description of a metal relies on the notion of quasiparticles, $i.e.$ an electron near the Fermi surface can be well approximated as a wave packet between two consecutive elastic scatterings with impurities. This picture requires that $l_{\mathrm{m}}k_{F}>1$, where $l_{\mathrm{m}}$ is the mean free path of the electrons from scattering with the impurities Emery and Kivelson (1995). When $l_{\mathrm{m}}k_{F}\sim 1$, the resistivity of a two dimensional system is of the order of $h/e^{2}$, which is also known as the Mott-Ioffe-Regal (MIR) limit. The common wisdom is that, for noninteracting electrons, when the resistivity of a $2d$ metal exceeds the MIR limit, not only would the rudimentary description of the system fail, but the system would actually become an insulator due to Anderson localization. The potential metal-insulator transition (MIT) of a noninteracting $2d$ electron system Evers and Mirlin (2008) (within certain symmetry classes, such as the symplectic) should happen when the resistivity is of order $h/e^{2}$. In strongly interacting electron systems, the universal threshold of resistivity $h/e^{2}$ still appears to hold. For electrons at half-filling (on average one electron per site) on a lattice, the competition between the interaction and the kinetic energy can lead to an interaction-driven MIT between a metal and a Mott insulator phase. When the insulator is a particular type of spin liquid phase, this MIT can be understood through a parton construction Lee and Lee (2005); Senthil (2008), and the total resistivity follows the Ioffe-Larkin rule Ioffe and Larkin (1989), $\rho=\rho_{b}+\rho_{f}$, where $\rho_{b}$ and $\rho_{f}$ are the resistivities from the bosonic and fermionic partons respectively. $\rho_{f}$ is a smooth function across the MIT, and at low temperature $\rho_{f}$ mostly arises from disorder, which is expected to be small if we assume weak disorder. 
Hence at the MIT, the critical resistivity is mostly dominated by the bosonic parton $\rho_{b}$. The critical resistivity $\rho_{b}$ is expected to be of order $h/e^{2}$ (though in the DC limit $\rho_{b}$ may acquire an extra factor of $7\sim 8$, based on analytical evaluation in a certain theoretical limit Witczak-Krempa et al. (2012)). [Footnote 1: Here we also note that weak disorder or an Umklapp process is needed to remove the logarithmic divergence of the conductivity due to thermal fluctuation, which is predicted by hydrodynamics Delacrétaz (2020); Kovtun (2012).] Hence in most noninteracting as well as strongly interacting systems that we understand, the resistivity of a $2d$ metallic system should be roughly bounded by the MIR limit. Thus if a $2d$ system remains metallic while its resistivity far exceeds the MIR limit, it challenges our theoretical understanding. These exotic metals are referred to as “bad metals” Emery and Kivelson (1995). The recent experiment on transition metal dichalcogenides (TMD) revealed the existence of a novel interaction-driven MIT Li et al. (2021), where the universal MIR limit is violated: the DC critical resistivity at the MIT exceeds the MIR limit by nearly two orders of magnitude. The system is supposedly modelled by an extended Hubbard model of spin-1/2 electrons on a triangular moiré lattice Wu et al. (2018); Tang et al. (2019), but the experimental finding is qualitatively beyond the previous theory of the MIT. A few recent theoretical proposals Xu et al. (2022); Kim et al. (2022) were made in order to understand this exotic MIT. The experiment mentioned above only revealed a critical point whose resistivity is clearly beyond the MIR limit. Given the current experimental finding and the strongly interacting nature of the system, it is natural to ask: can there also be a stable bad metal phase of strongly correlated electrons, whose properties can be evaluated in a tractable way? 
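To put the MIR threshold in numbers, a minimal sketch (the exact SI values of $h$ and $e$ are hardcoded here; the "two orders of magnitude" multiplier is taken from the experiment described above):

```python
# The MIR limit for a 2d metal is set by the resistance quantum h/e^2.
h = 6.62607015e-34   # Planck constant, J s (exact in SI since 2019)
e = 1.602176634e-19  # elementary charge, C (exact in SI since 2019)

R_MIR = h / e**2     # ~ 25.8 kOhm "per square"
print(f"h/e^2 = {R_MIR / 1e3:.1f} kOhm")  # h/e^2 = 25.8 kOhm

# The TMD moire experiment reports a critical resistivity nearly two
# orders of magnitude above this scale, i.e. in the MOhm range:
print(f"100 x h/e^2 = {100 * R_MIR / 1e6:.2f} MOhm")  # 2.58 MOhm
```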
In this work we discuss the construction of a quantum bad metal state with longitudinal transport only; the electrical resistivity $\rho_{e}$ of the state can far exceed the MIR limit even with weak disorder, at zero and low temperature. It is worth noting that the phenomenology of the state we construct is different from the original example of a “bad metal” (hole-doped cuprates) discussed in Ref. Emery and Kivelson, 1995, where the resistivity increases with temperature monotonically and exceeds the MIR limit at high temperature, whereas the resistivity of our “quantum bad metal” remains finite and large at zero temperature, and clearly violates the MIR bound. Our construction is formulated through the dual degrees of freedom of the “charge vortex”. The particle-vortex duality has a long history Peskin (1978); Dasgupta and Halperin (1981); Fisher and Lee (1989). This duality was originally discussed for bosons, but recent developments have generalized it to the fermion-vortex duality Son (2015); Wang and Senthil (2015a); Metlitski and Vishwanath (2016); Mross et al. (2016); Kachru et al. (2017), as well as the Chern-Simons matter theory to free Dirac or Majorana fermion dualities Aharony (2016); Seiberg et al. (2016); Hsin and Seiberg (2016); Aharony et al. (2017); Metlitski et al. (2017); Chen et al. (2018); Son et al. (2019); Jian et al. (2019). Since the particle-vortex duality is a “strong-weak” duality, when the charges are strongly correlated, which invalidates a perturbative description based on quasiparticles, the vortices are weakly interacting through the dual gauge field, which facilitates a rudimentary description. Hence one way to construct a quantum bad metal for charges is to drive the vortices into a good metal. The vortices can naturally form a good metal as long as (1) the vortex is a fermion, and (2) the fermionic vortices form a Fermi surface with a finite density of states. 
In the next section we will discuss how exactly a vortex becomes a fermion in our construction, and how to derive the charge responses of the system from the physics of vortices. We would like to clarify that we are not the first to investigate correlated electrons as a vortex liquid. Besides the better-known interpretation of composite fermions as a “vortex liquid” in the context of the half-filled Landau level (and similarly for charged bosons at filling 1) Halperin et al. (1993); Read (1998); Son (2015); Wang and Senthil (2016a, b), a metallic phase with anomalously large conductivity that emerges in amorphous thin films has also motivated discussions of exotic physics of superconductor vortices Galitski et al. (2005); Wu and Phillips (2006); Kapitulnik et al. (2019). We will compare our construction with the previous works. ## II Construction of the quantum bad metal ### II.1 General considerations Before we detail our construction, some general considerations can already be made. (1) As was pointed out in the previous literature, at least for charged bosons, the product of the conductivity of the charges and the conductivity of the vortices is a constant Fisher et al. (1990); Gazit (2015), $i.e.$ $\sigma_{e}\sim 1/\sigma_{v}$. If this relation still (at least approximately) holds in our construction, it implies that if the vortex conductivity $\sigma_{v}$ follows the standard behavior of a good metal at finite temperature, then the resistivity $\rho_{e}(T)$ of the charge should decrease with $T$, at least below a certain characteristic energy scale. (2) A charge vortex can generally be viewed as a point defect with circulating vorticity of the charge current. A charge vortex must become an anti-vortex under spatial reflection $\mathcal{P}$. This is because the electric current circulation will reverse its orientation under reflection. 
If the vortices form a Fermi surface, in general it would break $\mathcal{P}$, as a Fermi surface usually is not invariant under the particle-hole transformation. The same observation can be made for time-reversal $\mathcal{T}$: since the charge density is invariant under $\mathcal{T}$, time-reversal would reverse the direction of the electric current circulation. We will discuss later how to preserve $\mathcal{P}$ and $\mathcal{T}$ in our construction, by enforcing a certain particle-hole symmetry of the fermionic vortices. (3) As was pointed out in Ref. Wang and Senthil, 2016a, b, the Wiedemann-Franz law $\kappa\sim T\sigma$ should generally be strongly violated in a vortex liquid, as the vortices carry entropy, but no charge. In the state we construct this is still true: the modified Wiedemann-Franz law should be $\kappa\sim LT\sigma_{v}\sim LT\rho_{e}$. The Lorenz number $L$ is about $\kappa/(T\sigma_{e})\sim\rho_{e}^{2}$, which can be exceedingly larger than in an ordinary metal. Here we remind the readers that our state has longitudinal transport only, and it can have both large resistivity and large thermal conductivity. (4) For a strongly interacting electron system, the relaxation of the electric current is largely independent of the relaxation of a single particle. Hence the physics of a strongly interacting electron liquid may only be captured by some hydrodynamical description without microscopic particles Delacrétaz (2020); Kovtun (2012); Hartnoll (2014); Hartnoll and Mackenzie (2021); Lucas and Fong (2018); Hartnoll et al. (2016), as hydrodynamics is defined at a much larger length scale. But since the particle-vortex duality is a strong-weak duality, the interaction between electron densities becomes $\displaystyle\sum_{i,j}V_{i,j}n_{i}n_{j}\sim\int d^{2}x\ \frac{1}{g}(\vec{\nabla}\times\vec{a})^{2}$ (1) in the dual picture, where $g$ can be viewed as the gauge coupling of the gauge charges (vortices), and also the charge compressibility $\kappa_{e}$. 
The stronger the charge interaction is, the weaker is the bare gauge coupling of the vortices. The common “patch theory” for analyzing the RG flow of a Fermi surface coupled with a U(1) gauge field predicts that the gauge coupling would eventually flow to a strongly coupled fixed point Nayak and Wilczek (1994a, b); Lee (2009); Metlitski and Sachdev (2010); Mross et al. (2010). This patch theory breaks down when there is disorder, as disorder would mix different patches in the momentum space. But at least when the bare gauge coupling $g$ is weak enough (which corresponds to a strong charge density-density interaction), there should be a sufficient window for the gauge coupling to be viewed as a perturbation, and the momentum of the vortices can be transferred to the photons, and then relax through disorder before “feeding back” to the vortices. Hence in this sense we can view the dual vortex system as an approximate vortex Fermi liquid. Figure 1: The resistivity $\rho(T)$ as a function of temperature at low temperature, computed using the composition formula Eq. 17. We have chosen $\sigma_{b}=\exp(-\Delta_{b}/T)$ where $\Delta_{b}=1$, and $\sigma_{f}\sim 200$. $\rho$ is measured in units of $h/e^{2}$ in this plot. We can also give $\sigma_{f}$ a Fermi-liquid-like temperature dependence; the plot remains qualitatively unchanged. ### II.2 Quantum Bad metal at half-filling on a lattice The system we begin with is a strongly interacting electron system at half-filling (one electron per site on average) on a lattice; later we will discuss what happens when the system is doped away from half-filling. We start with the standard “$\mathrm{SU}(2)$ slave rotor” theory for the electron operator Affleck et al. (1988); Dagotto et al. (1988); Wen and Lee (1996); Lee et al. 
(1998): $c_{\uparrow}=f_{\uparrow}z_{1}-f^{\dagger}_{\downarrow}z^{\dagger}_{2},$ (2) $c_{\downarrow}=f_{\downarrow}z_{1}+f^{\dagger}_{\uparrow}z^{\dagger}_{2}.$ (4) Here $(f_{\uparrow},f_{\downarrow})$ is a fermionic spinon doublet (fermionic partons) that carries spin-1/2, and $(z_{1},z_{2})$ are slave rotors (bosonic partons) carrying the electric charge. This formalism can maximally host an SU(2) charge transformation and an SU(2) gauge transformation, and both transformations can be made explicit by rewriting Eq. 4 in a matrix form (see $e.g.$ Ref. Hermele, 2007; Ran et al., 2008 and references therein). [Footnote 2: An even more complete parton construction can accommodate an SO(4) gauge symmetry Xu and Sachdev (2010). But for our purpose it suffices to assume that all SU(2) transformations, including the spin symmetry, are broken down to U(1). In fact, on a lattice with frustration, both the charge SU(2) and the gauge SU(2) transformations are broken down to U(1) by the most natural mean field states of $z_{\alpha}$ and $f_{\alpha}$.] 
The assignment of the electric charge symmetry $\mathrm{U}(1)_{e}$, gauge symmetry $\mathrm{U}(1)_{g}$ and spin symmetry $\mathrm{U}(1)_{s}$ on the partons is $\mathrm{U}(1)_{e}:\ z_{1}\rightarrow e^{\mathrm{i}\theta_{e}}z_{1},\ \ z_{2}\rightarrow e^{-\mathrm{i}\theta_{e}}z_{2},\ \ f_{\alpha}\rightarrow f_{\alpha};$ (5) $\mathrm{U}(1)_{g}:\ z_{1}\rightarrow e^{\mathrm{i}\theta_{g}}z_{1},\ \ z_{2}\rightarrow e^{\mathrm{i}\theta_{g}}z_{2},\ \ f_{\alpha}\rightarrow e^{-\mathrm{i}\theta_{g}}f_{\alpha};$ (7) $\mathrm{U}(1)_{s}:\ z_{\alpha}\rightarrow z_{\alpha},\ \ f_{\uparrow}\rightarrow e^{\mathrm{i}\theta_{s}}f_{\uparrow},\ \ f_{\downarrow}\rightarrow e^{-\mathrm{i}\theta_{s}}f_{\downarrow}.$ (9) The system being at half-filling implies that the total rotor numbers of $z_{1}$ and $z_{2}$ are equal: $\sum_{i}n_{1,i}=\sum_{i}n_{2,i}$. There is a dynamical $\mathrm{U}(1)_{g}$ gauge field $a_{\mu}$ that couples to both $z_{\alpha}$ and $f_{\alpha}$. The U(1) gauge constraint demands that on every site $i$, $\sum_{\alpha}n_{\alpha,i}+1=\sum_{\alpha}f^{\dagger}_{\alpha,i}f_{\alpha,i}$. A detailed discussion of the physical meaning of the partons introduced can be found in Ref. Ran et al., 2008. This slave rotor parton construction allows us to construct many states of the strongly interacting electron system which are difficult to visualize using free or weakly interacting electrons. For example, if the bosonic partons $z_{\alpha}$ are in a trivial bosonic Mott insulator state with $n_{1,i}=n_{2,i}=0$ (meaning $\sum_{\alpha}f^{\dagger}_{\alpha,i}f_{\alpha,i}=1$ on each site), the system becomes a Mott insulator of electrons with a charge gap, and the spin physics of the Mott insulator depends on the state of $f_{\alpha}$. 
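The charge bookkeeping in Eqs. (5)-(9) can be verified mechanically. The sketch below is our own check (helper names are ours, not from the paper): both terms of $c_{\uparrow}$ in Eq. 4 should carry charge $(1,0)$ under $(\mathrm{U}(1)_{e},\mathrm{U}(1)_{g})$, i.e. the electron is gauge neutral with unit electric charge, and the composite $\phi_{g}=z_{1}z_{2}$ introduced in the next paragraph should carry $(0,2)$:

```python
# (U(1)_e, U(1)_g) charges of the partons, read off from Eqs. (5)-(9);
# the U(1)_s spin charge is omitted since it plays no role here.
charges = {
    "z1": (+1, +1),  # z_1 -> e^{i theta_e} z_1 and e^{i theta_g} z_1
    "z2": (-1, +1),  # z_2 -> e^{-i theta_e} z_2 and e^{i theta_g} z_2
    "f":  (0, -1),   # f_alpha -> e^{-i theta_g} f_alpha
}

def dagger(q):
    """Hermitian conjugation flips every U(1) charge."""
    return tuple(-c for c in q)

def product(*qs):
    """U(1) charges of a product of operators simply add."""
    return tuple(map(sum, zip(*qs)))

# c_up = f_up z_1 - f_down^dag z_2^dag  (Eq. 4): both terms must agree.
term1 = product(charges["f"], charges["z1"])
term2 = product(dagger(charges["f"]), dagger(charges["z2"]))
print(term1, term2)  # (1, 0) (1, 0): unit electric charge, gauge neutral

# phi_g = z_1 z_2:
print(product(charges["z1"], charges["z2"]))  # (0, 2)
```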
Various spin liquid states can be designed and classified depending on the mean field band structure of $f_{\alpha}$ Wen (2002). The many-body state of electrons is determined by the states of the partons. In the last decade the study of symmetry protected topological states (SPT) significantly broadened our understanding of the states of matter Chen et al. (2013, 2012), which also allows us to construct even more novel states of electrons using the partons. We first use $z_{\alpha}$ to define two other composite bosonic fields: $\phi_{e}=z^{\dagger}_{1}z_{2}$, $\phi_{g}=z_{1}z_{2}$. $\phi_{e}$ and $\phi_{g}$ carry charge $(2,0)$ and $(0,2)$ under $\left(\mathrm{U}(1)_{e},\mathrm{U}(1)_{g}\right)$. Then we drive the composite fields $\phi_{e}$ and $\phi_{g}$ into a bosonic SPT state with $\mathrm{U}(1)_{e}$ and $\mathrm{U}(1)_{g}$ symmetries, which is the bSPT state for two flavors of bosons constructed in Ref. Levin and Senthil, 2004. The physics of this bSPT state is analogous to the quantum spin Hall insulator: the vortex of $\phi_{e}$ carries charge of $\phi_{g}$, and vice versa. If we follow the Chern-Simons description of the bSPT Lu and Vishwanath (2012), this state is $\displaystyle\mathcal{L}_{\mathrm{bSPT}}=\frac{\mathrm{i}K^{IJ}}{2\pi}\tilde{a}_{I}\wedge d\tilde{a}_{J}+\frac{\mathrm{i}2}{2\pi}\tilde{a}_{1}\wedge da+\frac{\mathrm{i}2}{2\pi}\tilde{a}_{2}\wedge dA^{e},$ (10) where $K^{IJ}$ takes the same form as the Pauli matrix $\sigma^{x}$. Here $\ast d\tilde{a}_{1}$ and $\ast d\tilde{a}_{2}$ are the dual of the currents of $\phi_{g}$ and $\phi_{e}$ respectively. This bSPT state of the rotors also has a particle-hole symmetry of the bosonic rotors, and in this bSPT state the expectation value of the total rotor number of both $z_{1}$ and $z_{2}$ is zero. Please note that the total rotor number being zero does not imply a trivial vacuum state, as the rotor number (just like a spin $S^{z}$ operator) can take both positive and negative values. 
A bSPT state is gapped, and also nondegenerate, hence it is safe to integrate out the bosonic degrees of freedom and obtain the response to the gauge fields. After integrating out $\tilde{a}_{1,2}$ from Eq. 10, we obtain: $\mathcal{L}=\mathcal{L}_{F}(f_{\alpha},a_{\mu})+\frac{4\mathrm{i}}{2\pi}a\wedge dA^{e}+\frac{1}{g}(\vec{\nabla}\times\vec{a})^{2}+\cdots.$ (11) One can also introduce the external gauge field for the spin symmetry $\mathrm{U}(1)_{s}$, but it won’t have a nontrivial response from the bSPT state. The mutual Chern-Simons term in Eq. 11 fundamentally changes the physics of the system in the following way: the electric charge current, which is defined as $J^{e}=\delta\mathcal{L}/\delta A^{e}$, is identified as $\displaystyle J^{e}=e\frac{4}{2\pi}\ast da,$ (12) meaning the flux of $a_{\mu}$ now carries electric charge $4e$. Hence the bSPT state so constructed turns the gauge field $a_{\mu}$ into the dual of the charge current in the sense of the particle-vortex duality Peskin (1978); Dasgupta and Halperin (1981); Fisher and Lee (1989), and turns the gauge charge of $a_{\mu}$ into the charge vortex. The fermionic parton $f_{\alpha}$, which carries gauge charge $1$ under $a_{\mu}$, now automatically becomes the vortex of the electric charge: when a charge (now a flux of $a_{\mu}$) is carried around the gauge-charged $f_{\alpha}$, it accumulates a Berry phase. It is worth noting that, for more general bSPT states of $z_{\alpha}$ with only a mutual $\mathrm{U}(1)_{e}\times\mathrm{U}(1)_{g}$ Chern-Simons response, the electric charge carried by the flux of $a_{\mu}$ has to be an integer multiple of $4e$. Hence, the bSPT described above is the minimal non-trivial choice. Now we take the long wavelength limit, and integrate out both $f_{\alpha}$ and $a_{\mu}$; we also choose the temporal gauge with $a_{\tau}=0$. 
The response Lagrangian in terms of $A^{e}$ is $\displaystyle\mathcal{L}_{\mathrm{res}}=\sum_{\omega}\frac{1}{2}\frac{4}{\pi^{2}}\frac{\omega^{2}}{\Pi_{f}(\omega)}|\vec{A}^{e,t}(\omega)|^{2}.$ (13) $\vec{A}^{e,t}$ is the transverse component of $\vec{A}^{e}$. $\Pi_{f}(\omega)$ is the polarization of $f_{\alpha}$, and it should be proportional to $\mathrm{i}\omega\sigma_{f}(\omega)$ after analytic continuation Lee and Nagaosa (1992), where $\sigma_{f}(\omega)$ is the conductivity of the fermionic parton ($i.e.$ also the vortex) $f_{\alpha}$. This implies that the electrical resistivity of the system should be $\displaystyle\rho_{e}(\omega)=\frac{\pi^{2}}{4}\sigma_{f}(\omega)=\frac{\pi^{2}}{4}\left(\frac{\sigma_{0}}{1-\mathrm{i}\omega\tau_{v}}\right).$ (14) We note that here $\rho_{e}$ is measured in units of $\hbar/e^{2}$; $\sigma_{f}$ is computed in the convention that $f_{\alpha}$ carries charge 1. Here we can evaluate the conductivity of the good metal of $f_{\alpha}$ using the rudimentary Drude formula. $\sigma_{0}$ is the conductivity of $f_{\alpha}$ in the DC limit. It was shown recently that even when a Fermi surface is coupled to a dynamical gauge field, the response to the gauge field is still exactly the same as what is computed by the Drude theory (at least when there is no disorder) Shi et al. (2022). We also exploit the fact that, when the electron density has a strong interaction, the bare gauge coupling becomes weak, and the photon-vortex interaction remains perturbative at least within a large window of scales. An analysis of the fermions interacting with a gauge field in a disordered environment can be found in Ref. Kumar et al., 2022. In Eq. 14, $\sigma_{0}$ can be rather large, namely the vortices form a good metal, when there is a finite Fermi surface of $f_{\alpha}$ and the disorder is weak. In this case the electrical resistivity of the system can be far beyond the MIR limit, $i.e.$ the system is a very bad metal. 
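Eq. 14 ties the charge resistivity directly to the vortex Drude conductivity. A minimal numeric sketch (the values $\sigma_{0}=200$ and $\tau_{v}=1$ are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def rho_e(omega, sigma0=200.0, tau_v=1.0):
    """Eq. 14: rho_e(omega) = (pi^2/4) * sigma0/(1 - i omega tau_v),
    in units of hbar/e^2 (the MIR scale h/e^2 is 2*pi in these units)."""
    sigma_f = sigma0 / (1.0 - 1j * omega * tau_v)
    return (np.pi**2 / 4.0) * sigma_f

# DC limit: a good vortex metal (large sigma0) -> a very bad charge metal.
rho_dc = rho_e(0.0)
print(rho_dc.real)  # (pi^2/4)*200 ~ 493.5 hbar/e^2, far above the MIR scale
```

The inversion exploited in the text is visible here: the better the vortex Fermi surface conducts, the larger the charge resistivity.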
Now we investigate the spatial reflection symmetry $\mathcal{P}$ of our system. Let us use $\mathcal{P}_{y}:y\rightarrow-y$ as an example. We assume that $c_{\alpha}$ is invariant up to a sign under $\mathcal{P}_{y}$; this leads to the transformation of $f_{\alpha}$, $z_{\alpha}$: $\displaystyle\mathcal{P}_{y}:~{}~{}f_{\alpha}\rightarrow(\sigma^{x})_{\alpha\beta}f_{\beta}^{\dagger},\ \ \ z_{\alpha}\rightarrow(\sigma^{x})_{\alpha\beta}z^{\dagger}_{\beta}.$ (15) The $\mathrm{U}(1)_{e}$ and $\mathrm{U}(1)_{g}$ charges are even and odd under $\mathcal{P}_{y}$ respectively. This means that $\tilde{a}_{1,2}$ transform oppositely under $\mathcal{P}$, and the bSPT state preserves $\mathcal{P}$ based on Eq. 10. In order to ensure the reflection symmetry, we also need the band structure of $f_{\alpha}$ to satisfy $\varepsilon_{\uparrow}(k_{x},k_{y})=-\varepsilon_{\downarrow}(k_{x},-k_{y})$. Our construction also preserves a (special) time-reversal symmetry $\mathcal{T}$ defined as follows: the electron operator is still invariant under $\mathcal{T}$ (up to an extra sign). We can choose the following transformations of $z_{\alpha}$ and $f_{\alpha}$ to ensure the desired transformation of the electrons: $\displaystyle\mathcal{T}:~{}~{}z_{\alpha}\rightarrow(\sigma^{x})_{\alpha\beta}z^{\dagger}_{\beta},\ \ \ f_{\alpha}\rightarrow(\sigma^{x})_{\alpha\beta}f^{\dagger}_{\beta}.$ (16) As we can see, the $\mathrm{U}(1)_{e}$ and $\mathrm{U}(1)_{g}$ charges are again even and odd under $\mathcal{T}$ respectively. This means that $\tilde{a}_{1,2}$ transform oppositely under $\mathcal{T}$, and the bSPT state preserves $\mathcal{T}$ based on Eq. 10. In order to ensure the time-reversal symmetry, we also need the band structure of $f_{\alpha}$ to satisfy $\varepsilon_{\uparrow}(\vec{k})=-\varepsilon_{\downarrow}(-\vec{k})$. 
More precisely, here the time-reversal is a product between the particle-hole transformation and a spatial inversion (without the spatial-inversion transformation, the time-reversal transformation alone would demand $\varepsilon_{\uparrow}(\vec{k})=-\varepsilon_{\downarrow}(\vec{k})$, which would make the Fermi pockets of the two spinon bands overlap with each other, and lead to a potential exciton instability due to nesting). A more realistic time-reversal symmetry for electrons with $\mathcal{T}^{2}=-1$ can be defined and preserved if we introduce another orbital flavor to the electrons. So far we have ignored the conductivity of the bosonic partons, which is valid when the energy scale is much smaller than the gap of the bosons $\Delta_{b}$. At finite frequency $\omega$, the bosons will also make two new nonzero contributions to the longitudinal response of $A^{e}$. The first is a simple addition to the response Lagrangian of the boson polarization, $i.e.$ $\mathcal{L}^{b}_{\mathrm{res}}\sim\frac{1}{2}\Pi_{b}(\omega)|\vec{A}^{e,t}(\omega)|^{2}$, for which $\Pi_{b}(\omega)\rightarrow\mathrm{i}\omega\sigma_{b}(\omega)$ after analytic continuation. This only modifies the charge conductivity by shifting it by $\sigma_{b}(\omega)$. The second, more interesting contribution is that a longitudinal term will be generated for the internal gauge field $a_{\mu}$ as well. In principle $\Pi_{b}(\omega)$ can be different for $A^{e}_{\mu}$ and $a_{\mu}$, but without loss of generality we assume that they are the same. Since the internal gauge field and the electromagnetic gauge field transform differently under $\mathcal{P}$ and $\mathcal{T}$, and so do the $\phi_{g}$ and $\phi_{e}$ currents, there cannot be mixed terms between $a$ and $A^{e}$ that would lead to a mutual longitudinal response in the effective Lagrangian.
Then eventually the electrical conductivity of the system follows the composition rule $\displaystyle\sigma_{e}=\sigma_{b}+\frac{4}{\pi^{2}}\left(\frac{1}{\sigma_{f}+\sigma_{b}}\right).$ (17) In this equation, $\sigma_{e}$ is measured in units of $e^{2}/\hbar$; $\sigma_{b}$ and $\sigma_{f}$ are computed in the convention of charge $1$ and $\hbar=1$. This composition is very different from the Ioffe-Larkin rule Ioffe and Larkin (1989), and we expect this composition rule to be valid for temperatures far smaller than the gap of the bosonic parton, $i.e.$ when the thermally activated bosonic partons are very dilute. In Fig. 1 we plot $\rho_{e}(T)=1/\sigma_{e}(T)$, measured in units of $h/e^{2}$, using the composition rule Eq. 17. Another quantity of interest is the charge compressibility. The compressibility can be computed from the charge density-density correlation function, which can be obtained by reading off the $A_{\tau}(q)A_{\tau}(-q)$ term in $\mathcal{L}_{\text{res}}$ at vanishing frequency after integrating out all the matter and the internal gauge fields. As we discussed before, one contribution to the compressibility is proportional to the gauge coupling $g$, which is evident after integrating out $\vec{a}$ from Eq. 11. Eventually the compressibility should involve the bare charge density-density interaction, after being renormalized by integrating out the fermions $f_{\alpha}$. In the limit $\omega\rightarrow 0$ and for a large gap of the bosonic rotors, the total compressibility is given by $\displaystyle\kappa_{e}(\mathbf{q})=\frac{1}{(\kappa_{0})^{-1}+\frac{\pi^{2}}{4}\frac{\Pi_{f}(q)}{|\mathbf{q}|^{2}}},$ (18) where $\kappa_{0}$ is the “bare” compressibility of the system when the gauge field $a_{\mu}$ is not coupled to any matter fields, hence $\kappa_{0}$ is proportional to the gauge coupling $g$: $\kappa_{0}=g(4/\pi^{2})$.
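The composition rule Eq. 17 can be contrasted numerically with the standard Ioffe-Larkin rule (in which parton resistivities simply add). The sketch below implements both; the numerical inputs are illustrative and the functions are our own naming, not from the text.

```python
import numpy as np

def sigma_e_composition(sigma_f, sigma_b):
    """Composition rule of Eq. 17 (units of e^2/hbar)."""
    return sigma_b + (4.0 / np.pi**2) / (sigma_f + sigma_b)

def sigma_e_ioffe_larkin(sigma_f, sigma_b):
    """Standard Ioffe-Larkin rule, shown only for comparison:
    the parton resistivities add in series."""
    return 1.0 / (1.0 / sigma_f + 1.0 / sigma_b)

# When the bosons are gapped (sigma_b -> 0), Eq. 17 *inverts* the vortex
# conductivity: a good vortex metal (large sigma_f) is a bad electron metal.
print(sigma_e_composition(100.0, 0.0))  # ~ 4/(pi^2 * 100), a tiny conductivity
```

The key qualitative difference: under Ioffe-Larkin a good fermionic parton metal helps the electrical conduction, while under Eq. 17 (with gapped bosons) it suppresses it, which is the mechanism behind the bad metal.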
If we choose a simple quadratic dispersion or a circular Fermi surface for the vortices, then the result for the fermionic vortex polarization at zero frequency is well-known to be Nave et al. (2007); Lee and Nagaosa (1992) $\Pi_{f}(\mathbf{q})\sim|\mathbf{q}|^{2}/(12\pi m_{v})$ where $m_{v}$ is the effective mass of the fermionic vortices. Like the charge conductivity, this gives us a composition rule for the charge compressibility in terms of linear response functions for the different species of partons. For the more general case of a finite boson gap, this composition rule involves both the boson compressibility and boson polarization $\Pi_{b}\sim\chi_{b}|\mathbf{q}|^{2}$: $\displaystyle\kappa_{e}(\mathbf{q})=\kappa_{b}(\mathbf{q})+\frac{1}{(\kappa_{0})^{-1}+(\chi_{b}+\chi_{f})\pi^{2}/4}.$ (19) Here, $\chi_{b}$ is the magnetic susceptibility of the bosons. We expect this composition rule to hold at small finite temperature much below the boson gap. ### II.3 Physics at weak doping Figure 2: The optical conductivity $\mathrm{Re}[\sigma(\omega)]$ near $\omega=0$ at half-filling, and with small doping $\delta n_{e}$. We would also like to consider the effects of weak charge doping $\delta n_{e}$ away from one electron per site. As the charge density is now bound with the internal gauge flux through the mutual CS term in Eq. 11, weakly doping the system corresponds to adding a background $\mathrm{U}(1)_{g}$ gauge field with small average magnetic flux. Since the vortices (spinon $f_{\alpha}$) form a good metal, their conductivity may still be evaluated through the semiclassical Drude theory. 
Using the semiclassical equation of motion with a magnetic field, it is straightforward to compute the Drude conductivity of $f_{\alpha}$: $\displaystyle\sigma_{f}(\omega)=\bigg{(}\frac{\frac{1}{\tau_{v}}-i\omega}{\omega_{c}^{2}+(\frac{1}{\tau_{v}}-i\omega)^{2}}\bigg{)}\frac{1}{\tau_{v}}\sigma_{0}.$ (20) $\omega_{c}$ is the cyclotron frequency of the vortices, $\omega_{c}=\bar{b}/m_{v}$, and $\bar{b}\sim\delta n_{e}$ is the average flux seen by the fermionic vortices. Note that the conductivity $\sigma_{f}(\omega)$ of $f_{\alpha}$ only contains a longitudinal component, given by the expression above. The Hall conductivities from $f_{\uparrow}$ and $f_{\downarrow}$ cancel each other due to the time-reversal symmetry $\mathcal{T}$, which involves a particle-hole transformation of the spinons $f_{\alpha}$. From the relation between the charge conductivity and vortex conductivity in Eq. 17, if the boson gap is taken to infinity, we can extract the main interesting piece of the charge conductivity, which is given by $\displaystyle\sigma_{e}(\omega)\sim\frac{4}{\pi^{2}\sigma_{f}(\omega)}=\frac{4\tau_{v}}{\pi^{2}\sigma_{0}}\bigg{[}\frac{\omega_{c}^{2}}{\frac{1}{\tau_{v}}-i\omega}+\bigg{(}\frac{1}{\tau_{v}}-i\omega\bigg{)}\bigg{]}.$ (21) At zero doping, the optical conductivity does not have a Drude weight; but the presence of added charge introduces a new Drude peak to the optical conductivity, with Drude weight $\displaystyle D\sim\frac{2\omega_{c}^{2}\tau_{v}}{\pi^{2}\sigma_{0}}\sim(\delta n_{e})^{2},$ (22) so the Drude weight is proportional to the square of the doped charge density, contrary to the ordinary Drude theory where the Drude weight is linear in the charge density.
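The $(\delta n_{e})^{2}$ scaling of the Drude weight can be checked directly from Eqs. 20 and 21: the height of the $\omega=0$ Drude peak above the flat background scales as $\omega_{c}^{2}\sim(\delta n_{e})^{2}$. The sketch below does this check; parameter values and the helper `peak` are illustrative choices of ours.

```python
import numpy as np

def sigma_f(omega, sigma0, tau_v, omega_c):
    """Semiclassical Drude conductivity of the vortices, Eq. 20."""
    g = 1.0 / tau_v - 1j * omega
    return g / (omega_c**2 + g**2) * sigma0 / tau_v

def sigma_e(omega, sigma0, tau_v, omega_c):
    """Charge conductivity from Eq. 21 (infinite boson gap limit)."""
    return 4.0 / (np.pi**2 * sigma_f(omega, sigma0, tau_v, omega_c))

def peak(omega_c, sigma0=50.0, tau_v=1.0):
    """Height of Re[sigma_e] at omega=0 above the omega_c=0 background."""
    background = 4.0 / (np.pi**2 * sigma0)
    return sigma_e(0.0, sigma0, tau_v, omega_c).real - background

# Doubling omega_c (i.e., doubling delta n_e) quadruples the Drude peak height.
print(peak(0.2) / peak(0.1))  # ~ 4
```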
The DC resistivity now takes the form $\displaystyle\rho_{e}=1/\sigma_{e}(0)=\frac{\pi^{2}\sigma_{0}}{4}\frac{1}{1+\tau_{v}^{2}\omega_{c}^{2}}.$ (23) The Lorenz number, defined as $L=\kappa/(T\sigma_{e})$, becomes $\displaystyle L=\frac{\kappa}{T\sigma_{e}}\sim\frac{L_{0}\sigma_{f}}{\sigma_{e}}\sim\frac{\pi^{2}}{4}\frac{L_{0}\sigma_{0}^{2}}{(1+\tau_{v}^{2}\omega_{c}^{2})^{2}}.$ (24) Here $\kappa$ represents the thermal conductivity. Both the resistivity and the Lorenz number decrease with the doped charge density. Combining this with the emergence of the Drude weight under doping, it suggests that doping would eventually drive the system toward an ordinary metal. Here we have ignored the thermal conductivity arising from the gauge bosons, which also transport heat without charge and hence also contribute to the violation of the Wiedemann-Franz law. ### II.4 Nearby Phases (1) Spin liquid Mott insulator, and metal The quantum bad metal state constructed above can be driven to a Mott insulator, which is also a $\mathrm{U}(1)$ spin liquid with a Fermi surface of the spinon $f_{\alpha}$, by driving $(z_{1},z_{2})$ into a trivial bosonic Mott insulator with zero rotor number of $z_{1}$ and $z_{2}$. In the Mott insulator, $f_{\alpha}$ still has Fermi pockets, but the gauge flux of $a_{\mu}$ no longer carries any nontrivial quantum number. This is one of the most studied spin liquid states in the literature Lee and Lee (2005); Motrunich (2005); Ran et al. (2007), with potential applications to a variety of materials. The quantum phase transition between a bSPT state and a trivial Mott insulator of the boson is described by the $N=2$ QED Grover and Vishwanath (2013); Lu and Vishwanath (2012), and this theory is part of a web of duality involving also the easy-plane deconfined quantum critical point Xu and You (2015); Mross et al. (2016); Hsin and Seiberg (2016); Potter et al. (2017); Wang et al. (2017); Senthil et al. (2019).
The original theory of the bSPT-MI transition is now coupled to an extra dynamical gauge field $a_{\mu}$. When there is no disorder, the dynamics of $a_{\mu}$ is overdamped by the Fermi surface of $f_{\alpha}$, so we do not expect the gauge field $a_{\mu}$ to change the infrared fate of the transition. The presence of disorder may complicate the nature of the bSPT-MI transition. The quantum bad metal phase can also be driven into an ordinary metal phase by condensing either $z_{1}$ or $z_{2}$. The $\mathrm{U}(1)_{g}$ gauge field will be gapped out by the Higgs mechanism, and the spinon operator $f_{\alpha}$ becomes the electron operator due to the condensate of, $e.g.$, $z_{1}$, similar to the previous theory of interaction-driven MIT Lee and Lee (2005); Senthil (2008). (2) Charge-$4e$ superconductor Starting with the quantum bad metal, we can also drive the spinon $f_{\alpha}$ into a trivial insulator without special topological response, likely through a Lifshitz transition where the Fermi pockets shrink to zero. Then the action of $a_{\mu}$ is just the ordinary Maxwell term, which describes a photon phase. The monopole which creates and annihilates the gauge flux is prohibited here, as the gauge flux carries charge-$4e$ as we discussed before, and the electric charge is a conserved quantity. The photon phase of the gauge field $a_{\mu}$ is also dual to the condensate of its flux, $i.e.$ a condensate of charge-$4e$, or in other words a charge-$4e$ superconductor. (3) $Z_{2}$ spin-charge topological order We can also consider a situation where the fermionic vortices $f_{\alpha}$ form a “superconductor”, $i.e.$ the Cooper pair of $f_{\alpha}$ condenses. This condensation will gap out $a_{\mu}$ through a Higgs mechanism, and break $a_{\mu}$ down to a $Z_{2}$ gauge field, which supposedly forms a $Z_{2}$ topological order. Like all $Z_{2}$ topological orders, here there are three types of anyons with mutual semionic statistics.
One type of anyon is $f_{\alpha}$; another is the half-flux of $a_{\mu}$. Since the flux of $a_{\mu}$ carries charge-$4e$, the half-flux of $a_{\mu}$ carries charge-$2e$. ### II.5 Other Constructions One can also construct a similar quantum bad metal phase starting with a charge-$2e$ spin-singlet superconductor. Let us assume there are two flavors of bosons, $b_{1}$ and $b_{2}$, which carry charge $\pm 2e$ respectively. We take the following parton treatment for $b_{\alpha}$: $\displaystyle b_{1}=\psi_{1}f,\ \ \ b_{2}=\psi_{2}f,$ (25) where all the partons are complex fermions. There is also a gauge symmetry $\mathrm{U}(1)_{g}$ shared by the partons: $\psi_{1,2}$ carries electric charge $\pm 2$ and gauge charge $+1$ of a dynamical internal gauge field; $f$ carries gauge charge $-1$ of the internal gauge field. We now consider the following state: $f$ again forms a band structure which is a good metal; $\psi_{\alpha}$ forms a quantum “pseudospin” Hall insulator, in the sense that the flavor index of $\psi_{\alpha}$ is viewed as a pseudospin index. After integrating out $\psi_{\alpha}$, a mutual Chern-Simons term between the external EM field $A^{e}$ and the internal gauge field $a$ is generated, with the same form as Eq. 11. We assume that there are no conserved charges other than the charge $\mathrm{U}(1)$. The charge response of this construction can be evaluated following the steps of the previous section. The bSPT state is one of the states that the bosonic partons $(z_{1},z_{2})$ can form that make the charge vortex a fermion. There are other options which achieve a similar effect, if we allow topological degeneracy. For example, $z_{1}$ and $z_{2}$ can each form a bosonic fractional quantum Hall state with Hall conductivity $\pm 1/(2k)$, where $k$ is an integer. Although there is some subtlety in integrating out a topological order, supposing we can do so, the response mutual CS theory in Eq. 11 would have level $2/k$.
The rest of the discussion follows directly. We would like to compare our state with other “vortex liquids” discussed in previous literature, for example the well-known Dirac vortex liquid in the context of the half-filled Landau level Son (2015); Wang and Senthil (2016b). The electrical conductivity tensor of that system reads $\sigma_{ij}=\delta_{ij}\sigma^{\ast}+\frac{e^{2}}{2h}\epsilon_{ij}$, where $\sigma^{\ast}\sim 1/\sigma_{v}$, and $\sigma_{v}$ is the conductivity of the Dirac composite fermions (the vortices). Although the longitudinal conductivity of the system can be small when $\sigma_{v}$ is large, the longitudinal resistivity would still be small due to the nonzero Hall conductivity. Hence in the simplest experimental set-up, where the transport is measured along the $\hat{x}$ direction while the $\hat{y}$ direction of the sample has an open boundary, the measured longitudinal resistivity along the $x$ direction would be small. We can also put $c_{\uparrow}$ in the Dirac vortex liquid of the half-filled Landau level, and $c_{\downarrow}$ in the time-reversal conjugate state of $c_{\uparrow}$. In this case, in order to correctly extract the longitudinal electrical resistivity, we need to introduce both the external electromagnetic field and a “spin gauge field” $A^{s}$: $\displaystyle\mathcal{L}=\mathcal{L}_{D}(\psi_{1},a_{1})+\frac{\mathrm{i}}{4\pi}a_{1}\wedge dA_{1}+\frac{\mathrm{i}}{8\pi}A_{1}\wedge dA_{1}+\mathcal{L}_{D}(\psi_{2},a_{2})-\frac{\mathrm{i}}{4\pi}a_{2}\wedge dA_{2}-\frac{\mathrm{i}}{8\pi}A_{2}\wedge dA_{2}.$ (26) $\psi_{1,2}$ are the composite Dirac fermions, and their Lagrangian $\mathcal{L}_{D}$ should in general have a nonzero chemical potential. We have introduced two external gauge fields, $A_{1}=A^{e}+A^{s}$ and $A_{2}=A^{e}-A^{s}$.
The system does not have a net charge Hall response, but there is a spin-Hall effect, $i.e.$ there is a mutual CS term between the electromagnetic field and the spin gauge field. The existence of the spin Hall response will again lead to a small longitudinal electrical resistivity when $\psi_{1,2}$ are good metals. By contrast, in our construction presented in the previous section, there is only longitudinal transport, hence a large vortex conductivity would ensure a large longitudinal electrical resistivity. Another vortex liquid with fermionic vortices was discussed in Ref. Galitski et al., 2005, aiming to understand the observed metallic state with an anomalously large conductivity between the superconductor and insulator in amorphous thin films. There the vortex is turned into a fermion through manual flux attachment. Vortices can also play an important role in quantum magnets. Exotic quantum spin liquid states were also constructed through fermionic spin vortices in previous literature Alicea et al. (2005a, b, 2006); Hermele (2009); Wang and Senthil (2015b). These works generally use two approaches to generate fermionic vortices: one can either introduce fermionic partons for the vortex operator, or turn a vortex into a fermion through flux attachment when the vortex sees a background magnetic field (dual of fractional spin density). In our work, instead of granting existing vortices fermionic statistics, an interpretation of the fermionic partons as charge vortices naturally emerges due to the topological physics of the bosonic sector. ## III Summary We present a construction of a strongly interacting quantum bad metal phase, $i.e.$ a phase whose resistivity at zero and low temperature is finite but exceedingly larger than the MIR limit $h/e^{2}$, by driving the charge vortices into a good metallic phase with a vortex Fermi surface.
In this construction the charge vortex is naturally a fermion, obtained by driving the charge degree of freedom into a bosonic symmetry protected topological state. The quantum bad metal so constructed has the following features: (1) its resistivity can be exceedingly larger than the MIR limit; (2) a small Drude weight proportional to $(\delta n_{e})^{2}$ emerges under weak charge doping away from half-filling (one electron per unit cell); (3) like previously discussed vortex liquids, our construction should also have a strong violation of the Wiedemann-Franz law. We also demonstrated that this quantum bad metal phase is next to a charge-$4e$ superconductor, a $Z_{2}$ spin-charge topological order, the Mott insulator phase which is also a well-studied spin liquid, and a normal metal phase. The exoticness of the state we constructed is of a “quantum nature”, as strictly speaking the bSPT state that our construction strongly relies on is only sharply defined at zero temperature. At high temperature all the partons will be confined, and our state no longer enjoys a tractable description in terms of the dual weakly interacting vortices. Hence our state is different from the original example of a bad metal discussed in Ref. Emery and Kivelson, 1995, $i.e.$ the hole-doped cuprate materials, where the resistivity of the system in each $2d$ layer reaches the threshold of a bad metal at finite temperature. The authors thank Matthew Fisher and Steve Kivelson for very helpful discussions. We also thank Umang Mehta and Xiao-Chuan Wu for participating in the early stage of the work. This work is supported by the NSF Grant No. DMR-1920434, and the Simons Investigator program. C.-M. J. is supported by a faculty startup grant at Cornell University. ## References * Emery and Kivelson (1995) V. J. Emery and S. A. Kivelson, Phys. Rev. Lett. 74, 3253 (1995), URL https://link.aps.org/doi/10.1103/PhysRevLett.74.3253. * Evers and Mirlin (2008) F. Evers and A. D. Mirlin, Rev. Mod. Phys.
80, 1355 (2008), URL https://link.aps.org/doi/10.1103/RevModPhys.80.1355. * Lee and Lee (2005) S.-S. Lee and P. A. Lee, Phys. Rev. Lett. 95, 036403 (2005), URL https://link.aps.org/doi/10.1103/PhysRevLett.95.036403. * Senthil (2008) T. Senthil, Phys. Rev. B 78, 045109 (2008), URL https://link.aps.org/doi/10.1103/PhysRevB.78.045109. * Ioffe and Larkin (1989) L. B. Ioffe and A. I. Larkin, Phys. Rev. B 39, 8988 (1989), URL https://link.aps.org/doi/10.1103/PhysRevB.39.8988. * Witczak-Krempa et al. (2012) W. Witczak-Krempa, P. Ghaemi, T. Senthil, and Y. B. Kim, Phys. Rev. B 86, 245102 (2012), URL https://link.aps.org/doi/10.1103/PhysRevB.86.245102. * Li et al. (2021) T. Li, S. Jiang, L. Li, Y. Zhang, K. Kang, J. Zhu, K. Watanabe, T. Taniguchi, D. Chowdhury, L. Fu, et al., Nature 597, 350 (2021), ISSN 1476-4687, URL http://dx.doi.org/10.1038/s41586-021-03853-0. * Wu et al. (2018) F. Wu, T. Lovorn, E. Tutuc, and A. H. MacDonald, Phys. Rev. Lett. 121, 026402 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.121.026402. * Tang et al. (2019) Y. Tang, L. Li, T. Li, Y. Xu, S. Liu, K. Barmak, K. Watanabe, T. Taniguchi, A. H. MacDonald, J. Shan, et al., 1910.08673 (2019), eprint 1910.08673. * Xu et al. (2022) Y. Xu, X.-C. Wu, M. Ye, Z.-X. Luo, C.-M. Jian, and C. Xu, Phys. Rev. X 12, 021067 (2022), URL https://link.aps.org/doi/10.1103/PhysRevX.12.021067. * Kim et al. (2022) S. Kim, T. Senthil, and D. Chowdhury, _Continuous mott transition in moiré semiconductors: role of long-wavelength inhomogeneities_ (2022), URL https://arxiv.org/abs/2204.10865. * Peskin (1978) M. E. Peskin, Annals of Physics 113, 122 (1978), ISSN 0003-4916, URL http://www.sciencedirect.com/science/article/pii/000349167890252X. * Dasgupta and Halperin (1981) C. Dasgupta and B. I. Halperin, Phys. Rev. Lett. 47, 1556 (1981), URL https://link.aps.org/doi/10.1103/PhysRevLett.47.1556. * Fisher and Lee (1989) M. P. A. Fisher and D. H. Lee, Phys. Rev.
B 39, 2756 (1989), URL https://link.aps.org/doi/10.1103/PhysRevB.39.2756. * Son (2015) D. T. Son, Phys. Rev. X 5, 031027 (2015), URL https://link.aps.org/doi/10.1103/PhysRevX.5.031027. * Wang and Senthil (2015a) C. Wang and T. Senthil, Phys. Rev. X 5, 041031 (2015a), URL https://link.aps.org/doi/10.1103/PhysRevX.5.041031. * Metlitski and Vishwanath (2016) M. A. Metlitski and A. Vishwanath, Phys. Rev. B 93, 245151 (2016), URL https://link.aps.org/doi/10.1103/PhysRevB.93.245151. * Mross et al. (2016) D. F. Mross, J. Alicea, and O. I. Motrunich, Phys. Rev. Lett. 117, 016802 (2016), URL https://link.aps.org/doi/10.1103/PhysRevLett.117.016802. * Kachru et al. (2017) S. Kachru, M. Mulligan, G. Torroba, and H. Wang, Phys. Rev. Lett. 118, 011602 (2017), URL https://link.aps.org/doi/10.1103/PhysRevLett.118.011602. * Hsin and Seiberg (2016) P.-S. Hsin and N. Seiberg, Journal of High Energy Physics 2016, 95 (2016), ISSN 1029-8479, URL http://dx.doi.org/10.1007/JHEP09(2016)095. * Aharony (2016) O. Aharony, Journal of High Energy Physics 2016 (2016), URL https://doi.org/10.1007%2Fjhep02%282016%29093. * Seiberg et al. (2016) N. Seiberg, T. Senthil, C. Wang, and E. Witten, Annals of Physics 374, 395 (2016), ISSN 0003-4916, URL http://www.sciencedirect.com/science/article/pii/S0003491616301531. * Aharony et al. (2017) O. Aharony, F. Benini, P.-S. Hsin, and N. Seiberg, Journal of High Energy Physics 2017 (2017), URL https://doi.org/10.1007%2Fjhep02%282017%29072. * Metlitski et al. (2017) M. A. Metlitski, A. Vishwanath, and C. Xu, Phys. Rev. B 95, 205137 (2017), URL https://link.aps.org/doi/10.1103/PhysRevB.95.205137. * Chen et al. (2018) J.-Y. Chen, J. H. Son, C. Wang, and S. Raghu, Phys. Rev. Lett. 120, 016602 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.120.016602. * Son et al. (2019) J. H. Son, J.-Y. Chen, and S. Raghu, Journal of High Energy Physics 2019 (2019), URL https://doi.org/10.1007%2Fjhep06%282019%29038. * Jian et al. (2019) C.-M. Jian, Z. Bi, and Y.-Z.
You, Phys. Rev. B 100, 075109 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.100.075109. * Wang and Senthil (2016a) C. Wang and T. Senthil, Phys. Rev. B 93, 085110 (2016a), URL https://link.aps.org/doi/10.1103/PhysRevB.93.085110. * Wang and Senthil (2016b) C. Wang and T. Senthil, Phys. Rev. B 94, 245107 (2016b), URL https://link.aps.org/doi/10.1103/PhysRevB.94.245107. * Halperin et al. (1993) B. I. Halperin, P. A. Lee, and N. Read, Phys. Rev. B 47, 7312 (1993), URL https://link.aps.org/doi/10.1103/PhysRevB.47.7312. * Read (1998) N. Read, Phys. Rev. B 58, 16262 (1998), URL https://link.aps.org/doi/10.1103/PhysRevB.58.16262. * Galitski et al. (2005) V. M. Galitski, G. Refael, M. P. A. Fisher, and T. Senthil, Phys. Rev. Lett. 95, 077002 (2005), URL https://link.aps.org/doi/10.1103/PhysRevLett.95.077002. * Wu and Phillips (2006) J. Wu and P. Phillips, Phys. Rev. B 73, 214507 (2006), URL https://link.aps.org/doi/10.1103/PhysRevB.73.214507. * Kapitulnik et al. (2019) A. Kapitulnik, S. A. Kivelson, and B. Spivak, Rev. Mod. Phys. 91, 011002 (2019), URL https://link.aps.org/doi/10.1103/RevModPhys.91.011002. * Fisher et al. (1990) M. P. A. Fisher, G. Grinstein, and S. M. Girvin, Phys. Rev. Lett. 64, 587 (1990), URL https://link.aps.org/doi/10.1103/PhysRevLett.64.587. * Gazit (2015) S. Gazit, _Critical Conductivity and Charge Vortex Duality Near Quantum Criticality_ (Springer International Publishing, Cham, 2015), pp. 35–52, ISBN 978-3-319-19354-0, URL https://doi.org/10.1007/978-3-319-19354-0_3. * Hartnoll (2014) S. A. Hartnoll, Nature Physics 11, 54 (2014), URL https://doi.org/10.1038%2Fnphys3174. * Hartnoll and Mackenzie (2021) S. A. Hartnoll and A. P. Mackenzie, _Planckian dissipation in metals_ (2021), URL https://arxiv.org/abs/2107.07802. * Lucas and Fong (2018) A. Lucas and K. C. Fong, Journal of Physics: Condensed Matter 30, 053001 (2018), URL https://doi.org/10.1088/1361-648x/aaa274. * Kovtun (2012) P. 
Kovtun, Journal of Physics A: Mathematical and Theoretical 45, 473001 (2012), URL https://doi.org/10.1088%2F1751-8113%2F45%2F47%2F473001. * Hartnoll et al. (2016) S. A. Hartnoll, A. Lucas, and S. Sachdev, _Holographic quantum matter_ (2016), URL https://arxiv.org/abs/1612.07324. * Delacrétaz (2020) L. Delacrétaz, SciPost Physics 9 (2020), URL https://doi.org/10.21468%2Fscipostphys.9.3.034. * Nayak and Wilczek (1994a) C. Nayak and F. Wilczek, Nuclear Physics B 417, 359 (1994a), ISSN 0550-3213, URL http://www.sciencedirect.com/science/article/pii/0550321394904774. * Nayak and Wilczek (1994b) C. Nayak and F. Wilczek, Nuclear Physics B 430, 534 (1994b), ISSN 0550-3213, URL http://www.sciencedirect.com/science/article/pii/0550321394901589. * Lee (2009) S.-S. Lee, Phys. Rev. B 80, 165102 (2009), URL https://link.aps.org/doi/10.1103/PhysRevB.80.165102. * Metlitski and Sachdev (2010) M. A. Metlitski and S. Sachdev, Phys. Rev. B 82, 075127 (2010), URL https://link.aps.org/doi/10.1103/PhysRevB.82.075127. * Mross et al. (2010) D. F. Mross, J. McGreevy, H. Liu, and T. Senthil, Phys. Rev. B 82, 045121 (2010), URL https://link.aps.org/doi/10.1103/PhysRevB.82.045121. * Wen and Lee (1996) X.-G. Wen and P. A. Lee, Phys. Rev. Lett. 76, 503 (1996), URL https://link.aps.org/doi/10.1103/PhysRevLett.76.503. * Lee et al. (1998) P. A. Lee, N. Nagaosa, T.-K. Ng, and X.-G. Wen, Phys. Rev. B 57, 6003 (1998), URL https://link.aps.org/doi/10.1103/PhysRevB.57.6003. * Affleck et al. (1988) I. Affleck, Z. Zou, T. Hsu, and P. W. Anderson, Phys. Rev. B 38, 745 (1988), URL https://link.aps.org/doi/10.1103/PhysRevB.38.745. * Dagotto et al. (1988) E. Dagotto, E. Fradkin, and A. Moreo, Phys. Rev. B 38, 2926 (1988), URL https://link.aps.org/doi/10.1103/PhysRevB.38.2926. * Ran et al. (2008) Y. Ran, A. Vishwanath, and D.-H.
Lee, _A direct transition between a neel ordered mott insulator and a $d_{x^{2}-y^{2}}$ superconductor on the square lattice_ (2008), URL https://arxiv.org/abs/0806.2321. * Hermele (2007) M. Hermele, Phys. Rev. B 76, 035125 (2007), URL http://link.aps.org/doi/10.1103/PhysRevB.76.035125. * Wen (2002) X.-G. Wen, Phys. Rev. B 65, 165113 (2002), URL https://link.aps.org/doi/10.1103/PhysRevB.65.165113. * Chen et al. (2013) X. Chen, Z.-C. Gu, Z.-X. Liu, and X.-G. Wen, Phys. Rev. B 87, 155114 (2013). * Chen et al. (2012) X. Chen, Z.-C. Gu, Z.-X. Liu, and X.-G. Wen, Science 338, 1604 (2012). * Levin and Senthil (2004) M. Levin and T. Senthil, Phys. Rev. B 70, 220403 (2004), URL http://link.aps.org/doi/10.1103/PhysRevB.70.220403. * Lu and Vishwanath (2012) Y.-M. Lu and A. Vishwanath, Phys. Rev. B 86, 125119 (2012), URL https://link.aps.org/doi/10.1103/PhysRevB.86.125119. * Lee and Nagaosa (1992) P. A. Lee and N. Nagaosa, Phys. Rev. B 46, 5621 (1992), URL https://link.aps.org/doi/10.1103/PhysRevB.46.5621. * Shi et al. (2022) Z. D. Shi, H. Goldman, D. V. Else, and T. Senthil, _Gifts from anomalies: Exact results for landau phase transitions in metals_ (2022), URL https://arxiv.org/abs/2204.07585. * Kumar et al. (2022) P. Kumar, P. A. Nosov, and S. Raghu, Phys. Rev. Research 4, 033146 (2022), URL https://link.aps.org/doi/10.1103/PhysRevResearch.4.033146. * Nave et al. (2007) C. P. Nave, S.-S. Lee, and P. A. Lee, Phys. Rev. B 76, 165104 (2007), URL https://link.aps.org/doi/10.1103/PhysRevB.76.165104. * Motrunich (2005) O. I. Motrunich, Phys. Rev. B 72, 045105 (2005), URL https://link.aps.org/doi/10.1103/PhysRevB.72.045105. * Ran et al. (2007) Y. Ran, M. Hermele, P. A. Lee, and X.-G. Wen, Phys. Rev. Lett. 98, 117205 (2007), URL https://link.aps.org/doi/10.1103/PhysRevLett.98.117205. * Grover and Vishwanath (2013) T. Grover and A. Vishwanath, Phys. Rev. B 87, 045129 (2013), URL https://link.aps.org/doi/10.1103/PhysRevB.87.045129. * Xu and You (2015) C. Xu and Y.-Z. You, Phys. 
Rev. B 92, 220416 (2015), URL https://link.aps.org/doi/10.1103/PhysRevB.92.220416. * Potter et al. (2017) A. C. Potter, C. Wang, M. A. Metlitski, and A. Vishwanath, Phys. Rev. B 96, 235114 (2017), URL https://link.aps.org/doi/10.1103/PhysRevB.96.235114. * Wang et al. (2017) C. Wang, A. Nahum, M. A. Metlitski, C. Xu, and T. Senthil, Phys. Rev. X 7, 031051 (2017), URL https://link.aps.org/doi/10.1103/PhysRevX.7.031051. * Senthil et al. (2019) T. Senthil, D. T. Son, C. Wang, and C. Xu, Physics Reports 827, 1 (2019), ISSN 0370-1573, duality between (2+1)d quantum critical points, URL http://www.sciencedirect.com/science/article/pii/S0370157319302637. * Alicea et al. (2005a) J. Alicea, O. I. Motrunich, M. Hermele, and M. P. A. Fisher, Phys. Rev. B 72, 064407 (2005a), URL https://link.aps.org/doi/10.1103/PhysRevB.72.064407. * Alicea et al. (2005b) J. Alicea, O. I. Motrunich, and M. P. A. Fisher, Phys. Rev. Lett. 95, 247203 (2005b), URL https://link.aps.org/doi/10.1103/PhysRevLett.95.247203. * Alicea et al. (2006) J. Alicea, O. I. Motrunich, and M. P. A. Fisher, Phys. Rev. B 73, 174430 (2006), URL https://link.aps.org/doi/10.1103/PhysRevB.73.174430. * Hermele (2009) M. Hermele, Phys. Rev. B 79, 184429 (2009), URL https://link.aps.org/doi/10.1103/PhysRevB.79.184429. * Wang and Senthil (2015b) C. Wang and T. Senthil, Phys. Rev. B 91, 195109 (2015b), URL https://link.aps.org/doi/10.1103/PhysRevB.91.195109. * Xu and Sachdev (2010) C. Xu and S. Sachdev, Phys. Rev. Lett. 105, 057201 (2010), URL https://link.aps.org/doi/10.1103/PhysRevLett.105.057201.
# Laplace’s first law of errors applied to diffusive motion

Omer Hamdi, Stanislav Burov, Eli Barkai

Physics Department, Bar-Ilan University, Ramat-Gan, 52900, Israel

Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat-Gan, 52900, Israel

###### Abstract In biological, glassy, and active systems, various tracers exhibit Laplace-like, i.e., exponential, spreading of the diffusing packet of particles. The limitations of the central limit theorem in fully capturing the behaviors of such diffusive processes, especially in the tails, have been studied using the continuous time random walk model. For cases when the jump length distribution is super-exponential, e.g., a Gaussian, we use large deviations theory and relate it to the appearance of exponential tails. When the jump length distribution is sub-exponential, the packet of spreading particles is described by the big jump principle. We demonstrate the applicability of our approach for finite time, indicating that rare events and the asymptotics of the large deviations rate function can be sampled for large length scales within a reasonably short measurement time. ## 1 Introduction (Invited paper for the topical issue in the European Physical Journal B: New Trends in Statistical Physics of Complex Systems: Theoretical and Experimental Approaches.) Laplace’s two laws of error are milestones in statistics. The first was published in $1774$ [1] and states that the frequency of an error could be expressed as an exponential of the magnitude of the error, in absolute value. The second law of errors, from $1778$ [2], states that the frequency of the error is an exponential of a quadratic function of the error. In the context of diffusing particles in one dimension, and for a packet starting at the origin, the probability density of the particles is $P(x,t)$.
The first law states $P(x,t)\propto\exp(-\mbox{const}\ |x|)$ and the second is the more familiar law, $P(x,t)\propto\exp(-\mbox{const}\ x^{2})$. In $1923$ Wilson [3] discussed some of the history of the problem. He noted that the second law is typically called the normal or Gauss law; however, despite Gauss’s well-known precocity, he probably did not make this discovery before he was two years old (Gauss utilized the least squares method of error estimation for the discovery of the lost dwarf planet, Ceres [4, 5]). Indeed, it was Laplace who promoted the central limit theorem, providing a firm mathematical basis for the second law. The normal distribution has since been used in all fields of science, while the first law, as far as we know, was not in the spotlight for an extended period. However, in more recent years the first law has experienced a revival: using single-molecule tracking data and computer-generated trajectories of a variety of tracers diffusing in disordered media, it has been used to fit a large body of experimental data [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. These observations are related to the diffusing diffusivity models [25, 26, 21, 27] and a phenomenon known as Brownian yet non-Gaussian diffusion [14, 28, 15, 29, 30, 31], and Fickian yet non-Gaussian diffusion [16, 17, 18, 19, 20]. By now, the exponential decay of $P(x,t)\propto\exp(-\mbox{const}\ |x|)$ is well documented. In some cases, only the large $x$ limit is described by this law, while in others, the Laplace law, as a fitting procedure, holds for all $x$ [14]. A large body of phenomenological models was used to describe these behaviors, for example, by assuming that the diffusion constant $D(t)$ is a stochastic process. Chaudhuri, Berthier, and Kob [22] analyzed four systems, focusing on particle displacements near glass and jamming transitions, highlighting behaviors like sticking (caging) and rapid jumps between basins.
This type of dynamics is described by the continuous time random walk (CTRW) [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45], which will also be analyzed in this paper. Using specific waiting time distributions, they showed how the basic CTRW model can be used to predict exponential tails, in accordance with Laplace’s first law. This theory was later advanced by Wang, Burov, and Barkai [46, 47], showing that it holds in very general settings. They have shown that for any waiting time distribution that is analytic for short waiting times, and for any jump length distribution that decays faster than an exponential, Laplace’s first law holds at the tails of $P(x,t)$. The two above-mentioned conditions are expected to hold in many systems. The analysis was based on large deviations [48, 49, 50, 51, 52, 53, 54, 55] arguments and a saddle point approximation [56]; to put things in a historical context, the technique used is an extension of the Laplace method for solving integrals [1]. Schramma et al. [6] discovered that chloroplasts, which are components of plant cells, adapt to dim light by moving in a manner closely resembling the caged dynamics observed in supercooled liquids or colloidal suspensions near the glass transition. This movement demonstrates exponential tails, fitting both the CTRW and the diffusing diffusivity models. The connection between Laplace diffusion and CTRW in polymer nanocomposites was further explored by Hu et al. [57]. They demonstrated the ability to manipulate the emergence of exponential tails, in contrast to Gaussian diffusion, by modifying the strength of the disorder. In this manuscript, we first provide a more detailed analysis of CTRW dynamics using three tools. We will then conclude the paper with a broader perspective on the Laplace-like behavior, i.e., exponential decay of spreading particles.
We start with a study of the CTRW model for the case when the jump length distribution is either sub- or super-exponential. In the super-exponential case large deviations theory holds, while for the sub-exponential case the big jump principle is valid (see below). This transition is related to the fact that for a sub-exponential jump length distribution, the cumulant generating function diverges, and thus the standard Laplace-Cramér-Daniels saddle point approximation of large deviations theory does not hold. We show how the big-jump principle is related to an extension of the Laplace method of solving integrals. In Laplace’s method, close to the saddle point, an analytic, i.e., quadratic, function is used, and hence, eventually, a Gaussian integral is computed, while the original integral is non-Gaussian. For the sub-exponential distribution of jump lengths, we find a similar extremum, but with non-analytical features. Namely, the quadratic expansion close to the extremum is invalid. Finally, we present the Edgeworth expansion to approximate the CTRW probability density [58]. This approximation provides corrections to the central limit theorem (CLT). It deals with a long time limit, and not very large $x$, while the large deviations theory and the big-jump principle cover the behavior of the tails of the density. To study these effects, we constructed a numerical tool to sample finite time and finite $x$ propagators $P(x,t)$ using CTRW.

## 2 Model

In the CTRW model, waiting times between jump events are independent identically distributed (IID) random variables with a probability density function (PDF) $\psi(\tau)$. The process starts at time $t=0$, and then the particle waits at its initial position $x=0$ for a random time $\tau$ drawn from $\psi(\tau)$. At time $t=\tau$, the random walker jumps to a new position, the duration of the jump being negligible. The jump lengths are also IID random variables, with $f(\chi)$ as the PDF.
The process is then renewed, namely, a second waiting time is drawn, followed by a spatial jump, etc. The goal is to find the PDF $P(x,t)$ of finding the particle at $x$ at time $t$. The CTRW process is called a semi-Markovian process, and the statistics for the number of jumps can be analyzed utilizing renewal theory [59, 60]. We focus on exponentially distributed waiting times, $\psi(\tau)=\exp(-\tau)$, with the mean time between jumps set to unity. It follows that the number of jumps in the time interval $(0,t)$ is described by Poisson statistics. The jump length PDF $f\left(\chi\right)$ is: $f\left(\chi\right)=\widetilde{N}\exp\left(-\alpha^{\beta}|\chi|^{\beta}\right).$ (1) The mean jump length is zero, and the symmetry of $f\left(\chi\right)$ sets the symmetry of $P(x,t)$, i.e., $P(x,t)=P(-x,t)$. The variance of the jump length is set to be $1$. Here we have $\alpha=\sqrt{\Gamma(3/\beta)/\Gamma(1/\beta)}$, and the normalization $\widetilde{N}=\beta\sqrt{\Gamma(3/\beta)}/2\Gamma^{3/2}(1/\beta)$, with $\Gamma(\dots)$ being the Gamma function. The exponent $\beta>0$ is an important parameter in this study. For $\beta<1$ ($\beta>1$) we have sub-exponential (super-exponential) decay of the jump length distribution. The case $\beta=2$ corresponds to Gaussian statistics for the jump length PDF. The probability of jumping $n$ times during time $t$ is $t^{n}\exp(-t)/n!$, hence $P(x,t)=\sum_{n=0}^{\infty}\frac{{\rm e}^{-t}t^{n}}{n!}\phi(x|n),$ (2) where $\phi(x|n)$ is the PDF of finding the particle at $x$, conditioned on performing exactly $n$ jumps. In the previous equation, the summation is performed over all possible numbers of jumps that can occur during the process.
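The model is straightforward to simulate directly. The sketch below (our illustration, not the authors’ code) draws the number of jumps from the Poisson law and the jump lengths from Eq. (1), using the fact that for this PDF the variable $(\alpha|\chi|)^{\beta}$ is Gamma-distributed with shape $1/\beta$:

```python
import numpy as np
from math import gamma

def sample_ctrw(beta, t, n_samples, seed=0):
    """Monte Carlo sample of CTRW displacements x(t).

    Exponential waiting times with unit rate imply a Poisson(t) number of
    jumps; jump lengths follow Eq. (1), normalized to unit variance.
    """
    rng = np.random.default_rng(seed)
    # alpha chosen so that the jump-length variance equals 1, as in the text
    alpha = (gamma(3.0 / beta) / gamma(1.0 / beta)) ** 0.5
    n_jumps = rng.poisson(t, size=n_samples)
    x = np.zeros(n_samples)
    for i, n in enumerate(n_jumps):
        if n == 0:
            continue  # no jump occurred: the particle stays at the origin
        # |chi| via the Gamma trick: (alpha*|chi|)^beta ~ Gamma(1/beta, 1)
        mags = rng.gamma(1.0 / beta, size=n) ** (1.0 / beta) / alpha
        signs = rng.choice([-1.0, 1.0], size=n)
        x[i] = np.sum(signs * mags)
    return x
```

Since the jump variance is 1 and $\langle n_{t}\rangle=t$, the sampled displacement variance should equal $t$ for any $\beta$, which gives a quick consistency check; the tails themselves, however, are poorly sampled this way, which motivates the convolution method of Sec. 3.1 below.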
Using the fact that jump lengths are IID random variables, the characteristic function of $\phi(x|n)$ $\int_{-\infty}^{\infty}{\rm e}^{ikx}\phi(x|n){\rm d}x=\langle{\rm e}^{ik(\chi_{1}+...\chi_{n})}\rangle=\widetilde{f}^{n}(k),$ (3) is expressed in terms of the Fourier transform [61] of $f(\chi)$, namely $\widetilde{f}(k)=\int_{-\infty}^{\infty}\exp(ik\chi)f(\chi){\rm d}\chi$. The Fourier transform of $P(x,t)$, i.e., $\widetilde{P}(k,t)$, is obtained from Eqs. (2) and (3) $\widetilde{P}(k,t)=\exp\left[-t\left(1-\widetilde{f}(k)\right)\right].$ (4) The goal is to find the inverse Fourier transform of the expression above. The CLT is valid in the limit $t\to\infty$ and $x\to\infty$ while the ratio $x/\sqrt{t}$ is finite. We utilize the expansion of $\widetilde{f}(k)$ around $k\to 0$. Since the jump length distribution is an even function, $\widetilde{f}(k)\sim 1-\sigma^{2}k^{2}/2$, and according to our notation $\sigma^{2}=1$. Therefore Eq. (4) yields $P(x,t)\sim\frac{\exp\left(-\frac{x^{2}}{2t}\right)}{\sqrt{2\pi t}}.$ (5) Namely, the central limit theorem holds.

## 3 Large Deviations $\beta>1$

Capturing the essence of the limiting scaling law in Eq. (5) is but one aspect of large deviations theory. This theory also delves into analyzing a different yet significant limit. The inverse Fourier transform of $\widetilde{P}(k,t)$ in Eq. (4), after the change of variable $ik=u$, reads $\displaystyle P(x,t)$ $\displaystyle=$ $\displaystyle\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}\exp{\left[-xK(u)\right]}{\rm d}u,$ (6) where $\displaystyle K(u)$ $\displaystyle=$ $\displaystyle u+\frac{1}{q}[1-\hat{f}(u)],$ (7) $q\equiv x/t$, and $\hat{f}(u)=\int_{-\infty}^{\infty}\exp(u\chi)f(\chi){\rm d}\chi$ is the moment generating function. Note that $\hat{f}(u)$ diverges for $\beta<1$, leading to the divergence of the integral in Eq. (6). Therefore, in this section, we focus on super-exponential PDFs, i.e., $\beta>1$.
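Eq. (4) also suggests a direct numerical route to $P(x,t)$: compute $\widetilde{f}(k)$ by quadrature and invert the Fourier transform. A minimal sketch follows (our illustration; the grid sizes are arbitrary choices, and the $n=0$ term is split off as an explicit $e^{-t}\delta(x)$ contribution so that the remaining integrand decays):

```python
import numpy as np
from math import gamma, sqrt, pi

def smooth_propagator(x_eval, t, beta, k_max=8.0, n_k=801):
    """Smooth part of P(x,t), by inverting Eq. (4) numerically.

    P~(k,t) tends to exp(-t) for large k; this constant is the e^{-t} delta(x)
    contribution of paths with no jump up to time t. We subtract it and invert
    the remainder with a cosine integral (f, and hence P, is even).
    """
    chi = np.arange(-40.0, 40.0, 0.01)
    alpha = sqrt(gamma(3.0 / beta) / gamma(1.0 / beta))
    norm = beta * sqrt(gamma(3.0 / beta)) / (2.0 * gamma(1.0 / beta) ** 1.5)
    f = norm * np.exp(-(alpha * np.abs(chi)) ** beta)
    k = np.linspace(0.0, k_max, n_k)
    dk = k[1] - k[0]
    # characteristic function f~(k) by grid quadrature (cosine transform)
    f_tilde = np.cos(np.outer(k, chi)) @ f * 0.01
    p_tilde = np.exp(-t * (1.0 - f_tilde)) - np.exp(-t)
    # P_smooth(x) = (1/pi) * int_0^infty cos(kx) [P~(k,t) - e^{-t}] dk
    def invert(xe):
        y = np.cos(k * xe) * p_tilde
        return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dk / pi
    return np.array([invert(xe) for xe in np.atleast_1d(x_eval)])
```

For $\beta=2$ the result can be checked against the Poisson sum of Gaussians, $\sum_{n\geq 1}e^{-t}t^{n}/(n!\sqrt{2\pi n})$ at $x=0$, and the two agree to high accuracy.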
Large deviations theory is based on the saddle point approximation and considers the case of the large $x$ limit of $P(x,t)$ in Eq. (6). Strictly speaking, here the scaling is $x\to\infty$ and $t\to\infty$ while the ratio $q=x/t$ is kept finite. Nevertheless, we show that the approximate solution for $P(x,t)$ works reasonably well for large $x$ while $t$ is kept finite. The problem of finding $P(x,t)$ is solved by using the following steps: First, find the $u$ that satisfies $K^{\prime}(u)=0$ and term this $u$ as $u_{0}$. Then, expand $K(u)$ in Eq. (6), in the vicinity of $u_{0}$, up to a quadratic term. The obtained Gaussian integral yields $P(x,t)\sim\frac{\exp\left[-xK(u_{0})\right]}{\sqrt{2\pi|xK^{\prime\prime}(u_{0})|}}.$ (8) The approximation for large $x$ can be obtained through numerical methods: we numerically find the $u$ that satisfies $K^{\prime}(u)=0$ and insert it into Eq. (8). We call this solution method the numerical large deviations method (Numerical-LDT). Below, we discuss how to obtain the exact form of $P(x,t)$ and compare it to the solution obtained via Numerical-LDT. Using $K^{\prime}(u)=0$, and the definition of $\hat{f}(u)$, the equation for $u_{0}$ reads $1-\frac{1}{q}\int_{-\infty}^{\infty}\chi f(\chi)\exp(u_{0}\chi){\rm d}\chi=0.$ (9) We treat this equation in two limits, small and large $q=x/t$. For small $q$, the value of the integral in Eq. (9) must be small. Since the integral in Eq. (9) is a growing function of $u$, the limit of small $q$ corresponds to small values of $u$. Expanding $\exp(u\chi)$ and using the fact that the variance of jump lengths equals $1$, we find $u_{0}\sim q\ll 1$, and $K(u_{0})\sim u_{0}+\frac{1}{q}[1-\hat{f}(u_{0})]\sim q/2$. It then follows that for $q\ll 1$, $P(x,t)\propto\exp[-x^{2}/(2t)]$. This is the standard prediction of the CLT, as delineated in Eq. (5), hence the more interesting case is $q\gg 1$. We use Eq. (9) in the large $q$ limit, considering large $u$, to find $u_{0}$.
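The Numerical-LDT recipe can be sketched in a few lines (our illustration; grid quadrature and bisection stand in for whatever integration and root-finding routines the authors used):

```python
import numpy as np
from math import gamma, sqrt, pi

def numerical_ldt(x, t, beta, u_hi=5.0):
    """Numerical-LDT estimate of P(x,t) via Eq. (8).

    Finds u0 with K'(u0) = 0 by bisection, where K(u) = u + (1 - fhat(u))/q
    and fhat is the moment generating function of the jump PDF, Eq. (1).
    Valid only for beta > 1 (otherwise fhat diverges).
    """
    q = x / t
    alpha = sqrt(gamma(3.0 / beta) / gamma(1.0 / beta))
    norm = beta * sqrt(gamma(3.0 / beta)) / (2.0 * gamma(1.0 / beta) ** 1.5)
    chi = np.linspace(-60.0, 60.0, 120001)
    dchi = chi[1] - chi[0]
    f = norm * np.exp(-(alpha * np.abs(chi)) ** beta)

    def fhat(u, m=0):
        # m-th derivative of the MGF, computed by grid quadrature
        return np.sum(chi ** m * np.exp(u * chi) * f) * dchi

    lo, hi = 1e-9, u_hi        # K'(lo) > 0 and K'(u_hi) < 0 for moderate q
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 1.0 - fhat(mid, 1) / q > 0.0:
            lo = mid
        else:
            hi = mid
    u0 = 0.5 * (lo + hi)
    K0 = u0 + (1.0 - fhat(u0)) / q
    Kpp = -fhat(u0, 2) / q
    return np.exp(-x * K0) / sqrt(2.0 * pi * abs(x * Kpp)), u0, K0
```

For $\beta=2$, $x=0.3$, $t=3$ (i.e., $q=0.1$), the routine returns $u_{0}\approx q$ and $K(u_{0})\approx q/2$, reproducing the small-$q$ CLT limit discussed above.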
For that aim, we need to find $\hat{f}(u)$ when $u$ is large. With a change of variable $\chi=u^{\frac{1}{\beta-1}}\xi$, we obtain $\hat{f}(u)=\widetilde{N}u^{\frac{1}{\beta-1}}\int_{-\infty}^{\infty}\exp\left[u^{\frac{\beta}{\beta-1}}\left(\xi-\alpha^{\beta}|\xi|^{\beta}\right)\right]{\rm d}\xi.$ (10) Since $u$ is large this integral is solved using Laplace’s method $\hat{f}(u)\sim\widetilde{N}C_{2}u^{\frac{1}{2}\frac{2-\beta}{\beta-1}}\exp\left(C_{1}u^{\frac{\beta}{\beta-1}}\right),$ (11) where $C_{1}=(\beta-1)(\beta\alpha)^{\frac{\beta}{1-\beta}}$, and $C_{2}=\sqrt{2\pi/(\beta-1)(\beta\alpha^{\beta})^{\frac{1}{\beta-1}}}$. This expression is exact for the Gaussian case $\beta=2$. Eq. (9) for $u_{0}$ is written as $0=1-\frac{1}{q}\hat{f^{\prime}}(u)$, and using Eq. (11) we find that $u_{0}$ satisfies $\displaystyle 0=1-\frac{1}{q}\frac{\beta\widetilde{N}C_{1}C_{2}}{\beta-1}u_{0}^{\frac{4-\beta}{2\beta-2}}\exp\left(C_{1}u_{0}^{\frac{\beta}{\beta-1}}\right).$ (12) Note that in the case of $\beta=4$, this approximation breaks down. For that reason, in this work, we focused on the range $1<\beta<4$ and the transition to $\beta<1$, where the big jump principle holds. To solve Eq. (12) we utilize the Lambert function $W(x)$ [62] which satisfies $W(x)\exp\left[W(x)\right]=x$. The branch of the Lambert function that is relevant to our study is the well-documented principal branch $W_{0}(x)$. Solving Eq. (12) we obtain $u_{0}\sim C_{3}W_{0}\left[C_{4}q^{\frac{2\beta}{4-\beta}}\right]^{\frac{\beta-1}{\beta}}.$ (13) For large values of the argument, the Lambert function is expressed in terms of logarithmic functions, i.e., $W_{0}(z)\underset{z\to\infty}{\sim}\ln(z)-\ln(\ln(z))$, hence $u_{0}$ is growing with $x$, however very slowly. This holds true only when $\beta<4$, while the opposite case demands further study. 
The constants are: $\displaystyle C_{3}=\beta\alpha\left(\frac{2\beta(\beta-1)}{4-\beta}\right)^{\frac{1-\beta}{\beta}}\quad,\quad$ $\displaystyle C_{4}=\frac{2(\beta(\beta-1))^{\frac{4}{4-\beta}}}{4-\beta}\left(\frac{\alpha^{2}}{\sqrt{2\pi}\widetilde{N}(\beta)}\right)^{\frac{2\beta}{4-\beta}}.$ (14) Using $\exp[aW(x)]=x^{a}/[W(x)]^{a}$, we obtain $K(u_{0})=u_{0}+\frac{1}{q}[1-\hat{f}(u_{0})]$. Therefore, according to Eq. (8), we have that $P(x,t)\sim\frac{\exp\left\{-t\left(\frac{|x|}{t}Z\left(\frac{|x|}{t}\right)+1\right)\right\}}{\sqrt{2\pi K^{\prime\prime}(u_{0})}},$ (15) where $Z(q)=\frac{C_{3}W_{0}\left[C_{4}q^{\frac{2\beta}{4-\beta}}\right]-\alpha\left(\frac{2\beta(\beta-1)}{4-\beta}\right)^{\frac{1}{\beta}}}{W_{0}\left[C_{4}q^{\frac{2\beta}{4-\beta}}\right]^{\frac{1}{\beta}}}.$ (16) Using $W_{0}(x)\sim\ln(x)$ we find $P(x,t)\sim\frac{\exp\left\{-t\left[\kappa\frac{|x|}{t}\log\left(\frac{|x|}{t}\right)^{1-1/\beta}+1\right]\right\}}{\sqrt{2\pi K^{\prime\prime}(u_{0})}},$ (17) where $\kappa=\beta\alpha/(\beta-1)^{1-1/\beta}$. Thus, for large $x$, the packet is decaying according to Laplace’s first law, with logarithmic corrections which are typically hard to detect in experiments. The result in Eq. (17) was found in a more general setting in [46]. We also note a mistake in Eq. (9) of [46]; the latter equation corresponds to our Eq. (16). Finally, we do not advocate the practical use of Eq. (17) beyond an asymptotic result. The Lambert functions in Eq. (16) are needed unless $x/t$ is astronomically large. To see this, consider the asymptotic expansion $W_{0}(x)\sim\ln(x)-\ln(\ln(x))$: the second term is one percent of the first only when $x\sim 10^{280}$. In other words, in applications that do not sample extremely rare events, the leading term alone is insufficient. We now show how the theory is applicable for finite times, using numerically exact $P(x,t)$.

Figure 1: Comparative analysis of the CTRW exact solution (circles), our approximation derived from Eq.
(15) (blue line), numerical LDT derived using Eq. (8) (dashed black), and CLT predictions (orange line), log-plotted for two time scales ($t=0.3,3$) and various $\beta$ values. Notably, the nearly exponential tails are a dominant feature in both time scales, with an observable convergence to the CLT regime over time. The numerical LDT shows remarkable fitting in the tails and also exhibits a strong performance in the central part as time progresses. The approximation we propose in Eq. (15) demonstrates a close match with the numerical data; however, deviations become more pronounced for large time values as $\beta\rightarrow 1$. This trend, indicative of slower convergence at smaller $\beta$ values, is further analyzed and discussed in the following section.

### 3.1 CTRW Propagator

Sampling the “rare events” of the CTRW, particularly the tails of the PDF, poses significant challenges in trajectory simulations. To address this, we utilize the convolution theorem of the Fourier transform to compute $\phi(x|n)$ in Eq. (2). More specifically, in Fourier space, the equation $\hat{\phi}(k|n)=\widetilde{f}^{n}(k)$ holds true. This leads to the recursive relation: $\phi(x|n)=f\ast f\dots\ast f=\phi(x|n-1)\ast f$, where $*$ is the convolution operator. For the case of $n=2$, this translates to $\phi(x|2)=f\ast f=\int_{-\infty}^{\infty}f(\chi^{\prime})f(x-\chi^{\prime})d\chi^{\prime}$. This equation for $\phi(x|2)$ represents the integration over the probability of making a first step to a certain point $\chi^{\prime}$ and then a subsequent step to $x$. In Fig. 1, we conduct a comparison between this numerically exact solution (CTRW), our derived approximation in Eq. (15), and the numerical LDT calculated using Eq. (8), juxtaposed with the predictions of the CLT. For short time scales, our derived approximation and the numerical LDT demonstrate excellent agreement in the tails, where the CLT fails.
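The recursion above can be sketched as follows (our simplified stand-in for the authors’ tool, using direct grid convolutions and keeping the $n=0$ delta term on-grid):

```python
import numpy as np
from math import gamma, sqrt

def ctrw_propagator(t, beta, L=40.0, n_grid=2001):
    """Numerically 'exact' P(x,t): Poisson sum over phi(x|n), Eq. (2),
    with phi(x|n) = phi(x|n-1) * f built by repeated grid convolution."""
    x = np.linspace(-L, L, n_grid)
    dx = x[1] - x[0]
    alpha = sqrt(gamma(3.0 / beta) / gamma(1.0 / beta))
    norm = beta * sqrt(gamma(3.0 / beta)) / (2.0 * gamma(1.0 / beta) ** 1.5)
    f = norm * np.exp(-(alpha * np.abs(x)) ** beta)
    # n = 0 term: the particle has not jumped, a delta at the origin
    phi = np.zeros(n_grid)
    phi[n_grid // 2] = 1.0 / dx
    weight = np.exp(-t)                    # Poisson weight e^{-t} t^n / n!
    P = weight * phi
    n_max = int(t + 10.0 * sqrt(t) + 20.0)  # covers the bulk of the Poisson law
    for n in range(1, n_max + 1):
        phi = np.convolve(phi, f, mode="same") * dx
        weight *= t / n
        P += weight * phi
    return x, P
```

Summing $P$ against $1$ and $x^{2}$ recovers unit mass and variance $t$, and the tails are obtained with an accuracy far beyond what trajectory sampling could reach.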
As the time scale increases, the numerical LDT consistently aligns for all values of $x$ and $\beta$, whereas our derived approximation, though precise for $\beta=2$ (Gaussian) and closely fitting for $\beta=3/2$, converges more gradually when $\beta=1.1$, nearing the critical transition for $\beta=1$ that was studied in [63]. For this case, when $\beta=1$, exponential-like tails are still exhibited but are not described by Eq. (17). Upon further examination of Fig. 1, it becomes apparent that for $t\gg 1$, the application of the CLT or the Edgeworth expansion is also viable, a topic we will discuss shortly.

Figure 2: Convergence of the CTRW results (dashed-dotted) for increasing time scales, $t=1,5,15$, towards the numerical LDT, as derived in Eq. (8) (blue stars). The theory demonstrates exceptional accuracy, with the CTRW aligning with our numerical LDT across all values of $\beta$ for long times. Additionally, the linear-like nature of the tails is observed, aligning with the expected Laplace exponential tails. We also plot the asymptotic rate function (ARF) from Eq. (19) (shown as a black line). Notably, for $\beta=2$, the ARF aligns perfectly with the predictions, and for $\beta=1.5$, we achieve a close fit. However, as we approach the regime of the big jump principle, for $\beta=1.1$, convergence is slow, as we later describe.

### 3.2 Rate Function

The rate function is an important concept in the large deviations literature as it holds the main characteristics of the propagator, describing both the typical and rare events. It is obtained in the limit of $t\rightarrow\infty$, but $x/t$ finite: $\displaystyle\mathcal{I}(x/t)=\lim_{t\to\infty}-\frac{1}{t}\ln{\left[P(x,t)\right]}.$ (18) In our model, we can extract the rate function from Eq. (8). Taking into consideration the fact that $u_{0}$ is a function of $q=x/t$, see Eq. (9), i.e.
$q$ is the scaling parameter of the rate function: $\displaystyle\mathcal{I}(q)=qK(u_{0})\approx\begin{cases}q^{2}/2, & q\ll 1\\ |q|Z(|q|)+1, & q\gg 1.\end{cases}$ (19) The rate function in the small $q$ limit exhibits the quadratic CLT behavior, whereas the large $q$ behavior is what we call the asymptotic rate function (ARF). In Fig. 2, we demonstrate how our model effectively captures both the central part, corresponding to the CLT, and the Laplace (exponential)-like tails, corresponding to the large $q$ limit in Eq. (19), as $Z(q)$ is a slowly varying function. It becomes evident that, as time progresses, the numerical solution (CTRW) converges to our numerical LDT rate function. Moreover, although the ARF theory is valid, as expected, we observe an issue of slow convergence for values of $\beta$ near 1.

Figure 3: Slow convergence of the asymptotic rate function (ARF), as formulated in Eq. (19), to the numerical LDT obtained using Eq. (8), for the case $\beta=1.1$. As we approach $\beta=1$ from above, the convergence of our approximation (ARF) becomes increasingly gradual. This slowing of convergence is attributed to the proximity to the transition point. At this critical juncture, the moment-generating function becomes non-existent, and the system transitions into the big jump principle (BJP) regime.

### 3.3 Slow Convergence of the ARF

The issue of slow convergence of the ARF, observed in Fig. 2, becomes increasingly pronounced as $\beta$ approaches 1, as mentioned before. More specifically, we focus on the scenario where our approximation, formulated in Eq. (19), under the umbrella of LDT $(\beta>1)$, is nearing the critical transition point that occurs for $\beta=1$. For this value of $\beta$, the moment-generating function ceases to exist and our approximation becomes invalid. In Fig. 3, we highlight how, for $\beta=1.1$, the ARF converges slowly to the numerical LDT, as $q=x/t\rightarrow\infty$.
This gradual convergence is a direct consequence of our approximation for $u_{0}$, and the numerical solution for $u_{0}$ eliminates this problem. In Fig. 4 we demonstrate how our approximation of $u_{0}$ worsens as $\beta$ approaches 1, leading to the slow convergence of the rate function and the propagator in such cases.

Figure 4: Approximation of $u_{0}$ (colored lines), and numerically computed $u_{0}$ (corresponding colored symbols). While for $\beta=2$ our approximation is exact, as $\beta\rightarrow 1$ the approximation works well only when $x\gg t$.

## 4 Edgeworth Expansion

In the context of the summation of $n$ IID random variables, the Edgeworth expansion [64, 65, 66] provides corrections to the CLT. As a reminder, we denote the PDF of finding the particle at $x$, conditioned on performing $n$ jumps, as $\phi(x|n)$. The leading order of the Edgeworth expansion when $n$ is large is: $\phi(x|n)\sim\frac{\exp(-x^{2}/2n)}{\sqrt{2\pi n}}\left[1+\frac{\kappa_{4}\mathrm{He}_{4}(x/\sqrt{n})}{4!\ n}+\cdots\right]$ (20) where $\kappa_{4}$ is the fourth cumulant of $f(\chi)$, namely $\kappa_{4}=m_{4}-3\sigma^{4}$, and $m_{4}$ is the fourth moment of $f(\chi)$. In our example $\sigma=1$, while $m_{4}=\Gamma(1/\beta)\Gamma(5/\beta)/\Gamma(3/\beta)^{2}$, and $\mathrm{He}_{4}(\xi)=\xi^{4}-6\xi^{2}+3$. In Eq. (20) the correction term is proportional to the Hermite polynomial. It is tempting to use the Edgeworth expansion in the summation presented in Eq. (2). However, if $\kappa_{4}=0$, namely when $f(\chi)$ is Gaussian, corresponding to $\beta=2$, the correction term vanishes. We introduce a modified Edgeworth expansion, where $t$ is the large parameter. We use the small $k$ expansion $\widetilde{f}(k)=1-k^{2}/2+m_{4}k^{4}/4!+\cdots$ in Eq.
(4) $\widetilde{P}(k,t)=\exp\left[-\frac{tk^{2}}{2}+\frac{tm_{4}k^{4}}{4!}+\cdots\right].$ (21) Further expanding $\widetilde{P}(k,t)=\exp\left(-\frac{tk^{2}}{2}\right)\left[1+k^{4}\frac{m_{4}t}{4!}+\cdots\right].$ (22) Performing the inverse Fourier transform we obtain $P(x,t)=\frac{\exp\left(-x^{2}/2t\right)}{\sqrt{2\pi t}}\left[1+\frac{m_{4}}{4!\ t}\mathrm{He}_{4}\left(\frac{x}{\sqrt{t}}\right)+\cdots\right].$ (23) Eq. (23) has the same structure as the original expansion in Eq. (20), where we replaced large $n$ with large $t$. The difference is that now the correction is proportional to the fourth moment of the distribution $f(\chi)$ and not to its cumulant. The former is always non-negative, while the latter can be negative, zero, or positive. Therefore, we see that the fluctuations in the number of jumps have a significant effect. The contribution of the Hermite polynomials within our model can be demonstrated through a straightforward rearrangement of $P(x,t)$ from Eq. (23), denoted as $\mathcal{R}(x,t)$. By defining the central limit theorem component as $\mathrm{CLT}=\exp\left(-x^{2}/2t\right)/\sqrt{2\pi t}$, we obtain the following form: $\displaystyle\mathcal{R}(x,t)=\frac{4!t}{m_{4}}\frac{P(x,t)-\mathrm{CLT}}{\mathrm{CLT}},$ (24) where, at sufficiently long times, this expression approaches the fourth Hermite polynomial. This is shown in Fig. 5 for different values of $\beta$, where $\mathcal{R}(x,t)$ is plotted versus the scaled variable $x/\sqrt{t}$. The Edgeworth expansion uses the diffusive scaling $x/\sqrt{t}$ while the large deviations theory relies on the scaled variable $x/t$. Both contribute to the leading order of the central limit theorem when $x/t\ll 1$. Note that when using the Edgeworth approach in the limit $x/\sqrt{t}\gg 1$, we have from the Hermite function a correction term to the Gaussian $P(x,t)$ that can be written as $tm_{4}(x^{4}/t^{4})$. This results in the same scaling $q=x/t$ as found in the rate function $P(x,t)\propto\exp[-t{\cal{I}}(x/t)]$.
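To leading order, the modified expansion of Eq. (23) is a small correction around the CLT Gaussian; a minimal sketch (our illustration, with $m_{4}$ as given in the text):

```python
from math import gamma, sqrt, pi, exp

def he4(z):
    """Probabilists' Hermite polynomial He_4."""
    return z ** 4 - 6.0 * z ** 2 + 3.0

def jump_m4(beta):
    """Fourth moment m_4 of the unit-variance jump PDF, Eq. (1)."""
    return gamma(1.0 / beta) * gamma(5.0 / beta) / gamma(3.0 / beta) ** 2

def edgeworth(x, t, beta):
    """Modified Edgeworth approximation of P(x,t), Eq. (23), leading order:
    the CLT Gaussian times [1 + m_4 He_4(x/sqrt(t)) / (4! t)]."""
    clt = exp(-x ** 2 / (2.0 * t)) / sqrt(2.0 * pi * t)
    return clt * (1.0 + jump_m4(beta) * he4(x / sqrt(t)) / (24.0 * t))
```

Note that for $\beta=2$ one recovers $m_{4}=3$, so the correction term survives even in the Gaussian jump case, unlike in the original expansion Eq. (20), and it decays as $1/t$ at fixed $x/\sqrt{t}$.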
Thus, as expected, the Edgeworth expansion and large deviations theory are not conflicting. They consider different limits of the problem. The large deviations theory is valid for $\beta>1$ as it demands finite cumulants. The convergence properties of the Edgeworth expansion and their $\beta$ dependence are left for future work.

Figure 5: We showcase the leading order of the Edgeworth expansion by plotting, for $t=10$, the rearranged result from the numerical CTRW as outlined in Eq. (24) (colored lines). This is compared against the fourth Hermite polynomial $\mathrm{He}_{4}(x/\sqrt{t})=(x/\sqrt{t})^{4}-6(x/\sqrt{t})^{2}+3$. Excellent agreement is observed regardless of the value of $\beta$, thereby confirming the validity of this expansion for large time scales.

## 5 Big Jump Principle

### 5.1 Sum of IID Random Variables

The big jump principle (BJP) [67, 68, 69, 70, 71, 72, 73] describes how, in a heavy-tailed process with a sub-exponential distribution of jump lengths, a single “big” jump among a series of $n$ IID random variables can dominate the asymptotic characteristics of the process. This principle connects the sum $x=\sum_{i=1}^{n}\chi_{i}$ to the largest displacement $\chi_{\text{max}}=\max\{\chi_{1},\dots,\chi_{n}\}$ [69]. Specifically, this principle provides the asymptotic equality: $\displaystyle\int_{\chi_{\text{large}}}^{\infty}\phi(x|n)dx$ $\displaystyle=$ $\displaystyle\text{Prob}(x>\chi_{\text{large}})=$ (25) $\displaystyle\text{Prob}(\chi_{\text{max}}>\chi_{\text{large}}),$ in the limit of $\chi_{\text{large}}\rightarrow\infty$. To further elucidate, we calculate: $\displaystyle\text{Prob}\left(\chi_{\text{max}}>\chi_{\text{large}}\right)=1-\text{Prob}\left(\chi_{i}<\chi_{\text{large}}\right)^{n}$ (26) $\displaystyle\quad=1-\left[1-\int_{\chi_{\text{large}}}^{\infty}f(\chi)d\chi\right]^{n}.$ Taking the derivative with respect to $\chi_{\text{large}}$, we obtain the PDF of $x$.
Then, noting that $\int_{\chi_{\text{large}}}^{\infty}f(\chi)d\chi$ is vanishingly small and keeping only the leading order, we get the asymptotic result [74]: $\displaystyle\phi(x|n)_{\beta<1}\approx nf(x).$ (27) It is important to note that, in contrast with the CLT, which is applicable primarily when $n$ is significantly large, Eq. (27) is valid for all values of $n$, including instances when $n=2$. We will discuss the BJP for CTRW later, but for now, our focus is on $\phi(x|n)$, specifically in the case where the number of jumps, $n$, equals 2.

Figure 6: The graph of $g(y)=-|1-y|^{\beta}-|y|^{\beta}$, for different values of $\beta$. It is evident that for $\beta>1$, the function is smooth with a single global maximum at $y_{0}=1/2$. For $\beta=1$, the function is piecewise linear, and for $\beta<1$, it exhibits two non-analytical cusps at $y=0$ and $y=1$. This example serves as a foundational demonstration of the transition from BJP $(\beta<1)$ to LDT $(\beta>1)$, while $\beta=1$ is a special transition point [63].

### 5.2 Big Jump Principle for $n=2$

An understanding of the big jump principle is best demonstrated through the simple yet comprehensive scenario of two jumps. Employing the convolution theorem as seen earlier, $\phi(x|2)=f\ast f=\widetilde{N}^{2}\int_{-\infty}^{\infty}\exp\left(-\alpha^{\beta}|\chi^{\prime}|^{\beta}-\alpha^{\beta}|x-\chi^{\prime}|^{\beta}\right)d\chi^{\prime}$, we introduce the scaling variable $y=\frac{\chi^{\prime}}{x}$ and obtain the following equation: $\displaystyle\phi(x|2)=\widetilde{N}^{2}|x|\int_{-\infty}^{\infty}{\rm e}^{\alpha^{\beta}|x|^{\beta}g(y)}dy,$ (28) where we define $g(y)\equiv-|1-y|^{\beta}-|y|^{\beta}$. In Fig. 6 we demonstrate the transition in the analytical properties of $g(y)$. For $\beta>1$, where $f(\chi)$ is super-exponential, $g(y)$ is a smooth function with a single global maximum at $y_{0}=1/2$, suitable for the Laplace method of solving integrals. Conversely, as shown in Fig.
6, for $\beta<1$, $g(y)$ displays a pair of cusps, and two distinct maxima are observed. We proceed to examine these cases. Beginning with the case of $\beta>1$, one can approximate $g(y)\approx g(y_{0})+\frac{1}{2}g^{\prime\prime}(y_{0})(y-y_{0})^{2}$, facilitating a direct Gaussian integration: $\displaystyle\phi(x|2)_{\beta>1}\approx\widetilde{N}^{2}|x|\sqrt{\frac{\pi 2^{\beta-2}}{\beta(\beta-1)\alpha^{\beta}|x|^{\beta}}}{\rm e}^{-\left(\frac{1}{2}\right)^{\beta-1}\alpha^{\beta}|x|^{\beta}}.$ This result diverges as $\beta$ approaches 1. The critical case of $\beta=1$ marks a transition: the cumulant generating function of $\phi(x|n)$ exists only for $\beta>1$, and at $\beta=1$ the function $g(y)$ becomes piecewise linear. At $\beta=1$, we can exactly solve Eq. (28), yielding: $\phi(x|2)=\frac{\sqrt{2}+2|x|}{4}{\rm e}^{-\sqrt{2}|x|}=\left(\frac{1+\sqrt{2}|x|}{2}\right)f(x)$, which is not described by either the BJP or the LDT. However, for $\beta<1$, where $f(\chi)$ is sub-exponential, $g(y)$ exhibits two non-analytical cusps, and a slightly modified version of the Laplace method of solving integrals is needed. Due to the symmetry of $g(y)$, it is easy to show that the main contributions from the two maxima to the integration are identical. Opting for the left cusp, around $y=0$, we approximate $g(y)\approx-1-|y|^{\beta}$, and insert it into Eq. (28), to derive: $\phi(x|2)\approx\widetilde{N}^{2}|x|\int_{-\infty}^{\infty}{\rm e}^{-\alpha^{\beta}|x|^{\beta}\left(1+|y|^{\beta}\right)}dy=2f(x)$, aligning with the BJP. Now, we calculate corrections to the BJP, which are particularly important when $\beta\to 1$. The improved approximation is given by $g(y)\approx-1-|y|^{\beta}+\beta y+\frac{1}{2}\beta(1-\beta)y^{2}$, which we can plug into Eq.
(28) to obtain $\displaystyle\phi(x|2)\approx 2\widetilde{N}^{2}|x|$ $\displaystyle\times\int_{-\infty}^{\infty}{\rm e}^{\alpha^{\beta}|x|^{\beta}(-1-|y|^{\beta})}\big{(}1+{\rm d}_{1}(x)y+{\rm d}_{2}(x)y^{2}\big{)}dy,$ where the ${\rm d}_{1}(x)y$ term, with ${\rm d}_{1}(x)=\alpha^{\beta}|x|^{\beta}\beta$, vanishes upon integration due to symmetry. The solution of the integral above yields $\phi(x|2)\approx 2\left(1+\frac{{\rm d}_{2}(x)}{x^{2}}\right)f(x),$ (31) where ${\rm d}_{2}(x)=\frac{1}{2}\alpha^{\beta}|x|^{\beta}\beta\left(1-\beta+\alpha^{\beta}|x|^{\beta}\beta\right).$ (32) The first term of Eq. (31) is the BJP, as detailed in Eq. (27), while ${\rm d}_{2}(x)/x^{2}$ serves as the correction term. The correction becomes significant as $\beta$ approaches $1$ from below, since the magnitude of ${\rm d}_{2}(x)/x^{2}$ is of the same order as the leading term of the BJP. In Fig. 7, we compare the numerically obtained $\phi(x|2)$, the leading term from Eq. (27), and the corrected form in Eq. (31). This comparison emphasizes the importance of the additional correction terms as $\beta\to 1$ and shows that they are superfluous for $\beta\ll 1$.

Figure 7: The big jump principle for the pedagogical case of two jumps. The figure shows the numerically calculated $\phi(x|2)$ (colored symbols), the leading approximation $2f(x)$ (colored lines), and the corrected form derived in Eq. (31) (colored dashed lines) for $\beta=0.2,0.7,0.9$, displayed on a semi-logarithmic scale. While the BJP is effective at the far tails, it becomes apparent that convergence slows as $\beta$ nears the critical value of 1.

### 5.3 BJP for CTRW

As we progress with the big jump principle, integrating Eq. (27) into Eq. (2) leads us to a straightforward relation [63]: $\displaystyle P(x,t)\approx\langle n_{t}\rangle f(x),$ (33) where $\langle n_{t}\rangle$ represents the average number of jumps at time $t$. This relationship holds true for every distribution of waiting times.
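The two-jump statement is easy to probe numerically by evaluating the convolution $\phi(x_{0}|2)=(f\ast f)(x_{0})$ by quadrature on a grid (our illustration; the grid and evaluation points are arbitrary choices):

```python
import numpy as np
from math import gamma, sqrt

def bjp_ratio(x0, beta, L=300.0, n=600001):
    """Ratio phi(x0|2) / [2 f(x0)]: approaches 1 at large |x0| for
    sub-exponential jump PDFs (beta < 1), per Eq. (27)."""
    chi = np.linspace(-L, L, n)
    dchi = chi[1] - chi[0]
    alpha = sqrt(gamma(3.0 / beta) / gamma(1.0 / beta))
    norm = beta * sqrt(gamma(3.0 / beta)) / (2.0 * gamma(1.0 / beta) ** 1.5)
    f = lambda c: norm * np.exp(-(alpha * np.abs(c)) ** beta)
    phi2 = np.sum(f(chi) * f(x0 - chi)) * dchi   # phi(x0|2) = (f*f)(x0)
    return phi2 / (2.0 * f(x0))
```

For $\beta=0.5$ the ratio at $x_{0}=100$ is close to 1, as predicted by the BJP (with the small positive correction of Eq. (31)), while for a super-exponential case such as $\beta=2$ the ratio is enormous, i.e., the big jump principle does not apply there.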
In this manuscript, our focus is specifically on exponential waiting times, corresponding to Poisson statistics. Namely, we have $\langle n_{t}\rangle=t$, hence $P(x,t)\approx tf(x)$. This signifies a pivotal behavior change: instead of exhibiting exponential tails, the propagator will now display sub-exponential characteristics. Thus, the universality observed in the super-exponential case (for $\beta>1$) disappears for $\beta<1$. Similarly, if $f(x)$ has a power law tail, Eq. (33) is still valid, and $P(x,t)$ will decay as a power law for large $x$. In Fig. 8, we showcase this transition, observing how the propagator, $P(x,t)$, shifts from sub-exponential features to exponential, Laplace-like characteristics.

Figure 8: The propagator $P(x,t)$ is shown for different values of $\beta$ (see legend). For $\beta>1$, $P(x,t)$ exhibits the exponential, Laplace-like tails associated with large deviations theory, while for $\beta<1$ it displays sub-exponential behavior, as per the big jump principle. Circles denote the numerically obtained CTRW data, while solid lines represent our approximations. For $\beta<1$, Eq. (33) is employed, and for $\beta>1$, Eq. (15) is utilized.

## 6 Discussion

This research has centered on applying large deviations theory (LDT) to the study of diffusive motion in the context of continuous time random walks with exponential waiting times. A notable aspect of LDT is its traditional use in probing rare events, specifically within the large $x$ limit of the probability density function $P(x,t)$. While rate functions, a key component in LDT, are often perceived as challenging to sample in empirical settings [75, 76], our study suggests otherwise. By focusing on short time scales and large $x$ values, we demonstrate that the nearly exponential decay of $P(x,t)$, albeit with logarithmic corrections, is more accessible for sampling than previously assumed.
This accessibility is particularly pertinent in single tracer experiments, where the microscopic time scale, characterized by the mean time between jumps, is not significantly smaller than the observation period $t$. Under these conditions, a universal exponential decay in the particle density is observed. Additionally, our investigation delved into the realm of the big jump principle, particularly its interplay and transition to LDT. The BJP is critical for understanding heavy-tailed processes in which significant fluctuations, or “big jumps”, dominate the characteristics of the random walk. This principle becomes particularly relevant when the jump length distribution is sub-exponential, leading to sub-exponential behavior in the propagator [74, 63, 77, 78, 79, 80]. The transition from the BJP regime $(\beta<1)$ to the LDT regime $(\beta>1)$ highlights how different statistical frameworks govern the behavior of diffusive processes under varying conditions. These findings are crucial as they extend the utility of LDT and BJP beyond theoretical confines, making them practical tools for capturing statistical behaviors of particle dispersion that deviate from the normal distribution, inherent in various physical systems. While the CLT remains a cornerstone in statistical physics, our study illustrates that deviations from CLT predictions, particularly in the tails, are theoretically significant and experimentally observable. Furthermore, the employment of the Edgeworth expansion has provided valuable corrections to the CLT, leading to a more nuanced understanding of the propagator’s behavior in different regimes. This approach has bridged the gap between the Gaussian center and the non-Gaussian tails of the distribution, capturing the essence of diffusive dynamics more comprehensively. The understanding of Laplace’s first law is not limited to the CTRW framework. In Refs. [81, 29], a many-body scenario was studied, employing mainly simulations.
One example is the Hitchhiker model [81], a framework based on experimental findings that the size of molecules may fluctuate due to diffusion, aggregation, and fragmentation processes, resulting in exponential-like tails for $P(x,t)$. Similarly, Ref. [29] argues that polymerization processes give rise to Brownian yet non-Gaussian diffusion. ### 6.1 Open Questions In the CTRW framework, a significant challenge is presented by the intermediate regime, where neither the central limit theorem nor the large deviations theory/big jump principle effectively approximates the results. Rusciano et al. [16] introduced a scaling parameter, assuming exponential statistics: $\exp{(-|x|/\lambda(t))}$, where $\lambda(t)\sim t^{1/3}$. An opposing comment was quickly raised by Berthier et al. [82], yet the phenomenon, particularly the non-trivial exponent $1/3$, was observed in several other experiments [57, 23, 16, 24]. More specifically, in Ref. [57], both the scaling parameter and the Lambert functions, as derived in this manuscript, are demonstrated. Furthermore, in Ref. [14], a similar scaling parameter, $\lambda(t)\sim t^{1/2}$, is proposed, adding further intrigue to the subject. It remains unclear whether the experimental and numerical evidence for $\lambda(t)\sim t^{b}$ represents merely a fitting issue due to insufficient data, or if it describes some deep physics not fully understood by the authors of this manuscript. Chaudhuri et al. [22] proposed that the waiting time PDF is a combination of two exponentials, with the first jump in the random walk being drawn from a less frequent PDF compared to the subsequent jumps. Under certain conditions, this results in more pronounced exponential tails. As mentioned in our model, for the sake of mathematical simplicity and broad applicability, we employed a Poisson process, while other works investigated a wider variety of processes [47].
However, a practical question that remains largely unanswered is the impact of non-exponential waiting times on the observations made in this study. Similarly, the effect of changing the dimensionality of the problem on our results is yet to be explored. Another interesting approach to the tails of CTRW was recently pioneered by Sokolov and Pacheco [55]. They use the rate function representation of the temporal process (waiting times) and the rate function representation of the spatial process (jumps) to study the properties of the rate function of $P(x,t)$. Finally, delving beyond sub- and super-exponential jump lengths, we still expect Laplace-like tails, for example, in the case of CTRW on a lattice [47]. One may find similar results to those presented in this manuscript, though the Lambert function, used extensively here, is not found to be generic. This implies that the characterization of the rate function for more general jump length distributions might yield further insights. Acknowledgements: We thank Lucianno Defaveri for insightful discussions. We also acknowledge the support of Israel Science Foundation’s grants 1614/21, and 2796/20. ## References * Laplace [1774] P. S. Laplace, Mémoire de l’académie royale des sciences (1774). * Laplace [1781] P. S. Laplace, Mémoires de l’Académie Royale des sciences de Paris 1778 (1781). * Wilson [1923] E. B. Wilson, Journal of the American Statistical Association 18 (1923). * Teets and Whitehead [1999] D. Teets and K. Whitehead, Mathematics Magazine (1999). * Stigler [1981] S. M. Stigler, The Annals of Statistics (1981). * Schramma _et al._ [2023] N. Schramma, C. P. Israëls, and M. Jalaal, Proceedings of the National Academy of Sciences 120 (2023). * Lampo _et al._ [2017] T. J. Lampo, S. Stylianidou, M. P. Backlund, P. A. Wiggins, and A. J. Spakowitz, Biophysical Journal (2017). * Toyota _et al._ [2011] T. Toyota, D. A. Head, C. F. Schmidt, and D.
Mizuno, Soft Matter (2011). * Stuhrmann _et al._ [2012] B. Stuhrmann, M. Soares e Silva, M. Depken, F. C. MacKintosh, and G. H. Koenderink, Phys. Rev. E (2012). * e Silva _et al._ [2014] M. S. e Silva, B. Stuhrmann, T. Betz, and G. H. Koenderink, New Journal of Physics 16 (2014). * Weeks _et al._ [2000] E. R. Weeks, J. Crocker, A. C. Levitt, A. Schofield, and D. Weitz, Science (2000). * Leptos _et al._ [2009] K. C. Leptos, J. S. Guasto, J. P. Gollub, A. I. Pesci, and R. E. Goldstein, Phys. Rev. Lett. (2009). * Stylianidou _et al._ [2014] S. Stylianidou, N. J. Kuwada, and P. A. Wiggins, Biophysical Journal 107 (2014). * Wang _et al._ [2009] B. Wang, S. M. Anthony, S. C. Bae, and S. Granick, Proceedings of the National Academy of Sciences (2009). * Wang _et al._ [2012] B. Wang, J. Kuo, S. C. Bae, and S. Granick, Nature materials (2012). * Rusciano _et al._ [2022] F. Rusciano, R. Pastore, and F. Greco, Phys. Rev. Lett. 128 (2022). * Chakraborty and Roichman [2020] I. Chakraborty and Y. Roichman, Phys. Rev. Res. 2 (2020). * Guan _et al._ [2014] J. Guan, B. Wang, and S. Granick, ACS nano 8 (2014). * Pastore _et al._ [2021] R. Pastore, A. Ciarlo, G. Pesce, F. Greco, and A. Sasso, Phys. Rev. Lett. 126 (2021). * Pastore _et al._ [2022] R. Pastore, A. Ciarlo, G. Pesce, A. Sasso, and F. Greco, Soft Matter 18 (2022). * Waigh and Korabel [2023] T. A. Waigh and N. Korabel, Reports on Progress in Physics (2023). * Chaudhuri _et al._ [2007] P. Chaudhuri, L. Berthier, and W. Kob, Phys. Rev. Lett. (2007). * Åberg and Poolman [2021] C. Åberg and B. Poolman, Biophysical Journal 120 (2021). * Miotto _et al._ [2021] J. M. Miotto, S. Pigolotti, A. V. Chechkin, and S. Roldán-Vargas, Phys. Rev. X 11 (2021). * Chubynsky and Slater [2014] M. V. Chubynsky and G. W. Slater, Phys. Rev. Lett. 113 (2014). * Yamamoto _et al._ [2021] E. Yamamoto, T. Akimoto, A. Mitsutake, and R. Metzler, Phys. Rev. Lett. 126 (2021). * Hidalgo-Soria _et al._ [2021] M. Hidalgo-Soria, E. Barkai, and S. 
Burov, Entropy 23 (2021). * Chechkin _et al._ [2017] A. V. Chechkin, F. Seno, R. Metzler, and I. M. Sokolov, Phys. Rev. X 7 (2017). * Baldovin _et al._ [2019] F. Baldovin, E. Orlandini, and F. Seno, Frontiers in Physics (2019). * Nampoothiri _et al._ [2022] S. Nampoothiri, E. Orlandini, F. Seno, and F. Baldovin, New Journal of Physics 24 (2022). * Nampoothiri _et al._ [2021] S. Nampoothiri, E. Orlandini, F. Seno, and F. Baldovin, Phys. Rev. E 104 (2021). * Montroll and Weiss [1965] E. W. Montroll and G. H. Weiss, Journal of Mathematical Physics 6 (1965). * Barkai [2002] E. Barkai, Chemical Physics 284 (2002). * Weiss and Rubin [1983] G. H. Weiss and R. J. Rubin, _Random walks: theory and selected applications_ , Vol. 52 (Wiley Online Library, 1983) pp. 363–505. * Weiss [1994] G. H. Weiss, North-Holland: Amsterdam, The Netherlands (1994). * Aghion _et al._ [2018] E. Aghion, D. A. Kessler, and E. Barkai, Eur.Phys.J. B 91 (2018). * Kutner and Masoliver [2017] R. Kutner and J. Masoliver, Eur.Phys.J. B 90 (2017). * Metzler and Klafter [2000] R. Metzler and J. Klafter, Physics Reports 339 (2000). * Shafir and Burov [2022] D. Shafir and S. Burov, Journal of Statistical Mechanics: Theory and Experiment 2022 (2022). * Barkai _et al._ [2000] E. Barkai, R. Metzler, and J. Klafter, Phys. Rev. E 61 (2000). * Monthus and Bouchaud [1996] C. Monthus and J.-P. Bouchaud, Journal of Physics A: Mathematical and General 29 (1996). * Masoliver _et al._ [2003] J. Masoliver, M. Montero, and G. H. Weiss, Phys. Rev. E 67 (2003). * Klafter and Sokolov [2011] J. Klafter and I. M. Sokolov, _First steps in random walks: from tools to applications_ (Oxford University Press, 2011). * Vitali _et al._ [2022] S. Vitali, P. Paradisi, and G. Pagnini, Journal of Physics A: Mathematical and Theoretical 55 (2022). * Luo _et al._ [2024] X. Luo, J.-D. Bao, and W.-Y. Fan, Phys. Rev. E 109 (2024). * Barkai and Burov [2020] E. Barkai and S. Burov, Phys. Rev. Lett. 124 (2020). * Wang _et al._ [2020] W. 
Wang, E. Barkai, and S. Burov, Entropy 22 (2020). * Touchette [2009] H. Touchette, Physics Reports (2009). * Majumdar and Vergassola [2009] S. N. Majumdar and M. Vergassola, Phys. Rev. Lett. (2009). * Krapivsky _et al._ [2014] P. L. Krapivsky, K. Mallick, and T. Sadhu, Phys. Rev. Lett. 113 (2014). * Hegde _et al._ [2014] C. Hegde, S. Sabhapandit, and A. Dhar, Phys. Rev. Lett. 113 (2014). * Nickelsen and Touchette [2018] D. Nickelsen and H. Touchette, Phys. Rev. Lett. 121 (2018). * Derrida [2007] B. Derrida, Journal of Statistical Mechanics: Theory and Experiment (2007). * du Buisson and Touchette [2023] J. du Buisson and H. Touchette, Physical Review E 107 (2023). * Pacheco-Pozo and Sokolov [2021] A. Pacheco-Pozo and I. M. Sokolov, Phys. Rev. E 103 (2021). * Lugannani and Rice [1980] R. Lugannani and S. Rice, _Saddle point approximation for the distribution of the sum of independent random variables_, Vol. 12 (Cambridge University Press, 1980) pp. 475–490. * Hu _et al._ [2023] M. Hu, H. Chen, H. Wang, S. Burov, E. Barkai, and D. Wang, ACS nano 17 (2023). * Rosenblatt [1956] M. Rosenblatt, Proceedings of the National Academy of Sciences 42 (1956). * Cox [1962] D. R. Cox, Methuen,London (1962). * Burov [2020] S. Burov, arXiv preprint arXiv:2007.00381 (2020). * Bochner and Chandrasekharan [1949] S. Bochner and K. Chandrasekharan, _Fourier transforms_ (Princeton University Press, 1949). * Corless _et al._ [1996] R. M. Corless, G. H. Gonnet, D. E. Hare, D. J. Jeffrey, and D. E. Knuth, Advances in Computational mathematics (1996). * Singh and Burov [2023] R. K. Singh and S. Burov, Phys. Rev. E 108 (2023). * Edgeworth [1905] F. Y. Edgeworth, in _Cambridge Philos. Trans._ , Vol. 20 (1905). * Edgeworth [1906] F. Y. Edgeworth, Journal of the Royal Statistical Society (1906). * Kendall _et al._ [1948] M. G. Kendall _et al._ , _The advanced theory of statistics. Vols. 1._ , Ed. 4 (Charles Griffin and Co., Ltd., 42 Drury Lane, London, 1948). * Vezzani _et al._ [2019] A. 
Vezzani, E. Barkai, and R. Burioni, Phys. Rev. E 100 (2019). * Vezzani _et al._ [2020] A. Vezzani, E. Barkai, and R. Burioni, Scientific Reports 10 (2020). * Chistyakov [1964] V. Chistyakov, Theory of Probability and Its Applications 9 (1964). * Höll and Barkai [2021] M. Höll and E. Barkai, Eur.Phys.J. B 94 (2021). * Wang _et al._ [2019] W. Wang, A. Vezzani, R. Burioni, and E. Barkai, Phys. Rev. Research 1 (2019). * Burioni and Vezzani [2020] R. Burioni and A. Vezzani, Journal of Statistical Mechanics: Theory and Experiment 2020 (2020). * Vezzani and Burioni [2023] A. Vezzani and R. Burioni, arXiv preprint arXiv:2309.16227 (2023). * Foss _et al._ [2011] S. Foss, D. Korshunov, S. Zachary, _et al._ , _An introduction to heavy-tailed and subexponential distributions_ , Vol. 6 (Springer, 2011). * Debiossac _et al._ [2023] M. Debiossac, N. Kiesel, and E. Lutz, arXiv preprint arXiv:2309.06056 (2023). * Thapa _et al._ [2021] S. Thapa, A. Wyłomańska, G. Sikora, C. E. Wagner, D. Krapf, H. Kantz, A. V. Chechkin, and R. Metzler, New Journal of Physics 23 (2021). * Embrechts _et al._ [2013] P. Embrechts, C. Klüppelberg, and T. Mikosch, _Modelling extremal events: for insurance and finance_ , Vol. 33 (Springer Science and Business Media, 2013). * Kutner [2002] R. Kutner, Chemical Physics 284 (2002). * De Mulatier _et al._ [2013] C. De Mulatier, A. Rosso, and G. Schehr, Journal of Statistical Mechanics: Theory and Experiment 2013 (2013). * Majumdar _et al._ [2005] S. N. Majumdar, M. R. Evans, and R. K. P. Zia, Phys. Rev. Lett. 94 (2005). * Hidalgo-Soria and Barkai [2020] M. Hidalgo-Soria and E. Barkai, Phys. Rev. E 102 (2020). * Berthier _et al._ [2023] L. Berthier, E. Flenner, and G. Szamel, Phys. Rev. Lett. 131 (2023).
# Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering Kosuke Akimoto Kunihiro Takeoka Masafumi Oyamada NEC Corporation {kosuke_a, k_takeoka<EMAIL_ADDRESS> ###### Abstract Retrieval-augmented generation models augment knowledge encoded in a language model by providing additional relevant external knowledge (context) during generation. Although it has been shown that the quantity and quality of context impact the performance of retrieval-augmented generation models during inference, limited research explores how these characteristics affect model training. This paper explores how context quantity and quality during model training affect the performance of Fusion-in-Decoder (FiD), the state-of-the-art retrieval-augmented generation model, in extractive open-domain question answering tasks. Experimental results suggest that FiD models overfit to context quality during training and show suboptimal performance when evaluated on different context quality. Through the experimental results, we also reveal that FiD models trained with different context quality have different cross-attention distribution patterns. Specifically, as context quality during training increases, FiD models tend to attend more uniformly to each passage in context. Finally, based on these observations, we propose a method to mitigate overfitting to specific context quality by introducing bias to the cross-attention distribution, which we demonstrate to be effective in improving the performance of FiD models on different context quality. ## 1 Introduction Recently, large-scale pre-trained language models have achieved impressive performance in the field of Natural Language Generation, which includes tasks that require real-world knowledge, e.g., closed-book question answering and common sense reasoning Brown et al. (2020). However, these models are still prone to generating factually incorrect outputs known as hallucinations Ji et al.
(2023), particularly when dealing with rare entities Mallen et al. (2023). Also, they cannot handle new information that arises after their training phase Kasai et al. (2022). In order to address these challenges, retrieval-augmented generation models have been recently proposed Izacard and Grave (2021b); Lewis et al. (2020). These models draw inspiration from retrieval-based extractive open-domain question answering methods Chen et al. (2017) and utilize additional relevant external knowledge (e.g., a Wikipedia article about an entity in a given question) during generation to augment knowledge encoded in a language model. Retrieval-augmented generation models have demonstrated effectiveness in knowledge-intensive tasks Petroni et al. (2021) such as question answering and fact checking Hofstätter et al. (2022), and have been reported to reduce hallucinations in dialogue tasks Shuster et al. (2021). The external knowledge given to the models is called a context, and it is usually obtained through information retrieval systems Lin et al. (2022). Multiple passages, typically up to 100, are often used collectively as a single context to ensure the high recall of relevant information. This strategy addresses the limitations of retrieval systems, which may return irrelevant passages and fail to capture relevant information in the top results. When dealing with contexts composed of multiple passages, we can define their quantity (the number of passages in the context) and quality (the proportion of relevant passages in the context). Since the context quantity and quality vary depending on model configuration or application, e.g., the performance of the retrieval system and the computational resources available, understanding how these characteristics impact the model performance becomes an important research question. 
Indeed, during the inference phase, it has been shown that the quantity and quality of contexts impact the performance of retrieval-augmented generation models. For example, Izacard and Grave (2021b) showed that increasing the number of top-ranked retrieved passages used as a context during inference improves the performance of their model in the question answering task, and Weller et al. (2022) found that the model prediction is distracted more strongly as the proportion of conflicting misinformation in the context increases. However, regarding the training phase, it is not yet fully understood how these context characteristics impact the performance of the trained models. Limited research suggests that increasing the number of retrieved passages used as a context during training improves question answering performance Izacard and Grave (2021b) and reduces memorization Chen et al. (2022). Still, the effects of context quantity and quality are conflated in these studies, as relevant passages are typically biased towards higher ranks in the retrieval result, and simply increasing the number of top-ranked passages changes both the quantity and quality of the context. In this paper, we focused on extractive open-domain question answering tasks and investigated the impact of context quantity and quality on the training of Fusion-in-Decoder (FiD) Izacard and Grave (2021b), a state-of-the-art retrieval-augmented generation model. We demonstrate that context quality during training affects the performance of the trained model. To the best of our knowledge, this work is the first attempt to explicitly control context quality and investigate its effect on the training of retrieval-augmented generation models. Key insights obtained through our experiments are as follows: * • FiD models overfit to context quality during training, resulting in deteriorated performance when evaluated on a different quality of context.
* • FiD models overfit less to context quantity compared to context quality. * • FiD models trained with different context qualities show different patterns of cross-attention probability. As context quality during training increases, the trained models tend to attend more uniformly to each passage in context and vice versa. Based on these observations, we propose a method to mitigate the overfitting of a trained FiD model to specific context quality without additional training by controlling the selectivity of its cross-attention distribution. We present an empirical analysis demonstrating the proposed method’s effectiveness in improving the performance of a trained FiD model when deployed in environments with different context quality than those used during training. ## 2 Experimental Setup In this section, we describe the task (§2.1) and model architecture (§2.2) used in our experiments, and we define the context quality and quantity used in this paper (§2.3, 2.4). ### 2.1 Task and Dataset This study focuses on the extractive open-domain question answering task, where models have to extract answers from retrieved documents. We conducted experiments on two standard benchmark datasets of the task: * • Natural Questions Kwiatkowski et al. (2019) contains questions submitted to the Google Search engine. We use the open-domain version of this dataset presented by Lee et al. (2019). * • TriviaQA Joshi et al. (2017) contains questions authored by trivia enthusiasts. Following Lee et al. (2019), we use the unfiltered set of the dataset. For each dataset, following Izacard and Grave (2021b), we used the top-100 passages retrieved by DPR Karpukhin et al. (2020) (the retrieved passages were obtained from the published dataset at https://github.com/facebookresearch/FiD).
As an evaluation metric, we computed the exact match (EM) between a ground-truth answer and the predicted answer generated by greedy decoding (if a question has multiple ground-truth answers, EM is computed as one if the predicted answer matches any one of the ground-truth answers). We evaluated the performance on the development set of each dataset. ### 2.2 Model In our experiments, we focused on Fusion-in-Decoder (FiD) Izacard and Grave (2021b), a state-of-the-art retrieval-augmented generation architecture. FiD is extended from sequence-to-sequence models, such as T5 Raffel et al. (2020), and consists of a Transformer encoder $E$ and decoder $D$ Vaswani et al. (2017). Given a question $q$ and its context $c=\\{p_{i}\\}_{i=1}^{N}$, where $p_{i}$ is the $i$-th passage of the context, a FiD model converts each passage $p_{i}$ to $\tilde{p}_{i}$ by a template "question: {q} title: {t} context: {c}". Here, {q}, {t}, and {c} are respectively replaced by $q$, the title of $p_{i}$, and the main text of $p_{i}$. Then, a FiD model independently encodes each converted passage $\tilde{p}_{i}$ by the encoder $E$ and feeds their concatenation to the decoder $D$ to get the predicted answer $a$ as follows: $a=D([E(\tilde{p}_{1});...;E(\tilde{p}_{N})]).$ (1) We followed standard practice and trained FiD models by minimizing a cross-entropy loss of a ground-truth answer. As the position of a passage is not considered while encoding, the prediction of a FiD model is insensitive to the order of the passages. Thus we did not perform any special shuffling or ordering of the passages during training and evaluation. We used t5-base (https://huggingface.co/t5-base) Raffel et al. (2020) to initialize the model. See Appendix A for other implementation details of training and inference of FiD models. ### 2.3 Relevant and Irrelevant Passage In this paper, we adopt the same definition of relevant and irrelevant passage as in Li et al. (2022).
More specifically, a passage is relevant to a question if it logically entails an answer to the question, and it is irrelevant if it does not. However, in our open-domain setting, no ground-truth annotation of passage relevance exists for Natural Questions and TriviaQA. As discussed by Li et al. (2022), a simple rule that deems any passage containing a ground-truth answer relevant is insufficient to filter out irrelevant passages that contain the answer but do not entail the answer. Since accurately determining whether a passage is relevant or not is crucial for estimating context quality, we applied an additional rule to extract relevant passages. We fed a pair of a question and a passage to a pre-trained question answering model (to annotate a passage of a question $q$ in dataset $\mathcal{D}$, we used FiD models trained on a subset of $\mathcal{D}$ that did not contain $q$; see Appendix C for more details), and deemed the passage relevant if the predicted answer matched a ground-truth answer to the question (we discarded those passages that contained a ground-truth answer but did not pass the additional filter, and we did not use those passages in our experiments). Following Li et al. (2022), we considered a passage that did not contain any ground-truth answers to the question as irrelevant. We respectively denote the set of relevant and irrelevant passages of question $q$ by $R(q)$ and $\bar{R}(q)$, and we omit $(q)$ when it is not necessary. ### 2.4 Context Quality and Quantity For a question $q$ and a context $c=\\{p_{i}\\}_{i=1}^{N}$ of $N$ passages, we define context quality and quantity as follows: * • Context quality is defined as the proportion of passages in $c$ that are relevant to $q$, i.e. $\frac{|R(q)|}{N}$. * • Context quantity is the number of passages in $c$, i.e. $N$.
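The relevance rule above and the context statistics of §2.4 can be made concrete in a few lines. The following sketch is our own illustration rather than the authors' code: `qa_stub` is a hypothetical stand-in for the pre-trained QA model, and simple substring matching stands in for answer matching. It also shows the FiD per-passage input template of §2.2.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    title: str
    text: str

def fid_input(question: str, p: Passage) -> str:
    # per-passage template; each such string is encoded independently (Eq. (1))
    return f"question: {question} title: {p.title} context: {p.text}"

def annotate(question, answers, passages, qa_predict):
    # Irrelevant: contains no ground-truth answer. Relevant: contains an answer
    # AND the QA model, given (question, passage), predicts a correct answer.
    # Passages that contain an answer but fail the QA check are discarded.
    relevant, irrelevant = [], []
    for p in passages:
        if not any(a in p.text for a in answers):
            irrelevant.append(p)
        elif qa_predict(question, p) in answers:
            relevant.append(p)
    return relevant, irrelevant

def context_quality(n_relevant: int, n: int) -> float:
    return n_relevant / n          # context quantity is simply n

# toy usage with a hypothetical QA stub
answers = ["Paris"]
passages = [Passage("France", "Paris is the capital of France."),
            Passage("Cities", "Paris, Lyon and Marseille are French cities."),
            Passage("Germany", "Berlin is the capital of Germany.")]
qa_stub = lambda q, p: "Paris" if "capital" in p.text else "unknown"
rel, irr = annotate("What is the capital of France?", answers, passages, qa_stub)
# rel -> France passage; irr -> Germany passage; the Cities passage contains
# the answer string but fails the QA check, so it is discarded
```

With $n^{+}$ relevant and $n^{-}$ irrelevant passages sampled for a question, `context_quality(n_plus, n_plus + n_minus)` is the quality controlled in the experiments that follow.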
## 3 Case Studies In this section, we describe our experiments to investigate how context quantity and quality during training affect the performance of FiD models. Throughout the experiments in this section, we created various training and evaluation environments with controlled context quantity or quality by sampling $n^{+}$ relevant passages from $R(q)$ and $n^{-}$ irrelevant passages from $\bar{R}(q)$ for each question $q$, without replacement (see Appendix B for more details of the experimental design). In the rest of this paper, we define $n=n^{+}+n^{-}$ as the number of total passages and $k=\frac{n^{-}}{n^{+}}$ as the ratio of irrelevant passages to relevant ones. We will use subscripts $\cdot_{\text{train}}$ and $\cdot_{\text{eval}}$ respectively to denote the values of training and evaluation environments if required. ### 3.1 Effect of Context Quality during Training Setting: To investigate the effect of context quality during model training, we created training and evaluation environments with the same context quantity but different context qualities. More specifically, for each total number of passages (i.e., context quantity) $n$ in $\\{10,25,60\\}$, we varied the value of $n^{+}$ among $\\{1,2,3\\}$ (Natural Questions) or $\\{1,2,3,5,10\\}$ (TriviaQA) to obtain environments with different context qualities. Figure 1: Performance of FiD models on Natural Questions with varying training context quality. Panels represent different evaluation environments with different $(n_{\text{eval}}^{+},n_{\text{eval}})$ pairs, and a red dashed line shows the context quality of the corresponding evaluation environment. Red stars represent the best-performing models in the corresponding evaluation environments. Dotted lines show models trained on the same context quantity $n_{\text{train}}$. Result: Figure 1 shows the performance of models with different training context qualities (see Figures 6 and 7 and Tables 7 and 8 in Appendix D for full results).
We can obtain the following observations from the figure: (i) For a given evaluation context quality, models trained with similar context quality showed the highest performance. (ii) There was a trend of monotonically decreasing performance as training context quality deviated further from evaluation context quality. (iii) Differences in context quantity had a negligible impact on the above trends. Insight: FiD models overfit to context quality during training, and suboptimal performance can be obtained when evaluated on different context qualities. ### 3.2 Effect of Context Quantity during Training Setting: To investigate the effect of context quantity during model training, we created training and evaluation environments with the same context quality but different context quantities. More specifically, for each ratio $k$ among $\\{1,5,20\\}$, we varied the value of $n^{+}$ among $\\{1,2,3\\}$ (Natural Questions) or $\\{1,2,3,5,8,10\\}$ (TriviaQA) to change context quantity (since target questions were only guaranteed to have at least 64 irrelevant passages, we did not conduct experiments in environments with $kn^{+}>64$). Result: Figure 2 shows the performance of models with different training context quantities (see Figures 8 and 9 and Tables 9 and 10 in Appendix D for full results). As can be seen in the figure, the influence of context quantity on model training was generally less significant than that of context quality (§3.1). However, we observed a more significant influence for smaller $k_{\text{train}}$ (higher context quality) in comparison to larger $k_{\text{train}}$ (lower context quality), especially in the cases where the training context quantity was small. One possible explanation for this behavior is that the impact of noise in the annotation of relevant (or irrelevant) passages disturbs the actual context quality, and this effect is magnified in such cases due to limited context quantities.
Nevertheless, our experiments did not reveal a consistent trend in performance changes due to varying context quantity. Hence, the results indicate that context quantity’s influence is relatively insignificant compared to context quality’s. Insight: Training context quantity has less influence on model performance compared to context quality. ### 3.3 Effect of Mixed Context Quality during Training Generally, in practical uncontrolled settings, a training dataset for FiD models may consist of questions with different context qualities. Thus, we conducted experiments with mixed context qualities to investigate whether a similar overfitting phenomenon occurs for a training dataset with multiple context qualities. Setting: We created three environments for each dataset. Specifically, context quantity was set to $n=10$, and $n^{+}$ was varied among $\\{1,2,3\\}$ for Natural Questions, and context quantity was set to $n=25$, and $n^{+}$ was varied among $\\{2,5,10\\}$ for TriviaQA. Then, we uniformly mixed each subset of the three environments and trained FiD models in each of them (we mixed environments by randomly selecting the value of $n^{+}$ and sampling passages accordingly for each question and training step). Performance in a mixed environment was computed by averaging the performance in each constituting environment. Figure 2: Performance of FiD models on TriviaQA with varying training context quantity. Panels represent different evaluation environments with different $(n^{+}_{\text{eval}},k_{\text{eval}})$ pairs, and a red dashed line shows the context quantity of the corresponding evaluation environment. Dotted lines show models trained on the same context quality $\frac{1}{1+k_{\text{train}}}$. Result: Table 1 shows model performance for each pair of training and evaluation environments in Natural Questions (see Table 5 for the result on TriviaQA).
High scores at diagonal elements of the table show that the models performed best or as well as the best model when they were evaluated in the same mixture of environments as the one used in training. For example, the models trained in the uniform mixture of all environments performed best only when they were evaluated on the same mixture. This suggests that covering all context qualities during training is insufficient for optimal performance; the distribution of the context qualities also matters to the performance. Table 1: Performance of FiD models trained in each mixture of environments on Natural Questions. Rows correspond to training mixtures of $n^{+}_{\text{train}}$ values and columns to evaluation mixtures of $n^{+}_{\text{eval}}$ values.

| Train \ Eval | {1} | {2} | {3} | {2,3} | {1,3} | {1,2} | {1,2,3} |
|---|---|---|---|---|---|---|---|
| {1} | 69.8 | 82.9 | 88.6 | 85.7 | 79.2 | 76.4 | 80.4 |
| {2} | 65.0 | 85.5 | 92.2 | 88.9 | 78.6 | 75.3 | 80.9 |
| {3} | 53.9 | 83.4 | 93.0 | 88.2 | 73.5 | 68.7 | 76.8 |
| {2,3} | 62.3 | 85.2 | 92.5 | 88.8 | 77.4 | 73.7 | 80.0 |
| {1,3} | 69.1 | 84.0 | 90.0 | 87.0 | 79.5 | 76.5 | 81.0 |
| {1,2} | 69.7 | 83.9 | 89.6 | 86.8 | 79.6 | 76.8 | 81.1 |
| {1,2,3} | 68.3 | 84.6 | 90.7 | 87.6 | 79.5 | 76.4 | 81.2 |

Insight: FiD models overfit to the distribution of context qualities during training. ### 3.4 Effect of Context Quality during Training on Model’s Cross-attention As we discussed in §3.1, FiD models trained on different context qualities may overfit to each quality, and they perform differently in the same evaluation environment. We hypothesize that overfitting to different context quality occurs due to changes in how a model selects relevant passages, since lower context quality may force the model to concentrate more on selecting passages and vice versa.
Thus, as a first step to investigate which aspect of model inference is affected by different context qualities during training, we analyzed how patterns of cross-attention varied among FiD models trained on different context qualities (§3.4.1). Then, we conducted intervention experiments to validate that different patterns of cross-attention explain part of the overfitting to context quality (§3.4.2).

#### 3.4.1 Investigation on Patterns of Cross-attention Probability

Setting: We denote the cross-attention probability from the $l$-th decoder token to the $j$-th token of the $i$-th input passage $\tilde{p}_{i}$ at the $k$-th decoder layer by $c^{(k)}_{ijl}$. Following Izacard and Grave (2021a), we computed the cross-attention probability from the first decoder token, $c^{(k)}_{ij1}$, and the aggregated cross-attention probability $\tilde{c}_{i}^{(k)}=\sum_{j}c_{ij1}^{(k)}$ for each passage $\tilde{p}_{i}$. We conducted the following two analyses:

* (i) We analyzed how much cross-attention probability was allocated to relevant passages at each layer, i.e., $\sum_{i\in\\{i|p_{i}\in R\\}}\tilde{c}_{i}^{(k)}$.
* (ii) We analyzed the difference between the distribution of cross-attention probability to relevant passages, i.e., $\\{\tilde{c}_{i}^{(k)}|p_{i}\in R\\}$, and that to irrelevant passages, i.e., $\\{\tilde{c}_{i}^{(k)}|p_{i}\in\bar{R}\\}$.

We focused our analyses on FiD models trained for Natural Questions in §3.1 with the following settings: $(n,n^{+})\in\\{(10,1),(10,2),(10,3)\\}$. Note that these models were trained with the same context quantity but different qualities. We analyzed these models in two evaluation environments of Natural Questions with the following settings: $(n^{+},n^{-})\in\\{(3,7),(3,57)\\}$.

Figure 3: Distribution of cross-attention probability to each relevant or irrelevant passage at Layer 9. A similar trend can be seen in other higher layers.
Red vertical dashed lines represent uniform cross-attention probability, i.e., $\frac{1}{N}$ if context quantity is $N$.

Table 2: Cross-attention probability allocated to relevant passages at each layer. The first three numeric columns correspond to the evaluation environment $(n^{+},n^{-})=(3,7)$ (high context quality) and the last three to $(3,57)$ (low context quality). Within each group, columns correspond to training environments $(n,n^{+})=(10,n^{+})$ with $n^{+}=3$ (high quality), $2$, and $1$ (low quality).

| Layer | $(3,7)$: $n^{+}{=}3$ | $(3,7)$: $2$ | $(3,7)$: $1$ | $(3,57)$: $3$ | $(3,57)$: $2$ | $(3,57)$: $1$ |
|---|---|---|---|---|---|---|
| Layer 1 | 31.5 | 31.9 | 31.0 | 5.4 | 5.5 | 5.3 |
| Layer 2 | 31.8 | 32.3 | 32.1 | 5.4 | 5.6 | 5.6 |
| Layer 3 | 33.7 | 34.5 | 33.6 | 6.1 | 6.3 | 6.0 |
| Layer 4 | 32.5 | 33.4 | 32.3 | 5.7 | 6.0 | 5.7 |
| Layer 5 | 32.4 | 33.7 | 33.3 | 5.6 | 6.0 | 6.0 |
| Layer 6 | 33.9 | 35.4 | 36.2 | 6.1 | 6.6 | 7.2 |
| Layer 7 | 41.1 | 45.6 | 47.4 | 8.5 | 10.7 | 13.7 |
| Layer 8 | 40.8 | 44.1 | 45.8 | 8.6 | 10.3 | 13.2 |
| Layer 9 | 47.4 | 51.9 | 56.3 | 11.3 | 15.1 | 21.8 |
| Layer 10 | 48.2 | 52.9 | 55.5 | 10.8 | 13.9 | 20.4 |
| Layer 11 | 45.6 | 49.8 | 53.4 | 10.1 | 12.6 | 19.0 |
| Layer 12 | 39.1 | 41.2 | 42.9 | 9.5 | 11.4 | 15.7 |

Result: Table 2 shows the cross-attention probability allocated to relevant passages at each layer (Analysis (i)). As shown in the table, in both evaluation environments, FiD models trained with lower context quality attended more strongly to relevant passages, especially at higher layers closer to the output layer (this may also suggest that a function to select relevant passages is learned at higher layers, consistent with the claim by Tenney et al. (2019) that higher layers capture task-specific information). A similar trend is observed in Figure 3, which shows the distribution of cross-attention probability to a relevant or irrelevant passage for each model (Analysis (ii)).
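The per-passage aggregation behind Analyses (i) and (ii) can be sketched in a few lines of plain Python. The data layout below (`cross_attn` as one row of first-token attention values per passage) is an illustrative assumption, not the authors' code.

```python
def aggregate_cross_attention(cross_attn, relevant_idx):
    """Aggregate first-token cross-attention per passage.

    cross_attn: one row per passage i, holding c^{(k)}_{ij1} over tokens j
        of that passage, for a fixed decoder layer k.
    relevant_idx: indices of relevant passages (the set R).
    Returns per-passage totals c~_i^{(k)} = sum_j c^{(k)}_{ij1} (Analysis (ii))
    and the total mass on relevant passages (Analysis (i)).
    """
    per_passage = [sum(row) for row in cross_attn]
    relevant_mass = sum(per_passage[i] for i in relevant_idx)
    return per_passage, relevant_mass

# Toy example: 4 passages x 3 tokens under perfectly uniform attention;
# each passage then receives 1/4 of the mass, and R = {0, 1} receives 1/2.
attn = [[1.0 / 12] * 3 for _ in range(4)]
per_passage, mass = aggregate_cross_attention(attn, [0, 1])
```

Under uniform attention, the relevant mass equals $|R|/N$, which is the per-passage baseline marked by the red dashed lines in Figure 3 summed over $R$.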
Models trained with lower context quality showed a more long-tailed distribution for relevant passages, and there was a more pronounced difference between the distributions for relevant and irrelevant passages, which suggests they are trained to attend more selectively to relevant passages. On the contrary, the distribution is relatively closer to uniform for models trained with higher context quality. We conjecture that this excessive selectivity of models trained in a low-quality environment may explain their relatively lower performance in a high-quality environment (§3.2), because such excessive selectivity makes the model overlook necessary information in ignored relevant passages and, as a result, fail to correctly answer the questions. It may be the case that, when evaluated in a high-quality environment (i.e., where the majority of passages are relevant to the question), it is more optimal for the model to examine all passages more uniformly without being overly selective. This claim is empirically supported by the results of our experiments in §4.2.

Insight: FiD models trained with different context qualities show different levels of selectivity w.r.t. the allocation of cross-attention probability. Models trained with lower context quality attend more selectively to relevant passages.

#### 3.4.2 Intervention Experiment

The results in §3.4.1 suggest that the overfitting of FiD models to different context qualities is due to the different levels of selectivity of their cross-attention. We validate this claim by intervening on the cross-attention probability of these models during inference.

Setting: We intervened on the cross-attention probability of FiD models so that the ratio of the cross-attention probability to a relevant passage $p_{i}\in R$ and to an irrelevant passage $p_{j}\in\bar{R}$ is $r$ for all layers. Intuitively, the model completely ignores irrelevant passages when $r=0$, whereas it attends uniformly to all passages when $r=1$.
More specifically, for each decoder layer $k$, we converted the original cross-attention probability $c_{ijl}^{(k)}$ into the intervened version ${c^{\prime}}_{ijl}^{(k)}$ as follows:

${c^{\prime}}_{ijl}^{(k)}=\frac{w_{i}^{(k)}}{\sum_{j^{\prime}}c_{ij^{\prime}l}^{(k)}}c_{ijl}^{(k)},$ (2)

where $w_{i}^{(k)}=\frac{1}{n^{+}+rn^{-}}$ if $p_{i}\in R$ and $w_{i}^{(k)}=\frac{r}{n^{+}+rn^{-}}$ if $p_{i}\in\bar{R}$. We selected $r$ from $\\{1,0.1,0\\}$ and conducted experiments on the same models and evaluation environments as in §3.4.1.

Result: Figure 4 shows model performance with and without intervention on the cross-attention probability. In both evaluation environments, with lower and higher context quality, the difference in the performance of the models decreased, and the intervention mitigated the effect of overfitting to context quality.

Insight: The result suggests that the difference in cross-attention probability described in §3.4.1 is one element that explains the overfitting of FiD models to context quality.

## 4 Adapting Models to Different Context Quality

Figure 4: Model performance under intervention on cross-attention probability. "No" represents a setting without intervention.

While FiD models overfit to context quality during training as shown in §3, it is not desirable to train a dedicated model for every target environment, each with a potentially different context quality. Thus, in this section, we propose a method to mitigate the effect of overfitting and adapt an already trained FiD model to an environment with different context quality.

### 4.1 Proposed Method

Based on the insight in §3.4 that overfitting to context quality arises from different levels of selectivity toward relevant passages, we propose changing the sharpness of the distribution of cross-attention probability during inference.
More specifically, we introduce a temperature parameter $T$ $(T>0)$ and compute the total cross-attention probability from the $l$-th decoder token to the $i$-th passage at the $k$-th layer as follows:

$w_{il}^{(k)}=\left(\text{softmax}\left[\frac{\log(\sum_{j}c^{(k)}_{1jl})}{T},\ldots,\frac{\log(\sum_{j}c^{(k)}_{Njl})}{T}\right]\right)_{i}.$ (3)

Then, we use Equation (2) to convert the cross-attention probability as in §3.4.2 (we use $w_{il}^{(k)}$ instead of $w_{i}^{(k)}$ in Equation (2) when decoding the $l$-th token; note that the cross-attention probability is computed sequentially for each decoder token during inference). Intuitively, the model attends more uniformly as $T$ becomes larger, which simulates the overfitting effect of a FiD model trained with higher context quality, and vice versa. Note that our proposed temperature parameter does not change the set of input passages and can be tuned in combination with existing hyperparameters that do change the set of input passages, e.g., the number of input passages.

### 4.2 Experiment

Figure 5: Top panels: Performance of FiD models on Natural Questions with adaptation by the proposed method (solid lines) and without adaptation (dotted lines). Bottom panels: Optimal temperature parameter $T^{*}$ selected for each model. Multiple $T^{*}$ were selected for some context qualities, i.e., training environments, because we selected $T^{*}$ for each of the three models trained with different random seeds per training environment. Panels represent different evaluation environments with different $(n_{\text{eval}}^{+},n_{\text{eval}})$ pairs, and a red dashed line shows the context quality of the corresponding evaluation environment.

To validate the effectiveness of our proposed method, we adapted the models trained in §3.1 with the proposed method and evaluated their performance in evaluation environments with context qualities different from those during training, where $n_{\text{eval}}^{+}=3$ and $n_{\text{eval}}\in\\{10,25,60\\}$.
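For concreteness, the reweighting defined by Equations (2) and (3) can be sketched for a single decoder layer and decoder token as below. This is a minimal plain-Python sketch with illustrative variable names, not the authors' implementation, and it assumes every passage receives nonzero attention mass so that the logarithm is defined.

```python
import math

def temperature_weights(cross_attn, T):
    """Equation (3): softmax over log per-passage attention mass, scaled by 1/T.

    cross_attn: one row per passage, holding c_{ijl} over tokens j for a fixed
        layer k and decoder token l. Returns the target weights w_il^{(k)}.
    """
    totals = [sum(row) for row in cross_attn]          # sum_j c_{ijl}
    logits = [math.log(t) / T for t in totals]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]           # numerically stable softmax
    z = sum(exps)
    return [e / z for e in exps]

def reweight(cross_attn, weights):
    """Equation (2): rescale each passage's attention so its total mass equals
    the target weight, preserving the within-passage attention shape."""
    return [[w * c / sum(row) for c in row]
            for w, row in zip(weights, cross_attn)]

attn = [[0.05, 0.05], [0.2, 0.2], [0.25, 0.25]]   # per-passage totals 0.1, 0.4, 0.5
w_orig = temperature_weights(attn, T=1.0)          # recovers the original totals
w_flat = temperature_weights(attn, T=100.0)        # near-uniform weights
```

With $T=1$ the weights reproduce the original per-passage totals (Equation (3) reduces to a renormalization), while a large $T$ pushes them toward uniform, mimicking a model trained with higher context quality.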
Since the temperature parameter $T$ has to be tuned, we conducted 2-fold cross-validation. Specifically, we split the evaluation data into two folds and searched for the optimal temperature parameter $T^{*}\in\\{0.125,0.25,0.5,1,2,4,8\\}$ based on the EM score on one fold; then, we used the model adapted with $T^{*}$ to evaluate performance on the other fold (we used a single temperature parameter $T^{*}$ for all questions in a fold instead of a different temperature parameter for each question; predicting the optimal temperature parameter for each input question is an interesting direction for future work).

Figure 5 shows the performance of FiD models with and without adaptation by the proposed method (see Figure 10 in Appendix D for the result in TriviaQA). As shown in the figure, the proposed method improved the performance of the models in environments whose context quality differed from that during training, reducing the effect of overfitting to context quality. Also, $T^{*}$ increased for lower context qualities during training, and vice versa, which corroborates our finding that more uniform cross-attention, corresponding to a higher $T^{*}$, is effective when the context quality in evaluation is higher than that in training.

## 5 Related Works

Retrieval-augmented Generation Models: Lewis et al. (2020) introduced the retrieval augmentation approach, which originated in the field of extractive open-domain question answering Chen et al. (2017), to sequence-to-sequence models and validated its effectiveness in knowledge-intensive tasks. Contemporary work by Min et al. (2020) applied the retrieval augmentation approach to the task of ambiguous question answering. Izacard and Grave (2021b) proposed Fusion-in-Decoder (FiD), in which each retrieved passage is independently encoded and then jointly input to the decoder.
FiD achieves high scalability with respect to the number of passages and effectively aggregates information by jointly using all passages in the decoder. Recently, the retrieval-augmented language generation approach has received attention, including its incorporation into language model pretraining Borgeaud et al. (2022); Izacard et al. (2022); Zhang et al. (2022b), dialogue generation Komeili et al. (2022), and code generation Parvez et al. (2021). The approach has also been shown to be effective in improving the inference of pre-trained language models (e.g., GPT-3 Brown et al. (2020)) without additional training Lazaridou et al. (2022); Mallen et al. (2023). For a comprehensive survey of this evolving field, refer to Yu et al. (2022).

Effect of Context Characteristics on Retrieval-augmented Models: Several studies have investigated how context characteristics affect the inference of retrieval-augmented generation models. For example, increasing the number of top-ranking passages used as the context has been found to improve performance in question answering Izacard and Grave (2021b) and response/prose generation Zhang et al. (2022b), while a higher proportion of false information in the context degrades performance in question answering Weller et al. (2022) (Du et al. (2022) reported similar results in fact verification, although their focus is not on language generation models). Liu et al. (2023) found that the performance of language models on multi-document question answering is influenced by the position of a relevant document. However, limited knowledge is available regarding the impact of context characteristics on the training of retrieval-augmented generation models. Notably, a few existing studies suggest that model performance in question answering improves by providing more top-ranking passages during training Izacard and Grave (2021b) or by randomly masking top-ranking passages during training Zhang et al.
(2022a), and that the model’s memorization behavior is reduced by increasing the recall of relevant information in the context during training Longpre et al. (2021); Chen et al. (2022).

## 6 Conclusion

In this paper, we investigated how context quality and quantity affect the training of FiD models in extractive open-domain question answering tasks. We showed that FiD models tend to overfit to the context quality during training, resulting in degraded performance when evaluated in environments with different context qualities. Additionally, our research revealed that the overfitting to context quality is partially explained by different patterns in the model’s cross-attention probability. Based on these observations, we proposed changing the selectivity of the cross-attention probability to mitigate the effect of overfitting to context quality. The results of this paper suggest a broad spectrum of future work, including more sophisticated adaptation methods and investigations of the effect of other context characteristics on the training of retrieval-augmented generation models.

## 7 Limitations

In this study, we investigated how the quality and quantity of context affect the training of FiD models in extractive open-domain question answering tasks. Our experiments revealed for the first time that context quality significantly impacts the training of FiD models and that FiD models tend to overfit to the context quality of the training data. Our findings suggest that various context characteristics may similarly affect the training of retrieval-augmented generation models, potentially leading to issues such as overfitting.
However, our experiments have several limitations that reduce the generalizability of our findings:

Task: Firstly, in this paper, we only focused on the extractive open-domain question answering task, and it is unclear whether similar results can be obtained in other tasks such as dialogue generation, fact verification, code generation, and summarization.

Model Architecture: Secondly, our analysis only targeted FiD models, and it is unclear whether different architectures such as RAG Lewis et al. (2020) and Internet-augmented language models Lazaridou et al. (2022) produce similar results. It is also an interesting direction for future work to conduct similar investigations on non-generative retrieval-augmented models such as FiE Kedia et al. (2022).

Model Size: Thirdly, our experiments focused only on t5-base, and it is unclear how scaling the model size changes the behavior of overfitting to context quality.

Characteristic of Context: Lastly, the coverage of our analysis is limited to quality and quantity, and further research is required to investigate the effect of other context characteristics. For example, in the field of extractive question answering, it has been shown that models may overfit to answer positions in the contexts Ko et al. (2020), be misled by adversarially inserted sentences Jia and Liang (2017); Jiang and Bansal (2019), and be susceptible to whether an answer is in the most similar sentence in the context Sugawara et al. (2018). These findings suggest that those context characteristics may also affect retrieval-augmented generation models.

Other than that, our experiments involved the automatic annotation of relevant and irrelevant passages, which may limit the accuracy of our analysis. Future studies should incorporate human annotation to ensure the high quality of the annotation. Also, passages with similar relevant information can impact models differently due to qualitative factors such as readability and writing style.
Nevertheless, as quantitatively evaluating these factors poses challenges, our study did not conduct a fine-grained analysis regarding these aspects. ## References * Borgeaud et al. (2022) Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In _International conference on machine learning_ , pages 2206–2240. PMLR. * Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901. * Chen et al. (2017) Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics. * Chen et al. (2022) Hung-Ting Chen, Michael Zhang, and Eunsol Choi. 2022. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 2292–2307, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. * Du et al. (2022) Yibing Du, Antoine Bosselut, and Christopher D Manning. 2022. Synthetic disinformation attacks on automated fact verification systems. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 36, pages 10581–10589. * Hofstätter et al. (2022) Sebastian Hofstätter, Jiecao Chen, Karthik Raman, and Hamed Zamani. 2022. Fid-light: Efficient and effective retrieval-augmented text generation. _arXiv preprint arXiv:2209.14290_. 
* Izacard and Grave (2021a) Gautier Izacard and Edouard Grave. 2021a. Distilling knowledge from reader to retriever for question answering. In _ICLR 2021-9th International Conference on Learning Representations_. * Izacard and Grave (2021b) Gautier Izacard and Edouard Grave. 2021b. Leveraging passage retrieval with generative models for open domain question answering. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 874–880, Online. Association for Computational Linguistics. * Izacard et al. (2022) Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. _arXiv preprint arXiv:2208.03299_. * Ji et al. (2023) Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. _ACM Computing Surveys_ , 55(12):1–38. * Jia and Liang (2017) Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics. * Jiang and Bansal (2019) Yichen Jiang and Mohit Bansal. 2019. Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2726–2736, Florence, Italy. Association for Computational Linguistics. * Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. 
In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. * Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6769–6781, Online. Association for Computational Linguistics. * Kasai et al. (2022) Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. Realtime qa: What’s the answer right now? _arXiv preprint arXiv:2207.13332_. * Kedia et al. (2022) Akhil Kedia, Mohd Abbas Zaidi, and Haejun Lee. 2022. FiE: Building a global probability space by leveraging early fusion in encoder for open-domain question answering. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 4246–4260, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. * Ko et al. (2020) Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. 2020. Look at the first sentence: Position bias in question answering. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1109–1121, Online. Association for Computational Linguistics. * Komeili et al. (2022) Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics. * Kwiatkowski et al. 
(2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. _Transactions of the Association for Computational Linguistics_ , 7:452–466. * Lazaridou et al. (2022) Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. _arXiv preprint arXiv:2203.05115_. * Lee et al. (2019) Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 6086–6096, Florence, Italy. Association for Computational Linguistics. * Lewis et al. (2020) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. _Advances in Neural Information Processing Systems_ , 33:9459–9474. * Li et al. (2022) Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2022. Large language models with controllable working memory. _arXiv preprint arXiv:2211.05110_. * Lin et al. (2022) Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2022. _Pretrained transformers for text ranking: Bert and beyond_. Springer Nature. * Liu et al. (2023) Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. _arXiv preprint arXiv:2307.03172_. * Longpre et al. 
(2021) Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7052–7063, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. * Mallen et al. (2023) Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 9802–9822, Toronto, Canada. Association for Computational Linguistics. * Min et al. (2020) Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 5783–5797, Online. Association for Computational Linguistics. * Parvez et al. (2021) Md Rizwan Parvez, Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval augmented code generation and summarization. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 2719–2734, Punta Cana, Dominican Republic. Association for Computational Linguistics. * Petroni et al. (2021) Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 2523–2544, Online. Association for Computational Linguistics. * Raffel et al.
(2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_ , 21(1):5485–5551. * Shuster et al. (2021) Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 3784–3803, Punta Cana, Dominican Republic. Association for Computational Linguistics. * Sugawara et al. (2018) Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. 2018. What makes reading comprehension questions easier? In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 4208–4219, Brussels, Belgium. Association for Computational Linguistics. * Tenney et al. (2019) Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4593–4601, Florence, Italy. Association for Computational Linguistics. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ , 30. * Weller et al. (2022) Orion Weller, Aleem Khan, Nathaniel Weir, Dawn Lawrie, and Benjamin Van Durme. 2022\. Defending against poisoning attacks in open-domain question answering. _arXiv preprint arXiv:2212.10002_. * Wolf et al. 
(2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics. * Yu et al. (2022) Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, and Meng Jiang. 2022. A survey of knowledge-enhanced text generation. _ACM Computing Surveys_ , 54(11s):1–38. * Zhang et al. (2022a) Shujian Zhang, Chengyue Gong, and Xingchao Liu. 2022a. Passage-mask: A learnable regularization strategy for retriever-reader models. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 3931–3943, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. * Zhang et al. (2022b) Yizhe Zhang, Siqi Sun, Xiang Gao, Yuwei Fang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2022b. Retgen: A joint framework for retrieval and grounded text generation modeling. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 36, pages 11739–11747. ## Appendix A Implementation Details ### A.1 Training and Evaluation Data We trained FiD models with the original train set of each dataset, and we further split the original train set into $\mathcal{D}_{\text{train}}$ for training and $\mathcal{D}_{\text{dev}}$ for evaluating performance during training. To train models in a strictly extractive task environment, we excluded questions for which no retrieved passage contained any of their ground-truth answers. 
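The exclusion step described above can be sketched as follows. This is a minimal illustrative sketch: the record fields (`answers`, `passages`) and the case-insensitive substring match are assumptions for illustration, not the authors' exact implementation.

```python
def filter_extractive(examples):
    """Keep only questions where at least one retrieved passage contains
    one of the ground-truth answer strings (case-insensitive match)."""
    kept = []
    for ex in examples:
        answers = [a.lower() for a in ex["answers"]]
        passages = [p.lower() for p in ex["passages"]]
        if any(a in p for a in answers for p in passages):
            kept.append(ex)
    return kept

data = [
    {"answers": ["Paris"], "passages": ["Paris is the capital of France."]},
    {"answers": ["42"], "passages": ["No answer appears here."]},
]
kept = filter_extractive(data)  # only the first example survives
```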
For evaluation, we used the original development set of each dataset as evaluation data $\mathcal{D}_{\text{eval}}$. For a fair comparison, we used the same set of questions with at least 3 (Natural Questions) or 10 (TriviaQA) relevant passages and at least 64 irrelevant passages during training or evaluation. Statistics of the datasets are shown in Table 3.

### A.2 Details of FiD Training and Inference

Our model implementation of FiD, including the loss function for training, is based on the official implementation by Izacard and Grave (2021b) (https://github.com/facebookresearch/FiD). For both training and inference, we used transformers (Ver. 4.23.1) Wolf et al. (2020). We trained the models with Seq2SeqTrainer provided in transformers. The hyperparameters for Seq2SeqTrainer used in our experiments are listed in Table 4; other hyperparameters were set to their default values. Since we trained models with 32 A100 GPUs, the effective batch size is 64. We used the model checkpoint with the highest EM on $\mathcal{D}_{\text{dev}}$ for downstream evaluations.

Table 3: Size of datasets used for training and evaluation.

| | $\mathcal{D}_{\text{train}}$ | $\mathcal{D}_{\text{dev}}$ | $\mathcal{D}_{\text{eval}}$ |
|---|---|---|---|
| Natural Questions | 20728 | 3048 | 2589 |
| TriviaQA | 11414 | 1695 | 1434 |

Table 4: Hyperparameters for Seq2SeqTrainer.

| parameter | value |
|---|---|
| learning_rate | 0.00005 |
| lr_scheduler_type | constant_with_warmup |
| warmup_steps | 1000 |
| weight_decay | 0.01 |
| max_grad_norm | 1.0 |
| max_steps | 15000 |
| per_device_train_batch_size | 1 |
| gradient_accumulation | 2 |
| eval_steps | 500 |
| save_steps | 500 |
| save_strategy | steps |

Table 5: Performance of FiD models trained in each mixture of environments on TriviaQA. Rows are training mixtures and columns are evaluation mixtures; each label lists the values of $n^{+}$ included in the mixture.
${n^{+}}_{\text{eval}}(\rightarrow)$ ${n^{+}}_{\text{train}}(\downarrow)$ | 2 | ✓ | | | | ✓ | ✓ | ✓ ---|---|---|---|---|---|---|---|--- 5 | | ✓ | | ✓ | | ✓ | ✓ 10 | | | ✓ | ✓ | ✓ | | ✓ 2 | 5 | 10 | | | | | | | | ✓ | | | | 79.8 | 93.9 | 98.3 | 96.1 | 89.0 | 86.8 | 90.6 | ✓ | | | 75.0 | 93.8 | 98.3 | 96.0 | 86.6 | 84.4 | 89.0 | | ✓ | | 61.7 | 90.0 | 97.7 | 93.8 | 79.7 | 75.8 | 83.1 | ✓ | ✓ | | 73.4 | 92.4 | 97.7 | 95.1 | 85.6 | 82.9 | 87.9 ✓ | | ✓ | | 79.1 | 93.3 | 97.5 | 95.4 | 88.3 | 86.2 | 90.0 ✓ | ✓ | | | 79.0 | 94.2 | 98.4 | 96.3 | 88.7 | 86.6 | 90.5 ✓ | ✓ | ✓ | | 77.8 | 93.4 | 98.0 | 95.7 | 87.9 | 85.6 | 89.7 Since most questions in Natural Questions are annotated with only one ground- truth answer, we used the first ground-truth answer for each question as a target output for model training. On the other hand, since questions in TriviaQA are more exhaustively annotated with paraphrases of ground-truth answers, we randomly sampled one ground-truth answer that appeared in any of the input passages as a target output at every training step. We tokenized each input passage $\tilde{p}_{i}$ described in §2.2 and target outputs by the tokenizer of t5-base. For both training and inference, to fix the sequence length of each tokenized passage to 256, we conducted truncation for longer passages or padding for shorter passages. We did not truncate target outputs during training and set a maximum length of a predicted answer to 50 during inference. In experiments where we subsampled passages to control context quality and quantity, to reduce the effect of bias in sampled passages, we sampled different passages at every training step instead of repeatedly using the fixed set of passages sampled before training. ## Appendix B Details of Experimental Design We trained three FiD models with different random seeds for each training environment and conducted evaluations for these models in each evaluation environment. 
We sampled five different sets of passages for each evaluation environment and computed the average model performance over these sets. We sampled relevant and irrelevant passages independently; thus, the same set of relevant (or irrelevant) passages was sampled regardless of the number of irrelevant (or relevant) passages, as long as the number of relevant (or irrelevant) passages was the same.

Table 6: Size of datasets used to train FiD models for the relevant passage annotation.

 | $\mathcal{D}_{0,\text{train}}$ | $\mathcal{D}_{0,\text{dev}}$ | $\mathcal{D}_{1,\text{train}}$ | $\mathcal{D}_{1,\text{dev}}$
---|---|---|---|---
Natural Questions | 30612 | 4411 | 30589 | 4373
TriviaQA | 28696 | 4112 | 28489 | 4115

## Appendix C Details of Passage Relevance Annotation by Question Answering Model

We used FiD models as the pre-trained question answering models in the passage relevance annotation described in §2.3, and we trained those models as described in Appendix A, except for the training and development data, which we describe below. For each dataset with original train set $\mathcal{D}_{\text{train}}$ and development set $\mathcal{D}_{\text{dev}}$, we split $\mathcal{D}_{\text{train}}$ into four sets: $\mathcal{D}_{0,\text{train}}$, $\mathcal{D}_{0,\text{dev}}$, $\mathcal{D}_{1,\text{train}}$, and $\mathcal{D}_{1,\text{dev}}$. Then, we trained a FiD model $\mathcal{M}_{0}$ with the pair $(\mathcal{D}_{0,\text{train}},\mathcal{D}_{0,\text{dev}})$ and a FiD model $\mathcal{M}_{1}$ with the pair $(\mathcal{D}_{1,\text{train}},\mathcal{D}_{1,\text{dev}})$. Finally, we annotated $\mathcal{D}_{0,\text{train}}$ and $\mathcal{D}_{0,\text{dev}}$ with $\mathcal{M}_{1}$, and $\mathcal{D}_{1,\text{train}}$, $\mathcal{D}_{1,\text{dev}}$, and $\mathcal{D}_{\text{dev}}$ with $\mathcal{M}_{0}$. See Table 6 for statistics of the datasets used to train FiD models for the relevant passage annotation.
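The four-way split and cross-annotation described above can be sketched as follows. This is a minimal sketch: the `dev_frac` parameter is hypothetical (the paper does not state the exact split ratio), and the point illustrated is that neither model annotates data it was trained on:

```python
import random


def cross_annotation_split(train_set, dev_frac=0.125, seed=0):
    """Split the original train set into (D0_train, D0_dev) and
    (D1_train, D1_dev). Model M0 is trained on the D0 pair and used to
    annotate the D1 pair; M1 is trained on the D1 pair and used to
    annotate the D0 pair, so no model labels its own training data."""
    rng = random.Random(seed)
    data = list(train_set)
    rng.shuffle(data)
    half = len(data) // 2
    d0, d1 = data[:half], data[half:]
    cut0 = int(len(d0) * (1 - dev_frac))
    cut1 = int(len(d1) * (1 - dev_frac))
    return (d0[:cut0], d0[cut0:]), (d1[:cut1], d1[cut1:])
```

Under this sketch, annotation then amounts to running $\mathcal{M}_{1}$ over the two $\mathcal{D}_{0}$ splits and $\mathcal{M}_{0}$ over the two $\mathcal{D}_{1}$ splits (plus the original development set).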
Our preliminary experiments showed that the behavior of FiD models differs when they are trained with all passages in the original dataset (All) compared to when they are trained with only those passages containing a ground-truth answer (Pos). Thus, we chose a stricter criterion to extract relevant passages. Specifically, we trained $\mathcal{M}_{0}$ and $\mathcal{M}_{1}$ and annotated passages in each of the All and Pos settings, and we extracted only those passages annotated as relevant in both settings.

## Appendix D Full Experimental Results

Full results of the experiments in §3.1 are shown in Figure 6 and Table 7 for TriviaQA and in Figure 7 and Table 8 for Natural Questions. Full results of the experiments in §3.2 are shown in Figure 8 and Table 9 for TriviaQA and in Figure 9 and Table 10 for Natural Questions. Results of the experiments in §3.3 are shown in Table 5 for TriviaQA. Results of the experiments in §4.2 are shown in Figure 10.

Figure 6: Performance of FiD models on TriviaQA with varying training context quality. Panels represent different evaluation environments with different $(n_{\text{eval}}^{+},n_{\text{eval}})$ pairs, and a red dashed line shows the corresponding context quality. Red stars represent the best-performing models in the corresponding evaluation environments. Dotted lines show models trained with the same context quantity $n_{\text{train}}$.

Table 7: Exact match (EM) [%] of FiD models trained in environment $({n^{+}}_{\text{train}},n_{\text{train}})$ in various evaluation environments $({n^{+}}_{\text{eval}},n_{\text{eval}})$ on TriviaQA. The standard deviation of each reported value is denoted with a lower subscript.
| $n_{\text{eval}}$ | 10 | 25 | 60 ---|---|---|---|--- ${n^{+}}_{\text{eval}}$ | ${n^{+}}_{\text{train}}$ $n_{\text{train}}$ | 10 | 25 | 60 | 10 | 25 | 60 | 10 | 25 | 60 | 1 | $70.9_{3.0}$ | $70.4_{1.5}$ | $70.1_{1.9}$ | $58.5_{3.1}$ | $60.1_{1.6}$ | $61.9_{3.1}$ | $45.7_{3.5}$ | $49.8_{1.8}$ | $53.9_{3.4}$ | 2 | $69.0_{1.2}$ | $72.1_{2.7}$ | $72.4_{1.1}$ | $52.3_{1.9}$ | $60.8_{3.4}$ | $64.1_{2.0}$ | $34.4_{1.9}$ | $49.2_{4.0}$ | $56.1_{2.6}$ 1 | 3 | $64.3_{0.5}$ | $70.7_{0.8}$ | $71.4_{1.1}$ | $43.6_{0.5}$ | $57.4_{1.6}$ | $61.8_{1.7}$ | $24.5_{0.2}$ | $43.1_{2.3}$ | $52.2_{2.2}$ | 5 | $52.2_{0.8}$ | $66.9_{0.9}$ | $70.3_{2.0}$ | $29.1_{1.3}$ | $51.0_{1.7}$ | $59.2_{2.9}$ | $15.2_{0.5}$ | $34.1_{2.4}$ | $47.2_{3.3}$ | 10 | $18.0_{0.6}$ | $55.3_{1.2}$ | $63.7_{1.8}$ | $9.5_{0.2}$ | $36.4_{0.5}$ | $50.3_{1.4}$ | $6.4_{0.2}$ | $21.2_{0.4}$ | $36.7_{0.6}$ | 1 | $87.9_{1.4}$ | $86.7_{0.6}$ | $84.9_{0.2}$ | $77.9_{2.2}$ | $77.9_{1.3}$ | $77.9_{1.7}$ | $63.9_{3.3}$ | $66.6_{1.6}$ | $69.9_{3.1}$ | 2 | $89.1_{0.5}$ | $88.5_{1.0}$ | $87.1_{0.4}$ | $76.6_{1.0}$ | $79.8_{2.0}$ | $80.4_{1.1}$ | $56.2_{2.0}$ | $68.1_{3.5}$ | $72.7_{2.1}$ 2 | 3 | $86.7_{0.3}$ | $88.7_{0.4}$ | $87.8_{0.4}$ | $71.3_{0.2}$ | $78.6_{1.0}$ | $79.8_{1.1}$ | $45.9_{0.6}$ | $63.8_{2.3}$ | $70.2_{1.5}$ | 5 | $81.2_{0.5}$ | $87.0_{0.2}$ | $87.3_{0.7}$ | $56.9_{1.2}$ | $75.0_{0.5}$ | $78.6_{1.7}$ | $29.6_{1.4}$ | $56.1_{2.2}$ | $66.7_{2.8}$ | 10 | $39.6_{0.6}$ | $80.1_{0.7}$ | $83.0_{1.1}$ | $17.8_{0.6}$ | $61.7_{0.7}$ | $71.7_{1.4}$ | $9.5_{0.3}$ | $39.0_{0.9}$ | $57.0_{1.3}$ | 1 | $93.7_{0.7}$ | $92.1_{0.1}$ | $90.4_{0.4}$ | $86.3_{1.6}$ | $85.4_{0.9}$ | $84.5_{0.8}$ | $73.9_{2.9}$ | $75.5_{1.4}$ | $77.2_{2.2}$ | 2 | $95.4_{0.1}$ | $93.6_{0.2}$ | $92.0_{0.1}$ | $86.9_{0.8}$ | $87.5_{1.3}$ | $87.0_{0.6}$ | $69.7_{1.9}$ | $77.5_{3.0}$ | $80.2_{1.4}$ 3 | 3 | $94.3_{0.4}$ | $94.7_{0.2}$ | $92.6_{0.4}$ | $83.6_{0.5}$ | $87.6_{0.4}$ | $87.0_{0.5}$ | $61.9_{0.9}$ | $75.0_{1.7}$ | $79.2_{1.3}$ | 5 | 
$92.3_{0.4}$ | $93.7_{0.5}$ | $93.1_{0.1}$ | $74.6_{0.7}$ | $85.4_{0.1}$ | $86.4_{0.9}$ | $44.7_{1.1}$ | $69.6_{1.5}$ | $76.5_{2.1}$ | 10 | $59.6_{0.4}$ | $90.8_{0.2}$ | $90.8_{0.7}$ | $27.3_{0.4}$ | $76.1_{0.3}$ | $81.7_{0.8}$ | $12.8_{0.1}$ | $53.6_{0.7}$ | $68.5_{0.8}$ | 1 | $97.7_{0.3}$ | $96.6_{0.1}$ | $95.1_{1.0}$ | $93.3_{0.8}$ | $92.3_{0.4}$ | $91.1_{0.1}$ | $85.4_{1.8}$ | $85.6_{0.7}$ | $85.9_{1.1}$ | 2 | $98.5_{0.1}$ | $97.5_{0.4}$ | $96.1_{0.4}$ | $94.8_{0.4}$ | $93.9_{0.3}$ | $92.6_{0.0}$ | $84.2_{0.9}$ | $87.5_{1.4}$ | $88.5_{0.4}$ 5 | 3 | $97.8_{0.1}$ | $98.0_{0.1}$ | $96.6_{0.6}$ | $93.1_{0.1}$ | $94.6_{0.4}$ | $93.2_{0.3}$ | $79.5_{0.6}$ | $86.8_{0.9}$ | $88.3_{0.5}$ | 5 | $97.7_{0.2}$ | $97.6_{0.5}$ | $97.1_{0.2}$ | $90.1_{0.6}$ | $93.8_{0.6}$ | $93.8_{0.1}$ | $67.2_{0.8}$ | $83.5_{0.3}$ | $87.2_{0.8}$ | 10 | $83.8_{0.2}$ | $97.0_{0.3}$ | $96.3_{0.2}$ | $47.4_{0.6}$ | $90.0_{0.2}$ | $91.7_{0.2}$ | $21.9_{0.3}$ | $72.4_{0.2}$ | $82.0_{0.9}$ | 1 | $99.2_{0.2}$ | $98.9_{0.1}$ | $97.8_{0.9}$ | $98.3_{0.1}$ | $97.5_{0.1}$ | $96.3_{0.5}$ | $94.6_{0.6}$ | $93.6_{0.1}$ | $92.9_{0.3}$ | 2 | $99.3_{0.1}$ | $99.0_{0.1}$ | $98.2_{0.1}$ | $98.9_{0.0}$ | $98.3_{0.4}$ | $97.0_{0.3}$ | $95.2_{0.3}$ | $95.2_{0.0}$ | $94.4_{0.2}$ 10 | 3 | $99.0_{0.2}$ | $99.2_{0.1}$ | $98.6_{0.4}$ | $98.0_{0.2}$ | $98.5_{0.1}$ | $97.5_{0.4}$ | $93.0_{0.2}$ | $95.6_{0.3}$ | $94.9_{0.3}$ | 5 | $99.2_{0.2}$ | $99.0_{0.3}$ | $98.9_{0.3}$ | $98.1_{0.2}$ | $98.3_{0.6}$ | $97.9_{0.3}$ | $89.6_{0.6}$ | $94.6_{0.7}$ | $95.2_{0.6}$ | 10 | $98.8_{0.1}$ | $98.9_{0.2}$ | $98.4_{0.4}$ | $81.4_{0.4}$ | $97.7_{0.4}$ | $97.5_{0.2}$ | $44.3_{0.3}$ | $90.9_{0.4}$ | $93.5_{0.5}$ Table 8: Exact match (EM) [%] of FiD models trained in environment $({n^{+}}_{\text{train}},n_{\text{train}})$ in various evaluation environment $({n^{+}}_{\text{eval}},n_{\text{eval}})$ in Natural Questions. The standard deviations for each reported value is denoted with lower subscripts. 
| $n_{\text{eval}}$ | 10 | 25 | 60 ---|---|---|---|--- ${n^{+}}_{\text{eval}}$ | ${n^{+}}_{\text{train}}$ $n_{\text{train}}$ | 10 | 25 | 60 | 10 | 25 | 60 | 10 | 25 | 60 | 1 | $69.8_{0.7}$ | $70.3_{0.3}$ | $65.8_{0.3}$ | $58.1_{0.6}$ | $62.4_{0.7}$ | $61.0_{0.1}$ | $45.5_{0.8}$ | $54.1_{1.0}$ | $55.9_{0.4}$ 1 | 2 | $65.0_{0.5}$ | $69.6_{0.2}$ | $68.1_{0.5}$ | $45.4_{1.2}$ | $58.6_{0.5}$ | $60.7_{0.8}$ | $24.1_{1.5}$ | $45.7_{0.8}$ | $53.2_{1.2}$ | 3 | $53.9_{1.2}$ | $65.8_{0.2}$ | $67.8_{0.4}$ | $29.0_{0.4}$ | $50.7_{0.8}$ | $58.7_{0.2}$ | $10.6_{0.2}$ | $33.3_{2.1}$ | $49.2_{0.3}$ | 1 | $82.9_{0.4}$ | $80.3_{0.3}$ | $74.0_{1.4}$ | $73.7_{0.2}$ | $74.1_{0.2}$ | $70.8_{0.8}$ | $61.9_{0.5}$ | $67.0_{0.6}$ | $66.5_{0.3}$ 2 | 2 | $85.5_{0.1}$ | $83.9_{0.3}$ | $79.9_{0.1}$ | $71.4_{0.5}$ | $75.7_{0.1}$ | $74.5_{0.4}$ | $48.2_{1.4}$ | $64.3_{0.6}$ | $67.8_{0.7}$ | 3 | $83.4_{0.1}$ | $84.2_{0.3}$ | $81.9_{0.7}$ | $60.9_{0.6}$ | $73.9_{0.3}$ | $75.5_{0.8}$ | $29.8_{0.1}$ | $57.6_{1.3}$ | $66.6_{0.7}$ | 1 | $88.6_{0.8}$ | $84.4_{0.5}$ | $77.4_{1.6}$ | $80.8_{0.5}$ | $79.2_{0.2}$ | $74.4_{1.0}$ | $70.3_{0.1}$ | $72.9_{0.4}$ | $70.8_{0.9}$ 3 | 2 | $92.2_{0.2}$ | $89.5_{0.6}$ | $84.6_{0.8}$ | $82.4_{0.5}$ | $83.2_{0.5}$ | $79.6_{0.1}$ | $63.5_{1.0}$ | $73.6_{0.5}$ | $74.4_{0.5}$ | 3 | $93.0_{0.1}$ | $91.2_{0.5}$ | $87.6_{0.6}$ | $78.8_{0.3}$ | $83.7_{0.4}$ | $82.1_{0.8}$ | $48.0_{0.2}$ | $70.5_{0.5}$ | $74.8_{0.6}$ Table 9: Exact match (EM) [%] of FiD models trained in environment $({n^{+}}_{\text{train}},k_{\text{train}})$ in various evaluation environment $({n^{+}}_{\text{eval}},k_{\text{eval}})$ in TriviaQA. The standard deviations for each reported value is denoted with lower subscripts. 
| $k_{\text{eval}}$ | 1 | 5 | 20 ---|---|---|---|--- ${n^{+}}_{\text{eval}}$ | ${n^{+}}_{\text{train}}$ $k_{\text{train}}$ | 1 | 5 | 20 | 1 | 5 | 20 | 1 | 5 | 20 1 | 1 | $90.8_{0.2}$ | $90.1_{1.4}$ | $88.6_{0.9}$ | $77.4_{0.3}$ | $76.9_{2.3}$ | $77.5_{2.5}$ | $53.5_{0.3}$ | $57.6_{3.6}$ | $63.4_{3.5}$ 2 | $89.0_{0.2}$ | $89.4_{0.6}$ | $88.3_{0.2}$ | $69.0_{0.1}$ | $77.6_{0.4}$ | $77.1_{0.4}$ | $32.2_{1.1}$ | $58.7_{1.0}$ | $63.4_{1.1}$ 3 | $88.0_{0.4}$ | $89.4_{0.3}$ | $87.8_{0.8}$ | $68.8_{1.8}$ | $77.1_{0.8}$ | $77.1_{1.5}$ | $34.8_{3.5}$ | $58.3_{1.1}$ | $63.7_{2.9}$ 5 | $85.8_{0.3}$ | $87.6_{0.3}$ | | $65.4_{0.8}$ | $74.0_{0.5}$ | | $33.3_{1.3}$ | $54.7_{1.8}$ | 8 | $84.0_{1.1}$ | $86.1_{0.5}$ | | $64.0_{1.6}$ | $72.8_{0.7}$ | | $35.3_{1.9}$ | $53.9_{2.0}$ | 10 | $83.6_{0.6}$ | $84.3_{1.2}$ | | $63.9_{0.6}$ | $70.7_{1.8}$ | | $35.3_{0.7}$ | $52.7_{1.2}$ | 2 | 1 | $95.4_{0.2}$ | $95.1_{0.4}$ | $93.7_{0.6}$ | $85.1_{0.5}$ | $85.8_{1.2}$ | $85.7_{1.4}$ | $61.0_{0.3}$ | $66.4_{3.4}$ | $73.0_{3.2}$ 2 | $95.6_{0.1}$ | $95.2_{0.5}$ | $93.9_{0.2}$ | $81.2_{0.2}$ | $87.0_{0.6}$ | $85.9_{0.4}$ | $38.0_{1.2}$ | $68.4_{1.0}$ | $73.9_{1.4}$ 3 | $95.0_{0.1}$ | $94.6_{0.5}$ | $93.5_{0.1}$ | $81.1_{1.1}$ | $86.6_{0.7}$ | $86.3_{0.7}$ | $41.5_{3.4}$ | $68.4_{0.7}$ | $74.8_{2.5}$ 5 | $93.6_{0.1}$ | $93.9_{0.3}$ | | $77.9_{0.4}$ | $84.9_{0.6}$ | | $40.2_{1.4}$ | $65.6_{2.1}$ | 8 | $92.0_{0.7}$ | $92.8_{0.3}$ | | $75.4_{1.5}$ | $83.1_{0.3}$ | | $42.1_{1.9}$ | $64.6_{1.9}$ | 10 | $91.5_{0.2}$ | $91.6_{0.8}$ | | $74.6_{0.2}$ | $81.3_{1.3}$ | | $41.8_{0.4}$ | $63.3_{1.5}$ | 3 | 1 | $96.9_{0.1}$ | $96.7_{0.3}$ | $95.0_{0.3}$ | $88.2_{0.2}$ | $89.4_{0.9}$ | $88.9_{0.9}$ | $63.5_{0.6}$ | $69.8_{3.2}$ | $76.5_{3.5}$ 2 | $97.3_{0.3}$ | $96.6_{0.5}$ | $95.3_{0.2}$ | $84.9_{0.2}$ | $90.6_{0.7}$ | $89.5_{0.3}$ | $39.9_{1.2}$ | $71.3_{0.9}$ | $77.7_{1.2}$ 3 | $97.2_{0.1}$ | $96.2_{0.3}$ | $95.2_{0.1}$ | $85.7_{1.3}$ | $90.0_{0.5}$ | $89.6_{0.6}$ | $43.9_{3.5}$ | $71.3_{0.4}$ | $78.5_{2.1}$ 5 
| $96.2_{0.2}$ | $95.9_{0.1}$ | | $82.9_{0.4}$ | $89.2_{0.2}$ | | $43.0_{1.0}$ | $69.3_{1.6}$ | 8 | $95.1_{0.1}$ | $95.0_{0.4}$ | | $80.9_{1.0}$ | $87.1_{0.3}$ | | $45.4_{2.4}$ | $68.9_{2.4}$ | 10 | $94.4_{0.1}$ | $93.7_{0.5}$ | | $80.2_{0.3}$ | $86.0_{0.7}$ | | $45.1_{0.3}$ | $67.6_{1.0}$ | 5 | 1 | $97.7_{0.2}$ | $98.0_{0.0}$ | $96.8_{0.4}$ | $90.0_{0.3}$ | $92.0_{0.4}$ | $91.6_{0.5}$ | | | 2 | $98.0_{0.1}$ | $97.9_{0.3}$ | $96.9_{0.5}$ | $87.8_{0.3}$ | $93.2_{0.4}$ | $92.3_{0.1}$ | | | 3 | $98.1_{0.2}$ | $97.6_{0.3}$ | $96.9_{0.4}$ | $88.4_{1.1}$ | $92.3_{0.2}$ | $92.4_{0.5}$ | | | 5 | $97.7_{0.2}$ | $97.7_{0.1}$ | | $86.5_{0.3}$ | $92.5_{0.3}$ | | | | 8 | $97.1_{0.2}$ | $97.1_{0.2}$ | | $84.9_{1.1}$ | $90.8_{0.3}$ | | | | 10 | $96.8_{0.1}$ | $96.3_{0.2}$ | | $84.1_{0.4}$ | $90.3_{0.3}$ | | | | 8 | 1 | $98.1_{0.0}$ | $98.7_{0.1}$ | $97.4_{0.6}$ | $91.3_{0.2}$ | $93.7_{0.2}$ | $93.3_{0.2}$ | | | 2 | $98.4_{0.1}$ | $98.5_{0.3}$ | $97.8_{0.3}$ | $89.1_{0.4}$ | $94.5_{0.3}$ | $94.2_{0.1}$ | | | 3 | $98.6_{0.3}$ | $98.2_{0.2}$ | $97.6_{0.3}$ | $90.2_{1.0}$ | $93.9_{0.4}$ | $94.5_{0.2}$ | | | 5 | $98.4_{0.2}$ | $98.4_{0.3}$ | | $88.8_{0.2}$ | $94.2_{0.4}$ | | | | 8 | $97.9_{0.3}$ | $98.0_{0.3}$ | | $87.7_{0.7}$ | $93.3_{0.4}$ | | | | 10 | $97.9_{0.0}$ | $97.4_{0.3}$ | | $87.0_{0.2}$ | $92.6_{0.4}$ | | | | 10 | 1 | $98.5_{0.1}$ | $99.0_{0.1}$ | $98.0_{0.6}$ | $91.9_{0.4}$ | $94.4_{0.1}$ | $94.1_{0.2}$ | | | 2 | $98.6_{0.2}$ | $98.7_{0.2}$ | $98.1_{0.2}$ | $90.2_{0.2}$ | $94.9_{0.3}$ | $95.0_{0.5}$ | | | 3 | $98.7_{0.1}$ | $98.5_{0.3}$ | $98.0_{0.3}$ | $90.9_{1.1}$ | $94.6_{0.4}$ | $95.1_{0.3}$ | | | 5 | $98.7_{0.1}$ | $98.7_{0.1}$ | | $89.6_{0.6}$ | $95.0_{0.5}$ | | | | 8 | $98.3_{0.2}$ | $98.3_{0.3}$ | | $88.7_{0.7}$ | $94.0_{0.3}$ | | | | 10 | $98.1_{0.1}$ | $97.9_{0.3}$ | | $88.4_{0.2}$ | $93.5_{0.5}$ | | | | Table 10: Exact match (EM) [%] of FiD models trained in environment $({n^{+}}_{\text{train}},k_{\text{train}})$ in various evaluation environment 
$({n^{+}}_{\text{eval}},k_{\text{eval}})$ in Natural Questions. The standard deviations for each reported value is denoted with lower subscripts. | $k_{\text{eval}}$ | 1 | 5 | 20 ---|---|---|---|--- ${n^{+}}_{\text{eval}}$ | ${n^{+}}_{\text{train}}$ $k_{\text{train}}$ | 1 | 5 | 20 | 1 | 5 | 20 | 1 | 5 | 20 1 | 1 | $87.9_{0.5}$ | $88.2_{0.3}$ | $84.3_{0.5}$ | $72.4_{0.7}$ | $76.0_{0.3}$ | $74.7_{0.6}$ | $48.2_{0.6}$ | $58.7_{0.9}$ | $62.9_{1.4}$ 2 | $85.1_{0.5}$ | $87.6_{0.1}$ | $83.2_{0.8}$ | $59.5_{1.2}$ | $73.9_{0.4}$ | $73.9_{0.7}$ | $20.7_{1.3}$ | $53.2_{0.6}$ | $62.4_{1.0}$ 3 | $82.4_{0.5}$ | $85.8_{0.3}$ | $82.3_{0.7}$ | $56.0_{1.1}$ | $70.9_{0.9}$ | $72.8_{0.6}$ | $20.4_{1.2}$ | $47.5_{1.0}$ | $60.9_{0.6}$ 2 | 1 | $91.8_{0.5}$ | $91.1_{0.4}$ | $87.6_{0.6}$ | $80.0_{0.8}$ | $82.3_{0.4}$ | $80.2_{0.5}$ | $54.9_{1.1}$ | $65.7_{0.3}$ | $69.3_{0.9}$ 2 | $93.8_{0.0}$ | $92.4_{0.2}$ | $87.7_{1.0}$ | $75.2_{1.0}$ | $83.5_{0.1}$ | $80.6_{0.2}$ | $25.6_{1.5}$ | $62.4_{0.4}$ | $70.7_{0.6}$ 3 | $93.1_{0.2}$ | $92.1_{0.2}$ | $87.4_{0.9}$ | $73.9_{0.6}$ | $82.7_{0.6}$ | $80.5_{0.6}$ | $26.3_{1.4}$ | $58.9_{0.8}$ | $70.7_{0.4}$ 3 | 1 | $93.3_{0.3}$ | $92.6_{0.3}$ | $89.2_{0.3}$ | $82.6_{0.8}$ | $84.3_{0.4}$ | $81.7_{0.4}$ | $57.3_{0.9}$ | $68.2_{0.0}$ | $71.5_{0.8}$ 2 | $96.4_{0.2}$ | $94.6_{0.1}$ | $89.3_{0.8}$ | $80.0_{0.5}$ | $86.7_{0.2}$ | $83.4_{0.4}$ | $27.5_{1.4}$ | $65.7_{0.4}$ | $73.8_{0.4}$ 3 | $96.2_{0.3}$ | $94.8_{0.2}$ | $89.5_{0.7}$ | $79.8_{0.1}$ | $87.1_{0.3}$ | $83.3_{0.4}$ | $28.4_{0.9}$ | $63.5_{1.0}$ | $74.4_{0.4}$ Figure 7: Performance of FiD models on Natural Questions with varying training context quality. Panels represent different evaluation environments with different $(n^{+}_{\text{eval}},n_{\text{eval}})$ pairs, and a red dashed line shows corresponding context quality. Red stars represent the best performed models in the corresponding evaluation environments. Dotted lines show models trained on the same context quantity $n_{\text{train}}$. 
Figure 8: Performance of FiD models on TriviaQA with varying training context quantity. Panels represent different evaluation environments with different $(n^{+}_{\text{eval}},k_{\text{eval}})$ pairs, and a red dashed line shows the corresponding context quantity. Red stars represent the best-performing models in the corresponding evaluation environments. Dotted lines show models trained with the same context quality $\frac{1}{1+k_{\text{train}}}$. Figure 9: Performance of FiD models on Natural Questions with varying training context quantity. Panels represent different evaluation environments with different $(n^{+}_{\text{eval}},k_{\text{eval}})$ pairs, and a red dashed line shows the corresponding context quantity. Red stars represent the best-performing models in the corresponding evaluation environments. Dotted lines show models trained with the same context quality $\frac{1}{1+k_{\text{train}}}$. Figure 10: Top panels: Performance of FiD models on TriviaQA with adaptation by the proposed method (solid lines) and without adaptation (dotted lines). Bottom panels: Optimal temperature parameter $T^{*}$ selected for each model. Multiple $T^{*}$ were selected for some context qualities, i.e., training environments, because we selected $T^{*}$ for each of the three models trained with different random seeds in each training environment. Panels represent different evaluation environments with different $(n^{+}_{\text{eval}},n_{\text{eval}})$ pairs, and a red dashed line shows the corresponding context quality.
We can now notice that $\varepsilon\kappa^{2}\big{(}\langle q_{\rm i}\rangle\llbracket\underline{\mathtt{c}}_{1}\zeta\rrbracket-2\ell\dot{\delta}\langle\underline{\mathtt{c}}_{1}\zeta\rangle\big{)}$ is a lower order term in the sense that $\varepsilon\kappa^{2}|\langle q_{\rm i}\rangle\llbracket\underline{\mathtt{c}}_{1}\zeta\rrbracket-2\ell\dot{\delta}\langle\underline{\mathtt{c}}_{1}\zeta\rangle|\leq\varepsilon^{1/2}\kappa^{1/2}\underline{C}\big{(}\underline{{\mathfrak{E}}}_{\rm int}+\underline{{\mathfrak{E}}}_{\rm trace}\big{)},$ so that it can be absorbed by the sum of the three energies when $\varepsilon\kappa$ is small enough to have $\varepsilon^{1/2}\kappa^{1/2}\underline{C}<1$. For instance, if $\varepsilon^{1/2}\kappa^{1/2}\underline{C}<1/2$, then, denoting $\widetilde{\underline{{\mathfrak{E}}}}:=\underline{\mathfrak{E}}_{\rm ext}+\underline{\mathfrak{E}}_{\rm int}+\underline{\mathfrak{E}}_{\rm trace},$ one obtains after a Gronwall estimate $\widetilde{\underline{{\mathfrak{E}}}}(t)\leq 3\big{[}\widetilde{\underline{{\mathfrak{E}}}}(0)+\varepsilon\underline{C}\int_{0}^{t}\big{(}|f|_{2}^{2}+|(g_{1},g_{2})|^{2}\big{)}\big{]}\exp(\sqrt{\varepsilon}\underline{C}t).$ Since moreover there exists a constant $C_{0}=C_{0}(\frac{1}{h_{\rm min}},\frac{1}{c_{\rm min}})$ such that $\widetilde{\underline{\mathfrak{E}}}\leq C_{0}\big{(}|{\mathtt{Z}}|^{2}+|(\zeta,q,\kappa\partial_{x}q)|_{2}^{2}+\varepsilon\kappa^{3}|(\zeta_{-},\zeta_{+})|^{2}+\varepsilon\kappa^{5}|(\dot{\zeta}_{-},\dot{\zeta}_{+})|^{2}\big{)}$ and $|{\mathtt{Z}}|^{2}+|(\zeta,q,\kappa\partial_{x}q)|_{2}^{2}+\varepsilon\kappa^{3}|(\zeta_{-},\zeta_{+})|^{2}+\varepsilon\kappa^{5}|(\dot{\zeta}_{-},\dot{\zeta}_{+})|^{2}\leq C_{0}\widetilde{\underline{\mathfrak{E}}},$ one deduces the estimate stated in the theorem.

Step 5. Well-posedness.
By a straightforward adaptation of the proof of Theorem 3.3, one can observe that (3.33)-(3.36) can be reformulated as an ODE for $(\zeta,q,{\mathtt{Z}})\in H^{1}\times H^{2}({\mathcal{E}})\times{\mathbb{R}}^{3}$ and prove existence and uniqueness of a solution in this space by the Cauchy-Lipschitz theorem. For data $(\zeta,q,{\mathtt{Z}})\in L^{2}\times H^{1}({\mathcal{E}})\times{\mathbb{R}}^{3}$, this strategy does not work directly because the traces $\zeta_{\pm}$ that appear in the components of the ODE (3.53) for $\langle q_{\rm i}\rangle$ and $\dot{\delta}$ cannot be controlled by the $L^{2}$ norm of $\zeta$. However, the energy estimate just proved provides such a control, and one can obtain the result by a classical density argument (as used for instance in the proof of Theorem 3.1.1 in [36] for hyperbolic initial boundary value problems, where the control on the trace is furnished by using a Kreiss symmetrizer). ∎

## 4\. Return to equilibrium

We now deal with a specific kind of wave-structure interaction that was called the return to equilibrium problem in [26] and is commonly referred to as the “free decay test” in engineering. This is a situation where the solid is released at zero speed from an out-of-equilibrium position ($\delta(t=0)\neq 0$), in a fluid that is at rest. The solid then oscillates vertically and its motion sends waves outwards; by this process, the solid loses energy and its oscillations are damped, so that the solid asymptotically stabilizes to its equilibrium position. Engineers use this free decay test because, by measuring the oscillations of the object, they can deduce some of its buoyancy properties. More precisely, assuming that the motion of the object satisfies the phenomenological Cummins equation [11, 30] (4.1) $M\ddot{\delta}+k\ast\dot{\delta}+a\delta=0$ with $M,a\in\mathbb{R}^{+}$ and $k\in L^{1}_{\text{loc}}(\mathbb{R}^{+})$, they calibrate these coefficients with experimental measurements.
These measurements are also used to propose nonlinear extensions of (4.1) (by fitting coefficients with ad hoc nonlinear terms) [39]. Our goal in this section is to study this problem from a mathematical viewpoint, by proposing a qualitative analysis of the solutions to the transmission problem (3.2)-(3.5) in the particular configuration corresponding to the return to equilibrium problem. This approach is expected to lead in some cases to an equation of the form (4.1), which would provide an analytic description of the coefficients involved, and also to nonlinear extensions that could be of interest to engineers. This program was initiated and achieved in [26] for the (non dispersive) nonlinear shallow water equations, where it was found that $\delta$ solves a nonlinear second order ODE without an integro-differential term. Still working with the shallow water equations, but in horizontal dimension $d=2$, assuming radial symmetry and neglecting the nonlinear effects in the exterior region, it was shown in [6] that the equation on $\delta$ should contain an integro-differential term. Such a term is also necessary for the nonlinear shallow water equations in dimension $d=1$ if viscosity is taken into account [33]. The goal of this section is to investigate the contribution of the dispersive terms of the Boussinesq system to the equation satisfied by $\delta$ in this specific configuration of the return to equilibrium problem. From now on, we assume that the initial data correspond to the configuration of the return to equilibrium problem, namely, (4.2) $q(t=0)=\zeta(t=0)=0\quad\text{and}\quad\delta(t=0)=\delta_{0},\quad\dot{\delta}(t=0)=0.$ ###### Notation 2.
We use throughout this section the same notations as in Section 3; namely, we write $\kappa=\sqrt{\mu/3}$ and denote by ${\mathfrak{f}}_{\rm sw}$ the momentum flux of the nonlinear shallow water equations, ${\mathfrak{f}}_{\rm sw}=\frac{h^{2}-1}{2\varepsilon}+\varepsilon\frac{q^{2}}{h}=\zeta+\varepsilon\big{(}\frac{1}{2}\zeta^{2}+\frac{q^{2}}{h}\big{)}.$ We also recall that the buoyancy frequency $\tau_{\rm buoy}$ is defined in Appendix A. We introduce in §4.1 two Cummins operators that allow us to derive an abstract evolution equation for the solid. We then investigate two specific cases where it is possible to derive an explicit expression of these operators. The non dispersive case ($\varepsilon\neq 0$, $\mu=0$) is considered in §4.2, where it is shown that the motion of the object can be found by solving a simple nonlinear second order scalar ODE. Waves can then be described by solving an initial boundary value problem for a scalar Burgers equation. The opposite case, namely, the linear dispersive case ($\varepsilon=0$, $\mu\neq 0$), is addressed in §4.3; here again, it is possible to derive an explicit expression for the Cummins operators, leading to an integro-differential Cummins-type equation for the motion of the solid; qualitative properties of the solutions, such as their decay rate, are then investigated. Finally, it is shown that the motion of the waves can be found by solving a nonlocal (in space) perturbation of the transport equation. ### 4.1. The general Cummins equation Quite obviously, any smooth solution of the transmission problem (3.2)-(3.5) with initial condition (4.2) is such that $\zeta$ is an even function while $q$ is odd – such solutions will be called “symmetric”. This implies that $\langle q_{\rm i}\rangle=0$ and that the transmission problem can be reduced to a simpler boundary value problem, stated in the following direct corollary of Theorem 3.1. ###### Corollary 4.1.
Any smooth symmetric solution to the transmission problem (3.2)-(3.5) solves the following boundary value problem on the half-line $(\ell,\infty)$, (4.3) $\begin{cases}\partial_{t}\zeta+\partial_{x}q=0\\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\partial_{x}{\mathfrak{f}}_{\rm sw}=0,\end{cases}\quad\mbox{ for }\quad t\geq 0,\quad x\in{\mathcal{E}}^{+},$ with boundary condition (4.4) $\displaystyle q_{|_{x=\ell}}$ $\displaystyle=-\ell\dot{\delta},$ where $\delta$ solves the ODE (4.5) ${(\tau_{\mu}(\varepsilon\delta)^{2}+\ell\kappa\frac{1}{h_{+}})}\ddot{\delta}+\delta=\varepsilon\beta(\varepsilon\delta)\dot{\delta}^{2}+{\mathfrak{H}}_{+},$ where $h_{+}=h_{|_{x=\ell}}$, ${\mathfrak{H}}_{+}={\mathfrak{H}}_{|_{x=\ell}}$ and we recall that ${\mathfrak{H}}={\mathfrak{H}}(\zeta,q)$ with ${{\mathfrak{H}}(\zeta,q)=\frac{1}{2}\varepsilon\big{(}\frac{1}{h}\zeta^{2}-\frac{q^{2}}{h^{2}}\big{)}+\frac{1}{h}R_{1}{\mathfrak{f}}_{\rm sw}},$ and that $\tau_{\mu}(\varepsilon\delta)$ and $\beta(\varepsilon\delta)$ are defined in Proposition 2.4, namely, $\displaystyle{{{\tau}_{\mu}(\varepsilon\delta)^{2}}}$ $\displaystyle=\tau_{\rm buoy}^{2}+\frac{1}{\ell}\int_{0}^{\ell}\frac{x^{2}}{h_{\rm eq}(x)+\varepsilon\delta}{\rm d}x+{\frac{\kappa^{2}}{h_{\rm eq}(\ell)+\varepsilon\delta}},$ $\displaystyle\beta(\varepsilon\delta)$ $\displaystyle=\frac{1}{2}\frac{1}{\ell}\int_{0}^{\ell}\frac{x^{2}}{(h_{\rm eq}(x)+\varepsilon\delta)^{2}}{\rm d}x.$ We know by Proposition 3.2 that if $f$ is a given $C^{1}$ function of time then there is a unique solution $(\zeta,q)$ to (4.3) with boundary condition $q_{|_{x=\ell}}=-\ell f$ with initial condition corresponding to the return to equilibrium problem, namely, $(\zeta,q)(t=0)=(0,0)$. It is in particular possible to compute the trace of $\zeta$ at $x=\ell$, so that the following definition makes sense. ###### Definition 4.1 (Cummins operators). Let $\varepsilon\in\mathbb{R}^{+}$, $\mu=\kappa^{2}/3>0$. 
Let also $f\in C^{1}({\mathbb{R}}^{+})$ and $T>0$, and $(\zeta,q)\in C^{1}\big{(}[0,T];H^{1}({\mathcal{E}}^{+})\times H^{2}({\mathcal{E}}^{+})\big{)}$ be a solution to (4.3) with boundary condition $q_{|_{x=\ell}}=-\ell f$ and initial condition $(\zeta,q)(t=0)=(0,0)$. We define the Cummins operators ${\mathfrak{c}}_{\varepsilon,\mu}$ and ${\mathfrak{C}}_{\varepsilon,\mu}$ as ${\mathfrak{c}}_{\varepsilon,\mu}[f]:=\zeta_{|_{x=\ell}}\quad\mbox{ and }\quad{\mathfrak{C}}_{\varepsilon,\mu}[f]:=-{\mathfrak{H}}(\zeta,q)_{|_{x=\ell}}.$ ###### Remark 4.1. The Cummins operators can be defined in more general settings; for instance, the solution $(\zeta,q)$ to the initial boundary value problem only needs to be regular near the boundary $x=\ell$ (regular enough for the trace to make sense). This allows one to extend the definition of the Cummins operators to the case $\mu=0$, as done in §4.2 below. ###### Corollary 4.2. The ODE (4.5) can be reformulated in a compact form as what we shall refer to as the Cummins equation (4.6) $\big{(}\tau_{\mu}(\varepsilon\delta)^{2}+\ell\kappa\frac{1}{1+\varepsilon{\mathfrak{c}}_{\varepsilon,\mu}[\dot{\delta}]}\big{)}\ddot{\delta}+\delta+{\mathfrak{C}}_{\varepsilon,\mu}[\dot{\delta}]=\varepsilon\beta(\varepsilon\delta)\dot{\delta}^{2},$ with initial conditions $\delta(0)=\delta_{0}$ and $\dot{\delta}(0)=0$. The equation (4.6) is compact but not simple, since the Cummins operators are nonlinear nonlocal operators whose evaluation requires solving the equations for the fluid in the exterior domain. In order to get some qualitative insight into the Cummins equation, we describe it in two limiting cases: the nonlinear non dispersive case ($\varepsilon>0$, not necessarily small, and $\mu=0$), and the linear, dispersive case ($\varepsilon=0$ and $\mu>0$, not necessarily small).
Note that in both cases, it is not necessary to compute the first Cummins operator ${\mathfrak{c}}_{\varepsilon,\mu}[\dot{\delta}]$, and that it is possible to provide an explicit expression of the second one, ${\mathfrak{C}}_{\varepsilon,\mu}[\dot{\delta}]$. ### 4.2. The nonlinear non dispersive case Neglecting the dispersive effects is equivalent to setting $\mu=\kappa^{2}/3=0$ in the equations (4.3)-(4.6); in particular, the model considered for the propagation of the waves is now the shallow water equations (4.7) $\begin{cases}\partial_{t}\zeta+\partial_{x}q=0\\\ \partial_{t}q+\varepsilon\partial_{x}\left(\frac{1}{h}q^{2}\right)+h\partial_{x}\zeta=0,\end{cases}\quad\mbox{ for }\quad t\geq 0,\quad x\in{\mathcal{E}}^{+},$ the boundary condition is unchanged (4.8) $q_{|_{x=\ell}}=-\ell\dot{\delta},$ and the ODE solved by $\delta$ is simplified into (4.9) $\tau_{0}(\varepsilon\delta)^{2}\ddot{\delta}+\delta+{\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]=-\varepsilon\tau_{0}(\varepsilon\delta)\tau_{0}^{\prime}(\varepsilon\delta)\dot{\delta}^{2},$ where we used the fact that $\beta(\varepsilon\delta)=-2\tau_{0}(\varepsilon\delta)\tau_{0}^{\prime}(\varepsilon\delta)$ when $\mu=0$ (see (2.28) and (2.29)), and where the definition of the second Cummins operator has been extended to the case $\mu=0$ as ${\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]:=-\Big{(}\zeta+\varepsilon\frac{1}{2}\frac{q^{2}}{h^{2}}\Big{)}_{|_{x=\ell}};$ the fact that this definition makes sense follows from the decomposition of the shallow water equations into Riemann invariants, as shown in the proof of the following theorem, where an explicit expression of the Cummins operator is provided.
This theorem is a reformulation of Corollary 1 in [26], but with a slight difference in the function $\gamma$, so that we reproduce a sketch of the proof.¹

¹The difference comes from the fact that in [26], the choice of the boundary condition for the interior pressure was made by assuming that the jump of pressure at the contact point was purely hydrostatic; as in [33, 5], we rather use here a choice of the boundary condition on the pressure which is consistent with the approach used throughout this paper and motivated by the conservation of total energy, as explained in Corollary 2.1. With the choice of [26], one would have ${\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]=-\zeta_{|_{x=\ell}}$ and consequently $-\ell\dot{\delta}-\varepsilon\dot{\delta}^{2}\gamma(\varepsilon\dot{\delta})=\frac{1}{\varepsilon}(\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})^{2}-1)$.

###### Theorem 4.1.

Let $T>0$, $\delta\in C^{2}([0,T])$ and $(\zeta,q)$ be a continuous, piecewise $C^{1}$ solution of (4.7)-(4.9) on $[0,T]\times(\ell,\infty)$ satisfying the non vanishing depth condition $\inf_{[0,T]\times{\mathcal{E}}}h>0\quad\mbox{ and }\quad\inf_{[0,T]\times{\mathcal{I}}}h_{\rm eq}+\varepsilon\delta>0.$ If moreover $\ell\,\varepsilon\dot{\delta}<2r_{0}$, with $r_{0}:=\frac{4}{27}$, we have $\sqrt{h}=\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})$ with the real function $\sigma_{0}(r)=\frac{1}{3}\Big{(}1+C_{-}(r)+C_{+}(r)\Big{)},\quad C_{\pm}(r)=\frac{3}{2}\Big{(}-4r+2r_{0}\pm 4\sqrt{r(r-r_{0})}\Big{)}^{1/3},$ and the Cummins operator ${\mathfrak{C}}_{\varepsilon,0}$ is given explicitly by (4.10) ${\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]=-\varepsilon^{-1}\Big{(}\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})-1\Big{)}\Big{(}3\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})-1\Big{)}=:\ell\dot{\delta}+\varepsilon\dot{\delta}^{2}\gamma(\varepsilon\dot{\delta}),$ where $\gamma:(-\infty,2r_{0})\to{\mathbb{R}}$ is a smooth function such that $\gamma(0)=\frac{1}{4}\ell^{2}$ and
whose exact expression is given in (4.11) below. ###### Proof. The proof of Corollary 1 of [26] is based on the fact that the shallow water equations can be put in diagonal form, $\partial_{t}R+(\sqrt{h}+\varepsilon\frac{q}{h})\partial_{x}R=0\quad\mbox{ and }\quad\partial_{t}L-(\sqrt{h}-\varepsilon\frac{q}{h})\partial_{x}L=0,$ where $R$ and $L$ are respectively the right and left Riemann invariants $R=\frac{q}{h}+\frac{2}{\varepsilon}(\sqrt{h}-1)\quad\mbox{ and }\quad L=\frac{q}{h}-\frac{2}{\varepsilon}(\sqrt{h}-1).$ One then notices that with the initial and boundary conditions considered here, $L$ vanishes identically on $(\ell,\infty)$, which allows one to find $\sqrt{h}$ in terms of $q$ as a root of the third order polynomial equation in $\sigma$, $\sigma^{3}-\sigma^{2}-\varepsilon\frac{1}{2}q=0.$ If $-\varepsilon\frac{1}{2}q<r_{0}$, then, as discussed in [26], the relevant root is $\sigma_{0}(-\varepsilon\frac{1}{2}q)$. Recalling that $q_{|_{x=\ell}}=-\ell\dot{\delta}$, we have $\sqrt{h}=\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})$. Moreover, $L=0$ implies $\varepsilon\frac{1}{2}\frac{q^{2}}{h^{2}}_{|_{x=\ell}}=2\,\varepsilon^{-1}\Big{(}\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})-1\Big{)}^{2}.$ Remarking that $\zeta_{|_{x=\ell}}=\frac{1}{\varepsilon}(\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})^{2}-1)$, one gets $\displaystyle\Big{(}\zeta+\varepsilon\frac{1}{2}\frac{q^{2}}{h^{2}}\Big{)}_{|_{x=\ell}}$ $\displaystyle=\varepsilon^{-1}(\sigma_{0}-1)(3\sigma_{0}-1)$ (4.11) $\displaystyle=:-\ell\dot{\delta}{{-}}\varepsilon\dot{\delta}^{2}\gamma(\varepsilon\dot{\delta}),$ where we used the fact that $\sigma_{0}(0)=-\sigma_{0}^{\prime}(0)=1$. The fact that $\gamma(0)=\frac{1}{4}\ell^{2}$ follows from the observation that $\sigma_{0}^{\prime\prime}(0)=-4$. 
∎ A first corollary is that the motion of the solid can be reduced to a simple nonlinear ODE, provided that the initial displacement satisfies an upper bound ensuring that the velocity of the object does not become too large. ###### Corollary 4.3. Under the assumptions of the theorem, and with the same notations, let us assume moreover that $\varepsilon^{2}\delta_{0}^{2}<{\tau_{0}(\varepsilon|\delta_{0}|)^{2}}\big{(}\frac{2r_{0}}{\ell}\big{)}^{2}.$ Then the motion of the solid is found by solving the nonlinear second order ODE (4.12) $\tau_{0}(\varepsilon\delta)^{2}\ddot{\delta}+\ell\dot{\delta}+\delta+\varepsilon\left(\tau_{0}(\varepsilon\delta)\tau^{\prime}_{0}(\varepsilon\delta)+\gamma(\varepsilon\dot{\delta})\right)\dot{\delta}^{2}=0,$ with initial condition $\delta(0)=\delta_{0}$, $\dot{\delta}(0)=0$. ###### Remark 4.2. In the linear case ($\varepsilon=0$), this equation is almost the same as (3.2.12) in [20], the only difference being that the author neglected the buoyancy frequency $\tau_{\rm buoy}$ in the expression for $\tau_{0}(0)$. ###### Proof. One just needs to check that the condition $\ell\varepsilon\dot{\delta}<2r_{0}$, which ensures by Theorem 4.1 that the Cummins operator takes the form (4.10), is satisfied for all times. Since at $t=0$, one has $\dot{\delta}=0$, we know that this condition is satisfied for small times. Since moreover one can deduce from Proposition 3.1 (by setting $\mu=0$) that (4.13) $\tau_{0}(\varepsilon\delta)^{2}\dot{\delta}^{2}+\delta^{2}\leq\delta_{0}^{2},$ one gets that $|\delta|\leq|\delta_{0}|$ and therefore that $\dot{\delta}^{2}\leq\tau_{0}(\varepsilon|\delta_{0}|)^{-2}\delta_{0}^{2}$. The assumption made in the statement of the corollary therefore grants the result.
∎ The interest of reducing the motion of the solid to an ODE on the surface displacement is that it is possible to solve it even in situations where singularities arise in the exterior domain (typically, when shocks form). It is in particular possible to obtain a global existence result for the ODE (4.12), while such a result cannot be expected for strong solutions to the full transmission problem (4.7)-(4.9) due to shock formation. Note that the first condition on $\delta_{0}$ in the proposition below means that at $t=0$, the solid neither touches the bottom nor is lifted by a height greater than the height of the water column under the object when it is at equilibrium. ###### Proposition 4.1. Let $\delta_{0}\in{\mathbb{R}}$ be such that $\inf_{{\mathcal{I}}}h_{\rm eq}-\varepsilon|\delta_{0}|>0\quad\mbox{ and }\quad\varepsilon|\delta_{0}|<\tau_{0}(\varepsilon|\delta_{0}|)\frac{2r_{0}}{\ell}.$ Then there exists a unique global solution $\delta\in C^{\infty}({\mathbb{R}}^{+})$ to the ODE (4.12) with initial condition $(\delta,\dot{\delta})_{|_{t=0}}=(\delta_{0},0)$. ###### Remark 4.3. A byproduct of the proof is that ${\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]\,\dot{\delta}\geq 0$ and that this quantity corresponds to the energy transferred at each instant to the exterior fluid domain, that is, with the notations of Proposition 3.1, one has $\frac{d}{dt}{\mathfrak{E}}_{\rm ext}={\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]\dot{\delta}\geq 0.$ ###### Remark 4.4. The second condition of the proposition is a smallness condition on $\delta_{0}$, but this condition is not restrictive as it allows $\delta_{0}$ to be of size $O(\varepsilon^{-1})$. As communicated to us by the author, it is possible, under stricter smallness conditions, to prove exponential decay of the solution of ODEs related to (4.12) using techniques developed in [24]. ###### Proof.
There exists a positive time $T>0$ such that on $[0,T)$, there is a solution $\delta$ such that $\inf_{\mathcal{I}}h_{\rm eq}+\varepsilon\delta>0$ and $\ell\varepsilon\dot{\delta}<2r_{0}$. We want to show that one can take $T=+\infty$. As in the proof of Corollary 4.3, this follows from (4.13). We therefore need to prove that (4.13) holds, without appealing to Proposition 3.1 as in the proof of Corollary 4.3, but by direct manipulations on the solution to the ODE (4.12). We need the following two lemmas. ###### Lemma 4.1. The function $\sigma_{0}$ is decreasing on $(-\infty,r_{0})$. ###### Proof of the lemma. By construction, one has for all $r<r_{0}$, $\sigma_{0}(r)^{3}-\sigma_{0}(r)^{2}+r=0;$ differentiating this identity yields $\sigma^{\prime}_{0}(r)(3\sigma_{0}(r)^{2}-2\sigma_{0}(r))=-1,$ so that $\sigma_{0}(r)$ and $3\sigma_{0}(r)^{2}-2\sigma_{0}(r)$ have opposite sign. It is therefore enough to prove that $3\sigma_{0}(r)^{2}-2\sigma_{0}(r)>0$ for all $r<r_{0}$. Since $\sigma_{0}(0)=1$, this quantity is positive at $r=0$ and must therefore vanish if it changes sign. This means that for some $r_{1}<r_{0}$, one must have $\sigma_{0}(r_{1})=0$ or $\sigma_{0}(r_{1})=\frac{2}{3}$. Using the cubic equation solved by $\sigma_{0}(r_{1})$, this implies that $r_{1}=0$ or $r_{1}=r_{0}$. Both cases have to be excluded because $\sigma_{0}(0)=1\neq 0$ and $r_{1}<r_{0}$ by assumption. The result follows. ∎ ###### Lemma 4.2. If $\dot{\delta}\neq 0$ and $\varepsilon\ell\dot{\delta}<2r_{0}$, the Cummins operator satisfies ${\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]\,\dot{\delta}>0.$ ###### Proof of the lemma. Recalling that from (4.10), one has ${\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]\,\dot{\delta}=-\varepsilon^{-1}(\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})-1)(3\sigma_{0}(\varepsilon\frac{\ell}{2}\dot{\delta})-1)\dot{\delta},$ the conclusion follows from the previous lemma and the observation that $\sigma_{0}(0)=1$ and $\sigma_{0}(r_{0})=2/3$. 
∎ We can now use the second lemma to conclude: multiplying (4.9) by $\dot{\delta}$ and integrating in time yields $\displaystyle\tau_{0}(\varepsilon\delta)^{2}\dot{\delta}^{2}+\delta^{2}$ $\displaystyle=\delta_{0}^{2}-2\int_{0}^{t}{\mathfrak{C}}_{\varepsilon,0}[\dot{\delta}]\dot{\delta}\leq\delta_{0}^{2},$ which implies (4.13); the proposition is therefore proved. ∎ The following corollary then shows that, once the ODE (4.12) has been solved, the solution in the exterior domain reduces to a simple initial boundary value problem for a scalar Burgers-type equation. This is a simple byproduct of the proof of Theorem 4.1, where it was shown that the nonlinear shallow water equations reduce to a scalar equation on the right-going Riemann invariant. ###### Corollary 4.4. Under the assumptions of Corollary 4.3, $q$ is found in the exterior domain by solving the initial boundary value problem $\begin{cases}\partial_{t}q+\big{(}-\sigma_{0}^{\prime}(-\frac{\varepsilon}{2}q)\sigma_{0}(-\frac{\varepsilon}{2}q)\big{)}^{-1}\partial_{x}q&=0\qquad(t>0,\quad x>\ell),\\\ q_{|_{t=0}}&=0,\\\ q_{|_{x=\ell}}&=-\ell\dot{\delta},\end{cases}$ with $\delta$ furnished by Proposition 4.1, while $\zeta$ is given in terms of $q$ by the algebraic expression (4.14) $\zeta=\frac{1}{\varepsilon}(\sigma_{0}(-\varepsilon q/2)^{2}-1).$ ###### Remark 4.5. More generally, if one wants to compute the waves created by an object in forced motion, one must solve the same equations as in the corollary, but with $\delta$ corresponding to this forced motion rather than given by Proposition 4.1. ### 4.3. The linear dispersive case We have studied in the previous section the situation where dispersive effects could be neglected ($\mu=3\kappa^{2}=0$) in front of the nonlinear effects. We consider here the opposite situation where nonlinear effects are negligible ($\varepsilon=0$) but the dispersive effects are taken into account. That is, we consider the linear approximation to (4.3)-(4.5).
The model considered for the propagation of the waves is therefore (4.15) $\begin{cases}\partial_{t}\zeta+\partial_{x}q=0\\\ (1-\kappa^{2}\partial_{x}^{2})\partial_{t}q+\partial_{x}\zeta=0,\end{cases}\quad\mbox{ for }\quad t\geq 0,\quad x\in{\mathcal{E}}^{+},$ the boundary condition is unchanged (4.16) $\displaystyle q_{|_{x=\ell}}$ $\displaystyle=-\ell\dot{\delta},$ and the ODE solved by $\delta$ is simplified into (4.17) ${(\tau^{2}_{\mu}+\ell\kappa)}\ddot{\delta}+\delta+{\mathfrak{C}}_{0,\mu}[\dot{\delta}]=0,$ where we recall that according to the definition of the Cummins operator (see Definition 4.1 and (3.18)), ${\mathfrak{C}}_{0,\mu}[\dot{\delta}]:=-(R_{1}\zeta)_{|_{x=\ell}},$ and where, for the sake of clarity, we simply write throughout this section $\tau_{\mu}^{2}=\tau_{\mu}(0)^{2}=\tau_{\rm buoy}^{2}+\frac{1}{\ell}\int_{0}^{\ell}\frac{x^{2}}{h_{\rm eq}}+\kappa^{2}\frac{1}{h_{\rm eq}}.$ We know by Theorem 3.3 that for all $n\in{\mathbb{N}}$ and $T>0$, there exists a unique solution $(\zeta,q,\delta)\in C^{\infty}([0,T];{\mathbb{H}}^{n}\times{\mathbb{R}})$ of (4.15)-(4.17) with initial conditions (4.2); we want here to analyze the behavior of this solution. As for the nonlinear non dispersive case in the previous section, we first provide an explicit expression for the Cummins operator, from which we are able to derive an uncoupled scalar equation for the evolution of $\delta$, whose solution can be used to find $\zeta$ and $q$ in the exterior domain through the resolution of a simpler scalar initial boundary value problem. All the equations involved in this section are linear, the difficulty coming from their nonlocal nature. #### 4.3.1. 
Preliminary material In order to give an explicit representation of the Cummins operator ${\mathfrak{C}}_{0,\mu}$, we first need to recall the definition of the Bessel functions $J_{n}$ (§8.41 in [17]) $J_{n}(t)=\frac{1}{\pi}\int_{0}^{\pi}\cos\big{(}n\theta-t\sin\theta\big{)}{\rm d}\theta;$ we also define the causal convolution kernels ${\mathcal{K}}^{0}_{\mu}$ and ${\mathcal{K}}^{1}_{\mu}$ as (4.18) ${\mathcal{K}}^{0}_{\mu}(t)=\frac{1}{\kappa}J_{0}(\frac{t}{\kappa})\quad\mbox{ and }\quad{\mathcal{K}}^{1}_{\mu}(t)=\frac{1}{t}J_{1}(\frac{t}{\kappa}),\quad\mbox{ for all }t\geq 0,$ and use the following standard notation for the convolution of time causal functions, $\forall t\geq 0,\qquad f*g(t)=\int_{0}^{t}f(t-s)g(s){\rm d}s.$ We also need to use the Laplace transform with respect to the time variable, which we define as $\mathcal{L}:q\mapsto\hat{q},$ where $\mathcal{L}[q](s)=\displaystyle\int_{0}^{\infty}q(t)e^{-st}{\rm d}t\quad\text{ with }\quad s\in\mathbb{C}_{0}:=\\{s\in\mathbb{C}\,|\,\mathfrak{Re}(s)>0\\}.$ We shall in particular use the following properties on Bessel functions [17] (4.19) ${\mathcal{L}}^{-1}\big{(}\frac{1}{\sqrt{1+\kappa^{2}s^{2}}}\big{)}={\mathcal{K}}^{0}_{\mu}(t)\quad\mbox{ and }\quad{\mathcal{L}}^{-1}\big{(}\frac{1}{\sqrt{1+\kappa^{2}s^{2}}+\kappa s}\big{)}={\mathcal{K}}^{1}_{\mu}(t),$ with ${\mathcal{K}}^{0}_{\mu}$ and ${\mathcal{K}}^{1}_{\mu}$ as defined in (4.18). #### 4.3.2. Analysis of the equations Using the linear structure of the equations, one can obtain an explicit expression for the Cummins operator ${\mathfrak{C}}_{0,\mu}$. ###### Theorem 4.2. The Cummins operator ${\mathfrak{C}}_{0,\mu}$ is given explicitly by ${\mathfrak{C}}_{0,\mu}[\dot{\delta}]:=\ell\,{\mathcal{K}}^{1}_{\mu}*\dot{\delta},$ where ${\mathcal{K}}^{1}_{\mu}$ is defined in (4.18). ###### Proof. 
Applying the Laplace transform to the equations (4.15) and (4.16), which is possible since all the functions are continuous and bounded in time (as a consequence of Proposition 3.1), and taking into account that $\zeta_{|_{t=0}}=q_{|_{t=0}}=0$, yields (4.20) $\begin{cases}s\hat{\zeta}+\partial_{x}\hat{q}=0\\\ (1-\kappa^{2}\partial_{x}^{2})s\hat{q}+\partial_{x}\hat{\zeta}=0,\end{cases}\quad\mbox{ and }\quad\widehat{q}_{|_{x=\ell}}=-\ell\widehat{\dot{\delta}}.$ This is an ODE for $(\widehat{\zeta},\widehat{q})$ on the half-line $(\ell,\infty)$ that can be explicitly solved in terms of $\widehat{\dot{\delta}}$ (note that a representation of the solution in terms of the Laplace transform in space is also possible [21] but not adapted to our purpose here; see also [3] for other types of linear dispersive equations); the formula of the lemma below provides "right-going" solutions to the linear Boussinesq equations and it is therefore no surprise that the relationship between $\widehat{\zeta}$ and $\widehat{q}$ is the same as the one that arises when imposing transparent boundary conditions as in [23]. ###### Lemma 4.3. There is one and only one solution $(\widehat{\zeta},\widehat{q})$ to (4.20) that does not grow exponentially at infinity; it is given by $\begin{cases}\hat{q}(s,x)=\displaystyle-\ell\,\widehat{\dot{\delta}}(s)e^{-\frac{s}{\sqrt{1+\kappa^{2}s^{2}}}(x-\ell)},\\\ \displaystyle\widehat{\zeta}(s,x)=\frac{1}{\sqrt{1+\kappa^{2}s^{2}}}\widehat{q}(s,x),\end{cases}$ where the square root is taken in order to have positive real part. ###### Proof of the lemma.
From (4.20), one deduces (4.21) $\partial_{x}^{2}\hat{q}(s)-\frac{s^{2}}{1+\kappa^{2}s^{2}}\hat{q}(s)=0,$ and there are therefore two constants $A(s)$ and $B(s)$ such that $\displaystyle\hat{q}(s,x)=A(s)e^{-\frac{s}{\sqrt{1+\kappa^{2}s^{2}}}x}+B(s)e^{\frac{s}{\sqrt{1+\kappa^{2}s^{2}}}x}.$ Since exponentially increasing functions are not allowed, we have $B(s)=0$ and thus $\hat{q}(s,x)=A(s)e^{-\frac{s}{\sqrt{1+\kappa^{2}s^{2}}}x}.$ Then using the boundary condition on $\widehat{q}$ at $x=\ell$, we find the expected formula for $\hat{q}$. By using the first equation of (4.20), we get the formula for $\widehat{\zeta}$. ∎ Let us now remark that for all $f\in L^{2}({\mathcal{E}}^{+})$, one has $(R_{1}f)_{|_{x=\ell}}=\kappa^{-1}\int_{{\mathcal{E}}^{+}}e^{-\kappa^{-1}(x-\ell)}f(x){\rm d}x$ so that, using the lemma, $\displaystyle\widehat{{\mathfrak{C}}_{0,\mu}[\dot{\delta}]}(s)$ $\displaystyle=-(R_{1}\widehat{\zeta})_{|_{x=\ell}}$ $\displaystyle=\displaystyle\frac{\ell\widehat{\dot{\delta}}(s)}{\kappa\sqrt{1+\kappa^{2}s^{2}}}\,\int_{{\mathcal{E}}^{+}}e^{-\big{(}\frac{1}{\kappa}+\frac{s}{\sqrt{1+\kappa^{2}s^{2}}}\big{)}(x-\ell)}{\rm d}x.$ It follows that $\widehat{{\mathfrak{C}}_{0,\mu}[\dot{\delta}]}(s)=\frac{\ell}{\sqrt{1+\kappa^{2}s^{2}}+\kappa s}\,\widehat{\dot{\delta}}(s).$ Using (4.19), this yields ${\mathfrak{C}}_{0,\mu}[\dot{\delta}]=\ell{\mathcal{K}}^{1}_{\mu}*\dot{\delta};$ note also for future use that we also get from the lemma that $\displaystyle\zeta(t,x)$ $\displaystyle={\mathcal{K}}^{0}_{\mu}*q.$ ∎ As in Corollary 4.3 in the non dispersive case, it is possible to determine the motion of the solid by the resolution of a single scalar equation on $\delta$; due to the presence of the dispersive terms however, this equation is no longer an ordinary differential equation but an integro-differential equation. ###### Corollary 4.5. 
The motion of the floating object for the problem (4.15)-(4.17) can be found directly by solving the linear second order integro-differential equation (4.22) $\big{(}\tau^{2}_{\mu}+\ell\kappa\big{)}\ddot{\delta}+\ell{\mathcal{K}}^{1}_{\mu}*\dot{\delta}+\delta=0,$ with initial conditions $\delta(0)=\delta_{0}$ and $\dot{\delta}(0)=0$. ###### Remark 4.6. In [33], the authors consider the linearized shallow-water equations with some viscosity $\nu$. More precisely, they consider (4.15) with $-\nu\partial_{x}^{2}q$ instead of $-\kappa^{2}\partial_{x}^{2}\partial_{t}q$ in the second equation and find the following Cummins equation (4.23) $\tau_{\mu}^{2}\ddot{\delta}+\sqrt{\nu}\ell\delta^{\left(\frac{3}{2}\right)}+\nu\dot{\delta}+\ell{\mathcal{F}}_{\nu}\ast\dot{\delta}+\delta=0\quad\text{with}\quad{\mathcal{F}}_{\nu}:=\mathcal{L}^{-1}\left[\frac{1}{\sqrt{1+\nu s}+\sqrt{\nu s}}\right],$ and where $\delta^{(\frac{3}{2})}$ stands for the fractional derivative of order $3/2$ of $\delta$. This equation shares some similarities with (4.22), in particular the convolution term, although with a different kernel (note that one gets $\widehat{\mathcal{K}}^{1}_{\mu}(s)$ by replacing $\nu s$ by $\kappa^{2}s^{2}$ in $\widehat{{\mathcal{F}}}_{\nu}(s)$). On the contrary, there is in (4.23) a viscous damping term $\nu\dot{\delta}$ that has no equivalent in (4.22). Note finally that the fractional derivative term $\sqrt{\nu}\ell\delta^{\left(\frac{3}{2}\right)}$ in (4.23) can be related to the added mass term ${\ell\kappa\ddot{\delta}}$ in (4.22). Indeed, in the analysis of [33], this fractional derivative is the leading order term of a convolution term $\ell F*\dot{\delta}$ with $\widehat{F}(s)=\sqrt{1+\nu s}$. In the dispersive case, the same analysis would give a symbol $\sqrt{1+\kappa^{2}s^{2}}$ and the leading order term of the same convolution would be the dispersive added mass term $\ell\kappa\ddot{\delta}$.
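Equation (4.22) is also easy to integrate numerically once the memory term is discretized. The following is a minimal sketch (not from the paper): a semi-implicit Euler scheme with a trapezoidal quadrature of the convolution $\ell\,{\mathcal{K}}^{1}_{\mu}*\dot{\delta}$, with $J_{1}$ evaluated through its integral representation recalled above; the parameters are those of Figure 2, and the value $\ell=1$ is an extra assumption, since the caption does not specify $\ell$.

```python
import numpy as np

def bessel_j1(x):
    # J_1 via the integral representation J_n(t) = (1/pi) * int_0^pi cos(n*theta - t*sin(theta)) dtheta,
    # evaluated with a midpoint rule (accurate enough for this qualitative sketch)
    theta = (np.arange(800) + 0.5) * np.pi / 800
    xs = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    return np.cos(theta - xs * np.sin(theta)).mean(axis=1)

# Parameters of Figure 2; the length l = 1 is an assumption (not given in the caption)
delta0, tau_buoy, h_eq, l, kappa = 1.0, 1.0 / 6.0, 1.0, 1.0, 0.5
tau_mu2 = tau_buoy**2 + (l**2 / 3.0) / h_eq + kappa**2 / h_eq
mass = tau_mu2 + l * kappa                 # tau_mu^2 + l*kappa: dispersive added mass included

dt, N = 0.01, 2000                         # integrate (4.22) up to t = 20
t = dt * np.arange(N + 1)
K1 = np.empty(N + 1)
K1[0] = 1.0 / (2.0 * kappa)                # K1(0) = 1/(2*kappa) since J_1(x) ~ x/2 near 0
K1[1:] = bessel_j1(t[1:] / kappa) / t[1:]  # K1(t) = J_1(t/kappa)/t, see (4.18)

delta, v = np.empty(N + 1), np.empty(N + 1)
delta[0], v[0] = delta0, 0.0
for n in range(N):
    # trapezoidal quadrature of the memory term (K1 * v)(t_n)
    conv = dt * (np.dot(K1[n::-1], v[:n + 1]) - 0.5 * (K1[n] * v[0] + K1[0] * v[n]))
    v[n + 1] = v[n] - dt * (delta[n] + l * conv) / mass
    delta[n + 1] = delta[n] + dt * v[n + 1]
```

The computed $\delta(t)$ oscillates and decays; rerunning with larger $\kappa$ illustrates the slower return to equilibrium shown in Figure 2. The memory term makes the scheme $O(N^{2})$, which is harmless on such short horizons.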
In the linear non dispersive case ($\varepsilon=\mu=0$), Corollary 4.3 shows that the motion of the object is governed by the same equation as a damped harmonic oscillator; the return to equilibrium therefore occurs at an exponential rate. In the presence of dispersion, Corollary 4.5 states that the motion of the solid is now governed by the integro-differential equation (4.22), and numerical simulations (see Figure 2) suggest that the decay gets slower as the dispersion parameter $\kappa=\sqrt{\mu/3}$ increases. Figure 2. Return to equilibrium: evolution of $\delta(t)$ with increasing values of $\kappa$ and $\delta_{0}=1,\tau_{\rm buoy}=1/6,h_{\rm eq}=1$. This issue is addressed in the following proposition. In particular, the fact that $\delta$ belongs to $H^{2}({\mathbb{R}}^{+})$ implies that $\delta$ and $\dot{\delta}$ tend to zero at infinity, but the third point of the proposition shows that the decay cannot be stronger than $O(t^{-3/2})$ (as opposed to the exponential convergence rate in the linear non dispersive case), bringing a theoretical confirmation to the above numerical observations. ###### Proposition 4.2. i. There is a unique solution $\delta\in C^{2}({\mathbb{R}}^{+})\cap W^{1,\infty}({\mathbb{R}}^{+})$ to (4.22) with initial data $\delta(0)=\delta_{0}$ and $\dot{\delta}(0)=0$. ii. Moreover, $\delta\in H^{2}({\mathbb{R}}^{+})$, but for $k\in\\{0,1,2\\}$, $t\delta^{(k)}\not\in L^{2}({\mathbb{R}}^{+})$. iii. For all $\alpha>0$ and $k\in\\{0,1,2\\}$ and for all $c>0$ and $T_{0}>0$, there exists $t>T_{0}$ such that $|\delta^{(k)}(t)|>ct^{-\frac{3}{2}-\alpha}.$ ###### Remark 4.7. The dispersive delay (convolution) term in (4.22) is responsible for the slow decay of the solution.
Indeed, in the non dispersive limit case $\kappa=0$, the branching points $s=\pm i\kappa^{-1}$ disappear from the transfer function $\hat{H}_{\mu}$ derived in (4.24) below, which then becomes ${\hat{H}}_{0}(s):=\frac{\tau^{2}_{0}s+\ell}{{\tau^{2}_{0}\,s^{2}+s\ell+1}},$ whose poles have a strictly negative real part, hence an exponential decay for $\delta$. ###### Proof. Since the kernel ${\mathcal{K}}^{1}_{\mu}$ belongs to $L^{1}({\mathbb{R}}^{+})$ (recall that the Bessel function $J_{1}(t)$ decays like $O(t^{-1/2})$), the proof of the first part of the proposition does not raise any particular problem. To prove the second part of the proposition, we need a careful analysis of the transfer function $\widehat{H}_{\mu}$ defined by the relation $\widehat{\delta}=\widehat{H}_{\mu}\delta_{0}$. Since $(\delta,\dot{\delta})\in C\cap L^{\infty}(\mathbb{R}^{+})$, the Laplace transforms of $\delta$ and $\dot{\delta}$ are well defined on $\mathbb{C}_{0}$, and after remarking that $\kappa s+\frac{1}{\sqrt{1+\kappa^{2}s^{2}}+\kappa s}=\sqrt{1+\kappa^{2}s^{2}},$ the Laplace transform of $\delta$ is, owing to (4.22), (4.24) $\hat{\delta}={\hat{H}}_{\mu}(s)\delta_{0}\quad\text{ where }\quad{\hat{H}}_{\mu}(s):=\frac{\tau^{2}_{\mu}s+\ell\sqrt{1+\kappa^{2}s^{2}}}{\tau^{2}_{\mu}\,s^{2}+s\ell\sqrt{1+\kappa^{2}s^{2}}+1}.$ ###### Lemma 4.4. The transfer function ${\hat{H}}_{\mu}$ defined in (4.24) is holomorphic on $\mathbb{C}_{0}$ and admits only two branching points at $\pm i\kappa^{-1}$. Moreover all the zeros of the denominator in (4.24) have strictly negative real part. ###### Proof of the lemma. We denote by $P$ the holomorphic function on $\mathbb{C}_{0}$ $P(s):=\tau^{2}_{\mu}\,s^{2}+s\ell\sqrt{1+\kappa^{2}s^{2}}+1,$ where the square root stands for the square root with positive real part.
Since $P$ has only two singularities which are the branching points $\pm i\kappa^{-1}$, we can extend it analytically on all the complex plane except on the cuts $i(-\infty,-\kappa^{-1})$ and $i(\kappa^{-1},\infty)$, and extend it continuously to the imaginary axis by the function $P^{*}$ given by $P^{*}(i\omega):=\begin{cases}1-\tau^{2}_{\mu}\,\omega^{2}+i\omega\ell\sqrt{1-\kappa^{2}\omega^{2}}&|\omega|\leq\kappa^{-1},\\\ 1-\tau^{2}_{\mu}\,\omega^{2}-|\omega|\ell\sqrt{\kappa^{2}\omega^{2}-1}&|\omega|>\kappa^{-1}.\end{cases}$ We first show that a zero $s$ of $P$ cannot be purely imaginary, then that if a zero is real, it must be strictly negative, and finally that if a zero is not a real number, it must satisfy $\Re(s)<0$. Step 1. The zeros of $P$ cannot be purely imaginary. Indeed, if $\omega$ were a solution of $P^{*}(i\omega)=0$, then, from the expression of $P^{*}(i\omega)$ given above, $\omega^{2}$ would be a real root of the second order polynomial $(\tau_{\mu}^{4}-\ell^{2}\kappa^{2})X^{2}+(\ell^{2}-2\tau^{2}_{\mu})X+1.$ But the discriminant of this polynomial is $\Delta=\ell^{4}+4\ell^{2}(\kappa^{2}-\tau^{2}_{\mu})<-\ell^{4}/3$ (since $\tau^{2}_{\mu}>\kappa^{2}+\ell^{2}/3$), which is negative, implying that the polynomial cannot have any real root. Step 2. The zeros of $P$ cannot belong to ${\mathbb{R}}^{+}$ from the simple observation that $P(\eta)>0$ for all $\eta\in{\mathbb{R}}^{+}$. Step 3. The zeros of $P$ cannot have positive real part. In order to prove this, we show here that $\Im(P(s))\neq 0$ for all $s=\eta+i\omega$ with $s\notin i\mathbb{R}\cup\mathbb{R}$ (the case $s\in{\mathbb{R}}^{+}$ having been dealt with in Step 2). The imaginary part of $P(s)$ is given by $\Im[P(s)]=2\tau^{2}_{\mu}\,\eta\,\omega+\eta\,\ell\Im[\sqrt{1+\kappa^{2}s^{2}}]+\omega\,\ell\Re[\sqrt{1+\kappa^{2}s^{2}}].$ By definition of the square root, $\Re[\sqrt{1+\kappa^{2}s^{2}}]\geq 0$ and the sign of $\Im[\sqrt{1+\kappa^{2}s^{2}}]$ is the same as the sign of the product $\eta\,\omega$.
The following table summarizes the sign of some quantities in different cases (where $+$ stands for strictly positive, $-$ stands for strictly negative and ind. signifies that the sign is indeterminate). $\eta$ | $\omega$ | $2\tau^{2}_{\mu}\,\eta\,\omega$ | $\eta\,\Im[\sqrt{1+\kappa^{2}s^{2}}]$ | $\omega\,\Re[\sqrt{1+\kappa^{2}s^{2}}]$ | $\Im[P(s)]$ ---|---|---|---|---|--- - | - | + | - | - | ind. - | + | - | + | + | ind. + | - | - | - | - | - + | + | + | + | + | + Therefore, if $\eta>0$ then $\Im[P(s)]$ is either strictly positive or strictly negative, so that it does not vanish. ∎ Using Lemma 4.4, we can extend continuously the transfer function ${\hat{H}}_{\mu}$ on the imaginary axis by ${\hat{H}}^{*}_{\mu}(i\omega)=\begin{cases}\displaystyle\frac{\tau_{\mu}^{2}i\omega+\ell\sqrt{1-\kappa^{2}\omega^{2}}}{-\tau^{2}_{\mu}\,\omega^{2}+i\omega\ell\sqrt{1-\kappa^{2}\omega^{2}}+1},&|\omega|\leq\kappa^{-1},\\\ \displaystyle\frac{\tau^{2}_{\mu}i\omega+i\ell{\rm sign}(\omega)\sqrt{\kappa^{2}\omega^{2}-1}}{-\tau^{2}_{\mu}\,\omega^{2}-|\omega|\ell\sqrt{\kappa^{2}\omega^{2}-1}+1},&|\omega|>\kappa^{-1}.\end{cases}$ Integrating $|{\hat{H}}^{*}_{\mu}(i\omega)|^{2}$ over $\mathbb{R}$ we get $\displaystyle\int_{\mathbb{R}}|{\hat{H}}^{*}_{\mu}(i\omega)|^{2}{\rm d}\omega=\displaystyle\int_{-\kappa^{-1}}^{\kappa^{-1}}|{\hat{H}}^{*}_{\mu}(i\omega)|^{2}{\rm d}\omega+\displaystyle\int_{|\omega|>\kappa^{-1}}|{\hat{H}}^{*}_{\mu}(i\omega)|^{2}{\rm d}\omega.$ The first integral is obviously finite (the denominator in (4.24) does not vanish on $i{\mathbb{R}}$). The second integral is also finite since $|{\hat{H}}^{*}_{\mu}(i\omega)|^{2}{\underset{|\omega|\to\infty}{\sim}}\omega^{-2}$. Then ${\hat{H}}_{\mu}$ belongs to the standard Hardy space $\mathcal{H}^{2}(\mathbb{C}_{0})$ and by the Paley-Wiener theorem (see Theorem 5.1 below), one has $\delta\in L^{2}({\mathbb{R}}^{+})$.
The same reasoning can be applied to $\widehat{\dot{\delta}}=\left(\frac{-1}{\tau^{2}_{\mu}\,s^{2}+s\ell\sqrt{1+\kappa^{2}s^{2}}+1}\right)\delta_{0}$ and $\widehat{\ddot{\delta}}=\left(\frac{-s}{\tau^{2}_{\mu}\,s^{2}+s\ell\sqrt{1+\kappa^{2}s^{2}}+1}\right)\delta_{0}$ so that $\dot{\delta}$ and $\ddot{\delta}$ also belong to $L^{2}({\mathbb{R}}^{+})$. Let us now prove that $u(t):=t\delta(t)$ does not belong to $L^{2}({\mathbb{R}}^{+})$. Denoting by $U$ the Laplace transform of $u$, one has $U(s):=(-1)\left(\frac{d}{ds}\right)\hat{H}_{\mu}(s)\delta_{0}\quad\mbox{ on }\quad{\mathbb{C}}_{0},$ and the following extension to the imaginary axis holds, $U^{*}(i\omega)=\begin{cases}\displaystyle(-1)\left(\frac{d}{d\omega}\right)\frac{\tau^{2}_{\mu}i\omega+\ell\sqrt{1-\kappa^{2}\omega^{2}}}{-\tau^{2}_{\mu}\,\omega^{2}+i\omega\ell\sqrt{1-\kappa^{2}\omega^{2}}+1}\delta_{0},&|\omega|\leq\kappa^{-1},\\\ \displaystyle(-1)\left(\frac{d}{d\omega}\right)\frac{\tau^{2}_{\mu}i\omega+i\ell{\rm sign}(\omega)\sqrt{\kappa^{2}\omega^{2}-1}}{-\tau^{2}_{\mu}\,\omega^{2}-|\omega|\ell\sqrt{\kappa^{2}\omega^{2}-1}+1}\delta_{0},&|\omega|>\kappa^{-1}.\end{cases}$ But $U$ is no longer bounded on the imaginary axis as it contains two non isolated singularities (of order $-1/2$ in the Puiseux series expansion) at $\pm i\kappa^{-1}$; the integral $\displaystyle\int_{-\kappa^{-1}}^{\kappa^{-1}}|U^{*}(i\omega)|^{2}d\omega$ is not finite and thus $U\notin\mathcal{H}^{2}(\mathbb{C}_{0})$. By the Paley-Wiener theorem, this implies that $u\notin L^{2}(\mathbb{R}^{+})$. The same reasoning can be applied to $V(s)=(-1)\left(\frac{d}{ds}\right)\left(\frac{-1}{\tau^{2}_{\mu}\,s^{2}+s\ell\sqrt{1+\kappa^{2}s^{2}}+1}\right)\delta_{0}$ and $W(s)=(-1)\left(\frac{d}{ds}\right)\left(\frac{-s}{\tau^{2}_{\mu}\,s^{2}+s\ell\sqrt{1+\kappa^{2}s^{2}}+1}\right)\delta_{0},$ which are respectively the Laplace transforms of $t\dot{\delta}$ and $t\ddot{\delta}$. This completes the proof of the second point of the proposition.
Let us now prove the third point by contradiction. Assuming that there exists $C>0$ such that for $t$ large enough $|\delta^{(k)}(t)|\leq Ct^{-\frac{3}{2}-\alpha},$ one gets $|t\delta^{(k)}(t)|^{2}\leq C^{2}t^{-1-2\alpha}$ for $t$ large enough, which implies $t\delta^{(k)}\in L^{2}(\mathbb{R}^{+})$ and contradicts the second point. ∎ In the non dispersive case, we showed in Corollary 4.4 that once the motion of the object is known, it is possible to find $q$ in the exterior domain by solving an initial boundary value problem for a Burgers-type scalar equation. This remains true in the present dispersive linear case, but the initial boundary value problem one has to solve is now nonlocal in time. Note that as in Remark 4.5, the corollary can easily be generalized to describe the waves created by an object in forced motion. ###### Corollary 4.6. The return to equilibrium problem for the linear Boussinesq equations (4.15)-(4.17) with initial condition (4.2) can be equivalently formulated as a scalar nonlocal initial boundary value problem on $q$ (4.25) $\begin{cases}\partial_{x}q+{\mathcal{K}}^{0}_{\mu}\ast\partial_{t}q&=0\qquad(t>0,\quad x>\ell),\\\ q_{|_{t=0}}&=0,\\\ q_{|_{x=\ell}}&=-\ell\dot{\delta},\end{cases}$ where ${\mathcal{K}}^{0}_{\mu}$ is defined in (4.18) while $\zeta$ is given in terms of $q$ by a convolution in time (4.26) $\zeta={\mathcal{K}}^{0}_{\mu}*q,$ with $\delta$ furnished by Proposition 4.2. ###### Remark 4.8. The nonlocal initial boundary value problem (4.25) is not standard. The most convenient way to handle it is to see it as an evolution equation with respect to $x$ rather than $t$; it then becomes a particular case of the nonlocal initial boundary value problems considered in Section 5. It is in particular a consequence of Theorem 5.2 below that (4.25) admits a unique solution $q\in C({\mathbb{R}}^{+}_{x};H^{1}({\mathbb{R}}^{+}_{t}))\cap C^{1}({\mathbb{R}}^{+}_{x};L^{2}({\mathbb{R}}^{+}_{t}))$.
Moreover, Proposition 5.2 and Corollary 5.1 imply that the solution is actually of class $C^{2}({\mathbb{R}}^{+}\times{\mathbb{R}}^{+})$ and infinitely regular with respect to time, showing that the dispersive terms induce a smoothing effect. Indeed, when $\kappa=0$, the first equation in (4.25) becomes (4.27) $\partial_{t}q+\partial_{x}q=0$ and the solution to the initial boundary value problem, explicitly given by $q(x,t)=\begin{cases}-\ell\dot{\delta}(t-(x-\ell))&\text{ for }t-(x-\ell)\geq 0\\\ 0&\text{ for }t-(x-\ell)<0,\end{cases}$ does not belong to $C^{1}({\mathcal{E}}^{+}\times\mathbb{R}^{+})$ because $\ddot{\delta}(0)=-\frac{\delta_{0}}{\tau^{2}_{0}}\neq 0$. ## 5. The initial boundary value problem for a class of nonlocal transport equations As shown in the previous section, the analysis of the return to equilibrium problem in the linear dispersive case leads to a nonlocal generalization of the transport equation. The analysis of the initial boundary value problem for such equations is not standard and we address it in this section. Since this subject is of interest in its own right, we work here with more standard notations. More precisely, we consider an evolution with respect to the time variable and a nonlocal term with respect to the space variable (the roles of $t$ and $x$ are reversed in §4.3). The domain of consideration is the quadrant $\\{x\geq 0,t\geq 0\\}$. The typical initial boundary value problem we shall consider is therefore of the form $\begin{cases}\partial_{t}u+{\mathcal{K}}*_{x}\partial_{x}u&=f,\\\ u_{|_{x=0}}&={\underline{u}},\\\ u_{|_{t=0}}&=u^{\rm in},\end{cases}$ for some convolution kernel ${\mathcal{K}}$ to be made precise later. After presenting some technical material in §5.1 on the functional setting and the Laplace transform, we recall in §5.2 some very classical facts on the initial and/or boundary value problems for the standard transport equation, making a distinction between the case of a positive and a negative velocity.
The nonlocal generalizations of these transport problems, in which $\partial_{x}u$ is replaced by a nonlocal term ${\mathcal{K}}*_{x}\partial_{x}u$, are addressed in §5.3; in particular, similarities and differences (such as the presence of an additional compatibility condition and a smoothing effect) with their local counterparts are discussed. NB. To avoid confusion with the computations performed in §4.3, where the Laplace transform $\widehat{u}$ was taken with respect to time (with dual variable $s$), we denote throughout this section by $\widetilde{u}$ the Laplace transform with respect to $x$ (with dual variable $p=\alpha+i\xi$). ### 5.1. Functional setting and a brief reminder on the Laplace transform We gather here some definitions of functional spaces that play an important role in the analysis of initial boundary value problems, as well as some classical facts on the Laplace transform. #### 5.1.1. Functional setting In the study of initial boundary value problems for hyperbolic systems of equations, the space ${\mathbb{X}}^{n}$ plays a central role; it is defined for all $n\in{\mathbb{N}}$ as ${\mathbb{X}}^{n}=\bigcap_{j=0}^{n}C^{j}({\mathbb{R}}^{+}_{t};H^{n-j}({\mathbb{R}}^{+}_{x}));$ in particular, for all $u\in{\mathbb{X}}^{n}$, one can define for all $t\geq 0$ the quantity $|||u(t,\cdot)|||_{n}:=\sup_{j+k\leq n}|\partial_{t}^{j}\partial_{x}^{k}u(t,\cdot)|_{L^{2}({\mathbb{R}}^{+})}.$ Let us also define ${\mathbb{Y}}^{n}$ as ${\mathbb{Y}}^{n}=\bigcap_{j=0}^{n}W^{j,1}_{\rm loc}({\mathbb{R}}^{+}_{t};H^{n-j}({\mathbb{R}}^{+}_{x})).$ When working with nonlocal transport equations, it is convenient to introduce weighted versions of these spaces.
For any $a\in{\mathbb{R}}$, and $k\in{\mathbb{N}}$, we introduce therefore $\displaystyle L^{2}_{a}({\mathbb{R}}^{+})$ $\displaystyle:=\\{u\in L^{2}_{\rm loc}({\mathbb{R}}^{+}),\,|u|_{L^{2}_{a}}:=\Big{(}\int_{{\mathbb{R}}^{+}}e^{-2ax}|u(x)|^{2}{\rm d}x\Big{)}^{1/2}<\infty\\},$ $\displaystyle H^{k}_{a}({\mathbb{R}}^{+})$ $\displaystyle:=\\{u\in L^{2}_{a}({\mathbb{R}}^{+}),\,|u|_{H^{k}_{a}}:=\sum_{l=0}^{k}|\partial_{x}^{l}u|_{L^{2}_{a}}<\infty\\},$ and denote by ${\mathbb{X}}^{n}_{a}$ and ${\mathbb{Y}}^{n}_{a}$ the weighted version of the spaces ${\mathbb{X}}^{n}$ and ${\mathbb{Y}}^{n}$ obtained by replacing all $L^{2}({\mathbb{R}}^{+}_{x})$ based spaces by their $L^{2}_{a}({\mathbb{R}}^{+}_{x})$ analogue: we also write $|||u(t,\cdot)|||_{a,n}:=\sup_{j+k\leq n}|\partial_{t}^{j}\partial_{x}^{k}u(t,\cdot)|_{L_{a}^{2}({\mathbb{R}}^{+})}.$ #### 5.1.2. Some results on the Laplace transform For all $u\in L^{1}_{\rm loc}({\mathbb{R}}^{+})$, the Laplace transform is defined by $\widetilde{u}(p)=\int_{0}^{\infty}e^{-px}u(x){\rm d}x,$ for all $p=\alpha+i\xi\in{\mathbb{C}}$ such that this integral converges absolutely. Using for all $a\in{\mathbb{R}}$ the notation ${\mathbb{C}}_{a}=\\{p\in{\mathbb{C}},\Re p>a\\},$ we can define the Hardy space $\mathcal{H}^{2}(\mathbb{C}_{a}):=\Big{\\{}U\mbox{\rm{ holomorphic on }}\mathbb{C}_{a}\,;\,||U||^{2}_{\mathcal{H}^{2}(\mathbb{C}_{a})}:=\sup_{\alpha>a}\int_{\mathbb{R}}|U(\alpha+i\xi)|^{2}d\xi<\infty\Big{\\}}.$ Every function $U\in\mathcal{H}^{2}(\mathbb{C}_{a})$ admits a boundary trace denoted $U^{*}$ on $a+i{\mathbb{R}}$, that belongs to $L^{2}(a+i{\mathbb{R}})$, and $\mathcal{H}^{2}(\mathbb{C}_{a})$ is a Hilbert space for the scalar product $\langle F,G\rangle_{\mathcal{H}^{2}(\mathbb{C}_{a})}=\frac{1}{2\pi}\int_{-\infty}^{\infty}F^{*}(a+i\xi)\overline{G^{*}(a+i\xi)}{\rm d}\xi.$ Recalling that the weighted space $L^{2}_{a}({\mathbb{R}}^{+})$ is defined in the previous section, we can state the well-known Paley-Wiener theorem. 
###### Theorem 5.1. Let $a\in{\mathbb{R}}$. The Laplace-transform $\mathcal{L}:\begin{array}[]{lcl}L_{a}^{2}({\mathbb{R}}^{+})\to\mathcal{H}^{2}(\mathbb{C}_{a})\\\ u\mapsto\widetilde{u}\end{array}$ is an isometry between Hilbert spaces. Recalling that $\widetilde{\frac{du}{dx}}(p)=p\widetilde{u}(p)-u(0)$ (whenever these quantities make sense), we also have the following characterization of the weighted Sobolev spaces $H^{k}_{a}({\mathbb{R}}^{+})$. ###### Proposition 5.1. Let $k\in{\mathbb{N}}$ and $({\underline{u}}_{0},\dots,{\underline{u}}_{k-1})\in{\mathbb{R}}^{k}$. The following assertions are equivalent, * i. One has $u\in H^{k}_{a}({\mathbb{R}}^{+})$ and for all $0\leq j\leq k-1$, $\lim_{x\to 0^{+}}\partial_{x}^{j}u(x)={\underline{u}}_{j}$. * ii. For all $0\leq j\leq k$, the mapping $p\mapsto p^{j}\widetilde{u}(p)-\sum_{i=0}^{j-1}p^{j-1-i}{\underline{u}}_{i}$ belongs to ${\mathcal{H}}^{2}({\mathbb{C}}_{a})$ (with the sum taken to be zero if $j=0$). Moreover, for all $0\leq j\leq k$, one has $\widetilde{\partial_{x}^{j}u}=p^{j}\widetilde{u}(p)-\sum_{i=0}^{j-1}p^{j-1-i}{\underline{u}}_{i}$. ### 5.2. Reminder on the standard transport equation Let us start with some considerations on the standard initial boundary value problem for the transport equations $\partial_{t}u+\partial_{x}u=f$ (referred to as right-going case) and $\partial_{t}u-\partial_{x}u=f$ (left-going case). #### 5.2.1. The right-going case We consider here the following initial boundary value problem (5.1) $\begin{cases}\partial_{t}u+\partial_{x}u&=f,\\\ u_{|_{x=0}}&={\underline{u}},\\\ u_{|_{t=0}}&=u^{\rm in},\end{cases}$ with $f\in{\mathbb{Y}}^{1}$, $u^{\rm in}\in H^{1}({\mathbb{R}}^{+})$ and ${\underline{u}}\in H^{1}_{\rm loc}({\mathbb{R}}^{+})$. 
In order for (5.1) to admit a solution $u\in{\mathbb{X}}^{1}=C({\mathbb{R}}^{+}_{t};H^{1}({\mathbb{R}}^{+}_{x}))\cap C^{1}({\mathbb{R}}^{+}_{t};L^{2}({\mathbb{R}}^{+}_{x}))$, and therefore continuous on $[0,\infty)\times[0,\infty)$, it is necessary that ${\underline{u}}(t=0)=u^{\rm in}(x=0).$ This compatibility condition is actually sufficient to ensure the existence and uniqueness of such a solution. Even if the data are more regular, i.e. if $f\in{\mathbb{Y}}^{n}$, $u^{\rm in}\in H^{n}({\mathbb{R}}^{+})$ and ${\underline{u}}\in H^{n}_{\rm loc}({\mathbb{R}}^{+})$ for some $n>1$, one cannot expect in general the solution to be in ${\mathbb{X}}^{n}$. It is a general feature of first order hyperbolic systems that such a regularity is achieved if and only if $n$ algebraic compatibility conditions are satisfied (see for instance [4, 36, 37, 18]). Of course, the situation is the same if we choose to work in the weighted space ${\mathbb{X}}^{n}_{a}$ since the presence of the weight changes the integrability properties at infinity, but not local regularity. In the present case, this can easily be checked on the following explicit representation of the solution (5.2) $u(t,x)=u^{\rm in}(x-t)+{\underline{u}}(t-x)+\int_{0}^{t}f(t^{\prime},x-t+t^{\prime}){\rm d}t^{\prime},$ where $u^{\rm in}$, ${\underline{u}}$ and $f(t,\cdot)$ are extended by zero in order to be considered as functions defined on the full line ${\mathbb{R}}$ instead of ${\mathbb{R}}^{+}$. #### 5.2.2. The left-going case It is well-known that an initial boundary value problem similar to (5.1) is ill-posed for the left-going transport equation $\partial_{t}u-\partial_{x}u=f$. 
Indeed, the initial value problem (without boundary condition) (5.3) $\begin{cases}\partial_{t}u-\partial_{x}u&=f,\\\ u_{|_{t=0}}&=u^{\rm in},\end{cases}$ is well posed, and the solution can be explicitly written as (5.4) $u(t,x)=u^{\rm in}(x+t)+\int_{0}^{t}f(t^{\prime},x+t-t^{\prime}){\rm d}t^{\prime};$ in particular, the boundary value ${\underline{u}}$ is given in terms of $u^{\rm in}$ and $f$ through the relation ${\underline{u}}(t)=u^{\rm in}(t)+\int_{0}^{t}f(t^{\prime},t-t^{\prime}){\rm d}t^{\prime}$ and therefore cannot be freely prescribed. Note that using this relation in (5.4), one can express the solution in terms of the boundary data instead of the initial data, namely, (5.5) $u(t,x)={\underline{u}}(x+t)-\int_{0}^{x}f(x+t-x^{\prime},x^{\prime}){\rm d}x^{\prime}.$ This proves in particular that the following boundary value problem (without initial condition) (5.6) $\begin{cases}\partial_{t}u-\partial_{x}u&=f,\\\ u_{|_{x=0}}&={\underline{u}},\end{cases}$ is also well-posed for the left-going transport equation. We note finally that for the initial value problem (5.3) as well as for the boundary value problem (5.6) (which are essentially the same by switching the variables $t$ and $x$) the solution $u$ belongs to ${\mathbb{X}}^{1}$ if the data are smooth enough without having to impose any compatibility condition, contrary to what we saw for the right-going case. ### 5.3. 
The nonlocal transport equation The aim of this section is to investigate the behavior of nonlocal perturbations of the right-going and left-going transport equations respectively given by (5.7) $\partial_{t}u+{\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}u=f\quad\mbox{ and }\quad\partial_{t}u-{\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}u=f,$ where $*_{x}$ stands for the causal convolution with respect to the space variable, $\forall x\in{\mathbb{R}}^{+},\qquad f*_{x}g(x)=\int_{0}^{x}f(x-x^{\prime})g(x^{\prime}){\rm d}x^{\prime}$ and with the Bessel kernel ${\mathcal{K}}^{0}_{\mu}$ as in (4.18); in particular, we recall that $\widetilde{{\mathcal{K}}^{0}_{\mu}}(p)=\frac{1}{\sqrt{1+\kappa^{2}p^{2}}}\qquad(\kappa^{2}=\mu/3).$ ###### Remark 5.1. Though we consider here the Bessel kernel ${\mathcal{K}}^{0}_{\mu}$, the results of this section can easily be adapted to other kernels. An important feature of the family $({\mathcal{K}}^{0}_{\mu})_{\mu>0}$ is that it formally converges to the Dirac mass at $x=0$ as $\mu\to 0$, so that the nonlocal transport equations (5.7) formally converge to the standard right-going and left-going transport equations respectively, namely, $\partial_{t}u+\partial_{x}u=f\quad\mbox{ and }\quad\partial_{t}u-\partial_{x}u=f.$ A natural question is therefore to ask whether the nonlocal initial and/or boundary value problems behave similarly to their local counterparts described in §5.2. #### 5.3.1. 
The right-going case We want to address in this section the same kind of initial boundary value problem as (5.1), but where the space derivative is now replaced by a nonlocal term, namely, we consider (5.8) $\begin{cases}\partial_{t}u+{\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}u&=f,\\\ u_{|_{x=0}}&={\underline{u}},\\\ u_{|_{t=0}}&=u^{\rm in}.\end{cases}$ As for (5.1), if there exists a solution $u\in{\mathbb{X}}^{1}$ (or more generally in the weighted version ${\mathbb{X}}^{1}_{a}$ with $a\geq 0$) to (5.8), then it is continuous at $x=t=0$ and the data must therefore satisfy the same compatibility condition (5.9) ${\underline{u}}(t=0)=u^{\rm in}(x=0)$ as for the standard transport equation. There is however a new compatibility condition that arises here. Indeed, since ${\mathcal{K}}^{0}_{\mu}\in L^{1}_{\rm loc}({\mathbb{R}}^{+})$, the trace of ${\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}u$ at $x=0$ is well defined if $\partial_{x}u\in C({\mathbb{R}}^{+}_{t};L^{2}_{\rm loc}({\mathbb{R}}^{+}_{x}))$, and it must be equal to zero by definition of the convolution. Taking the trace of the first equation in (5.8), one therefore finds the following additional compatibility condition for the existence of solutions with the aforementioned regularity, (5.10) $\forall t\in{\mathbb{R}}^{+},\qquad\partial_{t}{\underline{u}}(t)=f(t,0).$ ###### Remark 5.2. The same procedure applied to the standard transport problem (5.1) yields the relation $\partial_{x}u(t,0)=-\partial_{t}{\underline{u}}(t)+f(t,0),$ which is not a compatibility condition but information on the behavior of the trace of $\partial_{x}u$ at the boundary. If these two compatibility conditions are satisfied, the theorem below shows the well-posedness of the nonlocal initial boundary value problem (5.8). 
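Before the statement, a quick sanity check on the kernel itself is possible. The kernel ${\mathcal{K}}^{0}_{\mu}$ is defined in (4.18), outside this section; the sketch below assumes the Bessel form ${\mathcal{K}}^{0}_{\mu}(x)=\kappa^{-1}J_{0}(x/\kappa)$, an assumption consistent with the stated transform $\widetilde{{\mathcal{K}}^{0}_{\mu}}(p)=(1+\kappa^{2}p^{2})^{-1/2}$. Using the integral representation $J_{0}(y)=\frac{1}{\pi}\int_{0}^{\pi}\cos(y\sin\theta)\,{\rm d}\theta$ together with $\int_{0}^{\infty}e^{-px}\cos(bx)\,{\rm d}x=p/(p^{2}+b^{2})$, the Laplace transform of this kernel reduces to a finite integral that can be checked numerically (the value of $\kappa$ is illustrative):

```python
import math

def simpson(g, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    return (g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))) * h / 3.0

kappa = 0.7   # illustrative value of kappa = sqrt(mu/3)
for p in (0.5, 1.0, 3.0):
    # Laplace transform of (1/kappa) J0(x/kappa), reduced via the
    # integral representation of J0 and L[cos(bx)](p) = p/(p^2 + b^2)
    lhs = (1.0 / kappa) * simpson(lambda th: p / (p**2 + (math.sin(th) / kappa)**2), 0.0, math.pi) / math.pi
    rhs = 1.0 / math.sqrt(1.0 + (kappa * p)**2)
    assert abs(lhs - rhs) < 1e-8
```

The same reduction also makes the slow $O(x^{-1/2})$ decay of the kernel at infinity plausible, since it is the decay rate of $J_{0}$.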
We recall that the functional spaces have been defined in §5.1.1; note also that we have to work in weighted spaces here in order to compensate for the slow decay of ${\mathcal{K}}^{0}_{\mu}$ at infinity (which is of order $O(|x|^{-1/2})$), and that more information on the regularity of the solution is given in Corollary 5.1 below. ###### Theorem 5.2. Let $a>0$ and $f\in{\mathbb{Y}}^{1}_{a}$, $u^{\rm in}\in H_{a}^{1}({\mathbb{R}}_{x}^{+})$ and ${\underline{u}}\in W^{1,1}_{\rm loc}({\mathbb{R}}^{+}_{t})$. Assume moreover that the compatibility conditions (5.9) and (5.10) hold. Then there exists a unique solution $u\in{\mathbb{X}}^{1}_{a}$ to the nonlocal initial boundary value problem (5.8), and there exists $c_{a}>0$ such that, for all $t\in{\mathbb{R}}^{+}$, $|u(t,\cdot)|_{H_{a}^{1}({\mathbb{R}}^{+}_{x})}\leq e^{-c_{a}t}|u^{\rm in}|_{H_{a}^{1}({\mathbb{R}}^{+}_{x})}+\int_{0}^{t}e^{-c_{a}(t-t^{\prime})}\big{[}|f(t^{\prime},\cdot)|_{H_{a}^{1}({\mathbb{R}}^{+}_{x})}+|{\mathcal{K}^{0}_{\mu}}|_{L_{a}^{2}}|{\underline{u}}(t^{\prime})|\big{]}{\rm d}t^{\prime}.$ If moreover ${\underline{u}}=0$ then the result still holds with $a=c_{a}=0$. ###### Proof. For the sake of clarity, we simply write ${\mathcal{K}}$ instead of ${\mathcal{K}}^{0}_{\mu}$. 
Taking the Laplace transform of (5.8) with respect to space, one gets that $\partial_{t}\widetilde{u}+\widetilde{{\mathcal{K}}}(p)(p\widetilde{u}-{\underline{u}})=\widetilde{f}\quad\mbox{ on }\quad{\mathbb{R}}^{+}.$ Solving this ODE with initial condition $\widetilde{u}_{|_{t=0}}=\widetilde{u^{\rm in}}$, one gets the following expression for $\widetilde{u}$, for all $p\in{\mathbb{C}}_{a}$ and $t\in{\mathbb{R}}^{+}$, $\displaystyle\widetilde{u}(t,p)$ $\displaystyle=e^{-p\widetilde{{\mathcal{K}}}(p)t}\widetilde{u^{\rm in}}(p)+\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\widetilde{f}{\rm d}t^{\prime}+\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\widetilde{{\mathcal{K}}}(p){\underline{u}}(t^{\prime}){\rm d}t^{\prime}$ (5.11) $\displaystyle=:\widetilde{u_{1}}+\widetilde{u_{2}}+\widetilde{u_{3}}.$ Since the Paley-Wiener Theorem 5.1 states that the Laplace transform is an isometry between $L_{a}^{2}({\mathbb{R}}^{+})$ and ${\mathcal{H}}^{2}({\mathbb{C}}_{a})$, the following lemma shows that both $u_{1}$ and $u_{2}$ belong to $C({\mathbb{R}}^{+}_{t};L^{2}_{a}({\mathbb{R}}^{+}_{x}))$ if $u^{\rm in}\in L^{2}_{a}({\mathbb{R}}^{+}_{x})$ and $f\in L^{1}_{\rm loc}({\mathbb{R}}^{+}_{t};L^{2}_{a}({\mathbb{R}}^{+}_{x}))$. ###### Lemma 5.1. Let $a\geq 0$. For all $U\in{\mathcal{H}}^{2}({\mathbb{C}}_{a})$, the mapping $\begin{array}[]{lcl}{\mathbb{R}}^{+}&\to&{\mathcal{H}}^{2}({\mathbb{C}}_{a})\\\ t&\mapsto&\big{(}p\mapsto e^{-p\widetilde{{\mathcal{K}}}(p)t}U(p)\big{)}\end{array}$ is well defined and continuous, and for all $t\in{\mathbb{R}}^{+}$, $\|e^{-p\widetilde{{\mathcal{K}}}(p)t}U\|_{{\mathcal{H}}^{2}({\mathbb{C}}_{a})}\leq\|U\|_{{\mathcal{H}}^{2}({\mathbb{C}}_{a})}$. If moreover $a>0$ then there exists $c_{a}>0$ such that for all $t\in{\mathbb{R}}^{+}$, $\|e^{-p\widetilde{{\mathcal{K}}}(p)t}U\|_{{\mathcal{H}}^{2}({\mathbb{C}}_{a})}\leq e^{-c_{a}t}\|U\|_{{\mathcal{H}}^{2}({\mathbb{C}}_{a})}.$ ###### Proof of the lemma. 
Except for the last assertion, we consider only the case $a=0$ since the case $a>0$ can easily be deduced from it. From the definition of ${\mathcal{H}}^{2}({\mathbb{C}}_{0})$ and Lebesgue’s dominated convergence theorem, it is sufficient to prove that $e^{-p\widetilde{{\mathcal{K}}}(p)t}$ is holomorphic and bounded on ${\mathbb{C}}_{0}$. The fact that it is holomorphic directly stems from the explicit expression $\widetilde{{\mathcal{K}}}(p)=(1+\kappa^{2}p^{2})^{-1/2}$. For the boundedness, this is a consequence of the fact that $\Re(p\widetilde{{\mathcal{K}}}(p))\geq 0$ on ${\mathbb{C}}_{0}$, as we now prove. For all $p=\alpha+i\xi\in\mathbb{C}_{0}$, one computes (5.12) $\Re(p\widetilde{{\mathcal{K}}}(p))=\frac{\alpha\,\Re({\sqrt{1+\kappa^{2}p^{2}}})+\xi\,\Im({\sqrt{1+\kappa^{2}p^{2}}})}{|\sqrt{1+\kappa^{2}p^{2}}|^{2}}.$ Since $\Re(\sqrt{1+\kappa^{2}p^{2}})$ is positive (by definition of the square root) and the sign of $\Im({\sqrt{1+\kappa^{2}p^{2}}})$ is the same as the sign of the product $\alpha\,\xi$, one gets the result. Since we have proved that $\Re(p\widetilde{{\mathcal{K}}}(p))\geq 0$ on ${\mathbb{C}}_{0}$, the last assertion follows if we can prove that $\Re(p\widetilde{{\mathcal{K}}}(p))$ does not vanish on ${\mathbb{C}}_{a}$ if $a>0$. Since both terms in the numerator in (5.12) are nonnegative, both must vanish if $\Re(p\widetilde{{\mathcal{K}}}(p))$ vanishes. Since $\alpha>0$ on ${\mathbb{C}}_{0}$, this implies that there should be $p=\alpha+i\xi\in{\mathbb{C}}_{a}$ such that $\Re(\sqrt{1+\kappa^{2}p^{2}})=0$ and $\xi\,\Im({\sqrt{1+\kappa^{2}p^{2}}})=0$, which is obviously not possible. 
∎ Remarking that for any $a>0$, one has $\widetilde{\mathcal{K}}\in{\mathcal{H}}^{2}({\mathbb{C}}_{a})$, it is also a direct consequence of the lemma that there is $c_{a}>0$ such that $\displaystyle\|{\widetilde{u_{3}}}\|_{{\mathcal{H}}^{2}({\mathbb{C}}_{a})}$ $\displaystyle\leq\|{\widetilde{{\mathcal{K}}}}\|_{{\mathcal{H}}^{2}({\mathbb{C}}_{a})}\int_{0}^{t}e^{-c_{a}(t-t^{\prime})}|{\underline{u}}(t^{\prime})|{\rm d}t^{\prime}.$ Together with the results already proved on $\widetilde{u_{1}}$ and $\widetilde{u_{2}}$, we deduce (see the Paley-Wiener Theorem 5.1 above) that $|u(t,\cdot)|_{L_{a}^{2}({\mathbb{R}}^{+}_{x})}\leq e^{-c_{a}t}|u^{\rm in}|_{L_{a}^{2}({\mathbb{R}}^{+}_{x})}+\int_{0}^{t}e^{-c_{a}(t-t^{\prime})}\big{[}|f(t^{\prime},\cdot)|_{L_{a}^{2}({\mathbb{R}}^{+}_{x})}+|{\mathcal{K}}|_{L_{a}^{2}}|{\underline{u}}(t^{\prime})|\big{]}{\rm d}t^{\prime}.$ In order to conclude the proof of the theorem, we still need to control $\partial_{x}u$ and $\partial_{t}u$. * • Control of $\partial_{x}u$. We want to show that $\partial_{x}u\in C({\mathbb{R}}^{+}_{t};L^{2}_{a}({\mathbb{R}}^{+}_{x}))$, or equivalently that $\widetilde{\partial_{x}u}\in C({\mathbb{R}}^{+}_{t};{\mathcal{H}}^{2}({\mathbb{C}}_{a}))$. 
Since $\widetilde{\partial_{x}u}=p\widetilde{u}-{\underline{u}}$, we consider $p\widetilde{u}(t,p)=e^{-p\widetilde{{\mathcal{K}}}(p)t}p\widetilde{u^{\rm in}}(p)+\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}p\widetilde{f}{\rm d}t^{\prime}+\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}p\widetilde{{\mathcal{K}}}(p){\underline{u}}(t^{\prime}){\rm d}t^{\prime}.$ Writing $p\widetilde{u^{\rm in}}=\widetilde{\partial_{x}u^{\rm in}}+u^{\rm in}(0)$, $p\widetilde{f}(t,p)=\widetilde{\partial_{x}f}(t,p)+f(t,0)$, we can remark that $\displaystyle\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}p\widetilde{{\mathcal{K}}}(p){\underline{u}}(t^{\prime}){\rm d}t^{\prime}$ $\displaystyle=\int_{0}^{t}\partial_{t^{\prime}}\big{(}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\big{)}{\underline{u}}(t^{\prime}){\rm d}t^{\prime}$ $\displaystyle={\underline{u}}(t)-e^{-p\widetilde{{\mathcal{K}}}(p)t}{\underline{u}}(0)-\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\partial_{t}{\underline{u}}(t^{\prime}){\rm d}t^{\prime},$ from which we deduce that $\displaystyle\widetilde{\partial_{x}u}(t,p)=$ $\displaystyle e^{-p\widetilde{{\mathcal{K}}}(p)t}\widetilde{\partial_{x}u^{\rm in}}(p)+\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\widetilde{\partial_{x}f}{\rm d}t^{\prime}$ $\displaystyle+e^{-p\widetilde{{\mathcal{K}}}(p)t}\big{(}u^{\rm in}(0)-{\underline{u}}(0)\big{)}+\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\big{(}f(t^{\prime},0)-\partial_{t}{\underline{u}}(t^{\prime})\big{)}{\rm d}t^{\prime}.$ While the first two components of the right-hand side belong to $C({\mathbb{R}}^{+}_{t};{\mathcal{H}}^{2}({\mathbb{C}}_{a}))$ by Lemma 5.1, the last two do not, unless the compatibility conditions given in the statement of the theorem are satisfied, in which case these two components cancel and the result follows together with the upper bound $|\partial_{x}u(t,\cdot)|_{L^{2}_{a}}\leq|\partial_{x}u^{\rm 
in}|_{L^{2}_{a}}+\int_{0}^{t}e^{-c_{a}(t-t^{\prime})}|\partial_{x}f(t^{\prime},\cdot)|_{L^{2}_{a}}{\rm d}t^{\prime}.$ * • Control of $\partial_{t}u$. Using the equations, one has $\displaystyle|\partial_{t}u|_{L^{2}_{a}}$ $\displaystyle\leq|{\mathcal{K}}*_{x}\partial_{x}u|_{L^{2}_{a}}+|f|_{L^{2}_{a}}$ $\displaystyle\leq|{\mathcal{K}}|_{L^{1}_{a}}|\partial_{x}u|_{L^{2}_{a}}+|f|_{L^{2}_{a}},$ with $L^{1}_{a}=L^{1}({\mathbb{R}}^{+},e^{-ax}{\rm d}x)$, showing as needed that $\partial_{t}u\in C({\mathbb{R}}^{+}_{t};L^{2}_{a}({\mathbb{R}}^{+}_{x}))$. The theorem follows easily. ∎ ###### Remark 5.3. As explained above in Remark 5.1, the initial boundary value problem (5.8) can be seen as a nonlocal perturbation of the standard transport problem (5.1) toward which it formally converges when $\mu\to 0$. There seems however to be some discrepancy because two compatibility conditions, namely, (5.9) and (5.10), are needed to ensure the existence of solutions $u\in{\mathbb{X}}^{1}_{a}$ to (5.8), while the sole compatibility condition (5.9) is sufficient to get a similar result for the standard transport problem (5.1). One should explain why the second compatibility condition (5.10) disappears in the formal limit $\mu=0$. The reason is that (5.10) is here to ensure continuity of the solution at the boundary $x=0$. 
Indeed, by the initial value theorem, we know that $\lim_{x\to 0^{+}}u(t,x)=\lim_{p\in{\mathbb{C}}_{a},|p|\to\infty}p\widetilde{u}(t,p)$, and we therefore get from the Laplace representation formula (5.11) that $\lim_{x\to 0^{+}}u(t,x)=e^{-\frac{t}{\kappa}}u^{\rm in}(0)+\int_{0}^{t}e^{-\frac{t-t^{\prime}}{\kappa}}f(t^{\prime},0){\rm d}t^{\prime}+\int_{0}^{t}e^{-\frac{t-t^{\prime}}{\kappa}}\frac{1}{\kappa}{\underline{u}}(t^{\prime}){\rm d}t^{\prime}$ where we used the fact that $\lim_{p\in{\mathbb{C}}_{a},|p|\to\infty}p\widetilde{{\mathcal{K}}}(p)=\kappa^{-1}$; after an integration by parts, the right-hand side can be written ${\underline{u}}(t)+e^{-\frac{t}{\kappa}}\big{(}u^{\rm in}(0)-{\underline{u}}(0)\big{)}+\int_{0}^{t}e^{-\frac{t-t^{\prime}}{\kappa}}\big{(}f(t^{\prime},0)-\partial_{t}{\underline{u}}(t^{\prime})\big{)}{\rm d}t^{\prime},$ so that, if the first compatibility condition (5.9) is satisfied, one has $\lim_{x\to 0^{+}}u(t,x)-{\underline{u}}(t)=\int_{0}^{t}e^{-\frac{t-t^{\prime}}{\kappa}}\big{(}f(t^{\prime},0)-\partial_{t}{\underline{u}}(t^{\prime})\big{)}{\rm d}t^{\prime},$ which is nonzero if the second compatibility condition is not satisfied, hence a lack of continuity at $x=0$ (there would then be a Dirac mass at $x=0$ in the expression for $\partial_{x}u(t,\cdot)$, which would therefore not be in $L^{2}_{a}({\mathbb{R}}^{+}_{x})$, as seen in the proof). However, one readily observes that $\lim_{\mu\to 0}\int_{0}^{t}e^{-\frac{t-t^{\prime}}{\kappa}}\big{(}f(t^{\prime},0)-\partial_{t}{\underline{u}}(t^{\prime})\big{)}{\rm d}t^{\prime}=0\qquad(\kappa^{2}=\mu/3),$ so that this discontinuity shrinks to zero in the limit $\mu\to 0$, explaining why the second compatibility condition is no longer necessary in the endpoint case $\mu=0$. 
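The shrinking of this boundary mismatch as $\mu\to 0$ (equivalently $\kappa\to 0$) is easy to observe numerically; the data $f(t,0)$ and ${\underline{u}}$ below are illustrative choices, not taken from the text:

```python
import math

# Illustrative data (not from the text): f(t,0) and the time derivative of u_b
f0  = lambda s: math.cos(s)
dub = lambda s: 2.0 * math.cos(2.0 * s)   # \partial_t \underline{u}(s)

def gap(kappa, t=1.0, n=4000):
    # boundary mismatch  \int_0^t e^{-(t-s)/kappa} (f(s,0) - u_b'(s)) ds  (composite Simpson)
    h = t / n
    g = lambda s: math.exp(-(t - s) / kappa) * (f0(s) - dub(s))
    return (g(0.0) + g(t) + sum((4 if i % 2 else 2) * g(i * h) for i in range(1, n))) * h / 3.0

gaps = [abs(gap(k)) for k in (0.5, 0.1, 0.02)]
# the discontinuity at x = 0 shrinks (roughly like kappa) as mu -> 0
assert gaps[0] > gaps[1] > gaps[2] > 0.0
assert gaps[2] < 0.05
```

The exponential weight concentrates the integral on a boundary layer of width $O(\kappa)$, which is why the mismatch is of size $O(\kappa)$ for smooth data.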
Before going further, we recall that there are two possibilities to define fractional derivatives of order $\alpha\in(0,1)$ on ${\mathbb{R}}^{+}$ using the convolution kernel ${\mathfrak{K}}_{\alpha}(x)=x^{-\alpha}/{\Gamma(1-\alpha)}$, with $\Gamma$ the Euler Gamma function, namely, the Riemann-Liouville and Caputo derivatives, defined respectively as $D^{\alpha}_{\rm RL}u=\partial_{x}\big{(}{\mathfrak{K}}_{\alpha}*_{x}u)\quad\mbox{ and }\quad D^{\alpha}_{\rm C}u={\mathfrak{K}}_{\alpha}*_{x}\partial_{x}u.$ In the nonlocal initial boundary value problem (5.8), the space derivative $\partial_{x}u$ in the standard transport equation has been replaced by the nonlocal term ${\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}u$, which can be considered as a generalized derivative of Caputo type, with the kernel ${\mathfrak{K}}_{\alpha}$ replaced by the Bessel kernel ${\mathcal{K}}^{0}_{\mu}$. It is noteworthy that, working with the Riemann-Liouville version of this operator, namely $\partial_{x}\big{(}{\mathcal{K}}^{0}_{\mu}*_{x}u)$, the situation is drastically different. Indeed, as shown in the following proposition, it is not possible to impose boundary data anymore since the knowledge of the initial data suffices to fully determine the solution; in other words, the initial value problem (5.13) $\begin{cases}\partial_{t}u+\partial_{x}\big{(}{\mathcal{K}}^{0}_{\mu}*_{x}u)&=f,\\\ u_{|_{t=0}}&=u^{\rm in},\end{cases}$ is well posed on ${\mathbb{R}}^{+}_{t}\times{\mathbb{R}}^{+}_{x}$. In particular, the trace of the solution at the boundary $x=0$ is determined by $f$ and $u^{\rm in}$ and therefore cannot be imposed. We also show that if the data $u^{\rm in}$ and $f$ are smoother, then the solution is in ${\mathbb{X}}_{a}^{2}$, but generally not in ${\mathbb{X}}_{a}^{3}$ or higher in the absence of additional compatibility conditions (we show however that the regularity in time can be higher). ###### Proposition 5.2. 
Let $a>0$, $n=1$ or $2$, and $f\in{\mathbb{Y}}^{n}_{a}$ and $u^{\rm in}\in H_{a}^{n}({\mathbb{R}}_{x}^{+})$. Then there exists a unique solution $u\in{\mathbb{X}}_{a}^{n}$ to the nonlocal initial value problem (5.13). Moreover, one has $u(t,\cdot)_{|_{x=0}}={\underline{u}}(t)$ for all $t\in{\mathbb{R}}^{+}$, with ${\underline{u}}(t)$ given by ${\underline{u}}(t)=e^{-\frac{t}{\kappa}}u^{\rm in}(0)+\int_{0}^{t}e^{-\frac{t-t^{\prime}}{\kappa}}f(t^{\prime},0){\rm d}t^{\prime}.$ If in addition $f\in C^{q}({\mathbb{R}}^{+}_{t};H^{n}_{a}({\mathbb{R}}^{+}_{x}))$ for some $q\in{\mathbb{N}}$ then one also has $u\in C^{q+1}({\mathbb{R}}^{+}_{t};H^{n}_{a}({\mathbb{R}}^{+}_{x}))$. ###### Remark 5.4. Comparing the representation of the solution given in (5.14) below to the representation of the solution to the initial boundary value problem (5.8) given in (5.11), one can check that they are identical if ${\underline{u}}=0$, which is not surprising since one can compute $\partial_{x}({\mathcal{K}}^{0}_{\mu}*_{x}u)(t,x)=({\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}u)(t,x)+{\mathcal{K}}^{0}_{\mu}(x){\underline{u}}(t),$ so that the Caputo and Riemann-Liouville nonlocal initial boundary value problems coincide when ${\underline{u}}=0$. ###### Proof. As previously done, we simply write ${\mathcal{K}}={\mathcal{K}}_{\mu}^{0}$. Taking the Laplace transform of (5.13) one readily gets (5.14) $\widetilde{u}(t,p)=e^{-p\widetilde{{\mathcal{K}}}(p)t}\widetilde{u^{\rm in}}(p)+\int_{0}^{t}e^{-p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\widetilde{f}(t^{\prime},p){\rm d}t^{\prime};$ by the initial value theorem, one gets that $\lim_{x\to 0^{+}}u(t,x)={\underline{u}}(t)$, with ${\underline{u}}$ as in the statement of the proposition. 
For all $j$ and $l$, one deduces from the above formula for $\widetilde{u}$ that $p^{j}\widetilde{\partial_{t}^{l}u}=(-p\widetilde{{\mathcal{K}}}(p))^{l}\big{[}e^{-p\widetilde{{\mathcal{K}}}(p)t}p^{j}\widetilde{u^{\rm in}}+\int_{0}^{t}e^{-p\widetilde{\mathcal{K}}(p)(t-t^{\prime})}p^{j}\widetilde{f}\big{]}+\sum_{m=1}^{l}(-p\widetilde{{\mathcal{K}}}(p))^{l-m}p^{j}\widetilde{\partial_{t}^{m-1}f}.$ Replacing in this expression $p^{j}\widetilde{v}=\widetilde{\partial_{x}^{j}v}+\sum_{i=0}^{j-1}p^{j-1-i}(\partial_{x}^{i}v)_{|_{x=0}}$ for $v=u^{\rm in}$, $f$ and $\partial_{t}^{m-1}f$, we obtain $p^{j}\widetilde{\partial_{t}^{l}u}=\sum_{i=0}^{j-1}p^{j-1-i}U_{li}(p)+F_{lj}(t,p)$ (using the convention that the summation is zero if $j-1<0$) with $\displaystyle U_{li}(p):=$ $\displaystyle(-p\widetilde{{\mathcal{K}}}(p))^{l}\big{(}\partial_{x}^{i}u^{\rm in}(0)+\int_{0}^{t}e^{-p\widetilde{\mathcal{K}}(p)(t-t^{\prime})}(\partial_{x}^{i}f)(t^{\prime},0){\rm d}t^{\prime}\big{)}$ $\displaystyle+\sum_{m=1}^{l}(-p\widetilde{{\mathcal{K}}}(p))^{l-m}(\partial_{t}^{m-1}\partial_{x}^{i}f)_{|_{x=0}}$ and $F_{lj}(t,p):=(-p\widetilde{{\mathcal{K}}}(p))^{l}\big{[}\widetilde{\partial_{x}^{j}u^{\rm in}}+\int_{0}^{t}e^{-p\widetilde{\mathcal{K}}(p)(t-t^{\prime})}\widetilde{\partial_{x}^{j}f}\big{]}+\sum_{m=1}^{l}(-p\widetilde{{\mathcal{K}}}(p))^{l-m}\widetilde{\partial_{t}^{m-1}\partial_{x}^{j}f}.$ Remarking that $\lim_{|p|\to\infty}p\widetilde{{\mathcal{K}}}(p)=\kappa^{-1}$, and introducing ${\underline{u}}_{li}=\lim_{|p|\to\infty}U_{li}(p)$, namely, ${\underline{u}}_{li}=(-\kappa)^{-l}\big{(}\partial_{x}^{i}u^{\rm in}(0)+\int_{0}^{t}e^{-\frac{t-t^{\prime}}{\kappa}}(\partial_{x}^{i}f)(t^{\prime},0){\rm d}t^{\prime}\big{)}\\\ +\sum_{m=1}^{l}(-\kappa)^{-l+m}(\partial_{t}^{m-1}\partial_{x}^{i}f)_{|_{x=0}}$ (of course, ${\underline{u}}_{00}={\underline{u}}$), we can write 
$p^{j}\widetilde{\partial_{t}^{l}u}-\sum_{i=0}^{j-1}p^{j-1-i}{\underline{u}}_{li}=\sum_{i=0}^{j-1}p^{j-1-i}\big{(}U_{li}(p)-{\underline{u}}_{li}\big{)}+F_{lj}(t,p).$ From Proposition 5.1, we can deduce that $\partial_{x}^{j}\partial_{t}^{l}u$ belongs to $C({\mathbb{R}}^{+}_{t};L^{2}_{a}({\mathbb{R}}^{+}_{x}))$ if the right-hand side of the above equality is in $C({\mathbb{R}}^{+}_{t};{\mathcal{H}}^{2}({\mathbb{C}}_{a}))$. This is obvious for $F_{lj}$ under the assumptions made in the statement of the proposition (see Lemma 5.1); for the summation, the problem reduces to determining whether the mapping $p\mapsto p^{j-1}\big{(}p\widetilde{{\mathcal{K}}}(p)-\kappa^{-1}\big{)}$ belongs to ${\mathcal{H}}^{2}({\mathbb{C}}_{a})$ or not. This mapping being holomorphic on ${\mathbb{C}}_{a}$, we just need to check that it is square integrable on $a+i{\mathbb{R}}$. Recalling that $\widetilde{{\mathcal{K}}}(p)=\frac{1}{\sqrt{1+\kappa^{2}p^{2}}}$, and using the fact that for all $p\in{\mathbb{C}}_{a}$ one has $\sqrt{p^{2}}=p$, one has $p^{j-1}\big{(}p\widetilde{{\mathcal{K}}}(p)-\kappa^{-1}\big{)}\sim-\frac{1}{2\kappa^{3}}p^{j-3}$ at infinity; the mapping is therefore square integrable on $a+i{\mathbb{R}}$ if and only if $j\leq 2$, hence the result. ∎ As a corollary, we can exhibit a smoothing effect for the nonlocal transport problem (5.8) that does not exist for the standard transport problem (5.1). Indeed, as one can easily check on the explicit expression (5.2), even if the data $u^{\rm in}$, ${\underline{u}}$ and $f$ are very smooth, the solution is not $C^{1}({\mathbb{R}}^{+}\times{\mathbb{R}}^{+})$ if the additional compatibility condition $\partial_{t}{\underline{u}}(0)=-\partial_{x}u^{\rm in}(0)+f(0,0)$ is not imposed. 
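This loss of $C^{1}$ regularity in the classical case can be seen directly on the explicit solution (5.2). In the sketch below (illustrative data, with $f=0$), the compatibility condition (5.9) holds but the $C^{1}$ condition $\partial_{t}{\underline{u}}(0)=-\partial_{x}u^{\rm in}(0)+f(0,0)$ fails, and the $x$-derivative jumps across the characteristic $x=t$:

```python
import math

# Classical solution (5.2) with f = 0: u = u_in(x-t) for x > t, u_b(t-x) for x < t.
u_in = lambda x: x * math.exp(-x)   # illustrative; u_in(0) = 0
u_b  = lambda s: math.sin(s)        # illustrative; u_b(0) = 0, so (5.9) holds

def u(t, x):
    return u_in(x - t) if x >= t else u_b(t - x)

# One-sided x-derivatives on each side of the characteristic x = t:
t, h = 1.0, 1e-6
d_right = (u(t, t + 2 * h) - u(t, t + h)) / h   # tends to  u_in'(0) =  1
d_left  = (u(t, t - h) - u(t, t - 2 * h)) / h   # tends to -u_b'(0)  = -1
# C^1 compatibility u_b'(0) = -u_in'(0) + f(0,0) fails (1 != -1): derivative jumps by 2
assert abs((d_right - d_left) - 2.0) < 1e-3
```

The jump equals $\partial_{x}u^{\rm in}(0)+\partial_{t}{\underline{u}}(0)-f(0,0)$, i.e. precisely the defect in the $C^{1}$ compatibility condition.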
There is a smoothing effect for the nonlocal problem in the sense that the solution constructed in Theorem 5.2 actually belongs to ${\mathbb{X}}_{a}^{3}\subset C^{2}({\mathbb{R}}^{+}\times{\mathbb{R}}^{+})$ without any additional compatibility condition if the data are smooth enough. Note that using the last statement of Proposition 5.2, the proof shows that additional regularity in time on $\partial_{x}f$ would yield additional regularity in time on $\partial_{x}u$. ###### Corollary 5.1. Under the assumptions of Theorem 5.2, if moreover $f\in{\mathbb{Y}}^{n}_{a}$, $u^{\rm in}\in H_{a}^{n}({\mathbb{R}}_{x}^{+})$ and ${\underline{u}}\in W^{n,1}_{\rm loc}({\mathbb{R}}^{+}_{t})$ for $n=2$ or $3$, then the solution $u$ provided by the theorem belongs to ${\mathbb{X}}^{n}_{a}$. ###### Proof. Taking the space derivative of the nonlocal transport equation in (5.8), it is easy to see that $v=\partial_{x}u$ solves the initial value problem $\begin{cases}\partial_{t}v+\partial_{x}\big{(}{\mathcal{K}}*_{x}v)=\partial_{x}f,\\\ v_{|_{t=0}}=\partial_{x}u^{\rm in}.\end{cases}$ It follows therefore from Proposition 5.2 that $\partial_{x}u\in{\mathbb{X}}^{n-1}_{a}$. We are therefore left to prove that $\partial_{t}^{j}u\in C({\mathbb{R}}_{t}^{+};L^{2}_{a}({\mathbb{R}}^{+}_{x}))$ for $1\leq j\leq n$; this easily follows from the observation that $\partial_{t}^{j}u=-{\mathcal{K}}*_{x}\partial_{t}^{j-1}\partial_{x}u+\partial_{t}^{j-1}f$ and from the fact that ${\mathcal{K}}\in L^{1}_{a}({\mathbb{R}}^{+})$. ∎ #### 5.3.2. The left-going case As for the right-going case in the previous section, we want to consider a nonlocal perturbation of the standard transport problem in which the space derivative $\partial_{x}$ is replaced by a nonlocal term ${\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}$. As recalled in §5.2, for the standard left-going transport equation, one has to consider either the initial value problem or the boundary value problem. 
While the two problems are symmetric for the standard transport equation, this is no longer true here and, as we shall see, the boundary value problem leads to simpler expressions. We therefore consider here its nonlocal analogue (see Remark 5.6 below for the nonlocal analogue of the initial value problem), (5.15) $\begin{cases}\partial_{t}u-{\mathcal{K}}^{0}_{\mu}*_{x}\partial_{x}u&=f,\\\ u_{|_{x=0}}&={\underline{u}}.\end{cases}$ As for the boundary value problem (5.6) for the standard left-going transport equation, there is no compatibility condition like (5.9) since $u^{\rm in}$ is not prescribed. On the other hand, the analysis leading to the second compatibility condition (5.10) remains valid, and it is still necessary to have (5.16) $\forall t\in{\mathbb{R}}^{+},\qquad\partial_{t}{\underline{u}}(t)=f(t,0)$ in order to expect a solution $u$ that belongs to ${\mathbb{X}}^{1}$. In the statement below, we use the notation $H^{1}_{a}({\mathbb{R}}^{+}_{t}\times{\mathbb{R}}^{+}_{x}):=H^{1}({\mathbb{R}}^{+}_{t};L^{2}_{a}({\mathbb{R}}^{+}_{x}))\cap L^{2}({\mathbb{R}}^{+}_{t};H^{1}_{a}({\mathbb{R}}^{+}_{x}))$ (note that the assumptions on the time dependence of $f$ and ${\underline{u}}$ are chosen in order to ensure the convergence of the integral term over the range $(t,+\infty)$ and that they could easily be weakened). ###### Theorem 5.3. Let $a>0$, $f\in H^{1}_{a}({\mathbb{R}}^{+}_{t}\times{\mathbb{R}}^{+}_{x})$, and ${\underline{u}}\in H^{1}({\mathbb{R}}^{+}_{t})$. Assume moreover that the compatibility condition (5.16) holds. Then there exists a unique solution $u\in{\mathbb{X}}^{1}_{a}$ to the nonlocal boundary value problem (5.15), and there exists $c_{a}>0$ such that, for all $t\in{\mathbb{R}}^{+}$, $|u(t,\cdot)|_{H_{a}^{1}({\mathbb{R}}^{+}_{x})}\leq\int_{t}^{\infty}e^{c_{a}(t-t^{\prime})}\big{[}|f(t^{\prime},\cdot)|_{H_{a}^{1}({\mathbb{R}}^{+}_{x})}+|{\mathcal{K}^{0}_{\mu}}|_{L_{a}^{2}}|{\underline{u}}(t^{\prime})|\big{]}{\rm d}t^{\prime}.$ ###### Proof. 
Still denoting ${\mathcal{K}}={\mathcal{K}}_{\mu}^{0}$ and following the same procedure as for the proof of Theorem 5.2, one readily finds that $\widetilde{u}(t,p)=-\int_{t}^{\infty}e^{p\widetilde{{\mathcal{K}}}(p)(t-t^{\prime})}\big{(}\widetilde{f}(t^{\prime},p)-\widetilde{{\mathcal{K}}}(p){\underline{u}}(t^{\prime})\big{)}{\rm d}t^{\prime};$ as for the right-going case, one can check that the compatibility condition (5.16) is necessary for the continuity of the solution at $x=0$. We omit the proof, which is an easy adaptation of the proof of Theorem 5.2. ∎ ###### Remark 5.5. As for the standard boundary transport problem (5.6), the initial data is determined in terms of the source term $f$ and the boundary data ${\underline{u}}$ by evaluating the Laplace representation formula given in the proof at $t=0$, namely, (5.17) $\widetilde{u^{\rm in}}(p)=-\int_{0}^{\infty}e^{-p\widetilde{{\mathcal{K}}_{\mu}^{0}}(p)t^{\prime}}\big{(}\widetilde{f}(t^{\prime},p)-\widetilde{{\mathcal{K}}_{\mu}^{0}}(p){\underline{u}}(t^{\prime})\big{)}{\rm d}t^{\prime}.$ In the limit case $\mu=0$ (and therefore $\widehat{{\mathcal{K}}}_{\mu}(p)=1$), one can check that the representation formula of the proof is equivalent to (5.5); the additional compatibility condition (5.16) that is not necessary for (5.6) also disappears at the limit along a mechanism similar to the one described in Remark 5.3. ###### Remark 5.6. For the standard left-going transport equation, the initial value problem (5.4) and the boundary value problem (5.5) can be treated in a totally symmetric way by switching the variables $t$ and $x$. The presence of the nonlocal term breaks this symmetry, and the nonlocal initial value problem would be more delicate to deal with than the boundary value problem addressed above. In particular, one would need to find ${\underline{u}}$ in terms of $f$ and $u^{\rm in}$ by solving the nonlocal equation (5.17). 
## Appendix A Non-dimensionalization of the equations We show here how to derive the dimensionless equations of motion used throughout this paper. To begin with, the Boussinesq-Abbott system describing the propagation of weakly nonlinear waves in a fluid of mean depth $h_{0}$ and with a pressure $P_{\rm atm}+\underline{P}$ exerted at the surface ($P_{\rm atm}$ is a constant reference value for the atmospheric pressure) is given by (A.1) $\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\\ (1-\frac{h_{0}^{2}}{3}\partial_{x}^{2})\partial_{t}q+\partial_{x}\big{(}\frac{1}{h}q^{2}\big{)}+gh\partial_{x}\zeta=-\frac{h}{\rho}\partial_{x}\underline{P}\qquad(h=h_{0}+\zeta).\end{cases}$ ###### Remark A.1. Introducing the hydrodynamic pressure $\Pi$ as (A.2) $\Pi=\underline{P}+\rho g\zeta,$ an alternative formulation of (A.1) is (A.3) $\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\\ (1-\frac{h_{0}^{2}}{3}\partial_{x}^{2})\partial_{t}q+\partial_{x}\big{(}\frac{1}{h}q^{2}\big{)}=-\frac{h}{\rho}\partial_{x}\Pi;\end{cases}$ we shall sometimes use this alternative formulation under the floating object. Let us now consider the equations for the solid. We recall that we consider here a floating object with vertical lateral walls located at $x=\pm\ell$ ($\ell>0$) and allowed to move only vertically (heave motion). There is therefore only one degree of freedom for the motion of the solid, which can be fully deduced from the signed distance $\delta(t)$ between the center of mass $G=\big{(}x_{G},z_{G}(t)\big{)}$ and its equilibrium position $G_{\rm eq}=(x_{G},z_{G,{\rm eq}})$, namely, $\delta=z_{G}(t)-z_{G,{\rm eq}}$. 
Let us also assume that the water depth below the object is given at equilibrium by a nonnegative single-valued function $x\mapsto h_{\rm eq}(x)$; the part of the bottom of the object in contact with the water (the wetted surface) is therefore given at all times $t$ by the graph of the function ${\zeta}_{\rm w}$ defined as (A.4) $\zeta_{\rm w}(t,x)=\delta(t)+h_{\rm eq}(x)-h_{0}.$ Newton’s equation for a body of mass $m$ that only moves vertically and is subject to gravity and hydrodynamic forces is given by (A.5) $m\ddot{\delta}+mg=\displaystyle\int_{-\ell}^{\ell}\underline{P}_{\rm i}(t,x){\rm d}x,$ where $\underline{P}_{\rm i}(t,x)$ is the pressure exerted by the fluid on the object at the point $(x,\zeta_{\rm w}(t,x))$. Note that at equilibrium, the pressure is hydrostatic, $\underline{P}_{\rm i}=-\rho g(h_{\rm eq}-h_{0})$, so that $m=\rho\int_{-\ell}^{\ell}(h_{0}-h_{\rm eq}(x)){\rm d}x\qquad\mbox{(Archimedes' principle)},$ and we can rewrite Newton’s equation under the form $m\ddot{\delta}=\displaystyle\int_{-\ell}^{\ell}\big{(}\underline{P}_{\rm i}(t,x)+\rho g(h_{\rm eq}-h_{0})\big{)}{\rm d}x.$ By definition of the hydrodynamic pressure, its value $\Pi_{\rm i}$ in the interior domain $(-\ell,\ell)$ is given by $\Pi_{\rm i}=\underline{P}_{\rm i}+\rho g\zeta_{\rm w},$ from which we infer, using (A.4), (A.6) $\tau_{\rm buoy}^{2}\ddot{\delta}+\delta=\displaystyle\frac{1}{2\rho g\ell}\int_{-\ell}^{\ell}\Pi_{\rm i}(t,x){\rm d}x,$ where $2\pi\tau_{\rm buoy}$ is the buoyancy period defined through $\tau_{\rm buoy}^{2}=\frac{m}{2\ell\rho g}.$ We now proceed to derive dimensionless versions of (A.1), (A.4), and (A.5). We recall that $h_{0}$ denotes the water depth at rest, and also denote by $a$ and $L$ the typical amplitude of the waves and a typical horizontal scale, respectively. 
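Before turning to dimensionless variables, the step leading from Newton's equation to (A.6) can be spelled out: since, by (A.4), $\zeta_{\rm w}-(h_{\rm eq}-h_{0})=\delta$ does not depend on $x$,

```latex
\int_{-\ell}^{\ell}\big(\underline{P}_{\rm i}+\rho g(h_{\rm eq}-h_{0})\big)\,{\rm d}x
=\int_{-\ell}^{\ell}\big(\Pi_{\rm i}-\rho g\zeta_{\rm w}+\rho g(h_{\rm eq}-h_{0})\big)\,{\rm d}x
=\int_{-\ell}^{\ell}\Pi_{\rm i}\,{\rm d}x-2\ell\rho g\,\delta,
```

so that Newton's equation becomes $m\ddot{\delta}+2\ell\rho g\,\delta=\int_{-\ell}^{\ell}\Pi_{\rm i}\,{\rm d}x$; dividing by $2\ell\rho g$ and using $\tau_{\rm buoy}^{2}=m/(2\ell\rho g)$ gives (A.6).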
For the Boussinesq-Abbott equations (A.1), we use the following scalings $\widetilde{x}=\frac{x}{L},\quad\widetilde{z}=\frac{z}{h_{0}},\quad\widetilde{t}=\frac{t}{L/\sqrt{gh_{0}}},\quad\widetilde{\zeta}=\frac{\zeta}{a},\quad\widetilde{q}=\frac{q}{a\sqrt{gh_{0}}},\quad\widetilde{\underline{P}}=\frac{\underline{P}}{\rho gh_{0}}$ and consequently $\widetilde{h}=1+\varepsilon\widetilde{\zeta}$. We also introduce the nonlinearity and shallowness parameters $\varepsilon$ and $\mu$ as $\varepsilon=\frac{a}{h_{0}},\qquad\mu=\frac{h_{0}^{2}}{L^{2}}.$ For the sake of clarity, the tildes used to denote dimensionless quantities are omitted throughout this paper. The system (A.1) thus becomes (A.7) $\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\\ (1-\frac{1}{3}\mu\partial_{x}^{2})\partial_{t}q+\varepsilon\partial_{x}\big{(}\frac{1}{h}q^{2}\big{)}+h\partial_{x}\zeta=-\frac{1}{\varepsilon}h\partial_{x}\underline{P}\qquad(h=1+\varepsilon\zeta).\end{cases}$ ###### Remark A.2. The dimensionless form of the hydrodynamic pressure is naturally $\widetilde{\Pi}=\frac{\Pi}{\rho gh_{0}}=\widetilde{\underline{P}}+{\varepsilon}\widetilde{\zeta},$ so that the dimensionless version of the alternative formulation (A.3) is (omitting the tildes) (A.8) $\begin{cases}\partial_{t}\zeta+\partial_{x}q=0,\\\ (1-\frac{\mu}{3}\partial_{x}^{2})\partial_{t}q+\varepsilon\partial_{x}\big{(}\frac{1}{h}q^{2}\big{)}=-\frac{h}{\varepsilon}\partial_{x}\Pi;\end{cases}$ In order to derive the dimensionless versions of (A.4), (A.5) and (A.6), we also need the following scalings $\widetilde{\zeta}_{\rm w}=\frac{\zeta_{\rm w}}{a},\qquad\widetilde{\delta}=\frac{\delta}{a},\qquad\widetilde{h}_{\rm eq}=\frac{h_{\rm eq}}{h_{0}},\qquad\widetilde{m}=\frac{m}{2\ell\rho h_{0}},\qquad\widetilde{\tau}_{\rm buoy}=\frac{\tau_{\rm buoy}}{L/\sqrt{gh_{0}}},\qquad\widetilde{\ell}=\frac{\ell}{L}$ so that, omitting again the tildes for the sake of readability, we can rewrite (A.4) and (A.5) as (A.9) $\zeta_{\rm 
w}(t,x)=\delta(t)+\frac{1}{\varepsilon}\big{(}h_{\rm eq}(x)-1\big{)}.$ and (A.10) $\tau_{\rm buoy}^{2}\ddot{\delta}+\frac{1}{\varepsilon}m=\displaystyle\frac{1}{\varepsilon}\frac{1}{2\ell}\int_{-\ell}^{\ell}\underline{P}_{\rm i}(t,x){\rm d}x;$ note that in these dimensionless coordinates, the coordinates of the vertical sides of the object are $x=\pm\ell$ and that Archimedes’ principle reads in dimensionless form as $m=\frac{1}{2\ell}\int_{-\ell}^{\ell}(1-h_{\rm eq}).$ Finally, the dimensionless version of (A.6) is $\tau_{\rm buoy}^{2}\ddot{\delta}+\delta=\displaystyle\frac{1}{\varepsilon}\frac{1}{2\ell}\int_{-\ell}^{\ell}\Pi_{\rm i}(t,x){\rm d}x.$
# A Bilevel Optimization Framework for Fuel-Constrained UAV-UGV Cooperative Routing: Planning and Experimental Validation Md Safwan Mondal1, Subramanian Ramasamy1, Pranav Bhounsule1 *This work was supported by ARO grant W911NF-14-S-003.1Md Safwan Mondal, Subramanian Ramasamy and Pranav A. Bhounsule are with the Department of Mechanical and Industrial Engineering, University of Illinois Chicago, IL, 60607 USA<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Fast-moving unmanned aerial vehicles (UAVs) are well suited for aerial surveillance, but are limited by their battery capacity. To increase their endurance, UAVs can be refueled on slow-moving unmanned ground vehicles (UGVs). The cooperative routing of a UAV-UGV team to survey vast regions within their speed and fuel constraints is a computationally challenging problem, but it can be simplified with heuristics. Here we present multiple heuristics to enable feasible and sufficiently optimal solutions to the problem. Using the UAV fuel limits and the minimum set cover algorithm, the UGV refueling stops are determined. These refueling stops enable the allocation of mission points to the UAV and UGV. A standard traveling salesman formulation and a vehicle routing formulation with time windows, dropped visits, and capacity constraints are used to solve for the UGV and UAV routes, respectively. Experimental validation of the approach on a small-scale testbed shows its efficacy. ## 1 INTRODUCTION The integration of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) has been increasingly utilized in various applications, such as search and rescue, surveillance and reconnaissance missions, and transportation [1, 2, 3, 4]. One of the most significant challenges in such cooperative routing is the limited fuel capacity of UAVs, which restricts their operational time and range. 
However, effective cooperation between UAVs and UGVs can enhance mission efficiency and extend the coverage range of UAVs, thereby enabling them to achieve longer-range and persistent operations. The complexity of such cooperative routing for UAV-UGV systems lies in its combinatorial nature, making it computationally challenging to solve the formulation with exact methods. Therefore, suitable heuristics should be used to achieve high-quality solutions quickly. In this paper, we propose a bi-level optimization framework for solving the fuel-constrained UAV-UGV cooperative routing problem that optimizes the operational time and fuel consumption of both vehicles. To validate our proposed algorithm, we conducted hardware testing that provides practical insights into the real-world application and feasibility of our proposed approach. By maximizing the efficiency of the cooperative system, our algorithm has the potential to overcome the limitations posed by fuel capacity and speed constraints, enabling successful implementation of UAV-UGV cooperative routing in a variety of applications. ### 1-A Related works Figure 1: Minimum set cover algorithm and task allocation technique. a) Given mission scenario with the minimum set cover algorithm; the blue circle indicates the radial coverage of the UAV. Here, three refuel stops (including the starting depot) can cover the entire mission. b) First subproblem, where the UGV moves from the starting depot to the first refuel stop and the UAV mission points within radial coverage are assigned. c) Second subproblem, where the UGV moves from the first refuel stop to the second refuel stop and the UAV mission points within radial coverage are assigned. The fuel-constrained vehicle routing problem of UAVs has been an area of considerable research. Several studies have investigated the routing of multiple fuel-constrained UAVs with recharging at fixed depots. Levy et al. 
[5] investigated this routing of multiple fuel-constrained UAVs with fixed recharging depots using variable neighborhood descent (VND) and variable neighborhood search (VNS) heuristics to find feasible solutions for large instances. Similarly, Sundar et al. [6] developed a mixed-integer linear programming model (MILP) that can be solved using an off-the-shelf MILP solver. Instead of a fixed depot, Maini et al. [7] addressed cooperative routing of a single UAV-UGV system, with the UGV serving as a mobile charging point for the UAV on a road. Unlike previous studies, the authors developed a greedy heuristic method to find the rendezvous locations of recharging along the UGV route. Manyam et al. [8] investigated the cooperative routing problem of a team comprising a UAV and a UGV subject to communication constraints. They formulated the problem as a mixed-integer linear program (MILP) and developed a branch-and-cut algorithm to solve the problem optimally. Researchers extended the UAV-UGV cooperative vehicle routing problem by solving it in a tiered two-echelon approach. To solve the two-echelon cooperative routing problem, Luo et al. [9] proposed a binary integer programming model with two heuristics. Liu et al. [10] developed a two-stage route-based framework to optimize both the truck’s main route and the drone’s adjoint flying routes for a truck-and-drone-based parcel delivery system. They created a hybrid heuristic that combined nearest neighbor and cost-cutting strategies to quickly build a viable solution. In our previous research [11, 12], we explored a hierarchical bi-level optimization framework for cooperative routing of multiple fuel-constrained UAVs and a single UGV. The framework employs K-means clustering at the outer level to generate UGV visit points, which can be connected using the traveling salesman problem (TSP) to obtain the UGV route. 
At the inner level, a vehicle routing problem with capacity constraints, time windows, and dropped visits for the UAV is formulated and solved using the UGV path. In an extension to our work [13], we demonstrated that optimizing the heuristic parameters with Genetic Algorithm (GA) and Bayesian Optimization (BO) methods can lead to a significant improvement in the quality of the solutions. But the past framework was scenario-specific, making it hard to generalize to unknown scenarios, which was the driving force for this study to develop a robust optimization framework that can be generalized and implemented experimentally on hardware. On the experimental front, a few significant works have demonstrated localization and mapping of UAVs in indoor environments. Nigam et al. [14, 15, 16] investigated high-level scalable control techniques for unmanned aerial vehicles (UAVs) performing persistent surveillance in an uncertain stochastic environment in a hardware testbed. Two UAVs were used by Frew et al. [17] to demonstrate road following, obstacle avoidance, and convoy protection in flight tests, while Jodeh et al. [18] provided an overview of cooperative control algorithms of heterogeneous UAVs by the Air Force Research Laboratories (AFRL). Leahy et al. [19, 20] experimentally validated their proposed method for automating persistent surveillance missions involving multiple vehicles. Automata-based techniques were used to generate collision-free motion plans, and vector fields were created for use with a differential flatness-based controller, allowing vehicle flight and deployment to be fully automated according to the motion plans. They used charging platforms for the UAVs for truly persistent missions. Boeing’s Vehicle Swarm Technology Laboratory (VSTL) [16, 21, 22, 23] and MIT’s RAVEN laboratory [24] testbed have conducted significant UAV flight testing demonstrations in indoor lab-scale setups. 
The novelty of our work lies in the development of a robust bi-level optimization framework that uses a minimum set cover algorithm and a task allocation technique to solve UAV-UGV cooperative routing in a two-echelon fashion. We have validated our proposed methodology with hardware flight testing in a lab-scale experimental setup. To this end, we present the following novel contributions: 1) The overall framework uses bilevel optimization with a task allocation technique for mission allocation and constrained-programming-based routing solvers. 2) The task allocation technique based on the minimum set cover algorithm divides the entire problem into decoupled subproblems, which radically simplifies the overall problem. 3) A constraint programming-based formulation for the vehicle routing problem with time windows, dropped visits, and fuel constraints enables quick solutions of each subproblem. 4) Hardware validation of our work demonstrates the practical feasibility and real-world applicability of our proposed algorithms and methods. ### 1-B Problem Description The aim of the problem is to perform a cooperative mission that involves visiting a set of designated mission points (see Figure 1(a)) $x_{n_{i}}\in X\equiv\left(x_{i},t_{i}\right)$, using either a UGV road-based visit or a UAV aerial flyover, where $x_{i}\in R^{2}$ is a position on the ground plane and $t_{i}$ is the timestamp of the last visit to the node. The cost of travel between any two mission points is defined as the time required to travel from one point to another, $c_{ij}=t_{j}-t_{i}$. The UGV ${g_{i}\in G\equiv(\tau_{i},f_{i},x_{i})}$ and UAV ${a_{i}\in A\equiv(\tau_{i},f_{i},x_{i})}$ are heterogeneous in nature, with the UAV having a higher velocity but a lower fuel capacity compared to the UGV. During the mission, the vehicles are represented by their current task or state $\tau_{i}$, fuel level $f_{i}$, and position $x_{i}$. 
Unlike the UAV, the UGV is restricted to traveling along the road network, and the fuel consumption rate of both vehicles is a function of the velocity of the vehicle. The UAV can be recharged by the UGV at any refueling stop or at the starting depot, with the recharging time dependent on the fuel level of the UAV. The UGV is assumed to have an infinite fuel capacity due to its much larger fuel capacity compared to the UAV. With the described assumptions, the objective is to find the quickest route for the UAV and UGV to visit all mission points together, with the starting depot being both the starting and ending point, while ensuring that the UAV never runs out of fuel. To find the time-optimized route, the following goals must be achieved: 1. Identification of suitable stop locations where the UAV will synchronize with the UGV to get recharged so as to cover all the mission points, i.e., $x_{r}\equiv x_{i}=x_{j}$, where $x_{i}\in a_{i},x_{j}\in g_{i}$. 2. Determination of the optimal times during the mission when the UAV and UGV will meet at those refuel stops, i.e., $t_{r}\equiv t_{i}=t_{j}$, where $t_{i}\in A,t_{j}\in G$. 3. Determination of the optimal routes $\tau_{i}\in A,\ \ \tau_{j}\in G$ for both the UGV and UAV based on the refueling locations $x_{r}$ and times $t_{r}$. ## 2 OPTIMIZATION FRAMEWORK For the UAV-UGV cooperative routing, we propose a bilevel optimization framework, which is shown in Fig. 2. The framework is designed as a two-level hierarchical structure, where at the higher level, we determine the UGV route using the “UGV First, UAV Second” heuristic method, which involves prioritizing the UGV route and then constructing the UAV route based on it. To ensure the feasibility of the cooperative route, it is critical to locate suitable refueling sites along the UGV route. We employed a minimum set cover algorithm to identify the best locations for refueling. 
The inner-level UAV route was built based on the UGV route by dividing the entire scenario into subproblems that could be solved by modeling them as energy-constrained vehicle routing problems (E-VRP). Figure 2: Optimization framework. ### 2-A Outer level: UGV route Maini et al. [7] demonstrated that in order to establish a viable cooperative route, it is necessary to ensure that at least one refueling stop is located within the UAV fuel coverage radius for each data point. Thus, to determine the minimum number of refueling stops required to cover the entire mission scenario, we can adopt the minimum set cover algorithm (MSC). This is a well-established problem that can be solved using a variety of methods, including greedy heuristics [7]. However, in this study, we propose a constraint programming formulation for the minimum set cover algorithm. For the same scenario, we employed both the greedy method and the constraint programming approach individually to solve the minimum set cover problem, where the constraint programming method outperformed the greedy heuristics. #### 2-A1 Greedy heuristics method Using a greedy heuristic approach can help to reduce the complexity of the minimum set cover problem. This method takes the mission scenario points that require coverage $x_{n}$ and the UAV’s fuel capacity $F$ as inputs; as output, we aim to determine the minimum number of mission points that can serve as refueling stops $x_{r}$ to cover all the mission points $x_{n}$. In other words, given a set of mission points $X=\\{x_{1},x_{2},...,x_{n}\\}$, the goal is to identify a subset $X_{r}=\\{x_{1},x_{2},...,x_{r}\\}$ with the least number of elements to act as refueling stops covering the entire scenario. The greedy algorithm selects the initial point as the first refueling stop and then sequentially adds the mission points that cover the greatest number of other mission points to the refueling stop set until all points are covered. 
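As a sketch of the greedy selection just described (not the authors' implementation; the point coordinates and the coverage radius are invented for illustration, with the radius standing in for the UAV fuel coverage):

```python
import math

def greedy_refuel_stops(points, radius):
    """Greedy minimum set cover: starting from the depot (index 0),
    repeatedly pick the point that covers the most uncovered points."""
    def covers(c, p):
        return math.dist(c, p) <= radius

    uncovered = set(range(len(points)))
    stops = [0]  # the starting depot is always a refueling stop
    uncovered -= {i for i in uncovered if covers(points[0], points[i])}
    while uncovered:
        # candidate covering the largest number of still-uncovered points
        best = max(range(len(points)),
                   key=lambda j: sum(covers(points[j], points[i])
                                     for i in uncovered))
        stops.append(best)
        uncovered -= {i for i in uncovered if covers(points[best], points[i])}
    return stops

points = [(0, 0), (1, 0), (2, 0), (5, 0), (6, 0)]
stops = greedy_refuel_stops(points, radius=1.5)  # indices of chosen stops
```

Every mission point then lies within the coverage radius of at least one selected stop.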
Although the greedy heuristic can produce an optimal result for a minimum set cover problem quickly, there is a possibility of multiple optimal solutions for a given scenario. Since we are implementing a bilevel optimization framework, it is essential to consider all the other optimal solutions of the outer-level algorithm. As it is not possible to acquire all optimal solutions using the greedy heuristic, we employed the constraint programming method. This approach can rapidly generate multiple optimal results, if any are present. Also, the greedy method may result in a locally optimal solution, which can be overcome through an alternative constraint programming formulation of the minimum set cover problem. #### 2-A2 Constraint programming method The minimum set cover problem can be transformed into a linear integer programming model using the constraint programming method (CP method). The objective function in Eq. 2.1 determines the minimum number of refueling stops, and the constraint function Eq. 2.2 ensures that each mission point $x_{i}$ has at least one refueling stop $x_{r}$ assigned to it. The decision variable $y_{i}$ determines whether a mission point will be selected as a refueling stop or not. We utilized Google’s OR-Tools™ CP-SAT solver [25] to solve this linear integer formulation, and it can record multiple optimal solutions if they exist. After identifying the refueling stop locations, a UGV route can be created by connecting these stops on the road network. We can solve a simple travelling salesman problem (TSP) over the refueling stops to determine an optimal UGV route. Once the optimal UGV route is established, we can proceed to the inner-loop UAV routing. 
Objective, $\operatorname{min}\sum_{i=1}^{n}y_{i}$ (2.1) s.t., $\sum_{j\,:\,x_{i}\in C_{j}}y_{j}\geq 1,\quad\forall i=1,\ldots,n$ (2.2) $y_{i}\in\\{0,1\\},\quad\forall i=1,\ldots,n$ (2.3) where $C_{j}$ denotes the set of mission points lying within the UAV coverage radius of candidate stop $x_{j}$. ### 2-B Inner level: UAV route At the inner level of our framework, we employed a task allocation technique to divide the entire mission scenario into independent subproblems, which were solved individually as energy-constrained vehicle routing problems (E-VRP). #### 2-B1 Task allocation technique Given the scenario and the UGV route obtained from the outer-loop MSC algorithm, we can divide the entire problem into $n-1$ subproblems ($n$ being the number of refuel stops, including the starting depot) under the assumption that the UGV travels only between two consecutive refuel stops in each subproblem. Before the subproblem division, each mission point is assigned to its nearest refuel stop (including the starting depot) that covers it. In each subproblem, the first refuel stop is the origin node and the second refuel stop is the destination node of the UGV route. The subproblems are decoupled from each other by allocating separate mission points to them. The UAV mission points assigned to the destination refuel stop under each subproblem are allocated to that subproblem. Only for the first subproblem are the mission points assigned to both the origin and destination nodes allocated to it. Figure 1 demonstrates the process of subproblem division and mission allocation. Figure 1(a) shows the refuel stop locations along the UGV route obtained from the outer-level MSC algorithm for a given scenario; the first subproblem (Figure 1(b)) is created by taking the starting depot as the origin node and refuel stop 1 as the destination node. The UAV mission points covered by the origin node (starting depot) and the destination node (refuel stop 1) are assigned to subproblem 1. 
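For intuition, the integer program (2.1)-(2.3) can be solved by brute force on a toy instance (a stand-in for the CP-SAT solver used in the paper; the coordinates and radius below are invented). Unlike the greedy heuristic, this enumeration returns every optimal cover when several exist:

```python
import math
from itertools import combinations

def all_min_set_covers(points, radius):
    """Enumerate all optimal solutions of the set cover integer program:
    try subsets of increasing size k; the smallest k admitting a feasible
    cover is optimal, and all feasible subsets of that size are returned."""
    n = len(points)
    cover = [{i for i in range(n)
              if math.dist(points[j], points[i]) <= radius}
             for j in range(n)]
    for k in range(1, n + 1):
        optima = [subset for subset in combinations(range(n), k)
                  if set().union(*(cover[j] for j in subset)) == set(range(n))]
        if optima:
            return optima
    return []

points = [(0, 0), (1, 0), (2, 0), (5, 0), (6, 0)]
solutions = all_min_set_covers(points, radius=1.5)  # two distinct optima here
```

Here point 1, together with either point 3 or point 4, covers the whole scenario, illustrating why recording multiple optima matters for the outer level.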
Similarly, the second subproblem (Figure 1(c)) is created by taking refuel stop 1 as the origin node and refuel stop 2 as the destination node, and the mission points covered by the destination node (refuel stop 2) are assigned to this subproblem. Once we obtain an independent set of subproblems through task allocation, we solve each subproblem by modeling it as an energy-constrained vehicle routing problem (E-VRP). #### 2-B2 E-VRP model The formulation of the E-VRP can be described using graph theory. Consider an undirected graph $G=(V,E)$, where $V$ is the set of vertices $V=\\{0,1,2,...,k\\}$ and $E$ is the set of edges between the vertices $i$ and $j$, $E=\\{(i,j)\mid i,j\in V,\,i\neq j\\}$. The non-negative arc cost between the vertices $i$ and $j$ is expressed as $c_{ij}$, and $x_{ij}$ is a binary decision variable whose value is 1 if a vehicle travels from $i$ to $j$, and 0 otherwise. The objective function of the E-VRP problem is given by Eq. 2.4, which minimizes the total travel time; the formulation of the other constraints, such as the fuel constraint, the time window constraint, and the generic E-VRP constraints, can be found in [13]. $\min\sum_{i}\sum_{j}c_{ij}x_{ij}\quad\forall i,j\in V$ (2.4) Again, we used Google OR-Tools™ as our heuristic implementation for solving the E-VRP model with constraint programming (CP). OR-Tools™ uses a search tree, local search, and meta-heuristics to identify feasible and optimal solutions. The heart of OR-Tools™ is a CP-SAT solver that employs a DecisionBuilder to find an initial feasible solution using the Path Cheapest Arc strategy [26]. OR-Tools™ then performs a local search to find the best solution in the neighborhood of the current solution. This local search uses move operators to rewire nodes and check for feasibility and cost, and the moves are repeated until a termination criterion is met [12]. 
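A toy version of one such subproblem can be sketched as follows; this is a stand-in for the OR-Tools model, with Euclidean distance playing the role of travel time, and the start/end stops, mission points, refuel stops, and fuel budget all invented for illustration:

```python
import math
from itertools import permutations

def plan_uav_route(start, end, missions, stops, fuel):
    """Toy E-VRP for one subproblem: try every visiting order, insert a
    detour to the nearest refuel stop whenever the next leg would exhaust
    the remaining fuel, and keep the cheapest feasible route."""
    best_route, best_cost = None, math.inf
    for order in permutations(missions):
        route, cost, tank, feasible = [start], 0.0, fuel, True
        for nxt in list(order) + [end]:
            leg = math.dist(route[-1], nxt)
            if leg > tank:  # must refuel before this leg
                stop = min(stops, key=lambda s: math.dist(route[-1], s))
                detour = math.dist(route[-1], stop)
                if detour > tank or math.dist(stop, nxt) > fuel:
                    feasible = False
                    break
                route.append(stop)
                cost += detour
                tank = fuel  # full recharge at the stop
                leg = math.dist(stop, nxt)
            route.append(nxt)
            cost += leg
            tank -= leg
        if feasible and cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

best_route, best_cost = plan_uav_route(
    start=(0, 0), end=(4, 0),
    missions=[(1, 1), (2, -1), (3, 1)],
    stops=[(0, 0), (4, 0), (2, 0)],
    fuel=5.0)
```

The brute force over permutations is only viable for the handful of mission points in one subproblem, which is precisely what the task allocation step ensures; the paper's solver replaces it with constraint programming over the full constraint set.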
## 3 RESULTS

Figure 3: UAV & UGV trajectories obtained from bilevel optimization with the greedy and CP methods at the outer loop. Numerical and alphabetical order indicate the UAV and UGV motion, respectively. a) CP-method-based trajectory; b) greedy-method-based trajectory.

Figure 3 provides an illustrative example of the input and output of the problem at hand. The input, depicted in Figure 3, consists of mission points denoted by black crosses. The UAV and UGV must both initiate and terminate at the starting depot, while ensuring that all mission points are covered by either the UAV or the UGV. The UAV can recharge at the depot or from the UGV at designated refueling sites. The UAV and UGV have fixed velocities of 10 m/s and 4 m/s, respectively, and fuel capacities of 287.7 kJ and 25.01 MJ, respectively. To carry out the optimizations, we employed Python 3 and the OR-Tools™ library, running on a 3.7 GHz Intel Core i9 processor with 48 GB RAM on a 64-bit operating system. For the scenario, two types of cooperative routes (where different) were generated by implementing the greedy method and the CP method at the outer loop of the suggested framework. The UGV-only route, in which a UGV alone completes the whole mission, was also determined for the scenario. Based on the metrics of total mission completion time and total energy consumption, the impact of collaboration between the UAV and UGV on mission execution was assessed by comparing each cooperative route with the UGV-only route, which served as the upper limit. Table I shows the improvement achieved through cooperative UAV-UGV routing on the mission scenario. Cooperative routing is extremely energy efficient: both the CP method and the greedy method at the outer loop showed positive improvement, reducing total energy consumption in the mission by 36-39%. For total mission time, however, the greedy method at the outer loop had a negative impact.
This is due to the positions of the refuel stops (see the trajectory in Figure 3(b)), which forced the UAV to take frequent detours (6 times) for recharging at the refuel sites, elongating the total mission time. In contrast, the appropriate refuel stop locations obtained by the CP method (see the trajectory in Figure 3(a)) allowed the UAV to complete its route with fewer recharging detours (4 times), which effectively reduced the total mission time. Further insights about the trajectories of the cooperative routes can be drawn from Table II. As discussed earlier, the greedy method results in longer UAV travel time, which ultimately leads to a higher mission time. The energy consumption of the UGV is lower in the greedy method, as the UAV visits the majority of the mission points compared to the CP method. In sum, both the CP method and the greedy heuristic are capable of providing a feasible cooperative route for a constrained, complex mission scenario; however, the CP method outperforms the greedy method at the cost of some computational efficiency.

TABLE I: Impact of the optimal solution of the cooperative routing.

Metrics | Cooperative routing, CP method | Cooperative routing, Greedy method | UGV-only route | Improvement (%), CP method | Improvement (%), Greedy method
---|---|---|---|---|---
Time consumption (min.) | 200 | 272 | 233 | 14.16 | -16.74
Energy consumption (MJ) | 21.98 | 21.14 | 34.69 | 36.62 | 39.06

TABLE II: Comparison between trajectories of the CP method and the greedy heuristic.

Metrics | CP method | Greedy method
---|---|---
Total time (min) | 200 | 272
Computational time (min) | 9 | 4
UGV results | |
Travel time (minutes) | 200 | 272
Energy consumed (MJ) | 20.79 | 19.52
Missions visited | 22 | 18
UAV results | |
Travel time (minutes) | 100 | 136.203
Energy consumed (kJ) | 1186.464 | 1618.092
Recharging stops on UGV | 3 | 6
Recharging stops on depot | 1 | 0
Missions visited | 22 | 26

## 4 EXPERIMENT DESIGN

The most stringent way of validating a framework is by hardware demonstration, but hardware demonstrations are often limited in scope and variety.
Therefore, simulation results are often utilized to assess method performance, while experiments are used to verify some of those outcomes. The validation of a surveillance planning framework through experiments is especially significant, as it is a highly challenging problem to solve. We devised a small laboratory-scale scenario to test the proposed framework with just one UAV and one UGV. In order to fully achieve autonomy during the experiment, each robot must be capable of independently localizing, planning, and executing its desired route to each location without relying on external input. Our localization, planning, and control strategies are built on the hardware's sensing, processing, and communication capabilities. However, developing an appropriate experimental system poses various challenges, as we need to consider the integration of software, hardware, and communication to accomplish the task. We describe the individual blocks of the hardware architecture (see Figure 4) separately as follows:

Figure 4: Hardware architecture.

Figure 5: Comparison between simulation and experimental results. In the trajectory, the alphabetical order represents the direction of motion of the UAV.

Figure 6: Experiment instances: a) the UAV and UGV moving toward their designated locations; b) the UAV landed on the UGV for recharging.

1. Hardware: For our UAV, we chose the DJI Tello quadcopter due to its compact size and affordability. It is a small quadrotor measuring 9.8 x 9.2 x 4.1 cm and weighing 80 g, including the battery (Li-Po, 1100 mAh, 3.8 V) and 3-inch propellers. The drone comes with a built-in flight controller that offers basic features such as flight stabilization and attitude regulation, translational velocity control, and simple trajectory execution. This flight controller is closed hardware and is operated through a dedicated command protocol.
However, a higher level of autonomy can be achieved by using an external ground-based computer that uses the received telemetry and video feed to control the drone through the same communication protocol. For the UGV, we built a small Raspberry Pi-based omnidirectional car with a landing pad for recharging the UAV. The UGV is controlled by driving its four wheels through the Raspberry Pi.

2. Control & communication: A wireless 2.4 GHz 802.11n WiFi connection was used to communicate with the drone. The approach makes use of the official Tello SDK 2.0: text commands are sent over a UDP port to the drone's programming interface. To create the application, we used the SDK and the low-level Python library DJItelloPy. A wireless 5 GHz WiFi connection was also established with the Raspberry Pi for controlling the UGV.

3. Central manager: The final element of the system design is the centralized system manager. Based on the input scenario, this manager solves a fuel-constrained vehicle routing problem using the proposed bi-level optimization framework to generate separate routes for the UAV and UGV. The manager then communicates with the UAV and the UGV and assigns their respective tasks to begin the surveillance. The manager continuously monitors their progress by collecting the vehicles' positional data from the motion capture system. The UGV runs on open-loop control and stops at its assigned stop locations along the route, while the central system enforces feedback control on the UAV to navigate it correctly and achieve a successful landing on the UGV during recharging.

4. Experiment scenario: The experiments were conducted in the Robotics and Motion Lab at the University of Illinois Chicago. The lab has a designated flight area equipped with a motion capture system that serves as a reference for the positions of reflective markers placed on the quadrotor and the ground vehicle. This enables real-time localization of the robots during the experiment.
The positional data of the vehicles can be obtained at a rate of 100 Hz, with a latency of less than 9 ms. A mission scenario was created by selecting 12 different points over an area of $4m\times 4m$ for the UAV; a road network was designed for the UGV, and a fuel constraint was introduced by limiting the UAV's flight time on a single recharge (the endurance limit). For this experimental setup, the endurance limit was set to 50 seconds, and the UAV and UGV speeds were 0.20 m/s and 0.15 m/s, respectively. This required the UAV to visit the UGV at regular intervals for recharging in order to complete the mission. However, no real recharging took place; it was only hypothesized that the UAV recharges instantly when it lands on the UGV. The UGV road network was also designed to be challenging, with the farthest mission points requiring the UAV to operate near its maximum endurance limit, thus testing the robustness of the proposed framework.

### 4-A Flight test results

Multiple trials of the experiment were carried out on the scenario. The algorithm was fed the locations of the UAV mission points and the road network points as input. The outer loop of the algorithm determined the UGV traversal path with refueling spots in space and time, while the inner loop of the framework generated the UAV route. Both routes were provided to the individual agents by the central manager, and the agents then performed their missions. The purpose of the experiment was to verify the feasibility of the algorithm's output and to determine whether the multi-agent experiment could be successfully carried out with our experimental architecture. The motion capture system was used to track the positional data of the UAV and UGV, which was processed to produce the experimental route. Figure 5(a) shows a comparison between the simulation route and the experimental route.
During the experiment, the UAV drifted in some places due to its dynamics, but thanks to the feedback control it successfully managed to visit the mission points and recharge from the UGV by landing on it. The endurance limit constraint was also tested, and it was observed that the maximum flight time on a single recharge always remained below the maximum limit (Figure 5(b)). The UAV's dynamics played an important role in the experiment; this was compensated for by including a buffer time period in the modeling of the UAV's take-off and landing in the simulation counterpart. Instances of the flight test can be seen in Figure 6.

## 5 CONCLUSIONS

We conclude that a bilevel optimization framework with suitably designed heuristics is an effective method for solving cooperative routing problems. Our heuristics involve solving for the UGV route first, using the minimum set cover algorithm and a traveling salesman formulation, followed by the UAV route based on mission allocation and a vehicle routing formulation. We found that constraint programming outperforms greedy heuristics for solving the minimum set cover problem, although the former takes more computational time than the latter. Experimental validation of the framework on a small testbed shows a close match between simulation and hardware, confirming the proposed approach.

## ACKNOWLEDGMENT

The authors would like to thank Jean-Paul F. Reddinger, James M. Dotterweich, and Marshal A. Childers from DEVCOM Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, MD 21005 USA, and James D. Humann from DEVCOM Army Research Laboratory, Los Angeles, CA 90094 USA, for providing suggestions that helped improve the formulation and solutions.

## References

* [1] Daniel H Stolfi, Matthias R Brust, Grégoire Danoy, and Pascal Bouvry. Uav-ugv-umv multi-swarms for cooperative surveillance. Frontiers in Robotics and AI, 8:616950, 2021.
* [2] Yu Wu, Shaobo Wu, and Xinting Hu.
Cooperative path planning of uavs & ugvs for a persistent surveillance task in urban environments. IEEE Internet of Things Journal, 8(6):4906–4919, 2020. * [3] Yao Liu, Zhihao Luo, Zhong Liu, Jianmai Shi, and Guangquan Cheng. Cooperative routing problem for ground vehicle and unmanned aerial vehicle: The application on intelligence, surveillance, and reconnaissance missions. IEEE Access, 7:63504–63518, 2019. * [4] Jianqiang Li, Genqiang Deng, Chengwen Luo, Qiuzhen Lin, Qiao Yan, and Zhong Ming. A hybrid path planning method in unmanned air/ground vehicle (uav/ugv) cooperative systems. IEEE Transactions on Vehicular Technology, 65(12):9585–9596, 2016. * [5] David Levy, Kaarthik Sundar, and Sivakumar Rathinam. Heuristics for routing heterogeneous unmanned vehicles with fuel constraints. Mathematical Problems in Engineering, 2014, 2014. * [6] Kaarthik Sundar, Saravanan Venkatachalam, and Sivakumar Rathinam. Formulations and algorithms for the multiple depot, fuel-constrained, multiple vehicle routing problem. In 2016 American Control Conference (ACC), pages 6489–6494. IEEE, 2016. * [7] Parikshit Maini and PB Sujit. On cooperation between a fuel constrained uav and a refueling ugv for large scale mapping applications. In 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pages 1370–1377. IEEE, 2015. * [8] Satyanarayana G Manyam, Kaarthik Sundar, and David W Casbeer. Cooperative routing for an air–ground vehicle team—exact algorithm, transformation method, and heuristics. IEEE Transactions on Automation Science and Engineering, 17(1):537–547, 2019. * [9] Zhihao Luo, Zhong Liu, and Jianmai Shi. A two-echelon cooperated routing problem for a ground vehicle and its carried unmanned aerial vehicle. Sensors, 17(5):1144, 2017. * [10] Yao Liu, Zhong Liu, Jianmai Shi, Guohua Wu, and Witold Pedrycz. Two-echelon routing problem for parcel delivery by cooperated truck and drone. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51(12):7450–7465, 2020.
* [11] Subramanian Ramasamy, Jean-Paul F Reddinger, James M Dotterweich, Marshal A Childers, and Pranav A Bhounsule. Cooperative route planning of multiple fuel-constrained unmanned aerial vehicles with recharging on an unmanned ground vehicle. In 2021 International Conference on Unmanned Aircraft Systems (ICUAS), pages 155–164. IEEE, 2021. * [12] Subramanian Ramasamy, Jean-Paul F Reddinger, James M Dotterweich, Marshal A Childers, and Pranav A Bhounsule. Coordinated route planning of multiple fuel-constrained unmanned aerial systems with recharging on an unmanned ground vehicle for mission coverage. Journal of Intelligent & Robotic Systems, 106(1):1–18, 2022. * [13] Subramanian Ramasamy, Md Safwan Mondal, Jean-Paul F Reddinger, James M Dotterweich, James D Humann, Marshal A Childers, and Pranav A Bhounsule. Heterogenous vehicle routing: comparing parameter tuning using genetic algorithm and bayesian optimization. In 2022 International Conference on Unmanned Aircraft Systems (ICUAS), pages 104–113. IEEE, 2022. * [14] Nikhil Nigam, Stefan Bieniawski, Ilan Kroo, and John Vian. Control of multiple uavs for persistent surveillance: Algorithm and flight test results. IEEE Transactions on Control Systems Technology, 20(5):1236–1251, 2011. * [15] Nikhil Nigam. The multiple unmanned air vehicle persistent surveillance problem: A review. Machines, 2(1):13–72, 2014. * [16] Nikhil Nigam, Stefan Bieniawski, Ilan Kroo, and John Vian. Control of multiple uavs for persistent surveillance: Algorithm description and hardware demonstration. In AIAA Infotech@ Aerospace Conference and AIAA Unmanned… Unlimited Conference, page 1852, 2009. * [17] Eric Frew, Xiao Xiao, Stephen Spry, Tim McGee, ZuWhan Kim, Jack Tisdale, Raja Sengupta, and J Karl Hedrick. Flight demonstrations of self-directed collaborative navigation of small unmanned aircraft. In AIAA 3rd” Unmanned Unlimited” Technical Conference, Workshop and Exhibit, page 6608, 2004. * [18] Nidal Jodeh, Mark Mears, and David Gross. 
An overview of the cooperative operations in urban terrain (counter) program. In AIAA Guidance, Navigation and Control Conference and Exhibit, page 6308, 2008. * [19] Kevin Leahy, Dingjiang Zhou, Cristian-Ioan Vasile, Konstantinos Oikonomopoulos, Mac Schwager, and Calin Belta. Persistent surveillance for unmanned aerial vehicles subject to charging and temporal logic constraints. Autonomous Robots, 40:1363–1378, 2016. * [20] Kevin Leahy, Dingjiang Zhou, Cristian-Ioan Vasile, Konstantinos Oikonomopoulos, Mac Schwager, and Calin Belta. Provably correct persistent surveillance for unmanned aerial vehicles subject to charging constraints. In Experimental Robotics: The 14th International Symposium on Experimental Robotics, pages 605–619. Springer, 2016. * [21] Josh Redding, Tuna Toksoz, N Kemal Ure, Alborz Geramifard, Jonathan P How, Matthew A Vavrina, and John Vian. Distributed multi-agent persistent surveillance and tracking with health management. In Proceedings of the AIAA Guidance Navigation and Control Conference, Portland, OR, USA, pages 8–11, 2011. * [22] Emad Saad, John Vian, Gregory Clark, and Stefan Bieniawski. Vehicle swarm rapid prototyping testbed. In AIAA Infotech@ Aerospace Conference and AIAA Unmanned… Unlimited Conference, page 1824, 2009. * [23] David Halaas, Stefan Bieniawski, Paul Pigg, and John Vian. Control and management of an indoor, health enabled, heterogenous fleet. In AIAA Infotech@ Aerospace Conference and AIAA Unmanned… Unlimited Conference, page 2036, 2009. * [24] Jonathan P How. Multi-vehicle flight experiments: Recent results and future directions. Technical report, MASSACHUSETTS INST OF TECH CAMBRIDGE, 2007. * [25] Google. Google OR-tools. https://developers.google.com/optimization, 2021. Online; accessed Feb 2, 2021. * [26] Bingkai Lin. A simple gap-producing reduction for the parameterized set cover problem. arXiv preprint arXiv:1902.03702, 2019.
that in turn acts as vector subtraction (tail-to-tail). When coupled to a system of coordinates, we saw that we must be able to describe vectors in terms of their lengths along each basis vector — we call these lengths the components of the vector. Later on, to be consistent between algebra and geometry, we found rules for scalar multiplication and the dot product. Finally, using areas of parallelograms, we built the component form of the cross product AND inserted the idea of handedness into our coordinate systems by defining cross product relationships between our basis vectors. Table 1.1 lists the equations that are the most important (and general) throughout this chapter. I included the equation references to bring you back to the discussion where we derived them, just in case you need some context to refresh your memory about what each equation means. Sprinkled throughout the chapter are several proofs that I recommended that you try to do. All of them are practically identical to what I have already done in the chapter, although they may have more components or a couple more steps. I sincerely suggest that you try a few of them just to get a feel for how these proofs are done on your own — in physics and math, one of the best ways to deeply understand derivations or proofs is by putting in the time to work them out for yourself (although in my experience, a physics and math education leaves very little time for anything but perpetual confusion…). I know that for many of you reading this chapter, my lack of numerical values will be unsettling — I will only get more algebraic as the chapters progress. I do this on purpose, though. My first reason is that it is frankly easier once you get used to it. I do remember that the phase transition between needing numbers in math and exclusively using letters is not a smooth one.
It will take time to master — probably as much time as it did for you when you first started using the symbol $\pi$ instead of $3.14159$ back in the day. The advantage of only using letters or symbols is that you eventually need not worry about how the intermediate numerical values affect the outcome of your mathematics. By extension, the necessary variables will be left in your physical models of the natural world, giving a deeper insight into how the universe works. Unfortunately, unless you already have a handle on the physics, injecting numerical values at intermediate steps will obfuscate this insight. But again, it takes time (a.k.a. practice) to get used to doing everything algebraically. Hopefully this chapter and the following will serve as an external perturbation to make your phase transition that much easier.

TABLE 1.1: A summary of the important and general equations derived for vector operations.

Equation Description | Equation Formula | Text Reference
---|---|---
3-Dimensional Cartesian Vector | $\vec{v}=v_{x}\hat{x}+v_{y}\hat{y}+v_{z}\hat{z}$ | Eq. 1.3
$n$-Dimensional Cartesian Vector | $\vec{v}=v_{1}\hat{x}_{1}+v_{2}\hat{x}_{2}+\dots=\sum\limits_{j=1}^{n}v_{j}\hat{x}_{j}$ | Eq. 1.4
Angle between $xy$ Components | $\tan\theta_{xy}=\dfrac{v_{y}}{v_{x}}$ | Eq. 1.5
Length/Magnitude of Vector | $|\vec{v}|=v=\sqrt{\sum\limits_{j=1}^{n}v_{j}^{2}}$ | Eq. 1.8
Sum of Two Vectors | $\vec{u}+\vec{v}=\sum\limits_{j=1}^{n}(u_{j}+v_{j})\hat{x}_{j}$ | Eq. 1.13
Difference of Two Vectors | $\vec{u}-\vec{v}=\sum\limits_{j=1}^{n}(u_{j}-v_{j})\hat{x}_{j}$ | Eq. 1.14
Scalar Multiplication | $a\vec{v}=\sum\limits_{j=1}^{n}(av_{j})\hat{x}_{j}$ | Eq. 1.15
Unit Vector in Direction of $\vec{v}$ | $\hat{v}=\dfrac{\vec{v}}{|\vec{v}|}=\dfrac{\vec{v}}{v}$ | Eq. 1.16
Dot Product | $\vec{u}\cdot\vec{v}=\sum\limits_{j=1}^{n}u_{j}v_{j}=uv\cos{\theta}$ | Eq. 1.21
Projection of $\vec{v}$ onto $\vec{u}$ | $\textrm{proj}_{\hat{u}}(\vec{v})=\dfrac{\vec{u}\cdot\vec{v}}{|\vec{u}|}=\dfrac{\vec{u}\cdot\vec{v}}{u}$ | Eq. 1.23
Length/Magnitude and Dot Product | $|\vec{v}|=v=\sqrt{\vec{v}\cdot\vec{v}}$ | Eq. 1.25
Magnitude of Cross Product | $|\vec{v}\times\vec{u}|=uv\sin\theta$ | Eq. 1.27
Right-Handed Cartesian Basis | $\hat{x}\times\hat{y}=\hat{z}$, $\hat{z}\times\hat{x}=\hat{y}$, $\hat{y}\times\hat{z}=\hat{x}$ | Eq. 1.36
3-Dimensional Cross Product | $\vec{u}\times\vec{v}=\left(v_{y}u_{z}-v_{z}u_{y}\right)\hat{x}+\left(v_{z}u_{x}-v_{x}u_{z}\right)\hat{y}+\left(v_{x}u_{y}-v_{y}u_{x}\right)\hat{z}$ | Eq. 1.37
Definition of Linear Operator | $\mathcal{O}(a\vec{u}+b\vec{v})=a\mathcal{O}(\vec{u})+b\mathcal{O}(\vec{v})$ | Eq. 1.39

## Chapter 2 Complex Algebra

In this chapter, we will start a perhaps unfamiliar form of mathematics for many of you, but it is crucial not only to our understanding of quantum phenomena, but also to our understanding of differential equations, Fourier analysis of signals, all kinds of waves, and various forms of data analysis. By the end of this chapter, we want to be able to add complex numbers to the set of “game pieces” that we have been developing, and then eventually get to the point where we can create functions of complex numbers. This chapter should leave you in a pretty good position to understand most, if not all, of the complex mathematics you will cover in your undergraduate physics curriculum. It should also set you up to begin learning the calculus of complex-valued functions later on in your mathematics career.

### 2.1 The Lie of Imaginary Numbers

Before we get going, there is a common misconception that I want to clear up.
There is no such distinction between real and imaginary numbers in the colloquial sense; that is, there is no set of somehow tangible objects that we call the real numbers, $\mathds{R}$, versus the somehow intangible objects called the imaginary numbers, $\mathds{I}$. These sets of objects are certainly distinct mathematically, but that is due to a rotation rather than some metaphysical and mystical separation that seems to exist by calling two things real and imaginary. The reason why I want to address this is that the term “imaginary” has a totally different connotation in normal life than it does in mathematics. To be clear, at some point in the development of our algebra system, some mathematicians like René Descartes did truly believe that imaginary numbers really were not a thing, but rather a convenient way out of an otherwise harder problem [ComplexNumberHistory]. However, mathematicians today effectively only use the word as a label whose name bears no deep meaning. We credit people like Gauss, Cauchy, Euler, and Riemann, among others, for changing the way we think about these objects mathematically. Gauss showed that imaginary numbers are truly just an extension of the more conventional real numbers, and he even tried to get them rebranded as lateral numbers instead. His quote on this subject is below [Gauss_Quote].

> That this subject [imaginary numbers] has hitherto been surrounded by mysterious obscurity, is to be attributed largely to an ill adapted notation. If, for example, $+1$, $-1$, and the square root of $-1$ had been called direct, inverse and lateral units, instead of positive, negative and imaginary (or even impossible), such an obscurity would have been out of the question.

The problem with taking the word “imaginary” too literally in physics is that it makes certain natural phenomena appear fake, or something akin to pseudoscience. For example, when describing an electron’s spin, the imaginary (lateral!)
unit $i=\sqrt{-1}$ appears when talking about the projection of the spin vector along the $y$-axis. If we were to naively see the presence of $i$, we might be tempted to conclude that there is something mystical about this part of our physical world. Or even worse, we might conclude that we could never measure the $y$ component of the spin because it is imaginary! But this interpretation is not true. Gauss’ idea of lateral numbers can be used to more appropriately explain the appearance of $i$ in electron spins. In this case, as you will learn later in your physics career, we can only ever know precisely the projection of an electron’s spin along one axis in space; we denote the forward direction with a $+$ sign and the backward direction with a $-$ sign. However, we can measure the statistical effects of the spin’s vector components in the other two dimensions in 3D space. What this means is we have a total of 3 sets of distinct pairs of basis vectors in this spin space, which is supposed to have physical meaning in all of 3D space. Without going into too much linear algebra, we essentially need 4 components to represent the final two perpendicular axes in 3D space — but this is impossible with only the real numbers! Real components can only ever give you the magnitude and direction that a vector points along a particular line, as we discussed in the chapter on Vectors. Hence, our very real measurements of the natural world force us to extend the real numbers to include their lateral counterparts in order to accurately describe electron spin. Without further ado, I will quit my (legitimate) grumbling, and proceed with our introduction to the world of complex algebra. ### 2.2 Some Important Definitions Before we move on with more algebra, we need to lay down some ground rules for these things that I’m calling complex numbers. 
The first time they are typically introduced (although not the first time they were ever contrived [ComplexNumberHistory]) is with the standard defining equation

$\displaystyle x^{2}+1=0,$ (2.1)

where one solves for $x$ to find $x=\pm\sqrt{-1}$. Since we showed in Section 1.2.1 that $a^{2}>0$ for any real number $a\in\mathds{R}$ (remember that we use the symbol $\mathds{R}$ to represent the set of all reals and the symbol $\in$ to mean “is an element of”), we find that there can be no real number $x$ that satisfies $x^{2}+1=0$. Thus we define the imaginary unit as

$\displaystyle i=\sqrt{-1}.$ (2.2)

Likewise, we could easily define the real unit with the equation $x^{2}-1=0$, from which we obtain $x=\pm 1$. These ideas fit in with our understanding that $1$ represents “unit” length along a number line. We will define any imaginary number $\alpha$ as $\alpha=ai$, where $a\in\mathds{R}$. Such an object would solve the quadratic $x^{2}+a^{2}=0$ for $x$. Likewise, a real number $a\in\mathds{R}$ would solve the quadratic $x^{2}-a^{2}=0$ for $x$. When we move to more complicated quadratics, we have an equation that looks something like $ax^{2}+bx+c=0$, whose solutions are given by the quadratic formula

$\displaystyle x=-\frac{b}{2a}\pm\frac{1}{2a}\sqrt{b^{2}-4ac}.$ (2.3)

If we pay attention to the quantity called the discriminant, $D=b^{2}-4ac$, we should take note that there are exactly three cases for $D$, given by the ordering property of the reals. They are given explicitly as

$\displaystyle D>0\Rightarrow b^{2}>4ac,\qquad D=0\Rightarrow b^{2}=4ac,\qquad D<0\Rightarrow b^{2}<4ac.$ (2.4)

The third case, $D<0$, implies that we will again have a negative number inside of a square root, and so this implies that there will be an imaginary unit involved in some way. Written explicitly, when $D<0$, then it must be true that $-D>0$.
Then

$\displaystyle x=-\frac{b}{2a}\pm\frac{1}{2a}\sqrt{D}=-\frac{b}{2a}\pm\frac{1}{2a}\sqrt{-(-D)}=-\frac{b}{2a}\pm\frac{i}{2a}\sqrt{-D}=-\frac{b}{2a}\pm\frac{i}{2a}\sqrt{4ac-b^{2}}$ (2.5)

What we have now is something that hopefully is a little jarring to you, especially if you have never seen complex algebra before. We have an expression that is somehow telling us to add a real number $-b/2a$ to the imaginary number $i\sqrt{4ac-b^{2}}/2a$. But can we? There is mathematically a pretty large distinction between real numbers and imaginary numbers: the square (and any other nonzero even power) of a real number is always positive, while the square (and any other nonzero even power) of an imaginary number is always negative. We proved the former, whereas we had to define the latter. So what gives? The way we deal with this conundrum is actually by defining something new. It turns out that if you were jarred before, you were right, because there is no way to add a nonzero real number to a nonzero imaginary number and get either a purely real or purely imaginary number out. The sets of objects are just too different. All of their differences can pretty much be reduced to the fact that

$\displaystyle 1^{2}=1\textrm{ and }i^{2}=-1,$ (2.6)

and so we get around this issue by saying each of these numbers is totally distinct from the other, but they can be combined to form a greater set of numbers; just as the basis vectors $\hat{x}$ and $\hat{y}$ are totally distinct, but can be combined to form a plane. Actually, if we take the real unit and the imaginary unit as basis vectors, we can create the complex plane, denoted by $\mathds{C}$. Furthermore, we define a complex number $z$ as the vector sum of a real part $\mathrm{Re}(z)$ and an imaginary part $\mathrm{Im}(z)$, like the following:

$\displaystyle z=\mathrm{Re}(z)+i\,\mathrm{Im}(z).$ (2.7)

It is important to see then that this definition constrains both the real part and the imaginary part to be real numbers!
Please read that sentence again — it confuses a lot of people. Even though the imaginary part of a complex number is called the imaginary part, it itself is real. It represents the projection (see Eq. 1.23) of the complex number in the direction of $i$, whereas the real part is the projection of the complex number in the direction of $1$. Using this vector-like interpretation of complex numbers, the so-called “real axis” and “imaginary axis” must together span a complex plane, just like the $x$-axis and $y$-axis span the $xy$-plane. By convention, we denote $x=\mathrm{Re}(z)$ and $y=\mathrm{Im}(z)$, so we can write any complex number $z\in\mathds{C}$ as

$\displaystyle z\in\mathds{C}\textrm{ if and only if }z=x+iy,\textrm{ where }i=\sqrt{-1}\textrm{ and }x,y\in\mathds{R}.$ (2.8)

Since the complex numbers form a plane, we can more easily see what Gauss was talking about when he claimed that the imaginary unit should instead be named the lateral unit. The existence of this plane just means that the real numbers are accompanied by another orthogonal axis that has had the misfortune of having us silly humans call it “imaginary”. The lateral numbers are present with or without us claiming they are figments of our imaginations — they are an extension of the real numbers into the generalized complex plane. With that said, I will continue to refer to them as imaginary numbers just so you get accustomed to the vernacular. But please do not think any of this is some kind of fantasy. If you are willing to accept the existence of the real number line, it is clear there must exist an accompanying imaginary (lateral!) number line.

### 2.3 Conjugates and Magnitudes

A question that we can now ask, since we have defined complex numbers, is whether there is a way to construct the real and imaginary parts of any complex number computationally.
In other words, if we know any real number $x$ and any real number $y$, we can compute a complex number $z=x+iy$. That’s old news. The new question is whether we can find another complex number $z^{\ast}$, given a complex number $z$ we already know, such that we could calculate $x=\mathrm{Re}(z)$ and $y=\mathrm{Im}(z)$. If we were to assume that this were possible, then we would have to start by clarifying that if $z^{\ast}$ were complex, then $z^{\ast}=u+iv$ for some combination of real numbers $u$ and $v$. Thus, $\displaystyle z$ $\displaystyle=x+iy$ $\displaystyle z^{\ast}$ $\displaystyle=u+iv$ As of right now, we have two separate equations with two unknowns. By “separate”, I mean that as of right now, we don’t have an equation to relate any of the variables. Hence, we are free to choose that $x=\mathrm{Re}(z)=(z+z^{\ast})/2$. Thus, $\displaystyle x=\mathrm{Re}(z)=\frac{1}{2}\left(z+z^{\ast}\right)=\frac{1}{2}\left[(x+u)+i(y+v)\right]=\frac{x+u}{2}+i\,\frac{y+v}{2}$ Since $x$ is purely real, it cannot be imaginary. Thus the coefficient attached to $i$ must vanish; in other words, $y+v=0\Rightarrow v=-y$. This means that $\displaystyle x=\frac{x+u}{2}+i\,\frac{y+v}{2}=\frac{x+u}{2}+i0=\frac{x+u}{2}$ By multiplying both sides by 2 and subtracting over the remaining $x$, we then find $x=u$. This means we have an expression for $z^{\ast}$, given $z$: $\displaystyle\textrm{if }z=x+iy\textrm{ then }z^{\ast}=x+i(-y)=x-iy.$ (2.9) The complex number $z^{\ast}$ is a very helpful quantity — so helpful, in fact, that it is given the name of the complex conjugate to $z$. Additionally, this complex conjugate is unique: based on our rules for the real numbers in Section 1.2.1, there exists only one $v=-y\in\mathds{R}$ if $y\in\mathds{R}$ (this uniqueness property justifies our phrasing of “the” complex conjugate instead of “a” complex conjugate). Notice that $z^{\ast}=x-iy=x+(-i)y$. 
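As a quick numerical sanity check, we can reproduce the construction above with Python's built-in complex type, where `1j` plays the role of $i$ (the specific numbers are arbitrary sample values):

```python
# Sanity check of the derivation above: z* = x - iy, and x = (z + z*)/2.
z = 3 + 4j                  # z = x + iy with x = 3, y = 4
z_star = z.conjugate()      # Eq. 2.9: x - iy

print(z_star)               # -> (3-4j)
print((z + z_star) / 2)     # Re(z), recovered as derived above -> (3+0j)
```

The built-in `.conjugate()` method does exactly the "replace $i$ with $-i$" operation we just derived.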
This means that whenever we want a complex conjugate of $z=x+iy$, all we have to do is replace every $i$ with $-i$. If it is not immediately clear why this is true, it’s because we have defined complex numbers in such a way that any number may be written as a real part plus $i$ times an imaginary part, and then we defined the complex conjugate in terms of those arbitrary real and imaginary parts. This rule is particularly helpful for when you have a rather nasty function of complex variables, but you need the conjugate to compute something useful. Now what would be useful to compute with a complex conjugate? For starters, we know that we can find the real part of $z$ (or $z^{\ast}$) with the following formula. $\displaystyle\mathrm{Re}(z)=\frac{z+z^{\ast}}{2}.$ (2.10) We can actually calculate the imaginary part of $z$ as well. I leave it to you to verify that $\displaystyle\mathrm{Im}(z)=\frac{z-z^{\ast}}{2i}.$ (2.11) To show this, set $y=\mathrm{Im}(z)$ and use Eq. 2.9 to solve for $y$. If the division by $i$ is weird for you, then instead use $1/i=(1/i)(i/i)=i/(i^{2})=i/(-1)=-i$. The third part of Example 2.3 shows another helpful tool that the complex conjugate provides. Consider the complex numbers $z=x+iy$ and $w=u+iv$, where $x,y,u,v\in\mathds{R}$. We seek the real and imaginary parts of $z\pm w$, $zw$, and $z/w$. 1. We start with $z\pm w$. $\displaystyle z\pm w$ $\displaystyle=(x+iy)\pm(u+iv)$ $\displaystyle=(x\pm u)+i(y\pm v).$ (2.12) Hence, $\mathrm{Re}(z\pm w)=x\pm u$ and $\mathrm{Im}(z\pm w)=y\pm v$. 2. Next, we compute $zw$. $\displaystyle zw$ $\displaystyle=(x+iy)(u+iv)$ $\displaystyle=xu+ixv+iyu+i^{2}yv$ $\displaystyle=(xu-yv)+i(xv+yu).$ (2.13) Thus, $\mathrm{Re}(zw)=xu-yv$ and $\mathrm{Im}(zw)=xv+yu$. 3. Lastly, we seek the quantity $z/w$ (this one is a little tricky). 
$\displaystyle\frac{z}{w}$ $\displaystyle=\frac{x+iy}{u+iv}$ $\displaystyle=\frac{x+iy}{u+iv}\,\frac{w^{\ast}}{w^{\ast}}$ $\displaystyle=\left(\frac{x+iy}{u+iv}\right)\left(\frac{u-iv}{u-iv}\right),\textrm{ using Eq. 2.9 for }w$ $\displaystyle=\frac{(x+iy)(u-iv)}{(u+iv)(u-iv)}$ $\displaystyle=\frac{(xu+yv)+i(xv-yu)}{u^{2}+v^{2}}$ $\displaystyle=\frac{xu+yv}{u^{2}+v^{2}}+i\,\frac{xv-yu}{u^{2}+v^{2}}.$ (2.14) Therefore we have $\displaystyle\mathrm{Re}\left(\frac{z}{w}\right)=\frac{xu+yv}{u^{2}+v^{2}}\textrm{ and }\mathrm{Im}\left(\frac{z}{w}\right)=\frac{xv-yu}{u^{2}+v^{2}}$ It is important to note that since we can establish clear real and imaginary parts for each of the arithmetic operations above — addition, subtraction, multiplication, and division — these quantities are also complex numbers, by definition. Thus, when we add, subtract, multiply, or divide complex numbers, we will always get complex numbers back! For our next calculation, let’s find the product of $z$ with its conjugate $z^{\ast}$, which is often denoted as $z^{\ast}z$; we do so by direct substitution. $\displaystyle z^{\ast}z$ $\displaystyle=(x-iy)(x+iy)$ $\displaystyle=x^{2}-iyx+ixy-i^{2}y^{2}$ $\displaystyle=x^{2}-(-1)y^{2}+i(xy-yx)$ $\displaystyle=x^{2}+y^{2}+i0$ $\displaystyle=x^{2}+y^{2}.$ Outright, this formula may not look very impressive, so let’s try $\sqrt{z^{\ast}z}$: $\displaystyle\sqrt{z^{\ast}z}=\sqrt{x^{2}+y^{2}}=\sqrt{[\textrm{Re}(z)]^{2}+[\textrm{Im}(z)]^{2}}.$ (2.15) Hopefully this catches your eye as the Pythagorean Theorem, or even more importantly, the equation for the length of a two-dimensional vector, Eq. 1.6, where the $x$-component of the vector is just $x$ and the $y$-component is just $y$! Using Eq. 2.15, called the complex modulus of $z$, we can calculate a real modulus and an imaginary modulus if we consider only a purely real number $x$ or purely imaginary number $\Upsilon=iy$. 
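A short numerical sketch of Eq. 2.15 (sample values are arbitrary): $z^{\ast}z$ comes out purely real, and its square root agrees with the Pythagorean length of $(x,y)$.

```python
import math

# Check Eq. 2.15/2.16: sqrt(z* z) equals the Pythagorean length sqrt(x^2 + y^2).
z = 3 + 4j
zz = z.conjugate() * z        # z* z = x^2 + y^2, purely real
modulus = math.sqrt(zz.real)  # sqrt(z* z)

print(zz, modulus, abs(z))    # -> (25+0j) 5.0 5.0
```

Python's built-in `abs()` on a complex number computes exactly this complex modulus.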
$\displaystyle\sqrt{x^{\ast}x}$ $\displaystyle=\sqrt{x^{2}+0}=|x|,\textrm{ a real number has no imaginary part}$ $\displaystyle\sqrt{\Upsilon^{\ast}\Upsilon}$ $\displaystyle=\sqrt{0+y^{2}}=|y|,\textrm{ an imaginary number has no real part}$ where the absolute value bars are necessary because we implicitly took the positive square root, and so our moduli should be positive for arbitrary $x,y\in\mathds{R}$. But these expressions are just the magnitudes of the real numbers $x$ and $y$. In other words, the complex modulus is the generalization of magnitude — a.k.a. absolute value — in the complex plane! Accordingly, we may write $\displaystyle|z|=\sqrt{z^{\ast}z}=\sqrt{[\textrm{Re}(z)]^{2}+[\textrm{Im}(z)]^{2}},$ (2.16) when talking about the magnitude of any complex number. Before moving on, there are a couple more things I want to talk about. Firstly, what happens if we need to find the complex conjugate of a sum (or difference)? Suppose we have two complex numbers $z$ and $w$, as we do in the first part of Example 2.3, and we want to find $(z\pm w)^{\ast}$. We could do so directly as shown below using Eq. 2.12, $\displaystyle(z\pm w)^{\ast}=(x\pm u)+(-i)(y\pm v)=(x-iy)\pm(u-iv)=z^{\ast}\pm w^{\ast}.$ (2.17) Notice that we replaced the $+i$ in Eq. 2.12 with $-i$ to find the conjugate initially. Hence, when we have a sum (or difference) of two complex numbers, the conjugate of the sum (or difference) is just the sum (or difference) of the conjugates. How about a product of two complex numbers? Here we use Eq. 
2.13, $\displaystyle(zw)^{\ast}$ $\displaystyle=(xu-yv)+(-i)(xv+yu),$ $\displaystyle=(xu+i^{2}yv)-i(xv+yu),$ $\displaystyle=xu+(iy)(iv)-ixv-iyu,\textrm{ combine like-terms in $x$ and $-iy$}$ $\displaystyle=x(u-iv)-iy(u-iv),$ $\displaystyle=(x-iy)(u-iv),$ $\displaystyle=z^{\ast}\,w^{\ast}.$ (2.18) (Note: this derivation is a little tricky because I needed to remember that $-1=i^{2}$ in the second step and then group one $i$ with $y$ and the other with $v$ in the third step.) This relation shows that the conjugate of the product is simply the product of the conjugates! At this point, however, we have enough information to be certain that complex conjugation is NOT a linear operation for complex numbers (see Eq. 1.39 for the definition of a linear operator). To see this more clearly, we first extend our definition of a linear operator to the complex plane $\displaystyle\mathcal{O}(a\vec{z}+b\vec{w})=a\mathcal{O}(\vec{z})+b\mathcal{O}(\vec{w}),\;\;\textrm{for all }a,b\in\mathds{C}.$ (2.19) For right now, consider the complex vectors $\vec{z}$ and $\vec{w}$ as being normal vectors whose components are complex-valued. The specific details on these complex vectors are not necessary right now (we’ll save that for linear algebra…). What is important here is that if we have to compute $(a\vec{z}+b\vec{w})^{\ast}$, then we would have $\displaystyle(a\vec{z}+b\vec{w})^{\ast}$ $\displaystyle=(a\vec{z})^{\ast}+(b\vec{w})^{\ast},\textrm{ by Eq. 2.17}$ (2.20) $\displaystyle=a^{\ast}\,\vec{z}^{\ast}+b^{\ast}\,\vec{w}^{\ast}.\textrm{ by Eq. 2.18}$ (2.21) The only way this is equal to $a(\vec{z}^{\ast})+b(\vec{w}^{\ast})$ is if $a=a^{\ast}$ and $b=b^{\ast}$, implying that $a,b\in\mathds{R}$. But since our complex vectors have complex-valued components, and we have already shown that multiplication between complex numbers produces complex numbers, the condition that both $a$ and $b$ are real does NOT hold in general. 
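These conjugation rules are easy to spot-check numerically (a sketch with arbitrary sample values): conjugation distributes over sums and products, but a complex scalar that gets pulled out must itself be conjugated.

```python
# Numerical check of Eq. 2.17 and Eq. 2.18, plus the non-linearity argument.
z = 2 + 5j
w = -1 + 3j
a = 1j  # a complex (non-real) scalar

print((z + w).conjugate() == z.conjugate() + w.conjugate())  # -> True
print((z * w).conjugate() == z.conjugate() * w.conjugate())  # -> True
print((a * z).conjugate() == a * z.conjugate())              # -> False
print((a * z).conjugate() == a.conjugate() * z.conjugate())  # -> True
```

The `False` line is precisely the failure of linearity: $(az)^{\ast}=a^{\ast}z^{\ast}$, not $a\,z^{\ast}$, whenever $a$ is not real.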
Thus, in general, complex conjugation is not linear (this fact is specifically exploited in quantum mechanics and quantum field theory when dealing with time-reversal symmetry in real physical systems)! We have shown that the conjugate of the sum is the sum of the conjugates (Eq. 2.17) and the conjugate of the product is the product of the conjugates (Eq. 2.18). Show now that the conjugate of the quotient is the quotient of the conjugates. In other words, show that $(z/w)^{\ast}=z^{\ast}/w^{\ast}$. (Hint: either start with $z^{\ast}/w^{\ast}$, substitute in $-i$ for the $+i$, and then show that it is the conjugate of Eq. 2.14, OR start with the conjugate of Eq. 2.14 and manipulate it algebraically into $z^{\ast}/w^{\ast}$. I think both methods take similar amounts of algebra…) With purely real numbers, it is true that $|a|=|-a|$ for all $a\in\mathds{R}$. Thus, magnitudes are not unique; however, we can use a sign difference to order our real numbers from least to greatest. This problem focuses on whether we can do a similar thing in the complex plane — if we could, then it would be true that only $z$ and $-z$ share the same magnitude, just like the reals. We used $\sqrt{z^{\ast}z}$ as the definition of the magnitude of a complex number. Show that the complex numbers cannot be ordered like the reals by finding at least one other complex number that shares the same magnitude as $z$ (and $-z$). For example, show that $|z|=|z^{\ast}|$ for all $z=x+iy\in\mathds{C}$. (Hint: to avoid confusing yourself with all the $z$s and asterisks, define $w=z^{\ast}$ and then find $|w|$ using Eq. 2.16. Finally, compare your result with Eq. 2.15.) ### 2.4 Cartesian versus Polar Representations The fact that the real and imaginary axes form a complex plane implies that we should be able to draw any complex number in a geometric plane. 
Further, since we have a Pythagorean Theorem-type relationship for the magnitude of the complex number (and $1$ is totally distinct from $i$), we are able to infer that the real and imaginary axes are orthogonal to one another — just like the $x$-axis and the $y$-axis. Figure 2.1 shows a possible complex number $z=x+iy$ in the complex plane. Additionally, if we know the components $x$ and $y$, then we could easily draw the conjugate of $z$, namely $z^{\ast}=x-iy$. It is important to note that the points $z$ and $z^{\ast}$ are the complex numbers; meanwhile, based on our knowledge of vectors, we could easily draw an arrow from the origin to each complex number, where the length of each vector would be the magnitude (or length) of the complex number. Now for some trigonometry. FIG. 2.1: If we consider $z=x+iy$, then we can draw it and its conjugate, $z^{\ast}=x-iy$, in the complex plane. From the definition of sines, cosines, and tangents (see Section 1.3.3), we can use the geometrical right angle between the real and imaginary axes to define the angle of the complex number, or the so-called argument of a complex number, denoted by $\mathrm{arg}(z)=\theta$. As is shown in Fig. 2.1, $\theta$ is the angular elevation of the complex number above the $+x$-axis. Since we know that $\mathrm{Im}(z^{\ast})=-\mathrm{Im}(z)$ while the real parts are identical, we know that $\mathrm{arg}(z^{\ast})=-\mathrm{arg}(z)$. 
Using these angles we can then determine $\displaystyle\cos\theta$ $\displaystyle=\frac{\mathrm{adjacent}}{\mathrm{hypotenuse}}=\frac{x}{|z|}=\frac{x}{\sqrt{x^{2}+y^{2}}},$ (2.22) $\displaystyle\sin\theta$ $\displaystyle=\frac{\mathrm{opposite}}{\mathrm{hypotenuse}}=\frac{y}{|z|}=\frac{y}{\sqrt{x^{2}+y^{2}}}.$ (2.23) By solving the second equalities for $x$ and $y$ and then substituting these quantities into $z=x+iy$, we find $\displaystyle z=|z|\cos\theta+i|z|\sin\theta=|z|(\cos\theta+i\sin\theta).$ (2.24) Note that this picture is consistent with just substituting in $-i$ for $+i$ to obtain a conjugate, because $\displaystyle z^{\ast}=|z^{\ast}|[\cos\theta+(-i)\sin\theta]=|z|[\cos(-\theta)+i\sin(-\theta)],$ (2.25) where we have used the result of Problem 2.3 to set $|z^{\ast}|=|z|$, and then we used the even and odd symmetry of the sinusoids: $\cos(-\theta)=\cos(\theta)$ and $\sin(-\theta)=-\sin\theta$. Equation 2.24 shows that any complex number $z=x+iy$ can be represented as the product of a radial part and an angular part, as shown below $\displaystyle z=\underbrace{|z|}_{\textrm{Radial Part}}(\underbrace{\cos\theta+i\sin\theta}_{\textrm{Angular Part}}),$ (2.26) as a complement to the real-and-imaginary-part representation. Again, for the sake of completeness, the radial part is the modulus of the complex number while the angular part is the argument of the complex number. Take note that while the radial part is a real number like $\mathrm{Re}(z)$ and $\mathrm{Im}(z)$, the angular part is still complex! Thus, we have written any complex number both in a real-and-imaginary representation and in a radial-and-angular representation. These terms, although straight-to-the-point, are fairly clunky, so instead we name them the Cartesian and Polar representations of a complex number, respectively. The “pole” in this case is the distance (magnitude/length) the complex number is from the origin. 
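The Cartesian-to-polar conversion of Eq. 2.24 can be sketched in a few lines (sample value $z=1+i$ chosen arbitrarily); `math.atan2` takes the signs of $y$ and $x$ separately, so it lands in the correct quadrant:

```python
import math
import cmath

# A sketch of Eq. 2.24: rebuild z = x + iy from its modulus and argument.
z = 1 + 1j
r = abs(z)                           # radial part |z| = sqrt(x^2 + y^2)
theta = math.atan2(z.imag, z.real)   # argument; atan2 picks the right quadrant
rebuilt = r * (math.cos(theta) + 1j * math.sin(theta))

print(abs(rebuilt - z) < 1e-12)  # -> True (equal up to float rounding)
print(cmath.polar(z))            # the standard library's (r, theta) pair
```

`cmath.polar` and its inverse `cmath.rect` perform exactly this pair of conversions.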
A much more useful (and elegant) way to write a complex number $z=x+iy\in\mathds{C}$ and its conjugate in the polar representation is actually with an imaginary exponential $\textrm{e}^{i\theta}$, given as $\displaystyle z$ $\displaystyle=r\textrm{e}^{i\theta},$ (2.27) $\displaystyle z^{\ast}$ $\displaystyle=r\textrm{e}^{-i\theta},$ (2.28) where $\displaystyle r$ $\displaystyle=|z|=\sqrt{x^{2}+y^{2}},$ (2.29) $\displaystyle\theta$ $\displaystyle=\mathrm{arg}(z)=\arctan\left(\frac{y}{x}\right),$ (2.30) with the quadrant of $\theta$ fixed by the signs of $x$ and $y$ (the arctangent of the ratio alone cannot tell, say, $1+i$ apart from $-1-i$). Of course, to write such a thing, it would have to be true that $\displaystyle\textrm{e}^{i\theta}=\cos\theta+i\sin\theta,$ (2.31) where $\textrm{e}\approx 2.71828182846$ is the base of the natural logarithm. Surprisingly, this result is true for all values of $\theta$! Remarkably, this result is not too difficult to prove even though it took a supergenius like Leonhard Euler (pronounced oiler, unlike how they said it in The Imitation Game, much to my chagrin…) to first do it (hence, it is usually called the Euler identity) — it can be done with an introductory understanding of Taylor Series in a Calculus II course — although it is a bit beyond the scope of this chapter (don’t worry! We will come back to it later on and YOU will prove it in Problem 3.4.2; I help you along though). It is initially a little strange to go from the Cartesian representation to the polar representation, but ultimately using the polar representation is more convenient than the Cartesian representation (otherwise physicists wouldn’t bother with it!). So this problem is designed to have you practice the conversion for a few important numbers in physics. (a) Show that $i=\textrm{e}^{i\pi/2}$ by arguing $r(i)=1$ and $\mathrm{arg}(i)=\pi/2$. (b) Show that, in the Cartesian representation, $\sqrt{2}\,\textrm{e}^{-i\pi/4}=1-i$. (c) Consider the complex number $z=\textrm{e}^{i\pi}$. What is $z+1$ in both the Polar and Cartesian representations? 
Using the polar representation, it is possible for us to construct two of the most widely used expressions in all of physics — we are going to rewrite sine and cosine in terms of exponentials. To start, consider the complex number of unit magnitude given by Eq. 2.31. Then it must be true that $\displaystyle\mathrm{Re}\left(\textrm{e}^{i\theta}\right)=\cos\theta\in\mathds{R},$ (2.32) $\displaystyle\mathrm{Im}\left(\textrm{e}^{i\theta}\right)=\sin\theta\in\mathds{R}.$ (2.33) But by Eq. 2.10 and Eq. 2.11, we can write the real and imaginary parts of any complex number as a superposition of it and its complex conjugate. Then, $\displaystyle\cos\theta$ $\displaystyle=\frac{\textrm{e}^{i\theta}+\textrm{e}^{-i\theta}}{2},$ (2.34) $\displaystyle\sin\theta$ $\displaystyle=\frac{\textrm{e}^{i\theta}-\textrm{e}^{-i\theta}}{2i}.$ (2.35) Part of the reason why these relationships are so useful is that they allow us to treat trigonometric functions — objects that are defined geometrically — as exponential functions instead, which then allows us to employ a slew of useful (and quick) multiplication, differentiation, and integration rules to otherwise algebraically cumbersome functions. In physics, this polar representation of trigonometric functions helps us describe certain geometries and spaces in terms of essentially successive multiplications. As you can see in the chapter on Fourier Analysis, being able to convert sines and cosines into combinations of exponential functions makes otherwise impossible algebra much simpler. As we continue through this book, I will highlight more locations where these can be immediately implemented to make your life easier, because the sooner you begin to feel comfortable with Eqs. 2.34 and 2.35, the faster you will begin to see through the mathematics of difficult subjects like signals, optics, and quantum mechanics to understand the underlying phenomena in a much more precise way. 
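Eqs. 2.34 and 2.35 are easy to verify numerically at any angle (the angle below is an arbitrary sample value):

```python
import cmath
import math

# Check Eqs. 2.34 and 2.35: the sinusoids are superpositions of
# e^{i*theta} and its conjugate e^{-i*theta}.
theta = 0.73  # any angle works
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_from_exp = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)

print(abs(cos_from_exp - math.cos(theta)) < 1e-12)  # -> True
print(abs(sin_from_exp - math.sin(theta)) < 1e-12)  # -> True
```

Note that both superpositions come out (numerically) purely real, exactly as Eqs. 2.32 and 2.33 demand.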
There is something potentially unsettling about writing geometric formulas as exponential formulas; it seems to imply there is a clear rotational aspect to multiplying two numbers, rather than the simpler stretching-and-shrinking interpretation that was valid with the real numbers. We study this more in the next section. ### 2.5 Multiplication = Dilation + Rotation Let’s consider two complex numbers in polar representation, given by $z=r\textrm{e}^{i\theta}$ and $w=\rho\textrm{e}^{i\phi}$. We will calculate their product and quotient in the polar representation. $\displaystyle zw$ $\displaystyle=\left(r\textrm{e}^{i\theta}\right)\left(\rho\textrm{e}^{i\phi}\right)=r\rho\textrm{e}^{i\theta}\textrm{e}^{i\phi}=(r\rho)\,\textrm{e}^{i(\theta+\phi)},$ (2.36) $\displaystyle\frac{z}{w}$ $\displaystyle=\frac{r\textrm{e}^{i\theta}}{\rho\textrm{e}^{i\phi}}=\frac{r}{\rho}\,\textrm{e}^{i\theta}\textrm{e}^{-i\phi}=\left(\frac{r}{\rho}\right)\,\textrm{e}^{i(\theta-\phi)}.$ (2.37) Thus, by Eq. 2.36, multiplying a complex number by another is equivalent to dilating the magnitude of the first by the second and rotating the first complex number counter-clockwise by the second’s argument. Figure 2.2 shows the geometry behind multiplication. Likewise, since division is the multiplicative inverse, we should not be surprised by Eq. 2.37, which says that dividing a complex number by another is equivalent to constricting (my word choice for “anti-dilating”) the magnitude of the first by the second and rotating the first complex number clockwise by the second’s argument. FIG. 2.2: When we multiply the complex number $z=r\textrm{e}^{i\theta}$ by $w=\rho\textrm{e}^{i\phi}$, the magnitude of $z$ is dilated by a factor of $\rho$, while the angle of $z$ is rotated by $\mathrm{arg}(w)=\phi$. Thus $|zw|=r\rho$ and $\mathrm{arg}(zw)=\theta+\phi$. 
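The dilation-plus-rotation picture of Eq. 2.36 can be sketched directly with the standard library's polar helpers (the moduli and angles below are arbitrary sample values):

```python
import cmath
import math

# Check Eq. 2.36: multiplying complex numbers multiplies moduli
# and adds arguments.
z = cmath.rect(2.0, math.pi / 6)   # r = 2,   theta = pi/6
w = cmath.rect(3.0, math.pi / 4)   # rho = 3, phi = pi/4

r_out, theta_out = cmath.polar(z * w)
print(abs(r_out - 6.0) < 1e-12)                          # -> True  (r*rho = 2*3)
print(abs(theta_out - (math.pi/6 + math.pi/4)) < 1e-12)  # -> True  (theta + phi)
```

One caveat worth remembering: `cmath.polar` reports arguments in $(-\pi,\pi]$, so for large enough input angles the sum $\theta+\phi$ only matches up to a multiple of $2\pi$.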
More often than not in physics, when we deal with complex numbers, we are talking about ones with unit magnitude — that is, the complex modulus is equal to one. Any complex number of the form in Eq. 2.31 fits this description. The reason why these numbers are so important is that they do not dilate the modulus of any complex number they are multiplying. Instead, they exclusively rotate the complex number they are multiplying. Figure 2.3 serves to help you get some intuition for how the multiplication = rotation bit works with more concrete numerical examples like $\pm 1$ and $\pm i$. Very often, physicists will talk of the phase of some quantity (such as in electromagnetic theory, optics, or quantum mechanics), and refer to the entire complex number $\textrm{e}^{i\theta}$ as this phase. To be perfectly precise, the actual angle (argument) is the phase, NOT the entire complex number. However, as we have seen by Eq. 2.36 where $r=1$, multiplying by the entire number $z=\textrm{e}^{i\theta}$ only changes the total phase of the product, since it rotates the original number by $\mathrm{arg}(z)=\theta$. Thus, in terms of things we can experimentally detect (and therefore know are truly there), we can only find the resultant change of phase due to the multiplication of a complex number by another of unit magnitude. FIG. 2.3: In this figure, we multiply each of the following points along the complex unit circle by the imaginary (lateral!) unit $i=\textrm{e}^{\pi i/2}$: $(1,0)$, $(0,i)$, $(-1,0)$, and $(0,-i)$. Starting with $(1,0)$, we have $1\cdot i=i=\textrm{e}^{\pi i/2}$ over the solid red arc. Hence, multiplying $1$ by $i$ rotates $1$ by $\pi/2$ radians ($90^{\mathrm{o}}$). Next, we multiply $i$ by $i$ in the dashed blue arc. But $i^{2}=-1=\textrm{e}^{\pi i/2+\pi i/2}=\textrm{e}^{\pi i}$. Thus, again, multiplying by $i$ rotates our starting complex number $i$ by $\pi/2$ radians. I leave you to confirm the other two rotations for yourself. 
Our ability to detect these so-called phase differences is actually rather remarkable. For example, in both classical and quantum mechanics, magnetic fields exert a torque on any charged object with angular momentum. This torque causes the charged object to precess in a circle; in other words, the charged object will behave similarly to how a toy top begins to wobble in circles around its central axis before it falls over due to gravity. In this case, the field that generates the wobble is the magnetic field instead of the gravitational field. Anyway, in quantum mechanics, a typical experiment to measure the phase difference goes something like this: generate a beam of identical particles, split the beam into two parts, do something to one beam and leave the other alone, then recombine the beams and see if anything happens. So one such experiment deals with measuring the intensity of particles with inherent angular momentum (spin) after splitting up the beam and having one part travel through a magnetic field over some distance. The amount those particles wobble due to the magnetic field then acts as the phase difference between the particles in the beam. As you will eventually learn, the intensity of the recombined beam is a function of the phase difference, meaning we can experimentally vary either the strength of the magnetic field or increase the distance the beam travels through it, and then change the intensity of the resulting beam of particles! For an example of such an experiment, check out [spin_interferometry]. Before proceeding, let’s take note of one special case of multiplication: exponentiation. In other words, we can raise any real number $a\in\mathds{R}$ to the $n\in\mathds{R}$ power by successively multiplying $a$ by itself $n$ times. For example, if $a=2$ and $n=4$, then $a^{n}=2^{4}=2\cdot 2\cdot 2\cdot 2=16$. 
Likewise, if we have a negative exponent, then we divide successively, while if $n$ has a noninteger fractional part, we take the appropriate root $(2^{1/3}=\sqrt[3]{2})$. Using the polar representation of a complex number $z=r\textrm{e}^{i\theta}$, we can generalize exponentiation rather straightforwardly as $\displaystyle z^{n}=r^{n}\,(\textrm{e}^{i\theta})^{n}=r^{n}\,\textrm{e}^{in\theta}.$ (2.38) By setting a new angle $\psi=n\theta$, and a new radial part as $s=r^{n}\in\mathds{R}$, we have $\displaystyle z^{n}=s\,\textrm{e}^{i\psi}.$ (2.39) Making use of Euler’s Identity (Eq. 2.31), we see that $\displaystyle z^{n}=s(\cos\psi+i\sin\psi)=r^{n}(\cos n\theta+i\sin n\theta).$ (2.40) Since the quantity $z^{n}$ can be written in terms of a radial and angular part, it must be complex-valued, as those parts yield real and imaginary parts. Thus $z^{n}\in\mathds{C}$, which means our rules for complex numbers lead to closure under exponentiation! In other words, we cannot ever possibly end up with a non-complex-valued quantity purely by exponentiation (this is a very good thing, mind you, for it essentially gives us motivation for starting to see if more complicated algebraic and transcendental functions always return complex numbers). There is, meanwhile, a much more subtle identity that we may not have initially noticed while showing $z^{n}\in\mathds{C}$. We made use of a statement known as De Moivre’s Theorem, which says that for any integer $n$ $\displaystyle\left(\cos\theta+i\sin\theta\right)^{n}=\cos(n\theta)+i\sin(n\theta).$ (2.41) Looking closely at Eq. 2.41, we see that this statement is actually incredibly complicated. It manages to relate something as easy to compute as raising a number to an integer power $n$ to the much more difficult trigonometric functions, cosine and sine. But more importantly, De Moivre’s Theorem puts the exponent $n$ inside of the argument of the trig function! 
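De Moivre's Theorem for an integer exponent is simple to spot-check (the angle and exponent below are arbitrary sample values):

```python
import math

# Numerically check De Moivre's Theorem (Eq. 2.41) for an integer exponent.
theta = 0.4
n = 7
lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n   # (cos t + i sin t)^n
rhs = math.cos(n * theta) + 1j * math.sin(n * theta)  # cos(nt) + i sin(nt)

print(abs(lhs - rhs) < 1e-12)  # -> True
```

The exponent here is deliberately an integer; as the next paragraphs explain, the identity cannot be trusted for fractional exponents.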
Usually, there is no clear way to translate the argument of a trig function to anything outside of the function — for example, $\cos(2x)\neq 2\cos x\neq(\cos x)^{2}$ for every value of $x$. Trigonometric functions just don’t behave nicely like this. However, by making use of the complex plane, De Moivre’s Theorem gives us a way of applying algebraically simple operations to compute otherwise very difficult, if not outright impossible, quantities. I do want to emphasize that De Moivre’s Theorem only applies for integer exponents, whereas Eq. 2.40 applies for all possible values of $n$ (even complex ones!). What gives? Where is there a difference? We look into this question next. #### 2.5.1 Roots of Unity Let’s consider the case in De Moivre’s Theorem where $n=1/2$. In this particular case, if De Moivre’s Theorem were to hold, then it would be true (but it is not) that $\displaystyle\left(\cos\theta+i\sin\theta\right)^{1/2}=\cos\left(\frac{\theta}{2}\right)+i\sin\left(\frac{\theta}{2}\right).$ Let’s choose the easy cases of $\theta=0$ and $\theta=2\pi$, since on the left-hand side we will have $\displaystyle(\cos 0+i\sin 0)^{1/2}=(\cos 2\pi+i\sin 2\pi)^{1/2}=1^{1/2}.$ This follows since $\sin 0=\sin 2\pi=0$ and $\cos 0=\cos 2\pi=1$. There is no problem yet. The issue arises when we look at the right-hand side to find $\displaystyle\cos\left(\frac{0}{2}\right)+i\sin\left(\frac{0}{2}\right)=\cos 0$ $\displaystyle=1$ $\displaystyle\cos\left(\frac{2\pi}{2}\right)+i\sin\left(\frac{2\pi}{2}\right)=\cos\pi$ $\displaystyle=-1.$ (Remember that $\sin\pi=0$.) So, if De Moivre’s Theorem were to be trusted, we would find that $1^{1/2}=1=-1$. In other words, we would find that our answer implies that $1=-1$, and this contradicts our rules for real numbers. This result is not too surprising from an Algebra II point-of-view, since we already know that both $(1)^{2}=1$ and $(-1)^{2}=1$, which we normally write instead as $1^{1/2}=\pm 1$. 
But this means that the square-root function is multi-valued, and in general the $n^{\mathrm{th}}$-root is also multi-valued. The problem with De Moivre’s Theorem is that it does not explicitly account for this phenomenon. We could easily generalize it to account for the multi-valuedness in exponentiation by rational exponents. We start with the so-called Roots of Unity, as they are both fundamental and something that we have already developed the motivation for. Let’s start off easy, and find the cubic roots of unity. We will proceed in the same way as we did before with the square-roots of unity. Here we have $\displaystyle\left(\cos\theta+i\sin\theta\right)^{1/3}=\cos\left(\frac{\theta}{3}\right)+i\sin\left(\frac{\theta}{3}\right).$ Like before, we choose $\theta=0$ and $\theta=2\pi$ because $\cos 0=\cos 2\pi=1$. There is actually another case we can consider, too: $\theta=4\pi\Rightarrow\cos 4\pi=1$. Calculating each of these cases, we have $\displaystyle\cos\left(\frac{0}{3}\right)+i\sin\left(\frac{0}{3}\right)=1$ $\displaystyle\cos\left(\frac{2\pi}{3}\right)+i\sin\left(\frac{2\pi}{3}\right)=-\frac{1}{2}+i\frac{\sqrt{3}}{2}$ $\displaystyle\cos\left(\frac{4\pi}{3}\right)+i\sin\left(\frac{4\pi}{3}\right)=-\frac{1}{2}-i\frac{\sqrt{3}}{2}$ And so the cubic-roots of unity we found are $\displaystyle 1^{1/3}\in\left\\{1,-\frac{1}{2}+i\frac{\sqrt{3}}{2},-\frac{1}{2}-i\frac{\sqrt{3}}{2}\right\\}$ Hopefully, you aren’t satisfied with this derivation of the cubic-roots of unity so far, because I just arbitrarily decided to include the $4\pi$ part when I didn’t include it for the square-roots. The reason why I was able to include $4\pi$ for the cubic-roots and not for the square-roots is the oscillatory behavior of sines and cosines — they repeat themselves every $2\pi$ radians. So essentially, when looking for $\theta$ values that would show the roots of unity are indeed multi-valued, I needed to watch out for two criteria. 
First, I needed to make sure $\textrm{e}^{i\theta}=1$, since we are talking about roots of unity, AND I needed to make sure that the right-hand side did not repeat itself. Let’s analyze each criterion independently, and by doing so, we will generalize the square and cubic cases to the $n^{\mathrm{th}}$-root. If we want $\textrm{e}^{i\theta}=1$, then we need $\cos\theta+i\sin\theta=1$. But the value $1$ is totally real, hence the sine coefficient attached to the $i$ must vanish for our $\theta$ values. This condition holds for $\theta\in\\{0,\pm\pi,\pm 2\pi,\pm 3\pi,\dots\\}$. Next, we are dealing with roots of unity, not roots of negative-unity. Thus, we can only allow for even multiples of $\pi$, so that $\cos(2\pi m)=1$, where $m\in\\{0,\pm 1,\pm 2,\pm 3,\dots\\}$. So this explains why I kept choosing $\theta=0,2\pi$ and then $4\pi$. But why not the negatives, too? The answer to that comes again from the even symmetry of the cosine function: $\displaystyle\cos(-\theta)=\cos(\theta).$ In other words, the cosine functions ignore the overall negative sign inside of their argument and produce the same result either way. Hence, we could include the negative values, $-2\pi,-4\pi,$ et cetera, but we would ultimately always recover the same set of possible $\theta$ values. Now we move onto the second criterion, which says that $\theta$ cannot make the right-hand side repeat itself. We use the first criterion that we had established, where all of the roots of unity have $\theta$ values that are an integer multiple of $2\pi$. Let’s call this multiple $m$ and write $1^{1/n}$ as $\displaystyle 1^{1/n}=\cos\left(\frac{2\pi m}{n}\right)+i\sin\left(\frac{2\pi m}{n}\right),\;\;m\in\\{0,1,2,\dots\\}.$ If we start with $m=0$, then we will always find $1^{1/n}=1$, as we expect, since $1^{n}=1$. 
Then, as we continue to check through all of the possible multiples we have, we find the first repeating value at $m=n$: $\displaystyle 1^{1/n}\underbrace{=}_{m=n}\cos\left(\frac{2\pi n}{n}\right)+i\sin\left(\frac{2\pi n}{n}\right)=\cos 2\pi=1$ But this returns the exact same value as $m=0$. Furthermore, any multiple of $n$ will ALWAYS give the same value as the $m=0$ case. Meanwhile, all of the possible multiples up to $n$ are totally allowed because, in general, $\displaystyle\frac{2\pi m}{n}\neq 2\pi;$ the only case where equality holds is when $m=n$. Thus, our $n^{\mathrm{th}}$-roots of unity are given as $\displaystyle 1^{1/n}=\cos\left(\frac{2\pi m}{n}\right)+i\sin\left(\frac{2\pi m}{n}\right),\;\;m\in\\{0,1,\dots,n-1\\},$ (2.42) in the Cartesian representation. In the polar representation, we pack the sinusoids into the exponential function to find $\displaystyle 1^{1/n}=\mathrm{exp}\left(\frac{2\pi mi}{n}\right),\;\;m\in\\{0,1,\dots,n-1\\}.$ (2.43) (Note that $\mathrm{exp}(x)=\textrm{e}^{x}$ for all $x$. I used the exp representation because the equation would have looked gross if I used e instead.) FIG. 2.4: A few plots of the symmetry in the roots of unity. The orange points are the complex numbers centered around their common origin. The vectors attached to each point are there to help with our geometrical interpretation of the complex plane. The regular polygons within each unit circle represent the internal symmetry within the set of the $n^{\mathrm{th}}$-roots of unity. The $n^{\mathrm{th}}$-roots of unity are quite nice geometrically because they chop up the unit circle ($2\pi$ radians) into regular $n$-gons by dividing the full $2\pi$ radians into equally-spaced $2\pi/n$ intervals. A few of these are shown in Fig. 2.4. What is more beautiful from a physical point-of-view is that since the roots of unity form regular $n$-gons inscribed within a unit circle, their sum must vanish. 
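Eq. 2.43 turns into a one-line list comprehension (the choice $n=5$ is an arbitrary sample):

```python
import cmath

# Build the n-th roots of unity from Eq. 2.43 and confirm that each one,
# raised to the n-th power, really does return 1 (up to float rounding).
n = 5
roots = [cmath.exp(2j * cmath.pi * m / n) for m in range(n)]

for root in roots:
    print(abs(root**n - 1) < 1e-12)  # -> True, printed n times
```

Plotting these `roots` as points in the complex plane would reproduce the regular $n$-gon picture of Fig. 2.4.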
This can be directly applied to study cylindrically (and polygonally) symmetrical systems in physics; for example, point-like charges (masses) arranged in some polygonal shape whose net electric (gravitational) fields must vanish at their geometrical center. In more computational applications in physics, this property allows us to compute the discrete Fourier Transform of a signal, which is a mathematical operation that allows us to understand the signal in terms of its frequency-dependence instead of its time-dependence (the Fourier Analysis chapter in this book does not deal with the discrete transform, but the tools developed in that chapter in the continuum can help you understand the intuition behind the discrete version). To show how the roots sum to zero explicitly, recall the square-roots of unity: $1$ and $-1$. Together they form a line (lame), but their sum is $1+(-1)=0$. We can also do the cubic-roots of unity that form an equilateral triangle (less lame): $\displaystyle 1-\frac{1}{2}+i\frac{\sqrt{3}}{2}-\frac{1}{2}-i\frac{\sqrt{3}}{2}=1-1+i0=0.$ Now let’s generalize (disclaimer: this will be one of the more abstract things so far). Using Eq. 2.42, find the Cartesian representation of the $4^{\mathrm{th}}$ and $5^{\mathrm{th}}$ roots of unity and show that the sum of their roots is zero. Consider the sum of the $n^{\mathrm{th}}$-roots of unity in polar representation, written as $S_{n}$, and given as $\displaystyle S_{n}=1+\textrm{e}^{2\pi i/n}+\dots+\textrm{e}^{2\pi i(n-1)/n}=\sum_{m=0}^{n-1}\textrm{e}^{2\pi im/n}$ If we look closely at the summation, we will see that we actually have a finite geometric series, defined generally as $\displaystyle S_{n}=1+a+a^{2}+\dots+a^{n-1}=\sum_{m=0}^{n-1}a^{m},$ (2.44) for some value $a$. In this case, $a=\textrm{e}^{2\pi i/n}$ and $\displaystyle S_{n}=\sum_{m=0}^{n-1}\left(\textrm{e}^{2\pi i/n}\right)^{m}$ To determine what this finite sum is, we can employ a few mathemagical (bad pun?) 
tricks that I highly recommend you work out with me. Knowing how they work WILL come in handy later on in your physics career, at least when it comes to handling finite sums. If you work these tricks out with me, then you are prone to remembering them in the future. Here we go. We will calculate the quantity $1-S_{n}$ using the definition of $S_{n}$ given in Eq. 2.44, and at the end we will substitute in $a=\textrm{e}^{2\pi i/n}$. $\displaystyle 1-S_{n}$ $\displaystyle=1-\sum_{m=0}^{n-1}a^{m}$ $\displaystyle=1-(1+a+a^{2}+\dots+a^{n-1})$ $\displaystyle=-(a+a^{2}+\dots+a^{n-1})$ $\displaystyle=-a(1+a+\dots+a^{n-2})$ $\displaystyle=-aS_{n-1}.$ In the last equality, I used the definition of the finite geometric series again, except since the sum only goes to $n-2$, then that means the proper subscript on $S$ is $n-1$ since $(n-1)-1=n-2$. In this form, we cannot proceed because we needed $S_{n}$, NOT $S_{n-1}$. Thus we need to figure out a way to relate the two different finite sums. Let’s consider now $S_{n-1}=1+a+\dots+a^{n-2}$. If we look again at Eq. 2.44, we will see that if we were to add $a^{n-1}$ to $S_{n-1}$, we would have $\displaystyle S_{n-1}+a^{n-1}=1+a+\dots+a^{n-2}+a^{n-1}=1+a+\dots+a^{n-1}=S_{n}.$ Hence, $S_{n}-a^{n-1}=S_{n-1}$. Now we are going to substitute this expression in for $S_{n-1}$ inside the last equality for $1-S_{n}$ and isolate $S_{n}$. $\displaystyle 1-S_{n}=-aS_{n-1}=-a\left(S_{n}-a^{n-1}\right)=-aS_{n}+a^{n}.$ By moving the $S_{n}$ on the left-hand side to the right-hand side we have $\displaystyle 1=S_{n}-aS_{n}+a^{n}\Rightarrow 1-a^{n}=(1-a)S_{n},$ and finally we have $\displaystyle S_{n}=\frac{1-a^{n}}{1-a},$ (2.45) which is the exact way to calculate a finite geometric series for any value of $a\neq 1$ (the formula blows up at $a=1$ because if $a$ were 1 then we would have a $S_{n}-S_{n}=0$ step in our derivation, thus we had to implicitly assume that $a\neq 1$). 
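Before we specialize to our particular $a$, Eq. 2.45 can be sanity-checked numerically for an arbitrary complex $a\neq 1$. A minimal Python sketch (the function names are my own):

```python
def geometric_sum_direct(a, n):
    # Term-by-term sum 1 + a + a^2 + ... + a^(n-1), as in Eq. 2.44.
    return sum(a ** m for m in range(n))

def geometric_sum_closed(a, n):
    # Closed form (1 - a^n)/(1 - a) from Eq. 2.45, valid only for a != 1.
    return (1 - a ** n) / (1 - a)

a = 0.5 + 0.25j  # any complex value other than 1 works here
for n in (1, 2, 5, 10, 25):
    assert abs(geometric_sum_direct(a, n) - geometric_sum_closed(a, n)) < 1e-12
```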
Finally, we substitute in $a=\textrm{e}^{2\pi i/n}\neq 1$: $\displaystyle S_{n}(\textrm{e}^{2\pi i/n})=\frac{1-(\textrm{e}^{2\pi i/n})^{n}}{1-\textrm{e}^{2\pi i/n}}=\frac{1-\textrm{e}^{2\pi in/n}}{1-\textrm{e}^{2\pi i/n}}=\frac{1-\textrm{e}^{2\pi i}}{1-\textrm{e}^{2\pi i/n}}=\frac{1-1}{1-\textrm{e}^{2\pi i/n}}=0.$ (2.46) And so it is true that the sum of all of the $n^{\mathrm{th}}$-roots of unity will always be identically zero! Furthermore, since a geometric series is built from multiplying the same object $a=\textrm{e}^{2\pi i/n}$ by itself a bunch of times, and we know that multiplication by a complex number of unit magnitude is a pure rotation, then the complex number $\textrm{e}^{2\pi i/n}$ represents a symmetry present in the set of the $n^{\mathrm{th}}$-roots of unity. The symmetry here is that our set of complex numbers are invariant under a rotation of $2\pi/n$ radians — i.e. the set of points looks identical if we rotate all of them by the same $2\pi/n$ angle! For example, if we have the square-roots $\\{1,-1\\}$ and we rotate each number through the complex plane by $2\pi/2=\pi$ radians, we have $\displaystyle\\{1,-1\\}\xrightarrow{\textrm{rotate by }\pi\textrm{ radians}}\\{-1,1\\},$ as shown in Fig. 2.5. Since the rotation $\textrm{e}^{\pi i}$ maps the set of square- roots onto themselves, this rotation is the symmetry I was talking about that is directly embedded within the set of square-roots of unity. FIG. 2.5: The symmetry in the square-roots of unity is a rotation by $\pi$ radians since the set of roots is identical under this rotation. Since the ambiguity in De Moivre’s Theorem is taken care of, we can return to Eq. 2.40 to account for all of the different possible roots of $z$. 
Specifically, for any root $1/n$ we have $\displaystyle z^{1/n}$ $\displaystyle=\left(r\textrm{e}^{i\theta}\right)^{1/n}$ $\displaystyle=r^{1/n}\cdot 1^{1/n}\cdot\textrm{e}^{i\theta/n}$ $\displaystyle=r^{1/n}\,\textrm{e}^{2\pi im/n}\,\textrm{e}^{i\theta/n},\;\;m\in\\{0,1,\dots,n-1\\}$ $\displaystyle=r^{1/n}\,\mathrm{exp}\left(i\frac{\theta+2\pi m}{n}\right),\;\;m\in\\{0,1,\dots,n-1\\}$ $\displaystyle=r^{1/n}\left[\cos\left(\frac{\theta+2\pi m}{n}\right)+i\sin\left(\frac{\theta+2\pi m}{n}\right)\right],\;\;m\in\\{0,1,\dots,n-1\\}$ (2.47) Then, by extension, a complex number $z$ to any rational power $p/q$ (for example 4/3) can be written as $\displaystyle z^{p/q}$ $\displaystyle=\left[r^{1/q}\,\mathrm{exp}\left(i\frac{\theta+2\pi m}{q}\right)\right]^{p},\;\;m\in\\{0,1,\dots,q-1\\}$ $\displaystyle=r^{p/q}\,\mathrm{exp}\left(ip\frac{\theta+2\pi m}{q}\right),\;\;m\in\\{0,1,\dots,q-1\\}$ $\displaystyle=r^{p/q}\left\\{\cos\left[\frac{p}{q}(\theta+2\pi m)\right]+i\sin\left[\frac{p}{q}(\theta+2\pi m)\right]\right\\},\;\;m\in\\{0,1,\dots,q-1\\},$ (2.48) The power of raising a complex number to a rational exponent is that we can then use this as a basis for raising a complex number to an irrational exponent, since we can always successively approximate an irrational number with a rational number. For example, $\sqrt{2}\approx 1.414=1414/1000$. But how about raising a complex number to a complex exponent? We will study this and more in the next section. ### 2.6 Functions of a Complex Variable In this section, we will study a few algebraic and transcendental functions of a complex variable. Unfortunately, we will not have time to study these functions beyond just domain and range, but rest assured that understanding the inputs and outputs of complex functions is totally sufficient for a B.S. in physics. 
As one studies either more math or more physics, ideas from Complex Analysis become relevant if not actually crucial, whether they be in the form of circuit analysis, fluid mechanics, or quantum field theory. I have some references for anyone interested at this point in their studies [complex_analysis_wikipedia_2018, beck_marchesi_pixton_sabalka_2017, arnold_complexanalysis] (feel free to come back at a later date for them though!). I have used the term “map” before when talking about functions, and perhaps you’ve heard others use that word in the same context. In more colloquial settings, people use maps to get from one place to another, and in this section the connection between a function and an everyday map will become clearer. We will see that functions, particularly complex-valued functions, serve to get from one place in the complex plane to another. To illustrate this idea, let’s consider the following example where we see how regions of the complex plane are connected through the function, or mapping, $w(z)=z^{2}$. Consider the complex-valued function $w(z)=z^{2}$. We want to study $w$ over a unit square in the first quadrant of the complex plane. To start, we consider $z=x+iy$ in the Cartesian representation. Then, $\displaystyle w(z)=z^{2}=(x+iy)^{2}=x^{2}-y^{2}+2ixy.$ (2.49) If we define $w=u+iv$, then we have $u=x^{2}-y^{2}$ and $v=2xy$ for the real and imaginary parts of $w$, respectively. Now we define the boundary of the unit square for our input, or domain, of interest in the complex plane $\displaystyle z=\begin{cases}x+i0,&x\in[0,1]\\\ 1+iy,&y\in[0,1]\\\ x+i,&x\in[0,1]\\\ 0+iy,&y\in[0,1]\end{cases}$ If we substitute these coordinates for $z$ into $w$, we have $\displaystyle w(z)=\begin{cases}x^{2},&x\in[0,1],\;y=0\\\ 1-y^{2}+2iy,&x=1,\;y\in[0,1]\\\ x^{2}-1+2ix,&x\in[0,1],\;y=1\\\ -y^{2},&x=0,\;y\in[0,1]\end{cases}$ The “mapping”, in this case, comes in when we rewrite $w(z)$ into its own set of real and imaginary parts, given by $w=u+iv$. 
By looking at the same four regions, we have $\displaystyle w(\textrm{unit square})\in\begin{Bmatrix}u=x^{2},\;&v=0\\\ u=1-y^{2},\;&v=2y\\\ u=x^{2}-1,\;&v=2x\\\ u=-y^{2},\;&v=0\end{Bmatrix}=\begin{Bmatrix}u,&u\in[0,1]\\\ u=1-\frac{1}{4}v^{2},&v\in\left[0,2\right]\\\ u=\frac{1}{4}v^{2}-1,&v\in\left[0,2\right]\\\ u,&u\in[-1,0]\end{Bmatrix}$ It is definitely difficult to visualize how the equation above defines a map, at least in this representation. Figure 2.6 shows how the equation above can serve as a set of instructions for how one region of the complex plane is connected to another through the $w(z)=z^{2}$ function. FIG. 2.6: In the left plot, we define the unit square in the $z$-plane. It is through the $w(z)=z^{2}$ mapping that we connect the unit square to the square-squared region in the $w$-plane in the right plot. The colored curves represent sets of the complex plane that are connected through the $w(z)=z^{2}$ “map.” From Example 2.6, we can conclude a couple of things. First, complex-valued functions unite different regions of the complex plane. And second, specifically for this quadratic function, we chose to only consider the unit square. We could have chosen a square of size 2, or $\pi$, or a gazillion. Then $w$ would also increase in size according to Eq. 2.49. I will leave it to you to pick particular points on the square of side length $s$ and plug them into Eq. 2.49 to see how large the region in the $w$-plane becomes (hint: try the points $(s,0),\;(s,s),$ and $(0,s)$). Since we can choose a square of any size, we can artificially stretch the region in the $w$-plane to infinity as we increase the domain in the $z$-plane. Notice though, that as long as our domain is only in the first quadrant in the $z$-plane, we will only ever be in the first or second quadrants in the $w$-plane. This region is sometimes referred to as the upper-half plane. 
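The boundary relations from Example 2.6 can be spot-checked numerically. The sketch below squares sample points on two edges of the unit square and confirms the $u$, $v$ relations derived above (plain Python, no extra libraries; the function name `w` mirrors the text):

```python
def w(z):
    # The mapping from Example 2.6: w(z) = z^2.
    return z * z

# Bottom edge (y = 0): the image is u = x^2 with v = 0, so u stays in [0, 1].
for k in range(11):
    x = k / 10
    image = w(complex(x, 0.0))
    assert image.imag == 0.0 and 0.0 <= image.real <= 1.0

# Right edge (x = 1): the image satisfies u = 1 - (v/2)^2 with v = 2y,
# which is the parabolic arc on the right side of Fig. 2.6.
for k in range(11):
    y = k / 10
    image = w(complex(1.0, y))
    u, v = image.real, image.imag
    assert abs(u - (1 - (v / 2) ** 2)) < 1e-12
```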
As you may see after doing Problem 2.6, other regions will map to fill the lower-half plane, and by extension, we can stretch regions out as well to fill the entire lower-half plane. Hence, the function $w(z)=z^{2}$ maps the entire complex plane into the entire complex plane! Actually, functions such as these are specifically named entire functions since they are defined everywhere in the plane (technically, functions are entire only if they converge everywhere in the plane; however, convergence is beyond the scope of this chapter, so understanding entire functions as simply being defined everywhere is good enough for right now). Since we have begun to understand complex mappings in terms of complex domains and ranges, we seek to study a few more fairly commonly-used complex functions in physics. Use Eq. 2.49 to show that the square defined by $\displaystyle z=\begin{cases}x+i0,&x\in[-1,0],\;y=0\\\ -1+iy,&x=-1,\;y\in[0,1]\\\ x+i,&x\in[-1,0],\;y=1\\\ 0+iy,&x=0,\;y\in[0,1]\end{cases}$ maps to the same region in Fig. 2.6, except that it is reflected over the $u$-axis into the lower-half plane. #### 2.6.1 Polynomials For the first type of function, we consider functions of the form $\displaystyle w(z)=P_{n}(z)=\sum_{j=0}^{n}a_{j}z^{j}=a_{0}+a_{1}z+a_{2}z^{2}+\dots+a_{n}z^{n},$ (2.50) where all of the constant coefficients $\\{a_{j}\\}$ are arbitrary complex numbers, but each exponent is a nonnegative integer. This class of functions, often denoted by the symbol $P_{n}$ for “polynomial of $n^{\mathrm{th}}$-order,” is the complex generalization of the real polynomials that you already know. This means that instead of only allowing ourselves to input real numbers into a polynomial, we now are allowing ourselves to insert a complex number $z=r\textrm{e}^{i\theta}$ as the argument of the function (in other words, an element from the domain). With this value of $z$ and by Eq. 
2.40, we have $\displaystyle P_{n}\left(z=r\textrm{e}^{i\theta}\right)=\sum_{j=0}^{n}a_{j}r^{j}\textrm{e}^{ij\theta}=a_{0}+a_{1}r\textrm{e}^{i\theta}+a_{2}r^{2}\textrm{e}^{2i\theta}+\dots+a_{n}r^{n}\textrm{e}^{in\theta}.$ (2.51) It is important to note that we are entirely allowed to compute every single one of these terms using the polar representations of the coefficients, if we had them given. Since each term is defined and computable, then we are definitely allowed to add all of their corresponding real and imaginary parts. Thus, for any $z\in\mathds{C}$ that we throw at this thing, we will always be able to find its output as a complex number. Hence a general polynomial maps the complex plane onto the complex plane (to be clear, there is a difference between the words into and onto: a function from set $A$ to set $B$, $f:A\rightarrow B$, is said to be onto if every element of $B$ is in the range of $f$; pictorially, this means that there are no parts of $B$ that are not connected by $f$ to $A$; otherwise, mathematicians use the word into). Our conclusion is important because, just like with real numbers, complex polynomials are actually relatively easy functions to handle, whether it be in calculus, computational physics, mathematical modeling, et cetera, because we can handle them term-wise. Other functions (like the transcendental functions) are not so nice. Consider the complex quadratic, $\displaystyle P_{2}(z)=i+\textrm{e}^{i\pi/4}z-2z^{2}.$ Use either the Cartesian or the polar representation to evaluate $P_{2}(z)$ at the following points: $z_{1}=1$, $z_{2}=-i\pi$, $z_{3}=1-i$. Also, write down the set of coefficients $\\{a_{j}\\}$ (this part is NOT a trick. I just want to make sure you know what the set of coefficients is). #### 2.6.2 Complex Exponentiation Revisited The last time we dealt with complex exponentials was with deriving the rational generalization to De Moivre’s Theorem in Eq. 2.48. 
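That rational generalization is straightforward to test numerically: every branch $w$ of $z^{p/q}$ produced by Eq. 2.48 must satisfy $w^{q}=z^{p}$. A short Python sketch (the helper name `rational_powers` is my own, implementing the polar form of Eq. 2.48):

```python
import cmath

def rational_powers(z, p, q):
    # Eq. 2.48 in polar form: the q branches of z**(p/q).
    r, theta = abs(z), cmath.phase(z)
    return [r ** (p / q) * cmath.exp(1j * p * (theta + 2 * cmath.pi * m) / q)
            for m in range(q)]

z = 1 + 1j
for branch in rational_powers(z, 2, 3):
    # Raising any branch of z**(2/3) to the 3rd power must give z**2.
    assert abs(branch ** 3 - z ** 2) < 1e-9
```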
We were explicit, however, that the exponent itself had to be a rational number $p/q\in\mathds{R}$, where $p$ and $q\neq 0$ are integers along the real axis. Now we have to ask the question of whether it is possible to raise a complex number $z$ to a constant complex exponent $\xi$. In other words, we want to study the function $w(z)=z^{\xi}$. Since $\xi\in\mathds{C}$, we choose to work with it in its Cartesian representation. Let $\alpha=\mathrm{Re}(\xi)$ and $\beta=\mathrm{Im}(\xi)$ such that $\xi=\alpha+i\beta$. Thus $\displaystyle w(z)=z^{\xi}=z^{\alpha+i\beta}=z^{\alpha}\,z^{i\beta}.$ Now, we will make the substitution of $z=r\textrm{e}^{i\theta}$ into the equation above. $\displaystyle w(z)$ $\displaystyle=\left(r\textrm{e}^{i\theta}\right)^{\alpha}\left(r\textrm{e}^{i\theta}\right)^{i\beta},$ $\displaystyle=\left(r^{\alpha}\textrm{e}^{i\theta\cdot\alpha}\right)\,\left(r^{i\beta}\,\textrm{e}^{i\theta\cdot i\beta}\right),$ $\displaystyle=r^{\alpha}\textrm{e}^{i\alpha\theta}\,r^{i\beta}\textrm{e}^{-\beta\theta},$ $\displaystyle=r^{\alpha+i\beta}\textrm{e}^{-\beta\theta}\,\textrm{e}^{i\alpha\theta}.$ Now, before proceeding further, since $r$ is a real number, that means it lies entirely along the real axis, and we can use our regular logarithms on it, since those are defined over the positive real axis. Thus, we choose to write $r=\textrm{e}^{\ln r}$, as the (real) functions $\textrm{e}^{x}$ and $\ln x$ are inverse functions such that $\textrm{e}^{\ln x}=x=\ln(\textrm{e}^{x})$. When we make this substitution into $w$, we can derive our final result. 
$\displaystyle w(z)$ $\displaystyle=\textrm{e}^{\ln(r)\cdot(\alpha+i\beta)}\textrm{e}^{-\beta\theta}\,\textrm{e}^{i\alpha\theta},$ $\displaystyle=\textrm{e}^{\alpha\ln(r)+i\beta\ln(r)}\textrm{e}^{-\beta\theta}\,\textrm{e}^{i\alpha\theta},$ $\displaystyle=\textrm{e}^{\alpha\ln(r)-\beta\theta}\,\textrm{e}^{i[\beta\ln(r)+\alpha\theta]},$ $\displaystyle=r^{\alpha}\,\textrm{e}^{-\beta\theta}\,\textrm{e}^{i[\beta\ln(r)+\alpha\theta]}.$ (2.52) It is important to note that the long string of symbols given in Eq. 2.52 is something that is entirely computable. What I mean by that is this expression is written in the polar representation of a complex number, where $\displaystyle|w(z)|$ $\displaystyle=r^{\alpha}\textrm{e}^{-\beta\theta}\in\mathds{R}^{+},$ $\displaystyle\mathrm{arg}[w(z)]$ $\displaystyle=\beta\ln(r)+\alpha\theta\in\mathds{R}.$ Hence Eq. 2.52 is already in the form of a complex number, regardless of which values of $\alpha$ and $\beta$ we choose! To be clear, if we were to convert back to a Cartesian representation then we would need to be careful with any ambiguities in the rational exponents, as is given in Eq. 2.48. But otherwise, we are free to map the complex plane $z$ onto the complex plane $w$ via complex exponentiation (since there are many possible roots for complex exponents, the $z$-plane actually gets mapped onto the $w$-plane in multi-valued ways, which we won’t go into; these peculiarities, translated to “headaches” in math-speak, are known as branch cuts, and special note really should be paid to them from a mathematical point-of-view, but in physics they are only noted if necessary). Use the polar representation and Eq. 2.52 to show that $i^{i}=\textrm{e}^{-\pi/2}$. This example shows that, while complex exponentiation maps the complex plane onto the complex plane, exponentiation does NOT necessarily map imaginary numbers into other imaginary numbers. 
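Eq. 2.52 can also be implemented directly and compared against Python's built-in complex power (both using the principal branch of the argument). The helper name `complex_power` is mine; this is a sketch, not a full treatment of branch cuts:

```python
import cmath
import math

def complex_power(z, xi):
    # Eq. 2.52: z**(a + ib) = r**a * e^(-b*theta) * e^(i*(b*ln r + a*theta)),
    # with r = |z| and theta the principal argument of z.
    r, theta = abs(z), cmath.phase(z)
    a, b = xi.real, xi.imag
    modulus = r ** a * math.exp(-b * theta)
    argument = b * math.log(r) + a * theta
    return modulus * cmath.exp(1j * argument)

# i**i comes out purely real, with value e^(-pi/2).
w = complex_power(1j, 1j)
assert abs(w.imag) < 1e-12
assert abs(w.real - math.exp(-math.pi / 2)) < 1e-12

# Eq. 2.52 agrees with Python's built-in principal-branch power.
assert abs(complex_power(1 + 1j, 0.5 - 2j) - (1 + 1j) ** (0.5 - 2j)) < 1e-9
```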
This is interesting because exponentiation will always map real numbers into other real numbers. #### 2.6.3 Complex Logarithms Since we have covered the case of exponentiation, it follows that we should cover inverse exponentiation, that is, we need to discuss how to find the logarithm of a complex number. To study this function, consider $w(z)=\ln(z)$, where $\ln$ denotes “natural logarithm” (in many advanced mathematics texts, the “common log”, denoted by $\log$, is used instead of $\ln$; however, the “common log” of a number $x$ in physics is almost always given by $\ln(x)/\ln(10)=\log_{10}(x)$). Just as before, this problem is most easily tackled in the polar representation. $\displaystyle w(z)=w(r\textrm{e}^{i\theta})=\ln(r\textrm{e}^{i\theta})=\ln(r)+\ln(\textrm{e}^{i\theta})=\ln(r)+i\theta.$ (2.53) To be clear, in this derivation, we implicitly borrowed the logarithm of a product rule from the real numbers, $\ln(ab)=\ln(a)+\ln(b)$, and we implicitly enforce that the natural logarithm is the function-inverse of $\textrm{e}^{x}$. It is very important to note, however, that ANY complex number $\xi=r_{\xi}\textrm{e}^{i\theta_{\xi}}$ is invariant under a full rotation about the origin; that is, $\xi$ is invariant under a rotation of $2\pi n$, where $n$ is an integer. This is equivalent to saying $\xi\,\textrm{e}^{2\pi ni}=\xi\cdot 1=\xi$. However, the exponents are additive, therefore $\displaystyle\xi\textrm{e}^{2\pi ni}=r_{\xi}\textrm{e}^{i\theta_{\xi}}\textrm{e}^{2\pi ni}=r_{\xi}\textrm{e}^{i(\theta_{\xi}+2\pi n)}$ But if we apply this idea to $z=r\textrm{e}^{i\theta}$, then we have $z=r\textrm{e}^{i(\theta+2\pi n)}$. Thus, when we take a logarithm, we have $\displaystyle w(z)=\ln(r)+i(\theta+2\pi n)=\ln(r)+i\theta+2\pi ni.$ (2.54) But wait. From Eq. 2.53, we have $\displaystyle w(z)=\ln(r)+i\theta=\ln(r)+i\theta+2\pi ni,$ (2.55) which seems to imply that $2\pi ni=0$ if this equation is to be true. 
Since $2\pi$ and $i$ are both nonzero quantities, this means that $n$ would have to be zero! However, we said $n$ can be any integer, not just zero. What gives? Did we accidentally stumble on a contradiction within complex algebra? So the truth is that, with our current system of complex numbers, we actually did find a contradiction, and it has to do with our interpretation of multiplication as rotation and $\textrm{e}^{2\pi ni}=1$ in the polar representation. But this contradiction is not a totally new idea — it is similar to the idea that rational exponents are multivalued. It turns out that in the complex plane, the logarithm also accrues multiple values for the same input. In this sense, the full logarithm is not a function since for any input there are many outputs (so it does not pass the complex-version of the vertical line test). How do we deal with this problem? We use the idea of branch cuts to save the day, where a “branch cut” is a fancy name for specifying a particular range of the logarithm over a particular domain. In fact, this approach is identical to the one you learned in trigonometry, where we only specify the inverse cosine ($\arccos(x)=\cos^{-1}(x)$) to output angles in $[0,\pi]$ and the inverse sine ($\arcsin{}(x)=\sin^{-1}(x)$) to output angles in $[-\pi/2,\pi/2]$, since there are (infinitely) many angles $\theta$ that share the same value of $\cos(\theta)$ or $\sin(\theta)$. Although this is (sort of) passing-the-buck on the issue of having the logarithm be multivalued, it DOES allow us to retain our interpretation of multiplication as rotations in the complex plane — we just need to be careful about which branch we’re talking about when using the logarithm. So the problem with the logarithm is due to the multiplicity, or rotational symmetry, surrounding the $2\pi n$ angle. Thus, it helps to define a principal branch of the logarithm, so there really isn’t a lot of ambiguity surrounding our equations. 
All we need to do is define the imaginary part of the logarithm over a $2\pi$-interval to keep things consistent, since it is the imaginary part in Eq. 2.53 that has the multiple values. The interval we choose may initially seem odd, but actually is more intuitive in physics, and is given by $\theta_{principal}\in[-\pi,\pi)$. Therefore, the principal logarithm is defined as $\displaystyle\mathrm{Ln}(z)=\mathrm{Ln}(r\textrm{e}^{i\theta})=\ln(r)+i\theta,\;\theta\in[-\pi,\pi).$ (2.56) Here, the capital $\mathrm{L}$ signifies the special branch of the multivalued logarithm given in Eq. 2.53. Notice though that there is a lowercase $\mathrm{l}$ in the real part of Eq. 2.56. This is because $r$ is a positive real number, and we can always take a normal, non-multivalued logarithm of any positive real number. All of the multivaluedness, again, is only due to the imaginary part. You have probably been told for your whole life that it is impossible to take a logarithm of a negative number. Unfortunately, that is just not true. Use Eq. 2.53 defined for $\mathrm{arg}(z)\in[\pi,3\pi)$ to calculate $\ln(-1)$. To check whether your answer makes sense, suppose you compute $\ln(-1)$, and let us denote it by $w=\ln(-1)$. This would imply that $\textrm{e}^{w}=-1$, or $\textrm{e}^{w}+1=0$. Do you recognize this equation? Next, compute $\ln(-a)$, where $a$ is a positive real number. Hence, $-a$ represents any point along the negative real axis with an absolute value of $a$. For example, if $a=9$, then I would be asking you to evaluate $\ln(-9)$. Since the number $-a$ represents any negative real number, what can you conclude about $\ln(-a)$? Is it ever a purely real number (Hint: No it isn’t.)? Is this why you’ve been taught that you cannot take the logarithm of a negative number (Hint: Yes it is.)? #### 2.6.4 Hyperbolic Trigonometry The final class of functions that I want to discuss is within the set of functions that most people refer to as hyperbolic trigonometry. 
The geometry is actually remarkably simple to jump to from normal, or circular, trigonometry. The latter is governed by the unit circle, defined by $x^{2}+y^{2}=1$. We are instead going to be interested in the so-called unit hyperbola, defined by $x^{2}-y^{2}=1$. Although the geometry may be interesting, I am going to be honest with you: the only time I’ve ever actually seen it be relevant is in orbital mechanics where trajectories matter, or general relativity, where the geometry of spacetime is the dynamical variable of interest (in other fields of physics, like Physics 1, the dynamical variables are position and velocity as functions of time). With that said, the actual hyperbolic functions themselves are often incredibly useful tools to use in physics. They, for example, are used to describe how the electrostatic potential changes in space from a source. I have included Fig. 2.7 to help you see what the unit hyperbola looks like, and how the hyperbolic trigonometric functions relate triangles to it. FIG. 2.7: The unit hyperbola $x^{2}-y^{2}=1$ parameterized by the functions $x=\cosh t$ and $y=\sinh t$. Think of the parameter $t$ as the time it takes for the black point to move from the point $(x,y)=(1,0)$ to the point $(x,y)=(x(t),y(t))$ along the unit hyperbola. FIG. 2.8: Plots of the hyperbolic cosine (Eq. 2.57) and hyperbolic sine (Eq. 2.58) functions over a few real numbers $x$. The most commonly used hyperbolic functions are the hyperbolic cosine and the hyperbolic sine functions, defined by $\displaystyle\cosh{x}$ $\displaystyle=\frac{\textrm{e}^{x}+\textrm{e}^{-x}}{2},$ (2.57) $\displaystyle\sinh{x}$ $\displaystyle=\frac{\textrm{e}^{x}-\textrm{e}^{-x}}{2},$ (2.58) for any $x\in\mathds{R}$. Here the extra $\mathrm{h}$ that’s attached is the hyperbolic part, and the $\cosh$ notation is pronounced cah-sh while the $\sinh$ part is often pronounced sin-ch. 
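Eqs. 2.57 and 2.58 translate directly into code. The sketch below defines `cosh` and `sinh` by hand rather than using the standard library versions, purely for illustration, and confirms both the large-$x$ behavior visible in Fig. 2.8 and the unit-hyperbola identity behind Fig. 2.7:

```python
import math

def cosh(x):
    # Eq. 2.57: the average of the growing and decaying exponentials.
    return (math.exp(x) + math.exp(-x)) / 2

def sinh(x):
    # Eq. 2.58: half the difference of the exponentials.
    return (math.exp(x) - math.exp(-x)) / 2

# The hand-rolled versions match the standard library.
assert abs(cosh(0.7) - math.cosh(0.7)) < 1e-12
assert abs(sinh(0.7) - math.sinh(0.7)) < 1e-12

# As x grows, cosh(x) and sinh(x) converge (both behave like e^x / 2).
assert abs(cosh(20.0) - sinh(20.0)) < 1e-6

# The parameterization lands on the unit hyperbola: cosh^2 - sinh^2 = 1.
for t in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(cosh(t) ** 2 - sinh(t) ** 2 - 1.0) < 1e-9
```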
I’m not 100% sure why there is an added ch in the $\sinh$ pronunciation, but my best guess is that sin-h is hard to say. Since I know that these real-valued functions will be brand new to most of you reading, I included a plot of them in Fig. 2.8 so you can visualize them. Notice that as $x\rightarrow+\infty$, the hyperbolic sine and hyperbolic cosine become equal, whereas on the other side of the real axis, the $\sinh$ and $\cosh$ approach values of equal magnitude but opposite sign. Note that if we set $x=\cosh(t)$ and $y=\sinh(t)$, then $\displaystyle x^{2}-y^{2}$ $\displaystyle=(\cosh t)^{2}-(\sinh t)^{2},$ $\displaystyle=\left(\frac{\textrm{e}^{t}+\textrm{e}^{-t}}{2}\right)^{2}-\left(\frac{\textrm{e}^{t}-\textrm{e}^{-t}}{2}\right)^{2},$ $\displaystyle=\frac{1}{4}(\textrm{e}^{2t}+\textrm{e}^{-2t}+2)-\frac{1}{4}(\textrm{e}^{2t}+\textrm{e}^{-2t}-2),$ $\displaystyle=\frac{1}{4}(\textrm{e}^{2t}+\textrm{e}^{-2t}+2-\textrm{e}^{2t}-\textrm{e}^{-2t}+2),$ $\displaystyle=\frac{4}{4}=1$ which shows that these definitions of the $\cosh$ and $\sinh$ appropriately parameterize the unit hyperbola with the parameter $t$. To better understand the parameterization, imagine $t$ as a unit of time, and $x(t)$ and $y(t)$ are the $x,y$ positions of a point moving along the unit hyperbola. It is also really important to notice that the $\cosh$ is an even function, whereas the $\sinh$ is an odd function, in the sense that an even function $f(x)$ has the property $f(-x)=f(x)$ while odd functions have the property $f(-x)=-f(x)$. I will prove these properties for the $\cosh$ and $\sinh$ to you below. 
$\displaystyle\cosh(-x)$ $\displaystyle=\frac{\textrm{e}^{(-x)}+\textrm{e}^{-(-x)}}{2}=\frac{\textrm{e}^{-x}+\textrm{e}^{x}}{2}=\frac{\textrm{e}^{x}+\textrm{e}^{-x}}{2}=\cosh(x).$ $\displaystyle\sinh(-x)$ $\displaystyle=\frac{\textrm{e}^{(-x)}-\textrm{e}^{-(-x)}}{2}=\frac{\textrm{e}^{-x}-\textrm{e}^{x}}{2}=\frac{-(\textrm{e}^{x}-\textrm{e}^{-x})}{2}=-\sinh(x).$ I wanted to point these properties out to you because they actually identically mimic those of the normal trigonometric functions, where the cosine is even and the sine is odd! There are actually a slew of other analogous properties between circular trigonometry and hyperbolic trigonometry, but I will not address them here. From here I am going to shift our attention to uniting circular trig with hyperbolic trig through the complex plane. We defined a hyperbolic cosine and a hyperbolic sine. Recall that the normal (circular) tangent is the quotient of the sine and cosine. Use this quotient to derive the hyperbolic tangent function over the real numbers $x$, denoted by $\tanh{x}$ and pronounced tan-ch, and then sketch it so you see what it looks like. Based on your expression for the $\tanh$, prove that it is an odd function over the reals. Okay, back to complex numbers. Consider the function $w(z)=\cosh{z}$. This time, we will choose to work with $z$ in its Cartesian representation so that $z=x+iy$. Then, using Eq. 
2.57, we have $\displaystyle w(z)$ $\displaystyle=\cosh(x+iy),$ $\displaystyle=\frac{\textrm{e}^{x+iy}+\textrm{e}^{-(x+iy)}}{2},$ $\displaystyle=\frac{\textrm{e}^{x}\textrm{e}^{iy}+\textrm{e}^{-x}\textrm{e}^{-iy}}{2},$ $\displaystyle=\frac{\textrm{e}^{x}\left(\cos y+i\sin y\right)+\textrm{e}^{-x}\left(\cos y-i\sin y\right)}{2},$ $\displaystyle=\frac{\left(\textrm{e}^{x}+\textrm{e}^{-x}\right)\cos y+i\left(\textrm{e}^{x}-\textrm{e}^{-x}\right)\sin y}{2},$ $\displaystyle=\frac{\textrm{e}^{x}+\textrm{e}^{-x}}{2}\,\cos y+i\frac{\textrm{e}^{x}-\textrm{e}^{-x}}{2}\,\sin y,$ $\displaystyle=\cosh(x)\cos(y)+i\sinh(x)\sin(y).$ (2.59) Thus, we have shown that the $\cosh$ of a complex argument mixes circular and hyperbolic trigonometry. Furthermore, let’s assume that $z=iy$, meaning it is a purely imaginary (lateral) number. Then $\displaystyle\cosh(iy)=\cos(y).$ (2.60) This relationship goes the other way as well. If we consider $\cos(iy)$ then we have from Eq. 2.34 $\displaystyle\cos(iy)=\frac{\textrm{e}^{i(iy)}+\textrm{e}^{-i(iy)}}{2}=\frac{\textrm{e}^{-y}+\textrm{e}^{y}}{2}=\frac{\textrm{e}^{y}+\textrm{e}^{-y}}{2}=\cosh(y).$ (2.61) Since the imaginary arguments can be used to convert between circular and hyperbolic trig using these equations, we conclude that hyperbolic triangles are really just a lateral/adjacent form of straight-line versions. This conclusion could have actually been reached with the unit circle itself, $x^{2}+y^{2}=1$, if we changed the $y$ part to an imaginary number $y\rightarrow iy$, thus converting the unit circle to $x^{2}+(iy)^{2}=x^{2}-y^{2}=1$, the unit hyperbola! Perhaps if we had not given $i=\sqrt{-1}$ the name of “imaginary”, then we could have known more about geometry, in general, immediately from the start. I derived the relationships between the hyperbolic cosine and the circular cosine. 
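The addition formula in Eq. 2.59 and the swap rules in Eqs. 2.60 and 2.61 are easy to confirm numerically with Python's `cmath` module (the helper name `cosh_cartesian` is my own):

```python
import cmath
import math

def cosh_cartesian(x, y):
    # Eq. 2.59: cosh(x + iy) = cosh(x)cos(y) + i*sinh(x)sin(y).
    return complex(math.cosh(x) * math.cos(y), math.sinh(x) * math.sin(y))

# The Cartesian split agrees with the library's complex cosh.
for x, y in [(0.0, 1.2), (0.5, -0.7), (-1.0, 2.0)]:
    assert abs(cmath.cosh(complex(x, y)) - cosh_cartesian(x, y)) < 1e-12

# Purely imaginary arguments swap the trig families (Eqs. 2.60 and 2.61).
y = 0.9
assert abs(cmath.cosh(1j * y) - math.cos(y)) < 1e-12
assert abs(cmath.cos(1j * y) - math.cosh(y)) < 1e-12
```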
Using an almost identical treatment, derive the following equations: $\displaystyle\sinh(x+iy)$ $\displaystyle=\sinh(x)\cos(y)+i\cosh(x)\sin(y),$ (2.62) $\displaystyle\sinh(iy)$ $\displaystyle=i\sin(y),$ (2.63) $\displaystyle\sin(iy)$ $\displaystyle=i\sinh(y).$ (2.64) ### 2.7 Concluding Remarks In this chapter, the fundamentals of complex algebra were introduced and used to generalize a few commonly used real functions into the complex plane. Knowing how to use complex numbers to study physical systems is particularly helpful. For example, it is pretty common to treat all electromagnetic waves as traveling complex exponentials given by $\textrm{e}^{i(\vec{k}\cdot\vec{r}-\omega t)}$, where $\vec{k}$ is the wavevector of the wave, $\vec{r}$ is the position of the wavefront, $\omega$ is the angular frequency of the wave, and $t$ is the time. To be clear, the electromagnetic waves are purely real objects, meaning that all of their physical properties would be quantities that we would associate with the real line (they are not lateral quantities, whatever those may be…). So why are these imaginary/lateral parts relevant in physics? It turns out that exponentials have a plethora of useful properties in calculus and they are much more manageable than the trigonometric waves defined by sines and cosines. Thus, we usually only need to worry about the real part of the electric and magnetic fields. Meanwhile, whenever the waves move through a material, it turns out that the $\vec{k}$ can be identified as having both a real part and a lateral part, where the latter actually plays a role in the heat-loss of the wave in that medium (this property is called attenuation). In other words, knowing how to split up complex numbers into real and imaginary (lateral) parts and more can tell us very important things about how physical objects interact with one another. 
And this case is one where we use complex numbers to make our mathematical modeling simpler — it does not include situations in physics where the whole complex plane is absolutely necessary, such as anything where the word “quantum” is involved. There are still many topics within complex numbers at large, but this chapter hopefully gives you a solid foundation in something that, once you get the hang of it, will truly make the math you use in physics much simpler, and by extension, your life much easier. Not to mention, complex algebra at least makes things like numbers a little prettier to look at. I have included a table of helpful equations (Table 2.1) for you to reference whenever you need them later on in your career. Each equation also includes a reference to the text where it was described. TABLE 2.1: A summary of the important and general equations in Complex Algebra. Equation Description | Equation Formula | Text Reference ---|---|--- Definition of Complex Numbers | $z=x+iy,\;\;x,y\in\mathds{R}$ | Eq. 2.8 Definition of Complex Conjugate | $z^{\ast}=x-iy,\;\;x,y\in\mathds{R}$ | Eq. 2.9 Real Part of a Complex Number | $\mathrm{Re}(z)=\dfrac{z+z^{\ast}}{2}\in\mathds{R}$ | Eq. 2.10 Imaginary Part of a Complex Number | $\mathrm{Im}(z)=\dfrac{z-z^{\ast}}{2i}\in\mathds{R}$ | Eq. 2.11 Addition of Complex Numbers | $z\pm w=(x\pm u)+i(y\pm v)$ | Eq. 2.12 Product of Complex Numbers | $zw=(xu-yv)+i(xv+yu)$ | Eq. 2.13 Quotient of Complex Numbers | $\dfrac{z}{w}=\dfrac{xu+yv}{u^{2}+v^{2}}+i\,\dfrac{xv-yu}{u^{2}+v^{2}}$ | Eq. 2.14 Modulus of a Complex Number | $|z|=\sqrt{z^{\ast}z}=\sqrt{[\textrm{Re}(z)]^{2}+[\textrm{Im}(z)]^{2}}$ | Eq. 2.16 Polar Representation | $z=|z|\textrm{e}^{i\,\mathrm{arg}(z)}$ | Eq. 2.27 Argument of a Complex Number | $\mathrm{arg}(z)=\arctan\left[\dfrac{\mathrm{Im}(z)}{\mathrm{Re}(z)}\right]$ | Eq. 2.30 Euler’s Identity | $\textrm{e}^{i\theta}=\cos\theta+i\,\sin\theta$ | Eq. 
2.31 Cosine Formula | $\cos\theta=\dfrac{\textrm{e}^{i\theta}+\textrm{e}^{-i\theta}}{2}$ | Eq. 2.34 Sine Formula | $\sin\theta=\dfrac{\textrm{e}^{i\theta}-\textrm{e}^{-i\theta}}{2i}$ | Eq. 2.35 Multiplication as Dilation and Rotation | $zw=(r\rho)\,\textrm{e}^{i(\theta+\phi)}$ | Eq. 2.36 De Moivre’s Theorem (for Integer $n$) | $(\cos\theta+i\sin\theta)^{n}=\cos(n\theta)+i\sin(n\theta)$ | Eq. 2.41 $n^{\mathrm{th}}$ Roots of Unity | $1^{1/n}=\exp\left(\dfrac{2\pi mi}{n}\right),\;\,m\in\\{0,1,\dots,n-1\\}$ | Eq. 2.43 $n$-Term Geometric Series | $S_{n}=\sum\limits_{m=0}^{n-1}a^{m}=\dfrac{1-a^{n}}{1-a},\;a\neq 1$ | Eq. 2.45 Sum of $n^{\mathrm{th}}$ Roots of Unity | $\sum\limits_{m=0}^{n-1}\left(\textrm{e}^{2\pi i/n}\right)^{m}=0$ | Eq. 2.46 General Complex Exponentiation | $z^{\alpha+i\beta}=\left(r\textrm{e}^{i\theta}\right)^{\alpha+i\beta}=r^{\alpha}\textrm{e}^{-\beta\theta}\,\textrm{e}^{i[\beta\ln(r)+\alpha\theta]}$ | Eq. 2.52 Principal Logarithm | $\mathrm{Ln}(z)=\mathrm{Ln}(r\textrm{e}^{i\theta})=\ln(r)+i\theta,\;\;\theta\in[-\pi,\pi)$ | Eq. 2.56 ## Chapter 3 Calculus of a Single Variable Up until now, I have not assumed that you have any prior knowledge of calculus, mostly because it is frankly unnecessary to understand vectors and complex algebra. Thus, everything that we have covered is totally static. It, in no way, describes anything that can possibly change at all. For example, if the position vector of a particular object is known at a particular time, we would thus far only have the tools to describe the object’s vector at that time. Sure, we could resolve the vector into its components, we could talk about the direction of the vector, we could talk about it being orthogonal to other vectors, but we would have NO WAY of talking about how that vector is changing at that time. Hence we could not speak of how the object will move in time. 
You might be wondering why we could not just reference the object’s velocity vector at that time, since surely the velocity will tell us about how the position vector changes. As we will see, this is absolutely true. However, none of the mathematical tools we’ve discussed so far allow us to describe this change. Meanwhile, you can definitely intuit how position will change because you have experienced changes in your everyday life. Our goal in this chapter is to develop your natural intuition into something mathematically precise and immutable — our goal is to develop calculus. As a heads up, this chapter is going to be dense. Like, degenerate matter dense111A classical model of the degeneracy will probably suffice though.. We are going to cover essentially the same amount of material that one would see as they progress through a Calculus I and II sequence of courses. Although we will cover a lot of material, it will be condensed to a form that is sufficient for implementation in physics. Truth be told, once you understand derivatives and integrals from a geometric/changey perspective, then the number of dimensions rarely makes a difference to a physicist. Sure, it might add a few extra terms here and there, but the actual modeling of the natural world using rates of change is what matters to us, and it is almost always easy, if not trivial, to discuss the laws of physics using calculus. That is not to say that actually solving problems in physics is trivial (it isn’t), but the nontriviality of the problem solving in physics is partly due to the inclusion of our clunky form of algebra with calculus. The other part is usually just because nature is smarter than us. I plan to introduce many calculus topics that are relevant for physics in ways that are easy to visualize. I am not going to get too bogged down in the details about which functions have which derivatives or antiderivatives, although I will talk about a few important ones. 
Calculus really should not be about the memorization of a bunch of special-case formulas; when it is, it is usually very hard for students to see its value or applicability. Thus, I will reference outside sources for specific formulas if they are not quick and easy to derive (tables of derivatives and integrals are ubiquitous and freely available on the Internet). Although the transition can be hard, once you are able to see beyond the mess of special cases, using calculus to describe the universe will be effortless, and you will probably wonder how you didn’t already describe it in such a fluent way222Okay, so maybe this is a little anecdotal and waxing poetic, but, hey, if using calculus were worse than what we had before, why would we still be using it?. ### 3.1 Position, Velocity, and Acceleration People usually have an almost instinctive understanding of how things move (with the exception of feathers and bowling balls in free fall). Unfortunately, it took an incredible amount of time for us to develop a means of communicating our understanding with others in a precise way. We can thank people like Galileo and Newton for figuring that last bit out for us. In historical hindsight though, developing the crucial ideas of calculus is pretty straightforward. We are going to start by exploring the position, velocity, and acceleration of some object (maybe a car or volleyball or bug) and we will be careful to remember that each is a vector. Furthermore, let us assume that we know the position vectors at a few different times, and let us denote these vectors by $\vec{r}(t)$. Using vector addition (recall tip-to-tail), we can then define a vector that changes the position at one time $t$ to one at a later time $t+\Delta t$. Here this $\Delta t$ represents an increment in time; for example, it could represent a year, a century, a second, a femtosecond, a unit of Planck time, et cetera. 
Physically, this increment in time would be whatever time difference it takes to get the object from one position to the next. This change vector at time $t$ is going to be denoted as $\Delta\vec{r}(t)$ and connects the position at $t$, $\vec{r}(t)$, to the position at $t+\Delta t$, $\vec{r}(t+\Delta t)$, as is shown in Fig. 3.1. FIG. 3.1: If the position of an object is known at two times, perhaps at $t$ and $t+\Delta t$, it is possible to describe the change in position from $\vec{r}(t)$ to $\vec{r}(t+\Delta t)$ as $\Delta\vec{r}(t)=\vec{r}(t+\Delta t)-\vec{r}(t)$. A question that follows from this set up would be something like, “Is it possible to describe how quickly the object moved from $\vec{r}(t)$ to $\vec{r}(t+\Delta t)$?” Intuitively, we might propose that the change in position, or the position increment, $\Delta\vec{r}(t)$, is proportional to the increment in time, $\Delta t$. For example, if we run at a specific pace for a longer period of time, we will travel farther. However, time is a scalar, whereas the change in position is definitely a vector by Fig. 3.1. Thus, we are left to define the velocity333For anyone who already knows calculus, this is the average velocity. of the object at time $t$ as the vector responsible for the change in the object’s position. This is denoted as $\vec{v}(t)$ and can be written mathematically as $\displaystyle\Delta\vec{r}(t)=\vec{r}(t+\Delta t)-\vec{r}(t)=\vec{v}(t)\Delta t,$ (3.1) or if we divide both sides by the scalar time increment, we have $\displaystyle\vec{v}(t)=\frac{\Delta\vec{r}(t)}{\Delta t}=\frac{\vec{r}(t+\Delta t)-\vec{r}(t)}{\Delta t}.$ (3.2) It is important to remember that at this point the $(t)$ things everywhere are meant to symbolize that the letter immediately before is a function of $t$; it does not mean multiply everything by $t$. For example, $\vec{v}(t)$ means that the velocity is a function of time $t$ rather than the product of $\vec{v}$ and the scalar $t$. By looking at Eq. 
3.2, we see that the equation has the form of a slope, in that it is a function that has a change divided by a change, or more colloquially, it has a rise-over-run type form. Furthermore, we can see where the units of miles-per-hour or meters-per-second come from, since we multiply the position increment measured in units of length by the scalar $1/\Delta t$, which has units of per-time. Also, using this definition, since the time increment takes on whatever value our time-recording device allows, if there is no change in position, we conclude that there is zero velocity. Likewise, if there is ever a time where the velocity is zero, then there will be no resulting change in position. Alright, now that the relationship between changes in position and velocity is established between two points in time, what happens when we introduce more points? Or from a more scientific perspective, what happens when we measure the position of an object at a greater number of points in time? FIG. 3.2: If the position of an object is known at three times, perhaps at $t$, $t+\Delta t$, and $t+2\Delta t$, it is possible to describe the change in the change in the position from $\vec{r}(t)$ to $\vec{r}(t+2\Delta t)$ as $\Delta\Delta\vec{r}(t+\Delta t)=\Delta\vec{r}(t+\Delta t)-\Delta\vec{r}(t)$. Since the change in position is a measure of velocity by Eq. 3.2, then this figure shows the existence of the change in velocity, also known as the acceleration. For simplicity, let’s assume we know the position of a particle at three points in time: $t$, $t+\Delta t$, and $t+2\Delta t$. Then, just as we did before in Fig. 3.1, it is possible for us to connect the position at $t$ with that at $t+\Delta t$, and we can now connect the position at $t+\Delta t$ with that at $t+2\Delta t$. A picture of this scenario is shown in Fig. 
3.2, and in this figure, it is evident that there are really two changes in position: $\displaystyle\Delta\vec{r}(t)$ $\displaystyle=\vec{r}(t+\Delta t)-\vec{r}(t),$ $\displaystyle\Delta\vec{r}(t+\Delta t)$ $\displaystyle=\vec{r}(t+2\Delta t)-\vec{r}(t+\Delta t).$ Thus, we may ask “what is the change in the change in position?” Let us denote this change of change in position as $\Delta\Delta\vec{r}(t+\Delta t)$, since the change of change in position occurs at $t+\Delta t$. Evaluating it directly, we have $\displaystyle\Delta\Delta\vec{r}(t+\Delta t)=\Delta\vec{r}(t+\Delta t)-\Delta\vec{r}(t).$ If we now assume that this change is proportional to some vector rate, just like we did when we came up with the notion of velocity, and denote this rate as $\vec{a}(t+\Delta t)$, then we can posit that $\displaystyle\Delta\Delta\vec{r}(t+\Delta t)=\vec{a}(t+\Delta t)\Delta t^{2},$ where $\Delta t^{2}=(\Delta t)^{2}$. Solving for $\vec{a}(t+\Delta t)$ directly, we have $\displaystyle\vec{a}(t+\Delta t)=\frac{\Delta\vec{r}(t+\Delta t)-\Delta\vec{r}(t)}{\Delta t^{2}}=\frac{\frac{\Delta\vec{r}(t+\Delta t)}{\Delta t}-\frac{\Delta\vec{r}(t)}{\Delta t}}{\Delta t}.$ In the second equality, I moved one of the $\Delta t$s into the numerator to make two fractions because these fractions are defined explicitly in Eq. 3.2. Using this definition, we have $\displaystyle\vec{a}(t+\Delta t)=\frac{\vec{v}(t+\Delta t)-\vec{v}(t)}{\Delta t},$ (3.3) which we recognize as the definition of acceleration444Average acceleration to be precise. Again, this equation is one that has the form of a slope. Further, we would say that if there is a change in velocity, then there must have been an acceleration to cause it. However, since the acceleration is related to the change of change in position, we could also say that a “second-order” change in position is due to the existence of an acceleration. So what have we learned by measuring more points? 
Well, it appears that each time we measure a new point, we can define a new rate associated with the addition of a new position measurement. This holds in general for position measurements: each additional point admits one more order of change. Actually, the names for a few of the even higher order changes ($3^{rd}-6^{th}$) are the jerk, snap, crackle, and pop [thompson_systems_technology]. We also know that the little time increment $\Delta t$ is some measurement, but its actual value depends on the time-measuring device we have (for example, atomic clocks have smaller $\Delta t$s than grandfather clocks). In principle, we could have exactly one position measurement for every single time measurement we have, and let’s assume we have $N$ total of these position measurements. Then we could write them down in a table of sorts as $\displaystyle\begin{array}{c|c|ccccc}\textbf{Measure:}&\textrm{Time }t&t&t+\Delta t&t+2\Delta t&\dots&t+(N-1)\Delta t\\ \hline\textbf{Measure:}&\textrm{Position }\vec{r}&\vec{r}(t)&\vec{r}(t+\Delta t)&\vec{r}(t+2\Delta t)&\dots&\vec{r}(t+(N-1)\Delta t)\end{array}$ And so each time there is a change in position, we would presumably have a corresponding velocity measurement, and additionally, each time we have a change in velocity we would have an acceleration. So we would have two more rows in that table $\displaystyle\begin{array}{c|c|ccccc}\textbf{Measure:}&\textrm{Time }t&t&t+\Delta t&t+2\Delta t&\dots&t+(N-1)\Delta t\\ \hline\textbf{Measure:}&\textrm{Position }\vec{r}&\vec{r}(t)&\vec{r}(t+\Delta t)&\vec{r}(t+2\Delta t)&\dots&\vec{r}(t+(N-1)\Delta t)\\ \hline\textbf{Calculate:}&\textrm{Velocity }\vec{v}&-&\vec{v}(t+\Delta t)&\vec{v}(t+2\Delta t)&\dots&\vec{v}(t+(N-1)\Delta t)\\ \hline\textbf{Calculate:}&\textrm{Acceleration }\vec{a}&-&\vec{a}(t+\Delta t)&\vec{a}(t+2\Delta t)&\dots&-\end{array}$ Here, the dashed marks appear since there are no changes initially for either the position or velocity; thus there are no velocities or accelerations at those times. 
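The velocity and acceleration rows of the table above come from nothing more than repeated differences. The following sketch (the one-dimensional position values are invented sample data, not from the text) fills in those rows, with `None` standing in for the dashed undefined entries, labeling each difference at the later of its two times to match the table's leading dashes:

```python
# Build the velocity and acceleration rows of the measurement table by
# taking first and second differences of position samples spaced dt apart.
dt = 1.0
positions = [0.0, 1.0, 4.0, 9.0, 16.0]  # hypothetical r(t + k*dt) values

# velocity row: v = (r_k - r_{k-1}) / dt; undefined at the first time
velocities = [None] + [
    (positions[k] - positions[k - 1]) / dt for k in range(1, len(positions))
]
# acceleration row: a = (v_k - v_{k-1}) / dt; undefined at the first two times
accelerations = [None, None] + [
    (velocities[k] - velocities[k - 1]) / dt for k in range(2, len(velocities))
]

print(velocities)     # [None, 1.0, 3.0, 5.0, 7.0]
print(accelerations)  # [None, None, 2.0, 2.0, 2.0]
```

Notice that the constant second differences betray that these sample positions follow a parabola, exactly what we would expect for constant acceleration.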
To better help visualize this table, imagine that all of these measurements came from an object’s trajectory such as that shown in Fig. 3.3, where the object starts at an initial position of $\vec{r}_{0}$ and moves to a final position of $\vec{r}_{f}$. In the figure, the blue points represent the positions as functions of time (remember that these are all vectors, hence the ghostly vectors to $\vec{r}_{0}$ and $\vec{r}_{f}$), the red dashed vectors represent the velocities as functions of time, and the green dotted vectors represent the accelerations as functions of time. FIG. 3.3: The position, velocity, and acceleration of a particle as a function of time $t$ as it meanders from an initial position $\vec{r}_{0}$ to a final position $\vec{r}_{f}$. During the trip, it can have many different velocities and accelerations, whose directions are represented by the red dashed arrows and the green dotted arrows, respectively. An important highlight from drawing out all of these position measurements is that the velocity vectors can really be seen to connect all of the position measurements together. More precisely, if we wanted to know the position of the object at any time measurement, let’s say at the $j^{\mathrm{th}}$ time increment, then we could find it by starting at $\vec{r}_{0}$ and then adding all of the changes in position up until the $j^{\mathrm{th}}$ increment. In other terms, $\displaystyle\vec{r}(t+j\Delta t)$ $\displaystyle=\vec{r}_{0}+\Delta\vec{r}(t)+\Delta\vec{r}(t+\Delta t)+\Delta\vec{r}(t+2\Delta t)+\dots+\Delta\vec{r}(t+(j-1)\Delta t),$ $\displaystyle=\vec{r}_{0}+\vec{v}(t)\Delta t+\vec{v}(t+\Delta t)\Delta t+\vec{v}(t+2\Delta t)\Delta t+\dots+\vec{v}(t+(j-1)\Delta t)\Delta t.$ All of the $\Delta t$s here show up from the implicit definition of the velocity at a point. 
The equation above can be written much more succinctly with summation notation, and so we have $\displaystyle\vec{r}(t+j\Delta t)=\vec{r}_{0}+\sum_{k=0}^{j-1}\vec{v}(t+k\Delta t)\Delta t.$ (3.4) Although it is harder to visualize from Fig. 3.3 (but not impossible!), there is actually an analogous summation relationship that exists to relate the velocity at $t+j\Delta t$ to all of the intermediate accelerations: $\displaystyle\vec{v}(t+j\Delta t)=\vec{v}_{0}+\sum_{k=0}^{j-1}\vec{a}(t+k\Delta t)\Delta t.$ (3.5) These equations have the form of the area of a rectangle, where $\Delta t$ is the width of the rectangle, and the velocity or acceleration would be the height of the rectangle. Furthermore, we actually add up all of the intermediate areas to find something new. In short, this is all that calculus is: either evaluating slopes or adding up a bunch of boxes. If we want to describe how quickly something changes in terms of another variable — for example, if we want to find how fast position changes as a function of time — we would use a slope. If, on the other hand, we wanted to describe the aggregate effects of many successive actions — for example, if we want to find out where our final position is after moving with a bunch of different velocities — we would add boxes. Even further, we can undo the effect of taking a slope by adding a bunch of areas together; an operational inversion analogous to how multiplying something by $4$ and then dividing the result by $4$ returns the original something. And that is truly all of calculus. There really is nothing else to conceptualize. However, like most things, this conceptualization is easier said than done. And that’s okay. As we progress through this chapter, we will study many more applications of these ideas and refine them into some theorems. 
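Eq. 3.4 can be checked numerically by literally adding up the boxes in a case where we know the answer in closed form. The sketch below (the initial values and constant acceleration are invented for the demonstration, not from the text) compares the sum of velocity-times-$\Delta t$ areas against the familiar constant-acceleration result $r_{0}+v_{0}t+\frac{1}{2}a_{0}t^{2}$:

```python
# Reconstruct position from velocities via Eq. 3.4 (one-dimensional case):
# r(t + j*dt) = r0 + sum over k of v(t + k*dt) * dt.
r0 = 0.0          # initial position r_0
v0, a0 = 2.0, 1.0  # hypothetical initial velocity and constant acceleration
dt = 0.001
j = 1000           # so the total elapsed time is j*dt = 1.0

# for constant acceleration, v(t + k*dt) = v0 + a0*(k*dt)
r = r0 + sum((v0 + a0 * (k * dt)) * dt for k in range(j))

# exact kinematics result at t = 1: r0 + v0*t + a0*t^2/2
exact = r0 + v0 * 1.0 + 0.5 * a0 * 1.0**2
print(r, exact)                  # the sum approaches the exact value
assert abs(r - exact) < a0 * dt  # the left-hand sum's error shrinks with dt
```

Shrinking `dt` (and growing `j` to match) drives the sum toward the exact value, which is precisely the limiting idea developed in the next section.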
Fortunately for us, the context of this chapter will largely be the natural world, and so if the mathematics ever gets too messy, we can always find real-world analogies to bolster our intuition and understand which message the math is trying to convey. ### 3.2 Continuity We have discussed how slopes and box-addition represent function-transitions between position, velocity, and acceleration. But everything we did was built on the assumption that we were taking measurements of time with some device that only measured in increments of $\Delta t$. While this picture is accurate from a numerical perspective, and definitely from an experimental point-of-view, it is not necessarily physical and certainly not mathematical. Physically, objects have positions at intermediate values between the points we measure. We unfortunately are limited by how quickly we can measure something’s position; however, this in no way implies that objects only move in discrete ways. In principle, if we were able to “take more measurements” then we could describe an object’s position, velocity, or acceleration at any time $t$ that we wanted. To better illustrate this idea, we assume that something only moves over a fixed time interval, perhaps $t_{f}-t_{0}$, and that we have made a total of $N$ measurements. Then the time increment would be $\displaystyle\Delta t=\frac{t_{f}-t_{0}}{N}.$ Notice that this time increment gets smaller when the number of samples $N$ gets larger. If this is not clear, choose $t_{f}-t_{0}$ to be your favorite number (physicists really like $1$) and then divide by successively larger numbers — for example, $1/1=1$, $1/2=0.5$, $1/3=0.33\dots$, …, $1/100=0.01$, … So as long as $t_{f}-t_{0}$ is a fixed number, $\Delta t$ will get very small as $N$ gets very large. The small time increment is helpful, because it allows us to probe more time-values. 
For example, if $t_{0}=1$, $t_{f}=2$, and $N=10$, then we can talk about times like $\displaystyle t_{0}+\Delta t$ $\displaystyle=1+1/10=1.1,$ $\displaystyle t_{0}+2\Delta t$ $\displaystyle=1+2/10=1.2,$ $\displaystyle\vdots$ $\displaystyle t_{0}+(N-1)\Delta t$ $\displaystyle=1+9/10=1.9.$ But we could never reach a time value like $1.11$, since our time increment is larger than that extra $0.01$. In other words, our resolution is not good enough to see a time increment of $0.01$. So what can we do? Instead of only taking $10$ measurements, how about we take $100$? Then we could have $\displaystyle t_{0}+\Delta t$ $\displaystyle=1+1/100=1.01,$ $\displaystyle t_{0}+2\Delta t$ $\displaystyle=1+2/100=1.02,$ $\displaystyle\vdots$ $\displaystyle t_{0}+(N-1)\Delta t$ $\displaystyle=1+99/100=1.99.$ By the same argument, we could jump down to a resolution of 0.001 by taking 1000 measurements to have $\displaystyle t_{0}+\Delta t$ $\displaystyle=1+1/1000=1.001,$ $\displaystyle t_{0}+2\Delta t$ $\displaystyle=1+2/1000=1.002,$ $\displaystyle\vdots$ $\displaystyle t_{0}+(N-1)\Delta t$ $\displaystyle=1+999/1000=1.999.$ Hence, it is clear that by increasing the number of samples, we can know the position over smaller time increments and therefore observe what happens at those intermediate steps that we would have been incapable of observing before. So now the question is: “how many samples do we need to get a time increment to probe time up to any arbitrary level of precision $\epsilon$?” To solve this problem, we just need the time increment to be less than this arbitrary precision $\epsilon$ (above, $\epsilon\in\\{0.1,0.01,0.001\\}$). Thus, what we do is posit that $\Delta t<\epsilon$, or $\displaystyle\frac{t_{f}-t_{0}}{N}<\epsilon\Rightarrow\frac{t_{f}-t_{0}}{\epsilon}<N.$ (3.6) The second inequality555This inequality rule can be tricky for a lot of people the first time they see it. 
Most people know that if you multiply an inequality by a negative number, then you must flip the inequality. The same is true for taking reciprocals. For example, it is true that $1/2<1/1$, but it is not true that $2<1$, when we invert the inequality. is our answer. I recommend you test it for yourself. The beautiful thing about this idea is that $\epsilon$ can literally be anything, as long as it’s positive. That means we could increase it or decrease it at will — this is the math equivalent of increasing or decreasing the volume of your phone666Or whatever the cool sound-emitting technological marvel you future people use., except there is no upper or lower bound! Thus, we could, at least in principle, probe time measurements to any level of precision we want. All we would need to do is take enough measurements. In mathematics, we give this idea the name, limit of a sequence. In this case, the sequence would be $\Delta t$, and the elements of the sequence would be each $\Delta t$ with an incrementally higher $N$ value substituted in. Here, we would say that the limit of $\Delta t$ would be zero since we can definitively get within a precision of $\epsilon$ of zero using the inequalities above. More formally, we would say that the limit of some sequence $a_{N}$, denoted by $L$, exists by the following criterion: > The sequence $a_{N}$ is said to converge to the limit $L$ if, for every $\epsilon>0$, there exists a positive integer $M$ such that $|a_{n}-L|<\epsilon$ whenever $n>M$. When the limit does in fact exist, we say that $\displaystyle\lim_{N\rightarrow\infty}a_{N}=L,$ (3.7) where the calculation of how many measurements are required to reach a precision of $\epsilon$ is given by the positive integer $M$ and the “taking enough measurements” is taken into account when $N\rightarrow\infty$ in Eq. 3.7. Knowing more and more precise time measurements is great, but what does it get us? 
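Eq. 3.6 tells us directly how many samples buy a given precision. A small sketch (the interval endpoints match the text's earlier example of $t_{0}=1$, $t_{f}=2$; the function name is my own invention):

```python
import math

# How many samples N are needed so that dt = (tf - t0)/N beats a chosen
# precision eps? Eq. 3.6 says we need N > (tf - t0)/eps.
t0, tf = 1.0, 2.0

def samples_needed(eps):
    """Smallest integer N satisfying (tf - t0)/N < eps."""
    return math.floor((tf - t0) / eps) + 1

for eps in (0.1, 0.01, 0.001):
    N = samples_needed(eps)
    assert (tf - t0) / N < eps  # the resulting increment really beats eps
    print(eps, N)
```

Since `eps` can be made as small as we like, the loop could in principle run forever with ever-tinier precisions; that is the "taking enough measurements" idea in computational form.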
How can we be sure that there will truly be a position measurement for us to take if we zoom in on the temporal axis with smaller and smaller time increments? To the best of my knowledge, in physics, we assume that this is always true for moving objects. But this would imply that as we take more and more precise time measurements, we could get more and more precise position measurements. Or at least, if the position at one point in time is known, then the position at $t+\Delta t$ should be nearby that at $t$. Otherwise, the object would somehow jump/teleport to another location randomly. And that would be weird/not anything we have ever observed, so in physics we assume that we can always zoom in infinitely far for all spatial and temporal variables ($N\rightarrow\infty$). But how would we represent this mathematically? For this case, we will assume that we have taken infinitely many temporal measurements so that we can zoom in arbitrarily around some special point in time, let’s call it $t^{\prime}$. If we have zoomed in to some arbitrary level of precision, now denoted by $\delta t$777The lowercase delta is because our precision is tiny, so I figured a little variable would be appropriate., then we can probe any time values within what’s called an open ball of radius $\delta t$. The ball itself is defined as the set of all $t$-values that are within the radius $\delta t$ of the central point $t^{\prime}$, and for those who are interested it is written as $\displaystyle\mathcal{B}(t^{\prime},\delta t)=\\{t\in\mathds{R}:|t-t^{\prime}|<\delta t\\}.$ In a single dimension though, this ball is just the open interval $t\in(t^{\prime}-\delta t,t^{\prime}+\delta t)$ along the real line. It is a circular disk in two dimensions, and actually is a ball (sphere) in three dimensions. In four and higher dimensions, it is harder to visualize888Yes, this is a dare to prove me wrong.. 
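The open-ball definition translates into a one-line membership test. Here is a tiny sketch of my own (the function name is not from the text) that works in any number of dimensions, where the one-dimensional case reduces to the open interval just described:

```python
import math

# Membership in the open ball B(center, radius): a point is inside exactly
# when its Euclidean distance from the center is strictly less than radius.
def in_ball(point, center, radius):
    """True if `point` lies in the open ball of `radius` about `center`."""
    dist = math.sqrt(sum((p - c) ** 2 for p, c in zip(point, center)))
    return dist < radius

# one dimension: B(t', dt) is just the open interval (t' - dt, t' + dt)
assert in_ball((1.95,), (2.0,), 0.1)       # 1.95 lies in (1.9, 2.1)
assert not in_ball((2.1,), (2.0,), 0.1)    # the boundary is excluded: open!

# three dimensions: the ball is a solid sphere without its surface
assert in_ball((0.0, 0.0, 0.9), (0.0, 0.0, 0.0), 1.0)
assert not in_ball((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)
print("open-ball checks passed")
```

The strict `<` is what makes the ball open; changing it to `<=` would include the boundary and give the closed ball instead.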
Anyway, we want to be sure that an object doesn’t suddenly teleport elsewhere, so we need the position of an object at time $t$ to be nearby the position at time $t^{\prime}$. Therefore, if we can find a position ball whose radius $|\delta\vec{r}|$ is somehow the result of the $\delta t$ precision we chose, then we can guarantee that the position of the object at time $t$ will always be within the vicinity of the position at time $t^{\prime}$. And most importantly, this means $\displaystyle|\vec{r}(t)-\vec{r}(t^{\prime})|<|\delta\vec{r}|,$ (3.8) where $|\delta\vec{r}|$ is a finite number of, again, arbitrary precision. This whole idea that zooming in far on the independent variable consequently leads to us freely zooming in on the dependent variable is what is known as continuity. To be more mathematically precise: > A function $f:\mathds{R}\rightarrow\mathds{R}$ is said to be continuous at $x^{\prime}$ if, for every $\epsilon>0$, there exists a $\delta>0$ such that for every $x$ where $|x-x^{\prime}|<\delta$, we have $|f(x)-f(x^{\prime})|<\epsilon$. When the above statement holds, we call the output $f(x^{\prime})$ the limit of $f(x)$ as $x\rightarrow x^{\prime}$, which is written as $\displaystyle\lim_{x\rightarrow x^{\prime}}f(x)=f(x^{\prime}).$ (3.9) Sometimes, the limit is instead defined as $F$ so we can study functions where $f(x^{\prime})$ need not be defined. The definition above is what you may have seen/heard of before — it is the $\epsilon$-$\delta$ definition of continuity — but it is strictly in one dimension, whereas actually everything else up until this point was in an arbitrarily high number of spatial dimensions; so maybe we were talking about an object only moving along a single axis (one-dimensional), or on a curve on a surface (two-dimensional), or even in a trajectory through space (three-dimensional). 
Here we condensed back to one dimension with the $\epsilon$-$\delta$ definition of continuity, because the mathematical precision in higher dimensions is actually a little too nuanced to jump in with immediately. Actually, to be completely honest, I don’t think the definition, as it is written above, has ever come up in a physics class that I’ve taken. However, the idea of being able to zoom in as far as we like does come up constantly; that is why I spent so much time developing the intuition behind the $\epsilon$-$\delta$ definition of continuity. Nevertheless, I do want to unpack that definition a little, because it will be extremely relevant for things like derivatives later on. First things first, the statement $f:\mathds{R}\rightarrow\mathds{R}$ translates to “function $f$ that has real number inputs (first $\mathds{R}$) and then outputs real numbers (second $\mathds{R}$).” This means we are only dealing with the good-ol’ one-dimensional functions that you learned in high school. The next thing that is listed is that this definition of continuity only applies for functions at a single point $x^{\prime}$ in the domain. It says nothing about the continuity of $f$ at nearby points. The actual requirement for continuity is that there has to be a $\delta>0$, a.k.a. an arbitrarily small number, to zoom in on $x$ nearby $x^{\prime}$ when we zoom in on $f(x^{\prime})$ itself to a chosen precision of $\epsilon$. Therefore, this definition posits that the existence of the $\epsilon$ precision guarantees the existence of the $\delta$ precision in the independent variable. This means that continuity allows us to continue taking smaller and smaller independent-variable increments to measure smaller and smaller dependent-variable increments, just like our intuition with an object’s position informed us. 
So this definition, although I agree that it is pretty intangible the first time you see it, incorporates all of what we’ve discussed in the section so far, but it does it in a much more concise manner. Example 3.2 shows how we might use the definition of continuity to show simple functions are continuous. The truth is that it usually takes different algebra tricks to establish the continuity of individual functions, and oftentimes these tricks are NOT obvious. If you ever take a course in mathematical analysis, you will see this firsthand. Therefore, it usually helps to establish the continuity of general classes of functions, like that of sums, products, or compositions, because we can then argue that whatever messy function we have is truly just a combination of continuous functions. Thus, I am going to prove a few theorems to you regarding the continuity of sums, products, and compositions. Disclaimer: the proofs will be pretty abstract, and so if they are too hard to follow at first, that’s okay. The fundamentally important continuity rules in physics will be the numbered limit equations that follow each proof, but I wouldn’t be able to live with myself if I just told you what they were without justification. I mean, this is supposed to be science after all, right? Definitions are great, but their utility really comes in applying them. In this example, I will prove to you that constant and linear functions are continuous. We first start with a constant function. Let’s assume our constant function looks like $f(x)=A$ for all $x\in\mathds{R}$. If you don’t like the $A$ here, replace it with the number $\pi$ and then, at the end, replace all the $\pi$s with $A$s. Okay, so the proof goes something like this: Consider the point $x^{\prime}\in\mathds{R}$ and the open ball $\mathcal{B}(f(x^{\prime}),\epsilon)$ centered at $f(x^{\prime})$ with radius $\epsilon>0$. More visually, this means we have zoomed in on the dependent variable $f$ to a precision of $\epsilon$. 
We want to show that there exists a $\delta>0$ such that $|x-x^{\prime}|<\delta$ implies $|f(x)-f(x^{\prime})|<\epsilon$, no matter what $\epsilon$ precision we choose. Now consider the absolute difference $|f(x)-f(x^{\prime})|$, where I will substitute in the values of $x$ and $x^{\prime}$ into $f(x)=A$. $\displaystyle|f(x)-f(x^{\prime})|=|A-A|=0$ But by definition, $\epsilon>0$. If we let $\delta=\epsilon>0$, then for every $x$ such that $|x-x^{\prime}|<\delta$, we have $\displaystyle|f(x)-f(x^{\prime})|=|A-A|=0<\epsilon$ This shows that $f$ is continuous at $x^{\prime}$. However, since this argument holds for all $x^{\prime}\in\mathds{R}$, we conclude that constant functions are continuous over all of the real numbers. For the next proof, we look at the linear function $f(x)=bx$ for all $x\in\mathds{R}$, where $b\neq 0$ is a constant coefficient. The proof is similar in setup to the one above, but its execution will probably look very different to you the first time you see it. Remember that the key is to show the existence of a $\delta>0$ for every conceivable $\epsilon>0$. Consider the point $x^{\prime}\in\mathds{R}$ and the open ball $\mathcal{B}(f(x^{\prime}),\epsilon)$ centered at $f(x^{\prime})$ with radius $\epsilon>0$. We will argue that for every $\epsilon>0$, we can find a $\delta=\epsilon/|b|>0$ such that when $|x-x^{\prime}|<\delta$, we have $|f(x)-f(x^{\prime})|<\epsilon$. $\displaystyle|f(x)-f(x^{\prime})|$ $\displaystyle=|bx-bx^{\prime}|$ $\displaystyle=|b(x-x^{\prime})|$ $\displaystyle=|b|\,\underbrace{|x-x^{\prime}|}_{<\delta}$ $\displaystyle<|b|\,\frac{\epsilon}{|b|}$ $\displaystyle=\epsilon$ Thus, $|f(x)-f(x^{\prime})|<\epsilon$ whenever $|x-x^{\prime}|<\delta$, for every $\epsilon>0$; thereby completing the proof that $f(x)=bx$ is continuous at $x^{\prime}$. Further, since $x^{\prime}$ is arbitrary, again, this argument holds for all of the real numbers. We conclude that linear functions are continuous over all of the real numbers. 
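As a sketch rather than a proof, we can also check numerically that the $\delta=\epsilon/|b|$ recipe from the proof really works. The slope $b$ and the point $x^{\prime}$ below are arbitrary choices of mine:

```python
import random

b = 3.7          # illustrative nonzero slope; any b != 0 works the same way
f = lambda x: b * x
xp = 2.0         # the point x' where we test continuity

for eps in (1.0, 1e-3, 1e-6):
    delta = eps / abs(b)    # the delta chosen in the proof
    for _ in range(1000):
        # sample strictly inside the delta-ball around x'
        x = xp + 0.99 * delta * random.uniform(-1, 1)
        assert abs(f(x) - f(xp)) < eps
```

Notice that shrinking $\epsilon$ by a factor of a thousand simply shrinks $\delta$ by the same factor; that proportionality is exactly what the proof exploited.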
Before we proceed, we need one very important theorem concerning the absolute value function: the so-called triangle inequality. It states simply that for any $x,y\in\mathds{R}$, we have $\displaystyle|x+y|\leq|x|+|y|.$ (3.10) A proof of this statement is straightforward, and follows from the fact that $x^{2}=(|x|)^{2}$ for all real numbers. It goes something like this, $\displaystyle(|x|+|y|)^{2}$ $\displaystyle=(|x|)^{2}+(|y|)^{2}+2|x|\,|y|$ $\displaystyle\geq(|x|)^{2}+(|y|)^{2}+2xy,\textrm{ since $|x|\,|y|\geq xy$ (either $x$ or $y$ could possibly be negative)}$ $\displaystyle=x^{2}+y^{2}+2xy$ $\displaystyle=(x+y)^{2}$ $\displaystyle=(|x+y|)^{2}.$ Thus, $(|x+y|)^{2}\leq(|x|+|y|)^{2}$. By taking the square root of both sides, we have the triangle inequality; a theorem that gets its name because it essentially says that in a triangle made up of three sides, no one side can be longer than the sum of the other two. This inequality will help us establish that sums of continuous functions are continuous. Now for the continuity of sums. To prove this, consider the function $h(x)=f(x)+g(x)$, where $f(x)$ and $g(x)$ are both continuous at $x^{\prime}$. 
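As a quick aside before we use it, Eq. 3.10 is easy to spot-check numerically over a pile of random pairs (a spot check, of course, not a substitute for the proof above):

```python
import random

for _ in range(10_000):
    x = random.uniform(-100, 100)
    y = random.uniform(-100, 100)
    # |x + y| <= |x| + |y| for every pair of reals (Eq. 3.10)
    assert abs(x + y) <= abs(x) + abs(y)
```

Equality holds exactly when $x$ and $y$ have the same sign (or one of them is zero); the inequality is strict otherwise.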
Since both $f$ and $g$ are continuous at $x^{\prime}$, there must exist $\delta$-precisions for both functions for every $\epsilon$-precision, namely $\delta_{f}$ and $\delta_{g}$, respectively, such that $\displaystyle|x-x^{\prime}|<\delta_{f}\textrm{ implies }|f(x)-f(x^{\prime})|$ $\displaystyle<\frac{\epsilon}{2},$ $\displaystyle|x-x^{\prime}|<\delta_{g}\textrm{ implies }|g(x)-g(x^{\prime})|$ $\displaystyle<\frac{\epsilon}{2}.$ Then, by substitution into $h$, we have $\displaystyle|h(x)-h(x^{\prime})|$ $\displaystyle=|f(x)+g(x)-f(x^{\prime})-g(x^{\prime})|$ $\displaystyle=|f(x)-f(x^{\prime})+g(x)-g(x^{\prime})|$ $\displaystyle\leq|f(x)-f(x^{\prime})|+|g(x)-g(x^{\prime})|,\textrm{ thanks to the triangle inequality}$ $\displaystyle<\frac{\epsilon}{2}+\frac{\epsilon}{2},\textrm{ whenever }|x-x^{\prime}|<\min\\{\delta_{f},\;\delta_{g}\\},$ $\displaystyle=\epsilon$ Thus, when we define $\delta=\min\\{\delta_{f},\;\delta_{g}\\}$, where the $\min$ function outputs the smaller of $\delta_{f}$ and $\delta_{g}$, we have that $|x-x^{\prime}|<\delta$ implies $|h(x)-h(x^{\prime})|<\epsilon$ for every $\epsilon>0$. This means that when $f$ and $g$ are continuous at $x^{\prime}$, so is the sum of the two. I want to emphasize that this conclusion implies the following limit rule: $\displaystyle\lim_{x\rightarrow x^{\prime}}\left[f(x)+g(x)\right]=\left[\lim_{x\rightarrow x^{\prime}}f(x)\right]+\left[\lim_{x\rightarrow x^{\prime}}g(x)\right].$ (3.11) Hence, the limit of the sum is the sum of the limits. Furthermore, if both $f$ and $g$ are continuous over all of the real numbers, then so is their sum $h$. We next want to prove that the product of two functions that are continuous at $x^{\prime}$ is also continuous at that point, provided both functions are defined at $x^{\prime}$ (the defined condition guarantees that I can write something like $f(x^{\prime})$ and not have it accidentally blow up to infinity). 
Now let $h(x)=f(x)g(x)$, and let the definitions of $\delta_{f}$ and $\delta_{g}$ be modified a little from what is given above as $\displaystyle|x-x^{\prime}|<\delta_{f}\textrm{ implies }|f(x)-f(x^{\prime})|$ $\displaystyle<\frac{\epsilon}{2(1+|g(x^{\prime})|)},$ $\displaystyle|x-x^{\prime}|<\delta_{g}\textrm{ implies }|g(x)-g(x^{\prime})|$ $\displaystyle<\frac{\epsilon}{2(1+|f(x^{\prime})|)}.$ These odd-looking definitions will come in handy later on so that we will find $|h(x)-h(x^{\prime})|<\epsilon$ and not some function of $\epsilon$. Then we have by substitution, $\displaystyle|h(x)-h(x^{\prime})|$ $\displaystyle=|f(x)g(x)-f(x^{\prime})g(x^{\prime})|,$ $\displaystyle=|f(x)g(x)+0-f(x^{\prime})g(x^{\prime})|,$ $\displaystyle=|f(x)g(x)-f(x^{\prime})g(x)+f(x^{\prime})g(x)-f(x^{\prime})g(x^{\prime})|,$ $\displaystyle=\left|\left[f(x)g(x)-f(x^{\prime})g(x)\right]+\left[f(x^{\prime})g(x)-f(x^{\prime})g(x^{\prime})\right]\right|,$ $\displaystyle\leq|f(x)-f(x^{\prime})|\,|g(x)|+|f(x^{\prime})|\,|g(x)-g(x^{\prime})|,\textrm{ via the triangle inequality}$ $\displaystyle<\frac{\epsilon}{2(1+|g(x^{\prime})|)}\,|g(x)|+|f(x^{\prime})|\,\frac{\epsilon}{2(1+|f(x^{\prime})|)},\textrm{ when }|x-x^{\prime}|<\min\\{\delta_{f},\;\delta_{g}\\}.$ Before continuing, we must figure out a way to deal with $|g(x)|$, because we do not know the specific value of $g$ at $x$; we only know that $g(x^{\prime})$ is defined and that $g$ is continuous at $x^{\prime}$. But since $g$ is continuous at $x^{\prime}$, we are free to zoom in as far as we like on $g(x^{\prime})$, and we are guaranteed to be able to find an $x$-interval that corresponds to our $g$-precision. Thus, what we will do is choose to limit our $g$-precision to be less than a set number, for example, $1$. 
Then, by the continuity of $g$, there exists a $\delta_{1}$ such that $\displaystyle|x-x^{\prime}|<\delta_{1}\textrm{ implies }|g(x)-g(x^{\prime})|$ $\displaystyle<1.$ Then we can say using the triangle inequality that $\displaystyle|g(x)|=|g(x)+0|=|g(x)-g(x^{\prime})+g(x^{\prime})|\leq|g(x)-g(x^{\prime})|+|g(x^{\prime})|<1+|g(x^{\prime})|,$ where the last inequality holds for every $x$ such that $|x-x^{\prime}|<\delta_{1}$. Okay, so now that we have these conditions in place, and if we remember that $|f(x^{\prime})|\leq 1+|f(x^{\prime})|$, we can define our $\delta$-precision such that $\delta=\min\\{\delta_{f},\;\delta_{g},\;\delta_{1}\\}$. Then for every $x$ such that $|x-x^{\prime}|<\delta$, we have $\displaystyle|h(x)-h(x^{\prime})|$ $\displaystyle<\frac{\epsilon}{2(1+|g(x^{\prime})|)}\,|g(x)|+|f(x^{\prime})|\,\frac{\epsilon}{2(1+|f(x^{\prime})|)},$ $\displaystyle\leq\frac{\epsilon\,(1+|g(x^{\prime})|)}{2(1+|g(x^{\prime})|)}+\frac{\epsilon\,(1+|f(x^{\prime})|)}{2(1+|f(x^{\prime})|)},$ $\displaystyle=\frac{\epsilon}{2}+\frac{\epsilon}{2}$ $\displaystyle=\epsilon.$ Since we were able to find a $\delta>0$ for every $\epsilon>0$ such that $|x-x^{\prime}|<\delta$ implies $|h(x)-h(x^{\prime})|<\epsilon$, we can conclude that $h(x)=f(x)g(x)$ is continuous at $x^{\prime}$. This time, the corresponding limit equation would look like $\displaystyle\lim_{x\rightarrow x^{\prime}}\left[f(x)g(x)\right]=\left[\lim_{x\rightarrow x^{\prime}}f(x)\right]\,\left[\lim_{x\rightarrow x^{\prime}}g(x)\right],$ (3.12) or the limit of a product is the product of limits. Just as was the case for the sum of continuous functions, if both $f$ and $g$ are continuous over all of the reals, so is their product. However, if only one of the two functions is continuous everywhere, then the product is not guaranteed to be continuous everywhere; it, too, is limited (nice pun, right?) to being continuous only at the values $x^{\prime}$ where both of its factors are. 
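To make those odd-looking $\delta$-precisions feel less mysterious, here is a numerical run-through of the product proof for a concrete pair of linear functions (my own illustrative choices; the factors of $2$ and $3$ in the deltas are just the slopes of $f$ and $g$):

```python
import random

# Illustrative pair of functions, both continuous at x' = 1
f = lambda x: 2 * x          # f(x') = 2
g = lambda x: 3 * x          # g(x') = 3
h = lambda x: f(x) * g(x)
xp = 1.0

for eps in (1.0, 0.1, 0.01):
    # The three delta-precisions from the proof, specialized to these
    # linear functions (|f(x)-f(x')| = 2|x-x'|, |g(x)-g(x')| = 3|x-x'|)
    delta_f = (eps / (2 * (1 + abs(g(xp))))) / 2
    delta_g = (eps / (2 * (1 + abs(f(xp))))) / 3
    delta_1 = 1 / 3                      # makes |g(x) - g(x')| < 1
    delta = min(delta_f, delta_g, delta_1)
    for _ in range(1000):
        x = xp + 0.99 * delta * random.uniform(-1, 1)
        assert abs(h(x) - h(xp)) < eps   # the product obeys the eps bound
```

Every sampled point inside the $\delta$-ball keeps the product $h$ within $\epsilon$ of $h(x^{\prime})$, exactly as the proof promised.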
The last continuity rule deals with compositions of functions. As a brief review, compositions are sometimes written as $g(f(x))$ or $g\circ f(x)$. They are functions that look something like this: if $g(x)=x^{2}$ and $f(x)=4\sin x$, then $g(f(x))=(4\sin x)^{2}$. Thus, the notation $g(f(x))$ really just means: put whatever the output of $f$ is into $g$ as the input. Although this class of functions may seem like more of a special case compared to sums or products (it did to me the first time I learned of it), compositions of functions appear everywhere in both calculus and physics. Thus, knowing how they behave is imperative. Our goal in particular is to show that compositions of continuous functions are also continuous. To do this, we need to show that a function $h(x)=g(f(x))$ is continuous at the point $x^{\prime}$, given that $f$ is continuous at $x^{\prime}$ and $g$ is continuous at $f(x^{\prime})$. To make this proof a little clearer, define $\displaystyle u=f(x),\;u^{\prime}=f(x^{\prime}).$ By the continuity of $g$ at $u^{\prime}$, we know that for every $\epsilon>0$, there exists a $\delta_{u}>0$ such that $\displaystyle|u-u^{\prime}|=|f(x)-f(x^{\prime})|<\delta_{u}\textrm{ implies }|g(u)-g(u^{\prime})|<\epsilon.$ Then by the continuity of $f$ at $x^{\prime}$, we know that for every $\epsilon_{f}$-precision, there exists a $\delta>0$ such that $\displaystyle|x-x^{\prime}|<\delta\textrm{ implies }|f(x)-f(x^{\prime})|=|u-u^{\prime}|<\epsilon_{f}.$ Therefore, since we want something that looks like $\displaystyle|x-x^{\prime}|<\delta\textrm{ implies }|h(x)-h(x^{\prime})|=|g(f(x))-g(f(x^{\prime}))|<\epsilon,$ then we choose $\epsilon_{f}=\delta_{u}$ such that our final statement reads as $\displaystyle|x-x^{\prime}|<\delta\textrm{ implies }|f(x)-f(x^{\prime})|=|u-u^{\prime}|<\delta_{u}\textrm{ implies }|g(u)-g(u^{\prime})|=|h(x)-h(x^{\prime})|<\epsilon$ for every $\epsilon>0$. 
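As a sanity check, we can watch the composite $g(f(x))$, with $g(x)=x^{2}$ and $f(x)=4\sin x$ from above, home in on $g(f(x^{\prime}))$ as $x\rightarrow x^{\prime}$ (a numerical illustration, not a proof; the point $x^{\prime}=0.5$ and the error bound's constant are my own choices):

```python
import math

g = lambda x: x * x
f = lambda x: 4 * math.sin(x)
xp = 0.5

target = g(f(xp))            # the composition evaluated at the limit point
for n in range(1, 8):
    x = xp + 10 ** (-n)      # march x toward x'
    # the composite value approaches g(f(x')) as x -> x';
    # the factor 40 is a generous slope bound near x' = 0.5
    assert abs(g(f(x)) - target) < 10 ** (-n) * 40
```

The error shrinks in lockstep with the step size, which is the numerical face of "the limit of the composition is the composition of the limits."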
As long as $f$ and $g$ are both continuous everywhere, the composite function will be continuous everywhere. If $f$ is discontinuous at a point, or $g$ is discontinuous at the corresponding value of $f$, then the composition is not guaranteed to be continuous at that point (there is a huge caveat here: sometimes functions that are not specifically defined at a point can still look continuous at that point; these types of function discontinuities are called removable discontinuities, or holes). The corresponding limit equation for this composition property can be written as $\displaystyle\lim_{x\rightarrow x^{\prime}}\left[g(f(x))\right]=g\left(\lim_{x\rightarrow x^{\prime}}f(x)\right),$ (3.13) which can be summarized as: the limit of the composition is the composition of the limits. Again, these continuity rules (Eqs. 3.11, 3.12, and 3.13) are incredibly useful tools to determine whether any function is continuous, or at least continuous at a specified point. This problem is designed to show you how to use the continuity rules defined by Eqs. 3.11, 3.12, and 3.13, given that constant functions and linear functions are continuous from Example 3.2. As a quick example of how to use these continuity rules, by Eq. 3.11, it follows that the line $f(x)=bx+A$, where $b$ and $A$ are constants, is continuous for every single real number input, $x$. This is because both $bx$ and $A$ are continuous everywhere from Example 3.2; therefore, their sum is also continuous everywhere. (a) For the first part of the problem, use Eq. 3.12 to argue that $f(x)=bx^{2}$ is continuous everywhere. (b) Then, using Eq. 3.11, argue that any parabola $f(x)=ax^{2}+bx+c$ is continuous everywhere. (c) Now argue that any cubic polynomial $P_{3}(x)=a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}$ is continuous everywhere, where every element of the set of coefficients $\\{a_{n}\\}=\\{a_{0},a_{1},a_{2},a_{3}\\}$ is constant. 
(d) Finally, use the same argument to show that any $n^{\mathrm{th}}$ order polynomial, denoted by $P_{n}$, and written as $P_{n}(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\dots+a_{2}x^{2}+a_{1}x+a_{0}$ and accompanied by the set of constant coefficients $\\{a_{n}\\}=\\{a_{0},a_{1},a_{2}\dots,a_{n-1},a_{n}\\}$ is continuous everywhere. Although the ability to potentially take an infinite amount of measurements is great, it is honestly more helpful to know, for example, how an object is moving because we can then predict where it will be in the next instant in time. In physics, equations like these are called equations of motion, and the
# Stick-slip and convergence of feedback-controlled systems with Coulomb friction Michael Ruderman <EMAIL_ADDRESS> University of Agder, 4604, Norway ###### Abstract An analysis of stick-slip behavior and trajectory convergence in feedback-controlled motion systems with discontinuous Coulomb friction is provided. A closed-form parameter-dependent stiction region, around an invariant equilibrium set, is proved to be always reachable and globally attractive. It is shown that only asymptotic convergence can be achieved, with at least one, but in most cases an infinite number of, consecutive stick-slip cycles, independent of the initial conditions. Theoretical developments are supported by a number of numerical results with dedicated convergence examples. ###### keywords: Coulomb friction , limit cycles , PID control , convergence analysis , sliding mode , discontinuities ## 1 Introduction Feedback-controlled motion systems are mostly subject to nonlinear friction, and the direction-dependent Coulomb friction force plays a crucial role owing to a (theoretical) discontinuity at velocity zero-crossings. Although the more complex dynamic friction laws (see, e.g., [1, 2, 3] and references therein) allow the frictional discontinuity to be bypassed during analysis, the basic Coulomb friction phenomenon continues to present the same challenges in terms of controller convergence, especially in the presence of an integral control action. The associated stick-slip behavior and so-called frictional limit cycles were formerly addressed in [4]. An algebraic prediction of stick-slip, with a large set of parametric equalities, was compared to the describing function method, while the Coulomb plus static friction law was assumed in order to avoid the discontinuity at the velocity zero-crossing. 
An explicit solution for friction-generated limit cycles has also been proposed in [5], necessitating a static friction approximation (to avoid discontinuity) and also requiring a stiction force larger than the Coulomb friction level. Despite including an explicit analysis of state trajectories for both the sticking and slipping phases, no straightforward conclusions about the appearance and convergence of stick-slip behavior have been reported. The appearance of friction-induced (so-called hunting) limit cycles has been briefly addressed in [6], for the assumed LuGre [7] and so-called switch [8] friction models. Note that a former analysis of stick-slip behavior and the associated friction-induced limit cycles can be found in [9]. An explanation of how a proportional-feedback-controlled motion with Coulomb friction comes to sticking was subsequently shown in [10] by using the invariance principle. Stick-slip behavior, as an observable phenomenon known in control engineering practice, was highlighted in [1], and several other control studies have since attempted to analyze and compensate such processes. For instance, a related analysis of under-compensation and over-compensation of static friction was reported in [11]. Issues associated with a slow (creeping-like) convergence of the feedback-controlled motion in the presence of Coulomb friction have been addressed and experimentally demonstrated in [12]. More recently, the convergence problems (of PID feedback control) have been well demonstrated, with accurate experiments reported, in [13] while attempting to reduce settling errors by the reset integral control [14]. A related analysis was reported earlier in [15]. Despite a number of experimental observations and elaborated studies reported in the literature, it appears that no general consensus has been established in relation to friction-induced stick-slip cycles in feedback-controlled systems with Coulomb friction. 
In particular, questions arise over when and under which conditions the stick-slip cycles occur, and how a PID-controlled motion will converge to the zero equilibrium in the presence of Coulomb friction (with discontinuity). Note that the problem of slow convergence in the vicinity of a set reference position is of particular relevance for advanced motion control systems, see, e.g., [16], but in PID design and tuning, see, e.g., [17], the associated issues are not widely accepted and have yet to be formalized, despite the huge demand from precision engineering control tasks. This gap should not come as a complete surprise, however, given the nontrivial friction microdynamics, visible in experimental studies [18, 19, 20], and the uncertain and time-varying friction behavior, see, e.g., [21]. This paper is intended to contribute to the analysis of feedback-controlled system convergence with Coulomb friction and, thus, to the understanding of stick-slip cycles occurring in servo mechanisms. In order to keep the analysis as general as possible and to clarify the principal phenomenon, we have assumed the classical Coulomb friction law with discontinuity. This leads (unavoidably) to variable structure system dynamics, distinguishing between the sticking and slipping modes of motion. At the same time, we show that all state trajectories remain continuous and almost always differentiable (except at the finitely many switches between the two modes). We provide theorems and identify the conditions to demonstrate that the sticking region around the zero equilibrium is reachable and globally attractive. The main contribution of the analysis is further reinforced by several illustrative numerical examples. ### 1.1 Problem statement Throughout the paper, we will deal with the feedback-controlled systems described by $\ddot{x}(t)+K_{d}\dot{x}(t)+K_{p}x(t)+K_{i}\int x(t)dt+F(t)=0,$ (1) where the derivative, proportional and integral feedback gains are $K_{d}$, $K_{p}$ and $K_{i}$, respectively. 
The nonlinear friction (that with discontinuity) is denoted by $F$, and the set-value control problem is reduced to the convergence problem for a non-zero initial condition, i.e., $x(0)\neq 0$. Furthermore, we use the following simplifications of the system plant, without loss of generality: the relative motion of an inertial body with unity mass is considered in the generalized $(x,\dot{x})$ coordinates. The inherent system damping (including linear viscous friction) and stiffness (of restoring spring elements) are incorporated (if applicable) into $K_{d}>0$ and $K_{p}>0$, respectively. There are no actuator constraints, so feedback of the integral output error is directly applicable via the gain factor $K_{i}>0$. The control problem (1) has long been associated with issues of slow and/or cyclic convergence of $x(t)$ in the vicinity of the steady state for the set reference value. This (sometimes called hunting behavior or even hunting limit cycles) has been addressed in analysis and also observed in several controlled positioning experiments, e.g., [1, 4, 5, 6, 12, 13]. These phenomena appear to be associated with the integral control action and the nonlinear (Coulomb-type) friction within a vanishing region around equilibria, where the potential field of the proportional feedback weakens and cannot provide $x(t)\rightarrow 0$ at certain (application-required) times $t<\mathrm{const}$. The hunting behavior is closely related to stick-slip, where a smooth (continuous) motion alternates with a sticking phase of zero or slowly creeping displacement. Stick-slip appearance, parametric conditions and convergence in semi-stable limit cycles are the focus of our study, while we assume the Coulomb friction force with discontinuity. ## 2 Stiction due to discontinuous Coulomb friction In this Section, we will analyze the stick-slip behavior of the (1) system, for which the classical Coulomb friction with discontinuity is represented by $F(\dot{x})=F_{c}\,\mathrm{sign}(\dot{x})$. 
Here, the Coulomb friction coefficient is $F_{c}>0$, and the sign operator is defined by $\mathrm{sign}(z)=\left\\{\begin{array}[]{ll}1,&\;z>0,\\\ \left[-1,1\right],&\;z=0,\\\ -1,&\;z<0.\\\ \end{array}\right.$ (2) Note that (2) constitutes an ideal relay with instantaneous switching upon a change of the input sign. We also note that for a zero displacement rate, the friction equation becomes an inclusion $F(0)\in[-F_{c},F_{c}]$ in the Filippov sense [22] when seeking the corresponding solution. We will consider the feedback-controlled system in a minimal state-space representation, as follows: $\displaystyle\dot{x}$ $\displaystyle=$ $\displaystyle Ax+Bu,$ (3) $\displaystyle y$ $\displaystyle=$ $\displaystyle Cx,$ (4) $\displaystyle u$ $\displaystyle=$ $\displaystyle-\mathrm{sign}(y).$ (5) Note that in this way, we also approach the system notation provided in [23] for the analysis of relay feedback systems (RFSs). Introducing the state vector $x=(x_{1},x_{2},x_{3})^{T}\in\mathbb{R}^{3}$ of the integral, output and derivative errors, (1) can be rewritten as (3)-(5), with the system matrix $A=\left(\begin{array}[]{ccc}0&1&0\\\ 0&0&1\\\ -K_{i}&-K_{p}&-K_{d}\\\ \end{array}\right),$ (6) and input and output distribution vectors $B=\left(\begin{array}[]{c}0\\\ 0\\\ F_{c}\\\ \end{array}\right),\quad C^{T}=\left(\begin{array}[]{c}0\\\ 0\\\ 1\\\ \end{array}\right)$ (7) correspondingly. ### 2.1 Without integral feedback First, we consider the (3)-(7) system without an integral feedback action, meaning $K_{i}=0$. In this case, the phase plane $(x_{2},x_{3})\in\mathbb{R}^{2}$ is divided into two regions $P^{+}=\\{x\in\mathbb{R}^{2}:x_{3}>0\\},\quad P^{-}=\\{x\in\mathbb{R}^{2}:x_{3}<0\\}$ (8) by the discontinuity manifold $S=\\{x\in\mathbb{R}^{2}:x_{3}=0\\}$. 
It can be seen that in the discontinuity manifold $S$, the vector fields of the state value $x_{s}$ are given by $\displaystyle f^{+}(x_{s})$ $\displaystyle=\overset{x\in P^{+}}{\underset{x\rightarrow x_{s}}{\lim}}(Ax+Bu)=\left(\begin{array}[]{c}0\\\ -K_{p}x_{2}-F_{c}\end{array}\right),$ (11) $\displaystyle f^{-}(x_{s})$ $\displaystyle=\overset{x\in P^{-}}{\underset{x\rightarrow x_{s}}{\lim}}(Ax+Bu)=\left(\begin{array}[]{c}0\\\ -K_{p}x_{2}+F_{c}\end{array}\right)$ (14) and are pointing in opposite directions within $|x_{2}|\leq F_{c}K_{p}^{-1}$. In contrast, outside of this region (denoted by $S_{0}$ in Figure 1), both vector fields are pointing in the same direction, towards $P^{+}$ for $x_{2}<-F_{c}K_{p}^{-1}$ and towards $P^{-}$ for $x_{2}>F_{c}K_{p}^{-1}$. Figure 1: Phase portrait of $(x_{2},x_{3})$-trajectories of the (3)-(7) system without integral feedback, attracted to $S_{0}$ from various initial values Since both vector fields are normal to the manifold $S$, neither a smooth motion nor a sliding mode can occur for the $(x_{2},x_{3})\in S_{0}$ trajectories, which means that any trajectory reaching $S_{0}$ will remain there $\forall\>t\rightarrow\infty$. Thus, $S_{0}$ constitutes the largest invariant set of equilibrium points for (3)-(7) without integral control action. Note that this has also been shown in [10] and is well known when a relative motion with Coulomb friction is controlled by proportional-derivative (PD) feedback only. In this case, the set-value error can be reduced by increasing $K_{p}$ but cannot be driven to zero as long as $F_{c}\neq 0$. Phase portraits of trajectories converging to $S_{0}$ are shown in Figure 1. ### 2.2 With integral feedback When allowing for $K_{i}\neq 0$, it is intuitively apparent that having reached a point $x(t_{s})\in S_{0}$ at $t=t_{s}$, the trajectory cannot remain there for all times $t_{s}<t<\infty$. 
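Before turning to the integral action, the PD-only convergence of Section 2.1 toward $S_{0}$ can be reproduced with a rough forward-Euler sketch. The gains, friction level, time step and initial condition below are our own illustrative choices, and the zero-velocity clamping is a crude stand-in for the Filippov solution at $x_{3}=0$:

```python
import math

def simulate(x2, x3, Kp=4.0, Kd=1.0, Fc=1.0, dt=1e-3, steps=20000):
    """Forward-Euler sketch of (1) with K_i = 0 and Coulomb friction."""
    for _ in range(steps):
        if x3 == 0.0:
            spring = -Kp * x2
            if abs(spring) <= Fc:            # stiction: friction balances spring
                continue
            accel = spring - math.copysign(Fc, spring)   # breakaway
        else:
            accel = -Kp * x2 - Kd * x3 - math.copysign(Fc, x3)
        x3_new = x3 + accel * dt
        if x3 != 0.0 and x3_new * x3 < 0:    # clamp velocity zero-crossings
            x3_new = 0.0
        x2 += x3_new * dt
        x3 = x3_new
    return x2, x3

x2, x3 = simulate(2.0, 0.0)
assert x3 == 0.0                       # the trajectory has come to rest ...
assert abs(x2) <= 1.0 / 4.0 + 1e-9     # ... inside |x2| <= Fc / Kp, i.e. in S0
```

Any initial condition tried ends with zero velocity and a nonzero residual $|x_{2}|\leq F_{c}K_{p}^{-1}$, which is precisely the invariant set $S_{0}$ of the PD-only case.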
For the motion states $x_{3}(t_{s})=0$ and $x_{2}(t_{s})=\mathrm{const}\neq 0$, the integral control effort $K_{i}x_{2}(t_{s})\int^{t}_{t_{s}}dt$ grows continuously and, at some finite time ($t_{c}>t_{s}$), should lead to the breakaway [24] and a new onset of continuous motion. This alternating phase, upon system sticking, is often referred to as _slipping_ , cf., e.g., [5], so that a stick-slip motion [1, 3] appears, also in the form of limit cycles. In order to analyze the friction-induced limit cycles, sometimes denoted as hunting limit cycles, cf. [6], we first need to look into the system dynamics during stiction, i.e., for $t_{s}<t<t_{c}$. Recall that in this phase, the system (3)-(7) behaves as a continuously switching one, owing to $x_{3}=0$ and to the discontinuous relay nonlinearity in the feedback loop. The developments given below are motivated by the analysis of the existence of fast switches, provided in [23] for RFSs, while the obtained results rely on the sliding-mode principles, see, e.g., [25, 26]. Consider the switching variable (or, more generally, surface) $S=Cx(t)=0$, for which the sliding mode should occur on the manifold $S$. This requires that the existence and reachability condition, cf. [25], $\dot{S}S\leq-\eta|S|$ (15) is fulfilled, where $\eta$ is a small positive constant. ###### Theorem 1. Given is the control system (3)-(7) with Coulomb friction. The system is sticking at $x_{3}=0$ iff $|K_{i}x_{1}|+|K_{p}x_{2}|\leq F_{c}.$ (16) ###### Proof. The system remains sticking as long as it is in the sliding mode, for which (15) is fulfilled. The sliding-mode condition (15) can be rewritten as $\dot{S}\,\mathrm{sign}(S)\leq-\eta,$ (17) while the time derivative of the sliding surface is $\dot{S}=(CAx\pm CB)=\bigl{(}CAx-\mathrm{sign}(S)CB\bigr{)},$ (18) depending on the sign of $Cx$. 
Substituting (18) into (17) results in $\displaystyle CAx$ $\displaystyle\leq$ $\displaystyle CB-\eta\qquad\hbox{for}\quad\mathrm{sign}(S)>0,$ (19) $\displaystyle-CAx$ $\displaystyle\leq$ $\displaystyle CB-\eta\qquad\hbox{for}\quad\mathrm{sign}(S)<0.$ (20) Since $CB,\eta>0$, the inequalities (19) and (20) can be summarized as $|CAx|\leq CB-\eta.$ (21) Evaluating (21) with $x_{3}=0$ and $0\neq\eta\rightarrow 0^{+}$ results in (16) and completes the proof. ∎ ###### Remark 1. The condition obtained using Theorem 1 is equivalent to the set of attraction $\\{x\in S:|CAx|<|CB|\\}$ for $CB>0$ demonstrated in [23, Section 4]. Now, we are interested in the state dynamics during stiction, which means within the sliding mode. Since staying in the sliding mode (corresponding to $S\equiv 0$ on the switching surface) requires $\dot{S}=C\dot{x}=CAx+CBu=0\qquad\hbox{for}\quad t_{s}<t<t_{c},$ (22) one obtains the so-called equivalent control as $u_{e}=-(CB)^{-1}CAx.$ (23) Recall that an equivalent control, [26], is the linear one (i.e., with no relay action) which is required to maintain the system in an ideal sliding mode without fast switching. Consequently, substituting (23) into (3) results in the equivalent system dynamics $\dot{x}_{e}=\bigl{[}I-B(CB)^{-1}C\bigr{]}Ax=OAx,$ (24) which governs the state trajectories as long as the system remains in the sliding mode. $O$ is the so-called projection operator of the original system dynamics, satisfying the properties $CO=0$ and $OB=0$. 
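As a quick numerical cross-check of the projection operator in (24), one can verify $CO=0$, $OB=0$, and the structure of $OA$ used for the stiction dynamics; the gain values below are illustrative, and the result for $OA$ is in fact gain-independent:

```python
# Hand-rolled 3x3 linear algebra; A, B, C follow (6) and (7) with
# illustrative gains K_i = 1, K_p = 2, K_d = 3 and F_c = 0.5
Ki, Kp, Kd, Fc = 1.0, 2.0, 3.0, 0.5

A = [[0, 1, 0], [0, 0, 1], [-Ki, -Kp, -Kd]]
B = [0, 0, Fc]
C = [0, 0, 1]

CB = sum(C[i] * B[i] for i in range(3))          # = F_c
# O = I - B (CB)^{-1} C, the projection operator of (24)
O = [[(1.0 if i == j else 0.0) - B[i] * C[j] / CB for j in range(3)]
     for i in range(3)]

# Projection properties: C O = 0 and O B = 0
assert all(abs(sum(C[i] * O[i][j] for i in range(3))) < 1e-12 for j in range(3))
assert all(abs(sum(O[i][j] * B[j] for j in range(3))) < 1e-12 for i in range(3))

OA = [[sum(O[i][k] * A[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
# O A is the nilpotent shift matrix appearing in the stiction dynamics
assert OA == [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
```

Because $O$ simply zeroes the third state row, $OA$ always reduces to the integrator chain $\dot{x}_{1}=x_{2}$, $\dot{x}_{2}=x_{3}$, $\dot{x}_{3}=0$, whatever the gains.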
Evaluating (24) with (6) and (7) yields the system dynamics during stiction as $\left(\begin{array}[]{c}\dot{x}_{1}\\\ \dot{x}_{2}\\\ \dot{x}_{3}\\\ \end{array}\right)=\left(\begin{array}[]{ccc}0&1&0\\\ 0&0&1\\\ 0&0&0\\\ \end{array}\right)\left(\begin{array}[]{c}x_{1}(t_{s})\\\ x_{2}(t_{s})\\\ 0\\\ \end{array}\right).$ (25) It can be seen that neither the relative displacement nor its rate will change while the system is sticking, although the integral error grows according to $x_{1}(t)=x_{1}(t_{s})+x_{2}(t_{s})\int\limits^{t}_{t_{s}}dt.$ (26) It can be noted that if $K_{i}=0$, then the condition (16), correspondingly the inequality $|CAx|\leq|CB|$, reduces to $|x_{2}|\leq F_{c}K_{p}^{-1}$, while the sliding mode (25) reduces to the zero dynamics of the system in stiction (the results we obtained in Section 2.1). ### 2.3 Region of attraction Theorem 1 provides the necessary (and sufficient) condition under which the (3)-(7) system remains sticking. Yet it is also necessary to demonstrate the global attraction of state trajectories to the stiction region. Recall that the latter corresponds to the subset $S_{0}=\\{x\in\mathbb{R}^{3}\,:\,x_{3}=0,\,|K_{i}x_{1}|+|K_{p}x_{2}|\leq F_{c}\\}$ (27) where the sliding mode occurs (cf. (16) and (25)). First, we will explore the persistence of the sliding mode, meaning we will examine whether the system can stay indefinitely inside $S_{0}^{s}=\\{S_{0}\,:\,x_{2}\neq 0\\}$, i.e., for all times $t_{s}<t\rightarrow\infty$. By taking the $(x_{1},x_{2})$-projection of $x\in\mathbb{R}^{3}$, one can show that (16) results in a rhombus, as schematically illustrated in Figure 2. Figure 2: Rhombus shape, in the $(x_{1},x_{2})$-projection, of the $S_{0}$ region of attraction, with the vector field during the stiction mode and an example of an entering and leaving trajectory at $t_{s}$ and $t_{c}$, respectively The indicated vector field is unambiguous due to the integral control action (cf. the sliding-mode dynamics (25)). 
This means that after reaching $S_{0}^{s}$ at $t_{s}$, any trajectory will leave it at $t_{c}$, having hit the boundary of $S_{0}$. Denoting the point of reaching $S_{0}$ by $x(t_{s})\equiv(x^{s}_{1},x^{s}_{2},0)$, one can calculate the new point of leaving $S_{0}^{s}$ as $x_{1}(t_{c})=:x^{c}_{1}=x^{s}_{2}\bigl{(}F_{c}|x^{s}_{2}|^{-1}-K_{p}\bigr{)}K^{-1}_{i}.$ (28) Correspondingly, from (26) and (28) one obtains the time of leaving $S_{0}^{s}$ as $t_{c}=\Bigl{[}x^{s}_{2}\bigl{(}F_{c}|x^{s}_{2}|^{-1}-K_{p}\bigr{)}K^{-1}_{i}-x^{s}_{1}+x^{s}_{2}t_{s}\Bigr{]}(x^{s}_{2})^{-1}.$ (29) From the above, it can be recognized that if $K_{i}\rightarrow 0$, the stiction region $S_{0}$ blows up to the entire $(x_{1},x_{2})$-subspace and, consequently, $t_{c}\rightarrow\infty$. This means that a system trajectory will never leave $S_{0}$ having reached it (a result fully in line with what was demonstrated in Section 2.1). On the other hand, if we allow $K_{i}\rightarrow\infty$, the time instant $t_{c}\rightarrow t^{+}_{s}$, according to (29) with $x^{s}_{1}\rightarrow 0$, and this is due to $S_{0}$ collapsing to $\mathrm{proj}_{x_{2}}S_{0}$. Let us now demonstrate that $S_{0}$ is globally attractive for all initial values outside of $S_{0}$, meaning $\forall\>x(t_{0})\in\mathbb{R}^{3}\backslash S_{0}$. Since the eigendynamics in (3) and (6) is linear, one can ensure global exponential stability by analyzing the characteristic polynomial $s^{3}+K_{d}s^{2}+K_{p}s+K_{i}=0,$ (30) and applying the standard Routh-Hurwitz stability criterion. The control parameters condition $K_{d}K_{p}>K_{i}$ (31) should then be satisfied, guaranteeing that all eigenvalues $\lambda_{i}$ of the (6) system matrix have $\mathrm{Re}\\{\lambda_{i}\\}<0$ with $i=1,\ldots,3$. Then, the resulting (switched) $\dot{x}=Ax\mp B$ subsystems are asymptotically stable in both subspaces $\\{x\in\mathbb{R}^{3}\backslash S_{0}:x_{3}\gtrless 0\\}$ correspondingly. 
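The leaving point (28) and leaving time (29) are easy to verify numerically; in particular, the computed $x^{c}_{1}$ lands exactly on the boundary of $S_{0}$ from (16), and $t_{c}$ is consistent with the integral-error drift (26). The gains and the sticking point below are illustrative choices:

```python
# Illustrative gains and a sticking point x(t_s) inside S0
Ki, Kp, Fc = 1.0, 2.0, 1.0
x1s, x2s, ts = 0.1, 0.3, 0.0
assert abs(Ki * x1s) + abs(Kp * x2s) <= Fc      # x(t_s) lies in S0, Eq. (16)

# Leaving point of the stiction region, Eq. (28)
x1c = x2s * (Fc / abs(x2s) - Kp) / Ki
# The leaving point sits exactly on the boundary of S0
assert abs(abs(Ki * x1c) + abs(Kp * x2s) - Fc) < 1e-12

# Leaving time, Eq. (29); consistent with the drift x1(t) of Eq. (26)
tc = (x1c - x1s + x2s * ts) / x2s
assert abs((x1s + x2s * (tc - ts)) - x1c) < 1e-12
```

Shrinking $K_{i}$ in this snippet pushes $x^{c}_{1}$, and hence $t_{c}$, toward infinity, mirroring the $K_{i}\rightarrow 0$ limit discussed above.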
It should be noted that the above parameter condition is conservative, since the Coulomb friction itself is always dissipative, independently of whether $x_{3}>0$ or $x_{3}<0$. This can be shown by considering the dissipated energy $\bar{V}(t)=-F(t)\int\dot{x}(t)dt=-F(t)\bar{x},$ (32) which is equivalent to the mechanical work provided by the constant friction force $F$ along a unidirectional displacement $\bar{x}$. Taking the time derivative of (32) and substituting the Coulomb friction law results in $\displaystyle\frac{d}{dt}\bar{V}(t)$ $\displaystyle=$ $\displaystyle-\frac{d}{dt}F(t)\bar{x}-F(t)\frac{d}{dt}\bar{x}$ (33) $\displaystyle=$ $\displaystyle 0-F_{c}\mathrm{sign}\bigl{(}\dot{x}(t)\bigr{)}\dot{x}(t)=-F_{c}\bigl{|}\dot{x}(t)\bigr{|}.$ Therefore, $\dot{\bar{V}}(t)<0$ for all $x_{3}(t)\neq 0$. This quite intuitive yet relevant condition reveals the relay feedback (5) as an additional (rate-independent) damping, which contributes to the stabilization of the closed-loop dynamics (3)-(7). This result will be further used for the proof of Corollary 1. Notwithstanding this, we will keep the conservative stability condition (31) as the sufficient (but not necessary) one. This appears reasonable due to the usually uncertain Coulomb friction parameter and, hence, in order to increase the overall robustness of the feedback control system. The following example should, however, exemplify the additionally stabilizing behavior of the Coulomb friction, even when (31) is violated. ###### Example 1. Consider the (3)-(7) system with $K_{d}=0$, $K_{p}=100$ and $K_{i}=1$. The eigenvalues of the $A$ system matrix are $\lambda_{1}\approx-0.01$, $\lambda_{2,3}\approx 0.005\pm 10j$, which implies that the linear subsystem is asymptotically unstable. It should also be noted that (31) is not fulfilled. 
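The eigenvalues quoted in Example 1 can be reproduced in plain Python, without any linear-algebra library, by bisecting for the real root of the characteristic polynomial (30) and then factoring out the remaining quadratic:

```python
import cmath

Kd, Kp, Ki = 0.0, 100.0, 1.0
p = lambda s: s**3 + Kd * s**2 + Kp * s + Ki   # characteristic polynomial (30)

# Bisect for the real root of s^3 + 100 s + 1 on [-1, 0] (p(-1) < 0 < p(0),
# and p is strictly increasing, so the root is unique)
lo, hi = -1.0, 0.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
r = 0.5 * (lo + hi)

# With Kd = 0, dividing out (s - r) leaves s^2 + r s + (r^2 + Kp) = 0
disc = cmath.sqrt(r * r - 4 * (r * r + Kp))
pair = (-r + disc) / 2

assert abs(r + 0.01) < 1e-3          # real eigenvalue near -0.01
assert pair.real > 0                 # complex pair is (slightly) unstable
assert abs(abs(pair.imag) - 10) < 0.01
```

The sum of all three roots must equal $-K_{d}=0$, so the complex pair inherits a small positive real part of $-r/2$, confirming the instability of the linear subsystem.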
To evaluate the trajectories of the (3)-(7) system, one can use the particular solution $x(t)=e^{At}x(\tau)+A^{-1}\bigl{(}e^{At}-I\bigr{)}Bu,$ (34) for the constant control $u=\mp 1$, which corresponds to the (5) relay, switched in the $x_{3}>0$ and $x_{3}<0$ subspaces. The initial values $x(\tau)=[x_{1}(\tau),x_{2}(\tau),0]^{T}$ at $t=\tau$ should be reassigned each time the trajectory crosses the $(x_{1},x_{2})$-plane, meaning the relay switches at $x_{3}=0$ outside of $S_{0}$. The $x_{3}$-state trajectory, with an initial value $x_{3}(0)=10$, is shown in Figure 3, once for $F_{c}=0$ (solid red line) and once for $F_{c}=1$ (blue dash-dot line). Figure 3: Velocity trajectories (from $x_{3}(0)=10$) of the Example 1 system with $F_{c}=0$ (solid red line) and $F_{c}=1$ (blue dash-dot line) It is easy to recognize here that even a low-amplitude Coulomb friction ($F_{c}=1$ compared to the proportional feedback gain $K_{p}=100$) leads to stabilization of the otherwise unstable closed-loop control system. ###### Corollary 1. Consider the (3)-(7) system with the control parameters satisfying (31). The stiction region (27), given by Theorem 1, is globally attractive for all initial values outside of this region: $x(t_{0})\in\mathbb{R}^{3}\backslash S_{0}$. ###### Proof. By virtue of the passivity theorem, e.g., [27, 28], the feedback interconnection of energy dissipating systems is also energy dissipating. Since (3), (4) is dissipative when (31) is fulfilled, and (5) is also dissipative for $x_{3}\neq 0$, their feedback interconnection is dissipative almost everywhere (except $x_{3}=0$) outside of $S_{0}$.
This implies that any $x(t)$ system trajectory, starting from outside of $S_{0}$, converges continuously toward the origin, so that a ball $\mathcal{B}$ of radius $\|x\|$ around the origin shrinks over time: $\|x(t_{2})\|<\|x(t_{1})\|\quad\forall\quad t_{2}>t_{1},\>x\in\mathbb{R}^{3}\backslash S_{0}.$ (35) For some $t_{3}>t_{2}$, the shrinking circle $\mathrm{proj}_{(x_{1},x_{2})}\mathcal{B}\subseteq S_{0}$, and for $t_{4}\geq t_{3}$, zero velocity $x_{3}(t_{4})=0$ will be reached. This implies $x(t_{4})\in S_{0}$, which completes the proof. ∎ ###### Remark 2. The sliding-mode condition (17), which results in $|CAx|\leq CB$ and, correspondingly, proves Theorem 1, constitutes the existence and reachability condition for $S_{0}$, which is necessary but not sufficient. This is because (16) does not contain any requirements imposed on the $K_{d}$-parameter value. Theorem 1 and Corollary 1 constitute the necessary and sufficient conditions for $S_{0}$ to be both globally reachable and attractive from outside of $S_{0}$. ## 3 Analysis of stick-slip convergence In this Section, we will analyze the convergence behavior of stick-slip trajectories in the (3)-(7) system. Recall that having reached $S_{0}^{s}$ at $t_{s}$, the $x(t)$ trajectory will leave it at $t_{c}$, given by (29), owing to the progressing $|x_{1}(t)|$ value, which (unavoidably) violates the stiction condition (16). To show (qualitatively) how state trajectories evolve during a stick-slip cycle, consider the triple-integrator chain (see Figure 4(a)), which arises out of the closed-loop dynamics (1).
Figure 4: Phase portrait at stick-slip; (a) triple-integrator chain, (b) $(x_{1},x_{2})$-projection during sticking, (c) $(x_{2},x_{3})$-projection during slipping, (d) typical trajectory during one stick-slip cycle Eliminating the time argument, which is a standard procedure for a phase-plane construction, one can write in general terms $x_{n}dx_{n}=\dot{x}_{n}dx_{n-1},\quad n=\\{3,2\\},$ (36) for the first and second (from the left) integrator. For a unidirectional motion (here, $\mathrm{sign}(x_{3})=1$, for instance, is assumed) and piecewise constant approximation $\dot{x}_{n}=\mathrm{const}$, one obtains $\displaystyle x_{1}$ $\displaystyle=$ $\displaystyle x^{2}_{2}(2x_{3})^{-1}-x_{1}^{min},$ (37) $\displaystyle x_{2}$ $\displaystyle=$ $\displaystyle x^{2}_{3}(2\dot{x}_{3})^{-1}-x_{2}^{c}$ (38) after integrating the left- and right-hand sides of (36). Obviously, for $t\geq t_{c}$, the $x_{1}(t)$-trajectory evolves parabolically with a dependency on $x_{2}(t)$, while $x_{3}(t)$ is square-root-dependent on $x_{2}(t)$ (see Figures 4(b) and (c) correspondingly). Note that the increasing and decreasing segments of the corresponding trajectories are both asymmetric, effectively due to the non-constant $\dot{x}_{n}$-value as the motion evolves. At the same time, one can stress that the extremum $x_{1}^{min}$ (here, minimal due to the assumed positive sign of velocity) always lies on the $x_{1}$-axis (cf. Figure 4(b)) owing to $x_{1}=\int x_{2}dt$ and $\mathrm{sign}(x_{3})=\mathrm{const}$. Differently, the $(x_{2},x_{3})$-projection of the $x(t)$-trajectory can be shifted along the $x_{2}$-axis, while it always ends in $x_{3}(t_{s})=0$ for $x(t)\in S_{0}$ (cf. Figure 4(c)). The resulting alternation of the stick-slip phases is schematically shown in Figure 4(d). ###### Proposition 1.
Having reached $S_{0}$, the (3)-(7) system does not leave $\Omega\in\mathbb{R}^{3}$ with $\mathrm{proj}_{(x_{1},x_{2})}\Omega\subseteq S_{0}$ and converges _asymptotically_ to $x(t)\underset{t\rightarrow\infty}{=}\bigl{\\{}(x_{1},0,0):\,|x_{1}|\leq F_{c}K_{i}^{-1}\bigr{\\}}$ with multiple (or at least one) stick-slip cycles. The stick-slip cycles can occur with either zero-crossing of $x_{2}$ or keeping the same $\mathrm{sign}(x_{2}(t_{s,1}))$, where $t_{s,1}$ is the time instant when $x(t)$ reaches $S_{0}$ for the first time. When first disregarding the friction side effect, i.e., $F_{c}=0$, it is well understood that convergence without overshoot of the set reference value cannot be achieved, independently of the assigned control parameters, provided $K_{i}\neq 0$ and the initial conditions are such that either $x_{1}(0)=0$ or $\mathrm{sign}(x_{1}(0))=\mathrm{sign}(x_{2}(0))$. This becomes evident since the integral error state $x_{1}(t)$ accumulates the output error over time. That is, in order for $|x_{1}(t)|$ to start decreasing, at least one change of the $x_{2}(t)$-sign is required. An exception is when $\mathrm{sign}(x_{1}(0))\neq\mathrm{sign}(x_{2}(0))$, which allows both $x_{1}(t)$ and $x_{2}(t)$ to converge to zero from opposite directions. Thus, at least one overshoot should appear, even if all control gains are assigned to have real poles only; cf. later Example 4. When the Coulomb friction becomes effective, i.e., $F_{c}\neq 0$, the system can change to stiction again, and also without overshoot of $x_{2}=0$ after starting to slip at $x(t_{c})$. This means that a motion trajectory lands again onto $S_{0}$ at time $t_{s,2}>t_{c}>t_{s,1}$ and with $\mathrm{sign}(x_{2}(t_{s,2}))=\mathrm{sign}(x_{2}(t_{s,1}))$. Since the system with $K_{d},F_{c}>0$ is dissipative, the energy level is $V(t_{s,2})<V(t_{c})$, meaning the motion trajectory $x(t)$ always lands onto $S_{0}$ closer to the origin than it was when leaving $S_{0}$ in $x(t_{c})$.
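Returning to the phase-portrait relations above, the parabolic segment (37) can be cross-checked against the exact kinematics of the integrator chain: for a piecewise constant $x_{3}=\dot{x}_{2}$, the quantity $x_{1}-x^{2}_{2}(2x_{3})^{-1}$ is conserved along $\dot{x}_{1}=x_{2}$, which is exactly the statement of (37). A minimal sketch with hypothetical values:

```python
import numpy as np

x3 = 2.0                      # piecewise constant velocity, sign(x3) = 1
x1_0, x2_0 = 0.5, -1.0
t = np.linspace(0.0, 1.0, 11)

# exact kinematics of the chain x1' = x2, x2' = x3 for constant x3
x2 = x2_0 + x3 * t
x1 = x1_0 + x2_0 * t + 0.5 * x3 * t**2

# Eq. (37): x1 - x2^2/(2*x3) stays constant along the parabolic segment
inv = x1 - x2**2 / (2.0 * x3)
assert np.allclose(inv, inv[0])

# the extremum of x1 occurs where x2 = 0, i.e. on the x1-axis of Figure 4(b)
assert np.isclose(x2[np.argmin(x1)], 0.0)
```

The second assertion reproduces the observation made above that the extremum of $x_{1}$ always lies on the $x_{1}$-axis.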
Note that the system energy within $S_{0}$ can be expressed by the potential field of the proportional and integral control errors, yielding $V(t)=\frac{1}{2}K_{i}x_{1}^{2}+\frac{1}{2}K_{p}x_{2}^{2}\quad\hbox{for}\quad t\in[t_{s},\ldots,t_{c}].$ (39) One can recognize that the energy level (39), of the system in stiction, is an ellipse $\frac{x_{2}^{2}}{a^{2}}+\frac{x_{1}^{2}}{b^{2}}=1\quad\hbox{with}\quad a^{2}=2VK_{p}^{-1},\;b^{2}=2VK_{i}^{-1}.$ (40) Since the energy $V(t_{c})$ is bounded by $S_{0}$, cf. Figure 2, one can show that the semi-major axis $a\leq F_{c}K_{p}^{-1}$ and semi-minor axis $b\leq F_{c}K_{i}^{-1}$. From the system dissipativity and global attractiveness of $S_{0}$, cf. Corollary 1, it follows that the trajectory becomes sticking again, meaning $x(t)\in S_{0}$ for $t\geq t_{s,2}>t_{c}$. Since $V(t_{s,2})<V(t_{c})$, the ellipse (40) shrinks as $a$ and $b$ become smaller (both being proportional to $\sqrt{V(t)}$). It is important to notice that during system sticking, on the contrary, the energy level increases as $V(t_{c})>V(t_{s,1})$. This is logical since the integral control action pushes additional energy into the control loop when the system is at a standstill, thus leading to a breakaway and allowing the motion to restart once the sticking trajectory reaches the $S_{0}$-boundary. ###### Remark 3. When the state trajectory reattains the stiction region $x(t_{s,2})\in S_{0}$ without overshoot, meaning $\mathrm{sign}(x_{2}(t_{s,2}))=\mathrm{sign}(x_{2}(t_{c}))$, the system is over-damped by the Coulomb friction. Otherwise, $\mathrm{sign}(x_{2}(t_{s,2}))\neq\mathrm{sign}(x_{2}(t_{c}))$ means the system is said to be under-damped by the Coulomb friction. A special (but not feasible, as will be shown) case of $x_{2}(t_{s,2})=0$, meaning the system reaches the equilibrium $S_{0}\backslash S_{0}^{s}$ and remains there $\forall\>t\geq t_{s,2}$, is analyzed below by proving Theorem 2. ###### Theorem 2.
The (3)-(7) system, with control parameters satisfying (31) and $F_{c}>0$, converges asymptotically to invariant set $\Lambda=\\{(x_{1},0,0):|x_{1}|\leq F_{c}K_{i}^{-1}\\}$ by the number of stick-slip cycles $1\leq N<\infty$. There are no system parameter values and stick-slip initial conditions $(x_{1}(t_{s,n}),x_{2}(t_{s,n}))$ with $n\in N$ which allow $\Lambda$ to be reached by the end of the next stick-slip cycle at time $t_{s,n+1}>t_{c,n}>t_{s,n}$. ###### Proof. Convergence to $\Lambda$ follows from system dissipativity during the slipping and shrinking ellipse (40), which implies an ever-decreasing energy level by the end of one stick-slip cycle, meaning $V(t_{s,n+1})<V(t_{s,n})$. This implies $|x_{2}(t_{s,n+1})|<|x_{2}(t_{s,n})|$ for $n\in N$ and ensures such $x(t)$-trajectory, which starts slipping at $t_{c,n}$ and lands closer to the origin at $t_{s,n+1}$ than before (at $t_{s,n}$). The proof of the second part of Theorem 2, which says it is impossible to reach the invariant equilibrium set $\Lambda$ after one particular stick-slip cycle, follows through contradiction. For this purpose, we should first assume that there is a particular setting $\bigl{(}K_{p},K_{i},K_{d},F_{c},x_{1}(t_{s,n}),x_{2}(t_{s,n})\bigr{)}$ for which the state trajectory $x(t_{s,n+1})\in\Lambda$, i.e. in the next stiction phase at the finite time $t_{s,n+1}>t_{c,n}>t_{s,n}$. The initial conditions of a slipping phase are always given, cf. with Section 2.3, by $\displaystyle x_{2}(t_{c,n})$ $\displaystyle=$ $\displaystyle x_{2}(t_{s,n}),$ (41) $\displaystyle x_{1}(t_{c,n})$ $\displaystyle=$ $\displaystyle\frac{F_{c}}{K_{i}}-\frac{K_{p}}{K_{i}}x_{2}(t_{c,n})\>\hbox{ in 1st quad.,}$ (42) $\displaystyle x_{1}(t_{c,n})$ $\displaystyle=$ $\displaystyle-\frac{F_{c}}{K_{i}}-\frac{K_{p}}{K_{i}}x_{2}(t_{c,n})\>\hbox{ in 3rd quad.}$ (43) This becomes apparent when inspecting the stiction phase dynamics (25), (26) and $S_{0}$-boundary, cf. Figure 2. 
For reaching $\Lambda$ at a final time instant $\psi=t_{s,n+1}$, while starting at $\tau=t_{c,n}$, an explicit particular solution of $0=C\Bigl{[}e^{A\psi}\begin{pmatrix}x_{1}(\tau)\\ x_{2}(\tau)\\ 0\end{pmatrix}+A^{-1}(e^{A\psi}-I)Bu-\begin{pmatrix}x_{1}(\psi)\\ 0\\ 0\end{pmatrix}\Bigr{]}$ (44) with $u=\pm 1$, should exist, cf. with (34). Due to the symmetry of solutions, we will consider the 1st quadrant of $S_{0}$ only, i.e., with the above initial condition (42) and $u=+1$ correspondingly, without loss of generality when solving (44). Recall that the matrix exponential $e^{A\psi}=\sum^{\infty}_{k=0}\frac{(A\psi)^{k}}{k!}$ (45) has to be evaluated to find an explicit solution for (44). Substituting the initial condition equalities (41) and (42) into (44), we solve (44) with respect to $x_{2}(\tau)$, doing so for an increasing truncation order $k=1,\ldots,40$. For all the solutions evaluated with the help of the Symbolic Math Toolbox™, it was found that (44) has no initial-value solution other than zero, meaning $x_{2}(\tau)=x_{2}(t_{s,n})=0$. This means that there are no initial conditions other than zero for which a stick-slip cycle could lead to $x(t)\in\Lambda$ at $t=t_{s,n+1}$. This contradicts our initial assumption that such initial conditions exist and, hence, completes the proof. ∎ ###### Remark 4. Since no relative motion occurs during a stiction phase, cf. Section 2.2, the trajectory solution (44) represents the single descriptor of the system dynamics, determining the convergence of trajectories during the slipping phases. One can recognize that the discontinuous Coulomb friction contributes as a piecewise-constant input term $u$ to the solution of trajectories $x(t)$ at $t_{c,n}<t<t_{s,n+1}$. Thus, it comes as no surprise that the stick-slip convergence appears only asymptotically, meaning either within one or a (theoretically) infinite number of stick-slip cycles.
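A numerical counterpart of this argument can be obtained by integrating a single slipping phase directly: starting just beyond the $S_{0}$-boundary in the first quadrant, the trajectory is propagated until the velocity next crosses zero, and the landing point is inspected. The sketch below again assumes the companion-form realization with $\dot{x}_{3}=-K_{i}x_{1}-K_{p}x_{2}-K_{d}x_{3}-F_{c}\,\mathrm{sign}(x_{3})$ (an assumption about the state-space matrices); the gains are those of Example 2:

```python
import numpy as np
from scipy.integrate import solve_ivp

Kd, Kp, Ki, Fc = 20.0, 100.0, 1000.0, 50.0

def slip(t, x):
    # slipping dynamics while x3 < 0: Coulomb friction acts as +Fc
    return [x[1], x[2], -Ki * x[0] - Kp * x[1] - Kd * x[2] + Fc]

# start just beyond the 1st-quadrant boundary Ki*x1 + Kp*x2 = Fc, cf. (42)
x2_s = 0.3
x1_s = (Fc + 0.5 - Kp * x2_s) / Ki
x0 = [x1_s, x2_s, -1e-9]

landing = lambda t, x: x[2]          # next upward zero-crossing of the velocity
landing.terminal, landing.direction = True, 1

sol = solve_ivp(slip, (0.0, 50.0), x0, events=landing,
                max_step=1e-2, rtol=1e-9, atol=1e-12)
x1_e, x2_e, _ = sol.y[:, -1]

V = lambda x1, x2: 0.5 * Ki * x1**2 + 0.5 * Kp * x2**2   # energy level (39)
assert sol.t_events[0].size == 1     # the slipping phase terminates ...
assert abs(x2_e) > 1e-8              # ... but not exactly on Lambda (x2 = 0)
assert abs(x2_e) < abs(x2_s)         # landing closer to the origin
assert V(x1_e, x2_e) < V(x1_s, x2_s) # shrinking energy ellipse (40)
```

The landing point comes out close to, but never exactly on, the equilibrium set, in line with Theorem 2, and the decreased energy level illustrates the shrinking ellipse (40).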
We also note that this is independent of whether $x(t)$ reattains $S_{0}$ with or without overshooting. ## 4 Numerical examples The following numerical examples serve to illustrate and evaluate the above analysis. A dedicated numerical simulation of the stick-slip dynamics is developed by implementing (25), (26) and (34), while the conditions of Theorem 1 provide switching between the piecewise smooth trajectories of the alternating slipping and sticking phases of relative motion of the (3)-(7) system. ###### Example 2. Consider the (3)-(7) system with $K_{d}=20$, $K_{p}=100$, $K_{i}=1000$ and varying $F_{c}=\\{50,75,100\\}$. The initial values are assigned as $x(0)=(0,-1.1,0)$, corresponding to a classical positioning task for the feedback-controlled system (1). Note that $|x_{2}(0)|>F_{c}K_{p}^{-1}$ so that the trajectories start outside of $S_{0}$ and are, therefore, inherently in the slipping phase. The transient and convergence responses for all three Coulomb friction values are shown next to one another in Figure 5, cf. qualitatively with an experimental convergence pattern reported in [13, Fig. 4]. Figure 5: Output response of Example 2: transient phase $t=[0,\ldots,12]$ sec (a), convergence phase $t=[12,\ldots,120]$ sec (b) ###### Example 3. Consider the (3)-(7) system with $K_{d}=10$, $K_{p}=1040$, $K_{i}=8000$ and $F_{c}=100$. The initial values $x(0)=(0,-0.15,0)$ are assigned to be close to, but still outside of, the $S_{0}$-region. The linear damping $K_{d}$ is selected, with respect to $F_{c}$, so that the system exhibits only one initial overshoot, while the stick and slip phases alternate without changing the sign of $x_{2}$. The output displacement response is shown in Figure 6(a). The stick-slip convergence without zero-crossing is particularly visible on the logarithmic scale in Figure 6(b).
Note that after the series of stick-slip cycles, a further evaluation of the alternating dynamics (about $10^{-13}$ in order of magnitude) is no longer feasible, due to the finite time step and corresponding numerical accuracy, cf. 1st quadrant of the $S_{0}$-rhombus in Figure 2. Figure 6: Output (a) and logarithmic (b) response of Example 3 ###### Example 4. Consider the (3)-(7) system with $K_{d}=56$, $K_{p}=1040$, $K_{i}=6400$ and $F_{c}=100$. Note that the control gains are assigned in such a way that the linear dynamics (3) and (6) reveal a double real pole at $\lambda_{1,2}=-20$ and one in its vicinity at $\lambda_{3}=-16$. This ensures that all states converge fairly simultaneously towards zero, as long as $\mathrm{sign}(x_{3})$ remains unchanged. The initial conditions are $x_{1}(0)=x_{3}(0)=0$, with the varying initial displacement $x_{2}(0)=\\{-0.2,-0.25,-0.3,-0.35\\}$. Note that all $x(0)$ are outside of $S_{0}$, while the transient overshoot lands (in all cases) within $S_{0}$, thus leading to the first stiction after overshoot; see Figure 7. One can recognize that the integral state requires quite different times before the system then passes again into slipping. During the slipping phase, all states converge asymptotically towards zero, provided $F_{c}$ remains constant. Here, it is important to notice that in real physical systems, a varying $F_{c}$-value and the so-called frictional adhesion, see, e.g., [29], at extremely low velocities, will both lead to the system passing into a sticking phase again, therefore further provoking multiple stick-slip cycles. Even though this is not the case here, with our ideal Coulomb friction assumption, Theorem 2 still holds, since there is only an asymptotic convergence after at least one stick-slip cycle has occurred. Figure 7: Output response of Example 4 for different initial values ###### Example 5. Consider the (3)-(7) system with $K_{d}=20$, $K_{p}=100$, $K_{i}=1000$ and $F_{c}=50$.
The initial condition $x(0)=(0,-0.5,0)$ is assigned to be on the boundary of $S_{0}$, thus leading to a short initial slipping and, then, to a large number of stick-slip cycles in a long-term simulation with $t=[0,\ldots,100000]$ sec. The output is shown as a logarithmic absolute value (due to the alternating sign) over the logarithmic time argument in Figure 8. One can recognize that each consequent sticking phase gets closer to the origin, while the stick-slip period grows exponentially, cf. the logarithmic timescale. This confirms an asymptotic convergence with stick-slip cycles, cf. Theorem 2. Figure 8: Output response of Example 5, the logarithmic absolute value over the logarithmic time argument ## 5 Conclusions An analysis of the stick-slip behavior during convergence of feedback-controlled motion with Coulomb friction has been developed. The most general case of frictional discontinuity at velocity zero-crossing has been assumed, and the parametric conditions for the appearance of the stiction region around the equilibrium (determined independently of the initial values) are derived. Theorem 1 and Corollary 1 proved the stiction region to be globally reachable and attractive. Theorem 2 stated that convergence is only asymptotically possible and occurs with at least one, but typically an infinite number of, stick-slip cycles in sequence. In particular, an 'ideal' convergence of the control configuration with all real poles in a close neighborhood appears with one initial stick-slip cycle, followed by an asymptotic convergence without new stiction phases. A number of illustrative numerical examples, with different initial conditions and parameter settings, argues in favor of the developed analysis and provides additional insight into the stick-slip mechanisms of controlled motion with Coulomb friction. ## References * [1] B. Armstrong-Hélouvry, P. Dupont, C. C.
De Wit, A survey of models, analysis tools and compensation methods for the control of machines with friction, Automatica 30 (7) (1994) 1083–1138. * [2] J. Awrejcewicz, P. Olejnik, Analysis of Dynamic Systems With Various Friction Laws, ASME Applied Mechanics Reviews 58 (6) (2005) 389–411. * [3] F. Al-Bender, J. Swevers, Characterization of friction force dynamics, IEEE Control Systems Magazine 28 (6) (2008) 64–81. * [4] B. Armstrong, B. Amin, PID control in the presence of static friction: A comparison of algebraic and describing function analysis, Automatica 32 (5) (1996) 679–692. * [5] H. Olsson, K. J. Astrom, Friction generated limit cycles, IEEE Transactions on Control Systems Technology 9 (4) (2001) 629–636. * [6] R. H. Hensen, M. Van de Molengraft, M. Steinbuch, Friction induced hunting limit cycles: A comparison between the LuGre and switch friction model, Automatica 39 (12) (2003) 2131–2137. * [7] C. C. De Wit, H. Olsson, K. J. Astrom, P. Lischinsky, A new model for control of systems with friction, IEEE Transactions on automatic control 40 (3) (1995) 419–425. * [8] D. Karnopp, Computer simulation of stick-slip friction in mechanical dynamic systems, Journal of dynamic systems, measurement, and control 107 (1) (1985) 100–103. * [9] C. J. Radcliffe, S. C. Southward, A property of stick-slip friction models which promotes limit cycle generation, in: American Control Conference, 1990, pp. 1198–1205. * [10] J. Alvarez, I. Orlov, L. Acho, An invariance principle for discontinuous dynamic systems with application to a Coulomb friction oscillator, J. Dyn. Sys., Meas., Control 122 (4) (2000) 687–690. * [11] D. Putra, H. Nijmeijer, N. van de Wouw, Analysis of undercompensation and overcompensation of friction in 1DOF mechanical systems, Automatica 43 (8) (2007) 1387–1394. * [12] M. Ruderman, M. Iwasaki, Analysis of linear feedback position control in presence of presliding friction, IEEJ Journal of Industry Applications 5 (2) (2016) 61–68. * [13] R. Beerens, A. 
Bisoffi, L. Zaccarian, W. Heemels, H. Nijmeijer, N. van de Wouw, Reset integral control for improved settling of PID-based motion systems with friction, Automatica 107 (2019) 483–492. * [14] J. Clegg, A nonlinear integrator for servomechanisms, Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry 77 (1) (1958) 41–42. * [15] A. Bisoffi, M. Da Lio, A. R. Teel, L. Zaccarian, Global asymptotic stability of a PID control system with Coulomb friction, IEEE Transactions on Automatic Control 63 (8) (2017) 2654–2661. * [16] M. Ruderman, M. Iwasaki, W.-H. Chen, Motion-control techniques of today and tomorrow: A review and discussion of the challenges of controlled motion, IEEE Industrial Electronics Magazine 14 (1) (2020) 41–55. * [17] K. H. Ang, G. Chong, Y. Li, PID control system analysis, design, and technology, IEEE transactions on control systems technology 13 (4) (2005) 559–576. * [18] T. Koizumi, H. Shibazaki, A study of the relationships governing starting rolling friction, Wear 93 (3) (1984) 281–290. * [19] W. Symens, F. Al-Bender, Dynamic characterization of hysteresis elements in mechanical systems. II. experimental validation, Chaos: An Interdisciplinary Journal of Nonlinear Science 15 (1) (2005) 013106. * [20] J. Y. Yoon, D. L. Trumper, Friction microdynamics in the time and frequency domains: Tutorial on frictional hysteresis and resonance in precision motion systems, Precision Engineering 55 (2019) 101–109. * [21] M. Ruderman, M. Iwasaki, Observer of nonlinear friction dynamics for motion control, IEEE Transactions on Industrial Electronics 62 (9) (2015) 5941–5949. * [22] A. Filippov, Differential equations with discontinuous right-hand sides (1988). * [23] K. H. Johansson, A. Rantzer, K. J. Åström, Fast switches in relay feedback systems, Automatica 35 (4) (1999) 539–552. * [24] M. Ruderman, On break-away forces in actuated motion systems with nonlinear friction, Mechatronics 44 (2017) 1–5. * [25] C. Edwards, S. 
Spurgeon, Sliding mode control: theory and applications, CRC Press, 1998. * [26] Y. Shtessel, C. Edwards, L. Fridman, A. Levant, Sliding mode control and observation, Springer, 2014. * [27] G. Zames, On the input-output stability of time-varying nonlinear feedback systems part one: Conditions derived using concepts of loop gain, conicity, and positivity, IEEE transactions on automatic control 11 (2) (1966) 228–238. * [28] H. Khalil, Nonlinear Systems, 3rd Edition, Prentice Hall, 2002. * [29] H. Zeng, M. Tirrell, J. Israelachvili, Limit cycles in dynamic adhesion and friction processes: a discussion, The Journal of Adhesion 82 (9) (2006) 933–943.
# Dislocation density transients and saturation in irradiated zirconium

Andrew R. Warwick (corresponding author), Rhys Thomas, M. Boleininger, Ö. Koç, G. Zilahi, G. Ribárik, Z. Hegedues, U. Lienert, T. Ungar, C. Race, M. Preuss, P. Frankel, S. L. Dudarev

Affiliations: UK Atomic Energy Authority, Culham Science Centre, Abingdon OX14 3DB, UK; Department of Materials, University of Manchester, Manchester M13 9PL, UK; Department of Materials Physics, Eötvös University, PO Box 32, H-1518, Budapest, Hungary; Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany; Monash University, Clayton, VIC 3800, Australia

###### Abstract Zirconium alloys are widely used as the fuel cladding material in pressurised water reactors, accumulating a significant population of defects and dislocations from exposure to neutrons. We present and interpret synchrotron microbeam X-ray diffraction measurements of proton-irradiated Zircaloy-4, where we identify a transient peak and the subsequent saturation of dislocation density as a function of exposure. This is explained by direct atomistic simulations showing that the observed variation of dislocation density as a function of dose is a natural result of the evolution of the dense defect and dislocation microstructure driven by the concurrent generation of defects and their subsequent stress-driven relaxation. In the dynamic equilibrium state of the material developing in the high dose limit, the defect content distribution of the population of dislocation loops, coexisting with the dislocation network, follows a power law with exponent $\alpha\approx 2.2$.
This corresponds to the power law exponent of $\beta\approx 3.4$ for the distribution of loops as a function of their diameter, which compares favourably with the experimentally measured values of $\beta$ in the range $3\leq\beta\leq 4$. ###### keywords: zirconium, irradiation, dislocations, defects Highlights: * Dislocation density evolution in irradiated Zr has been predicted and measured. * A transient peak and subsequent saturation in dislocation density has been observed. * Dislocation loop diameters in heavily irradiated Zr are power law distributed. ## 1 Introduction In the core of modern boiling (BWR) or pressurized (PWR) water reactors, the uranium dioxide fuel assemblies are immersed in circulating pressurised water; it is thus critical that only the heat produced by the fission reactions is transported by the coolant, without contamination of the coolant by the radioactive fuel itself. Hence, the fuel is cladded to protect the reactor environment from contamination, be that during reactor operation or in transit. Following design choices that date back over fifty years (Rickover et al., 1975), zirconium alloys are currently employed as the uranium dioxide fuel cladding in water-cooled reactors. Containing more than 95 wt% Zr, these alloys are mostly pure zirconium, chosen for its low neutron absorption cross section (Pomerance, 1951). Small amounts of Sn, Nb, Fe and/or Cr in the alloys help protect against corrosion and improve structural integrity (Lemaignan, 2012; Onimus and Bechade, 2012). Figure 1: Dislocation density as a function of dose observed experimentally using microbeam synchrotron X-ray diffraction (XRD) measurements of proton-irradiated Zircaloy-4, compared to predictions derived from simulations of pure $\alpha$-Zr performed using the creation relaxation algorithm technique. Four experimental samples with nominal doses of 0.1, 0.5, 1.0 and 2.0 dpa were scanned across the variable dose regime.
The simulation results are scaled by a factor of 0.1 and averaged over three different interatomic potentials (see text). The elastic neutron scattering cross-section for a Zr nucleus is, however, similar to that of other elements in the periodic table (Sears, 2006), and a prolonged exposure to neutron irradiation results in the accumulation of a considerable amount of microscopic radiation defects generated from the atomic recoils initiated by collisions with neutrons. This gives rise to the deterioration of mechanical and physical properties and stimulates dimensional changes (Holt, 1988; Onimus and Bechade, 2012). The high energy $>1\text{\,}\mathrm{M}\mathrm{e}\mathrm{V}$ neutrons produced by the fissile uranium oxide fuel (Nicodemus and Staub, 1953) collide with Zr nuclei, initiating collision cascades that displace atoms from their lattice sites, rearrange the crystal structure, and generate crystal lattice defects (Domain and Legris, 2005). Defects accumulate with increasing exposure to neutron irradiation, in the form of pairs of self-interstitial and vacancy defects (Frenkel pairs) as well as in the form of clusters of defects that eventually coalesce into large-scale defects, such as dislocation loops and dislocations (Warwick et al., 2021). At temperatures above 300-$350\text{\,}\mathrm{\SIUnitSymbolCelsius}$, where point defect diffusion occurs at rates comparable to or faster than the rate of damage accumulation, it is important to consider the random Brownian motion of defects towards interfaces, grain boundaries, dislocations or other point defect clusters, whereas at lower temperatures the microstructure of accumulating defects is dominated by other factors.
Given the significance of the life-limiting effect of structural changes on the properties of zirconium cladding, there is significant research effort aimed at improving the performance of structural zirconium alloys in the operating environment of a fission reactor (Adamson et al., 2019; Zinkle and Was, 2013). The exposure of a material to energetic particles is often quantified by the notion of dose, typically expressed in the units of ‘displacement per atom’ (dpa). Dpa is a simple measure of exposure of a material to radiation and represents the average number of times each atom in the material has been a part of a Frenkel self-interstitial-vacancy defect pair. Typically, a zirconium alloy cladding is exposed to $\sim 15\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ over the five years of service (Zinkle and Was, 2013). Producing an accurate safety case involves the identification of the microstructure formed at a given dose, temperature, and externally applied load. In particular, the formation of dislocation loops and dislocations is known to play a critical role in the resulting deterioration of the cladding’s structural and mechanical properties. For instance, in the large dose limit, high densities of defects and dislocations accumulate in the cladding, causing embrittlement. Another important degradation mode is the so-called ‘irradiation-induced growth’ (IIG) that arises from the anisotropy of the hexagonal close packed (hcp) crystal structure of zirconium ($\alpha$-Zr) and zirconium alloys (Griffiths, 2020; Onimus et al., 2022). This crystal structure is stable up to and beyond the reactor core’s operating temperature range of $280\text{\,}\mathrm{\SIUnitSymbolCelsius}$ to $350\text{\,}\mathrm{\SIUnitSymbolCelsius}$, exhibiting an hcp-bcc instability only at significantly higher temperatures above $860\text{\,}\mathrm{\SIUnitSymbolCelsius}$ (Willaime and Massobrio, 1989). 
With increasing dose, the interstitial and vacancy-type dislocation loops with the $\frac{1}{3}\langle 2\bar{1}\bar{1}0\rangle$ Burgers vectors, inhabiting the prismatic crystallographic planes, form a population of the so-called ‘a loops’. At relatively low temperatures, this strongly correlates with observed elongation along the ‘a’ $\langle 2\bar{1}\bar{1}0\rangle$ and contraction along the ‘c’ $\langle 0001\rangle$ crystallographic directions, saturating at doses larger than $1\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ (Holt, 1988). At temperatures above $\sim 300\text{\,}\mathrm{\SIUnitSymbolCelsius}$ and doses exceeding $\sim 5\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$, large vacancy-type ‘c loops’ with Burgers vectors $\frac{1}{2}\langle 0001\rangle$ or $\frac{1}{6}\langle 20\bar{2}3\rangle$ appear in the basal crystallographic planes. Accompanying this onset of formation of the c-type vacancy loops, the magnitudes of a and c strains increase linearly with dose in a phenomenon called ‘the breakaway growth’ (Choi and Kim, 2013). Whilst it is known that dislocations have substantial elastic relaxation volume (Dudarev and Ma, 2018; Boleininger et al., 2022) giving rise to significant dimensional changes (Christensen et al., 2020), the present understanding of this IIG is mostly phenomenological and existing models are unable to predict, from first principles, the variation of the dislocation content consistent with, or at least verified by, the experimental data. Below we present experimental observations and predictions derived from simulations, showing that the density of dislocations saturates in the temperature and dose range where dimensional changes exhibit saturation. This is summarised in Figure 1 illustrating the dislocation densities experimentally measured in proton-irradiated Zircaloy-4 (Zr-1.5%Sn-0.2%Fe-0.1%Cr) together with the simulation data for irradiated $\alpha$-Zr plotted as a function of dose. 
In agreement with observations performed using ion irradiation (Yu et al., 2017), we find that the density of dislocations evolves through a transient at doses $<1\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ before saturating at larger doses. The microstructures produced by our simulations indicate that at very low doses $\ll 1\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$, small dislocation loops form, which subsequently grow and coalesce into a complex interconnected dislocation network developing at moderate doses $<1\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$. The dislocation network eventually forms complete crystallographic planes (Mason et al., 2020; Boleininger et al., 2023) and partly dissociates into large dislocation loops at high doses $\gg 1\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$. The formation of dislocation loops and dislocations is principally driven by the stress-mediated clustering of self-interstitial atoms, where in the high dose limit the self-interstitial cluster population size distribution follows a power law $p(N)\propto 1/N^{2.2}$ developing in the saturation regime corresponding to high neutron exposure. Below, we show that this fact, also confirmed by experimental observations (Ungár et al., 2021), has important implications for the interpretation of experimental observations of dislocation loops and for our understanding of the dynamics of microstructural evolution in the irradiated cladding. Our manuscript is structured as follows: we detail our methods in §2 and present our results in §3 before summarising key conclusions in §4. The concept and relevant definitions of dislocation density are discussed in §2.1. Details of our experimental set-up and an overview of the cmwp line profile analysis software are given in §2.2, followed by an explanation of our simulation method, its range of validity and our choice of settings in §2.3.
We show how the simulated microstructure correlates to the dislocation density profile shown in Figure 1 in §3.1 in addition to characterising the evolution of the power law distribution of dislocation sizes. Finally, the stored energy associated with dislocation content that drives the changing microstructure as a function of dose is investigated in §3.2.

## 2 Theory and Methodology

### 2.1 Dislocation density

There are multiple ways to define the density of dislocation lines $\rho$ in a deformed or irradiated material. For all the measurements and computations in this study, $\rho$ is defined as a scalar ratio of the total dislocation line length to volume $V$ containing the dislocations, namely (Hull and Bacon, 2011) $\displaystyle\rho:=\frac{1}{V}\int_{\perp\in V}{\left|\mathrm{d}\mathbf{l}\right|}.$ (1) Here $\mathbf{l}$ is a position vector on a dislocation line such that $\mathrm{d}\mathbf{l}=\bm{\xi}\mathrm{d}s$ for unit tangent vector $\bm{\xi}$ and arc length $s$, and the integration is performed with respect to the arc length over all the dislocation lines $\perp$ in $V$. The choice of volume is somewhat arbitrary but, for a given resolution, is expected to reflect the average amount of dislocations present. For example, in a TEM micrograph this can be chosen to be a region contained in the image and in a molecular dynamics simulation one may use the entire simulation cell. Another possible definition is the areal density $\rho_{A}$ that measures the number of dislocation lines crossing an open surface as (Hull and Bacon, 2011) $\displaystyle\rho_{A}:=\frac{1}{A}\int_{\perp\in A}{\left|\mathrm{d}\mathbf{S}\cdot\bm{\xi}\right|},$ (2) where $\mathrm{d}\mathbf{S}$ is a vector area element of $A$ with direction normal to the surface and, similar to Equation 1, there is a dependence on the choice of the surface. If all the dislocations are co-linear and perpendicular to the chosen surface, Equations 1 and 2 produce the same value.
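The agreement between the two measures for co-linear dislocations threading a surface can be checked with a minimal numerical sketch (illustrative only, not the analysis code used in this work), treating dislocations as straight segments:

```python
import math

def line_density(segments, volume):
    """Eq. (1): total dislocation line length per unit volume."""
    return sum(math.dist(a, b) for a, b in segments) / volume

def areal_density(segments, area, normal):
    """Eq. (2) specialised to straight segments, each assumed to pierce
    the chosen surface exactly once: sum of |xi . n| per unit area."""
    n_cross = 0.0
    for a, b in segments:
        length = math.dist(a, b)
        xi = [(bi - ai) / length for ai, bi in zip(a, b)]  # unit tangent
        n_cross += abs(sum(x * m for x, m in zip(xi, normal)))
    return n_cross / area

# Two straight dislocations along z threading a cube of side L = 10:
L = 10.0
segs = [((2.0, 2.0, 0.0), (2.0, 2.0, L)), ((7.0, 5.0, 0.0), (7.0, 5.0, L))]
rho = line_density(segs, L ** 3)                      # 2L / L^3 = 0.02
rho_A = areal_density(segs, L ** 2, (0.0, 0.0, 1.0))  # 2 / L^2  = 0.02
```

For lines perpendicular to the chosen surface the two values coincide, as stated above; tilting the lines makes $\rho_A$ fall below $\rho$ by the cosine of the tilt angle.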
Whilst for an arbitrary distribution of dislocations this is often not the case, generally both measures do not differ by much more than a factor of two (Schoeck, 1962). Needless to say, a dislocation is not solely defined by its tangent vector and it is noteworthy that neither Equation 1 nor Equation 2 contain any information about the Burgers vector $\mathbf{b}$ of the dislocation. Significant physical quantities such as dislocation energy and the Peach-Koehler force both depend on $\mathbf{b}$. The Nye tensor $\bm{\alpha}$ is a tensorial measure of dislocation density that is a linear function of lattice curvature (Nye, 1953) and is also a function of position $\mathbf{x}$ such that (Jones et al., 2016) $\displaystyle\bm{\alpha}(\mathbf{x}):=\int_{\perp}{\delta\left(\mathbf{x}-\mathbf{l}\right)\mathbf{b}(\mathbf{l})\otimes\mathrm{d}\mathbf{l}},$ (3) where $\otimes$ denotes the tensor product, integration is performed over all of the dislocation lines in the system, and $\delta\left(\mathbf{x}\right)$ is the Dirac delta distribution defined by the property $\displaystyle\int\mathrm{d}^{3}\mathbf{x}^{\prime}\,\delta(\mathbf{x}^{\prime}-\mathbf{x})f(\mathbf{x}^{\prime})=f(\mathbf{x}),$ (4) for an arbitrary well-behaved function $f(\mathbf{x})$. Whilst full information about the dislocation content is contained in Equation 3, attempting to average $\bm{\alpha}$ over a volume $V$ can be problematic. Essentially, this stems from the fact that the integral of $\mathrm{d}\mathbf{l}$ along a dislocation segment contained in $V$ that starts and ends at $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$ respectively is $\mathbf{x}_{b}-\mathbf{x}_{a}$. Thus, any information pertaining to curvature of a dislocation line is lost and closed paths in particular, i.e. dislocation loops, provide no contribution to the volume average (Arsenlis and Parks, 1999; Mandadapu et al., 2014).
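This loss of information under volume averaging is easy to verify numerically (a toy sketch): summing the segment vectors $\mathrm{d}\mathbf{l}$ along a closed polygonal loop returns zero, whereas an open line returns $\mathbf{x}_{b}-\mathbf{x}_{a}$ regardless of its curvature.

```python
def net_line_vector(points, closed=False):
    """Sum of the segment vectors dl along a polyline: the only part of
    a dislocation line that survives volume-averaging the Nye tensor."""
    pts = list(points) + ([points[0]] if closed else [])
    total = [0.0, 0.0, 0.0]
    for p0, p1 in zip(pts, pts[1:]):
        for k in range(3):
            total[k] += p1[k] - p0[k]
    return tuple(total)

# A closed square loop integrates to zero -- it is 'statistically stored':
loop = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
assert net_line_vector(loop, closed=True) == (0.0, 0.0, 0.0)

# An open kinked line from x_a to x_b integrates to x_b - x_a,
# independent of the intermediate points:
line = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.5, 0.0)]
assert net_line_vector(line) == (1.0, 0.5, 0.0)
```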
The dislocation sections that integrate to zero are referred to as Statistically Stored Dislocations (SSD) and the surviving contributions are the Geometrically Necessary Dislocations (GND). Experimental techniques that infer GND content, such as micro-beam Laue measurements and high resolution electron back-scattered diffraction, implicitly make use of Equation 3 (Das et al., 2018). As it is difficult to characterise the entire population of dislocations using Equation 3, throughout this manuscript we have chosen to use the scalar measure given by $\rho$ in Equation 1, which is the same definition as that employed in our X-ray line profile analysis.

### 2.2 Experiment

Dislocation densities were measured in four $3\text{\,}\mathrm{mm}$ $\times$ $1\text{\,}\mathrm{mm}$ $\times$ $0.5\text{\,}\mathrm{mm}$ Zircaloy-4 samples (composition Zr-0.17Fe-1.24Sn-0.10Cr) proton-irradiated to different doses. The samples possessed a recrystallised equiaxed microstructure with a low dislocation density of $<1\text{\times}{10}^{14}\text{\,}\mathrm{m}^{-2}$ and a characteristic ‘split-basal’ texture due to processing, where the basal poles are aligned along the normal direction (ND), with a $\pm$30 degree tilt towards the transverse direction. Irradiation-induced growth strains in similarly textured Zircaloys are known to saturate at doses below $\sim 10\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ at $320\text{\,}\mathrm{\SIUnitSymbolCelsius}$ (Adamson et al., 2019) and thus we may expect dislocation densities to display a similar pattern of evolution in these samples. The ND face of each sample was proton-irradiated with $2\text{\,}\mathrm{M}\mathrm{e}\mathrm{V}$ protons at $350\text{\,}\mathrm{\SIUnitSymbolCelsius}$ at the University of Manchester’s Dalton Cumbrian Facility, UK. The temperature of the samples during irradiation was monitored via a thermal imaging camera in order to hold it within $\pm 10\text{\,}\mathrm{\SIUnitSymbolCelsius}$ of the target temperature.
Unlike neutrons, the Coulomb interaction between the protons and the target material results in the shallow penetration of protons into the material. The resulting radiation exposure, quantified by the dose and dose rate, varies significantly as a function of depth, with the dose rate being of the order of $10^{-5}$ dpa/s. The dose profile was calculated using the quick Kinchin-Pease setting in srim (Ziegler et al., 2010) with the lattice binding energy and threshold displacement energy set to $0\text{\,}\mathrm{e}\mathrm{V}$ and $40\text{\,}\mathrm{e}\mathrm{V}$ respectively (Stoller et al., 2013). A typical dose vs. depth profile in one of our Zircaloy-4 samples consists of a plateau region extending $\sim 10\text{\,}\mathrm{\SIUnitSymbolMicro m}$ from the surface where the dose and dose rate are approximately constant before sharply rising and falling to zero at a region corresponding to protons coming to rest in the material, called the Bragg peak and located at $\sim 30\text{\,}\mathrm{\SIUnitSymbolMicro}\mathrm{m}$. The samples were irradiated such that the doses at 60% of the Bragg peak depth from the surface, termed ‘nominal doses’, were 0.1, 0.5, 1 and $2\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ respectively. Within the first $30\text{\,}\mathrm{\SIUnitSymbolMicro m}$ of each sample from the irradiated surface, the calculated dose and dose rate vary from their nominal values by factors of 0.6 to 8.5 thus allowing us to measure data spanning over a wide range of irradiation exposures. Using a small X-ray beam of $2\text{\,}\mathrm{\SIUnitSymbolMicro m}$ $\times$ $100\text{\,}\mathrm{\SIUnitSymbolMicro m}$ cross section, the samples were scanned in cross-section from the surface to a depth of $50\text{\,}\mathrm{\SIUnitSymbolMicro m}$ within the sample at $2\text{\,}\mathrm{\SIUnitSymbolMicro m}$ increments at the P21.2 beamline at the PETRA III synchrotron facility at DESY in Hamburg, Germany. 
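The mapping from depth-resolved measurements to dose amounts to interpolating the srim dose-depth table at each measurement depth. A sketch with hypothetical numbers (the real profile comes from the srim calculation described above; the arrays and values below are illustrative, not our data):

```python
from bisect import bisect_left

def interp(x, xs, ys):
    """Piecewise-linear interpolation of a tabulated profile
    (stand-in for reading off a srim dose-depth table)."""
    i = bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical srim output: a plateau followed by a Bragg peak near 30 um
depth_um = [0.0, 10.0, 20.0, 28.0, 30.0, 32.0]
dose_dpa = [0.10, 0.10, 0.12, 0.30, 0.85, 0.0]

# Map depth-resolved density measurements (depth in um, rho in m^-2,
# hypothetical values) onto dose, as done to produce Figure 1:
measurements = [(5.0, 2.1e14), (25.0, 3.0e14)]
rho_vs_dose = [(interp(d, depth_um, dose_dpa), rho) for d, rho in measurements]
```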
The samples were translated perpendicular to the scanning direction by $200\text{\,}\mathrm{\SIUnitSymbolMicro m}$ during each scan to improve grain statistics and reduce the spottiness of the pattern. The set-up of the diffraction experiment and sample geometry is shown in Figure 2.

Figure 2: (a) Experimental geometry for measuring line profiles from Zircaloy-4 samples on the P21.2 beamline at the DESY synchrotron, Hamburg, Germany. (b) Variation of dislocation density and dose as a function of depth for the nominal $0.1\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ sample.

Dislocation densities were extracted from the line profiles using the Convolutional Multiple Whole Profile (cmwp) software (Ribárik et al., 2020). The cmwp software models the line profile intensity $I(q)$, where $q$ is the wavevector magnitude, as a convolution of intensities arising from instrumental effects, size broadening, and, in particular, strain broadening due to dislocations. Instrumental effects were determined using a LaB6 standard specimen, and the size broadening was determined assuming a log-normal size distribution of coherently scattering grains (Ribárik et al., 2020). Here, we outline the model underpinning the cmwp software, whereas for a broader context we refer an interested reader to reviews of the method (Wilkens, 1970; Ungár et al., 1999; Ribárik et al., 2020). In the theory of X-ray line profile analysis, the Fourier components of the broadened intensity peak profile corresponding to a reciprocal lattice vector $G$, denoted $I^{D}_{G}$, are related to the strain distribution by $\displaystyle\mathcal{F}\left\\{I^{D}_{G}(q)\right\\}(L)=\exp\left[-2\pi^{2}G^{2}L^{2}\langle\epsilon^{2}_{G,L}\rangle\right],$ (5) where $L$ is the Fourier variable and $\langle\epsilon^{2}_{G,L}\rangle$ is the mean-square strain.
Wilkens (1970) derived an expression for $\langle\epsilon^{2}_{G,L}\rangle$ by numerical methods arising from the ‘restrictedly random distribution’ concept of dislocations. Dislocation lines are assumed to be parallel in the sub-areas of equal size $A$ perpendicular to the line direction. A number $N_{\perp}$ of dislocation lines with equal numbers of positive and negative Burgers vectors occupy random positions in a plane normal to the dislocation lines. The characteristic linear size of the sub-areas is chosen to be proportional to a parameter termed the effective outer cut-off radius $R_{e}$. The dislocation density $\rho$ is then defined by Wilkens (1970) as the areal density of dislocations given by Equation 2 that, as discussed in §2.1, is equivalent to the volume density of dislocations defined by Equation 1 for this specific dislocation configuration. The dipole character of the distribution is determined by the arrangement parameter $M=R_{e}\sqrt{\rho}$. Whilst $R_{e}$ is one of the fitting parameters in the cmwp software, thus affecting the value of $M$, this article is principally concerned with the determination of $\rho$ and thus we do not discuss the arrangement parameter further. An expression for the mean-square strain for a restricted random distribution of dislocations was derived by Wilkens (1970) $\displaystyle\langle\epsilon^{2}_{G,L}\rangle=\frac{\rho Cb^{2}}{4\pi}f(\eta),$ (6) where $C$ is a parameter that is refined by the profile-fitting cmwp algorithm, termed the dislocation contrast factor. The value of $C$ was evaluated for dislocations in $\alpha$-Zr by Balogh et al. (2016). The Wilkens function $f(\eta)$, where $\eta=L/R_{e}$, has the following asymptotic forms in the limit of small and large $\eta$, respectively $\displaystyle f(\eta)\sim\begin{cases}\ln{\left(1/\eta\right)}&,\,\eta\rightarrow 0\\\ \frac{1}{\eta}&,\,\eta\rightarrow\infty.\end{cases}$ (7) The numerically obtained formula for $f(\eta)$ may be found in Wilkens (1970).
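Equations 5-7 combine into a short calculation of the Fourier attenuation of a strain-broadened peak. The sketch below replaces the full numerically tabulated Wilkens function by its two asymptotic branches, which is a crude stand-in (the branches do not join smoothly near $\eta\sim 1$), and the parameter values are purely illustrative, not fitted quantities from this work:

```python
import math

def f_wilkens(eta):
    """Asymptotic stand-in for the Wilkens function (Eq. 7):
    ln(1/eta) for small eta, 1/eta for large eta. The full smooth
    interpolating form is tabulated in Wilkens (1970)."""
    return math.log(1.0 / eta) if eta < 1.0 else 1.0 / eta

def mean_square_strain(L, rho, C, b, R_e):
    """Eq. (6): <eps^2_{G,L}> = rho * C * b^2 / (4 pi) * f(L / R_e)."""
    return rho * C * b * b / (4.0 * math.pi) * f_wilkens(L / R_e)

def fourier_coefficient(L, G, rho, C, b, R_e):
    """Eq. (5): Fourier coefficient of the strain-broadened peak."""
    eps2 = mean_square_strain(L, rho, C, b, R_e)
    return math.exp(-2.0 * math.pi ** 2 * G ** 2 * L ** 2 * eps2)

# Illustrative numbers in nm units: rho = 1e15 m^-2 = 1e-3 nm^-2,
# a-type Burgers vector b = 0.323 nm, contrast factor C = 0.3,
# outer cut-off radius R_e = 100 nm, reciprocal vector G = 3.6 nm^-1
rho, b, C, R_e, G = 1e15 * 1e-18, 0.323, 0.3, 100.0, 3.6
A = [fourier_coefficient(L, G, rho, C, b, R_e) for L in (1.0, 5.0, 20.0)]
```

The attenuation decays with the Fourier variable $L$, which is the signature of strain broadening that cmwp fits for $\rho$, $C$ and $R_e$.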
Although the Wilkens model is formally derived assuming that a dislocation configuration is composed of straight lines, the model is in fact able to mimic the statistical properties of distributions of curved dislocations (Kamminga and Delhez, 2000; Groma and Borbély, 2004). When compared to transmission electron microscope (TEM) measurements of irradiated Zircaloy-2, the cmwp software was able to accurately follow the dislocation density evolution as a function of dose (Seymour et al., 2017), and the cmwp approach has now become an accepted tool for determining dislocation densities (Ungár et al., 2021; Topping et al., 2018) as well as other microstructural features (Ungár et al., 2021) in irradiated Zircaloys. The cmwp software evaluates parameters describing effects of both the specimen size and dislocation broadening of diffraction intensity peaks by first employing a statistical Monte Carlo optimisation followed by the Marquardt-Levenberg non-linear least squares algorithm, see Ribárik et al. (2020). All the peaks in the interval $0\text{\,}\mathrm{n}\mathrm{m}^{-1}<q<13\text{\,}\mathrm{n}\mathrm{m}^{-1}$ were included in the fitting procedure and the uncertainty in $\rho$ was quantified according to the quality of fit as described by Ribárik et al. (2020). The variation of dislocation density with depth calculated by cmwp is shown in Figure 2(b); when mapped to the dose calculated by srim at a given depth, this enables plotting the dislocation density as a function of dose as illustrated in Figure 1.

### 2.3 Simulation

The accumulation of defects and microstructural evolution as a function of dose was simulated using the Creation Relaxation Algorithm (CRA) (Derlet and Dudarev, 2020). The CRA exploits the separation of timescales associated with relatively fast stress-driven and comparatively slow thermally activated evolution of defect microstructure.
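In outline, one CRA iteration creates a Frenkel pair at random and then relaxes the cell athermally. The driver below is a schematic sketch only: `relax` stands in for the lammps energy minimisation, and the Frenkel pair is idealised as displacing one randomly chosen atom to a random off-lattice position.

```python
import random

def run_cra(positions, target_cdpa, relax, rng=random):
    """Schematic Creation Relaxation Algorithm driver (not the lammps
    implementation). Dose in cdpa is the number of Frenkel pairs
    created divided by the number of atoms."""
    n_atoms = len(positions)
    n_pairs = 0
    while n_pairs / n_atoms < target_cdpa:
        i = rng.randrange(n_atoms)          # site left vacant
        positions[i] = (rng.random(), rng.random(), rng.random())  # interstitial
        relax(positions)  # stand-in for conjugate-gradient/FIRE minimisation
        n_pairs += 1
    return n_pairs / n_atoms                # accumulated dose in cdpa

# 100-atom toy cell with a no-op relaxer, to exercise the dose bookkeeping:
dose = run_cra([(x / 10.0, 0.0, 0.0) for x in range(100)],
               target_cdpa=0.05, relax=lambda p: None)
```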
This results in a simple algorithm where, starting with a perfect crystal structure of $\alpha$-Zr, the Frenkel pairs of defects are created at random and the microstructure is subsequently relaxed via direct energy minimisation such that the system evolves purely through the action of internal stresses arising from the generation of defects. The dose is measured in units of ‘canonical displacement per atom’ (cdpa) computed as the ratio of the total number of Frenkel pairs generated by the algorithm to the number of atoms in the system. CRA simulations assume that vacancies remain effectively immobile, in turn also immobilising the dislocation part of the microstructure (Arakawa et al., 2020), leaving the internal fluctuating stresses as the only remaining factor driving the migration and clustering of self-interstitial atom defects. In (Warwick et al., 2021) we identified the approximate temperature and dose rate range where the simulation method retains its validity when applied to $\alpha$-Zr. The IIG strains observed in neutron irradiated zirconium alloys with initially low dislocation densities tend to saturate with increasing dose at temperatures less than $\sim 300\text{\,}\mathrm{\SIUnitSymbolCelsius}$, see Adamson et al. (2019). At temperatures above $\sim 300\text{\,}\mathrm{\SIUnitSymbolCelsius}$, saturation persists over shorter intervals of dose before the strain magnitudes start increasing linearly as a part of the breakaway growth phenomenon (Holt, 1988). A significant change of pattern of thermal evolution has also been found above $\sim 300\text{\,}\mathrm{\SIUnitSymbolCelsius}$ in proton irradiated Zircaloy-2 when samples irradiated to $2\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ were annealed for $1\text{\,}\mathrm{h}$ at various temperatures (Topping et al., 2018). The X-ray line profile measurements performed by Topping et al. 
(2018) showed that the a-loop density significantly decreased only at temperatures above $\sim 300\text{\,}\mathrm{\SIUnitSymbolCelsius}$. These data offer a valuable insight into the timescales on which thermally activated processes, including vacancy migration, drive the evolution of heavily irradiated zirconium. Earlier (Warwick et al., 2021), noting that the rates of thermally activated processes follow the Arrhenius law (Vineyard, 1957; Landauer and Swanson, 1961; Allnatt and Lidiard, 1993), we showed that the annealing experiment data imply that the characteristic activation energy $E_{a}$ for the processes primarily responsible for the observed thermally activated behaviour must be close to $\sim 2\text{\,}\mathrm{e}\mathrm{V}$. Also, as described in §2.2, the proton-irradiation defect production dose rate $\dot{\phi}$ at all depths in our experiments is high and close to $10^{-5}$ dpa s$^{-1}$. Given this high dose rate, we can estimate an upper bound $\tilde{T}$ on the range of temperatures where the rate of migration of defects stimulated directly by irradiation is higher than the rate of thermally activated migration of defects. For a given activation energy $E_{a}$, using the dose rate model by Nordlund et al. (2018), we find that the two rates are comparable if $\displaystyle\dot{\phi}\left({2E_{d}\over E_{a}}\right)\approx\nu\exp{\left(-\frac{E_{a}}{k_{B}\tilde{T}}\right)},$ (8) where $E_{d}$ is the threshold displacement energy required for forming a defect, the attempt frequency is ${\nu\approx\omega_{D}/2\pi}=5.84\times 10^{12}$ s$^{-1}$, given the Debye frequency ${\omega_{D}=3.67\times 10^{13}\,\mathrm{s}^{-1}}$ (Zarestky, 1979) and $k_{B}=0.861\times 10^{-4}$ eV/K is the Boltzmann constant. Taking $E_{a}=2$ eV and $E_{d}=40$ eV, and solving equation (8) for $\tilde{T}$, we find $\tilde{T}=625$ K $\approx 350\text{\,}\mathrm{\SIUnitSymbolCelsius}$.
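The crossover temperature follows from Equation 8 in closed form, which can be checked with the parameter values quoted above:

```python
import math

# Parameters quoted in the text
phi_dot = 1e-5      # defect production dose rate, dpa/s
E_d = 40.0          # threshold displacement energy, eV
E_a = 2.0           # effective activation energy, eV
nu = 5.84e12        # attempt frequency omega_D / (2 pi), 1/s
k_B = 0.861e-4      # Boltzmann constant, eV/K

# Eq. (8) rearranged: T~ = E_a / (k_B * ln[nu * E_a / (2 * phi_dot * E_d)])
T_tilde = E_a / (k_B * math.log(nu * E_a / (2.0 * phi_dot * E_d)))

T_celsius = T_tilde - 273.15   # ~624 K, i.e. ~350 C, matching the text
```

The logarithmic dependence on $\dot{\phi}$ and $\nu$ makes $\tilde{T}$ insensitive to their exact values, so order-of-magnitude estimates of the dose rate and attempt frequency suffice.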
Below this temperature, the eigenrate of thermal relaxation of microstructure is lower than the rate at which defects are driven by irradiation. Hence, at temperatures below $\tilde{T}$, the defect structures generated by irradiation evolve predominantly through fast athermal stress relaxation (Derlet and Dudarev, 2020). The above estimate for $\tilde{T}\sim 350\text{\,}\mathrm{\SIUnitSymbolCelsius}$ is close to the temperature at which Topping et al. (2018) observed the occurrence of a significant change in the thermal response of microstructure during annealing. Notably, the change of pattern of breakaway growth at $\sim 300\text{\,}\mathrm{\SIUnitSymbolCelsius}$ noted by Holt (1988) was observed at significantly lower dose rates than those characterising our proton irradiation experiments. Hence, the above temperature must reflect the fundamental scale of activation energies associated with microstructural evolution of Zircaloys under irradiation. The migration energy of individual vacancies in pure elemental $\alpha$-Zr (Varvenne et al., 2014) of $\sim 0.5\text{\,}\mathrm{eV}$ is too low to account for the observed behaviour, and while the thermal diffusion of vacancies and other point defects affects microstructural evolution, reducing the overall concentration of defects noted in our analysis, experimental observations indicate the presence of a rate-limiting process with a higher activation energy that stabilises the observed dense defect microstructures at temperatures as high as $300\text{\,}\mathrm{\SIUnitSymbolCelsius}$. High activation energies are known to be associated with the formation of immobile vacancy-impurity clusters involving carbon or nitrogen (Fu et al., 2008; Terentyev et al., 2014; Theodorou et al., 2022).
In bcc iron, the effective migration energy of vacancies is defined by the energy of dissociation of a cluster involving a vacancy and a carbon dimer (Paxton, 2014), and this dissociation energy can be as high as 2.22 eV (Kabir et al., 2010), far higher than the activation energy of 0.55 eV characterising vacancy migration in pure elemental Fe (Fu et al., 2005). Given that the characteristic formation and migration energies of defects in Zr and Fe are nearly the same (Dudarev, 2013), the high effective activation energy of the order of 2 eV seen in experiments on Zr likely results from an impurity effect similar to that found in Fe. As noted by Kabir et al. (2010), at relatively low temperatures the vacancy-carbon dimer complexes are immobile, making the dissociation temperature of these complexes one of the key parameters determining the response of a material to radiation exposure. CRA simulations (Derlet and Dudarev, 2020) or the simulations involving the production of defects by successive collision cascade events (Mason et al., 2021; Granberg et al., 2023; Boleininger et al., 2023) do not imply the absence of mobility of defects. Self-interstitial atom defects exhibit non-Arrhenius mobility (Dudarev, 2008) and their motion is strongly affected by elastic strain fields (Dudarev et al., 2010), resulting in the rapid clustering of these defects into interstitial dislocation loops and, subsequently, into a dense entangled network of dislocations (Derlet and Dudarev, 2020; Boleininger et al., 2022). The latter forms spontaneously at doses above approximately $0.3\text{\,}\mathrm{d}\mathrm{p}\mathrm{a}$ (Mason et al., 2020). The clustering of self-interstitial defects into dislocation loops and dislocations stems from the fact that this is a highly energetically favourable process, releasing up to $E^{f}_{SIA}\approx 3$ eV per self-interstitial coalescence event (Domain and Legris, 2005; Dudarev, 2013).
The fact that it is the SIA formation energy, fundamentally related to the strong elastic interaction between the self-interstitial defects, that drives the evolution of microstructure at relatively low temperatures rather than the diffusion of self-interstitial defects per se, is confirmed by finite-temperature simulations by Chartier and Marinica (2019). The simulations were performed at 300 K and hence included the thermal diffusion of self-interstitial atom defects, but still exhibited the same pattern of evolution as that predicted by the CRA simulations (Derlet and Dudarev, 2020). This is confirmed by experimental observations by Wang et al. (2023) showing trends similar to those found in simulations, even though in tungsten the diffusion of self-interstitial defects occurs at temperatures as low as 27 K (Ehrhart et al., 1991; Ma and Dudarev, 2019). The above analysis of ab initio data and experimental information shows that the temperature interval over which the dynamics of microstructural evolution changes from the low-temperature mode dominated by microscopic stress fluctuations (Derlet and Dudarev, 2020) to the high-temperature mode dominated by the Arrhenius thermally activated diffusion (Allnatt and Lidiard, 1993), in zirconium alloys spans approximately from $300\text{\,}\mathrm{\SIUnitSymbolCelsius}$ to $400\text{\,}\mathrm{\SIUnitSymbolCelsius}$, as illustrated particularly well by Fig. 8 from Topping et al. (2018). The qualitative picture of microstructural evolution of zirconium irradiated by high-energy protons can now be summarised as follows. Proton irradiation produces relatively low energy recoils, generating defects in the form of Frenkel pairs or small defect clusters (Boleininger et al., 2023).
Self-interstitial atom defects coalesce into dislocation loops and a dislocation network, whereas vacancies either diffuse and recombine with the interstitial dislocation loops or extended dislocation structures, or form immobile vacancy-impurity clusters (Kabir et al., 2010). These vacancy-impurity clusters immobilise and stabilise the dislocation microstructure (Arakawa et al., 2020), but dissociate in the temperature interval from $300\text{\,}\mathrm{\SIUnitSymbolCelsius}$ to $400\text{\,}\mathrm{\SIUnitSymbolCelsius}$. Over this temperature interval, the mode of microstructural evolution changes from that dominated by stress fluctuations and coalescence of self-interstitial defects to the mode dominated by vacancy diffusion. Over the same interval of temperatures, the IIG changes from a mode exhibiting saturation to that of runaway growth. Our observations exhibit the formation of a dense dislocation network, suggesting that the experimental conditions correspond to the low-temperature rather than the high-temperature mode of microstructural evolution. The selection of a simulation approach below reflects and recognises this fact. An algorithm for modelling microstructural evolution at higher temperatures has to include the treatment of microscopic stress fluctuations as well as diffusion and interaction of vacancies and impurities. The development of such an algorithm remains a challenge for future studies. The CRA was implemented in the molecular dynamics program lammps (Plimpton, 1995) (https://lammps.sandia.gov; 3 Mar and 29 Oct 2020 stable builds). For this study, unless stated otherwise, we present results averaged across all three Embedded Atom Method (EAM) potentials developed in Ref. (Mendelev and Ackland, 2007), as is the case in Figure 1.
Whilst there are variations between the potentials with respect to their predicted formation energies and elastic properties of self-interstitials and vacancies (Mendelev and Ackland, 2007; Varvenne et al., 2014; Varvenne and Clouet, 2017), we have found that all three potentials qualitatively produce the same macroscopic dimensional changes and microstructural evolution under the CRA (Warwick et al., 2021). Furthermore, a similar study also employed the CRA on Zr and predicted the same trends with a different potential (Tian et al., 2021). Our simulations employ periodic boundary conditions and supercells containing $\sim$ 2M and $\sim$ 10M atoms, with the cell edges parallel to the $[2\bar{1}\bar{1}0]$, $[\bar{1}2\bar{1}0]$ and $[0001]$ directions. Energy minimisation was performed using a combination of the conjugate gradient and FIRE algorithms (Bitzek et al., 2006) such that the relaxed force on any atom was smaller than $1\text{\,}\mathrm{m}\mathrm{e}\mathrm{V}\text{\AA}^{-1}$. In the interest of computational efficiency, the simulation cell shape and size was kept fixed during relaxation. Whilst these boundary conditions result in a macroscopic internal stress of $\sim 1\text{\,}\mathrm{G}\mathrm{P}\mathrm{a}$, the dimensional changes that would occur if the cell shape relaxed may nevertheless be accurately computed using linear elasticity theory; furthermore, the microstructure resulting from relaxing under zero pressure is similar to that under zero strain (Warwick et al., 2021; Tian et al., 2021). Dislocations were identified directly from atomic positions using the Dislocation eXtraction Algorithm (DXA) (Stukowski et al., 2012). This is achieved by assigning crystal structure types to each atom using common neighbour analysis (Faken and Jónsson, 1994). 
Given the crystal structure of the hcp reference crystal, Burgers circuits are drawn around regions containing atoms assigned to non-hcp crystal structure in order to compute Burgers vectors and dislocation lines. Dislocation densities are computed according to Equation 1. In order to enable a closer comparison between our simulations and line profile analysis experiments, we simulated the intensity profile of powder diffraction patterns for all of the CRA microstructures using the Debye equation (Debye, 1915) where the intensity of scattered X-rays is proportional to $\displaystyle I(q)=\sum_{i,j}{\mathrm{sinc}\left(2\pi qr_{ij}\right)},$ (9) for wavevector magnitude $q$, and the sum runs over all the pairs of atoms positioned at $\mathbf{r}_{i}$ and $\mathbf{r}_{j}$ separated by distance ${r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|}$. Furthermore, in Equation 9 it is assumed that the atomic form factor is the same for all atoms in the system. The time required for computing all the pairwise distances $r_{ij}$ for a system of $N$ atoms scales unfavourably as $N^{2}$ and thus we parallelised the task. The line profile was calculated without using periodic boundary conditions and thus the powder was treated as if it were composed of randomly oriented nano-grains as large as the simulation box. $I(q)$ was computed over the domain $3\text{\,}\mathrm{n}\mathrm{m}^{-1}$ $\leq q\leq$ $13\text{\,}\mathrm{n}\mathrm{m}^{-1}$ spanning all peaks up to the $\\{22\bar{4}0\\}$ reflections and the wavenumbers were sampled every $2\text{\times}{10}^{-3}\text{\,}\mathrm{n}\mathrm{m}^{-1}$. Data were visualised and processed with the ovito (Stukowski, 2010) and paraview (Ahrens et al., 2005) software packages. For more details, please refer to our recent publication (Warwick et al., 2021).

## 3 Results and Discussion

### 3.1 Dislocation structure and distribution

Figure 3: Dislocation microstructures calculated by the CRA simulations using the MA1 potential with 10M atoms.
Green, orange and red dislocations indicate $\langle 2\bar{1}\bar{1}0\rangle,\langle 1\bar{1}00\rangle$ and unusual Burgers vectors, respectively. Dots are self-interstitial atoms identified with the Wigner-Seitz analysis and boundaries of interstitial clusters are rendered as transparent surfaces. 3(a): Self-interstitial point defects and small dislocation loops at very low dose 0.02 cdpa. 3(b): Dense dislocation network at low dose 0.13 cdpa. 3(c): Extended dislocation network at high dose 2.00 cdpa with percolating interstitial cluster. The peak and saturation of dislocation density shown in Figure 1 can be readily understood by inspecting the spatial configuration of dislocations generated by our simulations. The nature of defects evolving through internal stresses in the CRA results in dislocations being almost exclusively formed by the agglomeration of self-interstitial atoms whilst vacancies remain immobile and generally form small clusters containing $\mathcal{O}(10)$ vacancies that are not large enough to relax into dislocation loops. We do find a small number of much larger clusters containing $\mathcal{O}(100)$ vacancies. However, these are fundamentally interstitial in origin as the large vacancy clusters are nothing but vacant spaces in the crystallographic planes formed by the self-interstitial defects, see Mason et al. (2020); Boleininger et al. (2023). The renders shown in Figure 3 suggest that the dislocation structure evolves in three stages. At low dose (Figure 3(a)), small loops form before coalescing into a dislocation network (Figure 3(b)), at which point the dislocation density saturates. At high doses (Figure 3(c)), full interstitial-type atomic planes form and the dislocation network fragments into loops, resulting in a drop in the dislocation density. The visualised microstructures were rendered from simulations employing the MA1 potential.
When comparing the interatomic potentials, we discovered that the MA2 and MA3 potentials produce large populations of twinned regions (Warwick et al., 2021). It seems likely that these are artefacts of the potentials, since such defects are not commonly observed in experiment. As was noted by Warwick et al. (2021), for the MA3 potential in particular, a large proportion of dislocations coalesce into these twinned regions, whose volume fraction also features a transient peak followed by saturation. The twinned regions are composed of dense arrays of dislocations, and thus the pattern of evolution of dislocation structures and saturation in density is common to all three potentials. The microstructures derived from MA1 simulations are therefore presented in order to avoid needlessly complicating our discussion.

Figure 4: Simulated $\\{1\bar{1}02\\}$ X-ray diffraction peak profile as a function of dose computed from the MA1 microstructures. The variation in the intensity of the peak and its centre shift is highlighted by the red path.

Application of the cmwp software to our simulated line profiles also shows that the dislocation density saturates as a function of dose, approaching a limiting value of $\sim 10^{15}$ m$^{-2}$. CRA simulations tend to overestimate the defect content in the high dose limit (Boleininger et al., 2023); however, the qualitative trends observed under a variety of conditions are predicted accurately (Mason et al., 2020, 2021; Warwick et al., 2021). This overestimation arises from the lack of recrystallisation induced by collision cascades. Producing the primary knock-on atoms with kinetic energies close to the threshold displacement energy results in defect densities very close to those predicted by the CRA, whilst much larger recoil energies result in defect densities lower by approximately a factor of 10, see Boleininger et al. (2023). Thus, in agreement with the analysis by Mason et al. (2020, 2021); Boleininger et al.
(2023), where the defect content was independently assessed using the experimentally measured deuterium retention, we have scaled down the dislocation density in Figure 1 by a factor of 10 to enable a direct comparison of our experimental and simulation data. After applying this scaling, it can be seen that the CRA indeed predicts a qualitatively accurate dislocation density profile. The occurrence of a transient peak of dislocation density at a moderate dose was also noted earlier in simulations of iron and tungsten (Chartier and Marinica, 2019; Derlet and Dudarev, 2020). In order to enable a closer comparison with experiment, we simulated a powder diffraction line profile for all of our CRA microstructures as described in §2.3. Figure 4 illustrates the evolution of the $\\{1\bar{1}02\\}$ diffraction peak profile as a function of dose, reflecting the eventual saturation of the microstructure in the peak profile seen in the limit of high dose. We observe that in the transient regime, the peak intensity drops before rising and settling at a steady value. Furthermore, the peak broadens at high dose and shifts to higher scattering angles, indicating lattice compression. The formation of extra atomic planes due to the coalescence of dislocation loops does not result in volumetric expansion but instead causes lattice compression, because the simulations are performed at constant cell shape and size. Under zero pressure boundary conditions, we expect the peak centre to shift to lower wave-numbers instead. Saturation of the peak profile has been quantified by extracting the dislocation density from the data using the cmwp software. cmwp is known to infer dislocation densities that are larger than those determined from TEM images, and this has been attributed to the ability of X-ray line profile analysis to resolve small loops in power-law-distributed dislocation loop populations (Ungár et al., 2021).
Interestingly, we find that the dislocation densities computed by cmwp and DXA nevertheless differ by approximately an order of magnitude, although the character of variation of the observed and simulated dislocation densities as functions of dose is the same. We note that the difference between the cmwp software and DXA results brings attention to an important question: at what size is a dislocation loop too small to be counted as a dislocation object? Usually, dislocations are considered to be the sources of long-range strain fluctuations responsible for the X-ray peak line profile broadening detected in irradiated materials (Wilkens, 1970). On the other hand, small dislocation loops produce shorter-range strains, and in the far-field limit they are equivalent to point defects, with the associated strain fields resulting in Huang diffuse scattering, producing a relatively uniform increase in the scattered intensity in X-ray diffraction patterns (Simmons and Baluffi, 1958). Furthermore, when a dislocation loop is so small that the loop diameter is comparable to the core width of a dislocation, the loop is mostly comprised of core atoms and its structure can no longer be described using conventional elasticity (Boleininger et al., 2018; Boleininger and Dudarev, 2019). Thus, one would expect the resulting strains to be unlike those associated with linear elastic fields of dislocation loops, and the core effects to be significant. Modelling the strain broadening effects associated with small dislocation loops requires further analysis, and we defer it to a future study. A large proportion of the dislocation objects found in the simulated microstructures are so small that they should be treated as point defects by cmwp and would also not be easily detected in TEM images (Zhou et al., 2006). Determining the nature of such defects could be highly relevant to determining the mechanisms that cause complex high dose phenomena such as breakaway growth.
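The simulated line profiles that cmwp operates on are generated from the Debye summation in Equation 9. A minimal serial sketch of that calculation is given below; the production code was parallelised, and the toy atom cluster and grid spacing here are illustrative assumptions rather than the parameters of our simulations.

```python
import numpy as np

def debye_intensity(positions, q_values):
    """Scattered intensity I(q) from the Debye equation (Equation 9),
    assuming a uniform atomic form factor for all atoms.

    positions : (N, 3) array of atomic coordinates in nm
    q_values  : 1D array of wavevector magnitudes in nm^-1
    """
    # All pairwise distances r_ij, including the i == j terms, for
    # which sinc(0) = 1, so the diagonal contributes N to the sum.
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1)).ravel()
    # np.sinc(x) = sin(pi x)/(pi x), hence sinc(2 pi q r) = np.sinc(2 q r)
    return np.array([np.sinc(2.0 * q * r).sum() for q in q_values])

# Toy example: a 4x4x4 cubic cluster with 0.3 nm spacing
grid = np.arange(4) * 0.3
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T

# Same q-domain and sampling interval as quoted in the text
q = np.arange(3.0, 13.0, 2e-3)  # nm^-1
I = debye_intensity(pos, q)
```

Because the double sum runs over all $N^{2}$ pairs, the cost of this direct evaluation grows quadratically with system size, which is why the full-scale calculation had to be parallelised.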
Figure 5: 5(a): Histogram of self-interstitial cluster population sizes detected in MA1 microstructures containing 10M atoms, corresponding to Figure 3. The population sizes of clusters, $N$, follow a power law distribution. $N_{\mathrm{SIA}}$ denotes the total number of self-interstitial atoms detected by the Wigner-Seitz algorithm. The data are presented on a log-log scale. Lines of best fit were calculated using the maximum likelihood estimates (MLE) for the exponent in a power law distribution stated in Eq. 10. 5(b): Evolution of the MLEs for the power law exponent $\alpha$ as a function of dose, averaged over all three interatomic MA potentials. Shaded regions represent confidence intervals of the MLE. Data for doses lower than 1 cdpa are presented on a log-linear scale, whereas in the saturation regime $>1$ cdpa a linear-linear scale is employed. $\alpha$ was averaged over doses larger than 1 cdpa to arrive at the value $\langle\alpha\rangle=2.28\pm 0.01$, indicated by the horizontal dashed blue line for a 2M cell, and $\langle\alpha\rangle=2.23\pm 0.01$ (dashed green line) for a 10M cell.

Size distributions of dislocation loops are often investigated in experiment (Yi et al., 2015), and so for the purposes of comparison and of characterising the spatial arrangement of dislocations we examined the distribution of defect cluster (dislocation loop) sizes. As the dislocations in this simulation are mostly of interstitial type, we can gain insight into the statistics of dislocation structures by examining the statistics of interstitial defect clusters. Figure 5(a) shows the frequency distribution of clusters containing $N$ interstitials as calculated by ovito for the doses corresponding to the renders in Figure 3.
For clusters with population sizes $N<10$, the bin widths of the histogram are equal to 1, whereas at larger cluster population sizes the bin widths increase logarithmically (Milojević, 2010). The raw data for the cluster population size frequencies were averaged over the bin widths to produce the step chart shown. Visually, the histograms appear to follow a straight line on a log-log scale, suggesting a power law distribution. Furthermore, simulations employing the CRA have shown evidence for self-organised critical behaviour (Derlet and Dudarev, 2020), and thus it is reasonable to test the cluster population size power law distribution hypothesis, such that the probability mass function for clusters containing $N$ interstitials is given by $p(N)=\frac{1}{\zeta(\alpha)N^{\alpha}}$ (10) where the Riemann zeta function (Heynsworth and Goldberg, 1965) is defined as $\zeta(\alpha)=\sum_{k=1}^{\infty}1/k^{\alpha}$ for exponent $\alpha>1$. We may define an exponent that best fits the data shown in Figure 5(a) via maximum likelihood estimation. The likelihood function $\displaystyle\mathcal{L}\left(\\{N_{i}|i\in[1..N_{\mathrm{tot}}]\\}|p(N|\alpha)\right)=\prod_{i=1}^{N_{\mathrm{tot}}}p(N_{i}|\alpha)$ (11) returns the probability of observing the $N_{\mathrm{tot}}$ measured data points $\\{N_{i}|i\in[1..N_{\mathrm{tot}}]\\}$ if they were produced from a distribution $p(N|\alpha)$ of given parameter(s) $\alpha$. The MLE is produced by determining the value of $\alpha_{\mathrm{MLE}}$ maximising $\ln\mathcal{L}$, such that: $\displaystyle\left.\frac{\partial\ln\mathcal{L}}{\partial\alpha}\right\rvert_{\alpha=\alpha_{\mathrm{MLE}}}=0.$ (12) Equation 12 was solved numerically using the python package powerlaw (Alstott et al., 2014), and as shown in Figure 5(b), the MLEs and associated standard error for $\alpha$ exhibit transients over a range of doses corresponding to the formation of a dislocation network by the coalescence of loops.
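Equations 10-12 were evaluated with the powerlaw package; an equivalent self-contained sketch of the estimator, using only scipy and tested on synthetic power-law data, is shown below. The sample size, random seed and truncation at $N=10^{4}$ are illustrative assumptions.

```python
import numpy as np
from scipy.special import zeta
from scipy.optimize import minimize_scalar

def mle_powerlaw_exponent(sizes):
    """MLE of alpha for the discrete power law p(N) = N^-alpha / zeta(alpha),
    found by minimising -ln L = n*ln(zeta(alpha)) + alpha*sum_i ln(N_i)
    (Equations 10-12)."""
    sizes = np.asarray(sizes, dtype=float)
    n, log_sum = sizes.size, np.log(sizes).sum()

    def neg_log_likelihood(alpha):
        return n * np.log(zeta(alpha, 1)) + alpha * log_sum

    return minimize_scalar(neg_log_likelihood,
                           bounds=(1.05, 6.0), method="bounded").x

# Synthetic cluster population sizes drawn from p(N) ~ N^-2.25,
# truncated at N = 10^4 (the tail mass beyond the cutoff is negligible)
rng = np.random.default_rng(0)
alpha_true = 2.25
support = np.arange(1, 10**4 + 1)
weights = support.astype(float) ** -alpha_true
sample = rng.choice(support, size=20000, p=weights / weights.sum())

alpha_hat = mle_powerlaw_exponent(sample)  # should recover ~2.25
```

The one-dimensional minimisation of $-\ln\mathcal{L}$ is equivalent to solving Equation 12, since the log-likelihood is smooth and unimodal in $\alpha$.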
The minimum of this transient appears to be correlated with the peak in dislocation density, before saturating to a constant value at high doses. The twinned regions that emerge when employing the MA2 and MA3 potentials, as described at the beginning of §3.1, are also interstitial defect clusters. Therefore they are included in this statistical analysis, and we find the same trend and remarkably similar values of $\alpha$ across all three MA potentials. Averaging across the potentials for doses larger than 1 cdpa in the saturation regime, we find the MLE for exponent $\alpha=2.28\pm 0.01$ for a 2M simulation cell and $\alpha=2.23\pm 0.01$ for a 10M simulation cell. These values are close to, but slightly higher than, the exponents found in simulations and observations of collision cascades in tungsten in the limit of low dose (Sand et al., 2013; Yi et al., 2015).

Figure 6: Estimation of dislocation density from interstitial cluster perimeter density. $N$ corresponds to the number of atoms in the cluster, and the perimeter of a given cluster is estimated according to Equation 13.

An immediate observation, and a consequence of the power law statistics, is that the overwhelming majority of clusters are small. Thus, even the order of magnitude of the calculated dislocation density is sensitive to the choice one makes for the minimum detectable size of a dislocation loop. To show this, assume that clusters with cluster population sizes $N$ lying in a range ${N_{\text{min}}<N<N_{\text{max}}}$ correspond to a circular platelet of interstitial atoms, i.e. a dislocation loop (Gilbert et al., 2008). The $N_{\text{min}}$ and $N_{\text{max}}$ thresholds correspond to cluster population sizes that are either sufficiently small to be treated as point defects, or are large enough to be close to the threshold for forming a percolating atomic plane consisting of interconnected dislocation loops.
Assume that each platelet contains one extra atom per atomic string in the direction of the Burgers vector (Boleininger et al., 2018; Boleininger and Dudarev, 2019), and involves $N$ atoms with volume $\Omega$ equal to the atomic volume of $\alpha$-Zr. Using the formula $({\bf b}\cdot{\bf A})$ for the volume of a dislocation loop, its perimeter can then be estimated as $P_{S}=2\pi\sqrt{{N\Omega\over\pi b}},$ (13) where $b$ is the length of the Burgers vector along the normal to the loop habit plane. Here, we assume that the platelets form $a$-type dislocation loops such that $b$ is equal to the $a$ lattice parameter. Summing the perimeters corresponding to cluster population size distributions between various choices of $N_{\text{min}}$ and $N_{\text{max}}$ provides a measure of the difference in dislocation density due to different choices of thresholds. Typically, a dislocation core extends over about five interatomic distances (Boleininger et al., 2018; Boleininger and Dudarev, 2019), and thus one would reasonably argue that $N_{\text{min}}$ should at least be greater than five. Furthermore, we should not include the fully formed planes corresponding to a cluster percolating through the periodic boundaries, thus setting $N_{\text{max}}=10^{3}$. In Figure 6 we observe an order-of-magnitude difference between the density obtained by counting all the viable clusters smaller than the population size of the percolating cluster ($5<N<10^{3}$) and the value found by raising the lower threshold to exclude clusters containing fewer than 100 self-interstitial atoms. This example illustrates how the power law distribution of defect sizes in highly irradiated microstructures can profoundly affect how dislocations are counted and their density quantified.
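The threshold sensitivity just described can be reproduced with a short script. The atomic volume, lattice parameter, cell volume and synthetic power-law cluster sizes below are stand-in assumptions for illustration, not the values extracted from our microstructures.

```python
import numpy as np

def dislocation_density(cluster_sizes, omega, b, volume, n_min, n_max):
    """Density from summed loop perimeters P = 2*pi*sqrt(N*Omega/(pi*b))
    (Equation 13), counting only clusters with n_min < N < n_max."""
    N = np.asarray(cluster_sizes, dtype=float)
    N = N[(N > n_min) & (N < n_max)]
    return (2.0 * np.pi * np.sqrt(N * omega / (np.pi * b))).sum() / volume

# Illustrative inputs: approximate atomic volume and a lattice
# parameter of alpha-Zr, and a hypothetical cubic simulation cell
omega, b = 0.0233, 0.323   # nm^3, nm
volume = 40.0 ** 3         # nm^3

# Synthetic cluster sizes following ~N^-2.2 (inverse-CDF sampling)
rng = np.random.default_rng(1)
sizes = np.floor(rng.random(50000) ** (-1.0 / 1.2)).astype(int)

rho_all = dislocation_density(sizes, omega, b, volume, 5, 1000)    # nm^-2
rho_big = dislocation_density(sizes, omega, b, volume, 100, 1000)  # nm^-2
```

With these synthetic sizes the two choices of lower threshold differ by roughly an order of magnitude in inferred density, mirroring the effect shown in Figure 6.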
We may also employ Equation 13 to estimate the power law exponent for the distribution of loops with respect to loop diameter $D$, where it is apparent that, assuming that all the loops have the same Burgers vector $b$, $N\propto D^{2}$. Treating $N$ as a continuous variable, we derive the probability density function as a function of loop diameter $\displaystyle p_{D}(D)=\left\lvert\frac{\mathrm{d}N}{\mathrm{d}D}\right\rvert p(N(D))\propto\frac{1}{D^{2\alpha-1}},$ (14) where $\alpha$ is the exponent entering Equation 10. Hence the diameter of circular loops is expected to be power law distributed with exponent $\beta=2\alpha-1$. Using the data derived from the CRA simulations in the high dose limit illustrated in Figure 5(b) for a 10M atom cell, we find that $\beta=3.46$. In experiments performed by Ungár et al. (2021), dislocation diameter data measured by TEM were combined with an XRD line profile measurement of the total dislocation density. When fitting the dislocation size distribution to a power law, the exponent was found to lie in the range $3\leq\beta\leq 4$, which agrees well with the values derived from our simulations. Concluding this section, we note that power law exponent values $\alpha=2$ and $\beta=3$ appear to represent natural lower limits characterising the power law statistics of populations of dislocation loops in a heavily irradiated material. Indeed, any power law distribution with $\alpha<2$ would imply a divergent total count of interstitial defects contained in the loops. This paradox can only be resolved by recognising that loops of very large size are nothing but extra crystallographic atomic planes, which through elastic interactions would tend to modify the finite-size part of the dislocation loop distribution so as to make it normalisable, in this way steering the value of exponent $\alpha$ towards and above the limiting value of 2.
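The change of variables in Equation 14 can be verified numerically: sampling $N$ from a continuous power law with exponent $\alpha$ and mapping to $D\propto\sqrt{N}$ should yield a diameter distribution with tail exponent $\beta=2\alpha-1$. The sample size, seed and fitting grid below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 2.23                              # exponent of p(N) ~ N^-alpha
u = rng.random(200000)
N = (1.0 - u) ** (-1.0 / (alpha - 1.0))   # inverse-CDF sample, pdf ~ N^-alpha
D = np.sqrt(N)                            # loop diameter, D proportional to sqrt(N)

# For a power law pdf ~ x^-beta the survival function scales as
# x^(1-beta), so beta follows from the log-log slope of P(D > d)
d_grid = np.logspace(0.2, 1.5, 20)
surv = np.array([(D > d).mean() for d in d_grid])
slope = np.polyfit(np.log(d_grid), np.log(surv), 1)[0]
beta_est = 1.0 - slope                    # expected: 2*alpha - 1 = 3.46
```

For $\alpha=2.23$ the recovered tail exponent lies close to $2\alpha-1=3.46$, consistent with the value quoted for the 10M atom cell.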
### 3.2 Stored energy

Whilst analysing the configuration of atoms helps to describe how the microstructure evolves, on its own this approach provides limited insight into why the observed processes take place. Evidently, the repeated creation and relaxation of defects forces the microstructure into a state that has an energy higher than that of a single crystal. By examining the excess energy $E_{exc}$ to which the system is driven, we may determine how the accumulation of point defects produces the observed dislocation density. For an irradiated microstructure at dose $\phi$ containing $N$ atoms with total energy $E_{total}(\phi)$, we may measure the excess (stored) energy as $\displaystyle E_{exc}(\phi):=E_{total}(\phi)-NE_{coh},$ (15) where $E_{coh}$ is the cohesive energy of $\alpha$-Zr. Our simulations are carried out under zero global strain boundary conditions. Whilst this is computationally convenient, in reality specimens are often irradiated under zero applied stress, allowing the body as a whole to undergo strain. Assuming linear elasticity, we may correct for this and remove the stored elastic energy $E_{el}$ from our simulation results. The defects produced by radiation damage induce eigenstrains, also known as residual strains, that act as sources of elastic strain. Let $\epsilon_{ij}^{0}$ denote the elastic strain that would come about under zero applied stress boundary conditions. In this case, the potential energy of the body is lowered by doing work equal to $\frac{1}{2}\sigma_{ij}\epsilon_{ij}^{0}$, where Hooke’s law determines the stress $\sigma_{ij}=C_{ijkl}\epsilon_{kl}^{0}$ and the elastic constants tensor is $C_{ijkl}$.
Enforcing zero global strain requires that the integral of the strain $\epsilon_{ij}$ over the body with volume $V$ is zero or, equivalently, that one has zero volume-averaged strain $\displaystyle\langle\epsilon_{ij}\rangle:=\frac{1}{V}\int_{V}\mathrm{d}^{3}\mathbf{x}^{\prime}\,\epsilon_{ij}(\mathbf{x}^{\prime})=0.$ (16) This boundary condition is satisfied by the strain $\displaystyle\epsilon_{ij}(\mathbf{x})=\epsilon_{ij}^{0}(\mathbf{x})-\langle\epsilon^{0}_{ij}\rangle.$ (17) Applying Hooke’s law to Equation 17, we observe that the system is under a state of global stress $\langle\sigma_{ij}\rangle=C_{ijkl}\langle\epsilon^{0}_{kl}\rangle$, and we may treat this as the stress that develops in our simulations. Therefore, we can compute the stored elastic energy $\displaystyle E_{el}=\frac{1}{2}\langle\sigma_{ij}\rangle S_{ijkl}\langle\sigma_{kl}\rangle,$ (18) where $S_{ijkl}$ is the elastic compliance tensor (Warwick et al., 2021) related to $C_{ijkl}$ by $S_{ijpq}C_{pqkl}=\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)$. We find across all the employed potentials that $E_{el}$ accounts for only a small fraction $<10\%$ of the stored energy, meaning that the vast majority of stored energy is contained in the point defect centres, dislocation cores and fluctuations arising from the elastic fields of these defects. Indeed, when a high dose snapshot was explicitly relaxed under zero pressure, we found that the difference in excess energy relative to that corrected for by our estimate of $E_{el}$ is of the same order of magnitude. Furthermore, the microstructure did not change significantly, further indicating the minor role of the global elastic energy.

Figure 7: Contributions of microstructural defects to the excess energy. Results shown are for the MA1 potential using 10 million atoms.
7(a): Comparison of the vacancy energy $E_{vac}$ estimated by assuming all $N_{V}$ vacancies are isolated, with formation energy $E^{f}_{1vac}$ (blue curve), against $E_{vac}$ calculated via explicitly relaxing pristine $\alpha$-Zr containing the same number and arrangement of vacancies. 7(b): Breakdown of the total excess energy $E_{exc}$ into elastic $E_{el}$, interstitial $E_{int}$ and vacancy $E_{vac}$ contributions. 7(c): Comparison of interstitial excess energy, concentration and dislocation density, exhibiting the similarity in profile between all three quantities. A peak occurs in all three profiles near 0.1 cdpa.

The remaining stored energy is confined in the two classes of defects associated with vacancies and interstitials. Given that vacancies do not cluster significantly, we may estimate their energetic contribution as $\displaystyle E_{vac}(\phi)=N_{vac}(\phi)E^{f}_{1vac},$ (19) where, for dose $\phi$, the number of vacancies is denoted $N_{vac}$ and $E^{f}_{1vac}$ is the formation energy of a single vacancy. Values of $E^{f}_{1vac}$ for each of the potentials used in this study may be found in Mendelev and Ackland (2007). To check the validity of Equation 19, we isolated the $N_{vac}$ vacancies identified by Wigner-Seitz analysis and subsequently relaxed an $N$-atom supercell of pristine $\alpha$-Zr containing the same number of vacancies in the same positions. The simulation cell was relaxed and the formation energy was computed as $\displaystyle E_{vac}=E^{total}_{vac}-(N-N_{vac})E_{coh},$ (20) where $E^{total}_{vac}$ is the resulting total energy. Figure 7(a) summarises this process and shows the resulting formation energy for these arrangements of vacancies. We thus observe that Equation 19 is a good approximation, providing further evidence that the majority of $E_{vac}$ is due to isolated vacancies.
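Returning to the elastic correction, Equation 18 reduces to a small matrix expression when written in Voigt notation. The sketch below uses representative literature values for the elastic constants of $\alpha$-Zr and an arbitrary example stress; both are assumptions for illustration, not the constants of the MA potentials.

```python
import numpy as np

# Representative elastic constants of alpha-Zr in Voigt notation (GPa);
# illustrative values only, with hexagonal symmetry C66 = (C11 - C12)/2
C11, C12, C13, C33, C44 = 143.4, 72.8, 65.3, 164.8, 32.0
C = np.array([
    [C11, C12, C13, 0.0, 0.0, 0.0],
    [C12, C11, C13, 0.0, 0.0, 0.0],
    [C13, C13, C33, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, C44, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, C44, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.5 * (C11 - C12)],
])
S = np.linalg.inv(C)   # compliance matrix (1/GPa)

def stored_elastic_energy_density(sigma_voigt):
    """Energy density (1/2) <sigma> S <sigma> (Equation 18) for a global
    stress given in Voigt order (xx, yy, zz, yz, xz, xy), in GPa."""
    s = np.asarray(sigma_voigt, dtype=float)
    return 0.5 * s @ S @ s   # result in GPa = 10^9 J m^-3

# Example: a hypothetical 0.5 GPa hydrostatic compression developed
# at fixed cell shape and size
u_el = stored_elastic_energy_density([-0.5, -0.5, -0.5, 0.0, 0.0, 0.0])
```

Multiplying the energy density by the cell volume gives $E_{el}$, the quantity subtracted from $E_{exc}$ in the decomposition that follows.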
The contribution to $E_{exc}$ from the formation energy of small interstitial clusters and dislocation content may now be calculated as $\displaystyle E_{int}=E_{exc}-E_{el}-E_{vac}.$ (21) In Figure 7(b) we show the relative proportions of the elastic, interstitial and vacancy contributions to the total excess energy, where it is evident that the elastic contribution is in the minority, vacancy contributions dominate, and the interstitial contribution follows a trend similar to the dislocation density profile shown in Figure 1. The profile of $E_{int}$ is shown in Figure 7(c) together with the interstitial concentration and dislocation density, in order to highlight their correlation with each other. Recently, Differential Scanning Calorimetry (DSC) experiments were performed to measure the stored energy of irradiated titanium as a means of inferring the number of defects present (Hirst et al., 2022). The measurements indicated stored energies associated with irradiation-induced defects to be on the order of $0.1\,\mathrm{J\,g^{-1}}$. Their analysis also allowed the authors to infer the presence of defects that are invisible to TEM imaging. At 0.07 cdpa in Figure 7(c), we find that the peak in specific energy associated with $E_{int}$ is $25\,\mathrm{J\,g^{-1}}$, which subsequently drops and plateaus at high dose to $12\,\mathrm{J\,g^{-1}}$. At elevated temperatures, the defect content is likely to be $\sim 10\%$ of that calculated in our simulations, cf. (Mason et al., 2020, 2021), and thus we expect the associated stored energy in Zr to be comparable to $1\,\mathrm{J\,g^{-1}}$.

## 4 Conclusions

In summary, we have performed experiments and simulations showing that the dislocation density in irradiated zirconium and zircaloys exhibits a peak at a moderate dose and then saturates at doses greater than $1\,\mathrm{dpa}$.
Simulations indicate that this occurs in a regime of dose rate and temperature where microstructural evolution is predominantly driven by stress relaxation. The material enters a critical state at $\sim 1\,\mathrm{dpa}$, where interstitial clusters grow to a sufficient size to percolate the volume of the material. At high dose, the population of smaller clusters and dislocation loops is distributed as a function of cluster defect content according to power law statistics with exponent close to $\alpha\approx 2.2$. As a function of defect diameter, this results in a power law distribution of defect clusters with exponent $\beta\approx 3.5$, which compares favourably with the range of values $3\leq\beta\leq 4$ derived from experimental observations (Ungár et al., 2021). The analysis highlights the significance of a precise definition of the defect sizes included in the measured dislocation densities. Irrespective of the statistics of dislocation structures, the trend in the dislocation density evolution in zirconium irradiated at temperatures below $\sim 350^{\circ}\>\mathrm{C}$ is clear: the dislocation density saturates as a function of dose.

## 5 Acknowledgements

This work received funding from the RCUK Energy Programme Grant No. EP/W006839/1 and MIDAS EPSRC Grant No. EP/S01702X/1, and was partially carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 — EUROfusion). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities.
We gratefully acknowledge the use of the high-performance computing facility MARCONI (Bologna, Italy) provided by EUROfusion, and computing resources supplied by the IRIS (STFC) Consortium. This work also received support from the EPSRC Access to HPC Programme on the ARCHER2 UK National Supercomputing Service (http://www.archer2.ac.uk).

## References

* Adamson et al. (2019) Adamson, R.B., Coleman, C.E., Griffiths, M., 2019. Irradiation creep and growth of zirconium alloys: A critical review. Journal of Nuclear Materials 521, 167–244. doi:10.1016/j.jnucmat.2019.04.021.
* Ahrens et al. (2005) Ahrens, J., Geveci, B., Law, C., 2005. ParaView: An End-User Tool for Large-Data Visualization, in: Hansen, C.D., Johnson, C.R. (Eds.), Visualization Handbook. Elsevier, pp. 717–731. doi:10.1016/B978-012387582-2/50038-1.
* Allnatt and Lidiard (1993) Allnatt, A.R., Lidiard, A.B., 1993. Atomic Transport in Solids. Cambridge University Press, Cambridge, England. doi:10.1017/CBO9780511563904.
* Alstott et al. (2014) Alstott, J., Bullmore, E., Plenz, D., 2014. powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions. PLoS ONE 9, e85777. doi:10.1371/journal.pone.0085777.
* Arakawa et al. (2020) Arakawa, K., Marinica, M.C., Fitzgerald, S., Proville, L., Nguyen-Manh, D., Dudarev, S.L., Ma, P.W., Swinburne, T.D., Goryaeva, A.M., Yamada, T., Amino, T., Arai, S., Yamamoto, Y., Higuchi, K., Tanaka, N., Yasuda, H., Yasuda, T., Mori, H., 2020. Quantum de-trapping and transport of heavy defects in tungsten. Nature Materials 19, 508. doi:10.1038/s41563-019-0584-0.
* Arsenlis and Parks (1999) Arsenlis, A., Parks, D., 1999. Crystallographic aspects of geometrically-necessary and statistically-stored dislocation density. Acta Materialia 47, 1597–1611. doi:10.1016/S1359-6454(99)00020-8.
* Balogh et al. (2016) Balogh, L., Long, F., Daymond, M.R., 2016. Contrast factors of irradiation-induced dislocation loops in hexagonal materials. Journal of Applied Crystallography 49, 2184–2200.
doi:10.1107/S1600576716018136.
* Bitzek et al. (2006) Bitzek, E., Koskinen, P., Gähler, F., Moseler, M., Gumbsch, P., 2006. Structural Relaxation Made Simple. Physical Review Letters 97, 170201. doi:10.1103/PhysRevLett.97.170201.
* Boleininger and Dudarev (2019) Boleininger, M., Dudarev, S.L., 2019. Continuum model for the core of a straight mixed dislocation. Physical Review Materials 3, 093801. doi:10.1103/PhysRevMaterials.3.093801.
* Boleininger et al. (2022) Boleininger, M., Dudarev, S.L., Mason, D.R., Martínez, E., 2022. Volume of a dislocation network. Physical Review Materials 6, 063601. doi:10.1103/PhysRevMaterials.6.063601.
* Boleininger et al. (2023) Boleininger, M., Mason, D.R., Sand, A.E., Dudarev, S.L., 2023. Microstructure of a heavily irradiated metal exposed to a spectrum of atomic recoils. Scientific Reports 13, 1684. doi:10.1038/s41598-022-27087-w.
* Boleininger et al. (2018) Boleininger, M., Swinburne, T.D., Dudarev, S.L., 2018. Atomistic-to-continuum description of edge dislocation core: Unification of the Peierls-Nabarro model with linear elasticity. Physical Review Materials 2, 083803. doi:10.1103/PhysRevMaterials.2.083803.
* Chartier and Marinica (2019) Chartier, A., Marinica, M.C., 2019. Rearrangement of interstitial defects in alpha-Fe under extreme condition. Acta Materialia 180, 141–148. doi:10.1016/j.actamat.2019.09.007.
* Choi and Kim (2013) Choi, S.I., Kim, J.H., 2013. Radiation-induced dislocation and growth behavior of zirconium and zirconium alloys - a review. Nuclear Engineering and Technology 45, 385–392. doi:10.5516/NET.07.2013.035.
* Christensen et al. (2020) Christensen, M., Wolf, W., Freeman, C., Wimmer, E., Adamson, R., Griffiths, M., Mader, E., 2020. Vacancy loops in Breakaway Irradiation Growth of zirconium: Insight from atomistic simulations. Journal of Nuclear Materials 529, 151946. doi:10.1016/j.jnucmat.2019.151946.
* Das et al. (2018) Das, S., Hofmann, F., Tarleton, E., 2018.
Consistent determination of geometrically necessary dislocation density from simulations and experiments. International Journal of Plasticity 109, 18–42. doi:10.1016/j.ijplas.2018.05.001.
* Debye (1915) Debye, P., 1915. Zerstreuung von Röntgenstrahlen. Annalen der Physik 351, 809–823. doi:10.1002/andp.19153510606.
* Derlet and Dudarev (2020) Derlet, P.M., Dudarev, S.L., 2020. Microscopic structure of a heavily irradiated material. Physical Review Materials 4, 023605. doi:10.1103/PhysRevMaterials.4.023605.
* Domain and Legris (2005) Domain, C., Legris, A., 2005. Ab initio atomic-scale determination of point-defect structure in hcp zirconium. Philosophical Magazine 85, 569–575. doi:10.1080/14786430412331334625.
* Dudarev (2008) Dudarev, S.L., 2008. The non-Arrhenius migration of interstitial defects in bcc transition metals. Comptes Rendus Physique 9, 409–417. doi:10.1016/j.crhy.2007.09.019.
* Dudarev (2013) Dudarev, S.L., 2013. Density functional theory models for radiation damage. Annu. Rev. Mater. Res. 43, 35–61. doi:10.1146/annurev-matsci-071312-121626.
* Dudarev et al. (2010) Dudarev, S.L., Gilbert, M.R., Arakawa, K., Mori, H., Yao, Z., Jenkins, M.L., Derlet, P.M., 2010. Langevin model for real-time Brownian dynamics of interacting nanodefects in irradiated metals. Physical Review B 81, 224107. doi:10.1103/PhysRevB.81.224107.
* Dudarev and Ma (2018) Dudarev, S.L., Ma, P.W., 2018. Elastic fields, dipole tensors, and interaction between self-interstitial atom defects in bcc transition metals. Physical Review Materials 2, 033602. doi:10.1103/PhysRevMaterials.2.033602.
* Ehrhart et al. (1991) Ehrhart, P., Jung, P., Schultz, H., Ullmaier, H., 1991. Landolt-Börnstein - Group III Condensed Matter · Volume 25: “Atomic Defects in Metals”. Springer-Verlag Berlin Heidelberg. doi:10.1007/10011948_45.
* Faken and Jónsson (1994) Faken, D., Jónsson, H., 1994. Systematic analysis of local atomic structure combined with 3D computer graphics.
Computational Materials Science 2, 279–286. doi:10.1016/0927-0256(94)90109-0.
* Fu et al. (2005) Fu, C.C., Dalla Torre, J., Willaime, F., Bocquet, J.L., Barbu, A., 2005. Multiscale modelling of defect kinetics in irradiated iron. Nature Materials 4, 68–74. doi:10.1038/nmat1286.
* Fu et al. (2008) Fu, C.C., Meslin, E., Barbu, A., Willaime, F., Oison, V., 2008. Effect of C on vacancy migration in $\alpha$-iron. Solid State Phenomena 139, 157–164. doi:10.4028/www.scientific.net/SSP.139.157.
* Gilbert et al. (2008) Gilbert, M.R., Dudarev, S.L., Derlet, P.M., Pettifor, D.G., 2008. Structure and metastability of mesoscopic vacancy and interstitial loop defects in iron and tungsten. J. Phys.: Condens. Matter 20, 345214. doi:10.1088/0953-8984/20/34/345214.
* Granberg et al. (2023) Granberg, F., Mason, D., Byggmästar, J., 2023. Effect of simulation technique on the high-dose damage in tungsten. Computational Materials Science 217, 111902. doi:10.1016/j.commatsci.2022.111902.
* Griffiths (2020) Griffiths, M., 2020. 1.11 - Irradiation Growth, in: Konings, R.J., Stoller, R.E. (Eds.), Comprehensive Nuclear Materials. second ed. Elsevier, Oxford. volume 1, pp. 367–405. doi:10.1016/B978-0-12-803581-8.11646-7.
* Groma and Borbély (2004) Groma, I., Borbély, A., 2004. X-ray Peak Broadening Due to Inhomogeneous Dislocation Distributions, in: Mittemeijer, E.J., Scardi, P. (Eds.), Diffraction Analysis of the Microstructure of Materials. Springer, Berlin, Heidelberg. Springer Series in Materials Science, pp. 287–307. doi:10.1007/978-3-662-06723-9_11.
* Heynsworth and Goldberg (1965) Heynsworth, E.Y., Goldberg, K., 1965. Bernoulli and Euler Polynomials, Riemann Zeta Function, in: Abramovitz, M., Stegun, I. (Eds.), Handbook of Mathematical Functions. Dover, New York, pp. 803–819.
* Hirst et al. (2022) Hirst, C.A., Granberg, F., Kombaiah, B., Cao, P., Middlemas, S., Kemp, R.S., Li, J., Nordlund, K., Short, M.P., 2022.
Revealing hidden defects through stored energy measurements of radiation damage. Science Advances 8. doi:10.1126/sciadv.abn2733. * Holt (1988) Holt, R., 1988. Mechanisms of irradiation growth of alpha-zirconium alloys. Journal of Nuclear Materials 159, 310–338. doi:10.1016/0022-3115(88)90099-2. * Hull and Bacon (2011) Hull, D., Bacon, D., 2011. Introduction to Dislocations : Chapter 1 - Defects in Crystals. fifth ed., Butterworth-Heinemann, Oxford. doi:10.1016/B978-0-08-096672-4.00001-3. * Jones et al. (2016) Jones, R.E., Zimmerman, J.A., Po, G., 2016. Comparison of Dislocation Density Tensor Fields Derived from Discrete Dislocation Dynamics and Crystal Plasticity Simulations of Torsion. Journal of Materials Science Research 5, 44. doi:10.5539/jmsr.v5n4p44. * Kabir et al. (2010) Kabir, M., Lau, T.T., Lin, X., Yip, S., Van Vliet, K.J., 2010\. Effects of vacancy-solute clusters on diffusivity in metastable Fe-C alloys. Physical Review B 82, 134112\. doi:10.1103/PhysRevB.82.134112. * Kamminga and Delhez (2000) Kamminga, J.D., Delhez, R., 2000\. Calculation of diffraction line profiles from specimens with dislocations. A comparison of analytical models with computer simulations. Journal of Applied Crystallography 33, 1122–1127. doi:10.1107/S0021889800006750. * Landauer and Swanson (1961) Landauer, R., Swanson, J.A., 1961\. Frequency factors in the thermally activated processes. Physical Review 121, 1668 – 1674. doi:10.1103/PhysRev.121.1668. * Lemaignan (2012) Lemaignan, C., 2012. 2.07 - Zirconium Alloys: Properties and Characteristics, in: Konings, R.J.M. (Ed.), Comprehensive Nuclear Materials. Elsevier. volume 2, pp. 217–232. doi:10.1016/B978-0-08-056033-5.00015-X. * Ma and Dudarev (2019) Ma, P.W., Dudarev, S.L., 2019\. Symmetry-broken self-interstitial defects in chromium, molybdenum, and tungsten. Physical Review Materials 3, 043606\. doi:10.1103/PhysRevMaterials.3.043606. * Mandadapu et al. (2014) Mandadapu, K.K., Jones, R.E., Zimmerman, J.A., 2014. 
On the microscopic definitions of the dislocation density tensor. Mathematics and Mechanics of Solids 19, 744–757. doi:10.1177/1081286513486792. * Mason et al. (2020) Mason, D.R., Das, S., Derlet, P.M., Dudarev, S.L., London, A.J., Yu, H., Phillips, N.W., Yang, D., Mizohata, K., Xu, R., Hofmann, F., 2020. Observation of Transient and Asymptotic Driven Structural States of Tungsten Exposed to Radiation. Physical Review Letters 125, 225503\. doi:10.1103/PhysRevLett.125.225503. * Mason et al. (2021) Mason, D.R., Granberg, F., Boleininger, M., Schwarz-Selinger, T., Nordlund, K., Dudarev, S.L., 2021\. Parameter-free quantitative simulation of high-dose microstructure and hydrogen retention in ion-irradiated tungsten. Physical Review Materials 5, 095403\. doi:10.1103/PhysRevMaterials.5.095403. * Mendelev and Ackland (2007) Mendelev, M.I., Ackland, G.J., 2007\. Development of an interatomic potential for the simulation of phase transformations in zirconium. Philosophical Magazine Letters 87, 349–359. doi:10.1080/09500830701191393. * Milojević (2010) Milojević, S., 2010. Power law distributions in information science: Making the case for logarithmic binning. Journal of the American Society for Information Science and Technology 61, 2417–2425. doi:10.1002/asi.21426. * Nicodemus and Staub (1953) Nicodemus, D.B., Staub, H.H., 1953\. Fission Neutron Spectrum of U235. Physical Review 89, 1288\. doi:10.1103/PhysRev.89.1288. * Nordlund et al. (2018) Nordlund, K., Zinkle, S.J., Sand, A.E., Granberg, F., Averback, R.S., Stoller, R., Suzudo, T., Malerba, L., Banhart, F., Weber, W.J., Willaime, F., Dudarev, S.L., Simeone, D., 2018. Improving atomic displacement and replacement calculations with physically realistic damage models. Nature Communications 9, 1084\. doi:10.1038/s41467-018-03415-5. * Nye (1953) Nye, J., 1953. Some geometrical relations in dislocated crystals. Acta Metallurgica 1, 153–162. doi:10.1016/0001-6160(53)90054-6. 
* Onimus and Bechade (2012) Onimus, F., Bechade, J.L., 2012\. 4.01 - Radiation Effects in Zirconium Alloys, in: Konings, R.J. (Ed.), Comprehensive Nuclear Materials. Elsevier. volume 4, pp. 1–31. doi:10.1016/B978-0-08-056033-5.00064-1. * Onimus et al. (2022) Onimus, F., Gélébart, L., Brenner, R., 2022. Polycrystalline simulations of in-reactor deformation of recrystallized Zircaloy-4 tubes: Fast Fourier Transform computations and mean-field self-consistent model. International Journal of Plasticity 153, 103272. doi:10.1016/j.ijplas.2022.103272. * Paxton (2014) Paxton, A.T., 2014. From quantum mechanics to physical metallurgy of steels. Materials Science and Technology 30, 1063. doi:10.1179/1743284714Y.0000000521. * Plimpton (1995) Plimpton, S., 1995. Fast Parallel Algorithms for Short-Range Molecular Dynamics. Journal of Computational Physics 117, 1–19. doi:10.1006/jcph.1995.1039. * Pomerance (1951) Pomerance, H., 1951. Thermal Neutron Capture Cross Sections. Physical Review 83, 641–645. doi:10.1103/PhysRev.83.641. * Ribárik et al. (2020) Ribárik, G., Jóni, B., Ungár, T., 2020. The Convolutional Multiple Whole Profile (CMWP) Fitting Method, a Global Optimization Procedure for Microstructure Determination. Crystals 10, 623\. doi:10.3390/cryst10070623. * Rickover et al. (1975) Rickover, H.G., Geiger, L.D., Lustman, B., 1975. Technical Report. Technical Information Center. U.S. Department of Energy. doi:10.2172/4240391. * Sand et al. (2013) Sand, A.E., Dudarev, S.L., Nordlund, K., 2013. High-energy collision cascades in tungsten: Dislocation loops structure and clustering scaling laws. EPL 103, 46003. doi:10.1209/0295-5075/103/46003. * Schoeck (1962) Schoeck, G., 1962. Correlation between Dislocation Length and Density. Journal of Applied Physics 33, 1745–1747. doi:10.1063/1.1728821. * Sears (2006) Sears, V.F., 2006. Neutron scattering lengths and cross sections. Neutron News 3, 26–37. doi:10.1080/10448639208218770. * Seymour et al. 
(2017) Seymour, T., Frankel, P., Balogh, L., Ungár, T., Thompson, S., Jädernäs, D., Romero, J., Hallstadius, L., Daymond, M., Ribárik, G., Preuss, M., 2017. Evolution of dislocation structure in neutron irradiated Zircaloy-2 studied by synchrotron X-ray diffraction peak profile analysis. Acta Materialia 126, 102–113. doi:10.1016/j.actamat.2016.12.031. * Simmons and Baluffi (1958) Simmons, R.O., Baluffi, R.W., 1958\. X-Ray Study of Deuteron-Irradiated Copper near 10∘K. Physical Review 109, 1142\. doi:10.1103/PhysRev.109.1142. * Stoller et al. (2013) Stoller, R., Toloczko, M., Was, G., Certain, A., Dwaraknath, S., Garner, F., 2013\. On the use of SRIM for computing radiation damage exposure. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 310, 75–80. doi:10.1016/j.nimb.2013.05.008. * Stukowski (2010) Stukowski, A., 2010. Visualization and analysis of atomistic simulation data with OVITO-the Open Visualization Tool. Modelling and Simulation in Materials Science and Engineering 18, 015012. doi:10.1088/0965-0393/18/1/015012. * Stukowski et al. (2012) Stukowski, A., Bulatov, V.V., Arsenlis, A., 2012. Automated identification and indexing of dislocations in crystal interfaces. Modelling and Simulation in Materials Science and Engineering 20, 085007. doi:10.1088/0965-0393/20/8/085007. * Terentyev et al. (2014) Terentyev, D., Heinola, K., Bakaev, A., Zhurkin, E.E., 2014\. Carbon–vacancy interaction controls lattice damage recovery in iron. Scripta Materialia 86, 9 – 12. doi:10.1016/j.scriptamat.2014.04.003. * Theodorou et al. (2022) Theodorou, A., Syskaki, M.A., Kotsina, Z., Axiotis, M., Apostolopoulos, G., Fu, C.C., 2022\. Interactions between irradiation defects and nitrogen in $\alpha$-Fe: an integrated experimental and theoretical study. Acta Materialia 239, 118227\. doi:10.1016/j.actamat.2022.118227. * Tian et al. (2021) Tian, J., Wang, H., Feng, Q., Zheng, J., Liu, X., Zhou, W., 2021. 
Heavy radiation damage in alpha zirconium at cryogenic temperature: A computational study. Journal of Nuclear Materials 555, 153159. doi:10.1016/j.jnucmat.2021.153159. * Topping et al. (2018) Topping, M., Ungár, T., Race, C.P., Harte, A., Garner, A., Baxter, F., Dumbill, S., Frankel, P., Preuss, M., 2018. Investigating the thermal stability of irradiation-induced damage in a zirconium alloy with novel in situ techniques. Acta Materialia 145, 255–263. doi:10.1016/j.actamat.2017.11.051. * Ungár et al. (2021) Ungár, T., Frankel, P., Ribárik, G., Race, C.P., Preuss, M., 2021. Size-distribution of irradiation-induced dislocation-loops in materials used in the nuclear industry. Journal of Nuclear Materials 550, 152945. doi:10.1016/j.jnucmat.2021.152945. * Ungár et al. (2021) Ungár, T., Ribarik, G., Topping, M., Jones, R.M.A., Xu, X.D., Hulse, R., Harte, A., Tichy, G., Race, C.P., Frankel, P., Preuss, M., 2021. Characterizing dislocation loops in irradiated polycrystalline Zr alloys by X-ray line profile analysis of powder diffraction patterns with satellites. Journal of Applied Crystallography 54, 803–821. doi:10.1107/S1600576721002673. * Ungár et al. (1999) Ungár, T., Dragomir, I., Révész, Á., Borbély, A., 1999\. The contrast factors of dislocations in cubic crystals: the dislocation model of strain anisotropy in practice. Journal of Applied Crystallography 32, 992–1002. doi:10.1107/S0021889899009334. * Varvenne and Clouet (2017) Varvenne, C., Clouet, E., 2017\. Elastic dipoles of point defects from atomistic simulations. Physical Review B 96, 224103\. doi:10.1103/PhysRevB.96.224103. * Varvenne et al. (2014) Varvenne, C., Mackain, O., Clouet, E., 2014. Vacancy clustering in zirconium: An atomic-scale study. Acta Materialia 78, 65–77. doi:10.1016/j.actamat.2014.06.012. * Vineyard (1957) Vineyard, G.H., 1957. Frequency factors and isotope effects in solid state rate processes. J. Phys. Chem. Solids 3, 121 – 127. doi:10.1016/0022-3697(57)90059-8. * Wang et al. 
(2023) Wang, S., Guo, W., Schwarz-Selinger, T., Yuan, Y., Ge, L., Cheng, L., Zhang, X., Cao, X., Fu, E., Lu, G.H., 2023. Dynamic equilibrium of displacement damage defects in heavy-ion irradiated tungsten. Acta Materialia , 118578doi:10.1016/j.actamat.2022.118578. * Warwick et al. (2021) Warwick, A.R., Boleininger, M., Dudarev, S.L., 2021. Microstructural complexity and dimensional changes in heavily irradiated zirconium. Physical Review Materials 5, 113604\. doi:10.1103/PhysRevMaterials.5.113604. * Wilkens (1970) Wilkens, M., 1970. Theoretical Aspects of Kinematical X-ray Diffraction Profiles from Crystals Containing Dislocation Distribution, in: Simmons, J.A., de Wit, R., Bullough, R. (Eds.), Fundamental Aspects of Dislocation Theory, Washington, D.C.. pp. 1195–1221. * Willaime and Massobrio (1989) Willaime, F., Massobrio, C., 1989\. Temperature-Induced hcp-bcc Phase Transformation in Zirconium: A Lattice and Molecular-Dynamics Study Based on an N-Body Potential. Physical Review Letters 63, 2244–2247. doi:10.1103/PhysRevLett.63.2244. * Yi et al. (2015) Yi, X., Sand, A.E., Mason, D.R., Kirk, M.A., Roberts, S.G., Nordlund, K., Dudarev, S.L., 2015\. Direct observation of size scaling and elastic interaction between nano-scale defects in collision cascades. EPL 110, 36001. doi:10.1209/0295-5075/110/36001. * Yu et al. (2017) Yu, H., Yao, Z., Idrees, Y., Zhang, H.K., Kirk, M.A., Daymond, M.R., 2017. Accumulation of dislocation loops in the $\alpha$ phase of Zr Excel alloy under heavy ion irradiation. Journal of Nuclear Materials 491, 232–241. doi:10.1016/j.jnucmat.2017.04.038. * Zarestky (1979) Zarestky, J.L., 1979. Lattice dynamics of hcp and bcc zirconium. Ph.D. thesis. Iowa State University. doi:10.31274/rtd-180813-3520. * Zhou et al. (2006) Zhou, Z., Jenkins, M.L., Dudarev, S.L., Sutton, A.P., Kirk, M.A., 2006. Simulations of weak-beam diffraction contrast images of dislocation loops by the many-beam Howie–Basinski equations. Philosophical Magazine 86, 4851–4881. 
doi:10.1080/14786430600615041. * Ziegler et al. (2010) Ziegler, J.F., Ziegler, M.D., Biersack, J.P., 2010. SRIM - The stopping and range of ions in matter (2010). Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 268, 1818–1823. doi:10.1016/J.NIMB.2010.02.091. * Zinkle and Was (2013) Zinkle, S., Was, G., 2013. Materials challenges in nuclear energy. Acta Materialia 61, 735–758. doi:10.1016/j.actamat.2012.11.004.